Perceptron Learning Algorithm: Feedforward and Weight Update

NeuralNinja
Jul 26, 2023


Introduction:

The Perceptron Learning Algorithm (PLA) is a supervised learning approach for binary classification. By updating its weights, the algorithm predicts the output for given inputs while minimizing the loss function. The goal is to classify the given data correctly so that the loss is zero for every data point. This is accomplished by iteratively carrying out forward and backward passes over the data until the loss reaches zero. This blog highlights the key concepts behind the PLA, which underlie the core ideas of neural networks: the forward pass, loss function minimization, and the weight update. Here is a breakdown of the PLA's working mechanism:

Forward pass:

Predict the output using the current weights:

$\hat{y} = f\left(\sum_{i=0}^{n} w_i x_i\right)$

where $x_0$ is the bias input, fixed at 1, and $f$ is the activation function (here, the step function).
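To make the forward pass concrete, here is a minimal sketch in Python; the helper names step and predict are my own, matching the step activation used later in this post:

import numpy as np

def step(z):
    # Step activation: 1 for positive input, 0.5 on the boundary, 0 otherwise.
    if z > 0:
        return 1
    elif z == 0:
        return 0.5
    return 0

def predict(w, x):
    # Forward pass: prepend the bias input x0 = 1, then pass the
    # weighted sum of inputs through the step activation.
    x = np.insert(np.asarray(x, dtype=float), 0, 1.0)
    return step(float(np.dot(w, x)))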

Loss function:

Determine the loss using the actual and predicted class. For the PLA, the loss function is:

$L = \alpha \, (t - \hat{y})$

where $\alpha$ is the learning rate, $t$ is the actual class, and $\hat{y}$ is the predicted class.
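In code, the loss is a one-liner; a minimal sketch, where t is the actual class and y_hat the predicted one (the function name is mine):

def loss(t, y_hat, alpha=1.0):
    # PLA loss: learning rate times (actual class - predicted class).
    return alpha * (t - y_hat)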

Backward pass:

If the loss is not equal to zero, i.e. $L \neq 0$, update the weights.

Weight update:

The weights are updated as:

$w_i \leftarrow w_i + L \, x_i$

where $w_i$ is the current weight and $x_i$ is its corresponding input (with $x_0 = 1$ for the bias term).
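A minimal sketch of the update rule, reusing the bias convention from the forward-pass snippet (update_weights is my own helper name):

def update_weights(w, x, L):
    # w_i <- w_i + L * x_i, where x0 = 1 is the bias input.
    x = np.insert(np.asarray(x, dtype=float), 0, 1.0)
    return np.asarray(w, dtype=float) + L * x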

Epoch:

One complete cycle of the forward and backward passes over all the data points.
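Putting the pieces together, one epoch can be sketched as a single loop over the data, reusing the predict and update_weights helpers from the snippets above (a sketch of the mechanism, not the full script given later):

def train_epoch(w, inputs, targets, alpha=1.0):
    # One epoch: a forward and a backward pass over every data point.
    for x, t in zip(inputs, targets):
        y_hat = predict(w, x)            # forward pass
        L = alpha * (t - y_hat)          # loss
        if L != 0:                       # backward pass: update only on error
            w = update_weights(w, x, L)
    return w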

Example:

Using the PLA, update the weights until the loss is minimized to 0, given the learning rate $\alpha = 1$, initial weights $[0, 1, -1]$, the step function as the activation function, and the following data:

x1   x2   t (target)
0    0    0
0    1    1
1    0    0
1    1    1

Solution:

The solution follows the steps described earlier:

For a given input, the output y is predicted using the current weights and the activation function.

The loss is computed at each step, and the weights are updated whenever the loss is not 0.

The loss converges to 0 for all inputs after the 3rd epoch, at which point the final solution is reached.

For epoch 1 and the first two inputs, let's walk through these steps:

Input (0,0): y = 0.5, L = -0.5, so update the weights:

$w_0 = 0 + (-0.5)(1) = -0.5,\qquad w_1 = 1 + (-0.5)(0) = 1,\qquad w_2 = -1 + (-0.5)(0) = -1$

The new weights are $w_0 = -0.5$, $w_1 = 1$, and $w_2 = -1$; they are used for the next input.

Input (0,1): y = 0, L = 1, so update the weights since the loss is not equal to zero:

$w_0 = -0.5 + (1)(1) = 0.5,\qquad w_1 = 1 + (1)(0) = 1,\qquad w_2 = -1 + (1)(1) = 0$

The new weights are $w_0 = 0.5$, $w_1 = 1$, and $w_2 = 0$; they are used for the next input and updated again whenever the loss is not 0.
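These two hand-computed steps can be reproduced with the predict and update_weights helpers sketched earlier:

w = np.array([0.0, 1.0, -1.0])

# Input (0, 0), target 0: the weighted sum is 0, so y = 0.5 and L = -0.5.
y = predict(w, (0, 0))
w = update_weights(w, (0, 0), 1 * (0 - y))
print(y, w)   # y = 0.5, new weights (-0.5, 1, -1)

# Input (0, 1), target 1: the weighted sum is -1.5, so y = 0 and L = 1.
y = predict(w, (0, 1))
w = update_weights(w, (0, 1), 1 * (1 - y))
print(y, w)   # y = 0, new weights (0.5, 1, 0)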

Solution (the full run; the loss is 0 for every input from the 4th pass onward):

Epoch  Input   y    L     Weights after the step (w0, w1, w2)
1      (0,0)   0.5  -0.5  (-0.5, 1, -1)
1      (0,1)   0    1     (0.5, 1, 0)
1      (1,0)   1    -1    (-0.5, 0, 0)
1      (1,1)   0    1     (0.5, 1, 1)
2      (0,0)   1    -1    (-0.5, 1, 1)
2      (0,1)   1    0     (-0.5, 1, 1)
2      (1,0)   1    -1    (-1.5, 0, 1)
2      (1,1)   0    1     (-0.5, 1, 2)
3      (0,0)   0    0     (-0.5, 1, 2)
3      (0,1)   1    0     (-0.5, 1, 2)
3      (1,0)   1    -1    (-1.5, 0, 2)
3      (1,1)   1    0     (-1.5, 0, 2)
4      all     y=t  0     (-1.5, 0, 2)

Python:

import numpy as np
import matplotlib.pyplot as plt


def step(output):
    # Step activation: 1 for positive input, 0.5 on the decision
    # boundary, 0 for negative input.
    if output > 0:
        return 1
    elif output == 0:
        return 1 / 2
    else:
        return 0


inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 0, 1]   # actual classes (targets)
w = [0, 1, -1]     # initial weights [w0 (bias), w1, w2]
LR = 1             # learning rate (alpha)
loss_final = []

for epoch in range(5):
    for i in range(4):
        data = inputs[i]
        c = T[i]
        # Forward pass: weighted sum (bias input x0 = 1) through the step activation.
        output = step(w[0] * 1 + w[1] * data[0] + w[2] * data[1])
        # Loss: learning rate times (actual class - predicted class).
        loss = LR * (c - output)
        # Backward pass: update the weights only when the loss is nonzero.
        if loss != 0:
            w[0] += loss * 1
            w[1] += loss * data[0]
            w[2] += loss * data[1]
        loss_final.append({'epoch': epoch, 'input': data, 'loss': loss})

epochs = [entry['epoch'] for entry in loss_final]
losses = [entry['loss'] for entry in loss_final]

# Plot the loss for every data point across epochs.
plt.plot(epochs, losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Loss over Epochs')
plt.show()

Loss over Epochs (the plot produced by the script above)

It can be seen in the plot that as the epochs increase, the loss shrinks, and by epoch 3 (the plot's epochs are zero-indexed) it reaches 0 for all data points, meaning the algorithm has become accurate enough to correctly classify all the given data points.
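As a quick sanity check after the script above runs, the final weights can be printed and applied to each input; every prediction then matches its target:

print(w)   # final weights: [-1.5, 0, 2]
for x, t in zip(inputs, T):
    y = step(w[0] * 1 + w[1] * x[0] + w[2] * x[1])
    print(x, t, y)   # the predicted y equals the target t for all four inputs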

Conclusion:

Though the PLA is a very basic algorithm, it is fundamental to understanding the forward pass and the weight update that underlie more complex neural networks. This blog covers not only the theoretical understanding of the PLA but also the math behind it and its implementation in Python, to give a grasp of those two mechanisms (the forward pass and weight updating).
