Single Layer Perceptron

NeuralNinja
4 min read · May 27, 2023


The single layer perceptron is the simplest type of neural network, yet it is the essential building block of deep learning. It can be used for both classification and regression tasks. It takes a weighted sum of the inputs and passes it through an activation function to produce the output.

The framework encompasses inputs, weights, a summation function, an activation function, and an output. Let’s break it down for easy comprehension:

1. Inputs: The input may be a feature vector or raw data.

2. Weights: Each input is assigned a weight that determines its contribution to the result.

3. Summation Function: This function sums the product of each input with its associated weight to produce a single value.

4. Activation Function: The summation function’s output is fed into the activation function to generate the output.

5. Output: The result of the activation function is the output of the perceptron.
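The five components above can be sketched as a small forward pass in Python (a minimal illustration; the function and variable names here are my own, not from the article):

```python
def perceptron(inputs, weights, activation):
    # Summation function: weighted sum of inputs
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    # Activation function turns the sum into the final output
    return activation(weighted_sum)

# Step activation: +1 for non-negative input, 0 otherwise
step = lambda z: 1 if z >= 0 else 0

# Inputs [bias, x1, x2] with weights [w0, w1, w2]
print(perceptron([1, 1, 2], [-2, 1, 2], step))  # prints 1
```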

Mathematical intuition:

Suppose we have inputs x0, x1, x2, ……, xn, each with a corresponding weight w0, w1, w2, ……, wn. The perceptron’s output “y”, obtained by applying the activation function “f” to the weighted sum, is:

y = f(w0·x0 + w1·x1 + …… + wn·xn)

This signifies that the sum of the products of the inputs with their corresponding weights is calculated first. The output is then derived by applying the activation function to this summed-up value. It is worth noting that x0 is the bias input, and its value is always 1, so w0 acts as the bias weight.
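In code, this weighted sum is simply a dot product. A quick NumPy check with illustrative values (assuming NumPy is available):

```python
import numpy as np

x = np.array([1, 1, 2])   # x0 = 1 is the bias input
w = np.array([-2, 1, 2])  # w0 is the bias weight
z = np.dot(w, x)          # weighted sum: (-2)(1) + (1)(1) + (2)(2) = 3
print(z)  # prints 3
```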

Example:

Using the step function as the activation function, classify the following points as “+” or “-” with a single perceptron whose weights are w0 = -2, w1 = 1, and w2 = 2.

[0,0], [1,0], [0,1], [0,2], [2,0], [2,1], [1,2], [3,1].

Solution:

Before we go into classification, let’s have a look at how the step function works.

Step function: f(z) = 1 if z > 0; 0.5 if z = 0; 0 if z < 0

This indicates that the step function returns +1 when the input is greater than 0, returns 0 when the input is less than 0, and returns 0.5 when the input is exactly 0. As a result, with the step function as the activation, a point whose weighted sum is less than 0 will be classified as “-”, and otherwise the point will be classified as “+”.
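The step function as described can be translated directly into Python (using the 0.5-at-zero convention stated above):

```python
def step(z):
    # Returns +1 for positive input, 0 for negative input, 0.5 at exactly 0
    if z > 0:
        return 1
    elif z < 0:
        return 0
    return 0.5

print(step(3))   # prints 1
print(step(-2))  # prints 0
print(step(0))   # prints 0.5
```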

We have three inputs for the task at hand, including the bias term, and hence three corresponding weights. Since the goal is to classify the points, the perceptron is designed as follows:

Desired Network Design

Since we have three inputs and three corresponding weights, the perceptron equation is as follows:

y = f(w0·x0 + w1·x1 + w2·x2) = f(-2·1 + 1·x1 + 2·x2) = f(x1 + 2·x2 - 2)

Now that we have designed the perceptron and derived the required equation, let’s classify the given points.

Python Solution:

import matplotlib.pyplot as plt

x0 = 1                # bias input is always 1
weights = [-2, 1, 2]  # w0, w1, w2
points = [(0, 0), (1, 0), (0, 1), (0, 2), (2, 0), (2, 1), (1, 2), (3, 1)]
category = []

for p in points:
    # Weighted sum: w0*x0 + w1*x1 + w2*x2
    weighted_sum = weights[0] * x0 + weights[1] * p[0] + weights[2] * p[1]
    print("Weighted sum of point:", p, "is =", weighted_sum)
    if weighted_sum >= 0:
        print(p, "will be classified as +ve\n")
        category.append("+")
    else:
        print(p, "will be classified as -ve\n")
        category.append("-")

# Plot each point with its predicted class label
for p, cat in zip(points, category):
    plt.text(p[0], p[1], cat, c='red', fontsize=20)

plt.xlim(-0.5, 3.5)
plt.ylim(-0.5, 3.5)
plt.xticks(range(4))
plt.yticks(range(4))
plt.grid(True)  # add grid lines to the plot
plt.show()

Conclusion:

In this blog, the main principle behind the single layer perceptron has been explained theoretically and mathematically, and an example has been solved and implemented in Python to convey the core notion of the single layer perceptron.
