ANN Series 8 - Perceptron
Perceptron
- Binary classifier: class labels are {1, -1}
- Linear Classifier
- Learns the weights and a threshold (bias)
- Activation function: a step function (binary step or sign)
- Let the input vector be x = (x1, x2, ..., xn)
- Associate a weight vector w = (w1, w2, ..., wn) with the input (the sketch below shows how the two combine)
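As a concrete sketch of how these two vectors combine (the function name, the bias b, and the sample values are illustrative, not from the notes):

```python
import numpy as np

def perceptron_output(w, x, b):
    """Sign of the affine transformation w.x + b: returns +1 or -1."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Illustrative values only
print(perceptron_output(np.array([0.5, -0.2]), np.array([1.0, 3.0]), 0.0))  # -1
```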
Perceptron Learning Algorithm
- Step 1: Initialize the weights randomly
- Step 2: Take a training sample at random, without replacement
- Step 3: Perform an affine transformation of the input and weights, w·x + b
- Step 4: Pass the output of the affine transformation through the activation function to find the class label
- Step 5: Update the weights based on the perceptron learning rule
- Step 6: Repeat from Step 2 until all the inputs are correctly classified (a minimal implementation follows)
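A minimal sketch of these six steps in NumPy, assuming a float feature matrix X and labels y in {1, -1}; the learning rate eta and the epoch cap max_epochs are illustrative additions (the cap guards against non-separable data, where Step 6 would otherwise loop forever):

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, max_epochs=100):
    """Learn (w, b) until every sample is classified correctly
    or the epoch cap is reached."""
    rng = np.random.default_rng(0)
    n_samples, n_features = X.shape
    w = rng.normal(size=n_features)           # Step 1: random initial weights
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for i in rng.permutation(n_samples):  # Step 2: random order, no replacement
            z = np.dot(w, X[i]) + b           # Step 3: affine transformation
            y_hat = 1 if z >= 0 else -1       # Step 4: step (sign) activation
            if y_hat != y[i]:                 # Step 5: update only on mistakes
                w += eta * y[i] * X[i]
                b += eta * y[i]
                errors += 1
        if errors == 0:                       # Step 6: stop when all are correct
            break
    return w, b
```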
Geometric Intuition behind the Perceptron Learning Algorithm
Intuitive Explanation
The weight vector can be thought of as pointing in the "ideal" direction where the sum of the weighted inputs gives the correct classification. Updating the weights based on misclassified examples gradually rotates and shifts this vector (and thus the decision boundary) towards a position where the classifications are as accurate as possible given the training data.
The bias update adjusts where the decision boundary intersects the feature space, allowing for fine-tuning of the boundary's position even when the inputs are all zeros.
In essence, the perceptron learning algorithm iteratively adjusts the orientation and position of the decision boundary by updating the weights and bias, aiming to minimize the classification errors on the training set.
Geometric Interpretation
- The weight vector can be thought of as normal to the decision boundary. Changing it rotates or tilts the boundary in the feature space.
- The bias shifts the boundary closer to or further from the origin without rotating it.
Case 1: If x ∈ class +1 but w·x < 0, the sample is misclassified; the rule adds the input, w ← w + x, rotating w toward x.
Case 2: If x ∈ class −1 but w·x ≥ 0, the sample is misclassified; the rule subtracts the input, w ← w − x, rotating w away from x.
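A quick numeric check of Case 1 (the numbers are made up for illustration): adding a misclassified positive example to w increases w·x by exactly x·x ≥ 0, which is the geometric sense in which the update rotates w toward x:

```python
import numpy as np

w = np.array([-1.0, 2.0])
x = np.array([3.0, 1.0])   # true class +1
print(np.dot(w, x))        # -1.0 < 0, so x is misclassified (Case 1)
w = w + x                  # Case 1 update: add the input
print(np.dot(w, x))        # 9.0, i.e. -1 + x.x = -1 + 10: now classified correctly
```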
Limitations
- Produces only Boolean (two-class) output
- A single-layer perceptron solves only linearly separable problems, because it can only find a linear decision boundary (see the XOR sketch below)
- Updates the weights and bias for every training sample
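To see the linear-separability limitation concretely, the sketch below runs the hypothetical train_perceptron from earlier on XOR, which is not linearly separable; no (w, b) classifies all four points, so at least one prediction always disagrees with y:

```python
import numpy as np

# XOR with labels in {1, -1}: no straight line separates the classes,
# so the training loop never reaches zero errors and stops at max_epochs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
w, b = train_perceptron(X, y, max_epochs=1000)
preds = np.where(X @ w + b >= 0, 1, -1)
print(preds, y)  # at least one entry of preds differs from y
```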