🧠 1. Artificial Neuron Model (Base Formula)
An artificial neuron computes a weighted sum of its inputs and then applies an activation function:

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

where:

- $x_i$ = input signals
- $w_i$ = weights
- $b$ = bias
- $f$ = activation function
- $y$ = output of the neuron
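As a minimal sketch of the formula above (the sigmoid activation and the specific weights are assumptions, not from the original), the neuron can be written in a few lines of Python:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias term: z = sum(w_i * x_i) + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation f(z), squashing z into the range (0, 1)
    return 1 / (1 + math.exp(-z))

# Example with made-up values: z = 0.4*1.0 + 0.6*0.5 - 0.2 = 0.5
print(neuron([1.0, 0.5], [0.4, 0.6], bias=-0.2))  # ≈ 0.622
```

Any other activation (step, ReLU, tanh) would slot into the same place as the sigmoid here.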
⚙️ 2. Concept of Bias
- Bias is an additional constant input added to the weighted sum before applying the activation function.
- It allows the activation function to shift left or right on the graph, helping the neuron activate even when all inputs are zero.

Mathematically:

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

If we remove the bias, the equation becomes:

$$y = f\left(\sum_{i=1}^{n} w_i x_i\right)$$

and the neuron can only learn functions that pass through the origin (0, 0). By adding the bias $b$, the line (or decision boundary) can shift away from the origin, improving flexibility.
🔹 Analogy:
Think of bias as the intercept $c$ in the line equation $y = mx + c$.
It helps control when the neuron "fires."
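To see the shift concretely, here is a tiny step neuron (the weights are hypothetical) showing that with all-zero inputs the weighted sum is always 0, so only the bias can change the output:

```python
def step_neuron(inputs, weights, bias=0.0):
    # Output 1 when the biased weighted sum is non-negative, else 0
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if z >= 0 else 0

# Without a bias, zero inputs always give z = 0, so the output is
# stuck at step(0) = 1 no matter what the weights are.
print(step_neuron([0, 0], [1.0, 1.0]))             # 1
# A negative bias shifts the decision boundary away from the origin:
print(step_neuron([0, 0], [1.0, 1.0], bias=-0.5))  # 0
```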
🔒 3. Concept of Threshold
- The threshold is a value that determines whether the neuron should activate (output 1) or remain inactive (output 0).
- It works like a cutoff point:

$$y = \begin{cases} 1 & \text{if } \sum_{i} w_i x_i \geq \theta \\ 0 & \text{otherwise} \end{cases}$$
Example:
Let threshold $\theta = 0.5$.
- If weighted sum = 0.7 → output = 1
- If weighted sum = 0.3 → output = 0
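The example can be checked with a one-line threshold rule (using $\theta = 0.5$, which is consistent with the two sums above):

```python
def threshold_fires(weighted_sum, theta=0.5):
    # Output 1 once the weighted sum reaches the threshold θ
    return 1 if weighted_sum >= theta else 0

print(threshold_fires(0.7))  # 1
print(threshold_fires(0.3))  # 0
```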
🔄 4. Relationship between Bias and Threshold
Bias and threshold serve opposite roles but are mathematically related. A threshold neuron fires when

$$\sum_{i} w_i x_i \geq \theta \quad\Longleftrightarrow\quad \sum_{i} w_i x_i - \theta \geq 0$$

So we can rewrite the firing rule with a bias $b = -\theta$:

$$\sum_{i} w_i x_i + b \geq 0$$

That's why modern neural networks use "bias" instead of "threshold": it's more convenient for computation.
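A quick sanity check (a sketch using $\theta = 0.5$) confirms that the threshold form and the bias form with $b = -\theta$ agree for every weighted sum:

```python
theta = 0.5    # threshold form: fire when sum >= theta
bias = -theta  # bias form:      fire when sum + bias >= 0

def fires_threshold(s):
    return 1 if s >= theta else 0

def fires_bias(s):
    return 1 if s + bias >= 0 else 0

# Both formulations give identical outputs for any weighted sum s
for s in [0.0, 0.3, 0.5, 0.7, 1.0]:
    assert fires_threshold(s) == fires_bias(s)
print("threshold and bias forms agree")
```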
✅ 5. Summary Table
| Concept | Meaning | Role |
|---|---|---|
| Bias ($b$) | Constant added to the weighted sum | Shifts activation curve left/right |
| Threshold ($\theta$) | Minimum value needed to activate the neuron | Decides when the neuron fires |
| Relation | $b = -\theta$ | Bias is the negative of the threshold |
Example (Binary Step Neuron):

$$y = \begin{cases} 1 & \text{if } \sum_{i} w_i x_i + b \geq 0 \\ 0 & \text{otherwise} \end{cases}$$

Here, bias $b = -\theta$, and it adjusts when the neuron turns ON.