🧠 1. Artificial Neuron Model (Base Formula)
An artificial neuron computes a weighted sum of its inputs and then applies an activation function.
$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$
where:
- $x_i$ = the $i$-th input
- $w_i$ = the weight on input $x_i$
- $b$ = the bias
- $f$ = the activation function
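This base formula can be sketched in plain Python. The inputs, weights, and bias values below are illustrative assumptions, and a sigmoid is used as one common choice of activation:

```python
import math

def neuron(inputs, weights, bias, activation=lambda z: 1 / (1 + math.exp(-z))):
    """Weighted sum of inputs plus bias, passed through an activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # Σ w_i·x_i + b
    return activation(z)

# Two inputs with illustrative weights; z = 0.5·1.0 + (-0.25)·2.0 + 0.1 = 0.1
y = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)  # sigmoid(0.1) ≈ 0.525
```

Any activation function can be swapped in via the `activation` argument; the weighted-sum step itself stays the same.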
⚙️ 2. Concept of Bias
- Bias is an additional constant input added to the weighted sum before the activation function is applied.
- It allows the activation function to shift left or right on the graph, helping the neuron activate even when all inputs are zero.
Mathematically:
If we remove bias, the equation is:
$$y = f\left(\sum_i w_i x_i\right)$$
The neuron can only learn functions that pass through the origin (0,0).
By adding bias b:
$$y = f\left(\sum_i w_i x_i + b\right)$$
the line (or decision boundary) can shift away from the origin, improving flexibility.
🔹 Analogy:
Think of bias as the intercept $c$ in the line equation $y = mx + c$.
It helps control when the neuron "fires."
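The shifting effect of bias can be demonstrated with a binary step neuron. The weights and bias values here are illustrative assumptions chosen to make the boundary shift visible:

```python
def step(z):
    # Binary step activation: fires when z >= 0
    return 1 if z >= 0 else 0

def fire(x, w, b):
    return step(sum(wi * xi for wi, xi in zip(w, x)) + b)

w = [1.0, 1.0]
# Without bias, the decision boundary x1 + x2 = 0 passes through the origin.
at_origin = fire([0.0, 0.0], w, b=0.0)   # 1: the origin sits on the boundary
# With b = -1.5 the boundary shifts to x1 + x2 = 1.5, away from the origin.
below = fire([1.0, 0.0], w, b=-1.5)      # 0: weighted sum 1.0 < 1.5
above = fire([1.0, 1.0], w, b=-1.5)      # 1: weighted sum 2.0 >= 1.5
```

Here the bias plays exactly the role of $c$ in $y = mx + c$: it moves the line without changing its slope.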
🔒 3. Concept of Threshold
A threshold $\theta$ is the minimum weighted-sum value required for the neuron to activate:

$$y = \begin{cases} 1, & \text{if } \sum_i w_i x_i \ge \theta \\ 0, & \text{otherwise} \end{cases}$$
Example:
Let threshold θ=0.5.
If weighted sum = 0.7 → output = 1
If weighted sum = 0.3 → output = 0
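The threshold example above can be checked directly in code (a minimal sketch with $\theta = 0.5$ as in the example):

```python
def threshold_neuron(weighted_sum, theta=0.5):
    # Output 1 only when the weighted sum reaches the threshold θ
    return 1 if weighted_sum >= theta else 0

out_high = threshold_neuron(0.7)  # 0.7 >= 0.5 → 1
out_low = threshold_neuron(0.3)   # 0.3 <  0.5 → 0
```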
🔄 4. Relationship between Bias and Threshold
Bias and threshold serve opposite roles but are mathematically related.
$$b = -\theta$$
So we can rewrite:
$$y = f\left(\sum_i w_i x_i - \theta\right) = f\left(\sum_i w_i x_i + b\right)$$
That’s why modern neural networks use “bias” instead of “threshold” — it’s more convenient for computation.
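The equivalence $b = -\theta$ can be verified by comparing both formulations across a range of weighted sums (the sample values are illustrative):

```python
def with_threshold(weighted_sum, theta):
    # Classic threshold form: compare Σ w_i·x_i against θ
    return 1 if weighted_sum >= theta else 0

def with_bias(weighted_sum, b):
    # Modern bias form: fold the threshold into the sum via b = -θ
    return 1 if weighted_sum + b >= 0 else 0

theta = 0.5
# The two formulations agree for every weighted sum when b = -θ
checks = [with_threshold(s, theta) == with_bias(s, -theta)
          for s in (0.0, 0.3, 0.5, 0.7, 1.0)]
```

Treating the threshold as just another additive term means it can be learned by gradient descent like any other weight, which is why the bias form dominates in practice.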
✅ 5. Summary Table
| Concept | Meaning | Role |
|---|---|---|
| Bias ($b$) | Constant added to the weighted sum | Shifts the activation curve left/right |
| Threshold ($\theta$) | Minimum value needed to activate the neuron | Decides when the neuron fires |
| Relation | $b = -\theta$ | Bias is the negative of the threshold |
Example (Binary Step Neuron):
$$y = \begin{cases} 1, & \text{if } w_1 x_1 + w_2 x_2 + b \ge 0 \\ 0, & \text{otherwise} \end{cases}$$
Here, $b = -\theta$, and the bias determines when the neuron turns ON.
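As a concrete instance of this binary step neuron, choosing $w_1 = w_2 = 1$ and $b = -1.5$ (i.e. $\theta = 1.5$) makes the neuron turn ON only when both inputs are 1, behaving like an AND gate. These specific weights are an illustrative choice, not the only one:

```python
def step_neuron(x1, x2, w1, w2, b):
    # y = 1 if w1·x1 + w2·x2 + b >= 0, else 0
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

# With w1 = w2 = 1 and b = -1.5, only the input pair (1, 1) clears the
# threshold of 1.5, reproducing the AND truth table.
outputs = [step_neuron(x1, x2, 1.0, 1.0, -1.5)
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```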