
Numerical Questions


Q1: Single-layer Perceptron Output

Inputs: x₁ = 1, x₂ = 0; Weights: w₁ = 0.6, w₂ = 0.4; Bias: b = -0.5. Activation: Binary Step.

Solution:
Weighted sum: net = w₁*x₁ + w₂*x₂ + b = 0.6*1 + 0.4*0 - 0.5 = 0.1
Binary step function: y = 1 if net ≥ 0 else 0
Output: y = 1 (since net = 0.1 ≥ 0)
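
A minimal sketch of Q1 in Python (the helper name binary_step is my own, not from the question):

```python
def binary_step(net):
    # Binary step activation: fire (1) when net >= 0, otherwise stay off (0)
    return 1 if net >= 0 else 0

x = [1, 0]        # inputs x1, x2
w = [0.6, 0.4]    # weights w1, w2
b = -0.5          # bias

net = w[0] * x[0] + w[1] * x[1] + b   # 0.1 (up to floating-point rounding)
print(net, binary_step(net))          # ~0.1  1
```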

Q2: Hidden Layer Output in MLP

Inputs: X = [1, 0]; Weights: w₁₁=0.5, w₁₂=-0.4, w₂₁=0.3, w₂₂=0.2; Biases: b₁=0.1, b₂=-0.2; Activation: Sigmoid.

Solution:
Hidden neuron 1: net₁ = 0.5*1 + 0.3*0 + 0.1 = 0.6
h₁ = 1/(1 + e^(-0.6)) ≈ 0.646

Hidden neuron 2: net₂ = -0.4*1 + 0.2*0 - 0.2 = -0.6
h₂ = 1/(1 + e^(0.6)) ≈ 0.354

Hidden layer outputs: [0.646, 0.354]
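
The same calculation in plain Python (a sketch; the weight layout below is an assumption matching how the solution groups w₁₁, w₂₁ for neuron 1 and w₁₂, w₂₂ for neuron 2):

```python
import math

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

x = [1, 0]
# Row i holds the weights feeding hidden neuron i: [w11, w21] and [w12, w22]
w = [[0.5, 0.3],
     [-0.4, 0.2]]
b = [0.1, -0.2]

h = [sigmoid(w[i][0] * x[0] + w[i][1] * x[1] + b[i]) for i in range(2)]
print([round(v, 3) for v in h])   # [0.646, 0.354]
```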

Q3: Backpropagation Weight Update

Output neuron: y = 0.6, Desired output: d = 1, Input to neuron: h = 0.5, Learning rate: η = 0.1, Activation: Sigmoid.

Solution:
Delta error: δ = (d - y) * y * (1 - y)
δ = (1 - 0.6) * 0.6 * 0.4 = 0.096

Weight update: Δw = η * δ * h = 0.1 * 0.096 * 0.5 = 0.0048
Updated weight: w_new = w_old + Δw
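
As a sketch in Python (variable names are mine; w_old stays symbolic because the question does not give it):

```python
y, d, h, eta = 0.6, 1.0, 0.5, 0.1   # output, target, input to neuron, learning rate

delta = (d - y) * y * (1 - y)       # sigmoid delta: (1 - 0.6) * 0.6 * 0.4 = 0.096
dw = eta * delta * h                # 0.1 * 0.096 * 0.5 = 0.0048
print(round(delta, 4), round(dw, 4))

# The new weight would then be: w_new = w_old + dw
```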


Concept of Bias and Threshold in Artificial Neural Networks (ANNs)


🧠 1. Artificial Neuron Model (Base Formula)

An artificial neuron computes a weighted sum of its inputs and then applies an activation function.

y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)

where:

  • x_i = input signals

  • w_i = weights

  • b = bias

  • f = activation function

  • y = output of the neuron
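
The formula translates almost line-for-line into code. A minimal sketch (the function name neuron and the sigmoid choice are illustrative assumptions, not from the notes):

```python
import math

def neuron(x, w, b, f):
    # y = f(sum_i w_i * x_i + b)
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return f(net)

sigmoid = lambda net: 1 / (1 + math.exp(-net))
print(neuron([1, 0], [0.6, 0.4], -0.5, sigmoid))   # ~0.525
```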


⚙️ 2. Concept of Bias

  • Bias is an additional constant input added to the weighted sum before applying the activation function.

  • It allows the activation function to shift left or right on the graph — helping the neuron activate even when all inputs are zero.

Mathematically:
If we remove bias, the equation is:

y = f\left(\sum w_i x_i\right)

The neuron can then only learn decision boundaries that pass through the origin (0, 0).

By adding bias b:

y = f\left(\sum w_i x_i + b\right)

the line (or decision boundary) can shift away from the origin, improving flexibility.

🔹Analogy:
Think of bias as the intercept (c) in the line equation y = mx + c.
It helps control when the neuron "fires."
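
A tiny sketch of this shifting effect (names are illustrative): with all-zero inputs the weighted sum is always 0, so only the bias decides whether a step neuron fires.

```python
def step_neuron(x, w, b=0.0):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if net >= 0 else 0

w = [0.6, 0.4]
print(step_neuron([0, 0], w, b=0.0))    # 1: without bias, net = 0, so it fires at the origin
print(step_neuron([0, 0], w, b=-0.5))   # 0: bias shifts the boundary away from the origin
```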


🔒 3. Concept of Threshold

  • The threshold is a value that determines whether the neuron should activate (output 1) or remain inactive (output 0).

  • It works like a cutoff point.

If:

\sum w_i x_i \geq \text{threshold}, \text{ output} = 1

else

\text{output} = 0

Example:
Let threshold θ = 0.5.
If weighted sum = 0.7 → output = 1
If weighted sum = 0.3 → output = 0
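
The cutoff behaviour in code (a sketch reproducing the example above):

```python
theta = 0.5   # threshold

for weighted_sum in (0.7, 0.3):
    output = 1 if weighted_sum >= theta else 0
    print(weighted_sum, "->", output)   # 0.7 -> 1, 0.3 -> 0
```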


🔄 4. Relationship between Bias and Threshold

Bias and threshold serve opposite roles but are mathematically related.

b = -\theta

So we can rewrite:

y = f\left(\sum w_i x_i - \theta\right) = f\left(\sum w_i x_i + b\right)

That’s why modern neural networks use “bias” instead of “threshold” — it’s more convenient for computation.
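
A quick sketch checking the identity b = -θ: the threshold form and the bias form of a step neuron agree on every weighted sum (function names are mine, for illustration):

```python
theta = 0.5
b = -theta

def fires_threshold(s):
    # Threshold form: fire when sum(w_i * x_i) >= theta
    return 1 if s >= theta else 0

def fires_bias(s):
    # Bias form: step(sum(w_i * x_i) + b) with b = -theta
    return 1 if s + b >= 0 else 0

for s in (0.0, 0.3, 0.5, 0.7):
    assert fires_threshold(s) == fires_bias(s)
print("threshold form and bias form agree")
```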


5. Summary Table

Concept | Meaning | Role
Bias (b) | Constant added to the weighted sum | Shifts the activation curve left/right
Threshold (θ) | Minimum value needed to activate the neuron | Decides when the neuron fires
Relation | b = -θ | Bias is the negative of the threshold

Example (Binary Step Neuron):

y = \begin{cases} 1, & \text{if } w_1 x_1 + w_2 x_2 + b \geq 0 \\ 0, & \text{otherwise} \end{cases}

Here the bias acts as a negative threshold (b = -θ): it sets the point at which the neuron turns ON.