Building an AND Gate Using a Simple Neuron
Let’s build one of the simplest forms of intelligence—a neuron that behaves exactly like an AND logic gate.
Table of Contents
- Dataset
- Perceptron Model
- Understanding the Math
- Training Process
- Code Example
- CLI Output
- Results
- Key Takeaways
Dataset
| Input (x₁, x₂) | Output |
|---|---|
| (0, 0) | 0 |
| (0, 1) | 0 |
| (1, 0) | 0 |
| (1, 1) | 1 |
⚙️ Perceptron Model
A perceptron works by calculating a weighted sum:
\[ z = w_1 x_1 + w_2 x_2 + b \]
Then applying an activation function:
\[ y = \begin{cases} 1 & \text{if } z > 0 \\ 0 & \text{otherwise} \end{cases} \]
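In Python, that forward pass is only a few lines (the `predict` name and its arguments here are just for illustration, not from any library):

```python
import numpy as np

def predict(x, weights, bias):
    # Weighted sum of the inputs, then the step activation
    z = np.dot(x, weights) + bias
    return 1 if z > 0 else 0
```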
Understanding the Math
Why AND Works Linearly
The AND gate is linearly separable.
We can draw a line:
\[ w_1 x_1 + w_2 x_2 + b = 0 \]
That separates:
- (1,1) → one side
- All others → other side
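For example, one choice of line (not the only one) is \(w_1 = w_2 = 1\) and \(b = -1.5\):
\[ x_1 + x_2 - 1.5 = 0 \]
Only (1,1) lands on the positive side (1 + 1 − 1.5 = 0.5), while (0,0), (0,1) and (1,0) all give negative values, so the step function fires only for (1,1).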
Error Calculation
\[ \text{Error} = y_{\text{true}} - y_{\text{pred}} \]
Weight Update Rule
\[ w = w + \eta \cdot \text{Error} \cdot x \]
\[ b = b + \eta \cdot \text{Error} \]
Where:
- \(\eta\) = learning rate
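For instance, suppose (purely as an illustration) the current values are \(w = (0.3, 0.2)\), \(b = 0.1\) and \(\eta = 0.1\), and the training example is (0, 1) with true output 0. Then \(z = 0.2 + 0.1 = 0.3 > 0\), so the prediction is 1 and the error is \(0 - 1 = -1\). The update leaves \(w_1\) alone (its input was 0), lowers \(w_2\) to 0.1 and lowers \(b\) to 0, nudging the boundary toward correctly rejecting (0, 1).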
Training Process
- Loop through dataset
- Predict output
- Calculate error
- Update weights
- Repeat over multiple epochs until every example is classified correctly (the error reaches zero)
Code Example
```python
import numpy as np

# AND gate truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Random starting weights and bias, plus a small learning rate
weights = np.random.rand(2)
bias = np.random.rand()
lr = 0.1

def step(z):
    # Step activation: fire (1) only if the weighted sum is positive
    return 1 if z > 0 else 0

for epoch in range(100):
    for i in range(len(X)):
        z = np.dot(X[i], weights) + bias  # weighted sum
        pred = step(z)                    # predicted output
        error = y[i] - pred               # error = y_true - y_pred
        weights += lr * error * X[i]      # update weights
        bias += lr * error                # update bias

print("Weights:", weights, "Bias:", bias)
```
CLI Output
```
Weights: [0.8, 0.7]  Bias: -1.1
```
✅ Results
Model Predictions

| Input (x₁, x₂) | Prediction |
|---|---|
| (0, 0) | 0 |
| (0, 1) | 0 |
| (1, 0) | 0 |
| (1, 1) | 1 |
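You can sanity-check these predictions by plugging the reported weights back into the step rule. The numbers below are the ones from the CLI output above; a fresh run starts from different random values and lands on a different (but equally valid) solution:

```python
import numpy as np

# Weights and bias reported after training (from the CLI output above)
weights = np.array([0.8, 0.7])
bias = -1.1

def step(z):
    return 1 if z > 0 else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    z = np.dot(x, weights) + bias
    print(x, "->", step(z))  # prints 0, 0, 0, 1 for the four inputs
```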
Key Takeaways
- Perceptron = simplest neural model
- AND gate is linearly separable
- Learning happens via error correction
- This is the foundation of deep learning
Final Thought
If a machine can learn AND, it can learn logic. If it can learn logic… it can learn the world.