Step 4: Deep Learning Foundations

Deep learning uses multi-layered neural networks to solve complex problems such as computer vision and natural language processing. PyTorch is one of the most widely used frameworks in both research and industry.


🛠️ Code Example: Building a Simple Neural Network

This code demonstrates a small feed-forward network with one hidden layer, built with the torch.nn module.

import torch
import torch.nn as nn
import torch.optim as optim

# 1. Define the Architecture
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 4 input features -> 10 hidden neurons
        self.fc1 = nn.Linear(4, 10)
        # 10 hidden neurons -> 1 output (binary classification)
        self.fc2 = nn.Linear(10, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return self.sigmoid(x)

# 2. Instantiate and define Loss/Optimizer
model = SimpleNet()
criterion = nn.BCELoss() # Binary Cross Entropy
optimizer = optim.SGD(model.parameters(), lr=0.01)

# 3. Dummy Training Step
input_data = torch.randn(1, 4) # 1 sample with 4 features
target = torch.tensor([[1.0]])

# Forward pass
output = model(input_data)
loss = criterion(output, target)

# Backward pass (Learning!)
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(f"Loss after 1 step: {loss.item():.4f}")
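A single step rarely changes the loss much; training means repeating the forward/backward/update cycle many times. The sketch below is illustrative only: it uses randomly generated dummy data and an nn.Sequential model equivalent to SimpleNet above, and it simply loops the same step to show the loss falling.

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)  # make the run reproducible

# Compact equivalent of SimpleNet: 4 -> 10 -> 1 with sigmoid output
model = nn.Sequential(
    nn.Linear(4, 10),
    nn.ReLU(),
    nn.Linear(10, 1),
    nn.Sigmoid(),
)
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Dummy dataset: 16 samples with 4 features and random 0/1 labels
inputs = torch.randn(16, 4)
targets = torch.randint(0, 2, (16, 1)).float()

losses = []
for epoch in range(100):
    optimizer.zero_grad()          # clear old gradients
    loss = criterion(model(inputs), targets)
    loss.backward()                # backpropagate
    optimizer.step()               # update weights
    losses.append(loss.item())

print(f"Loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the data is random, the absolute loss values are not meaningful; the point is that the loss decreases as the loop repeats the forward -> loss -> backward -> update cycle.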

🏗️ The “Big Three” Concepts

  1. Activation Functions (ReLU): Add non-linearity so the network can model complex, non-linear relationships.
  2. Backpropagation: Automatically computing gradients for every layer via the Chain Rule.
  3. Optimizers (Adam/SGD): The update rule that adjusts weights based on those gradients.
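You can verify the Chain Rule claim in point 2 directly: ask autograd for a gradient, then compute the same gradient by hand. The values below are illustrative (not from the example above); for y = sigmoid(w*x + b), the chain rule gives dy/dw = y*(1 - y)*x.

```python
import torch

x = torch.tensor(2.0)
w = torch.tensor(0.5, requires_grad=True)
b = torch.tensor(0.1, requires_grad=True)

# Forward pass: y = sigmoid(w*x + b)
z = w * x + b
y = torch.sigmoid(z)

# Backward pass: autograd applies the chain rule for us
y.backward()

# Hand-derived chain rule: dsigmoid(z)/dz = y*(1 - y)
manual_dw = y.item() * (1 - y.item()) * x.item()
manual_db = y.item() * (1 - y.item())

print(w.grad.item(), manual_dw)  # the two values agree
print(b.grad.item(), manual_db)
```

This is exactly what loss.backward() does in the training code, just scaled up to every parameter in every layer.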

🥅 Your Goal

  • Understand the flow: Input -> Layers -> Output -> Loss -> Gradients -> Update Weights.
  • Try running a CNN (Convolutional Neural Network) example for image recognition.
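As a starting point for the CNN goal, here is a minimal sketch of a convolutional network for 28x28 grayscale images (MNIST-shaped input). The TinyCNN name, layer sizes, and random input batch are illustrative assumptions, not part of the original lesson.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 1 input channel -> 8 feature maps, 3x3 kernel, padding keeps 28x28
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        # 2x2 max pooling halves spatial size: 28x28 -> 14x14
        self.pool = nn.MaxPool2d(2)
        # Flattened features (8 * 14 * 14) -> 10 class scores
        self.fc = nn.Linear(8 * 14 * 14, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        return self.fc(x.flatten(start_dim=1))

model = TinyCNN()
batch = torch.randn(2, 1, 28, 28)  # 2 fake grayscale 28x28 images
logits = model(batch)
print(logits.shape)  # one row of 10 class scores per image
```

The training loop is identical to the one for SimpleNet; only the architecture and loss (typically nn.CrossEntropyLoss for multi-class output) change.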