Forward Pass & Backpropagation: Neural Networks 101


Explaining how neural networks “train” and “learn” patterns in data by hand and in code using PyTorch

Egor Howell · Towards Data Science · Nov 2023
Image by juicy_fish from Flaticon. Author has license from Flaticon to use image.

In my past two articles, we dived into the origins of the neural network, from the single perceptron to a large, interconnected, non-linear optimisation engine: the multi-layer perceptron (MLP). I highly recommend checking out my previous posts if you are unfamiliar with the perceptron, the MLP, or activation functions, as we will build on them quite a bit in this article:

Now it’s time to understand how these neural networks get “trained” and “learn” the patterns in the data that you pass into them. There are two key components: the forward pass and backpropagation. Let’s get into it!
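Before we work through the maths by hand, here is a minimal PyTorch sketch of the loop we will unpack in this article: a forward pass to compute predictions, followed by backpropagation to compute gradients and update the weights. The toy data, layer sizes, learning rate, and number of epochs are all illustrative assumptions, not values from a real problem.

```python
import torch
import torch.nn as nn

# Toy data (made up): 100 samples, 3 features, 1 regression target
X = torch.randn(100, 3)
y = torch.randn(100, 1)

# A small MLP: one hidden layer with a non-linear activation
model = nn.Sequential(
    nn.Linear(3, 8),  # input layer -> hidden layer
    nn.ReLU(),        # non-linearity
    nn.Linear(8, 1),  # hidden layer -> output
)

loss_fn = nn.MSELoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    y_pred = model(X)          # forward pass: inputs flow through the network
    loss = loss_fn(y_pred, y)  # measure how far predictions are from the targets

    optimiser.zero_grad()      # clear gradients left over from the last step
    loss.backward()            # backpropagation: compute gradients of the loss
    optimiser.step()           # nudge the weights to reduce the loss
```

Everything we cover below is an unpacking of the three lines at the end of this loop.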

Let’s quickly recap the general structure of a neural network:

A basic multi-layer perceptron with two hidden layers. Diagram by author.

Where each hidden neuron is carrying out the following process:

The process carried out inside each neuron. Diagram by author.
  • Inputs: These are the features of our data.
  • Weights: The coefficients we multiply the inputs by. The goal of the algorithm is to find the optimal weights.
  • Linear Weighted Sum: Sum up the products of the inputs and weights and add a bias/offset term, b (a code sketch of this computation follows the list).
  • Hidden Layer: This is where the multiple neurons are stored to learn patterns in the data. The superscript refers to the layer that a neuron belongss to.
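To make the single-neuron process above concrete, here is a minimal sketch of the linear weighted sum z = w₁x₁ + w₂x₂ + w₃x₃ + b followed by an activation. The input values, weights, bias, and the choice of sigmoid are all made-up assumptions for illustration.

```python
import torch

x = torch.tensor([0.5, -1.2, 3.0])  # inputs: features of one data point (made up)
w = torch.tensor([0.1, 0.4, -0.2])  # weights: coefficients the network learns (made up)
b = torch.tensor(0.3)               # bias/offset term (made up)

z = torch.dot(w, x) + b             # linear weighted sum: z = w·x + b
a = torch.sigmoid(z)                # activation: squashes z into (0, 1)

print(f"weighted sum z = {z.item():.3f}, activation a = {a.item():.3f}")
```

A hidden layer simply runs many of these neurons in parallel, each with its own weights and bias.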


