
1.2 - Multivariate Linear Regression

If you already understand Simple Linear Regression, then we can make things a little more complicated: Multivariate Linear Regression considers inputs with multiple features. This will help us develop a dense layer in the next part.

The goal of multivariate linear regression is similar to simple linear regression: estimate $f(\cdot)$ by a linear approximation $\hat{f}(\cdot)$

$$\mathbf{y} = f\left( \mathbf{X} \right) + \epsilon$$

Note that the input data $\mathbf{X}$ is now a matrix.

Purpose of this Notebook:

  1. Create a dataset for multivariate linear regression task

  2. Create our own Perceptron class from scratch

  3. Implement gradient descent from scratch

  4. Train our Perceptron

  5. Compare our Perceptron to the one prebuilt by PyTorch

Setup

print('Start package installation...')
Start package installation...
%%capture
%pip install torch
%pip install scikit-learn
print('Packages installed successfully!')
Packages installed successfully!
import torch
from torch import nn

from platform import python_version
python_version(), torch.__version__
('3.12.12', '2.9.0+cu128')
device = 'cpu'
if torch.cuda.is_available():
    device = 'cuda'
device
'cpu'
torch.set_default_dtype(torch.float64)
def add_to_class(Class):
    """Register a function as a method of an already-defined class."""
    def wrapper(obj):
        setattr(Class, obj.__name__, obj)
    return wrapper
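As a quick illustration of how this decorator attaches methods after a class is defined (a toy sketch: `Greeter` and `greet` are made up, and `add_to_class` is repeated here so the snippet runs standalone):

```python
# add_to_class as defined above
def add_to_class(Class):
    """Register a function as a method of an already-defined class."""
    def wrapper(obj):
        setattr(Class, obj.__name__, obj)
    return wrapper

class Greeter:  # toy class, for illustration only
    def __init__(self, name):
        self.name = name

# greet becomes a method of Greeter even though the class is already defined
@add_to_class(Greeter)
def greet(self):
    return f'Hello, {self.name}!'

print(Greeter('world').greet())  # Hello, world!
```

We will use this pattern below to build `MultiLinearRegression` one method at a time.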

Dataset

create dataset

The dataset $\mathcal{D}$ consists of the input data $\mathbf{X}$ and the target data $\mathbf{y}$

$$\mathcal{D} = \left\{ (\mathbf{x}_{1}^{\top}, y_{1}), \cdots, (\mathbf{x}_{m}^{\top}, y_{m}) \right\}$$

The input data $\mathbf{X} \in \mathbb{R}^{m \times n}$ can be represented as a matrix

$$\begin{align} \mathbf{X} &= \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix} \\ &= \begin{bmatrix} \mathbf{x}_{1}^{\top} \\ \vdots \\ \mathbf{x}_{m}^{\top} \end{bmatrix} \end{align}$$

where $m$ is the number of samples, $n$ is the number of features, and $\mathbf{x}_{i}^{\top} = \begin{bmatrix} x_{i1} & \cdots & x_{in} \end{bmatrix} \in \mathbb{R}^{1 \times n}$.

The target data $\mathbf{y} \in \mathbb{R}^{m}$ is unchanged

$$\mathbf{y} = \begin{bmatrix} y_{1} \\ \vdots \\ y_{m} \end{bmatrix}$$
from sklearn.datasets import make_regression
import random


M: int = 10_100 # number of samples
N: int = 4 # number of features

X, Y = make_regression(
    n_samples=M, 
    n_features=N, 
    n_targets=1,
    n_informative=N - 1, # make one feature a linear combination of the others
    bias=random.random(), # random true bias
    noise=1
)

print(X.shape)
print(Y.shape)
(10100, 4)
(10100,)

split dataset

X_train = torch.tensor(X[:100], device=device)
Y_train = torch.tensor(Y[:100], device=device)
X_train.shape, Y_train.shape
(torch.Size([100, 4]), torch.Size([100]))
X_valid = torch.tensor(X[100:], device=device)
Y_valid = torch.tensor(Y[100:], device=device)
X_valid.shape, Y_valid.shape
(torch.Size([10000, 4]), torch.Size([10000]))

delete raw dataset

del X
del Y

Scratch multivariate perceptron

weight and bias

Our model $\hat{\mathbf{y}}(\cdot)$ still has two trainable parameters $b$ and $\mathbf{w}$, but note that the weight is now a vector

$$\mathbf{w} \in \mathbb{R}^{n}$$

and $b \in \mathbb{R}$.

class MultiLinearRegression:
    def __init__(self, n_features: int):
        self.b = torch.randn(1, device=device)
        self.w = torch.randn(n_features, device=device)

    def copy_params(self, torch_layer: nn.Linear):
        """
        Copy the parameters from an nn.Linear layer into this model.

        Args:
            torch_layer: PyTorch linear layer from which to copy the parameters.
        """
        self.b.copy_(torch_layer.bias.detach().clone())
        self.w.copy_(torch_layer.weight[0, :].detach().clone())

weighted sum

$$\begin{align} \hat{\mathbf{y}}: \mathbb{R}^{m \times n} &\to \mathbb{R}^{m} \\ \mathbf{X} &\mapsto \hat{\mathbf{y}}(\mathbf{X}) = b + \mathbf{X}\mathbf{w} \end{align}$$

Remark: we can add a scalar $b \in \mathbb{R}$ to a vector thanks to the broadcasting mechanism.
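A tiny standalone sketch of that broadcasting (the values here are illustrative):

```python
import torch

b = torch.tensor(2.0)                # scalar bias
Xw = torch.tensor([1.0, -1.0, 3.0])  # pretend this is X @ w, shape (m,)

# b is broadcast to shape (m,) before the elementwise addition
y_hat = b + Xw
print(y_hat)  # tensor([3., 1., 5.])
```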

For one prediction

$$\begin{align} \hat{y}_{i} &= b + \sum_{j=1}^{n} x_{ij} w_{j} \\ &= b + \mathbf{x}_{i}^{\top} \mathbf{w} \end{align}$$

this will be useful for gradient descent.

@add_to_class(MultiLinearRegression)
def predict(self, x: torch.Tensor) -> torch.Tensor:
    """
    Predict the output for input x.

    Args:
        x: Input tensor of shape (n_samples, n_features).

    Returns:
        y_pred: Predicted output tensor of shape (n_samples,).
    """
    return torch.matmul(x, self.w) + self.b

MSE

The MSE is unchanged

$$\begin{align} L: \mathbb{R}^{m} &\to \mathbb{R}^{+} \\ \hat{\mathbf{y}} &\mapsto L(\hat{\mathbf{y}}), \; \hat{\mathbf{y}} \in \mathbb{R}^{m} \end{align}$$

$$L(\hat{\mathbf{y}}) = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_{i} - y_{i} \right)^{2}$$
@add_to_class(MultiLinearRegression)
def mse_loss(self, y_true: torch.Tensor, y_pred: torch.Tensor):
    """
    MSE loss function between target y_true and y_pred.

    Args:
        y_true: Target tensor of shape (n_samples,).
        y_pred: Predicted tensor of shape (n_samples,).

    Returns:
        loss: MSE loss between predictions and true values.
    """
    return ((y_pred - y_true)**2).mean().item()

@add_to_class(MultiLinearRegression)
def evaluate(self, x: torch.Tensor, y_true: torch.Tensor):
    """
    Evaluate the model on input x and target y_true using MSE.

    Args:
        x: Input tensor of shape (n_samples, n_features).
        y_true: Target tensor of shape (n_samples,).

    Returns:
        loss: MSE loss between predictions and true values.
    """
    y_pred = self.predict(x)
    return self.mse_loss(y_true, y_pred)

gradients

Let’s follow the same strategy as before:

  • First, determine the derivatives to be computed

  • Then, ascertain the shape of each derivative

  • Finally, compute the derivatives

⭐️ We are using Einstein notation, which implies summation over repeated indices. For example

$$a_{i} b_{i} \equiv \sum_{i} a_{i} b_{i}$$

We will use Einstein notation for the chain-rule summation, for example

$$\frac{\partial f}{\partial g_{i}} \frac{\partial g_{i}}{\partial x} \equiv \sum_{i} \frac{\partial f}{\partial g_{i}} \frac{\partial g_{i}}{\partial x}$$

The derivative of the MSE with respect to the bias is

$$\frac{\partial L}{\partial b} = \frac{\partial L}{\partial \hat{y}_{p}} \frac{\partial \hat{y}_{p}}{\partial b}$$

and with respect to the weight

$$\frac{\partial L}{\partial w_{q}} = \frac{\partial L}{\partial \hat{y}_{p}} \frac{\partial \hat{y}_{p}}{\partial w_{q}}$$

where the shape of each derivative is

$$\frac{\partial L}{\partial b} \in \mathbb{R}, \quad \frac{\partial L}{\partial \mathbf{w}} \in \mathbb{R}^{n}, \quad \frac{\partial L}{\partial \hat{\mathbf{y}}} \in \mathbb{R}^{m}, \quad \frac{\partial \hat{\mathbf{y}}}{\partial b} \in \mathbb{R}^{m}, \quad \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{w}} \in \mathbb{R}^{m \times n}$$

MSE derivative

The derivative of the MSE with respect to the predictions is

$$\begin{align} \frac{\partial L}{\partial \hat{y}_{p}} &= \frac{\partial}{\partial \hat{y}_{p}} \left( \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_{i} - y_{i} \right)^{2} \right) \\ &= \frac{1}{m} \sum_{i=1}^{m} \frac{\partial}{\partial \hat{y}_{p}} \left( \hat{y}_{i} - y_{i} \right)^{2} \\ &= \frac{2}{m} \sum_{i=1}^{m} \left( \hat{y}_{i} - y_{i} \right) \frac{\partial \hat{y}_{i}}{\partial \hat{y}_{p}} \\ &= \frac{2}{m} \sum_{i=1}^{m} \left( \hat{y}_{i} - y_{i} \right) \delta_{ip} \\ &= \frac{2}{m} \sum_{i=1}^{m} \left[ \hat{\mathbf{y}} - \mathbf{y} \right]_{i} \delta_{ip} \\ &= \frac{2}{m} \left[ \hat{\mathbf{y}} - \mathbf{y} \right]_{p} \\ &= \frac{2}{m} \left( \hat{y}_{p} - y_{p} \right) \end{align}$$

for $p = 1, \ldots, m$.

The vectorized form is

$$\frac{\partial L}{\partial \hat{\mathbf{y}}} = \frac{2}{m} \left( \hat{\mathbf{y}} - \mathbf{y} \right)$$

weighted sum derivative

respect to bias

$$\begin{align} \frac{\partial \hat{y}_{p}}{\partial b} &= \frac{\partial}{\partial b} \left( b + \mathbf{x}_{p}^{\top} \mathbf{w} \right) \\ &= 1 \end{align}$$

for $p = 1, \ldots, m$.

The vectorized form is

$$\frac{\partial \hat{\mathbf{y}}}{\partial b} = \mathbf{1}$$

where $\mathbf{1} \in \mathbb{R}^{m}$.

respect to weight

$$\begin{align} \frac{\partial \hat{y}_{p}}{\partial w_{q}} &= \frac{\partial}{\partial w_{q}} \left( b + \mathbf{x}_{p}^{\top} \mathbf{w} \right) \\ &= \frac{\partial}{\partial w_{q}} \left( \mathbf{x}_{p}^{\top} \mathbf{w} \right) \\ &= \frac{\partial}{\partial w_{q}} \left( x_{p1} w_{1} + \ldots + x_{pq} w_{q} + \ldots + x_{pn} w_{n} \right) \\ &= \frac{\partial}{\partial w_{q}} \left( x_{pk} w_{k} \right) \\ &= x_{pk} \delta_{kq} \\ &= x_{pq} \end{align}$$

for $p = 1, \ldots, m$ and $q = 1, \ldots, n$.

Vectorizing over $q = 1, \ldots, n$

$$\frac{\partial \hat{y}_{p}}{\partial \mathbf{w}} = \mathbf{x}_{p}^{\top} \in \mathbb{R}^{1 \times n}$$

and over $p = 1, \ldots, m$

$$\frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{w}} = \mathbf{X} \in \mathbb{R}^{m \times n}$$

full chain rule

The derivative of the MSE with respect to the bias is

$$\begin{align} \frac{\partial L}{\partial b} &= {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{p}}} {\color{Orange} \frac{\partial \hat{y}_{p}}{\partial b}} \\ &= {\color{Cyan} \frac{2}{m} \left( \hat{y}_{p} - y_{p} \right)} {\color{Orange} 1_{p}} \\ &= \frac{2}{m} \left\langle \hat{\mathbf{y}} - \mathbf{y}, \mathbf{1} \right\rangle \\ &= \frac{2}{m} \left( \hat{\mathbf{y}} - \mathbf{y} \right)^{\top} \mathbf{1} \end{align}$$

and with respect to the weight

$$\begin{align} \frac{\partial L}{\partial w_{q}} &= {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{p}}} {\color{Magenta} \frac{\partial \hat{y}_{p}}{\partial w_{q}}} \\ &= {\color{Cyan} \frac{2}{m} \left( \hat{y}_{p} - y_{p} \right)} {\color{Magenta} x_{pq}} \\ &= \frac{2}{m} \left\langle \hat{\mathbf{y}} - \mathbf{y}, \mathbf{x}_{:,q} \right\rangle \\ &= \frac{2}{m} \left( \mathbf{x}_{:,q} \right)^{\top} \left( \hat{\mathbf{y}} - \mathbf{y} \right) \end{align}$$

for $q = 1, \ldots, n$, where $\mathbf{x}_{:,q} = \begin{bmatrix} x_{1q} & \cdots & x_{mq} \end{bmatrix}^{\top} \in \mathbb{R}^{m \times 1}$.

The vectorized form is

$$\frac{\partial L}{\partial \mathbf{w}} = \frac{2}{m} \mathbf{X}^{\top} \left( \hat{\mathbf{y}} - \mathbf{y} \right)$$

final gradients

$$\nabla_{b} L = \frac{2}{m} \left( \hat{\mathbf{y}} - \mathbf{y} \right)^{\top} \mathbf{1}$$

$$\nabla_{\mathbf{w}} L = \frac{2}{m} \mathbf{X}^{\top} \left( \hat{\mathbf{y}} - \mathbf{y} \right)$$
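We can sanity-check these two formulas against PyTorch's autograd on a small random problem (a self-contained sketch; the sizes and variable names are illustrative):

```python
import torch

torch.set_default_dtype(torch.float64)
m, n = 8, 3
X = torch.randn(m, n)
y = torch.randn(m)
b = torch.randn(1, requires_grad=True)
w = torch.randn(n, requires_grad=True)

# forward pass and MSE loss
y_hat = b + X @ w
loss = ((y_hat - y) ** 2).mean()
loss.backward()  # autograd gradients land in b.grad and w.grad

# analytic gradients from the formulas above
delta = 2.0 / m * (y_hat.detach() - y)
grad_b = delta.sum()   # (2/m) (y_hat - y)^T 1
grad_w = X.T @ delta   # (2/m) X^T (y_hat - y)

print(torch.allclose(grad_b, b.grad.squeeze()),
      torch.allclose(grad_w, w.grad))
```

Both comparisons should print `True`, confirming the derivation before we hard-code it in `update`.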

parameters update

$$\begin{align} b &\leftarrow b - \eta \nabla_{b} L \\ &= b - \eta \left( \frac{2}{m} (\hat{\mathbf{y}} - \mathbf{y})^{\top} \mathbf{1} \right) \end{align}$$

$$\begin{align} \mathbf{w} &\leftarrow \mathbf{w} - \eta \nabla_{\mathbf{w}} L \\ &= \mathbf{w} - \eta \left( \frac{2}{m} \mathbf{X}^{\top} (\hat{\mathbf{y}} - \mathbf{y}) \right) \end{align}$$

where $\eta \in \mathbb{R}^{+}$ is called the learning rate.

@add_to_class(MultiLinearRegression)
def update(self, x: torch.Tensor, y_true: torch.Tensor, 
           y_pred: torch.Tensor, lr: float):
    """
    Update the model parameters.

    Args:
       x: Input tensor of shape (n_samples, n_features).
       y_true: Target tensor of shape (n_samples,).
       y_pred: Predicted output tensor of shape (n_samples,).
       lr: Learning rate. 
    """
    delta = 2 * (y_pred - y_true) / len(y_true)
    self.b -= lr * delta.sum()
    self.w -= lr * torch.matmul(x.T, delta)

gradient descent

@add_to_class(MultiLinearRegression)
def fit(self, x: torch.Tensor, y: torch.Tensor, 
        epochs: int, lr: float, batch_size: int, 
        x_valid: torch.Tensor, y_valid: torch.Tensor):
    """
    Fit the model using gradient descent.
    
    Args:
        x: Input tensor of shape (n_samples, n_features).
        y: Target tensor of shape (n_samples,).
        epochs: Number of epochs to fit.
        lr: Learning rate.
        batch_size: Size of each minibatch.
        x_valid: Input tensor of shape (n_valid_samples, n_features).
        y_valid: Target tensor of shape (n_valid_samples,).
    """
    for epoch in range(epochs):
        loss = []
        for batch in range(0, len(y), batch_size):
            end_batch = batch + batch_size

            y_pred = self.predict(x[batch:end_batch])

            loss.append(self.mse_loss(
                y[batch:end_batch],
                y_pred
            ))

            self.update(
                x[batch:end_batch], 
                y[batch:end_batch], 
                y_pred, 
                lr
            )

        loss = round(sum(loss) / len(loss), 4)
        loss_v = round(self.evaluate(x_valid, y_valid), 4)
        print(f'epoch: {epoch} - MSE: {loss} - MSE_v: {loss_v}')

Scratch vs torch.nn

Torch.nn model

class TorchLinearRegression(nn.Module):
    def __init__(self, n_features):
        super(TorchLinearRegression, self).__init__()
        self.layer = nn.Linear(n_features, 1, device=device)
        self.loss = nn.MSELoss()

    def forward(self, x):
        return self.layer(x)
    
    def evaluate(self, x, y):
        self.eval()
        with torch.no_grad():
            y_pred = self.forward(x)
            return self.loss(y_pred, y).item()
    
    def fit(self, x, y, epochs, lr, batch_size, x_valid, y_valid):
        optimizer = torch.optim.SGD(self.parameters(), lr=lr)
        for epoch in range(epochs):
            loss_t = [] # train loss
            for batch in range(0, len(y), batch_size):
                end_batch = batch + batch_size

                y_pred = self.forward(x[batch:end_batch])
                loss = self.loss(y_pred, y[batch:end_batch])
                loss_t.append(loss.item())

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            loss_t = round(sum(loss_t) / len(loss_t), 4)
            loss_v = round(self.evaluate(x_valid, y_valid), 4)
            print(f'epoch: {epoch} - MSE: {loss_t} - MSE_v: {loss_v}')
        optimizer.zero_grad()
torch_model = TorchLinearRegression(N)

scratch model

model = MultiLinearRegression(N)

evals

We will use a metric to compare our model with the PyTorch model.

import modified MAPE

We will use a modification of MAPE as a metric

$$\text{MAPE}(\mathbf{y}, \hat{\mathbf{y}}) = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}(y_{i}, \hat{y}_{i})$$

where

$$\mathcal{L}(y_{i}, \hat{y}_{i}) = \begin{cases} \left| \frac{y_{i} - \hat{y}_{i}}{y_{i}} \right| & \text{if } y_{i} \neq 0 \\ \left| \hat{y}_{i} \right| & \text{if } y_{i} = 0 \end{cases}$$
# This cell imports torch_mape 
# if you are running this notebook locally 
# or from Google Colab.

import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

try:
    from tools.torch_metrics import torch_mape as mape
    print('mape imported locally.')
except ModuleNotFoundError:
    import subprocess

    repo_url = 'https://raw.githubusercontent.com/PilotLeoYan/inside-deep-learning/main/content/tools/torch_metrics.py'
    local_file = 'torch_metrics.py'
    
    subprocess.run(['wget', repo_url, '-O', local_file], check=True)
    try:
        from torch_metrics import torch_mape as mape # type: ignore
        print('mape imported from GitHub.')
    except Exception as e:
        print(e)
mape imported locally.

predictions

Let’s compare the predictions of our model and PyTorch’s using modified MAPE.

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid).squeeze(-1)
)
13.10236558222539

They differ considerably because each model has its own parameters initialized randomly and independently of the other model.

copy parameters

We copy the values of the PyTorch model parameters to our model.

model.copy_params(torch_model.layer)

predictions after copy parameters

We measure the difference between the predictions of both models again.

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid).squeeze(-1)
)
0.0

We can see that after copying the parameters their predictions no longer differ.

loss

mape(
    model.evaluate(X_valid, Y_valid),
    torch_model.evaluate(X_valid, Y_valid.unsqueeze(-1))
)
0.0

training

We are going to train both models with the same hyperparameter values. If our model is well designed, then starting from the same parameters it should arrive at the same parameter values as the PyTorch model after training.

LR: float = 0.01 # learning rate
EPOCHS: int = 16 # number of epochs
BATCH: int = len(X_train) // 3 # minibatch size
torch_model.fit(
    X_train, 
    Y_train.unsqueeze(-1),
    EPOCHS, LR, BATCH,
    X_valid,
    Y_valid.unsqueeze(-1)
)
epoch: 0 - MSE: 11397.2866 - MSE_v: 8707.6512
epoch: 1 - MSE: 8993.6666 - MSE_v: 7320.302
epoch: 2 - MSE: 7155.589 - MSE_v: 6216.6786
epoch: 3 - MSE: 5744.2143 - MSE_v: 5330.7036
epoch: 4 - MSE: 4655.3057 - MSE_v: 4612.4923
epoch: 5 - MSE: 3810.5595 - MSE_v: 4024.314
epoch: 6 - MSE: 3151.1116 - MSE_v: 3537.5675
epoch: 7 - MSE: 2632.6743 - MSE_v: 3130.5135
epoch: 8 - MSE: 2221.892 - MSE_v: 2786.5756
epoch: 9 - MSE: 1893.6122 - MSE_v: 2493.0666
epoch: 10 - MSE: 1628.8397 - MSE_v: 2240.2322
epoch: 11 - MSE: 1413.2041 - MSE_v: 2020.5346
epoch: 12 - MSE: 1235.8103 - MSE_v: 1828.114
epoch: 13 - MSE: 1088.3777 - MSE_v: 1658.3839
epoch: 14 - MSE: 964.5936 - MSE_v: 1507.7271
epoch: 15 - MSE: 859.6285 - MSE_v: 1373.2668
model.fit(
    X_train, Y_train,
    EPOCHS, LR, BATCH,
    X_valid, Y_valid
)
epoch: 0 - MSE: 11397.2866 - MSE_v: 8707.6512
epoch: 1 - MSE: 8993.6666 - MSE_v: 7320.302
epoch: 2 - MSE: 7155.589 - MSE_v: 6216.6786
epoch: 3 - MSE: 5744.2143 - MSE_v: 5330.7036
epoch: 4 - MSE: 4655.3057 - MSE_v: 4612.4923
epoch: 5 - MSE: 3810.5595 - MSE_v: 4024.314
epoch: 6 - MSE: 3151.1116 - MSE_v: 3537.5675
epoch: 7 - MSE: 2632.6743 - MSE_v: 3130.5135
epoch: 8 - MSE: 2221.892 - MSE_v: 2786.5756
epoch: 9 - MSE: 1893.6122 - MSE_v: 2493.0666
epoch: 10 - MSE: 1628.8397 - MSE_v: 2240.2322
epoch: 11 - MSE: 1413.2041 - MSE_v: 2020.5346
epoch: 12 - MSE: 1235.8103 - MSE_v: 1828.114
epoch: 13 - MSE: 1088.3777 - MSE_v: 1658.3839
epoch: 14 - MSE: 964.5936 - MSE_v: 1507.7271
epoch: 15 - MSE: 859.6285 - MSE_v: 1373.2668

predictions after training

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid).squeeze(-1)
)
3.4679191706207704e-16

bias

We directly measure the difference between the bias values of both models.

mape(
    model.b.clone(),
    torch_model.layer.bias.detach()
)
0.0

weight

And measure the difference between the weight values of both models.

mape(
    model.w.clone(),
    torch_model.layer.weight.detach().squeeze(0)
)
7.449571250865014e-17

All right, our implementation is correct with respect to PyTorch. Now we can finally tackle Multioutput in the next notebook.