
1.3 - Multioutput Linear Regression

Now we are going to increase the complexity: instead of the perceptron having a single output, it will have multiple outputs. The multioutput perceptron will allow us to build classification models in the next section.

We can think of the multioutput perceptron as a layer of several multivariate perceptrons, where each perceptron's output corresponds to one output feature.

The goal of the multioutput perceptron is to estimate $f(\cdot)$ by a linear approximation $\hat{f}(\cdot)$ such that

$$\mathbf{Y} = \hat{f}(\mathbf{X}) + \epsilon$$

Note that the target $\mathbf{Y}$ is now a matrix.

Purpose of this Notebook:

  1. Create a dataset for multioutput linear regression task

  2. Create our own Multioutput Perceptron class from scratch

  3. Calculate the gradients from scratch

  4. Train our Perceptron

  5. Compare our Perceptron to the one prebuilt by PyTorch

Setup

print('Start package installation...')
Start package installation...
%%capture
%pip install torch
%pip install scikit-learn
print('Packages installed successfully!')
Packages installed successfully!
import torch
from torch import nn

from platform import python_version
python_version(), torch.__version__
('3.12.12', '2.9.0+cu128')
device = 'cpu'
if torch.cuda.is_available():
    device = 'cuda'
device
'cpu'
torch.set_default_dtype(torch.float64)
def add_to_class(Class):  
    """Register functions as methods in created class."""
    def wrapper(obj): setattr(Class, obj.__name__, obj)
    return wrapper
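
For example, the decorator lets us attach a method to a class after the class has been defined (a toy sketch; Counter and increment are illustrative names):

class Counter:
    def __init__(self): self.n = 0

@add_to_class(Counter)
def increment(self):
    """Increase the counter by one."""
    self.n += 1

c = Counter()
c.increment()
print(c.n)  # 1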

Dataset

create dataset

The dataset $\mathcal{D}$ consists of the input data $\mathbf{X}$ and the target data $\mathbf{Y}$

$$\mathcal{D} = \left\{ (\mathbf{x}^{\top}_{1}, \mathbf{y}^{\top}_{1}), \cdots, (\mathbf{x}^{\top}_{m}, \mathbf{y}^{\top}_{m}) \right\}$$

The input data $\mathbf{X} \in \mathbb{R}^{m \times n}$ can be represented as a matrix

$$\mathbf{X} = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix}$$

where $m$ is the number of samples, and $n$ is the number of input features.

The target data is $\mathbf{Y} \in \mathbb{R}^{m \times n_{o}}$

$$\mathbf{Y} = \begin{bmatrix} y_{11} & \cdots & y_{1n_{o}} \\ \vdots & \ddots & \vdots \\ y_{m1} & \cdots & y_{m n_{o}} \end{bmatrix}$$

where $n_{o}$ is the number of output features.

from sklearn.datasets import make_regression
import random

M: int = 10_100 # number of samples
N: int = 6 # number of input features
NO: int = 3 # number of output features

X, Y = make_regression(
    n_samples=M, 
    n_features=N, 
    n_targets=NO, 
    n_informative=N - 1,
    bias=random.random(),
    noise=1
)

print(X.shape)
print(Y.shape)
(10100, 6)
(10100, 3)

split dataset

X_train = torch.tensor(X[:100], device=device)
Y_train = torch.tensor(Y[:100], device=device)
X_train.shape, Y_train.shape
(torch.Size([100, 6]), torch.Size([100, 3]))
X_valid = torch.tensor(X[100:], device=device)
Y_valid = torch.tensor(Y[100:], device=device)
X_valid.shape, Y_valid.shape
(torch.Size([10000, 6]), torch.Size([10000, 3]))

delete raw dataset

del X
del Y

Scratch multioutput perceptron

weight and bias

Our model $\hat{\mathbf{Y}}(\cdot)$ has two trainable parameters $\mathbf{b}$ and $\mathbf{W}$, called the bias and the weight respectively

$$\mathbf{b} \in \mathbb{R}^{n_{o}}, \quad \mathbf{W} \in \mathbb{R}^{n \times n_{o}}$$
class MultioutputRegression:
    def __init__(self, n_features: int, out_features: int):
        self.b = torch.randn(out_features, device=device)
        self.w = torch.randn(n_features, out_features, device=device)

    def copy_params(self, torch_layer: torch.nn.modules.linear.Linear):
        """
        Copy the parameters from a module.linear to this model.

        Args:
            torch_layer: Pytorch module from which to copy the parameters.
        """
        self.b.copy_(torch_layer.bias.detach().clone())
        self.w.copy_(torch_layer.weight.T.detach().clone())

weighted sum

$$\begin{align} \hat{\mathbf{Y}}: \mathbb{R}^{m \times n} &\to \mathbb{R}^{m \times n_{o}} \\ \mathbf{X} &\mapsto \hat{\mathbf{Y}}(\mathbf{X}) = \mathbf{b} + \mathbf{XW} \end{align}$$

Remark: We can add the vector $\mathbf{b}$ to the matrix $\mathbf{XW}$ thanks to the broadcasting mechanism.

For one prediction at the $i$-th sample and the $j$-th output feature

$$\hat{y}_{ij} = b_{j} + \sum_{k=1}^{n} x_{ik} w_{kj}$$

for $i = 1, \ldots, m$, and $j = 1, \ldots, n_{o}$.
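
As a quick sanity check (illustrative tensors, not part of the model), the scalar formula agrees with the broadcasted matrix form:

# scalar formula vs. broadcasted matrix form on a few random samples
xs = torch.randn(4, N, device=device)
b_demo = torch.randn(NO, device=device)
w_demo = torch.randn(N, NO, device=device)

y_mat = torch.matmul(xs, w_demo) + b_demo  # b is broadcast over the rows
y_loop = torch.stack([
    torch.stack([b_demo[j] + sum(xs[i, k] * w_demo[k, j] for k in range(N))
                 for j in range(NO)])
    for i in range(4)
])
torch.allclose(y_mat, y_loop)  # True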

@add_to_class(MultioutputRegression)
def predict(self, x: torch.Tensor) -> torch.Tensor:
    """
    Predict the output for input x

    Args:
        x: Input tensor of shape (n_samples, n_features).

    Returns:
        y_pred: Predicted output tensor of shape (n_samples, out_features).
    """
    return torch.matmul(x, self.w) + self.b

MSE

MSE is now defined as

$$\begin{align} L: \mathbb{R}^{m \times n_{o}} &\to \mathbb{R}^{+} \\ \hat{\mathbf{Y}} &\mapsto L(\hat{\mathbf{Y}}), \quad \hat{\mathbf{Y}} \in \mathbb{R}^{m \times n_{o}} \end{align}$$
$$L(\hat{\mathbf{Y}}) = \frac{1}{m n_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_{o}} \left( \hat{y}_{ij} - y_{ij} \right)^{2}$$

Vectorized form

$$L(\hat{\mathbf{Y}}) = \frac{1}{m n_{o}} \operatorname{sum} \left( \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)^{2} \right)$$

where $\mathbf{A}^{2}$ is the element-wise power $\mathbf{A}^{2} = \mathbf{A} \odot \mathbf{A}$.

Note: $\odot$ is called the Hadamard product.
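
In PyTorch, * on tensors is the Hadamard product, so the element-wise square can be written either way (a one-line check with an illustrative tensor):

A = torch.randn(2, 3)
torch.allclose(A**2, A * A)  # True: element-wise power equals Hadamard product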

@add_to_class(MultioutputRegression)
def mse_loss(self, y_true: torch.Tensor, y_pred: torch.Tensor):
    """
    MSE loss function between target y_true and y_pred.

    Args:
        y_true: Target tensor of shape (n_samples, out_features).
        y_pred: Predicted tensor of shape (n_samples, out_features).

    Returns:
        loss: MSE loss between predictions and true values.
    """
    return ((y_pred - y_true)**2).mean().item()

@add_to_class(MultioutputRegression)
def evaluate(self, x: torch.Tensor, y_true: torch.Tensor):
    """
    Evaluate the model on input x and target y_true using MSE.

    Args:
        x: Input tensor of shape (n_samples, n_features).
        y_true: Target tensor of shape (n_samples, out_features).

    Returns:
        loss: MSE loss between predictions and true values.
    """
    y_pred = self.predict(x)
    return self.mse_loss(y_true, y_pred)

gradients

Let’s follow the same strategy as before:

  • First, determine the derivatives to be computed

  • Then, ascertain the shape of each derivative

  • Finally, compute the derivatives

⭐️ We are using Einstein notation, in which a repeated index implies summation

$$a_{i} b_{i} \equiv \sum_{i} a_{i} b_{i}$$

We will use Einstein notation for the chain rule summation, for example

$$\frac{\partial f}{\partial g_{i}} \frac{\partial g_{i}}{\partial x} \equiv \sum_{i} \frac{\partial f}{\partial g_{i}} \frac{\partial g_{i}}{\partial x}$$
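
torch.einsum follows exactly this convention: a repeated index is summed over. For instance, $a_{i} b_{i}$ is the inner product (illustrative tensors):

a = torch.randn(5)
b = torch.randn(5)
# the repeated index i is summed: a_i b_i == <a, b>
torch.allclose(torch.einsum('i,i->', a, b), torch.dot(a, b))  # True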

Derivative of MSE with respect to the bias

$$\frac{\partial L}{\partial b_{r}} = \frac{\partial L}{\partial \hat{y}_{pq}} \frac{\partial \hat{y}_{pq}}{\partial b_{r}}$$

and the derivative of MSE with respect to the weight

$$\frac{\partial L}{\partial w_{rs}} = \frac{\partial L}{\partial \hat{y}_{pq}} \frac{\partial \hat{y}_{pq}}{\partial w_{rs}}$$

where the shape of each derivative is

$$\frac{\partial L}{\partial \mathbf{b}} \in \mathbb{R}^{n_{o}}$$
$$\frac{\partial L}{\partial \mathbf{W}} \in \mathbb{R}^{n \times n_{o}}$$
$$\frac{\partial L}{\partial \hat{\mathbf{Y}}} \in \mathbb{R}^{m \times n_{o}}$$
$$\frac{\partial \hat{\mathbf{Y}}}{\partial \mathbf{b}} \in \mathbb{R}^{(m \times n_{o}) \times n_{o}}$$
$$\frac{\partial \hat{\mathbf{Y}}}{\partial \mathbf{W}} \in \mathbb{R}^{(m \times n_{o}) \times (n \times n_{o})}$$

MSE derivative

$$\begin{align} \frac{\partial L}{\partial \hat{y}_{pq}} &= \frac{\partial}{\partial \hat{y}_{pq}} \left( \frac{1}{mn_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_o} \left( \hat{y}_{ij} - y_{ij} \right)^{2} \right) \\ &= \frac{1}{mn_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_o} \frac{\partial}{\partial \hat{y}_{pq}} \left( \hat{y}_{ij} - y_{ij} \right)^{2} \\ &= \frac{2}{mn_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_o} \left( \hat{y}_{ij} - y_{ij} \right) \frac{\partial \hat{y}_{ij}}{\partial \hat{y}_{pq}} \\ &= \frac{2}{mn_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_o} \left( \hat{y}_{ij} - y_{ij} \right) \delta_{ip} \delta_{jq} \\ &= \frac{2}{mn_{o}} \sum_{i=1}^{m} \sum_{j=1}^{n_o} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{ij} \delta_{ip} \delta_{jq} \\ &= \frac{2}{mn_{o}} \sum_{j=1}^{n_o} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{pj} \delta_{jq} \\ &= \frac{2}{mn_{o}} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{pq} \\ &= \frac{2}{mn_{o}} \left( \hat{y}_{pq} - y_{pq} \right) \end{align}$$

for $p = 1, \ldots, m$, and $q = 1, \ldots, n_{o}$.

Vectorized form

$$\frac{\partial L}{\partial \hat{\mathbf{Y}}} = \frac{2}{mn_{o}} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)$$

weighted sum derivative

with respect to bias

$$\begin{align} \frac{\partial \hat{y}_{pq}}{\partial b_{r}} &= \frac{\partial}{\partial b_{r}} \left( b_{q} + x_{pk} w_{kq} \right) \\ &= \frac{\partial b_{q}}{\partial b_{r}} \\ &= \delta_{qr} \end{align}$$

for $p = 1, \ldots, m$, and $q, r = 1, \ldots, n_{o}$.

with respect to weight

$$\begin{align} \frac{\partial \hat{y}_{pq}}{\partial w_{rs}} &= \frac{\partial}{\partial w_{rs}} \left( b_{q} + x_{pk} w_{kq} \right) \\ &= x_{pk} \frac{\partial w_{kq}}{\partial w_{rs}} \\ &= x_{pk} \delta_{kr} \delta_{qs} \\ &= x_{pr} \delta_{qs} \end{align}$$

for $p = 1, \ldots, m$, $q, s = 1, \ldots, n_{o}$, and $r = 1, \ldots, n$.

Note: $x_{pk} w_{kq}$ implies summation over the repeated index $k$.

full chain rule

Derivative of MSE with respect to the bias

$$\begin{align} \frac{\partial L}{\partial b_{r}} &= {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{pq}}} {\color{Orange} \frac{\partial \hat{y}_{pq}}{\partial b_{r}}} \\ &= \sum_{p=1}^{m} \sum_{q=1}^{n_{o}} {\color{Cyan} \frac{2}{mn_{o}} \left( \hat{y}_{pq} - y_{pq} \right)} {\color{Orange} \delta_{qr}} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \sum_{q=1}^{n_{o}} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{pq} \delta_{qr} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{pr} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \left( \hat{y}_{pr} - y_{pr} \right) \\ &= \frac{2}{mn_{o}} \left\langle \mathbf{1}, \hat{\mathbf{y}}_{:,r} - \mathbf{y}_{:,r} \right\rangle \\ &= \frac{2}{mn_{o}} \mathbf{1}^{\top} \left( \hat{\mathbf{y}}_{:,r} - \mathbf{y}_{:,r} \right) \end{align}$$

for $r = 1, \ldots, n_{o}$, where $\mathbf{1} \in \mathbb{R}^{m}$.

Note: We made the summations explicit for clarity before forming the inner product.

Vectorized form

$$\frac{\partial L}{\partial \mathbf{b}} = \frac{2}{mn_{o}} \mathbf{1}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)$$
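
Multiplying by $\mathbf{1}^{\top}$ is just a column sum, which is how the update method below computes this gradient; a quick check with illustrative tensors:

m_demo = 7
diff = torch.randn(m_demo, NO, device=device)  # stands in for Y_hat - Y
grad_ones = 2 / (m_demo * NO) * (torch.ones(m_demo, device=device) @ diff)
grad_sum = 2 / (m_demo * NO) * diff.sum(dim=0)
torch.allclose(grad_ones, grad_sum)  # True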

Derivative of MSE with respect to the weight

$$\begin{align} \frac{\partial L}{\partial w_{rs}} &= {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{pq}}} {\color{Orange} \frac{\partial \hat{y}_{pq}}{\partial w_{rs}}} \\ &= \sum_{p=1}^{m} \sum_{q=1}^{n_{o}} {\color{Cyan} \frac{2}{mn_{o}} \left( \hat{y}_{pq} - y_{pq} \right)} {\color{Orange} x_{pr} \delta_{qs}} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \sum_{q=1}^{n_{o}} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{pq} x_{pr} \delta_{qs} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \left[ \hat{\mathbf{Y}} - \mathbf{Y} \right]_{ps} x_{pr} \\ &= \frac{2}{mn_{o}} \sum_{p=1}^{m} \left( \hat{y}_{ps} - y_{ps} \right) x_{pr} \\ &= \frac{2}{mn_{o}} \left\langle \mathbf{x}_{:,r}, \hat{\mathbf{y}}_{:,s} - \mathbf{y}_{:,s} \right\rangle \\ &= \frac{2}{mn_{o}} (\mathbf{x}_{:,r})^{\top} \left( \hat{\mathbf{y}}_{:,s} - \mathbf{y}_{:,s} \right) \end{align}$$

for $r = 1, \ldots, n$, and $s = 1, \ldots, n_{o}$.

Note:

$$\mathbf{x}_{:,r} = \begin{bmatrix} x_{1r} & \cdots & x_{mr} \end{bmatrix}^{\top}$$

is the $r$-th input feature across all input samples.

$$\hat{\mathbf{y}}_{:,s} = \begin{bmatrix} \hat{y}_{1s} & \cdots & \hat{y}_{ms} \end{bmatrix}^{\top}, \quad \mathbf{y}_{:,s} = \begin{bmatrix} y_{1s} & \cdots & y_{ms} \end{bmatrix}^{\top}$$

$\hat{\mathbf{y}}_{:,s}$ and $\mathbf{y}_{:,s}$ are the $s$-th output feature of all predicted and target data samples respectively.

Vectorized form

$$\frac{\partial L}{\partial \mathbf{W}} = \frac{2}{mn_{o}} \mathbf{X}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)$$

final gradient

$$\nabla_{\mathbf{b}} L = \frac{2}{m n_{o}} \mathbf{1}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)$$
$$\nabla_{\mathbf{W}} L = \frac{2}{m n_{o}} \mathbf{X}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right)$$
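
As a sanity check, we can compare these closed-form gradients against PyTorch's autograd on small random tensors (an illustrative verification, separate from the training loop):

# verify the closed-form gradients with autograd
m_chk = 8
x_chk = torch.randn(m_chk, N, device=device)
y_chk = torch.randn(m_chk, NO, device=device)
b_chk = torch.randn(NO, device=device, requires_grad=True)
w_chk = torch.randn(N, NO, device=device, requires_grad=True)

y_hat = b_chk + x_chk @ w_chk
loss = ((y_hat - y_chk) ** 2).mean()
loss.backward()

delta = 2 * (y_hat.detach() - y_chk) / (m_chk * NO)
print(torch.allclose(b_chk.grad, delta.sum(dim=0)))  # gradient wrt the bias
print(torch.allclose(w_chk.grad, x_chk.T @ delta))   # gradient wrt the weight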

parameter update

$$\begin{align} \mathbf{b} &\leftarrow \mathbf{b} - \eta \nabla_{\mathbf{b}} L \\ &= \mathbf{b} - \eta \left( \frac{2}{m n_{o}} \mathbf{1}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right) \right) \end{align}$$
$$\begin{align} \mathbf{W} &\leftarrow \mathbf{W} - \eta \nabla_{\mathbf{W}} L \\ &= \mathbf{W} - \eta \left( \frac{2}{m n_{o}} \mathbf{X}^{\top} \left( \hat{\mathbf{Y}} - \mathbf{Y} \right) \right) \end{align}$$

where $\eta \in \mathbb{R}^{+}$ is called the learning rate.

@add_to_class(MultioutputRegression)
def update(self, x: torch.Tensor, y_true: torch.Tensor, 
           y_pred: torch.Tensor, lr: float):
    """
    Update the model parameters.

    Args:
       x: Input tensor of shape (n_samples, n_features).
       y_true: Target tensor of shape (n_samples, out_features).
       y_pred: Predicted output tensor of shape (n_samples, out_features).
       lr: Learning rate. 
    """
    delta = 2 * (y_pred - y_true) / y_true.numel()  # dL/dY_hat
    self.b -= lr * delta.sum(dim=0)          # b <- b - lr * 1^T delta (column sum)
    self.w -= lr * torch.matmul(x.T, delta)  # W <- W - lr * X^T delta

gradient descent

@add_to_class(MultioutputRegression)
def fit(self, x_train: torch.Tensor, y_train: torch.Tensor, 
        epochs: int, lr: float, batch_size: int, 
        x_valid: torch.Tensor, y_valid: torch.Tensor):
    """
    Fit the model using gradient descent.
    
    Args:
        x_train: Input tensor of shape (n_samples, n_features).
        y_train: Target tensor of shape (n_samples, out_features).
        epochs: Number of epochs to fit.
        lr: learning rate.
        batch_size: Number of samples per batch.
        x_valid: Input tensor of shape (n_valid_samples, n_features).
        y_valid: Target tensor of shape (n_valid_samples, out_features)
    """
    for epoch in range(epochs):
        loss = []
        for batch in range(0, len(y_train), batch_size):
            end_batch = batch + batch_size

            y_pred = self.predict(x_train[batch:end_batch])

            loss.append(self.mse_loss(
                y_train[batch:end_batch], 
                y_pred
            ))

            self.update(
                x_train[batch:end_batch], 
                y_train[batch:end_batch], 
                y_pred, 
                lr
            )

        loss = round(sum(loss) / len(loss), 4)
        loss_v = round(self.evaluate(x_valid, y_valid), 4)
        print(f'epoch: {epoch} - MSE: {loss} - MSE_v: {loss_v}')

Scratch vs Torch.nn

Torch.nn model

class TorchLinearRegression(nn.Module):
    def __init__(self, n_features, n_out_features):
        super(TorchLinearRegression, self).__init__()
        self.layer = nn.Linear(n_features, n_out_features, device=device)
        self.loss = nn.MSELoss()

    def forward(self, x):
        return self.layer(x)
    
    def evaluate(self, x, y):
        self.eval()
        with torch.no_grad():
            y_pred = self.forward(x)
            return self.loss(y_pred, y).item()
    
    def fit(self, x, y, epochs, lr, batch_size, x_valid, y_valid):
        optimizer = torch.optim.SGD(self.parameters(), lr=lr)
        for epoch in range(epochs):
            loss_t = []
            for batch in range(0, len(y), batch_size):
                end_batch = batch + batch_size

                y_pred = self.forward(x[batch:end_batch])
                loss = self.loss(y_pred, y[batch:end_batch])
                loss_t.append(loss.item())

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            loss_t = round(sum(loss_t) / len(loss_t), 4)
            loss_v = round(self.evaluate(x_valid, y_valid), 4)
            print(f'epoch: {epoch} - MSE: {loss_t} - MSE_v: {loss_v}')
torch_model = TorchLinearRegression(N, NO)

scratch model

model = MultioutputRegression(N, NO)

evals

We will use a metric to compare our model with the PyTorch model.

import modified MAPE

We will use a modified version of MAPE as the metric

$$\text{MAPE}(\mathbf{Y}, \hat{\mathbf{Y}}) = \frac{1}{mn_{o}} \sum^{m}_{i=1} \sum^{n_{o}}_{j=1} \mathcal{L}(y_{ij}, \hat{y}_{ij})$$

where

$$\mathcal{L}(y_{ij}, \hat{y}_{ij}) = \begin{cases} \left| \frac{y_{ij} - \hat{y}_{ij}}{y_{ij}} \right| & \text{if } y_{ij} \neq 0 \\ \left| \hat{y}_{ij} \right| & \text{if } y_{ij} = 0 \end{cases}$$
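
A minimal sketch consistent with the formula above (the torch_mape imported below may be implemented differently, e.g. it also handles scalar inputs):

def mape_sketch(y_true: torch.Tensor, y_pred: torch.Tensor) -> float:
    """Modified MAPE: relative error where y_true != 0, |y_pred| where it is 0."""
    nonzero = y_true != 0
    # replace zero denominators; those entries are overwritten by the second case
    safe_den = torch.where(nonzero, y_true, torch.ones_like(y_true))
    err = torch.where(nonzero,
                      torch.abs((y_true - y_pred) / safe_den),
                      torch.abs(y_pred))
    return err.mean().item()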
# This cell imports torch_mape 
# if you are running this notebook locally 
# or from Google Colab.

import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

try:
    from tools.torch_metrics import torch_mape as mape
    print('mape imported locally.')
except ModuleNotFoundError:
    import subprocess

    repo_url = 'https://raw.githubusercontent.com/PilotLeoYan/inside-deep-learning/main/content/tools/torch_metrics.py'
    local_file = 'torch_metrics.py'
    
    subprocess.run(['wget', repo_url, '-O', local_file], check=True)
    try:
        from torch_metrics import torch_mape as mape # type: ignore
        print('mape imported from GitHub.')
    except Exception as e:
        print(e)
mape imported locally.

predictions

Let’s compare the predictions of our model and PyTorch’s using modified MAPE.

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid)
)
22.88791240727532

They differ considerably because each model's parameters were initialized randomly and independently of the other's.

copy parameters

We copy the values of the PyTorch model parameters to our model.

model.copy_params(torch_model.layer)
parameters = (model.b.clone(), model.w.clone())

predictions after copying parameters

We measure the difference between the predictions of both models again.

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid)
)
0.0

loss

mape(
    model.evaluate(X_valid, Y_valid),
    torch_model.evaluate(X_valid, Y_valid)
)
0.0

training

We are going to train both models using the same hyperparameter values. If our model is well designed, then starting from the same parameters it should arrive at the same parameter values as the PyTorch model after training.

LR: float = 0.01 # learning rate
EPOCHS: int = 16 # number of epochs
BATCH: int = len(X_train) // 3 # batch size
torch_model.fit(
    X_train, Y_train, 
    EPOCHS, LR, BATCH, 
    X_valid, Y_valid
)
epoch: 0 - MSE: 15202.6208 - MSE_v: 18460.5819
epoch: 1 - MSE: 14456.9148 - MSE_v: 17699.6765
epoch: 2 - MSE: 13761.3257 - MSE_v: 16975.3579
epoch: 3 - MSE: 13110.6882 - MSE_v: 16285.2623
epoch: 4 - MSE: 12500.5685 - MSE_v: 15627.259
epoch: 5 - MSE: 11927.145 - MSE_v: 14999.4183
epoch: 6 - MSE: 11387.11 - MSE_v: 14399.9842
epoch: 7 - MSE: 10877.5873 - MSE_v: 13827.3522
epoch: 8 - MSE: 10396.0639 - MSE_v: 13280.0497
epoch: 9 - MSE: 9940.334 - MSE_v: 12756.7203
epoch: 10 - MSE: 9508.4513 - MSE_v: 12256.11
epoch: 11 - MSE: 9098.6904 - MSE_v: 11777.0554
epoch: 12 - MSE: 8709.5135 - MSE_v: 11318.4742
epoch: 13 - MSE: 8339.5437 - MSE_v: 10879.3564
epoch: 14 - MSE: 7987.542 - MSE_v: 10458.7572
epoch: 15 - MSE: 7652.3883 - MSE_v: 10055.7907
model.fit(
    X_train, Y_train, 
    EPOCHS, LR, BATCH, 
    X_valid, Y_valid
)
epoch: 0 - MSE: 15202.6208 - MSE_v: 18460.5819
epoch: 1 - MSE: 14456.9148 - MSE_v: 17699.6765
epoch: 2 - MSE: 13761.3257 - MSE_v: 16975.3579
epoch: 3 - MSE: 13110.6882 - MSE_v: 16285.2623
epoch: 4 - MSE: 12500.5685 - MSE_v: 15627.259
epoch: 5 - MSE: 11927.145 - MSE_v: 14999.4183
epoch: 6 - MSE: 11387.11 - MSE_v: 14399.9842
epoch: 7 - MSE: 10877.5873 - MSE_v: 13827.3522
epoch: 8 - MSE: 10396.0639 - MSE_v: 13280.0497
epoch: 9 - MSE: 9940.334 - MSE_v: 12756.7203
epoch: 10 - MSE: 9508.4513 - MSE_v: 12256.11
epoch: 11 - MSE: 9098.6904 - MSE_v: 11777.0554
epoch: 12 - MSE: 8709.5135 - MSE_v: 11318.4742
epoch: 13 - MSE: 8339.5437 - MSE_v: 10879.3564
epoch: 14 - MSE: 7987.542 - MSE_v: 10458.7572
epoch: 15 - MSE: 7652.3883 - MSE_v: 10055.7907

predictions after training

mape(
    model.predict(X_valid),
    torch_model.forward(X_valid)
)
6.023391963615669e-16

bias

We directly measure the difference between the bias values of both models.

mape(
    model.b.clone(),
    torch_model.layer.bias.detach()
)
2.3068030920040977e-16

weight

mape(
    model.w.clone(),
    torch_model.layer.weight.detach().T
)
1.199589170858017e-16

All right, our implementation is correct with respect to PyTorch. We have finished this section.

Compute gradients with einsum

This implementation is similar; only the parameter update is changed to use einsum. We will show how to compute the chain rule using explicit Einstein notation.

If we change the weighted sum to avoid broadcasting

$$\hat{\mathbf{Y}}(\mathbf{X}) = \mathbf{1} \otimes \mathbf{b} + \mathbf{XW}$$

where $\mathbf{1} \in \mathbb{R}^{m}$.

Remark: $\otimes$ is the outer product.

$$\mathbf{1} \otimes \mathbf{b} = \begin{bmatrix} b_{1} & \cdots & b_{n_{o}} \\ \vdots & \ddots & \vdots \\ b_{1} & \cdots & b_{n_{o}} \end{bmatrix} \in \mathbb{R}^{m \times n_{o}}$$
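
which can be built explicitly with an einsum outer product (illustrative tensors):

ones_m = torch.ones(5, device=device)
b_vec = torch.randn(NO, device=device)
torch.einsum('p,q->pq', ones_m, b_vec)  # shape (5, NO): b repeated on every row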
$$\begin{align} \frac{\partial \hat{y}_{pq}}{\partial b_{r}} &= \frac{\partial}{\partial b_{r}} \left( 1_{p} b_{q} + x_{pk} w_{kq} \right) \\ &= 1_{p} \frac{\partial b_{q}}{\partial b_{r}} \\ &= 1_{p} \delta_{qr} \end{align}$$

The derivative of the weighted sum with respect to the bias is computed as

b_der = torch.einsum(
    'p,qr->pqr', 
    torch.ones(m), # the tensor of 1
    torch.eye(no) # kronecker delta
) # result is a tensor of order 3

The derivative of the weighted sum with respect to the weight does not change

$$\frac{\partial \hat{y}_{pq}}{\partial w_{rs}} = x_{pr} \delta_{qs}$$

and it is computed as

w_der = torch.einsum(
    'pr,qs->pqrs', 
    x, 
    torch.eye(no)
) # result is a tensor of order 4

To compute the chain rule

$$\frac{\partial L}{\partial b_{r}} = {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{pq}}} {\color{Orange} \frac{\partial \hat{y}_{pq}}{\partial b_{r}}}$$
torch.einsum('pq,pqr->r', delta, b_der)
$$\frac{\partial L}{\partial w_{rs}} = {\color{Cyan} \frac{\partial L}{\partial \hat{y}_{pq}}} {\color{Orange} \frac{\partial \hat{y}_{pq}}{\partial w_{rs}}}$$
torch.einsum('pq,pqrs->rs', delta, w_der)
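
Contracting these higher-order tensors reproduces the vectorized forms from before; a quick check with illustrative tensors that the einsum chain rule matches $\mathbf{1}^{\top} \Delta$ and $\mathbf{X}^{\top} \Delta$:

m_d = 6
x_d = torch.randn(m_d, N, device=device)
delta_d = torch.randn(m_d, NO, device=device)  # stands in for dL/dY_hat

b_der = torch.einsum('p,qr->pqr', torch.ones(m_d, device=device), torch.eye(NO, device=device))
w_der = torch.einsum('pr,qs->pqrs', x_d, torch.eye(NO, device=device))

print(torch.allclose(torch.einsum('pq,pqr->r', delta_d, b_der), delta_d.sum(dim=0)))  # True
print(torch.allclose(torch.einsum('pq,pqrs->rs', delta_d, w_der), x_d.T @ delta_d))   # True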
class EinsumLinearRegression(MultioutputRegression):
    def update(self, x: torch.Tensor, y_true: torch.Tensor, 
               y_pred: torch.Tensor, lr: float):
        """
        Update the model parameters.

        Args:
            x: Input tensor of shape (n_samples, n_features).
            y_true: Target tensor of shape (n_samples, out_features).
            y_pred: Predicted output tensor of shape (n_samples, out_features).
            lr: Learning rate. 
        """
        m, no = y_true.shape

        # d L / d Y_pred
        delta = 2 * (y_pred - y_true) / y_true.numel()

        # d L / d b
        b_der = torch.einsum(
            'p,qr->pqr', 
            torch.ones(m, device=device), # the tensor of 1
            torch.eye(no, device=device))
        self.b -= lr * torch.einsum('pq,pqr->r', delta, b_der)

        # d L / d W
        w_der = torch.einsum('pr,qs->pqrs', x, torch.eye(no, device=device))
        self.w -= lr * torch.einsum('pq,pqrs->rs', delta, w_der)
einsum_model = EinsumLinearRegression(N, NO)
einsum_model.b.copy_(parameters[0])
einsum_model.w.copy_(parameters[1])
tensor([[ 0.4003, -0.1125, -0.2734], [-0.0249, -0.3901, 0.3011], [-0.0633, -0.0902, 0.1784], [ 0.2862, -0.0737, -0.2647], [ 0.1376, -0.2813, 0.0322], [-0.0995, -0.3533, -0.0366]])
einsum_model.fit(
    X_train, Y_train, 
    EPOCHS, LR, BATCH, 
    X_valid, Y_valid
)
epoch: 0 - MSE: 15202.6208 - MSE_v: 18460.5819
epoch: 1 - MSE: 14456.9148 - MSE_v: 17699.6765
epoch: 2 - MSE: 13761.3257 - MSE_v: 16975.3579
epoch: 3 - MSE: 13110.6882 - MSE_v: 16285.2623
epoch: 4 - MSE: 12500.5685 - MSE_v: 15627.259
epoch: 5 - MSE: 11927.145 - MSE_v: 14999.4183
epoch: 6 - MSE: 11387.11 - MSE_v: 14399.9842
epoch: 7 - MSE: 10877.5873 - MSE_v: 13827.3522
epoch: 8 - MSE: 10396.0639 - MSE_v: 13280.0497
epoch: 9 - MSE: 9940.334 - MSE_v: 12756.7203
epoch: 10 - MSE: 9508.4513 - MSE_v: 12256.11
epoch: 11 - MSE: 9098.6904 - MSE_v: 11777.0554
epoch: 12 - MSE: 8709.5135 - MSE_v: 11318.4742
epoch: 13 - MSE: 8339.5437 - MSE_v: 10879.3564
epoch: 14 - MSE: 7987.542 - MSE_v: 10458.7572
epoch: 15 - MSE: 7652.3883 - MSE_v: 10055.7907
mape(
    einsum_model.w.clone(),
    torch_model.layer.weight.detach().T
)
1.199589170858017e-16
mape(
    einsum_model.b.clone(),
    torch_model.layer.bias.detach()
)
3.4964897774944853e-16