Linear Regression in Python, from Scratch

By: Trevor Rowland (dBCooper2)

Creating Linear Regression Models from Scratch.



Scripts containing each of the functions written in this notebook can be found at the following links:

OLS_Linear_Regression Class, MLR_Class

Table of Contents

1-2 Simple Linear Regression

.. 1. The Theory

.... a. Introduction to Simple Linear Regression

.... b. The Mean Squared Error

.... c. The Partial Derivatives of the Error Function

.... d. The Gradient Descent

.. 2. Applying the Theory to Python

.... a. The Gradient Descent Function in Python

.... b. Performing Simple Linear Regression

.... c. Testing the Linear Regression Model

.... d. Plotting the Regression Line

3-4 Multiple Linear Regression

.. 3. The Theory

.... a. The Model

.... b. The Error Function

.... c. Computing the Error Function for the MLR Model

.... d. The Partial Derivative of the Error Function

.... e. Interpreting the Theory to Translate into an Algorithm

.. 4. Applying the Theory to Python

.... a. Function Definitions

.... b. Accessing the Data and Performing Multiple Linear Regression

...... i. The 3-Factor Model

...... ii. The 3-Factor Model in Python

...... iii. The 5-Factor Model

...... iv. The 5-Factor Model in Python

.... c. Visualizing the Regression Results

1-2. Simple Linear Regression

Regression analysis is a tool used in statistics and finance to measure how strongly a dependent variable is related to one or more independent variables.

The Simple Linear Regression model was made by following the Simple Linear Regression tutorial by NeuralNine.

1. The Theory

1.a. Introduction to Simple Linear Regression

The Simple Linear Regression model uses an Ordinary Least Squares (OLS) approach to regression. The OLS model plots a line on a scatter plot, measures how far away it is from each point, then iteratively adjusts the slope and y-intercept in the linear equation to provide the line of best fit for the data.

How does this happen?

The Regression Model plots a line through all of the points in our dataset.

When the line is plotted, the points on the line will be different from the points in the dataset. The difference between the actual point and the point estimated by the line ($Y_i-\hat{Y_i}$) can be called an error.

The sum of those errors can be calculated to find the total error in the regression line.

Squaring those errors and dividing the sum of all squared errors by the number of y-values gives us a measure called the Mean Squared Error, or $MSE$:

$$
\begin{align*}
MSE &= \frac{1}{n} \sum_{i=0}^{n}(Y_i - \hat{Y}_i)^2 \\
\hat{Y}_i &= mx_i + b
\end{align*}
$$
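To make the formula concrete, here is a minimal sketch of the MSE as a Python function (the data and variable names are illustrative, not part of the notebook's dataset):

import numpy as np

def mean_squared_error(x, y, m, b):
    # Error of the candidate line y_hat = m*x + b against the observed y values
    y_hat = m * x + b
    return np.mean((y - y_hat) ** 2)

# Illustrative data: a roughly linear relationship with some noise
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.2, 5.9, 8.1])
print(mean_squared_error(x, y, m=2.0, b=0.0))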

1.b. The Mean Squared Error

The Mean Squared Error describes the average squared error, and to produce the best-fit regression line, that error must be minimized.

Because the data cannot be modified, to develop the best-fit regression line the slope ($m$) and the y-intercept ($b$) must be modified. This involves iterating over many different calculated values of $m$ and $b$, so how will the program know how to adjust the values across iterations?

The program will adjust the values using gradient descent on the Error ($E$) with respect to $m$ and with respect to $b$. The partial derivatives of the Error function give the direction in which the Error increases fastest, so moving $m$ and $b$ in the opposite direction decreases the Error. Here are the calculations to find those partial derivatives:

$$
\begin{align*}
MSE = E(\hat{Y}_i) &= \frac{1}{n} \sum_{i=0}^{n}(Y_i - \hat{Y}_i)^2
\end{align*}
$$

Which decomposes into:

$$
\begin{align*}
MSE = E(m,b) &= \frac{1}{n} \sum_{i=0}^{n}(Y_i - (mx_i + b))^2
\end{align*}
$$

To prepare the formulas for the optimization of the regression line performed later, the error function can be fully expanded into:

$$
\begin{align*}
E(m,b) &= \frac{1}{n} \sum_{i=0}^{n}(Y_i^2 - 2Y_imx_i - 2Y_ib + m^2x_i^2 + 2mx_ib + b^2)
\end{align*}
$$

Now that the fully expanded Error Function has been found, the gradient descent formulas to optimize the regression line can be computed.

1.c. The Partial Derivatives of the Error Function

Taking the partial derivative of $E(m,b)$ with respect to $m$:

$$
\begin{align*}
\frac{\partial}{\partial m}E(m,b) &= \frac{\partial}{\partial m}\frac{1}{n} \sum_{i=0}^{n}(Y_i^2 - 2Y_imx_i - 2Y_ib + m^2x_i^2 + 2mx_ib + b^2) \\
\frac{\partial E}{\partial m} &= \frac{1}{n} \sum_{i=0}^{n}(-2Y_ix_i + 2mx_i^2 + 2x_ib) \\
\frac{\partial E}{\partial m} &= \frac{-2}{n} \sum_{i=0}^{n}(Y_ix_i - mx_i^2 - x_ib) \\
\frac{\partial E}{\partial m} &= \frac{-2}{n} \sum_{i=0}^{n}x_i(Y_i - mx_i - b) \\
\frac{\partial E}{\partial m} &= \frac{-2}{n} \sum_{i=0}^{n}[x_i(Y_i - (mx_i + b))]
\end{align*}
$$

Taking the partial derivative of $E(m,b)$ with respect to $b$:

$$
\begin{align*}
\frac{\partial}{\partial b}E(m,b) &= \frac{\partial}{\partial b}\frac{1}{n} \sum_{i=0}^{n}(Y_i^2 - 2Y_imx_i - 2Y_ib + m^2x_i^2 + 2mx_ib + b^2) \\
\frac{\partial E}{\partial b} &= \frac{1}{n} \sum_{i=0}^{n}(-2Y_i + 2mx_i + 2b) \\
\frac{\partial E}{\partial b} &= \frac{-2}{n} \sum_{i=0}^{n}(Y_i - mx_i - b) \\
\frac{\partial E}{\partial b} &= \frac{-2}{n} \sum_{i=0}^{n}(Y_i - (mx_i + b))
\end{align*}
$$

After these calculations, the partial derivatives of the error function are as follows:

$$
\begin{align*}
\frac{\partial E}{\partial m} &= \frac{-2}{n} \sum_{i=0}^{n}[x_i(Y_i - (mx_i + b))] \\
\frac{\partial E}{\partial b} &= \frac{-2}{n} \sum_{i=0}^{n}(Y_i - (mx_i + b))
\end{align*}
$$

These Partial Derivatives are now ready to be plugged into the Gradient Descent.

1.d. The Gradient Descent

The gradient descent is an optimization technique that seeks to locate a minimum value for each coefficient of the regression line. A good way to think about Gradient Descent is to visualize someone hiking down a mountain. Every 5 or so steps, the hiker must look at their surroundings and determine what the steepest path down is, and then take that path. After they have gone another 5 steps, the hiker checks their surroundings again, determines the steepest path down, and uses that process until they reach the ground.

In this example, those 5 steps the hiker takes can be called the Learning Rate in the gradient descent process. This is a coefficient for the partial derivatives that controls how much the model will try to fit the line to the data. The closer the learning rate is to 0, the more precise the model should be.

Note: This can lead to the model being "overfitted", so an optimized version of the learning rate should also be found through testing.

The gradient descent update is calculated by subtracting the partial derivative, scaled by the learning rate $L$, from the current value of $m$ or $b$, respectively, so the update equations look like this:

$$
\begin{align*}
m_{new} &= m_{current} - L\frac{\partial E}{\partial m} \\
b_{new} &= b_{current} - L\frac{\partial E}{\partial b}
\end{align*}
$$

Where $0 < L < 1$.

Now that the math for the functions has been worked out, the Python code to create the simple linear regression model is ready to be written.

2. Applying the Theory to Python

2.a. The Gradient Descent Function in Python

The gradient_descent function performs one update step: it accumulates the two partial-derivative summations over every row of the dataset, then moves $m$ and $b$ a small step (scaled by the learning rate) in the direction that decreases the error.

def gradient_descent(m_current, b_current, df, learning_rate):
    m_gradient = 0
    b_gradient = 0

    n = len(df) # The number of rows in the dataset

    # Calculate the partial derivative summations
    for i in range(n):
        x = df.iloc[i].x
        y = df.iloc[i].y

        # These are a pythonic representation of partial derivative equations found in the theory section
        m_gradient += (-2/n) * x * (y - (m_current * x + b_current))
        b_gradient += (-2/n) * (y - (m_current * x + b_current))

    # Calculate the Gradient Descent equations from the theory section
    m = m_current - learning_rate * m_gradient
    b = b_current - learning_rate * b_gradient

    return m,b
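The loop above mirrors the summations in the derivation term by term. For larger datasets, the same update can be written with NumPy vector operations; here is an equivalent sketch (assuming NumPy is available and the DataFrame has columns x and y):

import numpy as np

def gradient_descent_vectorized(m_current, b_current, df, learning_rate):
    x = df['x'].values
    y = df['y'].values
    n = len(df)

    # Residuals of the current line, computed for every row at once
    error = y - (m_current * x + b_current)

    # Same partial derivatives as above, expressed as vector operations
    m_gradient = (-2 / n) * np.sum(x * error)
    b_gradient = (-2 / n) * np.sum(error)

    m = m_current - learning_rate * m_gradient
    b = b_current - learning_rate * b_gradient

    return m, b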

2.b. Performing Simple Linear Regression

Now that the gradient descent function is complete, a function is needed to call it iteratively and minimize the error of the regression line.

def ols_regression(learning_rate, iterations, df):
    m = 0
    b = 0

    for i in range(iterations):
        m,b = gradient_descent(m, b, df, learning_rate)

    return m,b

2.c. Testing the Linear Regression Model

To apply the OLS linear regression functions, the program will take a dataset and perform the regression on it. The dataset being used is a simple CSV file by luddarell from Kaggle.

import pandas as pd

# Import the Data
file = '/path/to/repos/github/portfolio-backtesting/docs/data/1.01_Simple_linear_regression.csv'

df = pd.read_csv(file)
df2 = pd.DataFrame()

df2['y'] = df['SAT']
df2['x'] = df['GPA']

df = df2
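Renaming the columns to x and y matches the df.iloc[i].x and df.iloc[i].y accessors used inside gradient_descent, with GPA as the predictor and SAT score as the response.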

Finally, set the learning rate and number of iterations, then call the linear regression function:

# Run the Regression Model:
learning_rate = .001
iterations = 10000
m,b = ols_regression(learning_rate, iterations, df)
print(m,b)
477.47759913775326 250.49383109375495
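As an optional sanity check (not part of the original notebook), NumPy's closed-form least-squares fit should produce a similar slope and intercept, provided the learning rate and iteration count allow the gradient descent to converge:

import numpy as np

# np.polyfit with deg=1 fits y = slope*x + intercept by ordinary least squares
slope, intercept = np.polyfit(df['x'], df['y'], deg=1)
print(slope, intercept)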

2.d. Plotting the Regression Line

Now that the regression line has been computed, the line should be visualized to gain a better understanding of what the data analysis performed looks like.

import seaborn as sns

sns.regplot(x='x', y='y', data=df)

From this plot, the line's fit of the data can be clearly seen, and the shaded band around the line shows the confidence interval of the fit.
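Note that sns.regplot fits its own line internally. To plot the line produced by the gradient descent above, the slope m and intercept b can be drawn directly over the scatter plot; a minimal sketch with Matplotlib:

import matplotlib.pyplot as plt

# Scatter the observations, then draw the fitted line y = m*x + b
plt.scatter(df['x'], df['y'], alpha=0.5)
plt.plot(df['x'], m * df['x'] + b, color='red')
plt.xlabel('x (GPA)')
plt.ylabel('y (SAT)')
plt.show()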

3-4. Multiple Linear Regression

3. The Theory

Regression models take a series of predictor ($X$) variables and a single response ($Y$) variable and estimate a line of best fit that can be used to predict unknown response values.

This regression model can be applied to any set of predictor and response variables. For the purposes of the pythonic-finance project, however, it will be used in the Fama-French 3- and 5-factor analyses of portfolios, which are briefly described in the Python section, where the beta coefficients for the model are calculated.

3.a. The Model

The Multiple Linear Regression Model is:

$$
\begin{align*}
y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_px_p + \epsilon_i
\end{align*}
$$

Which can be translated into the Matrix Form:

$$
\begin{align*}
Y_i &= \begin{bmatrix} \beta_0 & \beta_1 & ... & \beta_p \end{bmatrix}
\begin{bmatrix} X_0 \\ X_1 \\ ... \\ X_p \end{bmatrix}, \quad X_0 = 1
\end{align*}
$$

Setting $X_0 = 1$ allows the matrices to be the same size, which simplifies the calculations by including the y-intercept beta ($\beta_0$) in the coefficient matrix.
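In code, this corresponds to prepending a column of ones to the predictor matrix so that $\beta_0$ is estimated alongside the other coefficients; a small sketch with made-up numbers:

import numpy as np

# Two hypothetical predictors observed five times
x1 = np.array([1.2, 0.8, 1.5, 0.3, 0.9])
x2 = np.array([3.1, 2.7, 3.8, 1.9, 2.5])

# X_0 = 1 becomes a column of ones in the design matrix
X = np.column_stack([np.ones(len(x1)), x1, x2])
print(X.shape)  # (5, 3)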

3.b. The Error Function

The error function that will be minimized in the model is the Sum of Squared Errors, which measures variation within a cluster of data.

The formula for the Sum of Squared Errors ($SSE$) is:

$$
\begin{align*}
E = SSE &= \sum_{i=1}^{n} \epsilon_i^2 \\
&= \sum_{i=1}^{n}(y_i - \hat{y_i})^2
\end{align*}
$$

This is the sum of the squared differences between the observed response variable $y_i$ and the estimated response variable $\hat{y_i}$.

The Matrix form of the SSE formula is:

$$
\begin{align*}
E = SSE &= \sum_{i=1}^{n} \epsilon_i^2 \\
&= \mathcal{E}^T\mathcal{E}
\end{align*}
$$

Instead of squaring the matrices, the error matrix is multiplied by its transpose. This is done because the errors form an $n \times 1$ matrix, and squaring that matrix directly is not defined, so the transpose is used instead.

An expansion of this equation using vectors is provided below:

$$
\begin{align*}
E &= \begin{bmatrix} y_1 - \hat{y_1} & y_2 - \hat{y_2} & ... & y_n - \hat{y_n} \end{bmatrix}
\begin{bmatrix} y_1 - \hat{y_1} \\ y_2 - \hat{y_2} \\ \vdots \\ y_n - \hat{y_n} \end{bmatrix}
\end{align*}
$$

3.c. Computing the Error Function for the MLR Model

In Linear Algebra, the transpose of a sum can be decomposed in the following ways:

$$
\begin{align*}
(A+B)^T &= A^T + B^T \\
(A-B)^T &= A^T - B^T
\end{align*}
$$

Which means the transpose operator in $E = \mathcal{E}^T\mathcal{E}$ can be distributed, making the function:

$$
\begin{align*}
E &= (Y^T - \hat{Y}^T)(Y - \hat{Y})
\end{align*}
$$

Substituting the matrix form $\hat{Y} = X\beta$ into the error function returns:

$$
\begin{align*}
E &= (Y^T - (X\beta)^T)(Y - X\beta) \\
E &= Y^TY - Y^TX\beta - (X\beta)^TY + (X\beta)^T(X\beta)
\end{align*}
$$

To finish simplifying, the following two terms must be shown to be equal so that they can be combined, which leads to the solution $\hat{\beta} = (X^TX)^{-1}(X^TY)$:

$$
\begin{align*}
(X\beta)^TY = Y^T(X\beta)
\end{align*}
$$

Let $Y = A$ and $X\beta = B$:

Therefore the equation $(X\beta)^TY = Y^T(X\beta)$ becomes $A^TB = B^TA$.

$$
\begin{align*}
(AB)^T = B^TA^T, \quad (A+B)^T = A^T + B^T \\
(A^TB)^T = B^TA, \quad (A-B)^T = A^T - B^T
\end{align*}
$$

Because $Y^T(X\beta)$ is a $1 \times 1$ matrix (a scalar), it is equal to its own transpose, therefore:

$$
\begin{align*}
A^TB &= (A^TB)^T = B^TA \\
Y^T(X\beta) &= (Y^T(X\beta))^T = (X\beta)^TY
\end{align*}
$$

Substituting this back into the $SSE$ equation allows it to be simplified.

$$
\begin{align*}
E &= Y^TY - Y^TX\beta - (X\beta)^TY + (X\beta)^T(X\beta) \\
E &= Y^TY - 2Y^TX\beta + (X\beta)^T(X\beta)
\end{align*}
$$

3.d. The Partial Derivative of the Error Function

Now that the error function is expanded to include the equation of the MLR Model, the partial derivative of the error function can be computed.

The partial derivative is used to compute how much the error within the model changes as the coefficients change, and setting it to zero finds the value of each coefficient $\beta$ that minimizes the error.

The vector of coefficient values that minimizes the error is labeled $\hat{\beta}$ in the Normal Equations, which are the result of the minimization process.

$$
\begin{align*}
\frac{\partial E}{\partial \beta} &= \frac{\partial}{\partial \beta}(Y^TY - 2Y^TX\beta + (X\beta)^T(X\beta)) \\
\frac{\partial E}{\partial \beta} &= 0 - 2Y^TX + 2\beta^TX^TX
\end{align*}
$$

Then setting the partial derivative equal to 0 and solving for $\hat{\beta}$, the equation becomes:

$$
\begin{align*}
0 &= -2Y^TX + 2\hat{\beta}^TX^TX \\
2Y^TX &= 2\hat{\beta}^TX^TX \\
\hat{\beta}^T &= (Y^TX)(X^TX)^{-1} \\
\hat{\beta} &= [(Y^TX)(X^TX)^{-1}]^T \\
\hat{\beta} &= [(X^TX)^{-1}]^T(Y^TX)^T \\
\hat{\beta} &= (X^TX)^{-1}(X^TY)
\end{align*}
$$

(The last step uses the fact that $X^TX$ is symmetric, so its inverse equals its own transpose.)

Lastly, the Normal Equations can be found by rearranging the equation:

$$
\begin{align*}
X^TX\hat{\beta} &= X^TY
\end{align*}
$$

3.e. Interpreting the Theory to Translate into an Algorithm

$\hat{\beta}$ is the Least Squares Estimator for the model. This means it is a coefficient matrix that takes the observed $x$ and $y$ values and estimates a value for each $\beta$ in the model.

In the Simple Linear Regression Model, the Formula for the Gradient Descent of the slope was:

$$
\begin{align*}
m_{new} &= m_{current} - L\frac{\partial E}{\partial m}
\end{align*}
$$

The Normal Equations bypass this iterative gradient descent process and solve for the minimum in a single step. By plugging the dataset into the Normal Equations formula for $\hat{\beta}$, the optimal $\beta$ coefficients for each predictor variable are computed without the iteration needed in the Simple Linear Regression Model.
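For reference, the Normal Equations $X^TX\hat{\beta} = X^TY$ can also be solved without explicitly forming the inverse, which is generally better behaved numerically. A sketch of that alternative (not the approach used in the function below):

import numpy as np

def solve_normal_equations(X, y):
    # Solve (X^T X) beta = X^T y as a linear system instead of inverting X^T X
    return np.linalg.solve(X.T @ X, X.T @ y)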

4. Applying the Theory to Python

The necessary packages for this section are NumPy, Pandas, YFinance, MatPlotLib and Seaborn.

The CSV Data for the 3 and 5 Factor Models can be found in Dr. Kenneth French's Data Library and will be downloaded and added to a CSV file in another Python Script to reduce the complexity of this notebook. The CSV files with the combined datasets will be available here.

import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt

4.a. Function Definitions

Because the Normal Equations solve for the coefficients in one step, the Multiple Linear Regression function needs to take a DataFrame and convert it into the necessary NumPy arrays, then compute each part of the Normal Equations to solve for $\hat{\beta}$. Recall that the formula for the vector $\hat{\beta}$ is:

$$
\begin{align*}
\hat{\beta} &= (X^TX)^{-1}(X^TY)
\end{align*}
$$
def multiple_linear_regression(df: pd.DataFrame) -> np.ndarray:
    # Split the DataFrame into predictor columns (prefixed 'x_') and the response column (prefixed 'y_')
    x = df.filter(like='x_').values
    y = df.filter(like='y_').values

    # Build each piece of the Normal Equations solution: (X^T X)^-1 (X^T Y)
    xT = x.T
    xTx = np.dot(xT, x)
    xTx_inv = np.linalg.inv(xTx)

    xTy = np.dot(xT, y)

    betas = np.dot(xTx_inv, xTy)

    return betas
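Note that the function assumes the DataFrame's predictor columns are prefixed with x_ and the response column with y_; the first predictor column is presumably a column of ones, which is why the first coefficient is renamed alpha in the results below.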

4.b. Accessing the Data and Performing Multiple Linear Regression

The Data was written to a CSV file using the Script 'dataset_creator.py' in the /notebooks/regression_models/multiple_linear_regression_files/ directory in the project. This CSV contains the Fama-French Library Data, as well as the Stock returns for AMD.

ff_3_df = pd.read_csv('/path/to/repos/github/pythonic-finance/notebooks/regression_models/multiple_linear_regression_files/ff_3_factor.csv')
ff_5_df = pd.read_csv('/path/to/repos/github/pythonic-finance/notebooks/regression_models/multiple_linear_regression_files/ff_5_factor.csv')

Now that the data has been accessed, the Multiple Linear Regression can be run on each dataset:

4.b.i. The 3-Factor Model

The Fama-French 3-Factor Model is an extension of the Capital Asset Pricing Model. It aims to describe a stock or portfolio's returns through market risk, the outperformance of small-cap companies relative to large-cap companies, and the outperformance of high book-to-market value companies relative to low book-to-market value companies.

The model suggests that both small-cap stocks and stocks with a high book-to-market ratio tend to regularly outperform the overall market, and thus should be factored into the model.

This data can be found in Dr. Kenneth French's data library and will be used for this model.

The formula for the 3-factor model is:

$$
\begin{align*}
R_i - R_{rf} &= \alpha + \beta_1(R_m - R_{rf}) + \beta_2(SMB) + \beta_3(HML) + \epsilon_i
\end{align*}
$$

Where:

$R_i$ is the expected rate of return

$R_{rf}$ is the risk-free rate

$R_m$ is the return of the market portfolio

$SMB$ = Small Minus Big, the historic excess returns of small-caps over large-caps

$HML$ = High Minus Low, the historic excess returns of high book-to-market ratio companies over low book-to-market ratio companies

$\beta_{1,2,3}$ are the coefficients of each factor, estimated by the regression model

$\alpha$ is the excess return on investment

$\epsilon_i$ is the noise within the model

4.b.ii. The 3-Factor Model in Python

betas_3_factor = pd.DataFrame(multiple_linear_regression(ff_3_df)).T

new_cols = {0:'alpha', 1:'mkt-rf', 2:'smb', 3:'hml'}
betas_3_factor.rename(columns=new_cols, inplace=True)

betas_3_factor
| alpha | mkt-rf | smb | hml |
| --- | --- | --- | --- |
| 0.117517 | 1.520856 | 0.104164 | -0.78295 |
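As an optional cross-check (not in the original notebook), NumPy's least-squares solver should return nearly identical coefficients for the same columns:

import numpy as np

# np.linalg.lstsq solves the same least-squares problem without forming (X^T X)^-1
X = ff_3_df.filter(like='x_').values
y = ff_3_df.filter(like='y_').values
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs.ravel())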

4.b.iii. The 5-Factor Model

The Fama-French 5-Factor model is another iteration of the 3-Factor Model, including 2 new factors. These are:

  • $RMW$ = Robust Minus Weak, the average return on two robust operating-profitability portfolios minus the average return on two weak operating-profitability portfolios.

  • $CMA$ = Conservative Minus Aggressive, the average return on two conservative investment portfolios minus the average return on two aggressive investment portfolios.

These Factors are also found in Dr. Kenneth French's data library, and will be used for this model.

The formula for the 5-Factor Model is:

$$
\begin{align*}
R_i - R_{rf} &= \alpha + \beta_1(R_m - R_{rf}) + \beta_2(SMB) + \beta_3(HML) + \beta_4(RMW) + \beta_5(CMA) + \epsilon_i
\end{align*}
$$

4.b.iv. The 5-Factor Model in Python

betas_5_factor = pd.DataFrame(multiple_linear_regression(ff_5_df)).T

new_cols = {0:'alpha', 1:'mkt-rf', 2:'smb', 3:'hml', 4:'rmw', 5:'cma'}
betas_5_factor.rename(columns=new_cols, inplace=True)

betas_5_factor

Doing this returns the following values:

| alpha | mkt-rf | smb | hml | rmw | cma |
| --- | --- | --- | --- | --- | --- |
| 0.127646 | 1.467586 | -0.067626 | -0.557057 | -0.151023 | -0.574484 |

4.c. Visualizing the Regression Results

Now that the Regression Results have been computed for both the 3 and 5 factor models, some visualizations are needed to examine what the results look like. Currently, I am trying to put these notebooks together for a school project, so the results are not yet finished. This will be updated later, but for now I need to get these posts done. Check back soon!


The Functions used in this Notebook will be translated into Python Classes within the scripts folder of this project. The last part of this project involves generating ANOVA Tables for each model, and then the OOP implementations will begin to be added. If the programs are not here when you are reading this, they will be soon so check back later : )

References

Simple Linear Regression

NeuralNine. "Linear Regression from Scratch in Python." NeuralNine, NeuralNine, https://www.neuralnine.com/linear-regression-from-scratch-in-python/

Multiple Linear Regression

Dash, Debidutta. "Multiple Linear Regression from scratch using only numpy." Medium, 2022, https://medium.com/analytics-vidhya/multiple-linear-regression-from-scratch-using-only-numpy-98fc010a1926.

Ramesh, Bhanumathi. "Deriving Normal Equation for Multiple Linear Regression." Medium, 2022, https://medium.com/@bhanu0925/deriving-normal-equation-for-multiple-linear-regression-85241965ee3b.

LearnChemE. "Matrix Approach to Multiple Linear Regression." YouTube, uploaded by LearnChemE, 2022, https://youtu.be/NzuK4iAfxhU?si=cxU-v8ZBgbA1s-FG.

Boer Commander. "Matrix Form Multiple Linear Regression MLR." YouTube, uploaded by Boer Commander, 2022, https://youtu.be/Imjfp1cxy6g?si=gWXnA9F_XisVzFA4.

French, Kenneth R. "Data Library." Kenneth R. French - Data Library, Tuck School of Business at Dartmouth, https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html.