The Shortest ML Code in Python

Sourabh Gupta
1 min read · Mar 13, 2023


Linear Regression is, I assume, the first ML model most of us learn: a sort of "Hello, world" of machine learning.

In this article, you will learn how concisely you can train an LR model for regression, in just a few lines of code. So without further ado, let's start.

1. Our model equation: Let's assume a very simple LR model without the intercept term 'b', to keep things simple:

y = w*x

where w is the parameter which we want to train.

2. Loss function: The loss function that we use for regression is the mean squared error loss:

loss = sum of (y_pred - y_true)²

3. Let’s calculate the partial derivative of the loss function w.r.t our parameter w:

loss_dw = 2 * (y_pred - y_train) * x_train
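We can sanity-check this derivative numerically with a central difference. This is a small sketch of my own; the sample values and the test value of w are arbitrary choices:

```python
# Verify d(loss)/dw = 2*(y_pred - y_true)*x numerically.
x, y_true, w = 2.0, 4.0, 0.5  # arbitrary sample and test value of w

def loss(w):
    # Squared-error loss for the model y = w*x
    return (w * x - y_true) ** 2

analytic = 2 * (w * x - y_true) * x  # the closed-form derivative above

# Central-difference approximation of the derivative
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)

print(analytic, numeric)  # the two values should agree closely
```

If the analytic formula were wrong (say, multiplying by y_train instead of x_train), the two numbers would disagree immediately.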

4. Training dataset: Let's say we have only one sample!

x_train = 2, y_train = 4

Assuming a learning rate of 0.1, the below code should get you the optimal value of w.
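A minimal sketch of that training loop, following the steps above (the initial value of w, the variable names, and the iteration count are my choices):

```python
# One-sample gradient descent for the model y = w * x (no intercept).
x_train, y_train = 2, 4  # the training sample from step 4
w = 0.0                  # initial guess for the parameter
lr = 0.1                 # learning rate

for step in range(20):
    y_pred = w * x_train                         # forward pass: y = w*x
    loss = (y_pred - y_train) ** 2               # squared-error loss
    loss_dw = 2 * (y_pred - y_train) * x_train   # gradient from step 3
    w -= lr * loss_dw                            # gradient-descent update
    print(f"step {step}: loss={loss:.6f}, w={w:.6f}")
```

Each iteration shrinks the error by a constant factor here, so w converges to 2 within a handful of steps.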

That’s it. As the loop iterates, you will see the loss value drop and the value of w get closer and closer to 2.

Cheers!
