This post is based on code from the following book.
The following blog post walks through what PyTorch's Autograd is.
Link to the Jupyter Notebook that this blog post was made from.
Taking our input from the previous notebook and applying our scaling.
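The code cell for this step didn't survive the export, so here is a minimal sketch. The data values and names (t_c, t_u, t_un) are assumed from the book this post is based on, not taken from this post:

```python
import torch

# Thermometer readings in an unknown unit (t_u) and their ground truth in
# Celsius (t_c), assumed to carry over from the previous notebook.
t_c = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
                    3.0, -4.0, 6.0, 13.0, 21.0])
t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
                    33.9, 21.8, 48.4, 60.4, 68.4])

# Scale the input so both parameters see gradients of a similar magnitude.
t_un = 0.1 * t_u
```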
Same model and loss function as before.
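The cell itself is missing; a sketch of the linear model and mean-squared-error loss used in the previous notebook:

```python
def model(t_u, w, b):
    # A linear model: prediction = w * input + b
    return w * t_u + b

def loss_fn(t_p, t_c):
    # Mean squared error between predictions and ground truth
    squared_diffs = (t_p - t_c) ** 2
    return squared_diffs.mean()
```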
This time, instead of keeping track of our parameters and applying the gradient with respect to the parameters by hand, we'll leverage torch's autograd feature.
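The missing cell presumably creates the parameter tensor with requires_grad=True, something like:

```python
# w starts at 1.0 and b at 0.0; requires_grad=True asks autograd to record
# every operation that involves this tensor.
params = torch.tensor([1.0, 0.0], requires_grad=True)
```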
How does requires_grad work?
Internally, autograd represents this graph as a graph of Function objects (really expressions), which can be apply() ed to compute the result of evaluating the graph. When computing the forwards pass, autograd simultaneously performs the requested computations and builds up a graph representing the function that computes the gradient (the .grad_fn attribute of each torch.Tensor is an entry point into this graph). When the forwards pass is completed, we evaluate this graph in the backwards pass to compute the gradients. [1]
This can be done as long as our model is differentiable.
Torch will track a graph of operations used to compute our current tensor.
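As a toy illustration of that graph (not from the original notebook): any tensor computed from a requires_grad tensor carries a grad_fn that points back into the graph, and calling backward() walks that graph to fill in .grad.

```python
x = torch.tensor([2.0], requires_grad=True)
y = x ** 2 + 3 * x
print(y.grad_fn)   # something like <AddBackward0 object at 0x...>
y.backward()
print(x.grad)      # tensor([7.]) since dy/dx = 2x + 3 = 7 at x = 2
```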
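The cell that produced the True below is missing; a plausible check (the exact expression is an assumption) is that the gradient hasn't been populated yet:

```python
# .grad stays None until the first backward pass.
params.grad is None
```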
True
We apply a single forward and backward pass and can print out the resulting params.grad.
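A sketch of the missing cell, using the model, loss_fn, and params defined above. Note that the unscaled t_u is assumed here, since that is what yields gradients of the magnitude shown below:

```python
# The forward pass builds the computation graph; backward() fills params.grad.
loss = loss_fn(model(t_u, *params), t_c)
loss.backward()
print(params.grad)
```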
tensor([4517.2969, 82.6000])
Notice that we are now ready to perform our training_loop, and we only had to define our model and loss_fn.
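The training-loop cell is missing as well; a sketch of what it likely contains, assuming the scaled input t_un, a learning rate of 1e-2, and 5000 epochs. The two autograd-specific pieces are zeroing the gradient before each backward pass and updating the parameters inside torch.no_grad():

```python
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        # Gradients accumulate by default, so clear them each iteration.
        if params.grad is not None:
            params.grad.zero_()

        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
        loss.backward()

        # Update the parameters without autograd tracking the update itself.
        with torch.no_grad():
            params -= learning_rate * params.grad

        if epoch % 500 == 0:
            print('params.grad', params.grad)
            print('Epoch %d, Loss %f' % (epoch, float(loss)))
    return params

training_loop(
    n_epochs=5000,
    learning_rate=1e-2,
    params=torch.tensor([1.0, 0.0], requires_grad=True),
    t_u=t_un,  # the scaled input
    t_c=t_c)
```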
params.grad tensor([-0.2252, 1.2748])
Epoch 500, Loss 7.860116
params.grad tensor([-0.0962, 0.5448])
Epoch 1000, Loss 3.828538
params.grad tensor([-0.0411, 0.2328])
Epoch 1500, Loss 3.092191
params.grad tensor([-0.0176, 0.0995])
Epoch 2000, Loss 2.957697
params.grad tensor([-0.0075, 0.0425])
Epoch 2500, Loss 2.933134
params.grad tensor([-0.0032, 0.0182])
Epoch 3000, Loss 2.928648
params.grad tensor([-0.0014, 0.0078])
Epoch 3500, Loss 2.927830
params.grad tensor([-0.0006, 0.0033])
Epoch 4000, Loss 2.927679
params.grad tensor([-0.0003, 0.0014])
Epoch 4500, Loss 2.927652
params.grad tensor([-9.7513e-05, 6.1291e-04])
Epoch 5000, Loss 2.927647
tensor([ 5.3671, -17.3012], requires_grad=True)
Same loss as the previous notebook.
References
[1] PyTorch documentation, "Autograd mechanics": https://pytorch.org/docs/stable/notes/autograd.html
If anything is unclear, please post a comment below!