Concise Implementation of Linear Regression

http://d2l.ai/chapter_linear-networks/linear-regression-concise.html

One thing we skipped here, compared with the implementation from scratch, is attaching gradients to the weight parameters:
w.attach_grad()
b.attach_grad()
Is this automatically taken care of by the net.initialize(init.Normal(sigma=0.01)) call?

Yes. When you use Gluon you never call .attach_grad() yourself; the network's parameters manage their gradient buffers automatically.
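
A minimal sketch of that behavior (assuming the np/npx interface used throughout the book; the toy input X is made up for illustration):

from mxnet import autograd, init, np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Sequential()
net.add(nn.Dense(1))
net.initialize(init.Normal(sigma=0.01))  # parameters and their gradient buffers are set up for us

X = np.random.normal(size=(10, 2))
with autograd.record():
    l = net(X).sum()
l.backward()
print(net[0].weight.grad())  # gradient is available without ever calling attach_grad()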


In 3.3.2, the following statement should be removed:
Since data is often used as a variable name, we will replace it with the pseudonym gdata (adding the first letter of Gluon), to differentiate the imported data module from a variable we might define.

It applies only to an older version of the code.

Is anybody receiving a ValueError ("In HybridBlock, there must be one NDArray or one Symbol in the input. Please check the type of the args.") when using HuberLoss()?

Thank you very much

I received a TypeError instead of a ValueError. According to the explanation in the traceback, the error is caused by the abs operator used in this loss function; it can only accept legacy ndarrays.
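
As a workaround, you could compute the Huber loss yourself with np operators. A minimal sketch (the huber_loss name and delta parameter are my own, not the library's HuberLoss API):

from mxnet import np, npx
npx.set_np()

def huber_loss(y_hat, y, delta=1.0):
    # Quadratic for small residuals, linear for large ones
    err = np.abs(y_hat - y)
    return np.where(err <= delta, 0.5 * err ** 2, delta * err - 0.5 * delta ** 2)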

Hi @vermicelli, may I know the current versions of d2l and mxnet that you are using? Please check that you are on the latest versions, as described at https://d2l.ai/chapter_installation/index.html

@gold_piggy the d2l version is 0.11.1 and the mxnet version is mxnet-cu101 (1.6.0b20191122)

Hi @vermicelli, thanks for your feedback. I will correct it!

Hi,
I'm a beginner in AI and MXNet, with a solid background in C/C++ programming, so my question may seem naive.
What is the exact meaning of the statement:
with autograd.record(): ?
The explanation in the documentation seems equally cryptic:
and captures code that needs...
Thanks for helping me.

Hi Andrea,

The statement with autograd.record(): means, as mentioned in Automatic Differentiation of Ch.2,

we want MXNet to generate a computational graph on the fly. We could imagine that MXNet would be turning on a recording device to capture the exact path by which each variable is generated.

For more on computational graphs, check out Calculus on Computational Graphs: Backpropagation.

Thus, it's an instruction to MXNet to generate the computational graph from the mathematical expressions in its scope. Finally, we compute the gradients (the vector of partial derivatives) by calling y.backward() after exiting the autograd.record scope.
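
A minimal sketch of the pattern, following the Ch. 2 example (assuming the np/npx interface):

from mxnet import autograd, np, npx
npx.set_np()

x = np.arange(4.0)
x.attach_grad()          # outside Gluon we allocate the gradient buffer ourselves
with autograd.record():  # MXNet records every operation in this scope
    y = 2 * np.dot(x, x)
y.backward()             # backpropagate through the recorded graph
print(x.grad)            # equals 4 * x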

Hope this helps.


Is this issue fixed? I still get the error.