nice! what is the relationship between this and The Straight Dope? https://gluon.mxnet.io/ Does this replace The Straight Dope as the one-stop-shop for learning both MXNet and DL?
Yes, this book aims to replace The Straight Dope.
Is there any way to enroll in the Berkeley course linked to this book ?
Or at least to get the assignments ?
Assignments and videos will be online. We still need to figure out enrollment; the classroom can only host 100 people, but we have close to 200 people on the waiting list right now.
This is really a nice resource to learn deep learning, thanks!
Welcome to our community
I just wanted to congratulate the authors for this totally awesome book. The Straight Dope was already great, but this is better: from basic concepts all the way to state-of-the-art models.
Thanks for the hard work. I hope I can contribute at some point.
I’m really excited to read and practice the contents of this book. Thanks to the authors.
“The next four chapters focus on modern deep learning techniques” should be “The next 5 chapters focus on modern deep learning techniques”
Hello everyone, I’m very happy to be improving my knowledge of deep learning.
Excited to start learning from the book, and I hope to learn the intricacies well.
I am really excited to start this new journey of learning MXNet for the first time and also learn more about deep learning. I hope I will be able to help other people too!
Very excited about this resource! Diving into the book immediately
Hi there, congratulations and thank you so much for the book.
I just started researching in the field, and this book is gonna be incredibly useful!
Mostly out of curiosity: I see there are basically two versions of the book, based on either the NDArray or the numpy interface, and that the numpy version is recommended.
Why is that? Are numpy arrays more efficient?
I was wondering about it because I visited the GluonCV website, and most of the tutorials there mainly use NDArrays. Which structure is generally more compatible with the libraries and better to use?
We are in the process of migrating to DeepNumpy. That’s why some of our operators still use NDArrays. The new version is compatible with the numpy package, with a faster accelerator backend. Check here for more details: https://d2l.ai/chapter_preliminaries/ndarray.html
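As a rough illustration of why the numpy-style interface is recommended: MXNet’s `mxnet.np` module is designed to mirror NumPy’s API, so code written against NumPy conventions largely carries over. A minimal sketch using plain NumPy (the `mxnet.np` import shown in the comment is an assumption based on the page linked above, not tested here):

```python
import numpy as np
# With MXNet's numpy-compatible interface, the import would instead be:
#   from mxnet import np, npx
#   npx.set_np()
# and the code below should look essentially the same.

# NumPy-style array creation, reshaping, reduction, and broadcasting.
x = np.arange(12).reshape(3, 4)   # 3x4 array of 0..11
row_sums = x.sum(axis=1)          # reduce along columns -> one sum per row
y = x * 2                         # element-wise arithmetic via broadcasting

print(row_sums)                   # [ 6 22 38]
print(y[0, 0], y[-1, -1])         # 0 22
```

The practical upshot is that skills (and much code) transfer in both directions between NumPy and the new MXNet interface, whereas the legacy NDArray API has its own method names and semantics.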