If you’ve tried to learn about neural networks and deep learning, you’ve probably encountered an abundance of resources, from blog posts to MOOCs (massive open online courses, such as those offered on Coursera and Udacity) of varying quality, and even some books—I know I did when I started exploring the subject a few years ago. However, if you’re reading this preface, it’s likely that each explanation of neural networks you’ve come across was lacking in some way. I found the same thing when I started learning: the various explanations were like blind men describing different parts of an elephant, with none describing the whole. That is what led me to write this book.
These existing resources on neural networks mostly fall into two categories.
Some are conceptual and mathematical, containing both the drawings one typically finds in explanations of neural networks (circles connected by lines with arrows on the ends) and extensive mathematical explanations of what is going on, so you can “understand the theory.” A prototypical example of this is the very good book Deep Learning by Ian Goodfellow et al. (MIT Press).