This series of blog posts introduces and explains the most important concepts from linear algebra for machine learning. If you understand the contents of this series, you’ll have all the linear algebra you need to understand deep neural networks and statistical machine learning algorithms on a technical level. Most of my examples reference machine learning to show how the mathematical concepts relate to practical applications. However, the concepts themselves are domain-agnostic, so if you come from a different field, the explanations should still be useful to you.
Here’s an overview in chronological order.
For writing these posts, I’ve relied on the following books:
Mathematics for Machine Learning by Deisenroth, Faisal and Ong
Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
Coding the Matrix by Philip Klein
The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, et al.
Mathematik für Informatiker by Gerald Teschl and Susanne Teschl (in German)
as well as Khan Academy’s video library on linear algebra.
I want to thank the authors for creating these amazing resources and recommend picking them up if you want to explore these topics more in-depth.
I intend to write further series on multivariate calculus, probability and statistics, and neural networks. So stay tuned, and sign up for my email list to get updates!