Linear Algebra
Matrix as a rulebook for walking in space

Building an understanding of matrices and matrix transformations through a geometric interpretation. We look at how different matrices of shape (2,N) allow one to navigate 2D space in different ways.
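As a small sketch of this idea (not from the article itself; the matrix and step counts below are made up), the columns of a 2xN matrix can be read as N allowed "steps" in the plane, and a walk is a matrix-vector product:

```python
import numpy as np

# Columns of a 2xN matrix as N allowed "steps" in 2D space.
steps = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])  # shape (2, 3): right, up, diagonal

# Walking: take each step some number of times.
counts = np.array([2, 1, 3])         # step 0 twice, step 1 once, step 2 thrice
position = steps @ counts            # final position in the plane
print(position)                      # [5. 4.]
```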


Matrix as a linear transformation of points in space

We view a matrix as a function with vectors as input and output. Looking at how a matrix transforms a circle into ellipses of various shapes provides some delightful insights into 2D matrices.
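A minimal numerical sketch of this view (the specific matrix is a made-up example): applying a 2x2 matrix to points on the unit circle traces out an ellipse whose semi-axes are the singular values of the matrix.

```python
import numpy as np

# A matrix as a function: it maps the unit circle to an ellipse.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])  # points on the unit circle
ellipse = A @ circle                               # image under the transformation

# The semi-axes of the image are the singular values of A.
print(np.linalg.svd(A, compute_uv=False))          # [2.  0.5]
```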


Visualizing the behavior of transpose of a matrix

We analyze the behavior of the transpose of A and its products with A. We find some beautiful insights into how these matrices behave as A is rotated or modified gradually.
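One such property, sketched numerically with a random matrix (an illustration, not the article's own example): the product of Transpose of A with A is unchanged when A is rotated, since (QA)^T (QA) = A^T Q^T Q A = A^T A for any rotation Q.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))

phi = 0.7                                # an arbitrary rotation angle
Q = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])  # rotation: Q^T Q = I

# Rotating A leaves (Transpose of A * A) unchanged.
print(np.allclose((Q @ A).T @ (Q @ A), A.T @ A))   # True
```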


How is the projection of a vector onto matrix A related to the matrix (Transpose of A * A)?

We observe how the matrix (Transpose of A * A) ignores the component of any vector perpendicular to its column space. We then proceed to appreciate the simplicity of the scary-looking formula for the projection of a vector.
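The projection formula in question can be sketched numerically (A and b below are made-up examples): the projection of b onto the column space of A is p = A (A^T A)^{-1} A^T b, and the residual b - p is perpendicular to every column of A.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# Projection of b onto col(A): solve (A^T A) x = A^T b, then p = A x.
p = A @ np.linalg.solve(A.T @ A, A.T @ b)

# The leftover part of b is perpendicular to the column space of A.
print(np.allclose(A.T @ (b - p), 0))   # True
```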


Symmetric and Symmetric Positive Definite Matrices

A look at some elementary properties of symmetric positive (semi-)definite matrices, along with some pleasant visualizations of transformations of singular, random and symmetric matrices.
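One elementary property is easy to check numerically (with a random matrix as an illustration): for any A, the product of Transpose of A with A is symmetric positive semi-definite, since x^T (A^T A) x = ||Ax||^2 >= 0.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S = A.T @ A   # symmetric positive semi-definite by construction

print(np.allclose(S, S.T))                      # symmetric: True
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))  # all eigenvalues >= 0: True
```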


Symmetric Positive Definiteness and Minima

We study the relationship between quadratic functions and matrices. We also see how positive definiteness of the matrix implies convexity of the function and hence the presence of a global minimum.
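A small numerical sketch of the connection (the 2x2 matrix is a made-up example): when S is positive definite, the quadratic f(x) = x^T S x is strictly positive away from the origin, so x = 0 is its global minimum.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Positive definite: all eigenvalues strictly positive.
print(np.all(np.linalg.eigvalsh(S) > 0))   # True

def f(x):
    return x @ S @ x

# f is positive for (almost surely nonzero) random points,
# so the origin is the global minimum.
rng = np.random.default_rng(2)
samples = rng.standard_normal((1000, 2))
print(all(f(x) > 0 for x in samples))      # True
```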



Probability
What does Bayesian probability look like?

Visualizing Bayes theorem and understanding the ratios that Bayes probabilities represent. We look at what terms like P(X), P(X|Y) and P(Y|X) represent and try to understand why P(X|Y) can be totally different from P(Y|X).
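A tiny numerical sketch with made-up numbers: Bayes theorem says P(X|Y) = P(Y|X) P(X) / P(Y), and when X is rare the two conditional probabilities can differ dramatically.

```python
# Made-up numbers: X is a rare condition, Y is a positive test result.
p_x = 0.01              # P(X)
p_y_given_x = 0.9       # P(Y|X)
p_y_given_not_x = 0.05  # P(Y|not X): false positive rate

# Total probability of Y, then Bayes theorem.
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)
p_x_given_y = p_y_given_x * p_x / p_y

print(round(p_y_given_x, 3))   # 0.9
print(round(p_x_given_y, 3))   # 0.154 -- very different from 0.9
```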


A look at Base Rate Fallacy and Monty Hall Problem

Demystifying the counterintuitive nature of problems like the Base Rate Fallacy and the Monty Hall Problem in the context of Bayes theorem. We also look at different variants of the Monty Hall Problem in a single unified visual.
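The Monty Hall result is easy to confirm by simulation (a sketch, not the article's own method): a player who always switches wins about two thirds of the time.

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
wins = sum(play(True, rng) for _ in range(n))
print(round(wins / n, 2))   # ~0.67: switching wins about 2/3 of the time
```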


What does Bayesian inference look like?

Using Bayes theorem to do inference on coin flip data; understanding terms like likelihood, prior and posterior probabilities; realizing how a large volume of data makes the accuracy of priors less important.
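A minimal sketch of this effect using a conjugate Beta prior (the counts below are made up): with a Beta(a, b) prior on the heads probability, the posterior after h heads and t tails is Beta(a + h, b + t), and as the data grows, different priors give nearly the same posterior mean.

```python
def posterior_mean(a, b, heads, tails):
    # Posterior is Beta(a + heads, b + tails); this is its mean.
    return (a + heads) / (a + b + heads + tails)

# Little data: the prior matters.
print(round(posterior_mean(1, 1, 7, 3), 3))       # 0.667 (uniform prior)
print(round(posterior_mean(10, 10, 7, 3), 3))     # 0.567 (strong prior near 0.5)

# Lots of data: the prior barely matters.
print(round(posterior_mean(1, 1, 700, 300), 3))   # ~0.7
print(round(posterior_mean(10, 10, 700, 300), 3)) # 0.696
```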


Maximum Likelihood Estimation vis-a-vis Maximum A Posteriori

What are MLE and MAP? What do they correspond to? When are these values the same and when are they different? We answer these questions using a simple example.
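A sketch of the distinction for a coin's heads probability (made-up counts; Beta prior assumed): the MLE maximizes the likelihood alone, the MAP also weighs in the prior, and with a uniform prior the two coincide.

```python
def mle(heads, tails):
    # Maximum likelihood estimate: fraction of heads observed.
    return heads / (heads + tails)

def map_estimate(heads, tails, a, b):
    # Mode of the Beta(a + heads, b + tails) posterior.
    return (heads + a - 1) / (heads + tails + a + b - 2)

print(mle(7, 3))                  # 0.7
print(map_estimate(7, 3, 1, 1))   # 0.7  (uniform prior: same as MLE)
print(map_estimate(7, 3, 5, 5))   # prior pulls the estimate toward 0.5
```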



Dualities
Understanding Galois connections using rectangles

In this first article related to Category Theory, we take a look at Galois connections and their properties. As usual, we use a certain set of visuals that allows us to translate mathematical expressions into simple properties of rectangles.
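A classic concrete instance of a Galois connection, checked numerically as a sketch (this example is standard, though not necessarily the article's): for any integer n and real x, n <= x if and only if n <= floor(x), i.e. the inclusion of integers into the reals is left adjoint to floor.

```python
import math

def adjunction_holds(n, x):
    # n <= x  iff  n <= floor(x), for integer n and real x.
    return (n <= x) == (n <= math.floor(x))

# Check over a grid of integers n and quarter-integer reals x.
print(all(adjunction_holds(n, x / 4)
          for n in range(-8, 9)
          for x in range(-32, 33)))   # True
```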