This came up in a math reading group, so I figured I’d write a note. This is relatively computational, but I will try to keep it accessible to someone who has had a first course in linear algebra.
The Schur complement comes up a lot when talking about a block matrix

$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$$

where $A$ is an $n \times n$ block and $D$ is $m \times m$ (thus $M$ is $(n+m) \times (n+m)$). Then the Schur complement, if $D$ is invertible, is

$$M/D = A - BD^{-1}C.$$
The reason it appears so often is most readily explained by the factorization

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & BD^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{pmatrix} \begin{pmatrix} I & 0 \\ D^{-1}C & I \end{pmatrix},$$

which simply comes from first (upside-down) row reducing and then (upside-down) column reducing.
This factorization has a few immediate consequences. One of the most common is that

$$\det(M) = \det(A - BD^{-1}C)\,\det(D).$$

Compare with the formula $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$ for $2 \times 2$ matrices, by setting $m = n = 1$. So $M$ is invertible if and only if $D$ and $A - BD^{-1}C$ are both invertible.
This also has the benefit of giving a factorization of $M^{-1}$:

$$M^{-1} = \begin{pmatrix} I & 0 \\ -D^{-1}C & I \end{pmatrix} \begin{pmatrix} (A - BD^{-1}C)^{-1} & 0 \\ 0 & D^{-1} \end{pmatrix} \begin{pmatrix} I & -BD^{-1} \\ 0 & I \end{pmatrix}.$$
So the Schur complement comes up pretty much whenever you want to invert a block matrix.
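As a quick sanity check, here is a numerical sketch using NumPy (the matrix sizes and random entries are just for illustration) that verifies the determinant identity and the block-inverse formula that fall out of the factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))  # generically invertible

M = np.block([[A, B], [C, D]])
S = A - B @ np.linalg.solve(D, C)  # the Schur complement M/D

# det(M) = det(M/D) det(D)
assert np.isclose(np.linalg.det(M), np.linalg.det(S) * np.linalg.det(D))

# Multiplying out the three factors of M^{-1} gives the block form below.
Dinv = np.linalg.inv(D)
Sinv = np.linalg.inv(S)
Minv = np.block([
    [Sinv,             -Sinv @ B @ Dinv],
    [-Dinv @ C @ Sinv,  Dinv + Dinv @ C @ Sinv @ B @ Dinv],
])
assert np.allclose(Minv, np.linalg.inv(M))
```

Note that inverting $M$ this way only requires inverting the smaller matrices $D$ and $M/D$, which is exactly why the Schur complement shows up in numerical linear algebra.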
Another, less intuitive, place where this comes up is in quadratic forms. For this case, assume that $M$ is symmetric, and hence $A$ and $D$ are symmetric and $C = B^T$. Suppose we want to write the quadratic form $q(x) = x^T M x$ in terms of a block vector

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

Then we have the following formula:

$$x^T M x = x_1^T (A - BD^{-1}B^T)\, x_1 + (x_2 + D^{-1}B^T x_1)^T D\, (x_2 + D^{-1}B^T x_1).$$

(In particular, for a fixed $x_1$, when $D$ is positive definite the minimum over $x_2$ is $x_1^T (A - BD^{-1}B^T)\, x_1$, attained at $x_2 = -D^{-1}B^T x_1$.) To prove this formula for symmetric $M$, note that in the decomposition above the first factor is now the transpose of the third, so

$$M = L^T \begin{pmatrix} A - BD^{-1}B^T & 0 \\ 0 & D \end{pmatrix} L, \qquad L = \begin{pmatrix} I & 0 \\ D^{-1}B^T & I \end{pmatrix},$$

and substituting $Lx = (x_1,\ x_2 + D^{-1}B^T x_1)^T$ gives the formula.
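Here is a small numerical check of this completion-of-squares identity (a sketch using NumPy; building $A$ and $D$ as positive definite is just a convenient way to guarantee symmetry and invertibility):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # symmetric
B = rng.standard_normal((n, m))
D = rng.standard_normal((m, m)); D = D @ D.T + m * np.eye(m)  # symmetric, invertible
M = np.block([[A, B], [B.T, D]])

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(m)
x = np.concatenate([x1, x2])

S = A - B @ np.linalg.solve(D, B.T)    # Schur complement M/D
w = x2 + np.linalg.solve(D, B.T @ x1)  # the completed-square variable

lhs = x @ M @ x
rhs = x1 @ S @ x1 + w @ D @ w
assert np.isclose(lhs, rhs)
```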
My class is starting to cover Markov chains, and I thought that it would be good to share some videos which give gentle introductions, with examples and questions that go along with them.
I am a fan of PBS’ Infinite Series, a YouTube channel which features a lot of explanations of interesting math topics that you probably don’t see in school, even if you have a higher STEM degree.
The first video that I want to share talks about random walks.
The second talks about more general Markov Chains, and has a nice trick for calculating the stationary distribution.
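The videos explain the ideas better than I can here, but if you want to experiment yourself, here is one standard way (not necessarily the trick from the video) to compute a stationary distribution numerically, as a left eigenvector with eigenvalue 1; the transition matrix is made up for illustration:

```python
import numpy as np

# Transition matrix of a small two-state chain (each row sums to 1).
P = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

# A stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P (equivalently, an eigenvector of P^T) with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()  # normalize to a probability distribution

# For this chain, pi = (5/6, 1/6).
assert np.allclose(pi @ P, pi)
```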
In the past few lectures I have found myself saying “reasonable sets” a few times (“the probability that $X$ is in any ‘reasonable set’ $A$ is the length of $A$”). I thought I would take some time to explain what I mean by this.

It turns out that if we want to define a uniform random variable (or any continuous random variable), there are unreasonable sets: sets for which it is impossible to even define the probability that $X$ lands in them. Perhaps the most mind-blowing way to see that not every set can be reasonable is the Banach–Tarski paradox. This paradox gives a way to rearrange a sphere into two spheres of the same size… thus if we could define the probability that our uniform random variable was in those sets, there would be two disjoint sets where this probability was 1. This breaks the axioms of probability. So for the axioms of probability to hold we need to restrict ourselves to measurable sets (which is what mathematicians call the reasonable ones).
Banach–Tarski is not the easiest example of an unreasonable set, but it is the most striking. Also, VSauce has a beautiful and fairly comprehensive video explaining the paradox…
Last week, a student came to me after class and asked if I could explain the idea behind Bayes Formula. While I would like to say I gave an eloquent explanation, in truth, my account was a bit, um, ramble-y. So I thought that I would write down what I was trying to say in hopes of attaining my ideal.
My ramblings placed the meaning of Bayes’ formula into two buckets. The first is reversing a conditional probability, i.e. converting from the probability of $A$ given $B$ to the probability of $B$ given $A$. That is a good explanation for the following formula:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$

On the left we are given that event $B$ has happened, and on the right we are conditioning on the ($\sigma$-algebra generated by) $A$.
Thinking of it as the formula for switching conditionals highlights this formula’s role in many misconceptions: while, when written in the language of math, it is obvious that $P(A \mid B) \neq P(B \mid A)$ in general, our language can easily conflate the two. For example, saying a test for a disease is 99% accurate is not the same as saying that 99% of those who test positive actually have the disease. Translating into math, the former may correspond to $P(test~positive \mid are~positive) = .99$, whereas the latter would be $P(are~positive \mid test~positive) = .99$. This mistake pops up everywhere once you start looking for it.
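To make the disease-test example concrete, here is a quick back-of-the-envelope computation (the prevalence and accuracy numbers are made up for illustration):

```python
# Base-rate fallacy: a 99%-accurate test does not mean 99% of positives are sick.
p_disease = 0.001           # prior: 1 in 1000 people have the disease
p_pos_given_sick = 0.99     # P(test positive | are positive)
p_pos_given_healthy = 0.01  # false positive rate

# Law of total probability: P(test positive)
p_pos = (p_pos_given_sick * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' formula: P(are positive | test positive)
p_sick_given_pos = p_pos_given_sick * p_disease / p_pos
print(p_sick_given_pos)  # roughly 0.09 -- nowhere near 0.99
```

Because the disease is rare, the false positives from the healthy majority swamp the true positives, which is exactly the conflation described above.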
The other main use of the formula is predicting the future based on the past. Bayes said the formula was necessary for a “…sure foundation for all our reasonings based on past facts…”. Here, Bayes’ formula tells you how to accurately incorporate a new observation into previous knowledge. This strays a little from mathematics in that we need to pick Bayesian priors, which, to varying extents, need to be guessed at. The example that I most often use for this is how one should factor in a new piece of evidence at a trial…. although be careful, that could get you in trouble.
A good source for examples of this is Nate Silver’s book The Signal and the Noise, which is aimed at readers who are interested in math but need no more than some basic statistics knowledge.
What do you think? Does this reflect how you have used Bayes’ theorem, or how you have seen it used? Would it be helpful if I worked through some specific examples?
There are a few places where you can find information about me across the internet; here is a list of them:
There will be more here soon!