Uniform Sampling of Some Quadrics

I recently encountered a problem in which I needed to uniformly sample vectors with positive values in \(\mathbb{R}^n\) that had unit \(L_1\) norm, where the \(L_1\) norm is \[ \lVert (x_1, x_2, \ldots, x_n)^T \rVert_1 = |x_1| + |x_2| + \ldots + |x_n| \] The solution was actually quite straightforward: the isosurfaces of the \(L_1\) norm in \(\mathbb{R}^n\) look something like this, with each color corresponding to a particular isosurface:...
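
For context, here is a minimal sketch of one standard construction for this kind of sampling (normalizing i.i.d. exponential draws, which is equivalent to a flat Dirichlet distribution); the excerpt does not say which construction the post itself uses, and the function name below is mine.

```python
import numpy as np

def sample_unit_l1(n, size=1, seed=None):
    """Draw vectors uniformly from the positive part of the L1 unit sphere
    (the standard simplex) by normalizing i.i.d. Exponential(1) draws."""
    rng = np.random.default_rng(seed)
    e = rng.exponential(size=(size, n))      # strictly positive entries
    return e / e.sum(axis=1, keepdims=True)  # each row now has unit L1 norm

samples = sample_unit_l1(n=3, size=5)
print(samples)
print(samples.sum(axis=1))  # every row sums to 1
```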

January 25, 2021 · 5 min · William Arnold

Some Null Hypersurface Visualizations

On manifolds with positive definite metrics (e.g., the dot product), any local embedding of an \(n-1\) dimensional submanifold will have tangent spaces that decompose those of the original manifold into an \(n-1\) dimensional subspace (that of the submanifold) and a \(1\) dimensional subspace (the part of the tangent space orthogonal to the submanifold). The black vector here denotes the subspace orthogonal to the tangent space of the submanifold, and the red is the tangent space of the submanifold....
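
As a small illustration of the positive-definite case described above (not taken from the post), the sketch below splits a vector into its tangential and orthogonal parts relative to a level-set hypersurface in \(\mathbb{R}^n\) with the ordinary dot product; the helper name is mine.

```python
import numpy as np

def split_tangent_normal(grad_f, v):
    """Split v into a component tangent to the level set of f (the
    (n-1)-dimensional piece) and a component along grad_f (the
    1-dimensional orthogonal piece), using the ordinary dot product."""
    n_hat = grad_f / np.linalg.norm(grad_f)
    v_normal = np.dot(v, n_hat) * n_hat
    v_tangent = v - v_normal
    return v_tangent, v_normal

# Example: the unit sphere x^2 + y^2 + z^2 = 1 at p = (0, 0, 1),
# where grad f = 2p points straight along the z-axis.
p = np.array([0.0, 0.0, 1.0])
t, n = split_tangent_normal(2 * p, np.array([1.0, 2.0, 3.0]))
print(t, n)  # tangent part lies in the xy-plane, normal part along z
```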

January 24, 2021 · 3 min · William Arnold

Computer Vision: Fun with Filters and Frequencies

This is the result of the second project of CS194-26 from Fall 2020. 1.1: Finite Difference Operators. We compute the magnitude of the gradient by first finding the gradient in the x and y directions, dx and dy respectively, then computing (dx**2 + dy**2)**0.5 to get the true magnitude. For the following image: The squared magnitude of the gradient is shown below. The problem with this specific method is that we get a fair amount of noise: we want the edges of the image, but the gradient still takes significant values in smaller-scale detail such as the grass....
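
A minimal sketch of the gradient-magnitude computation described above, using simple [1, -1] finite difference filters; the exact filters, boundary handling, and thresholds in the project write-up may differ.

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_magnitude(img):
    """Approximate the gradient magnitude of a grayscale image using the
    finite difference filters D_x = [1, -1] and D_y = [1, -1]^T."""
    dx_filter = np.array([[1.0, -1.0]])
    dy_filter = np.array([[1.0], [-1.0]])
    dx = convolve2d(img, dx_filter, mode="same", boundary="symm")
    dy = convolve2d(img, dy_filter, mode="same", boundary="symm")
    return (dx**2 + dy**2) ** 0.5

# Thresholding the result keeps strong edges and suppresses low-level noise:
# edges = gradient_magnitude(img) > 0.1
```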

September 26, 2020 · 7 min · William Arnold

Markov's Inequality Visually Explained

I recently came across the definition of Markov’s Inequality as it’s used in measure theory and was shocked at how intuitive it was. It turns out this definition is roughly equivalent to the one taught in most undergraduate probability theory courses, and it helped me a lot in understanding the inequality. First, for a non-negative random variable \(X\) and any \(a > 0\), Markov’s inequality states that \[ \Pr(X \geq a) \leq \frac{\mathbb{E}[X]}{a}\] or, equivalently, \[ a \Pr(X \geq a) \leq \mathbb{E}[X] \]...
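
As a quick numerical illustration (not part of the post), the snippet below checks the inequality empirically for an exponential random variable, whose mean is known.

```python
import numpy as np

# Empirical check of Markov's inequality for a non-negative random variable.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # E[X] = 2

for a in [1.0, 2.0, 5.0, 10.0]:
    lhs = a * np.mean(x >= a)  # estimate of a * Pr(X >= a)
    rhs = x.mean()             # estimate of E[X]
    print(f"a={a}: a*Pr(X>=a) = {lhs:.3f} <= E[X] = {rhs:.3f}")
```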

August 28, 2020 · 3 min · William Arnold

Understanding Modspaces Visually

A lot of students in office hours seemed to be unsure of what a Modspace is, exactly what \(\bmod m\) means (is it a function?), or why \(3 \equiv 18 \pmod{15}\). To start out, we need to talk about what a Modspace is. (Note: super mathy things that you should read about if you’re interested are written in italics, e.g., tropical math.) Consider the integers, \(\mathbb{Z}\). For integers, \(+\) is an operation called ‘addition’ that takes in two numbers and adds them together....
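
As a small aside (not from the post), congruence mod \(m\) can be checked directly: two integers are congruent exactly when their difference is divisible by \(m\).

```python
# a and b are congruent mod m exactly when m divides a - b.
def congruent(a, b, m):
    return (a - b) % m == 0

print(congruent(3, 18, 15))  # True: 18 - 3 = 15 is divisible by 15
print(18 % 15)               # 3: both numbers land in the same residue class
```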

February 15, 2019 · 4 min · William Arnold

Linear Algebra for Graph Algorithms and Massively Parallel Machines

Breadth-first search is one of the most widely applied graph algorithms. It’s used in bioinformatics to determine the locality of areas of the brain, in social network analysis, in recommender systems, and in many other applications. It is a simple algorithm that determines the number of edges from one vertex to every other vertex, i.e., each vertex's depth in the BFS tree. While the algorithm is simple and the naive implementation has a fairly good runtime, it is commonly run on graphs with billions (or even trillions) of vertices....
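
As a minimal sketch of the linear-algebraic view the post's title refers to (assumed here, not quoted from it), BFS can be written as repeated sparse matrix-vector products, where each product expands the frontier by one level; the function name and example graph below are mine.

```python
import numpy as np
import scipy.sparse as sp

def bfs_levels(adj, source):
    """Compute BFS depths by repeated sparse matrix-vector products:
    each multiplication by the adjacency matrix advances the current
    frontier by one level. adj is a symmetric scipy.sparse matrix."""
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    depth = 0
    while frontier.any():
        levels[frontier] = depth
        # Next frontier: neighbors of the current one, minus visited vertices.
        nbrs = adj.T @ frontier.astype(np.int8)
        frontier = (nbrs > 0) & (levels == -1)
        depth += 1
    return levels

# Path graph 0-1-2-3: depths from vertex 0 are [0, 1, 2, 3].
edges = sp.csr_matrix(([1, 1, 1], ([0, 1, 2], [1, 2, 3])), shape=(4, 4))
adj = edges + edges.T
print(bfs_levels(adj, 0))
```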

January 30, 2019 · 3 min · William Arnold