<p>Δ ℚuantitative √ourney (<a href="http://outlace.com/">outlace.com</a>): Science, Math, Statistics, Machine Learning</p>
<p>Learning Lean (Part 2), by Brandon Brown, 2022-07-10</p>
<p>In part 2 we learn how to prove our first theorems about the natural numbers in Lean.</p><p>In <a href="http://outlace.com/Lean_part_1.html">part 1</a> we learned some basics about Lean and we defined the natural numbers as a type <code>Nat</code> and the addition function for <code>Nat</code>. </p>
<div class="highlight"><pre><span></span><span class="nf">namespace</span><span class="w"> </span><span class="kt">Tutorial</span>
<span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">where</span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
<span class="nf">def</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">a</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">c</span><span class="p">)</span>
<span class="nf">end</span><span class="w"> </span><span class="kt">Tutorial</span>
</pre></div>
<p>Now we will learn how to prove our first theorems about the natural numbers. [Note that all of the code in these posts is in the namespace <code>Tutorial</code> to avoid name clashes with the types and functions in Lean's standard library.]</p>
<p>Here we declare our first theorem:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero_zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>Almost every expression in Lean is either an inductive type or a function type (constructed using <code>def</code>), but again, Lean has <em>a lot</em> of syntactic sugar and aliases for things. The keyword <code>theorem</code> is just an alias for <code>def</code>, and indeed, theorems are just terms of a specified type (in fact, theorems are often just functions). We used the <code>def</code> keyword in the last post to define functions, but as the name suggests, it is a general way to define a named term of some specific type; in other words, it lets you assign a type and an expression (or value) to an identifier.</p>
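<p>To make the alias concrete, this sketch states the same theorem with <code>def</code> instead of <code>theorem</code> (with a hypothetical name so it doesn't clash with the version above):</p>
<div class="highlight"><pre>def add_zero_zero' : add zero zero = zero := sorry
-- identical in meaning to the `theorem` version;
-- `theorem` just signals our intent to the reader
</pre></div>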
<p>For example,
<code>def x : Nat := succ zero</code></p>
<p>We've defined a term named <code>x</code> of type <code>Nat</code> assigned to the value <code>succ zero</code> (i.e. the number one). Now you can use it in other expressions, such as:
<code>#reduce add x x</code></p>
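<p>If you run that command, Lean should reduce the expression all the way down to constructors. A sketch of what to expect (the exact printed form may include namespace prefixes like <code>Tutorial.Nat</code>):</p>
<div class="highlight"><pre>#reduce add x x
-- x is succ zero, so this unfolds step by step:
--   add (succ zero) (succ zero)
-- = succ (add (succ zero) zero)
-- = succ (succ zero)
</pre></div>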
<p>Back to the theorem. </p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero_zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>So we are defining a theorem called <code>add_zero_zero</code> and this term has the type <code>add zero zero = zero</code>. We have assigned the term to the expression <code>sorry</code>, which is a keyword in Lean that serves as a placeholder when you want to be able to state a theorem without proving it yet.</p>
<p>It is interesting that this theorem has the type <code>add zero zero = zero</code>, what is going on here? We are trying to state the theorem that <code>0 + 0 = 0</code> (but we haven't built the nice <code>+</code> notation for our <code>Nat</code> type yet), and we do so by defining a term with that type. If we can actually specify an expression after the assignment operator <code>:=</code> that constructs a term of type <code>0 + 0 = 0</code>, then our term will <em>type check</em> as a valid term, or in this context, Lean will verify we have proved the theorem.</p>
<p>Firstly, it might seem ridiculous that we should have to prove <code>0 + 0 = 0</code>; isn't that totally obvious? Well, Lean is a powerful programming environment, but it doesn't generate theorems automatically and it doesn't know what we care about, so even extremely trivial statements like this must be explicitly stated and proved. However, Lean does agree that this is obvious, in the sense that the proof is trivial. And recall, by proof we mean an implementation of an expression that generates a term of this type.</p>
<p>Well what exactly is this type <code>add zero zero = zero</code>? We are trying to prove two things are equal, and recall the (mostly true) mantra <em>everything in Lean is an inductive type or function type</em>, so this expression must be one of those. Indeed, this is an equality type, and the equality type is defined in Lean as an inductive type. Here's how it is defined:</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="p">{</span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Sort</span><span class="w"> </span><span class="kr">_</span><span class="p">}</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Prop</span><span class="w"> </span><span class="kr">where</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">refl</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>Okay there's a lot more going on here than our <code>Nat</code> type, and we will need to learn more features of Lean to understand it. First, notice this is defining an inductive type called <code>Eq</code>, but then there are these curly braces and an alpha symbol and sort... Confusing.</p>
<p><code>{α : Sort _}</code> is an <em>implicit</em> input to this type. </p>
<p>Recall our identity function,
<code>def natId (a : Nat) : Nat := a</code></p>
<p>The <code>a</code> is an explicit input to this function; without supplying a term <code>a</code>, this function doesn't work. However, some functions have inputs that do not need to be supplied explicitly, but can be inferred by Lean automatically. When this is the case, these inputs can be put in curly braces instead of parentheses.</p>
<p>As an example, let's make a more general version of our identity function. Before our identity function only worked on terms of type <code>Nat</code>, but we can make a <em>polymorphic</em> identity function that works on terms of any type.</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">id</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">_</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>This function has one <em>implicit</em> (optional, because it can be automatically inferred) input <code>α</code> (which you can type in Visual Studio Code by typing <code>\a</code>) and one <em>explicit</em> (required) input <code>a</code>. If you apply the function <code>id</code> to a term of <em>any</em> type, it will work and just return that term unchanged.</p>
<p>The implicit input <code>α</code> is assigned the type <code>Type _</code> because it is itself representing some type, and we want to make sure our function works on all types in the type hierarchy <code>Type 0, Type 1, Type 2 ...</code> that we discussed in the last post. These numbered type levels are called <em>universes</em>, by the way. The underscore character <code>_</code> is like a "wildcard" that tells Lean to infer what should go there based on the context.</p>
<p>When we apply <code>id</code> to a term of type <code>Nat</code>,</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">id</span><span class="w"> </span><span class="n">zero</span>
<span class="c1">--Output: id zero : Nat</span>
</pre></div>
<p>Lean will infer that <code>α</code> must be <code>Nat</code> and <code>Nat</code> lives in the type universe <code>Type 0</code>, so the wildcard <code>_</code> can be inferred to be <code>0</code>. The underscore wildcard is quite useful as Lean can often infer simple <em>holes</em> we leave for it.</p>
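<p>Holes are not limited to universe levels. As a small sketch (assuming the definitions above are in scope), we can even leave a whole type annotation as a hole and let Lean fill it in from the body:</p>
<div class="highlight"><pre>def y : _ := succ zero
#check y
-- Lean infers the hole from the body, so y : Nat
</pre></div>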
<p>Okay, back to trying to understand the equality type.</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="p">{</span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Sort</span><span class="w"> </span><span class="kr">_</span><span class="p">}</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Prop</span><span class="w"> </span><span class="kr">where</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">refl</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>We now know what the <code>{α : Sort _}</code> means, oh wait, no we don't, because it says <code>Sort _</code> instead of <code>Type _</code>. Ok, so <code>Type u</code> (where <code>u</code> is some non-negative number) is actually an alias for <code>Sort (u + 1)</code>, so e.g. <code>Type 0 = Sort 1</code>. What's the point of this alias? Well, because <code>Sort 0</code> is special. <code>Sort 0</code> itself has another alias, namely <code>Prop</code>. <code>Prop</code> (<code>Sort 0</code>) is a special type universe because it is where <em>propositions</em> live. </p>
<p>A proposition is a statement with a truth value, i.e. something that we think is either true or false, like <code>0 + 0 = 0</code>. A proposition in Lean, therefore, is just a type that lives in the <code>Prop</code> universe, but everything else we've talked about with regard to "regular" types and universes like <code>Nat</code> and <code>Type 0</code> still applies. And as mentioned earlier, a proof of a proposition (or theorem) is just an expression that generates or constructs a term of that specific type, and that type will itself have the type <code>Prop</code>. </p>
<p>So the type <code>0 + 0 = 0</code> is of type <code>Prop</code> (aka <code>Sort 0</code>), and a proof of <code>0 + 0 = 0</code> is an expression that constructs a term of type <code>0 + 0 = 0</code>.</p>
<p>Oh, and the main reason <code>Prop</code> is special is that in Lean, any two terms of a given <code>Prop</code> are considered definitionally equal. So if we came up with two different ways of proving that <code>0 + 0 = 0</code>, then Lean will consider those proofs definitionally equal, unlike in the other type universes <code>Type 0, Type 1,</code> etc. Once we have proved a proposition, it no longer matters in <code>Prop</code> how exactly it was proved; we just care that it was indeed proved.</p>
<p>And again, let's get back to trying to understand the equality type.</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="p">{</span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Sort</span><span class="w"> </span><span class="kr">_</span><span class="p">}</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Prop</span><span class="w"> </span><span class="kr">where</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">refl</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>So far we have explained everything up to the colon. After the colon, we are defining the type of our new inductive type, and unlike <code>Nat</code>, which has the (default) type <code>Type 0</code>, <code>Eq</code> is given the type <code>α → α → Prop</code>, which you should recognize as a function type. The input types of a function are all the types before the final one in a chain <code>... → ... → ...</code>; the last type is the output. So in this case, the function has two inputs of type <code>α</code>, and remember, <code>α</code> is an implicit input that Lean will infer for us. </p>
<p>An inductive type like this actually defines a <em>family</em> of inductive types, one for each specific <code>α</code> given. If we parameterize <code>Eq</code> with <code>Nat</code>, then we get the family of equality types in which some expression of type <code>Nat</code> (propositionally) equals some other expression of type <code>Nat</code>.</p>
<p>We can force <em>implicit</em> inputs to become explicit inputs to a function by prefixing the function name with the <code>@</code> character, for example:</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="o">@</span><span class="kt">Eq</span><span class="w"> </span><span class="kt">Nat</span>
<span class="c1">--Output: Eq : Nat → Nat → Prop</span>
</pre></div>
<p>This defines the family of equality propositions between natural numbers. A specific proposition in this family of equality types <code>Nat → Nat → Prop</code> would be what we have been trying to prove, namely <code>add zero zero = zero</code> (aka <code>0 + 0 = 0</code>).</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="p">(</span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="p">)</span><span class="w"> </span><span class="n">zero</span>
<span class="c1">-- Which is the same as:</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span>
<span class="c1">--Output: add zero zero = zero : Prop</span>
</pre></div>
<p>By supplying <code>Eq</code> with two terms of type <code>Nat</code>, we get a proposition, as the type signature <code>Nat → Nat → Prop</code> would predict.</p>
<p>Back to the equality type.</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="p">{</span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Sort</span><span class="w"> </span><span class="kr">_</span><span class="p">}</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Prop</span><span class="w"> </span><span class="kr">where</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">refl</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Eq</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>We should now understand the top line, so let's figure out what's going on in the one constructor for this type. The constructor is called <code>refl</code>, for <em>reflexivity</em> (a math term), and it allows us to construct a term of type <code>Eq _ _</code> (there are those wildcards again) simply by giving it a term of type <code>α</code>.</p>
<p>For example,</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="kt">Eq</span><span class="o">.</span><span class="n">refl</span><span class="w"> </span><span class="n">zero</span>
<span class="c1">--Output: Eq.refl zero : zero = zero</span>
</pre></div>
<p>The only things <code>refl</code> can directly equate are a term and itself, e.g. <code>zero = zero</code>. The inductive type <code>Eq</code> defines <em>propositional</em> equality: equality as a statement that we prove by constructing a term. There is another kind of equality in Lean that is not an inductive type, namely <em>definitional</em> equality. If I define <code>def x := zero</code> then <code>x = zero</code> by definition, because anywhere there is an <code>x</code> in an expression it can be replaced with <code>zero</code>, and as we will see shortly, <code>Eq.refl</code> accepts two sides that are definitionally equal, not just literally identical.</p>
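<p>A small sketch of how the two kinds of equality interact (hypothetical name <code>z</code>, assuming the definitions above are in scope):</p>
<div class="highlight"><pre>def z := zero
-- z is definitionally equal to zero, so the term Eq.refl zero
-- also checks at the type Eq z zero (i.e. z = zero):
#check (Eq.refl zero : Eq z zero)
</pre></div>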
<p>Now we can get back to studying our first theorem.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero_zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>We now know that in order to prove this theorem (replace the <code>sorry</code> with a valid expression of type <code>add zero zero = zero</code>), we need to construct an equality term, and we now know the way to do that is using equality's single constructor <code>refl</code>.</p>
<p>At long last, we are ready for the proof.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero_zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="kt">Eq</span><span class="o">.</span><span class="n">refl</span><span class="w"> </span><span class="n">zero</span>
</pre></div>
<p>We proved this theorem with <code>Eq.refl zero</code>. How do we know it is proved? Because Lean doesn't report any errors, and if we run
<code>#check add_zero_zero</code> it prints the theorem's type without complaint.</p>
<p>But wait, what just happened when we did <code>Eq.refl zero</code>? Well, Lean will automatically compute (or reduce) <code>add zero zero</code>, which reduces to <code>zero</code>, so the proposition becomes <code>zero = zero</code>, which is a type whose term is constructed with <code>Eq.refl zero</code>. What this illustrates is that proofs of equality where one side of the equality expression can be automatically reduced (by definition of the function) can be proved by <code>Eq.refl _</code>. This pattern occurs frequently enough that, yep, there is an alias for it called <code>rfl</code>.</p>
<p>So this does the same thing, and is more commonly used:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero_zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">rfl</span>
</pre></div>
<p>Here <code>rfl</code> is essentially an alias for <code>Eq.refl _</code> where again, the wildcard <code>_</code> tells Lean to infer, in this case, that <code>_</code> needs to be <code>zero</code> for the expression to type check.</p>
<p>Okay, so now we understand how to state the proposition that <code>0 + 0 = 0</code> and how to prove it in Lean.</p>
<p>Let's prove something slightly more general, namely that for any <code>a : Nat</code>, <code>a + zero = a</code>, or in math-y notation that <span class="math">\(\forall a \in \mathbb{N},\ a + 0 = a\)</span>, where the upside down A <span class="math">\(\forall\)</span> means "for all", <span class="math">\(\in\)</span> means "in", and <span class="math">\(\mathbb{N}\)</span> is the standard notation for the set of natural numbers, if that isn't already familiar to you.</p>
<p>Actually, let me take some time to digress on the difference between mathematics in Lean and mathematics in, say, a typical university-level mathematics course. One could think of mathematics, in very broad strokes, as starting with some rules that are assumed to be true (i.e. axioms) and then just seeing what you can create from there. In this sense, games like Chess or Go are mathematical, because we can study the space of things that can happen given the rules.</p>
<p>It is possible to come up with all sorts of different starting rules (axioms), but one must take care that they do not lead to inconsistencies such as being able to prove <code>0 = 1</code>, or like a board game where no one can win. But there are many axiom systems that are consistent, and many of them can be proved to be equivalent to other axiom systems, in the sense that expressions in one system can be faithfully translated into another system.</p>
<p>But it seems that by historical happenstance, modern mathematics worldwide is dominated by a particular axiom system called Zermelo–Fraenkel (with Choice) set theory, typically abbreviated ZFC. In ZFC, everything in mathematics is a <em>set</em>, which is an unordered collection of things without repeats. Then there are a bunch of rules (axioms) about how sets work and what you can do with them. Sets are denoted using curly braces, for example, we can say <span class="math">\(\{1,2,3\}\)</span> is a set of the numbers <span class="math">\(1,2,3\)</span>. However, in set theory, <em>everything</em> is a set, even the numbers. You denote that some element is in a set using the <span class="math">\(\in\)</span> notation, e.g. <span class="math">\(1 \in \{1,2,3\}\)</span>.</p>
<p>Many propositions in set theory are stated as propositions <em>for all</em> members of a set, such as our proposition that <span class="math">\(\forall a \in \mathbb{N},\ a + 0 = a\)</span>. Another form of proposition is the existence of some element or set, for example, <span class="math">\(\forall a \in \mathbb{N}, \exists b \in \mathbb{N}, b > a\)</span>, which is read as "for all natural numbers <span class="math">\(a\)</span>, there exists another natural number <span class="math">\(b\)</span> such that <span class="math">\(b\)</span> is greater than <span class="math">\(a\)</span>."</p>
<p>My goal is not to explain set theory, because I only know the basics myself and it's a whole massive subject in its own right. The point I have been leading up to is that Lean does not use ZFC set theory as its logical/mathematical foundation (i.e. axiom system). Lean uses <em>type theory</em>, which is actually a family of axiom systems. Lean uses a particular type theory called the Calculus of Inductive Constructions, which doesn't matter for our purposes unless you're into that sort of thing.</p>
<p>In type theory, not everything is a set; everything is a type, as I have already exclaimed earlier. Types and sets are very similar, and as far as I know, unless you're a set theorist or into studying axiom systems, we can mostly ignore the differences. It turns out that every expression in Lean's type theory can be translated into ZFC set theory and vice versa, so all the mathematics that has been developed on top of ZFC can be translated into Lean's type theory. Perhaps in a parallel universe, modern mathematics would have been developed in type theory rather than set theory.</p>
<p>In any case, the main reason for that digression was to point out that if we state our theorem in Lean and check its type:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">add_zero</span>
<span class="c1">-- Output: ∀ (a : Nat), add a zero = a</span>
</pre></div>
<p>The type of <code>add_zero</code> is actually <code>∀ (a : Nat), add a zero = a</code>, unlike what we actually typed in, which did not include the <span class="math">\(\forall\)</span> "for all" symbol. Remember that <code>theorem</code> is just an alias for <code>def</code>, and in this particular theorem we have an explicit input <code>a</code>, so this theorem is actually a function that takes an <code>a</code> (of type <code>Nat</code>) and produces a proposition <code>add a zero = a</code>, which recall, is an equality type. </p>
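<p>Since the theorem is a function, we can apply it to particular terms of type <code>Nat</code> and get back particular propositions (a sketch; at this point the body is still <code>sorry</code>, which is fine for checking types):</p>
<div class="highlight"><pre>#check add_zero zero
-- add_zero zero : add zero zero = zero
#check add_zero (succ zero)
-- add_zero (succ zero) : add (succ zero) zero = succ zero
</pre></div>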
<p>But remember that the equality type is actually parameterized by two inputs of the same type, and once fully applied, it has the type <code>Prop</code>. Remember we said Lean is a dependently-typed functional programming language? Well, here is our first introduction to dependent types. The concrete output type of our theorem (a function, in this case) <code>add_zero</code> <em>depends on</em> the particular input <code>a</code>. Hence, it is a dependent function type.</p>
<p>We actually already saw an example of a dependent function type when we made a polymorphic identity type: </p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">id</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">_</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">a</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">id</span>
<span class="c1">-- Output: {α : Type u_1} → α → α</span>
</pre></div>
<p>This function is a dependent function type because the output type will <em>depend on</em> the input type. But there's none of that <code>∀</code> notation here. Watch what happens, though, when we change <code>Type _</code> to <code>Prop</code>:</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">id</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">α</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Prop</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">α</span><span class="p">)</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">a</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">id</span>
<span class="c1">-- Output: ∀ {α : Prop}, α → α</span>
</pre></div>
<p>We get that <code>∀</code> notation again. There is nothing fundamentally different going on in these two examples; we're just changing the type universe that <code>α</code> lives in. But by default, Lean uses the <code>∀</code> notation in place of the <code>→</code> notation in the <code>Prop</code> universe, because when we restrict ourselves to <code>Prop</code> it's usually because we're doing mathematics, and there the <code>∀</code> notation is more conventional.</p>
<p>Now, this sort of type polymorphism is present in non-dependently typed functional programming languages too, so the real power of dependent types comes from the fact that the output type can depend on specific values (terms), and not just types. For example, we can (and will, in a later post) define lists of a specific length, so the type depends on a particular term of type natural number.</p>
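<p>To give a small taste of what that looks like (we'll build this properly in a later post; the name <code>Vec</code> and the exact shape of the declaration are just my sketch here), a length-indexed list over our <code>Nat</code> might be declared like this:</p>
<div class="highlight"><pre>-- A sketch: lists whose *type* records their length.
-- `nil` has length zero; `cons` always increments the length index by one,
-- so an element count that disagrees with the index is a type error.
inductive Vec (α : Type) : Nat → Type where
| nil : Vec α Nat.zero
| cons : {n : Nat} → α → Vec α n → Vec α (Nat.succ n)

#check Vec.cons true Vec.nil
-- something like: Vec Bool (Nat.succ Nat.zero)
</pre></div>
<p>Notice that the length is a <em>term</em> of type <code>Nat</code> sitting inside the type itself.</p>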
<p>Okay, that was another digression to explain some more Lean syntactic sugar. Back to our theorem.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>If you look back at our definition of the <code>add</code> function, you will see that when the second input is <code>zero</code>, it simply returns the first input. Lean can therefore automatically reduce <code>add a zero</code> to just <code>a</code>, leaving the goal <code>a = a</code>. How do we construct a term of type <code>a = a</code>? We use the one constructor for the equality type, <code>Eq.refl</code>. So we can easily prove this theorem:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="kt">Eq</span><span class="o">.</span><span class="n">refl</span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>Or equivalently:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="kt">Eq</span><span class="o">.</span><span class="n">refl</span><span class="w"> </span><span class="kr">_</span>
</pre></div>
<p>Or equivalently:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">add_zero</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">rfl</span>
</pre></div>
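<p>Note that <code>add_zero</code> really is a function: applying it to a particular number produces a proof about that number. For example (assuming the <code>Nat</code> constructors are in scope as in our earlier definitions, the <em>Lean Infoview</em> reports something like the comment below):</p>
<div class="highlight"><pre>#check add_zero (succ zero)
-- add_zero (succ zero) : add (succ zero) zero = succ zero
</pre></div>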
<p>Let's move on to proving something that looks very similar but is significantly more difficult to prove:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>So instead of proving <code>a + 0 = a</code> like we just did, we want to prove <code>0 + a = a</code>. We know that in math <code>a + b = b + a</code>, because addition is commutative (the order of the inputs doesn't matter). But we haven't proved that yet, and in fact our proof of the commutativity of addition will use our proof of <code>zero_add</code> and <code>add_zero</code>.</p>
<p>Why is <code>zero_add</code> harder to prove than <code>add_zero</code>? Because <code>0 + a</code> is not reduced to <code>a</code> by definition. Look back at our addition function: it matches on the <em>second</em> argument, so if the first argument is <code>zero</code> and the second argument is the unknown <code>a</code>, Lean can't just reduce it. Since we don't know what <code>a</code> is, we have to do a <code>match</code> on it, with one branch of code for when <code>a</code> is <code>zero</code> and a separate branch for when <code>a</code> has the pattern <code>succ b</code>, where <code>b</code> is another natural number.</p>
<p>I'm going to build up the proof incrementally and explain the steps.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="kr">_</span>
<span class="o">/-</span><span class="w"> </span>
<span class="kt">Proof</span><span class="w"> </span><span class="n">state</span><span class="kt">:</span>
<span class="nf">a</span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="err">⊢</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">zero</span>
<span class="o">-/</span>
</pre></div>
<p>I've started a proof for our theorem, and I've left a wildcard <code>_</code>. In this case, I'm not asking Lean to infer what to put there, because it can't, but what Lean will do is tell me what data I have access to and what term I need to construct in place of the wildcard. You should be able to see this by putting your cursor after the <code>_</code> in Visual Studio Code and looking at the <em>Lean Infoview</em> pane. This is what makes Lean an interactive theorem prover, because Visual Studio Code is continuously running Lean in the background and updating the state depending on where our cursor is.</p>
<p>The expression after the turnstile <code>⊢</code> symbol shows us that for the case where <code>a = zero</code>, we need to prove that <code>add zero zero = zero</code>. Well we already know how to do this, we can just use <code>rfl</code>.</p>
<p>Let's keep going.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">rfl</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="kr">_</span>
<span class="o">/-</span>
<span class="kt">Proof</span><span class="w"> </span><span class="n">state</span><span class="kt">:</span>
<span class="nf">a</span><span class="w"> </span><span class="n">c</span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="err">⊢</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="p">)</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span>
<span class="o">-/</span>
</pre></div>
<p>We've "closed the subgoal" where <code>a = zero</code>, but in order to prove the general case that <code>0 + a = a</code> we also need to prove that it holds when <code>a = succ c</code>. We see that this sub-goal is <code>add zero (succ c) = succ c</code>. So now we are stuck. We cannot prove this with <code>rfl</code>. However, look back at our definition for <code>add</code>. Notice that by definition <code>add a (succ b) => succ (add a b)</code>. So we can use this fact to change the (sub)goal expression a bit:</p>
<div class="highlight"><pre><span></span><span class="err">⊢</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="p">)</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span>
<span class="c1">-- By definition we can change it to:</span>
<span class="err">⊢</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">c</span><span class="p">)</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span>
</pre></div>
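<p>This "change" costs nothing, because the two expressions are definitionally equal; <code>rfl</code> alone can check it:</p>
<div class="highlight"><pre>-- add zero (succ c) unfolds to succ (add zero c) by the definition of add,
-- so reflexivity closes this goal with no further work
example (c : Nat) : add zero (succ c) = succ (add zero c) := rfl
</pre></div>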
<p>Let's take stock of what we already know. We have already proved the base case, <code>add zero zero = zero</code>. Can we somehow turn a proof of <code>add zero c = c</code> into a proof of <code>succ (add zero c) = succ c</code>? If we could apply <code>succ</code> to both sides of the equation, we could, but Lean doesn't yet know that that is a valid move.</p>
<p>In order to solve this, we need to prove an intermediate proposition or theorem, that if <code>a = b</code> then that implies <code>succ a = succ b</code>. This should be obviously true, because a function has to map equal input terms to the same output term, otherwise it would be a non-deterministic function. But even the obvious things need to be stated and formally proved.</p>
<p>So here we go.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">congr</span><span class="w"> </span><span class="p">{</span><span class="n">a</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">}</span><span class="w"> </span><span class="p">(</span><span class="n">h</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">sorry</span>
</pre></div>
<p>We are stating a theorem named <code>congr</code> (for <em>congruence</em>), which takes two implicit parameters <code>a</code> and <code>b</code> and a term (called a hypothesis in this context) <code>h : a = b</code>, and returns a term of type <code>succ a = succ b</code>.</p>
<p>If we check the type of this theorem, we get
<code>∀ {a b : Nat}, a = b → succ a = succ b</code>.
To translate this into English: "For all natural numbers <code>a</code> and <code>b</code>, if we have a proof that <code>a = b</code>, then that implies that <code>succ a = succ b</code>."</p>
<p>In type theory, implication is simply a function type, so we can interpret the arrow as "... implies ..." </p>
<p>Let me show you the proof of this:</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">congr</span><span class="w"> </span><span class="p">{</span><span class="n">a</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">}</span><span class="w"> </span><span class="p">(</span><span class="n">h</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">b</span><span class="p">)</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">by</span><span class="w"> </span><span class="n">rw</span><span class="w"> </span><span class="p">[</span><span class="n">h</span><span class="p">]</span>
</pre></div>
<p>The proof is <code>by rw [h]</code>, which is something we haven't seen before. This opens a can of worms that we cannot fully explore in this post, but I will introduce it here. The keyword <code>by</code> puts us into <em>tactic mode</em>. Tactic mode is a high-level Lean language feature that lets us use <em>tactics</em>. Tactics are metaprograms, which is to say they are not regular language constructs but programs that can read, construct and manipulate the regular language. These metaprograms can make proving theorems a lot easier. For one, they allow us to write proofs in a sequential manner, avoiding some of the hassle that comes with pure functional programming. Secondly, they can do a lot of automation behind the scenes for us. That's as much as I'll say about tactics for now, since they are not the focus of this post; we will learn more about them later.</p>
<p>The <code>rw</code> tactic stands for <em>rewrite</em>, as it allows us to rewrite expressions in terms of equalities that are known to us. In our case of proving <code>∀ {a b : Nat}, a = b → succ a = succ b</code>, we rewrite the final goal <code>succ a = succ b</code> with the hypothesis <code>h : a = b</code>, which gives us a new goal <code>succ b = succ b</code>, which is true by <code>rfl</code>.</p>
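<p>For the curious, the same theorem can also be proved without tactics, as a plain term; one way (a sketch using <code>▸</code>, Lean's built-in term-level notation for rewriting with an equality, under the primed name <code>congr'</code> to avoid clashing with our tactic version) is:</p>
<div class="highlight"><pre>-- `h ▸ rfl` rewrites the goal `succ a = succ b` along `h : a = b`,
-- leaving `succ b = succ b`, which `rfl` proves
theorem congr' {a b : Nat} (h : a = b) : succ a = succ b := h ▸ rfl
</pre></div>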
<p>Now that we have this intermediate theorem <code>congr</code> available, we can get back to proving that <code>0 + a = a</code>.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">rfl</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="kr">_</span>
<span class="o">/-</span>
<span class="kt">Proof</span><span class="w"> </span><span class="n">state</span><span class="kt">:</span>
<span class="nf">a</span><span class="w"> </span><span class="n">c</span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="err">⊢</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="p">)</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span>
<span class="o">-/</span>
</pre></div>
<p>The goal is technically <code>add zero (succ c) = succ c</code>, but as we previously discussed, that is definitionally equal to <code>succ (add zero c) = succ c</code>, which Lean knows, so we can treat the goal as the latter. The recursive call <code>zero_add c</code> gives us a proof that <code>add zero c = c</code>, so we can use it as the hypothesis <code>h</code> in the <code>congr</code> theorem to get a proof that <code>succ (add zero c) = succ c</code>. Thus to prove this theorem we use recursion, which is going to be a common approach. As we will learn later, proving theorems using recursion corresponds exactly to the mathematical technique of <em>proof by induction</em>.</p>
<p>So here's the final proof.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">rfl</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">congr</span><span class="w"> </span><span class="p">(</span><span class="n">zero_add</span><span class="w"> </span><span class="n">c</span><span class="p">)</span>
</pre></div>
<p>Let's work through a concrete input; remember, <code>zero_add</code> is a function, so we can apply it to an argument.</p>
<p>Let's say we give it the input <code>a = zero.succ.succ</code> (which, recall, is the number 2). It matches the pattern <code>succ c</code>, and the output is <code>congr (zero_add zero.succ)</code>. The recursive call <code>zero_add zero.succ</code> again matches the pattern <code>succ c</code>, giving <code>congr (zero_add zero)</code>. One final recursive call matches the base case <code>zero => rfl</code>, which returns <code>rfl</code>, a proof of <code>zero = zero</code>.</p>
<p>So now we can start substituting these recursive results back into the first function call; here's the sequence of steps:</p>
<p><code>zero_add zero.succ.succ</code> <br>
<code>= congr (zero_add zero.succ)</code> <br>
<code>= congr (congr (zero_add zero))</code> <br>
<code>= congr (congr rfl)</code>, where <code>rfl : zero = zero</code>. <br>
The inner <code>congr rfl</code> then proves <code>succ zero = succ zero</code>, and the outer <code>congr</code> turns that into a proof of <code>succ (succ zero) = succ (succ zero)</code>.</p>
<p>And this will of course work the same way for any natural number, hence the theorem is proved for all natural numbers.</p>
<p>The theorem <code>congr</code> we proved only works for inputs of type <code>Nat</code>, but this theorem must be true for any type and any function on that type, because if it were not true then the function would not be deterministic, and all functions in Lean are pure, which implies they are deterministic. The Lean standard library includes a polymorphic version of this theorem called <code>congrArg</code>, and we could have used that in place of our own.</p>
<div class="highlight"><pre><span></span><span class="nf">theorem</span><span class="w"> </span><span class="n">zero_add</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">rfl</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">congrArg</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="n">zero_add</span><span class="w"> </span><span class="n">c</span><span class="p">)</span>
</pre></div>
<p>The only difference is that <code>congrArg</code> also requires the function <code>succ</code> as input, because it is a theorem that <code>∀ {α : Sort u_1} {β : Sort u_2} {a₁ a₂ : α} (f : α → β), a₁ = a₂ → f a₁ = f a₂</code>, which is read as "for some types <code>α</code> and <code>β</code>, some terms <code>a₁ a₂ : α</code>, and a function <code>f : α → β</code>, if we have a proof that <code>a₁ = a₂</code>, then that implies <code>f a₁ = f a₂</code>." So we need to supply the function <code>f</code>, which is <code>succ</code> in this case.</p>
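<p>Because <code>congrArg</code> is polymorphic in the function, the same theorem applies to functions other than <code>succ</code>. For example, here is a small sketch of my own using a partial application of our <code>add</code>:</p>
<div class="highlight"><pre>-- `add a` is a function Nat → Nat, so congrArg applies it to both
-- sides of the equation h : a = b
example {a b : Nat} (h : a = b) : add a a = add a b :=
  congrArg (add a) h
</pre></div>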
<p>That's it for now. Next time we will take a brief pause from the natural numbers to discuss type classes, structures, and custom notation, and to construct some other kinds of types.</p>
<h5>References</h5>
<ol>
<li><a href="https://golem.ph.utexas.edu/category/2013/01/from_set_theory_to_type_theory.html">https://golem.ph.utexas.edu/category/2013/01/from_set_theory_to_type_theory.html</a></li>
<li><a href="https://leanprover.github.io/lean4/doc/">https://leanprover.github.io/lean4/doc/</a></li>
<li><a href="https://leanprover.github.io/theorem_proving_in_lean4/">https://leanprover.github.io/theorem_proving_in_lean4/</a></li>
<li>Friendly people in the <a href="https://leanprover.zulipchat.com/">Lean Zulip</a>.</li>
</ol>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Learning Lean (Part 1)2022-07-04T14:35:00-05:002022-07-04T14:35:00-05:00Brandon Browntag:outlace.com,2022-07-04:/Lean_part_1.html<p>An introduction to functional programming and mathematics in the dependently-typed functional programming language and theorem prover Lean 4.</p><p>I'm going to be starting a series of blog posts on Lean 4, a dependently typed functional programming language and theorem prover. If none of that sounds familiar, no problem, we'll get to that. First, Lean 4 has not been officially released as a 1.0, so breaking changes to the code in these posts may occur, but I'll try to update from time to time. The current stable release of Lean is Lean 3.</p>
<p>Let me also preface that these blog posts, although meant to be instructive, mirror my own learning of Lean. I've been playing with Lean off and on since April 2020, but I am currently <em>not</em> an expert at Lean or functional programming in general, so there are likely to be instances of me writing non-idiomatic Lean or doing things inefficiently. Also, my goal in learning Lean was mostly to learn to formalize mathematics rather than functional programming, so these posts will be mostly focused on the mathematical use of Lean. As for prerequisites, anyone with a background in <em>some</em> programming language and at least a high-school mathematics education should be able to follow along.</p>
<p>Go ahead and get Lean installed by following the guide here: <a href="https://leanprover.github.io/lean4/doc/quickstart.html">Lean Setup Guide</a>.</p>
<p>There's also a lively online community for Lean where friendly individuals from around the world will answer your questions surprisingly quickly: <a href="https://leanprover.zulipchat.com/">https://leanprover.zulipchat.com/</a></p>
<h3>What is Lean?</h3>
<p>Lean is a programming language. More specifically, Lean is a functional programming language. Probably the most well-known (at least to me) functional programming language is Haskell. "Functional programming language" usually refers to a <strong><em>purely</em></strong> functional programming language. Many programming languages support functions as first-class citizens, allowing functions to be passed as arguments and returned, but a purely functional language requires that all functions have a well-specified, static type signature and that no side effects occur within a function (e.g. writing to disk, playing a sound file, etc.). These are called pure functions. Moreover, the only data these pure functions have access to is what gets explicitly passed as an argument. And every expression in Lean has a type, including types themselves.</p>
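<p>As a tiny illustration (in Lean syntax we'll cover properly later, using Lean's built-in numbers), here is a pure function: its output depends only on its argument, and calling it can do nothing else:</p>
<div class="highlight"><pre>-- a pure function: the same input always yields the same output,
-- and nothing outside the function can be affected
def double (n : Nat) : Nat := n + n

#eval double 21  -- 42
</pre></div>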
<p>The mandate that all functions are pure is quite onerous coming from typical imperative programming languages such as C or Python. It is no longer trivial in a pure functional language to do something like read an input string, do some computation, write to a log and return a value. The benefit is that pure functions are completely predictable at compile-time, preventing a whole class of bugs.</p>
<p>Lean is not only a functional programming language, but a dependently-typed functional programming language. This means that types can depend on other types and values. For example, in Lean, you can define a type of list of integers of length 3, and thus if you try to construct an instance of that type by giving it 4 integers, it will fail. In essence, dependent types give you extreme expressiveness and granularity when defining new types.</p>
<h3>Why should you learn Lean?</h3>
<p>I think Lean is going to be <em>the next big thing</em> in the functional programming world and hopefully the formal software verification and mathematics world. So if any of those things interest you, it may be worth learning Lean.</p>
<p>One goal for me in learning Lean is to learn proofs in mathematics. I do data analysis, statistics and machine learning so my understanding of mathematics is very applied and calculational. I want to get a flavor of pure mathematics and how mathematical structures are made and theorems are proved. Because Lean is a dependently typed pure functional programming language, it can be used to encode all of modern mathematics using its type theory in place of traditional set theory.</p>
<p>In any case, doing math in Lean is actually <em>fun</em> a lot of the time. It's like a programming and logic puzzle. So stop with the Sudoku and just prove theorems in Lean instead.</p>
<h3>Natural Numbers</h3>
<p>Before we can do much of anything in mathematics, we're going to need some numbers. Now Lean already has numbers defined, of course, but we will pretend it doesn't.</p>
<p>Let's define the natural numbers, that is, the set of numbers 0, 1, 2, 3 ..., or the counting numbers.</p>
<div class="highlight"><pre><span></span><span class="nf">namespace</span><span class="w"> </span><span class="kt">Tutorial</span>
<span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">where</span><span class="w"> </span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
<span class="nf">end</span><span class="w"> </span><span class="kt">Tutorial</span>
</pre></div>
<p>The namespace section is what it sounds like and works similarly to how namespaces work in other programming languages. Outside the namespace you have to refer to identifiers inside the namespace by prefixing <code>[namespace name].[identifier]</code> (without the brackets). We enclose our code in this namespace to avoid name clashes because we are defining types and functions that already exist in Lean with the same names.</p>
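<p>For example, after the <code>end Tutorial</code> above, the full prefix is required:</p>
<div class="highlight"><pre>-- Outside the namespace:
#check Tutorial.Nat.zero
--Output: Tutorial.Nat.zero : Tutorial.Nat
</pre></div>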
<p>First, the keyword <code>inductive</code> means we are defining a new (inductive) type. In Lean's type theory, there are basically just two kinds of types, inductive types and function types, so we will use <strong><em>inductive</em></strong> a lot.</p>
<p>After the keyword <code>inductive</code> comes the name of the type we are defining, in this case, we are calling it <code>Nat</code>, for natural numbers. After the name of the type comes a colon and the keyword <code>Type</code>. </p>
<p>Recall, <em>every expression in Lean must have a type</em>; even the new type we are defining must itself be assigned a type. By default, new inductive types are assigned the type <code>Type</code>, which is an alias for <code>Type 0</code>. And because Lean must also have an answer if you ask it for the type of <code>Type</code> itself, it creates an infinite hierarchy of types: the type of <code>Type</code> is <code>Type 1</code>, the type of <code>Type 1</code> is <code>Type 2</code>, and so on.</p>
<p>You can ask Lean what the type of an expression is by using the <code>#check</code> command.</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="kt">Nat</span>
<span class="c1">--Output: Nat : Type</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="kt">Type</span>
<span class="c1">--Output: Type : Type 1</span>
</pre></div>
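<p>The hierarchy continues upward; each universe lives in the next one:</p>
<div class="highlight"><pre>#check Type 1
--Output: Type 1 : Type 2
#check Type 2
--Output: Type 2 : Type 3
</pre></div>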
<p>Now Lean tries to be smart and save you keystrokes: whenever it can do so unambiguously, it will infer types for you. So it is fine to also declare our <code>Nat</code> type without explicitly assigning its type:</p>
<div class="highlight"><pre><span></span><span class="c1">-- This also works</span>
<span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kr">where</span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
</pre></div>
<p>Two dashes start a comment line, and a multi-line comment can be delimited using</p>
<div class="highlight"><pre><span></span>/-
Multi
line
comment
-/
</pre></div>
<p>Back to our new type.</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">where</span><span class="w"> </span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
</pre></div>
<p>After the type annotation <code>Nat : Type</code> comes the keyword <code>where</code>, which is also optional, as this is also valid:</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
</pre></div>
<p>I guess <code>where</code> is mostly of descriptive use, as you can translate the type declaration into English as "We are making a new inductive type named <code>Nat</code> of type <code>Type</code>, where <code>zero</code> is declared to be of type <code>Nat</code> and <code>succ</code> is a function that maps values of type <code>Nat</code> to values of type <code>Nat</code>."</p>
<p>Back to the more verbose type declaration:</p>
<div class="highlight"><pre><span></span><span class="nf">inductive</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Type</span><span class="w"> </span><span class="kr">where</span><span class="w"> </span>
<span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
</pre></div>
<p>So after the <code>where</code> keyword, we start a new line beginning with the pipe <code>|</code> character. Each line beginning with <code>|</code> is called a <em>constructor</em> since these lines specify how to construct terms (aka elements, values or members) of the new inductive type.</p>
<p>First, we invent a name for a value of type <code>Nat</code>, in this case I am declaring that the string of characters <code>zero</code> is hereby defined to be a term (or value) of type <code>Nat</code>. </p>
<p>We could stop here and we'd have a new (inductive) type with a single value, but that wouldn't help us create the numbers, of which there should be infinitely many.</p>
<p>Next, we start a new line beginning with a pipe, and this time instead of declaring a new term of type <code>Nat</code>, we declare the first function that operates on terms of type <code>Nat</code>. We call this function <code>succ</code> (short for successor). We know that <code>succ</code> is a function and not a term because we assign it the type <code>Nat → Nat</code> after the colon. Whenever you see the pattern <em>some type</em> → <em>some type</em>, you're looking at a function type. </p>
<p>Functions are programs that map terms from one type to another (or the same) type. In this case, <code>succ</code> is a function that does not compute anything; in fact, it doesn't do anything at all. All we can do with it is <strong><em>apply</em></strong> it.</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="kt">Nat</span><span class="o">.</span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="kt">Nat</span><span class="o">.</span><span class="n">succ</span><span class="w"> </span><span class="kt">Nat</span><span class="o">.</span><span class="n">zero</span><span class="p">)</span>
<span class="c1">--- Output: Nat.succ (Nat.succ Nat.zero) : Nat</span>
</pre></div>
<p>As you can see, we apply a function by writing the function name, a space, and then the term (value). To avoid ambiguity we must use parentheses when applying a function multiple times. Declaring the type <code>Nat</code> also creates a local namespace <code>Nat</code>, so we must prefix references to the value <code>zero</code> or the function <code>succ</code> with <code>Nat.</code>, e.g. <code>Nat.zero</code>.</p>
<p>We can save keystrokes by opening the namespace.</p>
<div class="highlight"><pre><span></span><span class="nf">open</span><span class="w"> </span><span class="kt">Nat</span>
<span class="c1">--- Now we can do this</span>
<span class="o">#</span><span class="n">check</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="n">succ</span><span class="w"> </span><span class="n">zero</span><span class="p">)</span>
<span class="c1">--- Output: succ (succ zero) : Nat</span>
</pre></div>
<p>Now we have defined the natural numbers 0, 1, 2, ... and so on, written in our new type as <code>zero, succ (zero), succ (succ (zero))</code>. Obviously, writing numbers using function application is not as convenient as using our normal numerals 1, 2, etc. There is a way to map numerals to our more verbose natural numbers, but we will wait a while before doing that.</p>
<p>When I first understood what was going on here in this very simple inductive type, it was quite profound. By declaring this empty function <code>succ</code> and applying it to the only "real" term of type <code>Nat</code>, we get an infinite new set of terms of type <code>Nat</code>. Any string of characters that fits the pattern <code>succ x</code> where <code>x</code> is either <code>zero</code> or also of the pattern <code>succ (succ ... zero )</code> is also of type <code>Nat</code>.</p>
<p>You can think of types as specifying a pattern of characters, a type-checker as checking whether some expression matches a particular pattern, and a value or term as just an expression that matches a particular type pattern. So <code>succ (succ zero)</code> is of type <code>Nat</code> because that pattern of characters matches the pattern we defined as type <code>Nat</code>.</p>
<p>Let's define our first function on our new <code>Nat</code> type. In Lean, all functions are pure, as we discussed earlier, and they are also <em>total</em>. A total function is one where every possible input term gets mapped to an output term. That means for a function of type <code>Nat → Nat</code>, every natural number must get mapped to some output term that is also a natural number; you cannot leave any input terms undefined. In mathematics, if we treat division as a function, we say that <span class="math">\(\frac{x}{0}\)</span> is <em>undefined</em>. In Lean, that is not acceptable: even <span class="math">\(\frac{x}{0}\)</span> must be defined.</p>
<p>Total functions can sometimes be an onerous constraint when using Lean for non-mathematical purposes, especially as you have to prove to Lean that your function is total, so Lean does provide a way to define <em>partial</em> functions, but we will not address that yet.</p>
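<p>As a brief preview (a sketch of my own using Lean's built-in types, not our <code>Tutorial.Nat</code>), such functions are marked with the <code>partial</code> keyword, which opts them out of the termination checker:</p>
<div class="highlight"><pre>-- `partial` tells Lean not to require a termination proof.
-- Nobody has proved this (Collatz) recursion terminates for all n,
-- so without `partial` Lean would reject it.
partial def collatz (n : Nat) : Nat :=
  if n ≤ 1 then n
  else if n % 2 == 0 then collatz (n / 2)
  else collatz (3 * n + 1)
</pre></div>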
<p>Here's our first function:</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">natId</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">fun</span><span class="w"> </span><span class="n">x</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">x</span>
</pre></div>
<p>First we define a new function using the <code>def</code> keyword, then the name of our function, in this case we're calling it <code>natId</code>, then we have a colon, which indicates we're going to be assigning a type and after the colon we have the type <code>Nat → Nat</code>. Following that, we have the symbol <code>:=</code> which is an assignment operator, and then we have the body of the function which is <code>fun x : Nat => x</code></p>
<p>This last part is called an anonymous function (or lambda function): a function expression that is not given a name. An anonymous function is declared using the <code>fun</code> keyword, followed by one or more identifiers representing the inputs, then a type annotation, then the <code>=></code> symbol followed by the function body. In this case the body just returns the input <code>x</code>, so this defines the identity function, which does nothing but return its input unadulterated.</p>
<p>One challenge when getting started with Lean is that Lean has a lot of syntactic sugar, so there are often multiple ways to express the same thing. Here are 3 other ways we could have defined the same identity function:</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">natId2</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span>
<span class="w"> </span><span class="n">fun</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=></span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>And:</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">natId3</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span>
<span class="o">|</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">a</span>
</pre></div>
<p>And:</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">natId4</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">a</span>
</pre></div>
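<p>All four definitions behave identically; for example:</p>
<div class="highlight"><pre>#reduce natId zero.succ
--Output: succ zero
#reduce natId4 zero.succ
--Output: succ zero
</pre></div>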
<p>The first of these alternatives also uses an anonymous function but then has a <code>match ... with</code> pattern. As we discussed, an inductive type is essentially defined by a set of base terms plus one or more constructors that build further terms from them. Just as we construct a type from base terms and patterns over those base terms, we define a function on a type by deconstructing its terms into those same patterns and mapping each pattern to a term of the output type.</p>
<p>Notice that the variable <code>a</code> after <code>fun</code> represents the input term and we can name it whatever we want. For multiple input functions we will have to introduce multiple input variables after <code>fun</code>.</p>
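<p>For instance (a throwaway example of my own), here is a two-input anonymous function:</p>
<div class="highlight"><pre>#check fun (a b : Nat) => a
--Output: fun a b => a : Nat → Nat → Nat
</pre></div>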
<p>We can also check how Lean actually stores a definition by using the <code>#print</code> command. Let's check the last of these alternatives, <code>natId4</code>:</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">print</span><span class="w"> </span><span class="n">natId4</span>
<span class="o">/-</span><span class="w"> </span>
<span class="kt">Output:</span><span class="w"> </span>
<span class="nf">def</span><span class="w"> </span><span class="kt">Tutorial</span><span class="o">.</span><span class="n">natId4</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span><span class="w"> </span><span class="n">fun</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">a</span><span class="w"> </span>
<span class="o">-/</span>
</pre></div>
<p>As you can see, even though Lean lets us omit the explicit anonymous function, behind the scenes it is filling in the anonymous function for us.</p>
<p>Okay, moving on. Let's write a simple function that just subtracts one from any natural number.</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">subOne</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="err">→</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span>
<span class="w"> </span><span class="n">fun</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="ow">=></span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">zero</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">b</span>
</pre></div>
<p>We define a function called <code>subOne</code> with the type <code>Nat → Nat</code>. We implement the function by assigning it to an anonymous function that takes an input variable <code>a</code> (which must be of type <code>Nat</code> according to the function type signature) and matches it against the patterns that a term of type <code>Nat</code> can have, namely it can either be the base term <code>zero</code> or of the pattern <code>succ b</code> where <code>b</code> is just a placeholder for whatever is inside the <code>succ</code> function. We could have also used <code>a</code> in place of <code>b</code> and Lean is smart enough to figure out what we mean based on the context.</p>
<p>If the input <code>a</code> happens to be <code>zero</code> then we just return <code>zero</code> since natural numbers don't get any lower than <code>zero</code>. If the input <code>a</code> happens to be <code>succ b</code> with <code>b</code> being a term of type <code>Nat</code> then we return <code>b</code>, which effectively removes one application of <code>succ</code> and thus decrements the number by one.</p>
<p>There are no other possible patterns that a term of type <code>Nat</code> could be and since we covered them in our function pattern match, Lean is satisfied our function is total.</p>
<p>We can ask Lean to evaluate our function on the input <code>zero.succ.succ</code> (the number 2) by using the <code>#reduce</code> command.</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">reduce</span><span class="w"> </span><span class="n">subOne</span><span class="w"> </span><span class="n">zero</span><span class="o">.</span><span class="n">succ</span><span class="o">.</span><span class="n">succ</span>
<span class="c1">--Output: succ zero</span>
</pre></div>
<p>It works, if we subtract one from 2 we get 1. Notice that the expression <code>zero.succ.succ</code> is equivalent to <code>succ (succ zero)</code> but easier to read as it avoids parentheses. Again, this is one challenge in learning Lean; there are many ways to do the same thing. But ultimately these are ways to save keystrokes and improve readability, at the expense of taking longer to learn.</p>
<p>We can also write a function where we explicitly name the inputs and then pattern match on them (called <code>subOne'</code> here, since <code>subOne</code> is already defined):</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">subOne'</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">zero</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">b</span>
</pre></div>
<p>In this style we name the inputs and annotate their types, and then we give the output type after the final colon.</p>
<p>Now let's define the addition function on natural numbers.</p>
<div class="highlight"><pre><span></span><span class="nf">def</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="p">(</span><span class="n">a</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="p">)</span><span class="w"> </span><span class="kt">:</span><span class="w"> </span><span class="kt">Nat</span><span class="w"> </span><span class="kt">:=</span>
<span class="w"> </span><span class="n">match</span><span class="w"> </span><span class="n">b</span><span class="w"> </span><span class="n">with</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">zero</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">a</span>
<span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="n">c</span><span class="w"> </span><span class="ow">=></span><span class="w"> </span><span class="n">succ</span><span class="w"> </span><span class="p">(</span><span class="n">add</span><span class="w"> </span><span class="n">a</span><span class="w"> </span><span class="n">c</span><span class="p">)</span>
</pre></div>
<p>We define our function <code>add</code> to take two inputs named <code>a</code> and <code>b</code> and both are of type <code>Nat</code> so we include them together separated by a space. The output type of our function is also of type <code>Nat</code>. We then match the pattern of input <code>b</code> to define the computation of the function.</p>
<p>If the second input <code>b</code> is zero, then that means we are dealing with <code>a + 0</code> and that obviously just equals <code>a</code>, so we return <code>a</code>. </p>
<p>If <code>b</code> is greater than 0, i.e. of the form <code>succ c</code> where <code>c</code> is another natural number, then we add together <code>a</code> and <code>c</code> (and <code>c = b - 1</code>), then we apply <code>succ</code> to the result, which is the same as adding one.</p>
<p>In other words, we are recursively doing <code>1 + (a + (b - 1))</code>. Because we are in a purely functional programming language, we do not have access to things like <code>for</code> loops or <code>while</code> loops. Any iterative computations must be done using recursive (self-referential) function calls.</p>
<p>When we compute <code>1 + (a + (b - 1))</code>, Lean will then call the <code>add</code> function again, with input <code>a</code> (the same as the original input <code>a</code>), and the second input will be <code>b - 1</code>. It keeps recursively calling itself until <code>b - 1 = 0</code> and then we hit the base case where the second input is <code>0</code> and <code>add</code> just returns the first input <code>a</code>.</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">reduce</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">zero</span><span class="o">.</span><span class="n">succ</span><span class="w"> </span><span class="n">zero</span><span class="o">.</span><span class="n">succ</span>
<span class="c1">--Output: succ (succ zero)</span>
</pre></div>
<p>As you can see, our function successfully computes <code>1 + 1 = 2</code>. Let's do it by hand to make sure we really understand what is going on.</p>
<p>First,
<code>add zero.succ zero.succ</code> (again, this represents <code>1 + 1</code>)
will pattern match on the second argument <code>b = zero.succ</code>, so it will return <code>succ (add zero.succ zero)</code> since the pattern match will "pull off" a <code>succ</code> from the input <code>b</code>.</p>
<p>So now we're calling <code>add</code> within itself, namely <code>add zero.succ zero</code> (<code>1 + 0</code>). Now again, we pattern match on the second input <code>b = zero</code> and that matches the base case where we just return <code>a</code>. So <code>add zero.succ zero = zero.succ</code>. </p>
<p>Now we substitute that into the expression above, <code>succ (add zero.succ zero)</code>, so we get <code>succ (succ zero)</code>, which is the final answer. So the <code>add</code> function works by recursively decrementing <code>b</code> by 1 while adding 1 to <code>a</code> until <code>b = 0</code>.</p>
<p>In order for functions to be total (as described above), they need to be terminating. Lean has a component called a <strong>termination checker</strong> that makes sure every function you define will terminate in a finite number of steps. It does this by making sure that when you're recursively calling a function that the input arguments are <strong><em>structurally decreasing</em></strong>. In the case of the <code>add</code> function, the second input <code>b</code> will structurally decrease each recursive call of <code>add</code> because a <code>succ</code> is "pulled off" (i.e. <code>b</code> becomes <code>b - 1</code> each call). Once <code>b = zero</code> in the recursive calls then the function terminates.</p>
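<p>To see the termination checker in action, here is a sketch of a definition it would reject, shown commented out since it does not compile:</p>
<div class="highlight"><pre>/-
def bad (a b : Nat) : Nat :=
  match b with
  | zero => a
  | succ c => bad a (succ c)  -- recursive call on `succ c`, which is
                              -- not structurally smaller than `b`,
                              -- so Lean cannot show termination
-/
</pre></div>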
<p>We'll end this post here, but we have a lot more to learn. In the next post we'll prove our first theorems about the natural numbers and learn a lot more Lean along the way.</p>
<p>PS: My goal with these <em>Learning Lean</em> posts is to assume as few prerequisites as possible, so please leave a comment or email me if anything needs additional explanation and you meet the prerequisites of knowing how to program in some language and having a high school math background.</p>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Deep Reinforcement Learning in Action is Published2020-05-05T02:16:00-05:002020-05-05T02:16:00-05:00Brandon Browntag:outlace.com,2020-05-05:/deep-reinforcement-learning-in-action-is-published.html<p>Deep Reinforcement Learning in Action is out!</p><p>Our book <a href="https://www.manning.com/books/deep-reinforcement-learning-in-action">Deep Reinforcement Learning in Action</a> is out at Manning.com and will be out on <a href="https://www.amazon.com/Deep-Reinforcement-Learning-Action-Alexander/dp/1617295434/ref=sr_1_5?crid=17JK1O42F2L91&dchild=1&keywords=deep+reinforcement+learning&qid=1588663023&s=books&sprefix=deep+reinforcement%2Caps%2C176&sr=1-5#customerReviews">Amazon</a> once Amazon starts accepting new book shipments (has been delayed due to the covid19 pandemic). We hope you'll buy a copy and tell your friends and colleagues and leave us a review. We put a lot of work into this book and think it's one of the best ways to learn Deep Reinforcement Learning from the fundamentals to implementing some of the latest research papers.</p>
<p>Here's the table of contents:</p>
<h5>PART 1: FOUNDATIONS</h5>
<ul>
<li>1 What Is Reinforcement Learning</li>
<li>2 Modeling Reinforcement Learning Problems: Markov Decision Processes</li>
<li>3 Predicting the Best States and Actions: Deep Q-Networks</li>
<li>4 Learning to Pick the Best Action: Policy Gradients</li>
<li>5 Tackling More Complex Environments with Actor-Critic Methods</li>
</ul>
<h5>PART 2: ABOVE AND BEYOND</h5>
<ul>
<li>6 Alternative Optimization Methods: Evolutionary Algorithms</li>
<li>7 Distributional DQN: Getting the Full Story</li>
<li>8 Curiosity-Driven Exploration</li>
<li>9 Multi-Agent Reinforcement Learning</li>
<li>10 Interpretable Reinforcement Learning: Attention and Relational Models</li>
<li>11 Conclusion</li>
</ul>
<h5>APPENDICES</h5>
<ul>
<li>A Deep Learning, Mathematics, PyTorch</li>
</ul>
<p>-Brandon</p>Deep Reinforcement Learning in Action (Announcement)2018-06-19T23:57:00-05:002018-06-19T23:57:00-05:00Brandon Browntag:outlace.com,2018-06-19:/deep-reinforcement-learning-in-action-announcement.html<p>I'm co-authoring a book on Deep Reinforcement Learning!</p><p>Punchline: Go check out <a href="https://www.manning.com/books/deep-reinforcement-learning-in-action">Deep Reinforcement Learning in Action</a></p>
<p>In part due to the attention my posts on reinforcement learning have attracted, I teamed up
with my friend Alex, a bona fide machine learning engineer most recently at Amazon, to write
a book about Deep Reinforcement Learning. I'm happy to say that we are publishing with the well-known technical publisher Manning.</p>
<p>Manning has a neat program called the MEAP (Manning Early Access Program) where
early adopters can buy a book in advance and read drafts of each chapter as the authors write them, then receive the full book when it's all done. This is great for book enthusiasts and also for the authors, since we get continual feedback during the writing process and can make changes as necessary before
final publication.</p>
<p>So our book <a href="https://www.manning.com/books/deep-reinforcement-learning-in-action">Deep Reinforcement Learning in Action</a> is now available in MEAP. As of this post, the first three chapters have been released but the next couple are not far behind. If you buy by June 23, you can use the discount code <strong>mlzai</strong> to get 50% off. Needless to say, if you're interested in reinforcement learning and already have the basics of deep learning down, I hope you give the book a try.</p>
<p>-Brandon</p>Tensor Networks2018-06-19T23:40:00-05:002018-06-19T23:40:00-05:00Brandon Browntag:outlace.com,2018-06-19:/TensorNets1.html<p>In this post we explore tensor networks, their mathematical properties, and implement a tensor network as a machine learning model for a toy problem.</p><h1>On Deep Tensor Networks and the Nature of Non-Linearity</h1>
<p><strong>Abstract</strong>: We cover what tensor networks are, how they work, and go off on a tangent discussion about linearity to derive insight into how tensor networks can learn non-linear transformations without an explicit nonlinear activation function like traditional neural networks. We demonstrate how a simple tensor network can perform reasonably well on the FashionMNIST dataset using PyTorch's <code>einsum</code> function.</p>
<p><img src="images/TensorNetwork/Tensor_Network.png" width=400px></p>
<p>Deep learning algorithms (neural networks) in their simplest form are
just a sequence of two operations composed to some depth: a linear transformation (i.e. a matrix-vector multiplication) followed by the element-wise application of a reasonably well-behaved non-linear function (called the activation function). Together the linear transformation and the non-linear function are called a "layer" of the network, and the composition of many layers forms a deep neural network. Both the depth and the non-linearity of deep neural networks are crucial for their generalizability and learning power.</p>
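This layer structure can be sketched in a few lines of numpy (a minimal illustration, not any particular framework's API; the layer sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # One "layer": a linear transformation followed by an
    # element-wise non-linearity (here ReLU).
    return np.maximum(0, W @ x + b)

# Compose two layers into a tiny deep network (shapes are arbitrary).
W1, bias1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, bias2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)
out = layer(layer(x, W1, bias1), W2, bias2)
print(out.shape)  # (2,)
```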
<p>The most credible explanation for why depth matters in neural networks is that depth models hierarchy, i.e. the idea that data exist at higher and lower levels of abstraction. "My Jack Russell Terrier dog Spot" is a concrete instance of the more abstract concept of "dog" or even "animal." Hence depth naturally models these kinds of hierarchical relationships in data, and it turns out pretty much all of the data we care about has a hierarchical structure (e.g. letters -> words -> sentences, or musical notes -> chords -> bars -> songs). Another word for hierarchy is composition; complex things are often composed of simpler things, which are composed of yet simpler things. Hence <em>deep</em> neural networks have an (inductive) bias toward learning these compositional/hierarchical relationships, which is exactly what we want them to do.</p>
<p>Okay so depth is important. But why is the non-linear "activation" function so important for deep learning?
An almost tautological answer is that if we want our neural network to be able to model non-linear relationships, it must have some non-linearity itself.</p>
<p>Another reason is that without the non-linear functions, a sequence of just linear transformation layers
can always be "collapsed" into a single linear transformation. This is basic linear algebra; the product
of any sequence of matrices (i.e. matrix composition, hence composition of linear transformations)
can always be reduced to a single matrix/linear transformation. So without the non-linear function in each layer
we forfeit the compositionality (depth) property of deep neural networks, which allows hierarchical abstraction as
you go deeper in the network.</p>
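This collapse is easy to verify numerically; a quick sketch (with arbitrary matrix shapes):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(4, 5))
x = rng.normal(size=3)

# Two linear layers applied in sequence (no activation in between)...
two_layers = W2 @ (W1 @ x)
# ...are equivalent to one linear layer whose matrix is the product W2 @ W1.
one_layer = (W2 @ W1) @ x
print(np.allclose(two_layers, one_layer))  # True
```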
<p>Since we generally use "bias units" in the linear portion of a layer, every layer can actually perform
affine transformations (linear transformation + a translation). We know from linear algebra that
affine transformations can only uniformly stretch and shrink (scale), rotate, shear and translate vectors in a vector space. If we think of some data of interest as a point cloud in some N-dimensional space, then if it is non-random data, it will have some sort of shape.</p>
<p>For example, some fairly non-complex periodic system might produce data points that lie on a 2D circle or a higher-dimensional loop. The typical goal for a supervised neural network is to map this point cloud onto some other point cloud living in a different space with a different shape. When we one-hot encode the output of a neural network, we're mapping our point cloud onto essentially orthonormal unit basis vectors in a different space. Quite literally, we're transforming some geometric shape (or space) into a different shape. The task of the neural network then is to figure out how to construct a new shape (the output/target space) from a starting shape (the input data point cloud) using only the tools of affine transformations and (usually) a single type of non-linear function, such as the rectified linear unit (ReLU) or sigmoid.</p>
<div style="display:table">
<div style="display: table-row;">
<div style="float:left;display: table-cell; margin-right:50px; "><img src="images/TensorNetwork/sigmoid_activation.png" width=350px></div>
<div style="display: table-cell; "><img src="images/TensorNetwork/relu_activation.png" width=350px></div>
</div>
</div>
<p>On the left is a graph of the sigmoid activation function. Clearly it is a curved function and hence not linear. Sigmoid has mostly fallen out of favor and ReLU has become the de facto standard activation function. The right figure is the graph for ReLU. ReLU is just <span class="math">\(max(0,x)\)</span>, and has two "pieces," the flat line for when <span class="math">\(x \lt 0\)</span> and the sloped line for when <span class="math">\(x \geq 0\)</span>. Both of ReLU's pieces are lines, so why does it count as a non-linear function? A standard, but unintuitive, mathematical definition for a linear function is a function that has the following properties:</p>
<p>(Definition 1.1: Linear Function)
</p>
<div class="math">$$
f : X \rightarrow Y \\
\text{$f$ is a function from some domain X to some codomain Y} \\
f(x_1 + x_2) = f(x_1) + f(x_2) \\
f(a \times x) = a \times f(x)
$$</div>
<p>If you check to see if ReLU has these properties you'll find that it fails both properties, and hence it is not a linear function. For example, <span class="math">\(relu(-1+1) \neq relu(-1) + relu(1)\)</span> and <span class="math">\(relu(-2 * 5) \neq -2*relu(5)\)</span>.</p>
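A quick numerical check of both failures:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Additivity fails: relu(-1 + 1) != relu(-1) + relu(1)
print(relu(-1 + 1), relu(-1) + relu(1))  # 0 1
# Homogeneity fails: relu(-2 * 5) != -2 * relu(5)
print(relu(-2 * 5), -2 * relu(5))  # 0 -10
```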
<p>One interesting consequence of linearity is that the input data cannot be copied and reused within the function expression. For example, a function like <span class="math">\(f(x) = x^2\)</span> is non-linear because it violates the definition of linearity above, but notice also that the input <span class="math">\(x\)</span> is copied (used twice). In a handwavy way, linear functions seem to follow some sort of conservation-of-data law.</p>
<p>However, consider a function like <span class="math">\(f(x,y) = x*y\)</span>; it is called multi-linear because it is linear with respect to each of its input variables. Interestingly, if <span class="math">\(y = x\)</span> (always) then this function behaves exactly like <span class="math">\(f(x)=x^2\)</span>, just in a different form. More generally, if <span class="math">\(f(x,y) = x*y\)</span> but y is merely correlated with x, say <span class="math">\(y = x + e\)</span> (e = noise), then you'll get <span class="math">\(f(x,y) = x(x+e) = x^2 + ex\)</span>, which for a sufficiently small <span class="math">\(e\)</span> will behave nearly the same as <span class="math">\(f(x) = x^2\)</span>. So a linear function of two highly correlated variables will produce non-linear behavior. Hence, if you have say two separate <em>linear</em> dynamic systems that start off completely un-entangled, each with its own maximum degrees of freedom, and they become correlated/entangled with each other through interaction, then taken together as a composite system they may exhibit non-linear dynamics.</p>
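Here is a small numpy sketch of that claim: a multi-linear product of two highly correlated variables tracks <span class="math">\(x^2\)</span> closely (the noise scale of 0.01 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 200)
e = 0.01 * rng.normal(size=x.shape)  # small noise
y = x + e                            # y is highly correlated with x

f = x * y                            # multi-linear in (x, y)...
# ...but as a function of x alone it behaves almost exactly like x**2
max_err = np.max(np.abs(f - x**2))
print(max_err)  # small, on the order of the noise
```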
<h3>A brief detour into calculus</h3>
<p>Let's take a bit of a detour into calculus, because thinking of linear vs nonlinear got me rethinking some basic calculus from high school. You know that the derivative of a function <span class="math">\(f(x)\)</span> at a point <span class="math">\(a\)</span> is the slope of the tangent line at that point. More generally for multivariate and vector-valued functions, the derivative is thought of as the best linear approximation to that function. You've seen the definition of the derivative in terms of limits or in non-standard analysis as an algebraic operation with an extended real number system (the hyperreals).</p>
<p>You know how to differentiate simple functions such as <span class="math">\(f(x) = x^3\)</span> by following some derivative rules. For example, in this case we apply the power rule: "multiply by the exponent of <span class="math">\(x\)</span>, then reduce the exponent of <span class="math">\(x\)</span> by 1." So we calculate <span class="math">\(\frac{df}{dx} = 3x^2\)</span> (where <span class="math">\(\frac{df}{dx}\)</span> is Leibniz notation, read as "the derivative of the function <span class="math">\(f\)</span> with respect to variable <span class="math">\(x\)</span>"). As a formula: if <span class="math">\(f(x)=x^{r}\)</span>, then <span class="math">\(df/dx = rx^{r-1}\)</span>. And of course we've <em>memorized</em> a bunch of other simple rules, which together form a powerful toolset we can use to compute the derivative of many functions.</p>
<p>Since differentiation is all about linear approximations, what new insights might we come to if we think of nonlinearity as involving copying (or deleting) data?
Well, let's walk through what would happen if we took a non-linear function and re-construed it as a multi-linear function.</p>
<p>Take our simple function <span class="math">\(f(x) = x^3\)</span> and expand it to make explicit the copying that is happening, <span class="math">\(f(x) = x \cdot x \cdot x\)</span>. This function makes 3 copies of <span class="math">\(x\)</span> and then reacts them together (multiplication) to produce the final result. The copying operation is totally hidden with traditional function notation, but it happened.</p>
<p>What does this have to do with calculus? Well, let's convert <span class="math">\(f(x)=x\cdot x \cdot x\)</span> into a multi-linear function, i.e. <span class="math">\(f(x,y,z) = x\cdot y \cdot z\)</span>, except that we know <span class="math">\(x = y = z\)</span> (or that they're highly correlated). It is multi-linear because if we assumed data independence, then the function is linear with respect to each variable. For example, <span class="math">\(f(5+3,y,z) = 8yz\)</span> which is the same as <span class="math">\(f(5,y,z) + f(3,y,z) = 5yz + 3yz = 8yz\)</span>. But since we know our data is not independent, technically this function <em>isn't</em> linear because we can't independently control for each variable. But let's for a moment pretend we didn't know our input data is correlated in this way.</p>
<p>Recall from school that the derivative of <span class="math">\(f(x)=x^3\)</span> is <span class="math">\(\frac{df}{dx} = 3x^2\)</span>. We can't take "the derivative" of our new multi-linear function anymore; we can only take partial derivatives with respect to each variable. To take the partial derivative with respect to <span class="math">\(x\)</span>, we hold the other variables <span class="math">\(y,z\)</span> constant, and then it becomes a simple linear equation. We do the same for <span class="math">\(y,z\)</span>.</p>
<div class="math">$$
\begin{align}
\frac{\partial{f}}{\partial{x}} = yz && \frac{\partial{f}}{\partial{y}} = xz && \frac{\partial{f}}{\partial{z}} = yx
\end{align}
$$</div>
<p>But wait, we can't really hold the other variables constant because we know they're perfectly correlated (actually equal). What we're getting at is a case of the <a href="https://en.wikipedia.org/wiki/Total_derivative">total derivative</a>. In this particular case, since <span class="math">\(x = y = z\)</span>, we just need to combine (sum) all these partial derivatives and rename all the variables to the same symbol, say <span class="math">\(x\)</span>.</p>
<div class="math">$$ \frac{df}{dx} = yz + xz + yx = xx + xx + xx = 3x^2 $$</div>
<p>Exactly what we get through the traditional route. Or consider a slightly more complicated function:
</p>
<div class="math">$$
f(x) = 3x^3+x^2 \rightarrow \frac{df}{dx} = 9x^2 + 2x \\
f(a,b,c,d,e) = 3abc+de \rightarrow \frac{df}{d\{abcde\}} = 3bc+3ac+3ab + e + d \rightarrow 9x^2 + 2x
$$</div>
<p>Hence the intuition of non-linearity as involving copying makes computing derivatives more intuitive, at least for me.</p>
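The pattern is easy to check numerically: summing the partial derivatives of <span class="math">\(f(x,y,z)=xyz\)</span> along the diagonal <span class="math">\(x=y=z\)</span> reproduces the power rule for <span class="math">\(x^3\)</span>. A quick sanity check (the sample points are arbitrary):

```python
# f(x) = x**3 made explicit as the multi-linear f(x, y, z) = x*y*z with x = y = z.
def partial_sum(x):
    y = z = x
    # Sum of the partial derivatives yz + xz + xy, evaluated on the diagonal.
    return y * z + x * z + x * y

def power_rule(x):
    # Textbook derivative of x**3.
    return 3 * x ** 2

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert partial_sum(x) == power_rule(x)
print("sum of partials matches 3x^2")
```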
<h3>Tensor Networks</h3>
<p>In tensor algebra, a scalar is a 0-tensor, a vector is a 1-tensor, a matrix is a 2-tensor, and higher-order tensors don't generally have names. But to carry out the linear algebra operation we just did, we had to promote a 0-tensor to a 1-tensor. Tensors are often notated by labeled indices, as if they were containers whose elements we find by an addressing mechanism. A 0-tensor (scalar) isn't a container; it is the "atomic ingredient" of higher tensors, hence it has no indices. Once you box together a bunch of scalars, you get a 1-tensor (a vector), and to locate the individual scalars in the box, we label each one with a positive integer. </p>
<p>If we have a vector <span class="math">\(A = \langle{a,b,c}\rangle\)</span>, where <span class="math">\(a,b,c\)</span> are scalars, then we could label them <span class="math">\(1,2,3\)</span> in order. Hence we could refer to the <span class="math">\(i'th\)</span> element of <span class="math">\(A\)</span> as <span class="math">\(A_i\)</span> or <span class="math">\(A(i)\)</span>. So <span class="math">\(A_1 = a\)</span> or <span class="math">\(A(3) = c\)</span>. Now we can box together a bunch of 1-tensors (vectors) and we'll get a 2-tensor (matrix). A matrix <span class="math">\(M(i,j)\)</span> hence would have two indices, so we need to supply two numbers to find a single scalar in the matrix. If we supply a partial address, such as <span class="math">\(M(1,j)\)</span> then this would return a 1-tensor, whereas <span class="math">\(M(1,3)\)</span> would return a 0-tensor. We can box together a bunch of 2-tensors to get a 3-tensor and so on. Importantly, anytime you box some <span class="math">\(k\)</span>-tensors together to form a higher order tensor, they must be of the same size.</p>
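numpy indexing mirrors this addressing scheme (keeping in mind that numpy uses 0-based indices, unlike the 1-based labels above); a partial address returns a lower-order tensor:

```python
import numpy as np

# A 2-tensor (matrix) built by boxing together 1-tensors of the same size.
M = np.array([[1, 2, 3],
              [4, 5, 6]])

print(M[0, 2])  # full address -> a 0-tensor (scalar): 3
print(M[0])     # partial address -> a 1-tensor: [1 2 3]

# Boxing together 2-tensors of the same shape yields a 3-tensor.
T = np.stack([M, 10 * M])
print(T.shape)  # (2, 2, 3)
```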
<p>Tensors are in a sense compositional mathematical objects. Scalars are "made of" nothing. Vectors are "made of" scalars, matrices are made of vectors, 3-tensors are made of 2-tensors. Perhaps this suggests that tensors have even more power to represent compositionality in data than do conventional neural networks, which usually only represent depth-wise hierarchy.</p>
<p>Given the natural compositionality of individual tensors, we can network them together to form (deep) tensor networks! As we'll soon see, however, there is no explicit non-linear activation function applied in a tensor network; everything appears perfectly linear. Networking in a neural network is nothing more impressive than matrix multiplication, and tensors have a generalization of matrix multiplication called <strong>tensor contraction</strong>.</p>
<h4>Tensor Contraction</h4>
<p>Take a 2-tensor (matrix) and multiply it with a vector (1-tensor).</p>
<div class="highlight"><pre><span></span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]])</span> <span class="o">@</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span><span class="mi">5</span><span class="p">])</span>
</pre></div>
<div class="highlight"><pre><span></span>array([-2, 5])
</pre></div>
<p>The matrix on the left is the identity matrix, so it didn't do anything to our vector, but what computation is happening, and how can it be generalized to tensors?</p>
<p>The answer is tensor (or index) contraction. We'll denote a matrix <span class="math">\(A_{rt}\)</span> and a vector <span class="math">\(B_t\)</span>. Multiplying them together is a simple operation really.</p>
<div class="math">$$ C_r = \sum_{t}A_{rt}\cdot B_t $$</div>
<p>This equation says that a tensor contraction between a matrix <span class="math">\(A_{rt}\)</span> and a vector <span class="math">\(B_t\)</span> results in a new vector <span class="math">\(C_r\)</span> (because it has only one index), and each element of <span class="math">\(C_r\)</span>, indexed by <span class="math">\(r\)</span>, is determined by summing the products <span class="math">\(A_{rt} \cdot B_t\)</span> over every value of <span class="math">\(t\)</span>. Oftentimes the summation symbol is omitted, so we express a tensor contraction just by juxtaposition, <span class="math">\(A_{rt}B_{t}\)</span>. This goes by the name <strong>Einstein summation notation</strong>: whenever you juxtapose two tensors that share at least one index, it denotes a tensor contraction.</p>
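numpy's einsum lets us write this contraction exactly as the formula reads; here is the identity-matrix example from above, redone as an explicit contraction:

```python
import numpy as np

A = np.array([[1, 0], [0, 1]])
B = np.array([-2, 5])

# C_r = sum_t A_rt * B_t, written as an explicit tensor contraction.
C = np.einsum('rt,t->r', A, B)
print(C)                         # [-2  5]
print(np.array_equal(C, A @ B))  # True
```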
<p>Here's a simpler example that's easier to calculate. The inner product of two vectors returns a scalar.</p>
<div class="highlight"><pre><span></span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">])</span> <span class="o">@</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">3</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">1</span><span class="p">])</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="mf">10</span>
</pre></div>
<p>This result is from <span class="math">\(1*3+2*2+3*1\)</span>. Let's see why using a tensor contraction. We'll define two vectors <span class="math">\(F_h = \langle{1,2,3}\rangle\)</span> and <span class="math">\(G_i = \langle{3,2,1}\rangle\)</span>. We can take the inner product of two vectors when they're of equal length, which they are here, so we can change the index label of <span class="math">\(F_h\)</span> to <span class="math">\(F_i\)</span>. Now that they share the same index, we can do the tensor contraction.</p>
<div class="math">$$ H = \sum_i F_i\cdot G_i $$</div>
<p>It should be clear that we just multiply together corresponding elements and then sum them all up, getting the inner product that numpy computed for us. Note that once we've done the sum over products, the matching indices have been fully contracted, and the resulting tensor will thus have two fewer indices than the combined tensor. That is, combining the two vectors we get a tensor with two indices (a 2-tensor), but once we contract them, we get a 0-tensor.</p>
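The same inner product, written as an explicit einsum contraction over the shared index:

```python
import numpy as np

F = np.array([1, 2, 3])
G = np.array([3, 2, 1])

# H = sum_i F_i * G_i: contracting the shared index i leaves a 0-tensor.
H = np.einsum('i,i->', F, G)
print(H)  # 10
```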
<p>There is a convenient graphical representation of tensors that is really helpful in reasoning about tensor contractions and tensor networks. You represent a tensor as some simple geometric shape, we'll use a square, and then for each index the tensor has, you draw a "leg" or <strong>string</strong> emanating from the box. For example, here is a matrix and a 3-tensor with 2 and 3 strings, respectively:</p>
<div style="display:table">
<div style="display: table-row;">
<div style="float:left;display: table-cell;"><img src="images/TensorNetwork/matrix1.png" width=200px></div>
<div style="display: table-cell;"><img src="images/TensorNetwork/3tensor1.png" width=200px></div>
</div>
</div>
<p>Each index has been labeled. Now we can form tensor networks by connecting these tensors if they have compatible indices. For example, here is that inner product we just worked out:</p>
<p><img src="images/TensorNetwork/inner_product1.png"></p>
<p>And here is a matrix vector multiplication:</p>
<p><img src="images/TensorNetwork/matrix_vector_mult.png" width=300px>
You can see the vector on the left since it only has one string exiting from it whereas the matrix on the right has two strings. Once they share a string (an index), that represents a contraction. Notice that the number of remaining open-ended strings represents the rank or order of the tensor that results after the contraction. The matrix-vector multiplication diagram has 1 open string, so we know this contraction returns a new vector.</p>
<p>Since a simple feedforward neural network is nothing more than a series of matrix multiplications with non-linear activation functions, we can represent this easily diagrammatically: </p>
<p><img src="images/TensorNetwork/nn3.png" width=370px></p>
<p>Where the <span class="math">\(\sigma\)</span> (sigma) symbol represents the non-linear function. If we removed that, we would have a tensor network. But remember, this whole post is about non-linearity and tensor networks, so how do we get back the non-linearity in a tensor network?</p>
<p>Copying.</p>
<p>All we need to do is violate the no-cloning rule in the topology of the tensor network, and it will be able to learn non-linear functions. Consider the following two tensor networks. One has two component tensors that are both 3-tensors. Since there are 2 open strings, we know the result is a 2-tensor, however, you can also think of this as a directional network in which we plug in an input vector (or matrix if it's a minibatch) on the left and the network produces an output vector on the right.
<br /><br />
<div style="display:table">
<div style="display: table-row;">
<div style="float:left;display: table-cell; margin-right:50px; margin-bottom:25px;"><img src="images/TensorNetwork/tensornet1.png" width=350px></div>
<div style="display: table-cell; "><img src="images/TensorNetwork/tensornet2.png" width=350px></div>
</div>
</div></p>
<p>The tensor network on the left has a tensor <span class="math">\(A\)</span> that can produce two strings from one input string, so it has the ability to copy its input; however, both copies get passed to a single tensor <span class="math">\(B\)</span>, so this network cannot produce non-linear behavior because both copies will be entangled and cannot be transformed independently by tensor <span class="math">\(B\)</span>. In contrast, the tensor network on the right <em>can</em> produce non-linear behavior (as we'll soon show), because tensor <span class="math">\(A\)</span> can produce two copies of its input and each copy gets independently transformed by two different tensors <span class="math">\(B,C\)</span>, which then pass their results to tensor <span class="math">\(D\)</span>, which computes the final result.</p>
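Here's a small numpy sketch of the right-hand network (the tensor shapes and contraction pattern are arbitrary illustrative choices, not the exact network trained below). Because the input is copied and the two copies are transformed independently before being multiplied back together, the output is quadratic in the input, which is one easy way to see the network is not linear:

```python
import numpy as np

rng = np.random.default_rng(3)
# Arbitrary small shapes: A copies its input into two strings,
# B and C transform each copy independently, D combines them.
A = rng.normal(size=(4, 4, 4))
B = rng.normal(size=(4, 4))
C = rng.normal(size=(4, 4))
D = rng.normal(size=(4, 4))

def net(x):
    r1 = np.einsum('a,abc->bc', x, A)   # tensor A: one input string, two output strings
    r2 = np.einsum('bc,bd->d', r1, B)   # branch 1
    r3 = np.einsum('bc,cd->d', r1, C)   # branch 2
    return np.einsum('d,d,df->f', r2, r3, D)  # combine the two branches

x = rng.normal(size=4)
# Scaling the input by 2 scales the output by 4: degree-2 homogeneity,
# which a linear map cannot have.
print(np.allclose(net(2 * x), 4 * net(x)))  # True
```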
<p>Ready to see some non-linearity arise from what appears to be a purely linear network? Let's see if we can train the tensor network on the right to learn the ReLU non-linear activation function. That would surely be a sign it can do something non-linear. It turns out numpy, PyTorch and TensorFlow all have functions called <strong>einsum</strong> that can compute tensor contractions as we've discussed. Let's see.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch.nn.functional</span> <span class="k">as</span> <span class="nn">F</span>
<span class="kn">import</span> <span class="nn">torch.optim</span> <span class="k">as</span> <span class="nn">optim</span>
<span class="kn">from</span> <span class="nn">matplotlib</span> <span class="kn">import</span> <span class="n">pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
</pre></div>
<p>Here we define the tensor network by first setting up our individual tensor components. Then we define a function that will accept some input tensor (in this case a scalar) and will connect up the tensors into a network by defining the tensor contractions that will happen.</p>
<p>With the notation used for the <em>einsum</em> function in PyTorch, you write the indices of the involved tensors in a string and then pass the actual tensor objects as the second argument in a tuple. In the string, all the indices for each involved tensor are listed together (as single characters), then a comma precedes the next tensor with all of its indices, and so on. Then you write '->' to indicate the resultant tensor with its indices. </p>
<p>Take the first string we see, 'sa,abc->sbc'. This means we will contract two tensors; the first has two indices, the second has three. We label the indices in the way we want them to contract. In this case, the 's' index represents the batch size and the 'a' index is the actual data. Since we want to contract the data with the 2nd tensor, we label its first index as 'a' as well. The resulting tensor's indices will be whatever indices were not contracted.</p>
<div class="highlight"><pre><span></span><span class="n">b1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">b2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">b3</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">b4</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">tensorNet1</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
<span class="n">r1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sa,abc->sbc'</span><span class="p">,(</span><span class="n">x</span><span class="p">,</span><span class="n">b1</span><span class="p">))</span>
<span class="n">r2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sbc,bd->sd'</span><span class="p">,(</span><span class="n">r1</span><span class="p">,</span><span class="n">b2</span><span class="p">))</span>
<span class="n">r3</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sbc,bd->sd'</span><span class="p">,(</span><span class="n">r1</span><span class="p">,</span><span class="n">b3</span><span class="p">))</span>
<span class="n">r4</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sd,sd,df->sf'</span><span class="p">,(</span><span class="n">r2</span><span class="p">,</span><span class="n">r3</span><span class="p">,</span><span class="n">b4</span><span class="p">))</span>
<span class="k">return</span> <span class="n">r4</span>
</pre></div>
<p>If you're familiar with PyTorch, this is just a simple training loop. Normalizing the data seems to be important empirically for tensor contractions. Each tensor in the tensor network is a "trainable object." So if our tensor network contained a 3-tensor with indices of size <span class="math">\(10\times 5\times 10\)</span>, that tensor would have <span class="math">\(10 * 5 * 10 = 500\)</span> parameters. The network we've just defined above has a total of <span class="math">\(1*10*10 + 10*10 + 10*10 + 10*1 = 310\)</span> parameters.</p>
<div class="highlight"><pre><span></span><span class="n">optimizer</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">Adam</span><span class="p">([</span><span class="n">b1</span><span class="p">,</span><span class="n">b2</span><span class="p">,</span><span class="n">b3</span><span class="p">,</span><span class="n">b4</span><span class="p">],</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.001</span><span class="p">)</span>
<span class="n">criterion</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">MSELoss</span><span class="p">()</span>
<span class="n">losses</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">5000</span><span class="p">):</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">zero_grad</span><span class="p">()</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span><span class="mi">1</span><span class="p">),</span><span class="n">dim</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="n">target</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">tensorNet1</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">criterion</span><span class="p">(</span><span class="n">out</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span>
<span class="n">losses</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">loss</span><span class="p">)</span>
<span class="k">if</span> <span class="n">epoch</span> <span class="o">%</span> <span class="mi">500</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Epoch: </span><span class="si">{}</span><span class="s2"> | Loss: </span><span class="si">{}</span><span class="s2">"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">epoch</span><span class="p">,</span> <span class="n">loss</span><span class="p">))</span>
<span class="n">loss</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">step</span><span class="p">()</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">0</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">286692.1875</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">500</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">949.7288818359375</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">1000</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.07024684548377991</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">1500</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.025008967146277428</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">2000</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.02917313575744629</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">2500</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.019280623644590378</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">3000</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.03735805302858353</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">3500</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.039074432104825974</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">4000</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.0267774797976017</span>
<span class="n">Epoch</span><span class="o">:</span><span class="w"> </span><span class="mi">4500</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">Loss</span><span class="o">:</span><span class="w"> </span><span class="mf">0.026810143142938614</span>
</pre></div>
<p>The loss went steadily down, but let's see if it reliably learns the ReLU function. Remember, we're expecting that it will set negative numbers to 0 (or close to 0) and leave positive numbers as their original value (or at least close to their original values).</p>
<div class="highlight"><pre><span></span><span class="n">t1</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">1</span><span class="p">),</span><span class="n">dim</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="c1">#Some test data</span>
<span class="nb">print</span><span class="p">(</span><span class="n">t1</span><span class="p">)</span>
<span class="n">tensorNet1</span><span class="p">(</span><span class="n">t1</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>tensor([[-0.3854],
[-0.6216],
[ 0.2522],
[-0.6337]])
tensor([[ 0.1582],
[ 0.4116],
[ 0.0678],
[ 0.4277]])
</pre></div>
<p>Well, it learned something! It actually looks like it learned how to square the numbers rather than the ReLU function, but still, it learned a non-linear function. And if you tried this same training task using the "<em>linear</em> topology" from the diagram on the right above, you would be completely unable to learn this non-linear function, no matter how you tune the hyperparameters and no matter how big the tensors are. The wiring is everything. The non-linearity arises from the network structure (global), not from the individual tensor contractions (locally linear).</p>
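<p>The copying effect is easy to check directly: a linear map satisfies <code>f(2x) = 2*f(x)</code>, but a contraction that consumes two copies of the same branch scales quadratically with the input. A tiny illustrative sketch (the tensors here are made up, not the network above):</p>

```python
import torch

w = torch.randn(3, 3)

def copy_net(x):
    # one linear contraction...
    y = torch.einsum('a,ab->b', x, w)
    # ...then a contraction that uses the SAME branch twice (data copying)
    return torch.dot(y, y)

x = torch.randn(3)
# quadratic, not linear: doubling the input quadruples the output
print(torch.allclose(copy_net(2 * x), 4 * copy_net(x)))  # True
```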
<p>To me, this is quite interesting. It has been shown that certain kinds of tensor networks are directly related to convolutional neural networks with max pooling; see <a href="https://arxiv.org/abs/1603.00162">https://arxiv.org/abs/1603.00162</a>.</p>
<p>The fact that the nonlinearity is due to the network itself suggests that you can dynamically tune how nonlinear the network is, by controlling the degree of data copying that the network can do. I think the generality and flexibility of tensor networks can allow you to design networks with just the right amount of inductive bias given your data. You can design tree networks, grid networks, or any topology you can think of as long as the contractions are possible.</p>
<p><img src="images/TensorNetwork/tensor_topologies.png" /></p>
<h3>Are Tensor Networks useful?</h3>
<p>Okay, so that's all theoretically very interesting, but are tensor networks useful? Are they better than neural networks at anything? Well, tensor networks arose in the physics community as a powerful tool for simulating quantum systems, so they're definitely useful in that domain. Unfortunately, the jury is still out on whether they can be "better" than neural networks for machine learning. So far, traditional deep networks continue to dominate.</p>
<p>Whether tensor networks offer a real advantage, I can't yet tell, either from my own experiments with tensor networks or from the literature. However, because the components of tensor networks are just tensors, i.e. high-order structured data, their parameters tend to be much more interpretable than the weights of a neural network, which get rammed through activation functions. One small benefit I've noticed (but take it with a big grain of salt) is that I can use a much higher learning rate with a tensor network than with a neural network without causing divergence during training.</p>
<p>Tensor networks can be very useful in a method called <strong>tensor decompositions</strong>. The idea is that if you have a huge matrix of data, say 1,000,000 x 1,000,000 entries, you can decompose it into the contraction of a series of smaller tensors, such that when contracted they sufficiently approximate the original matrix. It turns out that when you train a tensor decomposition network to approximate your data, it will often learn rather interpretable features.</p>
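<p>As a minimal sketch of the idea (a plain rank-5 matrix factorization trained by gradient descent, rather than a full tensor decomposition; all shapes here are illustrative):</p>

```python
import torch

torch.manual_seed(0)
# "Data": a large-ish matrix that secretly has low rank
M = torch.randn(100, 5) @ torch.randn(5, 100)

# Two smaller cores whose contraction should approximate M
U = torch.randn(100, 5, requires_grad=True)
V = torch.randn(5, 100, requires_grad=True)
opt = torch.optim.Adam([U, V], lr=0.01)

for _ in range(2000):
    opt.zero_grad()
    loss = ((U @ V - M) ** 2).mean()   # reconstruction error
    loss.backward()
    opt.step()

print(loss.item())  # small: the contraction U @ V closely approximates M
```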
<p><img src="images/TensorNetwork/tensornet5.png" /></p>
<p>Below, I've included some code for a linear tensor network (i.e. one that cannot learn a non-linear function) that can be trained to classify FashionMNIST clothes. A 2-layer convolutional neural network with about 500,000 parameters can achieve over 92% accuracy, whereas this simple linear tensor network achieves about 88.5% accuracy (though it trains quite quickly). The reason for using a linear tensor network is mostly that a decently sized tensor network with non-linear topology (e.g. a hierarchical tree like the figure at the beginning of the post) would be too much code, and a bit too confusing to read in raw einsum notation. </p>
<p>I do include a slightly more complex non-linear topology at the very end that achieves up to 94% train / 90% test accuracy, which is getting competitive with conventional neural networks. Whether more complex tensor network topologies can achieve even better results is left as an exercise for the reader.</p>
<h3>Training a (Linear) Tensor Network on Fashion MNIST</h3>
<h4>Setup a training function</h4>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">train</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="p">[],</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.001</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="n">optimizer</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">Adam</span><span class="p">(</span><span class="n">params</span><span class="p">,</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.001</span><span class="p">)</span>
<span class="n">criterion</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">CrossEntropyLoss</span><span class="p">()</span>
<span class="n">train_loss</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">train_accu</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">epochs</span><span class="p">):</span>
<span class="c1"># trainning</span>
<span class="n">ave_loss</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="n">correct_cnt</span><span class="p">,</span> <span class="n">ave_loss</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span>
<span class="n">total_cnt</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="k">for</span> <span class="n">batch_idx</span><span class="p">,</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">train_loader</span><span class="p">):</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">zero_grad</span><span class="p">()</span>
<span class="n">x</span><span class="p">,</span> <span class="n">target</span> <span class="o">=</span> <span class="n">x</span><span class="p">,</span> <span class="n">target</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="k">if</span> <span class="n">shape</span><span class="p">:</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,(</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span><span class="o">*</span><span class="n">shape</span><span class="p">))</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">criterion</span><span class="p">(</span><span class="n">out</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span>
<span class="n">ave_loss</span> <span class="o">=</span> <span class="n">ave_loss</span> <span class="o">*</span> <span class="mf">0.9</span> <span class="o">+</span> <span class="n">loss</span><span class="o">.</span><span class="n">data</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="mf">0.1</span>
<span class="n">_</span><span class="p">,</span> <span class="n">pred_label</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">out</span><span class="o">.</span><span class="n">data</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">total_cnt</span> <span class="o">+=</span> <span class="n">x</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">size</span><span class="p">()[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">correct_cnt</span> <span class="o">+=</span> <span class="nb">float</span><span class="p">((</span><span class="n">pred_label</span> <span class="o">==</span> <span class="n">target</span><span class="o">.</span><span class="n">data</span><span class="p">)</span><span class="o">.</span><span class="n">sum</span><span class="p">())</span>
<span class="n">acc</span> <span class="o">=</span> <span class="n">correct_cnt</span> <span class="o">/</span> <span class="n">total_cnt</span>
<span class="n">train_loss</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">loss</span><span class="p">)</span>
<span class="n">train_accu</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">acc</span><span class="p">)</span>
<span class="n">loss</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">step</span><span class="p">()</span>
<span class="w"> </span><span class="sd">'''if (batch_idx+1) % 100 == 0 or (batch_idx+1) == len(train_loader):</span>
<span class="sd"> print('==>>> epoch: {}, batch index: {}, train loss: {:.6f}, accuracy: {}'.format(</span>
<span class="sd"> epoch, batch_idx+1, ave_loss, acc))'''</span>
<span class="c1"># testing</span>
<span class="n">correct_cnt</span><span class="p">,</span> <span class="n">ave_loss</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span>
<span class="n">total_cnt</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="k">for</span> <span class="n">batch_idx</span><span class="p">,</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">test_loader</span><span class="p">):</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="k">if</span> <span class="n">shape</span><span class="p">:</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,(</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span><span class="o">*</span><span class="n">shape</span><span class="p">))</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">criterion</span><span class="p">(</span><span class="n">out</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span>
<span class="n">_</span><span class="p">,</span> <span class="n">pred_label</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">out</span><span class="o">.</span><span class="n">data</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">total_cnt</span> <span class="o">+=</span> <span class="n">x</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">size</span><span class="p">()[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">correct_cnt</span> <span class="o">+=</span> <span class="nb">float</span><span class="p">((</span><span class="n">pred_label</span> <span class="o">==</span> <span class="n">target</span><span class="o">.</span><span class="n">data</span><span class="p">)</span><span class="o">.</span><span class="n">sum</span><span class="p">())</span>
<span class="c1"># smooth average</span>
<span class="n">ave_loss</span> <span class="o">=</span> <span class="n">ave_loss</span> <span class="o">*</span> <span class="mf">0.9</span> <span class="o">+</span> <span class="n">loss</span><span class="o">.</span><span class="n">data</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="mf">0.1</span>
<span class="n">acc</span> <span class="o">=</span> <span class="n">correct_cnt</span> <span class="o">*</span> <span class="mf">1.0</span> <span class="o">/</span> <span class="n">total_cnt</span>
<span class="k">if</span><span class="p">(</span><span class="n">batch_idx</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">100</span> <span class="o">==</span> <span class="mi">0</span> <span class="ow">or</span> <span class="p">(</span><span class="n">batch_idx</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">==</span> <span class="nb">len</span><span class="p">(</span><span class="n">test_loader</span><span class="p">):</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'==>>> epoch: </span><span class="si">{}</span><span class="s1">, batch index: </span><span class="si">{}</span><span class="s1">, test loss: </span><span class="si">{:.6f}</span><span class="s1">, acc: </span><span class="si">{:.3f}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span> \
<span class="n">epoch</span><span class="p">,</span> <span class="n">batch_idx</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span> <span class="n">ave_loss</span><span class="p">,</span> <span class="n">acc</span><span class="p">))</span>
<span class="k">return</span> <span class="n">train_loss</span><span class="p">,</span> <span class="n">train_accu</span>
</pre></div>
<h4>Load up the FashionMNIST Data</h4>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torchvision</span>
<span class="kn">import</span> <span class="nn">torchvision.datasets</span> <span class="k">as</span> <span class="nn">dset</span>
<span class="kn">import</span> <span class="nn">torchvision.transforms</span> <span class="k">as</span> <span class="nn">transforms</span>
<span class="n">transform</span> <span class="o">=</span> <span class="n">transforms</span><span class="o">.</span><span class="n">Compose</span><span class="p">([</span><span class="n">transforms</span><span class="o">.</span><span class="n">ToTensor</span><span class="p">(),</span> <span class="n">transforms</span><span class="o">.</span><span class="n">Normalize</span><span class="p">((</span><span class="mf">0.5</span><span class="p">,),</span> <span class="p">(</span><span class="mf">1.0</span><span class="p">,))])</span>
<span class="n">train_set</span> <span class="o">=</span> <span class="n">torchvision</span><span class="o">.</span><span class="n">datasets</span><span class="o">.</span><span class="n">FashionMNIST</span><span class="p">(</span><span class="s2">"fashion_mnist"</span><span class="p">,</span> <span class="n">train</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">download</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">transform</span><span class="o">=</span><span class="n">transform</span><span class="p">)</span>
<span class="n">test_set</span> <span class="o">=</span> <span class="n">torchvision</span><span class="o">.</span><span class="n">datasets</span><span class="o">.</span><span class="n">FashionMNIST</span><span class="p">(</span><span class="s2">"fashion_mnist"</span><span class="p">,</span> <span class="n">train</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">download</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">transform</span><span class="o">=</span><span class="n">transform</span><span class="p">)</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">100</span>
<span class="n">train_loader</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">DataLoader</span><span class="p">(</span>
<span class="n">dataset</span><span class="o">=</span><span class="n">train_set</span><span class="p">,</span>
<span class="n">batch_size</span><span class="o">=</span><span class="n">batch_size</span><span class="p">,</span>
<span class="n">shuffle</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span><span class="n">drop_last</span><span class="o">=</span> <span class="kc">True</span><span class="p">)</span>
<span class="n">test_loader</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">DataLoader</span><span class="p">(</span>
<span class="n">dataset</span><span class="o">=</span><span class="n">test_set</span><span class="p">,</span>
<span class="n">batch_size</span><span class="o">=</span><span class="n">batch_size</span><span class="p">,</span>
<span class="n">shuffle</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">drop_last</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'==>>> total trainning batch number: </span><span class="si">{}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">train_loader</span><span class="p">)))</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'==>>> total testing batch number: </span><span class="si">{}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">test_loader</span><span class="p">)))</span>
</pre></div>
<div class="highlight"><pre><span></span>==>>> total trainning batch number: 600
==>>> total testing batch number: 100
</pre></div>
<h4>Define the Tensor Network</h4>
<h5>Total Num. Parameters: 784 * 25 * 25 + 25 * 25 * 10 = 496,250</h5>
<p><img src="images/TensorNetwork/tensornet4a.png" width=350px></p>
<p>In this case I labeled each string with its dimension size, which is often called the <strong>bond dimension</strong>. As you'll see below, the 4 interior strings all have a bond dimension of 25.</p>
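<p>Bond dimension plays the same role that rank plays in a matrix factorization: splitting a matrix with a truncated SVD yields two cores joined by a string whose bond dimension is the number of singular values kept. A quick illustrative sketch:</p>

```python
import torch

torch.manual_seed(0)
M = torch.randn(50, 50)
U, S, Vh = torch.linalg.svd(M)

k = 25                     # the bond dimension of the cut
left = U[:, :k] * S[:k]    # core 1: shape (50, 25), columns scaled by singular values
right = Vh[:k, :]          # core 2: shape (25, 50)
approx = left @ right      # contracting the bond gives a rank-25 approximation of M

print(tuple(approx.shape))  # (50, 50)
```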
<div class="highlight"><pre><span></span><span class="n">A</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">784</span><span class="p">,</span><span class="mi">25</span><span class="p">,</span><span class="mi">25</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span><span class="o">.</span><span class="n">float</span><span class="p">()</span>
<span class="n">B</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">25</span><span class="p">,</span><span class="mi">25</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span><span class="o">.</span><span class="n">float</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">tensorNet2</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
<span class="k">try</span><span class="p">:</span>
<span class="n">C</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sa,abc->sbc'</span><span class="p">,(</span><span class="n">x</span><span class="p">,</span><span class="n">A</span><span class="p">)))</span>
<span class="n">D</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sbc,bct->st'</span><span class="p">,(</span><span class="n">C</span><span class="p">,</span><span class="n">B</span><span class="p">)))</span>
<span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">D</span><span class="p">)</span>
<span class="k">except</span> <span class="ne">Exception</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Error: </span><span class="si">{}</span><span class="s2">"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">))</span>
</pre></div>
<h4>Train</h4>
<div class="highlight"><pre><span></span><span class="o">%%</span><span class="n">time</span>
<span class="n">loss</span><span class="p">,</span> <span class="n">acc</span> <span class="o">=</span> <span class="n">train</span><span class="p">(</span><span class="n">tensorNet2</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="p">[</span><span class="n">A</span><span class="p">,</span><span class="n">B</span><span class="p">],</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.1</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">784</span><span class="p">,))</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="o">/</span><span class="nx">Users</span><span class="o">/</span><span class="nx">brandonbrown</span><span class="o">/</span><span class="nx">anaconda3</span><span class="o">/</span><span class="nx">envs</span><span class="o">/</span><span class="nx">deeprl</span><span class="o">/</span><span class="nx">lib</span><span class="o">/</span><span class="nx">python3</span><span class="m m-Double">.6</span><span class="o">/</span><span class="nx">site</span><span class="o">-</span><span class="nx">packages</span><span class="o">/</span><span class="nx">ipykernel</span><span class="o">/</span><span class="nx">__main__</span><span class="p">.</span><span class="nx">py</span><span class="p">:</span><span class="mi">7</span><span class="p">:</span><span class="w"> </span><span class="nx">UserWarning</span><span class="p">:</span><span class="w"> </span><span class="nx">Implicit</span><span class="w"> </span><span class="nx">dimension</span><span class="w"> </span><span class="kd">choice</span><span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="nx">softmax</span><span class="w"> </span><span class="nx">has</span><span class="w"> </span><span class="nx">been</span><span class="w"> </span><span class="nx">deprecated</span><span class="p">.</span><span class="w"> </span><span class="nx">Change</span><span class="w"> </span><span class="nx">the</span><span class="w"> </span><span class="nx">call</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">include</span><span class="w"> </span><span class="nx">dim</span><span class="p">=</span><span class="nx">X</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="nx">an</span><span class="w"> </span><span class="nx">argument</span><span class="p">.</span>
<span class="o">/</span><span class="nx">Users</span><span class="o">/</span><span class="nx">brandonbrown</span><span class="o">/</span><span class="nx">anaconda3</span><span class="o">/</span><span class="nx">envs</span><span class="o">/</span><span class="nx">deeprl</span><span class="o">/</span><span class="nx">lib</span><span class="o">/</span><span class="nx">python3</span><span class="m m-Double">.6</span><span class="o">/</span><span class="nx">site</span><span class="o">-</span><span class="nx">packages</span><span class="o">/</span><span class="nx">ipykernel</span><span class="o">/</span><span class="nx">__main__</span><span class="p">.</span><span class="nx">py</span><span class="p">:</span><span class="mi">20</span><span class="p">:</span><span class="w"> </span><span class="nx">UserWarning</span><span class="p">:</span><span class="w"> </span><span class="nx">invalid</span><span class="w"> </span><span class="nx">index</span><span class="w"> </span><span class="nx">of</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="mi">0</span><span class="o">-</span><span class="nx">dim</span><span class="w"> </span><span class="nx">tensor</span><span class="p">.</span><span class="w"> </span><span class="nx">This</span><span class="w"> </span><span class="nx">will</span><span class="w"> </span><span class="nx">be</span><span class="w"> </span><span class="nx">an</span><span class="w"> </span><span class="nx">error</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="nx">PyTorch</span><span class="w"> </span><span class="m m-Double">0.5</span><span class="p">.</span><span class="w"> </span><span class="nx">Use</span><span class="w"> </span><span class="nx">tensor</span><span class="p">.</span><span class="nx">item</span><span class="p">()</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">convert</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="mi">0</span><span class="o">-</span><span class="nx">dim</span><span class="w"> </span><span class="nx">tensor</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="nx">Python</span><span class="w"> </span><span class="nx">number</span>
<span class="o">/</span><span class="nx">Users</span><span class="o">/</span><span class="nx">brandonbrown</span><span class="o">/</span><span class="nx">anaconda3</span><span class="o">/</span><span class="nx">envs</span><span class="o">/</span><span class="nx">deeprl</span><span class="o">/</span><span class="nx">lib</span><span class="o">/</span><span class="nx">python3</span><span class="m m-Double">.6</span><span class="o">/</span><span class="nx">site</span><span class="o">-</span><span class="nx">packages</span><span class="o">/</span><span class="nx">ipykernel</span><span class="o">/</span><span class="nx">__main__</span><span class="p">.</span><span class="nx">py</span><span class="p">:</span><span class="mi">45</span><span class="p">:</span><span class="w"> </span><span class="nx">UserWarning</span><span class="p">:</span><span class="w"> </span><span class="nx">invalid</span><span class="w"> </span><span class="nx">index</span><span class="w"> </span><span class="nx">of</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="mi">0</span><span class="o">-</span><span class="nx">dim</span><span class="w"> </span><span class="nx">tensor</span><span class="p">.</span><span class="w"> </span><span class="nx">This</span><span class="w"> </span><span class="nx">will</span><span class="w"> </span><span class="nx">be</span><span class="w"> </span><span class="nx">an</span><span class="w"> </span><span class="nx">error</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="nx">PyTorch</span><span class="w"> </span><span class="m m-Double">0.5</span><span class="p">.</span><span class="w"> </span><span class="nx">Use</span><span class="w"> </span><span class="nx">tensor</span><span class="p">.</span><span class="nx">item</span><span class="p">()</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">convert</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="mi">0</span><span class="o">-</span><span class="nx">dim</span><span class="w"> </span><span class="nx">tensor</span><span class="w"> </span><span class="nx">to</span><span class="w"> </span><span class="nx">a</span><span class="w"> </span><span class="nx">Python</span><span class="w"> </span><span class="nx">number</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">0</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.2</span><span class="mi">04465</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.7</span><span class="mi">95</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">99457</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">12</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">96803</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">23</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">94834</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">27</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">4</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">93722</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">37</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">91949</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">39</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">6</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">91317</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">44</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">7</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">90564</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">47</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">8</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">90050</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">50</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">89429</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">53</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">10</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">88894</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">56</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">11</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">88893</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">57</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">12</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">88916</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">59</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">13</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">87949</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">59</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">14</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">87448</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">61</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">15</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">87306</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">64</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">16</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">87102</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">63</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">17</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">87094</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">64</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">18</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">86368</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">66</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">19</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">86422</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">65</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">20</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">86429</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">67</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">21</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">85773</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">67</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">22</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">86021</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">67</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">23</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">85484</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">68</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">24</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">86032</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">70</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">25</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">85131</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">71</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">26</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">85170</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">72</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">27</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">85003</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">69</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">28</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84911</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">73</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">29</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84867</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">74</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">30</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84924</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">74</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">31</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84951</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">74</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">32</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84638</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">75</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">33</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84458</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">74</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">34</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84469</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">74</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">35</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84160</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">75</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">36</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83932</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">76</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">37</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84098</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">75</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">38</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84113</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">39</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83863</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">40</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83771</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">41</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">84383</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">77</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">42</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83535</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">43</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83703</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">44</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83593</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">45</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83371</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">78</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">46</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83289</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">80</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">47</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83360</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">79</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">48</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83175</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">81</span>
<span class="o">==>></span><span class="p">></span><span class="w"> </span><span class="nx">epoch</span><span class="p">:</span><span class="w"> </span><span class="mi">49</span><span class="p">,</span><span class="w"> </span><span class="nx">batch</span><span class="w"> </span><span class="nx">index</span><span class="p">:</span><span class="w"> </span><span class="mi">100</span><span class="p">,</span><span class="w"> </span><span class="nx">test</span><span class="w"> </span><span class="nx">loss</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">2.1</span><span class="mi">83238</span><span class="p">,</span><span class="w"> </span><span class="nx">acc</span><span class="p">:</span><span class="w"> </span><span class="m m-Double">0.8</span><span class="mi">80</span>
<span class="nx">CPU</span><span class="w"> </span><span class="nx">times</span><span class="p">:</span><span class="w"> </span><span class="nx">user</span><span class="w"> </span><span class="mi">40</span><span class="nx">min</span><span class="w"> </span><span class="mi">35</span><span class="nx">s</span><span class="p">,</span><span class="w"> </span><span class="nx">sys</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="nx">min</span><span class="w"> </span><span class="mi">23</span><span class="nx">s</span><span class="p">,</span><span class="w"> </span><span class="nx">total</span><span class="p">:</span><span class="w"> </span><span class="mi">42</span><span class="nx">min</span><span class="w"> </span><span class="mi">58</span><span class="nx">s</span>
<span class="nx">Wall</span><span class="w"> </span><span class="nx">time</span><span class="p">:</span><span class="w"> </span><span class="mi">16</span><span class="nx">min</span><span class="w"> </span><span class="mi">17</span><span class="nx">s</span>
</pre></div>
<p>Not too bad, right?</p>
<h3>Simple Non-Linear Tensor Network</h3>
<h4>Total Params: 784*20^2 + 4*20^2 + 20^4 + 10*20^2 = 479,200</h4>
<p><img src="images/TensorNetwork/tensornet4.png" width=500px></p>
<div class="highlight"><pre><span></span><span class="n">nl1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">784</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># a,b,c</span>
<span class="n">nl2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># c,e</span>
<span class="n">nl3</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># d,f</span>
<span class="n">nl4</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># e,f,g,h</span>
<span class="n">nl5</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># c,e</span>
<span class="n">nl6</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># d,f</span>
<span class="n">nl7</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">randn</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span><span class="mi">20</span><span class="p">,</span><span class="mi">10</span><span class="p">,</span> <span class="n">requires_grad</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># e,f,g</span>
<span class="k">def</span> <span class="nf">tensorNet3</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
    <span class="n">r1</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'sb,bcd->scd'</span><span class="p">,(</span><span class="n">x</span><span class="p">,</span><span class="n">nl1</span><span class="p">)))</span>
    <span class="n">r2</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'scd,ce->se'</span><span class="p">,(</span><span class="n">r1</span><span class="p">,</span><span class="n">nl2</span><span class="p">)))</span>
    <span class="n">r3</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'scd,df->sf'</span><span class="p">,(</span><span class="n">r1</span><span class="p">,</span><span class="n">nl3</span><span class="p">)))</span>
    <span class="n">r4</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'se,sf,efgh->sgh'</span><span class="p">,(</span><span class="n">r2</span><span class="p">,</span><span class="n">r3</span><span class="p">,</span><span class="n">nl4</span><span class="p">)))</span>
    <span class="n">r5</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'scd,ce->se'</span><span class="p">,(</span><span class="n">r4</span><span class="p">,</span><span class="n">nl5</span><span class="p">)))</span>
    <span class="n">r6</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'scd,df->sf'</span><span class="p">,(</span><span class="n">r4</span><span class="p">,</span><span class="n">nl6</span><span class="p">)))</span>
    <span class="n">r7</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">normalize</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">'se,sf,efg->sg'</span><span class="p">,(</span><span class="n">r5</span><span class="p">,</span><span class="n">r6</span><span class="p">,</span><span class="n">nl7</span><span class="p">)))</span>
    <span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">r7</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
</pre></div>
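<p>As a quick sanity check on the parameter count stated in the heading above, we can sum the number of entries in each core tensor. This is a plain-Python sketch mirroring the shapes defined above:</p>

```python
from math import prod

# Shapes of nl1..nl7 as defined above
shapes = [
    (784, 20, 20),     # nl1
    (20, 20),          # nl2
    (20, 20),          # nl3
    (20, 20, 20, 20),  # nl4
    (20, 20),          # nl5
    (20, 20),          # nl6
    (20, 20, 10),      # nl7
]
total = sum(prod(s) for s in shapes)
print(total)  # 479200
```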
<div class="highlight"><pre><span></span><span class="o">%%</span><span class="n">time</span>
<span class="n">loss1b</span><span class="p">,</span> <span class="n">acc1b</span> <span class="o">=</span> <span class="n">train</span><span class="p">(</span><span class="n">tensorNet3</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="p">[</span><span class="n">nl1</span><span class="p">,</span> <span class="n">nl2</span><span class="p">,</span> <span class="n">nl3</span><span class="p">,</span> <span class="n">nl4</span><span class="p">,</span> <span class="n">nl5</span><span class="p">,</span> <span class="n">nl6</span><span class="p">,</span> <span class="n">nl7</span><span class="p">],</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.1</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">784</span><span class="p">,))</span>
</pre></div>
<p>Unfortunately, this non-linear tensor network performs significantly better on the training data (reaching over 94% accuracy at 50 epochs), but the test accuracy tops out at around 88%, similar to the linear network. This demonstrates that the added non-linearity makes it much easier to overfit the data. One thing to try is to add trainable bias tensors to the network, as we do in neural networks.</p>
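<p>To illustrate what a bias tensor might look like, here is a minimal NumPy sketch of a single bias-augmented contraction. The name <code>b1</code> and the placement of the bias before normalization are my own assumptions, not code from this post; in the PyTorch version the bias would be a <code>requires_grad=True</code> tensor trained alongside the others.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 784))         # a batch of 5 flattened images
nl1 = rng.normal(size=(784, 20, 20))  # core tensor, as in the network above
b1 = np.zeros((20, 20))               # hypothetical trainable bias tensor

def normalize(t):
    # L2-normalize along axis 1, mimicking F.normalize's default behavior
    return t / np.linalg.norm(t, axis=1, keepdims=True)

# The bias is added to the contraction result before normalizing,
# analogous to the bias term in a dense neural network layer.
r1 = normalize(np.einsum('sb,bcd->scd', x, nl1) + b1)
print(r1.shape)  # (5, 20, 20)
```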
<h3>Conclusion</h3>
<p>Tensor Networks can be seen as a generalization of neural networks. They are being actively studied and improved, and I believe they are likely to become a great addition to your machine learning toolkit.</p>
<h3>References</h3>
<ul>
<li>Yau, D. (2015). Operads of Wiring Diagrams. Retrieved from http://arxiv.org/abs/1512.01602</li>
<li>Maina, S. A. (2017). Graphical Linear Algebra.</li>
<li>Cichocki, A. (2014). Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions, 1–30. Retrieved from http://arxiv.org/abs/1403.2048</li>
<li>Genovese, F. (2017). The Way of the Infinitesimal. Retrieved from http://arxiv.org/abs/1707.00459</li>
<li>Mangan, T. (2008). A gentle introduction to tensors. J. Biomedical Science and Engineering, 1, 64–67.</li>
<li>Kissinger, A., & Quick, D. (2015). A first-order logic for string diagrams, 171–189. http://doi.org/10.4230/LIPIcs.CALCO.2015.171</li>
<li>Barber, A. G. (1997). Linear Type Theories , Semantics and Action Calculi. Computing. Retrieved from http://hdl.handle.net/1842/392</li>
<li>Girard, J.-Y. (1995). Linear Logic: its syntax and semantics. Advances in Linear Logic, 1–42. http://doi.org/10.1017/CBO9780511629150.002</li>
<li>Wadler, P. (1990). Linear types can change the world! University of Glasgow, (April), 1–21.</li>
<li>Martin, C. (2004). Tensor Decompositions Workshop Discussion Notes. Palo Alto, CA: American Institute of Mathematics (AIM), 1–27.</li>
<li>Kasai, H. (2017). Fast online low-rank tensor subspace tracking by CP decomposition using recursive least squares from incomplete observations, 1–21. Retrieved from http://arxiv.org/abs/1709.10276</li>
<li>Zhang, Y., Zhou, G., Zhao, Q., Cichocki, A., & Wang, X. (2016). Fast nonnegative tensor factorization based on accelerated proximal gradient and low-rank approximation. Neurocomputing, 198, 148–154. http://doi.org/10.1016/j.neucom.2015.08.122</li>
<li>Smith, S., Beri, A., & Karypis, G. (2017). Constrained Tensor Factorization with Accelerated AO-ADMM. Proceedings of the International Conference on Parallel Processing, 111–120. http://doi.org/10.1109/ICPP.2017.20</li>
<li>Stoudenmire, E. M., & Schwab, D. J. (2016). Supervised Learning with Tensor Networks. Advances in Neural Information Processing Systems 29 (NIPS 2016), 4799–4807. Retrieved from http://arxiv.org/abs/1605.05775</li>
<li>Rabanser, S., Shchur, O., & Günnemann, S. (2017). Introduction to Tensor Decompositions and their Applications in Machine Learning, 1–13. Retrieved from http://arxiv.org/abs/1711.10781</li>
<li>Zafeiriou, S. (2009). Discriminant nonnegative tensor factorization algorithms. IEEE Transactions on Neural Networks, 20(2), 217–235. http://doi.org/10.1109/TNN.2008.2005293</li>
<li>Cohen, J. E., & Gillis, N. (2018). Dictionary-Based Tensor Canonical Polyadic Decomposition. IEEE Transactions on Signal Processing, 66(7), 1876–1889. http://doi.org/10.1109/TSP.2017.2777393</li>
<li>Phan, A. H., & Cichocki, A. (2010). Tensor decompositions for feature extraction and classification of high dimensional datasets. Nonlinear Theory and Its Applications, IEICE, 1(1), 37–68. http://doi.org/10.1587/nolta.1.37</li>
<li>Kossaifi, J., Lipton, Z. C., Khanna, A., Furlanello, T., & Anandkumar, A. (2017). Tensor Contraction & Regression Networks, 1–10. Retrieved from http://arxiv.org/abs/1707.08308</li>
<li>Lipton, Z. C., & Anandkumar, A. (2018). Deep Active Learning for Named Entity Recognition, 1–15.</li>
<li>Stoudenmire, E. M. (2017). Learning Relevant Features of Data with Multi-scale Tensor Networks, 1–12. http://doi.org/10.1088/2058-9565/aaba1a</li>
</ul>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Announcement2017-04-26T19:41:00-05:002017-04-26T19:41:00-05:00Brandon Browntag:outlace.com,2017-04-26:/announcement.html<p>Some announcements about this blog's migration to a different static site generator as well as some new GitHub repos I've created.</p><p>I've decided to migrate this blog to Pelican from Jekyll. I did this largely because
Pelican has a plugin for allowing Jupyter notebooks to be served automatically
as blog posts. With Jekyll, I had to use nbconvert to convert my ipynb document to html
every time I wanted to publish a new post, and I also had to often manually edit that html document to get it formatted correctly when embedded into the site. That was fine once, but if you want to make
edits or updates, it becomes a real hassle. Now I can directly edit the notebook
and the post will automatically update. There are a number of typos and code bugs
in some of my old posts that I just never got around to fixing because it was
so annoying to re-publish the post. But now I will work on them. Unfortunately some formatting/display
issues have arisen from the migration but I will fix them all soon.</p>
<p>Additionally, I've created some GitHub repos for some of the code used in various projects.
Namely, I've turned the Gridworld game from RL part 3 into a separate project on GitHub
so you can use it in other projects more easily. Find it here: <a href="https://github.com/outlace/Gridworld">https://github.com/outlace/Gridworld</a></p>
<p>I also made a simple Data Augmentation library for creating synthetic image data
from originals so you can amplify your data for use in deep learning models.
You can find it here: <a href="https://github.com/outlace/Data-Augmentation">https://github.com/outlace/Data-Augmentation</a></p>
<p>As far as a roadmap for the next few months, I plan to complete the second
sub-series in my series on Topological Data Analysis (TDA). That is, I will roll out
a series of posts on the Mapper algorithm. Simultaneously I will be slowly working on
an open source TDA library in Python called <a href="https://github.com/outlace/OpenTDA">OpenTDA</a>.</p>
<p>After that, I want to get back into reinforcement learning as that's my biggest passion
in machine learning. I'm not exactly sure what I plan to work on yet, but will likely
use OpenAI's gym library as a testing ground. Feel free to drop a comment if you have any suggestions.</p>
<p>-Brandon</p>Persistent Homology (Part 5)2017-02-26T00:40:00-06:002017-02-26T00:40:00-06:00Brandon Browntag:outlace.com,2017-02-26:/TDApart5.html<p>In part 5 we combine everything we've learned and compute persistent homology barcodes from raw data.</p><h2>Topological Data Analysis - Part 5 - Persistent Homology</h2>
<p>This is Part 5 in a series on topological data analysis.
See <a href="TDApart1.html">Part 1</a> | <a href="TDApart2.html">Part 2</a> | <a href="TDApart3.html">Part 3</a> | <a href="TDApart4.html">Part 4</a></p>
<p><a href="https://github.com/outlace/OpenTDA/PersistentHomology.py">Download the code</a> | <a href="https://github.com/outlace/outlace.github.io/notebooks/TDApart5.ipynb">Download this notebook</a></p>
<p>In this part we finally utilize all we've learned to compute the persistent homology groups and draw persistence diagrams to summarize the information graphically.</p>
<p>Let's summarize what we know so far.</p>
<p>We know...</p>
<ol>
<li>how to generate a simplicial complex from point-cloud data using an arbitrary <span class="math">\(\epsilon\)</span> distance parameter</li>
<li>how to calculate homology groups of a simplicial complex</li>
<li>how to compute Betti numbers of a simplicial complex</li>
</ol>
<p>The jump from what we know to persistent homology is small conceptually. We just need to calculate Betti numbers for a set of simplicial complexes generated by continuously varying <span class="math">\(\epsilon: 0 \rightarrow \infty\)</span>. Then we can see which topological features persist significantly longer than others, and declare those to be signal not noise. </p>
<blockquote>
<p>Note: I'm ignoring an objective definition of "significantly longer" since that is really a statistical question that is outside the scope of this exposition. For all the examples we consider here, it will be obvious which features persist significantly longer just by visual inspection.</p>
</blockquote>
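<p>As a toy illustration of that sweep (my own minimal sketch, not the code we develop in this series): for β₀, the number of connected components, only the 1-skeleton of the complex matters, so a union-find over the ε-neighborhood graph is enough to watch features merge as ε grows.</p>

```python
import numpy as np

def betti0(points, eps):
    """beta_0 of the eps-neighborhood graph: count connected components via union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Connect every pair of points within distance eps
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # union the two components

    return len({find(i) for i in range(n)})

pts = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 0.0]])
# Sweeping eps upward, components merge and beta_0 falls: 3 -> 2 -> 1
print([betti0(pts, e) for e in (0.1, 1.0, 10.0)])  # [3, 2, 1]
```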
<p>Unfortunately, while the conceptual jump is small, the technical jump is more formidable, especially because we also want to be able to ask which data points in our original data set lie on some particular topological feature.</p>
<p>Let's revisit the code we used to sample points (with some intentional randomness added) from a circle and build a simplicial complex.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="n">n</span> <span class="o">=</span> <span class="mi">30</span> <span class="c1">#number of points to generate</span>
<span class="c1">#generate space of parameter</span>
<span class="n">theta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mf">2.0</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">r</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">5.0</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">b</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="c1">#code to plot the circle for visualization</span>
<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_4_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">x2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.75</span><span class="p">,</span><span class="mf">0.75</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">x</span> <span class="c1">#add some "jitteriness" to the points</span>
<span class="n">y2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.75</span><span class="p">,</span><span class="mf">0.75</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">y</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_5_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">newData</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)))</span>
<span class="kn">import</span> <span class="nn">SimplicialComplex</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">3.0</span><span class="p">)</span> <span class="c1">#Notice the epsilon parameter is 3.0</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_7_0.png"></p>
<p>As you can see, setting <span class="math">\(\epsilon = 3.0\)</span> produces a nice looking simplicial complex that captures the single 1-dimensional "hole" in the original data.</p>
<p>However, let's play around with <span class="math">\(\epsilon\)</span> to see how it changes our complex.</p>
<div class="highlight"><pre><span></span><span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">2.0</span><span class="p">)</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_9_0.png"></p>
<p>We decreased <span class="math">\(\epsilon\)</span> to <span class="math">\(2.0\)</span> and now we have a "break" in our circle. If we calculate the homology and Betti numbers of this complex, we will no longer have a 1-dimensional cycle present. We will only see a single connected component. </p>
<p>Let's decrease it a little bit more, to 1.9:</p>
<div class="highlight"><pre><span></span><span class="n">newData</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)))</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">1.9</span><span class="p">)</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_11_0.png"></p>
<p>Now we have three connected components and no cycles/holes in the complex. Ok, let's go the other direction and increase <span class="math">\(\epsilon\)</span> to 4.0:</p>
<div class="highlight"><pre><span></span><span class="n">newData</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)))</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">4.0</span><span class="p">)</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_13_0.png"></p>
<p>Unlike the decrease to 2.0, increasing <span class="math">\(\epsilon\)</span> to 4.0 hasn't changed anything about our homology groups. We still have a single connected component and a single 1-dimensional cycle.</p>
<p>Let's make an even bigger jump and set <span class="math">\(\epsilon = 7.0\)</span>, an increase of 3.</p>
<div class="highlight"><pre><span></span><span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">7.0</span><span class="p">)</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_15_0.png"></p>
<p>So even though we've gone up by 4 units from our original nice value of 3.0, we still get a complex with the same topological features: a single connected component and a 1-dimensional cycle.</p>
<p>This is the primary insight of <strong>persistence</strong> in persistent homology. These features are persistent over a wide range of <span class="math">\(\epsilon\)</span> scale parameters and thus are likely to be true features of the underlying data rather than noise.</p>
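<p>To get a feel for persistence computationally before the full machinery, here is a minimal sketch in plain numpy (not the post's <code>SimplicialComplex</code> module) that tracks only <span class="math">\(\beta_0\)</span>, the number of connected components of the <span class="math">\(\epsilon\)</span>-neighborhood graph, as <span class="math">\(\epsilon\)</span> grows. The union-find helper and the seeded sample are my own additions:</p>

```python
import numpy as np

def connected_components(points, epsilon):
    """Count connected components (Betti 0) of the epsilon-neighborhood
    graph using a simple union-find."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= epsilon:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(n)})

# Noisy circle, as in the post (seeded so the run is reproducible)
rng = np.random.RandomState(0)
theta = np.linspace(0, 2.0 * np.pi, 30)
points = np.stack([5.0 * np.cos(theta) + rng.uniform(-0.75, 0.75, 30),
                   5.0 * np.sin(theta) + rng.uniform(-0.75, 0.75, 30)], axis=1)

for eps in [1.0, 2.0, 3.0, 4.0]:
    print(eps, connected_components(points, eps))
```

For this noisy circle, the count drops quickly at small scales and then sits at 1 over a long range of <span class="math">\(\epsilon\)</span>; that long flat stretch is exactly the kind of persistence we're after.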
<p>We can diagram our findings in two major styles: a barcode or a persistence diagram (we'll only use barcodes here).
Here's what our barcode might look like for the above example:</p>
<p><img src="images/TDAimages/barcode_example.png" width="500px" /></p>
<blockquote>
<p>NOTE: I've prepared this barcode "by hand," i.e. it is not the precise computed barcode. I've highlighted the "true" topological features amongst the noise. <span class="math">\(H_0, H_1, H_2\)</span> refer to the respective homology groups and Betti numbers.</p>
</blockquote>
<p>Importantly, two different true topological features may exist at different scales and thus can only be captured with persistent homology; they would be missed by a simplicial complex built at any single fixed scale. For example, if our data contains a large circle next to a small circle, it is possible that at a small <span class="math">\(\epsilon\)</span> value only the small circle will be connected, giving rise to a single 1-dimensional hole, while at a larger <span class="math">\(\epsilon\)</span> the big circle will be connected but the small circle will get "filled in." So at no single <span class="math">\(\epsilon\)</span> value will both circles be revealed.</p>
<h4>Filtrations</h4>
<p>It turns out there is a relatively straightforward way to extend our previous work on calculating Betti numbers with boundary matrices to the setting of persistent homology where we're dealing with collections of ever expanding complexes.</p>
<p>We define a <em>filtration complex</em> as the sequence of simplicial complexes generated by continuously increasing the scale parameter <span class="math">\(\epsilon\)</span>.</p>
<p>But rather than building multiple simplicial complexes at various <span class="math">\(\epsilon\)</span> parameters and then combining them into a sequence, we can just build a single simplicial complex over our data using a large (maximal) <span class="math">\(\epsilon\)</span> value. But we will keep track of the distances between all pairs of points (we already do this with the algorithm we wrote), so we know at what <span class="math">\(\epsilon\)</span> scale each pair of points forms an edge. Thus "hidden" in any simplicial complex at some <span class="math">\(\epsilon\)</span> value is a filtration (a sequence of nested complexes) up to that value of <span class="math">\(\epsilon\)</span>.</p>
<p>Here's a really simple example:
<img src="images/TDAimages/simplicialComplex9a.png" /></p>
<p>So if we take the maximum scale, <span class="math">\(\epsilon = 4\)</span>, our simplicial complex is:
</p>
<div class="math">$$ S = \text{ { {0}, {1}, {2}, {0,1}, {2,0}, {1,2}, {0,1,2} } } $$</div>
<p>But if we keep track of the pair-wise distances between points (i.e. the length/weight of all the edges), then we already have the information necessary for a filtration.</p>
<p>Here are the weights (lengths) of each edge (1-simplex) in this simplicial complex (the vertical bars indicate weight/length):
</p>
<div class="math">$$ |{0,1}| = 1.4 \\
|{2,0}| = 2.2 \\
|{1,2}| = 3
$$</div>
<p>
And this is how we would use that information to build a filtration:
</p>
<div class="math">$$
S_0 \subseteq S_1 \subseteq S_2 \\
S_0 = \{\, \{0\}, \{1\}, \{2\} \,\} \\
S_1 = \{\, \{0\}, \{1\}, \{2\}, \{0,1\} \,\} \\
S_2 = \{\, \{0\}, \{1\}, \{2\}, \{0,1\}, \{2,0\}, \{1,2\}, \{0,1,2\} \,\} \\
$$</div>
<p>Basically, each simplex in a subcomplex of the filtration appears when its longest edge appears. So the 2-simplex {0,1,2} appears only once the edge {1,2} appears, since that edge is its longest and doesn't show up until <span class="math">\(\epsilon \geq 3\)</span>.</p>
<p>For it to be a filtration that we can use in our (future) algorithm, it needs to have a <strong>total order</strong>. A total order is an ordering of the simplices in our filtration such that there is a valid "less than" relationship between any two simplices (i.e. no two simplices are equal in "value"). The most famous example of a set with a total order is the natural numbers {0,1,2,3,4...}: since no two numbers are equal, we can always say one number is greater than or less than another.</p>
<p>How do we determine the "value" (henceforth: filter value) of a simplex in a filtration (and thus determine the ordering of the filtration)? Well, I already said part of it. The filter value of a simplex is partly determined by the length of its longest edge. But sometimes two distinct simplices have longest edges of the same length, so we have to define a hierarchy of rules for determining the value (the ordering) of our simplices.</p>
<p>For any two simplices <span class="math">\(\sigma_1, \sigma_2\)</span>:</p>
<ol>
<li>0-simplices must be less than 1-simplices, which must be less than 2-simplices, and so on. This implies that any face of a simplex (i.e. <span class="math">\(f \subset \sigma\)</span>) automatically comes before that simplex in the ordering. I.e. <span class="math">\(dim(\sigma_1) \lt dim(\sigma_2) \implies \sigma_1 \lt \sigma_2\)</span> (<span class="math">\(dim\)</span> = dimension; the symbol <span class="math">\(\implies\)</span> means "implies").</li>
<li>If <span class="math">\(\sigma_1, \sigma_2\)</span> are of equal dimension (and hence neither is a face of the other), then the value of each simplex is determined by its longest (highest-weight) 1-simplex (edge). In our example above, <span class="math">\(\{0,1\} \lt \{2,0\} \lt \{1,2\}\)</span> due to the weights of those edges. To compare higher-dimensional simplices, you still just compare them by the value of their longest edge. I.e. if <span class="math">\(dim(\sigma_1) = dim(\sigma_2)\)</span>, then <span class="math">\(max\_edge(\sigma_1) \lt max\_edge(\sigma_2) \implies \sigma_1 \lt \sigma_2\)</span>.</li>
<li>If <span class="math">\(\sigma_1, \sigma_2\)</span> are of equal dimension AND their longest edges are of equal value (i.e. their maximum-weight edges enter the filtration at the same <span class="math">\(\epsilon\)</span> value), then <span class="math">\(max\_vertex(\sigma_1) \lt max\_vertex(\sigma_2) \implies \sigma_1 \lt \sigma_2\)</span>. What is a maximum vertex? We just have to place an arbitrary ordering over the vertices, even though they all appear at the same time.</li>
</ol>
<blockquote>
<p>Just as an aside, we just discussed a <em>total order</em>. The counterpart to that idea is a <em>partial order</em>, where "less than" relationships are defined between some but not all elements, and some elements may be equal to others.</p>
</blockquote>
<p>Remember from part 3 how we set up the boundary matrices by setting the columns to represent the n-simplices in the n-chain group and the rows to represent the (n-1)-simplices in the (n-1)-chain group? Well, we can extend this procedure to calculate Betti numbers across an entire filtration complex in the following way.</p>
<p>Let's use the filtration from above:
</p>
<div class="math">$$
S_0 \subseteq S_1 \subseteq S_2 \\
S_0 = [\, \{0\}, \{1\}, \{2\} \,] \\
S_1 = [\, \{0\}, \{1\}, \{2\}, \{0,1\} \,] \\
S_2 = S = [\, \{0\}, \{1\}, \{2\}, \{0,1\}, \{2,0\}, \{1,2\}, \{0,1,2\} \,] \\
$$</div>
<p>
Notice I already have the simplices in each subcomplex of the filtration in order (I've imposed a total order on the set of simplices) indicated by the square brackets rather than curly braces (although I may abuse this notation).</p>
<p>So we'll build a boundary matrix for the full filtration in the same way we built individual boundary matrices for each homology group before. We'll make a square matrix where the columns (label: <span class="math">\(j\)</span>) and rows (label: <span class="math">\(i\)</span>) are the simplices in the filtration in their proper (total) order. </p>
<p>Then, as before, we set each cell <span class="math">\([i,j] = 1\)</span> if <span class="math">\(\sigma_i\)</span> is a face of <span class="math">\(\sigma_j\)</span> of exactly one dimension lower, i.e. <span class="math">\(\sigma_i\)</span> is in the boundary of <span class="math">\(\sigma_j\)</span> (<span class="math">\(\sigma\)</span> meaning simplex). All other cells are <span class="math">\(0\)</span>.</p>
<p>Here's what it looks like in our very small filtration from above:</p>
<div class="math">$$
\partial_{filtration} =
\begin{array}{c|lcr}
\partial & \{0\} & \{1\} & \{2\} & \{0,1\} & \{2,0\} & \{1,2\} & \{0,1,2\} \\
\hline
\{0\} & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
\{1\} & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\
\{2\} & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\
\{0,1\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{2,0\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{1,2\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{0,1,2\} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}
$$</div>
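<p>This construction is mechanical enough to sketch in a few lines (the helper below is my own; simplices are stored as sorted tuples in filtration order):</p>

```python
import numpy as np
from itertools import combinations

def filtration_boundary_matrix(ordered):
    """Cell [i,j] = 1 when simplex i is a boundary (codimension-1) face
    of simplex j; the filtration's total order indexes rows and columns."""
    n = len(ordered)
    B = np.zeros((n, n), dtype=int)
    for j, sj in enumerate(ordered):
        if len(sj) < 2:
            continue  # vertices have an empty boundary
        for face in combinations(sj, len(sj) - 1):
            B[ordered.index(face), j] = 1
    return B

ordered = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(filtration_boundary_matrix(ordered))
```

Running it on the ordered filtration reproduces the matrix just shown.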
<p>As before, we will apply an algorithm to change the form of this matrix. However, unlike before, we are <em>not</em> going to convert this boundary matrix into Smith normal form; we are going to change it into something else called column-echelon form. This conversion process is called a <strong>matrix reduction</strong>, implying we're reducing it into a simpler form.</p>
<h5>Matrix reduction</h5>
<p>Now here's where I have to apologize for a mistake I made in our last post, because I never explained <em>why</em> we had to convert our boundary matrix into Smith normal form, I just told you <em>how</em> to do it.</p>
<p>So here's the deal: our boundary matrices from before gave us a linear map from an n-chain group down to the (n-1)-chain group. We could just multiply the boundary matrix by any element in the n-chain and the result would be the corresponding (mapped) element in the (n-1)-chain. When we reduced the matrix to Smith normal form, we altered the boundary matrix such that we could no longer just multiply by it to map elements in that way. What we actually did was apply another linear map over our boundary matrix, the result being the Smith normal form.</p>
<p>More formally, the Smith normal form <span class="math">\(R\)</span> of a matrix <span class="math">\(A\)</span> is the matrix product: <span class="math">\(R = SAT\)</span> where <span class="math">\(S,T\)</span> are other matrices. Hence we have a composition of linear maps that forms <span class="math">\(R\)</span>, and we can in principle decompose <span class="math">\(R\)</span> into the individual linear maps (matrices) that compose it.</p>
<p>So the algorithm for reducing to Smith normal form is essentially finding two other matrices <span class="math">\(S,T\)</span> such that <span class="math">\(SAT\)</span> produces a matrix with 1s along the diagonal (at least partially).</p>
<p>But why do we do that? Well, remember that a matrix being a linear map means it maps one vector space to another. If we have a matrix <span class="math">\(M: V_1 \rightarrow V_2\)</span>, then it is mapping the basis vectors in <span class="math">\(V_1\)</span> to basis vectors in <span class="math">\(V_2\)</span>. So when we reduce a matrix, we're essentially redefining the basis vectors in each vector space. It just so happens that Smith normal form finds the bases that form cycles and boundaries. There are many different types of reduced matrix forms that have useful interpretations and properties. I'm not going to get into any more of the mathematics here; I just wanted to give a little more explanation of this voodoo matrix reduction we're doing.</p>
<p>When we reduce a filtration boundary matrix into column-echelon form via an algorithm, it tells us the information about when certain topological features at each dimension are formed or "die" (by being subsumed into a larger feature) at various stages in the filtration (i.e. at increasing values of <span class="math">\(\epsilon\)</span>, via our total order implied on the filtration). Hence, once we reduce the boundary matrix, all we need to do is read off the information as intervals when features are born and die, and then we can graph those intervals as a barcode plot.</p>
<p>The column-echelon form <span class="math">\(C\)</span> is likewise a composition of linear maps, <span class="math">\(C = BV\)</span>, where <span class="math">\(B\)</span> is a filtration boundary matrix and <span class="math">\(V\)</span> is a matrix that makes the composition work (column operations correspond to multiplying on the right). We will actually keep a copy of <span class="math">\(V\)</span> once we're done reducing <span class="math">\(B\)</span>, because <span class="math">\(V\)</span> records the information necessary to determine which data points lie on interesting topological features.</p>
<p>The general algorithm for reducing a matrix to column-echelon form is a type of <a href="https://en.wikipedia.org/wiki/Gaussian_elimination">Gaussian elimination</a>:</p>
<div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">j</span> <span class="o">=</span> <span class="mi">1</span> <span class="n">to</span> <span class="n">n</span>
<span class="k">while</span> <span class="n">there</span> <span class="n">exists</span> <span class="n">i</span> <span class="o"><</span> <span class="n">j</span> <span class="k">with</span> <span class="n">low</span><span class="p">(</span><span class="n">i</span><span class="p">)</span> <span class="o">=</span> <span class="n">low</span><span class="p">(</span><span class="n">j</span><span class="p">)</span>
<span class="n">add</span> <span class="n">column</span> <span class="n">i</span> <span class="n">to</span> <span class="n">column</span> <span class="n">j</span>
<span class="n">end</span> <span class="k">while</span>
<span class="n">end</span> <span class="k">for</span>
</pre></div>
<p>The function <code>low</code> accepts a column <code>j</code> and returns the row index of the lowest <span class="math">\(1\)</span> in that column. For example, if we have the column:</p>
<div class="math">$$ j = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} $$</div>
<p>then <code>low(j) = 3</code> (with indexing starting from 0), since the lowest <span class="math">\(1\)</span> in the column is in the fourth row (which is index 3).</p>
<p>So basically the algorithm scans each column in the matrix from left to right: if we're currently at column <code>j</code>, the algorithm looks for a column <code>i</code> before <code>j</code> such that <code>low(i) == low(j)</code>, and if it finds such a column <code>i</code>, it adds that column to <code>j</code>. We keep a log of every time we add one column to another, in the form of another matrix. If a column is all zeros, then <code>low(j) = -1</code> (meaning undefined).</p>
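<p>Here is a direct Python sketch of that pseudocode, including the log of column additions as a matrix <code>V</code> (the function names are mine; the log is updated by whole-column additions, which agrees with cell-by-cell recording whenever the added column hasn't itself been modified):</p>

```python
import numpy as np

def low(col):
    """Row index of the lowest 1 in a column, or -1 if the column is zero."""
    ones = np.flatnonzero(col)
    return int(ones[-1]) if len(ones) else -1

def reduce_boundary_matrix(B):
    """Column-reduce B over Z/2 (Gaussian-elimination style).
    Returns the reduced matrix R and the log of column additions V,
    which satisfy R = B V (mod 2)."""
    R = B.copy() % 2
    n = R.shape[1]
    V = np.eye(n, dtype=int)
    for j in range(n):
        done = False
        while not done:
            done = True
            for i in range(j):
                if low(R[:, i]) != -1 and low(R[:, i]) == low(R[:, j]):
                    R[:, j] = (R[:, j] + R[:, i]) % 2  # 1 + 1 = 0 in Z/2
                    V[:, j] = (V[:, j] + V[:, i]) % 2  # log the addition
                    done = False
                    break
    return R, V
```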
<p>Let's try out the algorithm by hand on our boundary matrix from above. I've removed the column/row labels to be more concise:</p>
<div class="math">$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$</div>
<p>So remember, columns are some index <code>j</code> and rows are some index <code>i</code>. We scan from left to right. The first 3 columns are all zeros, so <code>low(j)</code> is undefined and we don't do anything. When we get to column 4 (index <code>j=3</code>), all the prior columns are zero, so there's also nothing to do. When we get to column 5 (index <code>j=4</code>), <code>low(4) = 2</code> and <code>low(3) = 1</code>, so since <code>low(4) != low(3)</code> we don't do anything and just move on. It isn't until we get to column 6 (index <code>j=5</code>) that there is a column <code>i < j</code> (in this case index <code>4 < 5</code>) such that <code>low(4) = low(5)</code>. So we add column 5 to column 6. Since these are binary columns (we're working over the field <span class="math">\(\mathbb Z_2\)</span>), <span class="math">\(1+1=0\)</span>. The result of adding column 5 to 6 is shown below:</p>
<div class="math">$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$</div>
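<p>As a quick sanity check, this <span class="math">\(\mathbb Z_2\)</span> column addition is just elementwise addition followed by mod 2 in numpy:</p>

```python
import numpy as np

# Columns 5 and 6 of the boundary matrix above ({2,0} and {1,2})
col5 = np.array([1, 0, 1, 0, 0, 0, 0])
col6 = np.array([0, 1, 1, 0, 0, 0, 0])
print((col5 + col6) % 2)  # the shared 1 in row index 2 cancels
```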
<p>Now we continue on to the end, and the last column's lowest 1 is in a unique row so we don't do anything. Now we start again from the beginning on the left. We get to column 6 (index <code>j=5</code>) and we find that column 4 has the same lowest 1, <code>low(3) = low(5)</code>, so we add column 4 to 6. The result is shown below:</p>
<div class="math">$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$</div>
<p>Look, we now have a new column of all zeros! What does this mean? It means that column represents a new topological feature: either a connected component or some n-dimensional cycle. In this case it represents a 1-dimensional cycle, the cycle formed by the three 1-simplices.</p>
<p>Notice now that the matrix is fully reduced to column-echelon form, since all the lowest <span class="math">\(1\)</span>s are in unique rows, so our algorithm halts in satisfaction. Now that the boundary matrix is reduced, it is no longer the case that each column and row represents a single simplex in the filtration. Since we've been adding columns together, each column may represent multiple simplices in the filtration. In this case, we only added columns together two times, and both times we were adding to column 6 (index <code>j = 5</code>), so column 6 represents the simplices from columns 4 and 5 (which happen to be {0,1} and {2,0}) along with its own. So column 6 is the group of simplices <span class="math">\(\{ \{0,1\}, \{2,0\}, \{1,2\} \}\)</span>, and if you refer back to the graphical depiction of the complex, those 1-simplices form a 1-dimensional cycle (albeit immediately killed off by the 2-simplex {0,1,2}).</p>
<p>It is important to keep track of what the algorithm does so we can find out what each column represents when the algorithm is done. We do this by setting up another matrix that I call the <em>memory matrix</em>. It starts off just being the identity matrix with the same dimensions as the boundary matrix.</p>
<div class="math">$$
M_{memory} =
\begin{Bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{Bmatrix}
$$</div>
<p>But every time we add a column <code>i</code> to column <code>j</code> in our reducing algorithm, we record the change in the memory matrix by putting a <code>1</code> in the cell <code>[i,j]</code>. So in our case, we recorded the events of adding columns 4 and 5 to column 6. Hence in our memory matrix, we will put a 1 in the cells <code>[3,5]</code> and <code>[4,5]</code> (using indices). This is shown below:</p>
<div class="math">$$
M_{memory} =
\begin{Bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{Bmatrix}
$$</div>
<p>Once the algorithm is done running, we can always refer to this memory matrix to remember what the algorithm actually did and figure out what the columns in the reduced boundary matrix represent.</p>
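<p>Reading that information off is simple: the simplices represented by a reduced column <code>j</code> are the nonzero rows of column <code>j</code> of the memory matrix. A small sketch using the example's memory matrix (the labels are just the simplices in filtration order):</p>

```python
import numpy as np

# Memory matrix from above: columns 4 and 5 (indices 3 and 4)
# were added into column 6 (index 5)
M = np.eye(7, dtype=int)
M[3, 5] = 1
M[4, 5] = 1
labels = ['{0}', '{1}', '{2}', '{0,1}', '{2,0}', '{1,2}', '{0,1,2}']
# Reduced column j represents the simplices at the nonzero rows of M[:, j]
members = [labels[i] for i in np.flatnonzero(M[:, 5])]
print(members)
```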
<p>Let's refer back to our <em>reduced</em> (column-echelon form) boundary matrix of the filtration:</p>
<div class="math">$$
\partial_{reduced} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$</div>
<p>To record the intervals of birth and death of topological features, we simply scan each column from left to right. If column <code>j</code> has all zeros (i.e. <code>low(j) = -1</code>) then we record this as the birth of a new feature (being whatever column <code>j</code> represents, maybe a single simplex, maybe a group of simplices). </p>
<p>Otherwise, if a column is not all zeros but has some 1s in it, then we say that the column with index equal to <code>low(j)</code> dies at <code>j</code>, and hence is the end point of the interval for that feature.</p>
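<p>Those two rules can be sketched directly in code (a self-contained version of the scan, with <code>low</code> defined as before):</p>

```python
import numpy as np

def low(col):
    """Row index of the lowest 1 in a column, or -1 if the column is zero."""
    ones = np.flatnonzero(col)
    return int(ones[-1]) if len(ones) else -1

def read_intervals(R):
    """Scan a reduced boundary matrix left to right, recording birth/death
    pairs of features; an end point of -1 means the feature never dies."""
    intervals = {}
    for j in range(R.shape[1]):
        piv = low(R[:, j])
        if piv == -1:
            intervals[j] = [j, -1]   # a zero column: a feature is born
        else:
            intervals[piv][1] = j    # the feature born at piv dies at j
    return sorted(intervals.values())

# The reduced boundary matrix of the example filtration
R = np.array([
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0]])
print(read_intervals(R))  # → [[0, -1], [1, 3], [2, 4], [5, 6]]
```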
<p>So in our case, all three vertices (the first three columns) are new features that are born (their columns are all zeros, <code>low(j) = -1</code>) so we record 3 new intervals with starting points being their column indices. Since we're scanning sequentially from left to right, we don't yet know if or when these features will die, so we'll just tentatively set the end point as <code>-1</code> to indicate the end or infinity. Here are the first three intervals:</p>
<div class="highlight"><pre><span></span><span class="c1">#Remember the start and end points are column indices</span>
<span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
</pre></div>
<p>Then we keep scanning left to right and hit column 4 (index <code>j=3</code>), where we calculate <code>low(3) = 1</code>. This means the feature that was born in column <code>j=1</code> (column 2) just died at <code>j=3</code>. Now we can go back and update the tentative end point for that interval; our updated intervals are:</p>
<div class="highlight"><pre><span></span><span class="c1">#updating intervals...</span>
<span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
</pre></div>
<p>So we just continue this process until the last column and we get all our intervals:</p>
<div class="highlight"><pre><span></span><span class="c1">#The final set of intervals</span>
<span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="mi">4</span><span class="p">],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span><span class="mi">6</span><span class="p">]</span>
</pre></div>
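<p>That left-to-right scan is easy to mechanize. Here is a minimal sketch (the function name <code>read_intervals</code> is mine; it takes the reduced boundary matrix as a numpy array). On the reduced matrix above it returns <code>[[0, -1], [1, 3], [2, 4], [5, 6]]</code>, matching the intervals we just found by hand.</p>

```python
import numpy as np

def read_intervals(R):
    """Scan the reduced boundary matrix column by column, pairing births
    (zero columns) with deaths (columns whose low(j) points at the birth
    column). An end point of -1 means the feature never dies."""
    def low(j):
        rows = np.flatnonzero(R[:, j])
        return rows[-1] if len(rows) else -1

    intervals = []
    for j in range(R.shape[1]):
        piv = low(j)
        if piv == -1:
            intervals.append([j, -1])  # column j is zero: a new feature is born
        else:
            for interval in intervals:
                if interval[0] == piv:  # the feature born at column piv dies at j
                    interval[1] = j
    return intervals
```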
<p>The first three features are 0-simplices and since they are dimension 0, they represent the connected components of the filtration. The 4th feature is the 1-dimensional cycle since its interval indices refer to a group of 1-simplices.</p>
<p>Believe it or not, we've just done persistent homology. That's all there is to it. Once we have the intervals, all we need to do is graph them as a barcode. We should convert the start/end points in these intervals to values of <span class="math">\(\epsilon\)</span> by referring back to our set of weights on the edges and assigning an <span class="math">\(\epsilon\)</span> value (the value of <span class="math">\(\epsilon\)</span> that results in the formation of a particular simplex in the filtration) to each simplex. Here's the barcode:</p>
<p><img src="images/TDAimages/barcode_example2.png" width="300px" /></p>
<blockquote>
<p>I drew a dot in the <span class="math">\(H_1\)</span> group to indicate that the 1-dimensional cycle is born and immediately dies at the same point (since as soon as it forms, the 2-simplex subsumes it). Most real barcodes do not produce these dots; we don't care about such ephemeral features.</p>
</blockquote>
<p>Notice how we have a bar in the <span class="math">\(H_0\)</span> group that is significantly longer than the other two. This suggests our data has only 1 connected component. Groups <span class="math">\(H_1, H_2\)</span> don't really have any bars so our data doesn't have any true holes/cycles. Of course with a more realistic data set we would expect to find some cycles.</p>
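<p>The index-to-epsilon conversion mentioned above can be sketched as a small helper (the function name <code>intervals_to_epsilon</code> is mine, and the filter values below are made-up example numbers; in our code the filter values will come from the sorted filtration):</p>

```python
def intervals_to_epsilon(intervals, filter_values, eps_max):
    """Map interval end points (column indices) to epsilon values.
    filter_values[i] is the epsilon at which the simplex in column i appears.
    A death index of -1 (a feature that never dies) is clamped to eps_max,
    the largest epsilon of the filtration, so the bar spans the whole barcode."""
    return [[filter_values[birth], eps_max if death == -1 else filter_values[death]]
            for birth, death in intervals]
```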
<h4>Let's write some code</h4>
<p>Alright, so we've covered the conceptual framework for computing persistent homology. Let's actually write some code to compute persistent homology on (somewhat) realistic data. I'm not going to spend too much effort explaining all the code and how it works, since I'm more concerned with explaining things in abstract terms so you can go write your own algorithms; I've tried to add inline comments that should help. Also keep in mind that since this is educational, these algorithms and data structures will <em>not</em> be very efficient, but they will be simple. I hope to write a follow-up post at some point that demonstrates how to make efficient versions of these algorithms and data structures.</p>
<p>Let's start by constructing a simple simplicial complex using the code we wrote in part 4.</p>
<div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">],[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">],[</span><span class="mi">6</span><span class="p">,</span><span class="mi">1</span><span class="p">],[</span><span class="mi">6</span><span class="p">,</span><span class="mi">4</span><span class="p">]])</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="c1">#for example... this is with a small epsilon, to illustrate the presence of a 1-dimensional cycle</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">5.1</span><span class="p">)</span>
<span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">nodes</span><span class="o">=</span><span class="n">graph</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">edges</span><span class="o">=</span><span class="n">graph</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">,</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">7</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">5</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_33_0.png"></p>
<p>So our simplicial complex is just a box. It obviously has 1 connected component and a 1-dimensional cycle. If we keep increasing <span class="math">\(\epsilon\)</span> then the box will "fill in" and we'll get a maximal simplex with all four points forming a 3-dimensional simplex (tetrahedron).</p>
<blockquote>
<p>Note, I have modified the <code>SimplicialComplex</code> library a bit (mostly cosmetic/stylistic changes) since <a href="http://outlace.com/Topological+Data+Analysis+Tutorial+-+Part+4/">part 4</a>. Refer to the <a href="https://github.com/outlace/outlace.github.io">GitHub project</a> for changes.</p>
</blockquote>
<p>Next we're going to modify the functions from the original <code>SimplicialComplex</code> library from part 4 so that they work with filtrations rather than ordinary simplicial complexes.</p>
<p>So I'm just going to drop a block of code on you now and describe what each function does. The <code>buildGraph</code> function is the same as before. But we have several new functions: <code>ripsFiltration</code>, <code>getFilterValue</code>, <code>compare</code> and <code>sortComplex</code>.</p>
<p>The <code>ripsFiltration</code> function accepts the graph object from <code>buildGraph</code> and a maximal dimension <code>k</code> (i.e. the highest dimension of simplices we will bother calculating) and returns a simplicial complex object sorted by filter values. The filter values are determined as described above. We have a <code>sortComplex</code> function that takes a complex and filter values and returns the sorted complex.</p>
<p>So the only difference between our previous simplicial complex function and the <code>ripsFiltration</code> function is that the latter also generates filter values for each simplex in the complex and imposes a total order on the simplices in the filtration.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">itertools</span>
<span class="kn">import</span> <span class="nn">functools</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">euclidianDist</span><span class="p">(</span><span class="n">a</span><span class="p">,</span><span class="n">b</span><span class="p">):</span> <span class="c1">#this is the default metric we use but you can use whatever distance function you want</span>
<span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="n">linalg</span><span class="o">.</span><span class="n">norm</span><span class="p">(</span><span class="n">a</span> <span class="o">-</span> <span class="n">b</span><span class="p">)</span> <span class="c1">#euclidian distance metric</span>
<span class="c1">#Build neighorbood graph</span>
<span class="k">def</span> <span class="nf">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="p">,</span> <span class="n">epsilon</span> <span class="o">=</span> <span class="mf">3.1</span><span class="p">,</span> <span class="n">metric</span><span class="o">=</span><span class="n">euclidianDist</span><span class="p">):</span> <span class="c1">#raw_data is a numpy array</span>
<span class="n">nodes</span> <span class="o">=</span> <span class="p">[</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">])]</span> <span class="c1">#initialize node set, reference indices from original data array</span>
<span class="n">edges</span> <span class="o">=</span> <span class="p">[]</span> <span class="c1">#initialize empty edge array</span>
<span class="n">weights</span> <span class="o">=</span> <span class="p">[]</span> <span class="c1">#initialize weight array, stores the weight (which in this case is the distance) for each edge</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span> <span class="c1">#iterate through each data point</span>
<span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">i</span><span class="p">):</span> <span class="c1">#inner loop to calculate pairwise point distances</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">]</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">]</span> <span class="c1">#each simplex is a set (no order), hence [0,1] = [1,0]; so only store one</span>
<span class="k">if</span> <span class="p">(</span><span class="n">i</span> <span class="o">!=</span> <span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">):</span>
<span class="n">dist</span> <span class="o">=</span> <span class="n">metric</span><span class="p">(</span><span class="n">a</span><span class="p">,</span><span class="n">b</span><span class="p">)</span>
<span class="k">if</span> <span class="n">dist</span> <span class="o"><=</span> <span class="n">epsilon</span><span class="p">:</span>
<span class="n">edges</span><span class="o">.</span><span class="n">append</span><span class="p">({</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">})</span> <span class="c1">#add edge if distance between points is < epsilon</span>
<span class="n">weights</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">dist</span><span class="p">)</span>
<span class="k">return</span> <span class="n">nodes</span><span class="p">,</span><span class="n">edges</span><span class="p">,</span><span class="n">weights</span>
<span class="k">def</span> <span class="nf">lower_nbrs</span><span class="p">(</span><span class="n">nodeSet</span><span class="p">,</span> <span class="n">edgeSet</span><span class="p">,</span> <span class="n">node</span><span class="p">):</span> <span class="c1">#lowest neighbors based on arbitrary ordering of simplices</span>
<span class="k">return</span> <span class="p">{</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">nodeSet</span> <span class="k">if</span> <span class="p">{</span><span class="n">x</span><span class="p">,</span><span class="n">node</span><span class="p">}</span> <span class="ow">in</span> <span class="n">edgeSet</span> <span class="ow">and</span> <span class="n">node</span> <span class="o">></span> <span class="n">x</span><span class="p">}</span>
<span class="k">def</span> <span class="nf">ripsFiltration</span><span class="p">(</span><span class="n">graph</span><span class="p">,</span> <span class="n">k</span><span class="p">):</span> <span class="c1">#k is the maximal dimension we want to compute (minimum is 1, edges)</span>
<span class="n">nodes</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="n">weights</span> <span class="o">=</span> <span class="n">graph</span>
<span class="n">VRcomplex</span> <span class="o">=</span> <span class="p">[{</span><span class="n">n</span><span class="p">}</span> <span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">nodes</span><span class="p">]</span>
<span class="n">filter_values</span> <span class="o">=</span> <span class="p">[</span><span class="mi">0</span> <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="n">VRcomplex</span><span class="p">]</span> <span class="c1">#vertices have filter value of 0</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">edges</span><span class="p">)):</span> <span class="c1">#add 1-simplices (edges) and associated filter values</span>
<span class="n">VRcomplex</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">edges</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>
<span class="n">filter_values</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">weights</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>
<span class="k">if</span> <span class="n">k</span> <span class="o">></span> <span class="mi">1</span><span class="p">:</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">k</span><span class="p">):</span>
<span class="k">for</span> <span class="n">simplex</span> <span class="ow">in</span> <span class="p">[</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">VRcomplex</span> <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">==</span><span class="n">i</span><span class="o">+</span><span class="mi">2</span><span class="p">]:</span> <span class="c1">#skip 0-simplices and 1-simplices</span>
<span class="c1">#for each u in simplex</span>
<span class="n">nbrs</span> <span class="o">=</span> <span class="nb">set</span><span class="o">.</span><span class="n">intersection</span><span class="p">(</span><span class="o">*</span><span class="p">[</span><span class="n">lower_nbrs</span><span class="p">(</span><span class="n">nodes</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="n">z</span><span class="p">)</span> <span class="k">for</span> <span class="n">z</span> <span class="ow">in</span> <span class="n">simplex</span><span class="p">])</span>
<span class="k">for</span> <span class="n">nbr</span> <span class="ow">in</span> <span class="n">nbrs</span><span class="p">:</span>
<span class="n">newSimplex</span> <span class="o">=</span> <span class="nb">set</span><span class="o">.</span><span class="n">union</span><span class="p">(</span><span class="n">simplex</span><span class="p">,{</span><span class="n">nbr</span><span class="p">})</span>
<span class="n">VRcomplex</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">newSimplex</span><span class="p">)</span>
<span class="n">filter_values</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">getFilterValue</span><span class="p">(</span><span class="n">newSimplex</span><span class="p">,</span> <span class="n">VRcomplex</span><span class="p">,</span> <span class="n">filter_values</span><span class="p">))</span>
<span class="k">return</span> <span class="n">sortComplex</span><span class="p">(</span><span class="n">VRcomplex</span><span class="p">,</span> <span class="n">filter_values</span><span class="p">)</span> <span class="c1">#sort simplices according to filter values</span>
<span class="k">def</span> <span class="nf">getFilterValue</span><span class="p">(</span><span class="n">simplex</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="n">weights</span><span class="p">):</span> <span class="c1">#filter value is the maximum weight of an edge in the simplex</span>
<span class="n">oneSimplices</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">itertools</span><span class="o">.</span><span class="n">combinations</span><span class="p">(</span><span class="n">simplex</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span> <span class="c1">#get set of 1-simplices in the simplex</span>
<span class="n">max_weight</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="n">oneSimplex</span> <span class="ow">in</span> <span class="n">oneSimplices</span><span class="p">:</span>
<span class="n">filter_value</span> <span class="o">=</span> <span class="n">weights</span><span class="p">[</span><span class="n">edges</span><span class="o">.</span><span class="n">index</span><span class="p">(</span><span class="nb">set</span><span class="p">(</span><span class="n">oneSimplex</span><span class="p">))]</span>
<span class="k">if</span> <span class="n">filter_value</span> <span class="o">></span> <span class="n">max_weight</span><span class="p">:</span> <span class="n">max_weight</span> <span class="o">=</span> <span class="n">filter_value</span>
<span class="k">return</span> <span class="n">max_weight</span>
<span class="k">def</span> <span class="nf">compare</span><span class="p">(</span><span class="n">item1</span><span class="p">,</span> <span class="n">item2</span><span class="p">):</span>
<span class="c1">#comparison function that will provide the basis for our total order on the simpices</span>
<span class="c1">#each item represents a simplex, bundled as a list [simplex, filter value] e.g. [{0,1}, 4]</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">item1</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="o">==</span> <span class="nb">len</span><span class="p">(</span><span class="n">item2</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span>
<span class="k">if</span> <span class="n">item1</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">==</span> <span class="n">item2</span><span class="p">[</span><span class="mi">1</span><span class="p">]:</span> <span class="c1">#if both items have same filter value</span>
<span class="k">if</span> <span class="nb">sum</span><span class="p">(</span><span class="n">item1</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="o">></span> <span class="nb">sum</span><span class="p">(</span><span class="n">item2</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span>
<span class="k">return</span> <span class="mi">1</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="o">-</span><span class="mi">1</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">if</span> <span class="n">item1</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">></span> <span class="n">item2</span><span class="p">[</span><span class="mi">1</span><span class="p">]:</span>
<span class="k">return</span> <span class="mi">1</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="o">-</span><span class="mi">1</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">item1</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="o">></span> <span class="nb">len</span><span class="p">(</span><span class="n">item2</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span>
<span class="k">return</span> <span class="mi">1</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="o">-</span><span class="mi">1</span>
<span class="k">def</span> <span class="nf">sortComplex</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">,</span> <span class="n">filterValues</span><span class="p">):</span> <span class="c1">#need simplices in filtration have a total order</span>
<span class="c1">#sort simplices in filtration by filter values</span>
<span class="n">pairedList</span> <span class="o">=</span> <span class="nb">zip</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">,</span> <span class="n">filterValues</span><span class="p">)</span>
<span class="c1">#since I'm using Python 3.5+, no longer supports custom compare, need conversion helper function..its ok</span>
<span class="n">sortedComplex</span> <span class="o">=</span> <span class="nb">sorted</span><span class="p">(</span><span class="n">pairedList</span><span class="p">,</span> <span class="n">key</span><span class="o">=</span><span class="n">functools</span><span class="o">.</span><span class="n">cmp_to_key</span><span class="p">(</span><span class="n">compare</span><span class="p">))</span>
<span class="n">sortedComplex</span> <span class="o">=</span> <span class="p">[</span><span class="nb">list</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="o">*</span><span class="n">sortedComplex</span><span class="p">)]</span>
<span class="c1">#then sort >= 1 simplices in each chain group by the arbitrary total order on the vertices</span>
<span class="n">orderValues</span> <span class="o">=</span> <span class="p">[</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">))]</span>
<span class="k">return</span> <span class="n">sortedComplex</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph2</span> <span class="o">=</span> <span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mi">7</span><span class="p">)</span> <span class="c1">#epsilon = 9 will build a "maximal complex"</span>
<span class="n">ripsComplex2</span> <span class="o">=</span> <span class="n">ripsFiltration</span><span class="p">(</span><span class="n">graph2</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex2</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">7</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">5</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_38_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">ripsComplex2</span>
</pre></div>
<div class="highlight"><pre><span></span>[[{0},
{1},
{2},
{3},
{0, 1},
{2, 3},
{1, 2},
{0, 3},
{0, 2},
{1, 3},
{0, 1, 2},
{0, 1, 3},
{0, 2, 3},
{1, 2, 3},
{0, 1, 2, 3}],
[0,
0,
0,
0,
3.0,
3.0,
5.0,
5.0,
5.8309518948453007,
5.8309518948453007,
5.8309518948453007,
5.8309518948453007,
5.8309518948453007,
5.8309518948453007,
5.8309518948453007]]
</pre></div>
<div class="highlight"><pre><span></span><span class="c1">#return the n-simplices and weights in a complex</span>
<span class="k">def</span> <span class="nf">nSimplices</span><span class="p">(</span><span class="n">n</span><span class="p">,</span> <span class="n">filterComplex</span><span class="p">):</span>
<span class="n">nchain</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">nfilters</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">])):</span>
<span class="n">simplex</span> <span class="o">=</span> <span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="n">i</span><span class="p">]</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span> <span class="o">==</span> <span class="p">(</span><span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
<span class="n">nchain</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span>
<span class="n">nfilters</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="n">i</span><span class="p">])</span>
<span class="k">if</span> <span class="p">(</span><span class="n">nchain</span> <span class="o">==</span> <span class="p">[]):</span> <span class="n">nchain</span> <span class="o">=</span> <span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="k">return</span> <span class="n">nchain</span><span class="p">,</span> <span class="n">nfilters</span>
<span class="c1">#check if simplex is a face of another simplex</span>
<span class="k">def</span> <span class="nf">checkFace</span><span class="p">(</span><span class="n">face</span><span class="p">,</span> <span class="n">simplex</span><span class="p">):</span>
<span class="k">if</span> <span class="n">simplex</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="k">return</span> <span class="mi">1</span>
    <span class="k">elif</span> <span class="p">(</span><span class="nb">set</span><span class="p">(</span><span class="n">face</span><span class="p">)</span> <span class="o"><</span> <span class="nb">set</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span> <span class="ow">and</span> <span class="p">(</span> <span class="nb">len</span><span class="p">(</span><span class="n">face</span><span class="p">)</span> <span class="o">==</span> <span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="p">)):</span> <span class="c1">#face is an (n-1)-dimensional face of simplex</span>
        <span class="k">return</span> <span class="mi">1</span>
    <span class="k">else</span><span class="p">:</span>
        <span class="k">return</span> <span class="mi">0</span>
<span class="c1">#build the boundary matrix for dimension n ---> (n-1) = p</span>
<span class="k">def</span> <span class="nf">filterBoundaryMatrix</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">):</span>
    <span class="n">bmatrix</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="nb">len</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">]),</span><span class="nb">len</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">])),</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'>i8'</span><span class="p">)</span>
    <span class="c1">#bmatrix[0,:] = 0 #add "zero-th" dimension as first row/column, makes algorithm easier later on</span>
    <span class="c1">#bmatrix[:,0] = 0</span>
    <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="k">for</span> <span class="n">colSimplex</span> <span class="ow">in</span> <span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">]:</span>
        <span class="n">j</span> <span class="o">=</span> <span class="mi">0</span>
        <span class="k">for</span> <span class="n">rowSimplex</span> <span class="ow">in</span> <span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">]:</span>
            <span class="n">bmatrix</span><span class="p">[</span><span class="n">j</span><span class="p">,</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">checkFace</span><span class="p">(</span><span class="n">rowSimplex</span><span class="p">,</span> <span class="n">colSimplex</span><span class="p">)</span>
            <span class="n">j</span> <span class="o">+=</span> <span class="mi">1</span>
        <span class="n">i</span> <span class="o">+=</span> <span class="mi">1</span>
    <span class="k">return</span> <span class="n">bmatrix</span>
</pre></div>
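<p>To make the <code>elif</code> test above concrete, here is the same codimension-1 check in isolation (<code>is_codim1_face</code> is a hypothetical helper name introduced just for this sketch, not part of the code above):</p>

```python
# the subset test used in checkFace, in isolation: face is a proper
# subset of simplex with exactly one fewer vertex (a codimension-1 face)
def is_codim1_face(face, simplex):
    return set(face) < set(simplex) and len(face) == len(simplex) - 1

print(is_codim1_face((0, 1), (0, 1, 2)))  # True: the edge {0,1} bounds the triangle
print(is_codim1_face((0,), (0, 1, 2)))    # False: a vertex is two dimensions down
print(is_codim1_face((0, 1), (0, 1)))     # False: not a proper subset
```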
<div class="highlight"><pre><span></span><span class="n">bm</span> <span class="o">=</span> <span class="n">filterBoundaryMatrix</span><span class="p">(</span><span class="n">ripsComplex2</span><span class="p">)</span>
<span class="n">bm</span> <span class="c1">#Here is the (non-reduced) boundary matrix</span>
</pre></div>
<div class="highlight"><pre><span></span>array([[0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</pre></div>
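<p>A quick sanity check on this matrix (transcribed below by hand from the output above): the boundary of a boundary is zero, so composing the matrix with itself must vanish over &#8484;/2.</p>

```python
import numpy as np

# the (non-reduced) boundary matrix printed above, transcribed by hand
bm = np.array([
    [0,0,0,0,1,0,0,1,1,0,0,0,0,0,0],
    [0,0,0,0,1,0,1,0,0,1,0,0,0,0,0],
    [0,0,0,0,0,1,1,0,1,0,0,0,0,0,0],
    [0,0,0,0,0,1,0,1,0,1,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0,1,1,0,0,0],
    [0,0,0,0,0,0,0,0,0,0,0,0,1,1,0],
    [0,0,0,0,0,0,0,0,0,0,1,0,0,1,0],
    [0,0,0,0,0,0,0,0,0,0,0,1,1,0,0],
    [0,0,0,0,0,0,0,0,0,0,1,0,1,0,0],
    [0,0,0,0,0,0,0,0,0,0,0,1,0,1,0],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]])

# "the boundary of a boundary is zero": bm @ bm must be all zeros mod 2
assert not ((bm @ bm) % 2).any()
```

<p>Each column of <code>bm @ bm</code> sums the vertex boundaries of a simplex's edges (or the edge boundaries of its triangles), and every lower-dimensional face gets counted exactly twice, so everything cancels mod 2.</p>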
<p>The following functions reduce the boundary matrix by column additions over &#8484;/2, just as we did by hand above.</p>
<div class="highlight"><pre><span></span><span class="c1">#returns row index of lowest "1" in a column i in the boundary matrix</span>
<span class="k">def</span> <span class="nf">low</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">matrix</span><span class="p">):</span>
<span class="n">col</span> <span class="o">=</span> <span class="n">matrix</span><span class="p">[:,</span><span class="n">i</span><span class="p">]</span>
<span class="n">col_len</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">col</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span> <span class="p">(</span><span class="n">col_len</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">):</span> <span class="c1">#loop through column from bottom until you find the first 1</span>
<span class="k">if</span> <span class="n">col</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">==</span> <span class="mi">1</span><span class="p">:</span> <span class="k">return</span> <span class="n">i</span>
<span class="k">return</span> <span class="o">-</span><span class="mi">1</span> <span class="c1">#if no lowest 1 (e.g. column of all zeros), return -1 to be 'undefined'</span>
<span class="c1">#checks if the boundary matrix is fully reduced</span>
<span class="k">def</span> <span class="nf">isReduced</span><span class="p">(</span><span class="n">matrix</span><span class="p">):</span>
<span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">matrix</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]):</span> <span class="c1">#iterate through columns</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">j</span><span class="p">):</span> <span class="c1">#iterate through columns before column j</span>
<span class="n">low_j</span> <span class="o">=</span> <span class="n">low</span><span class="p">(</span><span class="n">j</span><span class="p">,</span> <span class="n">matrix</span><span class="p">)</span>
<span class="n">low_i</span> <span class="o">=</span> <span class="n">low</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">matrix</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="n">low_j</span> <span class="o">==</span> <span class="n">low_i</span> <span class="ow">and</span> <span class="n">low_j</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">):</span>
<span class="k">return</span> <span class="n">i</span><span class="p">,</span><span class="n">j</span> <span class="c1">#return column i to add to column j</span>
<span class="k">return</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]</span>
<span class="c1">#the main function: iteratively reduce the boundary matrix</span>
<span class="k">def</span> <span class="nf">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">matrix</span><span class="p">):</span>
    <span class="c1">#column additions are done mod 2 (over Z/2), i.e. bitwise XOR</span>
    <span class="n">reduced_matrix</span> <span class="o">=</span> <span class="n">matrix</span><span class="o">.</span><span class="n">copy</span><span class="p">()</span>
    <span class="n">matrix_shape</span> <span class="o">=</span> <span class="n">reduced_matrix</span><span class="o">.</span><span class="n">shape</span>
    <span class="n">memory</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">identity</span><span class="p">(</span><span class="n">matrix_shape</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'>i8'</span><span class="p">)</span> <span class="c1">#this matrix will record the column additions we make</span>
    <span class="n">r</span> <span class="o">=</span> <span class="n">isReduced</span><span class="p">(</span><span class="n">reduced_matrix</span><span class="p">)</span>
    <span class="k">while</span> <span class="p">(</span><span class="n">r</span> <span class="o">!=</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]):</span>
        <span class="n">i</span> <span class="o">=</span> <span class="n">r</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
        <span class="n">j</span> <span class="o">=</span> <span class="n">r</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
        <span class="n">col_j</span> <span class="o">=</span> <span class="n">reduced_matrix</span><span class="p">[:,</span><span class="n">j</span><span class="p">]</span>
        <span class="n">col_i</span> <span class="o">=</span> <span class="n">reduced_matrix</span><span class="p">[:,</span><span class="n">i</span><span class="p">]</span>
        <span class="c1">#print("Mod: add col %s to %s \n" % (i+1,j+1)) #Uncomment to see what mods are made</span>
        <span class="n">reduced_matrix</span><span class="p">[:,</span><span class="n">j</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">bitwise_xor</span><span class="p">(</span><span class="n">col_i</span><span class="p">,</span><span class="n">col_j</span><span class="p">)</span> <span class="c1">#add column i to column j, mod 2</span>
        <span class="n">memory</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span> <span class="c1">#record that column i was added to column j</span>
        <span class="n">r</span> <span class="o">=</span> <span class="n">isReduced</span><span class="p">(</span><span class="n">reduced_matrix</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">reduced_matrix</span><span class="p">,</span> <span class="n">memory</span>
</pre></div>
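<p>Before running the full reduction, it may help to see what <code>low</code> computes on a toy matrix. This sketch restates the same <code>low</code> logic so it runs on its own:</p>

```python
import numpy as np

# restatement of low() above so this snippet is self-contained:
# row index of the lowest 1 in column i, or -1 for a zero column
def low(i, matrix):
    col = matrix[:, i]
    for k in range(len(col) - 1, -1, -1):  # scan bottom-up for the first 1
        if col[k] == 1:
            return k
    return -1  # zero column: the lowest 1 is undefined

M = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 0]])
print(low(0, M))  # 1: the lowest 1 in column 0 sits in row 1
print(low(2, M))  # -1: column 2 is all zeros
```

<p>The reduction loop keeps adding earlier columns into later ones precisely until no two nonzero columns share the same <code>low</code> value.</p>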
<div class="highlight"><pre><span></span><span class="n">z</span> <span class="o">=</span> <span class="n">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">bm</span><span class="p">)</span>
<span class="n">z</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">6</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">8</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">7</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">8</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">5</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">8</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">7</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">9</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">5</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">9</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">6</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">10</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">7</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">10</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">11</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">13</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">12</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">14</span>
<span class="n">Mod</span><span class="o">:</span><span class="w"> </span><span class="n">add</span><span class="w"> </span><span class="n">col</span><span class="w"> </span><span class="mi">13</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="mi">14</span>
<span class="o">(</span><span class="n">array</span><span class="o">([[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">]]),</span>
<span class="w"> </span><span class="n">array</span><span class="o">([[</span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">0</span><span class="o">,</span><span class="w"> </span><span class="mi">1</span><span class="o">]]))</span>
</pre></div>
<p>So the <code>reduceBoundaryMatrix</code> function returns two matrices: the reduced boundary matrix and a <em>memory</em> matrix that records all the actions of the reduction algorithm. The memory matrix is necessary so we can look up what each column in the boundary matrix actually refers to: once the matrix is reduced, a column no longer necessarily represents a single simplex but possibly a group of simplices, such as some n-dimensional cycle.</p>
<p>The following functions use the reduced matrix to read off the intervals for all the features that are born and die throughout the filtration.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">readIntervals</span><span class="p">(</span><span class="n">reduced_matrix</span><span class="p">,</span> <span class="n">filterValues</span><span class="p">):</span> <span class="c1">#reduced_matrix includes the reduced boundary matrix AND the memory matrix</span>
    <span class="c1">#store intervals as a list of 2-element lists, e.g. [2,4] = start at "time" point 2, end at "time" point 4</span>
    <span class="c1">#note the "time" points are actually just the simplex index number for now. we will convert to epsilon value later</span>
    <span class="n">intervals</span> <span class="o">=</span> <span class="p">[]</span>
    <span class="c1">#loop through each column j</span>
    <span class="c1">#if low(j) = -1 (undefined, all zeros) then j signifies the birth of a new feature j</span>
    <span class="c1">#if low(j) = i (defined), then j signifies the death of feature i</span>
    <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">reduced_matrix</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]):</span> <span class="c1">#for each column (its a square matrix so doesn't matter...)</span>
        <span class="n">low_j</span> <span class="o">=</span> <span class="n">low</span><span class="p">(</span><span class="n">j</span><span class="p">,</span> <span class="n">reduced_matrix</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
        <span class="k">if</span> <span class="n">low_j</span> <span class="o">==</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span>
            <span class="n">interval_start</span> <span class="o">=</span> <span class="p">[</span><span class="n">j</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">]</span>
            <span class="n">intervals</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">interval_start</span><span class="p">)</span> <span class="c1"># -1 is a temporary placeholder until we update with death time</span>
            <span class="c1">#if no death time, then -1 signifies feature has no end (start -> infinity)</span>
            <span class="c1">#-1 turns out to be very useful because in python if we access the list x[-1] then that will return the</span>
            <span class="c1">#last element in that list. in effect if we leave the end point of an interval to be -1</span>
            <span class="c1"># then we're saying the feature lasts until the very end</span>
        <span class="k">else</span><span class="p">:</span> <span class="c1">#death of feature</span>
            <span class="n">feature</span> <span class="o">=</span> <span class="n">intervals</span><span class="o">.</span><span class="n">index</span><span class="p">([</span><span class="n">low_j</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">])</span> <span class="c1">#find the feature [start,end] so we can update the end point</span>
            <span class="n">intervals</span><span class="p">[</span><span class="n">feature</span><span class="p">][</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">j</span> <span class="c1">#j is the death point</span>
            <span class="c1">#if the interval start point and end point are the same, then this feature begins and dies instantly</span>
            <span class="c1">#so it is a useless interval and we dont want to waste memory keeping it</span>
            <span class="n">epsilon_start</span> <span class="o">=</span> <span class="n">filterValues</span><span class="p">[</span><span class="n">intervals</span><span class="p">[</span><span class="n">feature</span><span class="p">][</span><span class="mi">0</span><span class="p">]]</span>
            <span class="n">epsilon_end</span> <span class="o">=</span> <span class="n">filterValues</span><span class="p">[</span><span class="n">j</span><span class="p">]</span>
            <span class="k">if</span> <span class="n">epsilon_start</span> <span class="o">==</span> <span class="n">epsilon_end</span><span class="p">:</span> <span class="n">intervals</span><span class="o">.</span><span class="n">remove</span><span class="p">(</span><span class="n">intervals</span><span class="p">[</span><span class="n">feature</span><span class="p">])</span>
    <span class="k">return</span> <span class="n">intervals</span>
<span class="k">def</span> <span class="nf">readPersistence</span><span class="p">(</span><span class="n">intervals</span><span class="p">,</span> <span class="n">filterComplex</span><span class="p">):</span>
    <span class="c1">#this converts intervals into epsilon format and figures out which homology group each interval belongs to</span>
    <span class="n">persistence</span> <span class="o">=</span> <span class="p">[]</span>
    <span class="k">for</span> <span class="n">interval</span> <span class="ow">in</span> <span class="n">intervals</span><span class="p">:</span>
        <span class="n">start</span> <span class="o">=</span> <span class="n">interval</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
        <span class="n">end</span> <span class="o">=</span> <span class="n">interval</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
        <span class="n">homology_group</span> <span class="o">=</span> <span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">filterComplex</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="n">start</span><span class="p">])</span> <span class="o">-</span> <span class="mi">1</span><span class="p">)</span> <span class="c1">#filterComplex is a list of lists [complex, filter values]</span>
        <span class="n">epsilon_start</span> <span class="o">=</span> <span class="n">filterComplex</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="n">start</span><span class="p">]</span>
        <span class="n">epsilon_end</span> <span class="o">=</span> <span class="n">filterComplex</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="n">end</span><span class="p">]</span>
        <span class="n">persistence</span><span class="o">.</span><span class="n">append</span><span class="p">([</span><span class="n">homology_group</span><span class="p">,</span> <span class="p">[</span><span class="n">epsilon_start</span><span class="p">,</span> <span class="n">epsilon_end</span><span class="p">]])</span>
    <span class="k">return</span> <span class="n">persistence</span>
</pre></div>
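<p>Both functions depend on the <code>low(j)</code> helper we used earlier when reducing the boundary matrix. As a reminder, a minimal sketch of it (my own reconstruction; the earlier definition may differ in details) looks like this:</p>

```python
import numpy as np

def low(j, matrix):
    # Row index of the lowest (bottom-most) 1 in column j,
    # or -1 if the column is all zeros ("undefined")
    ones = np.nonzero(matrix[:, j] == 1)[0]
    return ones[-1] if len(ones) > 0 else -1

M = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 0, 0]])
print(low(0, M), low(1, M))  # -> -1 1
```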
<div class="highlight"><pre><span></span><span class="n">intervals</span> <span class="o">=</span> <span class="n">readIntervals</span><span class="p">(</span><span class="n">z</span><span class="p">,</span> <span class="n">ripsComplex2</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">intervals</span>
</pre></div>
<div class="highlight"><pre><span></span>[[0, -1], [1, 4], [2, 6], [3, 5], [7, 12]]
</pre></div>
<p>So those are all the intervals for the features that arise and die. The <code>readPersistence</code> function will just convert the start/end points from being indices in the boundary matrix to their corresponding <span class="math">\(\epsilon\)</span> value. It will also figure out to which homology group (i.e. which Betti number dimension) each interval belongs.</p>
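<p>To make the conversion concrete, here is a toy example with made-up data (a hypothetical two-vertex filtration, not the complex above). The homology dimension comes from the size of the simplex at the interval's start index, and the epsilon values come straight from the filter-value list:</p>

```python
# Hypothetical miniature filtration: [simplices, filter values]
filterComplex = [
    [{0}, {1}, {0, 1}],  # two vertices, then the edge joining them
    [0.0, 0.0, 1.5],     # epsilon at which each simplex appears
]
interval = [1, 2]  # feature born at index 1, dies at index 2

homology_group = len(filterComplex[0][interval[0]]) - 1  # a vertex, so dimension 0
epsilon_start = filterComplex[1][interval[0]]
epsilon_end = filterComplex[1][interval[1]]
print([homology_group, [epsilon_start, epsilon_end]])  # -> [0, [0.0, 1.5]]
```

<p>This is exactly the computation <code>readPersistence</code> performs for each interval: here the second vertex's connected component is born at epsilon 0 and dies when the edge merges it into the first component.</p>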
<div class="highlight"><pre><span></span><span class="n">persist1</span> <span class="o">=</span> <span class="n">readPersistence</span><span class="p">(</span><span class="n">intervals</span><span class="p">,</span> <span class="n">ripsComplex2</span><span class="p">)</span>
<span class="n">persist1</span>
</pre></div>
<div class="highlight"><pre><span></span>[[0, [0, 5.8309518948453007]],
[0, [0, 3.0]],
[0, [0, 5.0]],
[0, [0, 3.0]],
[1, [5.0, 5.8309518948453007]]]
</pre></div>
<p>This function will just graph the persistence barcode for individual dimensions.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="k">def</span> <span class="nf">graph_barcode</span><span class="p">(</span><span class="n">persistence</span><span class="p">,</span> <span class="n">homology_group</span> <span class="o">=</span> <span class="mi">0</span><span class="p">):</span>
    <span class="c1">#this function just produces the barcode graph for each homology group</span>
    <span class="n">xstart</span> <span class="o">=</span> <span class="p">[</span><span class="n">s</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span> <span class="k">for</span> <span class="n">s</span> <span class="ow">in</span> <span class="n">persistence</span> <span class="k">if</span> <span class="n">s</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">==</span> <span class="n">homology_group</span><span class="p">]</span>
    <span class="n">xstop</span> <span class="o">=</span> <span class="p">[</span><span class="n">s</span><span class="p">[</span><span class="mi">1</span><span class="p">][</span><span class="mi">1</span><span class="p">]</span> <span class="k">for</span> <span class="n">s</span> <span class="ow">in</span> <span class="n">persistence</span> <span class="k">if</span> <span class="n">s</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">==</span> <span class="n">homology_group</span><span class="p">]</span>
    <span class="n">y</span> <span class="o">=</span> <span class="p">[</span><span class="mf">0.1</span> <span class="o">*</span> <span class="n">x</span> <span class="o">+</span> <span class="mf">0.1</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">xstart</span><span class="p">))]</span>
    <span class="n">plt</span><span class="o">.</span><span class="n">hlines</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">xstart</span><span class="p">,</span> <span class="n">xstop</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s1">'b'</span><span class="p">,</span> <span class="n">lw</span><span class="o">=</span><span class="mi">4</span><span class="p">)</span>
    <span class="c1">#Setup the plot</span>
    <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">gca</span><span class="p">()</span>
    <span class="n">plt</span><span class="o">.</span><span class="n">ylim</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="nb">max</span><span class="p">(</span><span class="n">y</span><span class="p">)</span><span class="o">+</span><span class="mf">0.1</span><span class="p">)</span>
    <span class="n">ax</span><span class="o">.</span><span class="n">yaxis</span><span class="o">.</span><span class="n">set_major_formatter</span><span class="p">(</span><span class="n">plt</span><span class="o">.</span><span class="n">NullFormatter</span><span class="p">())</span>
    <span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s1">'epsilon'</span><span class="p">)</span>
    <span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s2">"Betti dim </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">homology_group</span><span class="p">,))</span>
    <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist1</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist1</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_52_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_52_1.png"></p>
<p>Schweeeeet! Persistent homology, at last!</p>
<p>So we've graphed the barcode diagrams for the first two Betti numbers. The first barcode is a little underwhelming, since what we want to see is some bars that are significantly longer than others, indicating a true feature. In this case, the Betti 0 barcode has a longest bar, which represents the single connected component that is formed with the box, but it's not <em>that</em> much longer than the next longest bar. That's mostly an artifact of the example being so simple. If I had added in a few more points, we would see a clearly dominant longest bar.</p>
<p>The Betti 1 barcode is in much better shape. We clearly have just a single long bar, indicating the 1-dimensional cycle that exists up until the box "fills in" at <span class="math">\(\epsilon = 5.8\)</span>.</p>
<p>An important feature of persistent homology is being able to find the data points that lie on some interesting topological feature. If all persistent homology could do was give us barcodes and tell us how many connected components and cycles there are, then it would be useful but wanting.</p>
<p>What we really want to be able to do is say, "hey look, the barcode shows there's a statistically significant 1-dimensional cycle, I wonder which data points form that cycle?"</p>
<p>To test out this procedure, let's modify our simple "box" simplicial complex a bit and add another edge (giving us another connected component).</p>
<div class="highlight"><pre><span></span><span class="n">data_b</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">],[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">],[</span><span class="mi">6</span><span class="p">,</span><span class="mi">1</span><span class="p">],[</span><span class="mi">6</span><span class="p">,</span><span class="mi">4</span><span class="p">],[</span><span class="mi">12</span><span class="p">,</span><span class="mf">3.5</span><span class="p">],[</span><span class="mi">12</span><span class="p">,</span><span class="mf">1.5</span><span class="p">]])</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph2b</span> <span class="o">=</span> <span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">data_b</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mi">8</span><span class="p">)</span> <span class="c1">#epsilon is set to a high value to create a maximal complex</span>
<span class="n">rips2b</span> <span class="o">=</span> <span class="n">ripsFiltration</span><span class="p">(</span><span class="n">graph2b</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">data_b</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">rips2b</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">14</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">6</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_55_0.png"></p>
<p>The depiction shows the maximal complex since we set <span class="math">\(\epsilon\)</span> to be a high value. But I tried to design the data so the "true" features are a box (which is a 1-dim cycle) and an edge off to the right, for a total of two "true" connected components.</p>
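<p>As a sanity check on the design, the key pairwise distances can be computed directly from <code>data_b</code> (a quick numpy check, not part of the original pipeline):</p>

```python
import numpy as np

data_b = np.array([[1, 4], [1, 1], [6, 1], [6, 4], [12, 3.5], [12, 1.5]])

# length of the lone edge on the right
d_edge = np.linalg.norm(data_b[4] - data_b[5])
# distance from the box's top-right corner to the top of the right edge,
# i.e. the epsilon at which the two components merge
d_join = np.linalg.norm(data_b[3] - data_b[4])
print(round(d_edge, 4), round(d_join, 4))  # -> 2.0 6.0208
```

<p>These two values, 2.0 and about 6.02, show up directly in the persistence output further down.</p>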
<p>Alright, let's run persistent homology on this data.</p>
<div class="highlight"><pre><span></span><span class="n">bm2b</span> <span class="o">=</span> <span class="n">filterBoundaryMatrix</span><span class="p">(</span><span class="n">rips2b</span><span class="p">)</span>
<span class="n">rbm2b</span> <span class="o">=</span> <span class="n">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">bm2b</span><span class="p">)</span>
<span class="n">intervals2b</span> <span class="o">=</span> <span class="n">readIntervals</span><span class="p">(</span><span class="n">rbm2b</span><span class="p">,</span> <span class="n">rips2b</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">persist2b</span> <span class="o">=</span> <span class="n">readPersistence</span><span class="p">(</span><span class="n">intervals2b</span><span class="p">,</span> <span class="n">rips2b</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist2b</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist2b</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_58_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_58_1.png"></p>
<p>We can see the two connected components (the two longest bars) in <code>Betti dim 0</code>, and we see two bars in <code>Betti dim 1</code>, but one is clearly almost twice as long as the other. The shorter bar is from when the edge on the right forms a cycle with the two right-most vertices of the box on the left.</p>
<p>So at this point we're thinking we have one significant 1-dim cycle, but (pretending we can't just plot our data) we don't know which points form this cycle so that we can further analyze that subset of the data if we wish.</p>
<p>In order to figure that out, we just need to use the <em>memory</em> matrix that our reduction algorithm also returns to us. First we find the interval we want from the <code>intervals2b</code> list (in this case the first 1-dimensional interval), then we get its start point, since that indicates the birth of the feature. The start point is an index value in the boundary array, so we find that column in the memory array and look for the 1s in that column. The rows with 1s in that column are the other simplices in the group (including the column's own simplex).</p>
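<p>That lookup can be sketched as a small helper (an illustration of the idea, assuming the memory matrix is a numpy array like the one printed above; this helper is not part of the original code):</p>

```python
import numpy as np

def simplices_in_feature(memory_matrix, birth_index):
    # Rows holding a 1 in the birth column are the simplices that were
    # grouped into this feature during reduction
    col = memory_matrix[:, birth_index]
    return [i for i, v in enumerate(col) if v == 1]

# toy memory matrix: pretend rows 1 and 3 were recorded in column 2
memory = np.zeros((5, 5), dtype=int)
memory[[1, 3], 2] = 1
print(simplices_in_feature(memory, 2))  # -> [1, 3]
```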
<div class="highlight"><pre><span></span><span class="n">persist2b</span>
</pre></div>
<div class="highlight"><pre><span></span>[[0, [0, 6.5]],
[0, [0, 3.0]],
[0, [0, 5.0]],
[0, [0, 3.0]],
[0, [0, 6.0207972893961479]],
[0, [0, 2.0]],
[1, [5.0, 5.8309518948453007]],
[1, [6.0207972893961479, 6.5]]]
</pre></div>
<p>First, look at the intervals in homology group 1; we want the one that spans the epsilon range from 5.0 to 5.83. That's index 6 in the persistence list, and likewise index 6 in the intervals list. The intervals list stores index values rather than epsilon start and end points, so we can use it to look up the simplices in the memory matrix.</p>
<div class="highlight"><pre><span></span><span class="n">cycle1</span> <span class="o">=</span> <span class="n">intervals2b</span><span class="p">[</span><span class="mi">6</span><span class="p">]</span>
<span class="n">cycle1</span>
<span class="c1">#So birth index is 10</span>
</pre></div>
<div class="highlight"><pre><span></span>[10, 19]
</pre></div>
<div class="highlight"><pre><span></span><span class="n">column10</span> <span class="o">=</span> <span class="n">rbm2b</span><span class="p">[</span><span class="mi">1</span><span class="p">][:,</span><span class="mi">10</span><span class="p">]</span>
<span class="n">column10</span>
</pre></div>
<div class="highlight"><pre><span></span>array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0])
</pre></div>
<p>This is the column with index 10 in the memory matrix. We immediately know that the simplex at index 10 is part of the cycle, along with the simplices whose rows contain 1s in this column.</p>
<div class="highlight"><pre><span></span><span class="n">ptsOnCycle</span> <span class="o">=</span> <span class="p">[</span><span class="n">i</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">column10</span><span class="p">))</span> <span class="k">if</span> <span class="n">column10</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">==</span> <span class="mi">1</span><span class="p">]</span>
<span class="n">ptsOnCycle</span>
</pre></div>
<div class="highlight"><pre><span></span>[7, 8, 9, 10]
</pre></div>
<div class="highlight"><pre><span></span><span class="c1">#so the simplices with indices 7,8,9,10 lie on our 1-dimensional cycle, let's find what those simplices are</span>
<span class="n">rips2b</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">7</span><span class="p">:</span><span class="mi">11</span><span class="p">]</span> <span class="c1">#range [start:stop], but stop is non-inclusive, so put 11 instead of 10</span>
</pre></div>
<div class="highlight"><pre><span></span>[{0, 1}, {2, 3}, {1, 2}, {0, 3}]
</pre></div>
<p>Exactly! Now this is the list of 1-simplices that form the 1-dimensional cycle we saw in our barcode. It should be trivial to go from this list to the raw data points so I won't bore you with those details here.</p>
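<p>For completeness, here is one way that last step might look. The <code>data</code> array and variable names below are stand-ins, not the tutorial's actual variables: the idea is simply that each 1-simplex is a set of vertex indices into the raw data array.</p>

```python
import numpy as np

# Stand-in raw data: four points forming a box (hypothetical coordinates)
data = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
cycle_simplices = [{0, 1}, {2, 3}, {1, 2}, {0, 3}]  # the 1-simplices found above

# Union of all vertex indices appearing in the cycle's edges
vertex_indices = sorted(set().union(*cycle_simplices))
cycle_points = data[vertex_indices]  # raw coordinates of the points on the cycle
print(vertex_indices)  # → [0, 1, 2, 3]
```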
<p>Alright, let's try this with slightly more realistic data. We'll use data sampled from a circle, as we did at the beginning of this section. For this example, I've set the parameter <code>k=2</code> in the <code>ripsFiltration</code> function so it will only generate simplices up to 2-simplices; this is just to reduce the memory needed. If you have a fast computer with a lot of memory, you're welcome to set <code>k</code> to 3 or so, but I wouldn't make it much greater than that. Usually we're mostly interested in connected components and 1- or 2-dimensional cycles. Topological features in higher dimensions tend to offer diminishing returns, and the price in memory and running time is generally not worth it.</p>
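<p>To see why raising <code>k</code> gets expensive so quickly, note that the number of possible k-simplices on n vertices is at most the binomial coefficient C(n, k+1). A quick illustrative calculation (not from the original post — it's an upper bound assuming every subset of vertices could appear in the complex):</p>

```python
from math import comb

n = 30  # number of data points, as in the circle example below
# Upper bound on the number of k-simplices a filtration on n vertices can contain
for k in range(1, 6):
    print(f"k={k}: up to {comb(n, k + 1)} simplices")
```

Even at n = 30, the bound grows by more than an order of magnitude with each step in k, which is why the tutorial caps the filtration at 2-simplices.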
<blockquote>
<p><strong>NOTE</strong>: The following may take a while to run, perhaps several minutes. This is because the code written in these tutorials is optimized for clarity and ease of understanding, NOT for efficiency or speed. There are many performance optimizations that can and should be made if we wanted to get anywhere close to a production-ready TDA library. I plan to write a follow-up post at some point about the most worthwhile algorithm and data-structure optimizations, because I hope to develop a reasonably efficient open-source TDA library in Python in the future and would appreciate any help I can get.</p>
</blockquote>
<div class="highlight"><pre><span></span><span class="n">n</span> <span class="o">=</span> <span class="mi">30</span> <span class="c1">#number of points to generate</span>
<span class="c1">#generate space of parameter</span>
<span class="n">theta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mf">2.0</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">r</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">5.0</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">b</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="c1">#code to plot the circle for visualization</span>
<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
<span class="n">xc</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.25</span><span class="p">,</span><span class="mf">0.25</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">x</span> <span class="c1">#add some "jitteriness" to the points (but less than before, reduces memory)</span>
<span class="n">yc</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.25</span><span class="p">,</span><span class="mf">0.25</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">y</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">xc</span><span class="p">,</span><span class="n">yc</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
<span class="n">circleData</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">xc</span><span class="p">,</span><span class="n">yc</span><span class="p">)))</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_68_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_68_1.png"></p>
<div class="highlight"><pre><span></span><span class="n">graph4</span> <span class="o">=</span> <span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">circleData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">3.0</span><span class="p">)</span>
<span class="n">rips4</span> <span class="o">=</span> <span class="n">ripsFiltration</span><span class="p">(</span><span class="n">graph4</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">circleData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">rips4</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="o">-</span><span class="mi">6</span><span class="p">,</span><span class="mi">6</span><span class="p">,</span><span class="o">-</span><span class="mi">6</span><span class="p">,</span><span class="mi">6</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_69_0.png"></p>
<p>Clearly, persistent homology should tell us we have 1 connected component and a single 1-dimensional cycle.</p>
<div class="highlight"><pre><span></span><span class="nb">len</span><span class="p">(</span><span class="n">rips4</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="c1">#On my laptop, a rips filtration with more than about 250 simplices will take >10 mins to compute persistent homology</span>
<span class="c1">#anything < ~220 only takes a few minutes or less</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="mf">148</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="o">%%</span><span class="n">time</span>
<span class="n">bm4</span> <span class="o">=</span> <span class="n">filterBoundaryMatrix</span><span class="p">(</span><span class="n">rips4</span><span class="p">)</span>
<span class="n">rbm4</span> <span class="o">=</span> <span class="n">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">bm4</span><span class="p">)</span>
<span class="n">intervals4</span> <span class="o">=</span> <span class="n">readIntervals</span><span class="p">(</span><span class="n">rbm4</span><span class="p">,</span> <span class="n">rips4</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">persist4</span> <span class="o">=</span> <span class="n">readPersistence</span><span class="p">(</span><span class="n">intervals4</span><span class="p">,</span> <span class="n">rips4</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>CPU times: user 43.4 s, sys: 199 ms, total: 43.6 s
Wall time: 44.1 s
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist4</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist4</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_73_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_73_1.png"></p>
<p>We can clearly see that there is a <em>significantly</em> longer bar than the others in the <code>Betti dim 0</code> barcode, indicating we have only one significant connected component. This fits clearly with the circular data we plotted.</p>
<p>The <code>Betti dim 1</code> barcode is even easier as it only shows a single bar, so we of course have a significant feature here being a 1-dimensional cycle.</p>
<p>Okay well, as usual, let's make things a little bit tougher to test our algorithms.</p>
<p>We're going to sample points from a shape called a <strong>lemniscate</strong>, more commonly known as a figure-of-eight, since it looks like the number 8 sideways. As you can tell, it should have 1 connected component and two 1-dimensional cycles.</p>
<div class="highlight"><pre><span></span><span class="n">n</span> <span class="o">=</span> <span class="mi">50</span>
<span class="n">t</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="n">num</span><span class="o">=</span><span class="n">n</span><span class="p">)</span>
<span class="c1">#equations for lemniscate</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">t</span><span class="p">)</span><span class="o">**</span><span class="mi">2</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="o">*</span> <span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">t</span><span class="p">)</span><span class="o">**</span><span class="mi">2</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_75_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">x2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.03</span><span class="p">,</span> <span class="mf">0.03</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">x</span> <span class="c1">#add some "jitteriness" to the points</span>
<span class="n">y2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.03</span><span class="p">,</span> <span class="mf">0.03</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">y</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_76_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">figure8Data</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)))</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph5</span> <span class="o">=</span> <span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">figure8Data</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">0.2</span><span class="p">)</span>
<span class="n">rips5</span> <span class="o">=</span> <span class="n">ripsFiltration</span><span class="p">(</span><span class="n">graph5</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">figure8Data</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">rips5</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="o">-</span><span class="mf">1.5</span><span class="p">,</span><span class="mf">1.5</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_78_0.png"></p>
<div class="highlight"><pre><span></span><span class="o">%%</span><span class="n">time</span>
<span class="n">bm5</span> <span class="o">=</span> <span class="n">filterBoundaryMatrix</span><span class="p">(</span><span class="n">rips5</span><span class="p">)</span>
<span class="n">rbm5</span> <span class="o">=</span> <span class="n">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">bm5</span><span class="p">)</span>
<span class="n">intervals5</span> <span class="o">=</span> <span class="n">readIntervals</span><span class="p">(</span><span class="n">rbm5</span><span class="p">,</span> <span class="n">rips5</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">persist5</span> <span class="o">=</span> <span class="n">readPersistence</span><span class="p">(</span><span class="n">intervals5</span><span class="p">,</span> <span class="n">rips5</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>CPU times: user 17min 8s, sys: 3.93 s, total: 17min 12s
Wall time: 17min 24s
</pre></div>
<p>Yeah... that took 17 minutes. Good thing I still had enough CPU/RAM to watch YouTube.</p>
<div class="highlight"><pre><span></span><span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist5</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist5</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_81_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_81_1.png"></p>
<p>:-) Just as we expected. <code>Betti dim 0</code> shows one significantly longer bar than the others and <code>Betti dim 1</code> shows us two long bars, our two 1-dim cycles.</p>
<p>Let's add in another component. In this example, I've added a small circle to the data, so we should have two connected components and three 1-dimensional cycles.</p>
<div class="highlight"><pre><span></span><span class="n">theta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mf">2.0</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="mi">10</span><span class="p">)</span>
<span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">r</span> <span class="o">=</span> <span class="mf">1.6</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.2</span>
<span class="n">x3</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="n">y3</span> <span class="o">=</span> <span class="n">b</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="n">x4</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">x3</span><span class="p">)</span>
<span class="n">y4</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">y3</span><span class="p">)</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">x4</span><span class="p">,</span><span class="n">y4</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
<span class="n">figure8Data2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x4</span><span class="p">,</span><span class="n">y4</span><span class="p">)))</span>
<span class="c1"># I didn't add "jitteriness" this time since that increases the complexity of the subsequent simplicial complex, </span>
<span class="c1"># which makes the memory and computation requirements much greater</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_83_0.png"></p>
<div class="highlight"><pre><span></span><span class="n">graph6</span> <span class="o">=</span> <span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">figure8Data2</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">0.19</span><span class="p">)</span>
<span class="n">rips6</span> <span class="o">=</span> <span class="n">ripsFiltration</span><span class="p">(</span><span class="n">graph6</span><span class="p">,</span> <span class="n">k</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">figure8Data2</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">rips6</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">axes</span><span class="o">=</span><span class="p">[</span><span class="o">-</span><span class="mf">1.5</span><span class="p">,</span><span class="mf">2.5</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_84_0.png"></p>
<div class="highlight"><pre><span></span><span class="nb">len</span><span class="p">(</span><span class="n">rips6</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="c1">#reasonable size</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="mf">220</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="o">%%</span><span class="n">time</span>
<span class="n">bm6</span> <span class="o">=</span> <span class="n">filterBoundaryMatrix</span><span class="p">(</span><span class="n">rips6</span><span class="p">)</span>
<span class="n">rbm6</span> <span class="o">=</span> <span class="n">reduceBoundaryMatrix</span><span class="p">(</span><span class="n">bm6</span><span class="p">)</span>
<span class="n">intervals6</span> <span class="o">=</span> <span class="n">readIntervals</span><span class="p">(</span><span class="n">rbm6</span><span class="p">,</span> <span class="n">rips6</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="n">persist6</span> <span class="o">=</span> <span class="n">readPersistence</span><span class="p">(</span><span class="n">intervals6</span><span class="p">,</span> <span class="n">rips6</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>CPU times: user 4min 2s, sys: 780 ms, total: 4min 2s
Wall time: 4min 4s
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist6</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">graph_barcode</span><span class="p">(</span><span class="n">persist6</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart5_files/TDApart5_87_0.png"></p>
<p><img alt="png" src="TDApart5_files/TDApart5_87_1.png"></p>
<p>Excellent. I think by now I don't need to tell you how to interpret the barcodes.</p>
<h3>The End... What's next?</h3>
<p>Well, that's it, folks. Part 5 is the end of this sub-series on persistent homology. You should now have all the knowledge necessary to understand and use existing persistent homology software tools, or even build your own if you want.</p>
<p>Next, we will turn our attention to the other major tool in topological data analysis, <strong>mapper</strong>. Mapper is an algorithm that allows us to create visualizable graphs from arbitrarily high-dimensional data. In this way, we are able to see global and local topological features. It is very useful for exploratory data analysis and hypothesis generation. Fortunately, the concepts and math behind it are a lot easier than persistent homology.</p>
<h4>References (Websites):</h4>
<ol>
<li>http://dyinglovegrape.com/math/topology_data_1.php</li>
<li>http://www.math.uiuc.edu/~r-ash/Algebra/Chapter4.pdf</li>
<li>https://en.wikipedia.org/wiki/Group_(mathematics)</li>
<li>https://jeremykun.com/2013/04/03/homology-theory-a-primer/</li>
<li>http://suess.sdf-eu.org/website/lang/de/algtop/notes4.pdf</li>
<li>http://www.mit.edu/~evanchen/napkin.html</li>
<li>https://triangleinequality.wordpress.com/2014/01/23/computing-homology</li>
</ol>
<h4>References (Academic Publications):</h4>
<ol>
<li>Adams, H., Atanasov, A., & Carlsson, G. (2011). Nudged Elastic Band in Topological Data Analysis. arXiv Preprint, 1112.1993v(December 2011). Retrieved from http://arxiv.org/abs/1112.1993</li>
<li>Artamonov, O. (2010). Topological Methods for the Representation and Analysis of Exploration Data in Oil Industry by Oleg Artamonov.</li>
<li>Basher, M. (2012). On the Folding of Finite Topological Space. International Mathematical Forum, 7(15), 745–752. Retrieved from http://www.m-hikari.com/imf/imf-2012/13-16-2012/basherIMF13-16-2012.pdf</li>
<li>Bauer, U., Kerber, M., & Reininghaus, J. (2013). Distributed computation of persistent homology. arXiv Preprint arXiv:1310.0710, 31–38. http://doi.org/10.1137/1.9781611973198.4</li>
<li>Bauer, U., Kerber, M., & Reininghaus, J. (2013). Clear and Compress: Computing Persistent Homology in Chunks. arXiv Preprint arXiv:1303.0477, 1–12. http://doi.org/10.1007/978-3-319-04099-8__7</li>
<li>Berry, T., & Sauer, T. (2016). Consistent Manifold Representation for Topological Data Analysis. Retrieved from http://arxiv.org/abs/1606.02353</li>
<li>Biasotti, S., Giorgi, D., Spagnuolo, M., & Falcidieno, B. (2008). Reeb graphs for shape analysis and applications. Theoretical Computer Science, 392(1–3), 5–22. http://doi.org/10.1016/j.tcs.2007.10.018</li>
<li>Boissonnat, J.-D., & Maria, C. (2014). The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes. Algorithmica, 70(3), 406–427. http://doi.org/10.1007/s00453-014-9887-3</li>
<li>Cazals, F., Roth, A., Robert, C., & Christian, M. (2013). Towards Morse Theory for Point Cloud Data, (July). Retrieved from http://hal.archives-ouvertes.fr/hal-00848753/</li>
<li>Chazal, F., & Michel, B. (2016). Persistent homology in TDA.</li>
<li>Cheng, J. (n.d.). Lecture 16 : Computation of Reeb Graphs Topics in Computational Topology : An Algorithmic View Computation of Reeb Graphs, 1, 1–5.</li>
<li>Day, M. (2012). Notes on Cayley Graphs for Math 5123 Cayley graphs, 1–6.</li>
<li>Dey, T. K., Fan, F., & Wang, Y. (2013). Graph Induced Complex: A Data Sparsifier for Homology Inference.</li>
<li>Doktorova, M. (2012). CONSTRUCTING SIMPLICIAL COMPLEXES OVER by, (June).</li>
<li>Edelsbrunner, H. (2006). IV.1 Homology. Computational Topology, 81–87. Retrieved from http://www.cs.duke.edu/courses/fall06/cps296.1/</li>
<li>Edelsbrunner, H. (2006). VI.1 Persistent Homology. Computational Topology, 128–134. Retrieved from http://www.cs.duke.edu/courses/fall06/cps296.1/</li>
<li>Edelsbrunner, H., Letscher, D., & Zomorodian, A. (2002). Topological persistence and simplification. Discrete and Computational Geometry, 28(4), 511–533. http://doi.org/10.1007/s00454-002-2885-2</li>
<li>Edelsbrunner, H., & Morozov, D. (2012). Persistent homology: theory and practice. 6th European Congress of Mathematics, 123–142. http://doi.org/10.4171/120-1/3</li>
<li>Erickson, J. (1908). Homology. Computational Topology, 1–11.</li>
<li>Evan Chen. (2016). An Infinitely Large Napkin.</li>
<li>Chapter 4: Persistent Homology. Topics in Computational Topology: An Algorithmic View (n.d.), 1–8.</li>
<li>Grigor’yan, A., Muranov, Y. V., & Yau, S. T. (2014). Graphs associated with simplicial complexes. Homology, Homotopy and Applications, 16(1), 295–311. http://doi.org/10.4310/HHA.2014.v16.n1.a16</li>
<li>Kaczynski, T., Mischaikow, K., & Mrozek, M. (2003). Computing homology. Homology, Homotopy and Applications, 5(2), 233–256. http://doi.org/10.4310/HHA.2003.v5.n2.a8</li>
<li>Kerber, M. (2016). Persistent Homology – State of the art and challenges 1 Motivation for multi-scale topology. Internat. Math. Nachrichten Nr, 231(231), 15–33.</li>
<li>Khoury, M. (n.d.). Lecture 6: Introduction to Simplicial Homology. Topics in Computational Topology: An Algorithmic View, 1–6.</li>
<li>Kraft, R. (2016). Illustrations of Data Analysis Using the Mapper Algorithm and Persistent Homology.</li>
<li>Lakshmivarahan, S., & Sivakumar, L. (2016). Cayley Graphs, (1), 1–9.</li>
<li>Lewis, R. (n.d.). Parallel Computation of Persistent Homology using the Blowup Complex, 323–331. http://doi.org/10.1145/2755573.2755587</li>
<li>Liu, X., Xie, Z., & Yi, D. (2012). A fast algorithm for constructing topological structure in large data. Homology, Homotopy and Applications, 14(1), 221–238. http://doi.org/10.4310/HHA.2012.v14.n1.a11</li>
<li>Medina, P. S., & Doerge, R. W. (2016). Statistical Methods in Topological Data Analysis for Complex, High-Dimensional Data. Retrieved from http://arxiv.org/abs/1607.05150</li>
<li>Morozov, D. (n.d.). A Practical Guide to Persistent Homology A Practical Guide to Persistent Homology.</li>
<li>Murty, N. A., Natarajan, V., & Vadhiyar, S. (2013). Efficient homology computations on multicore and manycore systems. 20th Annual International Conference on High Performance Computing, HiPC 2013. http://doi.org/10.1109/HiPC.2013.6799139</li>
<li>Naik, V. (2006). Group theory : a first journey, 1–21.</li>
<li>Otter, N., Porter, M. A., Tillmann, U., Grindrod, P., & Harrington, H. A. (2015). A roadmap for the computation of persistent homology. Preprint ArXiv, (June), 17. Retrieved from http://arxiv.org/abs/1506.08903</li>
<li>Pearson, P. T. (2013). Visualizing Clusters in Artificial Neural Networks Using Morse Theory. Advances in Artificial Neural Systems, 2013, 1–8. http://doi.org/10.1155/2013/486363</li>
<li>Reininghaus, J. (2012). Computational Discrete Morse Theory.</li>
<li>Reininghaus, J., Huber, S., Bauer, U., Tu, M., & Kwitt, R. (2015). A Stable Multi-Scale Kernel for Topological Machine Learning, 1–8. Retrieved from papers3://publication/uuid/CA230E5C-90AC-4352-80D2-2F556E8B47D3</li>
<li>Rykaczewski, K., Wiśniewski, P., & Stencel, K. (n.d.). An Algorithmic Way to Generate Simplexes for Topological Data Analysis.</li>
<li>§4. Simplicial Complexes and Simplicial Homology (course notes, 2017), 1–13.</li>
<li>Siles, V. (n.d.). Computing Persistent Homology within Coq / SSReflect, 243847(243847).</li>
<li>Singh, G. (2007). Algorithms for Topological Analysis of Data, (November).</li>
<li>Tylianakis, J. (2009). Course Notes. Methodology, (2002), 1–124.</li>
<li>Wagner, H., & Dłotko, P. (2014). Towards topological analysis of high-dimensional feature spaces. Computer Vision and Image Understanding, 121, 21–26. http://doi.org/10.1016/j.cviu.2014.01.005</li>
<li>Xiaoyin Ge, Issam I. Safa, Mikhail Belkin, & Yusu Wang. (2011). Data Skeletonization via Reeb Graphs. Neural Information Processing Systems 2011, 837–845. Retrieved from https://papers.nips.cc/paper/4375-data-skeletonization-via-reeb-graphs.pdf</li>
<li>Zomorodian, A. (2010). Fast construction of the Vietoris-Rips complex. Computers and Graphics (Pergamon), 34(3), 263–271. http://doi.org/10.1016/j.cag.2010.03.007</li>
<li>Zomorodian, A. (2009). Computational Topology Notes. Advances in Discrete and Computational Geometry, 2, 109–143. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.7483</li>
<li>Zomorodian, A. J. (2001). Computing and Comprehending Topology: Persistence and Hierarchical Morse Complexes, 199. Retrieved from http://www.cs.dartmouth.edu/~afra/papers.html</li>
<li>Zomorodian, A., & Carlsson, G. (2005). Computing persistent homology. Discrete and Computational Geometry, 33(2), 249–274. http://doi.org/10.1007/s00454-004-1146-y</li>
<li>Smith, K. E. (n.d.). Groups and their Representations.</li>
<li>Symmetry and Group Theory 1. (2016), 1–18. http://doi.org/10.1016/B978-0-444-53786-7.00026-5</li>
</ol>
Persistent Homology (Part 4)2017-02-23T00:35:00-06:002017-02-23T00:35:00-06:00Brandon Browntag:outlace.com,2017-02-23:/TDApart4.html<p>In part 4 we use linear algebra to build out simple algorithms to efficiently calculate homology groups and Betti numbers.</p><h2>Topological Data Analysis - Part 4 - Persistent Homology</h2>
<p>This is Part 4 in a series on topological data analysis.
See <a href="TDApart1.html">Part 1</a> | <a href="TDApart2.html">Part 2</a> | <a href="TDApart3.html">Part 3</a> | <a href="TDApart5.html">Part 5</a></p>
<p>In this part we learn more about how linear algebra can help us calculate topological features in a computationally efficient manner.</p>
<h3>Linear algebra saves the day</h3>
<p>You might have noticed that calculating the homology groups and Betti numbers by hand is tedious and impractical for anything larger than the simple examples we've considered thus far. Fortunately, there are better ways. In particular, we can represent most of the computation of homology groups in terms of vectors and matrices, and computers are very efficient at working with vectors and matrices.</p>
<p>Now, we already went over what a vector is (an element in a vector space), but what is a matrix? You probably think of a matrix as a kind of 2-dimensional grid of numbers, and you know you can multiply matrices by other matrices and vectors. Well, a grid of numbers is certainly a convenient notation for matrices, but that's not what they <em>are</em>.</p>
<h5>What's a matrix?</h5>
<p>By this point, you should be comfortable with the idea of a function or map. These are both ways of translating one type of mathematical structure into another (or at least mapping one element in a structure to a different element in the same structure). In particular, we've spent a good amount of time working with boundary maps that mapped a higher-dimensional chain group to a lower dimensional chain group that preserved the structure of the original group in some way (it is a homomorphism).</p>
<p>So just like we can have a map between two groups, we can have a map between two vector spaces. And we call linear maps between vector spaces <strong>matrices</strong>. A matrix applies a linear transformation to a vector space (or to an individual vector), producing a new vector space (or vector). A <em>linear</em> transformation means we can only transform vectors using the usual vector-space operations: scaling by constants and adding vectors together.</p>
<blockquote>
<p><strong>Definition (Linear Transformation)</strong> <br />
More precisely, a linear transformation <span class="math">\(M\ :\ V_1 \rightarrow V_2\)</span> is a map <span class="math">\(M\)</span> from the vector space <span class="math">\(V_1\)</span> to the vector space <span class="math">\(V_2\)</span> such that <span class="math">\(M(\vec u + \vec v) = M(\vec u) + M(\vec v)\)</span> and <span class="math">\(M(a\vec u) = aM(\vec u)\)</span> for all vectors <span class="math">\(\vec u, \vec v \in V_1\)</span> and scalars <span class="math">\(a\)</span>.</p>
</blockquote>
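<p>As a quick numerical sanity check (with a made-up <span class="math">\(2\times 3\)</span> matrix, not anything specific from the text), we can verify both linearity properties directly in numpy:</p>

```python
import numpy as np

# A hypothetical linear map M : R^3 -> R^2, written as a 2x3 matrix
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
a = 5.0

# M(u + v) = M(u) + M(v): additivity
assert np.allclose(M @ (u + v), M @ u + M @ v)
# M(a*u) = a*M(u): homogeneity
assert np.allclose(M @ (a * u), a * (M @ u))
```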
<p>Let's say we want to map the real-valued volume <span class="math">\(\mathbb R^3\)</span> to the plane <span class="math">\(\mathbb R^2\)</span>.</p>
<div class="math">$$
\begin{aligned}
V_1 &= span\{(1,0,0),(0,1,0),(0,0,1)\} \\
V_2 &= span\{(1,0),(0,1)\}
\end{aligned}
$$</div>
<p>Now, what if I want to map <span class="math">\(V_1\)</span> to <span class="math">\(V_2\)</span>; that is, I want to send every point in <span class="math">\(V_1\)</span> to a point in <span class="math">\(V_2\)</span>? There are many reasons to do something like this. If I'm writing a graphics application, for example, I may want to let the user rotate an image, and a rotation is simply a linear transformation applied to the pixel coordinates.</p>
<p>So we want a map <span class="math">\(M : V_1 \rightarrow V_2\)</span>, and we'll call this map a matrix. Notice that <span class="math">\(V_1\)</span> has a basis with 3 elements, and <span class="math">\(V_2\)</span> has a basis with 2 elements. In order to map from one space to the other, we just need to map one basis set to another basis set. And remember, since this is a linear map, all we are allowed to do to a basis is multiply it by a scalar or add another vector to it; we can't do exotic things like square it or take the logarithm.</p>
<p>Let's call the 3 basis elements of <span class="math">\(V_1\)</span>: <span class="math">\(B_1, B_2, B_3\)</span>. Hence, <span class="math">\(V_1 = \langle{B_1, B_2, B_3}\rangle\)</span>.
Similarly, we'll call the 2 basis elements of <span class="math">\(V_2\)</span>: <span class="math">\(\beta_1, \beta_2\)</span>. Hence, <span class="math">\(V_2 = \langle{\beta_1, \beta_2}\rangle\)</span>. (Remember, the angle brackets <span class="math">\(\langle\rangle\)</span> mean span, i.e. the set of all linear combinations of those elements.) We can set up equations so that any vector in <span class="math">\(V_1\)</span> is mapped to a vector in <span class="math">\(V_2\)</span>, using the fact that each vector space is determined by its basis.</p>
<blockquote>
<p><em>New Notation (Vector)</em> <br />
To prevent confusion between symbols that refer to scalars and symbols that refer to vectors, I will henceforth add a little arrow over every vector, <span class="math">\(\vec{v}\)</span>, to denote that it is a vector, not a scalar. Remember, a scalar is just a single element of the underlying field <span class="math">\(F\)</span> over which the vector space is defined.</p>
</blockquote>
<p>We can define our map <span class="math">\(M(V_1) = V_2\)</span> in this way:</p>
<div class="math">$$
\begin{aligned}
M(B_1) &= a\vec \beta_1 + b\vec \beta_2 \mid a,b \in \mathbb R \\
M(B_2) &= c\vec \beta_1 + d\vec \beta_2 \mid c,d \in \mathbb R \\
M(B_3) &= e\vec \beta_1 + f\vec \beta_2 \mid e,f \in \mathbb R \\
\end{aligned}
$$</div>
<p>That is, the map from each basis element of <span class="math">\(V_1\)</span> is set up as a linear combination of the basis elements of <span class="math">\(V_2\)</span>. This requires us to define a total of 6 new pieces of data: <span class="math">\(a,b,c,d,e,f \in \mathbb R\)</span>. We just have to keep track of the fact that <span class="math">\(a,b\)</span> describe <span class="math">\(M(B_1)\)</span>, <span class="math">\(c,d\)</span> describe <span class="math">\(M(B_2)\)</span>, and <span class="math">\(e,f\)</span> describe <span class="math">\(M(B_3)\)</span>; within each pair, the first coefficient multiplies <span class="math">\(\beta_1\)</span> and the second multiplies <span class="math">\(\beta_2\)</span>. What's a convenient way to keep track of all of that? Oh I know, a matrix!</p>
<div class="math">$$
M =
\begin{pmatrix}
a & c & e \\
b & d & f \\
\end{pmatrix}
$$</div>
<p>That is a very convenient way to represent our map <span class="math">\(M\ :\ V_1 \rightarrow V_2\)</span> indeed. Notice that each <em>column</em> of this matrix contains the "mapping equation" coefficients for one <span class="math">\(M(B_n)\)</span>. Also notice that the dimensions of this matrix, <span class="math">\(2\times3\)</span>, correspond to the dimensions of the two vector spaces we're mapping between. That is, any map <span class="math">\(\mathbb R^n \rightarrow \mathbb R^m\)</span> is represented by an <span class="math">\(m\times n\)</span> matrix. It is important to keep in mind that since the linear map (and hence the matrix) depends on coefficients applied to a particular basis, the matrix elements will change if one uses a different basis.</p>
<p>Knowing this, we can easily see how matrix-vector multiplication <em>should</em> work and why the dimensions of a matrix and vector have to correspond. Namely, an <span class="math">\(n \times m\)</span> matrix multiplied by a <span class="math">\(j \times k\)</span> matrix (or vector) produces an <span class="math">\(n \times k\)</span> result, and for the product to be defined at all, we need <span class="math">\(m = j\)</span>.</p>
<p>This is how we can multiply our matrix map <span class="math">\(M\)</span> by any vector in <span class="math">\(V_1\)</span> to produce the mapped-to vector in <span class="math">\(V_2\)</span>:</p>
<div class="math">$$
M(\vec v\ \in\ V_1)
=
\underbrace{
\begin{bmatrix}
a &amp; c &amp; e \\
b &amp; d &amp; f \\
\end{bmatrix}}_{M:V_1\rightarrow V_2}
\underbrace{
\begin{pmatrix}
x \\ y \\ z \\
\end{pmatrix}}_{\vec v\ \in\ V_1}
=
\underbrace{
\begin{pmatrix}
a x + c y + e z \\
b x + d y + f z \\
\end{pmatrix}}_{M(\vec v)\ \in\ V_2}
$$</div>
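<p>To make the multiplication above concrete, here is the same computation in numpy, with made-up values for <span class="math">\(a,\dots,f\)</span> and <span class="math">\(x, y, z\)</span>:</p>

```python
import numpy as np

a, b, c, d, e, f = 1, 2, 3, 4, 5, 6   # hypothetical map coefficients
M = np.array([[a, c, e],
              [b, d, f]])             # the 2x3 matrix for M : V_1 -> V_2
v = np.array([1, 1, 1])               # x = y = z = 1

result = M @ v   # (a*x + c*y + e*z, b*x + d*y + f*z) = (9, 12)
```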
<p>Okay, now we know a matrix is a linear map between two vector spaces. But what happens if you multiply two matrices together? That is just a composition of maps. For example, suppose we have three vector spaces <span class="math">\(T, U, V\)</span> and two linear maps <span class="math">\(m_1, m_2\)</span>:</p>
<div class="math">$$ T \stackrel{m_1}{\rightarrow} U \stackrel{m_2}{\rightarrow} V$$</div>
<p>To get from <span class="math">\(T\)</span> to <span class="math">\(V\)</span>, we need to apply both maps: <span class="math">\(m_2(m_1(T)) = V\)</span>. Hence, multiplying two matrices together gives us the composition of maps <span class="math">\(m_2 \circ m_1\)</span>. The <em>identity</em> matrix is an identity map (i.e. it doesn't change the input); it has 1s along the diagonal and 0s everywhere else, e.g.:</p>
<div class="math">$$
m=
\begin{bmatrix}
\ddots & 0 & 0 & 0 & ⋰ \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
⋰ & 0 & 0 & 0 & \ddots \\
\end{bmatrix}
\\
m \vec V = \vec V
$$</div>
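<p>Both facts are easy to check numerically. The matrices below are arbitrary examples, not taken from the text:</p>

```python
import numpy as np

m1 = np.array([[1, 0, 1],
               [0, 1, 1]])   # m1 : T -> U, a 2x3 matrix
m2 = np.array([[2, 0],
               [1, 1]])      # m2 : U -> V, a 2x2 matrix
t = np.array([1, 2, 3])      # a vector in T

# the matrix product m2 @ m1 is the composition of maps m2 ∘ m1
assert np.array_equal((m2 @ m1) @ t, m2 @ (m1 @ t))

# the identity matrix maps every vector to itself
I = np.eye(3, dtype=int)
assert np.array_equal(I @ t, t)
```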
<h4>Back to simplicial homology (again)</h4>
<p>We learned all of that so we can represent the boundary map <span class="math">\(\partial(C_n)\)</span> as a <strong>matrix</strong>, letting us apply the tools of linear algebra. This makes sense: we already know the chain groups <span class="math">\(C_n\)</span> can be viewed as vector spaces once we allow scalar multiplication, so the boundary map is a linear map between chain vector spaces, and a linear map can be represented as a matrix.</p>
<p>We represent an <span class="math">\(n\)</span>-boundary map, i.e. <span class="math">\(\partial(C_n)\)</span>, where <span class="math">\(n\)</span> is the dimension of the chain group, <span class="math">\(k\)</span> is the number of simplices in <span class="math">\(C_n\)</span> and <span class="math">\(l\)</span> is the number of simplices in <span class="math">\(C_{n-1}\)</span>, as a matrix with <span class="math">\(k\)</span> columns and <span class="math">\(l\)</span> rows. Thus each column represents a simplex in <span class="math">\(C_n\)</span> and each row represents a simplex in <span class="math">\(C_{n-1}\)</span>. We put a <span class="math">\(1\)</span> in a cell of the matrix if the simplex in that column maps to the simplex in that row. For example, <span class="math">\(\partial([a,b]) = a - b\)</span> with coefficients in <span class="math">\(\mathbb Z\)</span>, so we will put a <span class="math">\(1\)</span> in the row for <span class="math">\(a\)</span> and in the row for <span class="math">\(b\)</span>, since the 1-simplex <span class="math">\([a,b]\)</span> maps to those two 0-simplices.</p>
<p>Let's try calculating the homology groups of the previous simplicial complex (depicted below again) using matrices and vectors. We're going to go back to using <span class="math">\(\mathbb Z_2\)</span> as our field (so simplex orientation can be ignored) because it is computationally more efficient to do so.</p>
<div class="math">$$ S = \text{ {[a], [b], [c], [d], [a, b], [b, c], [c, a], [c, d], [d, b], [a, b, c]} } $$</div>
<p><img src="images/TDAimages/part4/simplicialcomplex5b.svg" /></p>
<p>Since we're using the (very small) finite field <span class="math">\(\mathbb Z_2\)</span> then we can actually list out all the vectors in our chain (group) vector space. We have 3 chain groups, namely the group of 0-simplices (vertices), 1-simplices (edges), and 2-simplices (triangle).</p>
<p>In our example, we only have a single 2-simplex: <span class="math">\([a,b,c]\)</span>, thus the group it generates over the field <span class="math">\(\mathbb Z_2\)</span> is only <span class="math">\(\{0, [a,b,c]\}\)</span>, which is isomorphic to <span class="math">\(\mathbb Z_2\)</span>. Recall that, in general, the group generated by <span class="math">\(n\)</span> <span class="math">\(p\)</span>-simplices in a simplicial complex is isomorphic to <span class="math">\(\mathbb Z^n_2\)</span>. For a computer, we can encode the group elements just by their coefficients, 0 or 1. So, for example, the group generated by <span class="math">\([a,b,c]\)</span> can be represented as <span class="math">\(\{0,1\}\)</span>. Or the group generated by the 0-simplices <span class="math">\(\{a, b, c, d\}\)</span> can be represented by 4-dimensional vectors; for example, the group element <span class="math">\(a+b+c\)</span> is encoded as <span class="math">\((1, 1, 1, 0)\)</span>, where each position records the presence or absence of <span class="math">\((a, b, c, d)\)</span>, respectively.</p>
<p>Here are the chain groups represented as coefficient vectors (I didn't list all 32 elements of <span class="math">\(C_1\)</span>):</p>
<div class="math">$$
\begin{align}
C_0
&=
\left\{
\begin{array}{ll}
(0,0,0,0) & (1,0,0,0) & (0,1,1,0) & (0,1,0,1) \\
(0,1,0,0) & (0,0,1,0) & (0,0,1,1) & (0,1,1,1) \\
(0,0,0,1) & (1,1,0,0) & (1,0,0,1) & (1,0,1,1) \\
(1,1,1,0) & (1,1,1,1) & (1,0,1,0) & (1,1,0,1) \\
\end{array}
\right.
& \cong \mathbb Z^4_2
\\
C_1
&=
\left\{
\begin{array}{ll}
(0,0,0,0,0) & (1,0,0,0,0) & (0,1,1,0,0) & (0,1,0,1,0) \\
(0,1,0,0,0) & (0,0,1,0,0) & (0,0,1,1,0) & (0,1,1,1,0) \\
\dots
\end{array}
\right.
& \cong \mathbb Z^5_2
\\
C_2
&=
\left\{
\begin{array}{ll}
0 & 1
\end{array}
\right.
& \cong \mathbb Z_2
\end{align}
$$</div>
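<p>Because the field is finite, a computer can enumerate these chain groups outright. A small sketch using only Python's standard library, generating every coefficient vector:</p>

```python
from itertools import product

# All elements of C_0 ≅ Z_2^4: every 0/1 coefficient vector over (a, b, c, d)
C0 = list(product([0, 1], repeat=4))
assert len(C0) == 16   # 2^4 elements

# the chain a + b + c corresponds to the vector (1, 1, 1, 0)
assert (1, 1, 1, 0) in C0

# C_1 ≅ Z_2^5 has 2^5 = 32 elements, which is why they weren't all listed
assert len(list(product([0, 1], repeat=5))) == 32
```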
<p>To represent the boundary map (which is a linear map) of the group of <span class="math">\(p\)</span>-simplices as a matrix, we set the columns to represent each <span class="math">\(p\)</span>-simplex in the group, and the rows represent each <span class="math">\((p-1)\)</span>-simplex. We put a <span class="math">\(1\)</span> in each position of the matrix if the <span class="math">\((p-1)\)</span>-simplex row is a <em>face</em> of the <span class="math">\(p\)</span>-simplex column.</p>
<p>We index rows and columns as an ordered pair <span class="math">\((i, j)\)</span> respectively. Thus the element <span class="math">\(a_{2,3}\)</span> is the element in the 2nd row (from the top) and the 3rd column (from the left).</p>
<p>The generic boundary matrix is thus (each column is a <span class="math">\(p\)</span>-simplex, each row is a <span class="math">\((p-1)\)</span>-simplex):</p>
<div class="math">$$ \begin{align}
\partial_p
&=
\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} & \cdots & a_{1,j} \\
a_{2,1} & a_{2,2} & a_{2,3} & \cdots & a_{2,j} \\
a_{3,1} & a_{3,2} & a_{3,3} & \cdots & a_{3,j} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{i,1} & a_{i,2} & a_{i,3} & \cdots & a_{i,j}
\end{pmatrix}
\end{align}
$$</div>
<p>We'll start by representing the boundary map <span class="math">\(\partial(C_2)\)</span> as a matrix. There's only one 2-simplex in <span class="math">\(C_2\)</span> so there is only one column, but there are five 1-simplices in <span class="math">\(C_1\)</span> so there are 5 rows.</p>
<div class="math">$$
\partial_2
=
\begin{array}{c|lcr}
\partial & [a,b,c] \\
\hline
[a,b] & 1 \\
[b,c] & 1 \\
[c,a] & 1 \\
[c,d] & 0 \\
[d,b] & 0 \\
\end{array}
$$</div>
<p>We put a <span class="math">\(1\)</span> in each row whose 1-simplex is a face of <span class="math">\([a,b,c]\)</span>. This matrix makes sense as a linear map because if we multiply it by an element of <span class="math">\(C_2\)</span> (there's only one, besides the <span class="math">\(0\)</span> element) we get what we expect:</p>
<div class="math">$$
\begin{align}
\begin{pmatrix}
1 \\
1 \\
1 \\
0 \\
0 \\
\end{pmatrix} *
0 \qquad
&=
\qquad
\begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0 \\
\end{pmatrix} \\
\begin{pmatrix}
1 \\
1 \\
1 \\
0 \\
0 \\
\end{pmatrix} *
1 \qquad
&=
\qquad
\begin{pmatrix}
1 \\
1 \\
1 \\
0 \\
0 \\
\end{pmatrix}
\end{align}
$$</div>
<p>Okay, let's move on to building the boundary matrix <span class="math">\(\partial(C_1)\)</span>:</p>
<div class="math">$$
\partial_1 =
\begin{array}{c|lcr}
\partial & [a,b] & [b,c] & [c,a] & [c,d] & [d,b] \\
\hline
a & 1 & 0 & 1 & 0 & 0 \\
b & 1 & 1 & 0 & 0 & 1 \\
c & 0 & 1 & 1 & 1 & 0 \\
d & 0 & 0 & 0 & 1 & 1 \\
\end{array}
$$</div>
<p>Does this make sense? Let's check with some python/numpy. Let's take an arbitrary element from the group of 1-chains, namely: <span class="math">\([a,b]+[c,a]+[c,d]\)</span> which we've encoded as <span class="math">\((1,0,1,1,0)\)</span> and apply the boundary matrix and see what we get. </p>
<div class="highlight"><pre><span></span>import numpy as np

b1 = np.array([[1, 0, 1, 0, 0],
               [1, 1, 0, 0, 1],
               [0, 1, 1, 1, 0],
               [0, 0, 0, 1, 1]])   # boundary matrix for C_1
el = np.array([1, 0, 1, 1, 0])     # the element [a,b] + [c,a] + [c,d]
(b1 @ el) % 2                      # reduce mod 2 since we work over Z_2
</pre></div>
<div class="highlight"><pre><span></span>array([0, 1, 0, 1])
</pre></div>
<p><span class="math">\(\require{cancel}\)</span></p>
<p>Recall that <span class="math">\((0,1,0,1)\)</span> translates to <span class="math">\(b+d\)</span>. By hand we can calculate the boundary and compare:
</p>
<div class="math">$$\partial([a,b]+[c,a]+[c,d]) = a+b+c+a+c+d = \cancel{a}+b+\cancel{c}+\cancel{a}+\cancel{c}+d = b+d = (0,1,0,1)$$</div>
<p>It works!</p>
<p>Lastly, we need the boundary matrix for <span class="math">\(C_0\)</span> which is trivial since the boundary of <span class="math">\(0\)</span>-simplices always maps to <span class="math">\(0\)</span>, so </p>
<div class="math">$$
\partial_0 =
\begin{pmatrix}
0 & 0 & 0 & 0 \\
\end{pmatrix}
$$</div>
<p>Okay, now that we have our three boundary matrices, how do we calculate the Betti numbers? Recall the sequence of subgroups of each chain group, $ B_n \leq Z_n \leq C_n $: the group of boundaries, the group of cycles, and the chain group, respectively.</p>
<p>Also recall that the Betti number <span class="math">\(b_n = dim(Z_n\ /\ B_n)\)</span>. That definition was for sets with group structure; now that everything is represented as vectors and matrices, we instead compute the Betti number as <span class="math">\(b_n = rank(Z_n)\ -\ rank(B_n)\)</span>. What does <strong>rank</strong> mean? Rank and dimension are related but not the same. If we think of the columns of a matrix as a set of vectors <span class="math">\(\beta_1, \beta_2, \dots \beta_k\)</span>, then the dimension of their span <span class="math">\(\langle \beta_1, \beta_2, \dots \beta_k \rangle\)</span> is the rank of the matrix. (You can use the rows instead and get the same number.) Importantly, dimension counts only a smallest, linearly independent set of spanning vectors, so the rank can be less than the number of columns.</p>
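<p>For real-valued matrices, numpy can compute rank directly. (A caveat: <code>np.linalg.matrix_rank</code> works over the reals, and the rank of a matrix over <span class="math">\(\mathbb Z_2\)</span> can differ, which is one reason we reduce our boundary matrices explicitly.) A quick illustration of rank with a made-up matrix:</p>

```python
import numpy as np

# Three columns, but the third is the sum of the first two,
# so only two are linearly independent: the rank is 2, not 3.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 0]])
assert np.linalg.matrix_rank(A) == 2
```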
<p>The boundary matrix <span class="math">\(\partial_n\)</span> contains the information we need about the chain group <span class="math">\(C_n\)</span>, its cycle subgroup <span class="math">\(Z_n\)</span>, and the boundary subgroup <span class="math">\(B_{n-1}\)</span>: everything required to calculate the Betti numbers. Unfortunately, in general, our naïve way of building the boundary matrix does not leave that group and subgroup information readily accessible. We need to modify the boundary matrix, without disturbing the mapping information it contains, into a new form called <strong>Smith normal form</strong>. Basically, the Smith normal form of a matrix has <span class="math">\(1\)</span>s along the diagonal, starting from the top left, and <span class="math">\(0\)</span>s everywhere else.</p>
<p>For example,
</p>
<div class="math">$$
\begin{align}
\text{Smith normal form}
&:\
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & ?
\end{pmatrix}
\end{align}
$$</div>
<p>Notice the <span class="math">\(1\)</span>s along the diagonal do not necessarily need to extend all the way down to the bottom right. And here's the information available once it's in Smith normal form (the red diagonal box indicates the <span class="math">\(1\)</span>s):
<img src="images/TDAimages/part4/smithnormalformsubgroups.svg" />
(Source: "COMPUTATIONAL TOPOLOGY" by Edelsbrunner and Harer, pg. 104)</p>
<p>So how do we get a matrix into Smith normal form? We manipulate the matrix according to two allowed operations:</p>
<ol>
<li>You can swap any two columns or any two rows.</li>
<li>You can add one column to another column, or one row to another row.</li>
</ol>
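<p>Both operations are easy to express as numpy array manipulations. As a sketch (one possible mod-2 implementation, not the author's algorithm), here is a reduction built from exactly these swaps and additions, applied to the <span class="math">\(\partial_1\)</span> matrix we built earlier. It also shows why the mod-2 arithmetic matters: over the reals the same matrix has rank 4, not 3.</p>

```python
import numpy as np

def rank_mod2(matrix):
    """Count pivots after reducing a 0/1 matrix mod 2, using only the
    two allowed operations: swaps, and adding one row/column to another."""
    M = matrix.copy() % 2
    rows, cols = M.shape
    rank = 0
    for x in range(min(rows, cols)):
        ones = np.argwhere(M[x:, x:] == 1)      # find a pivot candidate
        if len(ones) == 0:
            break
        i, j = ones[0] + x
        M[[x, i], :] = M[[i, x], :]             # op 1: swap two rows
        M[:, [x, j]] = M[:, [j, x]]             # op 1: swap two columns
        for r in range(rows):                   # op 2: add the pivot row to
            if r != x and M[r, x] == 1:         # clear the pivot column
                M[r, :] = (M[r, :] + M[x, :]) % 2
        for c in range(cols):                   # op 2: add the pivot column
            if c != x and M[x, c] == 1:         # to clear the pivot row
                M[:, c] = (M[:, c] + M[:, x]) % 2
        rank += 1
    return rank

d1 = np.array([[1, 0, 1, 0, 0],
               [1, 1, 0, 0, 1],
               [0, 1, 1, 1, 0],
               [0, 0, 0, 1, 1]])
assert rank_mod2(d1) == 3                  # three 1s on the diagonal
assert np.linalg.matrix_rank(d1) == 4      # rank over the reals differs!
```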
<p>Now you just need to apply these operations until the matrix is in Smith normal form. I should point out that this process is a lot easier when we use the field <span class="math">\(\mathbb Z_2\)</span>. Let's try it out on the boundary matrix for <span class="math">\(C_1\)</span>.</p>
<div class="math">$$
\partial_1 =
\begin{array}{c|lcr}
\partial & [a,b] & [b,c] & [c,a] & [c,d] & [d,b] \\
\hline
a & 1 & 0 & 1 & 0 & 0 \\
b & 1 & 1 & 0 & 0 & 1 \\
c & 0 & 1 & 1 & 1 & 0 \\
d & 0 & 0 & 0 & 1 & 1 \\
\end{array}
$$</div>
<p>We already have 1s across the diagonal, but we have a lot of 1s not along the diagonal.<br />
Steps: Add column 3 to 5, then add column 4 to 5, then add column 5 to 1, then swap columns 1 and 5:
</p>
<div class="math">$$
\partial_1 =
\begin{pmatrix}
1 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 \\
\end{pmatrix}
$$</div>
<p>Steps: Add column 1 to 3, add column 2 to 3, swap columns 3 and 4, add row 1 to 2, add row 4 to 2, add row 3 to 2, add row 4 to 3, swap rows 3 and 2, swap rows 4 and 3. Stop.
</p>
<div class="math">$$
\text{Smith normal form: }
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
\end{pmatrix}
$$</div>
<p>
Once we have the matrix in Smith normal form, we don't do any more operations. Note that both allowed operations are invertible, so they preserve the rank; further shuffling would never destroy (or add to) the information the matrix encodes, but only Smith normal form lays that information out where we can read it off. I added rows and columns somewhat haphazardly to get there, but there is an algorithm that does it systematically and relatively efficiently.</p>
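<p>With the ranks in hand, we can already anticipate the Betti number computation via <span class="math">\(b_n = rank(Z_n) - rank(B_n)\)</span> from above. The Smith normal form of <span class="math">\(\partial_1\)</span> has three 1s, so its null space (the cycles <span class="math">\(Z_1\)</span>) has rank <span class="math">\(5 - 3 = 2\)</span>, while the single nonzero column of <span class="math">\(\partial_2\)</span> means the boundaries <span class="math">\(B_1\)</span> have rank 1. A small worked check of that arithmetic:</p>

```python
# ranks read off the reduced boundary matrices in the text
rank_d1 = 3          # pivots in the Smith normal form of ∂1
rank_d2 = 1          # ∂2 is a single nonzero column
num_edges = 5        # 1-simplices generating C_1

rank_Z1 = num_edges - rank_d1   # cycles = nullity of ∂1
rank_B1 = rank_d2               # boundaries = rank of ∂2
b1 = rank_Z1 - rank_B1
assert b1 == 1   # one loop: the cycle [b,c] + [c,d] + [d,b] is not filled in
```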
<p>Rather than walk through the detailed implementation of the Smith normal form algorithm, I will merely use an <a href="https://triangleinequality.wordpress.com/2014/01/23/computing-homology/">existing algorithm</a>:</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">reduce_matrix</span><span class="p">(</span><span class="n">matrix</span><span class="p">):</span>
    <span class="c1">#Returns [reduced_matrix, rank, nullity]</span>
    <span class="k">if</span> <span class="n">np</span><span class="o">.</span><span class="n">size</span><span class="p">(</span><span class="n">matrix</span><span class="p">)</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
        <span class="k">return</span> <span class="p">[</span><span class="n">matrix</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]</span>
    <span class="n">m</span><span class="o">=</span><span class="n">matrix</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
    <span class="n">n</span><span class="o">=</span><span class="n">matrix</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
    <span class="k">def</span> <span class="nf">_reduce</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
        <span class="c1">#We recurse through the diagonal entries.</span>
        <span class="c1">#We move a 1 to the diagonal entry, then</span>
        <span class="c1">#knock out any other 1s in the same col/row.</span>
        <span class="c1">#The rank is the number of nonzero pivots,</span>
        <span class="c1">#so when we run out of nonzero diagonal entries, we will</span>
        <span class="c1">#know the rank.</span>
        <span class="n">nonzero</span><span class="o">=</span><span class="kc">False</span>
        <span class="c1">#Searching for a nonzero entry then moving it to the diagonal.</span>
        <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">m</span><span class="p">):</span>
            <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">n</span><span class="p">):</span>
                <span class="k">if</span> <span class="n">matrix</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="p">]</span><span class="o">==</span><span class="mi">1</span><span class="p">:</span>
                    <span class="n">matrix</span><span class="p">[[</span><span class="n">x</span><span class="p">,</span><span class="n">i</span><span class="p">],:]</span><span class="o">=</span><span class="n">matrix</span><span class="p">[[</span><span class="n">i</span><span class="p">,</span><span class="n">x</span><span class="p">],:]</span>
                    <span class="n">matrix</span><span class="p">[:,[</span><span class="n">x</span><span class="p">,</span><span class="n">j</span><span class="p">]]</span><span class="o">=</span><span class="n">matrix</span><span class="p">[:,[</span><span class="n">j</span><span class="p">,</span><span class="n">x</span><span class="p">]]</span>
                    <span class="n">nonzero</span><span class="o">=</span><span class="kc">True</span>
                    <span class="k">break</span>
            <span class="k">if</span> <span class="n">nonzero</span><span class="p">:</span>
                <span class="k">break</span>
        <span class="c1">#Knocking out other nonzero entries.</span>
        <span class="k">if</span> <span class="n">nonzero</span><span class="p">:</span>
            <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">x</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span><span class="n">m</span><span class="p">):</span>
                <span class="k">if</span> <span class="n">matrix</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">x</span><span class="p">]</span><span class="o">==</span><span class="mi">1</span><span class="p">:</span>
                    <span class="n">matrix</span><span class="p">[</span><span class="n">i</span><span class="p">,:]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">logical_xor</span><span class="p">(</span><span class="n">matrix</span><span class="p">[</span><span class="n">x</span><span class="p">,:],</span> <span class="n">matrix</span><span class="p">[</span><span class="n">i</span><span class="p">,:])</span>
            <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">x</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span><span class="n">n</span><span class="p">):</span>
                <span class="k">if</span> <span class="n">matrix</span><span class="p">[</span><span class="n">x</span><span class="p">,</span><span class="n">i</span><span class="p">]</span><span class="o">==</span><span class="mi">1</span><span class="p">:</span>
                    <span class="n">matrix</span><span class="p">[:,</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">logical_xor</span><span class="p">(</span><span class="n">matrix</span><span class="p">[:,</span><span class="n">x</span><span class="p">],</span> <span class="n">matrix</span><span class="p">[:,</span><span class="n">i</span><span class="p">])</span>
            <span class="c1">#Proceeding to next diagonal entry.</span>
            <span class="k">return</span> <span class="n">_reduce</span><span class="p">(</span><span class="n">x</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span>
        <span class="k">else</span><span class="p">:</span>
            <span class="c1">#Run out of nonzero entries so done.</span>
            <span class="k">return</span> <span class="n">x</span>
    <span class="n">rank</span><span class="o">=</span><span class="n">_reduce</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
    <span class="k">return</span> <span class="p">[</span><span class="n">matrix</span><span class="p">,</span> <span class="n">rank</span><span class="p">,</span> <span class="n">n</span><span class="o">-</span><span class="n">rank</span><span class="p">]</span>
<span class="c1"># Source: < https://triangleinequality.wordpress.com/2014/01/23/computing-homology/ ></span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">reduce_matrix</span><span class="p">(</span><span class="n">b1</span><span class="p">)</span>
<span class="c1">#Returns the matrix in Smith normal form as well as rank(B_n-1) and rank(Z_n)</span>
</pre></div>
<div class="highlight"><pre><span></span>[matrix([[1, 0, 0, 0, 0],
         [0, 1, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]), 3, 2]
</pre></div>
<p>As you can see, the algorithm produced the same result we obtained by hand, and far more efficiently.</p>
<p>Since each boundary map <span class="math">\(\partial_n\)</span> gives us both <span class="math">\(Z_n\)</span> (the cycles of the <span class="math">\(n\)</span>-chain group) and <span class="math">\(B_{n-1}\)</span> (the boundaries of the <span class="math">\((n-1)\)</span>-chain group), we need both <span class="math">\(\partial_n\)</span> and <span class="math">\(\partial_{n+1}\)</span> in order to calculate the Betti number of chain group <span class="math">\(n\)</span>. Remember, we now calculate Betti numbers as <br />
Betti <span class="math">\(b_n = rank(Z_n) - rank(B_n)\)</span></p>
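<p>To make that formula concrete, here is a minimal, self-contained sketch of the arithmetic for <span class="math">\(b_1\)</span> of our example complex. The <code>gf2_rank</code> helper below is my own compact stand-in for the <code>reduce_matrix</code> routine above (both compute ranks over <span class="math">\(\mathbb{Z}_2\)</span>); the matrices are the boundary maps <span class="math">\(\partial_1\)</span> and <span class="math">\(\partial_2\)</span> we built earlier.</p>

```python
import numpy as np

def gf2_rank(M):
    # Rank over Z/2 by Gaussian elimination (a compact stand-in
    # for the reduce_matrix routine above).
    A = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for col in range(A.shape[1]):
        pivots = np.nonzero(A[rank:, col])[0]
        if pivots.size == 0:
            continue  # no pivot in this column
        A[[rank, rank + pivots[0]]] = A[[rank + pivots[0], rank]]  # swap into place
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]  # XOR is addition mod 2
        rank += 1
        if rank == A.shape[0]:
            break
    return rank

d1 = np.array([[1, 0, 1, 0, 0],    # boundary map d_1 (vertices x edges)
               [1, 1, 0, 0, 1],
               [0, 1, 1, 1, 0],
               [0, 0, 0, 1, 1]])
d2 = np.array([[1], [1], [1], [0], [0]])  # boundary map d_2 (edges x triangles)

rank_Z1 = d1.shape[1] - gf2_rank(d1)  # nullity of d_1 = rank(Z_1) = 2
rank_B1 = gf2_rank(d2)                # rank of d_2 = rank(B_1) = 1
b1 = rank_Z1 - rank_B1                # Betti number b_1 = 2 - 1 = 1
print(b1)
```

<p>This matches the Smith-normal-form computation: the nullity of <span class="math">\(\partial_1\)</span> is 2 and the rank of <span class="math">\(\partial_2\)</span> is 1, giving one 1-dimensional hole.</p>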
<p>Let's start calculating those Betti numbers.</p>
<div class="highlight"><pre><span></span><span class="c1">#Initialize boundary matrices</span>
<span class="n">boundaryMap0</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">matrix</span><span class="p">([[</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]])</span>
<span class="n">boundaryMap1</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">matrix</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">],[</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">]])</span>
<span class="n">boundaryMap2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">matrix</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]])</span>
<span class="c1">#Smith normal forms of the boundary matrices</span>
<span class="n">smithBM0</span> <span class="o">=</span> <span class="n">reduce_matrix</span><span class="p">(</span><span class="n">boundaryMap0</span><span class="p">)</span>
<span class="n">smithBM1</span> <span class="o">=</span> <span class="n">reduce_matrix</span><span class="p">(</span><span class="n">boundaryMap1</span><span class="p">)</span>
<span class="n">smithBM2</span> <span class="o">=</span> <span class="n">reduce_matrix</span><span class="p">(</span><span class="n">boundaryMap2</span><span class="p">)</span>
<span class="c1">#Calculate Betti numbers</span>
<span class="n">betti0</span> <span class="o">=</span> <span class="p">(</span><span class="n">smithBM0</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">-</span> <span class="n">smithBM1</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">betti1</span> <span class="o">=</span> <span class="p">(</span><span class="n">smithBM1</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">-</span> <span class="n">smithBM2</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="n">betti2</span> <span class="o">=</span> <span class="mi">0</span> <span class="c1">#There is no n+1 chain group, so the Betti is 0</span>
<span class="nb">print</span><span class="p">(</span><span class="n">smithBM0</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">smithBM1</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">smithBM2</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Betti #0: </span><span class="si">%s</span><span class="s2"> </span><span class="se">\n</span><span class="s2"> Betti #1: </span><span class="si">%s</span><span class="s2"> </span><span class="se">\n</span><span class="s2"> Betti #2: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">betti0</span><span class="p">,</span> <span class="n">betti1</span><span class="p">,</span> <span class="n">betti2</span><span class="p">))</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="k">[matrix([[0, 0, 0, 0]]), 0, 4]</span>
<span class="na">[matrix([[1, 0, 0, 0, 0],</span>
<span class="w"> </span><span class="na">[0, 1, 0, 0, 0],</span>
<span class="w"> </span><span class="na">[0, 0, 1, 0, 0],</span>
<span class="w"> </span><span class="k">[0, 0, 0, 0, 0]]), 3, 2]</span>
<span class="k">[matrix([[1, 0, 0, 0, 0]]), 1, 4]</span>
<span class="na">Betti #0</span><span class="o">:</span><span class="w"> </span><span class="s">1</span><span class="w"> </span>
<span class="w"> </span><span class="na">Betti #1</span><span class="o">:</span><span class="w"> </span><span class="s">1</span><span class="w"> </span>
<span class="w"> </span><span class="na">Betti #2</span><span class="o">:</span><span class="w"> </span><span class="s">0</span>
</pre></div>
<p>Great, it worked!</p>
<p>But we skipped an important step: we initially designed the boundary matrices by hand. In order to automate the entire process, from building a simplicial complex over data to computing Betti numbers, we need an algorithm that takes a simplicial complex and builds its boundary matrices. Let's tackle that now.</p>
<div class="highlight"><pre><span></span><span class="c1">#return the n-simplices in a complex</span>
<span class="k">def</span> <span class="nf">nSimplices</span><span class="p">(</span><span class="n">n</span><span class="p">,</span> <span class="nb">complex</span><span class="p">):</span>
    <span class="n">nchain</span> <span class="o">=</span> <span class="p">[]</span>
    <span class="k">for</span> <span class="n">simplex</span> <span class="ow">in</span> <span class="nb">complex</span><span class="p">:</span>
        <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span> <span class="o">==</span> <span class="p">(</span><span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
            <span class="n">nchain</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">simplex</span><span class="p">)</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">nchain</span> <span class="o">==</span> <span class="p">[]):</span> <span class="n">nchain</span> <span class="o">=</span> <span class="p">[</span><span class="mi">0</span><span class="p">]</span>
    <span class="k">return</span> <span class="n">nchain</span>

<span class="c1">#check if simplex is a face of another simplex</span>
<span class="k">def</span> <span class="nf">checkFace</span><span class="p">(</span><span class="n">face</span><span class="p">,</span> <span class="n">simplex</span><span class="p">):</span>
    <span class="k">if</span> <span class="n">simplex</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
        <span class="k">return</span> <span class="mi">1</span>
    <span class="k">elif</span> <span class="nb">set</span><span class="p">(</span><span class="n">face</span><span class="p">)</span> <span class="o"><</span> <span class="nb">set</span><span class="p">(</span><span class="n">simplex</span><span class="p">):</span> <span class="c1">#if face is a subset of simplex</span>
        <span class="k">return</span> <span class="mi">1</span>
    <span class="k">else</span><span class="p">:</span>
        <span class="k">return</span> <span class="mi">0</span>

<span class="c1">#build boundary matrix for dimension n ---> (n-1) = p</span>
<span class="k">def</span> <span class="nf">boundaryMatrix</span><span class="p">(</span><span class="n">nchain</span><span class="p">,</span> <span class="n">pchain</span><span class="p">):</span>
    <span class="n">bmatrix</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="nb">len</span><span class="p">(</span><span class="n">nchain</span><span class="p">),</span><span class="nb">len</span><span class="p">(</span><span class="n">pchain</span><span class="p">)))</span>
    <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="k">for</span> <span class="n">nSimplex</span> <span class="ow">in</span> <span class="n">nchain</span><span class="p">:</span>
        <span class="n">j</span> <span class="o">=</span> <span class="mi">0</span>
        <span class="k">for</span> <span class="n">pSimplex</span> <span class="ow">in</span> <span class="n">pchain</span><span class="p">:</span>
            <span class="n">bmatrix</span><span class="p">[</span><span class="n">i</span><span class="p">,</span> <span class="n">j</span><span class="p">]</span> <span class="o">=</span> <span class="n">checkFace</span><span class="p">(</span><span class="n">pSimplex</span><span class="p">,</span> <span class="n">nSimplex</span><span class="p">)</span>
            <span class="n">j</span> <span class="o">+=</span> <span class="mi">1</span>
        <span class="n">i</span> <span class="o">+=</span> <span class="mi">1</span>
    <span class="k">return</span> <span class="n">bmatrix</span><span class="o">.</span><span class="n">T</span>
</pre></div>
<p>Those are simple helper functions that we'll use to build the boundary matrices, which we can then put into Smith normal form with the reduction algorithm described above. Remember, the simplicial complex example we're using looks like this:
<img src="images/TDAimages/part4/simplicialcomplex5b.svg" />
I've just replaced {a,b,c,d} with {0,1,2,3} so Python can understand it.</p>
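<p>This encoding makes the face relation trivial to test: a simplex is a face of another exactly when one set is a proper subset of the other, which is what <code>checkFace</code> checks with Python's proper-subset operator on sets:</p>

```python
# Simplices as Python sets: the face relation is proper subset.
face = {0, 1}          # the edge {a, b}
simplex = {0, 1, 2}    # the triangle {a, b, c}

print(face < simplex)     # True: {0,1} is a face of {0,1,2}
print(simplex < simplex)  # False: a simplex is not a proper face of itself
```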
<div class="highlight"><pre><span></span><span class="n">S</span> <span class="o">=</span> <span class="p">[{</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">}]</span> <span class="c1">#this is our simplex from above</span>
<span class="n">chain2</span> <span class="o">=</span> <span class="n">nSimplices</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">S</span><span class="p">)</span>
<span class="n">chain1</span> <span class="o">=</span> <span class="n">nSimplices</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">S</span><span class="p">)</span>
<span class="n">reduce_matrix</span><span class="p">(</span><span class="n">boundaryMatrix</span><span class="p">(</span><span class="n">chain2</span><span class="p">,</span> <span class="n">chain1</span><span class="p">))</span>
</pre></div>
<div class="highlight"><pre><span></span>[array([[ 1., 0., 0., 0., 0.],
        [ 0., 1., 0., 0., 0.],
        [ 0., 0., 1., 0., 0.],
        [ 0., 0., 0., 0., 0.]]), 3, 2]
</pre></div>
<p>Now let's put everything together and make a function that will return all the Betti numbers of a simplicial complex.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">betti</span><span class="p">(</span><span class="nb">complex</span><span class="p">):</span>
    <span class="n">max_dim</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="nb">max</span><span class="p">(</span><span class="nb">complex</span><span class="p">,</span> <span class="n">key</span><span class="o">=</span><span class="nb">len</span><span class="p">))</span> <span class="c1">#get the maximum dimension of the simplicial complex, 2 in our example</span>
    <span class="n">betti_array</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="n">max_dim</span><span class="p">)</span> <span class="c1">#setup array to store n-th dimensional Betti numbers</span>
    <span class="n">z_n</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="n">max_dim</span><span class="p">)</span> <span class="c1">#number of cycles (from cycle group)</span>
    <span class="n">b_n</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="n">max_dim</span><span class="p">)</span> <span class="c1">#b_(n-1) boundary group</span>
    <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">max_dim</span><span class="p">):</span> <span class="c1">#loop through each dimension starting from maximum to generate boundary maps</span>
        <span class="n">bm</span> <span class="o">=</span> <span class="mi">0</span> <span class="c1">#setup n-th boundary matrix</span>
        <span class="n">chain2</span> <span class="o">=</span> <span class="n">nSimplices</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="nb">complex</span><span class="p">)</span> <span class="c1">#n-th chain group</span>
        <span class="k">if</span> <span class="n">i</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span> <span class="c1">#there is no n+1 boundary matrix in this case</span>
            <span class="n">bm</span> <span class="o">=</span> <span class="mi">0</span>
            <span class="n">z_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">chain2</span><span class="p">)</span>
            <span class="n">b_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="mi">0</span>
        <span class="k">else</span><span class="p">:</span>
            <span class="n">chain1</span> <span class="o">=</span> <span class="n">nSimplices</span><span class="p">(</span><span class="n">i</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="nb">complex</span><span class="p">)</span> <span class="c1">#(n-1)th chain group</span>
            <span class="n">bm</span> <span class="o">=</span> <span class="n">reduce_matrix</span><span class="p">(</span><span class="n">boundaryMatrix</span><span class="p">(</span><span class="n">chain2</span><span class="p">,</span> <span class="n">chain1</span><span class="p">))</span>
            <span class="n">z_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">bm</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
            <span class="n">b_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">bm</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="c1">#b_(n-1)</span>
    <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">max_dim</span><span class="p">):</span> <span class="c1">#Calculate betti number: Z_n - B_n</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o"><</span> <span class="n">max_dim</span><span class="p">:</span>
            <span class="n">betti_array</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">z_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">-</span> <span class="n">b_n</span><span class="p">[</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="p">]</span>
        <span class="k">else</span><span class="p">:</span>
            <span class="n">betti_array</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">z_n</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">-</span> <span class="mi">0</span> <span class="c1">#if there are no higher simplices, the boundary group of this chain is 0</span>
    <span class="k">return</span> <span class="n">betti_array</span>
</pre></div>
<p>Alright, now we should have everything we need to calculate the Betti numbers of any arbitrary simplicial complex given in the right format. Keep in mind that all this code is for learning purposes, so I've kept it intentionally simple. It is not production ready: it has essentially no safety checks, so it will simply fail if it gets something even slightly unexpected.</p>
<p>But let's see how versatile our procedure is by trying it out on various simplicial complexes. </p>
<p>Let <span class="math">\(H = \text{ { {0}, {1}, {2}, {3}, {4}, {5}, {4, 5}, {0, 1}, {1, 2}, {2, 0}, {2, 3}, {3, 1}, {0, 1, 2} } }\)</span>
<img src="images/TDAimages/part4/simplicialComplex7a.png" /></p>
<p>As you can tell, this is the same simplicial complex we've been working with, except now it has a disconnected edge on the right. Thus we should get a Betti number of 2 in dimension 0, since there are 2 connected components.</p>
<div class="highlight"><pre><span></span><span class="n">H</span> <span class="o">=</span> <span class="p">[{</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">},</span> <span class="p">{</span><span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">}]</span>
<span class="n">betti</span><span class="p">(</span><span class="n">H</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>array([ 2., 1., 0.])
</pre></div>
<p>Let's try another one, now with 2 cycles and 2 connected components. <br />
Let <span class="math">\(Y_1 = \text{ { {0}, {1}, {2}, {3}, {4}, {5}, {6}, {0, 6}, {2, 6}, {4, 5}, {0, 1}, {1, 2}, {2, 0}, {2, 3}, {3, 1}, {0, 1, 2} } }\)</span>
<img src="images/TDAimages/part4/simplicialComplex7b.png" /></p>
<div class="highlight"><pre><span></span><span class="n">Y1</span> <span class="o">=</span> <span class="p">[{</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">},</span> <span class="p">{</span><span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">}]</span>
<span class="n">betti</span><span class="p">(</span><span class="n">Y1</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>array([ 2., 2., 0.])
</pre></div>
<p>Here's another; I've just added a stranded vertex: <br />
Let <span class="math">\(Y_2 = \text{ { {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}, {0, 6}, {2, 6}, {4, 5}, {0, 1}, {1, 2}, {2, 0}, {2, 3}, {3, 1}, {0, 1, 2} } }\)</span>
<img src="images/TDAimages/part4/simplicialComplex7c.png" /></p>
<div class="highlight"><pre><span></span><span class="n">Y2</span> <span class="o">=</span> <span class="p">[{</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">},</span> <span class="p">{</span><span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">7</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">6</span><span class="p">},</span> <span class="p">{</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">}]</span>
<span class="n">betti</span><span class="p">(</span><span class="n">Y2</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>array([ 3., 2., 0.])
</pre></div>
<p>One last one. This is a hollow tetrahedron:
<img src="images/TDAimages/part4/simplicialComplex8a.png" /></p>
<div class="highlight"><pre><span></span><span class="n">D</span> <span class="o">=</span> <span class="p">[{</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">3</span><span class="p">,</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span><span class="mi">0</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span><span class="mi">1</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">},</span> <span class="p">{</span><span class="mi">2</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">3</span><span class="p">},</span> <span class="p">{</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">}]</span>
<span class="n">betti</span><span class="p">(</span><span class="n">D</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>array([ 1., 0., 1.])
</pre></div>
<p>Exactly what we expect! Okay, it looks like we can reliably calculate Betti numbers for any arbitrary simplicial complex. </p>
<h6>What's next?</h6>
<p>These first 4 posts were all just exposition on the math and concepts behind persistent homology, but so far all we've done is (non-persistent) homology. Remember back in part 2 where we wrote an algorithm to build a simplicial complex from data? Recall that we needed to arbitrarily choose a parameter <span class="math">\(\epsilon\)</span> that determined whether or not two vertices were close enough to connect with an edge. If we chose a small <span class="math">\(\epsilon\)</span> then we'd get a sparse graph with few edges, whereas if we chose a large <span class="math">\(\epsilon\)</span> then we'd get a very dense graph with a lot of edges.</p>
<p>The problem is we have no way of knowing what the "correct" <span class="math">\(\epsilon\)</span> value should be. We will get dramatically different simplicial complexes (and thus different homology groups and Betti numbers) with varying levels of <span class="math">\(\epsilon\)</span>. Persistent homology basically says: let's just continuously scale <span class="math">\(\epsilon\)</span> from 0 to the maximal value (where all vertices are edge-wise connected) and see which topological features <em>persist</em> the longest. We then believe that topological features (e.g. connected components, cycles) that are short-lived across scaling <span class="math">\(\epsilon\)</span> are noise whereas those that are long-lived (i.e. persistent) are <em>real</em> features of the data. So next time we will work on modifying our algorithms to be able to continuously vary <span class="math">\(\epsilon\)</span> while tracking changes in the calculated homology groups.</p>
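<p>To make the idea concrete, here is a minimal sketch (with made-up sample points and a simple union-find; not the algorithm we will actually develop next time) that sweeps <span class="math">\(\epsilon\)</span> and watches how the number of connected components (<span class="math">\(b_0\)</span>) changes:</p>

```python
import numpy as np

def connected_components(points, eps):
    # Union-find over vertices; join any pair of points closer than eps
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two well-separated clusters of sample points
pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
for eps in [0.05, 0.2, 1.0, 8.0]:
    print(eps, connected_components(pts, eps))
```

<p>The 6-component stage exists only for tiny <span class="math">\(\epsilon\)</span> (noise), while the 2-component structure persists across a wide range of <span class="math">\(\epsilon\)</span> before everything merges into one component — that long-lived feature is the "real" one.</p>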
<h4>References (Websites):</h4>
<ol>
<li>http://dyinglovegrape.com/math/topology_data_1.php</li>
<li>http://www.math.uiuc.edu/~r-ash/Algebra/Chapter4.pdf</li>
<li>https://en.wikipedia.org/wiki/Group_(mathematics)</li>
<li>https://jeremykun.com/2013/04/03/homology-theory-a-primer/</li>
<li>http://suess.sdf-eu.org/website/lang/de/algtop/notes4.pdf</li>
<li>http://www.mit.edu/~evanchen/napkin.html</li>
<li>https://triangleinequality.wordpress.com/2014/01/23/computing-homology</li>
</ol>
<h4>References (Academic Publications):</h4>
<ol>
<li>
<p>Basher, M. (2012). On the Folding of Finite Topological Space. International Mathematical Forum, 7(15), 745–752. Retrieved from http://www.m-hikari.com/imf/imf-2012/13-16-2012/basherIMF13-16-2012.pdf</p>
</li>
<li>
<p>Day, M. (2012). Notes on Cayley Graphs for Math 5123 Cayley graphs, 1–6.</p>
</li>
<li>
<p>Doktorova, M. (2012). CONSTRUCTING SIMPLICIAL COMPLEXES OVER by, (June).</p>
</li>
<li>
<p>Edelsbrunner, H. (2006). IV.1 Homology. Computational Topology, 81–87. Retrieved from http://www.cs.duke.edu/courses/fall06/cps296.1/</p>
</li>
<li>
<p>Erickson, J. (1908). Homology. Computational Topology, 1–11.</p>
</li>
<li>
<p>Evan Chen. (2016). An Infinitely Large Napkin.</p>
</li>
<li>
<p>Grigor’yan, A., Muranov, Y. V., & Yau, S. T. (2014). Graphs associated with simplicial complexes. Homology, Homotopy and Applications, 16(1), 295–311. http://doi.org/10.4310/HHA.2014.v16.n1.a16</p>
</li>
<li>
<p>Kaczynski, T., Mischaikow, K., & Mrozek, M. (2003). Computing homology. Homology, Homotopy and Applications, 5(2), 233–256. http://doi.org/10.4310/HHA.2003.v5.n2.a8</p>
</li>
<li>
<p>Kerber, M. (2016). Persistent Homology – State of the art and challenges 1 Motivation for multi-scale topology. Internat. Math. Nachrichten Nr, 231(231), 15–33.</p>
</li>
<li>
<p>Khoury, M. (n.d.). Lecture 6 : Introduction to Simplicial Homology Topics in Computational Topology : An Algorithmic View, 1–6.</p>
</li>
<li>
<p>Kraft, R. (2016). Illustrations of Data Analysis Using the Mapper Algorithm and Persistent Homology.</p>
</li>
<li>
<p>Lakshmivarahan, S., & Sivakumar, L. (2016). Cayley Graphs, (1), 1–9.</p>
</li>
<li>
<p>Liu, X., Xie, Z., & Yi, D. (2012). A fast algorithm for constructing topological structure in large data. Homology, Homotopy and Applications, 14(1), 221–238. http://doi.org/10.4310/HHA.2012.v14.n1.a11</p>
</li>
<li>
<p>Naik, V. (2006). Group theory : a first journey, 1–21.</p>
</li>
<li>
<p>Otter, N., Porter, M. A., Tillmann, U., Grindrod, P., & Harrington, H. A. (2015). A roadmap for the computation of persistent homology. Preprint ArXiv, (June), 17. Retrieved from http://arxiv.org/abs/1506.08903</p>
</li>
<li>
<p>Semester, A. (2017). § 4 . Simplicial Complexes and Simplicial Homology, 1–13.</p>
</li>
<li>
<p>Singh, G. (2007). Algorithms for Topological Analysis of Data, (November).</p>
</li>
<li>
<p>Zomorodian, A. (2009). Computational Topology Notes. Advances in Discrete and Computational Geometry, 2, 109–143. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.7483</p>
</li>
<li>
<p>Zomorodian, A. (2010). Fast construction of the Vietoris-Rips complex. Computers and Graphics (Pergamon), 34(3), 263–271. http://doi.org/10.1016/j.cag.2010.03.007</p>
</li>
<li>
<p>Symmetry and Group Theory 1. (2016), 1–18. http://doi.org/10.1016/B978-0-444-53786-7.00026-5</p>
</li>
</ol>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Persistent Homology (Part 3)2017-02-23T00:30:00-06:002017-02-23T00:30:00-06:00Brandon Browntag:outlace.com,2017-02-23:/TDApart3.html<p>In part 3 we start calculating homology groups and Betti numbers of simplicial complexes.</p><h2>Topological Data Analysis - Part 3 - Persistent Homology</h2>
<p>This is Part 3 in a series on topological data analysis.
See <a href="TDApart1.html">Part 1</a> | <a href="TDApart2.html">Part 2</a> | <a href="TDApart4.html">Part 4</a> | <a href="TDApart5.html">Part 5</a></p>
<p>In this part we begin to apply the math we learned in Parts 1-2 to actually calculating the interesting topological features of a simplicial complex.</p>
<h4>Back to simplicial homology</h4>
<p>We've finally covered enough group theory to be able to finish our calculations of homology groups on simplicial complexes. As you should recall, we have already given definitions for the n-th homology group and the n-th Betti number.</p>
<p>Betti numbers are what we ultimately want. They nicely summarize the topological properties of a simplicial complex. If we have a simplicial complex that forms a single circular object, then <span class="math">\(b_0\)</span> (the 0th Betti number) is the number of connected components, which is 1; <span class="math">\(b_1\)</span> is the number of 1-dimensional holes (i.e. cycles), which is also 1; and <span class="math">\(b_n,\ n \gt 1\)</span> count the higher-dimensional holes, of which there are none.</p>
<p>Let's see if we can calculate the Betti Numbers of a simple triangle simplicial complex. </p>
<p>Recall that <span class="math">\(\mathcal T = \{\{a\}, \{b\}, \{c\}, [a, b], [b, c], [c, a]\}\)</span>. (Depicted below).
<img src="images/TDAimages/triangleSimplex.svg" /></p>
<p>Since we know, by visual inspection, that <span class="math">\(\mathcal T\)</span> <em>should</em> have Betti numbers <span class="math">\(b_0 = 1\)</span> (1 connected component) and <span class="math">\(b_1 = 1\)</span> (1 one-dimensional hole), we will only compute those Betti numbers.</p>
<p>Let's walk through the whole sequence of steps slowly. First we'll note the n-chains.</p>
<p>The 0-chain is the set of 0-simplices: <span class="math">\(\{\{a\}, \{b\}, \{c\}\}\)</span><br />
The 1-chain is the set of 1-simplices: <span class="math">\(\{[a, b], [b, c], [c, a]\}\)</span><br />
There are no higher-dimensional n-chains.</p>
<p>Now we can use the n-chains to define our <em>chain groups</em>. We're going to be using coefficients from <span class="math">\(\mathbb Z_2\)</span>, which is a field, and remember there are only 2 elements <span class="math">\(\{0,1\}\)</span> where <span class="math">\(1+1=0\)</span>. </p>
<p>The 0-chain group is defined as:
</p>
<div class="math">$$C_0 = \{\{x*(a,0,0)\}, \{y*(0,b,0)\}, \{z*(0,0,c)\} \mid x,y,z \in \mathbb Z_2\} \\$$</div>
<p>
Remember a group only has an addition operation defined, but we're <em>building</em> the group by using a multiplication operation from the field <span class="math">\(\mathbb Z_2\)</span>. So this group is actually isomorphic to <span class="math">\(\mathbb Z_{2}^{3} = \mathbb Z_{2} \oplus \mathbb Z_{2} \oplus \mathbb Z_{2}\)</span>.</p>
<p>But we also want to represent our chain groups as a vector space. This means it becomes a structure where elements can be scaled up or down (i.e. a multiplication operation) by elements of a field (in our case <span class="math">\(\mathbb Z_2\)</span>) and added together, with all results still contained in the structure. If we only pay attention to the addition operation then we're basically looking at its group structure, whereas if we pay attention to both the multiplication and addition operations then we are considering it as a vector space.</p>
<p>The 0-chain vector space is generated by:
</p>
<div class="math">$$\mathscr C_0 = \{\{x*(a,0,0)\}, \{y*(0,b,0)\}, \{z*(0,0,c)\} \mid x,y,z \in \mathbb Z_2\} \\$$</div>
<p>
(Yes it's the same set that forms the group from above).</p>
<p>The vector space is the set of elements we can multiply by 0 or 1, and add them together. For example, we can do: <span class="math">\(1*(a,0,0) + 1*(0,0,c) = (a,0,c)\)</span>. This vector space is so small <span class="math">\((2^3=8\ \text{elements})\)</span> we can actually list out all the elements. Here they are:</p>
<div class="math">$$\mathscr{C_0} = \begin{Bmatrix} (a,0,0), (0,b,0), (0,0,c), (a,b,0) \\
(a,b,c), (0,b,c), (a,0,c), (0,0,0) \end{Bmatrix} \\ $$</div>
<p>You can see that we can add any two elements in this vector space and the result will be another element in the vector space. A random example: <span class="math">\((a,0,c) + (a,b,c) = (a+a,0+b,c+c) = (0,b,0)\)</span>. Recall addition is component-wise. We can also multiply vectors by an element in our field <span class="math">\(\mathbb Z_2\)</span> but since our field is finite with only 2 elements, it's not very interesting, e.g. <span class="math">\(1*(a,b,0) = (a,b,0)\)</span> and <span class="math">\(0*(a,b,0) = (0,0,0)\)</span>, but none the less, the multiplication operation still results in an element within our vector space.</p>
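<p>We can sanity-check this arithmetic in code: a chain like <span class="math">\(xa + yb + zc\)</span> is just a bit vector <span class="math">\((x,y,z)\)</span>, addition is component-wise mod 2, and the whole space has <span class="math">\(2^3 = 8\)</span> elements. A minimal sketch:</p>

```python
import itertools

# Represent a 0-chain xa + yb + zc over Z_2 as a bit tuple (x, y, z)
def add_chains(u, v):
    # Component-wise addition mod 2: each generator is its own inverse
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Enumerate all 2**3 = 8 elements of the chain space
space = list(itertools.product([0, 1], repeat=3))
print(len(space))  # 8

# (a,0,c) + (a,b,c) = (0,b,0), matching the worked example above
print(add_chains((1, 0, 1), (1, 1, 1)))  # (0, 1, 0)
```

<p>Closure is easy to verify by brute force: adding any two of the 8 elements always lands back in the set.</p>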
<p>We can represent this vector space as a polynomial, so our 0-chain vector space can be defined equivalently as:
</p>
<div class="math">$$ \mathscr{C_0} = \{xa + yb + zc \mid z,y,z \in \mathbb Z_2\}$$</div>
<p>
We can easily translate a polynomial like <span class="math">\(a+b+c\)</span> to its ordered-set notation <span class="math">\((a,b,c)\)</span>. Or <span class="math">\(a+b\)</span> is <span class="math">\((a,b,0)\)</span>. The vector space seen as a set of polynomials looks like this:
</p>
<div class="math">$$ \mathscr{C_0} = \begin{Bmatrix} \text{ {a}, {b}, {c}, {a+b} $\\$
{a+b+c}, {b+c}, {a+c}, {0} } \end{Bmatrix} \\ $$</div>
<p>
It's more convenient to work with the polynomial form, in general, because we can make familiar algebraic equations like </p>
<div class="math">$$a+b=0 \\
a = -b \\
a = b $$</div>
<p>
(Recall that the inverse of an element in <span class="math">\(\mathbb Z_2\)</span> is just itself, hence <span class="math">\(-b = b\)</span> where "<span class="math">\(-\)</span>" denotes inverse).</p>
<blockquote>
<p><strong>NOTE</strong>: It is very important to keep track of whether we're talking about groups or vector spaces. I will use a normal letter <span class="math">\(C\)</span> to denote the chain <strong>group</strong> and the fancy script <span class="math">\(\mathscr{C}\)</span> to denote the chain <strong>(vector) space</strong>. They have the same underlying <em>set</em>, only different operations defined. If we talk about the group form we can only reference its addition operation, whereas if we talk about its vector space form we can talk about its multiplication and addition operation.</p>
</blockquote>
<p>Let's do the same for the 1-chains: <span class="math">\(\{[a, b], [b, c], [c, a]\}\)</span>. We can use the 1-chain set to define another chain group, <span class="math">\(C_1\)</span>. It will be isomorphic to <span class="math">\(C_0\)</span> and hence to <span class="math">\(\mathbb Z_{2}^{3}\)</span>.
</p>
<div class="math">$$C_1 = \{\ (\ x([a, b]), y([b, c]), z([c, a])\ ) \mid x,y,z \in \mathbb Z_2\ \} $$</div>
<p>We can define a vector space, <span class="math">\(\mathscr C_1\)</span>, using this chain group in the same way we did for <span class="math">\(C_0\)</span>. I will use the polynomial form henceforth. Remember, the chain group and vector space have the same set; it's just that the vector space has two binary operations instead of one.</p>
<p>This is the full list of elements in the vector space:
</p>
<div class="math">$$\mathscr{C_1} = \begin{Bmatrix} \text{
{[a, b]}, {[b, c]}, {[c, a]}, {[a, b] + [b, c]}, $\\$
{[b, c] + [c, a]}, {[a, b] + [c, a]}, {[a, b] + [b, c] + [c, a]}, {0} } \end{Bmatrix} \\$$</div>
<p>Just for clarification about the boundary map, here is a diagram of it. This shows how the boundary operator maps each element in <span class="math">\(C_1\)</span> to an element in <span class="math">\(C_0\)</span>.
<img src="images/TDAimages/boundarymap1.png" /></p>
<p>Now we can start computing the first Betti number, <span class="math">\(b_0\)</span>.</p>
<p>Recall the definition of a Betti number is:</p>
<blockquote>
<p>The n-th Betti number, <span class="math">\(b_n = dim(H_n)\)</span>, where <span class="math">\(H_n\)</span> is the n-th homology group.</p>
</blockquote>
<p>And recall the definition of a homology group</p>
<blockquote>
<p>The <span class="math">\(n^{th}\)</span> Homology Group <span class="math">\(H_n\)</span> is defined as <span class="math">\(H_n\)</span> = Ker <span class="math">\(\partial_n \ / \ \text{Im } \partial_{n+1}\)</span>.</p>
</blockquote>
<p>Lastly, recall the definition of a kernel:</p>
<blockquote>
<p>The kernel of <span class="math">\(\partial(C_n)\)</span>, denoted <span class="math">\(\text{Ker}(\partial(C_n))\)</span> is the group of <span class="math">\(n\)</span>-chains <span class="math">\(Z_n \subseteq C_n\)</span> such that <span class="math">\(\partial(Z_n) = 0\)</span></p>
</blockquote>
<p>So first we need the kernel of the boundary of <span class="math">\(C_0\)</span>, Ker <span class="math">\(\partial(C_0)\)</span>. Remember the boundary map <span class="math">\(\partial\)</span> gives us a map from <span class="math">\(C_n \rightarrow C_{n-1}\)</span>.</p>
<p>In all cases, the boundary of a 0-chain is <span class="math">\(0\)</span>, thus Ker <span class="math">\(\partial(C_0)\)</span> is all of <span class="math">\(C_0\)</span>.</p>
<div class="math">$$\text{Ker}\ \partial(C_0) = C_0$$</div>
<p> This forms another group we will denote <span class="math">\(Z_0\)</span> (or <span class="math">\(Z_n\)</span> generally), the group of 0-cycles, which in general is a subgroup of <span class="math">\(C_0\)</span>, i.e. <span class="math">\(Z_n \leq C_n\)</span>. Here <span class="math">\(Z_0 = C_0\)</span>, so <span class="math">\(Z_0 \cong \mathbb Z_{2}^{3}\)</span>.</p>
<p>The other thing we need to find the homology group <span class="math">\(H_0\)</span> is <span class="math">\(\text{Im } \partial_{1}\)</span>. This forms a subgroup of <span class="math">\(Z_0\)</span> that we denote <span class="math">\(B_0\)</span> (or more generally <span class="math">\(B_n\)</span>), which is the group of boundaries of the (n+1)-chain. Hence, <span class="math">\(B_n \leq Z_n \leq C_n\)</span>. The image of <span class="math">\(\partial_1\)</span> is spanned by the boundaries of the three edges:</p>
<div class="math">$$\partial[a,b] = a + b \\
\partial[b,c] = b + c \\
\partial[c,a] = c + a \\
B_0 = \text{Im } \partial_1 = \langle a+b,\ b+c,\ c+a \rangle $$</div>
<p>
These three generators are not independent: over <span class="math">\(\mathbb Z_2\)</span> we have <span class="math">\((a+b) + (b+c) = a + c = c + a\)</span>, so the third is just the sum of the first two. Thus <span class="math">\(B_0 = \langle a+b,\ b+c \rangle \cong \mathbb Z_{2}^{2}\)</span>.</p>
<p>Now we compute the quotient group <span class="math">\(H_0 = Z_0\ /\ B_0\)</span>. Since <span class="math">\(a+b \in B_0\)</span>, in the quotient we have <span class="math">\(a+b = 0\)</span>, i.e. <span class="math">\(a = b\)</span>, and likewise <span class="math">\(b = c\)</span>: all three vertices fall into the same coset. There are only two cosets, the one containing the vertices and the one containing <span class="math">\(0\)</span>, so
</p>
<div class="math">$$ H_0 = Z_0\ /\ B_0 \cong \mathbb Z_{2}^{3}\ /\ \mathbb Z_{2}^{2} \cong \mathbb Z_2$$</div>
<p>(Note for later that whenever <span class="math">\(B_n = \{0\}\)</span>, the quotient is trivial: <span class="math">\(Z_n\ /\ B_n = Z_n\)</span>.)</p>
<p>The Betti number <span class="math">\(b_0\)</span> is the dimension of <span class="math">\(H_0\)</span>. What is the dimension of <span class="math">\(H_0\)</span>? Well, it has two elements, but the dimension is defined as the smallest set of generators for a group, and since this group is isomorphic to <span class="math">\(\mathbb Z_2\)</span>, it only has 1 generator. For <span class="math">\(\mathbb Z_2\)</span> the generator is <span class="math">\(1\)</span>, since the whole group can be formed by repeatedly applying the addition operation to <span class="math">\(1\)</span>, i.e. <span class="math">\(1+1=0,\ 1+1+1 = 1\)</span>, and now we have the full <span class="math">\(\mathbb Z_2\)</span>. </p>
<p>So <span class="math">\(b_0 = dim(H_0) = 1\)</span>, which is what we expected: this simplicial complex has 1 connected component.</p>
<p>Now let's calculate the Betti number <span class="math">\(b_1\)</span> in dimension 1. This time it will be a bit different because figuring out <span class="math">\(\text{Ker}\partial(C_1)\)</span> is going to be more involved. We're going to need to do some algebra.</p>
<p>So, we first want <span class="math">\(Z_1\)</span>, the group of 1-cycles. This is the set of 1-chains whose boundary is 0. Remember the 1-chain is <span class="math">\(\{ [a,b], [b,c], [c,a]\}\)</span> and forms the 1-chain group <span class="math">\(C_1\)</span> when applied over <span class="math">\(\mathbb Z_2\)</span>. We will set up an equation like this:</p>
<div class="math">$$
\begin{aligned}
\mathscr C_1 &= \lambda_0([a,b]) + \lambda_1([b,c]) + \lambda_2([c,a]) \text{ ... where $\lambda_n \in \mathbb Z_2$. $\\$ This is the general form of any element in the vector space $\mathscr C_1$} \\
\lambda_0([a,b]) + \lambda_1([b,c]) + \lambda_2([c,a]) &= 0 \text{ ... then take the boundary of that} \\
\partial(\lambda_0([a,b]) + \lambda_1([b,c]) + \lambda_2([c,a])) &= 0 \\
\lambda_0(a+b) + \lambda_1(b+c) + \lambda_2(c+a) &= 0 \\
\lambda_0{a} + \lambda_0{b} + \lambda_1{b} + \lambda_1{c} + \lambda_2{c} + \lambda_2{a} &= 0 \\
a(\lambda_0 + \lambda_2) + b(\lambda_0 + \lambda_1) + c(\lambda_1 + \lambda_2) &= 0 \text{ ...factor out the a,b,c} \\
\end{aligned}
$$</div>
<p>For this equation to be satisfied, all the coefficients <span class="math">\(\lambda_n\)</span> in each term need to sum to 0 to make <span class="math">\(a,b,c\)</span> go to <span class="math">\(0\)</span>. That is, if the whole expression goes to <span class="math">\(0\)</span>, then each term like <span class="math">\(a(\lambda_0 + \lambda_2)\)</span> must be <span class="math">\(0\)</span>, hence <span class="math">\((\lambda_0 + \lambda_2) = 0\)</span>, and likewise for the other terms. So now we basically have a system of linear equations:</p>
<div class="math">$$\lambda_0 + \lambda_2 = 0 \\
\lambda_0 + \lambda_1 = 0 \\
\lambda_1 + \lambda_2 = 0 \\
\text{...now solve...} \\
\lambda_0 = \lambda_2 \\
\lambda_0 = \lambda_1 \\
\lambda_1 = \lambda_2 \\
\lambda_0 = \lambda_1 = \lambda_2 \\
\text{So for the equation above to be satisfied, all the coefficients $\lambda_n$ must be equal. Let's just replace all the lambdas with a single symbol, phi, i.e...} \\
\lambda_0, \lambda_1, \lambda_2 = \phi$$</div>
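<p>This kind of kernel computation is exactly what linear algebra automates (a preview of the next post). If we write <span class="math">\(\partial_1\)</span> as a matrix over <span class="math">\(\mathbb Z_2\)</span> whose columns are the boundaries of <span class="math">\([a,b], [b,c], [c,a]\)</span>, then the dimension of its kernel (the nullity) should come out to 1, matching the single free parameter <span class="math">\(\phi\)</span>. A sketch with hand-rolled mod-2 row reduction:</p>

```python
import numpy as np

def rank_mod2(M):
    # Gaussian elimination over Z_2; returns the rank of M
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        # clear the column everywhere else (addition mod 2)
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

# Columns are the boundaries of [a,b], [b,c], [c,a]; rows are a, b, c
d1 = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]])
nullity = d1.shape[1] - rank_mod2(d1)
print(nullity)  # 1 -> a single generating cycle: [a,b] + [b,c] + [c,a]
```

<p>Rank 2 and nullity 1: the kernel is one-dimensional, so <span class="math">\(Z_1\)</span> is generated by a single element, just as the algebra above found.</p>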
<p>Now, let's go back to the general expression for the 1-chain vector space <span class="math">\(\mathscr C_1 = \lambda_0([a,b]) + \lambda_1([b,c]) + \lambda_2([c,a])\)</span>. When we take the boundary of that and set it to 0, we get <span class="math">\(\lambda_0 = \lambda_1 = \lambda_2\)</span>, and I proposed we just replace those with one symbol, <span class="math">\(\phi\)</span>. </p>
<p>Hence, the cycle group:
</p>
<div class="math">$$Z_1 \leq \mathscr C_1 = \phi([a,b]) + \phi([b,c]) + \phi([c,a]) \\
Z_1 = \phi([a,b] + [b,c] + [c,a]), \text{ ...remember $\phi$ is from $\mathbb Z_2$ so it is either 0 or 1.}
$$</div>
<p>So the cycle group only contains two elements and is hence isomorphic to <span class="math">\(\mathbb Z_2\)</span>.</p>
<blockquote>
<p><strong>NOTE</strong>: I will introduce new notation. If two mathematical structures (e.g. groups) <span class="math">\(G_1, G_2\)</span> are isomorphic, we denote it as <span class="math">\(G_1 \cong G_2\)</span></p>
</blockquote>
<p>If <span class="math">\(\phi = 0\)</span>, then we get <span class="math">\(\phi([a,b] + [b,c] + [c,a]) = 0([a,b] + [b,c] + [c,a]) = 0 \\\)</span>
, whereas if <span class="math">\(\phi = 1\)</span>, then <span class="math">\(\phi([a,b] + [b,c] + [c,a]) = 1([a,b] + [b,c] + [c,a]) = [a,b] + [b,c] + [c,a] \\\)</span>
So the full group is:
</p>
<div class="math">$$Z_1 = \begin{Bmatrix} [a,b] + [b,c] + [c,a] \\ 0 \end{Bmatrix}$$</div>
<p>The boundary group <span class="math">\(B_1 = \text{Im}\ \partial(C_2)\)</span>, but since this complex has no 2-simplices, <span class="math">\(B_1 = \{0\}\)</span>.</p>
<p>So once again we can compute the homology group:
</p>
<div class="math">$$H_1 = Z_1 / B_1 = \begin{Bmatrix} [a,b] + [b,c] + [c,a] \\ 0 \end{Bmatrix}$$</div>
<p>And Betti <span class="math">\(b_1 = dim(H_1) = 1\)</span> since we only have one generator in the group <span class="math">\(H_1\)</span>.</p>
<p>So that's it for that very simple simplicial complex. We'll move on to a bigger complex. This time I won't be as verbose and will use a lot of the simplifying notation and conventions that I've already defined or described.</p>
<p>Let's do the same but with a slightly more complicated simplicial complex that we've seen before:
</p>
<div class="math">$$S = \text{{[a], [b], [c], [d], [a, b], [b, c], [c, a], [c, d], [d, b], [a, b, c]}}$$</div>
<p> (Depicted below).
<img src="images/TDAimages/simplicialcomplex5b.svg" /></p>
<p>Notice we now have a 2-simplex [a, b, c], depicted as the filled in triangle.</p>
<p>This time we will use the full integer field <span class="math">\(\mathbb Z\)</span> for our coefficients, thus the resulting vector spaces will be infinite instead of finite spaces that we could list out. Since we're using <span class="math">\(\mathbb Z\)</span>, we must define what it means to be a "negative" simplex, e.g. what does <span class="math">\(-[c,a]\)</span> <em>mean</em>? Well we discussed this previously. Basically we define two ways a simplex can be oriented and opposite orientation to the original definition will be assigned the "negative" value of a simplex.</p>
<p>So <span class="math">\([a,c] = -[c,a]\)</span>. But what about <span class="math">\([a,b,c]\)</span>? There are more than two ways to permute a 3 element list, but there only two orientations. <br />
If you look at the oriented simplex from before:
<img src="images/TDAimages/trianglesimplex.svg" />
There are only two ways you can "go around" the loop. Either clockwise or counterclockwise.<br />
<span class="math">\([a,b,c]\)</span> is clockwise (and we'll call it positive).<br />
<span class="math">\([c,a,b]\)</span> is also clockwise (so <span class="math">\([c,a,b] = [a,b,c]\)</span>)<br />
<span class="math">\([a,c,b]\)</span> is a counterclockwise, as well as <span class="math">\([b,c,a]\)</span>, so <span class="math">\([a,b,c] = [c,a,b] = -[a,c,b] = -[b,c,a]\)</span>.<br /></p>
<p>Let's start by listing our chain groups.
</p>
<div class="math">$$ C_0 = \langle{a,b,c,d}\rangle \cong \mathbb Z^4 \\
C_1 = \langle{[a, b], [b, c], [c, a], [c, d], [d, b]}\rangle \cong \mathbb Z^5 \\
C_2 = \langle{[a, b, c]}\rangle \cong \mathbb Z \\ $$</div>
<p>Recall the angle brackets mean <em>span</em>, i.e. the set of all linear combinations of the elements between the angle brackets. This is obviously much more succinct than how we were building the groups in our last example. And note how each group is isomorphic to the vector space <span class="math">\(\mathbb Z^k\)</span> where <span class="math">\(k\)</span> is the number of simplices in the n-chain.<br /></p>
<p>We can thus describe our <em>chain complex</em> as:</p>
<div class="math">$$C_2 \stackrel{\partial_1}\rightarrow C_1 \stackrel{\partial_0}\rightarrow C_0$$</div>
<p>We know, since we can easily visualize the simplicial complex, that it has one connected component and one 1-cycle (one 1-dimensional hole). Hence, Betti <span class="math">\(b_0 = 1, b_1 = 1\)</span>. But we need to calculate that for ourselves.</p>
<p>Let's start from the higher-dimensional chain group, the 2-chain group.</p>
<p>Remember, <span class="math">\(Z_n = \text{Ker}\partial(C_n)\)</span> the group of n-cycles, which is a subgroup of <span class="math">\(C_n\)</span>. And <span class="math">\(B_n = \text{Im}\partial(C_{n+1})\)</span> is the group of n-boundaries, which is a subgroup of the n-cycles. Hence <span class="math">\(B_n \leq Z_n \leq C_n\)</span>. Also recall, the homology group <span class="math">\(H_n = Z_n\ /\ B_n\)</span> and the n-th Betti number is the dimension of the n-th homology group.</p>
<p>To find <span class="math">\(Z_n\)</span>, we have to setup our expression for a general element in <span class="math">\(C_n\)</span>.
</p>
<div class="math">$$
\begin{aligned}
C_2 &= \lambda_0{[a,b,c]}, \lambda_0 \in \mathbb{Z} \\
Z_2 &= \text{Ker}\partial{(C_2)} \\
\partial{(C_2)} &= \lambda_0{([b,c])} - \lambda_0{([a,c])} + \lambda_0{([a,b])} \text{ ...set it equal to 0 to get Kernel} \\
\lambda_0{([b,c])} - \lambda_0{([a,c])} + \lambda_0{([a,b])} &= 0 \\
\lambda_0{([b,c] - [a,c] + [a,b])} &= 0 \\
\lambda_0 &= 0 \text{ ... there is only a single solution where $\lambda_0 = 0$, then $0=0$. } \\
\lambda_0{[a,b,c]} &= 0, \lambda_0 = 0 \text{ ... so nothing in $C_2$ goes to 0, thus it's only {0} in the Kernel} \\
... \\
\text{Ker}\partial{(C_2)} &= \{0\} \\
\end{aligned}
$$</div>
<p>Since there are no 3-simplices or higher, <span class="math">\(B_2 = \{0\}\)</span>. Thus Betti <span class="math">\(b_2 = dim(\{0\} / \{0\}) = 0\)</span>. This is what we expect; there are no 2-dimensional holes in the simplicial complex.</p>
<p>Let's do the same for <span class="math">\(C_1\)</span>.</p>
<div class="math">$$
\begin{aligned}
C_1 &= \lambda_0[a, b] + \lambda_1[b, c] + \lambda_2[c, a] + \lambda_3[c, d] + \lambda_4[d, b], \lambda_n \in \mathbb{Z} \\
Z_1 &= \text{Ker}\partial{(C_1)} \\
\partial{(C_1)} &= \lambda_0{(a - b)} + \lambda_1{(b - c)} + \lambda_2{(c - a)} + \lambda_3{(c - d)} + \lambda_4{(d - b)} \\
\text{ ...set it equal to 0 to get Kernel} \\
\lambda_0{(a - b)} + \lambda_1{(b - c)} + \lambda_2{(c - a)} + \lambda_3{(c - d)} + \lambda_4{(d - b)} &= 0 \\
\lambda_0a - \lambda_0b + \lambda_1b - \lambda_1c + \lambda_2c - \lambda_2a + \lambda_3c - \lambda_3d + \lambda_4d - \lambda_4b &= 0 \text { ...factor out the a,b,c,d}\\
a(\lambda_0 - \lambda_2) + b(\lambda_1 - \lambda_0 - \lambda_4) + c(\lambda_2 - \lambda_1 + \lambda_3) + d(\lambda_4 - \lambda_3) &= 0 \\
\text{Now we can setup a system of linear equations...} \\
\lambda_0 - \lambda_2 &= 0 \\
\lambda_1 - \lambda_0 - \lambda_4 &= 0 \\
\lambda_2 - \lambda_1 + \lambda_3 &= 0 \\
\lambda_4 - \lambda_3 &= 0 \\
\text{The solutions are } \lambda_2 = \lambda_0,\ \lambda_4 = \lambda_3,\ \lambda_1 = \lambda_0 + \lambda_3. \text{ Plug them back into the general $C_1$ expression...} \\
\lambda_0[a,b] + (\lambda_0 + \lambda_3)[b,c] + \lambda_0[c,a] + \lambda_3[c,d] + \lambda_3[d,b] &= \text{Ker}\partial{(C_1)} \\
\text{Ker}\partial{(C_1)} &= \lambda_0([a,b] + [b,c] + [c,a]) + \lambda_3([b,c] + [c,d] + [d,b]) \\
Z_1 = \text{Ker}\partial{(C_1)} &\cong \mathbb Z^2
\end{aligned}
$$</div>
<p>Now to get the boundaries <span class="math">\(B_1 = \text{Im}\ \partial(C_2)\)</span>.
</p>
<div class="math">$$
\begin{aligned}
\partial(C_2) &= \lambda_0{([b,c])} - \lambda_0{([a,c])} + \lambda_0{([a,b])} \text {... remember $-[a,c] = [c,a]$ ...} \\
\partial(C_2) &= \lambda_0{([b,c] + [c,a] + [a,b])} \\
B_1 = Im\partial(C_2) &= \{\lambda_0{([b,c] + [c,a] + [a,b])}\}, \lambda_0 \in \mathbb Z \\
B_1 &\cong \mathbb Z \\
H_1 = Z_1\ /\ B_1 &= \{\lambda_0([a,b] + [b,c] + [c,a]) + \lambda_3([b,c] + [c,d] + [d,b])\}\ /\ \{\lambda_0{([b,c] + [c,a] + [a,b])}\} \\
H_1 &= \{\lambda_3([b,c] + [c,d] + [d,b])\} \cong \mathbb Z
\end{aligned}
$$</div>
<p>Another way to more easily take the quotient group <span class="math">\(H_1 = Z_1\ /\ B_1\)</span> is to just pay attention to what <span class="math">\(Z_n, B_n\)</span> are isomorphic to in terms of <span class="math">\(\mathbb Z^n\)</span>. In this case:
</p>
<div class="math">$$ Z_1 \cong \mathbb Z^2 \\
B_1 \cong \mathbb Z^1 \\
H_1 = \mathbb Z^2\ /\ \mathbb Z \cong \mathbb Z$$</div>
<p>So since <span class="math">\(H_1 \cong \mathbb Z\)</span>, the Betti number for <span class="math">\(H_1\)</span> is 1 because the dimension of <span class="math">\(\mathbb Z\)</span> is 1 (it only has one generator or basis).</p>
<p>I think you get the point by now, so I'm not going to go through all the details of calculating Betti <span class="math">\(b_0\)</span>; it should be 1 since there is only one connected component.</p>
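<p>As a numerical check on both results, we can compute the ranks of the boundary matrices (this anticipates the linear-algebra machinery of the next post). Since <span class="math">\(b_n = \dim Z_n - \dim B_n\)</span>, the nullity of <span class="math">\(\partial_n\)</span> gives <span class="math">\(\dim Z_n\)</span> and the rank of <span class="math">\(\partial_{n+1}\)</span> gives <span class="math">\(\dim B_n\)</span>, so a few matrix ranks suffice. A sketch for the complex <span class="math">\(S\)</span> above:</p>

```python
import numpy as np

# Boundary matrix of the 1-chains: rows a,b,c,d; columns [a,b],[b,c],[c,a],
# [c,d],[d,b], using the convention boundary([x,y]) = x - y from the derivation
d1 = np.array([[ 1,  0, -1,  0,  0],
               [-1,  1,  0,  0, -1],
               [ 0, -1,  1,  1,  0],
               [ 0,  0,  0, -1,  1]])
# Boundary matrix of the 2-chain: the single column is
# boundary([a,b,c]) = [a,b] + [b,c] + [c,a] in the edge basis
d2 = np.array([[1], [1], [1], [0], [0]])

r1 = np.linalg.matrix_rank(d1)
r2 = np.linalg.matrix_rank(d2)
b0 = 4 - r1          # dim Z_0 (all of C_0, 4 vertices) minus dim B_0
b1 = (5 - r1) - r2   # nullity of d1 (dim Z_1) minus rank of d2 (dim B_1)
print(b0, b1)  # 1 1
```

<p>Both Betti numbers come out to 1, agreeing with the hand calculation: one connected component and one 1-dimensional hole.</p>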
<h5>Next time...</h5>
<p>We've learned how to calculate homology groups and Betti numbers of simple simplicial complexes by hand. But we'll need to develop some new tools so that computer algorithms can handle these calculations for real, and generally much larger, simplicial complexes. Next time we'll see how linear algebra gives us an efficient means of doing this.</p>
<h4>References (Websites):</h4>
<ol>
<li>http://dyinglovegrape.com/math/topology_data_1.php</li>
<li>http://www.math.uiuc.edu/~r-ash/Algebra/Chapter4.pdf</li>
<li>https://en.wikipedia.org/wiki/Group_(mathematics)</li>
<li>https://jeremykun.com/2013/04/03/homology-theory-a-primer/</li>
<li>http://suess.sdf-eu.org/website/lang/de/algtop/notes4.pdf</li>
<li>http://www.mit.edu/~evanchen/napkin.html</li>
</ol>
<h4>References (Academic Publications):</h4>
<ol>
<li>
<p>Basher, M. (2012). On the Folding of Finite Topological Space. International Mathematical Forum, 7(15), 745–752. Retrieved from http://www.m-hikari.com/imf/imf-2012/13-16-2012/basherIMF13-16-2012.pdf</p>
</li>
<li>
<p>Day, M. (2012). Notes on Cayley Graphs for Math 5123 Cayley graphs, 1–6.</p>
</li>
<li>
<p>Doktorova, M. (2012). CONSTRUCTING SIMPLICIAL COMPLEXES OVER by, (June).</p>
</li>
<li>
<p>Edelsbrunner, H. (2006). IV.1 Homology. Computational Topology, 81–87. Retrieved from http://www.cs.duke.edu/courses/fall06/cps296.1/</p>
</li>
<li>
<p>Erickson, J. (1908). Homology. Computational Topology, 1–11.</p>
</li>
<li>
<p>Evan Chen. (2016). An Infinitely Large Napkin.</p>
</li>
<li>
<p>Grigor’yan, A., Muranov, Y. V., & Yau, S. T. (2014). Graphs associated with simplicial complexes. Homology, Homotopy and Applications, 16(1), 295–311. http://doi.org/10.4310/HHA.2014.v16.n1.a16</p>
</li>
<li>
<p>Kaczynski, T., Mischaikow, K., & Mrozek, M. (2003). Computing homology. Homology, Homotopy and Applications, 5(2), 233–256. http://doi.org/10.4310/HHA.2003.v5.n2.a8</p>
</li>
<li>
<p>Kerber, M. (2016). Persistent Homology – State of the art and challenges 1 Motivation for multi-scale topology. Internat. Math. Nachrichten Nr, 231(231), 15–33.</p>
</li>
<li>
<p>Khoury, M. (n.d.). Lecture 6 : Introduction to Simplicial Homology Topics in Computational Topology : An Algorithmic View, 1–6.</p>
</li>
<li>
<p>Kraft, R. (2016). Illustrations of Data Analysis Using the Mapper Algorithm and Persistent Homology.</p>
</li>
<li>
<p>Lakshmivarahan, S., & Sivakumar, L. (2016). Cayley Graphs, (1), 1–9.</p>
</li>
<li>
<p>Liu, X., Xie, Z., & Yi, D. (2012). A fast algorithm for constructing topological structure in large data. Homology, Homotopy and Applications, 14(1), 221–238. http://doi.org/10.4310/HHA.2012.v14.n1.a11</p>
</li>
<li>
<p>Naik, V. (2006). Group theory : a first journey, 1–21.</p>
</li>
<li>
<p>Otter, N., Porter, M. A., Tillmann, U., Grindrod, P., & Harrington, H. A. (2015). A roadmap for the computation of persistent homology. Preprint ArXiv, (June), 17. Retrieved from http://arxiv.org/abs/1506.08903</p>
</li>
<li>
<p>Semester, A. (2017). § 4 . Simplicial Complexes and Simplicial Homology, 1–13.</p>
</li>
<li>
<p>Singh, G. (2007). Algorithms for Topological Analysis of Data, (November).</p>
</li>
<li>
<p>Zomorodian, A. (2009). Computational Topology Notes. Advances in Discrete and Computational Geometry, 2, 109–143. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.7483</p>
</li>
<li>
<p>Zomorodian, A. (2010). Fast construction of the Vietoris-Rips complex. Computers and Graphics (Pergamon), 34(3), 263–271. http://doi.org/10.1016/j.cag.2010.03.007</p>
</li>
<li>
<p>Symmetry and Group Theory 1. (2016), 1–18. http://doi.org/10.1016/B978-0-444-53786-7.00026-5</p>
</li>
</ol>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Persistent Homology (Part 2)2017-02-22T00:20:00-06:002017-02-22T00:20:00-06:00Brandon Browntag:outlace.com,2017-02-22:/TDApart2.html<p>In part 2 we implement an algorithm to construct a simplicial complex from data and continue to build up the fundamental mathematical knowledge for TDA</p><h2>Topological Data Analysis - Part 2 - Persistent Homology</h2>
<p>This is Part 2 in a series on topological data analysis.
See <a href="TDApart1.html">Part 1</a> | <a href="TDApart3.html">Part 3</a> | <a href="TDApart4.html">Part 4</a> | <a href="TDApart5.html">Part 5</a></p>
<hr>
<p>The time has come for us to finally start coding. Generally my posts are very practical and involve coding right away, but topological data analysis can't be simplified very much; one really must understand the underlying mathematics to make any progress.</p>
<p>We're going to learn how to build a VR complex from simulated data that we sample from a circle (naturally) embedded in <span class="math">\(\mathbb R^2\)</span>.</p>
<p>So we're going to randomly sample points from this shape and pretend it's our raw point cloud data. Many real data are generated by cyclical processes, so it's not an unrealistic exercise. Using our point cloud data, we will build a Vietoris-Rips simplicial complex as described (in math terms) above. Then we'll have to develop some more mathematics to determine the homology groups of the complex.</p>
<p>Recall the parametric form of generating the point set for a circle is as follows:
<br />
<span class="math">\(x=a+r\cos(\theta),\)</span> <br />
<span class="math">\(y=b+r\sin(\theta)\)</span> <br />
where <span class="math">\((a,b)\)</span> is the center point of the circle, <span class="math">\(\theta\)</span> is a parameter from <span class="math">\(0 \text{ to } 2\pi\)</span>, and <span class="math">\(r\)</span> is the radius.</p>
<p>The following code will generate the discrete points of a sampled circle and graph it.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="n">n</span> <span class="o">=</span> <span class="mi">30</span> <span class="c1">#number of points to generate</span>
<span class="c1">#generate space of parameter</span>
<span class="n">theta</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mf">2.0</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">pi</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">r</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">5.0</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">b</span> <span class="o">+</span> <span class="n">r</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span>
<span class="c1">#code to plot the circle for visualization</span>
<span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_3_0.png"></p>
<p>Okay, let's stochastically sample from this (somewhat) perfect circle by adding some jitter to the points.</p>
<div class="highlight"><pre><span></span><span class="n">x2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.75</span><span class="p">,</span><span class="mf">0.75</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">x</span> <span class="c1">#add some "jitteriness" to the points</span>
<span class="n">y2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">0.75</span><span class="p">,</span><span class="mf">0.75</span><span class="p">,</span><span class="n">n</span><span class="p">)</span> <span class="o">+</span> <span class="n">y</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">ax</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_5_0.png"></p>
<p>As you can tell, the generated points look "circular": there is a clear loop with a hole, and we want our simplicial complex to capture that property.</p>
<p>Let's break down the construction of the VR complex into digestible steps: <br /></p>
<ol>
<li>Define a distance function <span class="math">\(d(a,b) = \sqrt{(a_1-b_1)^2+(a_2-b_2)^2}\)</span> (the Euclidean distance metric)</li>
<li>Establish the <span class="math">\(\epsilon\)</span> parameter for constructing a VR complex</li>
<li>Create a collection (a Python <em>list</em>) of the point cloud data, which will be the 0-simplices of the complex.</li>
<li>Scan through each pair of points and calculate the distance between them. If the pairwise distance is <span class="math">\(\leq \epsilon\)</span>, we add an edge between those points. This will generate a 1-complex (a graph).</li>
<li>Once we've calculated all pairwise distances and have an (undirected) graph, we can iterate through each vertex, identify its neighbors (points to which it has an edge), and attempt to build higher-dimensional simplices incrementally (e.g. from our 1-complex (graph), add all 2-simplices, then all 3-simplices, etc.)</li>
</ol>
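<p>Step 4's pairwise scan can also be written compactly with NumPy broadcasting. A minimal sketch (the <code>neighborhood_edges</code> name is mine, not part of this post's code):</p>

```python
import numpy as np

def neighborhood_edges(points, eps):
    # Pairwise Euclidean distances via broadcasting: D[i, j] = ||p_i - p_j||
    diffs = points[:, None, :] - points[None, :, :]
    D = np.sqrt((diffs ** 2).sum(axis=-1))
    # Keep each unordered pair {i, j} once (i < j) with d(i, j) <= eps
    return [{i, j} for i in range(len(points))
            for j in range(i + 1, len(points)) if D[i, j] <= eps]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
result = neighborhood_edges(pts, 1.5)
print(result)  # [{0, 1}, {0, 2}, {1, 2}] -- the far point 3 stays isolated
```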
<p>There are many algorithms for creating a simplicial complex from data (and there are many other types of simplicial complexes besides the Vietoris-Rips complex). Unfortunately, to my knowledge, there are no polynomial-time algorithms for creating a full (not downsampled) simplicial complex from point data. So no matter what, once we start dealing with really big data sets, building the complex will become computationally expensive (even prohibitive). A lot more work needs to be done in this area.</p>
<p>We will be using the algorithm described in "Fast Construction of the Vietoris-Rips Complex" by Afra Zomorodian. This algorithm operates in two major steps.
1. Construct the <strong>neighborhood graph</strong> of the point set data. The neighborhood graph is an undirected weighted graph <span class="math">\((G,w)\)</span> where <span class="math">\(G = (V,E), V\)</span> is the node/vertex set and <span class="math">\(E\)</span> is the edge set, and <span class="math">\(w : E \rightarrow \mathbb R\)</span> (<span class="math">\(w\)</span> is a function mapping each edge in <span class="math">\(E\)</span> to a real number, its weight). Recall our edges are created by connecting points that are within some defined distance of each other (given by a parameter <span class="math">\(\epsilon\)</span>). Specifically, </p>
<div class="math">$$E_{\epsilon} = \{\{u,v\} \mid d(u,v) \leq \epsilon, u \neq v \in V\}$$</div>
<p> where <span class="math">\(d(u,v)\)</span> is the metric/distance function for two points <span class="math">\(u,v \in V\)</span>. And the weight function simply assigns each edge a weight which is equal to the distance between the pair of points in the edge. That is, <span class="math">\(w(\{u,v\}) = d(u,v), \forall\{u,v\} \in E_{\epsilon}(V)\)</span></p>
<ol start="2">
<li>Perform a <strong>Vietoris-Rips expansion</strong> on the neighborhood graph from step 1. Given a neighborhood graph <span class="math">\((G,w)\)</span>, the weight-filtered (we'll explain this soon) Vietoris-Rips complex <span class="math">\((R(G), w)\)</span> (where <span class="math">\(R(G)\)</span> is the VR complex) is given by:
<div class="math">$$R(G) = V \cup E \cup \{ \sigma \mid \left ({\sigma}\above 0pt {2} \right ) \subseteq E \} , $$</div>
For <span class="math">\(\sigma \in R(G) \\\)</span>,
<div class="math">$$ w(\sigma) =
\left\{
\begin{array}{ll}
0, & \sigma = \{v\},v \in V, \\
w(\{u,v\}), & \sigma = \{u,v\} \in E \\
\displaystyle \max_{\tau \,\subset\, \sigma} w(\tau), & \text{otherwise}.
\end{array}
\right\}
$$</div>
</li>
</ol>
<p>Okay what does that mean? Well, in this simple example, we want to get from our neighborhood graph (left) to our Vietoris-Rips complex (right):
<img src="images/TDAimages/VRconstruct1.svg" /></p>
<p>So the math above is saying that our Vietoris-Rips complex is the set that is the union of all the vertices and edges in our neighborhood graph (which takes us to a 1-complex), and the union of all simplices <span class="math">\(\sigma\)</span> (remember <span class="math">\(\sigma\)</span> is just a set of vertices) where each possible combination of 2 vertices in <span class="math">\(\sigma\)</span> is in <span class="math">\(E\)</span> (hence the <span class="math">\(\left ({\sigma}\above 0pt {2} \right ) \subseteq E\)</span> part). </p>
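<p>The <span class="math">\(\left ({\sigma}\above 0pt {2} \right ) \subseteq E\)</span> condition is easy to check directly. A small sketch (the <code>is_vr_simplex</code> name is mine):</p>

```python
import itertools

def is_vr_simplex(sigma, edges):
    """True if every 2-element subset of sigma is an edge of the
    neighborhood graph, i.e. (sigma choose 2) is a subset of E."""
    return all({u, v} in edges for u, v in itertools.combinations(sigma, 2))

E = [{0, 1}, {0, 2}, {1, 2}, {2, 3}]
print(is_vr_simplex({0, 1, 2}, E))  # True:  all three pairs are edges
print(is_vr_simplex({1, 2, 3}, E))  # False: {1, 3} is not an edge
```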
<p>The next part defines the weight function for each simplex in our VR complex, from individual 0-simplices (vertices) to the highest-dimensional simplex. If the simplex is a 0-simplex (just a vertex), then its weight is 0. If the simplex is a 1-simplex (an edge), then the weight is the distance (defined by our distance function) between the two vertices in the edge. If the simplex is higher-dimensional, like a 2-simplex (triangle), then the weight is the weight of the longest edge in that simplex.</p>
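<p>That weight rule can be stated as a small helper function. A sketch (the <code>simplex_weight</code> name is mine, not from this post's code):</p>

```python
import itertools
import numpy as np

def simplex_weight(simplex, points):
    """Filtration weight of a simplex: 0 for a vertex, otherwise the
    length of its longest edge (max pairwise distance among vertices)."""
    verts = sorted(simplex)
    if len(verts) == 1:
        return 0.0
    return max(np.linalg.norm(points[u] - points[v])
               for u, v in itertools.combinations(verts, 2))

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
print(simplex_weight({0}, pts))        # 0.0 for a lone vertex
print(simplex_weight({0, 1, 2}, pts))  # 5.0: the longest edge is {1, 2}
```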
<p>Before we get to computing the VR complex for our "circle" data from earlier, let's just do a sanity check with the simple simplex shown above. We'll embed the vertices in <span class="math">\(\mathbb R^2\)</span> and then attempt to build the neighborhood graph first. </p>
<div class="highlight"><pre><span></span><span class="n">raw_data</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">0</span><span class="p">,</span><span class="mi">2</span><span class="p">],[</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">],[</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mf">1.5</span><span class="p">,</span><span class="o">-</span><span class="mf">3.0</span><span class="p">]])</span> <span class="c1">#embed 4 vertices in R^2</span>
<span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">([</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="o">-</span><span class="mi">4</span><span class="p">,</span><span class="mi">3</span><span class="p">])</span>
<span class="n">plt</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">raw_data</span><span class="p">[:,</span><span class="mi">0</span><span class="p">],</span><span class="n">raw_data</span><span class="p">[:,</span><span class="mi">1</span><span class="p">])</span> <span class="c1">#plotting just for clarity</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">txt</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">raw_data</span><span class="p">):</span>
<span class="n">plt</span><span class="o">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="p">(</span><span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="mf">0.05</span><span class="p">,</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">1</span><span class="p">]))</span> <span class="c1">#add labels</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_8_0.png"></p>
<p>We'll be representing each vertex in our simplicial complex by the index number in the original data array. For example, the point [0,2] shows up first in our data array, so we reference it in our simplicial complex as simply point [0].</p>
<div class="highlight"><pre><span></span><span class="c1">#Build neighborhood graph</span>
<span class="n">nodes</span> <span class="o">=</span> <span class="p">[</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">])]</span> <span class="c1">#initialize node set, reference indices from original data array</span>
<span class="n">edges</span> <span class="o">=</span> <span class="p">[]</span> <span class="c1">#initialize empty edge array</span>
<span class="n">weights</span> <span class="o">=</span> <span class="p">[]</span> <span class="c1">#initialize weight array, stores the weight (which in this case is the distance) for each edge</span>
<span class="n">eps</span> <span class="o">=</span> <span class="mf">3.1</span> <span class="c1">#epsilon distance parameter</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span> <span class="c1">#iterate through each data point</span>
<span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">raw_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">i</span><span class="p">):</span> <span class="c1">#inner loop to calculate pairwise point distances</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">]</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">]</span> <span class="c1">#each simplex is a set (no order), hence [0,1] = [1,0]; so only store one</span>
<span class="k">if</span> <span class="p">(</span><span class="n">i</span> <span class="o">!=</span> <span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">):</span>
<span class="n">dist</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">linalg</span><span class="o">.</span><span class="n">norm</span><span class="p">(</span><span class="n">a</span> <span class="o">-</span> <span class="n">b</span><span class="p">)</span> <span class="c1">#euclidian distance metric</span>
<span class="k">if</span> <span class="n">dist</span> <span class="o"><=</span> <span class="n">eps</span><span class="p">:</span>
<span class="n">edges</span><span class="o">.</span><span class="n">append</span><span class="p">({</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="o">+</span><span class="n">i</span><span class="p">})</span> <span class="c1">#add edge</span>
<span class="n">weights</span><span class="o">.</span><span class="n">append</span><span class="p">([</span><span class="nb">len</span><span class="p">(</span><span class="n">edges</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span><span class="n">dist</span><span class="p">])</span> <span class="c1">#store index and weight</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Nodes: "</span> <span class="p">,</span> <span class="n">nodes</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Edges: "</span> <span class="p">,</span> <span class="n">edges</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Weights: "</span><span class="p">,</span> <span class="n">weights</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>Nodes: [0, 1, 2, 3]
Edges: [{0, 1}, {0, 2}, {1, 2}, {2, 3}]
Weights: [[0, 2.0], [1, 2.2360679774997898], [2, 2.2360679774997898], [3, 3.0413812651491097]]
</pre></div>
<p>Perfect. Now we have a node set, edge set, and weight set that together constitute our neighborhood graph <span class="math">\((G,w)\)</span>. Our next task is to use the neighborhood graph to start building up the higher-dimensional simplices. In this case we'll only have one additional 2-simplex (triangle). We'll need to set up some basic functions.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">lower_nbrs</span><span class="p">(</span><span class="n">nodeSet</span><span class="p">,</span> <span class="n">edgeSet</span><span class="p">,</span> <span class="n">node</span><span class="p">):</span>
<span class="k">return</span> <span class="p">{</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">nodeSet</span> <span class="k">if</span> <span class="p">{</span><span class="n">x</span><span class="p">,</span><span class="n">node</span><span class="p">}</span> <span class="ow">in</span> <span class="n">edgeSet</span> <span class="ow">and</span> <span class="n">node</span> <span class="o">></span> <span class="n">x</span><span class="p">}</span>
<span class="k">def</span> <span class="nf">rips</span><span class="p">(</span><span class="n">nodes</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="n">k</span><span class="p">):</span>
<span class="n">VRcomplex</span> <span class="o">=</span> <span class="p">[{</span><span class="n">n</span><span class="p">}</span> <span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">nodes</span><span class="p">]</span>
<span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">edges</span><span class="p">:</span> <span class="c1">#add 1-simplices (edges)</span>
<span class="n">VRcomplex</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">e</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">k</span><span class="p">):</span>
<span class="k">for</span> <span class="n">simplex</span> <span class="ow">in</span> <span class="p">[</span><span class="n">x</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">VRcomplex</span> <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">==</span><span class="n">i</span><span class="o">+</span><span class="mi">2</span><span class="p">]:</span> <span class="c1">#skip 0-simplices</span>
<span class="c1">#for each u in simplex</span>
<span class="n">nbrs</span> <span class="o">=</span> <span class="nb">set</span><span class="o">.</span><span class="n">intersection</span><span class="p">(</span><span class="o">*</span><span class="p">[</span><span class="n">lower_nbrs</span><span class="p">(</span><span class="n">nodes</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="n">z</span><span class="p">)</span> <span class="k">for</span> <span class="n">z</span> <span class="ow">in</span> <span class="n">simplex</span><span class="p">])</span>
<span class="k">for</span> <span class="n">nbr</span> <span class="ow">in</span> <span class="n">nbrs</span><span class="p">:</span>
<span class="n">VRcomplex</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="nb">set</span><span class="o">.</span><span class="n">union</span><span class="p">(</span><span class="n">simplex</span><span class="p">,{</span><span class="n">nbr</span><span class="p">}))</span>
<span class="k">return</span> <span class="n">VRcomplex</span>
</pre></div>
<p>Great, let's try it out and see if it works. We're explicitly telling it to find all simplices up to dimension 3.</p>
<div class="highlight"><pre><span></span><span class="n">theComplex</span> <span class="o">=</span> <span class="n">rips</span><span class="p">(</span><span class="n">nodes</span><span class="p">,</span> <span class="n">edges</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span>
<span class="n">theComplex</span>
</pre></div>
<div class="highlight"><pre><span></span>[{0}, {1}, {2}, {3}, {0, 1}, {0, 2}, {1, 2}, {2, 3}, {0, 1, 2}]
</pre></div>
<p>Awesome, looks perfect.</p>
<p>Now we want to see what it looks like. I've written some code that will graph the simplicial complex based on the output of our Vietoris-Rips algorithm above. This isn't crucial to understanding TDA (most of the time we don't try to visualize simplicial complexes, as they are too high-dimensional), so I won't attempt to explain the graphing code.</p>
<div class="highlight"><pre><span></span><span class="n">plt</span><span class="o">.</span><span class="n">clf</span><span class="p">()</span>
<span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">([</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="o">-</span><span class="mi">4</span><span class="p">,</span><span class="mi">3</span><span class="p">])</span>
<span class="n">plt</span><span class="o">.</span><span class="n">scatter</span><span class="p">(</span><span class="n">raw_data</span><span class="p">[:,</span><span class="mi">0</span><span class="p">],</span><span class="n">raw_data</span><span class="p">[:,</span><span class="mi">1</span><span class="p">])</span> <span class="c1">#plotting just for clarity</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">txt</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">raw_data</span><span class="p">):</span>
<span class="n">plt</span><span class="o">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="p">(</span><span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="mf">0.05</span><span class="p">,</span> <span class="n">raw_data</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">1</span><span class="p">]))</span> <span class="c1">#add labels</span>
<span class="c1">#add lines for edges</span>
<span class="k">for</span> <span class="n">edge</span> <span class="ow">in</span> <span class="p">[</span><span class="n">e</span> <span class="k">for</span> <span class="n">e</span> <span class="ow">in</span> <span class="n">theComplex</span> <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">e</span><span class="p">)</span><span class="o">==</span><span class="mi">2</span><span class="p">]:</span>
<span class="n">pt1</span><span class="p">,</span><span class="n">pt2</span> <span class="o">=</span> <span class="p">[</span><span class="n">raw_data</span><span class="p">[</span><span class="n">pt</span><span class="p">]</span> <span class="k">for</span> <span class="n">pt</span> <span class="ow">in</span> <span class="p">[</span><span class="n">n</span> <span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">edge</span><span class="p">]]</span>
<span class="nb">print</span><span class="p">(</span><span class="n">pt1</span><span class="p">,</span><span class="n">pt2</span><span class="p">)</span>
<span class="n">line</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">Polygon</span><span class="p">([</span><span class="n">pt1</span><span class="p">,</span><span class="n">pt2</span><span class="p">],</span> <span class="n">closed</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">fill</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">edgecolor</span><span class="o">=</span><span class="s1">'r'</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">gca</span><span class="p">()</span><span class="o">.</span><span class="n">add_line</span><span class="p">(</span><span class="n">line</span><span class="p">)</span>
<span class="c1">#add triangles</span>
<span class="k">for</span> <span class="n">triangle</span> <span class="ow">in</span> <span class="p">[</span><span class="n">t</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="n">theComplex</span> <span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">t</span><span class="p">)</span><span class="o">==</span><span class="mi">3</span><span class="p">]:</span>
<span class="n">pt1</span><span class="p">,</span><span class="n">pt2</span><span class="p">,</span><span class="n">pt3</span> <span class="o">=</span> <span class="p">[</span><span class="n">raw_data</span><span class="p">[</span><span class="n">pt</span><span class="p">]</span> <span class="k">for</span> <span class="n">pt</span> <span class="ow">in</span> <span class="p">[</span><span class="n">n</span> <span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">triangle</span><span class="p">]]</span>
<span class="n">line</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">Polygon</span><span class="p">([</span><span class="n">pt1</span><span class="p">,</span><span class="n">pt2</span><span class="p">,</span><span class="n">pt3</span><span class="p">],</span> <span class="n">closed</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s2">"blue"</span><span class="p">,</span><span class="n">alpha</span><span class="o">=</span><span class="mf">0.3</span><span class="p">,</span> <span class="n">fill</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">edgecolor</span><span class="o">=</span><span class="kc">None</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">gca</span><span class="p">()</span><span class="o">.</span><span class="n">add_line</span><span class="p">(</span><span class="n">line</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="k">[ 0. 2.] [ 2. 2.]</span>
<span class="k">[ 0. 2.] [ 1. 0.]</span>
<span class="k">[ 2. 2.] [ 1. 0.]</span>
<span class="k">[ 1. 0.] [ 1.5 -3. ]</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_16_1.png"></p>
<p>Now we have a nice little depiction of our very simple VR complex. Next, we need to learn about <strong>simplicial homology</strong>, which is the study of topological invariants of simplicial complexes. In particular, we're interested in being able to mathematically identify n-dimensional connected components, holes and loops. To aid in this effort, I've repackaged the code we've used above as a separate file so we can just import it and use the functions conveniently on our data. You can download the latest code here: <a href="https://github.com/outlace/OpenTDA/blob/master/SimplicialComplex.py">https://github.com/outlace/OpenTDA/blob/master/SimplicialComplex.py</a></p>
<p>Here I will zip our <span class="math">\(x\)</span> and <span class="math">\(y\)</span> coordinates from the (jittered) points we sampled from a circle so we can use it to build a more complicated simplicial complex.</p>
<div class="highlight"><pre><span></span><span class="n">newData</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">x2</span><span class="p">,</span><span class="n">y2</span><span class="p">)))</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">SimplicialComplex</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">graph</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">buildGraph</span><span class="p">(</span><span class="n">raw_data</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">3.0</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">ripsComplex</span> <span class="o">=</span> <span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">rips</span><span class="p">(</span><span class="n">nodes</span><span class="o">=</span><span class="n">graph</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">edges</span><span class="o">=</span><span class="n">graph</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">SimplicialComplex</span><span class="o">.</span><span class="n">drawComplex</span><span class="p">(</span><span class="n">origData</span><span class="o">=</span><span class="n">newData</span><span class="p">,</span> <span class="n">ripsComplex</span><span class="o">=</span><span class="n">ripsComplex</span><span class="p">)</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_22_0.png"></p>
<p>That's neat! Clearly we have reproduced the circular space from which the points were sampled. Notice that there are 1-simplices and higher-dimensional simplices (the darker blue sections) but it forms a single connected component with a single 1-dimensional hole.</p>
<div class="highlight"><pre><span></span><span class="c1">#This is what it looks like if we decrease the Epsilon parameter too much:</span>
</pre></div>
<p><img alt="png" src="TDApart2_files/TDApart2_24_0.png"></p>
<h4>Homology Groups</h4>
<p>Now that we know what simplicial complexes are and how to generate them on raw point data, we need to get to the next step of actually calculating the interesting topological features of these simplicial complexes.</p>
<p>Topological data analysis in the form of computational homology gives us a way of identifying the number of components and the number of n-dimensional "holes" (e.g. the hole in the middle of a circle) in some topological space (generally a simplicial complex) that we create based on a data set.</p>
<p>Before we proceed, I want to describe an extra property we can impose on the simplicial complexes we've been using thus far: an <strong>orientation</strong>. An oriented simplex <span class="math">\(\sigma = \{u_1, u_2, u_3, \ldots, u_n\}\)</span> is defined by the order of its vertices. Thus the oriented simplex {a,b,c} is not the same as the oriented simplex {b,a,c}. We can depict this by making our edges into arrows when drawing low-dimensional simplicial complexes.</p>
<p><img src="images/TDAimages/orientedSimplices.svg" /></p>
<p>Now, strictly speaking a mathematical set (designated with curly braces <span class="math">\(\{\}\)</span>) is by definition an unordered collection of objects, so in order to impose an orientation on our simplex we would need to add some additional mathematical structure, e.g. by making the set of vertices an ordered set with a binary <span class="math">\(\leq\)</span> relation on the elements. This isn't particularly worth delving into; we'll just henceforth presume that the vertex sets are ordered without explicitly declaring the additional structure necessary to precisely define that order.</p>
<p>Looking back at the above two oriented simplices, we can see that the directionality of the arrows is exactly reverse for each simplex. If we call the left simplex <span class="math">\(\sigma_1\)</span> and the right <span class="math">\(\sigma_2\)</span> then we would say that <span class="math">\(\sigma_1 = -\sigma_2\)</span>.</p>
<p>The reason for bringing in orientation will be made clear later.</p>
<h5>n-Chains</h5>
<p>Remember that a simplicial complex contains all faces of each highest-dimensional simplex in the complex. That is to say, if we have a 2-complex (a simplicial complex with the highest dimensional simplex being a 2-simplex (triangle)), then the complex also contains all of its lower dimensional faces (e.g. edges and vertices).</p>
<p>Let <span class="math">\(\mathcal C = \{\{0\}, \{1\}, \{2\}, \{3\}, \{0, 1\}, \{0, 2\}, \{1, 2\}, \{2, 3\}, \{0, 1, 2\}\}\)</span> be the simplicial complex constructed from a point cloud (e.g. data set), <span class="math">\(X = \{0,1,2,3\}\)</span>.</p>
<p><span class="math">\(\mathcal C\)</span> is a 2-complex since its highest-dimensional simplex is a 2-simplex (triangle). We can break this complex up into groups of subsets of this complex where each group is composed of the set of all <span class="math">\(k\)</span>-simplices. In simplicial homology theory, these groups are called <strong>chain groups</strong>, and any particular group is the <em>k-th chain group</em>, <span class="math">\(C_k(X)\)</span>. For example, the 1st-chain group of <span class="math">\(\mathcal C\)</span> is <span class="math">\(\mathcal C_1(X) = \text{ {{0,1},{0,2},{1,2},{2,3}} }\)</span></p>
<h4>Basic Abstract Algebra</h4>
<p>The "group" in "chain <em>group</em>" actually has a specific mathematical meaning that warrants covering. The concept of a <strong>group</strong> is a notion from abstract algebra, the field of mathematics that generalizes some of the familiar topics from your high school algebra classes. Needless to say, it is fairly <em>abstract</em>, but I will do my best to start with concrete examples that are easy to conceptualize, then gently abstract away until we get to the most general notions. I'm going to be covering <strong>groups, rings, fields, modules, and vector spaces</strong> and various other minor topics as they arise. Once we get this stuff down, we'll return to our discussion of <em>chain groups</em>.</p>
<p>Basically my only requirement of you, the reader, is that you already have an understanding of basic <em>set theory</em>. So if you've been lying to me this whole time and somehow understood what's going on so far, then stop and learn some set theory because you're going to need it.</p>
<h5>Groups</h5>
<p>The mathematical structure known as a <em>group</em> can be thought of as generalizing a notion of symmetry. There's a rich body of mathematics that studies groups, known as (unsurprisingly) <em>group theory</em>. We won't go very far in our brief study of groups here, as we only need to know what we need to know. For our purposes, a group is a mathematical object that has some symmetrical properties to it. It might be easiest to think in terms of geometry, but as we will see, groups are so general that many different mathematical structures can benefit from a group theory perspective.
<img src="images/TDAimages/triangleGroupTheory.svg" /></p>
<p>Just by visual inspection, we can see a few of the possible operations we can perform on this triangle that will not alter its structure. I've drawn lines of symmetry showing that you can reflect across these 3 lines and still end up with the same triangle structure. More trivially, you can translate the triangle on the plane and still have the same structure. You can also rotate the triangle by 120 degrees and it still preserves the structure of the triangle. Group theory offers precise tools for managing these types of operations and their results. </p>
<p>Here's the mathematical definition of a group. </p>
<blockquote>
<p>A <em>group</em> is a set, <span class="math">\(G\)</span>, together with a binary operation <span class="math">\(\star\)</span> (or whatever symbol you like) that maps any two elements <span class="math">\(a,b \in G\)</span> to another element <span class="math">\(c \in G\)</span>, notated as <span class="math">\(a\star b = c, \text{for all } a,b,c \in G\)</span>. The set and its operation are notated as the ordered pair <span class="math">\((G, \star)\)</span>. Additionally, to be a valid group, the set and its operation must satisfy the following axioms (rules):</p>
<ol>
<li>
<p><em>Associativity</em> <br />
For all <span class="math">\(\text{a, b and c in G}, (a \star b) \star c = a \star (b \star c)\)</span>.</p>
</li>
<li>
<p><em>Identity element</em> <br />
There exists an element <span class="math">\(e \in G\)</span> such that, for every element <span class="math">\(a \in G\)</span>, the equation <span class="math">\(e \star a = a \star e = a\)</span> holds. Such an element is unique and is called the identity element.</p>
</li>
<li>
<p><em>Inverse element</em> <br />
For each <span class="math">\(a \in G\)</span>, there exists an element <span class="math">\(b \in G\)</span>, commonly denoted <span class="math">\(a^{-1}\)</span> (or <span class="math">\(−a\)</span>, if the operation is denoted "<span class="math">\(+\)</span>"), such that <span class="math">\(a \star b = b \star a = e\)</span>, where <span class="math">\(e\)</span> is the identity element.
(Adapted from Wikipedia)</p>
</li>
</ol>
<p><em>NOTE:</em> Notice that the operation <span class="math">\(\star\)</span> is not necessarily <em>commutative</em>, that is, <span class="math">\(a \star b \stackrel{?}{=} b \star a\)</span>. The order of operation may matter. If it does not matter, the group is called a commutative or <strong>abelian</strong> group. The set <span class="math">\(\mathbb Z\)</span> (the integers) is an <em>abelian</em> group under addition since e.g. <span class="math">\(1+2 = 2+1\)</span>.</p>
</blockquote>
<p>This "group" concept seems arbitrary and begs the question of what its use is, but hopefully that will become clear. Keep in mind all mathematical objects are simply sets with some (seemingly arbitrary) axioms (basically rules the sets must obey that define a structure on those sets). You can define whatever structure you want on sets (as long as they're logically consistent and coherent rules) and you'll have some mathematical object/structure. Some structures are more interesting than others. Some are sets have a lot of structure (i.e. a lot of rules) and others will have very few. Typically the structures with a lot of rules are merely specializations of more general/abstract structures. Groups are just mathematical structures (sets with rules that someone made up) that have interesting properties and turn out to be useful in a lot of areas. But since they are so general, it is a bit difficult to reason about them concretely.</p>
<p>Let's see if we can "group-ify" our triangle example from above. We can consider the triangle to be a set of labeled vertices, as if it were a 2-simplex. Since we've labeled the vertices of the triangle, we can easily describe it as the set </p>
<div class="math">$$t = \{a, b, c\}$$</div>
<p> But how do we define a binary operation on <span class="math">\(t\)</span>? I'm not sure; let's just try things out. We'll build a table that shows us what happens when we "operate" on two elements in <span class="math">\(t\)</span>. I'm seriously just going to make up a binary operation (a map that takes any pair <span class="math">\((a,b), \; a,b \in t\)</span> to another element of <span class="math">\(t\)</span>) and see if it turns out to be a valid group. Here it is.
<img src="images/TDAimages/GroupOpTable1.svg" width="150px" />
So to figure out what <span class="math">\(a \star b\)</span> is, you start from the top row, find <span class="math">\(a\)</span>, then locate <span class="math">\(b\)</span> in the vertical left column, and where they meet up gives you the result. In my made-up example, <span class="math">\(a \star b = a\)</span>. Note that I've defined this operation to be NON-commutative, thus <span class="math">\(a \star b \neq b \star a\)</span>. You have to start from the top row and then go to the left column (in that order).</p>
<p>Now you should be able to quickly tell that this is in fact <em>not</em> a valid group, as it violates the group axioms. For example, check the element <span class="math">\(b \in t\)</span>: you'll notice there is no identity element <span class="math">\(e\)</span> for which <span class="math">\(b \star e = b\)</span>. </p>
<p>So let's try again. This time I've actually <em>tried</em> to make a valid group.
<img src="images/TDAimages/GroupOpTable2.svg" width="150px" />
You should check for yourself that this is in fact a valid group, and this time this group <em>is</em> commutative, therefore we call it an abelian group. The identity element is <span class="math">\(a\)</span> since <span class="math">\(a\)</span> added to any other element <span class="math">\(b\)</span> or <span class="math">\(c\)</span> just gives <span class="math">\(b\)</span> or <span class="math">\(c\)</span> back unchanged. Notice that the table itself looks like it has some symmetry just by visual inspection. </p>
<p>It turns out that finite groups, just like finite topological spaces, can be represented as directed graphs, which aids in visualization (aren't the patterns in math beautiful?). These graphs of groups have a special name: <strong>Cayley graphs</strong>. It's a little more complicated to construct a Cayley graph than it was to make digraphs for topological spaces. We have to add another property to Cayley graphs besides just having directed arrows (edges): we also assign an operation to each arrow. Thus if an arrow is drawn from <span class="math">\(a \rightarrow b\)</span> then that arrow represents the group operation on <span class="math">\(a\)</span> that produces <span class="math">\(b\)</span>. And not all arrows are going to be the same operation, so to aid in visualization, we typically make each type of operation associated with an arrow a different color.</p>
<p>Before we construct a Cayley graph, we need to understand what a <strong>generating set</strong> of a group is. Remember, a group is a set <span class="math">\(G\)</span> with a binary operation <span class="math">\(\star\)</span> (or whatever symbol you want to use), <span class="math">\((G, \star)\)</span>. A generating set is a subset <span class="math">\(S \subseteq G\)</span> such that every element of <span class="math">\(G\)</span> can be written as a finite product <span class="math">\(s_1 \star s_2 \star \cdots \star s_n\)</span> of elements <span class="math">\(s_i \in S\)</span>. In words, the generating set <span class="math">\(S\)</span> is a subset of <span class="math">\(G\)</span>, but if we apply our binary operation <span class="math">\(\star\)</span> to the elements in <span class="math">\(S\)</span>, possibly repeatedly, it will produce the full set <span class="math">\(G\)</span>. It's almost like <span class="math">\(S\)</span> compresses <span class="math">\(G\)</span>. There may be many possible generating sets. So what is/are the generator(s) for our set <span class="math">\(t = \{a,b,c\}\)</span> with <span class="math">\(\star\)</span> defined in the table above? Well, look at the subsection of the operation table I've highlighted in red.
<img src="images/TDAimages/GroupOpTable2b.svg" width="150px" /></p>
<p>You'll notice I've highlighted the subset <span class="math">\(\{b,c\}\)</span> because these two elements can generate the full set <span class="math">\(\{a,b,c\}\)</span>. But actually just <span class="math">\(\{b\}\)</span> and <span class="math">\(\{c\}\)</span> individually can generate the full set. For example, <span class="math">\(b\star b=c\)</span> and <span class="math">\(b \star b \star b = a\)</span> (we can also write <span class="math">\(b^2 = c\)</span> and <span class="math">\(b^3 = a\)</span>). Similarly, <span class="math">\(c \star c = b\)</span> and <span class="math">\(c \star c \star c = a\)</span>. So by repeatedly applying the <span class="math">\(\star\)</span> operation on just <span class="math">\(b\)</span> or <span class="math">\(c\)</span> we can generate all 3 elements of the full set. Since <span class="math">\(a\)</span> is the identity element of the set, it is <em>not</em> a generator as <span class="math">\(a^n = a, n \in \mathbb N\)</span> (<span class="math">\(a\)</span> to any positive power is still <span class="math">\(a\)</span>).</p>
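<p>To watch the generation happen in code, here's a sketch (again, with the operation table hard-coded as I read it from the figure) that repeatedly applies <span class="math">\(\star b\)</span> starting from <span class="math">\(b\)</span> and collects the elements produced:</p>

```python
# Repeatedly applying the operation to the single generator 'b'
# walks through the whole group: b, b*b = c, b*b*b = a.
table = {('a', 'a'): 'a', ('a', 'b'): 'b', ('a', 'c'): 'c',
         ('b', 'a'): 'b', ('b', 'b'): 'c', ('b', 'c'): 'a',
         ('c', 'a'): 'c', ('c', 'b'): 'a', ('c', 'c'): 'b'}

powers = ['b']          # start from the generator itself
while True:
    nxt = table[(powers[-1], 'b')]  # multiply the latest element by b
    if nxt in powers:   # stop once we cycle back
        break
    powers.append(nxt)

print(powers)  # ['b', 'c', 'a'] -- so {b} alone generates the full set
```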
<p>Since there are two possible generators, <span class="math">\(b\)</span> and <span class="math">\(c\)</span>, there will be two different "types" of arrows, representing two different operations. Namely, we'll have a "<span class="math">\(b\)</span>" arrow and a "<span class="math">\(c\)</span>" arrow (representing the <span class="math">\(\star b \text{ and } \star c\)</span> operations). The edge set <span class="math">\(E\)</span> for a Cayley graph of a group <span class="math">\((G, \star)\)</span> with generating set <span class="math">\(S \subseteq G\)</span> is </p>
<div class="math">$$E = \{(a,c) \mid c = a\star b \land a,c \in G \land b \in S\}$$</div>
<p> where each edge is colored/labeled by <span class="math">\(b \in S\)</span>.</p>
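<p>A quick sketch of that edge-set construction in code (the operation table and generating set are hard-coded assumptions matching the figures above):</p>

```python
# Build the Cayley-graph edge set E = {(a, c) | c = a*b, b in S},
# keeping the generator b as an edge label (the "color" of the arrow).
table = {('a', 'a'): 'a', ('a', 'b'): 'b', ('a', 'c'): 'c',
         ('b', 'a'): 'b', ('b', 'b'): 'c', ('b', 'c'): 'a',
         ('c', 'a'): 'c', ('c', 'b'): 'a', ('c', 'c'): 'b'}
G = ['a', 'b', 'c']
S = ['b']  # the smallest generating set

# Each edge is (source, target, generator-label)
E = [(x, table[(x, s)], s) for x in G for s in S]
print(E)  # [('a', 'b', 'b'), ('b', 'c', 'b'), ('c', 'a', 'b')]
```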
<p>The resulting Cayley graph is:
<img src="images/TDAimages/CayleyDiagram1.svg" /></p>
<p>In this Cayley graph we've drawn two types of arrows for the generators {b} and {c}; however, we really only need to choose one, since a single element is enough to generate the full group. So in general we choose the smallest generating set to draw the Cayley graph; in this case we'd only have the red arrow.</p>
<p>So this group is the group of rotational symmetries of the equilateral triangle: we can rotate the triangle 120 degrees without changing it, and our group codifies that by saying each turn of 120 degrees is like the group operation of "adding" (<span class="math">\(\star\)</span>) the generator element <span class="math">\(b\)</span>. We can also add the identity element, which is like deciding not to rotate it at all. Here we can see how "adding" {b} to each element in the original set {a,b,c} looks like rotating counter-clockwise by 120 degrees:</p>
<p><img src="images/TDAimages/triangleGroupOps2ex.svg" /></p>
<p>This is also called the <em>cyclic group of order 3</em> which is isomorphic to <span class="math">\(\mathbb Z_3\)</span>. Woah, isomorphic? <span class="math">\(\mathbb Z_3\)</span>? What's all of that you ask?</p>
<p>Well, isomorphic basically means there exists a one-to-one (bijective) mapping between two mathematical structures that maintains the structure. It's like they're the same structure but with different labelings. The rotational symmetry group of the triangle we just studied is isomorphic to the integers modulo 3 ( <span class="math">\(\mathbb Z_3\)</span> ). Modular arithmetic means that at some point the operation loops back to the beginning. Unlike the full integers <span class="math">\(\mathbb Z\)</span>, where if you keep adding 1 you'll keep getting a bigger number, in modular arithmetic you eventually add 1 and loop back to the starting element (the identity element 0). Consider the hour hand on a clock: it is basically the integers modulo 12 (<span class="math">\(\mathbb Z_{12}\)</span>), since if you keep adding one hour it eventually just loops back around.</p>
<p>Here's the addition table for the integers modulo 3:
<img src="images/TDAimages/GroupOpTable3.svg" width="150px" />
Hence <span class="math">\(1+1 = 2\)</span> but <span class="math">\(2+2 = 1\)</span> and <span class="math">\(1+2=0\)</span> in <span class="math">\(\mathbb Z_3\)</span>. The integers modulo <span class="math">\(x\)</span> form a cyclic group (with a single generator) with <span class="math">\(x\)</span> elements, <span class="math">\(0\)</span> being the identity element.</p>
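<p>We can check the claimed isomorphism computationally. Here's a sketch (the relabeling <span class="math">\(a \to 0, b \to 1, c \to 2\)</span> is my assumption) comparing our triangle group's table with addition mod 3:</p>

```python
# The addition table of Z_3 (integers modulo 3)
z3_table = {(x, y): (x + y) % 3 for x in range(3) for y in range(3)}

# Our triangle group's table, with 'a' the identity
triangle_table = {('a', 'a'): 'a', ('a', 'b'): 'b', ('a', 'c'): 'c',
                  ('b', 'a'): 'b', ('b', 'b'): 'c', ('b', 'c'): 'a',
                  ('c', 'a'): 'c', ('c', 'b'): 'a', ('c', 'c'): 'b'}
relabel = {'a': 0, 'b': 1, 'c': 2}

# The bijection preserves the operation, i.e. the two groups are isomorphic:
# relabel(x * y) must equal relabel(x) + relabel(y) (mod 3) for all pairs.
iso = all(relabel[triangle_table[(x, y)]] == z3_table[(relabel[x], relabel[y])]
          for x in relabel for y in relabel)
print(iso)  # True
```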
<p>Okay so that's the basics of groups, let's move on to rings and fields.</p>
<h5>Rings and Fields</h5>
<p>So now we move on to learning a bit about <em>rings</em> and then <em>fields</em>. To preface, fields and rings are essentially specializations of groups, i.e. they are sets with the rules of groups plus additional rules. Every ring is a group, and every field is a ring.</p>
<blockquote>
<p><strong>Definition (Ring)</strong>
A ring is a set <span class="math">\(R\)</span> equipped with two binary operations <span class="math">\(\star\)</span> and <span class="math">\(\bullet\)</span> (or whatever symbols you want to use) satisfying the following three sets of axioms, called the ring axioms: <br />
1. <span class="math">\(R\)</span> is an abelian (commutative) group over the <span class="math">\(\star\)</span> operation. Meaning that <span class="math">\((R, \star)\)</span> satisfies the axioms for being a group.
2. <span class="math">\((R, \bullet)\)</span> forms a mathematical structure called a <strong>monoid</strong>: the <span class="math">\(\bullet\)</span> operation is associative (i.e. <span class="math">\(a\bullet (b\bullet c) = (a \bullet b) \bullet c\)</span>) and <span class="math">\((R, \bullet)\)</span> has an identity element (i.e. <span class="math">\(\exists e \in R\)</span> such that <span class="math">\(e \bullet b = b \bullet e = b\)</span>).
3. <span class="math">\(\star\)</span> is distributive with respect to <span class="math">\(\bullet\)</span>, i.e. <br />
<span class="math">\(a \bullet (b \star c) = (a \bullet b) \star (a \bullet c)\)</span> for all <span class="math">\(a, b, c \in R\)</span> (left distributivity). <br />
<span class="math">\((b \star c) \bullet a = (b \bullet a) \star (c \bullet a)\)</span> for all <span class="math">\(a, b, c \in R\)</span> (right distributivity). <br />
(Adapted from Wikipedia)</p>
</blockquote>
<p>The most familiar ring is the integers, <span class="math">\(\mathbb Z\)</span>, with the familiar operations <span class="math">\(+\)</span> (addition) and <span class="math">\(\times\)</span> (multiplication). Since a ring is also a group, we can speak of generators for the group of integers under addition. The subset <span class="math">\(\{-1,1\}\)</span> generates the integers, since we can repeatedly compute <span class="math">\(1+1+\cdots+1\)</span> to get all the positive integers, <span class="math">\((-1)+(-1)+\cdots+(-1)\)</span> to get all the negative integers, and <span class="math">\(-1+1=0\)</span> to get 0.</p>
<p>And here is the definition of a field.</p>
<blockquote>
<p><strong>Definition (Field)</strong>
A <em>field</em> is a set <span class="math">\(F\)</span> with two binary operations <span class="math">\(\star\)</span> and <span class="math">\(\bullet\)</span>, denoted <span class="math">\(F(\star, \bullet)\)</span>, that satisfy the following axioms.</p>
<table>
<thead>
<tr>
<th>name</th>
<th><span class="math">\(\star\)</span></th>
<th><span class="math">\(\bullet\)</span></th>
</tr>
</thead>
<tbody>
<tr>
<td>associativity</td>
<td><span class="math">\((a \star b)\star c=a \star (b \star c)\)</span></td>
<td><span class="math">\((a\bullet b)\bullet c=a \bullet (b \bullet c)\)</span></td>
</tr>
<tr>
<td>commutativity</td>
<td><span class="math">\(a \star b=b \star a\)</span></td>
<td><span class="math">\(a \bullet b=b \bullet a\)</span></td>
</tr>
<tr>
<td>distributivity</td>
<td><span class="math">\(a(b \star c)=a\bullet b \star a \bullet c\)</span></td>
<td><span class="math">\((a\star b)\bullet c=a\bullet c \star b\bullet c\)</span></td>
</tr>
<tr>
<td>identity</td>
<td><span class="math">\(a \star e=a=0 \star a\)</span></td>
<td><span class="math">\(a\bullet 1=a=1 \bullet a\)</span></td>
</tr>
<tr>
<td>inverses</td>
<td><span class="math">\(a \star (-a)=0=(-a) \star a\)</span></td>
<td><span class="math">\(a\bullet a^{(-1)}=1=a^{(-1)}\bullet a, \text{ if } a\neq 0\)</span></td>
</tr>
<tr>
<td>...for all <span class="math">\(a,b,c \in F\)</span>, where <span class="math">\(0\)</span> is the symbol for the identity element under the operation <span class="math">\(\star\)</span> and <span class="math">\(1\)</span> is the symbol for the identity element under the operation for <span class="math">\(\bullet\)</span>.</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</blockquote>
<p>Clearly, a field has a lot more requirements than just a group. And just to note, I know I've been using the symbols <span class="math">\(\star\)</span> and <span class="math">\(\bullet\)</span> for the binary operations of a group, ring and field, but these are more commonly denoted as <span class="math">\(+\)</span> and <span class="math">\(\times\)</span>, called addition and multiplication, respectively. The only reason why I didn't initially use those symbols was because I wanted to emphasize the point that these do not just apply to numbers like you're familiar with, but are abstract operations that can function over any mathematical structures that meet the requirements. But now that you understand that, we can just use the more familiar symbols. So <span class="math">\(\star = +\)</span> (addition) and <span class="math">\(\bullet = \times\)</span> (multiplication) and <span class="math">\(a \div b = a \times b^{-1}\)</span> is division.</p>
<p>Remember how the integers <span class="math">\(\mathbb Z\)</span> are the most familiar <em>ring</em>, with the operations addition and multiplication? Well, the integers do not form a <em>field</em>, because there is not an inverse for each element in <span class="math">\(\mathbb Z\)</span> with respect to the <span class="math">\(\times\)</span> operation. For example, if <span class="math">\(\mathbb Z\)</span> were a field then <span class="math">\(5 \times 5^{-1} = 1\)</span>; however, <span class="math">\(5^{-1}\)</span> is not defined in the integers. If we consider the real numbers <span class="math">\(\mathbb R\)</span>, then of course <span class="math">\(5^{-1} = 1/5\)</span>. Thus a field, while defined just in terms of addition (<span class="math">\(+\)</span>) and multiplication (<span class="math">\(\times\)</span>), implicitly defines the inverses of those operations, namely subtraction (<span class="math">\(-\)</span>) and division (<span class="math">\(/\)</span>). So for a set to be a field, the division operation (inverse of multiplication) must be defined for every element in the set <em>except</em> for the identity element under the addition operation (<span class="math">\(0\)</span> in the case of <span class="math">\(\mathbb Z\)</span>); as you know from elementary arithmetic, one cannot divide by 0 (since there is no inverse of <span class="math">\(0\)</span>). And it all has to do with symmetry. The inverse of <span class="math">\(1\)</span> is <span class="math">\(-1\)</span> under addition, and <span class="math">\(-2\)</span> is the inverse of <span class="math">\(2\)</span>, and so on.
<img src="images/TDAimages/integerInverses.png" />
Notice the symmetry of inverses? Each inverse is equidistant from the "center" of the set, that being <span class="math">\(0\)</span>. But since <span class="math">\(0\)</span> is the center, there is no symmetrical opposite of it, thus <span class="math">\(0\)</span> has no inverse and cannot be defined with respect to division.</p>
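<p>A quick illustration of this point in code: in the integers there's no element that multiplies with 5 to give 1, while in the rationals (a field) there is. This is just my own sketch using Python's <code>fractions</code> module:</p>

```python
from fractions import Fraction

# In the integers, 5 has no multiplicative inverse: no integer x in this
# (arbitrarily chosen) range satisfies 5 * x == 1, so Z is a ring, not a field.
has_integer_inverse = any(5 * x == 1 for x in range(-1000, 1000))
print(has_integer_inverse)  # False

# In the rationals, every nonzero element has a multiplicative inverse:
inv = Fraction(1, 5)
print(5 * inv)  # 1
```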
<p>...</p>
<p>So stepping back a bit, group theory is all about studying symmetry. Any mathematical objects that have symmetrical features can be codified as groups and then studied algebraically to determine what operations can be done on those groups that preserve the symmetries. If we don't care about symmetry and we just want to study sets with an associative binary operation (and an identity element), then we're working with <em>monoids</em>. </p>
<h6>Why are we learning about groups, rings, and fields?</h6>
<p>Ok, so we've learned the basics of groups, rings and fields, but why? Well, I've already alluded to the fact that we'll need to understand groups to understand chain groups, which are needed to calculate the homology of simplicial complexes. But more generally, groups, rings and fields allow us to use the familiar tools of high school algebra on ANY mathematical objects that meet the relatively relaxed requirements of groups/rings/fields (not just numbers). So we can add, subtract (groups), multiply (rings) and divide (fields) with mathematical objects like (gasp) simplicial complexes. Moreover, we can solve equations with unknown variables involving abstract mathematical objects that are not numbers.</p>
<h5>Modules and Vector Spaces</h5>
<p>Okay, so there are a couple of other mathematical structures from abstract algebra we need to study in order to be prepared for the rest of persistent homology, namely modules and vector spaces, which are very similar. Let's start with vector spaces, since you should already be familiar with vectors: generally we represent data as vectors, i.e., if we have an Excel file with rows and columns, each row can be represented as an n-dimensional vector (n being the number of columns).</p>
<p>Intuitively then, vectors are n-dimensional lists of numbers, such as <span class="math">\([1.2,4.3,5.5,4.1]\)</span>. Importantly, I'm sure you're aware of the basic rules of adding vectors together and multiplying them by scalars. For example,
</p>
<div class="math">$$[1.2,4.3,5.5,4.1] + [1,3,2,1] = [1.2 + 1, 4.3 + 3, 5.5 + 2, 4.1 + 1] = [2.2,7.3,7.5,5.1]$$</div>
<p>
...in words, when adding vectors, they have to be the same length, and you add each corresponding element. That is, the first elements of the two vectors get added together, and so on. And for scaling...
</p>
<div class="math">$$ 2 \times [1.2,4.3,5.5,4.1] = [2.4, 8.6, 11.0, 8.2]$$</div>
<p>
...each element in the vector gets multiplied by the scalar.</p>
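<p>These two rules are easy to check mechanically. Here's a minimal sketch in plain Python (the function names are mine):</p>

```python
def vec_add(u, v):
    """Add two vectors element-wise; they must have the same length."""
    assert len(u) == len(v), "vectors must be the same length"
    return [a + b for a, b in zip(u, v)]

def vec_scale(c, v):
    """Multiply every element of vector v by the scalar c."""
    return [c * x for x in v]

print(vec_add([1.2, 4.3, 5.5, 4.1], [1, 3, 2, 1]))  # [2.2, 7.3, 7.5, 5.1]
print(vec_scale(2, [1.2, 4.3, 5.5, 4.1]))           # [2.4, 8.6, 11.0, 8.2]
```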
<p>But wait! The way vectors are defined does not mention anything about the elements being NUMBERS or lists. The elements of a vector space can be ANY valid mathematical objects, as long as they can be scaled up or down by elements from a <em>field</em> (usually the real numbers) and added together, with the result still being an element of the vector space.</p>
<p>Here's the formal definition of a <strong>vector space</strong>, the mathematical structure whose elements are <strong>vectors</strong>.</p>
<blockquote>
<p><strong>Definition (Vector Space)</strong> <br />
A vector space <span class="math">\(V\)</span> over a field <span class="math">\(F\)</span> is a <em>set</em> of objects called vectors, which can be added,
subtracted and multiplied by scalars (members of the underlying field). Thus <span class="math">\(V\)</span> is an
abelian group under addition, and for each <span class="math">\(f \in F\)</span> and <span class="math">\(v \in V\)</span> we have an element <span class="math">\(fv \in V\)</span> (the product of <span class="math">\(f\times v\)</span> is itself in <span class="math">\(V\)</span>.)
Scalar multiplication is distributive and associative, and the multiplicative identity of the
field acts as an identity on vectors.</p>
</blockquote>
<p>For example, the familiar vectors of real numbers come from a vector space over the field <span class="math">\(\mathbb R\)</span>.</p>
<p>Ok, so a <strong>module</strong> is the same as a vector space, except that it is defined over a <em>ring</em> rather than a field. And remember, every field <em>is</em> a ring, so a module is a more relaxed (more general) mathematical structure than a vector space.</p>
<p>(Adapted from http://www.math.uiuc.edu/~r-ash/Algebra/Chapter4.pdf)</p>
<p>We should also talk about a <strong>basis</strong> of a vector space (or module).</p>
<p>Say we have a finite set <span class="math">\(S = \{a,b,c\}\)</span> and we want to use it to build a module (or vector space). Well, we can use this set as a basis to build a module over some ring <span class="math">\(R\)</span>. In this case, our module would be mathematically defined as:</p>
<div class="math">$$M = \{(x* a, y* b, z* c) \mid x,y,z \in R\}$$</div>
<p> <br />
or, written as a formal sum: </p>
<div class="math">$$M = \{x*a + y*b + z*c \mid x,y,z \in R\}$$</div>
<p>
<br />
Where <span class="math">\(*\)</span> is the binary "multiplication" operation of our module. But since <span class="math">\(R\)</span> is a ring, it also must have a second binary operation that we might call "addition" and denote with <span class="math">\(+\)</span>. Notice I use parentheses because the order matters, i.e. <span class="math">\((a,b,c) \neq (b,a,c)\)</span>.</p>
<p>Now, every element in <span class="math">\(M\)</span> is of the form <span class="math">\((xa,yb,zc)\)</span> (omitting the explicit <span class="math">\(*\)</span> operation for convenience), hence <span class="math">\(\{a,b,c\}\)</span> forms a <em>basis</em> of this module.</p>
<p>And we can add and scale each element of <span class="math">\(M\)</span> using elements from its underlying ring <span class="math">\(R\)</span>. If we take the ring to be the integers, <span class="math">\(\mathbb Z\)</span> then we can add and scale in the following ways:
</p>
<div class="math">$$m_1, m_2 {\in M}\\
m_1 = (3a, b, 5c) \\
m_2 = (a, 2b, c) \\
m_1 + m_2 = (3a+a, b+2b, 5c+c) = (4a, 3b, 6c) \\
5*m_1 = 5 * (3a, b, 5c) = (5*3a, 5*b, 5*5c) = (15a, 5b, 25c)$$</div>
<p>This module is also a group (since every module and vector space is a group) if we only pay attention to the addition operation. And note that even though our generating set <span class="math">\(\{a,b,c\}\)</span> is finite, once we take coefficients from an infinite ring like the integers, we've constructed an infinite module or vector space.</p>
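<p>A quick way to sanity-check this arithmetic: represent a module element <span class="math">\((xa, yb, zc)\)</span> by its tuple of integer coefficients <span class="math">\((x,y,z)\)</span> relative to the basis <span class="math">\((a,b,c)\)</span>. A minimal Python sketch (the encoding and helper names are mine):</p>

```python
# A module element (x*a, y*b, z*c) over the ring Z is just its
# coefficient tuple (x, y, z) relative to the ordered basis (a, b, c).
def m_add(m1, m2):
    """Add two module elements coefficient-wise."""
    return tuple(x + y for x, y in zip(m1, m2))

def m_scale(r, m):
    """Scale a module element by a ring element r (an integer here)."""
    return tuple(r * x for x in m)

m1 = (3, 1, 5)   # (3a, b, 5c)
m2 = (1, 2, 1)   # (a, 2b, c)
print(m_add(m1, m2))   # (4, 3, 6)  i.e. (4a, 3b, 6c)
print(m_scale(5, m1))  # (15, 5, 25)
```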
<p>In general, we can come up with multiple bases for a vector space; however, there is a mathematical theorem that tells us that all possible bases are of the same size. This leads us to the notion of <strong>dimension</strong>. The dimension of a vector space (or module) is taken to be the size of its basis. So in the example given above, the basis had three elements and thus that module has a dimension of 3.</p>
<p>As another example, take the vector space formed by <span class="math">\(\mathbb R^2\)</span>, where <span class="math">\(\mathbb R\)</span> is the set of real numbers. This is defined as:
</p>
<div class="math">$$\mathbb R^2 = \{(x,y) \mid x,y \in \mathbb R\}$$</div>
<p>
Basically we have an infinite set of all possible pairs of real numbers. One basis for this vector space is simply <span class="math">\(\{(1,0), (0,1)\}\)</span>, which feels the most natural as it is the simplest, but there's nothing forbidding us from choosing a different basis such as <span class="math">\(\{(2,1), (1.55,3)\}\)</span>, since its linear combinations produce the same vector space. But no matter how we define our basis, it will always have 2 elements and thus the dimension is 2.</p>
<p>When we have a vector space, say of dimension 2, like <span class="math">\(\mathbb R^2\)</span>, we can separate out its components like so:
</p>
<div class="math">$$ \mathbb R_x = \{(x, 0) \mid x \in \mathbb R\} \\
\mathbb R_y = \{(0, y) \mid y \in \mathbb R\} \\
\mathbb R^2 = \mathbb R_x \oplus \mathbb R_y $$</div>
<p>
We can introduce new notation called a <strong>direct sum</strong> <span class="math">\(\oplus\)</span>, to signify this process of building out the dimensions of a vector space by a process like <span class="math">\((x,0)+(0,y)=(x+0,0+y)=(x,y) \mid x,y \in \mathbb R\)</span>. Thus we can more simply say <span class="math">\(\mathbb R^2 = \mathbb R \oplus \mathbb R\)</span>.</p>
<p>We can also say that <span class="math">\(\mathbb R^2\)</span> is the <strong>span</strong> of the basis set <span class="math">\(\{(1,0), (0,1)\}\)</span>, denoted <span class="math">\(span\{(1,0), (0,1)\}\)</span>, or sometimes even more simply denoted using angle brackets <span class="math">\(\langle\ (1,0), (0,1)\ \rangle\)</span>.</p>
<p><span class="math">\(span\{(1,0), (0,1)\}\)</span> is shorthand for saying "the set composed of all <strong>linear combinations</strong> of the bases <span class="math">\((1,0)\)</span> and <span class="math">\((0,1)\)</span>".</p>
<p>What is a linear combination? Well, in general, a linear combination of <span class="math">\(x\)</span> and <span class="math">\(y\)</span> is any expression of the form <span class="math">\(ax + by\)</span> where <span class="math">\(a,b\)</span> are constants in some field <span class="math">\(F\)</span>. </p>
<p>So a single possible linear combination of <span class="math">\((1,0)\)</span> and <span class="math">\((0,1)\)</span> would be: <span class="math">\(5(1,0) + 2(0,1) = (5*1,5*0) + (2*0,2*1) = (5,0) + (0,2) = (5+0, 0+2) = (5, 2)\)</span>. But <em>all</em> the linear combinations of <span class="math">\((1,0)\)</span> and <span class="math">\((0,1)\)</span> would be the expression: <span class="math">\(\{a(1,0) + b(0,1) \mid a,b \in \mathbb R\}\)</span> and this is the same as saying <span class="math">\(span\{(1,0), (0,1)\}\)</span> or <span class="math">\(\langle\ (1,0), (0,1)\ \rangle\)</span>. And this set of all ordered pairs of real numbers is denoted by <span class="math">\(\mathbb R^2\)</span>.</p>
<p>What's important about the bases of a vector space is that their elements must be <strong>linearly independent</strong>: no element can be expressed as a linear combination of the others. For example, the basis element <span class="math">\((1,0)\)</span> cannot be expressed in terms of <span class="math">\((0,1)\)</span>: there is no <span class="math">\(a \in \mathbb R\)</span> such that <span class="math">\(a(0,1) = (1,0)\)</span>.</p>
<p>So in summary, a <em>basis</em> of a vector space <span class="math">\(V\)</span> consists of a set of elements <span class="math">\(B\)</span> such that each element <span class="math">\(b \in B\)</span> is linearly independent and the span of <span class="math">\(B\)</span> produces the whole vector space <span class="math">\(V\)</span>. Thus the dimension of the vector space dim<span class="math">\((V)\)</span> is the number of elements in <span class="math">\(B\)</span>.</p>
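<p>Linear combinations and independence in <span class="math">\(\mathbb R^2\)</span> are easy to check with a few lines of Python. Here's a sketch (in two dimensions, two vectors are linearly independent iff the determinant of the 2×2 matrix they form is nonzero; the helper names are mine):</p>

```python
def lin_comb(a, b, u, v):
    """The linear combination a*u + b*v of two 2-d vectors u and v."""
    return (a * u[0] + b * v[0], a * u[1] + b * v[1])

def independent_2d(u, v):
    """Two 2-d vectors are linearly independent iff the determinant
    of the matrix with columns u and v is nonzero."""
    return u[0] * v[1] - u[1] * v[0] != 0

print(lin_comb(5, 2, (1, 0), (0, 1)))   # (5, 2)
print(independent_2d((1, 0), (0, 1)))   # True
print(independent_2d((1, 0), (2, 0)))   # False: (2,0) = 2*(1,0)
```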
<p>(Reference: The Napkin Project by Evan Chen, http://www.mit.edu/~evanchen/napkin.html)</p>
<h5>Back to Chain Groups</h5>
<p>Sigh. Ok, that was a lot of stuff we had to get through, but now we're back to the real problem we care about: figuring out the homology groups of a simplicial complex. As you may recall, we had left off discussing <em>chain groups</em> of a simplicial complex. I don't want to have to repeat everything, so just scroll up and re-read that part if you forget. I'll wait...</p>
<p>Let <span class="math">\(\mathcal S = \text{{{a}, {b}, {c}, {d}, {a, b}, {b, c}, {c, a}, {c, d}, {d, b}, {a, b, c}}}\)</span> be an <em>oriented</em> abstract simplicial complex (depicted below) constructed from some point cloud (e.g. a data set). The <strong>n-chain</strong>, denoted <span class="math">\(C_n(S)\)</span>, is the subset of <span class="math">\(S\)</span> of <span class="math">\(n\)</span>-dimensional simplices. For example, <span class="math">\(C_1(S) = \text{ {{a, b}, {b, c}, {c, a}, {c, d}, {d, b}}}\)</span> and <span class="math">\(C_2(S) = \text{ {{a, b, c}} }\)</span>.
<img src="images/TDAimages/simplicialcomplex5b.svg" /></p>
<p>Now, an <em>n-chain</em> can become a <strong>chain group</strong> if we give it a binary operation called addition that satisfies the group axioms. With this structure, we can add together <span class="math">\(n\)</span>-simplices in <span class="math">\(C_n(S)\)</span>. More precisely, an element of the <span class="math">\(n\)</span>-chain group is a formal sum of <span class="math">\(n\)</span>-simplices with coefficients from a group, ring or field <span class="math">\(F\)</span>. I'm going to use the same <span class="math">\(C_n\)</span> notation for a chain group as I did for an n-chain.
</p>
<div class="math">$$C_n(S) = \sum a_i \sigma_i$$</div>
<p>
where <span class="math">\(\sigma_i\)</span> refers to the <span class="math">\(i\)</span>-th simplex in the n-chain <span class="math">\(C_n\)</span>, <span class="math">\(a_i\)</span> is the corresponding coefficient from a field, ring or group, and <span class="math">\(S\)</span> is the original simplicial complex.</p>
<p>Technically, any field/group/ring could be used to provide the coefficients for the chain group; however, for our purposes, the easiest group to work with is the cyclic group <span class="math">\(\mathbb Z_2\)</span>, i.e. the integers modulo 2. <span class="math">\(\mathbb Z_2\)</span> only contains <span class="math">\(\{0,1\}\)</span> such that <span class="math">\(1+1=0\)</span>, and it is a <em>field</em> because we can define addition and multiplication operations that meet the axioms of a field. This is useful because we really just want to be able to say either that a simplex exists in our n-chain (i.e. it has a coefficient of <span class="math">\(1\)</span>) or that it does not (coefficient of <span class="math">\(0\)</span>), and if we have a duplicate simplex, the two copies cancel out when added together. It turns out this is exactly the property we want. You might object that <span class="math">\(\mathbb Z_2\)</span> is not a group because it doesn't appear to have inverses, e.g. there is no <span class="math">\(-1\)</span>, but in fact it does: the inverse of <span class="math">\(a\)</span> is <span class="math">\(a\)</span> itself. Wait, what? Yes, <span class="math">\(a = -a\)</span> in <span class="math">\(\mathbb Z_2\)</span> because <span class="math">\(a + a = 0\)</span>. That's all that's required for an inverse to exist: for each <span class="math">\(a \in G\)</span> there must be some <span class="math">\(b \in G\)</span> such that <span class="math">\(a+b=0\)</span> (<span class="math">\(G\)</span> being a group).</p>
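<p>Over <span class="math">\(\mathbb Z_2\)</span>, this "duplicates cancel" behavior is exactly the symmetric difference of sets. A sketch (my own encoding: a chain is the set of simplices that have coefficient 1):</p>

```python
def chain_add(c1, c2):
    """Add two Z2-chains: a simplex appearing in both cancels (1 + 1 = 0)."""
    return c1 ^ c2  # symmetric difference

ab, bc, ca = frozenset("ab"), frozenset("bc"), frozenset("ca")
c1 = {ab, bc}
c2 = {bc, ca}
# bc appears in both chains, so it cancels:
print(chain_add(c1, c2) == {ab, ca})  # True
# every chain is its own inverse: c + c = 0 (the empty chain)
print(chain_add(c1, c1) == set())     # True
```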
<p>If we use <span class="math">\(\mathbb Z_2\)</span> as our coefficient group, then we can essentially ignore simplex orientation. That makes things a bit more convenient. But for completeness' sake, I wanted to incorporate orientations, because I've most often seen the full set of integers <span class="math">\(\mathbb Z\)</span> used as coefficients in academic papers and in practice. If we use a ring with negative numbers like <span class="math">\(\mathbb Z\)</span>, then our simplices need to be oriented, such that <span class="math">\([a,b] \neq [b,a]\)</span>. This is because, if we use <span class="math">\(\mathbb Z\)</span>, then <span class="math">\([a,b] = -[b,a]\)</span>, hence <span class="math">\([a,b] + [b,a] = 0\)</span>.</p>
<p>Our ultimate goal, remember, is to mathematically find connected components and <span class="math">\(n\)</span>-dimensional loops in a simplicial complex. Our simplicial complex <span class="math">\(S\)</span> from above, by visual inspection, has one connected component and one loop (hole). Keep in mind that the simplex <span class="math">\(\{a,b,c\} \in S\)</span> is "filled in"; there is no hole in the middle, it is a solid object.</p>
<p>We now move to defining <strong>boundary maps</strong>. Intuitively, the boundary (map) of an un-oriented <span class="math">\(n\)</span>-simplex <span class="math">\(X\)</span> is the set of faces obtained by deleting one vertex at a time from <span class="math">\(X\)</span>; that is, the boundary is the set of all <span class="math">\((n-1)\)</span>-dimensional faces of <span class="math">\(X\)</span> (each containing <span class="math">\(n\)</span> of the <span class="math">\(n+1\)</span> vertices). For example, the boundary of <span class="math">\(\{a,b,c\}\)</span> is <span class="math">\(\text{ {{a,b},{b,c},{c,a}} }\)</span>.</p>
<p>Let's give a more precise definition that applies to oriented simplices, and offer some notation.</p>
<blockquote>
<p><strong>Definition (Boundary)</strong> <br />
The boundary of an <span class="math">\(n\)</span>-simplex <span class="math">\(X\)</span> with vertex set <span class="math">\([v_0, v_1, v_2, ... v_n]\)</span>, denoted <span class="math">\(\partial(X)\)</span>, is: <br />
<div class="math">$$\partial(X) = \sum^{n}_{i=0}(-1)^{i}[v_0, v_1, \ldots, \hat{v_i}, \ldots, v_n], \text{ where $\hat{v_i}$ means the $i$-th vertex is removed from the sequence}$$</div> <br />
The boundary of a single vertex is 0, <span class="math">\(\partial([v_i]) = 0\)</span>.</p>
</blockquote>
<p>For example, if <span class="math">\(X\)</span> is the 2-simplex <span class="math">\([a,b,c]\)</span>, then <span class="math">\(\partial(X) = [b,c] + (-1)[a,c] + [a,b] = [b,c] + [c,a] + [a,b]\)</span></p>
<p>Let's see how the idea of a boundary can find us a simple loop in the 2-complex example from above. We see that <span class="math">\([b,c] + [c,d] + [d,b]\)</span> are the 1-simplices that form a cycle or loop. If we take the boundary of this set with the coefficient field <span class="math">\(\mathbb Z\)</span> then,
</p>
<div class="math">$$\partial([b,c] + [c,d] + [d,b]) = \partial([b,c]) + \partial([c,d]) + \partial([d,b])$$</div>
<div class="math">$$\partial([b,c]) + \partial([c,d]) + \partial([d,b]) = [c] + (-1)[b] + [d] + (-1)[c] + [b] + (-1)[d]$$</div>
<div class="math">$$\require{cancel} \cancel{[c]} + \cancel{(-1)[c]} + \cancel{[d]} + \cancel{(-1)[d]} + \cancel{[b]} + \cancel{(-1)[b]} = 0$$</div>
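<p>This cancellation can also be checked mechanically. Here's a sketch of the signed boundary operator from the definition above, representing an oriented simplex as a tuple of vertices and a chain as a mapping from faces to integer coefficients (the encoding and names are mine):</p>

```python
from collections import defaultdict

def boundary(simplex):
    """Signed boundary of an oriented simplex (a tuple of vertices):
    the sum over i of (-1)^i times the face with vertex i removed."""
    faces = defaultdict(int)
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        faces[face] += (-1) ** i
    return faces

def boundary_of_chain(simplices):
    """Boundary of a sum of simplices (all with coefficient +1)."""
    total = defaultdict(int)
    for s in simplices:
        for face, coeff in boundary(s).items():
            total[face] += coeff
    return {f: c for f, c in total.items() if c != 0}

# the loop [b,c] + [c,d] + [d,b] has empty boundary:
print(boundary_of_chain([("b", "c"), ("c", "d"), ("d", "b")]))  # {}
```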
<p>This leads us to a more general principle: a <strong><span class="math">\(p\)</span>-cycle</strong> is a <span class="math">\(p\)</span>-chain <span class="math">\(c \in C_p\)</span> whose boundary <span class="math">\(\partial(c) = 0\)</span>.</p>
<p>That is, in order to find the p-cycles in a chain group <span class="math">\(C_p\)</span>, we need to solve the algebraic equation <span class="math">\(\partial(c) = 0\)</span>, and the solutions will be the p-cycles. Don't worry, this will all make sense when we run through some examples shortly.</p>
<p>An important result to point out is that the boundary of a boundary is always 0, i.e. <span class="math">\(\partial_{n} \circ \partial_{n+1} = 0\)</span>.</p>
<h6>Chain Complexes</h6>
<p>We just saw how the boundary operation is distributive, e.g. for two simplices <span class="math">\(\sigma_1, \sigma_2 \in S\)</span>
</p>
<div class="math">$$ \partial(\sigma_1 + \sigma_2) = \partial(\sigma_1) + \partial(\sigma_2)$$</div>
<blockquote>
<p><strong>Definition (Chain Complex)</strong> <br />
Let <span class="math">\(S\)</span> be a simplicial <span class="math">\(p\)</span>-complex. Let <span class="math">\(C_n(S)\)</span> be the <span class="math">\(n\)</span>-chain of <span class="math">\(S\)</span>, <span class="math">\(n \leq p\)</span>. The chain complex, <span class="math">\(\mathscr C(S)\)</span> is:<br />
<div class="math">$$\mathscr C(S) = \sum^{p}_{n=0}\partial(C_n(S)) \text{ , or in other words...}$$</div> <br />
<div class="math">$$\mathscr C(S) = \partial(C_0(S)) + \partial(C_1(S)) \ + \ ... \ + \ \partial(C_p(S))$$</div>
</p>
</blockquote>
<p>Now we can define how to find the <span class="math">\(p\)</span>-cycles in a simplicial complex.</p>
<blockquote>
<p><strong>Definition (Kernel)</strong><br />
The kernel of <span class="math">\(\partial(C_n)\)</span>, denoted <span class="math">\(\text{Ker}(\partial(C_n))\)</span> is the group of <span class="math">\(n\)</span>-chains <span class="math">\(Z_n \subseteq C_n\)</span> such that <span class="math">\(\partial(Z_n) = 0\)</span></p>
</blockquote>
<p>We're almost there, we need a couple more definitions and we can finally do some <em>simplicial homology</em>.</p>
<blockquote>
<p><strong>Definition (Image of Boundary)</strong> <br />
The image of a boundary <span class="math">\(\partial_n\)</span> (boundary of some <span class="math">\(n\)</span>-chain), <span class="math">\(\text{Im }(\partial_n)\)</span>, is the <em>set</em> of boundaries. <br /><br />
For example, if a 1-chain is <span class="math">\(C_1 = \{[v_0, v_1], [v_1, v_2], [v_2, v_0]\}\)</span>, <br />
then <span class="math">\(\partial_1 = [v_1] + (-1)[v_0] + [v_2] + (-1)[v_1] + [v_0] + (-1)[v_2]\)</span> <br />
<span class="math">\(\text{Im }\partial_1 = \{[v_1-v_0],[v_2-v_1],[v_0-v_2]\}\)</span></p>
</blockquote>
<p>So the only difference between <span class="math">\(\partial_n\)</span> and Im <span class="math">\(\partial_n\)</span> is that the image of the boundary is in set form, whereas the boundary is in a polynomial-like form.</p>
<blockquote>
<p><strong>Definition (<span class="math">\(n^{th}\)</span> Homology Group)</strong> <br />
The <span class="math">\(n^{th}\)</span> Homology Group <span class="math">\(H_n\)</span> is defined as <span class="math">\(H_n\)</span> = Ker <span class="math">\(\partial_n \ / \ \text{Im } \partial_{n+1}\)</span>.</p>
<p><strong>Definition (Betti Numbers)</strong> <br/>
The <span class="math">\(n^{th}\)</span> Betti Number <span class="math">\(b_n\)</span> is defined as the dimension of <span class="math">\(H_n\)</span>. <br />
<span class="math">\(b_n = dim(H_n)\)</span></p>
</blockquote>
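<p>To make these definitions concrete: assuming <span class="math">\(\mathbb Z_2\)</span> coefficients, the Betti numbers of our example complex <span class="math">\(S\)</span> can be computed from the ranks of its boundary matrices, using <span class="math">\(b_n = \dim(\text{Ker } \partial_n) - \text{rank}(\partial_{n+1})\)</span>. The sketch below (the matrix encoding and helper names are mine) recovers what we saw by visual inspection: one connected component and one loop.</p>

```python
def rank_z2(M):
    """Rank of a 0/1 matrix over the field Z2 (Gaussian elimination mod 2)."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        # find a row at or below the current pivot row with a 1 in this column
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[rank])]  # row addition mod 2
        rank += 1
    return rank

# our example complex S: vertices, edges, and one filled-in triangle
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "b")]
triangles = [("a", "b", "c")]

# boundary matrices over Z2: entry is 1 iff the row simplex is a face of the column simplex
d1 = [[1 if v in e else 0 for e in edges] for v in vertices]
d2 = [[1 if set(e) <= set(t) else 0 for t in triangles] for e in edges]

# b_n = dim Ker(d_n) - rank(d_{n+1}); dim Ker(d_n) = (#n-simplices) - rank(d_n)
b0 = len(vertices) - rank_z2(d1)              # d0 = 0, so Ker(d0) is all of C_0
b1 = (len(edges) - rank_z2(d1)) - rank_z2(d2)
print(b0, b1)  # 1 1  -> one connected component, one loop
```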
<h4>More group theory</h4>
<p>We've reached an impasse again requiring some exposition. I casually used the notation <span class="math">\(/\)</span> in defining a homology group to be Ker <span class="math">\(\partial_n \ / \ \text{Im } \partial_{n+1}\)</span>. The mathematical use of this notation is to say that for some <em>group</em> <span class="math">\(G\)</span> and a subgroup <span class="math">\(H\)</span> of <span class="math">\(G\)</span>, <span class="math">\(G / H\)</span> is the quotient group. Ok, so what is a quotient group? Alright, we need to learn more group theory. And unfortunately it's kind of hard, but I'll do my best to make it intuitive.</p>
<blockquote>
<p><strong>Definition (Quotient Group)</strong> <br />
For a group <span class="math">\(G\)</span> and a normal subgroup <span class="math">\(N\)</span> of <span class="math">\(G\)</span>, denoted <span class="math">\(N \leq G\)</span>, the quotient group of <span class="math">\(N\)</span> in <span class="math">\(G\)</span>, written <span class="math">\(G/N\)</span> and read "<span class="math">\(G\)</span> modulo <span class="math">\(N\)</span>", is the set of <em>cosets</em> of <span class="math">\(N\)</span> in <span class="math">\(G\)</span>. <br />
(Source: Weisstein, Eric W. "Quotient Group." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/QuotientGroup.html)</p>
</blockquote>
<p>For now you can ignore what a <em>normal</em> subgroup means because all the groups we will deal with in TDA are abelian groups, and all subgroups of abelian groups are normal. But this definition just defines something in terms of something else called <em>cosets</em>. Annoying. Ok what is a coset?</p>
<blockquote>
<p><strong>Definition (Cosets)</strong> <br />
For a group <span class="math">\((G, \star)\)</span>, consider a subgroup <span class="math">\((H, \star)\)</span> with elements <span class="math">\(h_i\)</span> and an element <span class="math">\(x\)</span> in <span class="math">\(G\)</span>, then <span class="math">\(x\star{h_i}\)</span> for <span class="math">\(i=1, 2, ...\)</span> constitute the <em>left coset</em> of the subgroup <span class="math">\(H\)</span> with respect to <span class="math">\(x\)</span>. <br />
(Adapted from: Weisstein, Eric W. "Left Coset." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/LeftCoset.html)</p>
</blockquote>
<p>So we can ask what the left (or right) coset is of a subgroup <span class="math">\(H \leq G\)</span> with respect to some element <span class="math">\(x \in G\)</span> and that gives us a single coset, but if we get the set of <em>all</em> left cosets (i.e. the cosets with respect to every element <span class="math">\(x \in G\)</span>) then we have our quotient group <span class="math">\(G\ /\ H\)</span>.</p>
<p>For our purposes, we only need to concern ourselves with <em>left</em> cosets, because TDA only involves abelian groups, and for abelian groups, left cosets and right cosets are the same. (We <em>will</em> see an example of a non-abelian group).</p>
<p>We'll reconsider the equilateral triangle and its symmetries to get a better sense of subgroups, quotient groups and cosets.
<img src="images/TDAimages/triangleGroupTheory.svg" />
Remember, by simple visualization we identified the types of operations we could perform on the equilateral triangle that preserve its structure: we can rotate it by 0, 120, or 240 degrees and we can reflect it across 3 lines of symmetry. Any other operations, like rotating by 1 degree, would produce a different structure when embedded in, for example, two-dimensional Euclidean space.</p>
<p>We can build a set of these 6 group operations:
</p>
<div class="math">$$S = \text{{$rot_0$, $rot_{120}$, $rot_{240}$, $ref_a$, $ref_b$, $ref_c$}}$$</div>
<p> <br />
...where <span class="math">\(rot_0\)</span> and so on means to rotate the triangle about its center 0 degrees (an identity operation), and <span class="math">\(ref_a\)</span> means to reflect across the line labeled <span class="math">\(a\)</span> in the picture above.</p>
<p>For example, we can take the triangle and apply two operations from <span class="math">\(S\)</span>, such as <span class="math">\(rot_{120}, ref_a\)</span></p>
<p><img src="images/TDAimages/triangleGroupOps1.svg" />
(note I'm being a bit confusing by labeling the vertices of the triangle <span class="math">\(a,b,c\)</span> but also labeling the lines of reflection <span class="math">\(a,b,c\)</span>, but it should be obvious by context what I'm referring to.)</p>
<p>So does <span class="math">\(S\)</span> form a valid group? Well it does if we define a binary operation for each pair of elements it contains. And the operation <span class="math">\(a \star b\)</span> for any two elements in <span class="math">\(S\)</span> will simply mean "do <span class="math">\(a\)</span>, then do <span class="math">\(b\)</span>". The elements of <span class="math">\(S\)</span> are actions that we take on the triangle. We can build a multiplication (or Cayley) table that shows the result of applying the operation for every pair of elements.</p>
<p>Here's the Cayley table:</p>
<table>
<thead>
<tr>
<th></th>
<th><span class="math">\(\mathbf{rot_0}\)</span></th>
<th><span class="math">\(\mathbf{rot_{120}}\)</span></th>
<th><span class="math">\(\mathbf{rot_{240}}\)</span></th>
<th><span class="math">\(\mathbf{ref_a}\)</span></th>
<th><span class="math">\(\mathbf{ref_b}\)</span></th>
<th><span class="math">\(\mathbf{ref_c}\)</span></th>
</tr>
</thead>
<tbody>
<tr>
<td><span class="math">\(\mathbf{rot_0}\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
</tr>
<tr>
<td><span class="math">\(\mathbf{rot_{120}}\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
</tr>
<tr>
<td><span class="math">\(\mathbf{rot_{240}}\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
</tr>
<tr>
<td><span class="math">\(\mathbf{ref_a}\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
</tr>
<tr>
<td><span class="math">\(\mathbf{ref_b}\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
</tr>
<tr>
<td><span class="math">\(\mathbf{ref_c}\)</span></td>
<td><span class="math">\(ref_c\)</span></td>
<td><span class="math">\(ref_a\)</span></td>
<td><span class="math">\(ref_b\)</span></td>
<td><span class="math">\(rot_{120}\)</span></td>
<td><span class="math">\(rot_{240}\)</span></td>
<td><span class="math">\(rot_0\)</span></td>
</tr>
</tbody>
</table>
<p>Notice that this defines a non-commutative (non-abelian) group, since in general <span class="math">\(a \star b \neq b \star a\)</span>.</p>
<p>Now we can use the Cayley table to build a Cayley diagram and visualize the group <span class="math">\(S\)</span>. Let's recall how to build a Cayley diagram. We will first start with our vertices (aka nodes), one for each of the 6 actions in our group <span class="math">\(S\)</span>. Then we need to figure out the minimum generator for this group, that is, the minimal subset of <span class="math">\(S\)</span> that with various combinations and repeated applications of the group operation <span class="math">\(\star\)</span> will generate the full 6 element set <span class="math">\(S\)</span>. It turns out that you just need <span class="math">\(\{rot_{120}, ref_a\}\)</span> to generate the full set, hence that subset of 2 elements is the minimal generating set. </p>
<p>Now, each element in the generating set is assigned a different colored arrow, and thus starting from a node <span class="math">\(a\)</span> and following a particular arrow to another element <span class="math">\(b\)</span>, means that <span class="math">\(a \star g = b\)</span> where <span class="math">\(g\)</span> is an element from the generating set. Thus for <span class="math">\(S\)</span>, we will have a graph with two different types of arrows, and I will color the <span class="math">\(rot_{120}\)</span> arrow as blue and the <span class="math">\(ref_a\)</span> arrow as red. Then we use our Cayley table from above to connect the nodes with the two types of arrows. </p>
<p>Here's the resulting Cayley diagram:</p>
<p><img src="images/TDAimages/CayleyDiagramD6.svg" /></p>
<p>For the curious, it turns out this group is the smallest non-abelian finite group, it's called the "Dihedral group of order 6", and can be used to represent a number of other things besides the symmetry actions on an equilateral triangle.</p>
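<p>Both facts — that <span class="math">\(\star\)</span> is non-commutative and that <span class="math">\(\{rot_{120}, ref_a\}\)</span> generates all 6 elements — can be verified by encoding each symmetry as a permutation of the vertex labels. This encoding is my own sketch (I take <span class="math">\(rot_{120}\)</span> to send <span class="math">\(a \to b \to c \to a\)</span>, and <span class="math">\(ref_a\)</span> to fix vertex <span class="math">\(a\)</span>):</p>

```python
def compose(x, y):
    """x * y means 'do x, then do y': apply x first, then y."""
    return {v: y[x[v]] for v in x}

rot0   = {"a": "a", "b": "b", "c": "c"}  # identity
rot120 = {"a": "b", "b": "c", "c": "a"}  # rotate 120 degrees
ref_a  = {"a": "a", "b": "c", "c": "b"}  # reflect, fixing vertex a

# non-abelian: rot120 * ref_a != ref_a * rot120
print(compose(rot120, ref_a) == compose(ref_a, rot120))  # False

def generate(gens):
    """Close a set of permutations under composition (dicts are made
    hashable as sorted item-tuples so they can live in a set)."""
    elems = {tuple(sorted(g.items())) for g in gens}
    while True:
        new = {tuple(sorted(compose(dict(x), dict(y)).items()))
               for x in elems for y in elems}
        if new <= elems:
            return elems
        elems |= new

# {rot120, ref_a} generates the full 6-element group
print(len(generate([rot120, ref_a])))  # 6
```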
<p>We will refer to both this Cayley table and the Cayley diagram to get an intuition for the definitions we gave earlier for subgroups, cosets and quotient groups.</p>
<p>Let's start by revisiting the notion of a <em>subgroup</em>. A subgroup <span class="math">\((H,\star)\)</span> of a group <span class="math">\((G,\star)\)</span> (often denoted <span class="math">\(H < G\)</span>) is merely a subset of <span class="math">\(G\)</span> with the same binary operation <span class="math">\(\star\)</span> that satisfies the group axioms. For example, every group has a trivial subgroup that just includes the identity element (any valid subgroup will need to include the identity element to meet the group axioms).</p>
<p>Consider the subset <span class="math">\(W = \{rot_0, rot_{120}, rot_{240}\}\)</span>. Is <span class="math">\(W \leq S\)</span> a valid subgroup? Well, yes, because it is a subset of <span class="math">\(S\)</span>, contains the identity element, is closed under <span class="math">\(\star\)</span>, is associative, and each element has an inverse. For this example, the subgroup <span class="math">\(W \leq S\)</span> forms the outer circuit in the Cayley diagram (nodes highlighted green):</p>
<p><img src="images/TDAimages/CayleyDiagramD6_2.svg" /></p>
<p>Okay, so a subgroup is fairly straightforward. What about a <em>coset</em>? Referring back to the definition given previously, a coset is defined with respect to a particular subgroup. So let's consider our subgroup <span class="math">\(W\leq S\)</span> and ask what the <em>left cosets</em> of this subgroup are. Now, I said earlier that we only need to worry about <em>left</em> cosets because in TDA the groups are all abelian. That's true, but the group of symmetries of the equilateral triangle is <em>not</em> an abelian group, so its left and right cosets will, in general, not be the same. We're just using the triangle to learn about group theory; once we get back to the chain groups of persistent homology, we'll be back to abelian groups.</p>
<p>Recall that for a fixed <span class="math">\(x \in G\)</span>, the left coset of a subgroup <span class="math">\(H\leq G\)</span> is denoted <span class="math">\(xH = \{x\star{h} \mid h \in H\}\)</span>.<br />
And for completeness, the right coset is <span class="math">\(Hx = \{{h}\star{x} \mid h \in H\}\)</span>.<br /></p>
<p>Back to our triangle symmetries, group <span class="math">\(S\)</span> and its subgroup <span class="math">\(W\)</span>. Recall, <span class="math">\(W \leq S = \{rot_0, rot_{120}, rot_{240}\}\)</span>. To figure out the left cosets then, we'll start by choosing an <span class="math">\(x\in S\)</span> where <span class="math">\(x\)</span> is not in our subgroup <span class="math">\(W\)</span>. Then we will multiply <span class="math">\(x\)</span> by each element in <span class="math">\(W\)</span>. Let's start with <span class="math">\(x = ref_a\)</span>.</p>
<p>So <span class="math">\(ref_a \star \{rot_0, rot_{120}, rot_{240}\} = \{ref_a \star rot_0, ref_a \star rot_{120}, ref_a \star rot_{240}\} = \{ref_a, ref_b, ref_c\}\)</span>. So the left coset with respect to <span class="math">\(ref_a\)</span> is the set <span class="math">\(\{ref_a, ref_b, ref_c\}\)</span>. Now, we're supposed to do the same with another <span class="math">\(x \in S, x \not\in W\)</span> but if we do, we just get the same set: <span class="math">\(\{ref_a, ref_b, ref_c\}\)</span>. So we just have one left coset.</p>
<p>It turns out that for this subgroup the right and left cosets are the same, the right being: <span class="math">\(\{rot_0\star ref_a, rot_{120}\star ref_a, rot_{240}\star ref_a \} = \{ref_a, ref_b, ref_c\}\)</span>.</p>
<p>(Reference: http://www.math.clemson.edu/~macaule/classes/m16_math4120/slides/math4120_lecture-3-02_handout.pdf)</p>
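The same brute-force style reproduces the coset computation above. This sketch again uses a permutation encoding of the six symmetries, which is my own choice rather than anything defined in the article:

```python
rot0, rot120, rot240 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
ref_a, ref_b, ref_c = (0, 2, 1), (2, 1, 0), (1, 0, 2)
S = [rot0, rot120, rot240, ref_a, ref_b, ref_c]

def star(p, q):
    # a ⋆ b: apply p first, then q
    return tuple(q[p[i]] for i in range(3))

def left_cosets(G, H):
    # xH = {x ⋆ h | h in H}, collected over every x in G
    return {frozenset(star(x, h) for h in H) for x in G}

def right_cosets(G, H):
    # Hx = {h ⋆ x | h in H}, collected over every x in G
    return {frozenset(star(h, x) for h in H) for x in G}

W = [rot0, rot120, rot240]
lc = left_cosets(S, W)
print(len(lc))                    # 2: the rotations and the reflections
print(frozenset(W) in lc)         # True: the subgroup is one of its own cosets
print(lc == right_cosets(S, W))   # True: left and right cosets agree for W
```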
<p>Interestingly, since all Cayley diagrams have symmetry themselves, in general the <em>left</em> cosets of a subgroup will appear like copies of the subgroup in the Cayley diagram. If you consider our subgroup <span class="math">\(W \leq S = \{rot_0, rot_{120}, rot_{240}\}\)</span>, it forms this outer "ring" in the Cayley diagram, and the left coset is the set of vertices that forms the inner "ring" of the diagram. So it's like they're copies of each other. Here's another example with the subgroup being <span class="math">\(\{rot_0, ref_a\}\)</span>:</p>
<p><img src="images/TDAimages/CayleyDiagramD6_3.svg" /></p>
<p>So we begin to see how the left cosets of a subgroup of a group appear to evenly partition the group into pieces of the same form as the subgroup. With the subgroup being <span class="math">\(W \leq S = \{rot_0, rot_{120}, rot_{240}\}\)</span> we could partition the group <span class="math">\(S\)</span> into two pieces that both have the form of <span class="math">\(W\)</span>, whereas if the subgroup is <span class="math">\(\{rot_0, ref_a\}\)</span> then we can partition the group <span class="math">\(S\)</span> into 3 pieces that have the same form as the subgroup.</p>
<p>This leads us directly to the idea of a <strong>quotient group</strong>. Recall the definition given earlier: </p>
<blockquote>
<p>For a group <span class="math">\(G\)</span> and a normal subgroup <span class="math">\(N\)</span> of <span class="math">\(G\)</span>, denoted <span class="math">\(N \leq G\)</span>, the quotient group of <span class="math">\(N\)</span> in <span class="math">\(G\)</span>, written <span class="math">\(G/N\)</span> and read "<span class="math">\(G\)</span> modulo <span class="math">\(N\)</span>", is the set of <em>cosets</em> of <span class="math">\(N\)</span> in <span class="math">\(G\)</span>. <br /></p>
</blockquote>
<p>A <em>normal subgroup</em> is just a subgroup in which the left and right cosets are the same. Hence, our subgroup <span class="math">\(W \leq S = \{rot_0, rot_{120}, rot_{240}\}\)</span> is a normal subgroup as we discovered. We can use it to construct the quotient group, <span class="math">\(S / W\)</span>.</p>
<p>Now that we know what cosets are, it's easy to find <span class="math">\(S\ /\ W\)</span>: it's just the set of (left or right, they're the same) cosets with respect to <span class="math">\(W\)</span>, and we already figured those out:
</p>
<div class="math">$$ S\ /\ W = \{\{rot_0, rot_{120}, rot_{240}\}, \{ref_a, ref_b, ref_c\}\}$$</div>
<p> <br />
(we include the subgroup itself in the set, since the set of cosets of a subgroup always includes the subgroup itself).</p>
<p>Okay so this is really interesting for two reasons. First, we've taken <span class="math">\(S\ /\ W\)</span> and it resulted in a set with <em>2</em> elements (the elements themselves being sets), so in a sense, we took an original set (the whole group) with 6 elements and "divided" it by a set with 3 elements, and got a set with 2 elements. Seem familiar? Yeah, it looks just like the simple arithmetic <span class="math">\(6\ /\ 3=2\)</span>. And that's no accident: for a finite group, the number of cosets of a subgroup is always the order of the group divided by the order of the subgroup (this is Lagrange's theorem). The second reason it's interesting is that the two elements in our quotient group are the two basic <em>kinds</em> of operations on our triangle, namely <em>rotation</em> operations and <em>reflection</em> operations. </p>
<p>I also just want to point out that our resulting quotient group <span class="math">\(S\ /\ W\)</span> is in fact itself a group, that is, it meets all the group axioms, and in this example is isomorphic to the integers modulo 2 (<span class="math">\(\mathbb Z_2\)</span>).</p>
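We can check that isomorphism claim concretely: multiplying cosets elementwise reproduces the addition table of the integers mod 2. A sketch (the names R and F for the two cosets, and the permutation encoding of the symmetries, are my own):

```python
rot0, rot120, rot240 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
ref_a, ref_b, ref_c = (0, 2, 1), (2, 1, 0), (1, 0, 2)

def star(p, q):
    # a ⋆ b: apply p first, then q
    return tuple(q[p[i]] for i in range(3))

R = {rot0, rot120, rot240}   # the subgroup W: plays the role of 0 in Z_2
F = {ref_a, ref_b, ref_c}    # the reflection coset: plays the role of 1

def coset_product(A, B):
    # multiply elementwise; because W is normal, the result is a single coset
    return {star(a, b) for a in A for b in B}

print(coset_product(R, R) == R)  # True, like 0 + 0 = 0
print(coset_product(R, F) == F)  # True, like 0 + 1 = 1
print(coset_product(F, F) == R)  # True, like 1 + 1 = 0 (mod 2)
```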
<p>So intuitively, whenever you want some quotient group <span class="math">\(A\ /\ B\)</span> where <span class="math">\(B \leq A\)</span> (<span class="math">\(B\)</span> is a subgroup of <span class="math">\(A\)</span>), just ask yourself, "how can I partition <span class="math">\(A\)</span> into <span class="math">\(B\)</span>-like pieces?" Note that the distinct cosets of a subgroup never overlap: any two left cosets are either identical or disjoint, which is exactly why they partition the group. Consider the cyclic group <span class="math">\(\mathbb Z_4\)</span> with the single generator <span class="math">\(1\)</span>:
<img src="images/TDAimages/cyclicGroupZ4.svg" width="200px" />
We could try to partition this group into pieces of 2, and there are two natural ways to pair up the elements. First, take the subgroup <span class="math">\(N \leq \mathbb Z_4 = \{0,2\}\)</span>, which partitions the group into two pieces (there are 2 left cosets, hence our quotient group is of size 2). We've depicted this below, where each "piece" is the pair of elements "across from each other" in the Cayley diagram.</p>
<p><img src="images/TDAimages/cyclicGroupZ4_partDim2.svg" width="200px" />
</p>
<div class="math">$$ N = \{0,2\} \\
N \leq \mathbb Z_4 \\
\mathbb Z_4\ /\ N = \{\{0,2\},\{1,3\}\}$$</div>
<p>
But what if we instead chose <span class="math">\(N = \{0,1\}\)</span>, where each pair of elements is right next to each other? Translating this set around the diagram gives 4 pieces, but notice that the pieces overlap, and that overlap is the giveaway: <span class="math">\(\{0,1\}\)</span> is not closed under the operation (<span class="math">\(1 + 1 = 2 \not\in \{0,1\}\)</span>), so it is not a subgroup at all, its translates are not cosets, and there is no quotient group <span class="math">\(\mathbb Z_4\ /\ N\)</span>. This is exactly why the definition of a quotient group demands a subgroup.
<img src="images/TDAimages/cyclicGroupZ4_partDim4.svg" width="200px" />
</p>
<div class="math">$$ N = \{0,1\} \\
N \leq \mathbb Z_4 \\
\mathbb Z_4\ /\ N = \{\{0,1\},\{1,2\},\{2,3\},\{3,0\}\}$$</div>
<p>
<br /></p>
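A few lines of Python make the contrast between the two choices concrete (a sketch; <code>is_subgroup</code> and <code>translates</code> are my own helper names). Note that <span class="math">\(\{0,2\}\)</span> passes the subgroup test and its translates partition the group cleanly, while <span class="math">\(\{0,1\}\)</span> is not closed (<span class="math">\(1+1=2\)</span> escapes it) and its translates overlap:

```python
Z4 = [0, 1, 2, 3]

def add(a, b):
    return (a + b) % 4

def is_subgroup(H):
    # 0 is the identity; in a finite closed subset containing the
    # identity, inverses follow automatically
    return 0 in H and all(add(a, b) in H for a in H for b in H)

def translates(H):
    # the sets x + H for every x; for a genuine subgroup these are the cosets
    return {frozenset(add(x, h) for h in H) for x in Z4}

print(is_subgroup({0, 2}))                             # True
print(sorted(sorted(t) for t in translates({0, 2})))   # [[0, 2], [1, 3]]
print(is_subgroup({0, 1}))                             # False
print(sorted(sorted(t) for t in translates({0, 1})))   # [[0, 1], [0, 3], [1, 2], [2, 3]]
```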
<p>The last thing I want to mention is the idea of an <strong>algebraically closed group</strong> versus non-closed groups. Basically, a closed group is one in which the solution to any equation over the group is also contained in the group. For example, if we consider the cyclic group <span class="math">\(\mathbb Z_3\)</span>, which consists of <span class="math">\(\{0,1,2\}\)</span> under addition modulo 3, then the equation <span class="math">\(x \star x = 2\)</span> has the solution <span class="math">\(x = 1\)</span> (since <span class="math">\(1 + 1 = 2\)</span>), which is in our group <span class="math">\(\{0,1,2\}\)</span>. However, if we can come up with an equation whose solution is not in <span class="math">\(\mathbb Z_3\)</span>, then our group is not closed. In fact, it's quite easy: the equation <span class="math">\(x \star x \star x = 1\)</span> has no solution in <span class="math">\(\mathbb Z_3\)</span>, since <span class="math">\(x + x + x \equiv 0 \pmod 3\)</span> for every element.</p>
<p>(Ref: https://en.wikipedia.org/wiki/Algebraically_closed_group)</p>
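A brute-force search over the group makes this concrete. This sketch treats the three-element group <span class="math">\(\{0,1,2\}\)</span> as addition modulo 3, and the two equations are my own illustrative choices:

```python
Z3 = [0, 1, 2]

def star(a, b):
    # the group operation: addition mod 3
    return (a + b) % 3

# x ⋆ x = 2 has a solution inside the group
print([x for x in Z3 if star(x, x) == 2])            # [1]

# x ⋆ x ⋆ x = 1 has no solution: x + x + x is always 0 mod 3
print([x for x in Z3 if star(star(x, x), x) == 1])   # []
```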
<h6>Next time...</h6>
<p>So we've now covered most of the basic mathematics we'll need to start using simplicial complexes to calculate topological features, and that is what we'll begin to do next time.</p>
<h4>References (Websites):</h4>
<ol>
<li>http://dyinglovegrape.com/math/topology_data_1.php</li>
<li>http://www.math.uiuc.edu/~r-ash/Algebra/Chapter4.pdf</li>
<li>https://en.wikipedia.org/wiki/Group_(mathematics)</li>
<li>https://jeremykun.com/2013/04/03/homology-theory-a-primer/</li>
<li>http://suess.sdf-eu.org/website/lang/de/algtop/notes4.pdf</li>
<li>http://www.mit.edu/~evanchen/napkin.html</li>
</ol>
<h4>References (Academic Publications):</h4>
<ol>
<li>
<p>Basher, M. (2012). On the Folding of Finite Topological Space. International Mathematical Forum, 7(15), 745–752. Retrieved from http://www.m-hikari.com/imf/imf-2012/13-16-2012/basherIMF13-16-2012.pdf</p>
</li>
<li>
<p>Day, M. (2012). Notes on Cayley Graphs for Math 5123 Cayley graphs, 1–6.</p>
</li>
<li>
<p>Doktorova, M. (2012). CONSTRUCTING SIMPLICIAL COMPLEXES OVER by, (June).</p>
</li>
<li>
<p>Edelsbrunner, H. (2006). IV.1 Homology. Computational Topology, 81–87. Retrieved from http://www.cs.duke.edu/courses/fall06/cps296.1/</p>
</li>
<li>
<p>Erickson, J. (1908). Homology. Computational Topology, 1–11.</p>
</li>
<li>
<p>Evan Chen. (2016). An Infinitely Large Napkin.</p>
</li>
<li>
<p>Grigor’yan, A., Muranov, Y. V., & Yau, S. T. (2014). Graphs associated with simplicial complexes. Homology, Homotopy and Applications, 16(1), 295–311. http://doi.org/10.4310/HHA.2014.v16.n1.a16</p>
</li>
<li>
<p>Kaczynski, T., Mischaikow, K., & Mrozek, M. (2003). Computing homology. Homology, Homotopy and Applications, 5(2), 233–256. http://doi.org/10.4310/HHA.2003.v5.n2.a8</p>
</li>
<li>
<p>Kerber, M. (2016). Persistent Homology – State of the art and challenges 1 Motivation for multi-scale topology. Internat. Math. Nachrichten Nr, 231(231), 15–33.</p>
</li>
<li>
<p>Khoury, M. (n.d.). Lecture 6 : Introduction to Simplicial Homology Topics in Computational Topology : An Algorithmic View, 1–6.</p>
</li>
<li>
<p>Kraft, R. (2016). Illustrations of Data Analysis Using the Mapper Algorithm and Persistent Homology.</p>
</li>
<li>
<p>Lakshmivarahan, S., & Sivakumar, L. (2016). Cayley Graphs, (1), 1–9.</p>
</li>
<li>
<p>Liu, X., Xie, Z., & Yi, D. (2012). A fast algorithm for constructing topological structure in large data. Homology, Homotopy and Applications, 14(1), 221–238. http://doi.org/10.4310/HHA.2012.v14.n1.a11</p>
</li>
<li>
<p>Naik, V. (2006). Group theory : a first journey, 1–21.</p>
</li>
<li>
<p>Otter, N., Porter, M. A., Tillmann, U., Grindrod, P., & Harrington, H. A. (2015). A roadmap for the computation of persistent homology. Preprint ArXiv, (June), 17. Retrieved from http://arxiv.org/abs/1506.08903</p>
</li>
<li>
<p>Semester, A. (2017). § 4 . Simplicial Complexes and Simplicial Homology, 1–13.</p>
</li>
<li>
<p>Singh, G. (2007). Algorithms for Topological Analysis of Data, (November).</p>
</li>
<li>
<p>Zomorodian, A. (2009). Computational Topology Notes. Advances in Discrete and Computational Geometry, 2, 109–143. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.7483</p>
</li>
<li>
<p>Zomorodian, A. (2010). Fast construction of the Vietoris-Rips complex. Computers and Graphics (Pergamon), 34(3), 263–271. http://doi.org/10.1016/j.cag.2010.03.007</p>
</li>
<li>
<p>Symmetry and Group Theory 1. (2016), 1–18. http://doi.org/10.1016/B978-0-444-53786-7.00026-5</p>
</li>
</ol>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>Topological Data Analysis - Persistent Homology2017-02-22T00:10:00-06:002017-02-22T00:10:00-06:00Brandon Browntag:outlace.com,2017-02-22:/TDApart1.html<p>(Part 1) In this multi-part series we will go through the fundamental mathematics and algorithms of a powerful new tool for analysis of complex data.</p><h2>Topological Data Analysis - Part 1 - Persistent Homology</h2>
<p>This is Part 1 in a series on topological data analysis.
See <a href="TDApart2.html">Part 2</a> | <a href="TDApart3.html">Part 3</a> | <a href="TDApart4.html">Part 4</a> | <a href="TDApart5.html">Part 5</a></p>
<h4>Introduction</h4>
<p>I find Topological Data Analysis (TDA) to be one of the most exciting (yet under-rated) developments in data analysis, and thus I want to do my part to spread the knowledge. So what's it all about? Well, there are two major flavors of TDA: persistent homology and mapper. Both are useful and can be used to supplement each other. In this post (and the next couple of posts) we will cover persistent homology. TDA in general is very mathematical (it was born out of the "lab" of a mathematics group at Stanford, particularly Gunnar Carlsson and his graduate student Gurjeet Singh, although the foundations had been developed for years before by others) and thus we cannot really study it without learning a lot of math. Hence, this post is going to be just as much a tutorial on various topics in higher math as it is on TDA, so if you're not that interested in TDA but want to learn about topology, group theory, linear algebra, graph theory and abstract algebra, then this might be useful just in that regard. Of course, I will not cover these math topics in as much detail or with as much rigor as a textbook would, but my hope is that if you understand what I present here, reading a textbook (or math papers) will make a whole lot more sense.</p>
<h5>What is persistent homology and why should I care?</h5>
<p>Think of a typical data set as a big Excel file with columns being various parameters and the rows being individual data points. Say there are 100 columns and 900 rows. If we think of the rows as being data points, then we can think of them as being 100-dimensional data points. Obviously, helplessly constrained to our 3-dimensional universe, we have no way of seeing what our data <em>looks</em> like. Well, of course there are numerous methods for projecting high-dimensional data down to a lower-dimensional space that we <em>can</em> see. Usually we want to see our data so we can easily identify patterns, particularly clusters. The most well-known of these visualization methods is probably principal component analysis (PCA). But all of these methods involve transforming our original data in a way that loses some potentially valuable information. There's no free lunch here: if you use PCA to project 100-dimensional data to a 2-dimensional plot, you're going to be missing something.</p>
<p>Persistent homology (henceforth just <strong>PH</strong>) gives us a way to find interesting patterns in data without having to "downgrade" the data in any way just so we can see it. PH lets us leave our data in its original, ultra-high-dimensional space and tells us how many clusters there are, and how many loop-like structures there are in the data, all without our being able to actually see it.</p>
<p>As an example, consider a biologist studying some genes in cells. She uses fancy equipment to measure the expression levels of 100 genes in 900 different cells. She's interested in genes that might play a role in cell division. Being the cutting-edge biologist she is, she utilizes persistent homology to analyze her data and PH reports her data has a prominent <em>cycle</em>, which she further analyzes and is able to confirm that a subset of her 100 genes seem to have a cyclical expression pattern.</p>
<p>The field of topology in mathematics studies properties of spaces where all we care about is the relationship of points to one another, unlike geometry, where exact distances and angles are important. Thus PH lets us ask topological questions of our data in a reliable way without having to adulterate the data in any way. The conventional output from persistent homology is a "barcode" graph that looks like this:
<figure>
<img src="images/TDAimages/PHbarcodeExample.png" width="600px" alt="Example persistent homology barcode">
<figcaption>Reference: Topaz, C. M., Ziegelmeier, L., & Halverson, T. (2015). Topological data analysis of biological aggregation models. PloS one, 10(5), e0126383.</figcaption>
</figure>
<!-- <img src="images/TDAimages/PHbarcodeExample.png" width="600px" /> -->
This graph encodes all the topological features we're interested in, in a compact and visual way.</p>
<h4>Intended audience</h4>
<p>As usual, my intended audience is people like me. I'm a programmer with an interest in TDA. I majored in Neuroscience in college so I have no formal mathematics training beyond high school. Everything else has been self-taught. If you have a degree in mathematics, this is not the post for you, but you can take a look at my extensive reference list.</p>
<h4>Assumptions</h4>
<p>While I always attempt to make my posts as accessible as possible to a general audience with a programming background and some basic math knowledge, I do make a few knowledge assumptions here. I assume you have a foundational understanding of the following:
- High-school algebra
- Set theory
- Python and Numpy</p>
<p>but I will try to explain as much as possible along the way. If you've followed my previous posts then you can probably follow this one.</p>
<h3>Set Theory Review</h3>
<p>We will just very quickly review the basics of set theory. I am assuming you already have the necessary background; this is just a refresher and a guide for the notation we'll be using.</p>
<p>Recall a <strong>set</strong> is an abstract mathematical structure that is an unordered collection of abstract objects, typically denoted by curly braces, e.g. the set <span class="math">\(S = \{a,b,c\}\)</span>. The objects contained in a set are called its <em>elements</em>. If an element <span class="math">\(a\)</span> is contained in a set <span class="math">\(S\)</span>, then we denote this relationship as <span class="math">\(a \in S\)</span> where the symbol <span class="math">\(\in\)</span> is read "in" (<span class="math">\(a\)</span> in <span class="math">\(S\)</span>). Or if an element <span class="math">\(d\)</span> is <em>not</em> in a set <span class="math">\(S\)</span> then we denote it as <span class="math">\(d \not\in S\)</span>. Intuitively, one can think of a set as a box or container and you can add various objects into the box (including other boxes).</p>
<p>A <strong>subset</strong> <span class="math">\(Z\)</span> of a set <span class="math">\(S\)</span> is a set with all elements also in <span class="math">\(S\)</span>. A strict subset is denoted <span class="math">\(Z \subset S\)</span>, which means there is at least one element in <span class="math">\(S\)</span> that is not in <span class="math">\(Z\)</span>, whereas <span class="math">\(Z \subseteq S\)</span> means <span class="math">\(Z\)</span> could be a strict subset of <span class="math">\(S\)</span> or it could be identical to <span class="math">\(S\)</span>. For every set, the empty set (denoted <span class="math">\(\emptyset\)</span>), and the set itself are (non-strict) subsets, i.e. for a set <span class="math">\(S\)</span>, <span class="math">\(\emptyset \subseteq S\)</span> and <span class="math">\(S \subseteq S\)</span>.</p>
<p>The symbol <span class="math">\(\forall\)</span> means "for all" and the symbol <span class="math">\(\exists\)</span> means "there exists". For example, we can say something like <span class="math">\(\forall x \in S\)</span> which means "for all elements <span class="math">\(x\)</span> in <span class="math">\(S\)</span>". Or we can say, <span class="math">\(\exists x \in S, x = a\)</span>, which means "there exists an element x in the set <span class="math">\(S\)</span> for which <span class="math">\(x = a\)</span>".</p>
<p>There are logical operators we use called <strong>AND</strong> (denoted <span class="math">\(\land\)</span>) and <strong>OR</strong> (denoted <span class="math">\(\lor\)</span>). For example, suppose we have two sets <span class="math">\(S_1 = \{a,b,c\}, S_2 = \{d,e\}\)</span>, we can propose <span class="math">\(a \in S_1 \land a \in S_2\)</span>, which we can evaluate to be a false proposition since <span class="math">\(a\)</span> is <em>not</em> in <span class="math">\(S_2\)</span>. The proposition <span class="math">\(a \in S_1 \lor a \in S_2\)</span> is evaluated to be true since the element <span class="math">\(a\)</span> is one or both of the two sets in the proposition.</p>
<p>The <strong>union</strong> (denoted <span class="math">\(\cup\)</span>) of two sets <span class="math">\(S_1, S_2\)</span> is a new set <span class="math">\(S_3\)</span> that contains all the elements from <span class="math">\(S_1\)</span> and <span class="math">\(S_2\)</span>. For example, if <span class="math">\(S_1 = \{a,b,c\}, S_2 = \{d,e\}\)</span> then <span class="math">\(S_1 \cup S_2 = \{a,b,c,d,e\}\)</span>.</p>
<p>We can use <em>set-builder notation</em> to describe this as <span class="math">\(S_1 \cup S_2 = \{x \mid x \in S_1 \lor x \in S_2\}\)</span>. The part before the vertical pipe | describes the elements that compose the set, whereas the part after the pipe describes the conditions those elements must meet to be included in the set. For example, if we want to build the set of points that form a two-dimensional circle: <span class="math">\(C = \{(a,b) \mid a^2 + b^2 = 1\}\)</span>. This brings up <em>ordered sets</em> or <em>sequences</em>, where the order of the elements does matter; we denote these by using parentheses, e.g. <span class="math">\((a,b,c) \neq (c,b,a)\)</span> whereas <span class="math">\(\{a,b,c\} = \{c,b,a\}\)</span>.</p>
<p>The <strong>intersection</strong> (denoted <span class="math">\(\cap\)</span>) of two sets <span class="math">\(S_1, S_2\)</span> is a new set <span class="math">\(S_3\)</span> that contains the elements that are shared between <span class="math">\(S_1\)</span> and <span class="math">\(S_2\)</span>, that is, <span class="math">\(S_1 \cap S_2 = \{x \mid x\in S_1 \land x\in S_2\}\)</span>. For example, if <span class="math">\(S_1 = \{a,b,c\}, S_2 = \{a,b,d,e\}\)</span> then <span class="math">\(S_1 \cap S_2 = \{a,b\}\)</span>.</p>
<p>The size or <strong>cardinality</strong> of a set is the number of elements in that set. For example, if <span class="math">\(S = \{a,b,c\}\)</span> then the cardinality of <span class="math">\(S\)</span>, denoted <span class="math">\(\vert{S}\vert = 3\)</span>.</p>
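Since this series assumes Python, it's worth noting that Python's built-in <code>set</code> type mirrors this notation almost symbol for symbol; a quick sketch:

```python
S1 = {'a', 'b', 'c'}
S2 = {'a', 'b', 'd', 'e'}

print(sorted(S1 | S2))   # union: ['a', 'b', 'c', 'd', 'e']
print(sorted(S1 & S2))   # intersection: ['a', 'b']
print('a' in S1)         # membership: True
print(S1 <= (S1 | S2))   # subset test: True
print(len(S1))           # cardinality: 3

# set-builder notation maps to comprehensions; here, the lattice points
# that happen to lie on the unit circle
C = {(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1) if a*a + b*b == 1}
print(sorted(C))         # [(-1, 0), (0, -1), (0, 1), (1, 0)]
```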
<p>A <strong>function</strong> is a relation between the elements in one set to another set. We can visualize a function as so:
<img src="images/TDAimages/setFuncExWiki.png" width="150px" />
(source: https://en.wikipedia.org/wiki/Function_(mathematics))</p>
<p>Here we have two sets <span class="math">\(X = \{1,2,3\}\)</span> and <span class="math">\(Y = \{A,B,C,D\}\)</span> and a function <span class="math">\(f\)</span> that maps each element in <span class="math">\(X\)</span> (called the domain) to an element in <span class="math">\(Y\)</span> (the codomain). We denote <span class="math">\(f(1) = D\)</span> to mean that the function <span class="math">\(f\)</span> is mapping the element <span class="math">\(1 \in X\)</span> to <span class="math">\(D \in Y\)</span>.</p>
<p>A generic mapping or relation can be any mapping of elements in one set to another set, however, a <em>function</em> must only have one output for each input, i.e. each element in the domain can only be mapped to a single element in the codomain.</p>
<p>We define a function by building a new set of ordered pairs. For two sets <span class="math">\(X\)</span> and <span class="math">\(Y\)</span>, we denote a function <span class="math">\(f : X \rightarrow Y\)</span> to be a subset of the Cartesian product <span class="math">\(X \times Y\)</span> (i.e., <span class="math">\(f \subseteq X \times Y\)</span>). A <strong>Cartesian product</strong> is the set of all possible ordered pairs between elements in the two sets. </p>
<p>For example, the set that defines the function <span class="math">\(f\)</span> from the picture above is <span class="math">\(f = \{(1,D), (2,C), (3,C)\}\)</span>. So if we want to know the result of <span class="math">\(f(1)\)</span>, we just find the ordered pair where <span class="math">\(1\)</span> is in the first position, and the second-position element is the result (in this case it's <span class="math">\(D\)</span>).</p>
<p>The <strong>image</strong> of a function <span class="math">\(f : X \rightarrow Y\)</span> is the subset of <span class="math">\(Y\)</span> consisting of the elements that are mapped <em>to</em> by elements of <span class="math">\(X\)</span>. For example, for the function depicted above, the image is <span class="math">\(\{C,D\}\)</span>, since only those elements have something in <span class="math">\(X\)</span> mapping to them.</p>
<p>Given a function <span class="math">\(f : X \rightarrow Y\)</span>, the <strong>preimage</strong> of a subset <span class="math">\(K \subseteq Y\)</span> is the set of elements in <span class="math">\(X\)</span> that are mapped to elements in <span class="math">\(K\)</span>. For example, the preimage of the subset <span class="math">\(K = \{C\}\)</span> from the depicted function above is the set <span class="math">\(\{2,3\}\)</span>.</p>
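A finite function like the one in the figure can be modeled as a Python dict of its ordered pairs, which makes the image and preimage one-liners (the helper names are mine):

```python
# f maps 1 -> D, 2 -> C, 3 -> C, exactly as in the figure
f = {1: 'D', 2: 'C', 3: 'C'}

def image(f):
    # every codomain element that something in the domain maps to
    return set(f.values())

def preimage(f, K):
    # every domain element whose output lands in K
    return {x for x, y in f.items() if y in K}

print(f[1])                        # 'D'
print(sorted(image(f)))            # ['C', 'D']
print(sorted(preimage(f, {'C'})))  # [2, 3]
```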
<h3>Topology Primer</h3>
<p>As you might have guessed, TDA involves the mathematical field of topology. I'm far from a mathematician, but we have to have a basic context so I'll do my best to explain the relevant aspects of topology in the least jargon-y and most computational (that's how I tend to think) way possible.</p>
<p>So mathematics in general is broken up into many fields of study, such as geometry, topology, linear algebra, etc. Each field is essentially defined by the mathematical objects under study. In linear algebra, the mathematical objects of interest are <em>vector spaces</em>. In topology, the mathematical objects are <em>topological spaces</em>. And since set theory is taken as a foundation of mathematics, all these mathematical objects are simply sets (collections of abstract things) with specific rules about what form the sets must be in and how they can be transformed or operated on.</p>
<p>Let's define what a topological space is now. This is one of several equivalent definitions of a topology (taken from wikipedia):</p>
<blockquote>
<p><strong>Definition (Topological Space)</strong> <br />
A topological space is an ordered pair <span class="math">\((X, \tau)\)</span>, where <span class="math">\(X\)</span> is a set and <span class="math">\(\tau\)</span> is a collection of subsets (subset symbol: <span class="math">\(\subseteq\)</span> ) of <span class="math">\(X\)</span>, satisfying the following axioms:
- The empty set (symbol: <span class="math">\(\emptyset\)</span>) and <span class="math">\(X\)</span> itself belong to <span class="math">\(\tau\)</span>.
- Any (finite or infinite) union (symbol: <span class="math">\(\cup\)</span> ) of members of <span class="math">\(\tau\)</span> still belongs to <span class="math">\(\tau\)</span>.
- The intersection (symbol: <span class="math">\(\cap\)</span> ) of any finite number of members of <span class="math">\(\tau\)</span> still belongs to <span class="math">\(\tau\)</span>.</p>
<p>The elements of <span class="math">\(\tau\)</span> are called <strong>open sets</strong> and the collection <span class="math">\(\tau\)</span> is called a topology on <span class="math">\(X\)</span>.</p>
</blockquote>
<p>Okay so what the hell does that even mean and who cares? Let's give a really simple example.
So let's just make up an abstract set of objects that happen to be some English alphabet letters. Here's our set, <span class="math">\(X = \{a,b,c\}\)</span>. So we have a collection of 3 distinct objects and we want to define a topology on that set. Our topology τ (tau) is simply going to be a set of sets: a collection of subsets of X that satisfies the axioms of topology.</p>
<p>Let's give it a try, maybe our topology τ should be this: <span class="math">\(\{\{a\},\{b\},\{c\}\}\)</span>. So our topology τ is a collection of single element subsets from X. Notice the difference in notation. If I had simply written <span class="math">\(\tau = \{a,b,c\}\)</span> that would merely be the same as <span class="math">\(X\)</span>, just an ordinary set with 3 elements. No, <span class="math">\(\tau\)</span> is a set whose elements are also sets (even if those sets contain one element). </p>
<p>Ok, anyway, does our <span class="math">\(\tau\)</span> satisfy the axioms? Do the empty set and <span class="math">\(X\)</span> itself belong to <span class="math">\(\tau\)</span>? Uh, no. The empty set is <span class="math">\(\{\}\)</span> (or <span class="math">\(\emptyset\)</span>) and <span class="math">\(X\)</span> itself is <span class="math">\(\{a,b,c\}\)</span>; our <span class="math">\(\tau\)</span> does not have those two sets as members, so already our attempted topology fails. Let's try again. How about <span class="math">\(\tau = \{\emptyset,\{a\},\{b\},\{c\}, \{a,b,c\}\}\)</span>? Now at least this τ satisfies the first axiom. The second axiom is less obvious, but take for example the union of <span class="math">\(\{a\}\)</span> and <span class="math">\(\{b\}\)</span>, which yields <span class="math">\(\{a,b\}\)</span>. Is <span class="math">\(\{a,b\}\)</span> in <span class="math">\(\tau\)</span>? No it's not, so this attempted topology also fails.</p>
<p>Alright here's a legitimate topology on <span class="math">\(X\)</span>... <span class="math">\(\tau = \{\emptyset, \{a\}, \{a,b,c\}\}\)</span>. It satisfies the first axiom, it has the empty set <span class="math">\(\{\}\)</span> and the full <span class="math">\(X\)</span> set as members of <span class="math">\(\tau\)</span>, and if you take the union of any combination of members of <span class="math">\(\tau\)</span>, the resulting set is also a member of <span class="math">\(\tau\)</span>.</p>
<p>For example,
<span class="math">\( \{ \} \cup \{ a \} = \{ a \} $ (read: the empty set union the set $\{a\}\)</span> produces the set <span class="math">\(\{a\}\)</span>). Obviously the union of the empty set and <span class="math">\(\{a\}\)</span> must produce <span class="math">\(\{a\}\)</span> which is in <span class="math">\(\tau\)</span>. We must verify for all possible unions and intersections that the results are still in <span class="math">\(\tau\)</span>.</p>
<p><span class="math">\( \{a\} \cup \{a,b,c\} = \{a,b,c\} $, which is also in $\tau\)</span>.
<span class="math">\( \{a\} \cap \{a,b,c\} = \{a\}\)</span>, which is also in <span class="math">\(\tau\)</span>.</p>
<p>Hence, the union or intersection of any elements in <span class="math">\(\tau\)</span> is also in <span class="math">\(\tau\)</span>, thus we have ourselves a valid topology on <span class="math">\(X\)</span>.</p>
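<p>These membership checks are mechanical enough to automate. Here's a small Python sketch (the helper name <code>is_topology</code> is my own, not from any library) that verifies the axioms for a candidate topology on a finite set. Note that for a finite collection, closure under pairwise unions and intersections already implies closure under arbitrary unions and finite intersections.</p>

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the axioms of a topology on a finite set X.
    tau is a collection of subsets of X (the candidate open sets)."""
    X = frozenset(X)
    tau = set(map(frozenset, tau))
    # Axiom 1: the empty set and X itself are members of tau.
    if frozenset() not in tau or X not in tau:
        return False
    # Axioms 2 and 3: closure under (pairwise) union and intersection.
    return all(U | V in tau and U & V in tau
               for U, V in combinations(tau, 2))

X = {"a", "b", "c"}
print(is_topology(X, [set(), {"a"}, {"b"}, {"c"}, X]))  # False: {a} ∪ {b} = {a,b} is missing
print(is_topology(X, [set(), {"a"}, X]))                # True: the valid topology
```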
<p>I know this all seems rather academic at this point, but keep with me and we will eventually get to a point of using this new knowledge for practical purposes.</p>
<h5>Closeness</h5>
<p>What is important about topological spaces as opposed to any other mathematical abstraction? Well, one important aspect is that topological spaces end up defining a notion of <em>closeness</em> between elements in a set that has a defined topology. In a "raw" set that has no structure defined, e.g. <span class="math">\(Y = \{c,d,e\}\)</span>, it's just a collection of abstract objects. That's it, there's nothing more we can say about this set or its elements. But once we define a topology on <span class="math">\(Y\)</span>, then we can answer questions like "is the element <span class="math">\(c \in Y\)</span> closer to <span class="math">\(d\)</span> than <span class="math">\(e\)</span> is to <span class="math">\(d\)</span>?"</p>
<p>One of the fascinating things about mathematics is how so many things relate to each other and how there are almost always several ways of defining mathematical relationships, some of which are easier to grasp than others. So far we've been considering <strong>finite</strong> topological spaces, that is, the set <span class="math">\(X\)</span> for which we've defined a topology has a finite number of elements. Of course topological spaces do not have to be finite, and we'll eventually spend most of our time considering infinite topological spaces such as those defined on the set of real numbers. When we start to consider those types of spaces, visualizing them tends to be easier as we can often draw diagrams. As you've probably noticed, all this abstract nonsense about open sets defining a topological space seems really hard to grasp intuitively. As it turns out, however, there is another way to represent <em>finite</em> topological spaces, and that is by using <strong>directed graphs</strong>.</p>
<h5>A little bit of graph theory</h5>
<blockquote>
<p><strong>Definition (Graph)</strong> <br />
A simple graph <span class="math">\(G\)</span> is a set of vertices (or nodes) <span class="math">\(V\)</span> paired with a set of edges (i.e. connections between vertices) <span class="math">\(E\)</span>, whose elements are 2-element subsets of <span class="math">\(V\)</span>. Hence, <span class="math">\(G = (V,E)\)</span>.</p>
</blockquote>
<p>Example: G(V,E) where <span class="math">\(V = \{a,b,c,d\}, E = \{\{a,b\},\{a,d\},\{a,c\},\{c,d\}\}\)</span>
<img src="images/TDAimages/Graph1.svg" width="250px" /></p>
<blockquote>
<p><strong>Definition (Directed Graph)</strong> <br />
A <strong>directed</strong> graph (or digraph) <span class="math">\(G(V,E)\)</span> is a graph whose edges are ordered pairs of vertices from <span class="math">\(V\)</span>. Thus the "connections" between vertices/nodes have direction. The first vertex in an ordered pair is the start and the second vertex is the end. When drawing the graph, the edges are arrows with the arrowhead facing toward and contacting the end vertex.</p>
</blockquote>
<p>Example: G(V,E) where <span class="math">\(V = \{a,b,c,d\}, E = \{(b,a), (a,d), (a,c), (c,d)\}\)</span>
<img src="images/TDAimages/Graph2.svg" width="250px" /></p>
<p>Just for completeness' sake, I want to mention a couple other properties we can impose on graph structures. We've already noted how graphs can have a direction property on the edges, but edges can also have a <strong>weight</strong> property, i.e. different edges can have different weights or strengths, implying that some connections are stronger than others. When drawing graphs with weighted edges, one way to depict this is to draw edges with bigger weights as proportionally thicker lines. Mathematically, a graph with vertices, edges, and weights is defined as a graph <span class="math">\(G(V,E,w)\)</span> where <span class="math">\(w : E \rightarrow \mathbb R\)</span> (<span class="math">\(w\)</span> is a function that maps each edge in E to a real number, its weight). Similarly, one can have a function that endows each vertex with a weight. One might depict this with nodes (vertices) of different sizes reflecting their respective weights. </p>
<h4>Visualizing Finite Topology</h4>
<p>It turns out that one can build a set of binary relations called <strong>preorders</strong> between elements in a set <span class="math">\(X\)</span> with topology <span class="math">\(\tau\)</span>. The binary relation preorder is both reflexive (every element is related to itself, <span class="math">\(a \sim a\)</span>) and transitive (if <span class="math">\(a\)</span> is related to <span class="math">\(b\)</span>, and <span class="math">\(b\)</span> is related to <span class="math">\(c\)</span>, it implies <span class="math">\(a\)</span> is related to <span class="math">\(c\)</span>, i.e. <span class="math">\(a \sim b \land b \sim c \Rightarrow a \sim c\)</span>). [The symbol ~ is used generically to denote the relation of interest]. This preorder relation (more precisely called a <em>specialization preorder</em> on <span class="math">\(X\)</span>) can be determined by examining pair-wise relationships of elements in <span class="math">\(X\)</span> one at a time. The specialization preorder relation is generally denoted by the symbol <span class="math">\(\leq\)</span> (but it is not the same as the less-than-or-equal-to sign that you're used to; there are only so many convenient symbols so things tend to get re-used).</p>
<p>Here's the definition of a specialization pre-order on a topological space <span class="math">\((X, \tau)\)</span> (note there are other equivalent definitions).</p>
<blockquote>
<p><strong>Definition (Specialization Pre-order)</strong> <br />
<span class="math">\(x \leq y\)</span> if and only if <span class="math">\(y\)</span> is contained in all the open sets containing <span class="math">\(x\)</span>.</p>
</blockquote>
<p>And remember that open sets are the elements of the topology <span class="math">\(\tau\)</span>. Once we've determined that <span class="math">\(x \leq y\)</span> for a pair <span class="math">\(x, y \in X\)</span>, then we can say that <span class="math">\(x\)</span> is a <em>specialization</em> of <span class="math">\(y\)</span>. Roughly, this means that <span class="math">\(y\)</span> is more general than <span class="math">\(x\)</span>, since <span class="math">\(y\)</span> appears in every open set that <span class="math">\(x\)</span> does (and possibly more).</p>
<blockquote>
<p><strong>Example (Topology)</strong> <br />
To illustrate a graphical representation of our previous finite topological space, let's expand our topology on <span class="math">\(X = \{a,b,c\}\)</span>.
Now, <span class="math">\(\tau = \{\{\}, \{a\}, \{b\}, \{a,b\}, \{b,c\}, \{a,b,c\}\}\)</span></p>
</blockquote>
<p>To define the specialization preorder on this topological space, we need to enumerate all the possible pairings of points in the topology and figure out if the preorder relation <span class="math">\(\leq\)</span> is satisfied for each pair. Let's just focus on one pair, <span class="math">\((c,b)\)</span>, so we want to ask if <span class="math">\(c \leq b\)</span> is true. According to our definition of specialization preorder, if <span class="math">\(c \leq b\)</span> does in fact hold, then <span class="math">\(b\)</span> will be contained in all of the open sets that contain <span class="math">\(c\)</span>. So let's list out all the open sets that contain <span class="math">\(c\)</span>: <span class="math">\(\{b,c\}, \{a,b,c\}\)</span>. As you can see, both of these open sets that contain <span class="math">\(c\)</span> also contain <span class="math">\(b\)</span>, therefore <span class="math">\(c \leq b\)</span> is true. An important note is that the preorder relation does not imply equality when two elements are specializations of each other, i.e. <span class="math">\(x \leq y \land y \leq x \not\Rightarrow x = y\)</span>.</p>
<p>I will list all the true and untrue preorderings on <span class="math">\(X\)</span> and then we can build the topological space into a visualization graph.</p>
<p><span class="math">\(
a \not\leq b \\
a \not\leq c \\
b \not\leq a \\
b \not\leq c \\
c \not\leq a \\
c \leq b \\
$
<br />
There is only one true preorder relation between all pairs of points in $X\)</span>. In order to make a directed graph from a preorder on a topological space <span class="math">\((X,\tau)\)</span>, you simply take the points in <span class="math">\(X\)</span> as vertices of the graph and create a directed edge between two vertices that have a preorder relation, where the arrow points from the specialization point to the more general point (i.e. if <span class="math">\(x \leq y\)</span> then our graph will have an edge starting from <span class="math">\(x\)</span> and pointing to <span class="math">\(y\)</span>). Any points without relations to other points are just disconnected nodes. Here's the visualized graph of our example preorder on <span class="math">\(X\)</span>.</p>
<p><img src="images/TDAimages/Graph3.svg" width="250px" /></p>
<blockquote>
<p><strong>Example (Specialization preorder graph)</strong> <br />
Here's another example on a different topological space. Let <span class="math">\(Z = \{a, b, c, d\}\)</span> be a set with the topology
<span class="math">\(\tau_Z = \{Z, \emptyset, \{b\}, \{a, b\}, \{b, c, d\}\}\)</span>. Listing the specialization preorder on <span class="math">\(Z\)</span> is left as an exercise for the reader. The graph of this topological space resulting from its specialization preorder is shown.
<img src="images/TDAimages/Graph4.svg" width="250px" /></p>
</blockquote>
<p>Just like you can take any finite topological space, generate a specialization preorder on it and build a graph, you can also take a graph built by a preordering and generate its topology. In fact, by just looking at the graph you can determine a lot of the topological properties of the space. With this view, you can interpret a finite topology as a set of points with paths between them.</p>
<h5>Connectedness</h5>
<p>I'll digress here to define another property of topological spaces called connectedness. If you draw two separated circles on a sheet of paper, those two shapes represent a topological space that is <em>disconnected</em>, since there is no line or path connecting the circles. In this case we would say there are two <em>components</em> in the space. The intuition captures the sense of how many "whole pieces" are in the space. The definition of connectedness in topology abstracts and generalizes this intuitive notion of "pieces" in a space.</p>
<blockquote>
<p><strong>Definition (Connectedness)</strong> <br />
A topological space <span class="math">\((X,\tau)\)</span> is said to be <em>connected</em> if <span class="math">\(X\)</span> is not the union of two disjoint nonempty open sets. Consequently, a topological space is <em>disconnected</em> if the union of any two disjoint nonempty subsets in <span class="math">\(\tau\)</span> produces <span class="math">\(X\)</span>. </p>
</blockquote>
<p>Looking back at the previous example, with <span class="math">\(X = \{a,b,c\}, \tau = \{\{\}, \{a\}, \{b\}, \{a,b\}, \{b,c\}, \{a,b,c\}\}\)</span>, we can determine that this topological space is disconnected because the union of the disjoint (they don't share any common elements) open sets <span class="math">\(\{a\}\)</span> and <span class="math">\(\{b,c\}\)</span> is <span class="math">\(X\)</span>. Alternatively, if we look at the graph that we generated from our preordering on <span class="math">\(X\)</span>, we can visually see that <span class="math">\(c\)</span> and <span class="math">\(b\)</span> are connected by an edge but <span class="math">\(a\)</span> is a disconnected point. The graph for the example with set <span class="math">\(Z\)</span>, however, demonstrates that this topological space is connected: all the vertices are connected in some way.</p>
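<p>For a finite space, this check is a simple search for two disjoint nonempty open sets covering all of X. Here's a Python sketch (the helper name <code>is_connected</code> is mine) applied to both of the example topologies:</p>

```python
from itertools import combinations

def is_connected(X, tau):
    """A finite space (X, tau) is disconnected iff X is the union of
    two disjoint nonempty open sets; otherwise it is connected."""
    X = frozenset(X)
    opens = [frozenset(U) for U in tau if U]  # nonempty open sets only
    for U, V in combinations(opens, 2):
        if not (U & V) and (U | V) == X:
            return False
    return True

tau_X = [set(), {"a"}, {"b"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
tau_Z = [{"a", "b", "c", "d"}, set(), {"b"}, {"a", "b"}, {"b", "c", "d"}]
print(is_connected({"a", "b", "c"}, tau_X))       # False: {a} ∪ {b,c} = X
print(is_connected({"a", "b", "c", "d"}, tau_Z))  # True: no disjoint open cover by two sets
```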
<p>With these types of general "pure" topological spaces, we can't say any more than "closeness"; we don't have a notion of distance. We know <span class="math">\(b\)</span> is close to <span class="math">\(c\)</span> but we can't say <em>how</em> close. All we know is closeness in terms of relations between elements, e.g. this element is closer to that element than some other element is, and so on.</p>
<h4>Metric Spaces</h4>
<p>As you've probably noticed, the general "pure" topological spaces we've been studying are fairly abstract. We're going to move on to studying <em>metric spaces</em>, which are a type of topological space with a definite notion of distance, not merely an abstract notion of "closeness." That is, all metric spaces are topological spaces, but not all topological spaces are metric spaces. Being in the land of metric spaces makes things a lot easier, and fortunately topological data analysis really deals with metric spaces, not "pure" topological spaces.</p>
<blockquote>
<p><strong>Definition (Metric Space):</strong> <br />
A metric space is an ordered pair <span class="math">\((M,d)\)</span> where <span class="math">\(M\)</span> is a set and <span class="math">\(d\)</span> is a metric on <span class="math">\(M\)</span>, that is, a function</p>
<p><span class="math">\(d: M \times M \rightarrow \mathbb{R}\)</span></p>
<p>(this defines a function <span class="math">\(d\)</span> mapping every ordered pair of elements in <span class="math">\(M\)</span> to an element in the set of real numbers <span class="math">\(\mathbb R\)</span>) <br /><br />
such that for any elements <span class="math">\(x,y,z\)</span> in <span class="math">\(M\)</span>, the following conditions are met: <br />
1. <span class="math">\(d(x,y) \geq 0\)</span> (all distances are non-negative) <br />
2. <span class="math">\(d(x,y) = 0\)</span> if and only if (iff) <span class="math">\(x = y\)</span> <br />
3. <span class="math">\(d(x,y) = d(y,x)\)</span> (distance is symmetric) <br />
4. <span class="math">\(d(x,z) \leq d(x,y) + d(y,z)\)</span> (going from <span class="math">\(x\)</span> to <span class="math">\(z\)</span> directly can be no longer than stopping at another point <span class="math">\(y\)</span> along the way)</p>
</blockquote>
<p>This should be fairly straightforward. A metric space is simply a set paired with a distance function that accepts any two elements from that set and returns the metric distance between them. The most familiar metric space is the real number line, where the set is the set of real numbers and the metric is the absolute value of the difference between any two numbers on the line (for any <span class="math">\(x,y\)</span> in <span class="math">\(\mathbb R\)</span>, <span class="math">\(d = |x-y|\)</span>). Another familiar one is 2-dimensional Euclidean space <span class="math">\(\mathbb R^2\)</span>, where the distance function between any two points <span class="math">\((x_1,y_1)\)</span> and <span class="math">\((x_2,y_2)\)</span> is defined as <span class="math">\(d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}\)</span>.</p>
<p>The Euclidean metric basically defines a topological space in which the shortest path between any two points is a straight line. One could define a different metric where all the points lie on the surface of a sphere, so that the shortest path between two points is a curved line. One mustn't be constrained to the real numbers, however. You can have a metric space where the set is a bunch of images, or text blobs, or whatever type of data: as long as you can define a function satisfying these axioms that computes the distance between any two elements in the set, you have a valid metric space.</p>
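<p>To make the definition concrete, here's a Python sketch that spot-checks the four metric axioms for the Euclidean metric on a small sample of points (the helper names are mine; checking a finite sample is of course an illustration, not a proof):</p>

```python
import math
from itertools import product

def euclidean(p, q):
    """Euclidean metric on R^2: d = sqrt((x2-x1)^2 + (y2-y1)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def satisfies_metric_axioms(points, d):
    """Spot-check the four metric axioms on a finite sample of points."""
    for x, y, z in product(points, repeat=3):
        if d(x, y) < 0:                           # 1. non-negativity
            return False
        if (d(x, y) == 0) != (x == y):            # 2. d(x,y)=0 iff x=y
            return False
        if d(x, y) != d(y, x):                    # 3. symmetry
            return False
        if d(x, z) > d(x, y) + d(y, z) + 1e-12:   # 4. triangle inequality
            return False
    return True

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
print(satisfies_metric_axioms(pts, euclidean))  # True
```

The tiny tolerance in the triangle-inequality check guards against floating-point rounding.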
<h5>Continuity</h5>
<p>An important concept in topology is the notion of continuity. Imagine a flat sheet of gold (or some other pliable metal) in the shape of a square. You could very carefully transform this flat sheet into a circle by smashing at the edges until the hard edges become curved. From a topology standpoint, the square and the circle are equivalent topological spaces because you can apply a continuous transformation from the square to the circle. And since a topology is all about defining the closeness relationships between points, if you've continuously deformed a square into a circle, then any two points that were "close" before the deformation are still "close" after the deformation. All the closeness relations between points have been preserved even if the shape looks different from a geometric perspective (in geometry you care about the actual distance between points not just their abstract and relative closeness).</p>
<blockquote>
<p><strong>Definition (Homeomorphism)</strong> <br />
There is a <strong>homeomorphism</strong> (i.e. an equivalence of topological spaces) between two topological spaces <span class="math">\((X,\tau_X)\)</span> and <span class="math">\((Y, \tau_Y)\)</span> if there exists a function <span class="math">\(f: X \rightarrow Y\)</span>
with the following properties:
- <span class="math">\(f\)</span> maps the elements of <span class="math">\(X\)</span> to <span class="math">\(Y\)</span> in a one-to-one relationship (bijective).
- <span class="math">\(f\)</span> is continuous.
- The inverse function <span class="math">\(f^{-1}\)</span> is continuous.</p>
</blockquote>
<p>What does continuous mean? We'll state the precise mathematical definition then try to find an intuitive meaning. </p>
<blockquote>
<p><strong>Definition (Continuous Function)</strong> <br />
For two topological spaces <span class="math">\((X,\tau_X)\)</span> and <span class="math">\((Y, \tau_Y)\)</span>, a function <span class="math">\(f\)</span> is continuous if for every open set <span class="math">\(V \in \tau_Y\)</span>, the preimage (inverse image) <span class="math">\(f^{−1}(V)\)</span> is open in <span class="math">\(X\)</span>.</p>
</blockquote>
<p>Here's an equivalent definition of a continuous function that employs an understanding of specialization preorders on a topological space.<br /></p>
<blockquote>
<p><strong>Definition (Continuous Function)</strong> <br />
A function <span class="math">\(f : X \rightarrow Y\)</span> is continuous if and only if it is order preserving: <span class="math">\(x \leq y\)</span> in <span class="math">\(X\)</span> implies <span class="math">\(f(x) \leq f(y)\)</span> in <span class="math">\(Y\)</span> .</p>
</blockquote>
<p>We must remember that a function is a mapping where each element in a set <span class="math">\(X\)</span> is mapped to elements in another set <span class="math">\(Y\)</span> (it's a mapping between elements of the sets <span class="math">\(X\)</span> and <span class="math">\(Y\)</span>, not a mapping between their topologies <span class="math">\(\tau_X\)</span> and <span class="math">\(\tau_Y\)</span>). We also need to recall the definition of preimage (or inverse image) from set theory. Recall that the domain of a function <span class="math">\(f: X \rightarrow Y\)</span> is <span class="math">\(X\)</span> and its codomain is <span class="math">\(Y\)</span>. The <em>image</em> of <span class="math">\(f\)</span> is the subset of <span class="math">\(Y\)</span> consisting of all the values <span class="math">\(f(x)\)</span>; that is, the image of <span class="math">\(f\)</span> is <span class="math">\(\{f(x) \mid x \in X\}\)</span>. We can also speak of the image of a subset <span class="math">\(U \subset X\)</span> being the set <span class="math">\(\{f(x) \mid x \in U\}\)</span>. The preimage of the entire image of <span class="math">\(f\)</span> is just the domain of <span class="math">\(f\)</span>, so in practice we refer to the preimage of individual elements or subsets of <span class="math">\(Y\)</span>.</p>
<blockquote>
<p>The preimage or inverse image of a set <span class="math">\(B \subseteq Y\)</span> under <span class="math">\(f\)</span> is the subset of <span class="math">\(X\)</span> defined by
<span class="math">\(f^{-1}(B) = \{ x \in X \mid f(x) \in B\}\)</span></p>
<p><strong>Example (Continuous function)</strong> <br />
Let <span class="math">\(X = \{a,b,c\}\)</span> and its topology <span class="math">\(\tau_X = \{\emptyset, \{a\}, X\}\)</span>. <span class="math">\(Y = \{d\}\)</span> and its topology <span class="math">\(\tau_Y = \{\emptyset, Y\}\)</span>. A continuous function <span class="math">\(f: X \rightarrow Y\)</span> is depicted below.
<img src="images/TDAimages/function1.svg" /></p>
</blockquote>
<p>We can see that the preimage <span class="math">\(f^{-1}(\{d\}) = \{a,b,c\}\)</span> is an open set in <span class="math">\(X\)</span>, thus this function is continuous. Yes, it is a pretty unimpressive function, called a constant function, since it maps all of <span class="math">\(X\)</span> to a single element. The intuitive idea of a continuous function <span class="math">\(f: \mathbb R \rightarrow \mathbb R\)</span> is one whose graph can be drawn without lifting one's pencil.</p>
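<p>For finite spaces, the preimage test from the definition can be carried out directly in code. In this sketch (helper names are mine), a function is represented as a Python dict from points of X to points of Y:</p>

```python
def preimage(f, B):
    """Preimage f^{-1}(B) = {x in the domain of f : f(x) in B}."""
    return frozenset(x for x in f if f[x] in B)

def is_continuous(f, tau_X, tau_Y):
    """f (a dict) is continuous iff the preimage of every open set
    of Y is an open set of X."""
    opens_X = set(map(frozenset, tau_X))
    return all(preimage(f, frozenset(V)) in opens_X for V in tau_Y)

tau_X = [set(), {"a"}, {"a", "b", "c"}]
tau_Y = [set(), {"d"}]
f = {"a": "d", "b": "d", "c": "d"}     # the constant function from the example
print(is_continuous(f, tau_X, tau_Y))  # True: f^{-1}({d}) = {a,b,c} is open in X
```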
<h4>Simplices and Simplicial Complexes</h4>
<p>Topological data analysis employs the use of simplicial complexes, which are complexes of geometric structures called simplices (singular: simplex). TDA uses simplicial complexes because they can approximate more complicated shapes and are much more mathematically and computationally tractable than the original shapes that they approximate.</p>
<p>In words, a simplex is a generalization of a triangle to arbitrary dimensions. For example, a 2-simplex is an ordinary 3-sided triangle in two dimensions (though it can be embedded in higher-dimensional spaces), a 3-simplex is a tetrahedron (with triangles as faces) in 3 dimensions, and a 4-simplex is beyond our visualization, but it has tetrahedra as faces, and so on.</p>
<p><img src="images/TDAimages/simplices2.svg" /></p>
<p>A simplicial complex is formed when we "glue" together different simplices. For example, we can connect a 2-simplex (triangle) to another 2-simplex via a 1-simplex (line segment).</p>
<blockquote>
<p><strong>Example (Simplicial Complex)</strong>
<img src="images/TDAimages/simplicialcomplex2.svg" />
This depicts two triangles connected along one side, which are connected via a 1-simplex (line segment) to a third triangle. We call this a 2-complex because the highest-dimensional simplex in the complex is a 2-simplex (triangle). </p>
</blockquote>
<p>The <strong>faces</strong> of a simplex are its boundaries. For a 1-simplex (line segment) the faces are points (0-simplices), for a 2-simplex (triangle) the faces are line segments, and for a 3-simplex (tetrahedron) the faces are triangles (2-simplices) and so on. When depicting a simplex or complex, it is conventional to "color in" the faces of a simplex to make it clear that the simplex is a "solid object." For example, it is possible to draw a graph with three connected points that is actually a simplicial complex (1-complex) of line segments even though it looks like a triangle (but the middle is "empty"). If we color in the face, then we are indicating that it is actually a filled-in 2-simplex.</p>
<blockquote>
<p>A simplex versus a simplicial complex. Importance of "coloring in" simplices.
<img src="images/TDAimages/simplexVScomplex1.svg" /></p>
</blockquote>
<p>Okay, so we have an intuition for what simplices and simplicial complexes are, but now we need a precise mathematical definition.</p>
<blockquote>
<p><strong>Definition (Abstract Simplex)</strong> <br />
An <em>abstract</em> simplex is any finite set of vertices. For example, the simplex <span class="math">\(J = \{a,b\}\)</span> and <span class="math">\(K = \{a,b,c\}\)</span> represent a 1-simplex (line segment) and a 2-simplex (triangle), respectively.</p>
</blockquote>
<p>Notice this defines an <em>abstract</em> simplex. An abstract simplex and abstract simplicial complexes are abstract because we haven't given them any specific geometric realization. They're "graph-like" objects since we could technically draw the simplices in any number of arbitrary ways (e.g. line segments become curvy lines). A <em>geometric</em> 2-simplex, for example, could be the triangle formed by connecting the points <span class="math">\(\{(0,0),(0,1),(1,1)\}\)</span> in <span class="math">\(\mathbb R^2\)</span> (the ordinary 2-dimensional Euclidean plane) and filling in the middle. The definition for a geometric simplex would be different (and more complicated) since it would need to include all points within some boundary.</p>
<blockquote>
<p><strong>Definition (Simplicial Complex)</strong> <br />
A simplicial complex <span class="math">\(\mathcal{K}\)</span> is a set of simplices that satisfies the following conditions:</p>
<ol>
<li>Any face of a simplex in <span class="math">\(\mathcal{K}\)</span> is also in <span class="math">\(\mathcal{K}\)</span>.</li>
<li>The intersection of any two simplices <span class="math">\(\sigma_{1}, \sigma_{2} \in \mathcal{K}\)</span> is either <span class="math">\(\emptyset\)</span> or a face of both <span class="math">\(\sigma_{1}\)</span> and <span class="math">\(\sigma_{2}\)</span>.</li>
</ol>
</blockquote>
<p>As an example, here is a simplicial complex depicted graphically with vertex labels and then we'll define it mathematically.
<img src="images/TDAimages/simplicialcomplex4.svg" /></p>
<p>This simplicial complex is defined as a set: <span class="math">\(K = \{\{a\},\{b\},\{c\},\{d\},\{e\},\{f\},\{a,b\},\{a,c\},\{b,c\},\{c,d\},\{d,f\},\{d,e\},\{e,f\},\{a,b,c\},\{d,e,f\}\}\)</span>.
Notice that we first list all the 0-simplices (vertices), then all the 1-simplices (line segments), then the 2-simplices (triangles). If we had any higher-dimensional simplices, those would come next, and so on. Thus we meet the conditions of the definition above, because every face of a higher-dimensional simplex is also listed, all the way down to individual vertices. Of course, since this is a set (of sets), order does not matter; however, it is conventional to list the complex in this way for readability.</p>
<p>The second condition set in the definition for a simplicial complex means that structures such as this are <em>not</em> valid simplices or complexes:
<img src="images/TDAimages/notacomplex1.svg" />
This is invalid since the line segment is connected to the triangle along its edge and not at one of its vertices.</p>
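<p>For an <em>abstract</em> complex given as a set of vertex sets, condition 2 holds automatically (the intersection of two vertex sets is a common subset of both, hence a common face), so the interesting check is condition 1: closure under taking faces. A Python sketch (helper names are mine):</p>

```python
from itertools import combinations

def faces(simplex):
    """All nonempty subsets of a simplex, i.e. its faces
    (including the simplex itself)."""
    s = list(simplex)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def is_simplicial_complex(K):
    """Condition 1: every face of every simplex in K is also in K."""
    K = set(map(frozenset, K))
    return all(f in K for s in K for f in faces(s))

K = [{"a"}, {"b"}, {"c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(is_simplicial_complex(K))                  # True: a filled-in triangle
print(is_simplicial_complex([{"a", "b", "c"}]))  # False: its edges and vertices are missing
```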
<p>When we analyze data, our data is generally in the form of a finite metric space, i.e. we have discrete points (e.g. from a database of rows and columns of data) with a metric function defined (which places them in some metric space, like Euclidean space), and this gives us a "point cloud." A point cloud is just a bunch of points placed in our space with no obvious relationship.</p>
<p>Here's a point cloud in <span class="math">\(\mathbb R^2\)</span> that kind of looks like a circle, or we could say the points look as if they were <em>sampled</em> from a circle.
<img src="images/TDAimages/pointcloud1.svg" />
Here's a similar point cloud in <span class="math">\(\mathbb R^2\)</span> but it's smaller and more elliptical than circular.
<img src="images/TDAimages/pointcloud2.svg" />
A geometrician who builds a simplicial complex from these two point clouds would say they are quite different shapes geometrically; a topologist, however, would say they are topologically identical, since they both have a single "loop" feature and only exhibit one component ("piece"). Topologists don't care about differences in size/scale or mere stretching of edges; they care about <strong>topological invariants</strong> (properties of topological spaces that do not vary under certain types of continuous deformations) such as holes, loops, and connected components.</p>
<p>So just how <em>do</em> we construct a simplicial complex from data? And how do we calculate these topological invariants?
Well, there are actually many different types of simplicial complex constructions that have differing properties. Some are easier to describe mathematically, some are easier to compute algorithmically, others are simple but computationally inefficient. The most common simplicial complexes go by names such as the Čech complex, Vietoris-Rips complex, alpha complex, and witness complex.</p>
<p>We will focus on just one, the <strong>Vietoris-Rips (VR) complex</strong>, as it is fairly easy to describe and reasonably practical from a computational standpoint. I will briefly describe other complexes as appropriate.</p>
<h5>Constructing a Vietoris-Rips Complex</h5>
<p>Intuitively, we construct a Vietoris-Rips (VR) complex from a point cloud <span class="math">\(P \subseteq \mathbb R^d\)</span> (a subset <span class="math">\(P\)</span> of some <span class="math">\(d\)</span>-dimensional space) by initially connecting points in <span class="math">\(P\)</span> with edges that are less than some arbitrarily defined distance <span class="math">\(\epsilon\)</span> from each other. This will construct a 1-complex, which is essentially just a graph as described above (a set of vertices and a set of edges between those vertices). Next we need to fill in the higher-dimensional simplices, e.g. any triangles, tetrahedrons, etc. so we won't have a bunch of empty holes.</p>
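<p>Here's a minimal Python sketch of this construction, up to dimension 2 (the function name <code>vietoris_rips</code> is mine; real TDA libraries use far more efficient algorithms): points within <code>epsilon</code> of each other get an edge, and any triple of points that are pairwise within <code>epsilon</code> gets a filled triangle.</p>

```python
import math
from itertools import combinations

def vietoris_rips(points, epsilon):
    """Build the Vietoris-Rips complex up to dimension 2: an edge for
    every pair of points within epsilon, and a triangle for every
    triple whose points are pairwise within epsilon."""
    def close(i, j):
        return math.dist(points[i], points[j]) <= epsilon
    n = len(points)
    vertices = [frozenset([i]) for i in range(n)]
    edges = [frozenset(p) for p in combinations(range(n), 2) if close(*p)]
    triangles = [frozenset(t) for t in combinations(range(n), 3)
                 if all(close(i, j) for i, j in combinations(t, 2))]
    return vertices + edges + triangles

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (3.0, 3.0)]
complex_ = vietoris_rips(pts, epsilon=1.1)
# The three nearby points yield 3 edges plus a filled triangle;
# the far-away point (3, 3) stays an isolated vertex.
print(len(complex_))  # 8 simplices: 4 vertices + 3 edges + 1 triangle
```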
<p>Here's a visualization of the major steps (from left to right) of constructing a VR complex on a small point cloud in <span class="math">\(\mathbb R^2\)</span> that was sampled from a circular structure:
<div>
<div style="width:20%; float:left; margin:10px 35px 25px 10px;"><img src="images/TDAimages/pointcloud3.svg" /></div>
<div style="width:20%; float:left; margin:10px 35px 25px 10px;"><img src="images/TDAimages/VRcomplex1.svg" /></div>
<div style="width:20%; float:left; margin:10px 35px 25px 10px;"><img src="images/TDAimages/VRcomplex2.svg" /></div>
<div style="width:20%; float:left; margin:10px 0px 25px 0px;"><img src="images/TDAimages/VRcomplex3.svg" /></div>
</div>
<div style="clear: both;"></div></p>
<p>As you can see, we take what are called the <strong><span class="math">\(\epsilon\)</span>-balls</strong> around each point in <span class="math">\(P\)</span> (the dotted circles of radius <span class="math">\(\epsilon\)</span>) and build edges between that point and all other points within its ball. I only drew in the balls for a few of the points on the left because it would get too hard to see if I drew them all. More generally, the <strong>ball</strong> of radius <span class="math">\(\epsilon\)</span> around a point in <span class="math">\(\mathbb R^d\)</span> is the set of all points within distance <span class="math">\(\epsilon\)</span> of it. So the ball around a point in <span class="math">\(\mathbb R\)</span> (the real number line) is simply a line segment centered on that point, the ball around a point in <span class="math">\(\mathbb R^2\)</span> is a disk, the ball around a point in <span class="math">\(\mathbb R^3\)</span> is a solid sphere, and so on. It is important to realize that a particular VR construction depends not only on the point cloud data but also on a parameter <span class="math">\(\epsilon\)</span> that is arbitrarily chosen. </p>
<blockquote>
<p><strong>Note</strong> (How to choose <span class="math">\(\epsilon\)</span>) <br />
So how does one know what to make <span class="math">\(\epsilon\)</span>? Excellent question, and the answer is simple: you just play around with various values of <span class="math">\(\epsilon\)</span> and see what seems to result in a meaningful VR complex. If you set <span class="math">\(\epsilon\)</span> too small, then your complex may just consist of the original point cloud, or only a few edges between points. If you set <span class="math">\(\epsilon\)</span> too big, then the point cloud will just become one massive ultradimensional simplex. As we will learn later, the key to actually discovering meaningful patterns in a simplicial complex is to continuously vary the <span class="math">\(\epsilon\)</span> parameter (and continually re-build complexes) from 0 to a maximum that results in a single massive simplex. Then you generate a diagram that shows what topological features are born and die as <span class="math">\(\epsilon\)</span> continuously increases. We assume that features that persist for long intervals over <span class="math">\(\epsilon\)</span> are meaningful features whereas features that are very short-lived are likely noise. This procedure is called <strong>persistent homology</strong> as it finds the homological features of a topological space (specifically a simplicial complex) that persist while you vary <span class="math">\(\epsilon\)</span>. We will delve deeper into persistent homology after we've learned how to build simplicial complexes from data.</p>
</blockquote>
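<p>To make this sweep concrete, here is a minimal sketch (helper names are my own invention, not any library's API) that counts the connected components of the VR complex's 1-skeleton as <span class="math">\(\epsilon\)</span> grows, using a union-find structure. Watching such counts appear and disappear as <span class="math">\(\epsilon\)</span> varies is exactly the 0-dimensional case of the persistence idea just described:</p>

```python
import itertools
import math

def connected_components(points, eps):
    """Count connected components of the VR 1-skeleton at scale eps."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    # union every pair of points within distance eps of each other
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) <= eps:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# two well-separated pairs of points
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
for eps in (0.5, 1.5, 11.0):
    print(eps, connected_components(pts, eps))  # 4, then 2, then 1 components
```

<p>At tiny <span class="math">\(\epsilon\)</span> every point is its own component; the two pairs merge first, and only at a large <span class="math">\(\epsilon\)</span> does everything collapse into one component, mirroring the birth/death behavior described in the note.</p>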
<p>Let's make the VR construction mathematically precise...</p>
<blockquote>
<p><strong>Definition (Vietoris-Rips Complex)</strong> <br />
If we have a point cloud <span class="math">\(P \subseteq \mathbb R^d\)</span> (a finite set of points in <span class="math">\(d\)</span>-dimensional space), then the Vietoris-Rips (VR) complex <span class="math">\(V_{\epsilon}(P)\)</span> at scale <span class="math">\(\epsilon\)</span> (the VR complex over the point cloud <span class="math">\(P\)</span> with parameter <span class="math">\(\epsilon\)</span>) is defined as:</p>
<p><span class="math">\(V_{\epsilon}(P) = \{ \sigma \subseteq P \mid d(u,v) \le \epsilon, \forall u \neq v \in \sigma \}\)</span></p>
</blockquote>
<p>Okay, let's translate that into English. It reads as follows: the VR complex at scale <span class="math">\(\epsilon\)</span> is the set <span class="math">\(V_{\epsilon}(P)\)</span> of all subsets <span class="math">\(\sigma\)</span> of <span class="math">\(P\)</span> such that the pairwise distance between any two distinct points in <span class="math">\(\sigma\)</span> is less than or equal to the parameter <span class="math">\(\epsilon\)</span>.</p>
<p>So basically, if we have a data set <span class="math">\(P\)</span> with a bunch of points, we add a simplex <span class="math">\(\sigma\)</span> (which is a subset of <span class="math">\(P\)</span>) if the points in <span class="math">\(\sigma\)</span> are all within <span class="math">\(\epsilon\)</span> distance of each other. Thus we get a set of subsets of <span class="math">\(P\)</span> that are all simplices, and hence we get a simplicial complex of <span class="math">\(P\)</span>.</p>
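<p>The definition translates almost directly into code. Below is a brute-force sketch (function and variable names are my own) that enumerates the simplices of <span class="math">\(V_{\epsilon}(P)\)</span> up to a chosen dimension by testing the pairwise-distance condition on every candidate subset. This is exponential in the number of points, so it is only for illustration on tiny point clouds; we'll write more practical code in the next part:</p>

```python
import itertools
import math

def vietoris_rips(points, eps, max_dim=2):
    """Return all simplices (as tuples of point indices) of the VR complex
    at scale eps, up to dimension max_dim, by brute-force subset checking."""
    n = len(points)
    close = lambda i, j: math.dist(points[i], points[j]) <= eps
    simplices = [(i,) for i in range(n)]  # every point is a 0-simplex
    for k in range(2, max_dim + 2):  # a subset of size k is a (k-1)-simplex
        for sigma in itertools.combinations(range(n), k):
            # sigma is a simplex iff all its points are pairwise within eps
            if all(close(i, j) for i, j in itertools.combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

# three mutually close points forming a triangle, plus one faraway point
pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(vietoris_rips(pts, eps=1.5))
```

<p>With <code>eps=1.5</code> the three nearby points yield a filled triangle (the 2-simplex <code>(0, 1, 2)</code> together with its three edges), while the faraway fourth point stays an isolated vertex.</p>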
<h6>Next time...</h6>
<p>We're going to end this post here for now but we'll pick up right where we left off in Part 2 where we'll actually start writing some code to build a Vietoris-Rips complex on some data.</p>
Q-learning with Neural Networks (2015-10-30, Brandon Brown)<p>In part 3 of the reinforcement learning series we implement a neural network as the action-value function and use the Q-learning algorithm to train an agent how to play Gridworld.</p><h3>Learning Gridworld with Q-learning</h3>
<h4>Introduction</h4>
<p>We've finally made it to what we've all been waiting for: Q-learning with neural networks. Since I'm sure a lot of people didn't follow parts 1 and 2 because they were kind of boring, I will attempt to make this post relatively (but not completely) self-contained. In this post, we will dive into using Q-learning to train an agent (player) to play Gridworld. Gridworld is a simple text-based game in which there is a 4x4 grid of tiles and 4 objects placed therein: a player, a pit, a goal, and a wall. The player can move up/down/left/right (<span class="math">\(a \in A = \{up, down, left, right\}\)</span>) and the point of the game is to get to the goal, where the player will receive a numerical reward. Unfortunately, we have to avoid the pit, because if we land on it we are penalized with a negative 'reward'. As if our task weren't difficult enough, there's also a wall that can block the player's path (but it offers no reward or penalty).</p>
<p><img src="images/RL/gridworld.png" /></p>
<h4>Quick Review of Terms and Concepts (skip if you followed parts 1 & 2)</h4>
<p>A state is all the information necessary (e.g. pixel data in a game) to make a decision that you expect will take you to a new (higher value) state. The high level function of reinforcement learning is to learn the values of states or state-action pairs (the value of taking action <span class="math">\(a\)</span> given we're in state <span class="math">\(s\)</span>). The value is some notion of how "good" that state or action is. Generally this is a function of rewards received now or in the future as a result of taking some action or being in some state.</p>
<p>A policy, denoted <span class="math">\(\pi\)</span>, is the specific strategy we take in order to get into high value states or take high value actions to maximize our rewards over time. For example, a policy in blackjack might be to always hit until we have 19. We denote a function, <span class="math">\(\pi(s)\)</span> that accepts a state <span class="math">\(s\)</span> and returns the action to be taken. Generally <span class="math">\(\pi(s)\)</span> as a function just evaluates the value of all possible actions given the state <span class="math">\(s\)</span> and returns the highest value action. This will result in a specific policy <span class="math">\(\pi\)</span> that may change over time as we improve our value estimates.</p>
<p>We call the function that accepts a state <span class="math">\(s\)</span> and returns the value of that state <span class="math">\(v_{\pi}(s)\)</span>. This is the value function. Similarly, there is an action-value function <span class="math">\(Q(s, a)\)</span> that accepts a state <span class="math">\(s\)</span> and an action <span class="math">\(a\)</span> and returns the value of taking that action given that state. Some RL algorithms or implementations will use one or the other. Importantly, if we base our algorithm on learning state-values (as opposed to action-values), we must keep in mind that the value of a state depends completely on our policy <span class="math">\(\pi\)</span>. Using blackjack as an example, if we're in the state of having a card total of 20, and have two possible actions, hit or stay, the value of this state is only high if our policy says to stay when we have 20. If our policy said to hit when we have 20, we would probably bust and lose the game, thus the value of that state would be low. More formally, the value of a state is equivalent to the value of the best action taken in that state.</p>
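<p>As a toy illustration of these definitions (the Q-values below are made-up numbers for the blackjack example), a greedy policy <span class="math">\(\pi(s)\)</span> and the corresponding state value are just an argmax and a max over the action-value table:</p>

```python
# toy action-value table: Q[(state, action)] -> value (illustrative numbers)
Q = {("20", "hit"): -0.8, ("20", "stay"): 0.9,
     ("12", "hit"): 0.3,  ("12", "stay"): -0.1}
actions = ["hit", "stay"]

def pi(s):
    """Greedy policy: return the highest-value action in state s."""
    return max(actions, key=lambda a: Q[(s, a)])

def v(s):
    """Value of state s under the greedy policy: value of its best action."""
    return max(Q[(s, a)] for a in actions)

print(pi("20"), v("20"))  # -> stay 0.9
```

<p>This makes the dependence on the policy explicit: <code>v("20")</code> is high only because the greedy policy picks "stay" there.</p>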
<h4>What is Q-learning?</h4>
<p>Q-learning, like virtually all RL methods, is an algorithm for calculating state-action values. It falls under the class of <em>temporal difference</em> (TD) algorithms, which update value estimates based on the differences between predictions made at successive time steps.</p>
<p>In part 2 where we used a Monte Carlo method to learn to play blackjack, we had to wait until the end of a game (episode) to update our state-action values. With TD algorithms, we make updates after every action taken. In most cases, that makes more sense. We make a prediction (based on previous experience), take an action based on that prediction, receive a reward and then update our prediction.</p>
<p>(Btw: Don't confuse the "Q" in Q-learning with the <span class="math">\(Q\)</span> function we've discussed in the previous parts. The <span class="math">\(Q\)</span> function is always the name of the function that accepts states and actions and spits out the value of that state-action pair. RL methods involve a <span class="math">\(Q\)</span> function but aren't necessarily Q-learning algorithms.)</p>
<p>Here's the tabular Q-learning update rule:
</p>
<div class="math">$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha[R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)]$$</div>
<p>So, like Monte Carlo, we could have a table that stores the Q-value for every possible state-action pair and iteratively update this table as we play games. Our policy <span class="math">\(\pi\)</span> would be based on choosing the action with the highest Q value for that given state.</p>
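<p>The update rule above translates into just a few lines of code. Here is a minimal tabular sketch (the state names and reward values are made up for illustration):</p>

```python
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)], every entry defaults to 0.0
alpha, gamma = 0.5, 0.9
actions = ["up", "down", "left", "right"]

def q_update(s, a, r, s_next):
    """One tabular Q-learning step: move Q(s,a) toward the TD target."""
    td_target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

q_update("s0", "right", 0.0, "s1")   # no reward yet, Q stays 0
q_update("s1", "right", 10.0, "end") # big reward: Q(s1,right) -> 5.0
q_update("s0", "right", 0.0, "s1")   # value propagates back: Q(s0,right) -> 2.25
print(Q[("s1", "right")], Q[("s0", "right")])  # -> 5.0 2.25
```

<p>Note how the reward received at "s1" propagates back to the earlier state-action pair on the next update, discounted by <span class="math">\(\gamma\)</span> and smoothed by the learning rate <span class="math">\(\alpha\)</span>.</p>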
<p>But we're done with tables. This is 2015, we have GPUs and stuff. Well, as I alluded to in part 2, our <span class="math">\(Q(s,a)\)</span> function doesn't have to just be a lookup table. In fact, in most interesting problems, our state-action space is much too large to store in a table. Imagine a very simplified game of Pacman. If we implement it as a graphics-based game, the state would be the raw pixel data. In a tabular method, if the pixel data changes by just a single pixel, we have to store that as a completely separate entry in the table. Obviously that's silly and wasteful. What we need is some way to generalize and pattern match between states. We need our algorithm to say "the value of these <em>kinds</em> of states is X" rather than "the value of this exact, super specific state is X."</p>
<p>That's where neural networks come in. Or any other type of function approximator, even a simple linear model. We can use a neural network, instead of a lookup table, as our <span class="math">\(Q(s,a)\)</span> function. Just like before, it will accept a state and an action and spit out the value of that state-action.</p>
<p>Importantly, however, unlike a lookup table, a neural network also has a bunch of parameters associated with it. These are the weights. So our <span class="math">\(Q\)</span> function actually looks like this: <span class="math">\(Q(s, a, \theta)\)</span> where <span class="math">\(\theta\)</span> is a vector of parameters. And instead of iteratively updating values in a table, we will iteratively update the <span class="math">\(\theta\)</span> parameters of our neural network so that it learns to provide us with better estimates of state-action values.</p>
<p>Of course we can use gradient descent (backpropagation) to train our <span class="math">\(Q\)</span> neural network just like any other neural network.</p>
<p>But what's our target <code>y</code> vector (expected output vector)? Since the net is not a table, we don't use the full formula shown above; our target is simply <span class="math">\(r_{t+1} + \gamma \max_{a'} Q(s', a')\)</span> for the state-action pair that just happened. <span class="math">\(\gamma\)</span> is a parameter between 0 and 1 called the <em>discount factor</em>. Basically it determines how much each future reward is taken into consideration for updating our Q-value. If <span class="math">\(\gamma\)</span> is close to 0, we heavily discount future rewards and thus mostly care about immediate rewards. <span class="math">\(s'\)</span> refers to the new state after having taken action <span class="math">\(a\)</span> and <span class="math">\(a'\)</span> refers to the actions possible in this new state. So <span class="math">\(\max_{a'} Q(s', a')\)</span> means we calculate the Q-value for each action available in the new state and take the maximum to use in our value update. (Note I may use <span class="math">\(s' \text{ and } a'\)</span> interchangeably with <span class="math">\(s_{t+1} \text{ and } a_{t+1}\)</span>.)</p>
<p>One important note: our reward update for every state-action pair is <span class="math">\(r_{t+1} + \gamma \max_a Q(s_{t+1}, a)\)</span> <strong>except</strong> when the state <span class="math">\(s'\)</span> is a terminal state. When we've reached a terminal state, the reward update is simply <span class="math">\(r_{t+1}\)</span>. A terminal state is the last state in an episode. In our case, there are 2 terminal states: the state where the player fell into the pit (and receives -10) and the state where the player has reached the goal (and receives +10). Any other state is non-terminal and the game is still in progress.</p>
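<p>In the neural-network setting, this target is assembled per transition: we copy the network's current output for <span class="math">\(s\)</span> and overwrite only the entry of the action actually taken, so the gradient only pushes on that one Q-value. A sketch of that bookkeeping (a hypothetical helper of my own; the actual training loop comes later in the post):</p>

```python
import numpy as np

GAMMA = 0.9

def make_target(q_values, action, reward, next_q_values, terminal):
    """Build the training target vector for one transition.
    q_values: model output for state s (one entry per action);
    next_q_values: model output for the new state s'."""
    y = np.copy(q_values)
    if terminal:
        y[action] = reward                              # terminal: just r
    else:
        y[action] = reward + GAMMA * np.max(next_q_values)
    return y

q = np.array([0.1, 0.2, 0.3, 0.4])
next_q = np.array([1.0, 5.0, 2.0, 0.0])
print(make_target(q, 2, 0.0, next_q, False))  # entry 2 becomes 0 + 0.9*5 = 4.5
```

<p>The other three entries stay equal to the network's own predictions, so only the chosen action's Q-value is pulled toward the target.</p>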
<p>There are two keywords I need to mention as well: <strong>on-policy</strong> and <strong>off-policy</strong> methods. In on-policy methods we iteratively learn about state values at the same time that we improve our policy. In other words, the updates to our state values depend on the policy. In contrast, off-policy methods do not depend on the policy to update the value function. Q-learning is an off-policy method. It's advantageous because with off-policy methods, we can follow one policy while learning about another. For example, with Q-learning, we could always take completely random actions and yet we would still learn about another policy function of taking the best actions in every state. If there's ever a <span class="math">\(\pi\)</span> referenced in the value update part of the algorithm then it's an on-policy method.</p>
<h3>Gridworld Details</h3>
<p>Before we get too deep into the neural network Q-learning stuff, let's discuss the Gridworld game implementation that we're using as our toy problem.</p>
<p>We're going to implement 3 variants of the game in order of increasing difficulty. The first version will initialize a grid in exactly the same way each time. That is, every new game starts with the player (P), goal (+), pit (-), and wall (W) in exactly the same positions. Thus the algorithm just needs to learn how to take the player from a known starting position to a known end position without hitting the pit, which gives out negative rewards.</p>
<p>The second implementation is slightly more difficult. The goal, pit and wall will always be initialized in the same positions, but the player will be placed randomly on the grid on each new game. The third implementation is the most difficult to learn, and that's where all elements are randomly placed on the grid each game.</p>
<p>Let's get to coding.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="k">def</span> <span class="nf">randPair</span><span class="p">(</span><span class="n">s</span><span class="p">,</span><span class="n">e</span><span class="p">):</span>
<span class="k">return</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="n">s</span><span class="p">,</span><span class="n">e</span><span class="p">),</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="n">s</span><span class="p">,</span><span class="n">e</span><span class="p">)</span>
<span class="c1">#finds an array in the "depth" dimension of the grid</span>
<span class="k">def</span> <span class="nf">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">obj</span><span class="p">):</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
<span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">state</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="p">]</span> <span class="o">==</span> <span class="n">obj</span><span class="p">)</span><span class="o">.</span><span class="n">all</span><span class="p">():</span>
<span class="k">return</span> <span class="n">i</span><span class="p">,</span><span class="n">j</span>
<span class="c1">#Initialize stationary grid, all items are placed deterministically</span>
<span class="k">def</span> <span class="nf">initGrid</span><span class="p">():</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
<span class="c1">#place player</span>
<span class="n">state</span><span class="p">[</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">])</span>
<span class="c1">#place wall</span>
<span class="n">state</span><span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="c1">#place pit</span>
<span class="n">state</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="c1">#place goal</span>
<span class="n">state</span><span class="p">[</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="k">return</span> <span class="n">state</span>
<span class="c1">#Initialize player in random location, but keep wall, goal and pit stationary</span>
<span class="k">def</span> <span class="nf">initGridPlayer</span><span class="p">():</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
<span class="c1">#place player</span>
<span class="n">state</span><span class="p">[</span><span class="n">randPair</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">])</span>
<span class="c1">#place wall</span>
<span class="n">state</span><span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="c1">#place pit</span>
<span class="n">state</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="c1">#place goal</span>
<span class="n">state</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]))</span> <span class="c1">#find grid position of player (agent)</span>
<span class="n">w</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span> <span class="c1">#find wall</span>
<span class="n">g</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span> <span class="c1">#find goal</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span> <span class="c1">#find pit</span>
<span class="k">if</span> <span class="p">(</span><span class="ow">not</span> <span class="n">a</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">w</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">g</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">p</span><span class="p">):</span>
<span class="c1">#print('Invalid grid. Rebuilding..')</span>
<span class="k">return</span> <span class="n">initGridPlayer</span><span class="p">()</span>
<span class="k">return</span> <span class="n">state</span>
<span class="c1">#Initialize grid so that goal, pit, wall, player are all randomly placed</span>
<span class="k">def</span> <span class="nf">initGridRand</span><span class="p">():</span>
    <span class="n">state</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
    <span class="c1">#place player</span>
    <span class="n">state</span><span class="p">[</span><span class="n">randPair</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">])</span>
    <span class="c1">#place wall</span>
    <span class="n">state</span><span class="p">[</span><span class="n">randPair</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
    <span class="c1">#place pit</span>
    <span class="n">state</span><span class="p">[</span><span class="n">randPair</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
    <span class="c1">#place goal</span>
    <span class="n">state</span><span class="p">[</span><span class="n">randPair</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">])</span>
    <span class="n">a</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]))</span>
    <span class="n">w</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">g</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">p</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="c1">#If any of the "objects" are superimposed, just call the function again to re-place</span>
    <span class="k">if</span> <span class="p">(</span><span class="ow">not</span> <span class="n">a</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">w</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">g</span> <span class="ow">or</span> <span class="ow">not</span> <span class="n">p</span><span class="p">):</span>
        <span class="c1">#print('Invalid grid. Rebuilding..')</span>
        <span class="k">return</span> <span class="n">initGridRand</span><span class="p">()</span>
    <span class="k">return</span> <span class="n">state</span>
</pre></div>
<p>The state is a 3-dimensional numpy array (4x4x4). You can think of the first two dimensions as positions on the board; e.g. row 1, column 2 is the position (1,2) [zero-indexed] on the board. The 3rd dimension encodes the object/element at that position. Since there are 4 different possible objects, the 3rd dimension of the state contains vectors of length 4. We're using a one-hot encoding for the elements, except that an empty position is just a vector of all zeros. So with a length-4 vector we're encoding 5 possible options at each grid position: empty, player, goal, pit, or wall.</p>
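<p>To make the encoding concrete, here's a minimal sketch (the variable names are just for illustration, not part of the game code) of placing one element and reading back its one-hot vector:</p>

```python
import numpy as np

# Empty 4x4x4 state: every grid square starts as the all-zeros "empty" vector
state = np.zeros((4, 4, 4))

# Place the player (one-hot index 3) at row 1, column 2
state[1, 2] = np.array([0, 0, 0, 1])

print(state[1, 2])  # the player's one-hot vector: [0. 0. 0. 1.]
print(state[0, 0])  # an untouched square is all zeros: [0. 0. 0. 0.]
```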
<p>You can also think of the 3rd dimension as being divided into 4 separate grid planes, where each plane represents the position of each element. So below is an example where the player is at grid position (3,0), the wall is at (0,0), the pit is at (0,1) and the goal is at (1,0). [All other elements are 0s]</p>
<p><img src="images/RL/gridpositions.png" width="300px" /></p>
<p>In our simple implementation it's possible for the board to be initialized such that two or more objects have a 1 at the same "x,y" position (but different "z" positions), which means they occupy the same square on the grid. Obviously we don't want to initialize the board this way, so for the last 2 variants of the game, which involve some element of random placement, we check whether we can find "clean" positions (only one "1" along the 'Z' dimension of a particular grid position) for the various element types; if not, we just recursively call the grid-initialization function until we get a state where no elements are superimposed.</p>
<p>When the player successfully plays the game and lands on the goal, the player and goal positions <em>will</em> be superimposed, and that is how we know the player has won (likewise if the player hits the pit and loses). The wall is supposed to block the player's movement, so we prevent the player from taking an action that would place them at the same position as the wall. Additionally, the grid is "enclosed" so that the player cannot walk through its edges.</p>
<p>Now we will implement the movement function.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">):</span>
    <span class="c1">#need to locate player in grid</span>
    <span class="c1">#need to determine what object (if any) is in the new grid spot the player is moving to</span>
    <span class="n">player_loc</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]))</span>
    <span class="n">wall</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">goal</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">pit</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">state</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
    <span class="n">actions</span> <span class="o">=</span> <span class="p">[[</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">],[</span><span class="mi">0</span><span class="p">,</span><span class="o">-</span><span class="mi">1</span><span class="p">],[</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]]</span>
    <span class="c1">#e.g. up => (player row - 1, player column + 0)</span>
    <span class="n">new_loc</span> <span class="o">=</span> <span class="p">(</span><span class="n">player_loc</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="n">actions</span><span class="p">[</span><span class="n">action</span><span class="p">][</span><span class="mi">0</span><span class="p">],</span> <span class="n">player_loc</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">+</span> <span class="n">actions</span><span class="p">[</span><span class="n">action</span><span class="p">][</span><span class="mi">1</span><span class="p">])</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">new_loc</span> <span class="o">!=</span> <span class="n">wall</span><span class="p">):</span>
        <span class="k">if</span> <span class="p">((</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">new_loc</span><span class="p">)</span> <span class="o"><=</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span><span class="o">.</span><span class="n">all</span><span class="p">()</span> <span class="ow">and</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">new_loc</span><span class="p">)</span> <span class="o">>=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">))</span><span class="o">.</span><span class="n">all</span><span class="p">()):</span>
            <span class="n">state</span><span class="p">[</span><span class="n">new_loc</span><span class="p">][</span><span class="mi">3</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="n">new_player_loc</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]))</span>
    <span class="k">if</span> <span class="p">(</span><span class="ow">not</span> <span class="n">new_player_loc</span><span class="p">):</span>
        <span class="n">state</span><span class="p">[</span><span class="n">player_loc</span><span class="p">]</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">])</span>
    <span class="c1">#re-place pit</span>
    <span class="n">state</span><span class="p">[</span><span class="n">pit</span><span class="p">][</span><span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="c1">#re-place wall</span>
    <span class="n">state</span><span class="p">[</span><span class="n">wall</span><span class="p">][</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="c1">#re-place goal</span>
    <span class="n">state</span><span class="p">[</span><span class="n">goal</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="k">return</span> <span class="n">state</span>
</pre></div>
<p>The first thing we do is try to find the positions of each element on the grid (state). Then it's just a few simple if-conditions. We need to make sure the player isn't trying to step on the wall and make sure that the player isn't stepping outside the bounds of the grid.</p>
<p>Now we implement <code>getLoc</code>, which is similar to <code>findLoc</code> but can identify superimposed elements, whereas <code>findLoc</code> (intentionally) misses an element if it is superimposed on another. Additionally, we'll implement our reward function, which awards +10 if the player steps onto the goal, -10 if the player steps into the pit, and -1 for any other move. These rewards are fairly arbitrary; as long as the goal's reward is significantly higher than the pit's, the algorithm should do fine.</p>
<p>Lastly, I've implemented a function that will display our grid as a text array so we can see what's going on.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">getLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">level</span><span class="p">):</span>
    <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
        <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
            <span class="k">if</span> <span class="p">(</span><span class="n">state</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="p">][</span><span class="n">level</span><span class="p">]</span> <span class="o">==</span> <span class="mi">1</span><span class="p">):</span>
                <span class="k">return</span> <span class="n">i</span><span class="p">,</span><span class="n">j</span>
<span class="k">def</span> <span class="nf">getReward</span><span class="p">(</span><span class="n">state</span><span class="p">):</span>
    <span class="n">player_loc</span> <span class="o">=</span> <span class="n">getLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span>
    <span class="n">pit</span> <span class="o">=</span> <span class="n">getLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
    <span class="n">goal</span> <span class="o">=</span> <span class="n">getLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">player_loc</span> <span class="o">==</span> <span class="n">pit</span><span class="p">):</span>
        <span class="k">return</span> <span class="o">-</span><span class="mi">10</span>
    <span class="k">elif</span> <span class="p">(</span><span class="n">player_loc</span> <span class="o">==</span> <span class="n">goal</span><span class="p">):</span>
        <span class="k">return</span> <span class="mi">10</span>
    <span class="k">else</span><span class="p">:</span>
        <span class="k">return</span> <span class="o">-</span><span class="mi">1</span>
<span class="k">def</span> <span class="nf">dispGrid</span><span class="p">(</span><span class="n">state</span><span class="p">):</span>
    <span class="n">grid</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'<U2'</span><span class="p">)</span>
    <span class="n">player_loc</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">]))</span>
    <span class="n">wall</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">goal</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="n">pit</span> <span class="o">=</span> <span class="n">findLoc</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">0</span><span class="p">]))</span>
    <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
        <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">):</span>
            <span class="n">grid</span><span class="p">[</span><span class="n">i</span><span class="p">,</span><span class="n">j</span><span class="p">]</span> <span class="o">=</span> <span class="s1">' '</span>
    <span class="k">if</span> <span class="n">player_loc</span><span class="p">:</span>
        <span class="n">grid</span><span class="p">[</span><span class="n">player_loc</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'P'</span> <span class="c1">#player</span>
    <span class="k">if</span> <span class="n">wall</span><span class="p">:</span>
        <span class="n">grid</span><span class="p">[</span><span class="n">wall</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'W'</span> <span class="c1">#wall</span>
    <span class="k">if</span> <span class="n">goal</span><span class="p">:</span>
        <span class="n">grid</span><span class="p">[</span><span class="n">goal</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'+'</span> <span class="c1">#goal</span>
    <span class="k">if</span> <span class="n">pit</span><span class="p">:</span>
        <span class="n">grid</span><span class="p">[</span><span class="n">pit</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'-'</span> <span class="c1">#pit</span>
    <span class="k">return</span> <span class="n">grid</span>
</pre></div>
<p>And that's it. That's the entire gridworld game implementation. Not too bad, right? As with my part 2 blackjack implementation, this game is not written in an OOP style but in a functional style where we just pass states around.</p>
<p>Let's demonstrate some gameplay. I'll be using the <code>initGridRand()</code> variant so that all items are placed randomly.</p>
<div class="highlight"><pre><span></span><span class="n">state</span> <span class="o">=</span> <span class="n">initGridRand</span><span class="p">()</span>
<span class="n">dispGrid</span><span class="p">(</span><span class="n">state</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="nx">array</span><span class="p">([[</span><span class="sc">'P'</span><span class="p">,</span><span class="w"> </span><span class="sc">'-'</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">],</span>
<span class="w"> </span><span class="p">[</span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">],</span>
<span class="w"> </span><span class="p">[</span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">'W'</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">],</span>
<span class="w"> </span><span class="p">[</span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">'+'</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">,</span><span class="w"> </span><span class="sc">' '</span><span class="p">]],</span><span class="w"> </span>
<span class="w"> </span><span class="nx">dtype</span><span class="p">=</span><span class="err">'</span><span class="p"><</span><span class="nx">U2</span><span class="err">'</span><span class="p">)</span>
</pre></div>
<p>As you can see, I clearly need to move 3 spaces down, and 1 space to the right to land on the goal.
Remember, our action encoding is: 0 = up, 1 = down, 2 = left, 3 = right.</p>
<div class="highlight"><pre><span></span><span class="n">state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Reward: </span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span> <span class="p">(</span><span class="n">getReward</span><span class="p">(</span><span class="n">state</span><span class="p">),))</span>
<span class="n">dispGrid</span><span class="p">(</span><span class="n">state</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">Reward</span><span class="o">:</span><span class="w"> </span><span class="mi">10</span>
<span class="n">array</span><span class="o">([[</span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">'-'</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">'W'</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">],</span>
<span class="w"> </span><span class="o">[</span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">,</span><span class="w"> </span><span class="s1">' '</span><span class="o">]],</span><span class="w"> </span>
<span class="w"> </span><span class="n">dtype</span><span class="o">=</span><span class="s1">'<U2'</span><span class="o">)</span>
</pre></div>
<p>We haven't implemented a display for when the player is on the goal or pit, so the player and goal markers just disappear when that happens. </p>
<h3>Neural Network as our Q function</h3>
<p>Now for the fun part. Let's build our neural network that will serve as our <span class="math">\(Q\)</span> function. Since this is a post about Q-learning, I'm not going to code a neural network from scratch. I'm going to use the fairly popular Theano-based library Keras. You can of course use whatever library you want, or roll your own.</p>
<p><strong>Important Note</strong>:
Up until now, I've been talking about how the neural network can serve the role of <span class="math">\(Q(s, a)\)</span>, and that's absolutely true. However, I will be implementing our neural network the same way Google DeepMind did for its Atari-playing algorithm. Instead of an architecture that accepts a state and an action as inputs and outputs the value of that single state-action pair, DeepMind built a network that accepts just a state and outputs a separate Q-value for each possible action in its output layer. This is pretty clever, because in Q-learning we need <span class="math">\(\max Q(s', a')\)</span> [the max of the Q-values over every possible action in the new state <span class="math">\(s'\)</span>]. Rather than having to run our network forward once per action, we just need to run it forward once in total. The result is the same, but it's more efficient.</p>
<p><img src="images/RL/rl3net.png" /></p>
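<p>To illustrate why this design is efficient, here's a small numpy sketch (this is <em>not</em> the Keras model below; <code>predict_q</code> is a hypothetical stand-in with random weights) showing how a single forward pass yields all four Q-values at once:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 4))  # stand-in "weights": 64-length state in, 4 Q-values out

def predict_q(state_vec):
    # One forward pass maps the whole state to Q-values for all 4 actions
    return state_vec @ W

state_vec = np.zeros(64)
state_vec[3] = 1.0                   # some arbitrary state encoding
qvals = predict_q(state_vec)         # shape (4,): Q-values for up/down/left/right
best_action = int(np.argmax(qvals))  # greedy action choice
max_q = qvals.max()                  # the max Q(s', a') term used in the update
```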
<div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">keras.models</span> <span class="kn">import</span> <span class="n">Sequential</span>
<span class="kn">from</span> <span class="nn">keras.layers.core</span> <span class="kn">import</span> <span class="n">Dense</span><span class="p">,</span> <span class="n">Dropout</span><span class="p">,</span> <span class="n">Activation</span>
<span class="kn">from</span> <span class="nn">keras.optimizers</span> <span class="kn">import</span> <span class="n">RMSprop</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">model</span> <span class="o">=</span> <span class="n">Sequential</span><span class="p">()</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Dense</span><span class="p">(</span><span class="mi">164</span><span class="p">,</span> <span class="n">init</span><span class="o">=</span><span class="s1">'lecun_uniform'</span><span class="p">,</span> <span class="n">input_shape</span><span class="o">=</span><span class="p">(</span><span class="mi">64</span><span class="p">,)))</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Activation</span><span class="p">(</span><span class="s1">'relu'</span><span class="p">))</span>
<span class="c1">#model.add(Dropout(0.2)) I'm not using dropout, but maybe you wanna give it a try?</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Dense</span><span class="p">(</span><span class="mi">150</span><span class="p">,</span> <span class="n">init</span><span class="o">=</span><span class="s1">'lecun_uniform'</span><span class="p">))</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Activation</span><span class="p">(</span><span class="s1">'relu'</span><span class="p">))</span>
<span class="c1">#model.add(Dropout(0.2))</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Dense</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="n">init</span><span class="o">=</span><span class="s1">'lecun_uniform'</span><span class="p">))</span>
<span class="n">model</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Activation</span><span class="p">(</span><span class="s1">'linear'</span><span class="p">))</span> <span class="c1">#linear output so we can have range of real-valued outputs</span>
<span class="n">rms</span> <span class="o">=</span> <span class="n">RMSprop</span><span class="p">()</span>
<span class="n">model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">loss</span><span class="o">=</span><span class="s1">'mse'</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">rms</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="c1">#just to show an example output; read outputs left to right: up/down/left/right</span>
</pre></div>
<div class="highlight"><pre><span></span>array([[-0.02812552, -0.04649779, -0.08819015, -0.00723661]])
</pre></div>
<p>So that's the network I've designed: an input layer of 64 units (because our state has a total of 64 elements; remember, it's a 4x4x4 numpy array), 2 hidden layers of 164 and 150 units, and an output layer of 4 units, one for each of our possible actions (up, down, left, right) [in that order].</p>
<p>Why did I make the network like this? Honestly, I have no good answer for that. I just messed around with different hidden layer architectures and this one seemed to work fairly well. Feel free to change it up. There's probably a better configuration. (If you discover or know of a much better network architecture for this, let me know).</p>
<h3>Online Training</h3>
<p>Below is the implementation of the main loop of the algorithm. In broad strokes:
1. Set up a for-loop over the number of epochs.
2. Inside it, set up a while loop that runs while the game is in progress.
3. Run the Q network forward on the current state.
4. We're using an epsilon-greedy implementation, so at time <em>t</em> with probability <span class="math">\(\epsilon\)</span> we choose a random action. With probability <span class="math">\(1-\epsilon\)</span> we choose the action associated with the highest Q value from our neural network.
5. Take action <span class="math">\(a\)</span> as determined in (4); observe the new state <span class="math">\(s'\)</span> and reward <span class="math">\(r_{t+1}\)</span>.
6. Run the network forward using <span class="math">\(s'\)</span>. Store the highest Q value (<code>maxQ</code>).
7. Our target value for training the network is <code>reward + (gamma * maxQ)</code>, where <code>gamma</code> is a parameter (<span class="math">\(0 \leq \gamma \leq 1\)</span>).
8. Given that we have 4 outputs and we only want to update/train the output associated with the action we just took, our target output vector is the same as the output vector from the first run, except we change the one output associated with our action to: <code>reward + (gamma * maxQ)</code>.
9. Train the model on this 1 sample. Repeat steps 2-9.</p>
<p>Just to be clear, when we first run our neural network and get an output of action-values like this</p>
<div class="highlight"><pre><span></span><span class="n">array</span><span class="p">([[</span><span class="o">-</span><span class="mf">0.02812552</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.04649779</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.08819015</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.00723661</span><span class="p">]])</span>
</pre></div>
<p>our target vector for one iteration may look like this:</p>
<div class="highlight"><pre><span></span><span class="n">array</span><span class="p">([[</span><span class="o">-</span><span class="mf">0.02812552</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.04649779</span><span class="p">,</span> <span class="mi">10</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.00723661</span><span class="p">]])</span>
</pre></div>
<p>if taking action 2 (one step left) resulted in reaching the goal. We keep all the other outputs the same as before and change only the one for the action we took.</p>
<p>Also note that I initialize epsilon (for the <span class="math">\(\epsilon\)</span>-greedy action selection) to 1. It decrements by a small amount on every epoch until it eventually reaches 0.1, where it stays. Google DeepMind likewise used an <span class="math">\(\epsilon\)</span>-greedy action selection, initializing epsilon to 1 and decrementing it during game play.</p>
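<p>To make the target construction concrete, here is a minimal numpy sketch of building such a target vector (the Q-values and the reward are hypothetical example values):</p>

```python
import numpy as np

# hypothetical Q-values predicted by the network for the current state
qval = np.array([[-0.02812552, -0.04649779, -0.08819015, -0.00723661]])

action = 2   # the action we took (left)
reward = 10  # suppose that action reached the goal (a terminal state)

# copy the network's own predictions so the other 3 outputs are unchanged...
y = qval.copy()
# ...and overwrite only the entry for the action taken; for a terminal
# state the target is just the reward (no gamma * maxQ term)
y[0][action] = reward
```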
<p>Okay, so let's go ahead and train our algorithm to learn the easiest variant of the game, where all elements are placed deterministically at the same positions every time.</p>
<div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">IPython.display</span> <span class="kn">import</span> <span class="n">clear_output</span>
<span class="kn">import</span> <span class="nn">random</span>
<span class="n">epochs</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="n">gamma</span> <span class="o">=</span> <span class="mf">0.9</span> <span class="c1">#since it may take several moves to goal, making gamma high</span>
<span class="n">epsilon</span> <span class="o">=</span> <span class="mi">1</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">epochs</span><span class="p">):</span>
    <span class="n">state</span> <span class="o">=</span> <span class="n">initGrid</span><span class="p">()</span>
    <span class="n">status</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="c1">#while game still in progress</span>
    <span class="k">while</span><span class="p">(</span><span class="n">status</span> <span class="o">==</span> <span class="mi">1</span><span class="p">):</span>
        <span class="c1">#We are in state S</span>
        <span class="c1">#Let's run our Q function on S to get Q values for all possible actions</span>
        <span class="n">qval</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">random</span><span class="o">.</span><span class="n">random</span><span class="p">()</span> <span class="o"><</span> <span class="n">epsilon</span><span class="p">):</span> <span class="c1">#choose random action</span>
            <span class="n">action</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
        <span class="k">else</span><span class="p">:</span> <span class="c1">#choose best action from Q(s,a) values</span>
            <span class="n">action</span> <span class="o">=</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">qval</span><span class="p">))</span>
        <span class="c1">#Take action, observe new state S'</span>
        <span class="n">new_state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">)</span>
        <span class="c1">#Observe reward</span>
        <span class="n">reward</span> <span class="o">=</span> <span class="n">getReward</span><span class="p">(</span><span class="n">new_state</span><span class="p">)</span>
        <span class="c1">#Get max_Q(S',a)</span>
        <span class="n">newQ</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">new_state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
        <span class="n">maxQ</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">newQ</span><span class="p">)</span>
        <span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
        <span class="n">y</span><span class="p">[:]</span> <span class="o">=</span> <span class="n">qval</span><span class="p">[:]</span>
        <span class="k">if</span> <span class="n">reward</span> <span class="o">==</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="c1">#non-terminal state</span>
            <span class="n">update</span> <span class="o">=</span> <span class="p">(</span><span class="n">reward</span> <span class="o">+</span> <span class="p">(</span><span class="n">gamma</span> <span class="o">*</span> <span class="n">maxQ</span><span class="p">))</span>
        <span class="k">else</span><span class="p">:</span> <span class="c1">#terminal state</span>
            <span class="n">update</span> <span class="o">=</span> <span class="n">reward</span>
        <span class="n">y</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="n">action</span><span class="p">]</span> <span class="o">=</span> <span class="n">update</span> <span class="c1">#target output</span>
        <span class="nb">print</span><span class="p">(</span><span class="s2">"Game #: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">i</span><span class="p">,))</span>
        <span class="n">model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">y</span><span class="p">,</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">nb_epoch</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">verbose</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
        <span class="n">state</span> <span class="o">=</span> <span class="n">new_state</span>
        <span class="k">if</span> <span class="n">reward</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span>
            <span class="n">status</span> <span class="o">=</span> <span class="mi">0</span>
        <span class="n">clear_output</span><span class="p">(</span><span class="n">wait</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
    <span class="k">if</span> <span class="n">epsilon</span> <span class="o">></span> <span class="mf">0.1</span><span class="p">:</span>
        <span class="n">epsilon</span> <span class="o">-=</span> <span class="p">(</span><span class="mf">1.0</span><span class="o">/</span><span class="n">epochs</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>Game #: 999
Epoch 1/1
1/1 [==============================] - 0s - loss: 0.0265
</pre></div>
<p>Alright, so I've empirically tested this and it trains on the easy variant with just 1000 epochs (keep in mind every epoch is a full game played to completion). Below I've implemented a function we can use to test our trained algorithm to see if it has properly learned how to play the game. It uses the neural network model to calculate action-values for the current state and selects the action with the highest Q-value, repeating this until the game is won or lost. I've made it break out of this loop if it makes more than 10 moves, because that probably means it hasn't learned how to win and we don't want an infinite loop running.</p>
<div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">testAlgo</span><span class="p">(</span><span class="n">init</span><span class="o">=</span><span class="mi">0</span><span class="p">):</span>
    <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
    <span class="k">if</span> <span class="n">init</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
        <span class="n">state</span> <span class="o">=</span> <span class="n">initGrid</span><span class="p">()</span>
    <span class="k">elif</span> <span class="n">init</span><span class="o">==</span><span class="mi">1</span><span class="p">:</span>
        <span class="n">state</span> <span class="o">=</span> <span class="n">initGridPlayer</span><span class="p">()</span>
    <span class="k">elif</span> <span class="n">init</span><span class="o">==</span><span class="mi">2</span><span class="p">:</span>
        <span class="n">state</span> <span class="o">=</span> <span class="n">initGridRand</span><span class="p">()</span>
    <span class="nb">print</span><span class="p">(</span><span class="s2">"Initial State:"</span><span class="p">)</span>
    <span class="nb">print</span><span class="p">(</span><span class="n">dispGrid</span><span class="p">(</span><span class="n">state</span><span class="p">))</span>
    <span class="n">status</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="c1">#while game still in progress</span>
    <span class="k">while</span><span class="p">(</span><span class="n">status</span> <span class="o">==</span> <span class="mi">1</span><span class="p">):</span>
        <span class="n">qval</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
        <span class="n">action</span> <span class="o">=</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">qval</span><span class="p">))</span> <span class="c1">#take action with highest Q-value</span>
        <span class="nb">print</span><span class="p">(</span><span class="s1">'Move #: </span><span class="si">%s</span><span class="s1">; Taking action: </span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span> <span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">action</span><span class="p">))</span>
        <span class="n">state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">)</span>
        <span class="nb">print</span><span class="p">(</span><span class="n">dispGrid</span><span class="p">(</span><span class="n">state</span><span class="p">))</span>
        <span class="n">reward</span> <span class="o">=</span> <span class="n">getReward</span><span class="p">(</span><span class="n">state</span><span class="p">)</span>
        <span class="k">if</span> <span class="n">reward</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span>
            <span class="n">status</span> <span class="o">=</span> <span class="mi">0</span>
            <span class="nb">print</span><span class="p">(</span><span class="s2">"Reward: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">reward</span><span class="p">,))</span>
        <span class="n">i</span> <span class="o">+=</span> <span class="mi">1</span> <span class="c1">#If we're taking more than 10 actions, just stop, we probably can't win this game</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">i</span> <span class="o">></span> <span class="mi">10</span><span class="p">):</span>
            <span class="nb">print</span><span class="p">(</span><span class="s2">"Game lost; too many moves."</span><span class="p">)</span>
            <span class="k">break</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="n">testAlgo</span><span class="p">(</span><span class="n">init</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="nt">Initial</span><span class="w"> </span><span class="nt">State</span><span class="o">:</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'+'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'+'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'+'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">2</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'+'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'+'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">4</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Reward</span><span class="o">:</span><span class="w"> </span><span class="nt">10</span>
</pre></div>
<p>Can we get a round of applause for our gridworld player here? Clearly it knows what it's doing; it went straight for the prize!</p>
<h3>Playing the harder variant, catastrophic forgetting, and experience replay</h3>
<p>We're slowly building up our chops and we want our algorithm to train on the harder variant of the game where every new game the player is randomly placed on the grid. It can't just memorize a sequence of steps to take as before, it needs to be able to take the shortest path to the goal (without stepping into the pit) from wherever it starts on the grid. It needs to develop a slightly more sophisticated representation of its environment.</p>
<p>Unfortunately, there is a problem we may need to deal with as our task becomes increasingly difficult: a well-known problem called <strong>catastrophic forgetting</strong>, associated with gradient-descent-based training methods in an online training setting.</p>
<p>Imagine that in game #1, which our algorithm is training on (learning Q-values for), the player is placed between the pit and the goal, such that the goal is on the right and the pit is on the left. Using the epsilon-greedy strategy, the player takes a random move, by chance steps to the right, and hits the goal. Great: the algorithm will try to learn that this state-action pair is associated with a high reward by updating its weights so that the output more closely matches the target value (i.e., backpropagation). Now the second game gets initialized, and the player is again between the goal and the pit, but this time the goal is on the <em>left</em> and the pit is on the right. Perhaps to our naive algorithm, the state <em>seems</em> very similar to the last game. Let's say that, once again, the player chooses to take one step to the right, but this time it ends up in the pit and gets a -10 reward. The player is thinking "what the hell, I thought going to the right was the best decision based on my previous experience." When it runs backpropagation again to update its state-action value, because this state-action pair is very similar to the previously learned one, it may mess up the weights it learned before.</p>
<p>This is the essence of catastrophic forgetting. There's a push-pull between very similar state-actions (but with divergent targets) that results in this inability to properly learn anything. We generally don't have this problem in the supervised learning realm because we do randomized batch learning, where we don't update our weights until we've iterated through some random subset of our training data.</p>
<p>Catastrophic forgetting is probably not something we have to worry about with the first variant of our game because the targets are always stationary; but with the harder variants, it's something we should consider, and that is why I'm implementing something called <strong>experience replay</strong>. Experience replay basically gives us minibatch updating in an online learning scheme. It's actually not a huge deal to implement; here's how it works.</p>
<p>Experience replay:
1. In state <span class="math">\(s\)</span>, take action <span class="math">\(a\)</span>, observe new state <span class="math">\(s_{t+1}\)</span> and reward <span class="math">\(r_{t+1}\)</span>
2. Store this as a tuple <span class="math">\((s, a, r_{t+1}, s_{t+1})\)</span> in a list.
3. Continue to store each experience in this list until we have filled the list to a specific length (up to you to define)
4. Once the experience replay memory is filled, randomly select a subset (e.g. 40)
5. Iterate through this subset and calculate value updates for each; store these in a target array (e.g. <code>y_train</code>) and store the state <span class="math">\(s\)</span> of each memory in <code>X_train</code>
6. Use <code>X_train</code> and <code>y_train</code> as a minibatch for batch training. For subsequent epochs where the array is full, just overwrite old values in our experience replay memory array.</p>
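<p>Steps 4-6 above can be sketched as a helper that samples from the replay memory and builds a minibatch. This is only a sketch under the assumptions of the code above (a <code>model</code> with the 64-element state encoding and the same update rule as the training loop); the function name <code>sample_minibatch</code> is my own, not from the original code:</p>

```python
import random
import numpy as np

def sample_minibatch(replay, model, batchSize=40, gamma=0.975):
    # each stored memory is a tuple (state, action, reward, new_state)
    minibatch = random.sample(replay, batchSize)
    X_train, y_train = [], []
    for old_state, action, reward, new_state in minibatch:
        # recompute Q-values for the stored old and new states
        old_qval = model.predict(old_state.reshape(1,64), batch_size=1)
        newQ = model.predict(new_state.reshape(1,64), batch_size=1)
        maxQ = np.max(newQ)
        # start from the network's predictions; overwrite only the taken action
        y = np.zeros((1,4))
        y[:] = old_qval[:]
        if reward == -1:  # non-terminal state
            update = reward + (gamma * maxQ)
        else:             # terminal state
            update = reward
        y[0][action] = update
        X_train.append(old_state.reshape(64,))
        y_train.append(y.reshape(4,))
    return np.array(X_train), np.array(y_train)
```

The returned arrays would then be fed to a single batched <code>model.fit</code> call instead of fitting on one sample at a time.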
<p>Thus, in addition to learning the action-value for the action we just took, we also train on a random sample of our past experiences, which prevents catastrophic forgetting.</p>
<p>So here's the same training algorithm from above except with experience replay added. Remember, this time we're training it on the harder variant of the game where the player is randomly placed on the grid.</p>
<div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">loss</span><span class="o">=</span><span class="s1">'mse'</span><span class="p">,</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">rms</span><span class="p">)</span> <span class="c1">#re-compile the model (note: this does not actually re-initialize the weights in Keras)</span>
<span class="n">epochs</span> <span class="o">=</span> <span class="mi">3000</span>
<span class="n">gamma</span> <span class="o">=</span> <span class="mf">0.975</span>
<span class="n">epsilon</span> <span class="o">=</span> <span class="mi">1</span>
<span class="n">batchSize</span> <span class="o">=</span> <span class="mi">40</span>
<span class="n">buffer</span> <span class="o">=</span> <span class="mi">80</span>
<span class="n">replay</span> <span class="o">=</span> <span class="p">[]</span>
<span class="c1">#stores tuples of (S, A, R, S')</span>
<span class="n">h</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">epochs</span><span class="p">):</span>
    <span class="n">state</span> <span class="o">=</span> <span class="n">initGridPlayer</span><span class="p">()</span> <span class="c1">#using the harder state initialization function</span>
    <span class="n">status</span> <span class="o">=</span> <span class="mi">1</span>
    <span class="c1">#while game still in progress</span>
    <span class="k">while</span><span class="p">(</span><span class="n">status</span> <span class="o">==</span> <span class="mi">1</span><span class="p">):</span>
        <span class="c1">#We are in state S</span>
        <span class="c1">#Let's run our Q function on S to get Q values for all possible actions</span>
        <span class="n">qval</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
        <span class="k">if</span> <span class="p">(</span><span class="n">random</span><span class="o">.</span><span class="n">random</span><span class="p">()</span> <span class="o"><</span> <span class="n">epsilon</span><span class="p">):</span> <span class="c1">#choose random action</span>
            <span class="n">action</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
        <span class="k">else</span><span class="p">:</span> <span class="c1">#choose best action from Q(s,a) values</span>
            <span class="n">action</span> <span class="o">=</span> <span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">qval</span><span class="p">))</span>
        <span class="c1">#Take action, observe new state S'</span>
        <span class="n">new_state</span> <span class="o">=</span> <span class="n">makeMove</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">)</span>
        <span class="c1">#Observe reward</span>
        <span class="n">reward</span> <span class="o">=</span> <span class="n">getReward</span><span class="p">(</span><span class="n">new_state</span><span class="p">)</span>
        <span class="c1">#Experience replay storage</span>
        <span class="k">if</span> <span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">replay</span><span class="p">)</span> <span class="o"><</span> <span class="n">buffer</span><span class="p">):</span> <span class="c1">#if buffer not filled, add to it</span>
            <span class="n">replay</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">,</span> <span class="n">reward</span><span class="p">,</span> <span class="n">new_state</span><span class="p">))</span>
        <span class="k">else</span><span class="p">:</span> <span class="c1">#if buffer full, overwrite old values</span>
            <span class="k">if</span> <span class="p">(</span><span class="n">h</span> <span class="o"><</span> <span class="p">(</span><span class="n">buffer</span><span class="o">-</span><span class="mi">1</span><span class="p">)):</span>
                <span class="n">h</span> <span class="o">+=</span> <span class="mi">1</span>
            <span class="k">else</span><span class="p">:</span>
                <span class="n">h</span> <span class="o">=</span> <span class="mi">0</span>
            <span class="n">replay</span><span class="p">[</span><span class="n">h</span><span class="p">]</span> <span class="o">=</span> <span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">action</span><span class="p">,</span> <span class="n">reward</span><span class="p">,</span> <span class="n">new_state</span><span class="p">)</span>
<span class="c1">#randomly sample our experience replay memory</span>
<span class="n">minibatch</span> <span class="o">=</span> <span class="n">random</span><span class="o">.</span><span class="n">sample</span><span class="p">(</span><span class="n">replay</span><span class="p">,</span> <span class="n">batchSize</span><span class="p">)</span>
<span class="n">X_train</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">y_train</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">memory</span> <span class="ow">in</span> <span class="n">minibatch</span><span class="p">:</span>
<span class="c1">#Get max_Q(S',a)</span>
<span class="n">old_state</span><span class="p">,</span> <span class="n">action</span><span class="p">,</span> <span class="n">reward</span><span class="p">,</span> <span class="n">new_state</span> <span class="o">=</span> <span class="n">memory</span>
<span class="n">old_qval</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">old_state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">newQ</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">new_state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">64</span><span class="p">),</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">maxQ</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">newQ</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span>
<span class="n">y</span><span class="p">[:]</span> <span class="o">=</span> <span class="n">old_qval</span><span class="p">[:]</span>
<span class="k">if</span> <span class="n">reward</span> <span class="o">==</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="c1">#non-terminal state</span>
<span class="n">update</span> <span class="o">=</span> <span class="p">(</span><span class="n">reward</span> <span class="o">+</span> <span class="p">(</span><span class="n">gamma</span> <span class="o">*</span> <span class="n">maxQ</span><span class="p">))</span>
<span class="k">else</span><span class="p">:</span> <span class="c1">#terminal state</span>
<span class="n">update</span> <span class="o">=</span> <span class="n">reward</span>
<span class="n">y</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="n">action</span><span class="p">]</span> <span class="o">=</span> <span class="n">update</span>
<span class="n">X_train</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">old_state</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">64</span><span class="p">,))</span>
<span class="n">y_train</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">y</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">4</span><span class="p">,))</span>
<span class="n">X_train</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">X_train</span><span class="p">)</span>
<span class="n">y_train</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">y_train</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Game #: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">i</span><span class="p">,))</span>
<span class="n">model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">,</span> <span class="n">batch_size</span><span class="o">=</span><span class="n">batchSize</span><span class="p">,</span> <span class="n">nb_epoch</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">verbose</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">state</span> <span class="o">=</span> <span class="n">new_state</span>
<span class="k">if</span> <span class="n">reward</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="c1">#if reached terminal state, update game status</span>
<span class="n">status</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">clear_output</span><span class="p">(</span><span class="n">wait</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">epsilon</span> <span class="o">></span> <span class="mf">0.1</span><span class="p">:</span> <span class="c1">#decrement epsilon over time</span>
<span class="n">epsilon</span> <span class="o">-=</span> <span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="n">epochs</span><span class="p">)</span>
</pre></div>
<div class="highlight"><pre><span></span>Game #: 2999
Epoch 1/1
40/40 [==============================] - 0s - loss: 0.0018
</pre></div>
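<p>An aside on the buffer bookkeeping: the manual index <code>h</code> above implements a circular (ring) buffer by hand. The same idea can be expressed with Python's <code>collections.deque</code>, which evicts the oldest entry automatically once it reaches its <code>maxlen</code>. This is just a sketch of the replay-memory pattern (the function names here are mine, not from the code above):</p>

```python
import random
from collections import deque

# Sketch of the replay memory above: deque(maxlen=...) overwrites the
# oldest experience automatically, replacing the manual index h.
replay = deque(maxlen=80)  # same capacity as `buffer` in the training loop

def remember(state, action, reward, new_state):
    # oldest entry is dropped automatically once the deque is full
    replay.append((state, action, reward, new_state))

def sample_minibatch(batch_size):
    # only sample once we have enough experiences stored
    if len(replay) < batch_size:
        return []
    return random.sample(replay, batch_size)

# fill the buffer with dummy experiences to show the overwrite behavior
for t in range(100):
    remember(t, 0, -1, t + 1)
print(len(replay))   # capped at 80
print(replay[0][0])  # oldest surviving experience is state 20
```

<p>With a deque there is no index to reset, so the "if buffer full, overwrite old values" branch disappears entirely.</p>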
<p>I've increased the number of training epochs to 3000 based on empirical testing. So let's see how it does: we'll run our <code>testAlgo()</code> function a couple of times to see how it handles randomly initialized player positions.</p>
<div class="highlight"><pre><span></span><span class="n">testAlgo</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span> <span class="c1">#run testAlgo using random player placement => initGridPlayer()</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="nt">Initial</span><span class="w"> </span><span class="nt">State</span><span class="o">:</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">2</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">'P'</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">2</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Reward</span><span class="o">:</span><span class="w"> </span><span class="nt">10</span>
</pre></div>
<p>Fantastic. Let's run the <code>testAlgo()</code> one more time just to prove it has generalized.</p>
<div class="highlight"><pre><span></span><span class="n">testAlgo</span><span class="p">(</span><span class="n">init</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="c1">#Of course, I ran it many times more than I'm showing here</span>
</pre></div>
<div class="highlight"><pre><span></span><span class="nt">Initial</span><span class="w"> </span><span class="nt">State</span><span class="o">:</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">2</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">'P'</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">2</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">0</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">4</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">3</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'P'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">'+'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Move</span><span class="w"> </span><span class="err">#</span><span class="o">:</span><span class="w"> </span><span class="nt">5</span><span class="o">;</span><span class="w"> </span><span class="nt">Taking</span><span class="w"> </span><span class="nt">action</span><span class="o">:</span><span class="w"> </span><span class="nt">1</span>
<span class="cp">[</span><span class="err">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">'-'</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">'W'</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span>
<span class="w"> </span><span class="cp">[</span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="w"> </span><span class="s1">' '</span><span class="cp">]</span><span class="o">]</span>
<span class="nt">Reward</span><span class="o">:</span><span class="w"> </span><span class="nt">10</span>
</pre></div>
<p>I'll be darned. It seems to have learned to play the game from any starting position! Pretty neat.</p>
<h3>The Hardest Variant</h3>
<p>Okay, I lied. I will not be showing you the algorithm learning the hardest variant of the game (where all 4 elements are randomly placed on the grid each game). I'm leaving that for you to attempt; let me know how it goes via email (outlacedev@gmail.com). The reason is that I'm doing all this on a MacBook Air (read: no CUDA GPU), so I cannot train the algorithm for enough epochs for it to learn the problem. I suspect it may require significantly more epochs, perhaps more than 50,000. So if you have an NVIDIA GPU and can train it that long, let me know if it works. I could have used Lua/Torch7, since there is an OpenCL version, but no one would read this if it wasn't in Python =P.</p>
<h3>Conclusion</h3>
<p>There you have it, basic Q-learning using neural networks.</p>
<p>That was a lot to go through, hopefully I didn't make too many mistakes (as always, email if you spot any so I can post corrections). I'm hoping you have success training Q-learning algorithms on more interesting problems than the gridworld game.</p>
<p>I'd say this is definitely the climax of the series on reinforcement learning. I plan to release a part 4 about other temporal difference learning algorithms that use eligibility traces. Since that's a relatively minor new concept, I will likely demonstrate it on another toy problem like Gridworld. I do, at some point, want to write a post about setting up the Arcade Learning Environment (ALE) [fmr. Atari Learning Environment] and training an algorithm to play Atari games, but that will likely be a long while from now, so don't hold your breath.</p>
<p>Cheers</p>
<h3>Download this IPython Notebook</h3>
<p><a href="https://github.com/outlace/outlace.github.io/blob/master/notebooks/rlpart3.ipynb">https://github.com/outlace/outlace.github.io/blob/master/notebooks/rlpart3.ipynb</a></p>
<h3>Download the Gridworld Game</h3>
<p><a href="https://github.com/outlace/Gridworld">https://github.com/outlace/Gridworld</a></p>
<h3>References</h3>
<ol>
<li>http://www.computervisiontalks.com/deep-learning-lecture-16-reinforcement-learning-and-neuro-dynamic-programming-nando-de-freitas/</li>
<li>https://www.youtube.com/watch?v=yNeSFbE1jdY</li>
<li>http://www.researchgate.net/profile/Marco_Wiering/publication/236645821_Reinforcement_Learning_to_Train_Ms._Pac-Man_Using_Higher-order_Action-relative_Inputs/links/0deec518a22042f5d7000000.pdf?inViewer=true&pdfJsDownload=true&disableCoverPage=true&origin=publication_detail</li>
<li>"Reinforcement Learning: An Introduction," Sutton & Barto, 1998</li>
<li>"Human-level control through deep reinforcement learning," Mnih et al., 2015 (Google DeepMind Atari paper)</li>
</ol>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>
<h3>Reinforcement Learning - Monte Carlo Methods</h3>
<p>2015-10-25, by Brandon Brown</p>
<p>Part 2 of the RL series: a slightly deeper dive into reinforcement learning, in which we use Monte Carlo simulation to learn to play blackjack.</p>
<h3>Playing Blackjack with Monte Carlo Methods</h3>
<h5>Introduction</h5>
<p>In part 1, we considered a very simple problem, the n-armed bandit problem, and devised an appropriately simple algorithm to solve it (<span class="math">\(\epsilon\)</span>-greedy evaluation). In that case, the problem has only a single state: a choice among 10 actions with stationary reward probability distributions. Let's up the ante a bit and consider a more interesting problem with multiple (yet finitely many) states: the card game blackjack (a.k.a. 21). Hunker down, this is a long one.</p>
<p>Rules and game-play of blackjack (check out https://www.youtube.com/watch?v=qd5oc9hLrXg if necessary):
1. There is a dealer and 1 or more players that independently play against the dealer.
2. Each player is dealt 2 cards face-up. The dealer is dealt two cards, one face-up, one face-down.
3. The goal is to get the sum of your cards value to be as close to 21 as possible without going over.
4. After the initial cards are dealt, each player can choose to 'stay' or 'hit' (ask for another card).
5. The dealer always follows this policy: hit until cards sum to 17 or more, then stay.
6. If the dealer's total is closer to 21 than the player's, the dealer wins and the player loses, and vice versa; going over 21 ("busting") is an automatic loss.</p>
<p>So what's the state space for this problem? It's relatively large, much much larger than the single state in n-armed bandit. In reinforcement learning, a state is all information available to the agent (the decision maker) at a particular time <span class="math">\(t\)</span>. The reason why the n-armed bandit state space includes just 1 state is because the agent is only aware of the same 10 actions at any time, no new information is available nor do the actions change.</p>
<p>So what are all the possible combinations of information available to the agent (the player) in blackjack? Well, the player starts with two cards, so there is the combination of all 2 playing cards. Additionally, the player knows one of the two cards that the dealer has. Thus, there are a lot of possible states (around 200). As with any RL problem, our ultimate goal is to find the best <em>policy</em> to maximize our rewards. </p>
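<p>That "around 200" figure is easy to sanity-check with a quick enumeration. Here is a minimal sketch, assuming the usual convention of counting only player totals 12 through 21 (below 12 a hit can never bust, so those states are uninteresting), a useable-ace flag, and the dealer's face-up card (ace through 10):</p>

```python
from itertools import product

# Player totals where the stay/hit decision matters: below 12 a hit
# can never bust you, so those states are usually collapsed away.
player_totals = range(12, 22)   # 12..21
useable_ace = (True, False)
dealer_showing = range(1, 11)   # ace counted as 1, then 2..10

states = list(product(player_totals, useable_ace, dealer_showing))
print(len(states))  # 200
```

<p>Small enough for an exact lookup table, but already 200 times bigger than the bandit problem's single state.</p>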
<p>A policy is roughly equivalent to a strategy. There are reinforcement learning methods that essentially rely on brute force to compute every possible action-state pair (every possible action in a given state) and the rewards received to find an optimal policy, but for most of the problems we care about, the state-action space is much too large for brute force methods to be computationally feasible. Thus we must rely on experience, i.e. playing the game, trying out various actions and learning what seems to result in the greatest reward returns; and we need to devise an algorithm that captures this experiential learning process.</p>
<p>The most important take-aways from parts 1 and 2 are the concepts of state values, state-action values, and policies. Reinforcement learning is in the business of determining the value of states or of actions taken in a state. In our case, we will primarily concern ourselves with action values (the value of an action taken in a given state) because they make choosing an optimal action more intuitive. I find the value of being in a given state less intuitive because the value of a state depends on your policy. For example, what is the value of being in a state of a blackjack game where your cards total to 20? Most people would say that's a pretty good position to be in, but it's only a good state if your policy is to stay and not hit. If your policy is to hit when you have 20 (obviously a bad policy), then that state isn't very good. On the other hand, we can ask what the value of hitting is when I have 20 versus the value of staying when I have 20, and then just choose whichever action has the highest value. Of course staying would produce the highest value in this state (on average).</p>
<p>Our main computational effort, therefore, is in iteratively improving our estimates for the values of states or state-action pairs. In parts 1 and 2, we keep track of every single state-action pair we encounter, and record the rewards we receive for each and average them over time. Thus, over many iterations, we go from knowing nothing about the value of state-actions to knowing enough to be able to choose the highest value actions. Problems like the n-armed bandit problem and blackjack have a small enough state or state-action space that we can record and average rewards in a lookup table, giving us the exact average rewards for each state-action pair. Most interesting problems, however, have a state space that is continuous or otherwise too large to use a lookup table. That's when we must use function approximation (e.g. neural networks) methods to serve as our <span class="math">\(Q\)</span> function in determining the value of states or state-actions. We will have to wait for part 3 for neural networks.</p>
<h4>Learning with Markov Decision Processes</h4>
<p>A Markov decision process (MDP) is a decision that can be made knowing only the current state, without knowledge of or reference to previous states or the path taken to the current state. That is, the current state contains enough information to choose optimal actions to maximize future rewards. Most RL algorithms assume that the problems to be learned are (at least approximately) Markov decision processes. Blackjack is clearly an MDP because we can play the game successfully by just knowing our current state (i.e. what cards we have + the dealer's one face-up card). Google DeepMind's deep Q-learning algorithm learned to play Atari games from just raw pixel data and the current score. Does raw pixel data and the score satisfy the Markov property? Not exactly. Say the game is Pacman, if our state is the raw pixel data from our current frame, we have no idea if that enemy a few tiles away is approaching us or moving away from us, and that would strongly influence our choice of actions to take. This is why DeepMind's implementation actually feeds in the last 4 frames of gameplay, effectively changing a non-Markov decision process into an MDP. With the last 4 frames, the agent has access to the direction and speed of each enemy (and itself).</p>
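<p>The frame-stacking trick above is easy to sketch. This is not DeepMind's actual code, just an illustration of the idea: a fixed-length <code>deque</code> keeps only the 4 most recent frames, and the stack of all 4 is what the agent treats as its state.</p>

```python
from collections import deque

FRAME_HISTORY = 4

# Keep only the most recent frames; old ones fall off automatically.
frames = deque(maxlen=FRAME_HISTORY)

def observe(new_frame):
    """Add the latest raw frame and return the stacked state."""
    frames.append(new_frame)
    # Pad by repeating the oldest frame until we have a full history.
    while len(frames) < FRAME_HISTORY:
        frames.appendleft(frames[0])
    return tuple(frames)  # the agent's (approximately Markov) state

state = observe("frame_t")  # in practice each frame is a pixel array
```

<p>Because the deque has <code>maxlen=4</code>, appending a fifth frame silently drops the oldest one, so the stacked state always carries enough recent history to recover direction and speed.</p>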
<h4>Terminology & Notation Review</h4>
<ol>
<li><span class="math">\(Q_k(s, a)\)</span> is the function that accepts an action and state and returns the value of taking that action in that state at time step <span class="math">\(k\)</span>. This is fundamental to RL. We need to know the relative values of every state or state-action pair.</li>
<li><span class="math">\(\pi\)</span> is a policy, a stochastic strategy or rule to choose action <span class="math">\(a\)</span> given a state <span class="math">\(s\)</span>. Think of it as a function, <span class="math">\(\pi(s)\)</span>, that accepts state, <span class="math">\(s\)</span> and returns the action to be taken. There is a distinction between the <span class="math">\(\pi(s)\)</span> <em>function</em> and a specific policy <span class="math">\(\pi\)</span>. Our implementation of <span class="math">\(\pi(s)\)</span> as a function is often to just choose the action <span class="math">\(a\)</span> in state <span class="math">\(s\)</span> that has the highest average return based on historical results, <span class="math">\(argmaxQ(s,a)\)</span>. As we gather more data and these average returns become more accurate, the actual policy <span class="math">\(\pi\)</span> may change. We may start out with a policy of "hit until total is 16 or more then stay" but this policy may change as we gather more data. Our implemented <span class="math">\(\pi(s)\)</span> function, however, is programmed by us and does not change.</li>
<li><span class="math">\(G_t\)</span>, return. The expected cumulative reward from starting in a given state until the end of an episode (i.e. game play), for example. In our case we only give a reward at the end of the game, there are no rewards at each time step or move.</li>
<li>Episode: The full sequence of steps leading to a terminal state and receiving a return. E.g. from the beginning of a blackjack game until the terminal state (someone winning) constitutes an episode of play.</li>
<li><span class="math">\(v_\pi\)</span>, a function that determines the value of a state given a policy <span class="math">\(\pi\)</span>. We do not really concern our selves with state values here, we focus on action values.</li>
</ol>
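<p>As a concrete illustration of point 2 above, here is a minimal sketch (the Q values are hypothetical, not measured): if <code>Q</code> is a dictionary mapping (state, action) pairs to average returns, the <span class="math">\(\pi(s)\)</span> function is just an argmax over the available actions.</p>

```python
# Hypothetical Q values for one blackjack state: player total 20, no
# useable ace, dealer showing a 6. Actions: 0 = stay, 1 = hit.
Q = {
    ((20, False, 6), 0):  0.75,   # staying on 20 has paid well
    ((20, False, 6), 1): -0.85,   # hitting on 20 usually busts
}

def pi(state, actions=(0, 1)):
    """Greedy policy: choose the action with the highest estimated value."""
    return max(actions, key=lambda a: Q[(state, a)])

print(pi((20, False, 6)))  # 0, i.e. stay
```

<p>The <code>pi</code> function itself never changes; only the numbers inside <code>Q</code> do, which is exactly the distinction drawn above between the implemented function and the policy it induces.</p>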
<h3>Monte Carlo & Tabular Methods</h3>
<p>Monte Carlo is going to feel very familiar: as with the n-armed bandit problem from part 1, we will store the history of our state-action pairs and their associated values in a table, and then refer to this table during learning to calculate our expected rewards, <span class="math">\(Q_k\)</span>.</p>
<p>From Wikipedia, Monte Carlo methods "rely on repeated random sampling to obtain numerical results." We'll use random sampling of states and state-action pairs, observe the rewards, and then iteratively revise our policy, which will hopefully converge on the optimal policy as we explore every possible state-action pair.</p>
<p>Here are some important points:</p>
<ol>
<li>We will assign a reward of +1 for winning a round of blackjack, -1 for losing, and 0 for a draw.</li>
<li>We will establish a table (a Python dictionary) where each key corresponds to a particular state-action pair and each value is the value of that pair, i.e. the average reward received for taking that action in that state.</li>
<li>The state consists of the player's card total, whether or not the player has a useable ace, and the dealer's one face-up card.</li>
</ol>
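<p>These three points translate almost directly into code. A minimal sketch (the names here are my own, not those of the implementation below): a dictionary maps each (state, action) key to a running average of observed rewards, updated incrementally so we never need to store the full reward history.</p>

```python
Q = {}       # (state, action) -> average observed reward
counts = {}  # (state, action) -> number of times we've tried it

def update(state, action, reward):
    """Fold one observed reward (+1 win, -1 loss, 0 draw) into the average."""
    key = (state, action)
    n = counts.get(key, 0) + 1
    counts[key] = n
    old = Q.get(key, 0.0)
    # Running-mean update: new_avg = old_avg + (x - old_avg) / n
    Q[key] = old + (reward - old) / n

# e.g. state = (player total, useable ace, dealer's face-up card)
update((14, False, 10), 1, -1)   # hit on 14 vs a 10 showing; lost
update((14, False, 10), 1,  1)   # same action later; won
print(Q[((14, False, 10), 1)])   # 0.0, the average of -1 and +1
```

<p>The incremental form gives exactly the same answer as summing every reward and dividing, but uses constant memory per state-action pair.</p>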
<h3>Blackjack Game Implementation</h3>
<p>Below I've implemented a blackjack game. I think I've commented it well enough to be understood, but it's not critical that you understand the game implementation since we're just concerned with how to learn to play the game with machine learning.</p>
<p>This implementation is completely functional and stateless: it is just a group of functions that accept data, transform it, and return new data. I intentionally avoided OOP classes because I think they complicate things, and functional-style programming is useful in machine learning (see my post about computational graphs to learn more). It is particularly useful in our case because it demonstrates how blackjack is an MDP. The game does not store any information; it is stateless. It merely accepts states and returns new states. The player is responsible for saving states if they want.</p>
<p>The state is just a Python tuple where the first element is the player's card total and the second is a boolean of whether or not the player has a useable ace. The third element is the card total for the dealer, followed by another boolean of whether or not it's a useable ace. The last element is a single integer that represents the status of the state (whether the game is in progress, the player has won, the dealer has won, or it was a draw).</p>
<p>We could actually implement this in a more direct way where we just store each player's cards and not whether or not they have a useable ace (useable meaning the ace can count as 11 without busting, i.e. going over 21, since aces in blackjack can count as either 1 or 11). However, as you'll see, storing the player's card total and a useable-ace boolean is equivalent and compresses our state space (without losing any information), so we can have a smaller lookup table.</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">math</span>
<span class="kn">import</span> <span class="nn">random</span>
<span class="c1">#each value card has a 1:13 chance of being selected (we don't care about suits for blackjack)</span>
<span class="c1">#cards (value): Ace (1), 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack (10), Queen (10), King (10)</span>
<span class="k">def</span> <span class="nf">randomCard</span><span class="p">():</span>
<span class="n">card</span> <span class="o">=</span> <span class="n">random</span><span class="o">.</span><span class="n">randint</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">13</span><span class="p">)</span>
<span class="k">if</span> <span class="n">card</span> <span class="o">></span> <span class="mi">10</span><span class="p">:</span>
<span class="n">card</span> <span class="o">=</span> <span class="mi">10</span>
<span class="k">return</span> <span class="n">card</span>
<span class="c1">#A hand is just a tuple e.g. (14, False), a total card value of 14 without a useable ace</span>
<span class="c1">#accepts a hand, if the Ace can be an 11 without busting the hand, it's useable</span>
<span class="k">def</span> <span class="nf">useable_ace</span><span class="p">(</span><span class="n">hand</span><span class="p">):</span>
<span class="n">val</span><span class="p">,</span> <span class="n">ace</span> <span class="o">=</span> <span class="n">hand</span>
<span class="k">return</span> <span class="p">((</span><span class="n">ace</span><span class="p">)</span> <span class="ow">and</span> <span class="p">((</span><span class="n">val</span> <span class="o">+</span> <span class="mi">10</span><span class="p">)</span> <span class="o"><=</span> <span class="mi">21</span><span class="p">))</span>
<span class="k">def</span> <span class="nf">totalValue</span><span class="p">(</span><span class="n">hand</span><span class="p">):</span>
<span class="n">val</span><span class="p">,</span> <span class="n">ace</span> <span class="o">=</span> <span class="n">hand</span>
<span class="k">if</span> <span class="p">(</span><span class="n">useable_ace</span><span class="p">(</span><span class="n">hand</span><span class="p">)):</span>
<span class="k">return</span> <span class="p">(</span><span class="n">val</span> <span class="o">+</span> <span class="mi">10</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">return</span> <span class="n">val</span>
<span class="k">def</span> <span class="nf">add_card</span><span class="p">(</span><span class="n">hand</span><span class="p">,</span> <span class="n">card</span><span class="p">):</span>
<span class="n">val</span><span class="p">,</span> <span class="n">ace</span> <span class="o">=</span> <span class="n">hand</span>
<span class="k">if</span> <span class="p">(</span><span class="n">card</span> <span class="o">==</span> <span class="mi">1</span><span class="p">):</span>
<span class="n">ace</span> <span class="o">=</span> <span class="kc">True</span>
<span class="k">return</span> <span class="p">(</span><span class="n">val</span> <span class="o">+</span> <span class="n">card</span><span class="p">,</span> <span class="n">ace</span><span class="p">)</span>
<span class="c1">#The first is first dealt a single card, this method finishes off his hand</span>
<span class="k">def</span> <span class="nf">eval_dealer</span><span class="p">(</span><span class="n">dealer_hand</span><span class="p">):</span>
<span class="k">while</span> <span class="p">(</span><span class="n">totalValue</span><span class="p">(</span><span class="n">dealer_hand</span><span class="p">)</span> <span class="o"><</span> <span class="mi">17</span><span class="p">):</span>
<span class="n">dealer_hand</span> <span class="o">=</span> <span class="n">add_card</span><span class="p">(</span><span class="n">dealer_hand</span><span class="p">,</span> <span class="n">randomCard</span><span class="p">())</span>
<span class="k">return</span> <span class="n">dealer_hand</span>
<span class="c1">#state: (player total, useable_ace), (dealer total, useable ace), game status; e.g. ((15, True), (9, False), 1)</span>
<span class="c1">#stay or hit => dec == 0 or 1</span>
<span class="k">def</span> <span class="nf">play</span><span class="p">(</span><span class="n">state</span><span class="p">,</span> <span class="n">dec</span><span class="p">):</span>
<span class="c1">#evaluate</span>
<span class="n">player_hand</span> <span class="o">=</span> <span class="n">state</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="c1">#val, useable ace</span>
<span class="n">dealer_hand</span> <span class="o">=</span> <span class="n">state</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="k">if</span> <span class="n">dec</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span> <span class="c1">#action = stay</span>
<span class="c1">#evaluate game; dealer plays</span>
<span class="n">dealer_hand</span> <span class="o">=</span> <span class="n">eval_dealer</span><span class="p">(</span><span class="n">dealer_hand</span><span class="p">)</span>
<span class="n">player_tot</span> <span class="o">=</span> <span class="n">totalValue</span><span class="p">(</span><span class="n">player_hand</span><span class="p">)</span>
<span class="n">dealer_tot</span> <span class="o">=</span> <span class="n">totalValue</span><span class="p">(</span><span class="n">dealer_hand</span><span class="p">)</span>
<span class="n">status</span> <span class="o">=</span> <span class="mi">1</span>
<span class="k">if</span> <span class="p">(</span><span class="n">dealer_tot</span> <span class="o">>&l