Deep Reinforcement Learning in Action (Announcement)
I'm co-authoring a book on Deep Reinforcement Learning!
Tensor networks can be seen as a higher-order generalization of traditional deep neural networks, yet they lack an explicit non-linearity such as the ReLU or sigmoid functions we apply in neural nets. A closer look at what non-linearity actually means, however, reveals that tensor networks can indeed learn non-linear functions: their non-linearity arises solely from the architecture and topology of the network itself.
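To make that concrete, here is a minimal toy sketch (my own illustration, not code from the post): a two-site tensor contraction over the trivial local feature map φ(x) = [1, x] already produces the cross term x1·x2, so the model is non-linear in the raw input even though no activation function is ever applied.

```python
import numpy as np

def phi(x):
    """Local feature map applied to each scalar input component."""
    return np.array([1.0, x])

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))  # weight tensor connecting the two sites

def tn_model(x1, x2):
    """Contract the weight tensor with the feature-mapped inputs.

    f(x1, x2) = sum_ij W[i, j] * phi(x1)[i] * phi(x2)[j]
              = W00 + W01*x2 + W10*x1 + W11*x1*x2   <- cross term = non-linearity
    """
    return np.einsum('ij,i,j->', W, phi(x1), phi(x2))

# Non-linearity check: f(a + b) != f(a) + f(b) in general.
a, b = (1.0, 2.0), (0.5, -1.0)
print(tn_model(*a) + tn_model(*b))
print(tn_model(a[0] + b[0], a[1] + b[1]))
```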
Some announcements about this blog's migration to a different static site generator as well as some new GitHub repos I've created.
In part 5 we combine everything we've learned and compute persistent homology barcodes from raw data.
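As a quick taste of the end result (a shortcut using the off-the-shelf `ripser` package, assumed installed, rather than the from-scratch computation developed in the series): sample noisy points from a circle and compute the H0/H1 persistence diagrams of its Vietoris-Rips filtration.

```python
import numpy as np
from ripser import ripser  # third-party library, used here only for illustration

# Raw data: noisy samples from a circle in the plane.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=100)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += 0.05 * rng.normal(size=points.shape)

# Persistence diagrams (birth/death pairs) up to dimension 1.
dgms = ripser(points, maxdim=1)['dgms']
print("H0 intervals:", len(dgms[0]))                       # components merging as the scale grows
print("Longest H1 bar:", max(d - b for b, d in dgms[1]))   # one long bar -> the circle's loop
```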
In part 4 we use linear algebra to build out simple algorithms to efficiently calculate homology groups and Betti numbers.
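As a concrete illustration (my own minimal example, not the post's code), the Betti numbers of the hollow triangle can be read straight off the ranks of its boundary matrices via beta_k = dim C_k - rank d_k - rank d_(k+1).

```python
import numpy as np

# Hollow triangle: vertices {a, b, c}, edges {ab, bc, ca}, no 2-simplices.
# Boundary matrix d1: rows = vertices (a, b, c), columns = edges (ab, bc, ca).
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
d2 = np.zeros((3, 0))  # no 2-simplices, so d2 is the zero map from an empty chain group

rank_d1 = np.linalg.matrix_rank(d1)                     # = 2
rank_d2 = np.linalg.matrix_rank(d2) if d2.size else 0   # = 0

betti_0 = 3 - 0 - rank_d1        # dim C_0 - rank d_0 - rank d_1 (d_0 is the zero map)
betti_1 = 3 - rank_d1 - rank_d2  # dim C_1 - rank d_1 - rank d_2

print(betti_0, betti_1)  # 1 1 -> one connected component, one 1-dimensional hole
```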
In part 3 we start calculating homology groups and Betti numbers of simplicial complexes.