Why Quaternion Identity?
The Quaternion Identity is very powerful & really quite simple - in 3D space it represents zero rotation, a place where an object is ‘perfectly aligned’ with the world around it. But it sounds far more complicated on first exposure - when I taught VR coding, it always silenced the room & intimidated students, newbies & experienced developers alike. In many ways, it reflects a lot of technology today - seemingly complex at first glance, but explainable in simpler, more accessible terms. That is what I am aiming for on my blog - meaningful simplicity in understanding technology concepts. In doing so, I hope to better understand some of these concepts myself … & if the n=2 people who read my blog benefit as well, then that is a definite upside!
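The "zero rotation" idea can be seen directly in code. Below is a minimal sketch (my own helper names `qmul` and `rotate`, not any particular engine's API) using quaternions stored as `(w, x, y, z)` tuples and the standard rotation formula `q · p · q*`:

```python
def qmul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate a 3D vector v by the unit quaternion q via q * p * conj(q)."""
    p = (0.0,) + tuple(v)                      # embed v as a pure quaternion
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = qmul(qmul(q, p), q_conj)
    return (x, y, z)

IDENTITY = (1.0, 0.0, 0.0, 0.0)  # the quaternion identity: zero rotation

# Rotating by the identity hands the vector straight back - 'perfectly aligned'.
print(rotate(IDENTITY, (1.0, 2.0, 3.0)))
```

Multiplying any quaternion by the identity also leaves it unchanged, which is exactly the sense in which it plays the role that 1 plays for ordinary multiplication.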
Evaluating GANs (Generative Adversarial Networks) is difficult – unlike classification problems, there is no final accuracy metric to compare against. For my OpenAI Spring Scholars project, I focused on different ways to understand & evaluate image-synthesis GANs, using the approach of Distill’s Activation Atlas.
When I first heard about NST (Neural Style Transfer) & the idea of ‘extracting style’ from an image, I was deeply suspicious - how can an algorithm define style? Isn’t style the essence of human creativity & artistic expression?… It turns out - pretty easily, actually. (Well, extracting sufficient style features to successfully apply them in a pastiche is pretty easy - defining human creativity is a different topic for another time!)
or how a Muggle can perform Math Magic...
I recently had to perform a large amount of dimensionality reduction, & so needed to consider which technique to use - in the end I went with UMAP. It is a relatively new technique, so I figured that putting down some thoughts might be of interest.
Generative models can be described in ML terms as models that learn a data distribution using unsupervised learning. Examples you might have seen include removing watermarks, transforming zebras into horses (and vice versa), and creating pictures of people who don't exist, among others. When I started diving into this field, the range of methods, as well as what they could do, was confusing to me. After a lot of research, the simple taxonomy developed by Ian Goodfellow remains the clearest guide I have found.
I am by no means an expert in ML. However, I am a former consultant and a newcomer to reading ML papers, in a program that requires reading a lot of them. So you could say that I am an expert in dealing with complicated content that I am not well versed in ;-) So, this week I thought I would put down the tips, tricks, hacks & approaches that have helped me in tackling ML research papers.
This week I completed Assignment 2 from the awesome Stanford CS231n course. This included implementing (among other things) vectorized backpropagation, batch & layer normalization, & building a CNN to train on CIFAR-10, both in vanilla Python and TensorFlow. Implementing batch normalization - particularly the backward pass - was one of the more surprising parts of the assignment, so I thought I would write about it here.
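To give a flavour of why the backward pass is surprising, here is a minimal NumPy sketch of batch norm (my own function names and cache layout, not the assignment's exact interface). The twist is that the batch mean & variance are themselves functions of `x`, so their gradients fold back into `dx` rather than `dx` simply being `dout * gamma / std`:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalize
    out = gamma * x_hat + beta               # learnable scale & shift
    return out, (x_hat, gamma, var, eps)

def batchnorm_backward(dout, cache):
    x_hat, gamma, var, eps = cache
    N = dout.shape[0]
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    # The surprising part: because mu and var depend on every row of x,
    # the gradient through them subtracts the batch-wide correction terms.
    dx = (1.0 / (N * np.sqrt(var + eps))) * (
        N * dx_hat
        - dx_hat.sum(axis=0)
        - x_hat * (dx_hat * x_hat).sum(axis=0))
    return dx, dgamma, dbeta
```

A nice sanity check: with `gamma=1, beta=0` the output has (approximately) zero mean and unit variance per feature, and a constant upstream gradient produces `dx ≈ 0` - the normalization makes the layer invariant to uniform shifts.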