Magical Machine Learning


CGP Grey’s video describes something fundamental about machine learning that I find fascinating: with neural networks in particular, even though we understand the process of training them until they produce correct results, we can’t really explain their “thought process”.

I wish I’d remembered this video a couple of months ago. It would have been interesting to think about while working on our Makers final project on machine learning - training a convolutional neural network to do facial expression recognition.

In a neural network, you have your input layer and your output layer, and in between there are hidden layers. That’s where the magic comes in. So if we find that our model is giving us unexpected results, how would we debug it? If it were an ordinary piece of code, I would try to gain visibility with print statements or something similar.
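Here is a minimal sketch of that input → hidden → output structure (in PyTorch; the network and its sizes are just made up for illustration, not our actual project). You *can* print the hidden activations, but unlike a print statement in ordinary code, the numbers don’t explain themselves:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy feedforward network: input layer -> hidden layer -> output layer."""

    def __init__(self, n_inputs=4, n_hidden=8, n_outputs=3):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)   # hidden layer ("the magic")
        self.output = nn.Linear(n_hidden, n_outputs)  # output layer

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        # Visibility, but not understanding: these numbers are easy to print
        # and hard for a human to interpret.
        print("hidden activations:", h)
        return self.output(h)

net = TinyNet()
x = torch.rand(1, 4)          # one example with 4 input features
print("output:", net(x))
```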

It’s kind of fitting, really, seeing that neural networks are modelled on biological neurons. We can explain how a single neuron, or a bunch of neurons, works in a mechanical sense, but that doesn’t explain how the billions of neurons in a biological brain together give rise to consciousness. Like our animal brains, artificial neural networks can be extremely complex, and we may never fully understand how they work inside; we just know that they do.
