What is UAT?
What exactly is the Universal Approximation Theorem? Well, put in layman's terms, the UAT states that a neural network with a single hidden layer, given enough neurons, can approximate (or closely simulate) any continuous function within a given input range. In other words, a one-hidden-layer neural network is the ultimate flexible function approximator. Maybe a little too flexible.
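To make this concrete, here is a minimal sketch in NumPy (the activation, layer size, and target function are my own choices for illustration, and for simplicity only the output layer is fitted, via least squares) showing a one-hidden-layer network approximating sin(x) on [-pi, pi]:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 50  # "enough neurons" for this simple target

# Training data: a continuous target function on a bounded interval.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# Hidden layer with random weights: each neuron contributes one tanh "bump".
W1 = rng.normal(0, 2, (1, n_hidden))
b1 = rng.normal(0, 2, n_hidden)
H = np.tanh(x @ W1 + b1)          # hidden activations, shape (200, n_hidden)

# Fit the output weights by least squares: the network output is H @ W2.
W2, *_ = np.linalg.lstsq(H, y, rcond=None)

mse = float(np.mean((H @ W2 - y) ** 2))
print(f"approximation MSE: {mse:.2e}")
```

Even this crude setup drives the mean squared error very close to zero, which is the theorem in miniature: a single hidden layer of simple nonlinear units, combined linearly, can mimic an arbitrary continuous function on the interval.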
A Lesson Learned
**Because of this flexibility, the Universal Approximation Theorem used to push AI researchers to focus mostly on shallow neural networks, which in some ways hindered the progress of deep learning.** This is interesting. Come to think of it, a "shallow and wide" neural net tends to "remember" all the features it needs to approximate the target function. Deeper networks, by contrast, extract more abstract features and find patterns that apply across many parts of the dataset. They obviously generalize better, and achieve better results with less computational power.
Going Deeper
What does this sound like to you if you're a software developer? **"Code refactoring"!** Developers refactor their code to move repetitive snippets into functions and reuse them as much as possible. Cleaner code is usually better code. Deep neural networks do something similar: by having more layers, a network can "refactor" itself and learn more general patterns, making it more efficient at achieving the same goal. This leads to better models, in both performance and efficiency.
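As a rough illustration of this efficiency (the layer sizes below are made up for the example, not taken from any real model), we can compare the parameter counts of a wide one-hidden-layer MLP and a deeper, narrower one:

```python
def mlp_param_count(layer_sizes):
    # Each pair of consecutive layers contributes a weight matrix plus biases.
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# 100 inputs, 10 outputs; one wide hidden layer vs. three narrow ones.
wide = mlp_param_count([100, 1024, 10])
deep = mlp_param_count([100, 128, 128, 128, 10])
print(wide, deep)  # → 113674 47242
```

The deeper network here has well under half the parameters of the wide one. Like refactored code, depth lets the network build on reusable intermediate "functions" (earlier layers) rather than paying for one enormous lookup-style layer.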
Software 1.0 vs. Software 2.0
What other software development techniques can we apply to machine learning? More precisely, which lessons from "Software 1.0" can be applied to "Software 2.0"? (If you are not familiar with the concept of "Software 2.0", I highly recommend you watch the video below from Andrej Karpathy. It is not entirely applicable to everything, but it is definitely worth noting and backed by Tesla's success!)
According to Karpathy, what we currently do in software engineering, where talented people write code to complete tasks and solve problems, is "Software 1.0": humans contribute to the process by directly telling the computer how to perform every single step. In the new paradigm of "Software 2.0", where machine learning and deep learning are widely adopted, humans contribute by providing a huge number of examples of people doing something, in the form of a dataset, and the computer, along with its models, figures out how to do the task automatically. In fact, a good amount of Tesla's Autopilot system is powered by deep learning models.
Some people are still skeptical about whether there is a bright future for the "Software 2.0" approach, and our path from 1.0 to 2.0 is still up for debate. Even so, re-applying the wisdom of traditional software engineering is a direction well worth exploring for machine learning researchers and practitioners. Fun time!
If you want to know more about Universal Approximation Theorem, you can refer to my article below:
Written on January 3, 2021 by Michael Li.
Originally published on Medium