Came across this piece talking about the current **#machinelearning** status, by **Mark Saroufim**. Some of the views are a bit controversial, but it makes good points nonetheless.
Machine Learning: The Great Stagnation
I’ll summarize the key points below: 🧵👇
- ML researchers are supposed to take risks and be less commercially oriented so that groundbreaking progress can be made. Instead, many have found ways to avoid risk while still getting paid well, through FAANG jobs, media, YouTube, and SOTA chasing.
- Math is overrated in deep learning. Matrix multiplication is mostly what you need, and autograd removes the real need for manual gradient calculation (see the short autograd sketch after this list). Be real now.
- The empiricist tendency of DL has formed a feedback loop: whoever has more computing power can run more experiments in parallel and wins, thus gaining more resources to run even more experiments. See Google Brain, OpenAI, DeepMind.
- Transformers are everywhere and hugely popular; one architecture has become the default starting point across tasks.
- Graduate Student Descent. Again, having more people running experiments leads to faster breakthroughs. It also encourages cargo-culting configs, loss functions, and frameworks, which makes progress possible but less well thought out.
- Good innovations in ML:
  - Keras, fast.ai: user-centric and layered by design; software engineering for ML is a good direction.
  - Julia: differentiable computing from the ground up (Swift may get there too).
  - HuggingFace: hugely popular NLP models and tooling (see the pipeline sketch below).
  - Hasktorch: Haskell-based and elegant, less well known but still developing.
  - Unity ML-Agents: great for RL.
  - AlphaFold2: groundbreaking.
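On the autograd point above, here is a minimal sketch of what "no manual gradient calculation" means in practice. PyTorch is my choice for illustration (the thread doesn't name a specific framework); the function and values are made up for the example.

```python
import torch

# y = x^2 + 3x; autograd computes dy/dx = 2x + 3 for us,
# no hand-derived gradient formula or chain-rule bookkeeping required.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x
y.backward()

print(x.grad)  # tensor(7.) == 2*2 + 3
```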
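And on the HuggingFace point, a rough sketch of why it got so popular: a pretrained NLP model is a couple of lines away. This assumes the `transformers` package (plus a backend like PyTorch) is installed; the input sentence is just an example.

```python
from transformers import pipeline

# Downloads a default pretrained sentiment-analysis model and runs inference.
classifier = pipeline("sentiment-analysis")
print(classifier("Machine learning tooling keeps getting easier to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```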
On another note, this may help explain why China is getting stronger in AI: it has abundant computing power, many aspiring graduate students, a mass-media market for AI experts, plus 'unlimited' data due to loose privacy regulations. Makes one think… 🤷♂️🤔🤔
Written on January 27, 2021 by Michael Li.
Originally published on Medium