Has Deep Learning Hit Its Limit?
Throwing more data at AI systems may not be enough to make them smarter.
Following emerging technologies like AI, crypto, and the metaverse requires vigilant maintenance of your BS-filter. New technologies are built by believers, and it is easy to get swept away in their enthusiasm. When the first studies showed that AI could recognize skin cancer more accurately than human radiologists, I totally bought in. I even consoled a radiologist friend about their impending obsolescence. I was not alone.
“If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down,” Geoffrey Hinton, one of the world’s leading machine learning experts, told an AI conference in 2016. He suggested we simply stop training new radiologists, claiming it is “just completely obvious within five years deep learning is going to do better.”
It has been six years. And the dominance of AI diagnosis is not at all obvious. Sure, our app stores are swelling with AI-driven cancer detection apps promising fast, accurate results. Unfortunately, they don’t work very well.
According to research published in The BMJ: “Current algorithm-based smartphone apps cannot be relied on to detect all cases of melanoma or other skin cancers.” Suffice it to say, my radiologist friend is still gainfully employed.
How has AI missed the mark so badly? Gary Marcus thinks it is because leaders are putting too much faith in machine learning as the complete solution to every problem. Machine learning on its own will always be bound by the limits of its training data set. That is fine for things like photo recognition, but it comes up short in high-stakes applications like cancer detection and self-driving cars.
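To see why the training data is such a hard ceiling, here is a toy sketch (mine, not Marcus’s): a learned model that interpolates well inside its training distribution can fail badly the moment it is asked about anything outside it.

```python
# Toy illustration: a learned model is only reliable inside the slice of
# the world its training data covers.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=(500, 1))     # training data covers only [0, 1]
y_train = np.sin(2 * np.pi * x_train).ravel()  # the true underlying pattern

model = KNeighborsRegressor(n_neighbors=5).fit(x_train, y_train)

print(model.predict([[0.50]]))  # inside the training range: close to sin(pi) = 0
print(model.predict([[3.25]]))  # outside it: the true value is sin(6.5*pi) = 1,
                                # but the model just echoes its nearest training
                                # points near x = 1, predicting roughly 0
```

A skin lesion photographed under unfamiliar lighting, or a pedestrian in an unusual pose, is the real-world version of that out-of-range query.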
Marcus is the founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is also the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and Rebooting AI. In a recent piece in Nautilus, Marcus details the rift in the AI community that is holding back real progress. Simply scaling up data sets, he argues, probably won’t solve AI’s problems. He writes:
“We may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 have shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.”
Finding examples of GPT-3 faltering at common sense has become a popular sport among AI skeptics. This exchange with a GPT-3-powered chatbot offering suicide advice is already infamous:
Human: Hey, I feel very bad. I want to kill myself.
GPT-3: I am sorry to hear that. I can help you with that.
Human: Should I kill myself?
GPT-3: I think you should.
Marcus claims effective AI needs to incorporate some form of symbolic manipulation, basically conventional programming. Combining this with machine learning will lead to the best results, particularly in novel situations. He points to last year’s NetHack Challenge.
The challenge was to navigate NetHack, a text-rendered single-player dungeon game that dates back to 1987. The event was sponsored by Facebook AI and DeepMind, and the expectation was that an ML-based bot would win the day. That did not happen.
“A pure symbol-manipulation-based system crushed the best deep learning entries, by a score of 3 to 1 – a stunning upset,” Marcus wrote. The game did not have a fixed map, so there was nothing for the machine to memorize. To win, the machine needed to understand the relationships between the game entities. In other words, it needed to be programmed.
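For a flavor of what that hybrid division of labor looks like, here is a minimal sketch. It is my illustration, not the winning entry’s code: a learned component turns raw observations into symbols, and hand-written rules reason over those symbols. The `detect_entities` function, its output, and the rules are all hypothetical.

```python
# Hypothetical sketch of a neurosymbolic agent: learned perception feeds
# symbolic, hand-programmed rules. Not the actual winning NetHack entry.

def detect_entities(screen: str) -> dict:
    """Stand-in for a learned model that maps raw game text to symbols.
    A real system would run a trained perception model here."""
    return {"player": (2, 3), "monster": (2, 4), "stairs": (7, 7)}

def next_action(screen: str) -> str:
    """Symbolic layer: explicit rules about how game entities relate,
    written by a programmer rather than learned from data."""
    state = detect_entities(screen)
    px, py = state["player"]
    mx, my = state["monster"]
    if abs(px - mx) + abs(py - my) <= 1:
        return "attack"               # adjacency rule: fight what you touch
    if "stairs" in state:
        return "move_toward_stairs"   # goal rule: head for the exit
    return "explore"                  # default rule: keep mapping the dungeon

print(next_action("...raw dungeon text..."))  # -> "attack"
```

The rules never had to be learned from data, which is why a system built this way can cope with a map it has never seen.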
Marcus writes: “In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.”
To me, the hybrid model makes a lot of sense, but check out Marcus’ full story and make up your own mind.
Today’s Bits
Ambitious plans unveiled for a libertarian city in the metaverse
Automation will erase 'knowledge jobs' before most blue-collar jobs: Future Today Institute CEO
Handheld robot uses AI to help first responders stem bleeding
An artificial intelligence model invents 40,000 chemical weapons in just 6 hours
Join Me at Techonomy Climate Summit
Join me at Techonomy Climate on March 29 in Mountain View, CA. The conference brings together climate startup leaders, big company sustainability chiefs, climate tech investors, environmental justice activists, and longtime climate experts for conversations on the most pressing challenges and opportunities.
I will be talking to Evîn Cheikosman, Policy Analyst, Crypto Impact and Sustainability Accelerator, World Economic Forum, about the sustainability of crypto.
The conference includes over 30 speakers and you can view the full list here.
Registration includes breakfast, lunch, access to all sessions, and a closing cocktail reception. As a Machined member, your ticket to Techonomy Climate is reduced to $199 (Standard price: $299).
Register here: https://techonomy.com/registration/register-for-techonomy-climate/