Mustafa Suleyman Wants to Balance AI Innovation and Safety
DeepMind co-founder Mustafa Suleyman discusses the transformative power of AI and its societal implications in his book, "The Coming Wave."
In an era when artificial intelligence is rapidly reshaping our world, Mustafa Suleyman stands at the forefront of the technological revolution. As a co-founder of DeepMind, the AI lab acquired by Google, Suleyman has been instrumental in advancing AI research and applications, making him a leading authority in the field. His latest contribution, the book The Coming Wave, delves into the profound impact of AI on human society. It is not just a book about technology; it is a discourse on how AI will alter the very fabric of how we live, work, and play. In it, Suleyman explores the unprecedented proliferation of artificial intelligence and its implications, offering a nuanced perspective on how AI could either empower or endanger humanity.
His insights are particularly vital in an age when AI’s potential seems boundless, yet its risks and ethical dilemmas loom large. Suleyman doesn’t shy away from these challenges. Instead, he confronts them head-on, proposing frameworks for balancing innovation with safety and technological advancement with ethical governance. He is an essential guide to understanding and navigating the age of AI, and to the opportunities humanity has to survive and thrive in this new era. Worth caught up with Suleyman shortly after he was named to the Worthy 100 list.
How do you see the rise of AI changing how we live, work, and play?
AI will change almost everything about our lives… Look around you. Almost everything you see has been touched by intelligence. Every interaction you have is based, at some level, on your intelligence. Our culture, world, economy, and relationships are predicated on layers of intelligence. It’s probably the fundamental feature of humanity. We are about to witness the greatest proliferation of intelligence ever, a plummeting of its cost towards zero, a mega-dispersal of new intelligence throughout society. Everyone will have access to the kind of team and capabilities currently reserved for nation-states or CEOs: a team of advisors, strategists, coaches, lawyers, and doctors. If everyone has an agent that can autonomously help achieve their goals, then we are in a world where almost every facet of life will be touched by AI. Whatever you want to do, AI will help. This is coming. It’s a massive opportunity and also a huge challenge.
Do you think the world is ready for widespread deployment of AI?
This change will be seismic, and I don’t think society has fully grappled with its extent. In many cases, our key institutions are already malfunctioning, and this next change will be hugely disruptive. Legislators are struggling to keep pace with a suite of technologies that are evolving and developing rapidly. In the last decade, [AI] model size has increased by nine orders of magnitude. With AI we are seeing growth in capabilities and scale beyond even the computer and internet era we’ve just lived through. Risks are growing in parallel: many fairly well-trailed, some less so. In The Coming Wave, I talk about these technologies as a source of immense productivity growth but also a bad-actor empowerment kit, a means for new forms of disinformation or adaptive assault. There’s the possibility of accidents and failures in systems of escalating power. Amidst all this, it’s a positive step that the conversation around AI has become so much more prevalent. But we are still a long way from being ready. This is partly why I felt the time was right to speak more publicly about these questions.
How do you envision a balanced approach to containment that doesn’t stifle innovation or economic growth?
There’s no easy, straightforward path to doing this, but that is the challenge. My idea of containment is that it should be able to achieve this balance. It starts with more investment and effort directed towards foundational technical safety work and builds from there. It’s a narrow path, though. On one side, we might stifle innovation and actively clamp down, an extreme response with all kinds of negative impacts. On the other, if we just let open, unchecked development unfurl worldwide, there is the threat of catastrophe. For me, this is the greatest dilemma of the twenty-first century. Finding that balanced approach is critical.
Can you elaborate on the ethical frameworks that might effectively govern the development and deployment of AI?
Containment, for me, is an overarching framework that can unite all the elements in play right now, bringing together the many pieces of the puzzle. Containment should add up to watertight societal control of frontier technologies. Regulation is what people immediately turn to, and while it is essential, it alone is not enough. The whole field is moving too fast and spreading across too many territories for regulation to be a magic bullet. To be clear, I am a huge advocate for regulation; it’s just that it needs support. As mentioned above, I would start with work on technical safety, which must be a massive global priority. AI safety is far from a solved problem. I also think we need new work on auditing AI systems and building new structures for both companies and international governance. We need cultures of responsibility within companies and a mass movement outside them pushing for change and positive outcomes. At every level, from the code base up to the scale of planetary social forces, we need to work on containing the coming wave.
What role do you see for international cooperation in this?
Without international cooperation, there is no containment. People talk about an arms race in technology, in AI. For years I pushed back on this framing, believing that even talking about it might make it more likely, but this is the point we have now reached. It’s real; it’s happening already. China and the U.S. both see a strategic need to push forward with this technology. Both are conscious of safety worries, but ultimately managing them will need meaningful engagement from all sides. Creating a stable, solid international governance regime for AI, one that has all the principal movers working together, isn’t just a nice thing to have. It’s essential. It’s also an incredibly tall order. One first step I am advocating is creating an IPCC [Intergovernmental Panel on Climate Change] for AI. It would help clarify the risks and provide an international forum for evaluation. That would be a good start.
How do you propose we prepare society for the rapidly increasing capabilities of AI in augmenting human decision-making? Are there historical models that could guide this preparation?
In some ways, AI is unprecedented. We have never dealt with machines gesturing towards autonomy the way AI does. It’s not there yet, but it will be soon. We’ve never had a technology scaling the ladder of capabilities this fast, diffusing to this many users so quickly, going from absolute state of the art to open-sourced and available to everyone on the internet in just months. This is new. However, it’s vital to learn from history nonetheless. Nuclear technology is far from perfectly contained, but the level of focus, safety work, international agreement, and so on offers, at least, a model. It shows that some level of containment is possible. We have managed to limit the creation of technologies like chemical weapons; we phased out CFCs to close the hole in the ozone layer; we are working hard to decarbonize our economies, guided by international institutions and spurred on by legal instruments. None of these efforts maps perfectly onto AI, which presents a unique challenge, but they offer pointers. And they offer hope.
Lastly, as individuals who are neither policymakers nor scientists, what concrete steps can we take to contribute to a safer and more responsible deployment of these transformative technologies?
Get involved! One of the big arguments I’m making about containment and the coming wave is that it will only come together with true popular engagement and pressure. We need major movements, mass interest, and people caring about these questions. We also need critics on the inside, building it. AI shouldn’t just be created by cheerleaders; it should also be fashioned by skeptics who are alert, from the outset, to worries and risks. Everyone has a stake in AI. Making it successful will be an innately collective endeavor.
Techonomy at Davos: 2024
I’m happy to say that I will be at Davos again this year. Last year, my first, was a whirlwind. This year, I hope to do a little more actual reporting. I’ll be doing stand-up interviews on the Promenade during the day. I will also be hosting a series of conversations examining the impact of artificial intelligence on business and society. The events will explore AI’s influence on a range of topics, from marketing and business ethics to the broader landscape of cognitive work.
I’d love to see some Machined subscribers if you’ll be there. Both sessions are sold out, but I can get you in if you arrive early. Details below:
Code to Conscience | Responsible AI in Business
January 17th | 7:30 AM CET
The conversation will feature Alex McMullan, CTO International of Pure Storage, and Suzanne Dann, CEO of the Americas at Wipro, discussing how to navigate the intricacies of AI in the enterprise. (Register Here)
AI in the C-suite | How Tech’s Biggest Trend Is Disrupting Executive Work
January 17th | 4 PM CET
This discussion explores how executives can thrive in an AI-driven landscape through continuous learning, adaptation, and harnessing AI to augment cognitive skills. (Register Here)