Industry Leaders and Policy Experts Discuss AI Regulation at HumanX Conference
Conflicting AI regulations create challenges for businesses and raise questions about the future of global tech leadership.
At the HumanX conference last week, industry leaders and policy experts discussed one of the most pressing issues in technology today: How should artificial intelligence be regulated globally? The session, “Redrawing Boundaries: AI Regulation on a Global Scale,” brought together voices from academia, industry, and policy to examine the geopolitical and economic implications of AI regulation.
Worth hosted a panel featuring David Danks, professor of data science, philosophy, and policy at UC San Diego; Lexi Reese, a former tech executive and U.S. Senate candidate; and Nick Pickles of Tools for Humanity. As AI technologies rapidly evolve, governments struggle to keep pace with appropriate regulations. The debate is no longer just about whether AI should be regulated but how—and by whom.
AI Regulation: A Complex, Global Puzzle
The conversation began with a fundamental distinction: regulation versus governance. “One of the things I’ve really been focused on is how do we have practical guidance for small businesses to develop AI responsibly—even when it’s not required,” said Danks. “We need to think about governance beyond just regulation.”
While regulation typically refers to government-enforced laws, governance can include industry standards, ethical guidelines, and voluntary compliance frameworks. The difference is crucial: whereas regulation is legally binding, governance can be flexible and adapt to the fast-moving AI landscape.
Reese emphasized the disconnect between policymakers and industry leaders. “Governments have one conception of how technology businesses are built and scaled, but the reality of how technology businesses are built and scaled is completely different. And in that gap lies the tension we need to explore today.”
This tension is evident in the vastly different approaches taken by global regulators.
The United States has primarily relied on self-regulation, with initiatives like the White House’s Blueprint for an AI Bill of Rights, which provides guidelines but lacks enforcement mechanisms.
The European Union has introduced the AI Act, which categorizes AI systems by risk level and imposes strict requirements on high-risk applications.
China has implemented proactive AI regulations, particularly for generative AI, requiring companies to ensure their models align with government-approved data sets.
Intellectual Property Challenges in AI
The panelists addressed one of the thorniest issues in AI development: intellectual property.
“If you build an AI tool in the U.S. but want to expand into Europe, suddenly you’re dealing with an entirely different set of rules,” said Danks. “That slows down innovation and raises costs.”
This regulatory fragmentation creates compliance burdens for AI startups, which may struggle to navigate the legal frameworks of different jurisdictions. For example, under the EU AI Act, developers must document and test their systems to meet stringent safety and transparency standards. Meanwhile, the U.S. Copyright Office is still debating whether AI-generated works should receive copyright protection.
Reese pointed to her experience at Google. “We were dealing with regulatory conversations before most people even understood what cookies were. AI is at that stage right now. We’re having legal conversations about an ecosystem that isn’t even fully formed yet.”
The Geopolitical Divide in AI Regulation
AI regulation is no longer just a legal issue—it’s a geopolitical one. Nations are shaping AI policies to protect consumers and gain a competitive advantage in the global tech race. “It’s no longer just about who develops the best AI, but which country controls the infrastructure behind it,” said Reese.
She described the contrasting approaches of different governments. “The U.S. is focused on innovation and corporate-driven progress, while China sees AI as a tool for economic dominance. And that changes the way each side approaches regulation.”
The numbers reflect this reality. According to the Stanford AI Index Report (2024):
China accounted for 60% of global AI research funding in 2023, compared to 30% in the U.S. and 10% in Europe.
The EU AI Act, if enforced strictly, could impose up to €30 billion in compliance costs on European companies over the next five years.
73% of AI patents filed in 2023 came from just three countries: the U.S., China, and Japan.
Danks highlighted the implications of Europe’s regulatory stance. “Europe is setting the gold standard for AI accountability. The problem is, when regulations become too restrictive, smaller companies can’t compete with the tech giants.”
Indeed, stringent AI laws often favor established tech firms—which can afford compliance—while stifling innovation at startups. In contrast, China’s regulatory approach centralizes AI development, ensuring that models align with state interests.
Global Standards: Realistic or Unattainable?
The discussion turned to whether a universal AI regulatory framework is possible or desirable. “If we don’t establish best practices now, we’re going to see AI development splinter into separate ecosystems, each with its own rules and limitations,” said Danks.
However, global cooperation in AI regulation remains elusive. Efforts like the OECD AI Principles (adopted by over 40 countries) and the G7 AI Code of Conduct are steps toward alignment, but enforcement mechanisms remain weak. Reese was skeptical. “The reality is, governments move too slowly. By the time a global regulatory framework is agreed upon, AI will have already evolved past it. So instead of creating one-size-fits-all laws, we need to focus on principles that can adapt as AI advances.”
The panelists debated governance models, including:
Self-Regulation: The industry setting and enforcing its own AI ethics standards, akin to the role the Financial Industry Regulatory Authority (FINRA) plays as a self-regulatory body in finance.
Public-Private Partnerships: Governments and tech firms collaborating on responsible AI development.
Sector-Specific Rules: Tailoring AI laws to industries such as healthcare, finance, and defense rather than imposing blanket regulations.
“We need to stop thinking of regulation as the only answer,” said Danks. “Governance, industry standards, and ethical frameworks can all play a role in shaping AI’s future.”
Reese countered with a warning. “There’s a risk that companies won’t self-regulate until there’s a crisis. Just look at social media—regulation only became a real conversation after misinformation became a global problem.”
Consumer Choice and AI Business Models
An audience member raised a key question about the role of consumer demand in shaping AI governance.
“Consumers have more power than they realize,” said Danks. “If users demand transparency and ethical AI development, companies will adapt. But the challenge is educating people on why these issues matter in the first place.”
Reese pointed to business incentives. “If people see value in AI platforms that compensate content creators fairly, those platforms will win. But if the incentives are stacked against creators, we’ll end up with the same problems we saw in digital media—where only a handful of platforms control all the revenue.”
Consumer trust in AI is still uncertain. According to a 2024 Pew Research Center study:
62% of Americans believe AI should be more tightly regulated.
Only 28% trust AI-powered news platforms to provide unbiased information.
57% of businesses say they would prefer a “soft governance” approach over strict regulatory oversight.
The Road Ahead
The discussion concluded with an acknowledgment that AI regulation is an ongoing challenge—one that requires balancing innovation, ethics, and international cooperation. “The AI race isn’t just about technology,” said Reese. “It’s about power. Who controls AI today will control the future of innovation, economics, and even democracy. So the real question is—who do we want shaping that future?”
As AI continues to reshape industries and societies, that question remains unanswered. But one thing is clear: the future of AI governance will be shaped by a mix of regulation, industry leadership, and consumer influence—not by a single approach alone.