Trump’s Plan to Bar US States from Regulating AI Will ‘Hold Us Back’, Says Microsoft Science Chief Eric Horvitz

Microsoft’s Chief Scientist, Eric Horvitz, has raised the alarm over Donald Trump’s proposed ban on state-level regulation of artificial intelligence (AI), warning that such a move would ultimately “hold us back” in the race for technological advancement.

This cautionary stance stands in stark contrast to reports that Microsoft has been collaborating with tech giants such as Google, Meta, and Amazon to push for the very ban that could stifle innovation. Horvitz, a former technology advisor to President Joe Biden, argues that eliminating state-level AI regulation could severely impede not only scientific progress but also the effective deployment of AI technologies in real-world applications.

The Trump administration is advocating for a decade-long moratorium on any state laws designed to limit or govern AI systems, driven by fears that China could outpace the U.S. in reaching human-level AI. 

This proposal is also influenced by tech investors, including Marc Andreessen of Andreessen Horowitz, who argue that consumer-facing applications should be regulated instead of imposing restrictions on foundational research. U.S. Vice President JD Vance has echoed this urgency, warning that if the U.S. hesitates in developing AI, China will not, potentially leaving the U.S. at a significant technological disadvantage.

Horvitz has expressed deep concerns over the misuse of AI for spreading misinformation, influencing public opinion, and even facilitating harmful activities related to biological hazards. His apprehensions resonate with the growing consensus that unregulated AI development could lead to disastrous outcomes for society.

During a recent forum organized by the Association for the Advancement of Artificial Intelligence, Horvitz powerfully argued, “Guidance, regulation, and reliability controls are not obstacles; they are the cornerstones of progress in the field.” 

Such regulations, he argued, can actually fast-track advancements by ensuring safety and accountability in AI technologies. Adding to this urgency, Stuart Russell, a prominent professor at the University of California, Berkeley, posed a provocative question: why would we allow the deployment of a technology that its own creators acknowledge carries a 10% to 30% risk of causing human extinction?

In no other technological domain would we accept such perilous risks. The apparent contradiction between Horvitz’s pressing warnings and Microsoft’s lobbying efforts reveals a troubling focus on short-term profits over long-term societal welfare.

Microsoft’s monumental $14 billion investment in OpenAI, the maker of ChatGPT, underscores the stakes involved. Sam Altman, OpenAI’s CEO, recently projected that within five to ten years, “great human-like robots” will roam our streets, marking a profound shift in our interaction with technology.

As experts project timelines for achieving artificial general intelligence (AGI) that range from merely a few years to several decades, the pressure intensifies. Meta’s Chief Scientist, Yann LeCun, cautions that AGI is decades away, while Mark Zuckerberg has recently committed $15 billion to the pursuit of “superintelligence.”

The time is ripe for a robust and balanced approach to AI regulation, one that cultivates innovation while safeguarding humanity’s future. It’s crucial we take these warnings seriously and engage in meaningful dialogue about how to shape AI responsibly. 

The path forward depends on our ability to prioritize ethical considerations alongside technological advancement, ensuring we harness the power of AI for the greater good.

Sumith Roul

Sumith Roul has always been intrigued by the surge of AI and its products. He has written over 1,000 product reviews, descriptions, blogs, and news posts, and has more than 20 years of writing experience.
When he is not writing, you can find him playing with his two kids or relaxing and listening to music.
