Don’t Be Deceived: The Realities of AI Regulation in the US

At first glance, the current landscape of artificial intelligence policy suggests a strategic step back from regulation. Recently, AI leaders in the United States and beyond have echoed this sentiment. J.D. Vance has described U.S. AI policy as having a "deregulation flavor." Congress seems poised to impose a 10-year suspension on state AI laws. On cue, the Trump administration's AI action plan warns against holding back the technology "through bureaucracy at this early stage."

However, the emphasis on deregulation is a significant misreading. Although the U.S. federal government takes a hands-off stance toward applications like chatbots and image generators, it is deeply engaged with the fundamental components of AI. For instance, both the Trump and Biden administrations have actively intervened on AI chips, crucial building blocks of advanced AI systems. The Biden administration restricted access to these chips to guard against competitor nations such as China. The Trump administration pursued deals with countries like the UAE over AI chip sales.

Both administrations have significantly shaped AI systems in their own ways. The United States is not deregulating AI; rather, it is regulating where many are not looking. Beneath the free-market rhetoric, Washington is stepping in to shape the components of AI systems.


Looking at the full AI technology stack, the hardware, data centers, and software operating behind applications like ChatGPT, reveals that nations are targeting different components of AI systems. Early frameworks, such as the EU's AI Act, prioritized prominent applications, restricting high-risk uses in sectors like health, employment, and law enforcement to mitigate social harm. However, nations are now focusing on the fundamental building blocks of AI. China restricts certain models to combat deepfakes and misinformation. Citing national security concerns, the U.S. has limited exports of advanced chips and, under the Biden administration, of model weights, the "secret sauce" that converts user inputs into outputs. These AI regulations are embedded in dense administrative language such as "implementation of additional export controls" and "end uses of supercomputers and semiconductors," which obscures their underlying rationale. Nevertheless, clear trends emerge behind this complex vernacular, indicating a shift from regulating AI applications to regulating their foundational elements.

The initial wave of regulations targeted applications within jurisdictions like the EU, emphasizing issues such as discrimination, surveillance, and environmental damage. Subsequently, rival nations like the United States and China adopted a national security approach, aiming to retain military dominance and prevent malicious actors from using AI to acquire nuclear weapons or spread disinformation. A third wave of AI regulation is now emerging as countries tackle social and security challenges in parallel. Our research indicates that this hybrid approach is more effective because it breaks down silos and minimizes redundancy.

Seeing past the allure of laissez-faire rhetoric requires a closer look. Viewed through the lens of the AI stack, U.S. AI policy looks like a redefinition of regulatory focus rather than an abdication of responsibility. The result is a facade of leniency paired with a firm grip on core elements.

No global framework can be effective if the United States, home to the world's largest AI research institution, continues to project an image of complete deregulation. The country's proactive stance on AI chips undermines this narrative. U.S. AI policy is anything but laissez-faire. Its decisions about where to intervene reflect strategic preference. While politically convenient, the deregulation narrative is largely a myth.

The public deserves greater transparency about the rationale and structure of government regulation of AI. It is hard to square the ease with which the U.S. government intervenes in chip regulation on national security grounds with its silence on social implications. Awareness of all the regulatory levers, from export controls to trade policy, is the first step toward effective global cooperation. Without such clarity, discussions of global AI governance will remain superficial.

Source: www.theguardian.com