The American Approach to Free Speech & AI Governance
“Free Speech,” Executive Order 14179, the AI Action Plan, and the tension between MAGA and the AI Act.
The Meaning of “Free Speech”
“In Britain and across Europe, free speech, I fear, is in retreat”
So said JD Vance during his pompous address at the Munich Security Conference on February 14. The comment was made with explicit reference to a series of incidents MAGA users on X are furious about. For example, the Presidential election in Romania was overturned on grounds of Russian interference. Quran burnings in Sweden are criminalized. The police in Germany raided the homes of numerous people suspected of making anti-feminist comments online.
Vance was not catering at all to the state officials and military leaders who were physically present at the conference. The address was a tribute to the online MAGA crowd. Anyone familiar with Europe's horrendous past under fascism can understand why free speech is a sensitive topic. Vance was either unaware or, more likely, willfully ignorant of the deep scars from two World Wars that have resulted in a low tolerance for hate speech in Europe.
On the other hand, the MAGA crowd regards empathy as a weakness. The movement is an open rebellion against science, history, political and administrative institutions, higher education, immigrants, LGBTQ+, minorities, social security, non-authoritarian countries, legacy media, rationality, facts, and common sense.
Paradoxically, while JD Vance laments European censorship and the “threat from within,” a French researcher was recently denied entry to the US after his computer and phone were inspected at the airport border, and it turned out he had criticized the Trump administration in a text message to a colleague.
Scientists in the US who receive funding from the National Science Foundation (NSF) are forbidden from including terms such as “female,” “disability,” “LGBT,” “diversity,” “inequality,” and “discrimination” as keywords in research papers.
This would count as censorship by any definition of the word. What does it tell us? MAGA’s anti-wokeness campaign is not a pursuit of free speech. On the contrary, I believe it’s a political mission championed by the ultrarich to suppress the voices of civilians who hold opinions and ideas that may hinder the ruling class in its quest to acquire still more resources and political power. That is how censorship has worked throughout the ages, and how it still works today. Censorship promotes “inverse free speech”: it works not to diversify the public conversation but to silence opposition to the rulers.
I fear that AI is being used as an excuse to further MAGA’s version of “free speech,” which is really a thinly disguised form of censorship. Eventually and increasingly, MAGA’s towering AI ambitions and its interpretation of “free speech” will clash with European laws and traditions. Over time, I expect the EU’s legal framework to win out by setting the default in America, instead of vice versa. To a not-insignificant extent, European AI governance standards are already influential in the US today, much more so than its political leaders would like.
America’s AI Governance Framework
If we sketch out what AI governance looks like in three of the world’s centers of power - the European Union, the People's Republic of China, and the United States - it would roughly look something like this:
The EU has opted for a horizontal approach to AI governance with one single framework - the AI Act - to govern all AI applications across sectors and industries.
China has opted for a vertical approach to AI governance with several frameworks in place to cover different applications of AI, including separate regulations of generative AI, recommendation algorithms, and deepfakes.
The US has opted for no approach to AI governance. Within its first month in office, the Trump administration repealed even Biden’s light-touch 2023 Executive Order on Safe, Secure, and Trustworthy AI and adopted Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” on January 23, 2025.
Tech journalist Casey Newton calls America’s new AI policy a “let’s see what happens” policy. However, the fact that the US is now leaning heavily towards a self-regulation model for AI companies does not mean it is unconcerned with AI risks. Far from it. It’s just that the risks the US is concerned about are very different from those of its traditional European allies.
Section 1 of Trump’s Executive Order 14179 states that to maintain America’s global leadership in AI, “we must develop AI systems that are free from ideological bias or engineered social agendas”. In addition, the order “revokes certain existing AI policies and directives that act as barriers to American AI innovation”.
This fulfills Trump’s campaign promise to “Repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” “Radical Leftwing ideas” include safety testing AI models before releasing them and developing and deploying AI responsibly. As reported by WIRED:
“The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduces a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”
Reducing so-called “ideological bias” and removing barriers to innovation such as policies that prescribe what AI companies are allowed and not allowed to do are two foundational pillars of America’s AI policy. A third pillar is the AI Action Plan introduced by Trump’s Executive Order 14179.
The AI Action Plan is set to be released by October 25, 2025, at the latest, and the White House recently solicited public comments for it. As the goals and priorities of the US government seem to be very much aligned with those of its BigTech sector, we can learn a thing or two by looking at submissions from leading BigTech companies such as OpenAI and Google.
OpenAI and Google propose a federal framework for AI that would preempt state laws and set a uniform standard for AI governance nationally. However, the companies don’t want this framework to be a widely applicable “one-size-fits-all” regulation like Europe’s AI Act. OpenAI proposes a voluntary partnership between the federal government and the private sector to “coordinate expertise” instead of imposing legally binding requirements. Google proposes that the government advance a “pro-innovation policy” and sector-specific, risk-based standards for AI governance based on “existing regulations”. Regarding the phrase “existing regulations”, one of Google’s main concerns is the fair use doctrine, which it believes exempts AI training from copyright rules in the US (a premise it has built its business on).
Both OpenAI and Google are asking the government for a federal policy affirming that AI training is permissible without the consent of right holders. Google calls for “balanced copyright rules” which it deems critical for “unlocking scientific and social advances”. OpenAI mirrors the language of the Trump administration by claiming that it wants a copyright law exemption for AI training in the name of national security and competition with China: “If the PRC’s [China’s] developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over”.
Additionally, OpenAI proposes that the US government should be “working to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress.” Likewise, Google asks the government to “combat restrictive foreign AI barriers that hinder American businesses and innovation” and writes that “governments should generally not impose regulatory checkpoints on the development of underlying AI models or AI innovation”.
Speaking of which…
American vs European AI Governance
The US is seeking to expand its regulatory framework for AI and tech to the rest of the world. However, the framework clashes with European laws including but not limited to the AI Act, GDPR, Digital Markets Act, and Digital Services Act.
The divide between the EU's cautious, rule-based approach to AI and America's market-oriented, hands-off approach will inevitably lead to conflict. The EU is not going to rewrite its laws to accommodate America’s BigTech sector. JD Vance made clear in his keynote speech at the Paris AI Summit on February 11, 2025, that the US will not budge either:
“The president's recent executive order on AI is developing an AI action plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential. We invite your countries to work with us and to follow that model if it makes sense for your nations.
However, the Trump Administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints. America cannot and will not accept that, and we think it's a terrible mistake not just for the United States of America but for your own countries.
US innovators of all sizes already know what it's like to deal with onerous international rules. Many of our most productive tech companies are forced to deal with the EU's Digital Services Act and the massive regulations it created about taking down content and policing so-called misinformation. Of course, we want to ensure the internet is a safe place, but it is one thing to prevent a predator from preying on a child on the internet and quite another to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation.
Meanwhile, for smaller firms, navigating the GDPR means paying endless legal compliance costs or otherwise risking massive fines. For some, the easiest way to avoid the dilemma has been to simply block EU users in the first place. Is this really the future that we want? I think the answer for all of us should be no.”
Which AI governance framework will prevail? Europe’s “overly precautionary regulatory regime” with “onerous rules” and “endless legal compliance costs”, or the American “free speech” approach, which puts “American innovation” and national economic and security interests above consumer protection?
Despite Vance’s confident tone, Europe’s restrictive approach to AI is sneaking into the US through the back door at the state level. As OpenAI noted in its public comment on the AI Action Plan, 781 AI bills have been proposed in the US in 2025 alone. Some of these are quite ambitious. Colorado was the first state to pass a comprehensive AI bill with SB 205 in May 2024. The bill follows the same risk-based structure as the AI Act. Other states are currently considering substantially similar laws, including California, Connecticut, Iowa, Illinois, Maryland, Massachusetts, Nebraska, New Mexico, New York, Oklahoma, Texas, and Virginia. See this piece by Dean W. Ball for further reading.
The federal US approach to AI governance is incompatible with the EU's approach at the moment. We will see how it all pans out. Hopefully, the major AI labs will continue to abide by voluntary AI governance standards such as NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, and the practical AI governance guide for organizations released by the US Department of State, Risk Management Profile for Artificial Intelligence and Human Rights. This would set the companies up for compliance with the AI Act without the need for federal reforms in the US to integrate European AI governance standards.