OpenAI’s Public Support for Regulation and Private Lobbying Against It: A Dichotomy

Public Statements in Favor of Regulation

OpenAI, the San Francisco-based artificial intelligence lab, has publicly voiced its support for stronger AI regulation. Sam Altman, CEO of OpenAI, has been a leading advocate for enhanced collaboration between the U.S. and China on AI development and regulation. He has argued that the emergence of increasingly powerful AI systems makes global cooperation more crucial than ever, and he has sought to convince the world to regulate his industry. Altman has met with policymakers around the world, including in South America, Africa, Europe, and Asia, to encourage and influence the development of AI regulations[7].

In addition, Altman testified before the Senate Judiciary Subcommittee, where he described OpenAI's internal, self-imposed safety assessment practices. He expressed the belief that regulatory intervention by governments would be critical to mitigating the risks of increasingly powerful AI models. He also expressed interest in international standards, suggesting that companies like OpenAI could partner with governments to establish and update safety requirements and to examine opportunities for global coordination[8][9].

Private Lobbying Against Regulation

While OpenAI has been publicly advocating for greater regulation, reports suggest that the company has privately lobbied against certain regulatory measures. Documents obtained by TIME showed that OpenAI lobbied the European Union to weaken its forthcoming AI regulation, known as the EU AI Act, one of the most comprehensive pieces of AI legislation in the world. OpenAI proposed amendments that were later incorporated into the final text of the law, reducing the regulatory burden on the company[15].

OpenAI argued that its general-purpose AI systems, including GPT-3 and Dall-E 2, should not be considered “high risk,” a designation that would subject them to stringent legal requirements around transparency, traceability, and human oversight. This stance aligns with other tech giants like Microsoft and Google, which have also lobbied EU officials to loosen the Act’s regulatory burden on large AI providers[17].

Reports reveal that OpenAI had been critical of the EU regulation and even suggested that it might cease operations in Europe if it found itself unable to comply with the rules. The company later clarified, however, that it had no plans to leave and intended to cooperate with the EU[19].

OpenAI’s lobbying efforts appear to have been successful: the final draft of the Act approved by EU lawmakers did not contain wording suggesting that general-purpose AI systems should be considered inherently high risk. Instead, the law called for providers of “foundation models,” powerful AI systems trained on large quantities of data, to comply with a smaller set of requirements. OpenAI supported the late introduction of “foundation models” as a separate category in the Act[20].

In its lobbying efforts, OpenAI pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could falsely appear to a person to be human-generated and authentic. OpenAI recommended scrapping the amendment, arguing that it would be sufficient to rely on another part of the Act, which mandates that AI providers adequately label AI-generated content and make clear to users that they are interacting with an AI system[21].

The amendment that OpenAI took issue with was not included in the final text of the AI Act. This suggests that OpenAI, like many Big Tech companies, used arguments about the utility and public benefit of AI to mask its financial interest in watering down the regulation[22].

In response to these reports, an OpenAI spokesperson stated that, at the request of policymakers in the EU, the company had provided an overview of its approach to deploying systems like GPT-3 safely, and had commented on the then-draft of the AI Act based on that experience[23].


Conclusion

The dichotomy between OpenAI’s public statements in favor of AI regulation and its private lobbying against specific regulatory measures illustrates the complex and nuanced nature of AI regulation. While OpenAI’s public advocacy can be seen as an effort to guide the ethical and safe development of AI, its private lobbying raises questions about whether its actions align with its stated principles. The debate about AI regulation is far from over, and it is crucial for companies, governments, and society at large to engage in open, transparent discussions to ensure that the benefits of AI are realized while its potential risks are minimized.
