The European Union’s (EU) AI Office is setting the tone for how ethical AI regulation should look with the release of the first draft of its General-Purpose AI Code of Practice. The document establishes responsible development and risk management principles for general-purpose AI (GPAI) models, such as large language models, image-generation tools, and learning agents.
Although GPAI has more than proven its efficiency and versatility, it is not without risks. The risks of bias, mass misinformation, and misuse have raised alarm among legislators and the public alike. The initial draft of the AI Code of Practice tackles these issues by setting guidelines for transparency, accountability, safety, and risk management.
The Code of Practice isn’t in effect yet, but the EU’s AI Office is under pressure to finalize it by May 2025, with implementation planned for August of the same year. Once operational, the Code will serve as the critical framework for AI stakeholders to develop and use GPAI technology responsibly.
Key Takeaways for AI Stakeholders
The following are the key takeaways from the EU Code of Practice for developers, AI providers, and other major players in the AI ecosystem:
- Disclose System Details: Developers and AI providers must clearly explain how their general-purpose AI models work, their capabilities, and the potential risks associated with their use.
- Establish Risk Protocols: Developers and providers are encouraged to build safety and security frameworks (SSFs) to identify, report, and mitigate risks. This is especially crucial for high-risk GPAI systems used in areas such as hiring, profiling, healthcare, and finance.
- Continuous Risk Assessment and Mitigation: AI risk mitigation isn’t a one-and-done exercise. The Code of Practice states that risks must be identified, assessed, and mitigated on a recurring basis to ensure the safe and ethical use of GPAI.
A Collaborative Effort for AI Regulation
What makes this Code of Practice unique is its invitation for developers, AI providers, researchers, and advocacy groups to contribute input on the future of AI regulation. This shows the EU’s AI Office understands that some aspects fall outside its field of view and wants those blind spots covered. The draft also recognizes the dynamic nature of AI technology, emphasizing the need for continuous updates so the Code remains relevant as AI evolves.
What’s Next?
The EU isn’t alone in its pursuit of AI regulation. According to the IAPP’s Global AI Law and Policy Tracker, countries like China, Canada, and the United States have AI governance measures in effect or in the making. In the U.S., 45 states have proposed AI-related bills, roughly 20 percent of which have passed. Many state governments are forming task forces to research and assess AI’s impact so legislators can establish appropriate laws to mitigate the risks.
The global momentum behind AI regulation won’t slow down anytime soon, underscoring that ethical AI isn’t just a regional concern but a global necessity.
Read our guide to navigating AI’s ethical challenges to learn more about this thorny issue and how businesses and governments are addressing it.