AI Regulation 2025: EU AI Act, California SB 53, and the Future of AI Governance

Artificial intelligence is moving from laboratories into society at an accelerating pace. Language models write code and screen resumes; recommender systems shape public discourse; autonomous vehicles navigate city streets. With these advances come risks of bias, accidents and abuse. Policymakers worldwide are scrambling to establish rules that ensure AI is trustworthy, transparent and accountable. In 2025 two regulatory frameworks stand out: the European Union’s AI Act, the first comprehensive law governing AI systems, and California’s Transparency in Frontier AI Act (SB 53), the first U.S. state law directly regulating developers of large foundation models.
Stefaan Vervaet
November 7, 2025

Introduction: Why regulate AI?

AI systems are increasingly embedded in high‑impact domains: healthcare, employment, finance, law enforcement and education. When they fail or are misused, the consequences can be severe: discriminatory hiring or lending, wrongful arrests, deepfake misinformation or safety‑critical accidents. Public trust hinges on addressing these harms. Regulators face a delicate balance: encourage innovation while protecting fundamental rights and preventing harm. Early voluntary guidelines and ethics principles have proven insufficient; companies’ incentives to be first to market often override caution. Binding regulation is therefore emerging as a key lever.

Europe has taken the lead by designing the AI Act, the first attempt at horizontal, risk‑based regulation. The Act categorises AI systems by risk (unacceptable, high, limited, minimal) and imposes obligations accordingly. High‑risk systems, such as those used in credit, employment or law enforcement, must undergo conformity assessments, maintain risk management systems and ensure human oversight. Unacceptable uses, like social scoring by governments, are banned. In contrast, California’s SB 53 focuses narrowly on frontier model developers, requiring transparency and safety measures for models above certain compute thresholds. Together, these regimes illustrate the diversity of regulatory approaches.

The EU AI Act: Status and New Strategies

Timeline and applicability

The AI Act was formally adopted in 2024 and entered into force on 1 August 2024. Its obligations apply in phases: prohibitions on unacceptable practices took effect on 2 February 2025, transparency and copyright provisions for providers of general‑purpose AI (GPAI) models apply from 2 August 2025, most remaining obligations become applicable on 2 August 2026, and extended transition periods for high‑risk systems run into 2027. This phased approach reflects the complexity of the law and the need for industry to adapt.

Apply AI Strategy and AI in Science Strategy

In September 2025 the European Commission released two strategies to encourage responsible AI adoption: the Apply AI Strategy and the AI in Science Strategy. According to the European American Chamber of Commerce, the Apply AI Strategy aims to embed AI across sectors and support small and medium‑sized enterprises. The AI in Science Strategy seeks to boost scientific research and includes the creation of RAISE, a virtual institute to provide access to large models and compute resources. RAISE is scheduled to launch in November 2025, offering researchers “virtual GPU cabinets” and training on large‑scale computing. These strategies indicate that the EU is pairing regulation with investment to ensure competitiveness.

Serious incident reporting and draft guidance

A critical component of the AI Act is Article 73, which obligates providers and deployers of high‑risk AI systems to report serious incidents. The European Commission defines a serious incident as an event that leads to the death or serious harm of a person, significant disruption of critical infrastructure or serious and irreversible damage to property or the environment. On 26 September 2025, the Commission published draft guidance and a reporting template to help providers prepare. The guidance clarifies definitions, gives examples and explains how reporting interacts with other legal regimes. The aim is to detect risks early, ensure accountability and build public trust in AI technologies. Stakeholders were invited to submit feedback until 7 November 2025, illustrating the EU’s consultative approach.

National implementation and coordination

Implementation requires national regulators. Ireland announced the creation of a National AI Implementation Committee comprising 15 regulatory bodies to coordinate enforcement. The Committee will establish a National AI Office by August 2026 to oversee compliance, provide guidance and liaise with the EU AI Office. Other member states are developing similar structures. This distributed governance reflects the EU’s federated nature but may lead to inconsistent interpretations. Harmonisation efforts, including guidance from the AI Office, will be crucial.

California’s Transparency in Frontier AI Act

Scope and thresholds

On 29 September 2025 California Governor Gavin Newsom signed SB 53, also known as the Transparency in Frontier Artificial Intelligence Act (TFAIA). This law applies to developers that train AI models requiring more than 10^26 floating‑point operations (FLOPs) and have annual revenue exceeding $500 million. In practice, this threshold captures the largest AI labs and excludes startups and most open‑source projects.
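
To see what this threshold means in practice, the back‑of‑the‑envelope sketch below applies the widely used approximation that training a dense transformer costs roughly 6 × parameters × training tokens FLOPs. The model sizes and token counts are hypothetical illustrations, not figures from SB 53 or from any particular lab.

```python
# Rough, illustrative estimate of training compute using the common
# "6 * parameters * tokens" approximation for dense transformer training.
# Model sizes and token counts below are hypothetical examples.

SB53_THRESHOLD_FLOPS = 1e26  # compute trigger cited for SB 53

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * params * tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),       # ~8.4e22
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "1.8T params, 10T tokens": training_flops(1.8e12, 10e12), # ~1.1e26
}

for name, flops in examples.items():
    side = "above" if flops > SB53_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```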

Obligations and safety measures

Under SB 53, covered developers must implement a risk management framework covering model training, deployment and post‑deployment monitoring. They must document training data, alignment methods and safety controls; publish model transparency reports summarising capabilities, limitations and potential misuse; and conduct critical safety incident analyses when harms occur. In addition, developers must report critical safety incidents to the California Office of Emergency Services within 15 days of discovery (sooner where there is an imminent risk of death or serious injury) and share mitigation plans with regulators. The law also mandates third‑party audits of safety protocols and prohibits the deployment of systems that could cause catastrophic harm without robust safety measures.

SB 53 positions California as the first U.S. state to directly regulate frontier AI developers. It draws inspiration from voluntary commitments secured by the White House but goes further by imposing mandatory obligations and potential penalties. Enforcement mechanisms include civil fines and injunctions. Because federal AI legislation remains stalled in Congress, SB 53 could become a model for other states or prompt federal preemption.

Industry responses and challenges

Major AI labs like OpenAI, Anthropic and Google DeepMind have expressed support for safety and transparency but caution against fragmented state regulations that could create compliance burdens. Industry groups advocate for federal standards to avoid a patchwork of rules. Small developers and open‑source communities worry that overly broad definitions could stifle innovation. SB 53 attempts to address these concerns by setting high compute and revenue thresholds. Still, some critics argue that compute thresholds are a blunt tool: 10^26 FLOPs may soon be surpassed by open labs, and adversaries could simply train models abroad or obfuscate training footprints.

Global trends and emerging frameworks

The EU AI Act and California’s SB 53 are two high‑profile examples, but AI governance is evolving globally. The OECD and G7 have endorsed AI principles covering transparency, fairness and accountability. The Global Partnership on AI facilitates intergovernmental cooperation on AI safety and innovation. China has issued a suite of AI regulations focusing on algorithmic recommendation services, generative AI and deep synthesis, emphasising censorship and national security. The United Kingdom released a pro‑innovation AI white paper, opting for a sectoral approach with non‑statutory guidance. Canada’s proposed Artificial Intelligence and Data Act (AIDA) and Brazil’s draft AI law have advanced through their legislatures, though neither has yet become law. Multilateral alignment remains challenging: jurisdictions vary in risk tolerance, values and economic priorities.

Across these frameworks, several common themes emerge: risk‑based classification, transparency obligations, incident reporting, human oversight and accountability. Divergences revolve around enforcement (self‑assessment vs. third‑party audits), scope (frontier models vs. all high‑risk systems) and underlying values (liberty vs. security). There is a growing consensus that high‑impact AI should be subject to regulatory scrutiny, but no uniform blueprint exists.

Deep dives into regulatory details

The high‑level overview above conceals a wealth of detail. Understanding the specific obligations, enforcement mechanisms and debates within major frameworks sheds light on the challenges ahead.

EU AI Act obligations, penalties and interactions with other laws

The AI Act’s risk tiers are more than labels; they determine concrete obligations. High‑risk systems must implement a risk management system, including hazard identification, mitigation measures and monitoring. Providers must maintain technical documentation detailing model architecture, training data and performance metrics; ensure human oversight and the possibility of human intervention; design systems that are robust, secure and resilient; and perform conformity assessments before placing products on the market. Deployers of high‑risk AI must use systems in line with the provider’s instructions, ensure appropriate oversight and, in certain cases, carry out fundamental rights or data protection impact assessments. Penalties exceed even the GDPR’s: up to €35 million or 7 % of global annual turnover for prohibited practices, and up to €15 million or 3 % for most other violations, whichever is higher.

The Act interacts with other EU laws. Data protection under the GDPR, product safety under the General Product Safety Regulation, and fundamental rights protections under the Charter of Fundamental Rights all influence AI deployment. For example, biometric identification in public spaces triggers both AI Act prohibitions and GDPR obligations. Providers must navigate overlapping regimes and may face joint enforcement by data protection authorities and AI regulators. The Act also introduces a database of high‑risk systems, enabling public scrutiny and regulatory coordination. Critics argue that the complexity could overwhelm small developers, while proponents believe the layered approach ensures comprehensive safeguards.

Governance structures: EU AI Office, national bodies and standards organisations

Effective governance requires institutional capacity. The AI Act establishes an AI Office within the European Commission to oversee implementation, issue guidance and coordinate with national authorities. The Office will manage the high‑risk systems database, develop harmonised technical standards and facilitate international cooperation. Each member state must designate or create national supervisory authorities tasked with market surveillance, enforcement and guidance. These authorities will cooperate with data protection agencies, consumer protection bodies and sector‑specific regulators.

Standardisation organisations like CEN–CENELEC and ISO/IEC JTC 1 will develop technical standards referenced by the AI Act. For example, ISO/IEC 42001, published in late 2023, provides a management system standard for AI. Participation of industry and civil society in standardisation processes is crucial to ensure that requirements reflect real‑world needs and promote interoperability. The European AI Alliance and stakeholder boards provide fora for dialogue and feedback, further embedding co‑governance into the regulatory architecture.

The U.S. regulatory landscape beyond California

California’s SB 53 is part of a patchwork of U.S. initiatives. At the federal level, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights in 2022, outlining principles such as safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. While not legally binding, the Blueprint influences agency guidance and procurement standards. The National Institute of Standards and Technology (NIST) published the AI Risk Management Framework in 2023, offering voluntary processes for managing AI risks across the lifecycle. Bills such as the Algorithmic Accountability Act and the American Data Privacy and Protection Act have been introduced in Congress but have yet to pass. Agencies like the Federal Trade Commission and the Food and Drug Administration are asserting jurisdiction over deceptive AI practices and medical devices respectively.

States are also active. New York City’s Automated Employment Decision Tools law (AEDT) requires bias audits and notices for AI‑driven hiring. Illinois’s Biometric Information Privacy Act (BIPA) imposes consent and retention requirements for facial recognition. Washington state is considering its own frontier AI bill, and other jurisdictions are examining SB 53 as a model. This mosaic creates uncertainty for developers operating across borders but also allows regulatory experimentation.

International frameworks and soft‑law instruments

Beyond the EU and U.S., international organisations have issued soft‑law instruments to guide AI governance. The OECD AI Principles, adopted by 42 countries in 2019, emphasise inclusive growth, human‑centred values, transparency, robustness and accountability. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) calls for protecting human rights, diversity and the environment in AI development. The G7 Hiroshima AI Process, launched in 2023, seeks to develop guidelines for advanced AI systems, including safety evaluations and responsible innovation. These frameworks are non‑binding but shape national legislation and corporate policies by setting expectations.

Regional collaborations are emerging. The Council of Europe adopted its Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law in 2024, the first binding international treaty requiring signatories to ensure that AI complies with human rights standards. In Africa, the African Union is developing its own AI strategy emphasising capacity building and local innovation. The ASEAN Guide on AI Ethics and Governance offers principles tailored to Southeast Asian contexts. Together, these instruments illustrate a global movement toward shared norms, albeit with differing emphases.

Algorithmic auditing, impact assessments and technical governance

One emerging approach to operationalising AI regulation is algorithmic auditing. Audits can assess bias, accuracy, security and compliance with legal requirements. They may be conducted internally or by third parties and could become mandatory for high‑risk systems under the AI Act. Algorithmic impact assessments (AIAs) complement audits by evaluating potential societal impacts before deployment. Canada’s Directive on Automated Decision‑Making already mandates AIAs for federal government uses of AI, and the proposed AIDA would extend similar assessment obligations to high‑impact private‑sector systems. Standardised audit procedures and impact assessments could harmonise expectations across jurisdictions and make compliance measurable.

Technical governance tools such as model cards and data sheets provide structured documentation about AI systems, including training data composition, evaluation metrics and limitations. Red‑teaming—structured adversarial testing—helps identify vulnerabilities and misuse scenarios. Responsible scaling policies, adopted by some labs, prescribe safety thresholds and incremental deployment based on risk evaluations. These practices, currently voluntary, could be formalised by regulators to ensure that safety keeps pace with capability.
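
As a concrete illustration of what such documentation might look like, the sketch below defines a minimal model card as a small data structure. The field names are hypothetical, loosely inspired by the model‑card and datasheet literature rather than prescribed by any law.

```python
from dataclasses import dataclass, field, asdict
import json

# Minimal, illustrative model card structure. Field names are hypothetical
# and not mandated by the EU AI Act or SB 53.

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    red_team_findings: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the card so it can be published or archived."""
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    model_name="example-llm",
    version="1.0",
    intended_use="General-purpose text assistance",
    out_of_scope_uses=["medical diagnosis", "legal advice"],
    training_data_summary="Mixture of licensed, public-domain and synthetic text",
    evaluation_metrics={"toxicity_rate": 0.012, "helpfulness_score": 0.87},
    known_limitations=["hallucinates citations", "weaker in low-resource languages"],
    red_team_findings=["certain jailbreak prompts elicit unsafe instructions"],
)
print(card.to_json())
```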

Debates over compute thresholds and open source

SB 53’s reliance on compute thresholds (10^26 FLOPs) has sparked debate. Advocates argue that compute approximates capability and risk: larger models are more likely to produce emergent behaviour and require greater scrutiny. Critics counter that smaller models can also cause harm, while compute measures can be gamed or obscured. Moreover, focusing on training thresholds ignores energy consumption and downstream impacts. Open‑source communities fear that broad thresholds could ensnare volunteers who fine‑tune large models or create powerful systems using fewer FLOPs. Some propose risk‑based triggers—such as potential for biological misuse or societal disruption—rather than compute alone.

Open‑source developers face unique challenges. They often lack the resources for compliance and risk management required by laws like SB 53. At the same time, open source can enhance transparency and security by enabling peer review. Policymakers must balance the need to prevent misuse with the benefits of open innovation. Exemptions for non‑commercial projects, safe harbours and public funding for safety research are possible solutions. The EU AI Act exempts most open‑source software but may still apply if it powers high‑risk systems, creating a grey area.

Notable incidents shaping regulation

Regulation does not emerge in a vacuum; it is often prompted by high‑profile failures. Examples include Uber’s autonomous vehicle killing a pedestrian in 2018, which spurred calls for safety standards in self‑driving cars; Amazon’s experimental hiring algorithm that discriminated against female applicants, highlighting algorithmic bias; and deepfake scams that have defrauded individuals and companies. These incidents demonstrate that AI can amplify existing inequalities, cause physical harm and erode trust. They also illustrate that problems often arise in the deployment and integration of AI, not just in the underlying models. Policies therefore emphasise monitoring, human oversight and incident reporting to catch issues early.

Other national frameworks and case studies

While the EU and U.S. dominate headlines, other countries are crafting their own AI laws. Canada’s Artificial Intelligence and Data Act (AIDA) proposes a tiered approach similar to the EU’s but focuses on impact assessments and designates the Minister of Innovation as regulator. The Act would require organisations deploying high‑impact AI to assess and mitigate risks, document compliance and notify users when decisions are automated. Canadian regulators have also issued guidance on privacy‑preserving machine learning and are considering penalties aligned with privacy laws. Brazil’s AI legislation began with PL 21/2020, a lighter, principles‑based bill, and has since evolved toward a risk‑based framework closer to the EU’s, adding principles such as protection of vulnerable groups and promotion of responsible innovation. Brazil’s Civil Rights Framework for the Internet provides a foundation for data protection, and lawmakers aim to balance innovation with rights.

The United Kingdom released a pro‑innovation AI white paper in 2023 that eschews a single new law in favour of empowering sectoral regulators to issue guidance. The UK emphasises five principles: safety, transparency, fairness, accountability and contestability. Regulators like the Information Commissioner’s Office and the Competition and Markets Authority will supervise AI within existing mandates, creating a flexible but fragmented regime. Japan’s government is updating the Social Principles of Human‑centric AI and exploring certification schemes for trusted AI. India’s National Strategy for Artificial Intelligence advocates for responsible AI while cautioning against overregulation. These diverse models reflect local cultures, legal systems and economic priorities.

Technical measures for trustworthy AI

Regulatory frameworks are only effective if accompanied by technical methods to make AI systems trustworthy. Fairness metrics quantify disparate impacts across protected groups, enabling developers to detect bias. Adversarial robustness tests evaluate how models respond to small input perturbations, revealing vulnerabilities to manipulation. Privacy‑enhancing technologies such as differential privacy, federated learning and homomorphic encryption allow models to learn from sensitive data without exposing it. Interpretability techniques like SHAP, LIME and attention visualisations help humans understand model reasoning and identify potential errors. Regulators may soon require documentation of such tests as part of compliance.
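
As a small illustration of the kind of fairness check regulators might expect to see documented, the sketch below computes a demographic parity difference and a disparate‑impact ratio on synthetic predictions. The group labels, outcome rates and the four‑fifths reference point are illustrative conventions, not thresholds drawn from the AI Act or SB 53.

```python
import numpy as np

# Illustrative fairness check on synthetic data: compare favourable-outcome
# rates across a protected attribute. Values and the 0.8 ("four-fifths")
# reference point are illustrative, not legal thresholds.

rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)                     # protected attribute
preds = rng.binomial(1, np.where(groups == "A", 0.55, 0.40))   # favourable outcomes

rate_a = preds[groups == "A"].mean()
rate_b = preds[groups == "B"].mean()

parity_diff = rate_a - rate_b                      # 0 means equal selection rates
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {parity_diff:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f} "
      f"({'flags review' if impact_ratio < 0.8 else 'within the 4/5 convention'})")
```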

Safety research is advancing too. Alignment techniques aim to ensure that AI systems behave in accordance with human values and instructions. Methods like reinforcement learning from human feedback, constitutional AI and control theory are being explored to mitigate reward hacking and goal misgeneralisation. Formal verification—common in safety‑critical software—could prove some properties of AI systems, though it remains challenging for deep networks. As regulation matures, expectations for rigorous technical safety processes will likely increase, necessitating collaboration between machine‑learning researchers, ethicists and lawyers.

AI and labour: Displacement, augmentation and rights

AI regulation intersects with labour policy. Automation can displace workers, alter job quality and change skill requirements. The EU’s Platform Worker Directive seeks to protect gig workers from algorithmic exploitation by improving transparency and granting collective bargaining rights. Unions advocate for algorithmic impact assessments on employment decisions and for workers to have a say in AI deployment. The International Labour Organization is studying AI and the future of work, noting that while some jobs will disappear, others will be augmented by AI tools. Policymakers must coordinate AI regulation with labour laws, education and social safety nets to ensure a just transition.

Case studies reveal tensions. In the Netherlands, an algorithm used to detect welfare fraud discriminated against low‑income and immigrant populations, leading courts to deem it unlawful and sparking public backlash. In the U.S., delivery drivers have protested algorithmic scheduling systems that reduce pay and increase surveillance. These incidents underscore the need for worker voice in AI governance and for regulators to address power asymmetries between employers and workers.

Enforcement and penalty case studies

Regulation becomes meaningful only when it translates into enforcement actions and consequences. The last few years have provided cautionary tales. Early in 2023, Italy’s privacy regulator temporarily banned a large language model for alleged violations of the General Data Protection Regulation (GDPR) and lack of transparency in data collection. Although the ban was lifted after the provider implemented privacy measures and age verification, the episode signalled that EU regulators will not hesitate to take drastic steps when fundamental rights are at stake. In France and Italy, data protection authorities fined companies that built facial‑recognition databases scraped from social media, arguing that indiscriminate biometric scraping violated consent rules. These cases illustrate the interaction between AI regulation and existing privacy laws. Under the AI Act, such enforcement could be even more powerful: high‑risk providers failing to report serious incidents or comply with design requirements may face fines of up to €15 million or 3 % of global annual turnover, while outright prohibited practices can draw up to €35 million or 7 %, penalties on par with or exceeding the GDPR’s.

California’s SB 53 has yet to be enforced, but historical analogues provide insight. For instance, under the state’s data breach notification laws, companies that delayed disclosure faced multi‑million dollar settlements. Regulators may adopt similar strategies for AI: failure to report safety incidents could trigger investigations, while pattern‑based enforcement (tracking repeated non‑compliance across firms) could lead to coordinated actions. Meanwhile, other states are watching closely; New York, Washington and Massachusetts have introduced bills that mimic SB 53’s transparency requirements. If state attorneys general start coordinating investigations, the penalty landscape could resemble that of consumer‑protection or antitrust law.

Algorithmic impact assessments in practice

The concept of an algorithmic impact assessment (AIA) is gaining traction as a bridge between abstract principles and operational compliance. An AIA typically involves mapping how an AI system functions, identifying stakeholders, assessing potential harms and benefits, consulting affected communities, and documenting mitigation measures. Canada’s proposed AIDA mandates AIAs for high‑impact systems, and the EU’s AI Act encourages them as part of risk management. Some cities and agencies in the United States have piloted AIAs for predictive policing tools, identifying racial biases and recommending changes. Companies are experimenting as well: major technology firms have begun publishing system cards—extended AIAs—for their most advanced models, detailing capabilities, misuse scenarios and safety mitigations. Early experiences show that AIAs can surface hidden dependencies, such as reliance on outdated datasets or third‑party components lacking oversight, prompting redesign before deployment. However, the quality of AIAs depends on methodology; without common standards, assessments risk becoming checklists. Regulators may need to specify minimum elements, such as stakeholder engagement, bias testing and public summary publication, to ensure meaningful practice.
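
A highly simplified sketch of how an AIA questionnaire might translate answers into an impact level is shown below. The questions, weights and cut‑offs are invented for illustration; real tools, such as Canada’s AIA questionnaire, are far more detailed.

```python
# Minimal sketch of an algorithmic impact assessment (AIA) scoring step.
# Questions, weights and level cut-offs are invented for illustration.

QUESTIONS = {
    "affects_legal_rights": 3,
    "uses_personal_data": 2,
    "fully_automated_decision": 3,
    "affected_groups_consulted": -2,   # mitigations lower the score
    "human_review_available": -2,
}

def impact_level(answers: dict[str, bool]) -> str:
    """Map questionnaire answers to a coarse impact level."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q, False))
    if score >= 6:
        return "high impact: full assessment and external review"
    if score >= 3:
        return "moderate impact: document mitigations, internal review"
    return "low impact: lightweight documentation"

answers = {
    "affects_legal_rights": True,
    "uses_personal_data": True,
    "fully_automated_decision": True,
    "human_review_available": True,
}
print(impact_level(answers))  # score = 3 + 2 + 3 - 2 = 6 -> high impact
```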

Bridging compliance and innovation through RegTech

Meeting complex regulatory obligations can itself benefit from AI. Regulatory technology (RegTech) refers to digital tools that automate compliance processes. In the financial sector, RegTech systems monitor transactions for anti‑money‑laundering requirements; similar approaches are emerging for AI governance. For example, some companies are developing model risk dashboards that track model provenance, training data lineage, performance metrics and compliance status in real time. Others integrate legal requirements into software development pipelines so that issues like data licensing, consent and bias testing are flagged during model design. Natural‑language processing can summarise legislation and map clauses to technical requirements. These tools lighten the burden on small teams and reduce human error. For regulators, RegTech can facilitate auditing by enabling secure data‑sharing and automatic generation of compliance reports. Ensuring interoperability between RegTech solutions and regulatory databases, like the EU’s high‑risk systems registry, would further streamline governance.
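
The sketch below shows, in simplified form, how such a compliance gate might run inside a release pipeline, blocking a model launch when required artefacts are missing. The rule names and the 10^26 FLOPs check are assumptions for illustration and do not encode the actual legal tests of the EU AI Act or SB 53.

```python
# Illustrative RegTech-style compliance gate for a model release pipeline.
# Rules and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class ModelRelease:
    training_flops: float
    transparency_report_published: bool
    incident_log_up_to_date: bool
    bias_eval_passed: bool

def compliance_findings(release: ModelRelease) -> list[str]:
    """Return a list of blocking findings for this release candidate."""
    findings = []
    if release.training_flops > 1e26 and not release.transparency_report_published:
        findings.append("Frontier-scale model without a published transparency report")
    if not release.incident_log_up_to_date:
        findings.append("Incident log has not been reviewed for this release")
    if not release.bias_eval_passed:
        findings.append("Bias evaluation failed or missing")
    return findings

release = ModelRelease(2.3e26, False, True, True)
for finding in compliance_findings(release):
    print("BLOCKER:", finding)
```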

The future of AI regulation: Beyond 2025

Looking ahead, several trends are likely to shape the next generation of AI governance. First, integrated oversight will blur the lines between design‑time and post‑deployment regulation. Regulators may monitor models continuously, using telemetry and incident reports to adjust obligations dynamically. Second, sector‑specific regulation will grow; health‑care AI may require rigorous clinical trials, while education‑tech may need parental consent frameworks. Third, harmonisation efforts through international organisations will intensify. The EU’s desire for a Brussels effect may lead it to align with the U.S. and Asia on definitions of high‑risk AI and on data‑flow safeguards. Success will depend on balancing privacy, trade and innovation. Fourth, compute and energy governance will intersect with AI policy. As large models consume vast amounts of electricity, regulators will demand sustainability disclosures and may tie approvals to environmental performance. Finally, advances in alignment research and AI safety science could provide the evidence base for regulation, moving debates from speculation to empirical assessment.

To remain nimble, policymakers should institutionalise periodic review mechanisms. Sunset clauses in legislation can force updates based on technological progress and societal feedback. Public‑private research partnerships, similar to the EU’s RAISE institute, can develop best practices and share lessons across sectors. Transparent metrics—such as the number of serious incidents reported, resolution times and compliance rates—can help evaluate effectiveness and drive improvements. By coupling long‑term vision with adaptability, regulators can protect society while fostering innovation.

Criticisms and open questions

Risk of overregulation and innovation slowdown

Some technologists warn that heavy‑handed regulation could stifle innovation and drive AI development to less regulated jurisdictions. They argue that the EU’s prescriptive approach may deter startups or push them to the United States or China. However, proponents counter that clear rules provide legal certainty and public trust, enabling sustainable innovation. The EU’s combination of regulation and investment via Horizon Europe and the AI in Science Strategy aims to mitigate this concern.

Enforcement challenges

Laws are only as effective as their enforcement. The AI Act relies on national authorities for surveillance and sanctioning. Resourcing these bodies will be critical. In California, the attorney general will need expertise to evaluate complex AI systems and enforce SB 53. Regulators must also remain adaptive: AI is evolving quickly, and static rules risk obsolescence.

Global fragmentation

With divergent regimes across jurisdictions, companies may face a patchwork of laws. This fragmentation increases compliance costs and may encourage forum shopping. International organisations and trade agreements could help harmonise standards. The EU has expressed hope that its AI Act will become a “Brussels effect” law, influencing global norms as the GDPR did. Whether other regions will adopt similar risk‑based frameworks remains to be seen.

Ethics versus law and the role of corporate governance

AI regulation does not exist in a vacuum; it builds on decades of tech ethics discourse. Many organisations developed ethical guidelines—fairness, accountability, transparency, explainability—but without legal force. Companies could adopt principles selectively, and ethical boards often lacked authority. Regulation seeks to translate ethical aspirations into enforceable norms. However, law cannot substitute for organisational culture. Corporate governance—boards of directors, risk committees and internal policies—must internalise ethical values. For instance, integrating ethics and compliance officers into product teams can catch risks earlier. Boards should receive regular briefings on AI risk management, and executive compensation should align with safety metrics. Investors increasingly demand environmental, social and governance (ESG) disclosures, including AI governance, signalling market pressure for responsible practices.

Co‑regulation and industry self‑governance

Purely top‑down regulation may be too slow for fast‑moving technology. Many experts advocate co‑regulation, where industry develops standards and codes of conduct under regulatory oversight. The EU’s Digital Services Act uses codes of conduct for content moderation; a similar model could apply to AI. Industry consortia like the Partnership on AI and the Collective Intelligence Project are developing best practices for safety and alignment. Voluntary commitments secured by the White House—such as external red‑teaming and transparency reports—exemplify self‑governance. Co‑regulation allows flexibility and innovation while ensuring accountability. Critics warn that industry capture is possible, so public interest groups must be involved and regulators should retain enforcement powers.

Regulatory innovation and adaptive approaches

Given the pace of AI, regulators are experimenting with sandbox environments, dynamic risk scoring and ex post oversight. Sandboxes allow companies to test new technologies under supervision, collect evidence of risks and refine requirements. The UK’s Financial Conduct Authority pioneered sandboxes for fintech, and similar models are being explored for AI. Dynamic risk scoring could update obligations based on real‑time monitoring of AI performance and incident reports. This contrasts with static classification; a system could move from low‑risk to high‑risk if harmful patterns emerge. Ex post oversight focuses on outcomes rather than design: regulators audit deployed systems and sanction harms. These innovations require new data flows between companies and regulators, raising privacy and confidentiality concerns.

Multi‑stakeholder governance and global cooperation

AI’s global footprint demands multi‑stakeholder governance. Civil society, academia, labour unions and marginalised communities should have seats at the table. Participatory mechanisms—such as citizens’ juries, ethical review boards and community impact agreements—can democratise AI governance. Internationally, forums like the Internet Governance Forum and World Summit on the Information Society could incorporate AI dialogues. UNESCO’s AI ethics recommendations emphasise indigenous knowledge and cultural diversity, reminding policymakers that governance must respect pluralism. Building bridges across jurisdictions through mutual recognition agreements and regulatory dialogues can reduce fragmentation. A global registry of high‑risk AI systems, managed by an international body, could enhance transparency and coordination.

Akave’s perspective: Building effective AI governance

Akave welcomes robust, risk‑based regulation that protects people and ensures AI serves the public good. Our guiding principles for governance are:

  • Proportionality: Regulation should scale with risk. High‑impact systems require stringent obligations; low‑risk applications should face minimal burdens. SB 53’s compute and revenue thresholds reflect this principle, though they may need adjustment as technology advances.
  • Transparency and accountability: Developers must document data sources, training methods and safety measures. Public reporting and independent audits build trust. We support the EU’s serious incident reporting and California’s transparency reports. To avoid bureaucracy, reporting should be standardised and digital.
  • Collaboration and consultation: Policymakers should consult diverse stakeholders—industry, civil society, academia and affected communities. The EU’s open consultation on Article 73 guidance is a positive example. Inclusive governance fosters legitimacy and reduces unintended consequences.
  • Interoperability and harmonisation: Fragmentation hinders innovation. Wherever possible, regulators should align definitions, risk categories and reporting formats. We encourage coordination among the EU, U.S. states, OECD and emerging economies to build compatible frameworks.
  • Support for open source and small developers: Regulation should not entrench incumbents. Open‑source developers and SMEs often drive innovation but lack compliance resources. Exemptions, sandboxes and public funding can help them meet obligations without stifling creativity.
  • Adaptive regulation: AI evolves rapidly. Governance frameworks must include mechanisms for periodic review and updating. Sunset clauses, pilot programmes and regulatory sandboxes can ensure that rules stay relevant.

Conclusion

AI governance is entering a critical phase. The EU AI Act and California’s SB 53 signal a shift from voluntary ethics to binding law. These frameworks differ in scope and approach, yet they share common goals: prevent harm, promote transparency and build public trust. As more jurisdictions develop their own rules, the need for coordination and proportionality will grow. Businesses and developers must prepare to navigate a complex regulatory landscape. Those who embrace safety, openness and accountability will not only comply with laws but also gain a competitive advantage by earning the public’s trust. At Akave we believe that responsible AI is a shared responsibility: regulators set guardrails, developers innovate within them and society benefits from technology that respects rights and advances human flourishing.

About Akave Cloud

Akave Cloud is an enterprise-grade, distributed and scalable object storage platform designed for large-scale datasets in AI, analytics, and enterprise pipelines. It offers S3 object compatibility, cryptographic verifiability, immutable audit trails, and SDKs for AI agents, all with zero egress fees and no vendor lock-in, saving up to 80% on storage costs vs. hyperscalers.

Akave Cloud works with a wide ecosystem of partners operating hundreds of petabytes of capacity, enabling deployments across multiple countries and powering sovereign data infrastructure. The stack is also pre-qualified with key enterprise applications such as Snowflake.

Modern Infra. Verifiable By Design

Whether you're scaling your AI infrastructure, handling sensitive records, or modernizing your cloud stack, Akave Cloud is ready to plug in. It feels familiar, but works fundamentally better.