Bin Ling, Runyi Ma
2025, 4(3): 37-94.
The study offers a systematic and comparative framework of three ideal-typical models of global artificial intelligence (AI) governance: Paternal Prevention (exemplified by the European Union), Guardian Supervision (characteristic of the United States), and Companion Regulation (manifested in differing forms in China and the United Kingdom). Through in-depth analysis across four key governance dimensions—legal tools and policy orientation, administrative enforcement and structure, judicial review and mechanisms of checks and balances, and local experimentation and power allocation—the study reveals the normative foundations, institutional logics, and strategic approaches each model employs to balance AI innovation with risk mitigation. Particular emphasis is placed on the rising prominence of Companion Regulation as a potentially adaptive and globally influential governance path.
The Paternal Prevention model pursued by the European Union embodies a risk-averse, precautionary logic, centered on preemptively constraining AI applications that may infringe upon fundamental rights. Anchored in the binding AI Act, the EU constructs a unified, risk-tiered regulatory framework that applies directly across member states. This framework mandates ex-ante compliance measures for high-risk systems, such as human oversight and conformity assessments, while prohibiting certain high-risk uses altogether. The EU's approach is supported by a multilevel enforcement architecture, including the European AI Board and designated national supervisory authorities, with substantial sanctioning powers. The model is undergirded by a strong judiciary capable of reviewing both administrative actions and legislative compliance, further reinforcing fundamental rights. However, the supranational and centralized nature of this model limits member states' autonomy and scope for localized experimentation, potentially constraining innovation flexibility.
By contrast, the Guardian Supervision model exemplified by the United States emphasizes post hoc oversight within a market-oriented, innovation-friendly environment. Lacking a comprehensive federal AI law, the U.S. relies on a decentralized patchwork of sector-specific regulations, supplemented by soft law instruments such as the NIST AI Risk Management Framework and executive guidance like the "Blueprint for an AI Bill of Rights". Enforcement is fragmented across existing agencies (e.g., FTC, FDA, EEOC), with no centralized authority for AI regulation. The judiciary intervenes only after harm has occurred, adjudicating AI-related disputes through the application of general legal principles rather than AI-specific norms. Local jurisdictions, particularly states and municipalities, serve as regulatory innovators, adopting diverse measures that reflect localized priorities but also contribute to regulatory fragmentation. This model privileges technological dynamism but raises concerns about delayed responses to systemic harms and governance incoherence.
The Companion Regulation model, observed in both China and the United Kingdom, seeks to align public governance with industry innovation through flexible, collaborative, and context-sensitive regulatory mechanisms. In the UK, this model is instantiated through a "pro-innovation" approach that emphasizes principles-based, sector-led guidance over comprehensive legislative codification. Regulators such as the Information Commissioner's Office and Financial Conduct Authority lead AI oversight within their sectors, supported by coordination platforms like the Digital Regulation Cooperation Forum. Judicial interventions, as seen in key cases on facial recognition and algorithmic bias, reinforce rights-based accountability. While the UK system is more centralized than that of the U.S., it still permits targeted experimentation through regulatory sandboxes and devolved competencies.
China's version of Companion Regulation is more interventionist, combining robust top-down mandates with strategic state-industry coordination. Regulatory instruments include binding measures for specific technologies (e.g., generative AI, recommendation algorithms), supported by broader legal frameworks such as the Cybersecurity Law and the Personal Information Protection Law. Enforcement is led by the Cyberspace Administration of China and implemented through a vertically integrated regulatory matrix spanning multiple ministries. While judicial review plays a supplementary role, local pilot zones (e.g., in Shanghai and Beijing) enable experimentation with regulatory approaches under central guidance. Successful local practices are often scaled nationally, reflecting a model of iterative governance rooted in strong administrative capacity.
This tripartite framework also applies to understanding other countries in global AI governance. For example, South Korea, Bahrain, Brazil, Canada, and Turkey, as well as many international forums, tend toward EU-style Paternal Prevention, while India, Saudi Arabia, the UAE, and Israel are closer to U.S.-style Guardian Supervision, and Singapore, Japan, Australia, and New Zealand exhibit Companion Regulation characteristics similar to those of the UK and China.
The study concludes that these three models represent divergent responses to the governance challenges posed by AI's rapid development and social entrenchment. The EU model prioritizes legal certainty and rights protection through preemptive regulation; the U.S. approach champions innovation and institutional pluralism but often lags in anticipatory oversight; the Chinese and UK pathways, through different institutional arrangements, attempt to harmonize regulatory responsiveness with developmental goals. Among these, Companion Regulation emerges as a particularly salient alternative, offering a dynamic balance between flexibility and control. Its success, however, depends on the state's capacity to deploy technical expertise, coordinate across sectors, and adapt regulatory strategies in real time. As AI technologies continue to evolve, this model—grounded in adaptive governance and collaborative oversight—may offer a more effective pathway toward a responsible and sustainable AI future.