
Columnist: Kaleke Kolawole

Navigating AI regulation: the EU vs the UK

Artificial intelligence (AI) is a rapidly evolving technology, transforming industries at scale, automating repetitive tasks, and improving efficiency and decision-making. As AI has advanced, the regulatory and legislative landscapes have adapted quickly to respond to the market and produce frameworks and standards that protect civil rights, promote ethical practice, and foster innovation. There are two notable approaches to date: the European Union's AI Act and the UK's pro-innovation approach (the latter will not be implemented on a statutory footing initially, but the government anticipates introducing a statutory duty on regulators, requiring them to have due regard to its principles). So, what are the key differences between the two documents, and what are the implications for market research?

The EU approach

The EU Artificial Intelligence Act is a comprehensive and ambitious framework introduced by the European Union. The act is the first of its kind in Europe and the first AI legislation in the world. The objective of the rules is to ensure that AI systems are overseen by people, rather than by automation, to prevent harmful outcomes.

The cornerstone of the act is the classification system through which it aims to govern AI systems, addressing a wide range of applications, from chatbots to complex machine-learning algorithms. The EU classification system is as follows:

- Unacceptable risk
- High risk
- Limited risk
- Minimal or no risk.

Unacceptable-risk AI systems are those considered a threat to people, and will be banned. They include:

- Cognitive behavioural manipulation of people or specific vulnerable groups (for example, voice-activated toys that encourage dangerous behaviour in children)
- Social scoring: classifying people based on behaviour, socioeconomic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition.
All high-risk AI systems will be assessed before being put on the market and throughout their life-cycle. Generative AI systems, such as ChatGPT, would have to comply with transparency requirements:

- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training.

Market research agencies using high-risk AI systems, such as those affecting fundamental rights, will need to undergo conformity assessments, maintain detailed documentation, and obtain explicit user consent. This can lead to increased operational costs and potential delays in project execution.

Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. For example, an individual interacting with a chatbot must be informed that they are engaging with a machine, so they can decide whether to proceed or request to speak with a human instead.

Minimal-risk applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games, and inventory-management systems.

The UK approach

The UK white paper was published for consultation on 29 March 2023. It sets out an innovative, principles-based approach to regulating AI, deferring to regulators, with their market and industry expertise, to implement the principles and issue guidance and resources. The objective of the UK approach is to drive growth and prosperity, ensure public trust in AI, and strengthen the UK's position as a global leader. The UK approach is aimed at enabling business and innovation, rather than enforcing rigid and onerous legislative requirements, which could hold back AI innovation and reduce the UK's ability to respond quickly, and proportionately, to future technological advances.
Instead, the direction makes use of regulators' domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. The five principles of the UK approach are:

Impact, issue 44, 2023, pp48-49