Here’s what you need to know about the race to create laws around AI.
All across the world, governments are racing to legislate new rules around artificial intelligence (AI). The technology’s rapid evolution has prompted concerns around security, privacy, and what AI may become capable of next. As recently as July 2023, AI researcher and University of Montreal professor Yoshua Bengio urged the US Congress to enact laws regulating the future of AI. Bengio and other experts testified before Congress that runaway AI could result in corrupted elections, bank fraud, automated nuclear weapons going rogue, or even the dissemination of instructions for carrying out a biological attack on North American soil.
There’s no shortage of “what if” scenarios surrounding AI. And certainly, as the technology develops, it becomes capable of executing ever more complex tasks. But is there any substance to these concerns, or are they pure science fiction? And what guardrails, if any, should be put in place to stop bad actors? Here’s a quick rundown of the legislation currently in progress – and the concerns governments are trying to address with these laws.
Canada was the first country in the world to propose legislation regulating AI. Bill C-27, which contains the Artificial Intelligence and Data Act (AIDA), is currently winding its way through Parliament. If passed, AIDA would govern any piece of technology that processes data “related to human activities” through a neural network, machine learning, or another technique to generate content, make decisions, or create recommendations or predictions. That’s a broad definition – by design. AIDA further divides AI systems into “high-impact” and “non-high-impact” systems.
If passed, AIDA would require developers of high-impact AI systems to mitigate risks of harm and biased output, monitor the effectiveness of those mitigation measures, and publish a plain-language description of how the system works. Most importantly, developers of high-impact AI systems would have to notify the Minister if the system causes or is likely to cause material harm.
For non-high-impact systems, the obligations would be less onerous. Owners of non-high-impact systems would only need to assess whether their system qualifies as high-impact and, if not, create measures to ensure any data used or made available for use is fully anonymized.
In the United States, the Biden Administration has made regulating AI an important policy priority. The White House has published the Blueprint for an AI Bill of Rights, a document outlining five key principles for the design and use of automated systems. While not yet law, the blueprint signals the administration’s approach to regulating AI.
The first element of the AI Bill of Rights is safe and effective systems: In essence, AI systems should undergo testing, risk mitigation, and ongoing monitoring to ensure those systems are safe and effective.
The second element is protection against discrimination: AI algorithms should be developed in a way that does not contribute to racism, sexism, or discrimination on the basis of age, disability, religion, or other protected classes.
The third element is data privacy: AI systems should have built-in privacy protections to ensure users have control over how their data is used. These privacy protections should be enabled by default and should ensure that only necessary data is collected.
The fourth element is notice and explanation: When AI systems are being used, the user should be made aware that AI is in the mix, and an explanation should be provided that delves into how the AI system works and what information it collects.
The fifth element is human alternatives, consideration, and fallback: Consumers should be able to opt out of an automated system where possible, using a human alternative when one is available.
The European Union has developed draft legislation that takes a risk-based approach to AI: the greater the risk a system poses, the stricter the rules that apply. Certain AI practices would be prohibited outright – for instance, real-time biometric identification in public spaces, predictive policing systems, and facial recognition databases built by scraping facial images.
While the majority of the proposed EU law focuses on these high-risk AI activities, the law also contains foundational elements designed to protect consumers against lower-risk activities. Under the draft law, providers of foundational AI models would have new obligations to guarantee the protection of human rights and health and safety. These providers would also need to implement guardrails to ensure their AI systems could not be used to subvert democracy and the rule of law. Finally, AI providers would be required to prevent their models from generating illegal content and to publish summaries of the copyrighted data used to train them.
While none of the above bills are yet law, they all address important concerns about the use of AI. Left unchecked, AI systems could be used for racial profiling, election interference, and a variety of other nefarious acts. These risks are real, and they underscore just how powerful the technology has become.
AI can be a robust tool, and like all tools, it can be used in positive or negative ways. Governments all around the world are working to ensure that AI is used only in a positive manner by legitimate actors. At Appara, for instance, we use AI to help law firms accelerate paperwork and reduce errors in filings.
These regulations also underscore the positive potential of AI to improve life for all humankind. The driving philosophy behind these regulations is that AI should serve as a tool for human betterment and not a weapon; that’s a philosophy everyone can agree with. If AI truly is powerful enough to require legislation, then imagine what it can accomplish when it’s devoted toward a noble purpose.
How would you use AI to improve your life? Do you want to see generative AI in action? Schedule a demo of Appara today to discover how artificial intelligence can save you hours on paperwork.