AI regulation: where are China, the EU and the US? | Foley & Lardner LLP

Thanks to co-author Lara Coole, summer associate in the Jacksonville office of Foley & Lardner, for her contributions to this article.

Artificial Intelligence (AI) systems are poised to dramatically change the way businesses and governments operate globally, with significant changes already underway. This technology has manifested in multiple forms, including natural language processing, machine learning, and autonomous systems, but with the right inputs, it can be harnessed to make predictions, recommendations, and even decisions.

Consequently, businesses are increasingly adopting this dynamic technology. A 2022 global study by IBM found that 77% of companies are currently using AI or exploring AI for future use, creating value by increasing productivity through automation, better decision-making, and an improved customer experience. Additionally, according to a 2021 PwC study, the COVID-19 pandemic accelerated the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on business planning, workforce management, supply chain resilience, and demand projection.

Challenges of global regulation

For these many companies investing significant resources in AI, understanding the current and proposed legal frameworks governing this new technology is essential. Specifically for companies operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the various standards emerging from China, the European Union (EU) and the United States.

China

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed regulations governing companies’ use of algorithms in online recommendation systems, requiring such services to be moral, ethical, accountable, and transparent, and to “spread positive energy.” The regulations require companies to notify users when an AI algorithm is playing a role in determining what information to display to them and to give users the option not to be targeted. In addition, the regulations prohibit algorithms that use personal data to offer different prices to consumers. We expect these themes to recur in AI regulations around the world as they develop.

European Union

Meanwhile, in the EU, the European Commission has published a proposed comprehensive regulatory framework called the Artificial Intelligence Act, which would have a much broader scope than the regulations enacted by China. The proposal focuses on the risks created by AI, with applications classified into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application’s designated risk level, a corresponding set of obligations will apply. So far, the proposed obligations focus on improving the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulations are passed, the earliest compliance date would be the second half of 2024, with potential fines for non-compliance ranging from 2% to 6% of a company’s annual revenue.

Additionally, the previously adopted EU General Data Protection Regulation (GDPR) already has implications for AI technology. Article 22 prohibits decisions based solely on automated processing that produce legal consequences or similarly significant effects for individuals, unless the program obtains the user’s explicit consent or meets other requirements.

United States

In the United States, there has been a piecemeal approach to AI regulation so far, with states passing their own disparate AI laws. Many of the regulations passed focus on creating various commissions to determine how state agencies can use AI technology and to study AI’s potential impacts on the workforce and consumers. Pending state initiatives go one step further and would regulate the accountability and transparency of AI systems when processing and making decisions based on consumer data.

At the federal level, the National AI Initiative Act was signed into law in January 2021, creating the National AI Initiative, which provides “a comprehensive framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies. . . .” The act created new offices and task forces aimed at implementing a national AI strategy, involving a host of U.S. administrative agencies, including the Federal Trade Commission (FTC), the Department of Defense, the Department of Agriculture, the Department of Education, and the Department of Health and Human Services.

Pending federal legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. It would require “covered entities,” including companies meeting certain criteria, to conduct impact assessments when using automated decision-making processes, specifically including those derived from AI or machine learning.

The Federal Trade Commission is proactive

Although the FTC hasn’t enacted AI-specific regulations, the technology is on the agency’s radar. In April 2021, the FTC issued a notice advising companies that using AI that produces discriminatory results amounts to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take that warning one step further – in June 2022, the agency indicated that it would submit an Advance Notice of Proposed Rulemaking to “ensure that algorithmic decision-making does not result in harmful discrimination,” with the public comment period ending in August 2022. The FTC also recently released a report to Congress on how AI can be used to combat online harms, ranging from scams and counterfeits to opioid sales, but cautioned against over-reliance on these tools, citing the technology’s susceptibility to producing inaccurate, biased, and discriminatory results.

Potential corporate liability in the United States

Companies should carefully discern whether other non-AI-specific regulations could expose them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, intentionally or unintentionally, screening out people with disabilities. A more in-depth analysis of the EEOC guidance can be found here.

Broader impact on American businesses

Many other U.S. agencies and offices are beginning to dive into the AI fray. In November 2021, the White House Office of Science and Technology Policy sought engagement from stakeholders across sectors with the goal of developing a “Bill of Rights for an Automated Society.” Such a bill of rights could cover topics such as the role of AI in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), under the U.S. Department of Commerce, is engaging with stakeholders to develop “a voluntary risk management framework for trustworthy AI systems.” The outcome of this project may be analogous to the regulatory framework proposed by the EU, but in a voluntary format.

What’s next?

The overarching theme of adopted and pending AI regulations globally is to maintain AI accountability, transparency, and fairness. For companies leveraging AI technology, ensuring their systems remain compliant with the various regulations intended to achieve these goals could prove difficult and costly. Two aspects of AI’s decision-making process make oversight particularly demanding:

  • Opacity: users can control data inputs and view outputs, but are often unable to explain how, and with what data points, the system made a decision.
  • Frequent adaptation: processes evolve over time as the system learns.

Therefore, it is important that regulators avoid overburdening companies, so that stakeholders can still leverage the great benefits of AI technologies in a cost-effective manner. The United States has an opportunity to observe the results of current Chinese and EU regulatory action to determine whether those approaches strike a favorable balance. However, the United States may need to accelerate the enactment of similar laws if it wants to play a role in setting the global tone for AI regulatory standards.

