The European Union is without a doubt the world leader in privacy regulation, thanks to the General Data Protection Regulation (GDPR) and other initiatives such as the Data Act and the European Data Governance Act. In the field of artificial intelligence (AI), the EU, despite lagging behind the two superpowers, China and the United States, is developing a comprehensive regulatory framework for AI.
The United States does not yet have federal laws on privacy and artificial intelligence – most of the existing legislation is at the state level – but in recent months there have been several legislative and regulatory developments in these areas suggesting that federal legislation may not be far away.
The various AI legislative initiatives in the United States seek to address the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.
On May 12, Senator Michael Bennet introduced the Digital Platform Commission Act of 2022 (S. 4201), which would create a new federal agency, the Federal Digital Platform Commission, responsible for developing rules for online platforms that facilitate interactions between consumers as well as between consumers and entities offering goods and services.
Interestingly, the bill, in addition to regulating platforms, requires online platform algorithms to be "fair, transparent and without harmful, abusive, anti-competitive or deceptive bias."
Read more: Bill would create new US high-tech regulator
Another bill that could regulate corporate algorithms is the bipartisan American Data Privacy and Protection Act (HR 8152), introduced by a group of legislators led by Representative Frank Pallone. As the name suggests, this bill mostly regulates privacy and data, but it would also require large data holders to perform "algorithmic impact assessments" of algorithms that could cause potential harm to a person. The bill does not ban the use of artificial intelligence or algorithms, but it does seek to impose on companies certain obligations to disclose how their algorithms work – and, if they can cause harm, how the companies plan to reduce the risk to individuals.
Federal regulators are also active in AI regulation. Last month, in June, the Federal Trade Commission (FTC) sent a report to Congress warning against relying on AI tools to combat online harms. The FTC has also previously warned of the potential biases associated with AI, and the regulator is exploring new rules for this sector. The National Institute of Standards and Technology (NIST) has also published several reports in this area. The first provides guidance on risk management in the design, development, use and evaluation of AI systems. The second is intended to provide guidance for mitigating harmful bias in AI systems.
See also: The Trade Department’s NIST unit seeks comments on draft AI rules for the financial sector
After years of political stalemate over federal privacy legislation, Republicans and Democrats agreed to introduce legislation in this area. The American Data Privacy and Protection Act was introduced on June 23, and it represents the most comprehensive legislation to date at the federal level to regulate privacy.
The bill would regulate entities that collect, process and transmit data, allowing them to perform these activities only to the extent "reasonably necessary and proportionate." It would also ban the collection and processing of certain particularly sensitive data, with limited exceptions. Consumers would have the rights to access, correct, delete and port their data.
Despite the bipartisan agreement, there are still several obstacles to this bill becoming law, such as deciding whether individuals can sue companies for data breaches. But one thing seems clear: the United States is closer than ever to having federal legislation on privacy and artificial intelligence.
Read more: The bill on data protection passes the US House Panel