Europe proposes strict rules for artificial intelligence

The European Union on Wednesday unveiled strict rules for the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.

The draft rules would set limits on the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems – areas considered “high risk” because they could endanger people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public areas, although there would be several exceptions for national security and other purposes.

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, as well as scores of other companies that use the software to develop medicine, underwrite insurance policies and assess creditworthiness. Governments have used versions of the technology in criminal justice and in allocating public services such as income support.

Companies that violate the new regulations, which could take several years to move through the European Union’s policy-making process, could face fines of up to 6 percent of global sales.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The European Union regulations would require companies providing artificial intelligence in high-risk areas to supply regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight of how the systems are created and used.

Some applications, such as chatbots that provide humanlike conversation in customer-service settings, and software that creates hard-to-detect manipulated images such as “deepfakes,” would have to make clear to users that what they are seeing is computer generated.

For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, with its policies often used as blueprints by other countries. The bloc has already enacted the world’s most far-reaching data-privacy regulation and is debating additional antitrust and content-moderation laws.

But Europe is no longer alone in pushing for tighter controls. The largest tech companies now face a broader crackdown from governments around the world, each with its own political and policy motivations, aimed at diminishing the industry’s power.

In the United States, President Biden has filled his administration with industry critics. Britain is creating a tech regulator to police the industry. India is tightening its oversight of social media. China has taken aim at domestic tech giants such as Alibaba and Tencent.

The outcomes in the coming years could reshape how the global internet works and how new technologies are used, with people getting access to different content, digital services or online freedoms based on where they are.

Artificial intelligence – training machines to perform tasks and make their own decisions by studying vast amounts of data – is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising huge productivity gains.

But as the systems grow more sophisticated, it can become harder to understand why the software is making a decision, a problem that could worsen as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy, or lead to more jobs being automated.

The release of the draft law by the European Commission, the bloc’s executive body, drew mixed reactions. Many industry groups expressed relief that the regulations were not stricter, while civil society groups said they should have gone further.

“There has been a lot of discussion in recent years about what it would mean to regulate AI, and the fallback option until now has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time a country or regional bloc has tried it.”

Ms. Kind said many were concerned that the policy was too broad and that companies and technology developers had too much discretion to regulate themselves.

“If it doesn’t set strict red lines and guidelines and very clear boundaries on what is acceptable, it opens a lot to interpretation,” she said.

The development of fair and ethical artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, a co-leader of a Google team studying ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling advanced software to governments for military use.

In the United States, government authorities are also weighing the risks of artificial intelligence.

This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance, or other benefits.”

Elsewhere, in Massachusetts and in cities like Oakland, Calif.; Portland, Ore.; and San Francisco, governments have taken steps to limit police use of facial recognition.
