Bill of Rights: The US wants to write the rules for artificial intelligence

The White House wants to set standards for the AI industry with a Bill of Rights for AI.
12 October 2021

WILMINGTON, DELAWARE – JANUARY 16: Director of the Office of Science and Technology Policy (OSTP) nominee and presidential science adviser designate Eric Lander speaks during an announcement on January 16, 2021 at the Queen theater in Wilmington, Delaware, where President-elect Joe Biden announced key members of his incoming White House science team. (Photo by Alex Wong / Getty Images via AFP)

  • Reports claim that the White House Office of Science and Technology Policy is developing a “bill of rights” to guard against harmful uses of artificial intelligence, with input from the public.
  • Biden’s chief science adviser voiced the need to develop new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.

This year has so far been an important inflection point for artificial intelligence (AI) law. Within the first few months of 2021 alone, government bodies, including US financial regulators, the US Federal Trade Commission, and the European Commission, announced guidelines or policies for regulating AI.

Now, top science advisers to President Joe Biden are calling for a new “bill of rights” to guard against powerful new AI technology. In fact, the White House’s Office of Science and Technology Policy launched a fact-finding mission last Friday to look at facial recognition and other biometric tools used to identify people or to assess their emotional or mental states and character. If anything, it reflects how rapidly the regulation of AI is evolving.

The right regulation, like the European Commission’s proposal, gets out in front of emerging technology and can protect consumers while driving innovation. The Biden administration is aware of that: in an op-ed published yesterday in Wired, Biden’s chief science adviser, Eric Lander, and the OSTP deputy director for science and society, Alondra Nelson, laid out the need to develop new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.

They both reckon that enumerating these rights is just a first step. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”

Truth be told, this is not the first time the Biden administration has spoken about AI and its possible consequences for society. It is, however, one of America’s clearest steps toward doing something about it. The federal notice filed on Friday seeks public comments from AI developers, experts, and anyone who has been affected by biometric data collection.

According to a report by the Associated Press, the software trade association BSA, backed by companies such as Microsoft, IBM, Oracle and Salesforce, said it welcomed the White House’s attention to combating AI bias but is pushing for an approach that would require companies to do their own assessment of the risks of their AI applications and then show how they will mitigate those risks. 

“It enables the good that everybody sees in AI but minimizes the risk that it’s going to lead to discrimination and perpetuate bias,” the report quoted BSA’s vice president for global policy, Aaron Cooper, as saying.

Both of Biden’s top science advisers said the government is “starting here because of how widely they’re (AI) being adopted, and how rapidly they’re evolving, not just for identification and surveillance, but also to infer our emotional states and intentions.” Both believe that developing a bill of rights for an AI-powered world won’t be easy, but that it is critical.

Elsewhere, European regulators had already moved in April this year to rein in the riskiest AI applications, those that could threaten people’s safety or rights. On top of that, just last week, European Parliament lawmakers passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places.