President Joe Biden’s top science advisers are calling for a new “bill of rights” to guard against powerful new artificial intelligence technology.
The White House Office of Science and Technology Policy on Friday launched a fact-finding mission looking at biometric tools used to identify people or to assess their emotional or mental state and character.
Eric Lander, Biden’s chief science adviser, and Alondra Nelson, the office’s deputy director for science and society, published an opinion piece in Wired magazine describing the need to develop new safeguards against faulty and harmful uses of AI that can discriminate against people or violate their privacy.
“Enumerating the rights is just a first step,” they wrote. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”
This is not the first time the Biden administration has raised concerns about the harmful effects of AI, but it is one of its clearest signals that it intends to do something about them.
European regulators have already taken steps to rein in dangerous AI applications that threaten people’s safety or rights. EU lawmakers took a step this week toward banning biometric mass surveillance: although Tuesday’s vote was non-binding, it called for new rules prohibiting the automated scanning of facial features in public places.
Political leaders in Western democracies have said they want to balance the desire to tap the economic and social potential of AI against growing concern about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.
A federal notice published Friday sought public feedback from AI developers, experts and anyone who has been affected by biometric data collection.
BSA, a software trade association backed by companies such as Microsoft, IBM, Oracle and Salesforce, said it welcomed the White House’s focus on combating AI bias, but it stressed that companies should assess the risks of their own AI applications themselves and then show how they will mitigate those risks.
“It achieves the good that everyone sees in AI, but it also reduces the likelihood of discrimination and the risk of perpetuating bias,” said Aaron Cooper, the group’s vice president of global policy.