Can AI Be Regulated Before It’s Too Late?

In this photo illustration, an OpenAI logo is seen displayed on a smartphone and in the background. The White House is looking to release new policies to undercut the possible damage that AI technology can bring, which includes the capacity to displace millions from their jobs. AVISHEK DAS/GETTY IMAGES


By Natan Ponieman

A newly released report by the Biden-Harris administration recognized, “AI technologies pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet.”

Public reactions to the rapid growth of artificial intelligence have quickly shifted from excitement to fear, as developments in generative machine learning reached new levels during 2023. 

Politicians are quickly catching up to these concerns, and the White House is looking to release new policies to undercut the possible damage that AI technology can bring, which includes the capacity to displace millions from their jobs.

Although a bill was introduced earlier this year to regulate AI, legislative action on the issue is moving much slower than the progress of the technology itself.

On Wednesday, Alphabet Inc. CEO Sundar Pichai said, “AI is too important not to regulate well,” in agreement with previous comments by OpenAI’s Sam Altman and Tesla Inc. CEO Elon Musk.

AI has shown an impressive capacity to disrupt the status quo. “If this technology goes wrong — it can go quite wrong,” said Altman, who testified before Congress last week and whose company is behind the groundbreaking release of ChatGPT and GPT-4.

This week, the White House announced “new efforts that will advance the research, development, and deployment of responsible artificial intelligence,” with the goal of protecting individuals’ rights and safety.

National Security Council Coordinator Admiral John Kirby speaks during the White House Press Briefing at the White House in Washington D.C., United States on June 26, 2023. The Biden-Harris administration acknowledged in a recently released report that “AI technologies pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet.” CELAL GUNES/GETTY IMAGES

In February 2022, the White House’s Office of Science and Technology Policy issued a request for information on artificial intelligence, receiving more than 60 responses from researchers, research organizations, professional societies, civil society organizations and individuals.

These were used to update the National AI R&D Strategic Plan, a 56-page document initially released in 2016 and updated in 2019. In contrast to past editions, the latest expert submissions focused mainly on the ethical, legal and societal implications of AI, as well as the safety and security of AI systems.

The 2023 version of the report said that the responses underscore a heightened priority across academia, industry and the public for developing AI systems that are safe and transparent, improve equity, and do not violate privacy.

“Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities,” said the report.

What’s in Biden’s AI Rights Strategy?

The Office of Science and Technology Policy also recently released a blueprint for an “AI Bill of Rights,” intended as a guide for protecting all people from the many threats this technology can bring.

According to the agency, rights around AI should include:

  • Protection from unsafe or ineffective AI systems.
  • Freedom from discrimination by algorithms and systems, which should be designed and used in an equitable way.
  • Protection from abusive data practices and agency over how one’s data is used.
  • Notice when an automated system is being used, and an understanding of how and why it’s being used.
  • The right to opt out of AI and to reach a person who can quickly consider and remedy problems.

Produced in association with Benzinga



The post Can AI Be Regulated Before It’s Too Late? appeared first on Zenger News.