May 31, 2023 (WASHINGTON) The White House finds itself on the front line of the battles surrounding artificial intelligence (AI) as it grapples with the growing risks posed by AI-generated content. Recently, the White House press shop swiftly debunked a fake image of a Pentagon bombing that had circulated online, after the hoax briefly wiped roughly $500 billion in value from the stock market before it recovered. As AI technology continues to improve, however, the production of AI-generated text, audio, and video that closely resembles human creations poses a mounting concern.
The press shop has already been briefed on the national security risks associated with AI-altered images and videos, emphasizing the need for caution. Beyond the press shop, the White House has escalated its efforts to evaluate and manage AI risks. During meetings with AI companies, the administration has stressed the responsibility of these companies to ensure the safety of their products. Additionally, the strategic plan for AI research and development has been updated for the first time in four years, and a process has been initiated to develop an AI bill of rights.
Prominent industry figures, including OpenAI CEO Sam Altman, issued a statement calling for global leaders to prioritize mitigating the risks of AI, stating that it should be considered alongside other significant societal-scale risks such as pandemics and nuclear war. The White House press secretary, Karine Jean-Pierre, refrained from confirming whether President Biden shares the belief that mismanaged AI could lead to extinction. However, she acknowledged the immense power of AI and emphasized the administration’s commitment to risk mitigation.
Various proposals for AI regulation, including legislation put forth by Senator Michael Bennet, aim to oversee AI and Big Tech more broadly. Concerns persist over the rise of deepfake videos and manipulated images on social media platforms. White House assistant press secretary Robyn Patterson highlighted the need for awareness of this growing trend and its potential to expand exponentially.
While AI holds enormous potential and has triggered a global race to harness its power, the unforeseen consequences could be severe, particularly in the context of upcoming elections. AI-generated content, disseminated at scale, has the potential to create a collective inauthenticity that undermines public trust in facts. In a country already grappling with misinformation and conspiracy theories, the impact of AI on public trust is a significant concern, and the erosion of that trust poses a threat to the foundations of democracy.
The White House has initiated a new working group within the President’s Council of Advisors on Science and Technology to address AI-related issues. AI researchers, including Sarah Kreps from Cornell University’s Brooks School Tech Policy Institute, have been invited to contribute their expertise to the working group. The challenges posed by AI call for a careful and proactive approach to mitigate risks and safeguard the integrity of information in an increasingly AI-driven world.