US administration announces new actions to promote responsible AI

President Biden emphasizes that responsible AI innovation should serve the public good while providing safeguards for society, security, and the economy.

The Biden-Harris Administration announced new actions to promote responsible AI innovation and safeguard people’s rights and safety. These actions build on the government’s ongoing efforts to address risks and opportunities posed by AI technology, aiming to improve the lives of the American people.

Vice President Kamala Harris will meet with executives from Alphabet, Anthropic, Microsoft, and OpenAI to underscore the importance of ethical AI development. The meeting is part of a broader effort to engage stakeholders, including advocates, companies, researchers, and civil rights organizations, in discussions about AI.

The Vice President will join other top Biden officials, such as Commerce Secretary Gina Raimondo, Chief of Staff Jeff Zients, National Security Advisor Jake Sullivan, and Office of Science and Technology Policy Director Arati Prabhakar, at the summit.

The Administration has already taken steps to foster responsible AI innovation, including the Blueprint for an AI Bill of Rights, related executive actions, the AI Risk Management Framework, and a roadmap for a National AI Research Resource.

In February, President Biden signed an Executive Order requiring federal agencies to guard against bias in AI technologies. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division affirmed their commitment to use existing legal powers to protect Americans from AI-related harm.

The Administration is addressing national security concerns surrounding AI, particularly in cybersecurity, biosecurity, and safety. Government cybersecurity experts have been enlisted to ensure leading AI companies have access to best practices for protecting models and networks.

The National Science Foundation will invest $140 million to launch seven new Artificial Intelligence Research Institutes across the United States, extending the network of institutes to nearly every state.

Leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, have committed to participating in an independent evaluation of their systems at DEF CON 31's AI Village. The evaluation will be conducted on a platform developed by Scale AI and will adhere to responsible disclosure principles.

Nathan Yasis

Nathan studied information technology and secondary education in college. He dabbled in and taught creative writing and research to high school students for three years before settling in as a digital journalist.
