Top AI Companies Join Government Effort to Set Safety Standards
The top U.S. artificial intelligence companies will participate in a government-led effort intended to craft federal standards on the technology to ensure that it’s deployed safely and responsibly, the Commerce Department said Thursday.
OpenAI, Anthropic, Microsoft Corp., Meta Platforms Inc. and Alphabet Inc.’s Google are among more than 200 members of a newly established AI Safety Institute Consortium under the department, Commerce Secretary Gina Raimondo said. Also on the list are Apple Inc., Amazon.com Inc., Hugging Face Inc. and IBM.
The top industry players will work with the National Institute of Standards and Technology, a body within Commerce, along with other technology companies, civil society groups, academics, and state and local government officials to establish AI safety standards.
Read More: Biden Economic Adviser Elizabeth Kelly Picked to Lead AI Safety Testing Body
“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem,” Raimondo said in a statement.
Major tech companies have been engaging with the Biden administration and policymakers in Washington on regulating AI as the technology rapidly advances and stands poised to disrupt industries. Federal officials are seeking to maintain U.S. leadership in AI development, intending to set rules that protect Americans from hazards such as misinformation and privacy violations while still promoting the technology’s potential to spur progress in health care, education, and other industries.
“Progress and responsibility have to go hand in hand. Working together across industry, government and civil society is essential if we are to develop common standards around safe and trustworthy AI,” Nick Clegg, president of global affairs at Meta, said in a statement. “We’re enthusiastic about being part of this consortium and working closely with the AI Safety Institute.”
Thursday’s initiative comes as part of President Joe Biden’s sweeping executive order signed last fall that charged the Commerce Department with facilitating the development of safety, security, and testing standards for AI models as well as rules for watermarking AI-generated content.
Read More: Why Biden’s AI Executive Order Only Goes So Far
Prominent industry startups, including Scale AI, which provides training data for generative AI models, and Altana AI, which maps global supply chains using AI, will also take part in establishing the safety standards.
“In doing so, we not only contribute to the responsible use of AI, but also reinforce the United States’ position as the global leader in the realm of artificial intelligence,” John Brennan, Scale AI’s public sector general manager, said in a statement.