Technology
AI companies join safety consortium to address risks
The Biden administration on Thursday said leading artificial intelligence companies are among more than 200 entities joining a new US consortium to support the safe development and deployment of generative AI.
Commerce Secretary Gina Raimondo announced the US AI Safety Institute Consortium (AISIC), which includes OpenAI, Alphabet's Google, Anthropic and Microsoft along with Facebook-parent Meta Platforms, Apple, Amazon.com, Nvidia, Palantir, Intel, JPMorgan Chase and Bank of America.
"The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.
The consortium, which also includes BP, Cisco Systems, IBM, Hewlett Packard, Northrop Grumman, Mastercard, Qualcomm, Visa and major academic institutions and government agencies, will be housed under the US AI Safety Institute (USAISI).
The group is tasked with working on priority actions outlined in President Biden’s October AI executive order "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."
Major AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to US Cold War simulations in which the adversary was designated the "red team."
Biden's order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.
In December, the Commerce Department said it was taking the first step toward writing key standards and guidance for the safe deployment and testing of AI.
The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a "new measurement science in AI safety," Commerce said.
Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears that it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.
While the Biden administration is pursuing safeguards, efforts in Congress to pass legislation addressing AI have stalled despite numerous high-level forums and legislative proposals.