
How Commerce Secretary Gina Raimondo Became America’s Point Woman on AI


Until mid-2023, artificial intelligence was something of a niche topic in Washington, largely confined to small circles of tech-policy wonks. That all changed when, nearly two years into Gina Raimondo’s tenure as Secretary of Commerce, ChatGPT’s explosive popularity catapulted AI into the spotlight. 

Raimondo, however, was ahead of the curve. “I make it my business to stay on top of all of this,” she says during an interview in her wood-paneled office overlooking the National Mall on May 21. “None of it was shocking to me.”

But in the year since, even she has been startled by the pace of progress. In February 2023, a few months after ChatGPT launched, OpenAI’s leadership previewed its latest model, GPT-4, to Raimondo, who used it to write a speech she says was “alarmingly close” to her own prose. Today, tech companies continue to roll out new products with capabilities that would have seemed like science fiction just months earlier. As AI has rocketed up the government’s priority list, President Joe Biden made Raimondo his point woman, charging her with controlling access to the specialized semiconductor chips required to train the most advanced AI systems and with ensuring that those systems are safe.

With her business-friendly approach, Raimondo, 53, is popular among the leaders of the very companies she’s tasked with steering. “She has transformed the Commerce Department from a department that really did not focus on technology issues under President Trump to, in many ways, the very center of the federal government for a focus on next-generation technology,” says Brad Smith, vice chair and president of Microsoft and one of Raimondo’s many tech industry advisers.

But questions linger over whether her department—focused on promoting rather than regulating U.S. industry—is well-suited to lead the government’s AI response. Many existing laws already apply to AI, with various agencies responsible for enforcement—the Federal Trade Commission, for example, has long regulated the use of AI in loan application assessments. But Biden’s Executive Order in October 2023 made Commerce the de facto authority on the general-purpose AI systems like those powering ChatGPT. Congress has not addressed the new technology, leaving Raimondo to rely on voluntary cooperation. Her department is also chronically underfunded: the U.S. AI Safety Institute’s $10 million budget, for example, is dwarfed by its British counterpart’s $127 million.

If the powerful systems that Commerce grapples with keep improving at the rate many predict, the stakes are incredibly high. AI could be decisive in the U.S.-China cold war, displace countless workers, and even pose an existential risk to humanity. Raimondo, with limited resources and legal authority, has to confront what she calls the immense opportunities and threats of AI. 

“We’re gonna get it done, though it is also true: Congress needs to act, we need more money, and it’s super daunting,” she says. “We’re running as fast as we can.”


Navigating this monumental challenge requires technological acumen, and Raimondo’s background has prepared her well. She spent most of her pre-politics career in venture capital. “I was, once upon a time, a tech investor,” she says. “It comes naturally to me.”

Even after pivoting into politics, serving first as Rhode Island’s general treasurer and then as a two-term governor, Raimondo continued to follow technology. Entrepreneur Reid Hoffman hosted a dinner in 2016 for a group of Bay Area intellectuals and Raimondo, who was touring Silicon Valley at the time. “A lot of people from around the world come to Silicon Valley to try to understand what they can learn to improve their region,” says Hoffman. “The funny thing is, very few U.S. politicians do that. Gina is one of those few.”

Raimondo still speaks with executives like Hoffman, as well as advocates, academics, and venture capitalists. “I try so hard to talk to as many people in the industry as possible,” she says, confirming regular contact with CEOs from Anthropic, OpenAI, Microsoft, Google, and Amazon. Her closeness with the tech industry has drawn criticism, with Senator Elizabeth Warren accusing Commerce of “lobbying on behalf of Big Tech companies overseas.” 

That perhaps makes sense, since the department’s mission is to be a pro-business voice within the government, aiming “to create the conditions for economic growth and opportunity for all communities.” Raimondo’s prominence stems from Congress’ failure to confer legal authority elsewhere in government to regulate AI—and that job, she says, should not come to Commerce. “Commerce’s magic is that we’re not a regulator,” Raimondo says. “So businesses talk to us freely—they think of us as a partner in some ways.”

That friendliness has proved useful in securing voluntary commitments from AI companies. Biden’s AI Executive Order requires tech companies to inform Commerce about their AI-model training and safety-testing plans, but it doesn’t mandate Commerce’s direct testing of those models. Raimondo says the AI Safety Institute will soon test all new advanced AI models before deployment, and that the leading companies have agreed to share their models. Despite reports that AI firms have failed to honor voluntary commitments to the U.K. institute, Raimondo remains confident. “We have had no pushback, and that’s why I work so closely with these companies,” she says. As for relationships with individual CEOs, Raimondo emphasizes there’s no preferential treatment. “I’m pretty clear-eyed. Every businessperson I talk to has an angle—they want to make as much money as they can and maximize shareholder profit. My job is to serve the American people.”


Raimondo leads Commerce’s efforts to maintain U.S. technological supremacy by controlling the supply of specialized semiconductor chips needed for advanced AI. This includes overseeing the distribution of $39 billion in CHIPS Act grants to semiconductor companies and imposing export restrictions on chips and chip-manufacturing equipment. Commerce is also developing safety tests and standards for powerful AI systems in coordination with international partners. While some of these activities could have been housed elsewhere in government, Alondra Nelson, a social science professor at the Institute for Advanced Study and former White House adviser, sees Raimondo’s competence as a key factor. “It is a manifestation of the President’s confidence in her leadership that she has been tasked with taking the baton on these historic initiatives,” she says.

Raimondo, right, visits defense firm BAE Systems on Dec. 11. Steven Senne—Pool/AP

By leveraging policy tools and the dominance of American AI companies, Raimondo hopes to make access to cutting-edge AI contingent on adherence to U.S.-led safety standards. She played a crucial role in brokering a deal between Microsoft and UAE-based G42 in which the latter agreed to remove Chinese technology from its operations. “What we have said to [the UAE], and any country for that matter, is you guys gotta pick,” Raimondo says. “These are the best-in-class standards for how AI is used in our ecosystem. If you want to follow those rules, we want you with us.”

She also believes U.S. leadership on AI can help promote more responsible practices in countries like the UAE. “We have something the world wants,” she says. “To the extent that we can use that to bring other countries to us and away from China, and away from human-rights abuses with the use of technology, that’s a good thing.”

The push to set global AI standards is rooted in an understanding that many of the challenges posed by AI transcend borders. Concerns about the dangers of highly advanced AI, including the possibility of human extinction, have gradually gained traction in tech circles. In May 2023, executives at prominent tech companies and many world-leading researchers signed a statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

These fears have spread to Washington in recent months, becoming part of serious policy discussions. In a December 2023 Senate forum on “Risk, Alignment, & Guarding Against Doomsday Scenarios,” Senate Majority Leader Chuck Schumer asked each attendee to state their p(doom)—the probability they assign to AI causing human extinction, or some equally catastrophic outcome. (Raimondo declines to give a specific estimate. “I'm just a very practical person, so I wouldn't think of that,” she says, but notes AI-enabled bioterrorism is a primary concern.)

While some in Washington take doomsday scenarios seriously, others remain skeptical. In April, Raimondo appointed Paul Christiano, a researcher with a track record of grave predictions about AI apocalypse, as head of AI safety at the U.S. AI Safety Institute. Some employees at the National Institute of Standards and Technology (NIST), which houses the Safety Institute, were reportedly unhappy with the appointment, but Raimondo sees value in the disagreement. “The fact that Paul’s view is different than some [NIST employees] is a really good thing,” she says. “He can push, they push back.”

Limited resources may partly explain the internal NIST conflict. The $10 million for the Safety Institute was pulled from NIST’s budget, which shrank in 2024 even though the agency is drastically underresourced, with many of its campuses falling into disrepair. Biden’s AI Executive Order “puts a tremendous burden on Commerce to do a lot of the implementation,” says Dewey Murdick, executive director at Georgetown’s Center for Security and Emerging Technology. “I don’t think the funding is anywhere connected with what is realistic.”


Shortly after our interview, Raimondo sits motionless, eyes lowered, listening intently to a briefing from Seoul, where AI Safety Institute director Elizabeth Kelly is representing the U.S. It’s good news: the other nations present seem eager to participate in a U.S.-led plan to establish a global network of AI-safety institutes. Raimondo grins at the three officials around the table. “This is kind of like how it’s supposed to work,” she says. But as she and her team talk next steps, the grin fades. It’s mid-May and there’s a lot to do before the group of AI-safety institutes convenes in San Francisco in October, she says.

The November election quickly follows. If Trump wins, which she says would be “tragic on every level, including for AI policy,” Raimondo rules out a move into the tech industry. If Biden is re-elected, Raimondo says she’ll stay at Commerce “if he wants me to.” 

Either way, the quest for U.S. chip superiority has bipartisan support, and will likely endure. However, NIST may struggle to maintain its apolitical reputation if AI remains a hot-button issue in Washington. “Now, too much money is being made, too much impact on real life is happening,” Murdick says. That means AI is inevitably going to become more political.

Politics will of course feature in the potential passage of any AI legislation, which Raimondo says she plans to shepherd on the Hill as she did the CHIPS Act. For now, absent a regulator that such legislation would empower, Commerce’s AI duties continue to expand. While Raimondo’s CHIPS Act investments have unleashed a surge of private funding, analysts are less optimistic about Commerce’s ability to choke off China’s access to chips. And insufficient funding could cause the AI Safety Institute to fall short. In practice, this could mean China rapidly closes the AI gap on the U.S. as it secures needed chips, with neither side able to guarantee well-behaved systems—the doomsday race scenario feared by those who believe AI might cause human extinction. 

But Raimondo remains confident. “There has been moment after moment after moment in United States history, when [we are] confronted with moonshot moments and huge challenges,” she says. “Every time we find a way to meet the mission. We will do that again.”
