Microsoft chief says deep fakes are biggest AI concern
Microsoft President Brad Smith said Thursday that his biggest concern around artificial intelligence was deep fakes: realistic-looking but false content.
In a speech in Washington aimed at addressing how best to regulate AI, an issue that went from wonky to widespread with the arrival of OpenAI's ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it has been generated by AI, potentially for nefarious purposes.
"We're going have to address the issues around deep fakes. We're going to have to address in particular what we worry about most foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians," he said.
"We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI."
Smith also called for licensing for the most critical forms of AI with "obligations to protect security, physical security, cybersecurity, national security."
"We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country's export control requirements," he said.
Microsoft President Brad Smith reacts during an interview with Reuters at the Web Summit, Europe's largest technology conference, in Lisbon, Portugal, November 3, 2021. REUTERS/Pedro Nunes
For weeks, lawmakers in Washington have struggled with what laws to pass to control AI even as companies large and small have raced to bring increasingly versatile AI to market.
Last week, Sam Altman, CEO of OpenAI, the startup behind ChatGPT, told a Senate panel in his first appearance before Congress that the use of AI to interfere with election integrity is a "significant area of concern," adding that it needs regulation.
Altman, whose OpenAI is backed by Microsoft, also called for global cooperation on AI and incentives for safety compliance.
Smith also argued in the speech, and in a blog post issued on Thursday, that people needed to be held accountable for any problems caused by AI. He urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure, so that humans remain in control.
He urged the use of a "Know Your Customer"-style system for developers of powerful AI models to keep tabs on how their technology is used, and to inform the public of what content AI is creating so they can identify faked videos.
Some proposals being considered on Capitol Hill would focus on AI that may put people's lives or livelihoods at risk, such as in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.