Technology
Governments race to regulate AI tools
Rapid advances in artificial intelligence (AI), such as Microsoft-backed OpenAI's ChatGPT, are complicating governments' efforts to agree on laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Planning regulations
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material.
BRITAIN
* Planning regulations
Leading AI developers agreed on Nov. 2, at the first global AI Safety Summit in Britain, to work with governments to test new frontier models before they are released to help manage the risks of the developing technology.
More than 25 countries present at the summit, including the U.S. and China, as well as the EU, on Nov. 1 signed a "Bletchley Declaration" to work together and establish a common approach on oversight.
Britain said at the summit it would triple to 300 million pounds ($364 million) its funding for the "AI Research Resource", comprising two supercomputers which will support research into making advanced AI models safe, a week after Prime Minister Rishi Sunak had said Britain would set up the world's first AI safety institute.
Britain's data watchdog said in October it had issued Snap Inc's (SNAP.N) Snapchat with a preliminary enforcement notice over a possible failure to properly assess the privacy risks of its generative AI chatbot to users, particularly children.
CHINA
* Implemented temporary regulations
Wu Zhaohui, China's vice minister of science and technology, told the opening session of the AI Safety Summit in Britain on Nov. 1 that Beijing was ready to increase collaboration on AI safety to help build an international "governance framework".
China published proposed security requirements for firms offering services powered by generative AI in October, including a blacklist of sources that cannot be used to train AI models.
The country issued a set of temporary measures in August, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
* Planning regulations
EU lawmakers and governments reached on Dec. 8 a provisional deal on landmark rules governing the use of AI, including governments' use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.
The accord requires foundation models and general-purpose AI systems to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
FRANCE
* Investigating possible breaches
France's privacy watchdog said in April it was investigating complaints about ChatGPT.
G7
* Seeking input on regulations
The G7 countries agreed on Oct. 30 to an 11-point code of conduct for firms developing advanced AI systems, which "aims to promote safe, secure, and trustworthy AI worldwide".
ITALY
* Investigating possible breaches
Italy's data protection authority plans to review AI platforms and hire experts in the field, a top official said in May. ChatGPT was temporarily banned in Italy in March, but it was made available again in April.
JAPAN
* Planning regulations
Japan expects to introduce by the end of 2023 regulations that are likely closer to the U.S. approach than the stringent ones planned in the EU, an official close to deliberations said in July.
The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.
POLAND
* Investigating possible breaches
Poland's Personal Data Protection Office said in September it was investigating OpenAI over a complaint that ChatGPT breaks EU data protection laws.
SPAIN
* Investigating possible breaches
Spain's data protection agency in April launched a preliminary investigation into potential data breaches by ChatGPT.
UNITED NATIONS
* Planning regulations
The UN Secretary-General António Guterres on Oct. 26 announced the creation of a 39-member advisory body, composed of tech company executives, government officials and academics, to address issues in the international governance of AI.
UNITED STATES
* Seeking input on regulations
The US, Britain and more than a dozen other countries on Nov. 27 unveiled a 20-page non-binding agreement carrying general recommendations on AI such as monitoring systems for abuse, protecting data from tampering and vetting software suppliers.
The US will launch an AI safety institute to evaluate known and emerging risks of so-called "frontier" AI models, Secretary of Commerce Gina Raimondo said on Nov. 1 during the AI Safety Summit in Britain.
President Joe Biden issued an executive order on Oct. 30 to require developers of AI systems that pose risks to US national security, the economy, public health or safety to share the results of safety tests with the government.
The US Federal Trade Commission opened an investigation into OpenAI in July on claims that it has run afoul of consumer protection laws.