Technology
US users more concerned about AI privacy than job loss
Though rapidly developing artificial intelligence (AI) technologies seemingly make our lives easier, users of AI tools are more concerned about data privacy than about the new technology replacing them in the workplace.
The use of AI tools shapes our work and social lives, and brings with it privacy concerns.
While concerns about job losses from AI replacing human workers are on the rise, the effect of AI tools on personal privacy has become a hot topic.
A survey by the consulting firm KPMG of some 1,000 college-educated US consumers found that respondents believe the benefits of AI technology outweigh the risks attached to using it.
Some 42% of the consumers questioned said that generative AI tools have significantly impacted their personal lives, while the remaining 58% said such applications have shaped their professional lives; 51% of respondents expressed significant excitement over generative AI.
More than half of the participants in the KPMG survey believe that generative AI tools will bring improvements in a wide range of areas, from physical health and cybersecurity to personalized recommendations and education.
However, the surveyed participants expressed concerns over fake news and content, AI scams, data privacy, disinformation, and cybersecurity risks arising from the increased use of AI.
Among the participants, 51% expressed concerns over job losses due to AI replacing human workers.
As for opinions on federal regulation of AI development, 60% of Gen Z and Millennial respondents said current regulations are “just right” or “too much.”
Additionally, 36% of Gen X and 15% of Boomer and Traditionalist participants agreed with the US government’s current approach to regulating AI development.
- Biden administration’s executive order on AI
The Biden administration issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023.
The order, issued to protect Americans from potential risks of AI tools, required companies developing AI technologies to share security test results and other information with the US government.
In addition, new rules were introduced to protect people against fraud from AI-made content by implementing measures to verify such content.
Meanwhile, the US Federal Trade Commission (FTC) launched a wide-ranging investigation into the ChatGPT maker OpenAI last year for allegedly violating consumer protection laws.
The FTC launched an investigation into Alphabet, Amazon, Anthropic, Microsoft, and OpenAI’s generative AI investments and partnerships in January.
At the beginning of June, news reports in the US revealed that the Department of Justice would investigate chipmaker Nvidia for its role in the AI craze.