Technology
As new laws are proposed, Colorado companies share how they use AI to make business better
Wouldn’t it be nice to have someone whisper in your ear the right words to say at the exact moment you need them?
That’s the business of Kelly Kinnebrew, a psychologist in Boulder. She’s a conversation coach who helps executives find the right words under pressure. She helps leaders build trust with their audience, remember to listen and, when needed, wrap it up. But Kinnebrew doesn’t always listen in anymore. She uses artificial intelligence technology to generate texts that pop up as the conversation progresses.
“I’d tried a low-tech solution and it worked really well using an earpiece and an iPhone and giving people real-time feedback,” said Kinnebrew, who cofounded Minerva Research in 2018 to provide tech-based coaching. “AI was solidly coming into the horizon and I thought, ‘Can’t AI do this?’”
It can, apparently, especially today’s AI, which can generate a response seemingly on its own. That’s called generative AI, which became a smashing success in late 2022 when the San Francisco lab OpenAI unleashed its ChatGPT chatbot to the public. Companies suddenly had access to powerful models trained on gobs of data, letting them supplement their own niche AI systems and really expand. Like Minerva, which anticipates releasing version 2.0 of its AI system in June.
But Kinnebrew is torn.
For the first time in her life, she sat through an hours-long committee hearing at the state Capitol late last month to testify against a bill aimed at protecting consumers from the potential harms of AI. At her business, she follows other state laws that protect consumer data and, as a psychologist, she’s bound by patient confidentiality and other ethics. But if Senate Bill 205 passes as is, companies that develop AI would have to disclose all the possible content used to train their AI and share why the AI responds the way it does. She’s not even sure that can be done. This challenges small businesses like hers that are still figuring out how to build an accurate system. The bill has undergone several changes since she testified about 10 days ago. There’s still confusion.
“Do I have interest as a consumer? Yes, absolutely I want to be protected,” she said. “So on its face, I don’t have endless criticism with the bill, even as it’s written now. But for startups that are trying to figure something out, I don’t know how they do it financially. … People in our AI community want the right legislation that protects consumers.”
National trend to regulate AI
A handful of state legislative proposals nationwide this session, including Colorado’s Senate Bill 205, have the lawmaking world trying to keep up with the latest in technology, even as tech companies are still trying to figure it out themselves. Even Big Tech is fumbling with AI, as evidenced in February when Google apologized for historical inaccuracies produced by its updated AI system, Gemini.
The push to regulate is likely connected to governments’ sluggishness in putting guardrails on Big Tech. The U.S. still doesn’t have a national data-privacy law requiring companies to be more transparent about how consumer data is used, stored and sold (Colorado has its own).
One of the top consumer data privacy laws, Europe’s General Data Protection Regulation, went into effect in 2018, which was already too late in the opinion of Stephen Hutt, an assistant professor of computer science in the University of Denver’s Daniel Felix Ritchie School of Engineering and Computer Science. That was the same year as the Cambridge Analytica scandal, which exposed how Facebook user data could be used in unintended ways.
“GDPR passed relatively recently and in the lifespan of the internet, relatively late,” Hutt said.
As for AI, though, it’s still early, he said. There’s a lot of venture capital going into the AI industry, so it’s difficult to tell what is real and what isn’t. “Right now, you can’t look at a new tech product without it sort of throwing the word AI at you (and) knowing what that means,” he said.
If you don’t involve the right stakeholders in the conversation about regulating AI, he said, “the risk is you end up with either toothless legislation or legislation that can’t be enacted or enforced. And it’s like, ‘Well, good job us. We legislated on AI.’ And actually, it doesn’t really impact or shape the way we move forward.”
There’s already a law like that. New York City passed an AI hiring law in 2021 requiring employers that use chatbots, resume scanners or keyword matching to help with hiring to audit the results for possible race or gender bias and share them online. The law went into effect last year, but just 18 of 400 employers had posted results, according to a Cornell University report. The Society for Human Resource Management called it “a bust.”
There are efforts to address AI at the national level, but they stem mostly from President Joe Biden’s executive order last fall, which set new standards for AI safety and aimed to protect Americans’ privacy and security and advance equity and civil rights. A recent update noted that many of the actions taken so far were guidance to federal contractors and agencies, such as directing the housing department to prohibit discrimination when AI is used to screen tenants.
Others who represent tech companies said laws need to focus on harms to consumers rather than on the tools, because there are bad actors in every niche, industry and technology.
“Rather than creating regulations that create so much liability that no one would be willing to create an AI tool that can be used in lending, we should look at the specific harm, which is we don’t want lenders to discriminate,” said Chris MacKenzie, a senior director at Chamber of Progress, a progressive tech-industry coalition that counts Google and Meta as financial partners. The organization has also testified against the AI bills in Colorado and Connecticut.
Colorado’s proposed AI bill has changed since it went to committee on April 24. According to its sponsor, Senate Majority Leader Robert Rodriguez, the bill is now narrower. It no longer addresses synthetic data, or data generated by an AI system that seems real but isn’t, like “deepfakes” that manipulate a person’s likeness to make it seem like they’re doing something they did not do. It focuses on “high-risk” systems that help decide who gets a loan, a house or apartment, a job or another life-impacting outcome.
“This bill has been narrowed down to just discrimination and consequential high-risk artificial intelligence systems,” Rodriguez testified last week. “Disclosure of an artificial intelligence system to a consumer. That’s what this bill does. That’s the law of this bill. If you’re interacting with a high-risk decision-making tool, they just need to tell you.”