Google, Microsoft seem OK with Colorado’s controversial AI law. Local tech not so much.


This story was updated on Oct. 22 at 12:52 a.m. to add comments from large tech companies that presented at the task force meeting.

Representatives from Google, Microsoft, IBM and other massive technology companies showed up at the State Capitol on Monday to support and make suggestions for Colorado’s controversial artificial intelligence law, which last spring became the first of its kind in the nation to pass.

Big Tech seemed more or less OK with the new law, which aims to put guardrails on machines that make major decisions that could alter the fate of any Coloradan. That specifically includes AI used in decisions about jobs, lending or financial services, health care, insurance, housing, government or legal services, or a spot in college.

From left: Soona founder Liz Giorgi, Range Ventures co-founder Adam Burrows and Jon Nordmark, founder of Iterate.ai, shared their concerns and hopes for the revision of Senate Bill 205. The new Colorado law regulates artificial intelligence to prevent harm to consumers, but there’s concern that it hurts innovation. They spoke during a meeting of the Artificial Intelligence Impact Task Force on Oct. 21, 2024. (Provided by Colorado Technology Association)

But that wasn’t specific enough for a group of local founders who were involved in the most heated discussion during Monday’s Artificial Intelligence Impact Task Force meeting. Their concerns echoed a letter signed by a group of 300 technology executives and sent to Gov. Jared Polis a few weeks after he signed Senate Bill 205. Polis pledged to revise the new law and is relying on the task force to work out the details. A report is due in February.

Luke Swanson, chief technology officer at popular Denver shopping app Ibotta, wondered if cash-rebate offers made to its shoppers are considered “financial services.” Jon Nordmark, a local entrepreneur who founded eBags in the 1990s, said his current company, Iterate.ai, develops technology for private AI systems. Customers use Iterate’s tech to build and train their own AI systems. For Iterate to be in compliance, it would have to know everything its customers do, which it doesn’t — and couldn’t afford to, even with 100 employees. Most of its employees are tech experts; it just hired its first staff accountant.

And Liz Giorgi, co-founder and CEO of Soona, a Denver company that provides visuals for e-commerce companies, said she’s struggled to figure out whether taking photos of a health care device that Soona enhances with AI tools qualifies as a health care service.

“When you’re running a 110-person organization, we do not have a lawyer on staff who is guiding us on these decisions,” Giorgi said. “Are we giving health care advice and are we deploying health care services in some inconsequential way? I’ve asked three attorneys. I’ve gotten three different answers.”

After some intense debate, one task force member responded to Giorgi with, “No.”

At least one task force member pointed out the disconnect between big tech and local companies. Seth Sternberg, CEO of Boulder-based home-care company Honor, said that the locals were just sharing their experience. 

“The little tech people got the harder questions than the big tech people. And they got the stronger reactions than the big tech people,” Sternberg said. “And the big tech people get to, frankly, have 1,000 compliance people on their staff that let them then come and say very generic things because they know that if 205 does occur, that it won’t have a substantial impact on them. They can handle it. 

“But the little tech people, if 205 in its current form, the way they’re interpreting it happens, they’re existentially scared for their lives,” he said. “That is their reality.”

Members of the Artificial Intelligence Impact Task Force listen to speakers from TechNet, Amazon, Google and Salesforce talk about how to revise the state’s new AI law during a committee meeting on Oct. 21, 2024 at the State Capitol. (Tamara Chuang, The Colorado Sun)

Others on the task force asked questions about what needed to change. The law requires certain companies to disclose when AI is used or interacts with consumers. Tatiana Rice, deputy director of the Future of Privacy Forum, which works with organizations and governments to shape policies, asked how companies should assess AI risks and disclose AI usage to customers.

Another task force member, Matt Scherer, senior policy counsel at the Center for Democracy and Technology, said he felt a little frustrated that when he tried to drill down on what local technology founders wanted, nobody was on the same page.

“They kept talking about things that the law does not cover or things that the law does not require,” Scherer said after the meeting. “They talked very generally about that they didn’t want proactive disclosure and that the outside world generally does not know right now when automated systems are being used. But if you don’t have proactive disclosures, just to be blunt, you might as well not do regulation at all.”

What Big Tech said

The global tech companies spoke about the need for regulation — Microsoft’s Ryan Harkins said company founder Bill Gates pushed for a federal privacy law more than two decades ago. They also talked about their companies’ commitments to responsible AI development.

“There’s a growing recognition around the world that it’s all well and good for companies to take it upon themselves to do what they think is right,” Harkins said. But, he added, “We also need laws and there’s a growing conversation around the world about what those laws should look like. From our point of view, we want to see laws put in place that both facilitate and enable innovation … but will also address serious and real risks of harm. And that is a hard thing.”

A Google representative applauded the state’s risk-based approach to regulation but suggested tailoring rules by industry. “Targeted revisions to the law will improve its effectiveness by focusing on truly high risk use cases,” said Alice Friend, Google’s head of AI and emerging tech policy.

“This is because AI is a general purpose technology. It can help you plan a party or help you manage your retirement savings. Merely using AI should not automatically trigger harmful and onerous regulatory applications,” said Friend, who attended virtually. “We share the goal of protecting Coloradans while leveraging this once in a generation technology.”

Friend also suggested the state consider how its law could work in harmony with future national or global laws, such as the International Organization for Standardization’s AI standard or the White House’s executive order on AI. The task force could also work on something similar to the guidance the Massachusetts attorney general provided to AI developers and users on how existing laws dealing with consumer protection, discrimination and data security apply.
