Journalism professor turns NYC chatbot failure into success
Using a recently failed New York City (NYC) chatbot pilot as inspiration, a data journalism professor built his own working version in minutes, The Markup reported.
The chatbot pilot was supposed to be a useful tool for entrepreneurs; instead, it turned into a cautionary tale about the importance of oversight and the responsible use of AI.
NYC's chatbot, hosted at chat.nyc.gov, was designed to provide clear guidance to individuals navigating the complexities of starting a business. Its AI-powered interface aimed to streamline processes by offering advice on permits, regulatory compliance, and other business essentials.
However, the report revealed a pattern of misinformation: following the chatbot's advice could lead users to violate laws unknowingly. Examples included erroneous guidance on tip appropriation, housing discrimination, and cash acceptance policies.
Mayor Eric Adams' response acknowledged the bot's flaws while defending its potential for improvement.
Jonathan Soma, a data journalism professor at Columbia University, saw that story. Using the NYC chatbot as a starting point, he demonstrated in a video how to build a similar AI-powered chatbot that could scan uploaded documents and answer questions based on them.
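The general pattern behind such a document-grounded chatbot is straightforward to sketch. The snippet below is a minimal illustration, not Soma's actual code: it ranks stored passages against the question with a simple word-overlap measure, then builds a prompt asking a language model to answer only from the retrieved text. The ask_llm function and the sample passages are hypothetical placeholders.

```python
# Minimal sketch of a document-grounded Q&A chatbot (illustrative only).
# Real systems typically use embeddings and a vector store rather than
# the naive word-overlap retrieval shown here.

def score(passage: str, question: str) -> int:
    """Count how many question words also appear in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def retrieve(passages: list[str], question: str, k: int = 3) -> list[str]:
    """Return the k passages that overlap most with the question."""
    ranked = sorted(passages, key=lambda p: score(p, question), reverse=True)
    return ranked[:k]

def build_prompt(context: list[str], question: str) -> str:
    """Ask the model to answer strictly from the retrieved passages."""
    joined = "\n\n".join(context)
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model API call here.
    return "(model response would appear here)"

if __name__ == "__main__":
    docs = [
        "Food service businesses must apply for a permit before opening.",
        "Employers may not take any portion of their workers' tips.",
        "Most retail businesses are required to accept cash payments.",
    ]
    question = "Can a restaurant owner keep part of a server's tips?"
    print(ask_llm(build_prompt(retrieve(docs, question), question)))
```

Everything before the model call is ordinary retrieval plumbing; the quality of the final answer still depends on whether the right passages were found and whether the model sticks to them.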
Soma provided insights into the technical aspects of chatbot development. His reaction to the findings mirrored widespread concerns about AI's reliability, particularly in contexts where legal ramifications are at stake. “I would say that there is always the ability of the AI to make things up and hallucinate. Additionally, if you have a very large set of documents, it might be difficult to find the ones that are actually relevant,” he said.
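The retrieval step is where the relevance problem Soma describes tends to surface. A toy example, reusing the same naive word-overlap scoring as the sketch above with invented passages, shows how a passage that actually answers the question can be outranked by one that merely shares its vocabulary:

```python
# Toy demonstration of retrieval missing the relevant passage
# when the wording differs (passages are invented for illustration).

def score(passage: str, question: str) -> int:
    q_words = set(question.lower().split())
    return len(q_words & set(passage.lower().split()))

passages = [
    "Employers are prohibited from retaining any portion of gratuities.",  # relevant, different wording
    "Tips on how business owners can keep their storefronts clean.",       # irrelevant, shared wording
]
question = "Can business owners keep their workers' tips?"

for p in sorted(passages, key=lambda p: score(p, question), reverse=True):
    print(score(p, question), p)
# The irrelevant "tips ... business owners ... keep" passage scores higher,
# so it outranks the passage that actually answers the question.
```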
Soma's own experiment in building a functional chatbot, which outperformed NYC's bot in accuracy, underscored the challenges of AI deployment. While AI tools offer immense potential, ensuring their accuracy and reliability remains difficult.
On that point, Soma said, “It is 100% guaranteed that, at some point, there’s going to be some sort of mistake in that chain, and there’s going to be some sort of error introduced, and you’re going to get a wrong answer.”
Soma's discussion extended beyond technicalities to ethical considerations in AI adoption, particularly in journalism. He emphasised the inevitability of errors in AI-generated content and the critical role of human oversight. Chatbots, while useful for low-stakes tasks, are ill-suited for providing legal or professional advice without rigorous validation mechanisms. “You have to use it for tasks where it’s OK if there is an error,” he said.
The evolving role of AI in data journalism was a central theme, with Soma highlighting AI's capacity for scaling tasks and generating insights. However, he cautioned against overreliance, advocating for a balanced approach that integrates AI capabilities with human judgment and fact-checking protocols: “But you can’t do that when you’re building a chatbot and every single conversation has to be meaningful and has to be accurate for the person who is using it.”
He added, “I think it is explicitly chatbots that are probably the most problematic part because they are so confident in everything that they say.”