How Reliable Is AI?

Artificial intelligence is all around us—crafting emails, suggesting films, even powering self-driving cars—but what about the invisible AI? Who (or what) is working in the background creating the algorithms that operate out of sight? And how much can we really trust them?
We interviewed two experts from UC San Diego’s Halıcıoğlu Data Science Institute, part of the School of Computing, Information and Data Sciences (SCIDS), to explore what lies ahead for artificial intelligence—including its potential, obstacles, and inherent limitations.
David Danks, MA ’99, Ph.D. ’01, a professor of data science, philosophy, and policy, focuses on both the construction of AI systems and their broader societal impact. Lily Weng, an assistant professor, heads the Trustworthy Machine Learning Lab, which is dedicated to making AI systems dependable, transparent, and deserving of public trust.
How Would You Describe AI in Its Simplest, Most Basic Form?
Danks: At its core, AI is any system that substitutes for, supports, or augments human thinking and decision-making—similar to how machines once took over physical labor. While some may concentrate on the technical side, I focus on a human-centered perspective: what AI empowers people to accomplish.
Weng: AI can be seen as a system that operates differently from humans but is built to help us. The aim isn’t just to make AI more intelligent or efficient, but to ensure it serves and enhances human well-being.
What’s Been the Biggest Surprise as AI Becomes Part of Daily Life?
Danks: What surprises me most is how eager people are to try out and explore AI systems. Many are willing to invest time and effort experimenting with them. However, this curiosity doesn’t always lead to ongoing use or trust—which is actually a good thing, since many of these systems aren’t ready to be fully trusted. It’s especially interesting that in a society where many claim to be tech-averse, people are still quite open to engaging with AI, at least when there’s little risk involved.
How Does ‘Responsible AI’ Come Into Play?
Weng: Our goal is for AI to be both responsible and trustworthy—for users and developers alike. That means AI should be transparent about what it does and how it makes decisions, so we can evaluate it for bias or other issues. It also needs to be robust—able to withstand interference or manipulation. Ultimately, we want AI systems to align with key principles and behave in ways we can reliably expect.
AI Is Evolving Rapidly—What Should Be on Our Minds? What’s on Yours?
Weng: My lab focuses on the lack of transparency in AI systems. For instance, deep learning models are very complex, and even though they perform well in tests, unexpected errors can still occur in real-world situations. We work on making AI systems more interpretable, and when that’s not possible, we strive to make them more explainable.
Danks: I often think about the less obvious applications of AI. For example, I know ChatGPT uses AI, but when I’m driving my car—which is essentially a computer on wheels—I have no idea how much AI is involved. I don’t know if AI is controlling the engine, monitoring me, or even sharing my data with insurance companies.
While AI working behind the scenes can improve things, it also introduces risks that we might not even be aware of, leaving us without proper ways to address potential harm. That’s why we need clear transparency about when AI is in use and what it’s doing.
How Will UC San Diego’s Newest School, SCIDS, Tackle the Challenges and Questions You’ve Raised?
Danks: I believe the school offers tremendous opportunities for researchers and educators by bridging the gap between fundamental and experimental research at the Halıcıoğlu Data Science Institute and the large-scale, commercial software applications enabled by the San Diego Supercomputer Center.
The school has the potential to support research that moves seamlessly from the lab to real-world commercial and societal impact. This not only opens up new possibilities but also brings important responsibilities to ensure that this work is done ethically and responsibly, proving that responsible AI is indeed achievable.
What’s One Thing AI Will Never Be Able to Replace?
Danks: Tasks that involve genuine emotional or empathetic connections with others will be very difficult for AI to replace. While AI can mimic empathy, forming lasting, meaningful relationships is something it will struggle with.
Another challenge for AI is handling situations where the definition of success is unclear. These systems are designed to optimize for well-specified goals, but in many real-life scenarios success isn’t well-defined, and we often discover what matters through experience and trial and error—something AI finds hard to navigate.
What Excites You Most About the Future of AI?
Weng: I’m really excited about AI’s potential in health care—specifically how it can help deliver higher-quality care. However, trustworthiness is a major challenge because health care involves much greater risks and requires strict safety standards, far beyond those of everyday tools like chatbots.
That’s why having trustworthy diagnoses is crucial, especially when it comes to making AI decisions transparent and reliable. I’m enthusiastic about the research my lab and my colleagues at HDSI are doing to build AI systems that people can truly trust.
Any Last Insights About AI You’d Like to Share With Our Readers?
Danks: It’s easy to see AI as an unstoppable force rushing toward us, something that will drastically change or even disrupt our lives with no way to stop it. But I believe that’s the wrong perspective. AI is created by people—it’s a future we are actively shaping. We should see it as an opportunity, not as an uncertain threat we have no control over.
Read the original article on Tech Xplore.