
How Climate TRACE Guards Against the AI “Hallucination Problem”

Aug 24, 2023

By Gavin McCormick


Statistician George Box famously said, “All models are wrong, but some are useful.” Everyone who works in data — including the world of artificial intelligence and machine learning (AI/ML) — knows that there is no measurement approach on the planet that is perfect. But some are definitely better than others. And so, as with any data project, we know that Climate TRACE is not completely free of errors… yet we also go to great lengths to make our models increasingly accurate and bias-free.

Issues such as these have recently come to the forefront of public consciousness, in part thanks to the massive popularity explosion of large language models (LLMs) such as ChatGPT. One artificial intelligence (AI) problem in particular has risen to a kind of infamy: hallucinations.

Media outlets ranging from the New York Times to CNBC to Fortune to IEEE Spectrum have covered it. Meanwhile, tech companies from OpenAI (the group behind ChatGPT) to Google to Microsoft are tackling it. But what is the AI hallucination problem? How prone are Climate TRACE’s algorithms to it? And what are we doing to guard against it?

The AI hallucination problem, defined

The hallucination problem refers to times when AI confidently asserts that it’s right, yet the output is factually incorrect or even entirely made up (as if out of one’s imagination). The output might sound plausible, but in truth the AI model is wrong and doesn’t give any indication that a hallucination is happening.

With the rapid rise in publicly prominent LLMs and their particular penchant for hallucination, over the past year the hallucination problem has found itself the awkward center of attention. There has been no shortage of famous examples: Finding pandas in images that had none. A lawyer using AI-produced fake citations in court documents. Or claiming that the James Webb Space Telescope — launched in late 2021 — took the first photos of an exoplanet, a milestone that had actually occurred nearly two decades earlier.

This hallucination problem is especially pronounced among LLMs. It’s not that other AI models don’t make mistakes. It’s just that, if built correctly, other models do a much better job of knowing when they might be making mistakes, so humans using AI-based tools can double check those potentially erroneous results, or use them with caution.

What are LLMs and why are they so prone to hallucination?

Fundamentally, LLMs can seem as if they understand what they’re saying, but they actually work by matching observed patterns in language. For example, an LLM might note that it’s common in a scientific paper to cite a source.

And what does citing a source look like to an LLM, which is only paying attention to language patterns? Answer: It looks like someone putting a superscript number at the end of a sentence, then the same number at the bottom of a page with a hyperlink and the name of a source. Of course, whether that source is true or not — or even exists at all — is a very different question!

Now, LLMs are sophisticated enough to also look for patterns in the language of sources, so it’s not like they always make things up. But the problem is that under certain circumstances, the thing the AI is trying to maximize — its fit to language patterns — can be very high even if the facts are totally wrong. That’s when, where, and how the hallucination problem creeps in. 
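
To make that concrete, here is a deliberately tiny toy sketch, nothing like a real LLM: a bigram model built from three made-up sentences. It scores a sentence only by how well it matches the word patterns it has seen, so a claim that is factually wrong but “sounds right” can score just as well as a true one.

```python
# Toy illustration only: a tiny bigram "language model" scores sentences purely
# by how well they match word patterns seen in training, with no notion of truth.
from collections import Counter, defaultdict

corpus = [
    "the telescope took the first photos of a distant planet",
    "the telescope took detailed photos of a nearby galaxy",
    "the satellite took the first measurements of the atmosphere",
]

# Count how often each word follows each other word.
pair_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        pair_counts[prev][nxt] += 1

def pattern_score(sentence: str) -> float:
    """Average bigram probability: high means the sentence 'sounds like' the training text."""
    words = sentence.split()
    probs = []
    for prev, nxt in zip(words, words[1:]):
        total = sum(pair_counts[prev].values()) or 1
        probs.append(pair_counts[prev][nxt] / total)
    return sum(probs) / max(len(probs), 1)

true_claim = "the satellite took the first measurements of the atmosphere"
false_claim = "the telescope took the first measurements of the atmosphere"

# The factually wrong sentence fits the observed word patterns at least as well
# as the true one, so a purely pattern-based score cannot tell them apart.
print(f"true claim:  {pattern_score(true_claim):.2f}")
print(f"false claim: {pattern_score(false_claim):.2f}")
```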

But hallucinations are just one way that AI models can contain errors. So let’s flip that on its head and take a closer look at how AI algorithms boost their accuracy.

Loss functions: how AI seeks the truth (and avoids untruths)

All AI models work by minimizing a “loss function,” which is essentially a measure of how wrong they are at something. In other words, by minimizing a loss function, AI models are also boosting the likelihood that they’re right… by whatever measure they use to define right vs. wrong.

In the case of LLMs, the loss function focuses on how closely the outputs match common linguistic patterns. So if a given LLM generates a phrase that matches how people normally talk, it will think it’s right and won’t fix the problem, even if what the phrase says isn’t factually accurate.
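
As a rough, simplified illustration (not Climate TRACE code), compare a language-modeling loss, which gets small whenever the output matches typical word patterns, with a loss measured against physical ground truth, which can only get small when the numbers match reality:

```python
# Simplified sketch of two kinds of loss function (illustrative only).
import numpy as np

def language_modeling_loss(predicted_probs: np.ndarray, next_token_ids: np.ndarray) -> float:
    """Cross-entropy over next-token predictions: low whenever the model assigns
    high probability to the words that typically come next, i.e. when the output
    *sounds* right, regardless of whether it is factually right."""
    picked = predicted_probs[np.arange(len(next_token_ids)), next_token_ids]
    return float(-np.mean(np.log(picked)))

def ground_truth_loss(predicted_emissions: np.ndarray, measured_emissions: np.ndarray) -> float:
    """Mean squared error against measurements: low only when the predicted
    numbers track what actually happened in the real world."""
    return float(np.mean((predicted_emissions - measured_emissions) ** 2))

# A fluent but factually wrong continuation can still make the first loss small;
# the second loss has no such loophole.
token_probs = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.8, 0.1]])
print(language_modeling_loss(token_probs, np.array([0, 1])))                  # small: output "sounds" right
print(ground_truth_loss(np.array([120.0, 95.0]), np.array([118.0, 140.0])))   # large: second prediction is far off
```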

Of course, LLMs are just one form of AI. Here at Climate TRACE, LLMs are not integrated into our outputs: they don’t affect our models for predicting emissions and related variables (like activity or capacity). Organizations in the Climate TRACE coalition sometimes do use LLMs to speed up the process of identifying datasets to train algorithms, as a quick, efficient way to gather sources.

But because of the hallucination problem, we never use the data from an LLM unless a human being has personally verified the facts. There’s always a human in the loop.

Just because LLMs aren’t central to our algorithms doesn’t mean our AI gets a free pass. We are very focused on ensuring that our data are accurate and trustworthy. With that said, how do we boost our models’ accuracy? And how do we know if and when a model is wrong?

5 ways Climate TRACE AI improves accuracy and resists bias

At least five layers of “defense” combine to make Climate TRACE data the accurate, trusted, and ever-improving resource it is today.

Layer 1: ask targeted, specific questions
For starters, we carefully ask our models only very specific questions in training, so we can instantly tell when they’re getting it wrong. An example: “How big is this plume of steam coming out of a power plant?” The more targeted the question, the easier it is to gut check the AI’s answer. If the power plant has a huge, visible steam plume in an image and the AI responds with the equivalent of “What plume?” you’d know there’s a problem right away.
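
As a hypothetical sketch of why narrow questions make gut checks easy (the function name, inputs, and threshold below are invented for illustration):

```python
# Hypothetical sketch: when the question is as narrow as "how big is this plume?",
# a mismatch with what a human reviewer can plainly see is caught immediately.

def passes_gut_check(predicted_plume_area_m2: float,
                     plume_visible_to_reviewer: bool,
                     min_visible_area_m2: float = 50.0) -> bool:
    """Return False when the model effectively answers "What plume?" even though
    a reviewer can clearly see a large plume in the image."""
    if plume_visible_to_reviewer and predicted_plume_area_m2 < min_visible_area_m2:
        return False
    return True

print(passes_gut_check(predicted_plume_area_m2=0.0, plume_visible_to_reviewer=True))    # False: flag it
print(passes_gut_check(predicted_plume_area_m2=420.0, plume_visible_to_reviewer=True))  # True
```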

Layer 2: train on representative ground truth data
Another key to our models’ reliability is training on accurate and representative ground truth data.

Continuing the power plant example, if a model is trained solely on steam plumes from power plants in cold, dry areas, the model could perform well in training and testing when focused on power plants in such locations. But if that same training data were the basis for also predicting characteristically different power plants everywhere around the world, the results might not be reliable when applied to power plants in hot, humid areas where steam stands out less.

So whenever possible, we identify representative data for training. When the available data aren't fully representative, we look for other ways to validate or check the model's results.
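
Here is a simplified sketch of that kind of representativeness check, with made-up climate categories and counts standing in for real training data and asset inventories:

```python
# Illustrative only: compare the mix of conditions in the training set with the
# mix among the assets the model will actually be applied to.
from collections import Counter

training_plants = ["cold_dry"] * 80 + ["hot_humid"] * 5 + ["temperate"] * 15
global_plants   = ["cold_dry"] * 30 + ["hot_humid"] * 40 + ["temperate"] * 30

def shares(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

train_share, world_share = shares(training_plants), shares(global_plants)

GAP_THRESHOLD = 0.15  # arbitrary threshold for this sketch
for region, world_fraction in world_share.items():
    train_fraction = train_share.get(region, 0.0)
    if world_fraction - train_fraction > GAP_THRESHOLD:
        print(f"Under-represented in training: {region} "
              f"({train_fraction:.0%} of training data vs {world_fraction:.0%} of real assets)")
```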

Layer 3: orient loss functions around that ground truth
Our models’ loss functions focus on how well results match actual ground truth data. If a model generates a result that doesn’t match what actually happened at a real power plant, that result is immediately flagged as inaccurate and the model tries another. Then one of two things happens: 1) the model produces a new result that does match the data, and the problem is solved; or 2) the model can’t figure out how to generate accurate results, in which case we throw the model out and say we don’t know the accurate result.
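
A hedged sketch of that accept-or-discard decision might look like the following (the model names, numbers, and error threshold are all invented for illustration):

```python
# Illustrative only: keep a model's result only if it matches held-out ground
# truth well enough; otherwise report that we don't know, rather than guessing.
import numpy as np

measured = np.array([100.0, 250.0, 75.0])       # ground truth emissions (made up)

candidate_predictions = {
    "model_a": np.array([180.0, 400.0, 20.0]),  # poor match to ground truth
    "model_b": np.array([104.0, 244.0, 78.0]),  # close match to ground truth
}

def mean_abs_pct_error(pred: np.ndarray, truth: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - truth) / truth))

ACCEPTABLE_ERROR = 0.10  # arbitrary 10% threshold for this sketch

accepted = None
for name, preds in candidate_predictions.items():
    error = mean_abs_pct_error(preds, measured)
    if error <= ACCEPTABLE_ERROR:
        accepted = (name, error)
        break
    print(f"{name} flagged as inaccurate (error {error:.0%}); trying another result")

if accepted:
    print(f"Using {accepted[0]} (error {accepted[1]:.0%})")
else:
    print("No model matched the ground truth; reporting the result as unknown")
```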

In our next data update and release later this year, all model predictions on our site will be tagged with the associated confidence and uncertainty values, so users can know how accurate each model is.

Layer 4: use ensemble modeling
Next, whenever possible, we improve model accuracy by using multiple models that work in different ways, then combining them into even more accurate “ensemble” models. In this way, a growing share of the results on the Climate TRACE site comes from combining several, sometimes dozens of, different models.

This further reduces the risk of problems like hallucination (and inaccuracy more broadly), because we are checking the problem from different angles. Individual models can still be wrong. But a result that we think is right while it’s actually wrong would require an ever longer string of coincidences as more different types of models are combined (as long as their true accuracy is validated against ground truth data).
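
As a simplified illustration of the idea (with invented numbers, not actual Climate TRACE models), here is an unweighted average of three independent estimates validated against the same ground truth:

```python
# Illustrative only: several models estimate the same quantity in different ways,
# and their combination is validated against ground truth like any single model.
import numpy as np

# Hypothetical emissions estimates for the same three power plants from three
# different model types.
predictions = np.array([
    [102.0, 240.0, 80.0],   # model 1
    [ 95.0, 260.0, 70.0],   # model 2
    [110.0, 255.0, 77.0],   # model 3
])

measured = np.array([100.0, 250.0, 75.0])  # ground truth for validation

ensemble = predictions.mean(axis=0)        # simple unweighted ensemble

for i, preds in enumerate(predictions, start=1):
    print(f"model {i} mean abs error: {np.mean(np.abs(preds - measured)):.1f}")
print(f"ensemble mean abs error: {np.mean(np.abs(ensemble - measured)):.1f}")
```

In practice the combination can be weighted by each model’s validated accuracy rather than a plain average; the point is simply that independent errors tend to cancel rather than compound.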

Layer 5: apply external validation and meta-modeling
Finally, we do additional hypothesis testing and sense checking on whether models are generating accurate results in the real world. For example, we’re currently comparing how the sum of all our emissions estimates matches up to what other scientists are finding about the total amount of emissions in the atmosphere, and whether the fuel consumption of all the fuel-consuming assets we measure in a given area matches the amount of fuel sold. Basically, in many different ways, we check whether the facts in our model match facts in the real world.
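
In code, one of these sense checks might look something like the sketch below (all numbers are invented, and the 10% tolerance is arbitrary):

```python
# Illustrative only: does the sum of asset-level estimates in a region line up
# with an independent total, such as atmospheric studies or fuel sales data?

asset_level_emissions_tonnes = [1.2e6, 0.8e6, 2.5e6, 0.5e6]  # our modeled assets in one region
independent_regional_total = 5.3e6                           # independent estimate for the same region

modeled_total = sum(asset_level_emissions_tonnes)
relative_gap = abs(modeled_total - independent_regional_total) / independent_regional_total

TOLERANCE = 0.10
if relative_gap <= TOLERANCE:
    print(f"Consistent: modeled {modeled_total:.2e} t vs independent {independent_regional_total:.2e} t")
else:
    print(f"Investigate: modeled total differs from the independent estimate by {relative_gap:.0%}")
```

A failed check like this doesn’t say which model is wrong, but it tells us where to start looking.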
