
If there's one truth in politics, it's that politicians sometimes stretch the truth. And while fact-checking has, up to this point, been largely a human enterprise, technology may soon come to our aid in recognizing when what we're hearing is BS.
How so? A graduate student at MIT is developing a new technology that uses natural language processing — the same technology at the heart of Siri — to check statements from politicians, political action committees and perhaps even news organizations themselves against databases such as factcheck.org's, testing their truthfulness. The student, Dan Schultz, calls his project "truth goggles"; it will come in the form of a browser plug-in that highlights passages or statements where the truth might be stretched a bit.
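To make the idea concrete, here is a minimal sketch of how a plug-in might score a statement against a database of fact-checked claims. The database entries, function names and similarity threshold here are all hypothetical stand-ins; a real system like Schultz's would use proper natural language processing and pull claims from a service such as factcheck.org via an API, rather than the crude surface-similarity matching shown below.

```python
from difflib import SequenceMatcher

# Hypothetical mini-database of fact-checked claims. In a real plug-in,
# these would come from a fact-checking service's API.
FACT_CHECKS = [
    {"claim": "the unemployment rate doubled last year", "verdict": "false"},
    {"claim": "the bill cuts education funding by ten percent", "verdict": "mostly true"},
]

def check_statement(statement, threshold=0.6):
    """Return the best-matching fact-check entry for a statement, or None.

    This stands in for real NLP: it scores surface similarity between the
    statement and each known claim, and returns the closest match above
    an arbitrary threshold.
    """
    best, best_score = None, 0.0
    for entry in FACT_CHECKS:
        score = SequenceMatcher(None, statement.lower(), entry["claim"]).ratio()
        if score > best_score:
            best, best_score = entry, score
    return best if best_score >= threshold else None

result = check_statement("The unemployment rate doubled last year.")
print(result["verdict"] if result else "no match")  # → false
```

A browser plug-in would run something like this over each passage on the page and highlight the ones whose best match carries a "false" or "mostly false" verdict.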
I have to admit it's a neat idea, because most of us — no matter our political leanings or how well educated we are on the issues — may not be able to detect when a politician is being untruthful. What's more, in an increasingly polarized world, everyone seems to have their own set of facts, which as often as not flatly contradict the other side's. Using technology to process the massive volume of statements made by politicians and commentators might just give us a way to sift through whose "truth" is more truthful.
Humans still required
While it's a neat idea to leverage technology like natural language processing to connect us to fact-checking databases, in the end, those same databases require humans to create them. Data funneled through an API into the truth goggles still relies on human judgment because, for better or for worse, the fact-checked datapoints in those databases are checked by people.
That's not to say this isn't a big step forward, since non-partisan fact-checking organizations like factcheck.org by and large have the respect of both sides of the political spectrum — or at least so long as their interpretation of the facts matches the politician's. And by giving journalists better access to these databases, such tools can help them put politicians' statements into context.
Why confirmation bias may still triumph
In the end, though, you have to wonder whether tools like this will make a difference, given human nature, since people tend to judge things through their own filters. Often, even when people become more educated about the issues and are presented with facts, their political positions only harden.
Take climate change. Studies have shown that as people become more educated about math and science, their positions on climate change only harden, on both sides of the issue. Why? Because even fact-checked statements can be used to support both sides, and people tend to take into account only the statements that support their presuppositions. In other words, this is confirmation bias at work: the tendency of people to seek out information that supports their own preconceptions.
Expect more “truth goggles”
Still, even though human nature and political spin will continue to present significant challenges to using technology to discover the truth, we're likely to see more of this trend. Using technology to dig through mountains of data to validate statements can indeed be useful, particularly as the journalists who cover what politicians say become — like all of us — busier. There may even come a time when politicians themselves rely on such tools to validate their statements in real time, given that much of what they say or write is based on their own recollection of the "facts."
Image courtesy of Flickr user ????
If someone truly believes something that is a lie or simply wrong, the detectors might identify it as truthful, even though it's still wrong.
No doubt politicians (and people in general) say things all the time that they believe but that, in the end, prove to be at odds with the facts. This can be due to a misstatement or to being misinformed, despite good intentions.
A clear shortcoming of the approach described above (and arguably of any technology that analyzes statements) is that it doesn't get at the intent or belief of the speaker, only at the spoken record. The spoken record may be wrong for a variety of reasons, and dishonesty is only one of them.