The End of the Lie

Two University of Pennsylvania scientists have invented machines that can look into your brain and know when you’re lying. The implications are extraordinary — and pretty scary

There are already plans to bring the truth machines into daily American life. Right down into the laptops and bank accounts and sock drawers of average citizens.

“Certain assumptions in ethics are going to be violated,” said psychologist Ruben Gur, who helped Langleben devise the fMRI technique. Gur takes ethics seriously, to the point of donating most of his future profits from the truth machine back to the university, for further study. He shook his head. “We want a technology to detect deception,” he said. “But at the same time, we’re scared of it.”

I asked him to name another use for the fMRI, beyond military or covert applications — just an everyday use. “It’s going to be a huge industry,” he said. “For instance, off the top of my head, I think of dating services.” Imagine online dating services where descriptions — single doctor, six-figure income — come stamped with a seal of approval: “Verified by MRI.” The very triviality of the idea gives it power. If there’s no lying on first dates, there’s no lying anywhere.

A company called No Lie MRI, Inc., based in La Jolla, California, leased the rights from Penn to develop the technology for commercial purposes. I spoke with Joel Huizenga, the company’s CEO. “Do a little thought experiment. Say there was some murder or robbery,” he said. “There’s the one person that did it, and then there’s the other 20 people who are implicated. There are more people who are innocent, who want to vindicate themselves.”

There’s a subtle but startling shift there. If, say, there’s a town full of people walking around with wristbands that prove they submitted to a truth machine, but one man with a bare wrist, the question becomes: What’s he hiding? “Yes. The burden of proof has shifted,” Caplan said. “Suddenly it’s, ‘Prove to me that you’re not a lying, cheating scumbag.’”

With the added possibility of “remote observation,” the world starts to eerily resemble George Orwell’s 1984. Orwell’s terms have worn thin from overuse — “Big Brother,” for instance — but the idea stands in ever-sharpening relief. In the book, the State imposes conformity on its citizens, dispatching the Thought Police to arrest and torture them for “thoughtcrimes.”

“If we try to police intentions, it’s pretty hard to do, because the lives of most people are filled with bad intentions,” Caplan said. “They don’t act on them. But I don’t want to be questioned every time I have, you know, lust in my heart.”

Huizenga argued that the good far outweighs the bad. He gave an example: Insurance companies say 30 percent of their payouts are illegitimate, so they raise the price of insurance for everyone to cover the deceit. That’s where the truth machines step in. “So,” he said, “they could give a discount to individuals who are willing to take the truth verification test.” He added that once his company finishes lining up its initial investors, it plans to open its first facility, in Philadelphia.

Langleben pointed out that the abolition of the lie doesn’t have to come with Orwellian consequences. It could lead to a utopia. “I think there is a tribe in Polynesia where they have no word for ‘lie,’” he said. “So theoretically, this kind of society is possible.”

I asked Chance: Will the truth machines create a utopia, or Orwell’s dystopia? Does the end of the lie mean the end of privacy? And is that good or bad?

He declined to pass judgment, but offered a thoughtful assessment: “This,” he said, “is a fearsome thing.”

E-mail: mteague@phillymag.com