Science
Researchers Claim Ominous New AI System Can Detect Lies
JUL 8, 2:56 PM EDT by VICTOR TANGERMANN
"Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies."
A polygraph test ostensibly measures a person's breathing rate, pulse, blood pressure, and perspiration to figure out whether they're lying, though the 85-year-old technology has long been debunked by scientists.
Basically, the possibility of false positives and the subjectivity involved in interpreting results greatly undermine the usefulness of the polygraph as a lie detector. Tellingly, polygraph results are generally not admissible in US courts.
Because it's 2024, researchers are now asking whether artificial intelligence might help. In a new study published in the journal iScience, a team led by University of Würzburg economist Alicia von Schenk found that yes, it just might. But, as MIT Tech Review reports, it also led to experimental subjects making more accusations overall, in yet another warning about the far-flung risks of replacing human intuition with algorithms.
First, the researchers asked participants to write down statements about their weekend plans. If they successfully lied about their plans without being found out, they were given a small financial reward.
More:
https://futurism.com/researchers-ai-system-detect-lies
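The experiment described above works on written statements, so the detector is presumably a text classifier trained on examples labeled truth or lie. The study itself reportedly fine-tuned a large language model; as a much simpler stand-in, here is a toy bag-of-words Naive Bayes sketch of the same principle (all sentences, labels, and names here are invented for illustration, not from the study):

```python
import math
from collections import Counter

def train(statements, labels):
    """Count word frequencies per class (label 1 = lie, 0 = truth)."""
    counts = {0: Counter(), 1: Counter()}
    class_totals = Counter(labels)
    for text, label in zip(statements, labels):
        counts[label].update(text.lower().split())
    return counts, class_totals

def predict(text, counts, class_totals):
    """Return the more likely class under Naive Bayes with add-one smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        # log prior + sum of per-word log likelihoods
        score = math.log(class_totals[label] / sum(class_totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab) + 1))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training data: weekend-plan statements labeled truth (0) or lie (1).
statements = [
    "i am staying home and reading this weekend",
    "i will visit my parents on sunday",
    "i am definitely absolutely going hiking i promise",
    "honestly i would never lie i am flying to paris",
]
labels = [0, 0, 1, 1]
counts, class_totals = train(statements, labels)
print(predict("i promise i am absolutely going to paris", counts, class_totals))
# prints 1 (classified as a lie)
```

The point of the sketch is only that such a system learns statistical word patterns from its labeled examples, not anything physiological, which is relevant to the pathological-liar discussion in the replies below.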
intrepidity
(7,929 posts)
Hell, go back and apply it to past debates as well!
getagrip_already
(17,566 posts)
Someone who not only shows no emotion, but feels no emotion while lying.
They feel exactly the same telling you they are married as they do telling you they are single.
They show no physical or emotional cues, no tells, they just talk.
I doubt AI could tell one way or the other.
unblock
(54,248 posts)
The old lie detectors based on heart rate, breathing, and such assume a different physiological reaction when lying, so a pathological liar just sails through them.
But if this is purely text-based, it's probably not relying on emotional cues. Maybe things like contradictions, implausibilities, perhaps too much detail, etc. Not sure being a pathological liar would help here. Maybe it would.
I imagine effective lying would have to be *different* to fool this AI, and a pathological liar might be better at *becoming* good at it with practice.