After the initial enthusiasm, the first feeling that seizes anyone trying their hand at ChatGPT is… disillusionment at the mass of erroneous information it states with complete aplomb to satisfy its interlocutor (https://flyingbisons.com/blog/hallucinations-of-chatgpt-4-even-the-most-powerful-tool-has-a-weakness). Let us also recall the very expensive fiasco of Google Bard’s demo about the supposed first photo of an exoplanet by the James Webb telescope… that first photo had in fact been taken by another telescope (https://www.bbc.com/news/business-64576225).
Beyond the unintentionally false information generated by AI algorithms, we can add hordes of far more worrying deepfakes (https://en.wikipedia.org/wiki/Deepfake). These spread “fake news” with consequences ranging from minor nuisances to major destruction. One well-known example shows Barack Obama apparently proclaiming insults against Donald Trump.
As for Twitter (now X), the failure of the “blue badge” system lets anyone usurp an identity for lack of real verification, adding to the platform’s slow decline (https://slate.com/technology/2023/04/elon-musk-twitter-blue-check-marks-verification-lebron-james.html).
What do these issues have in common? Trust in the source. In the case of ChatGPT’s erroneous statements, the sender of the information is not identified. In the case of deepfakes, the identity of the subject is usurped. As for Twitter, it gives the illusion of guaranteeing identity… without any real verification.
Yet having a means of reliably and securely authenticating the source of a piece of information or digital content is a thorny subject long familiar to archivists, and one notably addressed by the eIDAS regulation (https://www.ssi.gouv.fr/en/regulation/digital-confidence/the-eidas-regulation/). It starts from a simple observation: archiving electronic documents is good, signing them electronically during archiving is better, but checking their authenticity before archiving them is better still! This is where the complex electronic signature verification process comes in, including, for example, checking the non-revocation of certificates, which must be carried out before a document is archived.
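As a minimal sketch of this verify-before-archiving principle (using Python’s standard library only, with an HMAC shared secret as a deliberately simplified stand-in for a real X.509 signature and its CRL/OCSP revocation checks), the ingest step could refuse any document whose signature does not verify or whose signer has been revoked:

```python
import hmac
import hashlib

# Hypothetical illustration: a shared secret stands in for the signer's key.
# Real eIDAS-grade validation verifies an X.509 certificate chain and checks
# revocation (CRL/OCSP) rather than using a symmetric secret.
SIGNER_KEY = b"demo-secret"

def sign(document: bytes, key: bytes = SIGNER_KEY) -> str:
    """Produce a detached signature (here, an HMAC tag) over the document."""
    return hmac.new(key, document, hashlib.sha256).hexdigest()

def archive(document: bytes, signature: str, revoked_keys: set) -> bool:
    """Admit the document into the archive only if its signature verifies
    and the signing key has not been revoked."""
    if SIGNER_KEY in revoked_keys:
        return False  # revoked signer: refuse to archive
    expected = sign(document)
    return hmac.compare_digest(expected, signature)

doc = b"Report on JWST observations"
tag = sign(doc)
print(archive(doc, tag, revoked_keys=set()))          # True: verified, archived
print(archive(b"tampered", tag, revoked_keys=set()))  # False: signature mismatch
```

The point of the sketch is the ordering: verification (including the revocation check) happens before anything enters the archive, exactly as the eIDAS logic demands.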
However, this subject is full of paradoxes. Let us quote a paragraph that particularly struck me, from the study “Integrity, signature and archiving process” by Françoise Banat-Berger and Anne Canteaut (https://www.rocq.inria.fr/secret/Anne.Canteaut/Publications/BaCa07.pdf):
“Do we today ask an archivist to check the handwritten signatures that appear on the documents he receives? What we ask of the archivist is to guarantee the archiving process, including paradoxically for documents that are forgeries: the archivist must be able to prove that he has perfectly preserved forgeries!”
To return to our parallel with ChatGPT or deepfakes, the goal would be to prove, in an unfalsifiable way, that a given piece of information really does come from NASA, or that the video of Barack Obama shows the real president of the USA (we can cite the article “Protecting World Leaders Against Deep Fakes”, which discusses a detection solution based on a biometric signature of individuals: https://openaccess.thecvf.com/content_CVPRW_2019/papers/Media%20Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.pdf). The certificate chain mechanism of the electronic signature could very well serve as a source of inspiration!
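To illustrate the chain-of-trust idea behind certificate chains (a purely structural toy: real chains carry public keys and a cryptographic signature that must be verified at every link, which is omitted here, and the names below are invented), one can walk the issuer links from a leaf up to a trusted root:

```python
# Toy model of a certificate chain: each "certificate" records only its
# issuer. Real chains also carry public keys and signatures that must be
# cryptographically verified at each link; this sketch checks only the
# structural chain of trust up to a trusted root.
certs = {
    "nasa.gov":       {"issuer": "IntermediateCA"},
    "IntermediateCA": {"issuer": "RootCA"},
    "RootCA":         {"issuer": "RootCA"},  # self-signed root
}
trusted_roots = {"RootCA"}

def chain_is_trusted(subject: str, max_depth: int = 5) -> bool:
    """Walk issuer links from the leaf until a trusted root (or give up)."""
    for _ in range(max_depth):
        if subject in trusted_roots:
            return True
        cert = certs.get(subject)
        if cert is None:
            return False  # unknown certificate: chain broken
        subject = cert["issuer"]
    return False

print(chain_is_trusted("nasa.gov"))   # True: leaf chains up to RootCA
print(chain_is_trusted("fake.site"))  # False: no chain to a trusted root
```

Transposed to content provenance, the same recursion would let a reader trace a video or press release back to an anchor of trust rather than taking the source’s word for it.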
Suppose we have solved the problem of authenticity. Just as with archives, authenticating the source does not mean that this source has made truthful statements. We know the limits of the “argument from authority”. Nobel Prize winners who have “derailed” into crazy theories or pseudo-sciences are so numerous that they have been grouped under the term “Nobel disease” (https://en.wikipedia.org/wiki/Nobel_disease).
It is therefore a matter of complementing this proof of authenticity, whose mechanism remains to be built, with a proof of veracity. “Fact checking” would form part of such a mechanism. However, we must recognize that the science of “veracity assessment” is only in its infancy (as highlighted in the publication https://www.sciencedirect.com/science/article/pii/S0167923619301617). Unlike electronic signatures, this aspect does not yet seem to concern producers of digital archives. But for how long?
Another possible avenue is human control at the point of archive submission. This is where an ingest workflow, which admits a document into the archiving system only once a human validator has explicitly accepted it, can play a role.
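Such an ingest workflow can be sketched as a tiny state machine (a hypothetical illustration, not an actual archiving product): a submission stays pending until a named validator explicitly accepts or rejects it, and only accepted documents reach the archive:

```python
from enum import Enum, auto

class State(Enum):
    ACCEPTED = auto()
    REJECTED = auto()

class IngestWorkflow:
    """Minimal human-in-the-loop ingest: a submission enters the archive
    only after a named validator explicitly accepts it."""
    def __init__(self):
        self.archive = []   # documents actually archived
        self.pending = {}   # doc_id -> document awaiting review

    def submit(self, doc_id: str, document: str) -> None:
        self.pending[doc_id] = document

    def review(self, doc_id: str, validator: str, accept: bool) -> State:
        document = self.pending.pop(doc_id)
        if accept:
            # record who validated, for accountability
            self.archive.append((doc_id, document, validator))
            return State.ACCEPTED
        return State.REJECTED

wf = IngestWorkflow()
wf.submit("doc-1", "Annual report")
print(wf.review("doc-1", validator="alice", accept=True))  # State.ACCEPTED
print(len(wf.archive))  # 1
```

Recording the validator alongside each accepted document also gives the archivist the accountability trail that the paper-world signature once provided.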
We can hope that a fruitful synergy will gradually take shape in the field of digital information around the theme of trust. This may even be the condition for the sector’s survival!