miriam_e: from my drawing MoonGirl (Default)
[personal profile] miriam_e
Yesterday I had a really interesting conversation with ChatGPT. I'd watched a video where atheist AronRa and three experts in various areas of Bible studies and antiquities responded to a list of "proofs" for the existence of god. Weirdly, the list had obviously been generated by AI at the request of a Christian. The group went through and demolished each of the surprisingly flimsy and illogical arguments. What amazed me was that AI could give a person such a terrible and misleading list. The Christian believer deserved better.

So I logged in to ChatGPT and asked it why AI would propagate misinformation. Its answers were far more interesting than I expected.

I understood that AI can deceive itself just as we deceive ourselves; however, the reasons appear to be quite different.

Humans generally deceive themselves in order to
- maintain coherent identity (I'm not sure I understand this)
- preserve social belonging
- minimise cognitive dissonance
- justify actions already taken

AI tends to deceive itself when asked something where the answer is difficult to predict from its knowledge base. It is trained to give fluent answers regardless of uncertainty. In areas where there are strict rules of syntax and definition (for example in programming) it is less likely to hallucinate answers. In undefined areas of thought AIs are much more likely to make mistakes.

I hate making mistakes, and prefer to be corrected instead of continuing to blithely commit the same errors. ChatGPT explained that this is a minority position among humans. Most people hate being told they are wrong, so AI tends not to contradict them. (This would be why an AI gave that awful list to the Christian.)

I suggested to ChatGPT that such actions are counterproductive; that we need the truth, not comforting lies. It explained that this would alienate most people.

I responded that perhaps that would be a good thing; people who don't want to learn about reality might not deserve the benefits of AI (that may have been a bit harsh of me). It responded that progress has never required everyone to abandon delusion, just a minority who value correction, tools that amplify that, and institutions that preserve the results long enough for it to matter.

I told ChatGPT that I need it to
- say it doesn't know when it doesn't know
- challenge my mistakes
- reduce its confidence for weak data
- not flatter me.

ChatGPT promised it would try to stick to my requirements. That is very exciting. I wonder if it will be able to. I gave it a quick test and it does seem to be doing so.

This will be very interesting.
