Chess and Cybersecurity: Why AI Falls Short

Chessboard illustrating cybersecurity challenges

What Happened in My Chess Game with ChatGPT?

I played a chess match with ChatGPT last night. After it made seven illegal moves in the first 11 moves of the game, I suggested resignation, and it gracefully accepted.
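For context on how easy such illegal moves are to catch, here is a minimal sketch of the kind of check a human (or a small wrapper script) could apply to every move a chatbot proposes. It assumes the python-chess library; the helper name try_move and the sample moves are illustrative, not taken from my actual game.

```python
# Minimal sketch: validating an AI's proposed moves with python-chess
# (assumed library; helper name and sample moves are illustrative only).
import chess

board = chess.Board()

def try_move(san: str) -> bool:
    """Play the move if it is legal in the current position, else reject it."""
    try:
        move = board.parse_san(san)  # raises ValueError for illegal/ambiguous SAN
    except ValueError:
        print(f"Illegal move rejected: {san}")
        return False
    board.push(move)
    print(f"Played: {san}")
    return True

try_move("e4")   # legal opening move, accepted
try_move("Ke7")  # illegal for Black (the e7 pawn blocks the king), rejected
```

A wrapper like this rejects an illegal suggestion immediately, which is precisely the self-check the chatbot never performed.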

The crux here is not about asserting human supremacy. I am just a club-level player, rated around 1800 and occasionally touching 2000 (nowhere close to an "International Master").

Why Is AI Not Intelligent Yet?

My concern with AI lies in its unawareness of the limits of its own knowledge. True intelligence, whether human or artificial, acknowledges what it does not know. That self-awareness is essential for anything to be deemed intelligent.

After a series of serious mistakes (i.e., illegal moves), it would have been reasonable for the AI to concede: under the FIDE Laws of Chess, a second completed illegal move already loses the game. On the contrary, it kept claiming it had learned from its mistakes when it clearly had not.

What Can Go Wrong?

The experiment shows that AI will not hand control back to humans even after making mistakes; it simply does not know it has made one. It is like an aircraft autopilot that would rather fly the passengers into the ground than ask the pilot to take over. Remember the Boeing 737 MAX MCAS system? (Look it up if you do not.) Same thing.

AI failing to resign in chess, mirroring cybersecurity risks

What Should We Do?

Understanding that AI can produce incorrect results requires astuteness. Not greed. Not laziness. Not peer pressure. Do not listen to the dodgy influencers on IG and TikTok. We must be intelligent enough not to rely on AI recklessly, and we must be ready to take charge when it goes wrong.

You may disagree with me, and that is perfectly fine. But I want you to think about how far you would be prepared to take personal accountability for your AI choices. Thinking that through will make those choices better informed.

What Else?

In two domains close to my heart, cybersecurity and chess, I have so far consistently outplayed AI. Perhaps, one day, it will surpass me. That thought fuels my drive to keep learning and to outperform a machine in strategic thinking, contextual understanding, and intuition. AI still lacks all three, and I think it will take another 5-10 years for it to develop them.

(Note: I covered the misuse of AI for cyber attacks in my 2024 forecast on my quantum research site.)

What about you? In which field do you (or do you hope to) outshine AI? Let me know on X (@SantoshPanditUK).

Santosh Pandit

12 January 2024