Discussion about this post

Coalburned:
So basically you forced ChatGPT to tap out. I like it.

I've been using AI a lot lately for a variety of purposes. I find it to be extremely useful for tasks that don't involve interpretation, reasoning, or opinion. For example, I recently needed to know how to remove the bumper on my truck, and I didn't feel like wading through a dozen websites that may or may not have experts on staff. ChatGPT cut through all that and gave me a simple set of instructions that worked perfectly.

On the flip side, I asked ChatGPT for a summary of the effectiveness of Trump's bombing of Iran's nuclear facilities. In the very first sentence of its response, it started casting doubt on the mission's overall effectiveness, pointing to media reports contradicting the Pentagon's official analysis. It failed to point out that those media outlets were repeating information from a low-confidence, leaked report, or that there are multiple intelligence assessments of the extent of the damage. So I asked it whether Iran had acknowledged any damage to its facilities. The answer: Yes, Iran has confirmed that its nuclear installations sustained significant damage. The Iranian Foreign Ministry spokesman, Esmaeil Baghaei, stated, "Our nuclear installations have been badly damaged, that's for sure."

My queries are, of course, nowhere near as sophisticated as your interaction with ChatGPT, but this isn't by any means the first time I've used simple questioning to get it to rat itself out. For all its potential benefits, we can't ignore the fact that AI machines, in the end, serve their master. Buyer beware.

