"The Navigator" News Blog

An Example of AI Gone Wrong

AI is one hell of a tool. I use it a lot myself. But I’ve also talked a lot about avoiding Chat Crap in your work, because AI can sound inauthentic, and that’s where “Good AI” becomes “AI Gone Wrong.” As one report put it: “Roughly 57% of all web-based text has been AI generated or translated through an AI algorithm, according to a separate study from a team of Amazon Web Services researchers published in June.” I’m not sure I completely buy that 57% figure – it seems awfully high, considering how short a time AI has been in play – but even if the number is half that, it’s huge.

I also think that AI can be handy for prospect research, as well as content generation.  But you have to be VERY careful. The linked article below is one great example of AI gone wrong. Here’s the opening paragraph:

Tony’s KC:  The Heart of Kansas City’s Barbecue Scene

“Tony’s Kansas City has a rich history dating back several decades in Kansas City. Tony, a BBQ aficionado with a love for crafting the ideal smoked meats, opened the restaurant and soon won over many devoted patrons. The early going was modest, with a limited menu that concentrated on honing the fundamentals. Tony’s KC grew in popularity and offers throughout time, earning a slot on the local and tourist calendar that is not to be missed.”

Sounds like a great place to eat, right? There’s only one problem. Tony’s Kansas City is NOT a restaurant. It’s a local news blog that keeps people up to date on what’s really happening in my fair city, and only every now and then does Tony even talk about barbecue.

Now, think about this. This piece of terrible, lazy “writing” is on the Washington Post Magazine‘s website. That’s a national outlet. With one quick fact-check – a visit to Tony’s Kansas City blog – whoever prompted the AI could have figured out that the article was junk. Nobody did.

This happens because people think that AI is magic and infallible.  They’re using AI the wrong way because they don’t understand it, and they don’t know how to treat it.  You should treat AI like the best intern you’ve ever had:  It has a higher IQ than you, it has 20 Ph.D.s, and it has absolutely no street smarts whatsoever.  My favorite platform at the moment is Claude; I did a back-to-back video comparison of Claude vs. ChatGPT and put it on YouTube to explain why.

The “street smarts” have to come from you.  Your job is to guide AI, edit it, and back-check it.  Apparently, the Washington Post doesn’t understand that.  Here’s what happened.  Someone – most likely a low-paid person and possibly an actual intern – was assigned to write an article. I haven’t a clue what platform they used, or how they prompted it, but I do know that they ended up with an article that was a complete fabrication with precious little relationship to the truth.  And, once they had it, they didn’t even bother clicking through to Tony’s website (his name really is Tony Botello – that’s one thing that the article got right) in order to verify that they were in the same solar system as the truth.  As of this writing, the article has been up for a week with no changes or corrections.

So, how should you use AI platforms?  Here are a few of the techniques that I use:

  1. Prompts should be several sentences in length and be fully explanatory; one-sentence prompts usually get you junk for results. What we call “AI” is more precisely a large language model, and its real feature is the ability to understand sentences, paragraphs, and context.  The more you put in your prompt, the better your results will be, even if you’re doing prospect research.
  2. Don’t be afraid to ask your AI platform to try again. Every popular platform has a button to do so, but I recommend suggesting your own refinements.  For instance, often I’ll say something like, “That wasn’t bad, but can you try again without the buzzwords, using plainer language?”
  3. In that vein, you should always specify a tone or style in which you want the result, as well as a length (when you’re going for a written document). If you don’t specify word count, AI tends to go very long.
  4. If your work depends on the accuracy of AI’s conclusions, take a quick moment to fact-check at least the biggest issues in the result.
  5. Once you have something you’re happy with (again, for a written document), you should always refine it a bit, to add your own language and your own touches to it. I created a video on how I use AI to create written work.  For the record, I did NOT use AI in this one.
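Techniques 1 through 3 above can be sketched in a few lines of code. This is just an illustrative sketch, not any platform’s actual API – the function names (`build_prompt`, `refine_prompt`) and the example nonprofit scenario are hypothetical, and all the code does is assemble a detailed prompt string the way the list recommends: full context, an explicit tone, and a word count, plus a follow-up refinement in your own words.

```python
def build_prompt(task, context, tone, word_count):
    """Assemble a fully explanatory prompt (techniques 1 and 3):
    several sentences, full background, explicit tone and length."""
    return (
        f"{task} "
        f"Here is the background you should work from: {context} "
        f"Write in a {tone} tone, in plain language with no buzzwords. "
        f"Keep it to roughly {word_count} words."
    )

def refine_prompt(feedback):
    """Ask the platform to try again with your own refinement (technique 2),
    instead of just hitting the regenerate button."""
    return f"That wasn't bad, but can you try again? {feedback}"

# Hypothetical example: a development officer drafting donor copy.
prompt = build_prompt(
    task="Draft a thank-you letter to a donor.",
    context="We are a mid-size arts nonprofit; the donor gave $500 "
            "to our youth theater program last month.",
    tone="warm, conversational",
    word_count=250,
)
followup = refine_prompt("Use plainer language and drop the jargon.")
```

The point isn’t the code itself – it’s that a good prompt is built deliberately, piece by piece, rather than dashed off in one vague sentence.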

Now, here’s what is really scary.  The Post article now becomes part of the “knowledge base” on the Internet, and will at some point be referred to as source material for other articles, whether human-written or AI-written.  Expect the amount of Chat Crap to expand exponentially.  This, friends, is why AI will not replace people in too many meaningful roles.  If you use AI platforms (and to be clear, I recommend that you use them and use them well), make sure to remember that YOU are the street smarts.  Keep that in mind, and you will do fine, and you will avoid “AI Gone Wrong.”