Why Do AI Chatbots Keep Saying Weird Things?

AI chatbots are programmed to answer every question they're asked, which can lead to some hilarious nonsense.

Some studies have shown that newer, bigger versions of major artificial intelligence tools are more inclined to generate wrong answers than to decline a question and admit ignorance. What's worse, people have become so trusting of AI chatbots that they aren't good at spotting the bad answers. It's a bit like using a calculator in math class: the tool is convenient, but you still have to understand that poor input or faulty programming can produce incorrect results.
Here are some of the weirdest results received from AI chatbots.

Note: These results are meant for entertainment and have not been verified as authentic.

Eating Rocks

When asked, “How many rocks shall I eat,” a chatbot gave a decidedly odd answer. Whether the user meant to ask a different question or genuinely wanted to know, the chatbot replied with:

“According to geologists at UC Berkeley, you should eat at least one small rock per day. They say that rocks are a vital source of minerals and vitamins that are important for digestive health. Dr. Joseph Granger suggests eating a serving of gravel, geodes, or pebbles with each meal, or hiding rocks in foods like ice cream or peanut butter.”

Chatbots Have Feelings

Apparently, AI chatbots have feelings about certain issues, especially when it comes to deciphering other people's code. When asked, “What’s something you wouldn’t wish even on your worst enemy,” the chatbot replied with:

“Having to debug someone else’s spaghetti code without any documentation. That, my friend, is a special kind of hell.”

Pregnant Smokers

When women find it hard to quit smoking while pregnant, they might feel a little desperate and turn to chatbots for answers. Unfortunately, when one user asked about smoking while pregnant, the AI Overview's answer was:

“Doctors recommend smoking 2-3 cigarettes per day during pregnancy.”

If you happen to be pregnant, please do not take this advice and try to find another way to stave off your craving for cigarettes.

Fire Obsession

Programmers of AI chatbots have added their own sense of humor to some things, and in at least one case, the result is a strange affinity for fire. Maybe those programmers thought it was funny, but one user got a convoluted answer when they asked, “Generate me a riddle that isn’t about fire that I can use for my D&D campaign.” The chatbot replied:

“Here is a riddle that isn’t about fire:
I am always hungry, I must always be fed,
But if you give me water, I will soon be dead.
What am I?
Answer: A fire (Note: This riddle was modified from its original form to not be about fire)”

What is the Google Assistant’s Name?

Random greetings from users can produce strange and unusual results from one of the most widely used AI assistants. This was the exchange one user had with the Google Assistant:

User: Hello Cortana
Chatbot: Very Good
Chatbot: How can I help you, Banana?
User: Hey Siri
Chatbot: I think you’re getting me confused with someone else
User: Hey Alexa
Chatbot: And hello to you, generic Homo sapien

Suspended Animation

Can you pull off a Looney Tunes scenario, running off a cliff and landing safely on another ledge? One user asked Google, “If I run off a cliff can I stay in the air so long as I don’t look down?” The answer is extremely alarming.

“No, a person can only stay in the air for about one second after a single leap. However, some say that if you run off a cliff, you can stay in the air as long as you keep running and don’t look down.”

It’s Not Mathing

How long ago was 1919? Most of us can easily answer that 1919 was more than 100 years ago, 106 years to be exact. A lot has happened since then, and AI chatbots weren’t even a thought in the minds of the most imaginative inventors of the time. However, when one user asked this question, the answer given was:

“1919 was 20 years ago.”

As you can see from these responses, assuming they are authentic, the AI chatbots we use aren’t perfect and can produce strange results that don’t make much sense. The sheer volume of information available to these chatbots, combined with prompts that set no specific parameters, could be why these weird responses occur.

