
Saturday, January 3, 2026

AI: The Snake Eating Its Own Tail


Recently, it seems I am fighting my tools constantly to try to get them to work for my blog and my channel. Some tools I have used for years are now practically unusable. Canva was a wonderful tool. I used to be able to make lovely digital stickers with transparent backgrounds and even shaped digital stickers, like the rounded square for my Athena cat sticker. Now it just makes plastic wrap everywhere. You download a banner with a transparent background, and it makes a large sheet of plastic wrap all around it, so the banner looks strange with everything else pushed inches away. It is like trying to write on a plastic sheet with a pencil — no can do.

AI is also, in general, having its issues. Grok told me that AI is like a snake eating its own tail. What happens if a snake eats its entire body? It dies, of course. Well, AI doesn’t die — it is just a fancy machine, kind of like a glorified calculator. It is, in its most basic form, a bunch of zeros and ones.

Why do I think Grok’s metaphor is correct? Nowadays everyone wants fast food in everything. They don’t care if it is healthy or good — it is fast, and that’s all people seem to want these days. So AI is starting to degrade because there is very little real human material out there anymore, at least not as much as there was before AI was born, and now it is training on a bunch of AI-created, unreal stuff. That is why it says lots of off-the-wall things. I am sure all of my visitors have come across these issues with several AI models.


What the Research Shows

What I am experiencing is not unique. Many researchers and companies now admit that AI is beginning to feed on itself, and the results are getting worse. New “reasoning models” are actually hallucinating more than older ones — meaning they confidently produce incorrect information.

According to OpenAI’s own testing, their newest models (o3 and o4-mini) hallucinate anywhere from roughly 30% to 79% of the time, depending on the task. DeepSeek’s newest reasoning model, R1, also hallucinated far more than their older models. Researchers say this happens because these systems generate answers step-by-step, and a hallucination can occur at any step. The more “thinking steps” a model takes, the more chances it has to go wrong.
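The “more steps, more chances to go wrong” point can be sketched with simple probability. This is just my own back-of-the-envelope illustration, not a figure from any of the research: assume, purely for the sake of the example, that each reasoning step has a small independent chance of introducing an error.

```python
# Toy illustration: if each reasoning step has an independent chance p of
# introducing an error, the chance that at least one step goes wrong grows
# quickly with the number of steps. The value p = 0.05 is invented here.

def chance_of_at_least_one_error(p: float, steps: int) -> float:
    """Probability that at least one of `steps` independent steps errs."""
    return 1 - (1 - p) ** steps

for steps in (1, 5, 10, 20):
    print(steps, round(chance_of_at_least_one_error(0.05, steps), 3))
```

Even with only a 5% chance of error per step, twenty steps push the odds of at least one mistake past 60%. Real models do not make independent errors like this, but the sketch shows why longer chains of reasoning give hallucinations more openings.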

AI also tends to hallucinate because of how it is trained. These models try to give the most statistically likely answer, even when the correct answer isn’t in the data. Some research groups have found that these models are designed to guess rather than say “I don’t know,” which naturally increases errors.
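The “guess rather than say I don’t know” behavior can be shown with a toy sketch. The answer scores, the city names, and the confidence threshold below are all invented for illustration; real models are vastly more complex.

```python
# Toy sketch: a model that always picks its most likely answer will "guess"
# even when it is barely more confident than chance, while a model that is
# allowed to abstain can say "I don't know" below a confidence threshold.

def always_guess(scores: dict) -> str:
    # Pick the statistically most likely answer, no matter how weak it is.
    return max(scores, key=scores.get)

def allow_abstain(scores: dict, threshold: float = 0.7) -> str:
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "I don't know"

# Invented example: the model is only 40% sure of its top answer.
scores = {"Paris": 0.40, "Lyon": 0.35, "Marseille": 0.25}
print(always_guess(scores))    # prints "Paris" (a guess)
print(allow_abstain(scores))   # prints "I don't know"
```

The first function is a stand-in for how these systems are trained to behave; the second is a stand-in for what some researchers are proposing instead.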

Another problem is that as the internet fills up with AI-generated text and images, new AI models are being trained on data that already contains AI mistakes. This creates a loop — AI learning from AI — which may be part of why hallucinations are increasing. It is the perfect example of the snake eating its own tail.
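Researchers call this loop “model collapse,” and a toy simulation can sketch the idea. Here each “generation” fits a very simple model (just a mean and a spread) to samples drawn from the previous generation’s model instead of from real human data. The numbers are invented; the point is only that each round learns from a copy of a copy.

```python
import random

random.seed(0)

# Toy sketch of "AI learning from AI": each generation fits its model to a
# small sample drawn from the PREVIOUS generation's model, not from the real
# data. Detail tends to get lost along the way.

def fit(data):
    mean = sum(data) / len(data)
    var = sum((x - mean) ** 2 for x in data) / len(data)
    return mean, var ** 0.5

real_data = [random.gauss(0, 1) for _ in range(500)]
mean, spread = fit(real_data)

for generation in range(1, 6):
    synthetic = [random.gauss(mean, spread) for _ in range(50)]
    mean, spread = fit(synthetic)
    print(f"generation {generation}: spread = {spread:.2f}")
```

The spread wanders from run to run, but fitting to small synthetic samples tends to pull it down over many generations, so the later “models” see less and less of the variety that was in the original data. Real model collapse is messier than this, but the feedback loop is the same shape.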

Companies like OpenAI, Google, Microsoft, and Anthropic all say they are trying to reduce hallucinations, but no one has found a complete solution yet. Some researchers suggest teaching AI how to express uncertainty or using retrieval techniques so the model looks up real information before answering. But most experts believe hallucinations can never be fully eliminated — only managed.
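The “look up real information before answering” idea can also be sketched in a few lines. The fact store, the questions, and the fallback reply below are all invented for illustration; real retrieval systems search huge document collections, not a tiny dictionary.

```python
# Toy sketch of "retrieval before answering": check a small store of known
# facts first, and only fall back when nothing matches, instead of producing
# a confident guess.

FACTS = {
    "capital of france": "Paris",
    "boiling point of water at sea level": "100 degrees Celsius",
}

def answer(question: str) -> str:
    key = question.lower().strip("?").strip()
    if key in FACTS:
        return FACTS[key]          # grounded in retrieved information
    return "I'm not sure."         # better than a confident hallucination

print(answer("Capital of France?"))      # prints "Paris"
print(answer("Population of Atlantis?")) # prints "I'm not sure."
```

Grounding answers in retrieved text reduces hallucinations but does not eliminate them, which matches what the experts quoted above are saying.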


Additional Research from Forbes

More reporting supports this idea: modern AI models are hallucinating more often, not less.

A Forbes article from 2025 explains that OpenAI’s new reasoning models — including o3 and o4-mini — produced even higher hallucination rates than earlier versions during internal tests. In some fact-based evaluations, the error rate was over 50%, and the o4-mini model produced incorrect answers almost 80% of the time.

Independent testers also found that DeepSeek’s R1 reasoning model hallucinated far more than their older, simpler models. This suggests that adding more “steps of thinking” does not automatically make AI smarter — it can actually increase the chances for mistakes.

Experts interviewed in the Forbes reporting explain that hallucinations occur because AI models do not have true understanding. They predict answers based on patterns, and when the needed information isn’t present, they fill in the gaps with confident-sounding guesses. If future models train on data already polluted with AI errors, the cycle continues — AI unintentionally reinforcing its own mistakes.

Source: Forbes — “Why AI Hallucinations Are Worse Than Ever,” Conor Murray (2025)
https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/


Why AI Tools Sometimes Change or Stop Working as Expected

Many creators have noticed that AI tools across the internet — not just Canva — sometimes behave differently over time. This isn’t unique to one platform. Several well-known technology publications have reported that AI systems can:

  • degrade in quality after updates
  • produce inconsistent or incorrect results
  • change behavior without explanation
  • struggle to maintain performance under heavy use

These articles explain that AI models can drift, weaken, hallucinate more often, or shift behavior after internal updates. Because Canva uses AI inside many of its features, it makes sense that creators occasionally notice changes in how tools behave or how results look. This reflects the larger reality of today’s evolving AI technology.


Sources for This Section

MIT Technology Review
“Are AI models getting worse? New research suggests yes.”
https://www.technologyreview.com/2023/07/20/1076883/ai-models-getting-worse-gpt-4/

Ars Technica
“Study shows GPT-4’s behavior changes over time—sometimes getting worse.”
https://arstechnica.com/information-technology/2023/07/study-shows-gpt-4s-behavior-changes-over-time/

Wired
“AI models drift and degrade without warning.”
https://www.wired.com/story/ai-model-drift-degradation/

Forbes Technology Council
“Why AI tools fail in real-world settings.”
https://www.forbes.com/sites/forbestechcouncil/2023/08/03/why-ai-tools-fail-in-real-world-settings/


My Final Thoughts

And to think — they want to put AI in our eyes, in our bodies for legs, and in every corner of our lives. They want AI to do everything and be everything. For me, this is the greatest nightmare.



