
Saturday, January 3, 2026

AI: The Snake Eating Its Own Tail


Recently it seems I am constantly fighting my tools to get them to work for my blog and my channel. Some tools I have used for years are now practically unusable. Canva was a wonderful tool. I used to be able to make lovely digital stickers with transparent backgrounds, and even shaped digital stickers, like the rounded square for my Athena cat sticker. Now it puts plastic wrap everywhere. You download a banner with a transparent background, and it arrives with a large sheet of plastic wrap all around it, so the banner looks strange and everything else gets pushed inches away. It is like trying to write on a plastic sheet with a pencil: no can do.

AI is also, in general, having its issues. Grok told me that AI is like a snake eating its own tail. What happens if a snake eats its entire body? It dies, of course. Well, AI doesn’t die — it is just a fancy machine, kind of like a glorified calculator. It is, at its most basic form, a bunch of zeros and ones.

Why do I think Grok's metaphor is correct? Nowadays everyone wants fast food in everything. They don't care if it is healthy or good; it is fast, and that is all people seem to want these days. So AI is starting to degrade, because there is far less genuine human-made material online than there was before AI arrived, and new models end up training on a pile of AI-created, unreal stuff. That is why they say so many off-the-wall things. I am sure all of my visitors have come across these issues with several AI models.


What the Research Shows

What I am experiencing is not unique. Many researchers and companies now admit that AI is beginning to feed on itself, and the results are getting worse. New “reasoning models” are actually hallucinating more than older ones — meaning they confidently produce incorrect information.

According to OpenAI’s own testing, its newest models (o3 and o4-mini) hallucinated on roughly 30% to 79% of test questions, depending on the benchmark. DeepSeek’s newest reasoning model, R1, also hallucinated far more than the company’s older models. Researchers say this happens because these systems generate answers step by step, and a hallucination can occur at any step. The more “thinking steps” a model takes, the more chances it has to go wrong.
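A quick back-of-the-envelope illustration of why more steps means more chances to fail (the numbers here are made up for illustration, not from any company's tests): even if each individual reasoning step is 98% reliable, a long chain is right far less often.

```python
# Toy illustration: if each reasoning step is independently 98% reliable,
# the chance that an entire multi-step chain stays correct shrinks fast.
step_reliability = 0.98

for steps in (1, 5, 10, 20):
    chain_ok = step_reliability ** steps
    print(f"{steps:2d} steps -> {chain_ok:.0%} chance the whole chain is correct")
# 1 step: 98%, 5 steps: 90%, 10 steps: 82%, 20 steps: 67%
```

Real models are not this simple, of course, but the multiplication effect is the same reason longer reasoning chains give hallucinations more openings.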

AI also tends to hallucinate because of how it is trained. These models try to give the most statistically likely answer, even when the correct answer isn’t in the data. Some research groups have found that these models are designed to guess rather than say “I don’t know,” which naturally increases errors.

Another problem is that as the internet fills up with AI-generated text and images, new AI models are being trained on data that already contains AI mistakes. This creates a loop — AI learning from AI — which may be part of why hallucinations are increasing. It is the perfect example of the snake eating its own tail.
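The feedback loop can even be sketched with a toy simulation (my own illustration, not taken from any of the research mentioned here). Start with a "corpus" of distinct words; at each "training generation," build the next corpus only by resampling the previous one, the way a model trained on AI output can only inherit what that output happened to contain. Diversity never grows back; it only shrinks.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Generation 0: a "human-written" corpus of 100 distinct words.
corpus = [f"word{i}" for i in range(100)]
print("generation 0 unique words:", len(set(corpus)))  # 100

# Each generation "trains" on the previous generation's output:
# it can only reproduce (resample) what already exists, so any word
# that fails to get sampled is lost forever.
for generation in range(1, 31):
    corpus = [random.choice(corpus) for _ in range(len(corpus))]

print("generation 30 unique words:", len(set(corpus)))
```

After thirty generations only a handful of words survive. Real model collapse is more complicated, but the basic mechanism is the same: copying copies loses information.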

Companies like OpenAI, Google, Microsoft, and Anthropic all say they are trying to reduce hallucinations, but no one has found a complete solution yet. Some researchers suggest teaching AI how to express uncertainty or using retrieval techniques so the model looks up real information before answering. But most experts believe hallucinations can never be fully eliminated — only managed.


Additional Research from Forbes

More reporting supports this idea: modern AI models are hallucinating more often, not less.

A Forbes article from 2025 explains that OpenAI’s new reasoning models — including o3 and o4-mini — produced even higher hallucination rates than earlier versions during internal tests. In some fact-based evaluations, the error rate was over 50%, and the o4-mini model produced incorrect answers almost 80% of the time.

Independent testers also found that DeepSeek’s R1 reasoning model hallucinated far more than their older, simpler models. This suggests that adding more “steps of thinking” does not automatically make AI smarter — it can actually increase the chances for mistakes.

Experts interviewed in the Forbes reporting explain that hallucinations occur because AI models do not have true understanding. They predict answers based on patterns, and when the needed information isn’t present, they fill in the gaps with confident-sounding guesses. If future models train on data already polluted with AI errors, the cycle continues — AI unintentionally reinforcing its own mistakes.

Source: Forbes — “Why AI Hallucinations Are Worse Than Ever,” Conor Murray (2025)
https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/


Why AI Tools Sometimes Change or Stop Working as Expected

Many creators have noticed that AI tools across the internet — not just Canva — sometimes behave differently over time. This isn’t unique to one platform. Several well-known technology publications have reported that AI systems can:

  • degrade in quality after updates
  • produce inconsistent or incorrect results
  • change behavior without explanation
  • struggle to maintain performance under heavy use

These articles explain that AI models can drift, weaken, hallucinate more often, or shift behavior after internal updates. Because Canva uses AI inside many of its features, it makes sense that creators occasionally notice changes in how tools behave or how results look. This reflects the larger reality of today’s evolving AI technology.


Sources for This Section

MIT Technology Review
“Are AI models getting worse? New research suggests yes.”
https://www.technologyreview.com/2023/07/20/1076883/ai-models-getting-worse-gpt-4/

Ars Technica
“Study shows GPT-4’s behavior changes over time—sometimes getting worse.”
https://arstechnica.com/information-technology/2023/07/study-shows-gpt-4s-behavior-changes-over-time/

Wired Magazine
“AI models drift and degrade without warning.”
https://www.wired.com/story/ai-model-drift-degradation/

Forbes Technology Council
“Why AI tools fail in real-world settings.”
https://www.forbes.com/sites/forbestechcouncil/2023/08/03/why-ai-tools-fail-in-real-world-settings/


My Final Thoughts

And to think — they want to put AI in our eyes, in our bodies for legs, and in every corner of our lives. They want AI to do everything and be everything. For me, this is the greatest nightmare.




Thursday, July 31, 2025

The Shy Artist: Learning Art to Help a YouTube Channel








I don’t want my identity known. I make art because I want to—not to be famous or well known. I create art for Serenity of the Mind because I enjoy helping the content creator.

I started with colored pencils but found they didn’t translate well once digitized. It took a lot of time to build concentrated color, and the colors often behaved strangely on YouTube. Here is the first piece I did for the Serenity of the Mind YouTube channel:

Sleep: Serenity of the Mind – Restful Abstract Art and Music for Fast Sleep


The First Drawing: Neurographic Art for Serenity

The very first artwork I created for Serenity of the Mind was in the style of neurographic art. This technique involves drawing flowing, intersecting lines and gently rounding all sharp corners. It’s known for being calming and meditative, helping both the artist and viewer feel more at peace.

Creative Paula introduced me to this method. She explained that the process of softening hard edges is symbolic—turning mental tension into connection and harmony. I found it a perfect way to reflect the idea behind Serenity of the Mind—easing stress, embracing flow, and encouraging emotional healing through gentle, organic art.

Here is the YouTube channel for Creative Paula: Creative Paula on YouTube

Unlike Creative Paula, who is a great artist with galleries, I animated the neurographic art and set it to music so it “danced.” I felt this would help people relax and fall asleep.


Circle Art

Sleep: Serenity of the Mind: Otherworldly Dreams was inspired by circle art. Because it was difficult to get saturated colors with colored pencils, I decided to make the backgrounds with watercolor. That’s when I learned that not all watercolor paints are created equal—I needed to invest in better-quality paints.

In this watercolor animation, I used colored pencil for the circles. Making the circles saturated enough to appear well on the platform was challenging—it took a week to complete the circles. I also had to learn how to use Kdenlive to animate them. 

However, because I missed a detail while drawing, some of the circles unintentionally showed parts of the background, and partial circles moved around along with the whole circles. You can see what I mean here:

Sleep: Serenity of the Mind: Otherworldly Dreams


Why Cheap Watercolors Don’t Work

Here is another video that illustrates why cheaper watercolors don’t deliver professional results. The green beans and greens were supposed to be green but came out muddy brown. These were labeled “professional” but were an off-brand.

Thanksgiving Music Video: Music to Relax to


We also learned something important about titling videos on YouTube. This video was primarily a watercolor painting with music, so it should have been titled “Thanksgiving animated watercolor with music” and placed under YouTube’s Film and Animation category. Proper titles and categories help prevent content claims.


Improving with a Watercolor Class

Unsatisfied with my watercolor results, I took a Craftsy class: Startup Library: Painting With Watercolors.

Instructor: Kateri Ewing

I learned a lot about watercolor papers, brushes, and most importantly, which paint brands yield professional results. I purchased Daniel Smith watercolor tubes, and here is the result. Even Paul Clark on YouTube said he liked it—I had been learning from him as well. I appreciate his loose, calm painting style.

You can purchase Daniel Smith watercolors in tubes or pans here (not an affiliate link, no commission—just to help readers):


Daniel Smith Watercolors



Final Watercolor Animation

Sleep: Serenity of the Mind: Tranquil Twilight for Deep Sleep

The colors came out beautifully, and I managed to animate it fairly well.

Next time, I will discuss the difficulties of animating art.


Thank you very much for reading. Please subscribe to the channel @serenityofthemind and follow this blog for more.
