Saturday, January 31, 2026

When Old Editors Don’t Work: Finding DaVinci Resolve


 






Four years ago, my son talked me into creating a YouTube channel. We started out with HitFilm Express, which was a wonderful editor at the time. It had a great green screen key that allowed me to layer in rain, snow, or whatever elements I needed. Then, HitFilm was bought out, and suddenly every little feature was behind a paywall. To get a functioning editor, you would have had to pay hundreds of dollars. It became obsolete for us overnight.

The Struggle with "Easy" Editors

We turned to YouTube to see what the "Channel Making Gurus" recommended. They suggested Canva and Clipchamp. For a while, we made it work by using them together. I would find a moving element on Pixabay or Pexels—like a cat licking its paw—remove the background in Canva, add a green screen, and then move to Clipchamp to layer it over moving clouds.

However, we eventually hit the paywalls there, too. Between the two, we were paying $26 a month just to produce decent videos.

The Breaking Point

By the end of 2025, Canva changed—and not for the better. They pushed AI features so hard that it felt like a tug-of-war; the AI thought it knew better than I did how my video should look. Transparency stopped working correctly, which meant I couldn't create the stickers I needed for our Buy Me A Coffee membership.

Then came the final blow: Clipchamp blocked my access with a giant banner demanding I upgrade to their $9/month cloud service. I didn't need their storage—I use my own 2TB drive—but they wouldn't let me edit without paying. Suddenly, I had no editor at all.

A New Beginning in 2026

In 2026, the Serenity of the Mind universe is bigger than ever. We have a YouTube channel, a blog, a store, and a membership site. I needed a tool that could handle everything:

  • High-end video editing.

  • Creating stickers for members.

  • Exporting high-quality JPEGs of our backgrounds for the store.

While losing Canva’s elements hurt at first, it turned out to be a blessing. Their library shifted toward strange AI art, but now we have Shy Artist painting original, signature elements! Soon, we will have custom kitty elements and backgrounds available in vertical, landscape, and square formats for tablets and stationery.

Why DaVinci Resolve?

The only software that could handle all these needs was DaVinci Resolve. Best of all? It is a professional-grade editor, and it is free. It has a steep learning curve, but the results speak for themselves. Our latest Short looked better than anything we’ve done before, and our viewers definitely noticed!


Join the Conversation!

Have you struggled with editors or subscription price hikes? What software did you choose in the end? Let us know in the comments!

Don't forget to follow our blog using the link on the right and visit our store to see the latest backgrounds from Shy Artist.


Tuesday, January 20, 2026

What Happened to Canva





I started a YouTube channel on May 29, 2022. My son felt it would be a way to get us off the news channels. Well, now we have been on YouTube for four years. Not long after we started our channel, YouTube launched the Shorts section. Our first Short went up on August 30, 2022. We used Shorts as quick little ads for what we had on our channel. At the time, every YouTube guru was saying to use Canva: you can make quick and easy Shorts in Canva. They talked about how easy Canva was to use, and how all you had to do was pick the create-your-own-YouTube-Shorts template. Canva wasn't just good for Shorts; it was a whiz for avatars and banners too. You could make your Short stand out with cute elements like floating and popping hearts, or sparkles. You could add backgrounds, and the text tools were easy to use. You could also make digital stickers and quick thumbnails for long-form videos. So for Shorts, Canva was our go-to editor, especially when we started doing cat compilations.


As time went on, Canva wasn't working as easily as it had before. This is when the Magic started, and Magic was AI.


Before the Magic (2021–2022)

When I first started doing Shorts in 2022, the Background Remover was a utility, not an "AI event."

  • The Location: It was easy to find. You clicked "Edit Image" and the button was right there at the top.

  • The Process: It was a "one-click and stay" tool. You clicked it, it removed the background, and then it locked that choice in.

  • The Stability: Back then, it didn't try to "re-think" your image. Once the background was gone, the element acted like a normal sticker. You could move it, resize it, or put a cat over a green screen, and it wouldn't "slide" or try to snap back to the original photo.

The 2023 "Magic" Turning Point

In October 2023, Canva moved everything into Magic Studio. This is when I started noticing the "noise":

  • The Change: They turned a simple tool into an "AI Process." Now, instead of just removing pixels, the software tries to "guess" what the foreground is every time you move the image. This is why things feel like they are "sliding"—the AI is constantly re-calculating the edges while you're trying to work.

  • The Video Background Remover: This was the big 2023 addition. It was amazing for Shorts because you could take a cat out of any video and put it anywhere. But because video is "heavy," it made the whole editor start to lag and glitch, especially on longer compilations.

I started paying for Canva in 2022 because I could remove everything but the cat from a video and add a green screen background, and then in Clipchamp I could add a background of moving clouds, so our cat would look like it was floating in the clouds.
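For readers who like to peek under the hood, the cat-in-the-clouds trick is called chroma keying, and at its core it is simple: any pixel close enough to the key color gets swapped for the background. Here is a minimal sketch in Python with NumPy (my own illustration of the idea, not how Canva or Clipchamp actually implement it):

```python
import numpy as np

def chroma_key(foreground, background, green=(0, 255, 0), tolerance=80):
    """Composite foreground over background, treating pixels close
    to the key color (pure green by default) as transparent."""
    # How far each pixel is from the key color.
    diff = foreground.astype(int) - np.array(green)
    distance = np.sqrt((diff ** 2).sum(axis=-1))
    mask = distance < tolerance          # True where the green screen shows
    result = foreground.copy()
    result[mask] = background[mask]      # let the clouds show through
    return result

# Tiny 2x2 example: one "cat" pixel on a green screen, over grey "clouds".
fg = np.array([[[0, 255, 0], [120, 90, 60]],    # green, cat-brown
               [[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 200, dtype=np.uint8)    # light grey background
out = chroma_key(fg, bg)
```

The `tolerance` knob is why real green screens need even lighting: the looser it is, the more of the cat you risk keying away along with the screen.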


To explain how it changed: I'm just trying to move this one little cat clip a hair to the left, and the whole program starts fighting me. It's like the software thinks it's the director and I'm just some intern.

I call it the 'Magnet Mess.' I try to nudge something, and—snap—it yanks the video right out of my hand and sticks it to the edge of the screen. I didn’t put it there! I want it where I want it, but this 'snapping' makes it feel like I’m trying to organize a bunch of refrigerator magnets that are all the same pole.

And then there’s the 'Sliding.' I’ve got my timeline set up, everything is peaceful, and then I hit play. Suddenly, the cat starts drifting across the screen like it’s on ice. I didn't add an animation. I didn't ask for a transition. But the AI decided it knew better and added this 'Match and Move' nonsense without asking. It’s making the viewers dizzy, and it’s making me lose my mind.

But the worst part? The 'Vanishing Act.' I spend an hour getting the green screen just right, removing the background so the cat looks perfect. I see it on the screen. I hit download. And when I open the file? The cat is gone. Or the background is back. Or the buttons to fix it have just... disappeared. They tucked the 'Lock' and the 'Remove' buttons inside some floating menu that plays hide-and-seek every time I move the mouse.

We’ve been doing this for four years, and it used to be simple. Now, it’s just noisy. It’s not 'Magic' if it makes my work disappear—it’s just a bad trick.



The thing that really upset me is that we had started taking our blog seriously. I was making banners for the blog, like for the Content Creator Store, pictures of the backgrounds I will sell, and a button so people can find the store from any page on the blog. I was also making stickers to give away free to members. Well, Canva decided that transparent shouldn’t be transparent; it should have a pillow card. It is like everything is sitting on top of a fluffy, pillow-type card.

So this is the death knell of Canva for us. We are moving on to Photopea, staying on Clipchamp, and having Shy Artist paint our own elements, with hopes of animating them soon in Blender's 2D Grease Pencil.

Sometimes it is better to just leave something that works well alone. AI isn’t necessarily the be-all and end-all of goodness.

The technical names for the new tools in Canva:

  • The Magnet: Officially called "Snapping" or "Auto-Alignment."

  • The Sliding: Officially called "Match & Move" (the AI tries to animate everything automatically).

  • The Vanishing Buttons: Part of the "Glow Up" update (where the fixed toolbar was replaced by a "Contextual Menu" that floats and disappears).

  • The Vanishing Clips: A glitch in the "Magic Media" rendering engine where AI edits don't always "stick" to the final download.

Please follow us by adding your email to the follow-it box in the top right-hand column.

Please leave comments. They help us know what types of blog posts you are interested in.

Please donate or become a member because even a dollar helps us to continue.

And take a look at our Content Creator Store. We just started it, so there is a lot more that we will be adding.


Monday, January 12, 2026

The Death of Quality: Why Your Tools and Paints are Failing You


Dear Blogger and YouTube Audience,

This year has started out pretty hectic for Serenity of the Mind. We had really hoped to have the store launched and running smoothly by now, but we've hit a few unexpected roadblocks.





Elevating Our Artistry

First, our artist Shy Artist wasn't happy at all with how her work was turning out using the Daniel Smith artisan paints. She's decided to go all-in and make her own paints from scratch—mixing pigments with her own watercolor medium. Watercolor artwork already takes a ton of time, and now we're waiting while she creates all her new supplies. She promises the results will be much more professional, though, so we're excited for that.

On top of that, I just realized I hadn't set up our scanner properly, so all the digitized artwork wasn't looking its absolute best. I've got a lot to learn about getting those settings right!

The Challenge with Automated Tools

Things have also changed a lot with tools like Canva and AI image features. These days, if something even touches AI (even if you don't realize it), it automatically adds that "pillow-style" card border around the artwork. We believe it's meant to flag it as AI-generated.

The one that's really throwing us off is Canva. Even with completely hand-created art or our own photos of our cats, if we add transparency, it slaps that pillow card on there automatically.

Our New Path Forward

To fix these issues, we're going to switch to tools that give us more control:

  • Photopea.com: For our primary editing and transparency needs.
  • Blender (2D Grease Pencil): For more advanced work. It has a steep learning curve, but it will be worth it.

We're doing our very best to get the store up and running properly while still keeping up with YouTube videos and blog posts. Please bear with us as we shift away from the old tools and get settled with new ones.

We truly believe this will make our channel, blog, and store even stronger in the long run. Thank you so much for your patience and support—it means the world to us.

With love,
Creators of Serenity of the Mind

Saturday, January 3, 2026

AI: The Snake Eating Its Own Tail











Recently, it seems I am fighting my tools constantly to try to get them to work for my blog and my channel. Some tools I have used for years are now practically unusable. Canva was a wonderful tool. I used to be able to make lovely digital stickers with transparent backgrounds and even shaped digital stickers, like the rounded square for my Athena cat sticker. Now it just makes plastic wrap everywhere. You download a banner with a transparent background, and it makes a large sheet of plastic wrap all around it, so the banner looks strange with everything else pushed inches away. It is like trying to write on a plastic sheet with a pencil — no can do.

AI is also, in general, having its issues. Grok told me that AI is like a snake eating its own tail. What happens if a snake eats its entire body? It dies, of course. Well, AI doesn’t die — it is just a fancy machine, kind of like a glorified calculator. It is, at its most basic form, a bunch of zeros and ones.

Why do I think Grok is correct with the metaphor? Nowadays everyone wants fast food in everything. They don’t care if it is healthy or good — it is fast, and that’s all people seem to want these days. So AI is starting to degrade because now there is very little real human material out there anymore. At least not as much as there was before AI was born, so now it is training on a bunch of AI-created, unreal stuff. That is why it is saying lots of off-the-wall things. I am sure all of my visitors have come across these issues with several AI models.


What the Research Shows

What I am experiencing is not unique. Many researchers and companies now admit that AI is beginning to feed on itself, and the results are getting worse. New “reasoning models” are actually hallucinating more than older ones — meaning they confidently produce incorrect information.

According to OpenAI’s own testing, their newest models (o3 and o4-mini) hallucinate anywhere from 30% to 79%, depending on the task. DeepSeek’s newest reasoning model, R1, also hallucinated far more than their older models. Researchers say this happens because these systems generate answers step-by-step, and a hallucination can occur at any step. The more “thinking steps” a model takes, the more chances it has to go wrong.

AI also tends to hallucinate because of how it is trained. These models try to give the most statistically likely answer, even when the correct answer isn’t in the data. Some research groups have found that these models are designed to guess rather than say “I don’t know,” which naturally increases errors.
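That "guess rather than abstain" design is easy to picture. This little sketch (my own toy example, not any real model's code) shows the difference between always returning the most likely answer and being allowed to say "I don't know" when confidence is below a threshold:

```python
def answer(probabilities, threshold=None):
    """Return the highest-confidence candidate; optionally abstain."""
    best = max(probabilities, key=probabilities.get)
    if threshold is not None and probabilities[best] < threshold:
        return "I don't know"
    return best

# A model that is only 40% sure of its top candidate:
guesses = {"Paris": 0.40, "Lyon": 0.35, "Marseille": 0.25}

print(answer(guesses))        # trained to guess: "Paris"
print(answer(guesses, 0.9))   # allowed to abstain: "I don't know"
```

Most benchmarks score a lucky guess higher than an honest "I don't know," so models get tuned toward the first behavior.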

Another problem is that as the internet fills up with AI-generated text and images, new AI models are being trained on data that already contains AI mistakes. This creates a loop — AI learning from AI — which may be part of why hallucinations are increasing. It is the perfect example of the snake eating its own tail.
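You can watch that loop in miniature. The toy sketch below (my own illustration, nothing to do with any real training pipeline) repeatedly "trains" a model on its own previous output by resampling it; with no fresh human data coming in, rare values disappear and never come back:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0, "real human data": 50 distinct measurements.
human_data = rng.normal(size=50)
data = human_data.copy()

# Each later generation is trained only on the previous generation's
# output (a resample of it), with no fresh human input mixed in.
for generation in range(2000):
    data = rng.choice(data, size=50)

# Rare values stop being picked and are lost for good, so diversity
# only ever shrinks; run long enough, one value crowds out the rest.
survivors = np.unique(data)
```

Fifty distinct values collapse toward a handful of endlessly repeated ones, which is the snake-eating-its-tail effect in its simplest form.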

Companies like OpenAI, Google, Microsoft, and Anthropic all say they are trying to reduce hallucinations, but no one has found a complete solution yet. Some researchers suggest teaching AI how to express uncertainty or using retrieval techniques so the model looks up real information before answering. But most experts believe hallucinations can never be fully eliminated — only managed.


Additional Research from Forbes

More reporting supports this idea: modern AI models are hallucinating more often, not less.

A Forbes article from 2025 explains that OpenAI’s new reasoning models — including o3 and o4-mini — produced even higher hallucination rates than earlier versions during internal tests. In some fact-based evaluations, the error rate was over 50%, and the o4-mini model produced incorrect answers almost 80% of the time.

Independent testers also found that DeepSeek’s R1 reasoning model hallucinated far more than their older, simpler models. This suggests that adding more “steps of thinking” does not automatically make AI smarter — it can actually increase the chances for mistakes.

Experts interviewed in the Forbes reporting explain that hallucinations occur because AI models do not have true understanding. They predict answers based on patterns, and when the needed information isn’t present, they fill in the gaps with confident-sounding guesses. If future models train on data already polluted with AI errors, the cycle continues — AI unintentionally reinforcing its own mistakes.

Source: Forbes — “Why AI Hallucinations Are Worse Than Ever,” Conor Murray (2025)
https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/


Why AI Tools Sometimes Change or Stop Working as Expected

Many creators have noticed that AI tools across the internet — not just Canva — sometimes behave differently over time. This isn’t unique to one platform. Several well-known technology publications have reported that AI systems can:

  • degrade in quality after updates
  • produce inconsistent or incorrect results
  • change behavior without explanation
  • struggle to maintain performance under heavy use

These articles explain that AI models can drift, weaken, hallucinate more often, or shift behavior after internal updates. Because Canva uses AI inside many of its features, it makes sense that creators occasionally notice changes in how tools behave or how results look. This reflects the larger reality of today’s evolving AI technology.


Sources for This Section

MIT Technology Review
“Are AI models getting worse? New research suggests yes.”
https://www.technologyreview.com/2023/07/20/1076883/ai-models-getting-worse-gpt-4/

Ars Technica
“Study shows GPT-4’s behavior changes over time—sometimes getting worse.”
https://arstechnica.com/information-technology/2023/07/study-shows-gpt-4s-behavior-changes-over-time/

Wired Magazine
“AI models drift and degrade without warning.”
https://www.wired.com/story/ai-model-drift-degradation/

Forbes Technology Council
“Why AI tools fail in real-world settings.”
https://www.forbes.com/sites/forbestechcouncil/2023/08/03/why-ai-tools-fail-in-real-world-settings/


My Final Thoughts

And to think — they want to put AI in our eyes, in our bodies for legs, and in every corner of our lives. They want AI to do everything and be everything. For me, this is the greatest nightmare.



