I accidentally created an AI music video



Groups of people dance vigorously under bright, warm yellow-orange lights, creating a blurry dynamic effect that emphasizes movement and excitement.
Frame from “We were born” (“Life – 우리는 살아가기”) – ©2025 Stephen Obermeier and Lea Burger

When the Midjourney newsletter hit my inbox late at night, one line stood out. That was enough. Sleep would have to wait.

I opened the interface, clicked a few buttons, and within a few minutes it became clear: Midjourney had quietly launched a tool that works better out of the box than most competitors. It's fast. It's intuitive. And it delivers surprisingly consistent motion from a single still frame.

What happened next was not planned

I started animating random images from my Midjourney archive: old experiments, unfinished ideas, visual scraps from various stages of my long-term AI project Postcards from Pyongyang. If a still felt like it could move, I hit “Animate”. No preparation, no strategy.

Within an hour I had around 100 clips, all generated in-app. Total GPU cost: about 12 hours of fast time. I showed them to my colleague Lea Burger. Her response was immediate: “Let's turn this into a music video.”

The music was already there

A few months earlier I had created the track with MakeBestmusic, based on Korean lyrics I wrote with the help of ChatGPT-4o. It wasn't intended for anything in particular – just part of the archive. Surprisingly, the song came out as a 70s-style disco number: glittery, punchy, retro-apocalyptic.

And it clicked. The dreamy, surreal movement from Midjourney paired strangely well with the sound.

Everything went into Final Cut Pro. No grading. No effects. Just rhythm, timing, raw material.

Total production cost (excluding time): $30. Total time: One night.

Midjourney's first video release: a (very) quick review

Yes, it's a V1. But it's surprisingly solid.

The interface is minimal: a single “Animate” button in the familiar image grid. You choose low or high motion, and the tool generates a 5-second clip (extendable up to about 21 seconds). Everything runs in the browser. No timeline. No layers. No learning curve.

That's the real strength here: it's accessible. It's easy. And it gets results – fast. No animation experience required. No workflow required. All you need is a good still and an instinct for movement.

The output isn't perfect. But it's good enough to surprise you. This isn't just frame interpolation; the system adds movement that feels directed. Clothes sway. Light flickers. Water moves with something like intention. It's not realistic, but it is emotionally suggestive.

Here is a selection of my first animated clips:

Some of the limitations are obvious:

  • No audio
  • No timeline or timing control
  • Output resolution is 480p (for now)
  • Complex scenes glitch, especially with high motion

But here's the point: it delivers usable video in seconds, with zero setup and a clear visual identity. Midjourney isn't trying to become a full animation suite. It's a space for creative reflexes – fast, direct, and surprisingly expressive.

That's why it works. It's not about control; it's about momentum. And sometimes that's all you need to get started.

That's how this music video came together: one night, a few clicks, and a tool that doesn't get in your way.


The opinions expressed in this article are solely those of the author.


About the author: Stephen Obermeier is a 3D artist in the architecture field, a freelance photographer, and an AI experimenter for advertising agencies. You can find more of Obermeier's work on his website. This article was also published here.



