Let me save you some pain. I've generated probably a thousand AI video clips, and here's what actually works versus what wastes your time and money.
Which tool should you actually use?
Runway if you need it to look real. Their Gen-3 model creates clips that look like actual film footage. Downside? Expensive – about $0.50 per second, and you'll generate 20 variations before getting a keeper. But when clients see it, they don't ask "is this AI?" Best for professional B-roll, product shots, anything that needs to look cinematic.
Pika Labs is where I spend most of my time. More control over motion – you can tell the camera to zoom while the subject rotates, or keep camera static while things move. Free tier exists for experimenting. Less photorealistic than Runway but more "I got exactly what I wanted." Best for creative work, animated content, motion graphics.
Kling AI – a Chinese platform that gets slept on. Really good at longer clips (up to 10 seconds) with consistent motion. The interface is in Chinese, but Chrome translates it fine. Best for narrative sequences, character animation.
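One thing worth internalizing about Runway's pricing before you commit: at $0.50 per second and roughly 20 variations per keeper, a 5-second clip isn't a $2.50 clip. Quick arithmetic (using my numbers above; your hit rate will vary):

```python
# Back-of-the-envelope cost per usable Runway clip, using the numbers
# quoted above ($0.50/second, ~20 variations before a keeper).
PRICE_PER_SECOND = 0.50  # USD
CLIP_SECONDS = 5
VARIATIONS_PER_KEEPER = 20

cost_per_attempt = PRICE_PER_SECOND * CLIP_SECONDS
cost_per_keeper = cost_per_attempt * VARIATIONS_PER_KEEPER
print(f"${cost_per_attempt:.2f} per attempt, ~${cost_per_keeper:.0f} per usable clip")
```

That ~$50-per-keeper figure is the real number to budget around, not the per-second rate.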
The trick that actually works:
Don't start with text-to-video. Make a perfect image first in Midjourney, get it exactly right – composition, lighting, style, everything. Then animate that image.
Why? Text-to-video is gambling on both the visual AND the motion. Image-to-video is just gambling on motion. Way better odds.
My workflow: Create image in Midjourney → Upload to Pika/Runway → Add motion prompt like "camera slowly pushes in" → Generate. Quality jumps dramatically.
Prompts that work:
Be explicit about camera movement. "Slow dolly forward" or "camera pans left" or "static shot, no camera movement." If you don't specify, it invents random camera moves that usually wreck the shot.
Describe motion intensity. "Subtle movement" vs "dynamic motion" – these matter. Without it, everything moves too chaotically.
Real example: "Camera tracks forward through coffee shop, morning light from windows, gentle steam rising from cups, customers slightly blurred in background, calm atmosphere"
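If you're generating a lot of clips, it helps to assemble prompts from the same slots every time: camera move first, then scene details, then motion intensity and mood. A tiny helper to show the structure (the slot names are just my convention, nothing the tools require):

```python
def motion_prompt(camera, scene, intensity="subtle movement", atmosphere=None):
    # Order matters in practice: lead with the camera move so the model
    # doesn't invent its own, then scene details, then intensity and mood.
    parts = [camera, scene, intensity]
    if atmosphere:
        parts.append(atmosphere)
    return ", ".join(parts)

print(motion_prompt(
    camera="camera tracks forward through coffee shop",
    scene="morning light from windows, gentle steam rising from cups",
    atmosphere="calm atmosphere",
))
```

The point isn't the code, it's the discipline: every prompt answers the same three questions (camera? scene? how much motion?), so when a generation fails you know which slot to change.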
What's still broken:
Hands morphing – avoid close-ups of hands doing detailed things.
Faces during quick motion – keep facial movements subtle.
Text rendering – add text in post with CapCut.
Anything longer than 10 seconds – stick to short clips; they're way more consistent.
Actual workflow:
Generate a perfect still image (10-20 tries in Midjourney) → upload to your video tool → simple motion prompt focused on the camera → generate 5-10 variations → pick the best one → edit in Descript or CapCut for final touches.
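Why 5-10 variations and not 2? If each variation has, say, a 15% chance of being usable (my guess, not measured data), the odds of getting at least one keeper look like this:

```python
# Chance of at least one keeper in n independent attempts, each with
# probability p of being usable. The 15% below is an assumption for
# illustration, not measured data.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in (2, 5, 10):
    print(f"{n} variations: {p_at_least_one(0.15, n):.0%}")
```

Returns diminish fast past ten or so, which is why at that point I'd rather fix the prompt than keep rolling.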
Time investment: 30-60 minutes for one good 5-second clip. But the results are actually usable for real projects.
For motion designers:
AI generates elements you animate traditionally. Create abstract background in Runway → Import to After Effects → Animate graphics over it. Or use Luma AI for 3D elements you composite. The combo of AI + traditional tools is where the real power is.
Money reality:
Runway: $12/month minimum, but $76/month if you're serious. Pika: free tier for learning, $10/month for regular use. CapCut: completely free, shockingly good. Topaz Video AI: $299 one-time for upscaling.
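Totaled up, here's roughly what the full stack above costs in year one (prices as quoted; check current pricing before budgeting):

```python
# First-year cost of the stack above, using the prices quoted in the post.
monthly = {"Runway (serious tier)": 76, "Pika": 10, "CapCut": 0}
one_time = {"Topaz Video AI": 299}

year_one = sum(12 * price for price in monthly.values()) + sum(one_time.values())
print(f"Year one: ${year_one}")  # subscriptions + one-time purchases
```

So figure a bit over a hundred a month once Topaz is amortized – cheap next to a single day of traditional production, but not nothing.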
Share your attempts here, even the weird ones. Show your prompts and results. Ask specific questions. When something works, share exact settings. We're figuring this out together.