how to create anti-slop sora videos that go viral

my 5-step process to 10x your sora video quality

Hey there, solopreneur!

I've been diving deep into Sora over the past week.

Not just creating slop with it, but actually understanding how to create videos that perform.

There's a massive gap between "I made an AI video" and "I made an AI video that gets reach."

It's the difference between spaghetti with ketchup and a Michelin-star dish.

This week I'm sharing the complete workflow I learned so you can test it yourself and decide if it fits your content.

Here's what you're getting:

  • The exact 4-stage workflow pros use (the TikTok → Gemini → Sora pipeline)

  • The Gemini prompt that reverse-engineers any viral video into a JSON blueprint

  • The cleanup steps that prevent shadow-banning (most people skip these and wonder why they get 47 views)

  • How I'm using Sora Pro for $11 per video instead of $200/month

Here’s how:

The Two Places Your Sora Videos Are Dying

I watched dozens of creators try Sora over the past few weeks. Most keep making the same 2 mistakes.

Mistake #1: Starting with a blank canvas

Most people open Sora, stare at the prompt box, and type something like "a woman drinks coffee in a modern kitchen."

Then they wonder why the output looks generic, the camera movements are janky, and nothing feels professional.

The pros don't start here. They start with a viral blueprint on TikTok or IG (more on that later).

Mistake #2: Thinking generation is the finish line

This one kills you.

Someone finally gets a decent Sora output. They're hyped. Download it, upload it straight to TikTok, and...nothingburger with fries.

47 views. No engagement. Dead.

Even a perfect Sora video has issues: slightly off video quality that makes people scroll past, robotic-sounding audio that feels wrong, and that Sora watermark screaming "AI content." People can tell something's off even if they can't articulate why.

And once viewers start scrolling past? Your reach dies. Fast.

What separates people who make this work: The cleanup you do AFTER generation matters way more than the generation itself.

Let me break down the system step by step:

The "Become the Emperor of Sora" Workflow (4 Stages)

Stage A: Download Your Blueprint Video

Tools: snaptik.app (TikTok) or snapinsta.com (Instagram)
Cost: Free

Find a viral video in your niche. Copy the URL. Download it watermark-free.

Watermark-free matters for three reasons: you need a clean file for AI analysis, watermarks confuse the prompt generation, and you want to study the pure composition.

This takes 2 minutes. Save the file to your desktop.

You now have a proven blueprint. Everything else is reverse-engineering this.
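
If you'd rather script this step than click through a web tool, yt-dlp can pull the same file from Python. This is a minimal sketch, not part of the workflow above, and depending on the platform and video it may still hand you a watermarked file, so the tools listed remain the simplest path:

from yt_dlp import YoutubeDL  # pip install yt-dlp

# Placeholder URL: paste the real TikTok or Instagram link you copied
url = "https://www.tiktok.com/@creator/video/1234567890"

with YoutubeDL({"outtmpl": "blueprint.%(ext)s"}) as ydl:
    ydl.download([url])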

Stage B: Extract the DNA with Gemini (turn video into JSON)

Tool: Gemini (analyzes video directly)
Cost: Free

It’s crazy that most people are still unaware that Gemini can analyze video files.

In this step, upload your downloaded video to Gemini with this exact prompt:

Analyze this video shot by shot and scene by scene. Create a detailed JSON format breakdown for video recreation (maximum 15 seconds, but match the original video length if it's shorter) that includes:

1. Scene description (environment, setting, props, colors)
2. Camera work (angles, movements, transitions between shots)
3. Subject actions (what's happening, movements, gestures)
4. Lighting setup (source, quality, shadows, mood)
5. Voiceover transcript (exact words spoken, if any)
6. On-screen text (any text overlays, captions, or graphics)
7. Audio elements (background music style, sound effects)
8. Timing (duration of each major scene or action)

Format your response as a valid JSON object that can be used for video generation. Be extremely specific and technical about camera movements and lighting.

Gemini will analyze the video frame-by-frame and give you a complete breakdown with timing, camera angles, voiceover text, and lighting specs.

What you get:

  • Exact voiceover transcript

  • On-screen text captured with timing

  • Camera movements broken down shot-by-shot

  • Lighting setup detailed precisely

Takes 3-5 minutes. Now you have the exact formula, not a guess (especially helpful if you don't have any filmmaking background).
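
If you'd rather run this step through the API instead of the Gemini web app, here's a rough sketch using the google-generativeai Python library. The model name is an assumption on my part; the prompt is the one above:

import time
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# Upload the blueprint video; the Files API needs a moment to process it
video = genai.upload_file(path="blueprint.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name, use whichever tier you have
prompt = "Analyze this video shot by shot and scene by scene..."  # paste the full Stage B prompt here
response = model.generate_content([video, prompt])
print(response.text)  # the JSON breakdown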

Stage C: Make It Yours and Generate With Sora

Tool: ChatGPT / Claude (whichever you like)

Take Gemini's detailed output - this is your proven blueprint. But you can't just copy it exactly. You need to make it yours.

This is where you add:

  • Your own twist on the concept of the video

  • A different script or voiceover

  • Your brand's style

  • A new setting or scene that fits your niche

Feed Gemini's JSON into ChatGPT with this prompt:

Take this video breakdown and adapt it for [your niche]. Keep the camera movements and pacing, but change the setting to [X] and the script to focus on [Y].


or

Convert this to a 10-second version focused on [specific angle]. Maintain the professional camera work but simplify the message to [your core point].


The idea: You're using a proven structure (the viral video's blueprint) but customizing the content to match what you actually want to say.

Once ChatGPT gives you the adapted version, simplify it for Sora:

Convert this into Sora's JSON format with these fields: scene, camera, motion, lighting, mood.

Include spoken script as well as intonation.

Now you have your customized prompt ready for Sora.
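
For reference, the simplified prompt usually ends up looking something like this. The field names follow the conversion prompt above; every value here is invented purely for illustration:

{
  "scene": "Minimalist home office at sunrise, light oak desk, laptop, ceramic coffee mug",
  "camera": "Slow push-in from an eye-level medium shot, single continuous take",
  "motion": "Subject types, pauses, then looks up and speaks directly to camera",
  "lighting": "Soft three-point setup with a large window as the key light",
  "mood": "Calm, focused, quietly confident",
  "script": {
    "spoken": "Most creators quit right before the part that actually works.",
    "intonation": "Measured and warm, slight emphasis on 'actually works'"
  }
}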

EU access note: If you're in Europe and don't want to deal with App Store shenanigans to get the Sora app, you can access Sora through Higgsfield or fal.ai. Both let you use Sora without the geographic restrictions (you can't use cameos, though).

Pro tip: Study Linus Ekenstam's filmmaking vocabulary thread. Learning terms like "three-point lighting" or "tracking shot" transforms your prompts overnight.

Before you actually generate anything, you need to decide which Sora tier you're using. This affects both your output quality and what cleanup you'll need to do after.

I use Sora Pro. Here's why:

Sora Free vs. Pro: Which One Should You Use?

The biggest difference: 8-9/10 quality and watermark-free output with Pro.

Sora Pro via API costs about $7.50 for a 15-second video. The quality is noticeably better, but the hit-or-miss success rate makes it risky if you're just testing.

My take:

Start with Free tier to learn the workflow. Once you're consistently making videos you're happy with, upgrade to Pro for the quality bump and no watermark.

If you stay on Free tier, the cleanup steps below become even more critical.

Now feed your prompt into Sora and generate your video.
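
If you're generating through an API rather than the app (for example via fal.ai, as mentioned in the EU note above), the call looks roughly like the sketch below. The fal_client call shape is real, but the endpoint id and argument names are placeholders; check fal.ai's model page for the current schema:

import json
import fal_client  # pip install fal-client; expects a FAL_KEY environment variable

# Your adapted JSON prompt from Stage C, serialized as a string
sora_prompt = json.dumps({
    "scene": "...",
    "camera": "...",
    "motion": "...",
    "lighting": "...",
    "mood": "...",
    "script": {"spoken": "...", "intonation": "..."},
})

result = fal_client.subscribe(
    "fal-ai/sora/text-to-video",        # placeholder endpoint id, check fal.ai for the current one
    arguments={"prompt": sora_prompt},  # assumed argument name, match the endpoint's schema
)
print(result)  # typically includes a URL to the generated video file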

Stage D: The Cleanup (Where Most People Fail)

You generated a decent video. You're hyped.

Don't upload it yet.

Even Pro-tier Sora videos have issues that will kill your reach:

Issue #1: Quality isn't quite there

Free tier Sora outputs at 6/10 quality. It's "fine" but not platform-competitive. Videos that look slightly off get skipped instantly.

Issue #2: That watermark is screaming "AI"

The Sora watermark might as well say "PLEASE IGNORE THIS CONTENT." It destroys credibility immediately.

Issue #3: The audio sounds robotic

AI-generated audio has a tinny quality. Background hiss. Unnatural cadence. People can hear it even if they can't articulate why it feels wrong.

Issue #4: The metadata is flagged

This is the silent killer. Buried in your video file are metadata tags that say "Sora AI-generated." Even if your video looks and sounds perfect, platforms read this and limit your distribution.

Most creators never even know this is happening. They just wonder why their videos don't get views.

Here's how to fix all of it:

Step 1: Upscale the Video Quality (only if you don't use Pro)

Upload your Sora video. This AI upscaler increases quality 4x.

Your 6/10 output becomes 8-9/10 instantly. Now it's competitive with native phone footage.

Step 2: Remove That Damn Watermark (only if you don’t use Sora Pro)

The Sora watermark is a death sentence for credibility. This tool erases it digitally.

Critical warning: This tool removes ALL text overlays, not just the watermark. When you create your initial Sora video, specify "no overlay text" in your prompt. You'll add text manually later using the platform's native editor.

Step 3: Fix the Audio

Upload your video. This tool removes background noise, eliminates that robotic quality, and enhances speech clarity.

Takes 60 seconds. Your audio goes from "clearly AI" to "studio quality."

Step 4: Clean Your Metadata (The Insurance Policy)

This is the step nobody talks about.

Social platforms can read metadata buried in your video file. Things like:

  • "Generated by: Sora AI"

  • "Creation method: Text-to-video AI"

  • "Software: OpenAI Sora v2"

Even if your video looks and sounds perfect, this metadata gets you flagged and shadow-banned.

The Video Remixer strips all metadata and re-encodes your file as if it came from a normal camera. Upload your cleaned video, download the remixed version.

This is your insurance policy against platform detection.
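
If you want to see or strip this yourself, ffmpeg and ffprobe can do a generic version of the same job. This is a DIY sketch, not what the Video Remixer does under the hood, and it won't touch any invisible watermarking a platform might also check:

import subprocess

SRC, DST = "cleaned.mp4", "final.mp4"

# Inspect what's embedded: look for encoder/software/comment tags mentioning Sora or AI
subprocess.run(["ffprobe", "-hide_banner", "-show_format", "-show_streams", SRC], check=True)

# Strip the container metadata and re-encode so the old encoder tags don't survive
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-map_metadata", "-1",            # drop global metadata
    "-c:v", "libx264", "-crf", "18",  # re-encode video
    "-c:a", "aac",                    # re-encode audio
    DST,
], check=True)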

Why the Math Is Mathing

Per video costs:

  • Sora generation (Free tier): $0

  • Upscaling: $0.20

  • Everything else: Free

Total: $0.20 per video

Compare that to hiring a videographer ($500-2000) or Sora Pro ($200/month + $7.50 per video).

The cleanup takes 15 minutes once you've done it a few times.

Skip it and your videos get shadow-banned, skipped, or flagged as AI slop.

Step 5: Upload Through the App (One More Critical Step)

After all that cleanup, don't upload from your desktop browser.

Use the TikTok or Instagram app on your phone.

Platforms prioritize content uploaded through their native apps. Desktop uploads get flagged as "less authentic" and get lower initial reach.

Transfer your cleaned video to your phone (AirDrop, Google Drive, whatever) and upload from there.

Sounds stupid. Makes a difference.

My Take After Testing This

The more I learned about creating good videos with Sora, the more I actually enjoyed it.

I can see how I'll probably enhance my content with some AI stuff in the future. But I also love recording myself, so it's about finding a balance.

That's the key when something new comes out - you don't just resist it or complain.

You try it.

You see what works.

You figure out where it fits in your workflow.

Maybe Sora videos will work for you. Maybe they won't. But you won't know until you test it :)

See you next week

Vibe Check: what'd you think of today's email?


Ole

P.S. Let me know if you enjoyed this more technical edition. Shoutout to David on X; I learned a lot of this Sora stuff from him.