    EOSHD.com – Filmmaking Gear and Camera Reviews

    Users are complaining OpenAI wrecked Sora 2. EOSHD takes a look at what happened…

    By Andrew Reid (EOSHD) · December 14, 2025 · Featured · 10 Mins Read

    When Sora 2 was released (invite only), the results were impressive. Investors were happy, users were excited. Then a strange thing happened.

    Sora began to buckle under load despite being invite-only. To roll it out to all users, OpenAI appears to have massively scaled the technology back. Users allege not only that OpenAI did this, but that it did so silently, behind closed doors, without any explanation. The results were clear: overnight, the quality of material generated by Sora 2 fell into the gutter like slop from a sausage machine.

    EOSHD takes a look at what has been going on…

    Sora 1 and the new Sora 2 are among the most hyped AI tools of all time. They have driven enormous investor interest and, at their best, are capable of amazing visuals with very few imperfections.

    At launch, prompts in Sora 2 could be incredibly complex, with several sections informing the look. In OpenAI’s own example of the otherworldly dragon, a “First Read” laid out the primary target and visuals of the general scene, followed by a “Format & Look” section using cinematography terms such as a 180-degree shutter, a large-format sensor, crisp micro-contrast and no gate weave.

    You could then go as far as to specify lenses, filters, grading and lighting. Sound is generated too. To round out the shot, the prompt can provide camera notes, location details and framing, as well as an optimised shot list to sequence it all.
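    As a rough sketch of how such a prompt is structured, the sections described above could be assembled like this. The section names follow the article’s description; the exact format OpenAI uses internally is an assumption, and the scene text is purely illustrative.

```python
# Hypothetical sketch: assembling a multi-section Sora-style prompt
# from the parts described above. Section names and layout are
# illustrative, not OpenAI's documented format.

def build_prompt(sections: dict) -> str:
    """Join named prompt sections into one structured prompt string."""
    return "\n\n".join(
        f"[{name}]\n{text.strip()}" for name, text in sections.items()
    )

prompt = build_prompt({
    "First Read - Primary Target & Visuals":
        "A dragon flying above a rugged coastline at dusk, storm rolling in.",
    "Format & Look":
        "180-degree shutter, large format sensor, crisp micro contrast, no gate weave.",
    "Lenses, Filters, Grading & Lighting":
        "Anamorphic lens, mild halation, warm filmic grade, low-key natural light.",
    "Camera Notes, Location & Framing":
        "Slow aerial tracking shot, dragon centre-frame, horizon kept low.",
    "Shot List":
        "1. Wide establishing. 2. Mid tracking. 3. Close-up wing detail.",
})
print(prompt)
```

    The point of the structure is that each named section constrains one aspect of the output, rather than relying on a single free-form sentence.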

    https://www.eoshd.com/wp-content/uploads/2025/12/dragon2.mp4

    The result was a few seconds, admittedly spectacular, of a dragon flying above a coastline with thunder and lightning in the background. Big whoop.

    Now the thunder is very much in the foreground, for OpenAI is getting a roasting. Sora 2 at this level needs vast amounts of Nvidia compute to work.

    In Reddit discussions on the OpenAI subreddit, you can hear directly from disgruntled paying customers about the dramatic shifts in quality they’ve been experiencing.

    The launch version has effectively been shelved and now isn’t even available in most countries.

    In one topic, “It’s insane how badly they’ve ruined Sora 2 already”, the consensus is that OpenAI does not even have a sustainable business model to support it long term, and that the “real Sora 2” can’t be rolled out to meet mainstream demand.

    Sora 2 simply doesn’t scale.

    Users even allege the technology was launched in loss-making form to show its full potential to investors and at the crucial moment when everyone was watching. As the losses came rolling in during subsequent weeks, the system was dialled back to use a fraction of the compute power it did at first.

    Signing up to OpenAI Plus today for 23 euros per month where I am (Europe), Sora 2 isn’t even available – only limited use of Sora 1 is allowed.

    So what has happened to Sora 2 and does it have a future?

    Is this kind of technology simply too expensive to roll out at scale to paid users, at least for the ChatGPT Plus tier of $20 per month?

    Could it be that in future, only professional creative studios will be able to use the best generative AI video tools – likely paying upwards of tens of thousands of dollars per month?

    Could it be a bit like the DSLR revolution one day? Imagine the first affordable high-end Sora models reaching the mainstream as a one-off purchase, with a hardware controller enabling a virtual film studio on your desktop. That would be nice.

    Might the compute to run it all fit on a single, professional GPU card on your desk?

    The truth is, we have no clue.

    Right now that future looks incredibly uncertain.

    And yet the investors are piling in as if OpenAI have discovered an alien technology.

    We don’t know how the business model of generative AI will play out at all, especially not for the most advanced models. Video generation, world simulation, and video game development are all potential targets for AI – but they’re hard, and even if it all DOES work as promised, factors outside the industry’s control might topple them anyway. More on that in a moment…

    Nvidia’s stock price today reflects the massive demand for AI data-centre GPUs.

    But there’s a problem.

    There are simply not enough Nvidia chips to meet demand at the scale AI investors expect.

    Production capacity at TSMC is not enough.

    And what’s more, consumer expectations keep rising, with the wow factor having to go up with every release – which means ever more demand for hardware (not just Nvidia chips but everything else too: DDR5 memory, PSUs, rare earths, auxiliary components, and so on), a demand that simply isn’t realistic.

    Not only can this level of compute not be manufactured fast enough; it isn’t available at an economical price, and it has a shelf life of only a couple of years before it all needs replacing.

    Even the latest Nvidia Blackwell chips use too much energy, require too much water for cooling and generate too much heat. Add the needs of gamers and cryptocurrency on top – with just ONE company able to supply the required hardware, monopolising 90% of the market. And the chips are fabricated by just ONE supplier, in Taiwan.

    Sounds bad? It gets worse…

    1. Geopolitically, just one man – Xi Jinping – can end the whole industry overnight with an invasion of Taiwan and disruption of TSMC’s chipmaking facilities. One Chinese squadron outside just one factory door and it’s all over. Nvidia would prefer not to even think about it.
    2. Socio-politically, no first-world country can afford a universal basic income in the event that AI succeeds. Success for AI means unemployment figures in the region of 70 to 80% and the breakdown of law and order. This alone is enough to make the AI industry unsustainable, and investors have simply buried their heads in the sand about it because, as usual with bankers and CEOs, they don’t care about society at all. It isn’t even a blip on the radar. If AI succeeds in replacing most workers, the tech employees who still have jobs will most likely work for AI companies, but none of them will be safe going to work amid the rioting, with them and their leaders bearing the biggest targets on their backs. These crucial employees won’t even be able to access basic public services or supermarket food, because society will have broken down. Investors simply refuse to contemplate this as a possible outcome!
    3. No profit… The AI companies are unprofitable and always have been. Billions of dollars are simply being circulated between a few key players for an ever greater share of equity, and the resulting deals are hyping up the share prices of those involved. The most recent example was Disney’s $1bn investment in OpenAI in return for equity and some tools, under the guise of a three-year IP licensing deal. At the first share-price wobble or piece of bad news, this kind of equity will be worthless and the entire investment scheme will come toppling down – and that scheme is the only thing enabling such massive loss-making at the moment.
    4. Because AI companies lack any profit of their own to reinvest in scaling their businesses for demand, it is all being propped up by creditors – and the creditors could all take fright at any moment.
    5. The energy cost of the compute is immense, pushing up the cost of fuel for normal businesses and ordinary households on top of a cost-of-living crisis, with climate change spiralling out of control. This isn’t an ethical business.
    6. Artificial General Intelligence (AGI) is purely hypothetical. Anywhere an employee requires the use of their hands or their physical presence, an AGI is caught lacking, and a robot is too problematic: too easily damaged, too easily a target for violence or theft, not reliable enough long term, and not even dextrous enough to learn how to use a photocopier, let alone do more advanced office work. It is also incredibly expensive to “hire” compared to a human being, and it does not even have consciousness or true awareness, so it lacks the kind of true understanding and human emotion necessary to inform opinion and action. Would you want to hire a brain in a jar?
    7. The “intelligence” aspect of AI as we know it today is only computational, but human intelligence is not. What ChatGPT does is a parallel computing task over a large database of pre-existing data. It can only synthesise text and images, and not always correctly. To call this intelligence is misleading: it’s really just a clever data processor and a Python script. If the name were accurate it would be TG – Text Generator. For images and videos it is again just an advanced synthesiser with fancy outputs, or at best a simulator-on-a-chip capable of visually representing aspects of our world for a few seconds, but it has no true understanding of that world, no true creativity and no true memory.
    8. In the creative industries and social media, people want to engage with content that’s made by real people. Be it normal folk or a famous name film director, the person behind the art is as important as the work itself. When a faceless simulation machine is behind the work, the work automatically devalues. This may work for a Cola advert, but I’m not sure it will for the next Oppenheimer.
    9. What is being created at the moment are short 5–10 second vignettes of imaginary worlds and subjects – eye-candy, in other words, and in most cases outright slop with no practical use in the real world and therefore very little real-world value. The technology has been out for a while and there are still no mainstream, recognised and respected AI artists, which has to tell you something.
    10. The AI bubble is absolutely real and on course to create the kind of economic problems that would make the 1920s blush. Although the fundamental neural-network concept and world-simulation technology being created may have future potential – and is akin to a kind of magic when it works – the hype and misplaced optimism from investors is a foolish use of funds, and soon they will find out what that means.

    I am certainly not a total pessimist when it comes to AI though – I really do think it can be a big future industry, I think it can benefit science and even creative work. The problem is, it has not been shown to benefit any of these things as much as it has benefited the pockets of tech industry oligarchs.

    When is it going to show its real value to the world – a magnum opus, or an improvement in quality of life and work for normal people? So far, ChatGPT is a useful data processor and a way to cheat at homework. The image and video generators feel like fancy toys at the moment.

    But if I were an investor I’d be selling all my shares in January 2026 and getting out of the bubble.

    Comment on the EOSHD Forum

    © 2025 Andrew Reid / EOSHD
