Tyler Perry Puts $800M Studio Expansion On Hold After Seeing OpenAI’s Sora: “Jobs Are Going to Be Lost”
(www.hollywoodreporter.com)
Yep. I watched their demo clips, and the "good" ones are full of errors, have lots of thematically incoherent content, and - this is the biggie - can't be fixed.
Say you're a 3D animator and build an animation with thousands of different assets and individual, alterable elements. Your editor comes to you and says, "This furry guy over here is looking in the wrong direction, he should be looking at the kangaroo king over there, but it looks like he's just glaring at his own hand."
So you just fix it. You go in, tweak the furry guy's animation, and now he's looking in the right direction.
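To make that concrete: in a 3D tool like Blender, "looking in the right direction" can literally be one constraint added via the Python API. A minimal sketch (the object names are made up for this example):

```python
import bpy

# Hypothetical fix: make the furry guy track the kangaroo king with a
# Track To constraint instead of re-keying the animation by hand.
# Object names are invented for illustration.
furry_guy = bpy.data.objects["FurryGuy"]
kangaroo_king = bpy.data.objects["KangarooKing"]

look_at = furry_guy.constraints.new(type='TRACK_TO')
look_at.target = kangaroo_king
look_at.track_axis = 'TRACK_NEGATIVE_Z'  # the axis aimed at the target
look_at.up_axis = 'UP_Y'                 # keep the character upright
```

One constraint, and every frame downstream updates automatically, because the scene is made of addressable parts.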
Now say you made that animation with Sora. You have no manipulable assets, just a set of generated frames in which the furry guy looks in the wrong direction.
So you fire up Sora and try to fine-tune its instructions, and it generates a completely new animation that shares none of the elements of the previous one, and has all sorts of new, similarly unfixable errors.
If I use an AI assistant while coding, I can correct its coding errors. But you can't just "correct" frames of video it has created. If you try, you're looking at painstakingly hand-painting every frame where there's an error. You'll spend more time trying to fix an AI-generated animation that's 90% good and 10% wrong than you will just doing the animation with 3D assets from scratch.
"Sora, regenerate $Scene153 with $Character looking at $OtherCharacter. Same Style."
Or "Sora, regenerate $Scene153 from time mark X to time mark Y with $Character looking at $OtherCharcter. Same Style".
It's a new paradigm: you won't work with frames anymore, you'll work with scenes, and when the tools get a bit smarter you'll be working with scene layers.
"Sora, regenerate $Scene153 with $Character in Layer1 looking at $OtherCharacter in Layer2. Same Style, both layers."
I give it 36 months or less before that's the norm.
I agree. I don't think people realise how early in this tech's development we are. There are going to be huge leaps over the next few years.
This seems like a fundamental misunderstanding of how generative AI works. To accomplish what you're describing, the whole system would need to be able to rewind to specific trouble spots, correct them, and still generate everything that comes after unchanged. We're talking orders of magnitude more complexity and difficulty.
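The "unchanged" part is the killer. You can see why with today's image diffusion models: hold the seed fixed, change one word of the prompt, and the whole image shifts, because there is no persistent asset anchoring anything. A rough sketch with Hugging Face diffusers (the checkpoint is just one public example):

```python
import torch
from diffusers import StableDiffusionPipeline

# One public checkpoint, used here only to illustrate the point.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

def generate(prompt: str):
    # Same seed each call, so the initial noise is bit-identical...
    gen = torch.Generator("cuda").manual_seed(42)
    return pipe(prompt, generator=gen).images[0]

before = generate("a furry creature glaring at his own hand")
after = generate("a furry creature gazing at a kangaroo king")
# ...yet `after` is not `before` with the gaze corrected: the changed
# conditioning perturbs every denoising step, so everything drifts.
```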
And in the meantime, artists creating 3D assets the regular way would suddenly look a lot less expensive and a lot less difficult.
If all you have is a hammer, everything looks like a nail. Right now, generative AI is everyone's really attractive hammer. But I don't see it working here in 36 months. Or 48. Or even 60.
The first 90% is easy. The last 10% is really fucking hard.
Or just "take the frame and replace the head with the same face pointed a different way".
I'd imagine eventually we're gonna get something like inpainting, but for video.
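For stills, that already exists: mask the broken region and regenerate only that, leaving the rest of the image essentially untouched. A minimal sketch with diffusers (the checkpoint and file names are just examples); the unsolved part is doing this temporally consistently across hundreds of frames:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# One public inpainting checkpoint; any compatible one would do.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0153.png")   # hypothetical broken frame
mask = Image.open("head_mask.png")     # white = region to regenerate

fixed = pipe(
    prompt="the creature's head turned toward the kangaroo king",
    image=frame,
    mask_image=mask,
).images[0]
# Only the masked region is redrawn; the rest of the frame stays
# (effectively) as it was.
```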