this post was submitted on 29 Oct 2023
108 points (87.5% liked)

No Stupid Questions


I've been re-watching Star Trek: Voyager recently, and I've heard that when it was filmed, they didn't clear the wider frame of filming equipment, so it's not as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are released in more up-to-date formats?

And if so, do you think AI could also upscale it to 4K? Theoretically you could then take an SD 4:3 program and make it 4K 16:9.

I'd imagine it would be easier for the early episodes of Futurama, for example, since it's a cartoon and therefore less detailed.
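
For a rough sense of scale, here is the arithmetic on how much of a 16:9 4K frame would have to be invented from a 4:3 source if the full picture height is kept (the resolutions here are assumptions, not tied to any particular master):

```python
# Back-of-the-envelope: how much of a 16:9 4K frame has no 4:3 source at all?
dst_w, dst_h = 3840, 2160        # 4K UHD target, 16:9

real_w = dst_h * 4 // 3          # the 4:3 picture scaled to full height -> 2880 px wide
invented_w = dst_w - real_w      # 960 px, split between the left and right edges
print(f"{invented_w} of {dst_w} columns ({invented_w / dst_w:.0%}) would be pure invention")
```

So roughly a quarter of every widened frame would be generated rather than restored.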

top 50 comments
[–] CaptainBlagbird@lemmy.world 71 points 1 year ago* (last edited 1 year ago) (11 children)

I think it would be possible. But adding previously unseen stuff would be changing/redirecting the movie/show.

Each scene is set up and framed deliberately by the director; should AI just change that? It's a similar problem to pan-and-scan, where content was removed to fit 4:3.

You wouldn't want to add content to the left and right of the Mona Lisa, would you? And if so, what? Continue the landscape, which just adds more uninteresting parts? Now she is in a vast space, and you've already changed the tone of the painting. Or would you add other people? That removes the focus from her, which is even worse. And this is just a single-frame example; there are even more problems with moving pictures.

It would be an interesting experiment, but imo it wouldn't improve the quality of the medium; on the contrary.

[–] JohnDClay@sh.itjust.works 12 points 1 year ago* (last edited 1 year ago) (3 children)

I hate to break it to you...

Mona Lisa landscape

And a manual one

[–] catsup@lemmy.one 17 points 1 year ago

I hate to break it to you...

You just proved his point lol

[–] CeruleanRuin@lemmings.world 10 points 1 year ago

Yeah that sucks.

[–] Pechente@feddit.de 10 points 1 year ago (1 children)

But adding previously unseen stuff would be changing/redirecting the movie/show.

You could see this with The Wire 16:9 remaster. They rescanned the original negatives, which were shot in 16:9 but framed and cropped for 4:3. As a result, the framing felt a bit off and the whole thing felt a bit awkward and amateurish.

[–] Crul@lemm.ee 32 points 1 year ago* (last edited 1 year ago) (3 children)
[–] Nerd02@lemmy.basedcount.com 13 points 1 year ago (1 children)

Holy cow that is beyond impressive. Sure enough, sometimes it does hallucinate a bit, but it's already quite wild. Can't help but wonder where we'll be in the next 5-10 years.

[–] Tar_alcaran@sh.itjust.works 12 points 1 year ago

Eh, doing this on cherrypicked stationary scenes and then cherrypicking the results isn't that impressive. I'll be REALLY impressed when AI can extrapolate someone walking into frame.

[–] nul@programming.dev 6 points 1 year ago* (last edited 1 year ago) (2 children)

The video seems a bit misleading in this context. It looks fine for what it is, but I don't think they have accomplished what OP is describing. They've cherrypicked some still shots, used AI to add to the top and bottom of individual frames, and then given each shot a slight zoom to create the illusion of motion.

I don't think the person who made the content was trying to be disingenuous; I'm just pointing out that we're still a long way from convincingly filling in missing data like this for videos, where the AI has to understand things like camera moves and object permanence. Still cool, though.

[–] Crul@lemm.ee 4 points 1 year ago

Great points. I agree.

A properly working implementation for the general case is still far off, and it would be much more complex than this experiment. Not only will it need the usual frame-to-frame temporal coherence, but it will probably need to take into account info from potentially any frame in the whole video in order to be consistent with different camera angles of the same place.

[–] Zeus@reddthat.com 4 points 1 year ago (1 children)

just fyi, your link is broken for me

i wonder if it's a new url scheme, as i've never seen duplicates in a reddit url before, and if i switch it out for comments it works fine

[–] Crul@lemm.ee 4 points 1 year ago (3 children)

Thanks! Fixed

i wonder if it’s a new url scheme, as i’ve never seen duplicates in a reddit url before

I think you're right. It should work with the old frontend (which I have configured as the default when I'm logged in):

https://old.reddit.com/r/StableDiffusion/duplicates/14xojmf/using_ai_to_fill_the_scenes_vertically/

[–] drdiddlybadger@pawb.social 27 points 1 year ago (5 children)

You should be able to, but remember that aspect ratios and framing are chosen intentionally, so whatever is generated won't be at all true to what should be in the scene once the frame is widened. You'd be watching derivative media. Upscaling should be perfectly doable, but details will eventually be generated that never existed in the original scenes either.

Probably would be fun eventually to try the conversion and see what differences you get.

[–] Deestan@lemmy.world 40 points 1 year ago (2 children)

4:3 - Jumpscare, gremlin jumps in from off-camera.

16:9 AI upsized - Gremlin hangs out awkwardly to the left of the characters for half a minute, then jumps in.

[–] averagedrunk@lemmy.ml 10 points 1 year ago (2 children)

I would watch the hell out of that movie.

[–] setsneedtofeed@lemmy.world 3 points 1 year ago (1 children)

I was just thinking that. Or something like a comedy bit where the camera pans to a character who had just been out of frame.

Overall it seems like impressive technology for reworking old media, but I'd rather put it to use in tastefully sharpening image quality rather than reframing images.

[–] Ghost33313@kbin.social 7 points 1 year ago

Exactly, and to add to it, you can't know the director's vision or opinion on how the framing should be adjusted. AI can make images easily but it won't understand subtext and context that was intended. No time soon at least.

[–] CaptainBlagbird@lemmy.world 4 points 1 year ago

This.

Surprise! If you want to go from 4:3 to wide screen, you still have exactly the same problem as when using pan&scan for going from wide to 4:3.

[–] FelipeFelop@discuss.online 4 points 1 year ago (2 children)

Very true, I remember a few years ago someone converting old cartoons to a consistent 60 frames a second.

If they’d asked an animator they’d have found out that animation purposely uses different rates of change to give a different feel to scenes. So the improvement actually ruined what they were trying to improve.

[–] setsneedtofeed@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Yes, sometimes frame rates are intentional choices for artistic reasons and sometimes they are economic choices that animators work around.

Old Looney Tunes used a lot of smear frames in order to speed up production. They were 24 frames per second broadcast on doubles, which meant 12 drawn frames per second with each frame being shown twice. The smear frames gave the impression of faster movement. Enhancing the frame rate on those would almost certainly make them look weird.

If you want to see an artistic choice, the first Spiderverse movie is an easy example. It's on doubles (or approximates being on doubles in CG) for most scenes to give them a sort of almost stop-motion look, and then goes into singles for action scenes to make them feel smoother.
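
As a tiny illustration of what "on doubles" means (the letters stand in for drawings; this is just a sketch of the frame pattern, not any real pipeline):

```python
# "On doubles": 24 broadcast frames per second built from 12 unique drawings,
# each drawing held for two frames.
drawings = list("ABCDEFGHIJKL")                       # 12 drawings = one second of animation
broadcast = [d for d in drawings for _ in range(2)]   # 24 broadcast frames
print(len(broadcast), broadcast[:6])                  # 24 ['A', 'A', 'B', 'B', 'C', 'C']

# A naive 60 fps interpolator smooths between every pair of frames, erasing the
# held poses (and the snap of smear frames) that the animators timed deliberately.
```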

[–] NikkiNikkiNikki@kbin.social 3 points 1 year ago

There's a video from the YouTuber Noodle who does a good job explaining it

[–] setsneedtofeed@lemmy.world 22 points 1 year ago (4 children)

Who is the person that enjoys old shows, but also can’t get past the old aspect ratio?

If the AI is just adding complementary, unobtrusive parts to the shot, so as not to disrupt the original intent, I have to ask: is there really value being added? Why do this at all?

George Lucas thought CGI could make the original Star Wars movies better.

[–] xia@lemmy.sdf.org 21 points 1 year ago (1 children)

Ahhh... the much-fabled "uncrop" operation!

[–] echodot@feddit.uk 4 points 1 year ago* (last edited 1 year ago)

Almost as good as Enhance

[–] CeruleanRuin@lemmings.world 19 points 1 year ago* (last edited 1 year ago) (3 children)

Why would you want that? It's always best to consume media in its purest form, and that means with its original aspect ratio. Resolution is something I'm flexible on, because I figure that filmmakers and TV directors in prior eras would have used HD if it had been available, but aspect ratio is tied to the format and can be used to great effect to convey the space of a scene in different ways. Changing the ratio is akin to changing the color palette. Might as well offer Instagram-style filters for older content while you're at it.

[–] Globulart@lemmy.world 7 points 1 year ago (1 children)

Exactly. The filmmaker knew exactly what the aspect ratio was and framed shots specifically for it; why would anyone ever want this...?

[–] Squirrel@thelemmy.club 4 points 1 year ago

Ooo, maybe we can get a nice blurred copy of the picture to fill the edges of the screen, just like TikTok!

I feel sick even jokingly suggesting that...

[–] TheInsane42@lemmy.world 18 points 1 year ago (1 children)

Back in the previous century I bought a 16:9 TV that had software to stretch 4:3 broadcasts to fit the screen. It chopped off a tad at the top and bottom and stretched the sides a tad to fill the screen... it was horrible. I'd rather have dark borders at the sides than mutilated images. Somehow I doubt AI would be much more creative.

[–] Pyroglyph@lemmy.world 11 points 1 year ago* (last edited 1 year ago) (3 children)

I remember watching a great video about why this isn't a good idea.

Here it is.

[–] echodot@feddit.uk 8 points 1 year ago

There is no original film for Voyager; it was shot on tape.

TNG used film, so that can be rescanned, but the original analogue broadcast is literally the best quality we have of Voyager.

[–] SHITPOSTING_ACCOUNT@feddit.de 7 points 1 year ago (1 children)

Adding imagery that reliably looks good is currently beyond what AI can do, but it's likely going to become possible eventually. It's fiction, so the AI making stuff up isn't a problem.

Upscaling is already something AI can do extremely well (again, if you're ok with hallucinations).

[–] MeatsOfRage@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not sure it's really beyond the scope of AI. Stuff like Stable Diffusion inpainting/outpainting and some of the stuff Adobe was showing off at their recent keynote shows we're already there.
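
For anyone curious what that looks like in practice, here's a minimal single-frame outpainting sketch using the open-source diffusers inpainting pipeline (not Adobe's tool); the checkpoint name, file names, sizes, and prompt are all placeholders, and frames treated one at a time like this won't be temporally consistent:

```python
# Minimal single-frame outpainting sketch with Hugging Face diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame_4x3.png").convert("RGB")    # hypothetical 4:3 frame

# Build a 16:9 canvas at the same height and centre the original frame on it.
w, h = frame.size
wide_w = round(h * 16 / 9)
canvas = Image.new("RGB", (wide_w, h))
canvas.paste(frame, ((wide_w - w) // 2, 0))

# Mask: white where the model must invent pixels, black over the real picture.
mask = Image.new("L", (wide_w, h), 255)
mask.paste(0, ((wide_w - w) // 2, 0, (wide_w - w) // 2 + w, h))

result = pipe(
    prompt="starship interior, 1990s TV still",       # made-up prompt
    image=canvas.resize((1024, 576)),                  # pipeline wants dimensions divisible by 8
    mask_image=mask.resize((1024, 576)),
).images[0]
result.save("frame_16x9.png")
```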

[–] Tar_alcaran@sh.itjust.works 5 points 1 year ago

Those are on a completely different level from having someone walk into frame though, and they still only work on small things that can be extrapolated from the image.

[–] Potatos_are_not_friends@lemmy.world 6 points 1 year ago (1 children)

I used Adobe's Generative Fill tool (which uses AI to fill in the blanks, like adding more sky/backgrounds/hiding people) and it's pretty hit-or-miss. 20% of the time it would kinda work. And I say that loosely.

I think in a few years, we'll get there. But not today.

[–] PopOfAfrica@lemmy.world 6 points 1 year ago

Yeah, but why would you want to? It would have to generate new imagery to fill out the gaps, and that's bound to not look right. At the very least it would not fit the artist's intention.

[–] Stamets@startrek.website 5 points 1 year ago

I've been re-watching Star Trek: Voyager recently

Good choice.

I'd imagine that AI probably will be able to in 10-15 years. We already have that Photoshop stuff that can analyze the surroundings and then fill in gaps/erase stuff. It's not perfect, but it's the ground floor. I can only imagine that in the not-too-distant future it'll be able to fill in the gaps of video too. Especially with a consistent set like Voyager's main engineering, for example.

Actually come to think of it, the same principle could be used to make VR environments too.

[–] IAmDotorg@lemmy.world 5 points 1 year ago (1 children)

Voyager and DS9 were shot on video, not film.

That's why there are HD versions of TOS and TNG but not DS9 and Voyager.

[–] JTheDoc@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (3 children)

They were shot on 35mm negative film and edited on tape in that aspect ratio.

Only the odd scene or shot was captured on tape; it was mostly film. They wouldn't need to upscale any live-action scenes, but they would have to work on re-rendering all the digital effects and blue-screen shots, for example. As it was only ever edited on tape, it's unlikely the digital effects could ever be re-rendered or upscaled; they would probably need to be entirely reworked from scratch.

Very daunting task. Although, they did it for TNG. ¯\_(ツ)_/¯

Edit: Someone pointed out the obvious in a reply. Yes, of course the negatives would need to be rescanned at a higher resolution. I already pointed out that it's on film, and even mentioned why it's not just that simple (hence the digital effects needing to be redone).

[–] echo64@lemmy.world 6 points 1 year ago (8 children)

It's worth noting that "edited on tape" also means that to make HD versions, you have to re-scan all the camera negatives (if they even still exist), then re-edit every scene to match the tape edit exactly. Plus, all the colour timing needs to be redone.

It's a huge amount of work, and the TNG remaster wasn't profitable because of it. It's just too much.

[–] zwaetschgeraeuber@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

It's already possible, but not really good at the moment. I saw some scenes where they changed the aspect ratio from 16:9 to TikTok's vertical format. I'm waiting for RealFill, which Google is working on. I don't know how it works, but it can guess what should be outside of the image, and the images I saw were pretty amazing compared to Stable Diffusion. 4K upscaling at 120 Hz is also no problem; I did this to the first season of SpongeBob in one night and it works fine with my hobby equipment. I'd really love to see an upscaled 16:9 version of The IT Crowd and the first six seasons of Scrubs.

edit: this is what i was referring to https://youtu.be/bD_HyxHMHPo?si=-ktJz3GuBNHhn9UY
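
For reference, the hobbyist workflow described above usually boils down to "split into frames, upscale each one, reassemble". A rough sketch follows; the file names and frame rate are made up, and the plain Lanczos resize stands in for whatever AI upscaler you actually use (Real-ESRGAN or similar):

```python
# Split -> upscale -> reassemble sketch using ffmpeg and Pillow.
import subprocess
from pathlib import Path
from PIL import Image

SRC, FPS = "episode.mkv", 24                 # hypothetical input and frame rate

Path("frames").mkdir(exist_ok=True)
Path("frames_4k").mkdir(exist_ok=True)

# 1. Dump every frame as a PNG.
subprocess.run(["ffmpeg", "-i", SRC, "frames/%06d.png"], check=True)

# 2. Upscale frame by frame (swap this loop body for an AI upscaler).
for f in sorted(Path("frames").glob("*.png")):
    img = Image.open(f)
    img.resize((img.width * 4, img.height * 4), Image.LANCZOS).save(f"frames_4k/{f.name}")

# 3. Reassemble the frames and copy the original audio track.
subprocess.run([
    "ffmpeg", "-framerate", str(FPS), "-i", "frames_4k/%06d.png",
    "-i", SRC, "-map", "0:v", "-map", "1:a",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-c:a", "copy",
    "upscaled_4k.mkv",
], check=True)
```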

[–] thirstyhyena@lemmy.world 5 points 1 year ago (1 children)

I was wondering the same last week, but for the Buffy the Vampire Slayer TV series, which received a horrible HD release some years back.

[–] tankplanker@lemmy.world 6 points 1 year ago

For Buffy they recut the shots, often using the raw footage, and they did so very cheaply, so film equipment was often visible. They also didn't address how bad the make-up looked in HD, but then soft-focus face filters are also garbage.

When The Simpsons tried it, they cropped the frame, which is just laughably bad, as it removed information and context from scenes.

When The Wire was done, they got David Simon back to work on the conversion; he considers it a completely different cut of the show. I think this is the only way to do it, as it means reframing the shots, and that's a decision for the director, editor, and DP, IMO.

AI making shit up to add to the frame removes the context of the shot. Nothing wrong with black bars for me; I just want good colour balance and upscaling.
