this post was submitted on 26 Mar 2024
210 points (91.7% liked)

Technology

all 46 comments
[–] BetaDoggo_@lemmy.world 79 points 8 months ago (2 children)

Partnered with Adobe research so we're never going to get the actual model.

[–] LazaroFilm@lemmy.world 37 points 8 months ago (1 children)

And if we do, it will be bundled in an AI suite with some license-checking software that crawls your whole computer to a crawl for no reason.

[–] CrayonRosary@lemmy.world 11 points 8 months ago (1 children)

Crawls to a crawl is a very common phrase, I don't know why people are saying it's not.

[–] HeChomk@lemmy.world 1 points 7 months ago (1 children)

Because it isn't? Slows to a crawl is correct. "Crawls to a crawl" means nothing.

[–] CrayonRosary@lemmy.world 1 points 7 months ago

I was being sarcastic. No one else had said anything about it. 😄

[–] Muffi@programming.dev 33 points 8 months ago

Adobe is the fucking worst. That immediately killed my hype.

[–] snooggums@midwest.social 38 points 8 months ago (1 children)

More mediocre images for everyone!

[–] BrianTheeBiscuiteer@lemmy.world 20 points 8 months ago (4 children)

While I think the realism of some models is fantastic and the flexibility of others is great, it's starting to feel like we're reaching a plateau on quality. Most of the white papers I've seen posted lately are about speed, or about some alternate way of doing what ControlNet or inpainting can already do.

[–] Fubarberry@sopuli.xyz 12 points 8 months ago

Have you seen the SD3 preview images? They're looking seriously impressive.

[–] bjoern_tantau@swg-empire.de 9 points 8 months ago (1 children)

Well, when it's fast enough you can do it in real time. How about making old games look like they looked to you as a child?

[–] UlrikHD@programming.dev 2 points 8 months ago* (last edited 8 months ago) (1 children)

There's way more to a game's look than textures though. Arguably ray tracing will have a greater impact than textures. Not to mention, for retro games, you could just generate the textures beforehand, no need to do it in real time.

[–] bjoern_tantau@swg-empire.de 2 points 8 months ago (1 children)

I meant putting the whole image through AI. Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.

[–] webghost0101@sopuli.xyz 2 points 8 months ago (1 children)

There is no single “whole” image when talking about a video game. It’s a combination of dynamic layers carefully interacting with each other.

You can take any individual texture and make it look different/more realistic, and it may work for some interactions but might end up breaking the game. Especially if hitboxes depend on the texture.

We may see AI remakes of video games at some point, but it will require the AI to reprogram them from scratch.

Now, when we talk about movies and other linear media, I expect to see this technology relatively soon.

[–] bjoern_tantau@swg-empire.de 2 points 8 months ago (1 children)

There is no single “whole” image when talking about a video game. It’s a combination of dynamic layers carefully interacting with each other.

Of course there is. When everything is done, a whole image is sent to the display. That's how FSR 1 can work without explicit game support.
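For reference, a whole-frame post-process really does see only the final composited image. A toy stand-in (nearest-neighbour repetition instead of FSR 1's edge-adaptive upscale; only NumPy assumed):

```python
import numpy as np

def upscale_frame(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upscale of a composited RGB framebuffer (H, W, 3)."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# A tiny 2x2 "final frame" with distinct pixel values.
frame = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
big = upscale_frame(frame)
print(big.shape)  # (4, 4, 3)
```

The point is that the pass needs nothing from the engine: it takes pixels in and puts pixels out, which is exactly why driver-level injection works.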

[–] webghost0101@sopuli.xyz 1 points 8 months ago* (last edited 8 months ago) (1 children)

What I meant is that the final image is dynamic, so each player may have a unique configuration, which makes it harder for an AI to understand what's going on.

Using the final render of each frame would cause a lot of texture bleeding, for example when a red character stands in front of a red background. Or when jumping on top of an animal, you may get wild frames where the body shape drastically changes, or the character is suddenly realistically riding the animal, then petting it the next frame, then it dies on frame 3, all because every frame is processed as its own work.

Upscaling final renders is indeed possible, but mostly because it doesn't change the general shapes all that much. Small artifacts are also very common here, but they're often not noticeable to the human eye and don't affect a modern game.

In older games, especially Mario, where hitboxes are pixel dependent, you'd either have a very confusing game with tons of clipping because the game doesn't account for the new textures, or the game abides by the new textures, which affects the gameplay.
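To make the pixel-hitbox point concrete: many retro engines derive collisions directly from a sprite's opaque pixels, so changing the art changes the gameplay. A toy illustration with hypothetical masks (NumPy assumed):

```python
import numpy as np

# Boolean masks: True where the sprite has an opaque pixel.
mario = np.array([[0, 1, 0],
                  [1, 1, 1]], dtype=bool)

# The same sprite after an AI re-texture "fattens" the silhouette.
mario_hd = np.array([[1, 1, 1],
                     [1, 1, 1]], dtype=bool)

enemy = np.array([[1, 0, 0],
                  [0, 0, 0]], dtype=bool)

def collides(a: np.ndarray, b: np.ndarray) -> bool:
    """Pixel-perfect collision: any opaque pixel shared by both masks."""
    return bool(np.any(a & b))

print(collides(mario, enemy))     # False: the original art misses the enemy
print(collides(mario_hd, enemy))  # True: the new silhouette clips into it
```

Same engine, same logic, different art: the collision outcome flips, which is exactly the "new textures affect gameplay" problem.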

Source: I have studied game development and have recreated Mario-era games as part of assignments; currently I am self-studying the technical specifics of how machine learning and generative algorithms operate.

[–] froost@lemmy.world 3 points 8 months ago (3 children)

Those are valid points, but nothing there is insurmountable with even a little bit of advancement.

For example this is a relatively old example from 2021, before any of the dall-e2 and stable diffusion and video consistency models were out. https://www.youtube.com/watch?v=P1IcaBn3ej0

Is this perfect? No, there are artifacts, and the quality only matches the driving-footage training dataset, but this was an old, specialized example from a completely different (now archaic) architecture. The newer image generation models are much better, and frame-to-frame consistency models are getting better by the week; some of them are nearly there (obviously not in real time).

About the red-on-red bleed/background separation issues: for 3D-rendered games it's relatively straightforward to get not just the color but also depth and normal maps from the buffer (especially assuming this is done with the blessing of the game developers, game engines, DirectX, or other APIs). I don't know if you follow the developments, but using ControlNet with Stable Diffusion, for example, it is trivial to add color, depth, normal map, pose, line outline, or many other constraints on the created image. So if the character is wearing red over a red background, they are separated by the depth map, and the generated image will have the same depth, or their surface normals will differ. You can use whatever aspects of the input game you like as constraints on the generation.
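The buffer-to-constraint step can be sketched independently of any particular library. The function below is illustrative preprocessing only, not ControlNet's actual API; only NumPy is assumed:

```python
import numpy as np

def buffers_to_conditioning(depth: np.ndarray, normals: np.ndarray):
    """Remap raw engine G-buffers into the [0, 1] image maps that
    depth/normal-conditioned generators typically expect.
    Illustrative preprocessing only -- not any specific library's API."""
    # Depth: normalize to [0, 1] and invert so near objects are bright.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    depth_map = 1.0 - d
    # Normals: unit vectors in [-1, 1]^3 remapped to RGB in [0, 1].
    normal_map = (normals + 1.0) / 2.0
    return depth_map, normal_map

depth = np.array([[1.0, 5.0], [9.0, 9.0]])            # arbitrary view-space depths
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0  # all surfaces face the camera
d_map, n_map = buffers_to_conditioning(depth, normals)
```

Even if two objects share the same red color, they land at different values in `d_map`, which is what lets a depth-conditioned generator keep them separate.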

I am not saying we can do this right now; the generation speeds for high-quality images, plus all the other required tools in the workflow (from understanding the context and caption generation/latent-space matching, to extracting these color/depth/normal constraints, to enforcing temporal consistency using the previous N frames, to generating the final image, and doing it all fast enough) obviously involve a ton of challenges. But it is quite possible, and I fully expect to see working non-realtime demos within a year, or a couple of years at most.

--

In 2D games it may be harder due to pixelation. As you said, there are upscaling algorithms that work more or less well to increase the resolution slightly, though nothing photorealistic obviously. There are also methods like this one that use segmentation algorithms to split the images and generate new ones with AI generators: https://github.com/darvin/X.RetroGameAIRemaster

To be honest, to render 2D games in a different style you can do much better even now. Most retro games have had their sprites and backgrounds extracted from the original games already; you can just upscale them once (by the producer, or as fan edits), and then you don't even need to worry about real-time generation. I wanted to upscale Laura Bow 2 this way, for example. One random example I just found: https://www.youtube.com/watch?v=kBFMKroTuXE

Replacing the sprites/backgrounds won't make them truly photorealistic with dynamic drop shadows and lighting changes, but once the sprites are at a high enough resolution, you can feed them into full-frame re-generation frame by frame. Then again, I probably don't want Guybrush to look like Gabriel Knight 2 or other FMV games, so I'm not sure about that angle.

[–] webghost0101@sopuli.xyz 2 points 8 months ago

First of all thank you for the detailed reply.

+10 for “randomly” linking Latent Vision. That's the guy who made IPAdapter for Stable Diffusion, which is hands down revolutionary for my ComfyUI workflows.

I actually fully agree on all the 3D stuff; I remember that GTA video.

My comment was responding to the following idea, ahem:

“putting the whole image through AI. Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.” -bjoern_tantau

But on the topic of modern 3D, I expect we can go very far. Generate high-quality models of objects, then generate from those a low-poly version plus a consistent prompt to be used by the game engine's AI during development and live gaming, including ray-tracing matrices (not RTX, but something similar used for detection, which I admittedly coded exactly once to demonstrate for an exam and barely understand). What I'm trying to say is that some clever people will figure out how to calculate collisions and interactions using low-poly geometry plus AI.

I am very impressed by X.RetroGameAIRemaster, but I think it may also depend on the game.

In these older games, the consistency of the gameplay is core to their identity; they predate fancy graphics. Hitbox detection is pixel based, which is core gameplay and influences difficulty. Hardware limitations in a way also become part of the gameplay design.

You can upscale them and give many of them fancy textures, and maybe even layers of textures, modded items, and accessibility cheats.

But the premise is: “Not just the textures. Tell it how you want it to look and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.”

An AI can cook something up like that, but it will be a new, distinct Mario game if you change that much of what's happening on screen.

Anyway, I am tired and probably sound like a lunatic the longer I talk, so again, thanks for the good read.

[–] PipedLinkBot@feddit.rocks 1 points 8 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=P1IcaBn3ej0

https://www.piped.video/watch?v=kBFMKroTuXE

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] snooggums@midwest.social 7 points 8 months ago (1 children)

When the output of something is the average of the inputs it will naturally be mediocre. It will always look like the output of a committee by the nature of how it is formed.

Certain artists stand out because they are different from everyone else, and that is why they are celebrated. M.C. Escher has a certain style that when run through AI looks like a skilled high school student doing their best impression of M.C. Escher.

Now as a tool to inspire, AI is pretty good at creating mashups of multiple things really fast. Those could be used by an actual artist to create something engaging. Most AI reminds me of photoshop battles.

[–] webghost0101@sopuli.xyz 2 points 8 months ago

Who says the output is an average?

I agree that narrow models and LoRAs trained on a specific style can never be as good as the original, but I also think that is the lamest, most uncreative way to generate.

It's much more fun to use general-purpose models and to crack the settings to generate exactly what you want, the way you want it.

[–] agressivelyPassive@feddit.de 3 points 8 months ago (1 children)

That's maybe because we've reached the limits of what the current architecture of models can achieve on the current architecture of GPUs.

To create significantly better models without having a fundamentally new approach, you have to increase the model size. And if all accelerators accessible to you only offer, say, 24gb, you can't grow infinitely. At least not within a reasonable timeframe.
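As a back-of-envelope sketch of that 24 GB ceiling (the 20% reserved for activations and other runtime buffers is an assumption, and real deployments vary):

```python
def max_params(vram_gb: float, bytes_per_param: float, overhead: float = 0.2) -> float:
    """Rough ceiling on parameter count for inference: weights only,
    minus an assumed 20% for activations and other runtime buffers."""
    usable_bytes = vram_gb * 1e9 * (1 - overhead)
    return usable_bytes / bytes_per_param

for name, size in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{max_params(24, size) / 1e9:.1f}B parameters in 24 GB")
```

So at fp16 a 24 GB card tops out around the 10B-parameter range; quantization buys headroom, but not infinite growth.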

[–] Kbin_space_program@kbin.social -3 points 8 months ago* (last edited 8 months ago) (1 children)

Will increasing the model actually help? Right now we're dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.

Making a better way to process said model would be a much more substantive achievement. So that when particular details are needed it's not just random chance that it gets it right.

[–] agressivelyPassive@feddit.de 9 points 8 months ago (1 children)

That is literally a complete misinterpretation of how models work.

You don't "have the Internet as a model", you train a model using large amounts of data. That does not mean that the model contains any of the actual data. State-of-the-art models are somewhere in the billions of parameters. If you have, say, 50B parameters, each being a 64-bit/8-byte double (which is way, way too much accuracy), you get something like 400 GB of data. That's a lot, but the Internet is quite a bit larger than that.
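That arithmetic checks out as a quick sketch (50B parameters, weights only):

```python
params = 50e9  # the example model size from the comment above
for name, nbytes in [("float64", 8), ("float16", 2)]:
    gb = params * nbytes / 1e9
    print(f"{name}: {gb:.0f} GB")  # float64 gives the 400 GB figure
```

At the half-precision formats models actually ship in, the footprint shrinks further, which is the gap between "trained on the Internet" and "contains the Internet".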

[–] Kbin_space_program@kbin.social -5 points 8 months ago* (last edited 8 months ago) (1 children)

It's an exaggeration, but it's not far off, given that Google literally has all of the web parsed at least once a day.

Reddit just sold off AI harvesting rights on all of its content to Google.

The problem is no longer model size. The problem is interpretation.

You can ask almost everyone on earth a simple deterministic math problem and you'll get the right answer almost all of the time because they understand the principles behind it.

Until you can show deterministic understanding in AI, you have a glorified chat bot.

[–] agressivelyPassive@feddit.de 8 points 8 months ago (1 children)

It is far off. It's like saying you have the entire knowledge of all physics because you skimmed a textbook once.

Interpretation is also a problem that can be solved, current models do understand quite a lot of nuance, subtext and implicit context.

But you're moving the goal post here. We started at "don't get better, at a plateau" and now you're aiming for perfection.

[–] Kbin_space_program@kbin.social -3 points 8 months ago (1 children)

You're building beautiful straw men. They're lies, but great job.

I said originally that we need to improve the interpretation of the model by AI, not just have even bigger models that will invariably have the same flaw as they do now.

Deterministic reliability is the end goal of that.

[–] agressivelyPassive@feddit.de 3 points 8 months ago

Will increasing the model actually help? Right now we're dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.

Making a better way to process said model would be a much more substantive achievement. So that when particular details are needed it's not just random chance that it gets it right.

Where exactly did you write anything about interpretation? Getting "details right" by processing faster? I would hardly call that "interpretation"; that's just being wrong faster.

[–] Aopen@discuss.tchncs.de 27 points 8 months ago
[–] Fubarberry@sopuli.xyz 25 points 8 months ago

These kinds of performance improvements have really cool potential for real-time image/texture generation in games. I've already seen some games do this, but they usually rely on generating the images online.

ASCII and low-graphics roguelikes have a lot of generation freedom, since they can create very unique monsters/items/etc. However, a lot of this flexibility is lost as you move to more polished games that require models and art assets for everything. This is also one of the many reasons old-style games are still popular: they often offer more variety and randomization than newer titles. I think generated art assets could be a cool way to bridge the gap, though, and let more modern games have crazy unique monsters/items with visuals to match.
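The part-combination approach roguelikes use is cheap to sketch; the new piece would be handing each result to an image generator for a matching sprite. A toy version with hypothetical part lists (Python stdlib only):

```python
import random

# Hypothetical part lists; none of these names come from any real game.
PREFIXES = ["frost", "blood", "shadow", "iron"]
BODIES = ["wyrm", "golem", "imp", "basilisk"]
TRAITS = ["regenerating", "venomous", "phasing"]

def make_monster(rng: random.Random) -> dict:
    """Roll a unique monster; the prompt could feed an image generator."""
    name = f"{rng.choice(PREFIXES)} {rng.choice(BODIES)}"
    trait = rng.choice(TRAITS)
    return {"name": name, "trait": trait,
            "prompt": f"pixel-art sprite of a {trait} {name}"}

print(make_monster(random.Random(42)))
```

The text side has worked this way for decades; fast local image generation is what would finally give each rolled monster its own artwork instead of a shared tile.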

[–] Thann@lemmy.ml 24 points 8 months ago (1 children)

Pfft, I can do that, just run them on a computer that's 30 times faster!

[–] jas0n@lemmy.world 8 points 8 months ago

Found the software engineer.

[–] cheeselover@lemmy.world 18 points 8 months ago (1 children)

So can I run this without the cloud?

[–] Black616Angel@feddit.de 8 points 8 months ago* (last edited 8 months ago) (2 children)
[–] Archr@lemmy.world 7 points 8 months ago* (last edited 8 months ago) (1 children)

I see your InvokeAI and raise you Stability Matrix

Edit: I wanted to edit my comment to leave some context for people.

Stability Matrix is an app that handles installing many different stable diffusion applications. (no more messing with InvokeAI's janky install script).

It also integrates with CivitAI and HuggingFace to directly download models and LoRAs and share them between your applications, saving you lots of disk space.

[–] Black616Angel@feddit.de 5 points 8 months ago* (last edited 8 months ago)

... Thanks. This looks super useful.

Edit: After posting I realized that this sounds super sarcastic, which it wasn't. This does look useful, and I was already looking for something like that.

[–] Cyyy@lemmy.world 1 points 8 months ago

but that is just normal Stable Diffusion, not the method used or mentioned here. So it isn't even what this news is about :/

[–] drspod@lemmy.ml 16 points 8 months ago
[–] stoy@lemmy.zip 12 points 8 months ago (1 children)
[–] Steve@startrek.website 16 points 8 months ago (1 children)
[–] assassinatedbyCIA@lemmy.world 3 points 8 months ago

Gotta jack fast

[–] istanbullu@lemmy.ml 9 points 8 months ago (1 children)

This is nice, but the post ignores all the other research in this topic. SDXL Lightning can generate images in 2 steps.

[–] AnUnusualRelic@lemmy.world 3 points 8 months ago (1 children)

1st step: describe the picture.
2nd step: generate the picture.

[–] cornshark@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

1st step: draw some circles

2nd step: draw the rest of the fucking owl

[–] werefreeatlast@lemmy.world 3 points 8 months ago

Now they can make a movie!