this post was submitted on 30 Jan 2025
100 points (100.0% liked)
you are viewing a single comment's thread
Modern compilers are very good, but I'd argue you're overestimating the gains. Unless you're using some very strict typing and compiler options, you're still missing out on enormous efficiency even with the most modern compilers, orders of magnitude in some cases. Efficiency that, with proper coding, you'd get in assembly. And in '98, you absolutely could program GPU hardware directly in assembly.
I'm not saying it'd be anywhere near easy, but I absolutely contend it'd be possible.
In my experience, writing straightforward C code is gonna give you basically the same thing as well-written asm would. I'm not saying modern compilers are perfect, but they generally do the right thing (or it's very easy to coax them into doing the right thing). They usually vectorize properly, they know how to avoid obvious pipeline stalls, etc. These days, order-of-magnitude performance improvements generally don't come from hand-written assembly but from using better algorithms and data structures.
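To make the "straightforward C" point concrete, here's a minimal sketch (the function, the restrict qualifiers, and the build flags are my own illustration, not something from the thread): a plain loop that GCC and Clang will typically auto-vectorize at -O3, with no hand-written assembly involved.

```c
#include <stddef.h>

/* Plain C of the kind a modern compiler handles well.  Built with
 * something like `gcc -O3 -march=native`, GCC and Clang will usually
 * emit SIMD instructions for this loop on their own.  The restrict
 * qualifiers promise the arrays don't alias, which removes the main
 * obstacle to auto-vectorization. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The compiler gets you the SIMD code path for free here; the remaining order-of-magnitude wins have to come from better algorithms and data layout.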
Also, you could look at what games of that era actually looked like. Something like Quake 2 already pushed contemporary hardware to its limits, and I wouldn't argue it's a poorly optimized game.
I was alive and played games during that era, so I am aware! I'm very personally aware of how much work we squeezed out of a tiny slow processor and a few KB.
And though we probably wouldn't describe those games as 'poorly optimised', there's still absolutely huge room for optimisation. As you say, compilers have gotten a lot better, and we have better algorithms and data structures. In Quake especially, they basically did get order-of-magnitude gains by writing some parts in assembly. And they'll undoubtedly have written most of those parts very imperfectly.
I may be skirting the bounds of reality more than you by talking about what's 'theoretically possible' compared to what's possible with today's knowledge and practices. We agree there'd be some graphical compromise; maybe we're just disagreeing over the scale of that compromise and the hardware that would be 'permittable' in this scenario.
I was imagining something that looks and plays almost exactly like Witcher 3, except at a lower resolution, which doesn't seem possible to me. I'd say if you simplify the graphics (replace dynamic lighting with pre-baked lightmaps, do Gouraud shading instead of PBR, remove most post-processing effects, etc.) and make all models look like Half-Life assets, you could definitely render something at an acceptable frame rate. However, asset streaming would still be a huge issue for a large and detailed open world like Witcher 3, considering memory limits and slow disk reads.
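For a rough idea of what "Gouraud shading instead of PBR" buys you, here's a hypothetical sketch (the names and struct are mine, not from any particular engine): lighting becomes a single Lambert dot product evaluated once per vertex and then interpolated across the triangle by the rasterizer, instead of a full BRDF evaluated per pixel.

```c
#include <math.h>

/* Hypothetical per-vertex lighting of the kind Gouraud shading implies:
 * one diffuse dot product per vertex, with the result interpolated
 * across the triangle by the rasterizer, rather than a per-pixel
 * PBR BRDF. */
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Intensity at one vertex for a single directional light; `normal` and
 * `light_dir` are assumed to already be unit length. */
float gouraud_vertex_intensity(vec3 normal, vec3 light_dir, float ambient)
{
    float diffuse = fmaxf(0.0f, dot3(normal, light_dir));
    return fminf(1.0f, ambient + diffuse);
}
```

On the streaming point, the arithmetic is rough but telling: with, say, 64 MB of RAM and a hard disk sustaining a few MB/s (plausible for a late-90s machine, though the exact numbers are my assumption), pulling in even a 20 MB chunk of open-world geometry and textures takes several seconds, so the whole world design would have to revolve around what fits in memory at once.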