this post was submitted on 29 Mar 2025
274 points (99.3% liked)

Technology

top 23 comments
[–] AbouBenAdhem@lemmy.world 49 points 5 days ago (3 children)

Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller [...] it then applies a weighting step, dividing higher-frequency spectral coefficients by the overall brightness (the DC component), allowing less important data to be compressed more aggressively.

This all sounds like standard jpeg compression. Is it just jpeg with extra channels?
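The DCT-and-discard idea the quoted article describes can be sketched in a few lines of NumPy (a toy illustration of the general technique, not the actual Spectral JPEG XL pipeline; the 8-sample block and the cutoff are made up):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis, the same transform family classic JPEG uses.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

block = np.linspace(0.0, 1.0, 8)        # a smooth 8-sample signal
D = dct_matrix(8)
coeffs = D @ block                      # frequency-domain coefficients
coeffs[4:] = 0.0                        # crudely drop the high frequencies
approx = D.T @ coeffs                   # inverse transform (D is orthogonal)
print(np.max(np.abs(approx - block)))   # small reconstruction error
```

Smooth data concentrates its energy in the low-frequency coefficients, which is why throwing away the high ones barely changes the signal.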

[–] Prok@lemmy.world 41 points 5 days ago (1 children)

Yeah, it compresses better too though, and JPEG XL can be configured to compress losslessly, which I imagine would also work here

[–] dohpaz42@lemmy.world 7 points 5 days ago (3 children)

Lossless JPEG would be amazing.

[–] zerofk@lemm.ee 8 points 5 days ago

JPEG 2000 supports lossless mode.

[–] Cocodapuf@lemmy.world 3 points 5 days ago (1 children)

In my experience, as you increase the quality level of a JPEG, the compression ratio drops significantly, much more than with some other formats, notably PNG. I'd be curious to see comparisons with PNG and GIF. I wouldn't be surprised if the new JPEG compresses better at some resolutions but not all, or only with some kinds of images.
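A quick way to see the quality-vs-size curve for yourself (this assumes Pillow is installed, and the synthetic gradient pattern is just a stand-in for a real photo):

```python
import io

import numpy as np
from PIL import Image

# Encode the same image at several JPEG quality levels and compare sizes.
arr = (np.outer(np.arange(64), np.arange(64)) % 256).astype(np.uint8)
img = Image.fromarray(arr, mode="L")

for q in (50, 75, 95):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    print(f"quality={q}: {buf.tell()} bytes")
```

File size tends to climb steeply near the top of the quality range, which matches the observation above.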

[–] rice@lemmy.org 7 points 5 days ago

JPEG XL has been in development for like 15 years, growing out of FLIF; there are tons of comparisons all over, even live ones on YouTube

[–] uis@lemm.ee 1 points 5 days ago

There has been Lossless JPEG since 1993.

[–] wischi@programming.dev 9 points 5 days ago* (last edited 5 days ago) (1 children)

It's not just JPEG with extra channels. It's technically far superior: it supports lossless compression, and the way decompression works would make thumbnails obsolete. It can even recompress already existing JPEGs to be even smaller without additional generation loss. It's hard to describe what a major step this format would be without getting very technical. A lot of operating systems and software already support it, but the Google Chrome team is practically preventing widespread adoption because of company politics.

https://issues.chromium.org/issues/40168998

[–] uis@lemm.ee 2 points 5 days ago (2 children)

Both og JPEG and JXL support lossless compression.

[–] wischi@programming.dev 4 points 4 days ago* (last edited 4 days ago) (1 children)

JPEG does not support lossless compression. There was an extension to the standard in 1993, but most de/encoders don't implement it and it never took off. With JPEG XL you get more bang for your buck: the same visual quality in a smaller file. There would be no more need for thumbnails because of improved progressive decoding.

https://youtu.be/UphN1_7nP8U

[–] uis@lemm.ee 1 points 4 days ago

Then same can be said about JPEG LS and JPEG XL. Most browsers don't implement that.

[–] zerofk@lemm.ee 5 points 5 days ago (1 children)

Kind of, but JPEG converts image data to its own internal 3-channel colour space (YCbCr) before applying the DCT. It is not compressing the R, G and B channels of most images. So multichannel compression is not just compressing each channel separately.

[–] AbouBenAdhem@lemmy.world 1 points 5 days ago

Yeah, JPEG converts to YCbCr (a Lab-like lightness/colour split). But the dimensions are the same: one channel for lightness, and then a number of channels equal to one less than the total number of sampled bands to capture the rest of the colour space.

[–] zerofk@lemm.ee 30 points 5 days ago (2 children)

“And while Spectral JPEG XL dramatically reduces file sizes, its lossy approach may pose drawbacks for some scientific applications.”

This is the part that confuses me. First of all, many applications that need spectral data need it to be as accurate as possible; lossy compression might not be acceptable there.

More interestingly (and I'll read the actual paper for this): which data will be compressed more? Simply put, JPEG achieves its best compression by keeping the brightness but discarding colour. Which dimensions of the spectral space do the researchers think can be compressed more than others? In this case there is no human visual system to base the decision on.
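For what it's worth, the DC-weighting step the article mentions can be mimicked in toy form (my guess at the general idea, not the paper's actual scheme; the coefficient values are made up). Dividing AC coefficients by the DC term makes the quantizer work on relative rather than absolute magnitudes, so the error budget scales with overall brightness:

```python
import numpy as np

# Toy DC-weighted quantization: scale AC spectral coefficients by the DC
# (overall brightness) before uniform quantization, then undo it on decode.
coeffs = np.array([200.0, 12.3, -4.1, 0.9])   # [DC, AC1, AC2, AC3], made up
dc = coeffs[0]
step = 0.01                                    # quantizer step, in relative units
quantized = np.round(coeffs[1:] / dc / step)   # store these small integers
restored = quantized * step * dc               # decode: undo the weighting
print(np.max(np.abs(restored - coeffs[1:])))   # error is at most step * dc / 2
```

Notice the smallest AC coefficient quantizes to zero outright: that's the "less important data compressed more aggressively" part.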

[–] rice@lemmy.org 17 points 5 days ago

JPEG XL does support lossless, and their 69-page paper does mention this, so I'm unsure why they present the lossy mode as the comparison to their "lossless ZIP compression of OpenEXR".

Page 51 has more detail on the compression stuff. OpenEXR also supports lossy compression. Anyway, I think pages 51-52 would answer it for someone who knows more about OpenEXR than I do.

Their comparison images clearly show data being lost as well, so they aren't even using JPEG XL's visually-lossless setting; they're going full lossy. Must be some use case somewhere?

[–] hera@feddit.uk 2 points 5 days ago (1 children)

I literally can't think of a scientific use case where lossy compression would be acceptable?

[–] ayyy@sh.itjust.works 3 points 5 days ago

Perhaps training an image classifier?

[–] pelya@lemmy.world 9 points 5 days ago (1 children)

What, pickle.dump your enormous Numpy array not good enough for you anymore? Not even fancy zlib.compress(pickle.dumps(enormousNumpyArray)) will satisfy you? Are you a scientist or a spectral data photographer?

[–] KingRandomGuy@lemmy.world 2 points 5 days ago (1 children)

I guess part of the reason is to have a standardized method for multi- and hyperspectral images, especially for storing things like metadata. Simply storing a NumPy array may not be ideal if you don't keep metadata on what is being stored and in what order (e.g. axis order, which channel corresponds to which frequency band, etc.). Plus it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you'd probably want lossless).

If compression isn't the concern, certainly other formats could work to store metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.

[–] pelya@lemmy.world 2 points 5 days ago (1 children)

Saving arbitrary metadata is the exact use case for the pickle module; you just put it together with your NumPy array into a tuple. The JPEG format has support for storing metadata, but it's an afterthought, like .mp3 tags: half of applications don't support it.
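The tuple trick in runnable form (the metadata keys here are ad hoc, which is exactly the standardization problem under discussion, and the tiny cube is a stand-in for real spectral data):

```python
import pickle
import zlib

import numpy as np

# Bundle a spectral cube with ad-hoc metadata, compress, then restore.
cube = np.zeros((4, 4, 3), dtype=np.float32)
meta = {"axis_order": "y,x,band", "bands_nm": [450, 550, 650]}
blob = zlib.compress(pickle.dumps((cube, meta)))

restored_cube, restored_meta = pickle.loads(zlib.decompress(blob))
print(restored_meta["bands_nm"], restored_cube.shape)  # [450, 550, 650] (4, 4, 3)
```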

I can imagine multichannel jpeg to be used in photo editing software, so you can effortlessly create false-color plots of your infrared data, maybe even apply a beauty filter to your Eagle Nebula microwave scans.

[–] KingRandomGuy@lemmy.world 2 points 4 days ago

I agree that pickle works well for storing arbitrary metadata, but my main gripe is that there's no exact standard for how the metadata should be formatted. For FITS, for example, there are keywords for metadata such as the row order, CFA matrices, etc. that all FITS processing and display programs need to follow to properly read the image. So to make working with multi-spectral data easier, it'd definitely be helpful to have a standard set of keywords and an encoding format.

It would be interesting to see if photo editing software picks up multichannel JPEG. As of right now there are very few sources of multi-spectral imagery for consumers, so I'm not sure what the target use case would be. The closest thing I can think of is narrowband imaging in astrophotography, but normally you process those in dedicated astronomy software (e.g. Siril, PixInsight), though you can also re-combine different wavelengths in traditional image editors.

I'll also add that HDF5 and Zarr are good options to store arrays in Python if standardized metadata isn't a big deal. Both of them have the benefit of user-specified chunk sizes, so they work well for tasks like ML where you may have random accesses.

[–] rdri@lemmy.world 3 points 4 days ago

Last I checked, JPEG XL takes a lot of time and resources to encode an image if you actually want it to be far more optimized than JPEG.