this post was submitted on 28 Apr 2024
503 points (96.8% liked)

Science Memes

10950 readers
2097 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] fossilesque@mander.xyz 72 points 6 months ago (4 children)
[–] ipha@lemm.ee 101 points 6 months ago (1 children)
[–] fossilesque@mander.xyz 40 points 6 months ago (1 children)

So is a wedding gift registry.

[–] whereBeWaldo@lemmy.dbzer0.com 6 points 6 months ago* (last edited 6 months ago)

No, this is Patrick!

[–] Fosheze@lemmy.world 81 points 6 months ago* (last edited 6 months ago) (1 children)

It's a dynamically-sized list of objects of the same type stored contiguously in memory.

dynamically-sized: Its size can change as needed.

list: It stores multiple things together.

object: A bit of programmer-defined data.

of the same type: All the objects in the list are defined the same way.

stored contiguously in memory: If you think of memory as a bookshelf, then all the objects in the list are stored right next to each other on the shelf rather than spread across it.

[–] kbotc@lemmy.world 15 points 6 months ago (4 children)

Dynamically sized but stored contiguously makes the systems performance engineer in me weep. If the lists get big, the kernel is going to do so much churn.

[–] Killing_Spark@feddit.de 15 points 6 months ago (1 children)

Contiguous storage makes iteration very fast, though, which often offsets the cost of allocation.

[–] Slotos@feddit.nl 7 points 6 months ago

Modern CPUs are also extremely efficient at dealing with contiguous data structures. Branch prediction and caching get to shine on them.

Avoiding memory access, or helping the CPU fetch it all upfront, switches the physical domain of the computation.

[–] IAmVeraGoodAtThis@lemmy.blahaj.zone 7 points 6 months ago* (last edited 6 months ago)

Which is why you should:

  1. Preallocate the vector if you can guesstimate the size
  2. Use a vector implementation that won't reallocate the entire vector on every single addition (like Rust's Vec, which doubles its capacity every time it runs out of space)

Memory is fairly cheap. Allocation time not so much.

[–] yetiftw@lemmy.world 4 points 6 months ago

MATLAB likes to pick the smallest available spot in memory to store a list, so for loops that grow a matrix, it's recommended to preallocate the space using a matrix full of zeros!

[–] tamal3@lemmy.world 2 points 6 months ago (1 children)

Is that churn or chum? (RN or M)

[–] SpaceNoodle@lemmy.world 3 points 6 months ago

Many things like each other lined up in a row, and you can take some away or put more in.

[–] mindbleach@sh.itjust.works 1 points 6 months ago

It's how you want an array to work.