[-] Reader9@programming.dev 3 points 11 months ago

Agreed. My copy lost this documentation link from the original, which gives more detail about the horizontal scaling: https://join-lemmy.org/docs/administration/horizontal_scaling.html.

It seems really straightforward (which is a good thing): each backend Lemmy_server handles incoming requests and also pulls federation work from a shared queue.
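Roughly like this, as a mental model (a toy Python sketch with made-up names, not Lemmy’s actual code):

```python
import queue
import threading

# Hypothetical: several backend workers share one queue of outgoing federation tasks.
# In the real setup each backend also serves incoming HTTP requests; only the
# "pull shared federation work" part is modeled here.
federation_queue = queue.Queue()

def backend_worker(name: str) -> None:
    while True:
        try:
            activity = federation_queue.get(timeout=1)
        except queue.Empty:
            return  # no more work
        print(f"{name} delivering {activity}")
        federation_queue.task_done()

for activity_id in ["activity-1", "activity-2", "activity-3"]:
    federation_queue.put(activity_id)

workers = [threading.Thread(target=backend_worker, args=(f"backend-{i}",)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```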

[-] Reader9@programming.dev 23 points 11 months ago

Time zones are an endless source of frustration, though this one doesn’t sound too bad:

Going forward, all timestamps in the API are switching from timestamps without time zone (2023-09-27T12:29:59.113132) to ISO8601 timestamps (e.g. 2023-10-29T15:10:51.557399+01:00 or Z suffix). In order to be compatible with both 0.18 and 0.19, parse the timestamp as ISO8601 and add a Z suffix if it fails (for older versions).

https://github.com/LemmyNet/lemmy/pull/3496
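For client code, the fallback could look something like this (a Python sketch with a made-up helper name, not from any actual Lemmy client library):

```python
from datetime import datetime, timezone

def parse_lemmy_timestamp(ts: str) -> datetime:
    """Handle both 0.18-style naive timestamps and 0.19-style ISO 8601 ones."""
    # Python 3.11+ accepts a trailing "Z"; normalize it for older interpreters.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        # 0.18 sent timestamps without a zone; per the changelog advice, treat them as UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

print(parse_lemmy_timestamp("2023-09-27T12:29:59.113132"))        # 0.18 format
print(parse_lemmy_timestamp("2023-10-29T15:10:51.557399+01:00"))  # 0.19 format
```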

100
Lemmy 0.19 updates (programming.dev)
submitted 11 months ago* (last edited 11 months ago) by Reader9@programming.dev to c/programming@programming.dev

https://programming.dev/post/3666732

Change log for upcoming Lemmy version 0.19.0

I am just reposting this from the original post: https://lemmy.ml/post/5711722.

It’s interesting to see this for the software we’re all using and it makes me want to learn a bit more about the architecture. Quite a few user-facing features and some backend improvements. For example:

Outgoing Federation Queue

The federation queue has been rewritten to be much more performant and reliable. This is irrelevant for client developers, but admins should look out for potential federation problems. If you run multiple Lemmy backends for horizontal scaling, be sure to read the updated documentation and set the new configuration parameters. The Troubleshooting section has information about how to find out the state of the federation queues.

https://github.com/LemmyNet/lemmy/pull/3605

[-] Reader9@programming.dev 2 points 1 year ago* (last edited 1 year ago)

This data structure uses a 2-dimensional array to store data, as documented in this Scala implementation: https://github.com/twitter/algebird/blob/develop/algebird-core/src/main/scala/com/twitter/algebird/CountMinSketch.scala. I’m still trying to understand it as well.

Similar to your idea, I had thought that by using k bloom filters, each with its own hash function and bit array, one could store an approximate count up to k for each key, though that might be a wasteful or naïve solution.

PDF link: http://www.eecs.harvard.edu/~michaelm/CS222/countmin.pdf
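Here’s a toy Python version of the 2-dimensional-array idea (nowhere near the linked Scala implementation, just the shape of it): a depth × width table of counters with one hash function per row, where the row-minimum bounds the true count from above.

```python
import hashlib

class CountMinSketch:
    def __init__(self, depth: int = 4, width: int = 1024):
        self.depth, self.width = depth, width
        # The 2-dimensional array of counters.
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, key: str, row: int) -> int:
        # Derive a per-row hash by salting the key with the row number.
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, key: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.table[row][self._index(key, row)] += count

    def estimate(self, key: str) -> int:
        # Collisions only ever inflate counters, so the minimum over rows
        # is an estimate that can overcount but never undercount.
        return min(self.table[row][self._index(key, row)] for row in range(self.depth))

cms = CountMinSketch()
for word in ["apple", "apple", "banana"]:
    cms.add(word)
print(cms.estimate("apple"), cms.estimate("cherry"))  # 2 (or more), 0 (probably)
```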

[-] Reader9@programming.dev 1 points 1 year ago* (last edited 1 year ago)

I haven’t used them in Spark directly, but here’s how they are used for computing sparse joins in a similar data-processing framework:

Let’s say you want to join some data “tables” A and B. When B has many more unique keys than are present in A, computing “A inner join B” would require lots of shuffling of B, including the rows for keys that don’t appear in A at all.

Knowing this, you can add a step before the join to compute a bloom filter of the keys in A, then apply that filter to B. Now the join from A to B-filtered only considers relevant keys from B, hopefully with much less total computation than the original join.
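A toy Python version of that pattern, with a handwritten bloom filter standing in for whatever the framework provides (names and data made up):

```python
class TinyBloom:
    """Very small illustrative bloom filter; a real framework would size this
    from the expected key count and a target false-positive rate."""

    def __init__(self, size_bits: int = 8192, num_hashes: int = 4):
        self.size_bits, self.num_hashes = size_bits, num_hashes
        self.bits = 0  # an int used as a bit array

    def add(self, key) -> None:
        for i in range(self.num_hashes):
            self.bits |= 1 << (hash((i, key)) % self.size_bits)

    def might_contain(self, key) -> bool:
        return all((self.bits >> (hash((i, key)) % self.size_bits)) & 1
                   for i in range(self.num_hashes))

# Build the filter from the small side's join keys (A)...
a_keys = ["user1", "user2"]
bloom = TinyBloom()
for key in a_keys:
    bloom.add(key)

# ...then prune the large side (B) before the expensive join/shuffle.
b_rows = [("user1", 10), ("user3", 20), ("user9", 30)]
b_filtered = [row for row in b_rows if bloom.might_contain(row[0])]
print(b_filtered)  # keeps ("user1", 10); the other rows are (very likely) pruned
```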

[-] Reader9@programming.dev 2 points 1 year ago

Collage sounds really interesting, will check it out. Another variation on the bloom filter I recently learned about is the count-min sketch. It allows for storing/incrementing a count along with each key, and can answer “probably in set with count greater than _” or “definitely not in set”.

Thanks for adding more detail on the DB use-cases!

33

What are your real-world applications of this versatile data structure?

They are useful for optimization in databases like SQLite and query engines like Apache Spark. Application developers can use them as concise representations of user data for filtering previously seen items.

The linked site gives a short introduction to bloom filters along with some links to further reading:

A Bloom filter is a data structure designed to tell you, rapidly and memory-efficiently, whether an element is present in a set. The price paid for this efficiency is that a Bloom filter is a probabilistic data structure: it tells us that the element either definitely is not in the set or may be in the set.

[-] Reader9@programming.dev 1 points 1 year ago

The reflector of the flashlight is built so light coming from a very small source (like the filament of an incandescent bulb) is directed forward in a focused beam.

I agree, but I also think that using a modern LED with a single source of forward-facing light is fine. However, the emitter would need to be properly positioned in the light.

Here’s a very similar host (from one of the best low-cost flashlight makers) showing a properly aligned LED and reflector:

Product link: https://www.aliexpress.us/item/3256801849471618.html

[-] Reader9@programming.dev 2 points 1 year ago

I found a few references to this exact model on candlepowerforums.com, which I believe has more folks who own(ed) incandescent lights. Not that it has been such a long time, but LED technology advanced very quickly. Not sure if that will help your search.

There are also a few people over on !flashlight@lemmy.world !

1
submitted 1 year ago* (last edited 1 year ago) by Reader9@programming.dev to c/flashlight@lemmy.world

Edited: previously linked to https://www.sofirnlight.com/products/sofirn-if30-edc-powerful-flashlight?spm=..collection_eea4d417-4338-4a86-ba3e-1a41b66cb32b.collection_detail_1.1, replaced with an image for better Lemmy preview.

Something new from Sofirn. These larger cells (larger than the 26800) seem to be popping up in flashlights more often now. This one lists 6500 mAh, which isn't a much bigger capacity, but presumably the maximum discharge must be pretty high to support 12000 lumens.

SFT-40 in the center, surrounded by flood emitters. This isn’t one I’ll be picking up; maybe I’ll hold out for a 461000 cell!

[-] Reader9@programming.dev 4 points 1 year ago

Although your current role wouldn’t seem very senior at a large organization, “senior” is a relative term, and at this company it seems like you are the engineer with ownership responsibilities over the end-to-end software development of a production system. So it might still be reasonable to use a senior title if there are other benefits.

[-] Reader9@programming.dev 2 points 1 year ago

This is a great suggestion because it focuses directly on tracking the outcome (did the software work?) and it gives a fair chance to the folks who don’t want to test - maybe their code really is perfect!

Another similar metric I would add is the number of rollbacks of newly released code, if the CD system supports it via a method like canary or blue-green rollouts.

[-] Reader9@programming.dev 9 points 1 year ago

Focusing on code coverage (which doesn't distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”

Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them. And as a developer I hate feeling “forced” and prefer if at all possible to use consensus to decide on team practices.

[-] Reader9@programming.dev 1 points 1 year ago

One aspect that does work is framing the need for tests as assurance that specific invariants are verified and preserved

Agreed - this is the specific aspect which I hoped would be communicated by studying TDD a bit!

The team is afraid that making changes will be more difficult when tests exist, but TDD (or maybe a more specific concept like you mentioned) demonstrates that tests make future changes easier.

And I specifically advocated not to follow “write tests first”.

Name-dropping concepts actually contributes to losing credibility for any code quality effort, and works against you.

OK. If I were having an in-depth discussion with my team of fellow developers to convince them to start writing tests, I don’t think that’s name-dropping.

[-] Reader9@programming.dev 8 points 1 year ago* (last edited 1 year ago)

We can’t test yet, we’re going to make changes soon

This could be a good opportunity to introduce the concept of test-driven development (TDD) without the necessity of “write tests first”. I think it can help illustrate why having tests is better when you are expecting to make changes, because of the safety they provide.

“When we make those changes, wouldn’t it be great to have more confidence that the business logic didn’t break when adding a new technical capability?”

You shouldn’t have to refactor to test something

This seems like a reasonable statement, and I sort of agree, in the sense that for existing production code, a change which only adds new tests yet also requires refactoring of existing functionality might feel a bit risky. As other commenters mentioned, starting with writing tests for new features or fixes might help prevent folks from feeling like they are refactoring just to test. Instead, they’re refactoring and developing for the feature, and the tests feel like they contribute to that feature as well.

