7fb2adfb45bafcc01c80

joined 1 year ago
[–] 7fb2adfb45bafcc01c80@lemmy.world 10 points 13 hours ago (4 children)

It wasn't always followed on Reddit, but downvoting there was supposed to be for comments that don't contribute to the conversation.

Here the guidance is looser -- the docs don't address comments, but do say to "upvote posts that you like."

I've tried contributing to conversations, and I sometimes present a different viewpoint in the interest of exchanging ideas, but that often results in massive downvotes simply because people disagree. I'm not going to waste my energy contributing to a community that ends up burying my posts because we have different opinions.

That's true on Reddit too, so I'm being somewhat tangential to the original question. I guess what I'm saying is that some people might feel like I do and won't engage in any community, be it Reddit or Lemmy, if it's just going to be an echo chamber.

I try to follow the Chicago Manual of Style, so for me it's Travis's. Generally that's the style guide used in fiction.

The Associated Press Stylebook, however, just puts an apostrophe at the end of a proper noun ending in "s" (although it does use an apostrophe-s for common nouns, creating things like "scissors's").
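The difference is mechanical enough to sketch in a few lines of code (simplified -- both guides have more exceptions than this):

// Toy sketch of the two possessive rules (simplified).
// Chicago: singular nouns get apostrophe-s even when they end in "s".
function chicagoPossessive(noun: string): string {
  return `${noun}'s`; // "Travis" -> "Travis's"
}

// AP: proper nouns ending in "s" get a bare apostrophe;
// common nouns still get apostrophe-s.
function apPossessive(noun: string, isProper: boolean): string {
  return isProper && noun.endsWith("s") ? `${noun}'` : `${noun}'s`;
}

console.log(chicagoPossessive("Travis"));     // Travis's
console.log(apPossessive("Travis", true));    // Travis'
console.log(apPossessive("scissors", false)); // scissors's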

I'm still using my Galaxy S8 with only one problem: Verizon's voicemail app won't run on something this old. Every other app is fine. It figures that the only app that encourages me to upgrade is from the phone company.

Inuyasha often said he was evil and played the tough guy so he would be left alone, but he was usually compassionate and had a soft side.

[–] 7fb2adfb45bafcc01c80@lemmy.world 12 points 3 weeks ago* (last edited 3 weeks ago)

I've been doing this for 30+ years and it seems like the push lately has been towards oversimplification on the user side, but at the cost of resources and hidden complexity on the backend.

As an assembly-language programmer, I'm used to programming with an eye toward resource consumption. Did using that extra register just cause a couple of extra PUSH and POP instructions in the loop? What's the overhead of that?

But now some people just throw in an entire JavaScript framework for a single feature, without worrying about how it works or what overhead it adds, as long as the frontend looks right.
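For scale: a single feature like a live character counter is a few lines of plain TypeScript against the DOM -- no framework, no hidden overhead. A minimal sketch (the element IDs are made up):

// Live character counter with no framework.
// Assumes <input id="comment"> and <span id="count"> exist in the page.
const input = document.getElementById("comment") as HTMLInputElement;
const counter = document.getElementById("count") as HTMLSpanElement;

input.addEventListener("input", () => {
  counter.textContent = `${input.value.length} characters`;
});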

The same is true with computing. We're abstracting containers inside VMs on top of base operating systems, which adds that much more resource utilization to the mix (what's the carbon footprint of all that?) behind an extremely complex but hidden backend. Everything's great until you have to figure out why you're suddenly losing packets that pass through a virtualized router to linuxbridge or OVS to a Kubernetes pod inside a virtual machine. And if one of those processes fails along the way, BOOM! It's all gone. But that's OK; we'll just tear it down and rebuild it.

I get it. I understand the draw, and I see the benefits. IaC is awesome, and the speed with which things can be done is amazing. My concern is that I've seen a lot of people using these things who don't know what's going on under the hood, so they often make assumptions or mistakes that lead to surprises later.

I'm not sure what the answer is, other than to understand what you're doing at every step of the way and to always choose the simplest route (but future-proofed).

Technically, each time it is viewed it is a republication from a copyright perspective. Each view redistributes a digital copy; the original copy that was made doesn't go away when someone views it. There isn't just one copy that people pass around like a library book.

[–] 7fb2adfb45bafcc01c80@lemmy.world 0 points 1 month ago* (last edited 4 weeks ago)

Again, isn't that the site's prerogative?

I think there should at least be a recognized way to opt out that archive.org actually follows. For years they told people to put

User-agent: ia_archiver
Disallow: /

in robots.txt, but they still archived content from those sites. They refuse to publish the IP addresses they crawl from, even though that would be trivial to do. And they refuse to send a User-Agent that you can filter on.
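If they sent an honest User-Agent, blocking them would take a handful of lines on the server side. A sketch in TypeScript on Node (hypothetical, since by my reading they don't send anything you can match on; "ia_archiver" is just the name from their own robots.txt advice):

// Hypothetical: refuse a crawler that identifies itself in its User-Agent.
import { createServer } from "node:http";

const BLOCKED_UA = "ia_archiver"; // the name from their robots.txt guidance

createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  if (ua.includes(BLOCKED_UA)) {
    res.writeHead(403, { "Content-Type": "text/plain" });
    res.end("Archiving is not permitted on this site.\n");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Normal page content.\n");
}).listen(8080);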

If you want to be a library, be open and honest about it. There's no need to sneak around.

Like I said, I have no problems with individuals archiving it and not republishing it.

If I take a newspaper article and republish it on my site I guarantee you I will get a takedown notice. That will be especially true if I start linking to my copy as the canonical source from places like Wikipedia.

It's a fine line. Is archive.org a library (wasn't there a court case about this recently...) or are they republishing?

Either way, it doesn't matter for me anymore. The pages are gone from the archive, and they won't archive any more of them.

[–] 7fb2adfb45bafcc01c80@lemmy.world -4 points 1 month ago (2 children)

Shouldn't that be the content creator's prerogative? What if the content had a significant error? What if they removed the page because someone living in the EU requested it under their privacy laws? What if the page was edited because someone accidentally made their address and phone number public in a forum post?

[–] 7fb2adfb45bafcc01c80@lemmy.world -2 points 1 month ago (6 children)

> how do you expect an archive to happen if they are not allowed to archive while it is still up.

I don't want them publishing their archive while it's up. If they archive but don't republish while the site exists then there's less damage.

I support the concept of archiving and screenshotting. I have my own Linkwarden server set up, and I use it all the time.

But I don't republish anything that I archive, because that would dilute the value of the original creator's work.

[–] 7fb2adfb45bafcc01c80@lemmy.world -3 points 1 month ago (5 children)

Yes, some Wikipedia editors are submitting pages to archive.org and then linking to the archived copy instead of to the actual source.

So when you follow the citation on the Wikipedia page, it takes you straight to archive.org -- that's the reader's first stop.

 

I guess I'm becoming a dinosaur, and now I don't know where to find out about new FOSS stuff being developed, when new releases are out, etc.

I used to get it all on USENET and mailing lists, and later on sourceforge.net and freshmeat.net. Now I track some things on https://freshcode.club/, but I don't see much there that's 'fresh' -- maybe new updates, but not many new packages. SourceForge still exists, but it doesn't seem current.

If I know about a project I'll follow it on GitHub, but I'm looking for a place to find out about new things that I didn't know I wanted yet.
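One partial workaround is to query GitHub's search API for recently created repositories directly. A rough sketch in TypeScript (the date and star cutoffs are arbitrary examples):

// Ask GitHub's search API for recently created, already-popular repos.
const url =
  "https://api.github.com/search/repositories?q=" +
  encodeURIComponent("created:>2025-01-01 stars:>50") +
  "&sort=stars&order=desc";

const res = await fetch(url, {
  headers: { Accept: "application/vnd.github+json" },
});
const data = await res.json();

for (const repo of data.items) {
  console.log(`${repo.full_name} -- ${repo.description ?? "(no description)"}`);
}

But that only surfaces things that are already getting stars, which isn't quite the same as a curated "fresh" feed.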

tl;dr: Where can I watch for promising new FOSS projects?

 

I started migrating my servers from Linode to Hetzner Cloud this month, but noticed that my quota only gave me ten instances.

I need many more -- on the order of 25 right now, and probably more later. I'd also like the ability to create test servers and the like.

I asked for an increase with all of that in mind, and Hetzner replied:

"As we try to protect our resources we are raising limits step by step and on the actuall [sic] requirement. Please tell us your currently needed limit."

I don't understand. Does Hetzner not have enough servers to accommodate me? Wouldn't the size of the servers be relevant if it's really a resource question?

I manage a very large OpenStack cluster for my day job, and we just give people what they pay for. I'm having a hard time wrapping my head around this, unless Hetzner simply can't give me what I ultimately want to pay for -- and if that's the case, I wonder whether they're the right solution for me after all.

It also makes me worry about cloud elasticity.
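To me, elasticity means being able to script servers up and down without a human approving each step. Roughly what I'd want to do against their documented v1 API (the server type and image names here are just example values):

// Spin up a throwaway test server via the Hetzner Cloud API.
// HCLOUD_TOKEN is an API token; type and image are example values.
const token = process.env.HCLOUD_TOKEN;

const res = await fetch("https://api.hetzner.cloud/v1/servers", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "test-01",
    server_type: "cx22",   // example type
    image: "ubuntu-24.04", // example image
  }),
});

if (!res.ok) {
  // With a ten-instance quota, the eleventh create fails here
  // instead of giving me a server.
  console.error(`create failed: ${res.status} ${await res.text()}`);
}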

Does anyone have any insights that can help me understand why keeping a low limit matters?
