[–] vampatori@feddit.uk 3 points 1 year ago (1 children)

Definitely give Ruthless a go - I love it. It reminds me of early-game ARPGs on higher difficulties. Positioning really matters, and you have to adapt based on what you get. It seems to have been the proving ground for PoE2's new tempo.

[–] vampatori@feddit.uk 1 points 1 year ago

I was going to do an origin character as a solo play-through and a custom character for a group play-through with my mates, but now I might do it the other way around... which means hours in the character creator! Ha.

[–] vampatori@feddit.uk 7 points 1 year ago

Often the question marked as a duplicate isn't actually a duplicate; the person marking it just didn't spend the time to properly understand the question and realise how it differs. I also see lots of answers that misunderstand the question, or that try to force the asker towards the answerer's own particular preference - and they get tons of votes whilst doing it.

Don't get me wrong, some questions are definitely useful - and some go above and beyond - but on average the quality isn't great these days and hasn't been for a while.

[–] vampatori@feddit.uk 19 points 1 year ago (1 children)

Google's first-quarter 2023 report shows they made massive profits on vast revenue, most of it from advertising.

It is about control though. The thing that caught my eye is that they're saying only "approved" browsers will be able to access these WEI sites. So what does that mean for crawlers/scrapers? It means the big tech companies on the approval board will be able to lock potential competitors out of accessing the web - new browsers, search engines, etc., but much more importantly... Machine Learning.

Google's biggest fear right now is that ML systems will completely eliminate most people's reason to use Google's search, and therefore their main source of revenue will plummet. And they're right to be scared, it's already starting to happen and it's showing us very quickly just how bad Google's search results are.

So this seems to me like an attempt to control things from that side. It's essentially the "big boys" trying to consolidate and firm up their hold on the industry and keep newcomers from rivalling them, because with ML the barrier to entry has never been lower.

[–] vampatori@feddit.uk 10 points 1 year ago

Red Hat making that argument in particular shows they've pivoted their philosophy significantly; it's a seemingly subtle change, but a huge one - presumably due to the IBM acquisition, or maybe due to the pressures in the market right now.

It's the classic argument against FOSS - one Red Hat themselves argued against for decades, proving as an organisation that you can build a viable business on the back of FOSS whilst also contributing to it, and that there is indirect value in having others use your work. Only time will tell, but the stage is set for Red Hat to cultivate a different relationship with FOSS and move more into proprietary code.

[–] vampatori@feddit.uk 12 points 1 year ago* (last edited 1 year ago)

Don't roll your own if you can help it; just use a distribution dedicated to use as a thin client. I was coincidentally looking into this just last week and came across ThinStation, which looks really good. There are other distros too - search for "linux thin client".

[–] vampatori@feddit.uk 1 points 1 year ago (1 children)

How do Linux distros deal with this? I feel like however that's done, I'd like node packages to work in a similar way - "package distros". You could have rolling releases, long-term support with security patches, an application and verification process for being included in a distro, etc.

It wouldn't eliminate all problems, of course, but could help with several methods of attack, and also help focus communities and reduce duplication of effort.

[–] vampatori@feddit.uk 19 points 1 year ago

I personally found Fedora to be rock solid, and, along with Ubuntu, it provided the best hardware support out of the box on all my computers - though it's been a couple of years since I used it. I did end up on Ubuntu non-LTS in the end, as I now run Ubuntu LTS on my servers and find having the same systems beneficial (from a knowledge perspective).

[–] vampatori@feddit.uk 3 points 1 year ago* (last edited 1 year ago)

If I’m okay with the software (not just trying it out) am I missing out by not using dockers?

No, I think in your use case you're good. A lot of the key features of containers - immutability, reproducibility, scaling, portability, etc. - don't really apply to your use case.

If you reach a point where you find you want a stand-alone linux server, or an auto-reconfiguring reverse proxy to map domains to your services, or something like that - then it starts to have some additional benefit and I'd recommend it.

In fact, using native builds of this software on Windows is probably much more performant.

[–] vampatori@feddit.uk 7 points 1 year ago (7 children)

Containers can be based on operating systems that are different to your computer.

Containers utilise the host's kernel - which is why you have to jump through some hoops to run a Linux container on Windows (a VM/WSL).

That's one of the key differences between VMs and containers. VMs virtualise all the hardware, so the guest and host operating systems can be totally different; whereas because a container uses the host kernel, it must use the same kind of operating system, and it accesses the host's hardware through that kernel.

The big advantage of that approach over VMs is that containers are much more lightweight and performant, because they don't have a virtual kernel/hardware/etc. I find it's best to think of them as a process wrapper - kind of like chroot for a specific application. You're just giving the application you're running a box to run in, but the host OS is still doing the heavy lifting.
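
A quick way to see the shared kernel for yourself - a minimal sketch in TypeScript (my assumptions: Node.js and Docker on a Linux host; `alpine` is just an arbitrary small image):

```ts
// A container reports the *host's* kernel release, because containers
// share the host kernel rather than virtualising their own.
import { execSync } from "node:child_process";

const hostKernel = execSync("uname -r").toString().trim();
const containerKernel = execSync("docker run --rm alpine uname -r")
  .toString()
  .trim();

// Both values are the same kernel release, e.g. "6.5.0-14-generic".
console.log({ hostKernel, containerKernel });
```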

[–] vampatori@feddit.uk 8 points 1 year ago* (last edited 1 year ago)

As always, it depends! I'm a big fan of "the right tool for the job" and I work in many languages/platforms as the need arises.

But for my "default" where I'm building up the largest codebase, I've gone for the following:

  • TypeScript
    • Strongly typed (ish), which makes for a nice developer experience.
    • Makes refactoring much easier/less error-prone.
    • Runs on back-end (node) and front-end, so only one language, tooling, codebase, etc. for both.
  • SvelteKit
    • Svelte as a front-end reactive framework is so nice and intuitive to use - definitely the best there is around atm.
    • Its hybrid SSR/CSR is amazing, so nice to use.
    • As the back-end it's "OK" - it needs a lot more work IMO - but I do like it for a lot of things, and I can choose not to use it where necessary.
  • Socket.IO
    • For any real-time/stream-based communication I use this over plain WebSockets, as it adds so much and is so easy to use (see the sketch after this list).
  • PostgreSQL
    • Really solid database that I love more and more the more I use it (and I've used it a lot, for a very long time now!)
  • Docker
    • Easy to use container management system.
    • Everything is reproducible, which is great for development, testing, bug-fixing, and disasters.
    • Single method to manage all services on all servers, regardless of how they're implemented.
  • Traefik
    • Reverse proxy that can be set to auto-configure based on configuration data in my docker compose files.
    • Automatically configuring takes a pain point out of deploying (and allows me to fully automate deployment).
    • Really fast, nice dashboard, lots of useful middleware.
  • Ubuntu
    • LTS releases keep things reliable.
    • Commercial support available if required.
    • Enough name recognition that when asked by clients, this reassures them.
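
For a taste of why I rate Socket.IO, here's a minimal server sketch in TypeScript - named events and rooms instead of hand-rolled framing over raw WebSockets (the event names, port, and CORS settings are just illustrative):

```ts
import { Server } from "socket.io";

// Port and CORS settings are illustrative.
const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // Rooms and named events come for free - no manual message framing.
  socket.on("chat:join", (room: string) => socket.join(room));
  socket.on("chat:message", (room: string, text: string) => {
    io.to(room).emit("chat:message", text); // broadcast to everyone in the room
  });
});
```
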
[–] vampatori@feddit.uk 2 points 1 year ago

I was using file merging, but one issue I found was that arrays don't get merged - and since switching to use Traefik (which is great) there are a lot of arrays in the config! And I've since started using labels for my own tooling too.
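
For illustration, the sort of thing I mean - Traefik's per-service routing lives in a label array in the compose file, and a naive merge replaces the whole array rather than combining entries (the service, image, and host names here are made up):

```yaml
services:
  app:
    image: example/app:latest   # hypothetical image
    labels:
      # Traefik reads its routing config from this label array; a naive
      # YAML merge replaces the whole list instead of merging entries.
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
```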

 

I run my own small software development company, and I'd like somewhere my clients can log in and get access to things like:

  • Access to documents from their repo(s) (GitHub, where all contracts etc. are kept).
  • Links to invoices and payment.
  • Milestone progress from their repo(s) (GitHub)
  • Links to their test, staging, and production services.
  • Ability to get in touch (potentially raising an issue in GitHub).

We're just doing things manually for now, but before we reinvent the wheel I thought it would be useful to see what's out there to either use directly or extend.

 

Is there some formal way (or ways) of quantifying potential flaws, or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
