RandomDevOpsDude

joined 1 year ago
[–] RandomDevOpsDude@programming.dev 1 points 8 months ago (1 children)

Thanks for sharing! I hadn't seen that plugin. We definitely use Helm, even though I hate it lol. I'll take a look when I get around to external secrets, since I still haven't had a chance to (you know how it goes... priorities made up by some random PM or whatever)

[–] RandomDevOpsDude@programming.dev 7 points 8 months ago* (last edited 8 months ago)

I find it very difficult to recommend generative AI as a learning tool (specifically for juniors), as it often spits out terrible (or even outright broken) code that could be mistaken for "good" code. I think the more experienced a dev is, the better it works as something like a pair programmer.

The problem is it cannot go back and correct/improve already-generated output unless prompted to. It is getting better and better, but for the most part it is still an overly glorified template generator that often includes import statements from packages that don't exist, one-off functions that could have been inlined (it cannot go back and correct itself), and numerous garbage variables that are referenced only once and take up heap space for seemingly no good reason.

I'm mainly speaking about GPT-4; Copilot is better, but both have licensing concerns (where did it get this code from?) if you are creating something real and not just for fun.

[–] RandomDevOpsDude@programming.dev 1 points 8 months ago (3 children)

I prefer Sealed Secrets over SOPS since it has the namespace-scoping element, and the secrets can also be stored in the repo (once encrypted). I also generally prefer having a controller deployed rather than forcing devs to learn Kustomize (which we don't widely use yet), so I guess it's less of a support burden for me.
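For anyone who hasn't used it, a minimal sketch of the workflow (secret and namespace names are made up; assumes the kubeseal CLI and a sealed-secrets controller already running in the cluster):

```
# Generate a plain Secret manifest locally; this file never gets committed
kubectl create secret generic db-creds \
  --namespace my-app \
  --from-literal=password='s3cr3t' \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it against the controller's public key; the default "strict" scope
# ties the sealed secret to this exact name and namespace
kubeseal --format yaml --scope strict < secret.yaml > sealed-secret.yaml

# The sealed output is safe to commit; the controller decrypts it in-cluster
git add sealed-secret.yaml
```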

[–] RandomDevOpsDude@programming.dev 2 points 9 months ago (5 children)

I can't believe I haven't seen External Secrets before. Sealed Secrets are cool, but such a pain, as you described. Sounds like I'll be setting up External Secrets next week. Thanks for the great post.

[–] RandomDevOpsDude@programming.dev 7 points 10 months ago (2 children)

I have been using "gaming" keyboards for coding for ~10 years now. The only thing to be wary of, imo, is keebs with "extra customizable keys" that break from a standard layout. It depends on the device, but Logitech calls them "G keys", for example, and often sticks them on the far left of the board, to the left of Tab/Caps/left Shift. That makes life a lot more difficult when you're not gaming.

Outside of that, I think calling something a "gaming" keyboard is mostly a marketing tactic to bump up the price. It's hard not to recommend mechanical, but that sounds out of budget and is often hard to find wireless/Bluetooth; personally, though, I still think mech is the top priority.

What I have seen a lot of peers do is wait to see whatever keyboard they get in the office, then buy the same one for home for consistency, rather than dragging a personal one back and forth. Often companies will offer basic boards like the Logitech K270, K350, or K650: not amazing, not terrible, and most likely fitting your described criteria.

[–] RandomDevOpsDude@programming.dev 3 points 10 months ago (1 children)

I'm a bit late to the show, but I personally feel like you are heading down the wrong path. Unless you are trying to host completely locally but, for some reason, want your backups in the cloud rather than on a separate local server, you are mixing your design for seemingly no reason. If you are hosting locally, you should back up to a separate local instance.

If you indeed are cloud-based, you SHOULD NOT be hosting a DB separately. Since you specified S3, you are using AWS, so you should instead use RDS-managed MySQL and its built-in snapshot feature. ref
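For reference, a rough sketch with the AWS CLI (instance and snapshot names are hypothetical): a one-off manual snapshot, plus turning on the automated backups RDS handles for you.

```
# One-off manual snapshot of the managed instance
aws rds create-db-snapshot \
  --db-instance-identifier my-app-db \
  --db-snapshot-identifier my-app-db-manual-2024-01-01

# Automated daily snapshots are a property of the instance itself
aws rds modify-db-instance \
  --db-instance-identifier my-app-db \
  --backup-retention-period 7 \
  --preferred-backup-window 03:00-04:00 \
  --apply-immediately
```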

I am not as familiar with the Cloud Native DevOps Newsletter, but I do enjoy the podcast.

December 8th, 2009 - Motorola Droid successfully rooted ... [granting] root access on the phone using a terminal emulator. This is how I learned bash which inevitably pushed me into pursuing proper Computer Science.

wiki ref


I prefer a similar workflow.

I am a major advocate of keeping CI as simple as possible and letting build tools do the job they were built to do: basic builds and unit/component testing. No need to overcomplicate things for the sake of "doing it all in one place".

CD is where things get dirty, and it really depends on how/what/where you are deploying.

Generally speaking, if integration testing with external systems is necessary, I like to have contract testing against those systems done during CI, then integration/e2e testing in an environment that mimics production (bonus points if it's ephemeral).
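As a rough sketch of that split (all script and task names here are hypothetical, just to show the shape of it):

```
# CI: thin wrappers around the build tool, nothing clever
./gradlew build           # compile + unit/component tests
./gradlew contractTest    # contract tests against stubs of the external systems

# CD: deploy to a production-like (ideally ephemeral) environment, then run e2e
./scripts/deploy.sh --env ephemeral
./gradlew e2eTest -Penv=ephemeral
```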

The most difficult generalisation step is going from one to two. Once you've generalised to two cases, it's much easier to generalise to three, four, or n cases.

🥇

Although it seems to ignore things like sidecar containers to support the application locally (rather than needing a full install of a database tech, for example), I really like the point it's driving at.
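The local equivalent of that sidecar idea is just running the dependency as a throwaway container instead of installing it on the workstation. A hypothetical dev setup:

```
# Run Postgres next to the app for local development; nothing installed on the host
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:16

# Point the app at localhost:5432, then throw the container away when done
docker rm -f dev-postgres
```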

 

Links to certain topics in video description

This year, JetBrains partnered with Google Cloud and DORA to put together the 2022 State of DevOps report. We are hosting a livestream to present the key takeaways and discuss how to achieve successful software delivery and operational performance.

In this livestream, we will:

- Introduce the report, along with some highlights from the newly released Accelerate State of DevOps Report from Google Cloud.
- Discuss the operational performance practices currently employed by JetBrains.

 
 

Rolling Deployment

A rolling deployment strategy slowly replaces previous versions of an application with new versions by entirely switching out the environment in which the application is running. For example, containers running new versions of an application may take the place of containers running previous versions of an application....

Canary Deployment

To avoid risk, a canary deployment uses a phased approach in which traffic is shifted in increments. With the aid of a router or load balancer, new application code is released to a small group of users so it can be tested. Metrics measure the success of the new iteration....

Blue-Green Deployment

Blue-Green deployments eliminate downtime by running 2 identical production environments, one called Blue and the other called Green. Only one of the environments is live at any one time and handles all production traffic....

A/B Deployment

An A/B deployment strategy allows your company to test 2 versions of an application on users. The “A” version would be the old version, while the “B” version would contain a new or revised feature. Each version would be released to a subset of users for testing and feedback....
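For the rolling strategy described above, the Kubernetes version is roughly this (deployment and image names are made up):

```
# Swap the image; the Deployment's rolling update replaces pods incrementally
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2

# Watch the rollout, and back out if the new version misbehaves
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
```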

 

I don't know much about Bret's courses, but I do enjoy his podcast.

Topics cover anything DevOps, cloud management, sysadmin, Docker and container tools like Kubernetes and Swarm, and the full software lifecycle supply chain.

2
CI/CD explained - GitLab (about.gitlab.com)
submitted 1 year ago* (last edited 1 year ago) by RandomDevOpsDude@programming.dev to c/devops@programming.dev
 