Hey all! My team at work is struggling with the growing pains of moving to a formalized review process, so I was wondering if any of you have things you live or die by in your code reviews. How much of it is manual, and how much is just static code analysis + style guide stuff, etc.?

[–] reversebananimals@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

I'm a senior at a large tech company - I push all the teams I work with to automate the review process as much as possible. At minimum, teams must have a CI hook on their pull request process that runs a remote dry-run build of the changed packages. The dry-run makes sure the packages compile, pass unit tests, and meet linter rules. A failed build blocks the pull request from being merged.
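The commenter doesn't name a CI system, so as an illustration only, here's a minimal GitHub Actions sketch of such a gate, assuming a Python project (the workflow name and the `ruff`/`pytest` commands are placeholders):

```yaml
# .github/workflows/dryrun.yml - hypothetical PR gate
name: dryrun-build
on: pull_request

jobs:
  dryrun:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff pytest
      - run: ruff check .   # linter rules
      - run: pytest         # unit tests
```

Marking `dryrun` as a required status check in the repo's branch protection settings is what actually blocks the merge on failure.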

I try to encourage developers to turn the outcome of every code style discussion into a lint rule that fails the dry-run build when it's violated. It saves time by automating something devs were doing manually anyway (reading every line of code looking for their code style pet peeves), and it also makes the dialogue healthier - devs can talk about the team standards and whether the code meets them, instead of making subjective comments about one another's code style.
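As a sketch of what turning a style decision into an automated check can look like when no off-the-shelf lint rule fits, here's a hypothetical standalone checker (the "no bare except" policy and all names are invented for the example):

```python
# check_bare_except.py - a style decision ("no bare except:") turned into
# an automated check that fails the dry-run build instead of a review comment.
import ast
import sys


def find_bare_excepts(source: str, filename: str) -> list[str]:
    """Return one violation message per bare `except:` clause in the file."""
    violations = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            violations.append(
                f"{filename}:{node.lineno}: bare 'except:' - catch a specific exception"
            )
    return violations


if __name__ == "__main__":
    problems = []
    for path in sys.argv[1:]:
        with open(path) as f:
            problems.extend(find_bare_excepts(f.read(), path))
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # non-zero exit fails the build
```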

[–] pnelego@lemmy.world 1 points 1 year ago

Small startup - here's our process (a rough git sketch follows the list):

  • Create your branch
  • Implement feature
  • Test independently of other components (with unit tests or otherwise)
  • Test directly with other components
  • Work with other devs to ensure stability on the dev branch, making any small bug fixes directly in dev
  • Push to prod
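In git terms that flow looks roughly like this (the `dev`, `main`, and `feature/...` branch names are assumptions, since the list above doesn't specify them):

```sh
git checkout dev && git pull
git checkout -b feature/my-feature       # create your branch
# ...implement the feature, add unit tests, test against other components...
git push -u origin feature/my-feature    # share for review / stabilization on dev
git checkout dev && git merge --no-ff feature/my-feature
git checkout main && git merge dev       # "push to prod"
```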
[–] cark@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

Current place:

  • Work is done on a feature branch on a personal fork of the repo
  • Codebase requires 100% functional coverage, and you're responsible for writing the tests for your code and including that in the same PR
  • Run pre-commit hooks for style auto-formatters before you can commit and push your code to your origin fork (sample config after the list)
  • Ideally run local tests as well
  • Create a PR to pull feature branch into the upstream repo's main branch, which triggers the CI pipeline for style and tests
  • At least 1 other person must review the code before a PR can be approved and merged into upstream main
  • There's a separate CI pipeline that does a test publish of the packages to TestPyPI (commands sketched below)
  • Releases to PyPI are currently done manually
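For the pre-commit step, a minimal `.pre-commit-config.yaml` might look like this (which formatters the team actually uses isn't stated, so `black` and the pinned revs are placeholders):

```yaml
# .pre-commit-config.yaml - hypothetical example hooks
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2   # pin whatever rev you actually use
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```

Run `pre-commit install` once per clone so the hooks fire on every `git commit`.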
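And one common shape for the TestPyPI step, assuming the standard `build` and `twine` tools (the comment doesn't say which the pipeline uses):

```sh
python -m build                            # produces dist/*.whl and dist/*.tar.gz
twine upload --repository testpypi dist/*  # test publish; real releases go to pypi
```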
[–] nigel_peters@lemmy.nz 1 points 1 year ago

Ours is pretty intense - large bank, 60 or so iOS engineers actively contributing to a mono-repo:

  1. We have about 15 CI steps that pick up on everything from basic linting to security concerns (SonarQube), plus unit tests, UI tests, etc.
  2. We have a template that PR authors fill in with a description, test plan, and devices tested on (sketch after this list).
  3. Reviewers are automatically assigned using a round-robin system.
  4. Reviewers obviously review the code, but also execute the test plan, which includes accessibility testing.
  5. All PRs require 2 approvals.
  6. A bunch of other stuff (uploading artefacts, generating gRPC protos) that probably isn't worth going into detail about.
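For item 2, GitHub picks up a template from `.github/pull_request_template.md`; here's a minimal sketch matching the fields mentioned above (the section names are assumptions):

```markdown
## Description
<!-- What does this change, and why? -->

## Test plan
<!-- Steps a reviewer can follow to verify, including accessibility checks -->

## Devices tested
<!-- e.g. iPhone 15 / iOS 17.4, iPad Air / iPadOS 17 -->
```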

It's intense, and PRs take a week or so on average to get merged. That said, it is the highest-quality, best-architected codebase I have ever worked on.

If I were in your situation I'd push for the following:

  • all PRs require at least one approval, preferably two depending on team size
  • code is tested by someone else before being merged to main
  • use linters, Danger, etc. to pick up on trivial shit
  • a few manual checks like ensuring code is unit tested
  • a GitHub PR reviewer guide describing common issues to look for and the tone to use when leaving comments ("be nice", "make it clear when you are adding optional nit-picks", etc.)
  • encourage authors to add review comments to their own PRs for any bit of code that isn't immediately obvious
  • stretch goal: look into generating code coverage reports on your PRs and adding quality gates (example below)
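For that stretch goal, the tooling depends on your stack; with pytest-cov, for example, the quality gate can be a single CI step (the `myapp` package name and the 80% threshold are placeholders):

```sh
pytest --cov=myapp --cov-fail-under=80   # fails the build if coverage drops below 80%
```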
[–] jmcs@discuss.tchncs.de 1 points 1 year ago* (last edited 1 year ago)

At my work we use trunk-based development, with mandatory review before merging (enforced by tooling). Part of the review is ensuring proper test coverage, and it's supplemented by static code analysis.
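The tooling isn't named; on GitHub, one common way to enforce mandatory review is a CODEOWNERS file combined with a "require review from code owners" branch-protection rule (the paths and team names here are made up):

```
# .github/CODEOWNERS - matching owners are auto-requested and must approve
*            @example-org/platform-team
/payments/   @example-org/payments-team
```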

[–] DmMacniel@feddit.de 0 points 1 year ago (1 children)

Have you looked at Git Flow? It's pretty solid.

My team has a develop branch from which we cut feature branches. We commit our work there, and when we think it's feature-complete we build a snapshot version so that our QA can test it. Once that test is successful, and the code has been peer reviewed, it gets merged back into develop.

PRs are auto-built so that the feature can be integrated and tested automatically.
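For reference, the git-flow extension wraps those branching conventions in commands; roughly (the feature name is a placeholder):

```sh
git flow init                       # sets up main/develop and branch prefixes
git flow feature start my-feature   # branches feature/my-feature off develop
# ...commit, snapshot build, QA test, peer review...
git flow feature finish my-feature  # merges the feature back into develop
```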

[–] flintcedar@beehaw.org 1 points 1 year ago

There's also the trunk-based way, although I haven't used it heavily at work: https://trunkbaseddevelopment.com/

My team is very small (3 people). We mostly trust each other to just merge away without PR reviews, although we do ask for reviews when in doubt during development (not when ready to merge) - mostly to get ideas on where to put stuff.

At my previous job, we were a 15+ dev team doing mandatory PR reviews before merging, with shotgun review requests (ping @review_channel and pray). I hated it.