this post was submitted on 27 Nov 2024
115 points (75.6% liked)


"We have data on the performance of >50k engineers from 100s of companies. ~9.5% of software engineers do virtually nothing: Ghost Engineers.”

Last week, a tweet by Stanford researcher Yegor Denisov-Blanch went viral within Silicon Valley. “We have data on the performance of >50k engineers from 100s of companies,” he tweeted. “~9.5% of software engineers do virtually nothing: Ghost Engineers.”

Denisov-Blanch said that tech companies have given his research team access to their internal code repositories (their internal, private GitHubs, for example), and for the last two years he and his team have been running an algorithm against individual employees’ code. He said that this automated code review shows that nearly 10 percent of employees at the companies analyzed do essentially nothing, and are handsomely compensated for it. A paper about the project offers few details on how the team’s review algorithm works, but it says the algorithm attempts to answer the same questions a human reviewer might ask about any specific segment of code (a rough sketch of how such a rubric-driven review might be automated appears after the list), such as:

  • “How difficult is the problem that this commit solves?
  • How many hours would it take you to just write the code in this commit assuming you could fully focus on this task?
  • How well structured is this source code relative to the previous commits? Quartile within this list
  • How maintainable is this commit?”
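For a sense of what such a rubric-driven review could look like in practice, here is a minimal sketch in Python. The Stanford team has not published its implementation, so everything below (the `ask_model` stub, the prompt wording, and the use of `git show` to pull each commit's diff) is an assumption for illustration rather than a description of their system.

```python
# Minimal sketch of a rubric-driven, per-commit review (illustrative only;
# the paper does not disclose the actual model, prompts, or calibration).
import subprocess

# Rubric questions paraphrased from the questions quoted above.
COMMIT_RUBRIC = [
    "How difficult is the problem that this commit solves?",
    "How many hours would it take you to write the code in this commit, "
    "assuming you could fully focus on this task?",
    "How well structured is this source code relative to previous commits?",
    "How maintainable is this commit?",
]

def commit_diff(repo_path: str, sha: str) -> str:
    """Return the stats and patch for a single commit."""
    return subprocess.run(
        ["git", "-C", repo_path, "show", "--stat", "--patch", sha],
        capture_output=True, text=True, check=True,
    ).stdout

def ask_model(prompt: str) -> str:
    """Stand-in for whatever LLM backend performs the review."""
    raise NotImplementedError("plug in your own model call here")

def score_commit(repo_path: str, sha: str) -> dict[str, str]:
    """Ask each rubric question about one commit and collect the answers."""
    diff = commit_diff(repo_path, sha)
    return {q: ask_model(f"{q}\n\nCommit diff:\n{diff}") for q in COMMIT_RUBRIC}
```

Aggregating those per-commit answers into a per-engineer score, and calibrating them against human reviewers, is where the researchers say their contribution lies; none of that is shown here.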

Ghost Engineers, as determined by his algorithm, perform at less than 10 percent of the output of the median software engineer (that is, they are measured as at least 10 times less productive than the median worker).
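In other words, the flagging rule as described is a simple threshold against the median. A hypothetical illustration follows; the scores dictionary and the productivity scale are placeholders, not the study's actual data or units.

```python
from statistics import median

def ghost_engineers(scores: dict[str, float]) -> list[str]:
    """Flag engineers whose measured output falls below 10% of the median,
    per the threshold described in the thread (illustrative only)."""
    cutoff = 0.1 * median(scores.values())
    return [name for name, score in scores.items() if score < cutoff]

# Example with made-up numbers:
# ghost_engineers({"a": 12.0, "b": 10.0, "c": 9.0, "d": 0.4}) -> ["d"]
```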

Denisov-Blanch wrote that tens of thousands of software engineers could be laid off and that companies could save billions of dollars by doing so. “It is insane that ~9.5 percent of software engineers do almost nothing while collecting paychecks,” Denisov-Blanch tweeted. “This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”

The Stanford research has not yet been published in any form outside of a few graphs Denisov-Blanch shared on Twitter. It has not been peer reviewed. But the fact that this sort of analysis is being done at all shows how focused tech companies have become on the idea of “overemployment,” in which people work multiple full-time jobs without their employers’ knowledge, and on getting workers to return to the office. Alongside Denisov-Blanch’s project, there has been an incredible amount of investment in worker surveillance tools. (Whether a ~9.5 percent rate of ineffective workers is high is hard to say; it’s unclear what percentage of workers overall are ineffective, or what other industries’ numbers look like.)

Over the weekend, a post on the r/sysadmin subreddit went viral both there and on the r/overemployed subreddit. In that post, a worker said they had just sat through a sales pitch from an unnamed workplace surveillance AI company that purports to give employees “red flags” if their desktop sits idle for “more than 30-60 seconds” (meaning “no ‘meaningful’ mouse and keyboard movement”), attempts to create a “productivity graph” based on computer behavior, and pits workers against each other based on the time it takes them to complete specific tasks.

What is becoming clear is that companies are becoming obsessed with catching employees who are underperforming or who are functionally doing nothing at all, and, in a job market that has become much tougher for software engineers, are feeling emboldened to deploy new surveillance tactics. 

“In the past, engineers wielded a lot of power at companies. If you lost your engineers or their trust or demotivated the team—companies were scared shitless by this possibility,” Denisov-Blanch told 404 Media in a phone interview. “Companies looked at having 10-15 percent of engineers being unproductive as the cost of doing business.”

Denisov-Blanch and his colleagues published a paper in September outlining an “algorithmic model” for doing code reviews that essentially assesses software engineers’ productivity. The paper claims that their algorithmic code assessment model “can estimate coding and implementation time with a high degree of accuracy,” essentially suggesting that it can judge worker performance as well as a human code reviewer can, but much more quickly and cheaply.

I asked Denisov-Blanch if he thought his algorithm was scooping up people whose contributions can’t be judged by code commits and code analysis alone. He said that he believes the algorithm has controlled for that, and that companies have told him which specific workers should be excluded from analysis because their job responsibilities extend beyond just pushing code.

“Companies are very interested when we find these people [the ghost engineers] and we run it by them and say ‘it looks like this person is not doing a lot, how does that fit in with their job responsibilities?’” Denisov-Blanch said. “They have to launch a low-key investigation and sometimes they tell us ‘they’re fine,’ and we can exclude them. Other times, they’re very surprised.”

He said that the algorithm his team has developed attempts to analyze code quality in addition to simply counting the number of commits (or code pushes) an engineer has made, because commit count is already a well-known performance metric that can easily be gamed by pushing meaningless updates or by pushing and then reverting updates over and over. “Some people write empty lines of code and do commits that are meaningless,” he said. “You would think this would be caught during the annual review process, but apparently it isn’t. We started this research because there was no good way to use data in a scalable way that’s transparent and objective around your software engineering team.”
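To illustrate why raw commit counts are such a gameable metric, here is a rough sketch of cheap heuristics that would catch the kind of "meaningless" commits he describes: empty commits, whitespace-or-blank-line-only changes, and revert churn. The heuristics and git invocations are illustrative assumptions; the paper does not describe how its model handles this.

```python
# Rough sketch: filter out commits that add no substantive content before
# counting them (illustrative heuristics, not the researchers' method).
import subprocess

def _git(repo_path: str, *args: str) -> str:
    return subprocess.run(
        ["git", "-C", repo_path, *args],
        capture_output=True, text=True, check=True,
    ).stdout

def looks_trivial(repo_path: str, sha: str) -> bool:
    """Heuristically flag revert churn and commits whose diff is empty once
    whitespace and blank-line changes are ignored (a rough approximation)."""
    subject = _git(repo_path, "show", "--no-patch", "--pretty=%s", sha).strip()
    if subject.lower().startswith("revert"):
        return True
    diff = _git(
        repo_path, "show", "--pretty=format:",
        "--ignore-all-space", "--ignore-blank-lines", sha,
    ).strip()
    return diff == ""

def non_trivial_commit_count(repo_path: str, author: str, limit: int = 200) -> int:
    """Count an author's recent commits, excluding the trivially gameable ones."""
    shas = _git(
        repo_path, "log", f"--author={author}", f"-n{limit}", "--pretty=%H"
    ).split()
    return sum(1 for sha in shas if not looks_trivial(repo_path, sha))
```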

Much has been written about the rise of “overemployment” during the pandemic, where workers take on multiple full-time remote jobs and manage to juggle them. Some people have realized that they can do a passable enough job at work in just a few hours a day or less. 

“I have friends who do this. There’s a lot of anecdotal evidence of people doing this for years and getting away with it. Working two, three, four hours a day and now there’s return-to-office mandates and they have to have their butt in a seat in an office for eight hours a day or so,” he said. “That may be where a lot of the friction with the return-to-office movement comes from, this notion that ‘I can’t work two jobs.’ I have friends, I call them at 11 am on a Wednesday and they’re sleeping, literally. I’m like, ‘Whoa, don’t you work in big tech?’ But nobody checks, and they’ve been doing that for years.”

Denisov-Blanch said that, with massive tech layoffs over the last few years and a more difficult job market, it is no longer the case that software engineers can quit or get laid off and get a new job making the same or more money almost immediately. Meta and X have famously done huge rounds of layoffs, and Elon Musk claimed that X didn’t need those employees to keep the company running. When I asked Denisov-Blanch if his algorithm was being used by any companies in Silicon Valley to help inform layoffs, he said: “I can’t specifically comment on whether we were or were not involved in layoffs [at any company] because we’re under strict privacy agreements.”

The company signup page for the research project, however, tells companies that the “benefits of participation” in the project are “Use the results to support decision-making in your organization. Potentially reduce costs. Gain granular visibility into the output of your engineering processes.”

Denisov-Blanch said that he believes “very tactile workplace surveillance, things like looking at keystrokes—people are going to game them, and it creates a low trust environment and a toxic culture.” He said with his research he is “trying to not do surveillance,” but said that he imagines a future where engineers are judged more like salespeople, who earn commission or get laid off based on performance.

“Software engineering could be more like this, as long as the thing you’re building is not just counting lines or keystrokes,” he said. “With LLMs and AI, you can make it more meritocratic.”

Denisov-Blanch said he could not name any companies that are part of the study but said that since he posted his thread, “it has really resonated with people,” and that many more companies have reached out to him to sign up within the last few days.

top 50 comments
[–] irotsoma@lemmy.world 16 points 12 hours ago

I think most people misunderstand what software engineers do. Writing code is only a small portion of the work for most. Analyzing defects and performance issues, supporting production support that ends up with unqualified people due to the way support is handled these days, writing documentation or supporting those who do, design work, QE/QA/QC support, code reviews, product meetings, and tons of other stuff. That's why "AI" is not having any luck with just replacing even junior engineers, besides the fact that it just doesn't work.

[–] Mustard@lemmy.blahaj.zone 21 points 17 hours ago (1 children)

Those numbers seem really sus. "We came up with some kind of ~~bullshit metric~~ magic algorithm and it turns out that if you look at people who score 10% of the average, that's about 10% of the people!!!"

Uh... yeah buddy sure is.

[–] Valmond@lemmy.world 8 points 15 hours ago

Exactly.

"Why didn't you make more commits last sprint?" Is such an idiotic take, that a friend of mine got a month or so ago.

He worked with another programmer and they committed with his PC, so... Didn't help, he was still bashed for it because of "the numbers".

Sometimes they just need excuses to fire people too I guess.

[–] Pulptastic@midwest.social 3 points 12 hours ago

If people were gaming the performance monitoring systems to do less work, this is just another step in the cat and mouse game. They’ll figure out how to game this too.

[–] Feathercrown@lemmy.world 10 points 17 hours ago

The legendary 0.1x engineer

[–] mctoasterson@reddthat.com 33 points 23 hours ago (1 children)

This is bullshit. There are many people hired with the job title "Software Engineer" who don't sit and generate code, for a number of reasons.

You could be on a hybrid team that does projects and support, so you spend 80% of your time attending meetings, working tickets, working with users, and shuffling paper in whatever asinine change management process your company happens to use.

I have worked places where "engineers" ended up having to spend most of their time dicking around in ServiceNow/Remedy/etc. instead of doing their actual jobs. That's shitty business process design and shitty management, and not a reflection of the employee doing nothing.

[–] aesthelete@lemmy.world 6 points 17 hours ago

I spend most of my time in other time wasters like jira and fucking aha as well.

If I actually do anything, it only generates more work for me because I have to explain myself to fifteen different parties before making very minor, very necessary changes.

My company can't be the only one like this.

[–] CriticalMiss@lemmy.world 13 points 21 hours ago

I’ve seen it first hand, but I don’t know if 9.5% is the correct number. One software guy at my company has worked there for 11 years. He went through so much shit that at this point he doesn’t even sit under the software department anymore, he’s just under finance. All he does is upgrade GitLab once every quarter or so, and then he just watches TV and messes around with his homelab in his free time. He comes to the office a couple times a week for 3-4 hours to show everyone he is still alive, then goes home.

[–] nihilvain@lemmy.ml 43 points 1 day ago

This guy is such a waste of carbon. Don't be fooled by his title as a "researcher" or him being in Stanford. He's just another Tech Bro, pushing his "product" to greedy companies to make a few bucks for himself.

And his sponsor? This guy.

http://www.dailymail.co.uk/sciencetech/article-5932725/amp/Controversial-AI-detects-sexuality-IQ-used-spot-criminals-claims-inventor.html

Both deserve the deepest level of hell!

[–] mochisuki@lemmy.world 36 points 1 day ago (1 children)

The old adage of the engineer paid to know where to tap comes to mind: https://quoteinvestigator.com/2017/03/06/tap/?amp=1

Frankly anyone telling you they can measure the value of a line of code without any background knowledge is selling BS.

But I welcome this new BS system as the previous system of managers not so secretly counting total commits and lines added was comically stupid.

[–] Codandchips@lemmy.world 9 points 22 hours ago

You don't pay me for what I do, you pay me for what I know...

[–] 2ugly2live@lemmy.world 36 points 1 day ago

What a fucking snitch. 9.5% of engineers gotta go, but the CEO getting paid buckets and buckets of money isn't draining the company? Fire 9.5% of engineers that actually have knowledge and are skilled enough to demand a high price for their skills, or CEO fuck-all who comes in via Zoom once a quarter and couldn't open a PDF if their life depended on it. Hmm, what a hard choice 🤔

[–] bokherif@lemmy.world 5 points 19 hours ago

LET'S LAY THEM OFF. If everyone is unemployed we can actually work on eating the rich

[–] fruitycoder@sh.itjust.works 6 points 21 hours ago (1 children)

Tbh sounds like yet another "if we lived in another society this would be great" thing. How much money is spent trying to measure, manage and control people doing the actual work?

Honestly though, make an open-source version of this and run it within the team, make the results public to the team, and it's a great tool to motivate coworkers and make sure they are working without needing a manager (as often at least) to assign KPIs and get status reports.

[–] aesthelete@lemmy.world 4 points 17 hours ago* (last edited 17 hours ago)

I worked on a project that had at least 4x the amount of "management" people of various stripes talking about delivery dates and status as it did engineers. There was like 5 engineers on the project and 20ish people just talking bullshit and sitting in meetings.

[–] Viri4thus@feddit.org 29 points 1 day ago* (last edited 1 day ago)

How to scare managers into hiring me with my PoS black box software.

Step 1: make wild claims about wasted resources.

<- you are here

[–] Lucidlethargy@sh.itjust.works 11 points 1 day ago (1 children)

Was this written by an AI? I'm legitimately asking.

[–] OneCardboardBox@lemmy.sdf.org 3 points 21 hours ago

No. 404 media is written by people. I've personally been impressed by their reporting over the last year.

[–] Shanedino@lemmy.world 55 points 1 day ago (2 children)

They are called managers not ghost engineers.

[–] a4ng3l@lemmy.world 18 points 1 day ago

Or architects, infrastructure engineer… plenty of peripheral functions are hired as « IT engineers » and not pushing code in a repo. What a weird article.

[–] stoly@lemmy.world 47 points 1 day ago (1 children)

Alternatively they are on an engineering team and providing their expertise via other means beyond code submission. This entire thing sounds like a sledgehammer trying to do the work of a scalpel.

[–] bestboyfriendintheworld@sh.itjust.works 9 points 1 day ago (2 children)

Reviews, planning, teaching, mentoring, testing produce little code.

[–] wrekone@lemmy.dbzer0.com 4 points 17 hours ago

One of the best engineers I've worked with produced very little code at that point in his career. His primary responsibility was to do the research and planning that empowered the rest of the team to move quickly. Without a doubt, that team was far more productive due to his efforts. When needed, he could quickly whip out some top notch code, and he was heavily involved in the code review process. Writing code just wasn't how he could deliver the most value.

[–] MirthfulAlembic@lemmy.world 2 points 16 hours ago

Developing standards, best practices, conventions, etc. One of the most valuable people on my team wrote some incredibly high-quality automations a few years ago, and the only coding he does at this point is updates to them when necessary. By volume, he's easily bottom 5% this year, but we'd be much worse off without his expertise/advice and the fact that he advocates for the team.

This is classic shit management metrics. It would take some time for the rot to set in after using a cudgel approach to a team, and by the time it did, the assholes responsible would have fucked off elsewhere with their huge bonuses.

[–] 01189998819991197253@infosec.pub 35 points 1 day ago (1 children)

The thing about AI startups, is they always try and walk it in.

[–] ArtVandelay@lemmy.world 10 points 1 day ago

Absolutely full of ludicrous displays

[–] BassTurd@lemmy.world 60 points 1 day ago (1 children)

It's a long article that I admittedly didn't read all of. I got to the part where it said the details of his algorithm are basically unknown, which means his data means nothing. If someone can't provide proof of their claims, they have no merit.

An LLM that's built entirely on code repo data, and is somehow claiming workers "do virtually nothing" without any sort of outside data, is insane.

[–] JollyG@lemmy.world 15 points 1 day ago (1 children)

One of my big beefs with ML/AI is that these tools can be used to wrap bad ideas in what I will call "machine legitimacy". Which is another way of saying that there are many cases where these models are built up around a bunch of unrealistic assumptions, or trained on data that is not actually generalizable to the applied situation, but will still spit out a value. That value becomes the truth because it came from some automated process. People can't critically interrogate it because the bad assumptions are hidden behind automation.

[–] aesthelete@lemmy.world 1 points 16 hours ago* (last edited 16 hours ago)

Yeah it's similar to a computer spitting out 42 as the answer to life, the universe, and everything.

[–] MTK@lemmy.world 151 points 1 day ago (4 children)

"We have to let you go as from our analysis you do mostly nothing, mr senior engineer"

1 week later everything is crashing and no one knows why

Ah yes, the classic evaluation of stupid shit that ends up shooting the company in the foot.

[–] BearOfaTime@lemm.ee 71 points 1 day ago (1 children)

Yep.

This question doesn't address what else these engineers do besides write code.

Who knows how many meetings they're involved in to constrain the crazy from senior management?

[–] Enkers@sh.itjust.works 46 points 1 day ago* (last edited 1 day ago) (2 children)

Makes me think of a trend in F2P (free-to-play) gaming, where there was a correlation between play time and $ spent, so gaming companies would try and optimise for time played. They'd psychologically manipulate their players to spend more time in game with daily quests, battle passes, etc, all in an effort to raise revenues.

What they didn't realise was that players spent time in game because it was fun, and they bought mtx because they enjoyed the game and wanted it to succeed. Optimising for play time had the opposite effect, and made the game a chore. Instead of raising revenues, they actually dropped.

This is why you always have to be careful when chasing metrics. If you pick wrong, it can have the opposite effect that you want.

[–] aesthelete@lemmy.world 5 points 16 hours ago (1 children)

> This is why you always have to be careful when chasing metrics. If you pick wrong, it can have the opposite effect that you want.

I don't know where the adage came from but I find it very true:

Once you turn a metric into a target, it ceases to be a good metric.

[–] halcyonloon@midwest.social 3 points 15 hours ago

Goodhart's law! One of my personal favorites after working in the field of healthcare regulatory reporting.

[–] MTK@lemmy.world 18 points 1 day ago (1 children)

When your data "scientists" don't understand the difference between causation and correlation

[–] gravitas_deficiency@sh.itjust.works 102 points 1 day ago (9 children)

This fundamentally misunderstands the domain of software engineering. Most of the time, with an actually difficult problem, the hardest part is devising the solution itself. Which, you know, often involves a lot of thinking and not that much typing. And that also entirely puts aside how neurodivergent people - who are somewhat overrepresented in STEM - often arrive at solutions in very different ways that statistical models like these simply don’t account for.

[–] GreenSkree@lemmy.world 1 points 13 hours ago

And beyond this, solving the problem is just the baseline. Solving the problem well can take an immense amount of time, often producing solutions that appear overly simplistic in the end.

I recently watched a talk about ongoing Java language work (Project Valhalla). They've been working on this particular set of performance improvements for years without a lot to show for it. Apparently, they had some prototypes that worked well but were unwieldy to use. After a lot of refinement, they have a solution that seems completely obvious. It takes a lot of skill to come up with solutions like that, and this type of work would be unjustly punished by algorithms like this.

[–] ribboo@lemm.ee 2 points 16 hours ago

And that’s after you’ve located and understood the problem. That part is often far more complicated and time consuming than the fix itself.

[–] socsa@piefed.social 51 points 1 day ago* (last edited 1 day ago) (2 children)

I don't doubt the thesis, but reviewing commit history is next to useless. I'm probably not top 50% of activity within our organization but I've literally invented most of our tech and my name is on the patents.

If anything, it's the people who spend all day making pedantic code review comments just to boost git actions who have nothing better to do.

[–] ribboo@lemm.ee 4 points 16 hours ago

When I was a junior, about 95% of my day was writing code. Nowadays? 30-40% maybe. The rest is meetings, code review, and helping colleagues who call me, among other things.

Good luck finding that, Mr. Algorithm. Commit history is basically useless due to another factor as well: for bugs, finding the actual problem and the reason for it is often far more time-consuming than the fix itself.

[–] mctoasterson@reddthat.com 5 points 1 day ago

Yeah I was just about to say one obvious flaw in his methodology is that people could show up as "high productivity" by adding thousands of lines of worthless comments.

[–] gedaliyah@lemmy.world 91 points 1 day ago (3 children)

I am not a coder, nor do I work in or have much knowledge of the industry. But I can tell immediately that this looks like some extra fancy BS. Designing a program to detect the quality and quantity of a person's code commits sounds like AI mumbo-jumbo from the start. Even if it were technically possible, it would not tell you whether someone is an effective communicator, coordinates with other team members, shares productive ideas, etc.

The headline should have been:

"Consulting Firm Desperately Tries to Justify its Existence."

[–] fluxion@lemmy.world 55 points 1 day ago* (last edited 1 day ago) (2 children)

Some easy tasks involve pumping out mounds of code/commits, some tasks involve monumental amounts of inter-department cooperation or design discussions with open source communities online or at yearly conferences and result in relatively small amounts of code especially in terms of LOC/day.

This study purports to take this into account to some degree, but I call bullshit. I can barely explain this level of nuance to anyone above my first-line manager; everyone else is just like, "What's taking so long? Can we throw an intern on it to speed things up?" And it's like, sure... after you hire them full-time and spend the next couple years training them. Oh, you want me to do that too?

The whole tone of this "researcher" makes the bias so clear but I'm sure we'll have all kinds of fancy new monitoring and lay-offs of good people thanks to these sorts of bullshit metrics.

If you want to know whether employees are a waste of space or not, hire good fucking managers that know what they are doing. If they farm that out to tools like this, it's a good sign they don't.

[–] trolololol@lemmy.world 2 points 14 hours ago

Hire good managers? Can I just promote your intern to manager instead?

[–] yildolw@lemmy.world 33 points 1 day ago (1 children)

The thing about being a big organization is that you need to have slack capacity most of the time in order to be able to go quickly in a different direction at certain times. If you don't have excess capacity sitting idle, an unforeseen event can paralyze you

[–] sudo42@lemmy.world 38 points 1 day ago

How much hubris/ignorance does this guy have to believe his algorithm is accurate enough to detect that “10%” of employees are deadbeats? What precision! If it found 50% deadbeats, that would mean the algorithm might be working.
The worst companies have only 10% deadbeats? Any company with only 10% deadbeats means their management team is doing a great job hiring/managing. Any company with only 50% deadbeat managers would be outstanding.
