this post was submitted on 17 Oct 2024
277 points (98.6% liked)

Science Memes

top 22 comments
[–] solsangraal@lemmy.zip 69 points 3 weeks ago
[–] Bishma@discuss.tchncs.de 40 points 3 weeks ago (1 children)

I look forward to 6 hours of BobbyBroccoli videos about this.

[–] RamenDame@lemmy.world 2 points 3 weeks ago (1 children)

I love his videos. I found him one lacy Sunday afternoon binge watching his Schön videos. He and many others are my reason for subscribing to Nebula.

[–] CrayonRosary@lemmy.world 1 point 2 weeks ago

I, too, spend my Sunday afternoons draped in lace.

[–] troyunrau@lemmy.ca 33 points 3 weeks ago (2 children)

Ewww - the whole point of peer review is to catch this shit. If peer review isn't working, we should be going back to monographs :)

[–] decerian@lemmy.world 76 points 3 weeks ago (3 children)

I disagree there - peer review as a system isn't designed to catch fraud at all, it's designed to ensure that studies that get published meet a minimum standard for competence. Reviewers aren't asked to look for fake data, and in most cases aren't trained to spot it either.

Whether we need to create a new system that is designed to catch fraud prior to publication is a whole different question.

[–] Cephalotrocity@biglemmowski.win 55 points 3 weeks ago (2 children)

Whether we need to create a new system that is designed to catch fraud prior to publication is a whole different question

That system already exists. It's what replication studies are for. Whether we desperately need to massively bolster the amount of replication studies done is the question, and the answer is 'yes'.

[–] FinalRemix@lemmy.world 22 points 3 weeks ago* (last edited 3 weeks ago)

But that's not S E X Y! We need new research, to earn grants and subsidize faculty pay!

[–] witty_username@feddit.nl 16 points 3 weeks ago (1 children)

An institute for reproducibility would be awesome

[–] Imgonnatrythis@sh.itjust.works 3 points 3 weeks ago

Agree! Maybe efforts spent working on projects assigned from the IFR would be rewarded with grant funds or grant extensions for novel projects.

[–] troyunrau@lemmy.ca 20 points 3 weeks ago (1 children)

We could award a certain percentage of grants to replication work, and grad students should be able to get degrees doing replication studies. Unfortunately everyone is chasing total paper count and impact factor rankings and shit.

[–] Rolando@lemmy.world 19 points 3 weeks ago (1 children)

Maybe we should consider replication studies to be "service to the community" when judging career accomplishments. Like, maybe you never chaired a conference but you published several replication studies instead. You could get your Masters students and/or undergrads to do the replications. We'd need journals that focus on replication studies, though.

[–] Imgonnatrythis@sh.itjust.works 6 points 3 weeks ago

Nah. Enough of this service-to-the-community stuff. It always ends up meaning we do more work for free that someone else profits from. It should be incentivized with grant funds. The studies I'd most want to see replicated are the industry-sponsored ones. Industry-sponsored studies should have to pay into a pool, and certain studies would be selected for replication analysis with those funds.

[–] evasive_chimpanzee@lemmy.world 9 points 3 weeks ago

Yeah, reviewing is about making sure the methods are sound and the conclusions are supported by the data. Whether or not the data are correct is largely something that the reviewer cannot determine.

If a machine spits out a reading of 5.3, but the paper says 6.2, the reviewer can't catch that. If numbers are too perfect, you might be suspicious of it, but it's really not your job to go all forensic accountant on the data.

[–] Wolf314159@startrek.website 20 points 3 weeks ago

You're conflating peer review and studies that verify results. The problem is that verifying someone else's results isn't sexy, doesn't get you grant money, and doesn't further your career. Redoing the work and verifying the results of other "pioneers" is important but thankless work. Until we incentivize doing the boring science by funding all fundamental science research more, this kind of problem will only get worse.

[–] bjoern_tantau@swg-empire.de 24 points 3 weeks ago

Everyone laughing about troll physics, this guy did troll chemistry. Nobody's laughing now.

[–] Zagorath@aussie.zone 15 points 3 weeks ago (1 children)

Did he work with copper nanotubes, perhaps?

I'm getting the impression he worked with brass balls

[–] D61@hexbear.net 12 points 3 weeks ago (1 children)

Probably not a good idea to phrase it as "earned" retractions.

It shouldn't be a competition to see who is the worst.

[–] buh@hexbear.net 6 points 3 weeks ago

Respect is earned, not given 😤

[–] samus12345@lemmy.world 6 points 3 weeks ago* (last edited 3 weeks ago)
[–] Dippy@beehaw.org 3 points 3 weeks ago

Dude even fakes his smile