This really just shines a light on a more significant underlying problem with scientific publication in general: there's simply way too much of it. "Publish or perish" creates enormous pressure to churn out papers whether they're good or not.
As an outsider with a PhD researcher in the family, I suspect it's an issue of how the institutions measure success. How many papers? How many cites? Other metrics might work, but probably not as broadly. I assume they also care about the size of your staff, how much grant money you bring in, patents held, etc.
I suspect that, short of a Nobel prize, it is difficult to objectively measure how someone is advancing scientific progress (without a PhD committee to evaluate it).
Yeah, that's my sense too, as someone within low-level academia. Bibliometrics and other attempts to quantify research output have been big in the last few decades, but I think they have made the problem worse, if anything.
It's especially messy when we consider the kinds of progress and contribution that Nobel prizes can't account for, like education and outreach. I really like how Dr Fatima explores this in her video How Science Pretends to be Meritocratic (37:04).
The saying "when a measure becomes a target, it ceases to be a good measure" (Goodhart's Law) has been making the rounds online recently, and this is a good example of it.
Ironically, this is a common problem when training AIs too, where it's usually called reward hacking or specification gaming: the model optimizes the proxy reward rather than the goal the proxy was supposed to stand in for.