https://www.edge.org/response-detail/11587
Is this the original text?
It discusses two metrics for evaluating researchers: one, h, seems to relate to how often a researcher's publications are cited; the other, r, relates to replicability.
It seems to say that impact is what matters most right now; r is not considered very important, and the h metric is what is generally used.
But the h metric also has drawbacks: for example, to raise their h, authors may publish papers that are controversial but low on r.
So supplementing it with the r metric would be beneficial.
Replicability
BRIAN KNUTSON
Associate professor of psychology and neuroscience, Stanford University
Since different visiting teachers had promoted contradictory philosophies, the villagers asked the Buddha whom they should believe. The Buddha advised: “When you know for yourselves . . . these things, when performed and undertaken, conduce to well-being and happiness—then live and act accordingly.” Such empirical advice might sound surprising coming from a religious leader, but not from a scientist.
The opening first introduces a scoring strategy beginning with R, then says that academia implicitly respects this R-based scoring but does not explicitly use it. The reason is that academia has a score beginning with H, which determines researchers' prestige, and anything that falls below this H score is relegated to "other." But this H-based scoring method has a downside: to raise their citation counts, authors will often put forward easily contested, controversial claims, which does nothing to ensure academic quality.
“See for yourself” is an unspoken credo of science. It is not enough to run an experiment and report the findings. Others who repeat that experiment must find the same thing. Repeatable experiments are called “replicable.” Although scientists implicitly respect replicability, they do not typically explicitly reward it.
To some extent, ignoring replicability comes naturally. Human nervous systems are designed to respond to rapid changes, ranging from subtle visual flickers to pounding rushes of ecstasy. Fixating on fast change makes adaptive sense—why spend limited energy on opportunities or threats that have already passed? But in the face of slowly growing problems, fixation on change can prove disastrous (think of lobsters in the cooking pot or people under greenhouse gases).
Paragraph 1: gives the definition of repeatable experiments. In general, one criterion for evaluating academic research is citation count, which the h-index can represent. A higher h-index does indicate greater eminence, but some people will publish controversial findings just to get a high h-index, which is actually bad for research.
Reading 2 (R score): the first paragraph says that the H score is now widely used to rate scientists' publication records; one major limitation is that some scientists will publish unreplicable work for the sake of the H score, so the quality of the work cannot be guaranteed.
Cultures can also promote fixation on change. In science, some high-profile journals, and even entire fields, emphasize novelty, consigning replications to the dustbin of the unremarkable and unpublishable. More formally, scientists are often judged based on their work’s novelty rather than its replicability. The increasingly popular “h-index” quantifies impact by assigning a number (h) which indicates that an investigator has published h papers that have been cited h or more times (so, Joe Blow has an h-index of 5 if he has published five papers, each of which others have cited five or more times). While impact factors correlate with eminence in some fields (e.g., physics), problems can arise. For instance, Dr. Blow might boost his impact factor by publishing controversial (thus, cited) but unreplicable findings.
Why not construct a replicability (or “r”) index to complement impact factors? As with h, r could indicate that a scientist has originally documented r separate effects that independently replicate r or more times (so, Susie Sharp has an r-index of 5 if she has published five independent effects, each of which others have replicated five or more times). Replication indices would necessarily be lower than citation indices, since effects have to first be published before they can be replicated, but they might provide distinct information about research quality. As with citation indices, replication indices might even apply to journals and fields, providing a measure that can combat biases against publishing and publicizing replications.
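The h-index and the proposed r-index share the same construction: the largest k such that k items each have a count of at least k. The essay gives no algorithm, but a single helper can compute either index from per-paper citation counts or per-effect replication counts. A minimal sketch in Python follows; the sample counts are hypothetical, made up purely for illustration.

```python
def hirsch_style_index(counts):
    """Largest k such that at least k items each have a count >= k.

    Applied to citations per paper this yields the h-index; applied to
    independent replications per effect it yields the proposed r-index.
    """
    counts = sorted(counts, reverse=True)
    k = 0
    while k < len(counts) and counts[k] >= k + 1:
        k += 1
    return k

# Hypothetical data for illustration only.
citations = [12, 9, 7, 5, 5, 3, 1]   # five papers cited 5+ times
replications = [8, 6, 4, 2]          # three effects replicated 3+ times

print(hirsch_style_index(citations))     # h = 5
print(hirsch_style_index(replications))  # r = 3
```

Note how the example bears out the essay's point that r would necessarily run lower than h: replications can only accumulate after an effect is first published and cited.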
Paragraph 3: I don't remember the first half; the latter part says the R score can be applied to research on public policy (such as health, education, etc.). Paragraph 3 extends the R strategy into the public-policy domain, explaining how it could apply to improving public policy, since repeated testing of many established policies reveals that they are actually flawed and in need of improvement.
A replicability index might prove even more useful to nonscientists. Most investigators who have spent significant time in the salt mines of the laboratory already intuit that most ideas don’t pan out, and those that do sometimes result from chance or charitable interpretations. Conversely, they also recognize that replicability means they’re really onto something. Not so for the general public, who instead encounter scientific advances one cataclysmic media-filtered study at a time. As a result, laypeople and journalists are repeatedly surprised to find the latest counterintuitive finding overturned by new results. Measures of replicability could help channel attention toward cumulative contributions. Along those lines, it is interesting to consider applying replicability criteria to public-policy interventions designed to improve health, enhance education, or curb violence. Individuals might even benefit from using replicability criteria to optimize their personal habits (e.g., more effectively dieting, exercising, working, etc.).
Replication should be celebrated rather than denigrated. Often taken for granted, replicability may be the exception rather than the rule. As running water resolves rock from mud, so can replicability highlight the most reliable findings, investigators, journals, and even fields. More broadly, replicability may provide an indispensable tool for evaluating both personal and public policies. As suggested in the Kalama Sutta, replicability might even help us decide whom to believe.