Citation rings are a huge problem, but couldn't the sentiment-analysis side be readily addressed? There's a difference between papers that merely cite a paper without judgment (+0), papers whose results explicitly support it (+1), and papers that explicitly reject it (-1). Such an AI could also bypass "citation analysis" entirely and instead ingest the whole scientific literature, finding refutations in papers that don't even cite each other.
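To make the three-way labeling concrete, here is a toy sketch in Python. Real systems (scite, for instance) train classifiers on citation contexts; the cue phrases and the `label_citation` function below are illustrative assumptions, not a real model or anyone's actual API.

```python
# Toy three-way citation-sentiment labeler: +1 (supporting),
# 0 (neutral mention), -1 (disputing). The cue lists are
# hand-picked examples, not an exhaustive or validated lexicon.

SUPPORT_CUES = {"confirms", "replicates", "consistent with",
                "supports", "in agreement with"}
DISPUTE_CUES = {"contradicts", "fails to replicate", "refutes",
                "inconsistent with", "at odds with"}

def label_citation(context: str) -> int:
    """Label the sentence surrounding a citation."""
    text = context.lower()
    # Check dispute cues first: refutations are the rarer,
    # higher-value signal we don't want drowned out.
    if any(cue in text for cue in DISPUTE_CUES):
        return -1
    if any(cue in text for cue in SUPPORT_CUES):
        return +1
    return 0

print(label_citation("Our results are consistent with Smith et al. (2019)."))  # 1
print(label_citation("This fails to replicate the effect reported in [12]."))  # -1
print(label_citation("Prior work has studied this problem [3, 7]."))           # 0
```

A real implementation would replace the keyword matching with a trained text classifier, but the output contract is the same: every citation context collapses to one of three labels, which is exactly the distinction a raw citation count erases.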
In practice, there are two problems: 1) this requires sophisticated AI, and 2) many people will keep assuming that high citation counts = credible (i.e. treating +0 and +1 as equivalent).
But this method will help researchers go beyond "it's highly cited, therefore I trust it": it'll help students and rigorous scholars see how well-supported a paper actually is, regardless of its popularity. And it'll surface refutations that may otherwise be hard to find; for example, when a scholar who's famous in one field but unknown in another publishes a refutation in the latter field, where it may not be widely seen.
Ultimately (though this is far off: theoretically possible but highly difficult technically), we can also imagine AIs that ingest the whole literature, gain deep cross-domain knowledge, and are trained to detect poor methodologies, automatically highlight insights from one discipline that can enrich another, or (ideally) make their own judgments about the merits of any finding based on the knowledge they've acquired. After human review, such systems could systematically help "clean up" the scientific literature of bad methodologies and ideas, freeing scientists to spend less time on dead ends.
That last paragraph won't be doable for a while, but the first three seem readily within reach of today's AI technologies (and appear to be what scite is doing).