I reviewed a paper submitted to a smallish journal. It presented an algorithm for an allocation task and compared its performance to that of several other algorithms from the literature that perform the same task. The results (i.e. the allocations) can be evaluated by their usage of several different resources, so one result might use less of resource A but more of resource B, and so on.
My opinion was that, although the algorithm was poorly presented and the paper was nigh-incomprehensible, the results seemed good, so the authors deserved another shot at explaining themselves; I therefore did not recommend outright rejection.
The first review round ended with a unanimous “major revisions” verdict.
Then I was asked to review the second version of the paper as well. In this new version, the algorithm was compared to a much broader range of algorithms. The problem is that even though the set of comparison algorithms changed, the charts remained exactly the same: looking at the two versions side by side revealed no difference whatsoever (no explicit numerical data was provided).
What is worse, the change was not even one-to-one. In the first submission, the algorithm (let’s call it X) was compared against the same three algorithms in every category (resource A usage, resource B usage, etc.), while in the second submission each resource comparison involved a different set of algorithms: for example, X was compared to B, C, and D in resource A utilization, but to C, E, and F in resource B utilization, and so on.
Nonetheless, each chart in the second submission was identical to one from the first submission.
At this point, I was fairly certain that at least the second round of comparisons had been faked outright, i.e. that the authors had simply changed the labels on the charts.
When I asked one of my senior coworkers, I was advised to ignore the issue and not raise a ruckus, since doing so has a high chance of backfiring: we are not an academic institution but the R&D department of a fairly small private firm, so we carry very little political weight and scientific reputation.
I am wondering whether I really should raise this issue with the editor (with whom my firm has business relations, as we are partners in several government-funded projects) or heed my colleague's advice.
While the paper has very little chance of being published, since the second submission is also nigh-unreadable, one of its co-authors has an extremely high h-index (100+), so I feel that if my suspicion is well founded, it really should be brought to light.