I’m a PhD student.
An associate professor in my department has established a collaboration with another university, where a group of people has developed a product. The professor does not own the product, but he is one of the collaborators with that university.

I recently wrote an article that involves this product and included as authors the people who helped produce the work. The professor in question is not an author, because he was neither invited to the paper nor did he contribute; I simply did not need him for this paper. On the day the paper was camera-ready, I received an email from the professor stating his disappointment that he had not been involved in this publication.

This person is not involved in my supervision team whatsoever. It seems to me that he expects any publication that is related to the product to have his name on it.

Is this acceptable?

I am about to submit a paper in which one of the algorithms I used is heavily based on the code available in one of the TensorFlow tutorials. In fact, I mostly copied the code from the page and made the necessary modifications for my specific case. I did cite, in the paper, both TensorFlow and the page, and disclosed that the neural net architecture I was using was based on the one on the page. The licensing terms of the code (Apache 2.0) state that the user is free to build upon the code and redistribute it.

I am not in CS, and am applying the model to a specific problem in my field. However, in copying the code (which I believe will not be disclosed), I am afraid I might be committing academic misconduct. On the other hand, if that were the case, using open-source libraries would also be frowned upon, given that the user is essentially copying code.

Will I be committing academic misconduct, or doing anything that is ethically frowned upon in academia, by submitting results parts of which are based on copied code?

PS: In response to a comment, I cited TensorFlow and the webpage in the paper, which will be published if accepted, but the code itself (which was heavily based on the code available on the webpage) won’t be posted anywhere (as far as I know).
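
For illustration only, here is a minimal, hypothetical sketch of what such an adaptation could look like, with the source and the modifications recorded directly in the code. The function name, layer sizes, and tutorial URL below are placeholders of my own, not the actual code from my paper or from any specific tutorial page.

    # Hypothetical sketch: a model adapted from a public TensorFlow tutorial,
    # with the adaptation and its source acknowledged in comments.
    import tensorflow as tf

    def build_model(num_features: int, num_classes: int) -> tf.keras.Model:
        """Classifier adapted from a TensorFlow tutorial.

        Source: https://www.tensorflow.org/tutorials (the specific tutorial
        page is cited in the paper). Modifications: the input shape and the
        output layer were changed to fit the domain-specific dataset.
        """
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(num_features,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dropout(0.2),        # regularization added for the new data
            tf.keras.layers.Dense(num_classes),  # output size changed from the tutorial
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )
        return model

Even if the adapted code is never released, keeping such a header in the file, in addition to the citation in the paper, makes the provenance clear to co-authors and to anyone who later inherits the project, and it is in keeping with the spirit of the Apache 2.0 attribution terms.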

I reviewed a paper submitted for a smallish magazine. It presented an algorithm to perform some allocation task and compared its performance to that of several other algorithms from the literature performing the same task in several ways (the results, i.e. the allocation, can be evaluated based on the usage of several different resources, so a result could use less of resource A, but more of resource B, and so on).

My opinion was that, although the algorithm was badly presented and the paper was nigh-incomprehensible, the results seemed good, so the authors deserved another chance to explain themselves better; I therefore did not suggest rejecting it outright.

The first review round went through with a unanimous “major revisions” verdict.

Then I was asked to review the second submitted version of the paper too. In this new version, the algorithm had been compared to a much broader range of algorithms. Problem is: even though the algorithms it was being compared to changed, the comparison charts remained exactly the same, and looking at them side-by-side revealed no difference whatsoever (no explicit numerical data was provided).

What is worse is that the change was not even one-to-one. In the first submission, the algorithm (let’s call it A) was compared with the same three other algorithms in all categories (resource A usage, resource B usage etc.) while in the second submission, each resource comparison involved different algorithms, so for example, A was compared to B,C and D in resource A utilization, but it was compared to C, E and F in resource B utilization, and so on.

Nonetheless, each chart in the second submission was identical to one from the first submission.

At this point, I was fairly certain that at least the second round of comparisons had been completely faked, i.e. the authors just changed the labels on the charts.

When I asked one of my senior coworkers, I was advised to just ignore the issue and not raise a ruckus, since doing so has a high chance of backfiring: we are not an academic institution but the R&D department of a fairly small private firm, so we have very little political weight and scientific reputation.

I am wondering whether I really should raise this issue with the editor, with whom my firm has business relations (we are partners in several government-funded projects), or whether I should heed the advice of my colleague.

While the paper has very little chance of being published, as the second submission is also nigh-unreadable, a co-author of this paper has an extremely high h-index (100+), so I feel that if my suspicion is well-founded, it really should be brought to light.

In computer science, many people develop algorithms and demonstrate their effectiveness by running experiments that compare them with other existing approaches in their research papers. In these experiments, as far as I know, there are two possible ways to purposely hide data:

  • Ashley has developed algorithm X and decided to experimentally compare it with algorithm Y. She compared them on benchmark sets A, B, and C. She found that on benchmark C, algorithm Y outperformed X, so she decided not to report on C. Accordingly, in the paper she claims only that algorithm X outperformed Y on A and B.

  • Bill has developed algorithm P. He compared P with other algorithms Q, R, and S on some benchmark instances. He found that P outperformed Q and R, but not S, on these benchmark instances. He decided not to report the comparison between P and S. Accordingly, in the paper he claims only that algorithm P outperformed Q and R.

Are Ashley and Bill’s actions considered research misconduct?

In a conference with an author rebuttal phase, I received a review (a strong reject) asking me to compare my work to a paper that is not even on the same problem as mine. I firmly believe that this review is biased and that the reviewer either is an author of the paper they are asking me to compare against or was planning to submit a similar paper.
Is it okay to email the chairs and ask them to look into my paper and its reviews?

During my PhD studies, I published a journal article (one of four in total) in a prestigious journal in my field. I gave my code to my PhD supervisor, but he messed things up in the lab and lost it (I did not care either and lost my own copy, as I am pursuing different directions).

Two years after publication, my PhD advisor wanted to commercialize my PhD work and wanted me to develop the code again without being willing to credit me for my efforts. I found his emails harassing, malicious, and blackmailing, and I stopped responding to him, directing his emails to my spam folder.

Out of desperation, he developed some code himself and submitted a corrigendum in which he clearly tried to push an agenda to suit his commercialization efforts (I know what I am talking about; believe me on this). This included falsifying the earlier findings simply because he did not understand my work and could not implement it, so he presented an alternative algorithm that is clearly inferior. From my experience with that algorithm, I know from his description of the implementation in the corrigendum that the results he presented are clearly made up.
My PhD advisor has basically faked results in a corrigendum to suit his commercialization efforts.

What are my options?

  1. Can I report this to the journal? (I have already informed the journal that this corrigendum was submitted without my approval.)

  2. Can I offer to re-implement the code for my ex-advisor, provided he gives me due credit?

What should I do?

My graduate student utilized some particles produced by a collaborator’s lab in some animal imaging experiments, which were designed and analyzed by my student and me. The protocol for synthesizing the particles has already been published by my collaborator, and a technician in the collaborator’s lab made the particles according to that protocol.

I shared some of the resulting in vivo images from this study with my collaborator, but unfortunately, I subsequently discovered that the collaborator used some of these images in a fraudulent manner (grossly misrepresenting them as preliminary data in some grant applications). The collaborator’s institution conducted a formal investigation of this and other incidents and found that scientific misconduct had occurred.

I would like to publish my graduate student’s image data, in part to make sure that a legitimate representation of the data is in the literature, but mostly because the work was publicly funded and represents the hard work of many good people. If the misconduct had not occurred, I probably would have considered including the collaborator as a co-author on the publication by getting them more involved in the manuscript, even though the particle prep was not novel. But now there are many reasons why I do not wish to publish something with this collaborator!

Is documented scientific misconduct involving the data from this study a valid reason for not including this collaborator as a coauthor of a paper describing this study? Can I simply acknowledge the technician who provided the particles and reference the prior publication of the protocol?

By professional conduct, I refer to issues related to how a researcher manages his/her research group, including funding (both research and salary) and human resources issues.

For example, the Office of the Independent Adjudicator in the UK reviews student complaints against higher-education providers. But its service is restricted to students and is not directly relevant to the professional conduct of a researcher.

An example of such an organisation in the financial field is the Financial Conduct Authority, which oversees the professional conduct of both organisations and individuals.