Basically, my research does not introduce a particularly new methodology; rather, we combined several existing methods to demonstrate a result that has not been reported before under certain circumstances (a rebalanced dataset, etc.). Would this be an acceptable submission to a conference or journal, or should I go in a different direction and look for something unique?

For reference, this is in the field of machine learning / network security.
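
To make the kind of work concrete, a "rebalance the dataset, then reuse existing classifiers" pipeline might look something like the sketch below. This is purely an illustration: the libraries, the synthetic data, and the classifier here are stand-ins, not the actual methods or data from the study.

    # Purely illustrative: a generic "rebalance, then reuse existing methods"
    # pipeline on synthetic data. This is NOT the actual method or data
    # described in the question.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from imblearn.over_sampling import RandomOverSampler  # assumes imbalanced-learn is installed

    # Synthetic, heavily imbalanced two-class data (a stand-in for, e.g., intrusion records).
    X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # Rebalance the training set only, then apply a standard, existing classifier.
    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)
    clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)

    print("F1 on the minority class:", f1_score(y_test, clf.predict(X_test)))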

I am an amateur mathematician, not a member of academia. I’ve created a small algorithm for a twin prime sieve. It’s like a prime number sieve, but for twin primes of the form (6k-1, 6k+1). However, those details aren’t pertinent to this question.
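
Just to illustrate the form mentioned above, the sketch below lists twin primes of the form (6k-1, 6k+1) using a plain textbook Sieve of Eratosthenes. It is only an illustration of that form, not the algorithm in question, whose details are deliberately omitted.

    # Illustration only: a standard Sieve of Eratosthenes used to list twin
    # primes of the form (6k-1, 6k+1). NOT the algorithm from the question.
    def twin_primes_6k(limit):
        # Standard sieve up to `limit`.
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, limit + 1, p):
                    is_prime[multiple] = False
        # Apart from (3, 5), every twin prime pair has the form (6k-1, 6k+1).
        return [(6 * k - 1, 6 * k + 1)
                for k in range(1, limit // 6 + 1)
                if 6 * k + 1 <= limit
                and is_prime[6 * k - 1]
                and is_prime[6 * k + 1]]

    print(twin_primes_6k(100))
    # [(5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]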

I wrote a paper using LaTeX and a template for a math publication, naively submitted it for consideration, and was rejected with no reasons given.

With no basis for modifications, no reason to believe that submitting to any other publication would go any better, and because publishing is not really a serious driver for me, I naively decided to throw it onto Wikipedia.

The Wikipedia reviewer commented that the information might be notable, but that without citations to articles discussing it there was no way to tell, and so the article could not be published on Wikipedia.

As I said, formal publication isn’t that important to me. However, my experience leaves me wondering how a person in this situation actually could publish. No publication -> no citations -> no publication allowed. It’s a catch-22 situation.

At this point, I’m more interested in publishing as an exercise in figuring out how it is actually done. What if I thought up something that was genuinely useful?
How would I go about it?

One challenge is that, not being in academia, I have no research resources other than Google or a public library. As far as I can tell, my algorithm is original.

A second challenge is that, because the work is original, I have no sources to cite in a bibliography.

A third challenge, I suppose, is the lack of a more senior academic to serve as my advisor.

I am currently reading a paper which, while it contains valuable research and an actual field trial of the methods described (rare in this particular area), is packed with so many newly coined acronyms that it is nearly impossible to read.

It became clear that if I were not already interested in reading this paper carefully, it would be difficult to recognize its value to the field.

This got me thinking: similar to the studies that have been conducted using résumés with different names at the top (representing different genders, nationalities, etc.), have there been any comparison studies using “dummy” papers that were identical except for their use of acronyms?

In my own writing I often find it convenient to use acronyms to speed things up, and of course they help keep papers within length limits. But how much is too much? What are the pitfalls of using too many acronyms?

Or, to put the question another way: at what point do too many acronyms make an academic publication too hard to read?

Is there any research into the effect of acronyms on the readability or potential impact of an academic paper? Perhaps a metric, such as the ratio of acronyms to non-acronym words? Or, at the very least, best practices on the subject?


In one particular paragraph of the paper I mentioned, every fifth word, on average, is an acronym (a few are repeated, but I believe this is cancelled out by the fact that some are recursive acronyms). Of the ten unique acronyms used in that passage, two could be considered common in the literature.

There is no table of abbreviations either, so by the time one has finished the paper, it is time to lubricate the scroll wheel of one’s mouse.


It would be interesting to see a study of pairs of papers that were identical except for their liberal vs. conservative use of acronyms: which version was published first, if at all? Which was cited more?
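
As a rough illustration of the “ratio of acronyms to non-acronyms” idea, one could compute something like the following over a paragraph or a whole paper. The all-caps heuristic and the sample sentence here are assumptions for illustration only, not an established readability measure.

    # Rough sketch of an "acronym density" metric: the share of words that are
    # runs of two or more capital letters. Heuristic for illustration only.
    import re

    def acronym_density(text):
        words = re.findall(r"[A-Za-z][A-Za-z-]*", text)
        acronyms = [w for w in words if re.fullmatch(r"[A-Z]{2,}", w)]
        return len(acronyms) / len(words) if words else 0.0

    sample = "The IDS uses DPI and a CNN to flag anomalous TCP flows."
    print(round(acronym_density(sample), 2))  # 0.33 -- one word in three is an acronym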

I have been working on a research problem, independent of any advisor, for the past few months, and I think I have a result. The problem I am investigating has n and K as input. The naive approach would take C(n!, K) * (n!)^2 = O((n!)^(K+2)) time, which is ridiculously bad. I managed to come up with, and prove the correctness of, a 2-approximation algorithm that runs in O(Kn^2).
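
For context on where that bound comes from: the naive approach enumerates all C(n!, K) possible choices and spends (n!)^2 time on each, so the crude inequality C(m, K) <= m^K already gives the stated blow-up:

    \[
      \binom{n!}{K}\,(n!)^{2}
        \;\le\; (n!)^{K}\,(n!)^{2}
        \;=\; (n!)^{K+2},
      \qquad \text{using } \binom{m}{K} \le m^{K}.
    \]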

However, this is absolutely not a hot topic; there are not many researchers working on it. On the other hand, there are practical applications of my algorithm (off the top of my head, I can think of three significant ones), and I am also confident that it beats all published approximation algorithms.

So my question is: Would a reputable journal accept my paper given that it isn’t really a hot topic now?


Also, sorry if my English is not adequate. It is not my first language.

In our lab, we hypothesized that a technique T1 should be able to solve a certain problem with high performance. In line with our hypothesis, we got an excellent result. We started writing a short paper to submit to a conference whose deadline is four days away. The paper is complete except for some proofreading. We have not yet submitted the results.

Yesterday, just for fun, I applied a different technique, T2, to the same problem. Surprisingly, it achieved even better performance than T1.

We were wondering: is it okay to write a “failure” paper stating that the hypothesis failed because of so-and-so (which is tough to analyze given the deadline)? Some of my colleagues suggested not disclosing the performance of T2 until T1 is published, so that I could later do a comparative study between T1 and T2. Would that be okay?

Note: T1 and T2 are very different, and it does not make sense to write about both techniques in the same paper. Rewriting the paper now would also be difficult.

Update: After going through the answers, comments, and suggestions, we are submitting the T1 paper. Thank you very much to all the learned academics here on academia.SE.