As I understand the reviewers' comments, our proposed solution adds nothing new to the body of knowledge, even if we obtain better results than the state-of-the-art work.

The reviewers believe our work is just an application of the existing literature. Many published works combine integration with a pre-existing method to improve results, while we combine pre-processing with a pre-existing method to the same end. We were rejected, while the others were accepted.

Any advice is appreciated.

I’m doing my master’s thesis on the translation quality assessment of academic textbooks for a particular language pair. With respect to sampling, I have some questions.

I’m assessing the academic textbooks of a given field. Out of about 70 translations, I have decided to sample about 35 books that are the most widely read and come from different subsections of the field. According to my literature review, the best unit of translation, and therefore of assessment, is the sentence. So I have decided to do stratified random sampling from the sentences of the books. I am an independent researcher, my time and resources are limited, and analyzing each sentence with the chosen model takes a lot of time.
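As a minimal sketch of the design described above (each book is a stratum, and a fixed number of sentences is drawn at random from it), with all book titles and sentence lists purely hypothetical:

```python
import random

def stratified_sample(books, per_book, seed=0):
    """Draw `per_book` sentences at random from each book (stratum)."""
    rng = random.Random(seed)  # fixed seed makes the sample reproducible
    sample = {}
    for title, sentences in books.items():
        k = min(per_book, len(sentences))  # guard against short books
        sample[title] = rng.sample(sentences, k)
    return sample

# Hypothetical corpus: two books with dummy sentences.
books = {
    "Book A": [f"A-sentence {i}" for i in range(100)],
    "Book B": [f"B-sentence {i}" for i in range(80)],
}
picked = stratified_sample(books, per_book=2)
print(sum(len(v) for v in picked.values()))  # 4 sentences in total
```

With 35 books and 2 sentences per book, the same function would yield the 70-odd sentence sample discussed below.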

Here, the problem of generalizability comes to the surface. I deliberately overestimate the number of pages per book: I assume each book has 500 pages, while in reality the average is far lower, maybe around 250. Likewise, I take the upper bound for sentences per page: each page usually has 10 to 20, so I assume 20. Multiplying the number of books, pages, and sentences per page, I arrive at roughly 300,000 sentences.

To limit the sample size, I’ve set the confidence level at 90% and the margin of error at 10%. By doing so, out of roughly 300,000 sentences a sample of 68 (that is, about 2 sentences from each book, selected completely at random) would suffice. Is that right? Do you think this sample size is defensible?
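The sample-size arithmetic can be checked with the standard Cochran formula (z ≈ 1.645 for 90% confidence, maximally conservative p = 0.5), here sketched with an optional finite-population correction:

```python
import math

def cochran_sample_size(z, margin, p=0.5, population=None):
    """Cochran's minimum sample size, with an optional
    finite-population correction when `population` is given."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 90% confidence (z ~ 1.645), 10% margin of error, ~300,000 sentences
n = cochran_sample_size(1.645, 0.10, population=300_000)
print(n)  # 68
```

At a population this large the finite-population correction is negligible, so the required n is essentially the same whether the total is 300,000 or 350,000 sentences.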

By the way, my advisor believes that instead it is best to pick 3-6 pages from each book and assess them with a less detailed, less time-consuming model. In some respects he is right, but then the workload would be huge (at least with the present model). I believe in extracting the most from individual sentences; he believes in extracting somewhat less from a larger unit of text (at least a single page).

We want to compare the scientific outputs of researchers across various fields (such as theoretical physics versus organic chemistry) quantitatively. Is the following formula appropriate?

Let us consider researcher A in theoretical physics and researcher B in organic chemistry. First, we obtain the MIF (mean impact factor) of each researcher’s field. Then, let S(A) and S(B) denote the sums of the impact factors of the papers each researcher published during a year. Now we divide S(A) and S(B) by the MIF of the corresponding field. The ratio of the resulting values provides a reasonably fair basis for comparing the outputs of these two researchers, who belong to essentially different fields of research.
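A small sketch of the proposed comparison, with all impact-factor numbers made up purely for illustration:

```python
# Field-normalized output: S(X) is the sum of impact factors of X's
# papers in a year; MIF is the mean impact factor of X's field.
# All values below are hypothetical.

def normalized_output(sum_if, mean_field_if):
    return sum_if / mean_field_if

S_A, MIF_A = 24.0, 4.0   # researcher A, theoretical physics (hypothetical)
S_B, MIF_B = 18.0, 3.0   # researcher B, organic chemistry (hypothetical)

ratio = normalized_output(S_A, MIF_A) / normalized_output(S_B, MIF_B)
print(ratio)  # 1.0 -> equal field-normalized output in this toy example
```

Note that a ratio of 1 here does not mean the raw sums are equal, only that each researcher's output stands in the same proportion to the typical impact factor of their own field.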

I am doing economic analyses of natural resources based on processes seen in the real world (geology) for my PhD thesis. The model I have built is largely inspired by a well-established model in the literature, but my approach to deriving economic values is quite different.

In detail, the original model consists of 7 case studies for which data were taken from the literature. The research estimates a type of resource to be explored/exploited worldwide. The concept is not really innovative and involves many approximations, and the assessment methods used to calculate economic values are basic mathematics. However, it rests on a strong scientific foundation and requires a deep understanding of natural processes.

My own model (inspired by the prior work) focuses on 4 of the same case studies, but I have made estimates for half of the worldwide potential occurrences of the same resource type (plus one more). My study has better accuracy and uses different assumptions to derive the economic values. My calculations differ for 3 of the case studies; for the remaining one I use the same method but different data, and I obtain different results.

Question: Is it acceptable to publish my economic results, even though the modelling concept is similar to, or inspired by, someone else’s work?

I have cited the original research when comparing my results. That research is by a worldwide specialist who is well regarded in the scientific community, and I would not want any conflict with him in the future.

I am starting my graduate studies in computer science this year and am unsure which domain to choose for quality research (some new innovation or discovery): virtualization, distributed computing, containers, or blockchain.

My main issue is that I understand algorithms and their efficiency analysis, but they do not interest me. Similarly, I can handle my undergraduate-level mathematics, but it does not interest me either. I have a good knowledge and understanding of operating systems, virtualization, and container concepts.

I have researched this online and found that people normally just compare the efficiency of one algorithm against another for problems such as memory sharing and process switching. Is there any other way to do something new in the fields of cloud computing, containers, or blockchain?

It would be quite helpful if experts from these domains could explain what type of research is currently going on in them and how such research can be done (while staying away from algorithms and complex mathematics).

Thanks to all those who answered and commented. To address some comments and give more context to my question:

  • I am looking to select my research area from the broad domains of Cloud (virtualization and administration), DevOps methodologies and tools (optimizing workflows and processes), container technology (for example, Docker and Kubernetes), and blockchain optimization (I am aware this one would involve complex algorithms).

  • I’m not looking to write PHP scripts or JS (I know that is engineering); I personally dislike front-end scripting. Some of my projects include setting up a private cloud infrastructure for a company during an internship, writing shell scripts for CI/CD purposes, and creating a custom RHEL ISO with preinstalled software (no, not templates). Basically, I am proficient in Python, Bash, Java, and C, and I have a good understanding of how OS, cloud, and IT infrastructure generally work. You could say I am interested in data center or infrastructure optimization.

So, the main thrust of my question is: I find it hard to believe that computer science research (at least the development of new techniques, tools, and platforms) is impossible without deep knowledge of mathematics and algorithm design (I may be wrong, but that’s the point of the question). Is there established research work or progress in these domains that does not involve heavy use of algorithms and mathematics such as integration and differentiation? Please note, I am trying to avoid complex use and modification of algorithms and their mathematics. I am interested in discovering the methodologies of such research (I can hardly find much on the Internet).
