A methods section is very common in areas such as medicine. However, in math or computer science papers I rarely find such a section when no experiments or observational studies were conducted. Nevertheless, it is a requirement for the computer science journal I submitted to. While nearly none of the papers published by the journal provide one, I still have to fulfill this requirement.

While some general guidelines can be found, for example in this question,
it would be very helpful to have some good examples for such a section in existing math or computer science literature.

Edit: As clarification: I already submitted and got a revision request that did not mention a missing methods section. I then submitted a revision, and after a few hours I got a second revision request asking for a methods section (and a few other formal requests). I know that I could discuss with the editor whether I really need one, but maybe it would actually improve the paper.

I’m currently writing the methodology section of my thesis and wondered if I was including too much.

In the methodology section I have currently included:

  • Alternative approaches I considered
  • The pros/cons of those approaches/applicability to this project
  • Why I selected the approach I used

This information is needed somewhere, but it is resulting in a very long methodology section that goes much deeper into “why I chose this approach” than into “here is how I did the research”.

Does this “Why” discussion belong in the methodology section of a research paper or should it be placed somewhere else? If so, where?

edit

As an example: I can either survey people or perform an analysis of existing discussions on the topic. There are pros and cons to each approach (sample size, recruitment, etc.). The approach I choose will impact the research, but I want to discuss the differences between the possible approaches and explain my reasoning for my choice.

I’m searching for any universities (or other centers of learning) around the world that employ new methods of physics education. An example of a non-university center of learning is the Perimeter Institute in Canada. They have ties to a university but are a separate entity (as I understand it) but still offer courses.

When I say ‘new method’, I mean they don’t employ the standard lecture model, where a professor or grad student stands in front of the class and talks; this includes classes where clickers and other simple interaction tools may be used. New methods include the ‘reverse lecture’, where students study notes/books/etc. outside class and then come to class to solve difficult problems and discuss what they’ve learned (typically in groups with professor oversight). Any lecture here, if present, would usually be limited to 5-10 minutes. More standard homework could be assigned as normal.

I’ve seen new methods applied to select classes in different universities but I’ve yet to find an organization that applies any new methods as the norm, not the exception.


I’m doing a PhD in the social sciences and want to use an observational method to analyse audio recordings of social work conversations. I will be using an established behavioural coding manual (the Motivational Interviewing Skills Code 2.5) to assign behavioural codes to the interaction. The problem I have is that I need a way of demonstrating that my coding is reliable. Typically, on studies I have worked on in the past, there would be a small sample of recordings that would be re-coded by at least one other person to establish inter-rater reliability. As a sole researcher, I cannot do this.

Is there a way around this issue?
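For context, the agreement statistic I would normally compute with a second coder is Cohen’s kappa, which corrects raw agreement for chance. A plain-Python sketch on hypothetical codes (the code labels are made up; if intra-rater re-coding of my own sample after a washout period turned out to be an acceptable fallback, the same calculation would apply to my two coding passes):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's code frequencies."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical behaviour codes from two coding passes over one recording.
pass1 = ["GI", "Q", "R", "GI", "Q", "R", "GI", "Q"]
pass2 = ["GI", "Q", "R", "GI", "R", "R", "GI", "Q"]
kappa = cohens_kappa(pass1, pass2)
```

This is only a sketch of the statistic itself, not a claim that intra-rater reliability substitutes for inter-rater reliability.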

My question is about formulating the initial hypotheses for testing. But before that, I need to choose and define my methodological approach.
I am studying first year graduate economics. The subject of my thesis is measuring the impact of labour and capital inputs on production and income at the national level (or industrial level).

Now, as far as the theoretical basis goes, most of my uni professors are either neoclassical or ex-planned-economy-recently-turned-neoclassical. I can’t say for sure, because I don’t know them that well yet, but based on what I’ve seen, for example in the scientific databases available to me for journal reading, I haven’t found any post-Keynesian journals subscribed to by my uni. That’s not a problem in itself, as I need to be acquainted with the mainstream methodology and theory first, before I can compare methodologies or theories.
(The reason I got a bit distracted by the theoretical side is that I got a bit confused about my topic. As I’m going to measure input impact on production AND income, isn’t my topic actually supply and demand analysis? I think it is, in a big way. As such, I really think the post-Keynesian school could tell a lot about the demand side, but there is no way for me to access post-Keynesian economic journals…)

However, my question is not regarding theory, but methodology.
As I need to measure the impact of capital and labour inputs on production, I’ve so far found three large sets of methodologies that can be used:
1) Growth accounting.
2) Aggregate production function analysis.
3) Econometric modelling.

As far as Growth accounting (1) goes, it’s basically using the Solow form of Cobb-Douglas production function for measuring relative factor shares, changes in factor shares and changes in TFP (total factor productivity). EUKLEMS and WIOT are databases used for these sorts of measures.
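To make the growth-accounting idea concrete, here is a minimal sketch of the Solow residual calculation. All numbers are toy values, not EUKLEMS data, and the capital share alpha = 0.3 is an assumed constant rather than a measured factor share:

```python
# Growth-accounting sketch: decompose output growth into factor
# contributions and a Solow residual (TFP growth), assuming constant
# factor shares (alpha for capital, 1 - alpha for labour).
import math

def solow_residual(y0, y1, k0, k1, l0, l1, alpha=0.3):
    """TFP growth as the log-difference of output not explained by
    the share-weighted growth of capital and labour inputs."""
    dy = math.log(y1 / y0)
    dk = math.log(k1 / k0)
    dl = math.log(l1 / l0)
    return dy - alpha * dk - (1 - alpha) * dl

# Toy numbers: output grows 4%, capital 3%, hours worked 1%.
tfp = solow_residual(100, 104, 300, 309, 50, 50.5)
```

In actual growth accounting the factor shares are taken from the data (and often averaged across adjacent periods), not fixed as here.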

As for APFs (2), I’ve read a lot about various forms of aggregate production functions, but I’m still not sure how to estimate them. The basic idea, if I’m correct (my prior econometric training is weak, or lacking, which is my own fault), is that you take time-series data on the inputs (e.g. total hours worked per year for the labour input and GFCF for the capital input), apply linear least squares to the inputs AND the output, and thereby find the production function parameters, i.e. to make that much Y you need this much L and that much K. But if I want to test the production function econometrically, I need a linear model of it, so I use logarithms to obtain the linearized form.
But which step comes first? Do I first estimate the parameters and then linearize, or do I linearize to more easily find the parameters?
I might have gotten this confused and backwards…
Now, I might as well use CES and/or translog production functions. (Please correct me if I’m wrong, but aren’t the CES and translog forms related, with the Cobb-Douglas function a special case of each?)
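On the ordering question above, my current understanding (please correct me) is that linearization comes first: you take logs of the Cobb-Douglas form, ln Y = ln A + α ln K + β ln L, and then run least squares on the log-linear equation, so the estimated coefficients are the elasticities directly. A minimal sketch on synthetic data, with every parameter value made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic log-inputs; "true" parameters are invented: ln A = 1.0,
# alpha = 0.3 (capital elasticity), beta = 0.7 (labour elasticity).
lnK = rng.normal(5.0, 0.5, n)
lnL = rng.normal(4.0, 0.5, n)
lnY = 1.0 + 0.3 * lnK + 0.7 * lnL + rng.normal(0.0, 0.05, n)

# Linearize first, then estimate: OLS on the log form recovers the
# elasticities as regression coefficients.
X = np.column_stack([np.ones(n), lnK, lnL])
coef, *_ = np.linalg.lstsq(X, lnY, rcond=None)
lnA_hat, alpha_hat, beta_hat = coef
```

With real national-accounts data there are well-known complications (simultaneity, multicollinearity between K and L, non-stationarity), so this only illustrates the mechanics, not a defensible estimation strategy.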

And as for (3), econometric modelling, as strange as it may sound even to me, this method looks the most straightforward. I can build a VAR (vector autoregression) model using GDP, GO (gross output), or GVA (gross value added) as my dependent variable, with labour (employees, or employees plus self-employed persons; hours worked by employees, or hours worked by employees plus the self-employed) and capital (gross fixed capital formation, or “capital services”, though I’m still not sure how the latter is measured) as my inputs. I can make a structural VAR, and I can even make a VECM (accounting for the levels, i.e. incorporating non-stationary I(1) data as well as stationary I(0) data into the model). Then I would analyse the impulse-response functions and draw inferences that way.
(I’ve done some initial testing of the data, and it may be that the labour input series is of a different integration order than the output (GVA) and capital series. I don’t know how or why. But if that were the case, would I need to use an ARDL model? Suggestions welcome, please :))
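To check my own understanding of the VAR mechanics, here is a toy sketch in plain NumPy (simulated data, hypothetical coefficients; in practice I would use a proper econometrics package for lag selection, inference, and cointegration tests):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stationary VAR(1) coefficient matrix for two series
# (say, log output growth and log labour input growth).
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0.0, 0.1, 2)

# Estimate the VAR(1) equation-by-equation with OLS:
# y_t = A_hat y_{t-1} + e_t  (no intercept in this toy setup).
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# For a VAR(1), the impulse responses at horizon h are just powers
# of the coefficient matrix.
irf_3 = np.linalg.matrix_power(A_hat, 3)
```

This only illustrates why the approach feels mechanical; the hard part with real data is exactly the integration-order and identification issues mentioned above.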

So, I guess, after all this writing, my question is: how do I choose the right methodology? I want to use them all, because they seem similar and connected, and it looks as though I could make comparisons and inferences across them. But I’m actually overwhelmed by them right now. I need to submit my initial paper very soon, and I don’t want to miss something important if I discard any one of these methodologies; on the other hand, using them all would be just too much for me right now. I’ve had little formal macroeconomic training, as my bachelor’s was in BBA, so you can see why I’m not confident in choosing the right method to accompany the theory.

Many thanks in advance
R.

I am currently a student at a tech school and otherwise unemployed. I’ve never been a scientist or published a paper.

So I am a little skeptical of some results I have been hearing about from experiments using computer simulations, and I would like at some point to do my own experiment. (The topic is in the area of gender differences, homosexuality, global warming, etc., i.e. a topic that can provoke emotions.)

I figured that I should probably learn as much about research as I can before I do any experiment. On the other hand, if I learn exactly how others did their experiments, that would bias how I construct mine: I would be prone to solving problems the same way rather than coming up with a new way to test things.

So does it make any sense to limit the study of existing experiments for the sake of more diverse testing, or should I try to learn as much about them as I can?
