Is there any research/study/survey/… that has looked at the effect of background music in educational videos on learning outcomes?

I am aware of (1), but that study focuses on educational virtual environments.


(1) Fassbender, Eric, Deborah Richards, Ayse Bilgin, William Forde Thompson, and Wolfgang Heiden. “VirSchool: The effect of background music and immersive display systems on memory for facts learned in an educational virtual environment.” Computers & Education 58, no. 1 (2012): 490-500. https://scholar.google.com/scholar?cluster=4113404139067026741&hl=en&as_sdt=0,22

I am just about to finish my Master's thesis. During the analysis, I wrote two functions that simplify certain tasks and put them into an R package. Furthermore, I extended an existing R package by parallelizing it.

While these contributions have no bearing on the scientific outcome of my thesis and merely saved me time during the analysis, I wonder: should I mention them in my thesis?

If yes, in which section? My idea was to add a section to the discussion called something like “additional outcomes of this work” and link to the packages on my GitHub account. However, since the package I wrote has not yet been released publicly and the package I parallelized has not yet been updated, I wonder whether I have enough “serious stuff” to mention.

If not, what are the reasons for not mentioning such results in a scientific work? If I were the supervisor, I would want to know about such outcomes of a thesis, even though they only affect the technical analysis part.

I have been working extensively on a project for which I have written a couple of papers that have not yet been submitted for publication. I am also preparing a website where I want these materials to be freely available. I would like to avoid rewriting everything for the website, so I am wondering about the best way to reuse literal sentences from the papers (once they are published).

As far as I know, Open Access solutions sometimes allow the author to retain copyright, but I'm not sure whether that is always the case. What kinds of publishing licenses would allow me to copy-paste literal material from my paper onto my website?

Overall, is this a good idea or should I strive to rephrase content even if not legally bound to do so?

I’m a communications engineering undergraduate at London South Bank University. I’m curious whether it’s possible to gain acceptance to a PhD program without first completing a Master's.

Specific to my case, I’m performing very well (near-perfect GPA, significant coursework completed through self-study, a significant percentage of Master's coursework completed during undergraduate study). I was wondering whether that would have any impact on my ability to skip the Master's degree.

I need to start learning about a new research topic, and I’ve been collecting articles for about a year now. I have collected over 400 articles, spanning roughly 40 years.

I haven’t had time to read them because I was working on finishing a few papers. I’m finished now, so I want to get to it.

The question is: should I start from oldest to newest, or from newest to oldest?

I can see some pros and cons in both approaches:

Oldest to newest:

  • Pro: It feels right, since that is the way the research actually happened.
  • Con: It will take me a long time to become acquainted with the latest developments in the field, and I cannot start working until I do.

Newest to oldest:

  • Pro: I will quickly develop some knowledge (though not in depth) of the actual state of the field. This allows me to start working, at least on minor things.
  • Pro: It will alert me to wrong paths taken previously in the field, so when I encounter them in the older literature I’ll already be aware of them.
  • Con: It will be considerably more difficult to follow, especially at first, since I’ll be diving into the analysis of the latest developments of an unknown field.
  • Con: I’ll inevitably end up jumping back to older literature anyway, risking going down the rabbit hole of research.

I’m currently preparing a paper for publication and am trying to construct the figures. I’m finding it more difficult than making figures for a report, mainly because journals expect one file per figure, regardless of whether it contains subfigures. This leads to some problems/hurdles:

  • I can’t use LaTeX’s subfig package, so I have to manually place the (a), (b), and (c) labels on the figure and keep the caption in sync if their ordering changes.
  • Journals’ rules about figure submission vary quite a lot: some accept PDFs, some want vector graphics, others raster images.
  • I have to somehow merge the output from various programs into a single file; for example, diagrams made with TikZ may belong to the same figure as a graph plotted with Python’s matplotlib.
  • Because subfigures have to be merged this way, my workflow becomes quite convoluted whenever I modify one subfigure and then have to regenerate the single figure file.
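For the matplotlib-only part of this, one partial workaround I've been experimenting with is to generate the panel labels programmatically, so the (a)/(b)/(c) letters stay attached to their axes if I reorder panels, and the whole composite exports as a single vector file. This is only a sketch (the placeholder data and the output filename `figure1.pdf` are made up), and it doesn't solve merging externally generated panels such as TikZ output:

```python
# Sketch: label subplots (a), (b), (c) programmatically and export one file,
# so the panel letters move with their axes if the ordering changes.
import string

import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted export
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for letter, ax in zip(string.ascii_lowercase, axes):
    ax.plot([0, 1], [0, 1])  # placeholder data for illustration
    # Panel label placed in axes coordinates, top-left corner
    ax.text(0.02, 0.95, f"({letter})",
            transform=ax.transAxes, va="top", fontweight="bold")

fig.tight_layout()
fig.savefig("figure1.pdf")  # one vector file per composite figure
```

For figures that mix TikZ with external PDFs, I suspect a small LaTeX document using the standalone class (compiled once per figure, with the matplotlib output pulled in via \includegraphics) could play the merging role, but I haven't settled on a clean pipeline yet.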

I work with point cloud data.

I developed a method for segmenting objects automatically; this can also be done manually. As comparison data, I extracted 200 objects manually from my clouds. I discussed this with a colleague and referred to this data as my ground truth. He told me this is wrong, because “ground truth” implies data collected in the field. Is he right? If so, what other term can be used in a publication?

Jan

I have completed my B.Tech in Software Engineering.

I am currently working at Cognizant and am wondering what I should do after finishing my first year there. Some say to take the GATE exam and the GRE; others say not to study, because you can't once you are working. I really need some guidance on how I should approach my career.

A lot of people are grumbling about USPTO 9,430,468, “Online peer review system and method”, which describes (if I’m reading it right) Elsevier’s system of reallocating papers rejected from Journal A to a potentially more suitable Journal B. Parts of the patent describe the general process of peer review, and (partly because of this) some of the criticisms currently circulating seem to treat this as a patent on the process of online peer review itself, which would clearly be ridiculous. However, the Electronic Frontier Foundation has picked this as its ‘stupid patent of the month’ for August, pointing out various examples of a similar process called cascading peer review dating back to at least 2009 (example 1; example 2). The EFF clearly has concerns about the novelty of this patent.

My question: what, if any, are the differences between Elsevier’s now-patented ‘waterfall system’ and ‘cascading peer review’?

(I’m not sure whether this would be a better fit for Law.SE, but I thought the audience here would be more familiar with the relevant parts of the peer review process.)

Full disclosure: I’m asking this as a displacement activity while I should be reviewing a paper.

Given how well journal clubs and the like work, it has occurred to me a few times that reviewing a manuscript as a group would probably result in faster turnaround and more detailed comments. However, journals typically approach individual reviewers and place confidentiality restrictions on them, so this mechanism wouldn’t work. Does anyone know of any journals, ideally in the life sciences, that support or even encourage collaborative peer review of manuscripts?