I am pursuing my master's degree in clinical medicine. I have just started studying statistics, and I recently came across an approach called Bayesian statistics. To my surprise, many articles endorse this approach over what they call frequentist statistics, which is the only approach I have encountered in all the papers I have read before.

Honestly, I find the Bayesian approach very difficult to understand. So my question is: is the Bayesian approach the future of medical research? I ask this knowing that I have not yet noticed any research that uses it.
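To make the contrast concrete, here is a minimal toy sketch of a Bayesian analysis (my own illustration, with made-up numbers, not taken from any of the articles mentioned): a Beta prior on a treatment response rate is updated by binomial trial data, yielding a full posterior distribution for the rate rather than a single point estimate and p-value.

```python
# Toy Bayesian update for a response rate: Beta prior + binomial data.
# All numbers below are hypothetical, purely for illustration.
prior_a, prior_b = 1.0, 1.0          # uniform Beta(1, 1) prior on the rate
successes, failures = 18, 12         # hypothetical trial: 18 of 30 responded

# Beta-binomial conjugacy: the posterior is
# Beta(prior_a + successes, prior_b + failures).
post_a = prior_a + successes
post_b = prior_b + failures

# Posterior mean of the response rate.
post_mean = post_a / (post_a + post_b)
print(post_mean)   # 19/32 = 0.59375
```

The frequentist analysis of the same data would instead report the point estimate 18/30 with a confidence interval; the Bayesian posterior additionally lets you make direct probability statements about the rate.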

I have a discrete data set showing the number of customers entering a shop per day, with the following descriptives:

Std: 193

As I want to run a Monte Carlo simulation with the number of customers per day as input, I want to find the best-fitting model to describe the data.

My problem is that the data fit a Weibull distribution very well (p = 0.9) but do not fit any discrete model at all. Is it possible to use the Weibull distribution to generate data (using bins, rounding to the nearest integer), or would this be regarded as bad scientific practice?
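Mechanically, the rounding approach is straightforward. A minimal sketch, assuming hypothetical shape and scale parameters (substitute the values fitted to your own data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Weibull parameters -- replace with the shape (k) and
# scale (lambda) estimated from your own customer data.
shape_k, scale_lam = 1.3, 210.0

# Draw continuous Weibull variates, then round to the nearest integer
# to obtain discrete "customers per day" counts.
continuous = rng.weibull(shape_k, size=10_000) * scale_lam
customers = np.rint(continuous).astype(int)

print(customers.mean(), customers.std())
```

Since the scale of the data (hundreds of customers) is large relative to the rounding grid of 1, discretizing this way barely distorts the fitted distribution; the scientific-practice question is separate from the mechanics.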

I am writing my master's thesis on an econometrics topic. It is very closely related to another paper by NW. Since the papers are so closely related, I am using their notation in order to draw comparisons (which I state before introducing my model).

NW motivate their model using known findings from their field. They demonstrate that the standard model does not behave well under their assumptions by (mathematically) decomposing the standard model's objective function. This approach is very common; however, every author formats the argument to fit their own purpose. In my paper, I want to build on NW's argument, that is, to make the same point and extend it. Do I need to cite their equation (the decomposition of the objective function) in any special way?

To paraphrase what I currently have:

… NW illustrate this by decomposing the expectation of the objective function into a signal term and a noise term:

$$E[Q(\beta)] = E[g(\beta)]'\, W\, E[g(\beta)] + \operatorname{tr}\!\big(W\,\Omega(\beta)\big)$$

(I copy their notation exactly.)
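For context, my reading of this decomposition (not NW's exact statement) is the standard identity for the expectation of a quadratic form: if $g(\beta)$ has mean $E[g(\beta)]$ and covariance $\Omega(\beta)$, then

$$E\!\left[g(\beta)'\, W\, g(\beta)\right] = E[g(\beta)]'\, W\, E[g(\beta)] + \operatorname{tr}\!\big(W\,\Omega(\beta)\big),$$

i.e. a "signal" term built from the mean of the moment conditions plus a "noise" term driven by their covariance. This identity itself is textbook material, which is part of why I am unsure how to attribute it.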

The reason I am unsure is that NW's line of argument is not unique to their paper, but the way they frame it is.

I am going to submit a paper in a couple of weeks. The results are good, but my adviser wants me to report them in a particular way that makes them look artificially better. I asked a statistics professor about this method, and he emphatically said it was dishonest and the wrong way to report results. I argued this with my adviser, and he would have none of it. I told him the results are good anyway, so why report them in this inflated way, and he insists that this is simply a difference of opinion on the definition of this metric. He says he has published many papers with this metric and that other (major) people in the field have used it too. But when I google around, I can't find a single source that jibes with his definition.

Clearly I am unable to convince my adviser. I am uneasy being first author on a paper with dishonest reporting (especially since honest reporting would not affect the worthiness of the article). Some of the other coauthors are uneasy too, but we're just grad students. I'm tempted to change the numbers to the correct ones right before I submit the paper. What do you recommend I do?

I finished my PhD in pure mathematics in the US in late 2013 (2 published papers, 1 preprint). After almost 4 years of postdocs in pure math (1 year, unproductive) and in medical imaging and computer vision (3 years, 3 papers), I am now joining a research position in industry in France. Of those postdoc years, 3 were in France (where I currently am) and 1 was in the US.

I'll be working alongside a professor of statistics in France who is a consultant for my company, and the company also encourages publications (after it secures patents, etc.). Besides, I am collaborating with two people from academia and industry, and in 2-3 years or so I hope to have 2-3 more publications.

My goal is to defend my habilitation in statistics/machine learning (https://en.wikipedia.org/wiki/Habilitation) 3-4 years from now, as I am very interested in holding a joint academic-industrial position in the future. To elaborate, I would like to hold a professor position in academia alongside a researcher/research-consultant position in industry. I am happy to co-supervise PhD students with someone else, but not to supervise them directly. And, somewhat shamelessly, I would like to have a second income from academia on top of the one from industry. I am not an EU or US citizen, by the way.

My question is: is it possible for me to defend my habilitation in France when I am not part of French academia? So far, I have only seen academics do that.

Thanks in advance!

I have decided to get a second bachelor's in statistics over 1.5 years after finishing my first bachelor's in math. Both of these degrees are debt-free and follow a passion of mine. The next goal is to work in industry for about three years while living at my parents' home, taking one computer science class online each semester, to save for a master's in statistics with a thesis, where I hope to take some really advanced courses over two years.

The big idea is that I could take courses like categorical data analysis, multivariate statistics, and nonparametric statistics (and maybe substitute mathematical statistics 1 and 2 with graduate measure theory) during my second B.S., and then focus on the really advanced statistics and math courses during my master's, like advanced graduate topology, experimental design, and Bayesian analysis.

Then I would start a PhD in, say, computer science focused on machine learning, to develop cutting-edge statistical tools based on theoretical principles in statistics, computational topology, measure theory, etc.

Would this be too much education? I want to do more than just take classes on various statistical methods before a PhD.

Edit: I should add that the answer I received from r/datascience was to go straight into a master's program. But since I can't afford that yet, I guess the answer in my circumstances is that I can do a second B.S. in stats or comp sci if I'm making good money near my parents' home?

Part (b) of this problem has me puzzled…

It has been projected that the average and standard deviation of the amount of time spent online using the Internet are, respectively, 14 and 17 hours per person per year (many do not use the Internet at all!).

a- What value is exactly 1 standard deviation below the mean?

b- If the amount of time spent online using the Internet is approximately normally distributed, what proportion of the users spend an amount of time online that is less than the value you found in part (a)?

c- Is the amount of time spent online using the Internet approximately normally distributed? Why?

a. Clearly 14 − 17 = −3…

b. Why would they ask what proportion spends less than −3 hours? Is it a trick question, or did I just get part (a) wrong?
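For reference, here is how I would check the arithmetic in (a) and the normal-model proportion in (b), taking the problem's normality assumption at face value (standard library only):

```python
from math import erf, sqrt

# Part (a): the value exactly one standard deviation below the mean.
mean, sd = 14.0, 17.0
one_sd_below = mean - sd          # 14 - 17 = -3

# Part (b): under a normal model, "one SD below the mean" is z = -1,
# so the proportion below that value is Phi(-1), the standard normal
# CDF at -1, computed here via the error function.
def normal_cdf(z: float) -> float:
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

proportion = normal_cdf(-1.0)
print(one_sd_below, round(proportion, 4))   # -3, 0.1587
```

Note that the computed proportion applies to a value that is negative, even though time spent online cannot be negative, which is what confuses me.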