I’ve decided to get a second bachelor’s in Statistics, completed over 1.5 years after finishing my first bachelor’s in Math. Both degrees are debt-free and follow a passion of mine. The next goal is to work in industry for about three years while living at my parents’ home, taking one online Computer Science class per semester, and saving for a two-year thesis-track master’s in Statistics where I hope to take some really advanced courses.
The big idea is that I could take courses like Categorical Data Analysis, Multivariate Statistics, and Nonparametric Statistics during my second B.S. (maybe substituting graduate Measure Theory for Mathematical Statistics 1 and 2), and then focus on the really advanced statistics and math courses during my master’s, such as advanced graduate topology, experimental design, and Bayesian analysis.
Then I would start a PhD in, say, Computer Science focused on Machine Learning, to develop cutting-edge statistical tools grounded in theoretical principles from Statistics, computational topology, measure theory, etc.
Would this be too much education? I want to do more than just take classes on various statistical methods before a PhD.
Edit: I should add that the answer I received from r/datascience was to go straight into a master’s program. But since I can’t afford that yet, I guess the answer in my circumstances is that I can do a second B.S. in Stats or Comp Sci if I’m making good money near my parents’ home?
I can’t find a detailed profile of the gamma distribution anywhere.
I want to know its applications and how its mean is calculated.
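A quick sketch of the mean, assuming the shape–scale parameterization Gamma(k, θ): the mean is k·θ in closed form (e.g. the waiting time until the k-th event of a Poisson process with rate 1/θ), and the tiny script below checks this by numerically integrating x·pdf(x). The function names are my own, not from any library.

```python
import math

def gamma_pdf(x, k, theta):
    """Density of Gamma(shape=k, scale=theta) at x > 0."""
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def gamma_mean_numeric(k, theta, upper=150.0, n=100_000):
    """Approximate E[X] = integral of x * pdf(x) dx via the midpoint rule."""
    h = upper / n
    return sum((i + 0.5) * h * gamma_pdf((i + 0.5) * h, k, theta) * h
               for i in range(n))

k, theta = 2.0, 3.0
print(gamma_mean_numeric(k, theta))  # ≈ k * theta = 6.0
```

The same distribution is often written with a rate λ = 1/θ, in which case the mean is k/λ; worth checking which convention your textbook uses.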
Part (b) of this problem has me puzzled…
It has been projected that the average and standard deviation of the amount of time spent online using the Internet are, respectively, 14 and 17 hours per person per year (many do not use the Internet at all!).
a- What value is exactly 1 standard deviation below the mean?
b- If the amount of time spent online using the Internet is approximately normally distributed, what proportion of the users spend an amount of time online that is less than the value you found in part (a)?
c- Is the amount of time spent online using the Internet approximately normally distributed? Why?
a. Clearly 14 − 17 = −3…
b. Why would they ask what proportion of users spend less than −3 hours online? Is it a trick question, or did I get part (a) wrong?
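For what it’s worth, under the stated (if unrealistic) normality assumption, part (b) is just asking for P(Z < −1), regardless of the odd-looking negative cutoff. A minimal check using the standard normal CDF written in terms of the error function:

```python
import math

def phi(z):
    """Standard normal CDF, via Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 14.0, 17.0
cutoff = mu - sigma            # part (a): 14 - 17 = -3 hours
z = (cutoff - mu) / sigma      # z = -1
print(phi(z))                  # part (b): ≈ 0.1587
```

The fact that a meaningful chunk of the distribution sits below an impossible negative time is exactly what part (c) is probing.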
I’m interested in finding the acceptance ratio of USENIX conferences (OSDI in particular).
This website provides a lot of stats about a large number of conferences, but OSDI is only covered from 1999 to 2010 (I’m looking for more recent data).
Does anyone have a source for this information?
I am a mathematics professor with a PhD and 12 years of math teaching experience, including 6 years post-PhD.
For the last two years, I have taught statistics every semester. It was a big change from math at first, but now I feel much more confident.
I know several people on this site have served on hiring committees. At a university with low research expectations hiring for a stats position, would an application such as mine be competitive with newly-graduated statistics PhDs?
To generalize, when applying for teaching positions in areas adjacent to your PhD, what evidence can you provide that you are able to competently teach in this area?
A study compared two groups (differing on factor A) on ability X and found that X differs between the two groups (an effect of A). However, the groups also differ on a nuisance variable B (say, age). When B is used as a covariate in the analysis of A on X, there is no effect of A.
In the statistical analysis (ANCOVA), the authors report the statistics for X corrected for B (i.e., “no significant effect of A on X”); however, alongside this analysis, they present effect sizes (Cohen’s d) computed from the raw/uncorrected values of X as support for their hypothesis (“large effect of A on X”). They make no attempt to flag that these effect sizes are based on scores that were not corrected for B.
I think they should present the effect sizes of A corrected for B (i.e., after regressing out the effect of B on X). However, they don’t want to, possibly because they have a strong hypothesis that A affects X.
Is what they are doing correct? Is it normal practice? Is it misleading, or even fraud?
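To make the discrepancy concrete, here is a toy sketch with made-up numbers (and plain least squares standing in for a full ANCOVA): X is driven entirely by the covariate B, and the two A-groups differ in B, so the raw Cohen’s d is large but collapses once B is regressed out.

```python
import math
import statistics as st

# hypothetical data: outcome X depends only on covariate B (e.g. age)
noise = [0.3, -0.2, 0.1, -0.1, 0.2, -0.3, 0.2, -0.1, 0.3, -0.2, 0.1, -0.3]
B = [20, 22, 24, 26, 28, 30, 40, 42, 44, 46, 48, 50]   # covariate
X = [0.5 * b + e for b, e in zip(B, noise)]            # outcome
A = [0] * 6 + [1] * 6                                  # group factor

def cohens_d(x0, x1):
    """Cohen's d with pooled sample SD."""
    n0, n1 = len(x0), len(x1)
    sp = math.sqrt(((n0 - 1) * st.variance(x0) + (n1 - 1) * st.variance(x1))
                   / (n0 + n1 - 2))
    return (st.mean(x1) - st.mean(x0)) / sp

def split(v):
    """Split a variable into the two A-groups."""
    return ([x for x, a in zip(v, A) if a == 0],
            [x for x, a in zip(v, A) if a == 1])

d_raw = cohens_d(*split(X))        # large: A is confounded with B

# residualize X on B with simple least squares, then recompute d
mb, mx = st.mean(B), st.mean(X)
slope = (sum((b - mb) * (x - mx) for b, x in zip(B, X))
         / sum((b - mb) ** 2 for b in B))
resid = [x - (mx + slope * (b - mb)) for b, x in zip(B, X)]
d_adj = cohens_d(*split(resid))    # near zero: effect of A vanishes

print(d_raw, d_adj)
```

In data like this, reporting d_raw next to a non-significant ANCOVA is exactly the mismatch described above: the uncorrected effect size reflects B, not A.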
I took the GRE General earlier this year and got a 160Q/156V with a 3.5 writing score. I am hoping to apply to mid-ranked statistics M.S. programs (roughly 15–50). Does anyone think those scores are strong enough to be competitive, or should I retake it? I’m having a hard time finding admission statistics that aren’t incredibly vague (“Competitive applicants have done well on the GRE…”, etc.).
I know a lot depends on how strong the rest of your transcript is; FWIW, I graduated with a B.S. in Biochemistry with a 3.3 GPA and have worked in a biostatistics lab.
I wanted to know which bibliography is used in an M.S. in Statistics and Data Mining… I only found the syllabus:
I’m not sure whether each subject has its own bibliography, or whether there is some common classic bibliography for the whole program.
So far, I have found this: http://shop.oreilly.com/product/0636920028918.do
What reading list is used for an M.Sc. in Statistics and Data Mining? Is there a standard bibliography, or does it vary widely between professors/subjects?
I get the error below every time I run a path analysis in Amos, across different models and different datasets:
Result (Default model)
Minimum was achieved
Chi-square = .000
Degrees of freedom = 0
Probability level cannot be computed
I have to answer this question:
You might be asked whether regression analysis is the right instrument to analyse such a diverse dataset or whether clustering might be more appropriate.
I would like to know if anybody can point me to a reference saying that regression analysis is appropriate for such a diverse dataset.
My dataset consists of countries from different continents, including the UK, USA, China, and Venezuela.
I really appreciate it.