Hope everyone’s preparation is going well coming into the exam.
My weakest area without a doubt is Quantitative Methods. I am prepared to write the topic off completely given it’s a small weighting (hopefully) and a better use of my time might be to focus on other topics where I learn quicker.
While the above is my strategy, for anyone that is quite familiar with the topic - could you suggest any ‘low hanging fruit’ within this topic? What are the concepts that I might be able to grasp quickly? Perhaps ones that are more theoretical instead of formula-heavy?
In my mind, the core of quant is structured such that to get the concepts you really need to memorize the formulas (significance testing, statistics, etc.). Maybe you could skip the formulas and just get the theory on AR and MA models? But I would hardly call that low hanging fruit. Really, the formulas aren’t crazy difficult; it just might be a time crunch for you at this point.
Know how to read the results table and the regression output/corresponding formulas - it’s pretty straightforward. Know what R^2 and adjusted R^2 are, as well as heteroskedasticity and multicollinearity and how to identify them.
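Not exam material, but if it helps to see where these numbers live in a real output, here’s a rough Python sketch using statsmodels (the data and variable names are made up, purely to illustrate R^2, adjusted R^2, the Breusch-Pagan test, and VIFs for multicollinearity):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Made-up data: two explanatory variables, n = 60 observations
rng = np.random.default_rng(0)
n = 60
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)        # deliberately correlated with x1
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print("R^2:", fit.rsquared)               # share of variation explained
print("Adjusted R^2:", fit.rsquared_adj)  # penalised for extra variables

# Heteroskedasticity: Breusch-Pagan test on the residuals
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print("Breusch-Pagan p-value:", lm_pvalue)  # small p-value -> heteroskedasticity

# Multicollinearity: variance inflation factors (rule of thumb: VIF well above 10 is a red flag)
for i in range(1, X.shape[1]):
    print(f"VIF for x{i}:", variance_inflation_factor(X, i))

Obviously no Python in the exam room, but poking at an output like this can make the vocabulary stick.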
I worked through the 6 topic tests today and it took just over 2 hours. I think it’s a good investment.
I agree. Heteroskedasticity and Multicollinearity have a good chance at being tested.
Perhaps I will do the topic tests - these should provide a good indicator of the potential theoretical questions, plus help me get more of a grip on the formulas required.
Learn the problems of regression (hetero, serial, and multi), the t-test, and learn the ANOVA table hard. It’s super simple and most questions seem to derive 3-4 points from those things. The time series stuff is nasty…maybe glance at AR, etc…but ugh.
In almost all the quant questions these regression problems come up… most of the time you don’t even need to do the math, just remember them like this. My trick: “HB Pencils In Standardized Exams. Use White out to fix.”

- Heteroskedasticity - detect with Breusch-Pagan - gives Incorrect Standard Errors (they are non-constant) - fix with White-corrected standard errors.
- Autocorrelation/serial correlation - also gives incorrect standard errors - detect with Durbin-Watson - correct with Hansen standard errors.
- Multicollinearity - suspect it when the F-test is significant but the individual t-tests are insignificant - correct by dropping an X variable.

For the degrees of freedom question: when you are calculating any kind of statistic (F, t, etc.) in regression, always revert back to n - k - 1, where n = number of observations and k = number of independent variables. For simple linear regression we only have one variable x, so k = 1, and that’s where df = n - 2 comes from.

When you’re looking at the ANOVA table as a whole, there are two parts to it: the regression part and the error part. The df on the regression part is just k, and the df on the error part is n - k - 1 (hence why we use this when we calculate the tests). When we sum these to get the TOTAL df we get n - 1 (that’s where you see n - 1).

Hope this helps a little. I have a math degree and still get confused with the way they present this material! It’s not intuitive at all if you don’t know how these equations were derived. Good luck on exam day!!
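If you want to see the df bookkeeping confirmed, here’s a throwaway Python sketch (made-up data, n = 30 observations and k = 2 variables) where statsmodels reports the same regression df = k, error df = n - k - 1, and total df = n - 1, plus the Durbin-Watson statistic mentioned above:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n, k = 30, 2                              # 30 observations, 2 independent variables
X = rng.normal(size=(n, k))
y = 1.0 + X @ np.array([0.7, -0.3]) + rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(X)).fit()

print("Regression df (k):", int(fit.df_model))                # 2
print("Error df (n - k - 1):", int(fit.df_resid))             # 27
print("Total df (n - 1):", int(fit.df_model + fit.df_resid))  # 29

# Serial correlation check: Durbin-Watson near 2 suggests little autocorrelation
print("Durbin-Watson:", durbin_watson(fit.resid))

So in this example n - k - 1 = 30 - 2 - 1 = 27, which is exactly the df you would plug into the t-test.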