Quant concepts

Thanks, got it.

Can somebody please explain the concept of a unit root? It is the case when b1 = 1. How does this render the regression equation non-stationary? Does this have something to do with the mean-reverting level equation, b0/(1 - b1)?

Also, I need one example of a random walk with and without a drift. Thanks.

You need a covariance-stationary time series in order to use regression. A random walk violates two of the criteria a series must meet to be considered covariance stationary: 1) the mean is finite and constant, and 2) the variance is finite and constant. A random walk is a special case of an AR(1) model with intercept = 0 and slope coefficient = 1, so if we plug those values into the formula for the mean-reversion level, b0/(1 - b1), we get 0/0, which is undefined. This violates the requirement for the mean to be finite and constant. The second part is that as t increases, the variance of the series (the accumulated variance of all the error terms up to time t) grows without bound and approaches infinity. This makes the variance nonfinite and violates the requirement for the variance to be finite.
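
This also covers the example asked for above of a random walk with and without a drift. Here is a minimal simulation sketch (Python with numpy; the parameter values, path counts and dates are just illustrative assumptions) comparing a stationary AR(1), a random walk without drift (b0 = 0, b1 = 1), and a random walk with drift (b0 > 0, b1 = 1). It prints the mean and variance of the simulated paths at an early and a late date.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(b0, b1, n_obs, n_paths, sigma=1.0):
    """Simulate n_paths series of x_t = b0 + b1 * x_{t-1} + e_t, starting from x_0 = 0."""
    x = np.zeros((n_paths, n_obs))
    for t in range(1, n_obs):
        x[:, t] = b0 + b1 * x[:, t - 1] + rng.normal(0.0, sigma, n_paths)
    return x

n_obs, n_paths = 500, 2000

cases = {
    "stationary AR(1): b0=1, b1=0.5": (1.0, 0.5),
    "random walk, no drift: b0=0, b1=1": (0.0, 1.0),
    "random walk with drift: b0=0.2, b1=1": (0.2, 1.0),
}

for label, (b0, b1) in cases.items():
    x = simulate_ar1(b0, b1, n_obs, n_paths)
    # For the stationary case the mean settles near b0 / (1 - b1) and the variance levels off;
    # for the random walks the variance keeps growing with t (and the mean drifts when b0 != 0).
    for t in (50, 499):
        print(f"{label:40s} t={t:3d}  mean={x[:, t].mean():8.2f}  var={x[:, t].var():9.2f}")
```

For the stationary series the mean hovers around b0/(1 - b1) = 2 and the variance stays roughly constant, while for both random walks the variance at t = 499 comes out roughly ten times the variance at t = 50, which is the "variance grows without bound" point above.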

Quick question… is anyone really going to go into depth with MA models and ARMA models? Are they necessary to know in order to understand everything that comes after?

Can anyone explain detecting whether a time series fits an AR or an MA model? I understand that for an AR model the first autocorrelation will be large and the autocorrelations will gradually decline to zero, whereas for an MA(q) time series they will drop to zero after the q-th autocorrelation. What I'm having trouble grasping is that I thought for AR models we wanted all the autocorrelations to not be significantly different from zero so that our model is correctly specified… am I missing something?
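
To make the decay-versus-cutoff pattern concrete, here is a small sketch (Python with numpy; the AR and MA coefficients of 0.8 and the sample length are illustrative assumptions) that simulates an AR(1) and an MA(1) series and prints their first few sample autocorrelations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # long sample so the sample autocorrelations sit close to their theoretical values
e = rng.normal(size=n)

# AR(1): x_t = 0.8 * x_{t-1} + e_t
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = 0.8 * ar[t - 1] + e[t]

# MA(1): y_t = e_t + 0.8 * e_{t-1}
ma = e[1:] + 0.8 * e[:-1]

def sample_autocorr(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]

print("lag    AR(1)    MA(1)")
for k, (a, m) in enumerate(zip(sample_autocorr(ar, 5), sample_autocorr(ma, 5)), start=1):
    print(f"{k:3d}  {a:7.3f}  {m:7.3f}")
```

The AR(1) column decays gradually (roughly 0.8, 0.64, 0.51, …) while the MA(1) column has one sizeable autocorrelation at lag 1 (about 0.49) and is near zero afterwards, which matches the pattern described above.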

Mannn… no one’s posting in here anymore huh? =[

With positive serial correlation, the standard errors of the parameter estimates are too small and hence the t-stats are too large. Please explain how this happens. An example would be appreciated.
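
Here is a minimal Monte Carlo sketch of that effect (Python with numpy; the persistence parameters, sample size and number of replications are illustrative assumptions): regress y on a persistent x when the errors are positively autocorrelated, and compare the standard error that the usual OLS formula reports with how much the slope estimate actually varies across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 5000
rho_x, rho_e = 0.9, 0.9        # persistence of the regressor and of the errors
beta0, beta1 = 1.0, 2.0

def ar1(rho, n):
    """One AR(1) path with unit-variance shocks."""
    z = np.zeros(n)
    shocks = rng.normal(size=n)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + shocks[t]
    return z

slopes, reported_se = [], []
for _ in range(reps):
    x = ar1(rho_x, n)
    e = ar1(rho_e, n)                            # positively serially correlated errors
    y = beta0 + beta1 * x + e

    xc = x - x.mean()
    b1 = np.dot(xc, y) / np.dot(xc, xc)          # OLS slope
    b0 = y.mean() - b1 * x.mean()
    resid = y - b0 - b1 * x
    s2 = np.dot(resid, resid) / (n - 2)          # usual OLS residual variance
    reported_se.append(np.sqrt(s2 / np.dot(xc, xc)))   # usual (non-robust) standard error
    slopes.append(b1)

print("average OLS-reported se(b1):      ", round(float(np.mean(reported_se)), 3))
print("actual std dev of b1 across reps: ", round(float(np.std(slopes)), 3))
```

Because the usual formula assumes uncorrelated errors, the reported standard error comes out far below the true sampling variability of b1, so t-statistics built from it are inflated and you reject "coefficient = 0" too often.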

Heteroskedasticity: the standard errors of the parameters are too small and hence the t-stats are too large. Please explain why this is so, with a simple example.
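
Same idea, but with conditional heteroskedasticity instead of serial correlation. In this sketch (Python with numpy; the right-skewed regressor and the "error std dev equals x" rule are illustrative assumptions) the error variance grows with the level of x, and again the usual OLS standard error is compared with the actual sampling variation of the slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 5000
beta0, beta1 = 1.0, 2.0

slopes, reported_se = [], []
for _ in range(reps):
    x = rng.exponential(2.0, n)
    e = rng.normal(0.0, x)                 # error std dev equals x: variance grows with the regressor
    y = beta0 + beta1 * x + e

    xc = x - x.mean()
    b1 = np.dot(xc, y) / np.dot(xc, xc)
    b0 = y.mean() - b1 * x.mean()
    resid = y - b0 - b1 * x
    s2 = np.dot(resid, resid) / (n - 2)
    reported_se.append(np.sqrt(s2 / np.dot(xc, xc)))   # assumes one constant error variance
    slopes.append(b1)

print("average OLS-reported se(b1):      ", round(float(np.mean(reported_se)), 3))
print("actual std dev of b1 across reps: ", round(float(np.std(slopes)), 3))
```

The usual formula pools all the squared residuals into one variance estimate, so it misses the fact that the observations with the most influence on b1 (those far out in x) are also the noisiest ones; the reported standard error ends up well below the true variation in b1, and the t-stat is too large. (With other heteroskedasticity patterns the bias can go the other way, so take the direction as specific to this setup.)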

If the autocorrelation is significantly different from zero, does it mean that there is autocorrelation present or does it mean otherwise?

Can anyone please comment on my two posts above on heteroskedasticity and positive serial correlation?

I am still confused about the basics of this whole argument: to test the coefficients, we need a standard error (the standard deviation of the coefficient). How do you get the standard deviation of b1, when b1 itself is Cov/Var computed from the whole data set? You get a single number out of the whole data set, so what is the scope of a standard deviation?
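
One way to see it: b1 is a single number for your particular sample, but a different sample from the same process would give a slightly different b1, and the standard error is an estimate of how much b1 would vary across those hypothetical repeated samples. From one sample it is computed as se(b1) = sqrt(s^2 / sum of (x_i - mean(x))^2), where s^2 is the residual variance. Here is a small sketch (Python with numpy; the data-generating process and sample sizes are made-up assumptions) showing that the single-sample formula and the repeated-sampling standard deviation agree.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 100, 10_000
beta0, beta1 = 1.0, 2.0

slopes = []
for i in range(reps):
    # A fresh sample from the same (assumed) data-generating process each time.
    x = rng.normal(0.0, 1.0, n)
    y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)

    xc = x - x.mean()
    b1 = np.dot(xc, y) / np.dot(xc, xc)      # slope = Cov(x, y) / Var(x): one number per sample
    slopes.append(b1)

    if i == 0:
        # The standard-error formula applied to just this first sample:
        b0 = y.mean() - b1 * x.mean()
        resid = y - b0 - b1 * x
        s2 = np.dot(resid, resid) / (n - 2)
        se_b1 = np.sqrt(s2 / np.dot(xc, xc))
        print("se(b1) from the formula, one sample:   ", round(float(se_b1), 4))

print("std dev of b1 across repeated samples: ", round(float(np.std(slopes)), 4))
```

Both numbers come out around 0.10 here: the standard error is just a one-sample estimate of how much b1 would bounce around if you could redraw the data again and again.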

This makes sense.