What is the main difference between the three (SEE, standard error, and R²) that we should know for the exam, with regard to regression analysis?
The SEE (standard error of estimate) is essentially an estimate of the standard deviation of the error terms (residuals) of the whole regression.
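For reference, the standard textbook form, with n observations and k independent variables (so the residuals have n − k − 1 degrees of freedom):

$$\text{SEE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n - k - 1}} = \sqrt{\frac{\text{SSE}}{n - k - 1}}$$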
The standard error is the standard error of a particular estimate, such as a slope coefficient or a predicted value of the dependent variable. It is used to build t-statistics, confidence intervals, etc.
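Concretely, in the standard forms, for a slope coefficient \(\hat{b}_j\) with standard error \(s_{\hat{b}_j}\):

$$t = \frac{\hat{b}_j - b_{j0}}{s_{\hat{b}_j}}, \qquad \text{CI} = \hat{b}_j \pm t_c \, s_{\hat{b}_j}$$

where \(b_{j0}\) is the hypothesized value (usually 0) and \(t_c\) is the critical t-value with n − k − 1 degrees of freedom.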
R² is the percentage of the variation of the dependent variable that is explained by the variation in the set of independent variables. The higher, the better. However, to compare models, adjusted R² is preferable.
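The adjustment penalizes extra independent variables:

$$\bar{R}^2 = 1 - \left(\frac{n - 1}{n - k - 1}\right)\left(1 - R^2\right)$$

so adding a variable raises adjusted R² only if it improves the fit enough to offset the lost degree of freedom.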
To add a little more clarity:
Think of the standard error (in general) as the standard deviation of a sample statistic’s sampling distribution. For example, there is a distribution of sample values for the slope coefficient. This distribution has a standard deviation: the standard error of that slope coefficient. It gives us an idea of the variation of our sample estimates around the true slope value. You can have a standard error for any sample statistic (e.g., x-bar, p-hat, y-hat, R-squared, etc.).
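If it helps to see all three side by side, here is a minimal numpy sketch (simulated data and made-up coefficients, purely for illustration) that computes the SEE, the coefficient standard errors, R², and adjusted R² from scratch:

```python
import numpy as np

# Simulated data: one dependent variable y, two independent variables (k = 2).
rng = np.random.default_rng(42)
n, k = 50, 2
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n)

# OLS fit (design matrix with an intercept column).
Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
resid = y - Xd @ beta

# SEE: standard deviation of the residuals, using n - k - 1 degrees of freedom.
sse = resid @ resid
see = np.sqrt(sse / (n - k - 1))

# Standard errors of the coefficients: sqrt of the diagonal of s^2 (X'X)^-1.
cov_beta = see**2 * np.linalg.inv(Xd.T @ Xd)
se_beta = np.sqrt(np.diag(cov_beta))

# R-squared and adjusted R-squared.
sst = np.sum((y - y.mean()) ** 2)
r2 = 1 - sse / sst
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print("coefficients:", beta)
print("SEE:", see)
print("coefficient standard errors:", se_beta)
print("R^2:", r2, "adjusted R^2:", r2_adj)
```

Any regression package will report the same quantities; the point here is just to show where each number comes from.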
When Harrogath says [use adjusted R-squared to compare between models], make sure that the dependent variable is identical. Really, you can’t reasonably (or logically) compare R-squared or adjusted R-squared for models that have different dependent variables.