
Saturday, February 2, 2019

Regression Results :: Research Analysis

3.3.4. Results

For the purpose of finding a suitable function for benefits transfer, different meta-regression models were specified: (i) different functional forms (e.g., a simple linear form versus a semi-log form); (ii) a fully specified model including all available variables and a restricted model trimmed on grounds of statistical significance or econometric problems (e.g., multicollinearity); and (iii) heteroskedasticity-consistent (robust) standard errors to correct for heteroskedasticity. As shown by the test for heteroskedasticity (see Table 3.7), the simple linear form exhibits heteroskedasticity. There are several ways to correct for heteroskedasticity (e.g., GLS, WLS, robust standard errors, and data transformation). For this study, robust standard errors and data transformation (e.g., the log transformation of the dependent variable) are used. All independent variables are initially considered, even if later dropped on grounds of statistical insignificance or econometric problems (e.g., multicollinearity). Some variables (e.g., MSW and ACTIV) are dropped because they exhibit multicollinearity and/or are statistically insignificant at the 20% level used for optimizing the meta-regression transfer model (as suggested by Rosenberger and Loomis (2001, 2003)).

A wide range of diagnostic tests was conducted on each regression for benefits transfer (as suggested by Walton et al. 2006). The R2 for the overall fit of the regression, hypothesis tests (F tests and t tests), and diagnostic statistics (e.g., the skewness-kurtosis normality test, Ramsey's RESET test for specification error, a heteroskedasticity test, and a multicollinearity assessment) are reported.

The F test assesses the null hypothesis that all or some of the coefficients (β) on the model's explanatory variables equal zero, i.e., H0: β1 = β2 = ... = βk = 0 for all or some coefficients (Wooldridge 2003).
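The two corrections described above (a log-transformed dependent variable and heteroskedasticity-robust standard errors) can be sketched as follows. This is an illustrative example on synthetic data, not the study's actual meta-analysis dataset; the variable names and data-generating process are assumptions, and the robust covariance shown is the standard White (HC0) estimator.

```python
import numpy as np

# Synthetic data with heteroskedastic errors (illustrative only):
# the error variance grows with x1, as a heteroskedasticity test might detect.
rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(1, 10, n)
x2 = rng.uniform(0, 5, n)
eps = rng.normal(0.0, 0.1 * x1)                 # variance increases with x1
y = np.exp(1.0 + 0.3 * x1 - 0.2 * x2 + eps)     # positive dependent variable

X = np.column_stack([np.ones(n), x1, x2])
ly = np.log(y)  # semi-log form: log transformation of the dependent variable

# OLS: beta = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ ly
resid = ly - X @ beta

# Conventional (homoskedasticity-assuming) standard errors, for comparison
k = X.shape[1]
sigma2 = resid @ resid / (n - k)
ols_se = np.sqrt(np.diag(sigma2 * XtX_inv))

# White (HC0) heteroskedasticity-robust covariance:
# (X'X)^{-1} [ X' diag(e_i^2) X ] (X'X)^{-1}
meat = X.T @ (resid[:, None] ** 2 * X)
robust_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("coefficients:        ", beta)
print("conventional SE:     ", ols_se)
print("robust (White) SE:   ", robust_se)
```

The point estimates are identical under both covariance estimators; only the standard errors (and hence the t statistics) change, which is why robust errors are a correction for inference rather than for the fit itself.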
A linear restriction test on some coefficients is useful before dropping variables when those variables are unreliable due to multicollinearity (Hamilton 2004). An important issue when handling small samples is the potential for multicollinearity, i.e., a high degree of linear relationship among the explanatory variables (Walton et al. 2006). High correlation between estimated coefficients on explanatory variables in small samples can produce several problems: (i) substantially higher standard errors with lower t statistics (a greater chance of falsely accepting the null hypothesis in standard significance tests); (ii) unexpected changes in coefficient magnitudes or signs; and (iii) statistically insignificant coefficients despite a high R2 (Hamilton 2004). A number of tests exist to indicate the presence and severity of multicollinearity (e.g., Durbin-Watson tests, VIF, tolerance, and the correlation matrix of estimated coefficients). One test is the variance inflation factor (VIF), which measures the degree to which the variance and standard error of an estimated coefficient increase because of the inclusion of the explanatory variable (i.e., because of its linear relationship with the other explanatory variables).
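The VIF can be sketched directly from its definition, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing explanatory variable j on the remaining explanatory variables. The data below are synthetic and illustrative (not the study's variables); the common rule of thumb that VIF above about 10 signals severe multicollinearity is a heuristic, not part of the original text.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 is from an auxiliary OLS
    regression of column j on the other columns plus an intercept.
    """
    n, k = X.shape
    out = []
    for j in range(k):
        xj = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        b, *_ = np.linalg.lstsq(others, xj, rcond=None)
        resid = xj - others @ b
        tss = (xj - xj.mean()) @ (xj - xj.mean())
        r2 = 1.0 - (resid @ resid) / tss
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Illustrative data: x2 is nearly a copy of x1, x3 is independent.
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)                  # unrelated to x1 and x2
X = np.column_stack([x1, x2, x3])

vifs = vif(X)
print("VIFs:", vifs)  # large for x1 and x2, near 1 for x3
```

A large VIF for a variable such as x1 or x2 here is the kind of evidence that, in the text above, justifies dropping one of the collinear variables (or testing a linear restriction on their coefficients first).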
