Abstract
We illustrate the need to use higher-order (specifically sixth-order) expansions in order to properly determine the asymptotic distribution of a standard artificial neural network test for neglected nonlinearity. The test statistic is a quasi-likelihood ratio (QLR) statistic designed to test whether the mean squared prediction error improves by including an additional hidden unit with an activation function violating the no-zero condition in Cho, Ishida, and White (2011). This statistic is also shown to be asymptotically equivalent under the null to the Lagrange multiplier (LM) statistic of Luukkonen, Saikkonen, and Teräsvirta (1988) and Teräsvirta (1994). In addition, we compare the power properties of our QLR test to one satisfying the no-zero condition and find that the latter is not consistent for detecting a data-generating process (DGP) with neglected nonlinearity violating an analogous no-zero condition, whereas our QLR test is consistent.
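For intuition only, the minimal sketch below is not the authors' procedure: the linear model, the logistic activation, and the small nuisance-parameter grid are illustrative assumptions. It computes a QLR-type statistic of the form n·log(SSR_restricted / SSR_unrestricted), comparing a linear regression against the same regression augmented with one hidden unit whose parameters are profiled over a grid, since under the null of linearity those parameters are unidentified (the feature that, in the paper, leads to a nonstandard asymptotic distribution requiring higher-order expansions).

```python
# Illustrative sketch (assumed setup, not the paper's exact test):
# QLR-type statistic for one additional logistic hidden unit.
import numpy as np

def qlr_statistic(y, x, gammas, deltas):
    """n * log(SSR_restricted / SSR_unrestricted), profiling over a
    grid of hidden-unit parameters that are unidentified under the null."""
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])                 # restricted (linear) regressors
    beta0 = np.linalg.lstsq(X0, y, rcond=None)[0]
    ssr0 = np.sum((y - X0 @ beta0) ** 2)

    ssr1 = ssr0
    for g in gammas:                                      # profile over nuisance parameters
        for d in deltas:
            h = 1.0 / (1.0 + np.exp(-(g * x + d)))        # one logistic hidden unit
            X1 = np.column_stack([X0, h])
            beta1 = np.linalg.lstsq(X1, y, rcond=None)[0]
            ssr1 = min(ssr1, np.sum((y - X1 @ beta1) ** 2))
    return n * np.log(ssr0 / ssr1)

# Example with a linear DGP, so the null of no neglected nonlinearity holds.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)
print(qlr_statistic(y, x,
                    gammas=np.linspace(0.5, 3.0, 6),
                    deltas=np.linspace(-1.0, 1.0, 5)))
```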
| Original language | English |
|---|---|
| Pages (from-to) | 273-287 |
| Number of pages | 15 |
| Journal | Neural Computation |
| Volume | 24 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2012 |
All Science Journal Classification (ASJC) codes
- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience