Higher-order approximations for testing neglected nonlinearity

Halbert White, Jin Seo Cho

Research output: Contribution to journal › Letter › peer-review

9 Citations (Scopus)

Abstract

We illustrate the need to use higher-order (specifically sixth-order) expansions in order to properly determine the asymptotic distribution of a standard artificial neural network test for neglected nonlinearity. The test statistic is a quasi-likelihood ratio (QLR) statistic designed to test whether the mean square prediction error improves by including an additional hidden unit with an activation function violating the no-zero condition in Cho, Ishida, and White (2011). This statistic is also shown to be asymptotically equivalent under the null to the Lagrange multiplier (LM) statistic of Luukkonen, Saikkonen, and Teräsvirta (1988) and Teräsvirta (1994). In addition, we compare the power properties of our QLR test to one satisfying the no-zero condition and find that the latter is not consistent for detecting a DGP with neglected nonlinearity violating an analogous no-zero condition, whereas our QLR test is consistent.
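The QLR statistic described above compares the in-sample fit of a linear (null) model against a model augmented with one hidden unit. A minimal sketch of this idea is below; it is an illustration, not the authors' procedure. The data-generating process, the tanh activation, the grid for the hidden-unit weight, and the statistic's form n·log(SSE₀/SSE₁) are all assumptions made for this example; the paper's point is precisely that the asymptotic null distribution of such a statistic is nonstandard and requires sixth-order expansions, so no critical value is computed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DGP with neglected nonlinearity (illustrative only,
# not the design used in the paper).
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.8 * np.tanh(2.0 * x) + rng.normal(scale=0.5, size=n)

def sse_linear(x, y):
    """Residual sum of squares of the linear null model, fit by OLS."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def sse_one_hidden_unit(x, y, gammas):
    """Concentrated SSE of the model with one added hidden unit.

    For each fixed hidden-unit weight gamma the remaining parameters
    enter linearly, so OLS gives the profiled fit; we minimize over a
    grid of gamma values (an assumed, illustrative grid).
    """
    best = np.inf
    for g in gammas:
        X = np.column_stack([np.ones_like(x), x, np.tanh(g * x)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        best = min(best, resid @ resid)
    return best

sse0 = sse_linear(x, y)
sse1 = sse_one_hidden_unit(x, y, np.linspace(0.1, 5.0, 50))

# QLR-type statistic: n * log(SSE_null / SSE_alternative).
# Since the alternative nests the null, SSE1 <= SSE0 and the statistic
# is nonnegative; under neglected nonlinearity it grows with n.
qlr = n * np.log(sse0 / sse1)
print(f"QLR = {qlr:.2f}")
```

Because the hidden-unit weight is unidentified under the null, the statistic's null distribution is not the usual chi-squared; that identification failure is what drives the need for the higher-order expansions the paper develops.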

Original language: English
Pages (from-to): 273-287
Number of pages: 15
Journal: Neural Computation
Volume: 24
Issue number: 1
DOIs
Publication status: Published - 2012

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
