Abstract
Artificial neural networks (ANNs) seem very promising for regression and classification, especially for large covariate spaces. Yet their usefulness for medical and social research is limited because they provide only predictions and reveal little about the underlying process relating the inputs to the output. ANNs approximate a non-linear function by a composition of low-dimensional ridge functions and therefore appear to be less sensitive to the dimensionality of the covariate space. However, because the global minimum is not unique and (possibly) many local minima exist, the model revealed by the network is unstable. We introduce a method that exposes the effects of inputs on the output of an ANN using novel robustification techniques. Simulated data from known models are used to demonstrate the interpretability of the resulting ANNs. Graphical tools are used for studying the interpretation results and for detecting interactions between covariates. The effects of different regularization methods on the robustness of the interpretation are discussed; in particular, we note that ANNs must include skip-layer connections. An application to an ANN model predicting 5-year mortality following breast cancer diagnosis is presented. We conclude that neural networks estimated with sufficient regularization can be reliably interpreted using the method presented in this paper.
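The model class the abstract refers to is a single-hidden-layer network with skip-layer connections, f(x) = b0 + gamma'x + sum_k beta_k * sigma(w_k'x + b_k), where each sigma(w_k'x + b_k) is a low-dimensional ridge function and gamma'x is the direct linear (skip-layer) term. The sketch below is illustrative only, not the authors' code: the names (SkipLayerNet, fit, effect_profile) and all settings are assumptions, weight decay stands in for the regularization the abstract calls for, and the effect_profile helper only imitates the kind of input-effect curve the paper studies, without reproducing its robustification techniques.

```python
# Illustrative sketch (PyTorch), not the authors' implementation: a
# single-hidden-layer network whose output adds low-dimensional ridge
# functions to a direct linear (skip-layer) term, fitted with weight decay.
import torch
import torch.nn as nn

class SkipLayerNet(nn.Module):
    """f(x) = b0 + gamma'x + sum_k beta_k * tanh(w_k'x + b_k)."""
    def __init__(self, n_inputs, n_hidden):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)  # ridge directions w_k
        self.out = nn.Linear(n_hidden, 1)            # combines ridge functions
        self.skip = nn.Linear(n_inputs, 1)           # skip-layer linear term

    def forward(self, x):
        return self.out(torch.tanh(self.hidden(x))) + self.skip(x)

def fit(model, x, y, weight_decay=1e-3, lr=1e-2, epochs=2000):
    # Weight decay stands in for the "sufficient regularization" the
    # abstract says is needed for a stable, interpretable fit.
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def effect_profile(model, x, j, grid):
    # Sweep covariate j over `grid` with the other covariates held at their
    # sample means: a crude stand-in for the paper's input-effect plots
    # (the robustification step itself is not reproduced here).
    base = x.mean(dim=0, keepdim=True).repeat(len(grid), 1)
    base[:, j] = grid
    with torch.no_grad():
        return model(base).squeeze(1)

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(500, 3)                       # synthetic covariates
    y = torch.sin(x[:, :1]) + 0.5 * x[:, 1:2] + 0.1 * torch.randn(500, 1)
    model = fit(SkipLayerNet(n_inputs=3, n_hidden=4), x, y)
    print(effect_profile(model, x, j=0, grid=torch.linspace(-2.0, 2.0, 5)))
```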
Original language | English |
---|---|
Pages (from-to) | 373-393 |
Number of pages | 21 |
Journal | Computational Statistics and Data Analysis |
Volume | 37 |
Issue number | 3 |
DOIs | |
State | Published - 28 Sep 2001 |
Keywords
- Data mining tools
- Interaction effects
- Logistic regression
- Nonlinear models
- Split-level plots