## Introduction

Neural networks are a predictive modeling technique capable of modeling extremely complex functions and data relationships. In other words, neural networks can capture highly nonlinear relationships between inputs and outputs, and they are relatively noise tolerant.

In addition, neural networks can be used for exploratory analysis by searching for clusters in data with Kohonen networks, also known as self-organizing feature maps (SOFMs): neural networks inspired by the topological properties of the human brain. The other problem types (regression, classification, time series) offer three different options for creating neural networks:

- automated neural networks search (ANS)
- custom neural networks (CNS)
- subsampling (random, bootstrap)
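The Kohonen/SOFM idea described above can be illustrated with a minimal sketch in plain NumPy (a generic illustration, not Statistica's implementation): each training case pulls its best-matching unit, and that unit's neighbors on the map, toward it in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 cases in a 2-D feature space.
X = rng.normal(size=(200, 2))

# A 5x5 map of units; each unit has a 2-D weight vector.
grid = np.array([(i, j) for i in range(5) for j in range(5)])  # map coordinates
W = rng.normal(size=(25, 2))                                   # unit weights

lr, sigma = 0.5, 1.5
for epoch in range(20):
    for x in X:
        # Best-matching unit: the unit whose weights are closest to the case.
        bmu = np.argmin(np.sum((W - x) ** 2, axis=1))
        # Neighborhood function: units near the BMU *on the map* move most.
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)
    # Shrink the learning rate and neighborhood as training progresses.
    lr *= 0.9
    sigma *= 0.9
```

Because neighboring map units are updated together, nearby units end up responding to similar cases, which is what makes the trained map useful for spotting clusters.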

The ability to learn from examples is one of the many features of neural networks that enable the user to model data and establish accurate rules governing the underlying relationships between data attributes. The neural network user gathers representative data and then invokes training algorithms that automatically learn the structure of the data. Although the user does need some heuristic knowledge of how to select and prepare data, how to select the appropriate neural network, and how to interpret the results, the level of user knowledge needed to apply neural networks successfully is much lower than that needed for most traditional statistical tools and techniques.

For more information, see Kohonen (1982), Fausett (1994), Haykin (1994), and Patterson (1996).

## Automation

Statistica's neural networks can be used for classification, regression, clustering, or time series problems. Statistica automates pre- and post-processing tasks such as:

- wizard-style Automated Network Search (ANS) guides the user step-by-step through the procedure of creating a variety of different networks and choosing the network with the best performance (avoiding a lengthy "trial-and-error" process)
- feature selection; Statistica determines the best 5 neural networks for the analysis problem (i.e., for the respective dependent variable and predictors) and final importance rankings for the predictors are then computed by averaging the importance rankings for each predictor over all 5 networks
- recoding variables with nominal-values (e.g., Sex = {Male, Female})
- scaling for both inputs and outputs
- normalization
- missing value substitution
- special data preparation for use with time series problems
- assigning cases to class memberships and interpreting network outputs as true probabilities for classification problems
- retaining copies of the best networks found as you experiment on a problem
- automatically retrieving and using the retained "best network" copy if over-learning occurs
- automatically assessing the usefulness and predictive validity of the network when the user includes variables for train, test, and validation sampling
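Several of the preprocessing steps listed above (recoding nominal variables, missing value substitution, scaling) are standard techniques; the following NumPy sketch shows what such automation does behind the scenes. The column names and toy values are hypothetical, and this is not Statistica's actual code.

```python
import numpy as np

# Toy columns: Sex is nominal; Age is numeric with a missing value (nan).
sex = np.array(["Male", "Female", "Female", "Male"])
age = np.array([34.0, np.nan, 51.0, 29.0])

# Recode the nominal variable as one-hot (indicator) columns.
categories = np.unique(sex)                        # ['Female', 'Male']
one_hot = (sex[:, None] == categories).astype(float)

# Substitute missing values with the column mean.
age_filled = np.where(np.isnan(age), np.nanmean(age), age)

# Scale the numeric input to zero mean and unit variance.
age_scaled = (age_filled - age_filled.mean()) / age_filled.std()

inputs = np.column_stack([one_hot, age_scaled])
print(inputs.shape)   # (4, 3)
```

The resulting `inputs` matrix is what a network's input layer would actually consume: all-numeric, on comparable scales, with no missing entries.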

## Options for Neural Networks

Statistica supports the most important classes of neural networks for real-world problem-solving.

- Multilayer Perceptrons
- Radial Basis Function networks
- Self-Organizing Feature Maps

In addition, ANS supports ensemble networks formed from arbitrary combinations of the network types listed above. Combining networks into ensembles is particularly useful for noisy or small datasets.
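Ensemble prediction typically amounts to averaging the member networks' outputs (for regression) or their class probabilities (for classification), which reduces the variance contributed by any single member. A minimal sketch with hypothetical member predictions:

```python
import numpy as np

# Hypothetical predictions from three trained networks on five cases.
member_preds = np.array([
    [0.9, 0.2, 0.6, 0.4, 0.8],
    [0.8, 0.1, 0.7, 0.5, 0.9],
    [1.0, 0.3, 0.5, 0.3, 0.7],
])

# The ensemble output is the mean over members: averaging cancels out
# part of the noise in each individual member's predictions.
ensemble = member_preds.mean(axis=0)
print(ensemble)
```

On small or noisy datasets, individual networks tend to overfit in different ways, so their averaged prediction is usually more stable than any single member.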

Users can also explore the usefulness and predictive validity of the network by evaluating the size and efficiency of the network as well as the cost of misclassification.

For enhanced performance, Statistica supports a number of network customization options. You can specify a linear output layer for networks used in (but not restricted to) regression problems or softmax activation functions for probability estimation in classification problems. Cross-entropy error functions, based on information-theory models, are also included, and there is a range of specialized activation functions, including Exponential, Tangent Hyperbolic, Logistic Sigmoid, and Sine functions for both hidden and output neurons.
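The softmax activation and cross-entropy error mentioned above can be written down directly. A small, generic NumPy sketch (not Statistica-specific):

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; outputs are positive and sum to 1,
    # so they can be read as class probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, target_index):
    # Negative log-likelihood of the true class under the predicted distribution.
    return -np.log(p[target_index])

logits = np.array([2.0, 1.0, 0.1])   # raw outputs of the network's final layer
probs = softmax(logits)
print(probs)                         # probabilities summing to 1
print(cross_entropy(probs, 0))       # loss when the true class is index 0
```

The pairing is deliberate: with a softmax output layer, the gradient of the cross-entropy error takes a particularly simple form, which is one reason this combination is standard for classification networks.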

Statistica's neural networks include fast, second-order training algorithms: Conjugate Gradient Descent and BFGS. There is also a limited-memory version of BFGS, to which Statistica automatically switches whenever the amount of memory on your computer reaches a critical level. These algorithms typically converge far more quickly than first-order algorithms such as Gradient Descent.

The iterative training procedures are complemented by automated tracking of both the training error and an independent testing error as training progresses. Training can be aborted at any point with the click of a button. Users can also specify stopping conditions under which training should be terminated early. For example:

- target error level is reached
- selection error deteriorates over a given number of epochs
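The two stopping conditions above can be expressed as a simple wrapper around a training loop. This is a generic sketch with hypothetical names (`step`, `patience`), not Statistica's API; `step()` is assumed to run one epoch and report the errors.

```python
def train_with_early_stopping(step, target_error=0.01, patience=5, max_epochs=1000):
    """step() runs one epoch and returns (train_error, selection_error)."""
    best, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_err, sel_err = step()
        if train_err <= target_error:         # target error level reached
            return epoch, "target reached"
        if sel_err < best:
            best, bad_epochs = sel_err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:        # selection error deteriorated
                return epoch, "early stop"    # over `patience` epochs
    return max_epochs, "max epochs"

# Toy run: training error keeps falling, but the selection (test) error
# bottoms out at epoch 10 and then rises, so training stops early.
errs = iter([(1.0 / (e + 1), 0.5 + abs(e - 10) * 0.01) for e in range(1000)])
result = train_with_early_stopping(lambda: next(errs))
print(result)   # → (15, 'early stop')
```

Stopping when the selection error deteriorates is exactly the over-learning safeguard described in the Automation section: the best weights seen so far are the ones worth keeping.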

## Probing and Testing a Neural Network

Once you have trained a network, you'll want to test its performance and explore its characteristics. Statistica offers a wide selection of statistical and graphical output.

You may select multiple models and ensembles. When possible, Statistica displays any results generated in a comparative fashion (e.g., by plotting the response curves for several models on a single graph, or presenting the predictions of several models in a single spreadsheet). This feature is particularly useful for comparing models trained on the same data set.

All statistics are generated independently for the training, test, and validation samples or combinations of your choice.

Overall statistics calculated include the mean network error, the confusion matrix for classification problems (which summarizes correct and incorrect classifications across all classes), and the correlation for regression problems, all computed automatically. Kohonen networks include a Topological Map window, which enables you to visually inspect unit activations during data analysis.
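The confusion matrix summarized above is simple to compute from paired actual and predicted class labels; a minimal NumPy sketch with toy labels:

```python
import numpy as np

def confusion_matrix(actual, predicted, n_classes):
    # Rows index the actual class, columns the predicted class.
    m = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        m[a, p] += 1
    return m

actual    = [0, 0, 1, 1, 2, 2]
predicted = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(actual, predicted, 3)
print(cm)
# Diagonal entries are correct classifications; off-diagonal entries are errors,
# and each cell shows *which* classes are being confused with which.
print(np.trace(cm) / cm.sum())   # overall accuracy: 4/6
```

Reading the off-diagonal cells is also the natural starting point for misclassification cost analysis, since it shows which specific confusions the network makes.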

## Training Algorithm Summary

- Gradient Descent
- Conjugate Gradient Descent
- BFGS
- Kohonen training
- k-Means Center Assignment for Radial Basis networks
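The last item, k-Means center assignment, amounts to placing the radial basis functions' centers at cluster centroids of the training inputs before the output weights are fit. A minimal, generic sketch (not Statistica's implementation):

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Pick k centers by standard k-means: assign, then re-average."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Toy data: two well-separated clusters around (0, 0) and (3, 3).
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
               np.random.default_rng(2).normal(3, 0.1, (50, 2))])
centers = kmeans_centers(X, 2)
# An RBF hidden unit would then respond as exp(-||x - c||^2 / (2 * s^2))
# for each center c, with s set from the spread of its cluster.
```

Seeding the centers this way places the hidden units where the data actually lives, which is why it is a common initialization for radial basis networks.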
