What are artificial neural networks?
Neural networks are tools that simulate the behavior of mammalian
neurons as they process information and learn patterns in data.
Artificial neural networks, or neural nets, belong to a class of machine
learning tools within the broader field of artificial intelligence.
Because they are pattern recognition tools, neural nets differ from
other quantitative modeling tools such as deterministic models and
statistical models, although they have more in common with statistical
models than with deterministic models.
Neural net models contain several features. First, most neural net models
contain input, hidden, and output vectors (Figure 1). Input vectors
contain information that is related to output vectors through
a process called "training". During training, data are presented
to the neural network and passed forward layer by layer: at each node
in the next layer, values from the previous layer are combined by a
weighted sum (generally a linear combination of the weights and the
values from the previous layer; this is shown in Figure 2 below) and
then passed through a nonlinear function, called the activation or
"squashing" function, which is commonly a logistic or tanh function.
A bias term can be added to shift the activation function along the
horizontal axis.
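The forward step described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's implementation; the weights, biases, and the choice of tanh are arbitrary examples.

```python
import math

def layer_forward(inputs, weights, biases):
    """Forward pass through one layer: each node forms a weighted sum
    (a linear combination of the previous layer's values) plus a bias,
    then applies a nonlinear activation function (here, tanh)."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        # weighted sum of values from the previous layer
        net = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        # the nonlinear activation "squashes" the net input into (-1, 1)
        outputs.append(math.tanh(net))
    return outputs

# two input values feeding three hidden nodes (illustrative numbers)
hidden = layer_forward([0.5, -1.0],
                       weights=[[0.1, 0.4], [-0.3, 0.2], [0.7, -0.5]],
                       biases=[0.0, 0.1, -0.2])
print(hidden)  # three activation values, each in (-1, 1)
```

Stacking such layers, with the output of one serving as the input to the next, yields the input-hidden-output structure of Figure 1.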
Neural nets begin training by assigning random weights between nodes
in the input layer and nodes in the hidden layer, and between nodes
in the hidden layer and nodes in the output layer. As information
is passed forward through the neural network, the value of the
output is compared to the observed value, and the resulting error
is passed back through the network (a process called back propagation).
Figures 2 and 3 illustrate the processes of feed forward of weights
and back propagation of errors, respectively. As the net cycles through
this process, a learning algorithm, often called the delta rule,
is used to determine the difference in error between the last
two cycles, and the weights are adjusted slightly to reduce that
error. The errors typically decrease over time, as illustrated
in Figure 4 below.
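The training cycle above can be sketched with a toy example. The snippet below trains a single logistic unit with the delta rule on the OR truth table; the dataset, learning rate, and epoch count are illustrative choices, not values from this project.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

# toy training set: inputs paired with observed outputs (OR truth table)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# training begins with small random weights, as described above
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = random.uniform(-0.5, 0.5)
rate = 0.5
errors = []

for epoch in range(200):
    total = 0.0
    for x, target in data:
        # feed forward: weighted sum plus bias, then logistic activation
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = target - out
        total += err * err
        # delta rule: adjust each weight in proportion to its
        # contribution to the error (gradient of the logistic unit)
        delta = err * out * (1 - out)
        w[0] += rate * delta * x[0]
        w[1] += rate * delta * x[1]
        b += rate * delta
    errors.append(total)

print(errors[0], errors[-1])  # the summed squared error shrinks with epochs
```

Plotting `errors` against epoch number would reproduce the kind of error-decay curve shown in Figure 4.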
One of the analyses conducted during modeling is examination of the
pattern of error over the number of training cycles (sometimes called
epochs). We find that most LTM applications reach a minimum error very
rapidly, sometimes after only 100 epochs. We have also examined how
different node weights and activation values affect model performance.
This research is important because many of the models coupled in
this project rely on the LTM for accurate predictions.
Neural nets are
powerful generalization tools. Models developed with one
set of data are likely to perform well on another set presented
in the same manner (e.g., same variables, same neural net topology).
We have begun to examine how this ability to generalize can be used
to expand our modeling across space, time and datasets. For example, we have tested whether
a neural net model that is built to simulate land use change in the
Detroit area can be used to predict historical changes in the Twin
Cities metropolitan area, and vice versa. In addition, we are testing
whether small training sets can be used to scale up to larger regions.
We call these exercises "spatial learning exercises".
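The spatial learning idea, training in one region and predicting in another, can be sketched as follows. The two "regions" here are hypothetical toy datasets sharing the same input variables; the single-unit model and delta rule stand in for the much richer LTM networks.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(data, epochs=500, rate=0.5):
    """Fit a single logistic unit with the delta rule (toy model)."""
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            delta = (target - out) * out * (1 - out)
            w[0] += rate * delta * x[0]
            w[1] += rate * delta * x[1]
            b += rate * delta
    return w, b

def accuracy(model, data):
    """Fraction of observations classified correctly at a 0.5 threshold."""
    w, b = model
    hits = sum((sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == bool(t)
               for x, t in data)
    return hits / len(data)

# hypothetical "regions": same variables, similar underlying pattern
region_a = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
region_b = [([0.1, 0.0], 0), ([0.0, 0.9], 1),
            ([1.1, 0.1], 1), ([0.9, 1.0], 1)]

model = train(region_a)           # train in one "region"
print(accuracy(model, region_b))  # evaluate in the other
```

In the real exercises, the "regions" are the Detroit and Twin Cities study areas and the inputs are the same land use driver variables, but the train-here, test-there logic is the same.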
We have also used
a set of time series data for the Twin Cities to assess temporal
generalization (Pijanowski et al., 2005). This work shows a
nearly 90% match between time steps across a 13-year period, suggesting
that the neural net model can generalize across time as well.