
Cooperative Coevolution of
Recurrent Neural Networks for Chaotic Time
Series Prediction
Dr. Rohitash Chandra
School of Computing, Information and Mathematical Sciences
The University of the South Pacific
Suva, Fiji.
Outline
● Chaos Theory and Time Series Prediction
● Neural Networks
● Cooperative Coevolution and Neuro-evolution
● Encoding Schemes in Cooperative Coevolution
● Adaptation in Cooperative Coevolution
● Results and Analysis
● Conclusions
Chaos Theory and Time Series Prediction
● Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions, also known as the "Butterfly Effect".
● Small differences in the initial conditions yield diverging outcomes for chaotic systems; this makes long-term prediction difficult, e.g. weather forecasting.
● This happens even though the system is fully deterministic, i.e. deterministic systems are not necessarily predictable.
● Prediction of chaotic time series has a wide range of applications, such as finance, signal processing, power load forecasting, weather forecasting and sunspot prediction.
● Artificial Intelligence has been used to model chaotic time series.
Lorenz Equations
[Figure: Lorenz time series, plotted over roughly 4,000 time steps.]
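For reference, the Lorenz system behind this plot is the classic set of three coupled ordinary differential equations; the parameter values below are the standard chaotic choice (they are not stated on the slide):

```latex
\begin{aligned}
\frac{dx}{dt} &= \sigma (y - x), \\
\frac{dy}{dt} &= x (\rho - z) - y, \\
\frac{dz}{dt} &= x y - \beta z,
\end{aligned}
\qquad \sigma = 10,\quad \rho = 28,\quad \beta = \tfrac{8}{3}.
```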
Artificial Neural Networks
● Motivated by biological neural systems
● Feedforward and Recurrent Neural Networks (models)
● Training algorithms (adjust the models)
Recurrent Neural Networks
● They are suitable for modelling temporal sequences.
● Applications: speech recognition, gesture recognition, financial prediction, signature verification and robotics control.
● They are dynamical systems whose next state and output depend on the present network state and input, which makes them particularly useful for modelling dynamical systems (a minimal sketch of the state update follows).
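A minimal sketch of a first-order (Elman-style) recurrent state update, assuming tanh hidden units and a single linear output; the weight names and sizes are illustrative, not those of the paper:

```python
import numpy as np

def elman_step(x, h_prev, W_in, W_rec, W_out, b_h, b_out):
    """One time step of a first-order (Elman-style) RNN: the new hidden
    state depends on the current input and the previous hidden state,
    and the output is read off the hidden state."""
    h = np.tanh(W_in @ x + W_rec @ h_prev + b_h)  # recurrent state update
    y = W_out @ h + b_out                         # linear readout
    return h, y

# Illustrative sizes: 3 inputs (e.g. embedding dimension D = 3), 5 hidden units.
rng = np.random.default_rng(0)
D, H = 3, 5
W_in, W_rec = rng.normal(size=(H, D)), rng.normal(size=(H, H))
W_out, b_h, b_out = rng.normal(size=(1, H)), np.zeros(H), np.zeros(1)

h = np.zeros(H)  # initial network state
h, y = elman_step(rng.normal(size=D), h, W_in, W_rec, W_out, b_h, b_out)
```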
RNN Architectures
● First-order recurrent networks
● Second-order recurrent neural networks
● Long short-term memory networks
● Echo state networks
● Liquid state machines
Unfolding an RNN in time
Evolutionary Algorithms
● Nature-inspired optimization techniques.
● Can be easily deployed on optimization problems that are non-differentiable.
● The similarity of most evolutionary algorithms: a population of individuals.
● The difference lies in how they create new solutions in the evolutionary search process (see the sketch below).
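A minimal sketch of a generic real-coded evolutionary algorithm, assuming a simple Gaussian-mutation variation operator; the G3-PCX algorithm used later differs precisely in how it creates offspring, which is the point of the last bullet above:

```python
import numpy as np

def evolve(fitness, dim, pop_size=20, generations=100, sigma=0.1, seed=0):
    """Generic EA: the population of individuals is the common part of
    most evolutionary algorithms; the variation operator below (Gaussian
    mutation of a random parent) is what varies between them."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))           # population of individuals
    fit = np.array([fitness(p) for p in pop])
    for _ in range(generations):
        parent = pop[rng.integers(pop_size)]
        child = parent + sigma * rng.normal(size=dim)  # create a new solution
        f = fitness(child)
        worst = np.argmax(fit)                         # replace worst (minimisation)
        if f < fit[worst]:
            pop[worst], fit[worst] = child, f
    return pop[np.argmin(fit)]

# Example: minimise the sphere function in 5 dimensions.
best = evolve(lambda w: float(np.sum(w**2)), dim=5)
```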
Neuro-evolution (NE)
● The search for an optimal neural network training algorithm is still an open problem.
● Gradient descent based training paradigms are unable to guarantee a good or acceptable solution on difficult problems and those involving long-term dependencies.
● Neuro-evolution employs evolutionary algorithms to handle the global search problem.
● Easily deployed in any neural network optimisation problem, without being constrained to a particular network architecture.
● Used for evolving feedforward and recurrent networks.
Cooperative Coevolution
● Cooperative coevolution (CC) decomposes a bigger problem into smaller subcomponents and employs standard evolutionary optimisation to solve those subcomponents, in order to gradually solve the bigger problem.
● The subcomponents are also known as species or modules and are represented as subpopulations.
● The subpopulations are evolved separately; cooperation takes place only during fitness evaluation of the respective individuals in each subpopulation.
● Applications: optimization, large-scale function optimization, training neural networks, training recurrent neural networks (a sketch of the CC loop follows).
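A minimal sketch of the cooperative coevolution loop, assuming a round-robin schedule over subpopulations and collaboration with the current best individual of every other subpopulation; variation-operator details (e.g. G3-PCX) are deliberately omitted:

```python
import numpy as np

def cooperative_coevolution(fitness, sub_dims, pop_size=10, cycles=50, seed=0):
    """Round-robin CC: each subpopulation holds one subcomponent of the
    full solution; an individual is evaluated by joining it with the
    best individuals of the other subpopulations (the cooperation)."""
    rng = np.random.default_rng(seed)
    subpops = [rng.normal(size=(pop_size, d)) for d in sub_dims]
    best = [sp[0].copy() for sp in subpops]  # current best of each subcomponent

    def evaluate(i, candidate):
        parts = [candidate if j == i else best[j] for j in range(len(subpops))]
        return fitness(np.concatenate(parts))  # fitness of the joined solution

    for _ in range(cycles):
        for i, sp in enumerate(subpops):        # evolve each subpopulation in turn
            for k in range(pop_size):
                child = sp[k] + 0.1 * rng.normal(size=sp[k].shape)  # mutate
                if evaluate(i, child) < evaluate(i, sp[k]):
                    sp[k] = child
                if evaluate(i, sp[k]) < evaluate(i, best[i]):
                    best[i] = sp[k].copy()
    return np.concatenate(best)

# Example: a 6-dimensional sphere problem split into three subcomponents.
solution = cooperative_coevolution(lambda w: float(np.sum(w**2)), sub_dims=[2, 2, 2])
```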
Encoding Schemes in Cooperative Coevolution
● Network level encoding (NetL): employs a conventional evolutionary algorithm; no problem decomposition is used.
● Enforced Subpopulations (ESP): used for training RNNs on double pole balancing problems (Gomez and Miikkulainen, 2003).
● Synapse level encoding (SL): synapse-level problem decomposition (Gomez, 2008).
● Neuron level encoding (NL): problem decomposition at the neuron level (Chandra, 2011). A sketch of these decompositions follows the reference below.

Chandra, 2011, "Encoding Subcomponents in Cooperative Coevolutionary Recurrent Neural Networks", Neurocomputing, Elsevier (doi:10.1016/j.neucom.2011.05.003).
Adaptive Modularity in Cooperative Coevolution
● The nature of optimisation problems changes during the evolutionary process.
● Problem decomposition therefore needs to adapt during the evolutionary process (a sketch of the switching idea follows the references below).
● SL → NL → NetL

R. Chandra, M. Frean, M. Zhang, "Modularity Adaptation in Cooperative Coevolutionary Feedforward Neural Networks", Proceedings of the International Joint Conference on Neural Networks, 2011, San Jose, USA, pp. 681-688.

R. Chandra, M. Frean, M. Zhang, "A Memetic Framework for Cooperative Coevolution of Recurrent Neural Networks", Proceedings of the International Joint Conference on Neural Networks, 2011, San Jose, USA, pp. 673-680.
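A minimal sketch of the adaptation idea, assuming the decomposition switches to a coarser level when the best fitness stagnates; the actual switching criterion in the cited papers may well differ, so treat this as illustrative:

```python
def adapt_level(level, best_history, window=5, tol=1e-6):
    """Move to a coarser decomposition (SL -> NL -> NetL) when the best
    fitness has improved by less than tol over the last `window`
    recorded values. Illustrative stagnation criterion only."""
    order = ["SL", "NL", "NetL"]
    if len(best_history) >= window:
        recent = best_history[-window:]
        stagnant = (recent[0] - recent[-1]) < tol  # minimisation: little progress
        if stagnant and order.index(level) < len(order) - 1:
            return order[order.index(level) + 1]
    return level

level = "SL"
history = [1.0, 0.9999999, 0.9999998, 0.9999998, 0.9999997]
level = adapt_level(level, history)  # -> "NL"
```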
Simulation and Results
● An Elman recurrent network is trained on chaotic time series (Lorenz, Mackey-Glass, Sunspot).
● The G3-PCX evolutionary algorithm (Deb, 2002) is used in CC.
● The cooperative coevolution framework has 100 individuals in all subpopulations at all three levels (SL, NL and NetL).
● Performance measures: NMSE and RMSE (defined below).
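The slide does not spell the two measures out, so for completeness these are the standard definitions:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2},
\qquad
\mathrm{NMSE} = \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}
```

where $y_i$ is the observed value, $\hat{y}_i$ the prediction, and $\bar{y}$ the mean of the observed series.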
Embedding
● Takens' theorem is used to construct the phase space of the time series with embedding dimension D and time lag T (a sketch of the embedding follows).
● Lorenz: D = 3, T = 2 (tanh in output layer); 1000 samples, 500 for training, 500 for testing.
● Mackey-Glass: D = 3, T = 2 (sigmoid in output layer); 1000 samples, 500 for training, 500 for testing.
● Sunspot: D = 5, T = 2 (tanh in output layer); 2000 samples, 1000 for training, 1000 for testing. World Data Center for the Sunspot Index, November 1834 to June 2001.
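A minimal sketch of the delay embedding, assuming the convention that each input vector holds D past values separated by lag T and the target is the next value; embedding conventions vary, so the indexing is illustrative:

```python
import numpy as np

def embed(series, D=3, T=2):
    """Takens-style delay embedding: build (input, target) pairs where
    each input holds D values of the series spaced T steps apart and
    the target is the value that follows the most recent input."""
    series = np.asarray(series)
    X, y = [], []
    span = (D - 1) * T
    for t in range(span, len(series) - 1):
        X.append(series[t - span : t + 1 : T])  # D lagged values
        y.append(series[t + 1])                 # one-step-ahead target
    return np.array(X), np.array(y)

X, y = embed(np.sin(np.linspace(0, 20, 1000)), D=3, T=2)
print(X.shape, y.shape)  # (995, 3) (995,)
```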
Discussions
● AMCC has shown the best results for the Lorenz and Sunspot time series; SL has shown the best results for the Mackey-Glass time series.
● The performance of AMCC is better than some of the existing methods (one of the best for Sunspot).
● However, the best results are those of the hybrid NARX-Elman network, which combines the architectural properties of hybrid methods with residual analysis, further improving the results. Not a fair comparison, but I compare anyway...
Conclusions and Future Directions
● Different levels of problem decomposition at different stages of evolution are beneficial.
● The performance of the proposed method compares well with the results given by other computational intelligence techniques from the literature.
● AMCC can be further improved by incorporating boosting techniques, gradient-based local search, residual analysis and evolving the neural network topology during evolution.
● Future work: memetic computing approaches in CC for RNNs and chaotic time series.
● AMCC for multi-objective optimisation.
● AMCC for large-scale global optimisation.
● More real-world problems.
● Collaboration with other fields...
Publications from this work
● Rohitash Chandra, Mengjie Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction", Neurocomputing, 86, 116-123 (2012).
● Rohitash Chandra, "Adaption of Modularity in Cooperative Coevolution of Elman Recurrent Neural Networks for Chaotic Time Series Prediction", IEEE International Joint Conference on Neural Networks, August 2013.
Thank you, Vinaka, and Dhanyawaad
● [email protected]