
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Web Site: www.ijettcs.org Email: [email protected]
Volume 3, Issue 3, May – June 2014
ISSN 2278-6856
Design of Entropy Neural Network
1Shruti Bhardwaj, 2Urvashi Chaudhary

1Department of Information Technology, Banasthali University, Rajasthan
2Department of Electrical Engineering, Indian Institute of Technology, Delhi
Abstract: This paper develops a classifier network that is used for classification and is introduced to improve machine learning, through which we obtain the recognition rate. To optimize the error rate and to train the network, we use an evolutionary learning technique, the particle swarm optimization (PSO) algorithm. This neural network can be applied to biometric problems and datasets including knuckles, ear, etc. Biometrics consists of methods for automatically and uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits possessed by individuals.
1. Introduction
Multitime scale dynamics are implemented by an unsupervised competitive neural network model, in which a competitive law is considered for local and global asymptotic stability [1]. Unlabelled data populated by noise are managed by an online unsupervised learning mechanism, and this system is designed as a two-layer neural network which shows the topological structure of unsupervised online data [2].
An inhibited neural network with unsupervised Hebbian learning is modeled via the global exponential stability of a multitime scale competitive neural network model with non-smooth functions [3]. Using a nonparametric unsupervised artificial neural network, Kohonen's self-organizing map, and a hybrid genetic algorithm, a multicomponent image segmentation method was developed without any a priori knowledge [4]. An unsupervised Bayesian classifier is introduced to efficiently conduct segmentation in natural-scene sequences with complex background motion and changes in illumination for video object segmentation [5]. For speech recognition, Chung et al. [6] introduced a divergence-based centroid neural network (DCNN) which implements the statistical characteristics of observation densities and uses the divergence measure as its distance measure in hidden Markov models. A neuronal cluster model consists of spatial as well as temporal weights in its unified adaptation scheme and is based on Hebbian and lateral-inhibition learning rules for spatio-temporal adaptation [7]. Machine-learning Petri net models use supervised and unsupervised algorithms to create a fully trainable model and remove the flaws encountered by artificial neural networks [8]. Unsupervised learning based on a Hebb-like mechanism is used for training second-order neural networks to perform different types of motion analysis, allowing the networks to become noise-robust [9]. An iterative computation method is introduced for alternating the projection between two convex sets for unsupervised learning of neural network
structure with convex constraint [10]. A modified self-organizing feature map neural network is used in the unsupervised context-sensitive technique for change detection in multi-temporal remote sensing images proposed by Ghosh et al. [11]. A novel network is introduced to separate mixtures of previously learned inputs using unsupervised learning based on a Hebbian update for unsupervised segmentation [12]. Segmented objects represented in 3-D as point clouds of laser reflections are classified by a convolutional learning system whose performance is improved by combining supervised and unsupervised learning [13]. For surface identification in all-terrain, low-velocity mobile robotics, a tactile probe is designed which can be used for unsupervised learning of terrains [14]. Frolov et al. [15] introduced a Boolean factor analysis method based on Hebbian learning and the Hopfield neural network, which can be made more capable by modifying the Hopfield architecture and its dynamics. The neural-network-based Boolean factor analysis algorithm is enhanced into a neural-network-based algorithm for word clustering which can handle a more complex model of signals related to textual documents [16]. A self-organized model related to a probabilistic mixture of multivariate Gaussian components is introduced to remove the flaws of the self-organizing map's fixed topology and to provide a visualization method for high-dimensional data [17].
Zunino et al. [18] introduced a method based on referring SVM parameters to an unsupervised solution for the computation of generalized bounds, which also achieves effective model selection. Chang et al. [19] introduced an automatic wafer inspection system based on a self-organizing neural network to overcome the lack of product flexibility in automatic wafer inspection by using unsupervised auto-clustering. An autonomous system is implemented for unsupervised monitoring of bowel sounds, achieved by means of abdominal surface vibrations, utilizing time-frequency features in a pattern classification application [20]. Quek et al. [21] introduced two clustering techniques, the unsupervised discrete clustering technique and the supervised discrete clustering technique, which are based on a Kohonen-like self-organizing neural network architecture and reduce data loss by proposing non-uniform, normal fuzzy sets. For efficient hyperspectral image classification, semi-supervised neural networks are trained by adding a flexible embedding regularizer to the loss function, and can handle millions of unlabelled samples [22]. An unsupervised
neural-network-based automatic algorithm is used for on-line diagnostics of three-phase induction motor stator faults, in which the alpha-beta stator currents are used as input variables [23]. An architecture for feature segmentation, in which recurrent neural networks, unsupervised Hebbian learning, and supervised learning are connected by a Hebbian learning method, is based on the competitive layer model and handles segmentation problems [24]. The topological structure of unsupervised online data, used for word-meaning learning in a humanoid robot, is represented by a noise-robust self-organized growing neural network [25]. A self-organized neural network enables a humanoid robot to perform online grammar learning and word acquisition, learning through top-down and bottom-up approaches [26]. An inhibited neural network with unsupervised Hebbian learning can be modeled via the global exponential stability of a multitime scale competitive neural network model with non-smooth functions [27]. To achieve online learning of unsupervised tasks, Furao et al. [28] introduced the enhanced self-organizing incremental neural network (ESOINN) to remove the flaws of the self-organizing incremental neural network (SOINN). For hierarchical classification of unlabelled datasets, the growing hierarchical tree SOM (GHTSOM) is introduced, a self-organizing network that combines unsupervised learning with dynamic topology [29].
The RT-UNNID system uses unsupervised neural networks for intelligent real-time intrusion detection and provides a real-time solution for detecting new attacks in network traffic [30]. A context-sensitive technique based on a modified Hopfield neural network architecture is used for unsupervised change detection in multitemporal remote sensing images [31]. Adaptive resonance theory (ART 2) is a kind of unsupervised neural network, tested on the iris plant database and alphabet characters for PD pattern recognition and classification [32]. Thin-film transistors and a simplified architecture are used in a device-level neural network, reducing the synapse unit to one transistor and using unsupervised learning [33]. Documents are classified via a word map using unsupervised learning and a supervised multilayer-perceptron-based classifier, with techniques based on HMMs and self-organizing maps [34].
2. Acquisition of Hanman Classifier
2.1 Knuckles Database
Knuckle datasets are taken from the Hong Kong Polytechnic University (PolyU) finger-knuckle-print database.
Among various kinds of biometric identifiers, hand based
biometrics has been attracting considerable attention.
Recently, it is found that the finger-knuckle-print (FKP),
which refers to the inherent patterns of the outer surface
around the phalangeal joint of one’s finger, is highly
unique and can serve as a distinctive biometric identifier.
Abundant line-like textures are contained in an FKP
image. FKP images were collected from 165 volunteers,
including 125 males and 40 females. Among them, 143
subjects were 20~30 years old and the others were 30~50
years old. We collected samples in two separate sessions.
In each session, the subject was asked to provide 6 images
for each of the left index finger, the left middle finger, the
right index finger, and the right middle finger. Therefore,
48 images from 4 fingers were collected from each
subject. In total, the database contains 7,920 images from
660 different fingers. The average time interval between
the first and the second sessions was about 25 days. The
maximum and minimum intervals were 96 days and 14
days, respectively [35].
2.2 Iris Database
Iris flower datasets are taken from UCI Repository. The
UCI Machine Learning Repository is a collection of
databases, domain theories, and data generators that are
used by the machine learning community for the
empirical analysis of machine learning algorithms. The
archive was created as an ftp archive in 1987 by David
Aha and fellow graduate students at UC Irvine. Since that
time, it has been widely used by students, educators, and
researchers all over the world as a primary source of
machine learning data sets. This is perhaps the best
known database to be found in the pattern recognition
literature. The data set contains 3 classes (Iris versicolor,
Iris setosa, Iris virginica) of 50 instances each, where
each class refers to a type of iris plant [36].
2.3 Ear Database
Ear datasets are taken from the IIT Delhi database. The Biometrics Research Laboratory at IIT Delhi has been engaged in collecting an ear image database from volunteers since October 2006 [37]. The IIT Delhi ear image database consists of ear images collected from students and staff at IIT Delhi, New Delhi, India. The database was acquired on the IIT Delhi campus during Oct 2006 - Jun 2007 (still in progress) using a simple imaging setup. We have taken a dataset of 125 users, each with 3 ear images.
3. Information Sets
The theory of information sets is expounded in [38] with a view to expanding the scope of fuzzy sets, in which an element is a pair comprising a property (information source) and its degree of belonging (membership function value). In most applications involving fuzzy theory, only the membership function is at the centre stage of operations; the value of the property rarely figures. This anomaly is sought to be removed by proposing the concept of the information set. In real-life contexts, we operate on information values. The information sources received by our senses are perceived by the mind as information values; that is why we may fail to perceive a sound even when it strikes our ears. Like fuzzy variables, information values are also natural variables.
Definition:
Consider a fuzzy set constructed from the gray levels I = {I(i, j)} in a window. This step is basically the granularization of a dataset or an image. If an attribute or property in the window follows a distribution, it is easy to fit a membership function, or at least an approximating function describing the distribution. In that case, the attributes or elements of the fuzzy set are represented by the membership function grades. It can be proved that the products of the information source values (gray levels) {I(i, j)} and their corresponding membership grades {μ_ij} constitute the information set H, and each element H(i, j) of the information set is called an information value, defined as

H = {H(i, j)} = {μ_ij I(i, j)}  (1)

This relation comes from the non-normalized 2D Hanman-Anirban entropy function [38]

H = Σ_{i=1}^{n} Σ_{j=1}^{n} p_ij e^(−(a p_ij^3 + b p_ij^2 + c p_ij + d))  (2)

where all probabilities p_ij ∈ [0, 1], Σ_{i=1}^{n} Σ_{j=1}^{n} p_ij = 1, and a, b, c and d are real-valued parameters. This form allows us to relax the assumption that the sum of all probabilities equals 1, called the equality constraint. The difficulty with probabilities is that they become very small in large datasets in trying to meet the equality constraint. The relaxation of the equality constraint is done by replacing p_ij with an information source and its associated grade. For more elaboration on the entropy function, refer to the Appendix. With this constraint relaxed, we have the flexibility of choosing the information sources in such a way that the entropy value can exceed 1, thus acquiring discriminating power.

Taking p_ij = I(i, j), a = b = 0, c = 1/I_max and d = −I(ref)/I_max in the exponential gain of Eq. (2) leads to the exponential function given by

μ_ij = e^(−|I(i, j) − I(ref)| / f_h^2)  (3)

The unknown parameter in Eq. (3) is the reference gray level I(ref), which can be taken as the maximum gray level or the median in the window. Here the fuzzifier f_h is defined as

f_h^2(ref) = [Σ_{i=1}^{W} Σ_{j=1}^{W} (I(ref) − I(i, j))^4] / [Σ_{i=1}^{W} Σ_{j=1}^{W} (I(ref) − I(i, j))^2]  (4)

In view of Eqns. (2) and (3), Eq. (2) can be interpreted as

H = Σ_{i=1}^{n} Σ_{j=1}^{n} I(i, j) μ_ij

Now the information can be represented as the set {μ_ij I(i, j)}. In the parlance of a fuzzy set, each element of the set is a pair consisting of an information source and its membership value, whereas in an information set each element is an information value. Several candidates that serve as information can be derived from (1), which is the basic form. Some of the forms which emanate from information sets are:

{I(i, j) μ_ij^3}, {I(i, j) μ_ij^2}, {I^2(i, j) μ_ij}, {I(i, j) μ̄_ij^2}, {I(i, j) f(μ_ij)}, {g(I(i, j)) μ_ij}

A family of information forms is thus deduced from the Hanman-Anirban entropy for dealing with different problems.

3.2 The Hanman Transform
Since the membership value associated with each information source gives a measure of uncertainty, making it a parameter in the exponential gain function of this entropy gives rise to the information value as the gain. To this end, the parameters of the Hanman-Anirban entropy function Eq. (2) are chosen as a = b = d = 0 and c = 1/I_max so as to obtain the Hanman transform

H_t(I) = Σ_{i=1}^{W} Σ_{j=1}^{W} I(i, j) e^(−H(i, j)/I_max)  (5)

where H(i, j) = μ_ij I(i, j). In the general case, one can take the exponential gain to be a function of the information, e^(−f(H(i, j))).

The motivation behind this development is now elaborated. As can be seen from Eq. (5), the information source is weighted by a function of the information value. One can also see the utility of this transform in the social context. For example, a person (information source) is judged by the opinions (exponential gain) formed about the person (information value), resulting in the judgment (the weighted information source). Just as the Fourier transform sieves the frequency content of a periodic signal, the Hanman transform sieves the uncertainty (information) in the vague information source. The exponential function, being monotonically increasing, has the ability to retrieve things in terms of its gain. As the information values in the gain can assume different forms, the Hanman transform can capture the related things from the information sources, thus offering immense possibilities to try out.

Alternatively, the Hanman transform Eq. (5) can also be written in matrix form as

H_t(I) = I · e^(−(μ·I)/I_max)  (6)

where I is the sub-image of the window and μ·I (the product taken element-wise) is the corresponding information matrix. The information is obtained as the sum of the matrix elements. It is possible to include a bias in the Hanman transform as follows:
H_t(I) = Σ_{i=1}^{W} Σ_{j=1}^{W} I(i, j) e^(−(H(i, j) + H_0)/I_max)

Spatial Variation
If g(k) is the kth feature value representing the spatial variation of the information source, with corresponding membership value μ_k, and h(k) is the frequency of occurrence of this feature, then (5) can be written as

H_s(g) = Σ_k h(k) g(k) e^(−μ_k g(k)/g_max)

For instance, h(k) vs. g(k) is the histogram of the gray levels of an image. If g(k) = k, i.e., the gray levels vary like natural numbers, then h(k) vs. k is the histogram. Considering h(k) as the membership function of k, the Hanman transform can be written as

H_s(g) = Σ_k k e^(−h(k)·k)

Instead of discrete k, let us now take a continuous variable t such that h(t) is a function of t; the above then becomes

H_t(t) = ∫ t e^(−h(t)·t) dt

3.2.1 Time Variation
Let h(t) be a time-varying function and μ(t) be the continuous membership function; then the Hanman transform takes the integral form

H_t(t) = ∫ h(t) e^(−μ(t) h(t)) dt

This transformation is motivated by the fact that any information source (text, image or video) must be weighed as a function of the information. Note that the information results from an agent who gives the information source a grade (membership function value).

3.2.2 Heterogeneous Hanman Transform
If A has the information source I_a and information value H_a, and B has the information source I_b and information value H_b, the heterogeneous Hanman transforms are expressed as

H(A/B) = Σ I_a e^(−H_b)
H(B/A) = Σ I_b e^(−H_a)

To mention a few applications of this transform: generation of new features, evaluation of the quality of signals, and image and video processing.
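The histogram form of the spatial-variation transform above can be sketched as follows. This is an illustrative Python sketch, not code from the paper; in particular, taking the membership μ_k to be the normalized histogram is an assumption made here for concreteness.

```python
import numpy as np

def hanman_spatial(image, bins=256):
    """Histogram form of the Hanman transform:
       H_s(g) = sum_k h(k) g(k) exp(-mu_k g(k) / g_max)
    with the membership mu_k taken as the normalized histogram (an assumption)."""
    img = np.asarray(image, dtype=float).ravel()
    h, edges = np.histogram(img, bins=bins, range=(0, bins))
    g = edges[:-1]                  # g(k): the gray level of the kth bin
    mu = h / h.sum()                # mu_k: assumed membership of each gray level
    g_max = g.max() if g.max() > 0 else 1.0
    return float((h * g * np.exp(-mu * g / g_max)).sum())
```

Any other membership assignment for μ_k (e.g. the exponential membership of Eq. (3)) can be substituted without changing the structure of the sum.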
Algorithm:
The Hanman transform features are extracted from (5) in the following steps:
1) Compute the membership value associated with each gray level in a window of size W×W.
2) Compute the information as the product of the gray level and its membership function value, divided by the maximum gray level in the window.
3) Take the exponential of the negated normalized information and multiply it by the gray level.
4) Repeat steps 1-3 on all gray levels in a window and sum the values to obtain a feature.
5) Repeat steps 1-4 on all windows in an image to get all the features.
6) Repeat steps 1-5 for W = 13, 15, 17, 19 for the performance evaluation.
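The six steps above can be sketched as follows. This is a minimal Python sketch, not the authors' implementation; it assumes the exponential membership of Eq. (3) with the fuzzifier of Eq. (4), non-overlapping windows, and the window maximum as the reference gray level I(ref).

```python
import numpy as np

def hanman_features(image, W=13):
    """One Hanman-transform feature (Eq. (5)) per non-overlapping W x W window,
    following steps 1-5; step 6 repeats this for W = 13, 15, 17, 19."""
    img = np.asarray(image, dtype=float)
    feats = []
    for r in range(0, img.shape[0] - W + 1, W):
        for c in range(0, img.shape[1] - W + 1, W):
            I = img[r:r + W, c:c + W]
            I_ref = I.max()                    # reference gray level (assumption: window max)
            d2 = (I_ref - I) ** 2
            fh2 = (d2 ** 2).sum() / d2.sum() if d2.sum() > 0 else 1.0  # fuzzifier f_h^2, Eq. (4)
            mu = np.exp(-np.abs(I - I_ref) / fh2)   # membership values, Eq. (3)
            H = mu * I                              # information values, Eq. (1)
            I_max = I.max() if I.max() > 0 else 1.0
            feats.append(float((I * np.exp(-H / I_max)).sum()))  # Hanman transform, Eq. (5)
    return np.array(feats)
```

With W = 13, a 26 x 26 image yields four features, one per window.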
4. The Proposed Algorithm
A new classifier is formulated that seeks to accentuate the absolute differences between the training and test samples using t-norms and then evaluates the entropy. Let Nl be the number of users and Nr the number of training samples per user. As we deal with only one test sample, the number of test samples is of no concern. Let the feature vector of the rth training sample of the lth user be denoted by f_tr,k(r, l). Similarly, let the feature vector of the tth test sample, which may pertain to any user, be denoted by f_te,k(t). The absolute errors between the training and test samples are computed from

e_r,l(k) = |f_tr,k(r, l) − f_te,k(t)|,  r = 1, ..., Nr; l = 1, ..., Nl  (7)

All the error vectors (Nr of them) pertaining to a user (l) contain the information required for matching. In order to utilize this information without resorting to learning, we generate the normed-error vectors by taking the t-norm of all possible pairs of error vectors:

E_ij(k) = t(e_i,l(k), e_j,l(k))  (8)

As i, j = 1, 2, ..., Nr, the number of products generated is Np = Σ_{r=2}^{Nr} (Nr − r + 1). The normed-error vectors act like the support vectors of a Support Vector Machine (SVM) because the t-norms stretch the errors, thus creating a margin.

Recalling the Hanman-Anirban entropy function with a = b = 0 and p = E_ij(k), we obtain what we call the general Hanman classifier

h_ij(l) = Σ_{k=1}^{M} E_ij(k) e^(−[c E_ij(k) + d])  (9)

In (9) we need to learn c and d, which we can avoid by taking c = 1 and d = 0. In this case (9) simplifies to the Hanman classifier:

h_ij(l) = Σ_{k=1}^{M} E_ij(k) e^(−E_ij(k))  (10)

The minimum of h_ij(l) is the measure of dissimilarity corresponding to the lth user, so we determine

H(l) = min{h_ij(l)}  (11)

The identity of the user corresponds to the one for which H(l) is minimum. The normed-error vectors can also be used for classification by ignoring the exponential in (10):

h_ij(l) = Σ_{k=1}^{M} E_ij(k)  (12)

It is possible to find the pair from all {h_ij(l)} which corresponds to the minimum H(l) for the lth user. This is repeated for all l.
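A minimal sketch of the Hanman classifier (Eqs. (7), (8), (10), (11)) follows. The helper names are hypothetical, and the sketch assumes feature vectors normalized to [0, 1] so that the Einstein-product t-norm is well defined; the paper also explores the Hamacher, Schweizer & Sklar, Yager and Frank t-norms.

```python
import numpy as np
from itertools import combinations

def einstein_product(x, y):
    """Einstein-product t-norm; inputs are assumed normalized to [0, 1]."""
    return (x * y) / (2.0 - (x + y - x * y))

def hanman_score(train_feats, test_feat, tnorm=einstein_product):
    """Dissimilarity H(l) for one user: absolute errors (Eq. 7), pairwise
    t-norms (Eq. 8), entropy sum (Eq. 10), minimum over pairs (Eq. 11)."""
    errors = np.abs(np.asarray(train_feats) - np.asarray(test_feat))  # Eq. (7)
    scores = []
    for i, j in combinations(range(len(errors)), 2):   # all Nr(Nr-1)/2 pairs
        E = tnorm(errors[i], errors[j])                # normed-error vector, Eq. (8)
        scores.append(float((E * np.exp(-E)).sum()))   # Hanman classifier, Eq. (10)
    return min(scores)                                 # Eq. (11)

def identify(users_train_feats, test_feat):
    """Identity = the user l with minimum H(l)."""
    return int(np.argmin([hanman_score(tf, test_feat) for tf in users_train_feats]))
```

Swapping in another t-norm only requires passing a different two-argument function as tnorm.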
4.1 Entropy Neural Network
The entropy neural network is built on the Hanman classifier by making the exponential gain learnable. Its output is

Σ_{i=1}^{n} tnorm(e_i, e_i′) e^(−(a · tnorm(e_i, e_i′) + b))  (21)

where a and b are the parameters to be learned.

5. Results
We use three datasets: FKP (finger knuckle print) from PolyU, the Iris flower dataset from the UCI Repository, and the ear database from IIT Delhi.

5.1 Without PSO
5.1.1 FKP Dataset
We have taken LBP (Local Binary Pattern) data of the left index knuckles of 165 users. We use various t-norms, including Hamacher, Einstein product, Schweizer & Sklar, Yager and Frank. We have tried various values of p; the best results are reported in Table 5.1.

Table 5.1 FKP dataset results using various t-norms
T-norm             p    Result
Schweizer & Sklar  0.9  81.82 %
Yager              0.1  86.06 %
Yager              0.3  85.76 %
Frank              0.1  89.09 %
Frank              0.2  88.18 %
Frank              0.3  87.20 %

We observe that the Frank t-norm gives the best result among all the t-norms.

5.1.2 Iris Flower Database
The best results are shown in Table 5.2.

Table 5.2 Iris flower database results using various t-norms
T-norm             p    Result
Einstein product   0.1  85 %
Schweizer & Sklar  0.1  90 %
Schweizer & Sklar  0.3  91.67 %
Yager              0.3  91.67 %
Frank              0.1  86.67 %

5.1.3 Ear Database
The best results are shown in Table 5.3.

Table 5.3 Ear database results using various t-norms
T-norm             p    Result
Einstein product   0.3  86.40 %
Hamacher           0.1  85.60 %
Frank              0.1  87.20 %

5.2 By Using PSO
To optimize the error rate, we use the PSO algorithm. We found that the Frank t-norm shows the best results. By using equation (21), we find the recognition rate. There is no effect of the value of b, so we learn only the value of a.

5.2.1 FKP Database
The learned value of a is 3.0437. After learning this value (weight) through PSO, we apply it in the classifier and find a recognition rate of 90.61 %.

Table 5.4 Results of the FKP database using the PSO algorithm
Value of a  Recognition rate without PSO  Recognition rate with PSO  Increase in accuracy rate
3.0437      89.09 %                       90.61 %                    1.52 %
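The PSO step can be sketched as follows. This is a generic particle swarm optimizer for a single scalar parameter such as a in equation (21); the swarm size, iteration count and inertia/acceleration weights are illustrative assumptions, not values from the paper, and the quadratic toy objective stands in for the actual classification error rate.

```python
import numpy as np

def pso_minimize(objective, n_particles=20, n_iters=50, bounds=(0.0, 10.0), seed=0):
    """Minimal particle swarm optimizer for one scalar parameter (e.g. 'a' in
    Eq. (21)). Swarm size, iterations and the weights w, c1, c2 are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)               # particle positions
    v = np.zeros(n_particles)                          # particle velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.array([objective(xi) for xi in x])
    gbest = pbest[pbest_val.argmin()]                  # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(xi) for xi in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, float(objective(gbest))

# Toy usage: recover the minimizer of a convex stand-in for the error rate.
a_star, err = pso_minimize(lambda a: (a - 3.0437) ** 2)
```

In the paper's setting, the objective would evaluate the classifier's error rate on the training data for a candidate value of a.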
6. Conclusion
We have investigated finger-knuckle and ear based authentication using the Hanman classifier. This classifier, derived from the t-norms and the entropy function, performs fairly well on both the knuckle and the ear databases. Various t-norms due to Hamacher, Einstein product, Yager, Schweizer and Sklar, and Frank have been explored. This study aims at tapping the potential of t-norms for classification. The approach yields very good performance and is computationally fast.
The entropy neural network is built on the classifier by incorporating an evolutionary learning technique. Particle swarm optimization is utilized to learn the parameters so as to improve the machine learning. The experimental results confirm the improvement in classification accuracy obtained by optimally learning the parameters using PSO. The Frank t-norm shows 89.09 % on the knuckles database, 89 % on the Iris flower database and 87.2 % on the ear database. The experimental results suggest that the Frank t-norm outperforms all the other t-norms.
References
[1] A. Meyer-Bäse, V. Thümmler, "Local and Global Stability Analysis of an Unsupervised Competitive Neural Network", IEEE Trans. Neural Netw., vol. 19, no. 2, pp. 346-351, Feb. 2008.
[2] Shen Furao, Osamu Hasegawa, "An incremental network for on-line unsupervised classification and topology learning", Neural Netw., vol. 19, no. 1, pp. 90-106, Jan. 2006.
[3] Hongtao Lu, Shun-ichi Amari, "Global Exponential Stability of Multitime Scale Competitive Neural Networks With Nonsmooth Functions", IEEE Trans. Neural Netw., vol. 17, no. 5, pp. 1152-1164, Sep. 2006.
[4] Mohamad Awad, Kacem Chehdi, Ahmad Nasri, "Multicomponent Image Segmentation Using a Genetic Algorithm and Artificial Neural Network", IEEE Trans. Geoscience and Remote Sensing, vol. 4, no. 4, pp. 571-575, Oct. 2007.
[5] Dubravko Culibrk, Oge Marques, Daniel Socek, Hari Kalva, Borko Furht, "Neural Network Approach to Background Modeling for Video Object Segmentation", IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1614-1627, Nov. 2007.
[6] Dong-Chul Park, Oh-Hyun Kwon, Jio Chung, "Centroid Neural Network With a Divergence Measure for GPDF Data Clustering", IEEE Trans. Neural Netw., vol. 19, no. 6, pp. 948-957, June 2008.
[7] Dongyue Chen, Liming Zhang, Juyang (John) Weng, "Spatio-Temporal Adaptation in the Unsupervised Development of Networked Visual Neurons", IEEE Trans. Neural Netw., vol. 20, no. 6, pp. 992-1008, June 2009.
[8] Victor R. L. Shen, Yue-Shan Chang, Tony Tong-Ying Juang, "Supervised and Unsupervised Learning by Using Petri Nets", IEEE Trans. Syst., Man, Cybern. A, vol. 40, no. 2, pp. 363-375, Mar. 2010.
[9] T. Maul, S. Baba, "Unsupervised learning in second-order neural networks for motion analysis", Neurocomputing, vol. 74, no. 6, pp. 884-895, Feb. 2011.
[10] H. Tong, T. Liu, Q. Tong, "Unsupervised learning neural network with convex constraint: Structure and algorithm", Neurocomputing, vol. 71, no. 4-6, pp. 620-625, Jan. 2008.
[11] S. Ghosh, S. Patra, A. Ghosh, "An unsupervised context-sensitive change detection technique based on modified self-organizing feature map neural network", International Journal of Approximate Reasoning, vol. 50, no. 1, pp. 37-50, Jan. 2009.
[12] A. Ravishankar Rao, Guillermo A. Cecchi, Charles C. Peck, James R. Kozloski, "Unsupervised Segmentation With Dynamical Units", IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 168-182, Jan. 2008.
[13] D. Prokhorov, "A Convolutional Learning System for Object Classification in 3-D Lidar Data", IEEE Trans. Neural Netw., vol. 21, no. 5, pp. 858-863, 2010.
[14] P. Giguere, G. Dudek, "A Simple Tactile Probe for Surface Identification by Mobile Robots", IEEE Trans. Robotics, vol. 27, no. 3, pp. 534-544, June 2011.
[15] Alexander A. Frolov, Dusan Husek, Igor P. Muraviev, Pavel Yu. Polyakov, "Boolean Factor Analysis by Attractor Neural Network", IEEE Trans. Neural Netw., vol. 18, no. 3, pp. 698-707, May 2007.
[16] Alexander A. Frolov, Dusan Husek, Pavel Yu. Polyakov, "Recurrent-Neural-Network-Based Boolean Factor Analysis and Its Application to Word Clustering", IEEE Trans. Neural Netw., vol. 20, no. 7, pp. 1073-1086, July 2009.
[17] E. López-Rubio, E. J. Palomo, "Growing Hierarchical Probabilistic Self-Organizing Graphs", IEEE Trans. Neural Netw., vol. 22, no. 7, pp. 997-1008, July 2011.
[18] Sergio Decherchi, S. Ridella, R. Zunino, P. Gastaldo, D. Anguita, "Using Unsupervised Analysis to Constrain Generalization Bounds for Support Vector Classifiers", IEEE Trans. Neural Netw., vol. 21, no. 3, pp. 424-438, Mar. 2011.
[19] Chuan-Yu Chang, ChunHsi Li, Jia-Wei Chang, MuDer Jeng, "An unsupervised neural network approach for automatic semiconductor wafer defect inspection", Expert Systems with Applications, vol. 36, no. 1, pp. 950-958, Jan. 2009.
[20] C. Dimoulas, G. Kalliris, G. Papanikolaou, V. Petridis, A. Kalampakas, "Bowel-sound pattern analysis using wavelets and neural networks with application to long-term, unsupervised, gastrointestinal motility monitoring", Expert Systems with Applications, vol. 34, no. 1, pp. 26-41, Jan. 2008.
[21] A. Singh, C. Quek, S.-Y. Cho, "DCT-Yager FNN: A Novel Yager-Based Fuzzy Neural Network with the Discrete Clustering Technique", IEEE Trans. Neural Netw., vol. 19, no. 4, pp. 625-644, Apr. 2008.
[22] F. Ratle, G. Camps-Valls, J. Weston, "Semisupervised Neural Networks for Efficient Hyperspectral Image Classification", IEEE Trans. Geoscience and Remote Sensing, vol. 48, no. 5, pp. 2271-2282, May 2011.
[23] J. F. Martins, V. Fernão Pires, A. J. Pires, "Unsupervised Neural-Network-Based Algorithm for an On-Line Diagnosis of Three-Phase Induction Motor Stator Fault", IEEE Trans. Industrial Electronics, vol. 54, no. 1, pp. 259-264, Feb. 2007.
[24] S. Weng, H. Wersing, Jochen J. Steil, H. Ritter, "Learning Lateral Interactions for Feature Binding and Sensory Segmentation From Prototypic Basis Interactions", IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 843-862, July 2006.
[25] Xiaoyuan He, R. Kojima, O. Hasegawa, "Developmental Word Grounding Through a Growing Neural Network With a Humanoid Robot", IEEE Trans. Syst., Man, Cybern. B, vol. 37, no. 2, pp. 451-462, Apr. 2007.
[26] Xiaoyuan He, T. Ogura, A. Satou, O. Hasegawa, "Developmental Word Acquisition and Grammar Learning by Humanoid Robots Through a Self-Organizing Incremental Neural Network", IEEE Trans. Syst., Man, Cybern. B, vol. 37, no. 5, pp. 1357-1372, Oct. 2007.
[27] Hongtao Lu, S. Amari, "Global Exponential Stability of Multitime Scale Competitive Neural Networks With Nonsmooth Functions", IEEE Trans. Neural Netw., vol. 17, no. 5, pp. 1152-1164, Sep. 2006.
[28] S. Furao, T. Ogura, O. Hasegawa, "An enhanced self-organizing incremental neural network for online unsupervised learning", Neural Netw., vol. 20, no. 8, pp. 893-903, Oct. 2007.
[29] A. Forti, G. L. Foresti, "Growing Hierarchical Tree SOM: An unsupervised neural network with dynamic topology", Neural Netw., vol. 19, no. 10, pp. 1568-1580, Dec. 2006.
[30] M. Amini, R. Jalili, H. R. Shahriari, "RT-UNNID: A practical solution to real-time network-based intrusion detection using unsupervised neural networks", Computers and Security, vol. 25, no. 6, pp. 459-468, Sep. 2006.
[31] Susmita Ghosh, Lorenzo Bruzzone, Swarnajyoti Patra, Francesca Bovolo, Ashish Ghosh, "A Context-Sensitive Technique for Unsupervised Change Detection Based on Hopfield-Type Neural Networks", IEEE Trans. Geoscience and Remote Sensing, vol. 45, no. 3, pp. 778-789, Mar. 2007.
[32] B. Karthikeyan, S. Gopal, S. Venkatesh, "ART 2 - an unsupervised neural network for PD pattern recognition and classification", Expert Systems with Applications, vol. 31, no. 2, pp. 345-350, Aug. 2006.
[33] T. Kasakawa, H. Tabata, R. Onodera, H. Kojima, M. Kimura, H. Hara, S. Inoue, "An Artificial Neural Network at Device Level Using Simplified Architecture and Thin-Film Transistors", IEEE Trans. Electron Devices, vol. 57, no. 10, pp. 2744-2750, Oct. 2010.
[34] N. Tsimboukakis, G. Tambouratzis, "Word-Map Systems for Content-Based Document Classification", IEEE Trans. Syst., Man, Cybern. C, vol. 41, no. 5, pp. 662-673, Sep. 2011.
[35] www.ics.uci.edu
[36] http://www4.comp.polyu.edu.hk/~biometrics/2D_3D_Palmprint.htm
[37] http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Ear.htm
[38] M. Hanmandlu, F. Sayeed, "Information sets and information processing with an application to face recognition", communicated to IEEE Trans. Systems, Man, and Cybernetics, Part B.