Title |
Elliptical Probabilistic Neural Networks |
DOI |
10.6302/JITA.201106_5(2).0005 |
Authors |
I-Cheng Yeh;Kuan-Cheng Lin;Kuan-Chieh Huang;Xin-Ying Zhang;Chong Wu |
Keywords |
probabilistic neural network; variable importance; classification; learning rule |
Journal |
Journal of Information Technology and Applications(資訊科技與應用期刊) |
Volume/Issue (Publication Date) |
Vol. 5, No. 2 (2011/06/01) |
Pages |
92 - 102 |
Language |
English |
English Abstract |
The traditional probabilistic neural network (PNN) assumes that all input variables have equal status, which makes the contours of its probability density function hyper-spherical. In this study, variable weights are added to the probability density function of the Elliptical Probabilistic Neural Network (EPNN), so that the kernel function can be adjusted into an arbitrary hyper-ellipse to match the various shapes of classification boundaries. EPNN has three kinds of network parameters: variable weights representing the importance of the input variables, core-width reciprocals representing the effective range of the data, and data weights representing data reliability. This study uses the principle of minimizing the error sum of squares to derive supervised learning rules for all of these parameters within a unified mathematical framework. The kernel shape parameters of EPNN can be adjusted by the supervised learning rules and reflect the importance of the input variables to the classification; this study therefore further derives the relationship between the kernel shape parameters and an importance index. The results show that (1) EPNN is much more accurate than PNN and slightly less accurate than MLP on the artificial classification functions; (2) EPNN is more accurate than both MLP and PNN on the actual classification applications; (3) the importance index can indeed measure the importance of the input variables. |
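The elliptical kernel described in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's code: the names `epnn_predict`, `var_weights`, and `core_width` are assumptions, and the variable weights are fixed rather than learned by the paper's supervised rules. It only shows how per-variable weights turn the spherical Gaussian kernel of a standard PNN into a hyper-elliptical one.

```python
import numpy as np

def epnn_predict(X_train, y_train, X_test, var_weights, core_width=1.0):
    """Hypothetical sketch of an elliptical-kernel PNN classifier.

    Each input variable j gets a weight w_j, so the Gaussian kernel's
    equal-density contours become hyper-ellipses instead of hyper-spheres.
    var_weights is assumed fixed here; in the paper it is learned.
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # Weighted squared distance from x to every training pattern:
        # large w_j stretches the kernel along variable j (more influence).
        d2 = np.sum((var_weights * (X_train - x)) ** 2, axis=1)
        k = np.exp(-core_width * d2)  # elliptical Gaussian kernel values
        # Class score = mean kernel activation over that class's patterns
        # (the usual Parzen-window density estimate of a PNN).
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

With a small weight on an irrelevant variable (e.g. `var_weights = [1.0, 0.1]`), the kernel is nearly flat along that axis, so the classification is driven by the informative variable, which is how the learned variable weights double as an importance measure.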
Subject Classification |
Basic and Applied Sciences > Information Science |