The effect of adaptive gain and adaptive momentum in improving training time of gradient descent back propagation algorithm on classification problems

Abdul Hamid, Norhamreeza and Mohd Nawi, Nazri and Ghazali, Rozaida (2011) The effect of adaptive gain and adaptive momentum in improving training time of gradient descent back propagation algorithm on classification problems. International Journal on Advanced Science, Engineering and Information Technology, 1 (2). pp. 178-184.

Full text not available from this repository.

Abstract

The back propagation algorithm has been successfully applied to a wide range of practical problems. Because it uses a gradient descent method, it has some limitations, namely slow convergence and easy entrapment in local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function, and the value of the gain in the activation function. Previous researchers demonstrated that in the feed forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the current working back propagation algorithm, the Gradient Descent Method with Adaptive Gain, by changing the momentum coefficient adaptively for each node. The influence of the adaptive momentum together with the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed forward neural networks are assessed. A physical interpretation of the relationship between the momentum value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm, compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain, was verified by means of simulation on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer data set, 6.6 on the Mushroom problem, and 36% better on the Soybean data set. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back propagation algorithm.
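The abstract describes two ingredients: a gain parameter that scales the slope of the activation function, and a momentum term in the weight update. The paper's exact per-node adaptation rules are not given in the abstract, so the sketch below only illustrates the general idea on a toy single-unit problem: a gain-scaled sigmoid whose gain is itself adjusted by gradient descent (a hypothetical rule, not necessarily the authors'), plus a standard momentum update. All variable names and the gain/momentum settings here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # The gain parameter scales the slope of the sigmoid activation:
    # a larger gain gives a steeper transition around zero.
    return 1.0 / (1.0 + np.exp(-gain * x))

# Toy data: a single sigmoid unit learning logical AND (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # weights
b = 0.0                             # bias
gain = 1.0                          # adaptive gain (hypothetical update rule below)
lr = 0.5                            # learning rate
momentum = 0.5                      # momentum coefficient (fixed here for simplicity)
vel_w = np.zeros(2)                 # weight velocity for the momentum term
vel_b = 0.0

for epoch in range(2000):
    net = X @ w + b
    out = sigmoid(net, gain)
    err = out - y
    # Back propagated delta: the derivative of sigmoid(gain * net)
    # with respect to net carries a factor of gain.
    delta = err * gain * out * (1.0 - out)
    grad_w = X.T @ delta / len(X)
    grad_b = delta.mean()
    # Momentum update: new velocity blends the previous step direction
    # with the current gradient, smoothing the descent trajectory.
    vel_w = momentum * vel_w - lr * grad_w
    vel_b = momentum * vel_b - lr * grad_b
    w += vel_w
    b += vel_b
    # Hypothetical adaptive-gain step: gradient of the squared error
    # with respect to gain, clipped to keep the slope in a sane range.
    grad_gain = np.mean(err * out * (1.0 - out) * net)
    gain = float(np.clip(gain - lr * grad_gain, 0.1, 5.0))

pred = sigmoid(X @ w + b, gain)
```

After training, `pred` should be near 1 only for the input `[1, 1]`. This is only a one-unit illustration; the paper applies gain and (per-node) adaptive momentum inside a full multilayer back propagation pass.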

Item Type:Article
Uncontrolled Keywords:back propagation algorithm; gain; activation function; adaptive momentum
Subjects:Q Science > QA Mathematics > QA75 Calculating machines > QA75.5 Electronic computers. Computer science
Divisions:Faculty of Computer Science and Information Technology > Department of Software Engineering
ID Code:2982
Deposited By:Normajihan Abd. Rahman
Deposited On:07 Feb 2013 18:35
Last Modified:22 Jan 2015 08:33
