University of Surrey


Elman Backpropagation as Reinforcement for Simple Recurrent Networks

Grüning, A. (2007) Elman Backpropagation as Reinforcement for Simple Recurrent Networks. Neural Computation, 19 (11), pp. 3108–3131.


Simple recurrent networks (SRNs) in symbolic time-series prediction (e.g., language processing models) are frequently trained with gradient-descent-based learning algorithms, notably with variants of backpropagation (BP). A major drawback for the cognitive plausibility of BP is that it is a supervised scheme in which a teacher has to provide a fully specified target answer. Yet agents in natural environments often receive only summary feedback about the degree of success or failure, a view adopted in reinforcement learning schemes. In this work, we show that for SRNs in prediction tasks for which there is a probability interpretation of the network's output vector, Elman BP can be reimplemented as a reinforcement learning scheme for which the expected weight updates agree with those of traditional Elman BP. Network simulations on formal languages corroborate this result and show that the learning behaviors of Elman BP and its reinforcement variant are also very similar in online learning tasks.
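The abstract's central claim rests on an expectation argument: a reinforcement update, averaged over symbols sampled from the network's output distribution, reproduces a gradient-based update. The sketch below is not the paper's construction (whose expected updates agree with Elman BP exactly); it is only a minimal toy illustration of this kind of argument, with an assumed 4-symbol softmax output. It checks numerically that a score-function (REINFORCE-style) update with a match/no-match reward has a mean proportional, by the factor y_t, to the negative supervised cross-entropy gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: softmax output over 4 symbols, target symbol t.
y = np.array([0.1, 0.2, 0.3, 0.4])   # network's predicted next-symbol distribution
t = 2                                 # actual next symbol

# Supervised cross-entropy gradient w.r.t. the pre-softmax activations:
# dL/dz_k = y_k - 1[k = t].
supervised = y - np.eye(4)[t]

# Reinforcement variant: sample a symbol a_hat from y, give reward 1 iff
# it matches the target, and apply the score-function update
# (1[k = a_hat] - y_k) * r.  Analytically its expectation is
# y_t * (1[k = t] - y_k), i.e. -y_t times the supervised gradient.
n = 200_000
samples = rng.choice(4, size=n, p=y)
rewards = (samples == t).astype(float)
updates = (np.eye(4)[samples] - y) * rewards[:, None]
mc_mean = updates.mean(axis=0)

expected = y[t] * (np.eye(4)[t] - y)
print(np.allclose(mc_mean, expected, atol=1e-2))   # Monte Carlo matches analytic mean
print(np.allclose(expected, -y[t] * supervised))   # mean is proportional to -gradient
```

In this toy the proportionality factor y_t varies per example; the paper's scheme goes further and achieves expected updates that agree with Elman BP without such a scaling.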

Item Type: Article
Divisions : Faculty of Engineering and Physical Sciences > Computing Science
Authors :
Grüning, A
Date : 20 September 2007
DOI : 10.1162/neco.2007.19.11.3108
Uncontrolled Keywords : MODEL
Related URLs :
Depositing User : Symplectic Elements
Date Deposited : 31 Jan 2012 13:58
Last Modified : 31 Oct 2017 14:16





© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800