# RMSPropOptimizer

Module: pennylane

class RMSPropOptimizer(stepsize=0.01, decay=0.9, eps=1e-08)[source]

Root mean squared propagation optimizer.

The root mean square propagation optimizer is a modified Adagrad optimizer in which the influence of past gradients on the learning-rate adaptation decays over time.

Extensions of the Adagrad optimization method generally start the sum $$a$$ over past gradients in the denominator of the learning rate at a finite $$t'$$ with $$0 < t' < t$$, or decay past gradients to avoid an ever-decreasing learning rate.

Root Mean Square propagation is such an adaptation, where

$a_i^{(t+1)} = \gamma a_i^{(t)} + (1-\gamma) (\partial_{x_i} f(x^{(t)}))^2.$
Parameters:

- **stepsize** (*float*) – the user-defined hyperparameter $$\eta$$ used in the Adagrad optimization
- **decay** (*float*) – the learning rate decay $$\gamma$$
- **eps** (*float*) – offset $$\epsilon$$ added for numerical stability (see Adagrad)
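The accumulator update above, combined with the standard RMSProp parameter step $$x_i^{(t+1)} = x_i^{(t)} - \eta \, \partial_{x_i} f(x^{(t)}) / \sqrt{a_i^{(t+1)} + \epsilon}$$, can be sketched in plain NumPy. This is an illustrative reimplementation with the same default hyperparameters, not PennyLane's internal code:

```python
import numpy as np

def rmsprop_step(x, grad, a, stepsize=0.01, decay=0.9, eps=1e-8):
    """One RMSProp step: decay the squared-gradient accumulator a,
    then scale the gradient step by the inverse root of a."""
    a = decay * a + (1 - decay) * grad**2          # a^{(t+1)}
    x = x - stepsize * grad / np.sqrt(a + eps)     # x^{(t+1)}
    return x, a

# Minimize f(x) = x^2 starting from x = 1.0.
x, a = np.array([1.0]), np.zeros(1)
for _ in range(300):
    grad = 2 * x                                   # gradient of x^2
    x, a = rmsprop_step(x, grad, a)
```

Because the accumulator tracks a decayed average rather than a running sum, the effective step size stays near `stepsize` instead of shrinking toward zero as it does in plain Adagrad.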
apply_grad(grad, x)[source]

Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

Parameters:

- **grad** (*array*) – the gradient of the objective function at point $$x^{(t)}$$: $$\nabla f(x^{(t)})$$
- **x** (*array*) – the current value of the variables $$x^{(t)}$$

Returns: the new values $$x^{(t+1)}$$

Return type: array
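The flatten/apply/unflatten pattern described above can be sketched as follows for parameters given as a list of arrays. This is a simplified illustration of the idea, not PennyLane's actual `apply_grad` implementation:

```python
import numpy as np

def apply_grad_sketch(grad, x, a, stepsize=0.01, decay=0.9, eps=1e-8):
    """Sketch of apply_grad: flatten nested parameters, apply the
    RMSProp rule elementwise, then restore the original shapes."""
    flat_x = np.concatenate([np.ravel(part) for part in x])
    flat_g = np.concatenate([np.ravel(part) for part in grad])
    a = decay * a + (1 - decay) * flat_g**2        # decayed accumulator
    flat_new = flat_x - stepsize * flat_g / np.sqrt(a + eps)
    # Unflatten back to the shapes of the input parameters.
    out, i = [], 0
    for part in x:
        n = np.size(part)
        out.append(flat_new[i:i + n].reshape(np.shape(part)))
        i += n
    return out, a

# Nested parameters of different shapes are handled uniformly.
params = [np.array([0.5, 0.1]), np.array([[0.2]])]
grads = [2 * p for p in params]                    # gradient of sum of squares
accum = np.zeros(3)
new_params, accum = apply_grad_sketch(grads, params, accum)
```

Flattening lets the elementwise update rule ignore the nesting entirely; the shapes are reimposed only at the end, which is why nested iterables survive the optimization step unchanged in structure.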