# qml.RMSPropOptimizer

class RMSPropOptimizer(stepsize=0.01, decay=0.9, eps=1e-08)

Bases: pennylane.optimize.adagrad.AdagradOptimizer

Root mean squared propagation optimizer.

The root mean square propagation optimizer is a modified Adagrad optimizer, with a decay in the learning-rate adaptation.

Extensions of the Adagrad optimization method generally start the sum $$a$$ over past gradients in the denominator of the learning rate at a finite $$t'$$ with $$0 < t' < t$$, or decay past gradients to avoid an ever-decreasing learning rate.

Root Mean Square propagation is such an adaptation, where

$a_i^{(t+1)} = \gamma a_i^{(t)} + (1-\gamma) (\partial_{x_i} f(x^{(t)}))^2.$
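To make the update concrete, the following is a minimal NumPy sketch of a single RMSProp step. The helper `rmsprop_update` is hypothetical (not part of PennyLane), and the placement of $$\epsilon$$ inside the square root follows the Adagrad convention referenced below.

```python
import numpy as np

def rmsprop_update(x, grad, a, stepsize=0.01, decay=0.9, eps=1e-8):
    """One hypothetical RMSProp step: decayed accumulation, then an Adagrad-style update."""
    a = decay * a + (1 - decay) * grad**2        # decayed running average of squared gradients
    x = x - stepsize * grad / np.sqrt(a + eps)   # per-parameter adaptive step
    return x, a
```

Because $$a$$ decays past gradients geometrically, the effective learning rate does not shrink monotonically as it does in plain Adagrad.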
Parameters
• stepsize (float) – the user-defined hyperparameter $$\eta$$ used in the Adagrad optimization

• decay (float) – the learning rate decay $$\gamma$$

• eps (float) – offset $$\epsilon$$ added for numerical stability (see Adagrad)
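For example, the optimizer can be instantiated with explicit hyperparameters (the values shown below are the defaults):

```python
import pennylane as qml

opt = qml.RMSPropOptimizer(stepsize=0.01, decay=0.9, eps=1e-08)
```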

| Method | Description |
| --- | --- |
| apply_grad(grad, x) | Update the variables x to take a single optimization step. |
| compute_grad(objective_fn, x[, grad_fn]) | Compute gradient of the objective_fn at the point x. |
| reset() | Reset optimizer by erasing memory of past steps. |
| step(objective_fn, x[, grad_fn]) | Update x with one step of the optimizer. |
| update_stepsize(stepsize) | Update the initialized stepsize value $$\eta$$. |
apply_grad(grad, x)

Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

Parameters
• grad (array) – The gradient of the objective function at point $$x^{(t)}$$: $$\nabla f(x^{(t)})$$

• x (array) – the current value of the variables $$x^{(t)}$$

Returns

the new values $$x^{(t+1)}$$

Return type

array
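As a sketch, the gradient can be computed separately and then applied; the quadratic cost here is purely illustrative:

```python
import pennylane as qml
from pennylane import numpy as np

def cost(x):
    return np.sum(x**2)

opt = qml.RMSPropOptimizer(stepsize=0.1)
x = np.array([0.5, -0.3], requires_grad=True)

grad = qml.grad(cost)(x)     # gradient at the current point
x = opt.apply_grad(grad, x)  # one RMSProp update using that gradient
```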

static compute_grad(objective_fn, x, grad_fn=None)

Compute gradient of the objective_fn at the point x.

Parameters
• objective_fn (function) – the objective function for optimization

• x (array) – NumPy array containing the current values of the variables to be updated

• grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.

Returns

NumPy array containing the gradient $$\nabla f(x^{(t)})$$

Return type

array
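A sketch following the static signature documented above (the illustrative cost function is an assumption, and newer releases may differ in signature):

```python
import pennylane as qml
from pennylane import numpy as np

def cost(x):
    return np.sum(np.sin(x))

x = np.array([0.1, 0.2], requires_grad=True)

# No grad_fn is supplied, so the gradient is computed automatically.
grad = qml.RMSPropOptimizer.compute_grad(cost, x)
```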

reset()

Reset optimizer by erasing memory of past steps.
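This matters when one optimizer instance is reused: the accumulated squared gradients from a previous run would otherwise bias the adaptive step sizes of the next. A minimal sketch, with an illustrative cost:

```python
import pennylane as qml
from pennylane import numpy as np

def cost(x):
    return np.sum(x**2)

opt = qml.RMSPropOptimizer(stepsize=0.1)

x = np.array([1.0, -1.0], requires_grad=True)
for _ in range(50):
    x = opt.step(cost, x)

# Start an unrelated optimization from a clean slate:
# the accumulation from the first run is discarded.
opt.reset()
```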

step(objective_fn, x, grad_fn=None)

Update x with one step of the optimizer.

Parameters
• objective_fn (function) – the objective function for optimization

• x (array) – NumPy array containing the current values of the variables to be updated

• grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.

Returns

the new variable values $$x^{(t+1)}$$

Return type

array
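A typical optimization loop calls step repeatedly, feeding each result back in as the new variable values; the single-qubit circuit below is illustrative:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.RMSPropOptimizer(stepsize=0.1)
params = np.array([0.1, 0.2], requires_grad=True)

for _ in range(100):
    params = opt.step(cost, params)  # gradient computed automatically
```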

update_stepsize(stepsize)

Update the initialized stepsize value $$\eta$$.

This allows for techniques such as learning rate scheduling.

Parameters

stepsize (float) – the user-defined hyperparameter $$\eta$$
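For instance, a decaying schedule can be applied by updating the stepsize before each step; the exponential decay and toy cost below are assumptions for illustration:

```python
import pennylane as qml
from pennylane import numpy as np

def cost(x):
    return np.sum(x**2)

opt = qml.RMSPropOptimizer(stepsize=0.1)
x = np.array([1.0, -0.5], requires_grad=True)

for t in range(100):
    opt.update_stepsize(0.1 * 0.99**t)  # exponentially decaying learning rate
    x = opt.step(cost, x)
```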