# qml.MomentumOptimizer

class MomentumOptimizer(stepsize=0.01, momentum=0.9)[source]

Bases: pennylane.optimize.gradient_descent.GradientDescentOptimizer

Gradient-descent optimizer with momentum. The optimizer updates the parameters using an accumulator term that retains a fraction of the past gradients:

$x^{(t+1)} = x^{(t)} - a^{(t+1)}.$

The accumulator term $$a$$ is updated as follows:

$a^{(t+1)} = m a^{(t)} + \eta \nabla f(x^{(t)}),$

with user-defined parameters:

• $$\eta$$: the step size

• $$m$$: the momentum

Parameters
• stepsize (float) – user-defined hyperparameter $$\eta$$

• momentum (float) – user-defined hyperparameter $$m$$
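
A minimal usage sketch (the single-qubit circuit and the hyperparameter values below are illustrative, not part of the class definition):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    # A small single-qubit circuit whose PauliZ expectation value we minimize
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)
params = np.array([0.5, 0.5])

for _ in range(100):
    params = opt.step(cost, params)

print(cost(params))  # tends towards -1 as the rotation angles converge
```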

• apply_grad(grad, x) – Update the variables x to take a single optimization step.

• compute_grad(objective_fn, x[, grad_fn]) – Compute gradient of the objective_fn at the point x.

• reset() – Reset optimizer by erasing memory of past steps.

• step(objective_fn, x[, grad_fn]) – Update x with one step of the optimizer.

• update_stepsize(stepsize) – Update the initialized stepsize value $$\eta$$.
apply_grad(grad, x)[source]

Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

Parameters
• grad (array) – The gradient of the objective function at point $$x^{(t)}$$: $$\nabla f(x^{(t)})$$

• x (array) – the current value of the variables $$x^{(t)}$$

Returns

the new values $$x^{(t+1)}$$

Return type

array
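
For intuition, here is a plain-NumPy sketch of the momentum update that apply_grad performs on flattened arrays (an illustration of the formulas above, not the library's internal implementation):

```python
import numpy as np

def momentum_update(x, grad, accumulator, stepsize=0.01, momentum=0.9):
    """One momentum step: a(t+1) = m*a(t) + eta*grad, then x(t+1) = x(t) - a(t+1)."""
    accumulator = momentum * accumulator + stepsize * grad
    return x - accumulator, accumulator

x = np.array([0.5, 0.5])
a = np.zeros_like(x)        # the accumulator starts at zero
g = np.array([0.1, -0.2])   # stand-in for the gradient at x
x, a = momentum_update(x, g, a)
```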

static compute_grad(objective_fn, x, grad_fn=None)

Compute gradient of the objective_fn at the point x.

Parameters
• objective_fn (function) – the objective function for optimization

• x (array) – NumPy array containing the current values of the variables to be updated

• grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.

Returns

NumPy array containing the gradient $$\nabla f(x^{(t)})$$

Return type

array
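
A brief sketch of calling compute_grad directly, using the signature documented here and reusing cost and params from the class-level example above:

```python
# Automatic gradient of the objective at the current point
grad = opt.compute_grad(cost, params)

# Equivalently, supply an explicit gradient function
grad = opt.compute_grad(cost, params, grad_fn=qml.grad(cost))
```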

reset()[source]

Reset optimizer by erasing memory of past steps.
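
Because the optimizer keeps the accumulator $$a$$ between calls to step, it can be reset before reusing the same instance on an independent optimization run, for example (reusing the cost function from above):

```python
opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)

params = np.array([0.5, 0.5])
for _ in range(50):
    params = opt.step(cost, params)

opt.reset()  # erase the accumulated momentum

new_params = np.array([1.0, -0.3])
for _ in range(50):
    new_params = opt.step(cost, new_params)
```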

step(objective_fn, x, grad_fn=None)

Update x with one step of the optimizer.

Parameters
• objective_fn (function) – the objective function for optimization

• x (array) – NumPy array containing the current values of the variables to be updated

• grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.

Returns

the new variable values $$x^{(t+1)}$$

Return type

array
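
For example, an explicit gradient function can be passed instead of relying on automatic differentiation (cost and params as in the class-level sketch above):

```python
grad_fn = qml.grad(cost)
params = opt.step(cost, params, grad_fn=grad_fn)
```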

update_stepsize(stepsize)

Update the initialized stepsize value $$\eta$$.

This allows for techniques such as learning rate scheduling.

Parameters

stepsize (float) – the user-defined hyperparameter $$\eta$$
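
A minimal learning-rate scheduling sketch, reusing cost and params from the class-level example (the decay factor and schedule below are illustrative):

```python
opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)
stepsize = 0.1

for i in range(100):
    params = opt.step(cost, params)
    if (i + 1) % 20 == 0:       # halve the step size every 20 iterations
        stepsize *= 0.5
        opt.update_stepsize(stepsize)
```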