MomentumOptimizer

Module: pennylane

class MomentumOptimizer(stepsize=0.01, momentum=0.9)[source]

Gradient-descent optimizer with momentum.

The momentum optimizer adds a “momentum” term to gradient descent that takes past gradients into account:

\[x^{(t+1)} = x^{(t)} - a^{(t+1)}.\]

The accumulator term \(a\) is updated as follows:

\[a^{(t+1)} = m a^{(t)} + \eta \nabla f(x^{(t)}),\]

with user-defined parameters:

  • \(\eta\): the step size
  • \(m\): the momentum
Parameters:
  • stepsize (float) – user-defined hyperparameter \(\eta\)
  • momentum (float) – user-defined hyperparameter \(m\)
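
Example

A minimal usage sketch, assuming a simple single-qubit cost function on the default.qubit device (the circuit, step size, and iteration count are illustrative; step is the standard stepping method shared by PennyLane's gradient-descent optimizers):

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def cost(params):
        # Illustrative one-qubit circuit whose expectation value we minimize
        qml.RX(params[0], wires=0)
        qml.RY(params[1], wires=0)
        return qml.expval(qml.PauliZ(0))

    opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)
    params = np.array([0.1, 0.2], requires_grad=True)

    for _ in range(100):
        params = opt.step(cost, params)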
apply_grad(grad, x)[source]

Update the variables x to take a single optimization step. The inputs are flattened and unflattened so that nested iterables can be used as the optimization parameters. A worked sketch of this update follows the return type below.

Parameters:
  • grad (array) – The gradient of the objective function at point \(x^{(t)}\): \(\nabla f(x^{(t)})\)
  • x (array) – the current value of the variables \(x^{(t)}\)
Returns:
  the new values \(x^{(t+1)}\)

Return type:
  array
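
For concreteness, here is the arithmetic of a single step written out in plain NumPy, mirroring the equations above (the numbers are hypothetical, and the accumulator starts at zero before the first step):

    import numpy as np

    eta, m = 0.1, 0.9               # stepsize and momentum
    x = np.array([1.0, 2.0])        # current parameters x^(t)
    grad = np.array([0.5, -0.5])    # gradient of f at x^(t)
    a = np.zeros_like(x)            # accumulator a^(t), zero before the first step

    a = m * a + eta * grad          # a^(t+1) = m a^(t) + eta * grad f(x^(t))
    x = x - a                       # x^(t+1) = x^(t) - a^(t+1)
    # x is now array([0.95, 2.05])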

reset()[source]

Reset the optimizer by erasing the memory of past steps.
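
For example, when reusing the same optimizer instance for an unrelated optimization run, calling reset() clears the accumulator so the new run does not start with leftover momentum (cost_a, cost_b, and the parameter arrays below are hypothetical):

    opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)
    params_a = opt.step(cost_a, params_a)   # builds up momentum while optimizing cost_a
    opt.reset()                             # erase the accumulated momentum
    params_b = opt.step(cost_b, params_b)   # cost_b starts from a zero accumulator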