Module: pennylane

class NesterovMomentumOptimizer(stepsize=0.01, momentum=0.9)

Gradient-descent optimizer with Nesterov momentum.

Nesterov momentum works like the Momentum optimizer, but evaluates the gradient of the objective function at the current input shifted by the momentum term:

\[a^{(t+1)} = m a^{(t)} + \eta \nabla f(x^{(t)} - m a^{(t)}).\]

The parameters are then updated as \(x^{(t+1)} = x^{(t)} - a^{(t+1)}\).
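The accumulation rule above can be sketched in plain Python. This is a minimal illustration of the update, not PennyLane's implementation; the quadratic objective and its gradient are made-up examples for the demo:

```python
def grad_f(x):
    # Gradient of the toy objective f(x) = x^2, used purely for illustration.
    return 2.0 * x

def nesterov_step(x, a, stepsize=0.01, momentum=0.9):
    # a^{(t+1)} = m * a^{(t)} + eta * grad f(x^{(t)} - m * a^{(t)}):
    # the gradient is evaluated at the input shifted by the momentum term.
    a_new = momentum * a + stepsize * grad_f(x - momentum * a)
    # The parameters then move against the accumulated gradient.
    x_new = x - a_new
    return x_new, a_new

x, a = 5.0, 0.0
for _ in range(100):
    x, a = nesterov_step(x, a)
# x approaches the minimum of f at 0
```

Because the gradient is taken at the look-ahead point \(x - m a\) rather than at \(x\), the optimizer "corrects" the momentum before it overshoots, which is the distinguishing feature of Nesterov momentum.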

The user-defined parameters are:

  • \(\eta\): the step size
  • \(m\): the momentum

Parameters:

  • stepsize (float) – user-defined hyperparameter \(\eta\)
  • momentum (float) – user-defined hyperparameter \(m\)
compute_grad(objective_fn, x, grad_fn=None)

Compute the gradient of the objective_fn at the shifted point \((x - m\times\text{accumulation})\).

Parameters:

  • objective_fn (function) – the objective function for optimization
  • x (array) – NumPy array containing the current values of the variables to be updated
  • grad_fn (function) – optional gradient function of the objective function with respect to the variables x; if None, the gradient function is computed automatically

Returns:

NumPy array containing the gradient \(\nabla f(x^{(t)} - m a^{(t)})\)

Return type: array
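To make the shifted-point evaluation concrete, here is a minimal sketch of the behavior compute_grad describes. The quadratic objective and the central finite-difference gradient are illustrative stand-ins (PennyLane computes gradients via automatic differentiation), and the helper name is hypothetical:

```python
def objective(x):
    # Toy objective f(x) = (x - 3)^2, purely illustrative.
    return (x - 3.0) ** 2

def compute_grad_sketch(objective_fn, x, accumulation, momentum=0.9, eps=1e-6):
    # Evaluate the gradient at the shifted point x - m * accumulation,
    # here via a central finite difference instead of autodiff.
    shifted = x - momentum * accumulation
    return (objective_fn(shifted + eps) - objective_fn(shifted - eps)) / (2 * eps)

g = compute_grad_sketch(objective, x=5.0, accumulation=1.0)
# shifted point is 5.0 - 0.9 * 1.0 = 4.1; analytic gradient there is 2 * (4.1 - 3) = 2.2
```

Note that the gradient is not evaluated at x itself: with a nonzero accumulation, the result differs from \(\nabla f(x)\), which is exactly what distinguishes this method from plain momentum.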