qml.NesterovMomentumOptimizer
class NesterovMomentumOptimizer(stepsize=0.01, momentum=0.9)
Bases: pennylane.optimize.momentum.MomentumOptimizer
Gradient-descent optimizer with Nesterov momentum.
Nesterov Momentum works like the Momentum optimizer, but shifts the current input by the momentum term when computing the gradient of the objective function:

\[a^{(t+1)} = m a^{(t)} + \eta \nabla f(x^{(t)} - m a^{(t)}).\]

The user-defined parameters are:
\(\eta\): the step size
\(m\): the momentum
- Parameters
stepsize (float) – user-defined hyperparameter \(\eta\)
momentum (float) – user-defined hyperparameter \(m\)
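For concreteness, the sketch below implements one step of this rule in plain NumPy. It assumes the standard momentum-style parameter update \(x^{(t+1)} = x^{(t)} - a^{(t+1)}\); the nesterov_step helper and the toy quadratic objective are illustrative, not part of the PennyLane API.

```python
import numpy as np

def nesterov_step(grad_fn, x, accumulation, stepsize=0.01, momentum=0.9):
    # Evaluate the gradient at the look-ahead point x - m * a.
    shifted = x - momentum * accumulation
    # a_{t+1} = m * a_t + eta * grad f(x_t - m * a_t)
    accumulation = momentum * accumulation + stepsize * grad_fn(shifted)
    # Assumed standard momentum-style update: x_{t+1} = x_t - a_{t+1}
    return x - accumulation, accumulation

# Toy objective f(x) = x^2 with gradient 2x.
x, a = np.array(5.0), np.array(0.0)
for _ in range(100):
    x, a = nesterov_step(lambda v: 2 * v, x, a, stepsize=0.1)
print(x)  # close to 0 after convergence
```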
Methods

apply_grad(grad, x): Update the variables x to take a single optimization step.
compute_grad(objective_fn, x[, grad_fn]): Compute the gradient of the objective_fn at the shifted point \((x - m\times\text{accumulation})\) and return it along with the objective function forward pass (if available).
reset(): Reset optimizer by erasing memory of past steps.
step(objective_fn, x[, grad_fn]): Update x with one step of the optimizer.
step_and_cost(objective_fn, x[, grad_fn]): Update x with one step of the optimizer and return the corresponding objective function value prior to the step.
update_stepsize(stepsize): Update the initialized stepsize value \(\eta\).
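As a usage illustration, the following sketch minimizes the expectation value of a small variational circuit with this optimizer; the circuit, hyperparameter values, and iteration count are arbitrary choices for demonstration.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    # A toy one-qubit variational circuit.
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.NesterovMomentumOptimizer(stepsize=0.1, momentum=0.9)
params = np.array([0.5, 0.3], requires_grad=True)

for _ in range(100):
    params = opt.step(cost, params)
```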
apply_grad(grad, x)
Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.
- Parameters
grad (array) – the gradient of the objective function at point \(x^{(t)}\): \(\nabla f(x^{(t)})\)
x (array) – the current value of the variables \(x^{(t)}\)
- Returns
the new values \(x^{(t+1)}\)
- Return type
array
compute_grad(objective_fn, x, grad_fn=None)
Compute the gradient of the objective_fn at the shifted point \((x - m\times\text{accumulation})\) and return it along with the objective function forward pass (if available).
- Parameters
objective_fn (function) – the objective function for optimization
x (array) – NumPy array containing the current values of the variables to be updated
grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
The NumPy array containing the gradient \(\nabla f(x^{(t)})\) and the objective function output. If grad_fn is provided, the objective function will not be evaluated and instead None will be returned.
- Return type
tuple
reset()
Reset optimizer by erasing memory of past steps.
step(objective_fn, x, grad_fn=None)
Update x with one step of the optimizer.
- Parameters
objective_fn (function) – the objective function for optimization
x (array) – NumPy array containing the current values of the variables to be updated
grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
the new variable values \(x^{(t+1)}\)
- Return type
array
step_and_cost(objective_fn, x, grad_fn=None)
Update x with one step of the optimizer and return the corresponding objective function value prior to the step.
- Parameters
objective_fn (function) – the objective function for optimization
x (array) – NumPy array containing the current values of the variables to be updated
grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
the new variable values \(x^{(t+1)}\) and the objective function output prior to the step
- Return type
tuple
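A brief sketch of how the returned pair can be used for convergence monitoring; the circuit and printing cadence are arbitrary choices for demonstration.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.NesterovMomentumOptimizer(stepsize=0.1, momentum=0.9)
params = np.array([0.5], requires_grad=True)

# step_and_cost returns the updated parameters together with the
# objective value evaluated *before* the step.
for it in range(50):
    params, prev_cost = opt.step_and_cost(cost, params)
    if it % 10 == 0:
        print(f"iteration {it}: cost before step = {float(prev_cost):.6f}")
```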
update_stepsize(stepsize)
Update the initialized stepsize value \(\eta\).
This allows for techniques such as learning rate scheduling.
- Parameters
stepsize (float) – the user-defined hyperparameter \(\eta\)
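To illustrate the learning-rate-scheduling use case, here is a sketch with a hypothetical exponential decay schedule (the 0.95 decay factor and circuit are arbitrary choices for demonstration).

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.NesterovMomentumOptimizer(stepsize=0.1, momentum=0.9)
params = np.array([0.5], requires_grad=True)

# Hypothetical schedule: shrink eta by 5% after every step.
for it in range(50):
    params = opt.step(cost, params)
    opt.update_stepsize(0.1 * 0.95 ** (it + 1))
```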