qml.MomentumOptimizer

class MomentumOptimizer(stepsize=0.01, momentum=0.9)
Bases: pennylane.optimize.gradient_descent.GradientDescentOptimizer
Gradient-descent optimizer with momentum.
The momentum optimizer adds a "momentum" term to gradient descent that accounts for past gradients:

\[x^{(t+1)} = x^{(t)} - a^{(t+1)}.\]

The accumulator term \(a\) is updated as follows:

\[a^{(t+1)} = m a^{(t)} + \eta \nabla f(x^{(t)}),\]

with user-defined hyperparameters:

\(\eta\): the step size
\(m\): the momentum
- Parameters
stepsize (float) – user-defined hyperparameter \(\eta\)
momentum (float) – user-defined hyperparameter \(m\)
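A minimal usage sketch, assuming an illustrative single-qubit circuit and device (the circuit, device, and initial parameters below are not part of this class's API):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(params):
    # toy variational circuit whose expectation value we minimize
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.MomentumOptimizer(stepsize=0.1, momentum=0.9)
params = np.array([0.5, 0.2], requires_grad=True)

for _ in range(100):
    params = opt.step(cost, params)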
Methods

apply_grad(grad, x): Update the variables x to take a single optimization step.
compute_grad(objective_fn, x[, grad_fn]): Compute gradient of the objective_fn at the point x and return it along with the objective function forward pass (if available).
reset(): Reset optimizer by erasing memory of past steps.
step(objective_fn, x[, grad_fn]): Update x with one step of the optimizer.
step_and_cost(objective_fn, x[, grad_fn]): Update x with one step of the optimizer and return the corresponding objective function value prior to the step.
update_stepsize(stepsize): Update the initialized stepsize value \(\eta\).
apply_grad(grad, x)

Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

- Parameters
  grad (array) – the gradient of the objective function at point \(x^{(t)}\): \(\nabla f(x^{(t)})\)
  x (array) – the current value of the variables \(x^{(t)}\)
- Returns
  the new values \(x^{(t+1)}\)
- Return type
  array
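For intuition, here is a standalone sketch of the update that apply_grad performs, following the formulas above; the function name is hypothetical, and the flattening/unflattening of nested iterables is omitted:

import numpy as np

def momentum_apply_grad(grad, x, accumulation, stepsize=0.01, momentum=0.9):
    # a^(t+1) = m * a^(t) + eta * grad   (accumulator update)
    accumulation = momentum * accumulation + stepsize * grad
    # x^(t+1) = x^(t) - a^(t+1)          (parameter update)
    return x - accumulation, accumulation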
static compute_grad(objective_fn, x, grad_fn=None)

Compute gradient of the objective_fn at the point x and return it along with the objective function forward pass (if available).

- Parameters
  objective_fn (function) – the objective function for optimization
  x (array) – NumPy array containing the current values of the variables to be updated
  grad_fn (function) – optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
  the NumPy array containing the gradient \(\nabla f(x^{(t)})\) and the objective function output. If grad_fn is provided, the objective function will not be evaluated and instead None will be returned.
- Return type
  tuple
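As a sketch, supplying an explicit grad_fn (built here with qml.grad, and reusing the illustrative cost and params from the usage example above) skips the forward pass:

grad, fwd = opt.compute_grad(cost, params, grad_fn=qml.grad(cost))
# fwd is None because grad_fn was provided; grad contains the gradient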
step(objective_fn, x, grad_fn=None)

Update x with one step of the optimizer.

- Parameters
  objective_fn (function) – the objective function for optimization
  x (array) – NumPy array containing the current values of the variables to be updated
  grad_fn (function) – optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
  the new variable values \(x^{(t+1)}\)
- Return type
  array
step_and_cost(objective_fn, x, grad_fn=None)

Update x with one step of the optimizer and return the corresponding objective function value prior to the step.

- Parameters
  objective_fn (function) – the objective function for optimization
  x (array) – NumPy array containing the current values of the variables to be updated
  grad_fn (function) – optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.
- Returns
  the new variable values \(x^{(t+1)}\) and the objective function output prior to the step
- Return type
  tuple
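A sketch of tracking convergence with step_and_cost, reusing the illustrative cost and params from the usage example above; note that the returned value is the cost before the step:

for it in range(100):
    params, prev_cost = opt.step_and_cost(cost, params)
    if it % 20 == 0:
        print(f"Iteration {it}: cost = {prev_cost:.6f}")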
update_stepsize(stepsize)

Update the initialized stepsize value \(\eta\). This allows for techniques such as learning rate scheduling.

- Parameters
  stepsize (float) – the user-defined hyperparameter \(\eta\)
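For example, a minimal exponential-decay schedule (the decay constants here are illustrative, reusing the cost and params from the usage example above):

eta0, decay = 0.1, 0.95
for t in range(50):
    opt.update_stepsize(eta0 * decay ** t)  # eta_t = eta0 * decay^t
    params = opt.step(cost, params)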