qml.QNGOptimizer

class QNGOptimizer(stepsize=0.01, diag_approx=False, lam=0)[source]

Bases: pennylane.optimize.gradient_descent.GradientDescentOptimizer

Optimizer with adaptive learning rate, via calculation of the diagonal or block-diagonal approximation to the Fubini-Study metric tensor. A quantum generalization of natural gradient descent.

The QNG optimizer uses a step- and parameter-dependent learning rate, determined by the pseudo-inverse of the Fubini-Study metric tensor \(g\):

\[x^{(t+1)} = x^{(t)} - \eta g(f(x^{(t)}))^{+} \nabla f(x^{(t)}),\]

where \(f(x^{(t)}) = \langle 0 | U(x^{(t)})^\dagger \hat{B} U(x^{(t)}) | 0 \rangle\) is an expectation value of some observable measured on the variational quantum circuit \(U(x^{(t)})\).
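As a minimal NumPy sketch of this update rule (the metric tensor g and gradient values below are illustrative placeholders, not quantities computed from a circuit):

>>> import numpy as np
>>> eta = 0.01
>>> g = np.array([[0.25, 0.0], [0.0, 0.25]])  # placeholder metric tensor
>>> grad = np.array([0.1, -0.2])              # placeholder gradient
>>> x = np.array([0.011, 0.012])              # current parameters
>>> x - eta * np.linalg.pinv(g) @ grad        # one QNG update
array([0.007, 0.02 ])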

Consider a quantum node represented by the variational quantum circuit

\[U(\mathbf{\theta}) = W(\theta_{i+1}, \dots, \theta_{N})X(\theta_{i}) V(\theta_1, \dots, \theta_{i-1}),\]

where all parametrized gates can be written in the form \(X(\theta_{i}) = e^{i\theta_i K_i}\). That is, \(K_i\) is the generator of the parametrized operation \(X(\theta_i)\) corresponding to the \(i\)-th parameter.

For each parametric layer \(\ell\) in the variational quantum circuit containing \(n\) parameters, the \(n\times n\) block-diagonal submatrix of the Fubini-Study tensor \(g_{ij}^{(\ell)}\) is calculated directly on the quantum device in a single evaluation:

\[g_{ij}^{(\ell)} = \langle \psi_\ell | K_i K_j | \psi_\ell \rangle - \langle \psi_\ell | K_i | \psi_\ell\rangle \langle \psi_\ell |K_j | \psi_\ell\rangle,\]

where \(|\psi_\ell\rangle = V(\theta_1, \dots, \theta_{i-1})|0\rangle\) (that is, \(|\psi_\ell\rangle\) is the quantum state prior to the application of parameterized layer \(\ell\)).
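As an illustration of this formula, here is a plain NumPy sketch for a hypothetical single-qubit layer with generators \(K_1 = X/2\) and \(K_2 = Y/2\) acting on \(|\psi_\ell\rangle = |0\rangle\) (taking the real part of the covariance matrix, which gives the symmetric metric block):

>>> import numpy as np
>>> X = np.array([[0, 1], [1, 0]], dtype=complex)
>>> Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
>>> K = [X / 2, Y / 2]                     # example generators
>>> psi = np.array([1, 0], dtype=complex)  # |psi_l> = |0>
>>> g = np.real(np.array([[psi.conj() @ Ki @ Kj @ psi
...                        - (psi.conj() @ Ki @ psi) * (psi.conj() @ Kj @ psi)
...                        for Kj in K] for Ki in K]))
>>> g
array([[0.25, 0.  ],
       [0.  , 0.25]])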

Combining the quantum natural gradient optimizer with the analytic parameter-shift rule to optimize a variational circuit with \(d\) parameters and \(L\) layers, a total of \(2d+L\) quantum evaluations are required per optimization step.
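For instance, a circuit with \(d = 2\) parameters split across \(L = 2\) layers requires \(2 \cdot 2 + 2 = 6\) quantum evaluations per step.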

For more details, see:

James Stokes, Josh Izaac, Nathan Killoran, Giuseppe Carleo. “Quantum Natural Gradient.” arXiv:1909.02108, 2019.

Note

The QNG optimizer supports single QNodes or VQECost objects as objective functions. Alternatively, a metric tensor function can be provided directly to the optimizer's step() method via the metric_tensor_fn argument.

Providing metric_tensor_fn may be useful in the following cases:

  • For hybrid classical-quantum models, the “mixed geometry” of the model makes it unclear which metric should be used for which parameter. For example, parameters of quantum nodes are better suited to one metric (such as the QNG), whereas others (e.g., parameters of classical nodes) are likely better suited to another metric.

  • For multi-QNode models, it is unclear which geometry is appropriate when a parameter is shared among several QNodes.

If the objective function is VQE/VQE-like, i.e., a function of a group of QNodes that share an ansatz, there are two ways to use the optimizer:

  • Realize the objective function as a VQECost object, which has a metric_tensor method.

  • Manually provide the metric_tensor_fn corresponding to the metric tensor of the QNode(s) involved in the objective function.

Examples:

For VQE/VQE-like problems, the objective function for the optimizer can be realized as a VQECost object.

>>> dev = qml.device("default.qubit", wires=1)
>>> def circuit(params, wires=0):
...     qml.RX(params[0], wires=wires)
...     qml.RY(params[1], wires=wires)
>>> coeffs = [1, 1]
>>> obs = [qml.PauliX(0), qml.PauliZ(0)]
>>> H = qml.Hamiltonian(coeffs, obs)
>>> cost_fn = qml.VQECost(circuit, H, dev)

Once constructed, the cost function can be passed directly to the optimizer’s step function:

>>> eta = 0.01
>>> init_params = [0.011, 0.012]
>>> opt = qml.QNGOptimizer(eta)
>>> theta_new = opt.step(cost_fn, init_params)
>>> print(theta_new)
[0.011445239214543481, -0.027519522461477233]

Alternatively, the same objective function can be used with the optimizer by manually providing the metric_tensor_fn:

>>> qnodes = qml.map(circuit, obs, dev, 'expval')
>>> cost_fn = qml.dot(coeffs, qnodes)
>>> eta = 0.01
>>> init_params = [0.011, 0.012]
>>> opt = qml.QNGOptimizer(eta)
>>> theta_new = opt.step(cost_fn, init_params, metric_tensor_fn=qnodes.qnodes[0].metric_tensor)
>>> print(theta_new)
[0.011445239214543481, -0.027519522461477233]

See also

See the quantum natural gradient example for more details on the Fubini-Study metric tensor and this optimization class.

Parameters
  • stepsize (float) – the user-defined hyperparameter \(\eta\)

  • diag_approx (bool) – If True, forces a diagonal approximation where the calculated metric tensor only contains diagonal elements \(G_{ii}\). In some cases, this may reduce the time taken per optimization step.

  • lam (float) – metric tensor regularization \(G_{ij}+\lambda I\) to be applied at each optimization step
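For example, a regularized diagonal approximation can be requested at construction (the hyperparameter values here are illustrative):

>>> opt = qml.QNGOptimizer(stepsize=0.01, diag_approx=True, lam=0.001)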

apply_grad(grad, x)

Update the variables x to take a single optimization step.

compute_grad(objective_fn, x[, grad_fn])

Compute gradient of the objective_fn at the point x.

step(qnode, x[, recompute_tensor, …])

Update x with one step of the optimizer.

update_stepsize(stepsize)

Update the initialized stepsize value \(\eta\).

apply_grad(grad, x)[source]

Update the variables x to take a single optimization step. Flattens and unflattens the inputs to maintain nested iterables as the parameters of the optimization.

Parameters
  • grad (array) – The gradient of the objective function at point \(x^{(t)}\): \(\nabla f(x^{(t)})\)

  • x (array) – the current value of the variables \(x^{(t)}\)

Returns

the new values \(x^{(t+1)}\)

Return type

array

static compute_grad(objective_fn, x, grad_fn=None)

Compute gradient of the objective_fn at the point x.

Parameters
  • objective_fn (function) – the objective function for optimization

  • x (array) – NumPy array containing the current values of the variables to be updated

  • grad_fn (function) – Optional gradient function of the objective function with respect to the variables x. If None, the gradient function is computed automatically.

Returns

NumPy array containing the gradient \(\nabla f(x^{(t)})\)

Return type

array
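Since compute_grad is a static method, it can also be called without instantiating the optimizer; for example, reusing cost_fn and init_params from the examples above:

>>> grad = qml.QNGOptimizer.compute_grad(cost_fn, init_params)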

step(qnode, x, recompute_tensor=True, metric_tensor_fn=None)[source]

Update x with one step of the optimizer.

Parameters
  • qnode (QNode) – the QNode for optimization

  • x (array) – NumPy array containing the current values of the variables to be updated

  • recompute_tensor (bool) – Whether or not the metric tensor should be recomputed. If not, the metric tensor from the previous optimization step is used.

  • metric_tensor_fn (function) – Optional metric tensor function with respect to the variables x. If None, the metric tensor function is computed automatically.

Returns

the new variable values \(x^{(t+1)}\)

Return type

array
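For example, the metric tensor from a previous step can be reused to save quantum evaluations, continuing from the VQECost example above:

>>> theta = opt.step(cost_fn, init_params)
>>> theta = opt.step(cost_fn, theta, recompute_tensor=False)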

update_stepsize(stepsize)

Update the initialized stepsize value \(\eta\).

This allows for techniques such as learning rate scheduling.

Parameters

stepsize (float) – the user-defined hyperparameter \(\eta\)
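For example, a minimal sketch of a decaying learning-rate schedule, reusing cost_fn and init_params from the examples above (the decay rule itself is illustrative):

>>> opt = qml.QNGOptimizer(0.05)
>>> params = init_params
>>> for t in range(20):
...     params = opt.step(cost_fn, params)
...     opt.update_stepsize(0.05 / (1 + 0.1 * t))  # reduce the stepsize over time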
