qml.execute

execute(tapes, device, gradient_fn, interface='autograd', mode='best', gradient_kwargs=None, cache=True, cachesize=10000, max_diff=2, override_shots=False)

Execute a batch of tapes on a device in an autodifferentiation-compatible manner.

Parameters
  • tapes (Sequence[QuantumTape]) – batch of tapes to execute

  • device (Device) – Device to use to execute the batch of tapes. If the device does not provide a batch_execute method, by default the tapes will be executed serially.

  • gradient_fn (None or callable) – The gradient transform function to use for backward passes. If “device”, the device will be queried directly for the gradient (if supported).

  • interface (str) – The interface that will be used for classical autodifferentiation. This affects the types of parameters that can exist on the input tapes. Available options include autograd, torch, tf, and jax.

  • mode (str) – Whether the gradients should be computed on the forward pass (forward) or the backward pass (backward). Only applies if the device is queried for the gradient; gradient transform functions available in qml.gradients are only supported on the backward pass. A device-gradient sketch follows this parameter list.

  • gradient_kwargs (dict) – dictionary of keyword arguments to pass when determining the gradients of tapes

  • cache (bool) – Whether to cache evaluations. This can result in a significant reduction in quantum evaluations during gradient computations.

  • cachesize (int) – the size of the cache

  • max_diff (int) – If gradient_fn is a gradient transform, this option specifies the maximum number of derivatives to support. Increasing this value allows higher-order derivatives to be extracted, at the cost of additional (classical) computational overhead during the backward pass.

  • override_shots (int) – The number of shots to use for the execution. If False, the shots specified on the device are used.
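As an illustration of gradient_fn and mode, the sketch below queries the device itself for gradients, computed eagerly on the forward pass. This is a hedged example rather than part of the official documentation: it assumes a device that implements its own gradient method, and the gradient_kwargs entries ("method" and "use_device_state") are assumptions about what such a device accepts.

import pennylane as qml

dev = qml.device("lightning.qubit", wires=1)

with qml.tape.JacobianTape() as tape:
    qml.RX(0.4, wires=0)
    qml.expval(qml.PauliZ(0))

# gradient_fn="device" asks the device for its own gradient; mode="forward"
# computes it during the forward execution. The kwargs below are assumed
# options for an adjoint-capable simulator.
res = qml.execute(
    [tape],
    dev,
    gradient_fn="device",
    gradient_kwargs={"method": "adjoint_jacobian", "use_device_state": True},
    mode="forward",
)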

Returns

A nested list of tape results. Each element in the returned list corresponds in order to the provided tapes.

Return type

list[list[float]]

Example

Consider the following cost function:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("lightning.qubit", wires=2)

def cost_fn(params, x):
    with qml.tape.JacobianTape() as tape1:
        qml.RX(params[0], wires=0)
        qml.RY(params[1], wires=0)
        qml.expval(qml.PauliZ(0))

    with qml.tape.JacobianTape() as tape2:
        qml.RX(params[2], wires=0)
        qml.RY(x[0], wires=1)
        qml.CNOT(wires=[0, 1])
        qml.probs(wires=0)

    tapes = [tape1, tape2]

    # execute both tapes in a batch on the given device
    res = qml.execute(tapes, dev, qml.gradients.param_shift)

    return res[0][0] + res[1][0, 0] - res[1][0, 1]

In this cost function, two independent quantum tapes are constructed: one returning an expectation value, the other probabilities. We then batch-execute the two tapes and reduce the results to obtain a scalar.
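The same batch execution can be differentiated through other frameworks by switching the interface argument. Below is a minimal sketch assuming PyTorch is installed; it mirrors tape1 from cost_fn above and is an illustrative addition, not part of the original example:

import torch
import pennylane as qml

dev = qml.device("lightning.qubit", wires=2)
params = torch.tensor([0.1, 0.2], requires_grad=True)

with qml.tape.JacobianTape() as tape:
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    qml.expval(qml.PauliZ(0))

# interface="torch" returns torch tensors, so torch's autograd can be used
res = qml.execute([tape], dev, qml.gradients.param_shift, interface="torch")[0]
res.backward()      # allowed: res is a single-element tensor
print(params.grad)  # parameter-shift gradients propagated to the torch tensor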

Let’s execute this cost function while tracking the gradient:

>>> params = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> x = np.array([0.5], requires_grad=True)
>>> cost_fn(params, x)
1.9305068163274222

Since the execute function is differentiable, we can also compute the gradient:

>>> qml.grad(cost_fn)(params, x)
(array([-0.0978434 , -0.19767681, -0.29552021]), array([5.37764278e-17]))

Finally, we can also compute any nth-order derivative. Let’s compute the Jacobian of the gradient (that is, the Hessian):

>>> x.requires_grad = False
>>> qml.jacobian(qml.grad(cost_fn))(params, x)
array([[-0.97517033,  0.01983384,  0.        ],
       [ 0.01983384, -0.97517033,  0.        ],
       [ 0.        ,  0.        , -0.95533649]])
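Caching is most useful for higher-order derivatives, where the same shifted circuits recur across tapes. The following is a hedged sketch: it assumes the device exposes the num_executions counter available on PennyLane's qubit devices.

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("lightning.qubit", wires=1)

def cost(params):
    with qml.tape.JacobianTape() as tape:
        qml.RX(params[0], wires=0)
        qml.expval(qml.PauliZ(0))
    # cache=True (the default) stores results keyed by the circuit hash
    return qml.execute([tape], dev, qml.gradients.param_shift, cache=True)[0][0]

params = np.array([0.1], requires_grad=True)
qml.jacobian(qml.grad(cost))(params)  # second derivative: shifted circuits repeat
print(dev.num_executions)             # fewer device evaluations than with cache=False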