Gradients and training

PennyLane offers seamless integration between classical and quantum computations. Code up quantum circuits in PennyLane, compute their gradients, and connect them easily to the top scientific computing and machine learning libraries.

Gradients

When creating a QNode, you can specify the differentiation method that PennyLane should use whenever the gradient of that QNode is requested.

import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.probs(wires=0)

PennyLane currently provides the following differentiation methods for QNodes:

Simulation-based differentiation

The following methods use reverse accumulation to compute gradients; a well-known example of this approach is backpropagation. These methods are not hardware compatible; they are only supported on statevector simulator devices such as default.qubit.

However, for rapid prototyping on simulators, these methods typically outperform forward-mode accumulators such as the parameter-shift rule and finite-differences. For more details, see the quantum backpropagation demonstration.

  • "backprop": Use standard backpropagation.

    This differentiation method is only allowed on simulator devices that are classically end-to-end differentiable, for example default.qubit. This method does not work on devices that estimate measurement statistics using a finite number of shots; please use the parameter-shift rule instead.

  • "adjoint": Use a form of backpropagation that takes advantage of the unitary or reversible nature of quantum computation.

    The adjoint method reverses through the circuit after a forward pass by iteratively applying the inverse (adjoint) of each gate. This method is similar to "backprop", but has significantly lower memory usage and a similar runtime (see the sketch after this list).

  • "reversible": Use a form of backpropagation that takes advantage of the unitary or reversible nature of quantum computation.

    This method is similar to the adjoint method, but has a slightly larger time overhead and a similar memory overhead. Compared to the parameter-shift rule, the reversible method can be faster or slower, depending on the density and location of parametrized gates in a circuit.
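As a minimal sketch of simulation-based differentiation, the following requests the adjoint method on default.qubit and computes a gradient with qml.grad; the two-qubit circuit and parameter values are arbitrary illustrations:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

# The backward pass re-traverses the circuit, applying each gate's adjoint.
weights = np.array([0.1, 0.2], requires_grad=True)
grad = qml.grad(circuit)(weights)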

Hardware-compatible differentiation

The following methods support both quantum hardware and simulators, and are examples of forward accumulation. However, when using a simulator, you may notice that the time required to compute the gradients scales quadratically with the number of trainable circuit parameters.

  • "parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.

  • "finite-diff": Use numerical finite-differences for all quantum operation arguments.

Device gradients

  • "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient computation.

Note

If not specified, the default differentiation method is diff_method="best". PennyLane will attempt to determine the best differentiation method given the device and interface. Typically, PennyLane will prioritize device-provided gradients, backpropagation, the parameter-shift rule, and finally finite-differences, in that order.
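For instance, omitting the argument is equivalent to requesting "best" explicitly (a minimal illustration, reusing the device defined above):

@qml.qnode(dev)  # same as diff_method="best"
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))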

Training and interfaces

The bridge between the quantum and classical worlds is provided in PennyLane via interfaces. Currently, there are four built-in interfaces: NumPy, PyTorch, JAX, and TensorFlow. These interfaces make each of these libraries quantum-aware, allowing quantum circuits to be treated just like any other operation.

In PennyLane, an interface is declared when creating a QNode, e.g.,

@qml.qnode(dev, interface="tf")
def my_quantum_circuit(...):
    ...

Note

If no interface is specified, PennyLane will default to the NumPy interface (powered by the autograd library).

This will allow native numerical objects of the specified library (NumPy arrays, Torch Tensors, TensorFlow Tensors, or JAX arrays) to be passed as parameters to the quantum circuit. It also makes the gradients of the quantum circuit accessible to the classical library, enabling the optimization of arbitrary hybrid circuits.
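As a minimal sketch with the TensorFlow interface (the circuit and parameter value are arbitrary), a tf.Variable can be fed straight into a QNode and differentiated with a GradientTape:

import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="tf")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = tf.Variable(0.3)
with tf.GradientTape() as tape:
    out = circuit(x)  # returns a TensorFlow tensor
grad = tape.gradient(out, x)  # quantum gradient, available to TensorFlow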

Walkthroughs of each specific interface are available in the PennyLane documentation.

In addition to the core interfaces discussed above, PennyLane also provides higher-level classes for converting QNodes into both Keras and torch.nn layers:

pennylane.qnn.KerasLayer(qnode, …)

Converts a QNode to a Keras Layer.

pennylane.qnn.TorchLayer(qnode, …)

Converts a QNode to a Torch layer.
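As a minimal sketch of TorchLayer, following the documented convention that the QNode's data argument is named inputs (the embedding, ansatz, and weight shapes here are arbitrary choices):

import torch
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# Shapes of the trainable QNode arguments, registered as torch.nn parameters.
weight_shapes = {"weights": (3, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

# The quantum layer composes with classical layers like any torch.nn.Module.
model = torch.nn.Sequential(qlayer, torch.nn.Linear(n_qubits, 1))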

Note

QNodes with an interface will always incur a small overhead on evaluation. If you do not need to compute quantum gradients of a QNode, specifying interface=None will remove this overhead and result in a slightly faster evaluation. However, gradients will no longer be available.
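For example, a QNode used purely for evaluation can skip the interface machinery entirely (a minimal sketch, reusing the device defined above):

@qml.qnode(dev, interface=None)  # faster evaluation; no gradients available
def circuit(x):
    qml.RX(x, wires=0)
    return qml.probs(wires=0)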