qml.qnode

qnode(device, *, interface='autograd', mutable=True, diff_method='best', **kwargs)[source]

Decorator for creating QNodes.

When applied to a quantum function, this decorator converts it into a QNode instance.

Example

>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.PauliZ(0))

Parameters
  • device (Device) – a PennyLane-compatible device

  • interface (str) –

    The interface that will be used for classical backpropagation. This affects the types of objects that can be passed to/returned from the QNode:

    • interface='autograd': Allows autograd to backpropagate through the QNode. The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays.

    • interface='torch': Allows PyTorch to backpropagate through the QNode. The QNode accepts and returns Torch tensors.

    • interface='tf': Allows TensorFlow in eager mode to backpropagate through the QNode. The QNode accepts and returns TensorFlow tf.Variable and tf.Tensor objects.

    • None: The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays. It does not connect to any machine learning library automatically for backpropagation.

  • mutable (bool) – whether the QNode circuit is mutable. A mutable QNode rebuilds its circuit on every evaluation, so the circuit structure may depend on the input arguments; an immutable QNode constructs the circuit once and reuses it, which is faster but requires a fixed structure.

  • diff_method (str, None) –

    the method of differentiation to use in the created QNode.

    • "best": Best available method. Uses classical backpropagation or the device's own gradient rules if supported; otherwise uses the analytic parameter-shift rule where possible, falling back to numerical finite differences.

    • "backprop": Use classical backpropagation. Only allowed on simulator devices that are classically end-to-end differentiable, for example default.tensor.tf. Note that the returned QNode can only be used with the machine learning framework supported by the device; a separate interface argument should not be passed.

    • "reversible": Uses a reversible method for computing the gradient. This method is similar to "backprop", but trades off increased runtime with significantly lower memory usage. Compared to the parameter-shift rule, the reversible method can be faster or slower, depending on the density and location of parametrized gates in a circuit. Only allowed on (simulator) devices with the "reversible" capability, for example default.qubit.

    • "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient rules.

    • "parameter-shift": Use the analytic parameter-shift rule where possible, with finite-difference as a fallback.

    • "finite-diff": Uses numerical finite-differences for all parameters.

    • None: a non-differentiable QNode is returned.
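For intuition, the parameter-shift rule for a single-qubit rotation can be sketched without any quantum library. Below, f(x) = cos(x) stands in for the expectation value ⟨Z⟩ of the example circuit; two shifted evaluations recover the exact analytic derivative. This is a conceptual sketch, not PennyLane's internal implementation:

```python
import math

def f(x):
    # Stand-in for the QNode: <Z> after RX(x) on |0> equals cos(x)
    return math.cos(x)

def parameter_shift(f, x, s=math.pi / 2):
    # Two-term parameter-shift rule for gates generated by a Pauli
    # operator: df/dx = (f(x + s) - f(x - s)) / (2 sin(s))
    return (f(x + s) - f(x - s)) / (2 * math.sin(s))

x = 0.3
print(parameter_shift(f, x))  # equals -sin(0.3) up to float rounding
print(-math.sin(x))
```

Unlike finite differences, the shift s is large (π/2 here), so the rule is robust to the shot noise of hardware evaluations.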

Keyword Arguments
  • h (float) – Step size for the finite difference method. Default is 1e-7 for analytic devices, or 0.3 for non-analytic devices (those that estimate expectation values with a finite number of shots).

  • order (int) – Order of the finite-difference method; must be 1 (default, a forward difference) or 2 (a centered difference)
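The two keyword arguments above can be illustrated with a plain-Python sketch, again using f(x) = cos(x) as a stand-in for the QNode. The function below is illustrative, not PennyLane's internal code; the order-2 branch uses one common centered-difference convention with step h:

```python
import math

def f(x):
    return math.cos(x)

def finite_diff(f, x, h=1e-7, order=1):
    if order == 1:
        # Forward difference: (f(x + h) - f(x)) / h
        return (f(x + h) - f(x)) / h
    elif order == 2:
        # Centered difference: (f(x + h/2) - f(x - h/2)) / h
        return (f(x + h / 2) - f(x - h / 2)) / h
    raise ValueError("order must be 1 or 2")

x = 0.3
print(finite_diff(f, x, order=1))  # approximates -sin(0.3)
print(finite_diff(f, x, order=2))  # typically more accurate for the same h
```

The larger default step (0.3) on finite-shot devices reflects that shot noise, not truncation error, dominates there, so a tiny h would amplify the noise.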