qml.qnode

qnode(device, interface='autograd', diff_method='best', mutable=True, max_expansion=10, h=1e-07, order=1, shift=1.5707963267948966, adjoint_cache=True, argnum=None, **kwargs)

Decorator for creating QNodes.
This decorator is used to indicate to PennyLane that the decorated function contains a quantum variational circuit that should be bound to a compatible device.
The QNode calls the quantum function to construct a QuantumTape instance representing the quantum circuit.

Parameters
func (callable) – a quantum function
device (Device) – a PennyLane-compatible device
interface (str) – The interface that will be used for classical backpropagation. This affects the types of objects that can be passed to/returned from the QNode:

"autograd": Allows autograd to backpropagate through the QNode. The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays.
"torch": Allows PyTorch to backpropagate through the QNode. The QNode accepts and returns Torch tensors.
"tf": Allows TensorFlow in eager mode to backpropagate through the QNode. The QNode accepts and returns TensorFlow tf.Variable and tf.Tensor objects.
None: The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays. It does not connect to any machine learning library automatically for backpropagation.
diff_method (str) – the method of differentiation to use in the created QNode:

"best": Best available method. Uses classical backpropagation or the device directly to compute the gradient if supported, otherwise uses the analytic parameter-shift rule where possible, with finite-difference as a fallback.
"backprop": Use classical backpropagation. Only allowed on simulator devices that are classically end-to-end differentiable, for example default.tensor.tf. Note that the returned QNode can only be used with the machine-learning framework supported by the device; a separate interface argument should not be passed.
"reversible": Uses a reversible method for computing the gradient. This method is similar to "backprop", but trades increased runtime for significantly lower memory usage. Compared to the parameter-shift rule, the reversible method can be faster or slower, depending on the density and location of parametrized gates in a circuit. Only allowed on (simulator) devices with the "reversible" capability, for example default.qubit.
"adjoint": Uses an adjoint method that reverses through the circuit after a forward pass by iteratively applying the inverse (adjoint) gate. This method is similar to the reversible method, but has a lower time overhead and a similar memory overhead. Only allowed on simulator devices such as default.qubit.
"device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient rules.
"parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.
"finite-diff": Uses numerical finite differences for all quantum operation arguments.
mutable (bool) – If True, the underlying quantum circuit is reconstructed with every evaluation. This is the recommended approach, as it allows the underlying quantum structure to depend on (potentially trainable) QNode input arguments; however, it may add some overhead at evaluation time. If this is set to False, the quantum structure is constructed only on the first evaluation of the QNode, then stored and reused for further quantum evaluations. Only set this to False if it is known that the underlying quantum structure is independent of QNode input.
max_expansion (int) – The number of times the internal circuit should be expanded when executed on a device. Expansion occurs when an operation or measurement is not supported, and results in a gate decomposition. If any operations in the decomposition remain unsupported by the device, another expansion occurs.
h (float) – step size for the finite-difference method
order (int) – The order of the finite-difference method to use. 1 corresponds to forward finite differences, 2 to centered finite differences.
shift (float) – the size of the shift for two-term parameter-shift gradient computations
adjoint_cache (bool) – For TensorFlow and PyTorch interfaces and adjoint differentiation, this indicates whether to save the device state after the forward pass. Doing so saves a forward execution. The device state is automatically reused with the autograd and JAX interfaces.
argnum (int, list(int), None) – Which argument(s) to compute the Jacobian with respect to. When fewer parameters are specified than the total number of trainable parameters, the Jacobian is estimated. Note that this option is only applicable for the following differentiation methods: "parameter-shift", "finite-diff", and "reversible".
kwargs – used to catch all unrecognized keyword arguments and provide a user warning about them
Example
>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.PauliZ(0))