qml.tape.qnode

qnode(device, interface='autograd', diff_method='best', **diff_options)

Decorator for creating QNodes.
This decorator is used to indicate to PennyLane that the decorated function contains a quantum variational circuit that should be bound to a compatible device.
The QNode calls the quantum function to construct a JacobianTape instance representing the quantum circuit.

Note
The quantum tape is an experimental feature. QNodes that use the quantum tape have access to advanced features, such as in-QNode classical processing, but do not yet have feature parity with the standard PennyLane QNode.
This quantum tape-compatible QNode can either be created directly,

>>> import pennylane as qml
>>> @qml.tape.qnode(dev)
or enabled globally via enable_tape() without changing your PennyLane code:

>>> qml.enable_tape()
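For instance, the in-QNode classical processing mentioned in the note above can be sketched roughly as follows (illustrative only; it assumes a default.qubit device and PennyLane's wrapped NumPy):

>>> from pennylane import numpy as np
>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.tape.qnode(dev)
... def circuit(x):
...     # classical processing of the argument inside the QNode
...     qml.RX(np.sin(x) ** 2, wires=0)
...     return qml.expval(qml.PauliZ(0))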
For more details, see pennylane.tape.

Parameters
func (callable) – a quantum function
device (Device) – a PennyLane-compatible device
interface (str) – The interface that will be used for classical backpropagation. This affects the types of objects that can be passed to/returned from the QNode:

interface='autograd': Allows autograd to backpropagate through the QNode. The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays.

interface='torch': Allows PyTorch to backpropagate through the QNode. The QNode accepts and returns Torch tensors.

interface='tf': Allows TensorFlow in eager mode to backpropagate through the QNode. The QNode accepts and returns TensorFlow tf.Variable and tf.Tensor objects.

None: The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays. It does not connect to any machine learning library automatically for backpropagation.
diff_method (str, None) – The method of differentiation to use in the created QNode (see the sketch after this list):

"best": Best available method. Uses classical backpropagation or the device directly to compute the gradient if supported, otherwise will use the analytic parameter-shift rule where possible, with finite-difference as a fallback.

"backprop": Use classical backpropagation. Only allowed on simulator devices that are classically end-to-end differentiable, for example default.tensor.tf. Note that the returned QNode can only be used with the machine-learning framework supported by the device; a separate interface argument should not be passed.

"reversible": Uses a reversible method for computing the gradient. This method is similar to "backprop", but trades off increased runtime for significantly lower memory usage. Compared to the parameter-shift rule, the reversible method can be faster or slower, depending on the density and location of parametrized gates in a circuit. Only allowed on (simulator) devices with the "reversible" capability, for example default.qubit.

"device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient rules.

"parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.

"finite-diff": Uses numerical finite-differences for all quantum operation arguments.
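As a rough sketch of how the interface and diff_method arguments combine (illustrative, not taken from this page; it assumes a default.qubit device), the parameter-shift rule can be requested explicitly and the QNode differentiated with the default autograd interface:

>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.tape.qnode(dev, interface="autograd", diff_method="parameter-shift")
... def circuit(x):
...     qml.RY(x, wires=0)
...     return qml.expval(qml.PauliZ(0))
>>> qml.grad(circuit, argnum=0)(0.3)

Passing interface='torch' or interface='tf' instead would make the same circuit accept and return PyTorch or TensorFlow tensors, provided those frameworks are installed.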
Keyword Arguments

h=1e-7 (float) – Step size for the finite difference method.

order=1 (int) – The order of the finite difference method to use. 1 corresponds to forward finite differences, 2 to centered finite differences.
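A minimal sketch of passing these options through **diff_options when finite differences are requested (assumed usage; dev is a previously created device):

>>> @qml.tape.qnode(dev, diff_method="finite-diff", h=1e-6, order=2)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.PauliZ(0))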
Example
>>> qml.enable_tape()
>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.PauliZ(0))
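The resulting QNode can then be evaluated and differentiated like any other; a brief continuation of the example above (illustrative, outputs omitted):

>>> circuit(0.5)
>>> qml.grad(circuit, argnum=0)(0.5)

In tape mode, the JacobianTape built on the most recent call is also typically accessible on the QNode (for example as circuit.qtape) for inspection.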