qml.tape.QNode

class QNode(func, device, interface='autograd', diff_method='best', **diff_options)[source]

Bases: object

Represents a quantum node in the hybrid computational graph.

A quantum node contains a quantum function (corresponding to a variational circuit) and the computational device it is executed on.

The QNode calls the quantum function to construct a JacobianTape instance representing the quantum circuit.

Note

The quantum tape is an experimental feature. QNodes that use the quantum tape have access to advanced features, such as in-QNode classical processing, but do not yet have feature parity with the standard PennyLane QNode.

This quantum tape-compatible QNode can either be created directly,

>>> import pennylane as qml
>>> qml.tape.QNode(qfunc, dev)

or enabled globally via enable_tape() without changing your PennyLane code:

>>> qml.enable_tape()

For more details, see pennylane.tape.

Parameters
  • func (callable) – a quantum function

  • device (Device) – a PennyLane-compatible device

  • interface (str) –

    The interface that will be used for classical backpropagation. This affects the types of objects that can be passed to/returned from the QNode:

    • interface='autograd': Allows autograd to backpropagate through the QNode. The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays.

    • interface='torch': Allows PyTorch to backpropagate through the QNode. The QNode accepts and returns Torch tensors.

    • interface='tf': Allows TensorFlow in eager mode to backpropagate through the QNode. The QNode accepts and returns TensorFlow tf.Variable and tf.Tensor objects.

    • None: The QNode accepts default Python types (floats, ints, lists) as well as NumPy array arguments, and returns NumPy arrays. It does not connect to any machine learning library automatically for backpropagation.

  • diff_method (str, None) –

    the method of differentiation to use in the created QNode (see the usage sketch after the Example below)

    • "best": Best available method. Uses classical backpropagation or the device directly to compute the gradient if supported, otherwise will use the analytic parameter-shift rule where possible with finite-difference as a fallback.

    • "backprop": Use classical backpropagation. Only allowed on simulator devices that are classically end-to-end differentiable, for example default.tensor.tf. Note that the returned QNode can only be used with the machine-learning framework supported by the device.

    • "reversible": Uses a reversible method for computing the gradient. This method is similar to "backprop", but trades off increased runtime with significantly lower memory usage. Compared to the parameter-shift rule, the reversible method can be faster or slower, depending on the density and location of parametrized gates in a circuit. Only allowed on (simulator) devices with the “reversible” capability, for example default.qubit.

    • "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient computation.

    • "parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.

    • "finite-diff": Uses numerical finite-differences for all quantum operation arguments.

Keyword Arguments
  • h=1e-7 (float) – step size for the finite difference method

  • order=1 (int) – The order of the finite difference method to use. 1 corresponds to forward finite differences, 2 to centered finite differences.

  • shift=pi/2 (float) – the size of the shift for two-term parameter-shift gradient computations

Example

>>> qml.enable_tape()
>>> def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.PauliZ(0))
>>> dev = qml.device("default.qubit", wires=1)
>>> qnode = qml.QNode(circuit, dev)
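
A short usage sketch extending this example (not part of the original reference): the interface, diff_method, and finite-difference keyword arguments described above can be passed explicitly, and the QNode can be differentiated with qml.grad. Exact return types depend on the installed PennyLane version.

>>> from pennylane import numpy as np
>>> qnode = qml.QNode(circuit, dev, interface="autograd", diff_method="parameter-shift")
>>> x = np.array(0.3, requires_grad=True)
>>> qnode(x)                # expectation value, cos(0.3)
>>> qml.grad(qnode)(x)      # analytic gradient, -sin(0.3)
>>> qml.QNode(circuit, dev, diff_method="finite-diff", h=1e-6, order=2)  # finite-difference options passed as keyword arguments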

INTERFACE_MAP

INTERFACE_MAP = {'autograd': <function QNode.to_autograd>, 'tf': <function QNode.to_tf>, 'torch': <function QNode.to_torch>}

__call__(*args, **kwargs)

Call self as a function.

construct(args, kwargs)

Call the quantum function with a tape context, ensuring the operations get queued.

draw([charset])

Draw the quantum tape as a circuit diagram.

get_best_method(device, interface)

Returns the ‘best’ JacobianTape and differentiation method for a particular device and interface combination.

get_tape(device, interface[, diff_method])

Determine the best JacobianTape, differentiation method, and interface for a requested device, interface, and diff method.

to_autograd()

Apply the Autograd interface to the internal quantum tape.

to_tf([dtype])

Apply the TensorFlow interface to the internal quantum tape.

to_torch([dtype])

Apply the Torch interface to the internal quantum tape.

__call__(*args, **kwargs)[source]

Call self as a function.

construct(args, kwargs)[source]

Call the quantum function with a tape context, ensuring the operations get queued.
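
For illustration, a minimal sketch (assuming tape mode, the qnode from the Example above, and that the constructed tape is exposed on the qtape attribute):

qnode.construct((0.5,), {})        # build the tape without executing the device
print(qnode.qtape.operations)      # e.g. [RX(0.5, wires=[0])]
print(qnode.qtape.measurements)    # e.g. [expval(PauliZ(wires=[0]))]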

draw(charset='unicode')[source]

Draw the quantum tape as a circuit diagram.

Consider the following circuit as an example:

@qml.qnode(dev)
def circuit(a, w):
    qml.Hadamard(0)
    qml.CRX(a, wires=[0, 1])
    qml.Rot(*w, wires=[1])
    qml.CRX(-a, wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

We can draw the QNode after execution:

>>> result = circuit(2.3, [1.2, 3.2, 0.7])
>>> print(circuit.draw())
0: ──H──╭C────────────────────────────╭C─────────╭┤ ⟨Z ⊗ Z⟩
1: ─────╰RX(2.3)──Rot(1.2, 3.2, 0.7)──╰RX(-2.3)──╰┤ ⟨Z ⊗ Z⟩
>>> print(circuit.draw(charset="ascii"))
0: --H--+C----------------------------+C---------+| <Z @ Z>
1: -----+RX(2.3)--Rot(1.2, 3.2, 0.7)--+RX(-2.3)--+| <Z @ Z>

Parameters

charset (str, optional) – The charset that should be used. Currently, “unicode” and “ascii” are supported.

Raises
  • ValueError – if the given charset is not supported

  • QuantumFunctionError – drawing is impossible because the underlying quantum tape has not yet been constructed

Returns

the circuit representation of the tape

Return type

str

static get_best_method(device, interface)[source]

Returns the ‘best’ JacobianTape and differentiation method for a particular device and interface combination.

This method attempts to determine support for differentiation methods using the following order:

  • "backprop"

  • "device"

  • "parameter-shift"

  • "finite-diff"

The first differentiation method that is supported (going from top to bottom) will be returned.

Parameters
  • device (Device) – PennyLane device

  • interface (str) – name of the requested interface

Returns

tuple containing the compatible JacobianTape, the interface to apply, and the method argument to pass to the JacobianTape.jacobian method

Return type

tuple[JacobianTape, str, str]
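
For example, a sketch of querying the best combination for default.qubit with the autograd interface (the exact tuple depends on the device's capabilities and the installed PennyLane version):

dev = qml.device("default.qubit", wires=2)
tape_class, interface, method = qml.tape.QNode.get_best_method(dev, "autograd")
# method is typically "backprop" on simulators that support it, otherwise
# "device", "parameter-shift", or "finite-diff", tried in that order.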

static get_tape(device, interface, diff_method='best')[source]

Determine the best JacobianTape, differentiation method, and interface for a requested device, interface, and diff method.

Parameters
  • device (Device) – PennyLane device

  • interface (str) – name of the requested interface

  • diff_method (str) – The requested method of differentiation. One of "best", "backprop", "reversible", "device", "parameter-shift", or "finite-diff".

Returns

tuple containing the compatible JacobianTape, the interface to apply, and the method argument to pass to the JacobianTape.jacobian method

Return type

tuple[JacobianTape, str, str]
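
A corresponding sketch for an explicitly requested differentiation method (reusing dev from the previous sketch; the returned tuple again depends on the device):

tape_class, interface, method = qml.tape.QNode.get_tape(dev, "tf", diff_method="parameter-shift")
# tape_class is the JacobianTape class to instantiate, and method is the value
# forwarded to JacobianTape.jacobian when gradients are computed.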

to_autograd()[source]

Apply the Autograd interface to the internal quantum tape.

to_tf(dtype=None)[source]

Apply the TensorFlow interface to the internal quantum tape.

Parameters

dtype (tf.DType) – The dtype that the TensorFlow QNode should output. If not provided, the default is tf.float64.

Raises

QuantumFunctionError – if TensorFlow >= 2.1 is not installed
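
A minimal sketch, assuming TensorFlow >= 2.1 in eager mode and the qnode from the Example above; after conversion, the QNode accepts and returns TensorFlow tensors and can be differentiated with tf.GradientTape:

import tensorflow as tf

qnode.to_tf(dtype=tf.float32)      # switch the QNode to the TensorFlow interface
x = tf.Variable(0.3)
with tf.GradientTape() as g:
    res = qnode(x)                 # expectation value as a TensorFlow tensor
grad = g.gradient(res, x)          # gradient of the result with respect to x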

to_torch(dtype=None)[source]

Apply the Torch interface to the internal quantum tape.

Parameters

dtype (torch.dtype) – The dtype that the Torch QNode should output. If not provided, the default is torch.float64.

Raises

QuantumFunctionError – if PyTorch >= 1.3 is not installed
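
A similar sketch for PyTorch, assuming PyTorch >= 1.3 and the qnode from the Example above:

import torch

qnode.to_torch(dtype=torch.float64)        # switch the QNode to the Torch interface
x = torch.tensor(0.3, requires_grad=True)
res = qnode(x)                             # expectation value as a Torch tensor
res.backward()                             # backpropagate through the QNode
print(x.grad)                              # gradient with respect to x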