qml.QNode

class QNode(func, device, interface='auto', diff_method='best', expansion_strategy='gradient', max_expansion=10, grad_on_execution='best', cache='auto', cachesize=10000, max_diff=1, device_vjp=False, **gradient_kwargs)[source]

Bases: object

Represents a quantum node in the hybrid computational graph.

A quantum node contains a quantum function (corresponding to a variational circuit) and the computational device it is executed on.

The QNode calls the quantum function to construct a QuantumTape instance representing the quantum circuit.

Parameters
  • func (callable) – a quantum function

  • device (Device) – a PennyLane-compatible device

  • interface (str) –

    The interface that will be used for classical backpropagation. This affects the types of objects that can be passed to/returned from the QNode. See qml.workflow.SUPPORTED_INTERFACES for a list of all accepted strings.

    • "autograd": Allows autograd to backpropagate through the QNode. The QNode accepts default Python types (floats, ints, lists, tuples, dicts) as well as NumPy array arguments, and returns NumPy arrays.

    • "torch": Allows PyTorch to backpropagate through the QNode. The QNode accepts and returns Torch tensors.

    • "tf": Allows TensorFlow in eager mode to backpropagate through the QNode. The QNode accepts and returns TensorFlow tf.Variable and tf.tensor objects.

    • "jax": Allows JAX to backpropagate through the QNode. The QNode accepts and returns JAX Array objects.

    • None: The QNode accepts default Python types (floats, ints, lists, tuples, dicts) as well as NumPy array arguments, and returns NumPy arrays. It does not connect to any machine learning library automatically for backpropagation.

    • "auto": The QNode automatically detects the interface from the input values of the quantum function.

  • diff_method (str or TransformDispatcher) –

    The method of differentiation to use in the created QNode. Can either be a TransformDispatcher, which includes all quantum gradient transforms in the qml.gradients module, or a string. The following strings are allowed:

    • "best": Best available method. Uses classical backpropagation or the device directly to compute the gradient if supported, otherwise will use the analytic parameter-shift rule where possible with finite-difference as a fallback.

    • "device": Queries the device directly for the gradient. Only allowed on devices that provide their own gradient computation.

    • "backprop": Use classical backpropagation. Only allowed on simulator devices that are classically end-to-end differentiable, for example default.qubit. Note that the returned QNode can only be used with the machine-learning framework supported by the device.

    • "adjoint": Uses an adjoint method that reverses through the circuit after a forward pass by iteratively applying the inverse (adjoint) gate. Only allowed on supported simulator devices such as default.qubit.

    • "parameter-shift": Use the analytic parameter-shift rule for all supported quantum operation arguments, with finite-difference as a fallback.

    • "hadamard": Use the analytic hadamard gradient test rule for all supported quantum operation arguments. More info is in the documentation qml.gradients.hadamard_grad.

    • "finite-diff": Uses numerical finite-differences for all quantum operation arguments.

    • "spsa": Uses a simultaneous perturbation of all operation arguments to approximate the derivative.

    • None: QNode cannot be differentiated. Works the same as interface=None.

  • expansion_strategy (str) –

    The strategy to use when circuit expansions or decompositions are required.

    • "gradient": The QNode will attempt to decompose the internal circuit such that all circuit operations are supported by the gradient method. Further decompositions required for device execution are performed by the device prior to circuit execution.

    • "device": The QNode will attempt to decompose the internal circuit such that all circuit operations are natively supported by the device.

    The "gradient" strategy typically reduces the number of quantum device evaluations required during optimization, at the expense of increased classical preprocessing.

  • max_expansion (int) – The number of times the internal circuit should be expanded when executed on a device. Expansion occurs when an operation or measurement is not supported, and results in a gate decomposition. If any operations in the decomposition remain unsupported by the device, another expansion occurs.

  • grad_on_execution (bool, str) – Whether the gradients should be computed during the forward execution or afterwards. Only applies if the device is queried for the gradient; gradient transform functions available in qml.gradients are only supported on the backward pass. The default, "best", chooses automatically between the two options.

  • cache="auto" (str or bool or dict or Cache) – Whether to cache evalulations. "auto" indicates to cache only when max_diff > 1. This can result in a reduction in quantum evaluations during higher order gradient computations. If True, a cache with corresponding cachesize is created for each batch execution. If False, no caching is used. You may also pass your own cache to be used; this can be any object that implements the special methods __getitem__(), __setitem__(), and __delitem__(), such as a dictionary.

  • cachesize (int) – The size of any auto-created caches. Only applies when cache=True.

  • max_diff (int) – If diff_method is a gradient transform, this option specifies the maximum number of derivatives to support. Increasing this value allows for higher order derivatives to be extracted, at the cost of additional (classical) computational overhead during the backwards pass.

  • device_vjp (bool) – Whether or not to use the device-provided vector-Jacobian product (VJP). A value of None indicates to use it if the device provides it, but to use the full Jacobian otherwise.

Keyword Arguments

**kwargs – Any additional keyword arguments provided are passed to the differentiation method. Please refer to the qml.gradients module for details on supported options for your chosen gradient transform.
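
For example, the differentiation method and its keyword arguments can be fixed when the QNode is created (a minimal sketch; h is the step size accepted by qml.gradients.finite_diff):

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="finite-diff", h=1e-6)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.Z(0))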

Example

QNodes can be created by decorating a quantum function:

>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.Z(0))

or by instantiating the class directly:

>>> def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.Z(0))
>>> dev = qml.device("default.qubit", wires=1)
>>> qnode = qml.QNode(circuit, dev)
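
The instantiated QNode is itself callable and executes the circuit on the device (a minimal sketch; the exact output formatting depends on the interface and PennyLane version):

>>> qnode(np.pi / 4)
0.7071067811865476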

QNodes can be executed simultaneously for multiple parameter settings, which is called parameter broadcasting or parameter batching. We start with a simple example and briefly look at the scenarios in which broadcasting is possible and useful. Finally, we give rules and conventions regarding the usage of broadcasting, together with some more complex examples. Also see the Operator documentation for implementation details.

Example

Again consider the following circuit:

>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     return qml.expval(qml.Z(0))

If we want to execute it at multiple values x, we may pass those as a one-dimensional array to the QNode:

>>> x = np.array([np.pi / 6, np.pi * 3 / 4, np.pi * 7 / 6])
>>> circuit(x)
tensor([ 0.8660254 , -0.70710678, -0.8660254 ], requires_grad=True)

The resulting array contains the QNode evaluations at the individual values:

>>> [circuit(x_val) for x_val in x]
[tensor(0.8660254, requires_grad=True),
 tensor(-0.70710678, requires_grad=True),
 tensor(-0.8660254, requires_grad=True)]

Not only are the results stacked into a single tensor, but the broadcasted execution is performed in a single simulation of the quantum circuit, rather than in three sequential simulations.
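
That only a single, broadcasted circuit is constructed can be verified by inspecting the tape of the last execution (a sketch; the tape stores the batch size in its batch_size attribute):

>>> _ = circuit(x)
>>> circuit.tape.batch_size
3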

Benefits & Supported QNodes

Parameter broadcasting can simplify the execution syntax of QNodes. More importantly, the simultaneous execution via broadcasting can be significantly faster than iterating over parameters manually. If we compare the execution time for the above QNode circuit between broadcasting and manual iteration for an input size of 100, we find a speedup factor of about 30. This speedup is a feature of classical simulators, but broadcasting may reduce the communication overhead for quantum hardware devices as well.
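
A minimal sketch of such a timing comparison, reusing the circuit from above (absolute timings and the exact speedup factor depend on the machine and PennyLane version):

import timeit

xs = np.linspace(0, 2 * np.pi, 100)  # 100 parameter settings

# One broadcasted execution versus 100 sequential executions:
t_broadcast = timeit.timeit(lambda: circuit(xs), number=10)
t_loop = timeit.timeit(lambda: [circuit(x_val) for x_val in xs], number=10)
print(t_loop / t_broadcast)  # speedup factor, roughly 30 on default.qubit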

A QNode supports broadcasting if every operator that receives broadcasted parameters supports it. (Operators that are used in the circuit but do not receive broadcasted inputs do not need to support broadcasting.) A list of supporting operators is available in supports_broadcasting. Whether broadcasting improves performance depends on whether the device is a classical simulator that natively supports it, which can be checked with the capabilities of the device:

>>> dev.capabilities()["supports_broadcasting"]
True

If a device does not natively support broadcasting, it will execute broadcasted QNode calls by expanding the input arguments into separate executions. That is, every device can execute QNodes with broadcasting, but only supporting devices will benefit from it.
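
This fallback amounts to splitting the broadcasted tape into one tape per batch entry, which can be emulated with the qml.transforms.broadcast_expand transform (a sketch, reusing circuit and x from above):

>>> _ = circuit(x)
>>> tapes, fn = qml.transforms.broadcast_expand(circuit.tape)
>>> len(tapes)
3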

Usage

The first example above is rather simple. Broadcasting is possible in more complex scenarios as well, for which it is useful to understand the concept in more detail. The following rules and conventions apply:

There is at most one broadcasting axis

A broadcasted input has exactly one more axis than the operator(s) receiving it would usually expect. For example, most operators expect a single scalar input, so the corresponding broadcasted input is a 1D array:

>>> x = np.array([1., 2., 3.])
>>> op = qml.RX(x, wires=0) # Additional axis of size 3.

An operator op that supports broadcasting indicates the expected number of axes (or dimensions) of each of its arguments in the attribute op.ndim_params, a tuple with one integer per argument of op. The batch size of a broadcasted operator is stored in op.batch_size:

>>> op.ndim_params # RX takes one scalar input.
(0,)
>>> op.batch_size # The broadcasting axis has size 3.
3

The broadcasting axis is always the leading axis of an argument passed to an operator:

>>> from scipy.stats import unitary_group
>>> U = np.stack([unitary_group.rvs(4) for _ in range(3)])
>>> U.shape # U stores three two-qubit unitaries, each of shape 4x4
(3, 4, 4)
>>> op = qml.QubitUnitary(U, wires=[0, 1])
>>> op.batch_size
3

Stacking multiple broadcasting axes is not supported.
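
For example, passing an argument with two extra axes to an operator that expects a scalar is rejected (a sketch; the exact error message depends on the PennyLane version):

>>> qml.RX(np.ones((2, 3)), wires=0)
Traceback (most recent call last):
  ...
ValueError: ...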

Multiple operators are broadcasted simultaneously

It is possible to broadcast multiple parameters simultaneously. In this case, the batch size of the broadcasting axes must match, and the parameters are combined like in Python’s zip function. Non-broadcasted parameters do not need to be augmented manually but can simply be used as one would in individual QNode executions:

import pennylane as qml
from pennylane import numpy as np
from scipy.stats import unitary_group

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def circuit(x, y, U):
    qml.QubitUnitary(U, wires=[0, 1, 2, 3])
    qml.RX(x, wires=0)
    qml.RY(y, wires=1)
    qml.RX(x, wires=2)
    qml.RY(y, wires=3)
    return qml.expval(qml.Z(0) @ qml.X(1) @ qml.Z(2) @ qml.Z(3))


x = np.array([0.4, 2.1, -1.3])
y = 2.71
U = np.stack([unitary_group.rvs(16) for _ in range(3)])

This circuit takes three arguments, and the first two are used twice each. x and U will lead to a batch size of 3 for the RX rotations and the multi-qubit unitary, respectively. The input y is a float value and will be used together with all three values in x and U. We obtain three output values:

>>> circuit(x, y, U)
tensor([-0.06939911,  0.26051235, -0.20361048], requires_grad=True)

This is equivalent to iterating over all broadcasted arguments using zip:

>>> [circuit(x_val, y, U_val) for x_val, U_val in zip(x, U)]
[tensor(-0.06939911, requires_grad=True),
 tensor(0.26051235, requires_grad=True),
 tensor(-0.20361048, requires_grad=True)]

In the same way it is possible to broadcast multiple arguments of a single operator, for example:

>>> qml.Rot.ndim_params # Rot takes three scalar arguments
(0, 0, 0)
>>> x = np.array([0.4, 2.3, -0.1]) # Broadcast the first argument with size 3
>>> y = 1.6 # Do not broadcast the second argument
>>> z = np.array([1.2, -0.5, 2.5]) # Broadcast the third argument with size 3
>>> op = qml.Rot(x, y, z, wires=0)
>>> op.batch_size
3

Broadcasting does not modify classical processing

Note that classical processing in QNodes happens before broadcasting is taken into account. This means that while operators always interpret the first axis as the broadcasting axis, QNodes do not necessarily do so:

@qml.qnode(dev)
def circuit_unpacking(x):
    qml.RX(x[0], wires=0)
    qml.RY(x[1], wires=1)
    qml.RZ(x[2], wires=1)
    return qml.expval(qml.Z(0) @ qml.X(1))

x = np.array([[1, 2], [3, 4], [5, 6]])

The prepared parameter x has shape (3, 2), corresponding to the three operations and a batch size of 2:

>>> circuit_unpacking(x)
tensor([0.02162852, 0.30239696], requires_grad=True)

If we were to iterate manually over the parameter settings, we would probably put the batching axis of x first. Parameter broadcasting does not behave this way because it does not modify the unpacking step within the QNode: x is unpacked first, and each unpacked element is expected to carry the broadcasted parameters for one operator. If we attempted to put the broadcasting axis of size 2 first, the indexing of x would fail in the RZ rotation within the QNode.
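
Equivalently, manual iteration corresponds to slicing the second (batch) axis of x, reproducing the outputs shown above:

>>> [circuit_unpacking(x[:, b]) for b in range(2)]
[tensor(0.02162852, requires_grad=True),
 tensor(0.30239696, requires_grad=True)]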

interface

The interface used by the QNode

qtape

The quantum tape

tape

The quantum tape

transform_program

The transform program used by the QNode.

__call__(*args, **kwargs)

Call self as a function.

add_transform(transform_container)

Add a transform (container) to the transform program.

best_method_str(device, interface)

Similar to get_best_method(), except it returns the ‘best’ differentiation method in human-readable format.

construct(args, kwargs)

Call the quantum function with a tape context, ensuring the operations get queued.

get_best_method(device, interface[, shots])

Returns the ‘best’ differentiation method for a particular device and interface combination.

get_gradient_fn(device, interface[, …])

Determine the best differentiation method, interface, and device for a requested device, interface, and diff method.

__call__(*args, **kwargs)[source]

Call self as a function.

add_transform(transform_container)[source]

Add a transform (container) to the transform program.

Warning

This is a developer-facing feature and is called when a transform is applied to a QNode.

static best_method_str(device, interface)[source]

Similar to get_best_method(), except it returns the ‘best’ differentiation method in human-readable format.

This method attempts to determine support for differentiation methods using the following order:

  • "device"

  • "backprop"

  • "parameter-shift"

  • "finite-diff"

The first differentiation method that is supported (going from top to bottom) will be returned. Note that the SPSA-based and Hadamard-based gradients are not included here.

This method is intended only for debugging purposes. Otherwise, get_best_method() should be used instead.

Parameters
  • device (Device) – PennyLane device

  • interface (str) – name of the requested interface

Returns

The gradient function to use in human-readable format.

Return type

str
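
For example (a sketch; the returned string depends on the device and interface):

>>> dev = qml.device("default.qubit", wires=1)
>>> qml.QNode.best_method_str(dev, "autograd")
'backprop'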

construct(args, kwargs)[source]

Call the quantum function with a tape context, ensuring the operations get queued.
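
A sketch of typical usage, reusing the qnode defined in the first example; the constructed tape is afterwards available as qnode.tape:

>>> qnode.construct((0.5,), {})
>>> qnode.tape.operations
[RX(0.5, wires=[0])]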

static get_best_method(device, interface, shots=None)[source]

Returns the ‘best’ differentiation method for a particular device and interface combination.

This method attempts to determine support for differentiation methods using the following order:

  • "device"

  • "backprop"

  • "parameter-shift"

  • "finite-diff"

The first differentiation method that is supported (going from top to bottom) will be returned. Note that the SPSA-based and Hadamard-based gradients are not included here.

Parameters
  • device (Device) – PennyLane device

  • interface (str) – name of the requested interface

Returns

Tuple containing the gradient_fn, gradient_kwargs, and the device to use when calling the execute function.

Return type

tuple[str or TransformDispatcher, dict, Device]

static get_gradient_fn(device, interface, diff_method='best', shots=None)[source]

Determine the best differentiation method, interface, and device for a requested device, interface, and diff method.

Parameters
  • device (Device) – PennyLane device

  • interface (str) – name of the requested interface

  • diff_method (str or TransformDispatcher) – The requested method of differentiation. If a string, allowed options are "best", "backprop", "adjoint", "device", "parameter-shift", "hadamard", "finite-diff", or "spsa". A gradient transform may also be passed here.

Returns

Tuple containing the gradient_fn, gradient_kwargs, and the device to use when calling the execute function.

Return type

tuple[str or TransformDispatcher, dict, Device]
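
A sketch of typical usage (the concrete gradient function and keyword arguments depend on the device, interface, and PennyLane version):

>>> gradient_fn, gradient_kwargs, device = qml.QNode.get_gradient_fn(
...     dev, "autograd", diff_method="parameter-shift"
... )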
