qml.interfaces.torch.ExecuteTapes

class ExecuteTapes(*args, **kwargs)[source]

Bases: torch.autograd.function.Function

The signature of this torch.autograd.Function is designed to work around Torch restrictions.

In particular, torch.autograd.Function:

  • Cannot accept keyword arguments. As a result, we pass a dictionary as the first argument kwargs. This dictionary must contain:

    • "tapes": the quantum tapes to batch evaluate

    • "device": the quantum device to use to evaluate the tapes

    • "execute_fn": the execution function to use on forward passes

    • "gradient_fn": the gradient transform function to use for backward passes

    • "gradient_kwargs": gradient keyword arguments to pass to the gradient function

    • "max_diff: the maximum order of derivatives to support

Further, note that the parameters argument is dependent on the tapes; this function should always be called with the parameters extracted directly from the tapes as follows:

>>> parameters = []
>>> [parameters.extend(t.get_parameters()) for t in tapes]
>>> kwargs = {"tapes": tapes, "device": device, "gradient_fn": gradient_fn, ...}
>>> ExecuteTapes.apply(kwargs, *parameters)

The private argument _n is used to track nesting of derivatives, for example if the nth-order derivative is requested. Do not set this argument unless you understand the consequences!
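
For concreteness, a slightly fuller version of the call above is sketched below. The gradient transform shown (qml.gradients.param_shift), the empty gradient_kwargs, and max_diff=1 are illustrative choices only, and tapes, dev, and execute_fn are assumed to be defined elsewhere as described in the key list above.

import pennylane as qml
from pennylane.interfaces.torch import ExecuteTapes

# Assumed available from the surrounding code: a list of quantum tapes,
# the device used to evaluate them, and a forward-pass execution function.
# tapes, dev, execute_fn = ...

kwargs = {
    "tapes": tapes,
    "device": dev,
    "execute_fn": execute_fn,
    "gradient_fn": qml.gradients.param_shift,  # gradient transform for backward passes
    "gradient_kwargs": {},                     # extra keyword arguments for gradient_fn
    "max_diff": 1,                             # maximum derivative order to support
}

# Extract the parameters directly from the tapes, in order
parameters = []
for t in tapes:
    parameters.extend(t.get_parameters())

results = ExecuteTapes.apply(kwargs, *parameters)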

dirty_tensors

is_traceable = False

materialize_grads

metadata

needs_input_grad

next_functions

non_differentiable

requires_grad

saved_tensors

saved_variables

to_save

apply()

static backward(ctx, *dy)[source]

Returns the vector-Jacobian product for the given parameter values p and output gradient dy.

static forward(ctx, kwargs, *parameters)[source]

Implements the forward pass batch tape evaluation.
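
Together, forward() and backward() follow the standard custom torch.autograd.Function recipe, with all non-tensor configuration riding along in the leading dictionary argument. The toy Function below is not PennyLane's implementation; it is only a minimal sketch of that pattern, using a hypothetical "scale" entry in place of the real tape-execution machinery.

import torch

class DictFirstArgFunction(torch.autograd.Function):
    """Toy sketch of the dict-as-first-argument pattern described above."""

    @staticmethod
    def forward(ctx, kwargs, *parameters):
        # Non-tensor configuration travels in the dict, since keyword
        # arguments cannot be passed to a torch.autograd.Function
        ctx.scale = kwargs["scale"]
        ctx.save_for_backward(*parameters)
        return tuple(ctx.scale * p for p in parameters)

    @staticmethod
    def backward(ctx, *dy):
        # Vector-Jacobian product: one gradient per tensor input,
        # plus None for the non-differentiable dict argument
        return (None,) + tuple(ctx.scale * g for g in dy)

x = torch.tensor(0.5, requires_grad=True)
(out,) = DictFirstArgFunction.apply({"scale": 2.0}, x)
out.backward()
print(x.grad)  # tensor(2.)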

mark_dirty(*args)

Marks given tensors as modified in an in-place operation.

This should be called at most once, only from inside the forward() method, and all arguments should be inputs.

Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification.
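
As a rough, generic illustration (not specific to ExecuteTapes), a Function that updates its input in place might look like this:

import torch

class AddOneInplace(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x):
        x.add_(1.0)        # modify the input tensor in place
        ctx.mark_dirty(x)  # declare the in-place modification to autograd
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # d(x + 1)/dx = 1, so the gradient passes through unchanged
        return grad_output

a = torch.tensor([1.0, 2.0], requires_grad=True)
x = a.clone()  # in-place modification of a leaf tensor is not allowed
y = AddOneInplace.apply(x)
y.sum().backward()
print(a.grad)  # tensor([1., 1.])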

mark_non_differentiable(*args)

Marks outputs as non-differentiable.

This should be called at most once, only from inside the forward() method, and all arguments should be outputs.

This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward(), but it will always be a zero tensor with the same shape as the corresponding output.

This is used e.g. for indices returned from a max Function.
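
For example, a generic Function (unrelated to ExecuteTapes) returning both differentiable values and non-differentiable integer indices might be written as:

import torch

class SortWithIndices(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x):
        sorted_x, idx = x.sort()
        ctx.mark_non_differentiable(idx)  # integer indices carry no gradient
        ctx.save_for_backward(idx)
        return sorted_x, idx

    @staticmethod
    def backward(ctx, grad_sorted, grad_idx):
        # grad_idx is always a zero tensor, but backward must still accept it
        (idx,) = ctx.saved_tensors
        grad_input = torch.zeros_like(grad_sorted)
        grad_input[idx] = grad_sorted  # scatter gradients back to input positions
        return grad_input

x = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
values, indices = SortWithIndices.apply(x)
values.sum().backward()
print(x.grad)  # tensor([1., 1., 1.])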

mark_shared_storage(*pairs)

name()

register_hook()
save_for_backward(*tensors)

Saves given tensors for a future call to backward().

This should be called at most once, and only from inside the forward() method.

Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t used in any in-place operation that modified their content.

Arguments can also be None.
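
A minimal generic example of the save/retrieve pattern:

import torch

class Square(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash tensors needed for the backward pass
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors  # retrieve the saved input
        return 2 * x * grad_output  # d(x**2)/dx = 2x

x = torch.tensor(3.0, requires_grad=True)
y = Square.apply(x)
y.backward()
print(x.grad)  # tensor(6.)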

set_materialize_grads(value)

Sets whether to materialize output grad tensors. Default is true.

This should be called only from inside the forward() method.

If true, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method.
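
The generic sketch below (not ExecuteTapes-specific) shows why backward() must check for None once materialization is turned off:

import torch

class TwoOutputs(torch.autograd.Function):

    @staticmethod
    def forward(ctx, x):
        ctx.set_materialize_grads(False)
        ctx.save_for_backward(x)
        return x.clone(), x.clone()

    @staticmethod
    def backward(ctx, g1, g2):
        # With materialization disabled, unused output grads arrive as None
        (x,) = ctx.saved_tensors
        grad = torch.zeros_like(x)
        if g1 is not None:
            grad = grad + g1
        if g2 is not None:
            grad = grad + g2
        return grad

a = torch.tensor(1.0, requires_grad=True)
b, _ = TwoOutputs.apply(a)  # the second output is never used
b.backward()                # so its output grad stays undefined (None)
print(a.grad)               # tensor(1.)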