Optimizers¶
Optimizers are objects that can be used to automatically update the parameters of a quantum or hybrid machine learning model. The optimizer you should use depends on your choice of classical interface (NumPy, PyTorch, or TensorFlow), and the optimizers are available from different access points.
Regardless of their origin, all optimizers provide the same core functionality, and PennyLane is fully compatible with all of them.
NumPy Interface¶
When using the standard NumPy interface, PennyLane offers some built-in optimizers. Some of these are specific to quantum optimization, such as the QNGOptimizer. A short usage example follows the list below.
AdagradOptimizer: Gradient-descent optimizer with past-gradient-dependent learning rate in each dimension.
AdamOptimizer: Gradient-descent optimizer with adaptive learning rate, first and second moment.
GradientDescentOptimizer: Basic gradient-descent optimizer.
MomentumOptimizer: Gradient-descent optimizer with momentum.
NesterovMomentumOptimizer: Gradient-descent optimizer with Nesterov momentum.
QNGOptimizer: Optimizer with adaptive learning rate, via calculation of the diagonal or block-diagonal approximation to the Fubini-Study metric tensor.
RMSPropOptimizer: Root mean squared propagation optimizer.
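For example, here is a minimal sketch of minimizing a QNode's expectation value with the built-in GradientDescentOptimizer; the device, circuit, initial parameters, and step size are illustrative choices rather than requirements of the optimizer API:

import pennylane as qml
from pennylane import numpy as np

# A toy single-qubit circuit whose expectation value serves as the cost
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.1)
params = np.array([0.011, 0.012], requires_grad=True)

for _ in range(100):
    # step() computes the gradient of the cost and returns updated parameters
    params = opt.step(circuit, params)

print(circuit(params))  # approaches -1 as the optimization converges

The other built-in optimizers listed above share the same step()-based interface, so they can be swapped in without changing the surrounding loop.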
PyTorch Interface¶
If you are using the PennyLane PyTorch interface, you should import one of the native PyTorch optimizers (found in torch.optim).
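As a minimal sketch, assuming a toy single-qubit QNode, the standard PyTorch training loop applies unchanged; torch.optim.Adam and the hyperparameters below are illustrative choices:

import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = torch.tensor([0.011, 0.012], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = circuit(params)  # the expectation value serves as the cost
    loss.backward()         # PennyLane supplies the quantum gradient
    opt.step()

print(circuit(params))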
TensorFlow Interface¶
When using the PennyLane TensorFlow interface, you will need to leverage one of the TensorFlow optimizers (found in tf.keras.optimizers).
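As a minimal sketch, assuming a toy single-qubit QNode, the usual GradientTape workflow applies; tf.keras.optimizers.Adam and the hyperparameters below are illustrative choices:

import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.011, 0.012])
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = circuit(params)  # the expectation value serves as the cost
    grads = tape.gradient(loss, [params])
    opt.apply_gradients(zip(grads, [params]))

print(circuit(params))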