```python
# SETUP (common to all tutorials)
import fermioniq
client = fermioniq.Client()  # Initialize the client (assumes environment variables are set)

import rich  # Module for pretty printing
import nest_asyncio  # Necessary only in Jupyter; can be omitted in .py files

nest_asyncio.apply()
```
# 7) VQE
In this section, we’ll explore how to set up and run a Variational Quantum Eigensolver (VQE) using the emulator. VQE is a hybrid quantum-classical algorithm designed to minimize the expectation value of an observable for a given parameterized quantum circuit by performing updates on the parameters.
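Independent of the emulator, the core idea can be illustrated with a toy one-qubit problem: minimize the expectation value ⟨ψ(θ)|Z|ψ(θ)⟩ for |ψ(θ)⟩ = RX(θ)|0⟩, which equals cos θ and reaches its minimum of −1 at θ = π. The sketch below (plain Python, finite-difference gradient descent; not the emulator's optimizer) shows the "update parameters to lower the expectation value" loop in miniature:

```python
import math

def expectation(theta: float) -> float:
    # <psi(theta)|Z|psi(theta)> for |psi(theta)> = RX(theta)|0> equals cos(theta)
    return math.cos(theta)

def vqe_toy(theta: float = 0.3, lr: float = 0.4, eps: float = 1e-6, steps: int = 200) -> float:
    # Classical loop: estimate the gradient (central finite difference)
    # and update the parameter to lower the expectation value
    for _ in range(steps):
        grad = (expectation(theta + eps) - expectation(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

theta_opt = vqe_toy()
print(round(expectation(theta_opt), 6))  # -1.0, reached at theta ≈ pi
```

In a real VQE run the expectation value comes from a quantum circuit (or, here, the emulator) rather than a closed-form cosine, but the classical optimization loop has the same shape.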
We can run VQE with the emulator in five steps:

1. Define the parameters to optimize with Qiskit's `Parameter` class. The names passed (`"parameter 1"` and `"parameter 2"` in the example below) are the names used in the output of the emulator.
2. Use these parameters in the construction of a Qiskit circuit. Parameters can be used multiple times and can also appear in linear expressions (e.g. `0.2 * p1 + 1`). Note that more intricate expressions, or combinations of multiple parameters in a single gate, are not supported.
3. Define a single observable to minimize (see tutorial 6).
4. Enable the optimizer in the config, and add the observable.
5. Submit a job with the defined circuit and config, and wait for the optimization to finish.
```python
from qiskit.circuit import Parameter
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp

from fermioniq.config.defaults import standard_config

# 1: Define the parameters
p1 = Parameter("parameter 1")
p2 = Parameter("parameter 2")

# 2: Build the parameterized circuit
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.rx(p1, 0)  # Use parameter 1 in the construction of the gate
circuit.rz(0.2 * p2, 1)  # Use parameter 2 in a linear expression

# 3: Define a single observable to minimize (as a sum of local observables)
observable = SparsePauliOp.from_sparse_list(
    [("ZX", [1, 0], -1), ("YY", [0, 1], 2)], num_qubits=circuit.num_qubits
)

# 4: Enable optimization in the config and add the observable
config = standard_config(circuit)
config["optimizer"] = {
    "enabled": True,
    "observable": {"-XZ + 2YY": observable},
}

# 5: Submit a job and wait for optimization
emulator_job = fermioniq.EmulatorJob(circuit=circuit, config=config)
result = client.schedule_and_wait(emulator_job)
```
## Results
The results show the observable expectation value ("Loss") and the fidelity of the emulation at each iteration. The fidelity serves as a measure of the reliability of the loss at each step. To extract the raw optimizer data, we can use the `optimizer_data()` method of the output object.
```python
# Full results table
rich.print(result)

# Extract optimizer data
print("Optimizer history:", result.optimizer_data(circuit_number=0, run_number=0))
```
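One common post-processing step is to pick the iteration with the lowest loss, weighted by how much the fidelity lets us trust it. The record layout below is a hypothetical illustration; the exact structure returned by `optimizer_data()` may differ between SDK versions, so inspect the returned object before adapting this:

```python
# Hypothetical per-iteration records; the exact structure returned by
# optimizer_data() may differ -- inspect your SDK's output first.
history = [
    {"iteration": 0, "loss": 1.92, "fidelity": 0.999},
    {"iteration": 1, "loss": 0.74, "fidelity": 0.998},
    {"iteration": 2, "loss": -1.31, "fidelity": 0.997},
]

# Keep only iterations whose loss estimate is reliable (high fidelity),
# then select the one with the lowest loss
reliable = [rec for rec in history if rec["fidelity"] > 0.99]
best = min(reliable, key=lambda rec: rec["loss"])
print(best["iteration"], best["loss"])  # → 2 -1.31
```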
## Config Settings
Here we give an overview of the different settings that can be specified in the `optimizer` section of the config.

- `enabled`: Set to `True` to use the optimizer. The default is `False`, in which case a normal emulation is performed with all parameters set to zero.
- `observable`: The observable whose expectation value is minimized using the VQE algorithm. Note that only a single observable can be specified.
- `evaluation_mode`: Specifies the evaluation method of the observable expectation value. Currently, the only option is `contract`, which uses exact MPS contraction for evaluation.
- `optimizer_name`: Specifies the classical optimizer used. Currently, only the SPSA optimizer is available.
- `optimizer_settings`: A dictionary containing the settings for the classical optimizer. See the options for SPSA in the next section.
- `initial_param_values`: Specifies the initial values of the parameters. If not provided, the optimizer initializes the parameters randomly.
- `initial_param_noise` (default 0.1): Controls the amount of random noise added to the initial parameter values.
## SPSA Settings

The settings that can be set when using SPSA are:

- `learning_rate` (default 5): The step size or learning rate for updating the parameters in each iteration. A higher value can lead to faster convergence but may risk overshooting the optimal values.
- `learning_rate_exponent` (default 0.6): Exponent applied to the learning rate decay in each iteration.
- `perturbation` (default 1e-2): The perturbation or finite difference used to approximate the gradient. A smaller perturbation provides a more accurate gradient estimate but may increase computational cost.
- `perturbation_exponent` (default 0.1): Exponent applied to the perturbation decay in each iteration.
- `eps` (default 1e-14): A small value to prevent division by zero in the gradient calculation, ensuring numerical stability during optimization.
- `max_iter` (default 100): The maximum number of iterations (updates to the parameters).
- `params_tol` (default 1e-8): Convergence tolerance for changes in the parameters. If the maximum change in parameters between iterations falls below this threshold, the optimization stops.
- `fun_tol` (default 1e-7): Convergence tolerance for changes in the cost function (here, the observable expectation value).
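To make the roles of these settings concrete, below is a minimal, self-contained SPSA sketch on a toy quadratic cost function. The decay schedules (step size `learning_rate / (k+1)**learning_rate_exponent`, perturbation `perturbation / (k+1)**perturbation_exponent`) follow the common SPSA convention and are an assumption here; they need not match Fermioniq's exact implementation, and the toy values below differ from the defaults listed above:

```python
import random

def spsa_minimize(f, theta, learning_rate=0.2, learning_rate_exponent=0.6,
                  perturbation=1e-2, perturbation_exponent=0.1,
                  eps=1e-14, max_iter=200, seed=7):
    # Minimal SPSA sketch mirroring the settings above (illustrative only,
    # not Fermioniq's implementation)
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(max_iter):
        a_k = learning_rate / (k + 1) ** learning_rate_exponent  # decaying step size
        c_k = perturbation / (k + 1) ** perturbation_exponent    # decaying perturbation
        delta = [rng.choice([-1.0, 1.0]) for _ in theta]         # random +-1 directions
        plus = [t + c_k * d for t, d in zip(theta, delta)]
        minus = [t - c_k * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)
        # A single gradient estimate for all parameters from just two
        # cost evaluations; eps guards against division by zero
        theta = [t - a_k * diff / (2 * c_k * d + eps) for t, d in zip(theta, delta)]
    return theta

# Toy cost function with its minimum at theta = (1, -2)
cost = lambda t: (t[0] - 1) ** 2 + (t[1] + 2) ** 2
result = spsa_minimize(cost, [0.0, 0.0])
print([round(x, 2) for x in result])
```

The two-evaluation gradient estimate is what makes SPSA attractive for VQE: the number of cost evaluations per iteration is independent of the number of parameters.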
Below is an example config for the parameterized circuit defined in the section above. If we want to measure the expectation values of any other observables using the circuit with optimized parameters, we can list them in the `output` section of the config, as we did in tutorial 6.
```python
from fermioniq.config.defaults import standard_config

config = standard_config(circuit)
config["optimizer"] = {
    "enabled": True,
    "evaluation_mode": "contract",
    "optimizer_name": "spsa",
    "optimizer_settings": {
        "learning_rate": 5,
        "fun_tol": 1e-8,
        "max_iter": 100,
    },
    # Put the observable to minimize in the optimizer config
    "observable": {"-XZ + 2YY": observable},
}

# (optional) Put any other observables we want to measure in the regular config
config["output"] = {
    "expectation_values": {
        "enabled": True,
        "observables": {"ZZ": SparsePauliOp("ZZ", 1.0)},
    }
}

rich.print(config)
```