Optimizing Qubit Layouts and Transpilation for Better Circuit Performance
Learn how qubit layout and transpilation choices reduce SWAPs, improve fidelity, and sharpen Qiskit/Cirq hardware performance.
If you want practical qubit programming results, the difference between a toy circuit and a useful one often comes down to layout and transpilation. On real hardware, your logical qubits must map onto a constrained physical topology, and every extra SWAP gate costs depth, time, and fidelity. That is why a solid quantum hardware guide should not stop at theory: it should show how to place qubits intelligently, how to guide the transpiler, and how to reduce the penalty from device connectivity. If you are just starting to learn quantum computing, this guide is designed to bridge the gap from circuit sketches to hardware-aware execution.
We will compare layout strategies in Qiskit and Cirq, show how topology-aware transpilation changes circuit shape, and explain how to choose mappings that lower SWAP overhead without overfitting to a single backend. You will also see how layout decisions interact with quantum error and decoherence, and why better compilation can improve final circuit fidelity even before you reach for quantum error mitigation. For teams building production-adjacent workflows, this is the same mindset discussed in From Qubit Theory to DevOps: treat hardware constraints as part of the software design, not a late-stage inconvenience.
1. Why qubit layout matters more than most beginners expect
Logical qubits are not physical qubits
In an ideal simulator, a two-qubit gate can act on any pair of qubits without penalty. Real devices are different: superconducting backends, ion traps, and other architectures impose connectivity graphs that limit which qubits can interact directly. When your algorithm assumes full connectivity, the compiler inserts routing operations to satisfy device constraints, and those operations often become the dominant source of depth inflation. That is why layout is not a cosmetic choice; it is a first-order performance decision.
For developers familiar with classical deployment constraints, this is similar to optimizing data locality in distributed systems. The same logic applies here: place frequently interacting logical qubits on physically adjacent hardware qubits to minimize communication overhead. If you are deciding between platforms or trying to understand how tools fit into a broader stack, our developer-focused quantum DevOps guide is a useful companion. Likewise, if you are exploring where to start experimenting safely, read Quantum Readiness for Developers for a practical entry point.
SWAP overhead is the hidden tax on fidelity
Each SWAP gate is typically decomposed into three CNOTs (or an equivalent number of native entangling gates), which means a single routing decision can triple the error exposure of that interaction. If your target hardware has modest two-qubit fidelity, that overhead can outweigh the algorithmic signal you were trying to preserve. The cost is not only in error rate; deeper circuits are also more vulnerable to calibration drift, queue-time variation, and decoherence. In short, routing is where theoretical elegance meets physical reality.
There is a useful lesson here from why cloud quantum jobs fail: many failures that look like random noise are actually predictable consequences of depth, topology, and calibration windows. When you reduce SWAPs, you reduce the number of opportunities for those failure modes to compound. That is one reason experienced practitioners often focus on circuit structure and mapping before they tune advanced mitigation techniques.
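To make that overhead concrete, here is a back-of-the-envelope sketch of how SWAP insertion compounds two-qubit error. The gate counts and the 0.99 fidelity figure are illustrative assumptions, and the independent-error model is a deliberate simplification, not a real noise model:

```python
# Rough estimate of how SWAP overhead compounds two-qubit error.
# Assumes each SWAP decomposes into 3 CNOTs and that gate errors
# are independent -- a simplification for illustration only.

def estimated_success(two_qubit_gates: int, gate_fidelity: float) -> float:
    """Probability that no two-qubit gate fails, assuming independence."""
    return gate_fidelity ** two_qubit_gates

algorithmic_cnots = 20
swaps_inserted = 8          # hypothetical routing cost on a sparse device
fidelity = 0.99             # typical-order two-qubit gate fidelity

without_swaps = estimated_success(algorithmic_cnots, fidelity)
with_swaps = estimated_success(algorithmic_cnots + 3 * swaps_inserted, fidelity)

print(f"no routing:   {without_swaps:.3f}")   # ~0.82
print(f"with routing: {with_swaps:.3f}")      # ~0.64
```

Eight inserted SWAPs (roughly 24 extra CNOTs) drop the naive success estimate from about 0.82 to about 0.64, which is why routing reduction is usually the first optimization worth pursuing.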
Good layout is algorithm-aware, not just hardware-aware
A naive mapping strategy might place qubits sequentially on the least-bad linear chain available. A better strategy asks which pairs interact repeatedly, which qubits carry the “hot” parts of the algorithm, and whether certain entangling patterns form subgraphs that should stay compact. For example, in chemistry workloads, the interaction graph may have clear clusters, while in QAOA-like circuits the problem graph itself becomes the topological guide. A layout heuristic should therefore respect both the backend’s connectivity and the algorithm’s communication pattern.
This is where practical examples matter. In a Qiskit tutorial for beginners, you might start with a Bell pair or GHZ state. In a real workload, however, the compiler must juggle parameterized layers, controlled rotations, and measurement placement. The best maps are the ones that preserve the circuit’s structural intent while minimizing the compiler’s need to “repair” it later.
2. Understanding backend topology before you transpile
Coupling maps, directionality, and gate sets
Most cloud devices expose a coupling map that defines which qubits can interact directly. But connectivity alone is not the full story: many devices also have directional constraints on two-qubit gates and a limited native basis. Your transpiler must therefore perform two tasks at once: route qubits through the connectivity graph and decompose abstract operations into native gates. Ignoring either side leads to inflated depth or suboptimal basis conversion.
For developers used to writing portable code, this is the quantum equivalent of targeting multiple CPU instruction sets. Your high-level program may be elegant, but backend-specific lowering determines performance. If you want a broader operational view of how quantum workloads fit into IT environments, see From Qubit Theory to DevOps. For a concise primer on why some jobs fail after submission, Quantum Error, Decoherence, and Why Your Cloud Job Failed explains the hidden variables.
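As a minimal illustration of these two constraints, the sketch below models a directed coupling map as a set of (control, target) edges. The four-qubit linear device and its edge directions are invented for the example:

```python
# Minimal directed coupling-map check. The edge set is a made-up
# 4-qubit linear device; real backends expose this via their API.

coupling_map = {(0, 1), (1, 2), (2, 3)}  # allowed control -> target directions

def can_apply_cx(control: int, target: int) -> str:
    """Classify a CX request against the directed coupling map."""
    if (control, target) in coupling_map:
        return "native"
    if (target, control) in coupling_map:
        return "reverse (needs basis conversion, e.g. extra Hadamards)"
    return "nonadjacent (needs routing)"

print(can_apply_cx(0, 1))  # native
print(can_apply_cx(1, 0))  # reverse direction: extra single-qubit gates
print(can_apply_cx(0, 3))  # nonadjacent: the router must insert SWAPs
```

The point of the sketch is that "can these qubits interact" has three answers, not two, and the middle case (wrong direction) still costs extra gates after lowering.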
Topology varies across device families
Not all hardware is optimized for the same kinds of circuits. Heavy-hex architectures favor sparse, structured entanglement graphs that can be excellent for certain workloads but awkward for others. Linear or near-linear layouts can perform well when the circuit interaction graph is similarly sparse, whereas dense all-to-all interaction patterns require aggressive routing. The important takeaway is that there is no universal “best” topology, only a topology that is best for your specific circuit family and backend.
This is why a good quantum hardware guide should teach you to inspect the backend before you run. In practice, that means checking qubit counts, basis gates, calibration data, and the coupling graph, not just picking the largest available processor. Think of it as choosing a Kubernetes cluster by node locality and network profile rather than just raw CPU count.
Read calibration data as part of the layout decision
Connectivity is necessary but not sufficient. Two physically adjacent qubits can still be a poor choice if one has a much higher error rate, shorter coherence time, or unstable readout calibration. Strong layout decisions therefore combine graph structure with quality metrics, especially for workloads that involve repeated entangling layers. If your compiler offers it, use backend properties to bias placement away from known weak spots.
This is the same mindset behind effective operational checklists in adjacent technical domains: gather data, compare options, and avoid assuming that the most obvious choice is the best one. For a practical example of building reliability into process design, our article on quantum workloads for IT teams is especially relevant. And if you are currently comparing cloud environments, that workflow mirrors the experimentation advice in Where to Start Experimenting Today.
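One lightweight way to fold calibration data into placement is to filter candidate edges by two-qubit error before choosing a layout. The error rates below are invented; on a real backend they would come from the provider's calibration snapshot:

```python
# Combine adjacency with calibration quality: keep only physical edges
# whose two-qubit error is below a threshold. All numbers are invented.

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
cx_error = {(0, 1): 0.008, (1, 2): 0.021, (2, 3): 0.009, (3, 4): 0.012}

threshold = 0.015
good_edges = [e for e in edges if cx_error[e] < threshold]

print(good_edges)   # edge (1, 2) is dropped as a known weak spot
```

Note that filtering edge (1, 2) effectively splits this linear chain in two, which is exactly the kind of trade-off (quality versus connectivity) a layout heuristic has to weigh.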
3. Layout heuristics that actually reduce SWAP overhead
Start with interaction graphs, not qubit indices
The most effective layout heuristic is often simple: build an interaction graph from the circuit and try to preserve frequently interacting pairs as adjacency pairs on hardware. If the circuit has repeated CNOTs between the same logical qubits, those qubits should be placed near each other if possible. This is especially important for QAOA, variational circuits, and structured algorithms where the entanglement pattern is known in advance. Even when the transpiler can route around bad mappings, giving it a good starting point usually improves the final result.
This approach is often more useful than trying to micromanage every gate. You are optimizing the circuit's communication pattern, not its gate-by-gate transcript. If you need a refresher on how circuit structure affects results, pair this section with our failure analysis guide, which shows how a few extra routing operations can push an otherwise valid job into poor-fidelity territory.
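A sketch of building that interaction graph from a flat gate list follows. The tuple-based circuit representation is a stand-in for whatever intermediate form your framework exposes:

```python
from collections import Counter

# Count how often each logical qubit pair shares a two-qubit gate.
# The (gate, qubits) tuples are a toy circuit representation.

circuit = [
    ("h", (0,)),
    ("cx", (0, 3)), ("cx", (1, 2)),
    ("cx", (0, 3)), ("cx", (1, 2)),
    ("cx", (2, 3)),
]

interactions = Counter(
    tuple(sorted(qubits)) for gate, qubits in circuit if len(qubits) == 2
)

# Hottest pairs first: these are the ones to keep adjacent on hardware.
for pair, count in interactions.most_common():
    print(pair, count)
```

Pairs (0, 3) and (1, 2) dominate here, so a good initial layout would place each of those logical pairs on adjacent physical qubits before routing ever runs.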
Use greedy placement for small circuits, heuristic search for larger ones
For small circuits, greedy mapping can be enough: place the highest-degree logical qubits onto the best-connected physical nodes first, then extend outward. For larger circuits, heuristic search methods such as score-based initial layout selection, lookahead routing, or stochastic optimization often produce better outcomes. The reason is that the routing cost of one placement choice may not appear until several layers later, so local decisions can be misleading.
In practical terms, your algorithm should be aware of future entangling layers. That may mean estimating gate frequency by pair, clustering by interaction strength, or simulating several candidate mappings and choosing the one with the lowest estimated cost. If you want to think about this through the lens of engineering trade-offs, the same “optimize for the whole pipeline” mindset appears in qubit-to-DevOps planning.
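A minimal greedy placement, under the assumption that interaction weight and physical connectivity degree are the only signals, might look like this (all numbers are hypothetical):

```python
# Greedy initial placement sketch: the busiest logical qubits claim the
# best-connected physical qubits first. Weights and degrees are invented;
# on a real device, degree would come from the coupling map.

def greedy_layout(logical_weight, physical_degree):
    """Map logical -> physical, heaviest logical onto best-connected physical.
    Python's sort is stable, so ties keep their original insertion order."""
    logical_order = sorted(logical_weight, key=logical_weight.get, reverse=True)
    physical_order = sorted(physical_degree, key=physical_degree.get, reverse=True)
    return dict(zip(logical_order, physical_order))

logical_weight = {0: 1, 1: 3, 2: 4, 3: 2}    # two-qubit interactions per qubit
physical_degree = {0: 1, 1: 2, 2: 2, 3: 1}   # linear chain 0-1-2-3

layout = greedy_layout(logical_weight, physical_degree)
print(layout)   # busiest logical qubits (2 and 1) land on the degree-2 nodes
```

This is deliberately local: it ignores which logical qubits interact with each other, which is exactly the blind spot that lookahead and stochastic methods address on larger circuits.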
Exploit symmetry and circuit structure when present
Certain circuits have symmetries that a compiler can exploit. For example, repeated blocks or mirror-like structures can reduce the search space for layout and may make it easier to preserve locality across layers. If the entanglement pattern repeats, a stable mapping may be better than aggressive remapping between layers because it avoids repeated routing churn. The key is to respect the circuit’s recurring structure rather than treating every layer as independent.
That principle matters in practical quantum programming because one “good” layer may still be part of a bad overall compilation if it forces future layers to pay a routing penalty. A robust Qiskit tutorial should show you how to inspect compiled depth and two-qubit counts, not just whether the circuit ran. In other words, success is not binary; it is measured in performance deltas.
4. Qiskit transpilation strategy: how to guide the compiler
Choose an initial layout intentionally
In Qiskit, the initial layout determines how logical qubits are assigned to physical qubits before routing begins. This is one of the highest-leverage choices you can make because it affects every later pass. When possible, provide a custom layout for circuits with known entanglement hotspots rather than leaving placement entirely to the transpiler. Even a modestly informed layout can outperform a generic one if it keeps the busiest qubits close together.
A simple workflow is to identify the pairs with the most two-qubit interactions and place them on the most connected or best-calibrated physical qubits. Then compare the compiled output using metrics such as depth, two-qubit count, and estimated success probability. If you are building your first workflow, combine this section with our developer readiness guide for a broader tool-selection context.
Control pass manager behavior, not just the preset level
Qiskit’s preset pass managers are helpful, but for performance-sensitive work you often need more control. You may want to change the optimization level, inject custom passes, or inspect the pass pipeline to see where layout and routing decisions are being made. Presets are convenient starting points; they are not always the best choice for a specific backend or workload. A high optimization level may reduce gate count in one scenario and increase compile time or destabilize a beneficial structure in another.
There is no substitute for measuring the compiled output. Compare the untranspiled and transpiled circuits, and verify whether the transpiler introduced the right amount of routing. If you want a practical understanding of what can go wrong after submission, revisit Why Your Cloud Job Failed and use it as a diagnostic companion to this section.
Measure the right metrics after compilation
Depth alone does not tell the whole story. Two circuits with the same depth can have very different fidelity depending on the number of entangling gates, idle periods, and readout assignments. In a real benchmark, track at least logical depth, two-qubit gate count, SWAP count, basis-gate count, and if available, estimated error based on backend properties. This gives you a more meaningful picture of whether the transpilation actually improved the job.
A practical habit is to keep a small benchmark suite of circuits you run repeatedly on the same backend family. That is the quantum equivalent of a performance regression test. For teams setting up that kind of repeatable workflow, the operational framing in From Qubit Theory to DevOps is a good reference point.
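One way to combine those metrics is a simple weighted score. The weights and candidate numbers below are illustrative; in practice you would derive them from the backend's reported error rates:

```python
# Compare two compiled candidates on more than depth alone. The error
# weights and gate counts are illustrative, not real backend data.

def score(ops, err_2q=0.01, err_swap=0.03):
    """Lower is better: weighted two-qubit exposure, SWAPs counted triple."""
    return ops.get("cx", 0) * err_2q + ops.get("swap", 0) * err_swap

candidate_a = {"cx": 20, "swap": 6, "depth": 38}
candidate_b = {"cx": 24, "swap": 1, "depth": 41}   # deeper, but fewer SWAPs

print("A:", round(score(candidate_a), 3))   # ~0.38
print("B:", round(score(candidate_b), 3))   # ~0.27, so B wins despite depth
```

The deeper candidate wins here because its error-weighted two-qubit exposure is lower, which is the kind of conclusion a depth-only comparison would get backwards.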
5. Cirq transpilation strategy: moments, device constraints, and custom routing
Understand how Cirq treats compilation and devices
In Cirq, compilation often revolves around device constraints, moment structure, and explicit transformation passes. Rather than assuming a universal compiler preset will do all the work, you typically specify how your circuit should conform to the target device model. This can be powerful when you want clearer control over moment grouping, gate decomposition, and insertion of routing operations. For developers comparing tooling, the Cirq vs Qiskit question usually comes down to workflow preference and how much you want to shape the compile process directly.
Cirq’s explicitness can make topology issues easier to reason about because the device constraints are visible in the code path. That does not remove the complexity, but it can make debugging easier when a circuit refuses to compile as expected. If you are working through your first practical circuits, keep the perspective from IT teams entering quantum workloads in mind: implementation discipline matters as much as algorithm choice.
Use device-aware optimization passes sparingly and deliberately
Cirq allows you to define or apply compilation passes that adapt your circuit to a device model. That is useful when you need precise control, but it also means you can accidentally overfit the circuit to one backend. The goal is not to make a circuit look pretty; the goal is to preserve functionality while minimizing the cost of moving it onto hardware. Good compilation is not about maximizing transformation, but about minimizing the loss induced by transformation.
A useful working approach is to compare a default compilation path against a custom one and verify whether the custom pass actually lowers the estimated routing burden. If not, the extra complexity is probably not worth it. When in doubt, use the same discipline you would apply when assessing failure causes in cloud quantum job troubleshooting.
Build small device-specific examples before scaling up
It is tempting to jump straight into larger algorithms, but layout strategies are much easier to validate on small circuits. Build a Bell pair, a three-qubit entangler, or a simple variational layer and compare how different devices and compilation paths change the output. These small examples make it easier to see whether a layout heuristic is genuinely helping or merely rearranging gates in a way that looks sophisticated. Once you trust the pattern, you can scale the same technique to larger workloads.
This “start small, then scale” model mirrors the advice in Quantum Readiness for Developers. It is especially valuable for teams that need to move quickly without burning compute budget on avoidable compilation inefficiencies.
6. Concrete example: reducing SWAPs in a simple entangling circuit
Example scenario and what goes wrong by default
Suppose you have a four-qubit circuit where qubits 0 and 3 interact repeatedly, and qubits 1 and 2 interact repeatedly, but the backend coupling map is linear. A naive layout that maps logical qubits in order often forces multiple SWAPs to bring the end qubits together for every interaction. The result is a circuit that is functionally correct but operationally expensive. On noisy hardware, that extra routing can be the difference between a useful output distribution and something indistinguishable from noise.
That is why the compiled circuit matters as much as the algorithm itself. If your circuit is a benchmark or a portfolio project, this is a great way to demonstrate real-world quantum programming skill. A clear before-and-after comparison pairs well with practical project advice from how to turn a project into a portfolio piece, even though the domain is different: the lesson is to show measurable improvement, not just code.
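You can estimate the routing penalty of this scenario before touching a compiler. The sketch below uses a deliberately coarse model in which each nonadjacent interaction costs (distance - 1) SWAPs on the linear chain; real routers usually do better by reusing moves across layers:

```python
# Coarse routing-cost estimate for a layout on a linear chain.
# interactions: {(logical_a, logical_b): repetition count}
# layout: logical qubit -> position on the chain

def routing_cost(interactions, layout):
    return sum(
        (abs(layout[a] - layout[b]) - 1) * n
        for (a, b), n in interactions.items()
    )

interactions = {(0, 3): 3, (1, 2): 3}
naive = {0: 0, 1: 1, 2: 2, 3: 3}        # logical i on chain position i
informed = {0: 0, 3: 1, 1: 2, 2: 3}     # hot pairs placed adjacent

print("naive:", routing_cost(interactions, naive))        # 6 estimated SWAPs
print("informed:", routing_cost(interactions, informed))  # 0
```

Under this model, the naive ordering pays an estimated six SWAPs while the informed placement pays none, which matches the qualitative story: same algorithm, very different compiled cost.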
How to improve the mapping in Qiskit
In Qiskit, you would first inspect the circuit’s interaction pattern and the backend coupling map. Then you can propose a custom initial layout that places the heavily interacting logical qubits on adjacent physical qubits, ideally ones with lower error rates. After that, run transpilation with multiple optimization levels and compare the resulting two-qubit counts and depth. If the custom layout reduces SWAPs, it should also shorten the circuit and improve expected fidelity.
You should also watch for trade-offs. A layout that minimizes SWAPs may place one qubit onto a slightly noisier physical location, so you need to compare the full compiled profile rather than a single number. For this reason, serious practitioners often run several candidate layouts and choose the one with the best combined metric, not merely the lowest SWAP count. That discipline aligns with the broader engineering mindset described in From Qubit Theory to DevOps.
How to validate the result in Cirq
In Cirq, you can validate the same idea by comparing the circuit before and after device compilation and checking whether the routed version preserves locality more effectively. Because Cirq often makes compilation steps more explicit, it can be easier to inspect how moment structure changes when routing is introduced. If the final circuit contains fewer nonlocal interactions or shorter routed paths, that is a strong sign the strategy worked. The underlying principle is unchanged: keep the algorithm’s interaction graph as close as possible to the hardware graph.
Use this as a repeatable test case, not just a one-off demo. When you can demonstrate that a specific layout reduces routing on two different tooling stacks, you are no longer guessing. You are validating a hardware-aware heuristic.
7. When transpilation helps, and when it hurts
Optimization can overfit to one run
There is a point at which aggressive optimization becomes counterproductive. A transpiler can sometimes increase compile time, obscure the logical structure of a circuit, or exploit a backend property that changes by the time your job executes. That is why you should not blindly assume that the deepest optimization preset is always the best one. The right choice depends on queue times, calibration stability, and how sensitive your algorithm is to gate decomposition.
This is especially relevant for cloud workflows where backend conditions change often. If you want to understand the downstream risks, the troubleshooting mindset in Quantum Error, Decoherence, and Why Your Cloud Job Failed is essential reading. It helps you distinguish between a bad circuit and a good circuit executed under bad conditions.
Not every circuit benefits equally from aggressive routing
Some algorithms naturally map well onto device topology. Others, particularly dense circuits with many long-range interactions, may suffer from routing overhead no matter what you do. In those cases, it can be better to choose a different backend, decompose the workload into smaller subcircuits, or rethink the algorithmic formulation. Layout is powerful, but it cannot rescue a fundamentally mismatched workload by itself.
That is why developers should compare mapping strategy and hardware choice together. A good quantum hardware guide will encourage you to evaluate whether the device’s connectivity actually fits your circuit family before you invest effort in hand-tuning compilation.
Error mitigation should complement, not replace, layout discipline
Once you have reduced unnecessary routing, you may still benefit from measurement mitigation, zero-noise extrapolation, or post-processing techniques. But mitigation should be applied after you have removed avoidable compilation cost, not instead of it. There is no point compensating for excessive SWAPs with increasingly elaborate mitigation if the underlying compiled circuit is already too deep. The first win is always to make the circuit as hardware-friendly as possible.
That division of labor matters in practical quantum engineering. Build the best layout you can, then layer in quantum error mitigation where it adds the most value. This staged approach keeps the workflow easier to debug and often produces more stable results in practice.
8. Best practices for production-style quantum workflows
Keep a backend-aware benchmark harness
If you regularly experiment with quantum circuits, create a benchmark harness that records backend, calibration date, layout strategy, transpilation level, depth, two-qubit count, and output quality. This lets you compare results over time and identify whether a mapping strategy is genuinely improving fidelity or just appearing to do so on a single run. In fast-moving environments, reproducibility is the difference between a useful engineering workflow and a collection of anecdotes. For teams building repeatable processes, the operational framing in From Qubit Theory to DevOps is highly relevant.
Benchmarking also helps you avoid false confidence. A layout that works on one backend family may underperform on another due to differences in coupling graph, native gate set, or error distribution. Keep the harness simple, but make it consistent.
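A harness entry can be as simple as a dictionary with a fixed schema. The field names and values below are placeholders; what matters is recording the same fields for every run:

```python
import json
from datetime import date

# Minimal benchmark-harness record, as a sketch. The backend name and
# metric values are placeholders; consistency across runs is the point.

def make_record(backend, layout_strategy, opt_level, depth, twoq, swaps):
    return {
        "date": date.today().isoformat(),
        "backend": backend,
        "layout_strategy": layout_strategy,
        "optimization_level": opt_level,
        "depth": depth,
        "two_qubit_count": twoq,
        "swap_count": swaps,
    }

record = make_record("example_backend", "custom_initial_layout", 1, 38, 20, 2)
print(json.dumps(record, indent=2))
```

Appending one such record per run to a JSON-lines file is enough to answer, weeks later, whether a layout strategy actually improved results or merely coincided with a good calibration day.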
Treat layout as part of design, not post-processing
Too many teams build the logical circuit first and worry about hardware mapping later. That approach works for demonstrations, but not for performance-sensitive work. If you already know the target backend family, design the circuit with its topology in mind from the beginning. This can change the order of operations, the choice of entangling pattern, and even whether a given algorithm formulation is sensible on the device.
That is also why clear docs matter. The developer journey is easier when there is a practical ramp from first principles to deployment, which is exactly the kind of structure offered in Quantum Readiness for Developers and the adjacent DevOps-focused guidance.
Document the assumptions behind your heuristic
Whenever you use a custom layout or a heuristic compilation pass, record why it was chosen. Was the circuit interaction graph sparse? Were certain qubits avoided because of calibration concerns? Did the chosen layout reduce SWAPs at the expense of a slightly noisier readout qubit? Documentation makes your results explainable and reusable, which matters if you are building a portfolio or collaborating in a team. It also helps future you understand whether the heuristic still makes sense when the backend changes.
This is the same spirit as writing good postmortems and clear operational notes. If your workflow ever behaves unexpectedly, the diagnostic patterns in quantum job failure analysis will be much easier to apply if you captured assumptions up front.
9. Qiskit vs Cirq: how to choose for topology-aware work
Choose based on control surface and ecosystem fit
For many developers, the decision between Qiskit and Cirq comes down to how much control they want over compilation versus how much ecosystem support they need. Qiskit offers a broad toolkit, strong cloud integrations, and a mature transpilation stack that is especially helpful when you want to compare preset optimization levels or backend-specific passes. Cirq often feels more explicit and flexible for users who want direct visibility into circuits and device constraints. Both can support topology-aware workflows effectively, but they reward slightly different styles of engineering.
If you are comparing the two in your own learning path, use a practical lens: which tool lets you inspect the compiled output more clearly, reproduce benchmarks consistently, and integrate with your target provider? That decision is much easier when informed by a broader perspective on Cirq vs Qiskit and the hardware realities discussed in From Qubit Theory to DevOps.
Think in terms of workflow maturity
For experimentation and learning, either framework can be excellent. For consistent hardware-aware performance analysis, the best choice is the one that gives you repeatable control over layout, compilation, and backend selection. Teams often start in one stack and borrow concepts from the other once they need finer-grained control. What matters most is not loyalty to a framework, but whether the stack lets you reduce SWAP overhead and preserve circuit fidelity.
That pragmatic approach is especially important if your goal is to build a portfolio of quantum circuit examples that demonstrate real competence. It is better to show a clearly measured performance improvement in one stack than to scatter shallow demos across several tools.
Use the tool that best exposes your bottlenecks
In Qiskit, the transpiler and pass manager make it straightforward to study the effect of different optimization levels. In Cirq, the explicit device and compilation model can make layout constraints feel more transparent. If your bottleneck is uncertainty about how a pass changes a circuit, choose the tool that shows you that change more clearly. The best framework is the one that helps you learn faster and produce better circuits with fewer surprises.
That is the most practical answer to the Cirq vs Qiskit debate. Try both, compare the compiled outputs, and let the topology-aware metrics decide.
10. A comparison table of layout and transpilation choices
| Strategy | Best For | Pros | Risks | Typical Use Case |
|---|---|---|---|---|
| Default transpiler settings | Quick experiments | Fast to use, low setup overhead | Can insert avoidable SWAPs | First-pass validation |
| Custom initial layout | Known interaction patterns | Can reduce routing and depth | Requires circuit analysis | QAOA, repeated entanglement layers |
| Greedy placement heuristic | Small-to-medium circuits | Simple, interpretable | May miss global optimum | Proof of concept, benchmarks |
| Heuristic search / lookahead | Complex circuits | Better global routing decisions | Higher compile time | Performance-sensitive workloads |
| Backend-calibration-aware mapping | Noisy hardware runs | Can improve practical fidelity | Backend conditions change rapidly | Cloud hardware submissions |
This table is intentionally practical rather than theoretical. Your job is not to choose the most sophisticated method; it is to choose the method that gives the best net result for your circuit and backend. In many cases, a simple layout improvement beats a clever but fragile optimization. That is why performance measurement should always accompany compilation changes.
11. Frequently asked questions
How do I know if my layout is good?
Start by comparing the number of SWAPs, two-qubit gates, and overall depth after transpilation. A good layout usually lowers routing overhead while preserving or improving estimated fidelity. If the circuit has known interaction hotspots, those qubits should remain physically close on the backend. You should also check whether the layout avoids the worst-calibrated qubits.
Should I always use the highest optimization level in Qiskit?
Not always. Higher optimization levels can reduce depth, but they may also increase compile time or transform the circuit in ways that are not ideal for your specific backend. The best practice is to compare multiple optimization levels and inspect the compiled circuit metrics. Use the level that gives the best overall balance of fidelity, depth, and stability.
Is Cirq better than Qiskit for topology-aware compilation?
Neither is universally better. Qiskit is often preferred for its broad transpilation ecosystem and cloud support, while Cirq can be appealing for its explicit device model and circuit visibility. The better choice depends on how much control you want, which backend you target, and how you prefer to reason about compilation. If you are comparing them, benchmark the same circuit in both.
Can layout alone fix a noisy circuit?
No. Layout can reduce unnecessary SWAPs and improve locality, but it cannot eliminate hardware noise or compensate for a poor algorithmic fit. It should be combined with backend selection, calibration awareness, and where appropriate, error mitigation. Think of layout as the first optimization layer, not the last.
What is the best first project to practice these ideas?
A simple entangling circuit, a Bell-state benchmark, or a small variational circuit is ideal. These examples are simple enough to inspect manually but still show the effect of routing and topology. If you document the before-and-after metrics, the project can also become a strong portfolio artifact. That is the kind of demonstrable improvement that turns a basic exercise into evidence of skill.
12. Final checklist for better circuit performance
Before transpiling
Inspect the backend topology, calibration snapshot, basis gates, and readout quality. Identify the circuit’s most frequent two-qubit interactions and decide whether they cluster naturally. If possible, choose a backend whose connectivity matches the algorithm’s communication pattern rather than forcing the circuit onto an awkward graph. This reduces the amount of work the transpiler has to do in the first place.
During compilation
Try more than one initial layout and compare routing outcomes. In Qiskit, inspect the pass manager and optimization level; in Cirq, check whether the device compilation path preserves useful structure. Do not focus solely on gate count; track SWAPs, depth, and the placement of noisy qubits. The best compile is the one that keeps the circuit short and physically plausible.
After compilation
Benchmark the result against a baseline. If the new mapping does not improve depth or estimated fidelity, revise the heuristic or change the backend choice. Then layer in quantum error mitigation only after you have removed avoidable routing cost. This sequence gives you a cleaner, more reproducible workflow and a better chance of producing meaningful hardware results.
For a broader view of how these choices fit into real-world engineering practice, revisit From Qubit Theory to DevOps and Quantum Error, Decoherence, and Why Your Cloud Job Failed. Together they reinforce the main lesson of this guide: in quantum computing, topology is not a footnote, it is part of the algorithm.
Pro Tip: If you are choosing between two layouts with similar SWAP counts, prefer the one that places the most error-sensitive logical qubits on the most reliable physical qubits. A small reduction in readout or two-qubit error often matters more than a tiny depth improvement.
Related Reading
- Quantum Readiness for Developers: Where to Start Experimenting Today - A practical starting point for emulators, tools, and small-scale workflows.
- From Qubit Theory to DevOps: What IT Teams Need to Know Before Touching Quantum Workloads - Learn how to operationalize quantum experiments inside real teams.
- Quantum Error, Decoherence, and Why Your Cloud Job Failed - Troubleshoot the most common hardware-side causes of poor results.
- How to Turn a Statistics Project into a Freelance or Internship Portfolio Piece - A useful model for presenting measurable technical work.
Daniel Mercer
Senior Quantum Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.