This question has a very involved answer, because it’s at least three questions in one. What part of a classical computer, exactly, does the “quantum” replace? How do you measure the performance of a quantum computer, to prove it works better than a classical one and is worth the excitement? I would also have to explain the concepts in quantum mechanics that find their application in quantum computers, since quantum mechanics is confusing, and many popular explanations you’ll hear are either too simplified or actually wrong. You’re here because you want the right answer, so I will try to cover all of my bases and give you the most complete answer I can. I will use simple words, but I will also use many words.
Let’s start with the first question: what are the parts of a standard computer, and what does a quantum computer actually add or replace? There are the monitor, keyboard, and mouse, but these are fine as they are. We could make the internet connection better – there’s a branch of quantum information science dedicated to making communication un-hackable, or at least to making it impossible for intruders to hide their tracks. But what makes something a quantum computer is the CPU and the RAM – that is, the calculator and short-term memory of the computer.
Taking advantage of quantum effects can make your computer run more quickly – but only for specific problems (about quantum mechanics, usually, or chemistry, or number theory). You will not see quantum CPUs and RAM in your cell phone or even your laptop, simply because most non-scientists don’t need to solve those kinds of problems.
Once you’ve successfully installed a quantum CPU and RAM, how would you measure whether it’s better? You would feed it a complicated program and time how quickly the computer can execute it. Maybe you could then feed it the same program, but with larger and larger inputs, then look at the shape of the resulting graph: time to complete vs. input size (which computer scientists like to call n). This is the computer science concept of “time complexity.” Simple programs, like “print out a line of "n" letters,” take, say, "n" microseconds to execute, so the time complexity is said to be "O(n)", since the time vs. input size graph is roughly a straight line. Intermediately difficult programs, like “sort these n words alphabetically,” take about "n x n" microseconds (or "n x log(n)" if the algorithm is clever). Truly awful programs, like “guess a password "n" letters long,” have a time complexity of "O(26^n)", which for a 10-letter password would take about four and a half years at a million guesses per second. A quantum computing algorithm called Grover’s Search Algorithm can solve that third problem in about 10 seconds at the same guessing rate, if set up correctly. However, you can sort of see why the first problem is already handled fine by classical computers: there’s no conceivable way to speed up simple programs like that, and they make up the bulk of the calculations done by standard processors. The only programs that need substantial speed-up are for breaking codes, modeling large organic molecules like proteins and DNA, and other similarly niche problems. This is why I say most consumer electronics won’t get a “quantum” upgrade in the near future.
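The arithmetic behind the password example can be sketched in a few lines. This assumes the numbers above (26 lowercase letters, a 10-letter password, a million guesses per second) and uses the key fact about Grover’s algorithm: it needs only about the square root of the number of combinations.

```python
# Rough arithmetic for the password example above (illustrative numbers only).
import math

n_letters = 10            # password length
alphabet = 26             # lowercase letters only
guesses_per_second = 1e6  # assumed classical guessing speed

combinations = alphabet ** n_letters          # 26^10 possibilities
classical_seconds = combinations / guesses_per_second
# Grover's algorithm needs roughly sqrt(N) "guesses" instead of N:
grover_seconds = math.sqrt(combinations) / guesses_per_second

print(f"classical: ~{classical_seconds / (3600 * 24 * 365):.1f} years")
print(f"Grover:    ~{grover_seconds:.0f} seconds")
```

The square root is the whole story here: 26^10 is about 1.4 × 10^14, but its square root is only about 12 million, which at a million guesses per second is a matter of seconds.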
This last part of the question is the most difficult to explain: what is the quantum mechanical explanation behind quantum computers? Quantum mechanics deals in probabilities, but in a way that involves negative and even complex numbers. The basic math is actually not especially difficult to do, but what makes quantum mechanics hard to explain is the interpretation of what the math says. The image of Schrodinger’s cat, for example, is often misinterpreted and misunderstood, and unfortunately the misunderstood aspects of it have come to dominate the popular imagination.
Quantum mechanics is actually more intuitive to understand in terms of waves. Microscopic particles can be described as waves with a central position and a “spread.” You can decompose this wave form into a sum of many waves of distinct wavelengths, so you also have a “spread” in the distribution of wavelengths. The two “spreads” are directly related to Heisenberg’s Uncertainty Principle (also often misunderstood, and I won’t go into too much depth explaining it), and that sum of waves is an example of a “superposition.” Is the particle this wave, or that wave, or both at once? It’s neither - our language and our way of thinking do not have a good word to relate the concept of a sum of waves to regular daily life. This is an example of the math being perfectly reasonable (a sum of functions), but whose explanation is difficult to put into words. The best we can do is say that the “state” of a system of quantum particles can be described by a “superposition” of different waves, and that the height of the wave at each position (squared, technically) gives the probability of finding the particle there.
These waves also interfere, which is to say they can add up to make peaks (like a tsunami) or cancel each other out. Picture five waves and their sum: the summed wave has zeroes at places where the individual waves cancel each other out, and a giant peak in the center, where they all “worked together.” The simplified idea of quantum computing is to arrange particles to represent parts of your problem (say, letters in a password are represented by waves with different wavelengths), and make the waves cancel out for the wrong answers and peak for the right one. So, instead of guessing every combination of letters one by one, you have designed your computer to make the right answer stand out in far fewer steps!
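The “working together” picture above can be made concrete with a toy sum of waves. This is just a numerical illustration, not quantum mechanics itself: five cosine waves of different wavelengths all peak together at the center, and away from the center they partially cancel.

```python
# Toy interference: sum five cosine waves with distinct wavelengths.
# All of them peak together at x = 0, so the sum has a giant peak there.
import math

def superposition(x, wavelengths):
    """Sum of cosine waves, each of amplitude 1, all peaking at x = 0."""
    return sum(math.cos(2 * math.pi * x / w) for w in wavelengths)

waves = [1.0, 1.5, 2.0, 2.5, 3.0]   # five distinct wavelengths

print(superposition(0.0, waves))    # all five reinforce: 5.0
print(superposition(0.75, waves))   # partial cancellation away from center
```

A quantum algorithm is designed so that, in a loose analogy, the “right answer” sits where the waves reinforce and the wrong answers sit where they cancel.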
If you’re interested in understanding this in more depth, the mathematical foundation of quantum mechanics is linear algebra – the study of vectors and matrices. I hope this explanation helped, even if it did end up being rather long.
Before talking about a quantum computer, let’s first look at a very simple example of a classical computer.
A classical computer keeps track of a long list of "bits", that is, switches that can be on or off. These switches are called computer memory. For example, the switches in the memory can look like:
| on | off | off | on | off | off | on | on |
When we want to do something on the computer, we come up with rules for flipping these switches in order to represent mathematical operations. For example, we might say, if two switches next to each other are the same, turn them off. We can apply this rule and get this result:
| on | off | off | on | off | off | off | off |
The result from flipping the switches is the answer the computer gives us. How long it takes us to get the answer depends on how many switches we have to flip, and how many times.
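The switch-flipping rule above can be written out as a tiny program. One way to read the rule (my assumption, chosen because it reproduces the example) is to compare non-overlapping pairs of neighboring switches:

```python
# A minimal sketch of the switch-flipping rule: scan the memory in
# non-overlapping pairs, and if both switches in a pair match, turn them off.
def apply_rule(switches):
    result = list(switches)
    for i in range(0, len(result) - 1, 2):
        if result[i] == result[i + 1]:          # two neighbors the same?
            result[i] = result[i + 1] = False   # turn both off
    return result

# The memory from the example: | on | off | off | on | off | off | on | on |
memory = [True, False, False, True, False, False, True, True]
print(apply_rule(memory))
# [True, False, False, True, False, False, False, False]
```

Only the last pair (on, on) matched, so only those two switches got turned off, matching the result shown above.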
These days, we know how to turn many simple problems into these kinds of switch-flipping operations, and get all the things that modern computers can do. But, some problems require so many switches and so many flips that they take too long to do even on the biggest classical computers.
A quantum computer is different because instead of having switches that are definitely "on" or "off", we have switches that have some probability of being "on" or "off" - we don't know the answer, just the probability. These probabilistic switches are called quantum-bits, or "qubits" for short.
Instead of flipping a qubit, a quantum computer changes the probability of it being on or off. And because each qubit can be in-between "on" and "off", a group of qubits can be in-between many on/off patterns at once - a group of n qubits covers 2^n patterns - so when a quantum computer changes its probabilities, one operation can affect an enormous number of patterns at the same time: the benefit of flipping lots of switches for the cost of a single operation. This speedup is the reason why a quantum computer can be much faster than a classical one, at least for certain problems.
Once the quantum computer is done changing all these probabilities based on the rules we give it, we look at the qubits to get an answer - at this point, each qubit turns into a normal switch that is "on" or "off", based on the probabilities it had. At this point, we get a similar answer to the classical computer but hopefully for much less effort.
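That final read-out step can be sketched as a toy simulation. A caveat: the "probabilities" are really amplitudes (numbers that can be negative or complex), and squaring an amplitude gives the probability; the function name and the real-valued amplitudes below are my simplifications.

```python
# Toy read-out of one "probabilistic switch": the qubit is described by two
# amplitudes, and measuring it forces a definite on/off answer.
import random

def measure(amp_off, amp_on):
    """Collapse to 'on' or 'off' with probability |amplitude|^2."""
    p_on = abs(amp_on) ** 2
    return "on" if random.random() < p_on else "off"

# An equal superposition: amplitudes 1/sqrt(2), so a 50/50 chance of each.
amp = 1 / 2 ** 0.5
counts = {"on": 0, "off": 0}
for _ in range(10_000):
    counts[measure(amp, amp)] += 1
print(counts)  # roughly 5000 of each
```

Each measurement destroys the superposition, which is why a quantum algorithm has to shift all the probability toward the right answer before you look.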
There is still a lot of science and engineering that we need to figure out to make a practical quantum computer work though, because working with these probabilistic switches (qubits) is much harder than with classical memory.
Hello Rual, a quantum computer, unlike a conventional computer, no longer needs to rely exclusively on ones and zeros. A conventional computer functions through circuits built from transistors that can quickly be tuned to allow a voltage through or not. In this way, when a voltage is present, the computer understands this as a 1, while if it isn't present, the computer interprets this as a 0. When huge numbers of transistors are put together, complex codes can turn those ones and zeros into the processes that, in the end, make your computer function. A quantum computer no longer relies on the same transistor technology and instead relies on different quantum states to provide the analogous ones and zeros.
Quantum states are specific allowable states of a particle or system that can be measured. In contrast to a transistor, which is either on or off, a quantum system can exist in a superposition of states, allowing a computer to complete computations with more than just ones and zeros. For certain problems this creates an incredibly consequential increase in computational power, potentially allowing previously intractable problems to be solved in minutes. All of this hinges on the ability to measure these quantum states reliably; therein lies the most difficult aspect for modern scientists. Quantum computers require a lot of energy to cool a system to temperatures low enough to allow effective trapping of these quantum states. It is therefore highly unlikely that you will see a quantum computer laptop anytime soon.