Monte Carlo and the Holy Grail: Quadratically Faster Simulation

1. Description

        When a problem is uncertain or chaotic, what method is most effective for tackling it? This article discusses the Monte Carlo method.

        Have you ever tried tossing a crumpled candy wrapper into the trash? Even if you release it right over the bin, the crumpled plastic is likely to spin and land next to it. A closer look reveals why: as it falls, the wrapper pushes the air beneath it, and the air flowing over its ridges and wrinkles makes it spin and drift sideways. Even the tiniest air current, or the slightest extra impulse from your hand, can dramatically change the landing position. This is a very complex process involving turbulence and chaos, and it is a familiar example of a system whose outcome is highly sensitive to its initial conditions.

     

Releasing a crumpled candy wrapper right above the trash can doesn't guarantee it will end up in the trash can.

2. Monte Carlo method for solving chaotic problems

        Now let's say you become obsessed with this, grab some pencils and a stack of A4 paper, and try to work out your chances of success given parameters such as the wrapper's geometry, air pressure and temperature, release height and angle, and the trash can's diameter, to name a few. I don't mean to dissuade you, but even if you find the governing equations, they will depend strongly on these parameters, and even the slightest fluctuation in them can mean the difference between failure and success. This sensitivity is very common in chaotic systems.

        If you're still undaunted by the task, there is another approach, less analytical but more practical. Rather than calculating the exact chance of success, drop the wrapper many times and record the outcome of each trial (in this case, the landing coordinates). From this log, you can build the probability distribution of outcomes, allowing you to state things like "in 15% of the trials, the candy wrapper fell between 5 and 10 cm from the center of the bin."

Imaginary histogram of landing coordinates after a large number of samples. The more samples, the better the histogram approximates the actual probability distribution.

        By repeating the process many times, the result should approach the actual probability distribution of landing coordinates (for a fixed set of parameters, such as those mentioned above). This is an example of a Monte Carlo method, a strategy that provides approximate results for a possibly deterministic (and often very complex) process. Its applications are ubiquitous, ranging from physics simulations (such as our study of plastic-wrapped projectiles) to predicting the behavior of financial markets. A toy version of the experiment is sketched below.
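To make this concrete, here is a minimal Python sketch of such a sampling experiment. The drop_wrapper model below is entirely made up (a stand-in for the real, intractable physics); only the sampling logic matters.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_wrapper():
    # Made-up model: net sideways drift as a sum of random "kicks" (in cm).
    return abs(rng.normal(0.0, 6.0) + rng.normal(0.0, 3.0))

samples = np.array([drop_wrapper() for _ in range(100_000)])

# An empirical statement like the one in the text:
frac = np.mean((samples >= 5) & (samples <= 10))
print(f"Fell between 5 and 10 cm from the center in {100 * frac:.1f}% of trials")

# Histogram approximating the landing-distance distribution
counts, edges = np.histogram(samples, bins=30)
```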

        As for the name, Monte Carlo was the code name suggested by colleagues of the physicists John von Neumann and Stanislaw Ulam, two of its earliest users, whose work on nuclear weapons required secrecy. The name itself refers to a casino in Monaco where Ulam's uncle would gamble away money borrowed from relatives. The casino reference simply and effectively evokes the randomness at the heart of the method.

        Before getting into the method itself, I have to be honest with you: this post is much more technical than my previous ones, so let me start with a summary that can be read independently of the rest of the post:

Monte Carlo methods consist of approximating the outcome of a complex process by repeating the process multiple times, producing a sample each time. With enough samples, we can statistically approximate the result by collecting the samples into a probability distribution. For example, instead of calculating how many grains of rice fit in a handful, one can repeatedly grab handfuls of rice, count the grains, and average the results to get a good estimate. Presumably, the direct calculation would be very complex, since it may depend on even the smallest details of the hand. Many Monte Carlo implementations are variations on this simple scheme.

The idea is simple: the more repetitions, the closer the estimate gets to the actual result. This closeness is quantified by the error ε. In classical Monte Carlo methods, this error is proportional to 1/√N, where N is the number of repetitions; that is, for large N the error decreases like 1/√N, up to a constant multiplicative factor.

It turns out that Monte Carlo has a quantum-enhanced version whose error scales as 1/N, where N is the number of applications of a certain quantum operator. This scaling represents a quadratic speedup of quantum Monte Carlo relative to its classical counterpart.

This quantum-enhanced Monte Carlo is based on the Quantum Amplitude Estimation Algorithm (QAEA), which is a hybrid of Grover's quantum amplitude amplification and the quantum phase estimation algorithm, hence the name. Current quantum computers are not yet suitable for its implementation, but efforts are underway to simplify some of its subroutines, such as variational versions of the quantum phase estimation algorithm. A back-of-the-envelope comparison of the two scalings follows below.
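As a quick sanity check on what the quadratic speedup buys, here is the arithmetic with all constant factors ignored:

```python
# Repetitions needed for a target error eps (constants ignored):
#   classical: eps ~ 1/sqrt(N)  =>  N ~ 1/eps**2
#   quantum:   eps ~ 1/M        =>  M ~ 1/eps
for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps = {eps:g}: classical N ~ {1/eps**2:,.0f}, quantum M ~ {1/eps:,.0f}")
```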

I hope I haven't lost you so far and that you want to learn more about quantum-enhanced Monte Carlo. In that case, it's best to be familiar with the basics of quantum mechanics, quantum circuits, and routines like the quantum Fourier transform (QFT); you can read more about these in this article by my colleague Dr. Hamza Jaffali.

3. Analysis of the Monte Carlo algorithm

        Regarding the underlying algorithm, you can find an excellent QAEA tutorial in Qiskit, for example, but its connection to Monte Carlo methods is mostly confined to academic sources, such as research papers like this one. Therefore, the goal of this post is to piece QAEA together and show how it can be used in Monte Carlo-like problems. We start with a very simple example from quantum mechanics.

        A general single-qubit state with complex coefficients: |Ψ〉 = a|0〉 + b|1〉, with |a|² + |b|² = 1. We can estimate the complex coefficients a and b using the Hadamard test, which can be classified as classical Monte Carlo sampling.

        Above, the coefficients give us |a|² and |b|², the probabilities of measuring |0〉 and |1〉, respectively. However, if you only have access to the full state |Ψ〉, measuring the complex amplitudes a and b themselves, rather than their absolute values, is not trivial. The best you can do is use the Hadamard test, a procedure I described and implemented in a previous article. Here, though, we will consider the simpler case of states with real coefficients:

        A simpler state with real coefficients: |ψ〉 = √p |0〉 + √(1−p) |1〉. Here p is the probability of measuring 0, and 1−p is the probability of measuring 1, since the normalized probabilities must sum to 1.

        If the goal is to estimate p, i.e. the probability of measuring |0〉, the classical strategy is to measure the state |ψ〉 many times, say N times. The probability p is then estimated as:

        The estimator p̂ = N₀/N, where N₀ is the number of trials that returned 0. The larger the sample, the closer the estimate is to the actual value of p.

The error scales as 1/√N for sufficiently large N. The fundamental point of the procedure is that each sample requires a single state preparation and a single measurement; together they constitute one sampling step, which yields one sample. The definition of what counts as a "step" or "repetition" in the quantum-enhanced setting is not so obvious (at least it wasn't to me at first), but I hope to clarify it below.

        A single sample in the classical Monte Carlo version requires a state preparation, a measurement, and logging the result.
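Here is a minimal simulation of this classical strategy, with the quantum measurement replaced by a pseudo-random draw; the true p is hard-coded only so we can track the error against the 1/√N scaling.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3  # "true" probability of measuring |0>, unknown in practice

# One sample = one state preparation + one measurement + logging the result.
for N in (100, 10_000, 1_000_000):
    zeros = rng.random(N) < p        # True stands for the outcome 0
    p_hat = zeros.mean()             # estimator: (number of 0s) / N
    print(f"N = {N:>9,}   p_hat = {p_hat:.4f}   "
          f"|p_hat - p| = {abs(p_hat - p):.4f}   1/sqrt(N) = {1/np.sqrt(N):.4f}")
```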

        For comparison, let us apply quantum-enhanced Monte Carlo to obtain an estimate of the probability p. As before, we first assume that the quantum state is initialized in the |0〉 state and then rotated to the general state |ψ〉 by a unitary V. The next step is to construct another unitary U with |ψ〉 as one of its eigenvectors, whose eigenvalue is a complex phase encoding the probability p, as shown below.

        The (unitary) operator V is used to construct the unitary U. The original state |ψ〉 is an eigenvector of U, whose eigenvalue provides the probability p. Quite a tongue twister, right?

        Building the unitary U requires the basic idea of Grover's algorithm, and I should write an article about it soon, because it involves some neat tricks that are nicely visualized on the Bloch sphere. For now, just assume that it exists and behaves as shown in the image above. The next step is to estimate the phase θ, which is equivalent to estimating p.
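For readers who want something tangible before that article arrives, below is a minimal NumPy sketch of one common construction: U as the Grover-type operator built from a reflection about |0〉 followed by a reflection about |ψ〉. Strictly speaking, in this construction |ψ〉 is a superposition of the two eigenvectors of U, but both eigenphases ±θ satisfy p = sin²(θ/2), so either one recovers p.

```python
import numpy as np

p = 0.3
alpha = np.arccos(np.sqrt(p))                    # V|0> = cos(a)|0> + sin(a)|1>
psi = np.array([np.cos(alpha), np.sin(alpha)])   # sqrt(p)|0> + sqrt(1-p)|1>

# Grover-style construction: reflect about |0>, then reflect about |psi>.
oracle = np.diag([-1.0, 1.0])                    # flips the sign of |0>
reflect_psi = 2 * np.outer(psi, psi) - np.eye(2)
U = reflect_psi @ oracle

# The eigenvalues of U are exp(+/- i*theta), and both eigenphases satisfy
# p = sin^2(theta/2).
for lam in np.linalg.eigvals(U):
    theta = np.angle(lam)
    print(f"eigenphase {theta:+.4f}  ->  sin^2(theta/2) = {np.sin(theta / 2)**2:.4f}")
```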

        If you've seen the quantum phase estimation algorithm before, you can guess what happens next. Given the relationship between the complex phase θ and the probability p, if we manage to estimate the former, we can obtain the latter. I will avoid explaining in detail how the quantum circuit works, and instead focus on the inputs and outputs of its algorithmic blocks. Here's what the quantum circuit looks like:

        The quantum phase estimation algorithm. It requires the operators U and V, and an inverse quantum Fourier transform. We construct the state |ψ〉, perform M−1 controlled-U operations on it, and obtain a binary string from the measurement of the auxiliary register, which provides an estimate of the probability p. We cover blocks 1 through 3 below.

        If you think this is just the phase estimation algorithm, you are absolutely right. And if you are not familiar with it, fear not: I will introduce each block one by one. Ready?

        In block 1, we prepare the m-qubit auxiliary register (which will eventually record the estimate of θ as a binary fraction) and the state |ψ〉 itself. For this we need the operator V and m Hadamard gates.

        Next, in block 2, we apply a cascade of controlled-U gates along the auxiliary register. In total, they amount to M − 1 applications of U, where M = 2ᵐ. Note that |ψ〉 is an eigenstate of U, so these operations do not fundamentally change it; they just multiply it by a global complex phase.

        Finally, in block 3, we measure the auxiliary register to obtain a string of 0s and 1s, labeled j₁j₂...jₘ. These allow us to estimate the angle θ as a binary fraction: θ = 2π(0.j₁j₂...jₘ). Here's how to read it:

        Representing the phase θ as a binary fraction: θ = 2π(j₁/2 + j₂/4 + ... + jₘ/2ᵐ). The readout of the auxiliary register provides a string of 0s and 1s that can be converted into an estimate of the phase. To be precise, a single readout does not always provide the correct estimate, but this does not affect the overall efficiency of the method.
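A small helper shows the conversion, assuming the standard amplitude-estimation relation p = sin²(θ/2) (consistent with the eigenphase sketch above); the bitstring here is a hypothetical readout, not real data.

```python
import numpy as np

def phase_from_bits(bits):
    """theta = 2*pi*(0.j1 j2 ... jm), with j1 the most significant bit."""
    return 2 * np.pi * sum(b / 2**(k + 1) for k, b in enumerate(bits))

bits = [1, 0, 1]                   # hypothetical readout: j1=1, j2=0, j3=1
theta = phase_from_bits(bits)      # 2*pi*(1/2 + 0/4 + 1/8) = 2*pi*0.625
p_est = np.sin(theta / 2)**2       # assumed relation between theta and p
print(f"theta = {theta:.4f}, p_est = {p_est:.4f}")
```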

        We now have all the main elements of the algorithm and can discuss them. To identify the "elementary steps", we should take a closer look at the binary fraction above. You will agree that, given this binary fraction, the number of auxiliary qubits (m in total) determines the smallest possible correction to the phase θ. In fact, each additional qubit refines the estimate by a factor of 2, so if your quantum register has m auxiliary qubits, your estimate may be off from the exact value by about 1/2ᵐ, since that is the smallest contribution.

        In other words, the error ε is about 1/M (with M = 2ᵐ); that is, it scales as 1/M. Furthermore, since the cascade of controlled-U blocks consists of M − 1 applications of U, we treat the controlled-U operations as the fundamental steps.

        The basic steps or operations of the quantum and classical versions of Monte Carlo.

        Since the estimation error is proportional to 1/M, where M counts the "elementary steps" of the algorithm, we say that it represents a quadratic speedup relative to classical methods (such as naive sampling), whose error scales as 1/√N.
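Putting blocks 1 through 3 together, here is a minimal statevector simulation of the whole pipeline in NumPy, under the same assumptions as the sketches above (the Grover-type U and the relation p = sin²(θ/2)). It is an illustration, not a hardware implementation.

```python
import numpy as np

p = 0.3          # probability to estimate
m = 6            # auxiliary qubits; M = 2**m
M = 2**m

# State preparation V|0> and the Grover-style operator U from the sketch above.
alpha = np.arccos(np.sqrt(p))
psi = np.array([np.cos(alpha), np.sin(alpha)])
U = (2 * np.outer(psi, psi) - np.eye(2)) @ np.diag([-1.0, 1.0])

# Block 1: Hadamards put the auxiliary register in a uniform superposition;
# the system qubit starts in |psi>. state[j] holds the system amplitudes
# attached to auxiliary basis state |j>.
state = np.kron(np.full(M, 1 / np.sqrt(M)), psi).astype(complex).reshape(M, 2)

# Block 2: the cascade of controlled-U gates applies U^j when the auxiliary
# register holds |j>.
for j in range(M):
    state[j] = np.linalg.matrix_power(U, j) @ state[j]

# Block 3: inverse QFT on the auxiliary register, then "measure" it.
iqft = np.array([[np.exp(-2j * np.pi * a * b / M) for b in range(M)]
                 for a in range(M)]) / np.sqrt(M)
state = iqft @ state
probs = np.sum(np.abs(state)**2, axis=1)   # outcome distribution over j

j_best = int(np.argmax(probs))             # most likely readout
theta = 2 * np.pi * j_best / M             # binary-fraction phase estimate
p_est = np.sin(theta / 2)**2
print(f"p = {p}, p_est = {p_est:.4f}, resolution ~ 1/M = {1/M:.4f}")
```

With m = 6 auxiliary qubits, the most likely readout lands within about 1/M of the true value, in line with the ε ≈ 1/M scaling discussed above.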

4. Postscript

        There are many more details to this story, which I will leave for later articles. One of them is the construction of the operator U mentioned above; another is that not every run of the quantum algorithm provides the best estimate of the phase θ, so multiple runs are needed for better results. However, this does not spoil the quadratic speedup. Furthermore, we can generalize this approach to estimate more than just probabilities of individual configurations; for example, we could in principle extend it to estimate expected values.
