Engineering numerical analysis

1. Monte Carlo

Basic process

The Monte Carlo simulation method is based on random sampling: it estimates and analyzes the target quantity by generating a large number of random samples. The basic process is as follows:

        1. Define the problem: clarify what problem is to be solved and what quantity needs to be estimated or computed.

        2. Build a model: construct an appropriate mathematical model that describes the problem.

        3. Generate random samples: according to the model of the problem, use an appropriate random sampling method to generate random samples.

        4. Calculate the objective function: for each generated random sample, evaluate the objective function as defined by the problem.

        5. Statistical analysis: perform statistical analysis on the computed objective-function values, for example means, variances, confidence intervals, and probability distributions. This analysis yields estimates or approximate solutions to the problem.

        6. Convergence test: after a sufficient number of random samples has been drawn, perform a convergence test to check the stability and accuracy of the results.

        7. Interpret and apply the results: interpret and apply the results obtained by the Monte Carlo method in the context of the specific problem.
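The steps above can be sketched with a classic toy problem: estimating π by sampling points in the unit square. The function name and sample sizes here are illustrative, not from the original.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Steps 3-5: draw uniform random points in the unit square,
    count those inside the quarter circle, and scale the hit rate
    by 4 to estimate pi."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

# Step 6: a crude convergence check -- rerun with more samples
# and watch the estimates stabilize around the true value.
estimates = [estimate_pi(n) for n in (1_000, 10_000, 100_000)]
```

Each element of `estimates` is an independent run; watching them settle toward π as the sample count grows is the informal convergence test of step 6.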

Advantages

  1. Wide applicability: Monte Carlo methods can be applied to various problem areas, including numerical calculation, probability statistics, optimization problems, etc. Its flexibility allows it to solve complex problems that are difficult to solve with traditional analytical methods.

  2. Relatively simple: Monte Carlo methods are relatively simple to implement, especially compared to other numerical methods. It usually only needs to generate random samples and perform simple statistical analysis without deriving complex mathematical formulas or solving high-order equations.

  3. Parallelizable processing: Since the calculation steps of the Monte Carlo method are independent of each other, the problem can be decomposed into multiple sub-problems and performed in parallel. This makes the Monte Carlo method more scalable and efficient in parallel computing environments.

  4. Provides uncertainty quantification: Monte Carlo methods are able to quantify the uncertainty in a problem and provide a probability distribution or confidence interval about the outcome. This allows decision makers to better understand risks and uncertainties and make decisions accordingly.
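Point 4 above is worth making concrete: a Monte Carlo run naturally produces not just a point estimate but a confidence interval. A minimal sketch, assuming a normal approximation for the sample mean (the function name and z-value are illustrative):

```python
import math
import random
import statistics

def mc_mean_with_ci(sample_fn, n, z=1.96, seed=0):
    """Estimate E[X] from n random draws, together with an
    approximate 95% confidence interval (mean +/- z * stderr).
    sample_fn is any callable that returns one draw given an RNG."""
    rng = random.Random(seed)
    draws = [sample_fn(rng) for _ in range(n)]
    mean = statistics.fmean(draws)
    half_width = z * statistics.stdev(draws) / math.sqrt(n)
    return mean, (mean - half_width, mean + half_width)

# Example: the mean of U(0, 1), whose true value is 0.5.
mean, (low, high) = mc_mean_with_ci(lambda rng: rng.random(), 10_000)
```

The interval width shrinks in proportion to 1/√n, which quantifies how much the answer can still be trusted at a given sample budget.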

Shortcomings

  1. High computational cost: Monte Carlo methods usually need to generate a large number of random samples to obtain accurate results, and the cost grows rapidly with the accuracy required and the complexity of the problem. This can demand significant computing resources and time.

  2. Slow convergence: Monte Carlo methods converge slowly; the statistical error typically shrinks only in proportion to 1/√N, where N is the number of samples, so reducing the error by a factor of ten requires about a hundred times more samples. This can lead to long computation times.

  3. Random error: the results of the Monte Carlo method depend on random sampling and therefore carry statistical error. Even with a large number of samples, some residual error remains, so the reliability of the results must be assessed through convergence tests and statistical analysis.

  4. Not suitable for some problems: Monte Carlo methods are not suitable for all types of problems. For some problems, especially those that are highly structured and regular, more efficient analytical or numerical methods may exist.

Example

     Compute the integral of a function f over [a, b].

      We draw a series of random points x_i uniformly from [a, b], average the values f(x_i), and multiply by the interval length (b - a) to approximate the integral. The more sample points we use, the closer the estimate gets to the true value of the integral.
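This sample-mean estimator can be written in a few lines. The function name and the test integrand are illustrative; any integrable f works.

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Approximate the integral of f over [a, b] by averaging f at
    n uniform random points x_i and scaling by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Sanity check: the integral of x^2 over [0, 1] is exactly 1/3.
approx = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

Increasing `n` tightens the estimate, at the 1/√N rate discussed under shortcomings.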

2. Generalized Polynomials

Orthogonal polynomials are a special class of polynomial functions that are orthogonal under an inner product defined by some weight function. These polynomials are closely related to many probability distributions and play an important role in probability and statistics.
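Orthogonality can be checked numerically. As a sketch (the first few Legendre polynomials are hardcoded here; they are orthogonal on [-1, 1] with weight w(x) = 1, and the midpoint rule stands in for the exact integral):

```python
def legendre(n, x):
    """The first few Legendre polynomials P_0 .. P_3, written out
    explicitly instead of via the general recurrence."""
    polys = [1.0, x, (3 * x * x - 1) / 2, (5 * x ** 3 - 3 * x) / 2]
    return polys[n]

def inner_product(m, n, steps=10_000):
    """Approximate <P_m, P_n> = integral of P_m(x) * P_n(x) over
    [-1, 1] using a midpoint rule with the given number of steps."""
    h = 2.0 / steps
    return h * sum(legendre(m, -1 + (k + 0.5) * h) *
                   legendre(n, -1 + (k + 0.5) * h)
                   for k in range(steps))

# Distinct orders are orthogonal: <P_1, P_2> is approximately 0,
# while <P_n, P_n> equals 2 / (2n + 1).
off_diag = inner_product(1, 2)
diag = inner_product(1, 1)
```

The same pattern holds for other classical families (Hermite, Laguerre, Chebyshev), each paired with its own weight function and interval.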


Source: blog.csdn.net/weixin_44307969/article/details/130978102