Commonly used models for mathematical modeling

As the programmer on a mathematical modeling team, you need to know the commonly used algorithms for each type of model. The most commonly used evaluation models, classification models, and prediction models in mathematical modeling are summarized as follows:

Next, these typical models will be introduced in detail.

1. Evaluation model

In mathematical modeling, the evaluation model is one of the most fundamental model types. Appropriate evaluation criteria and indicators are usually designed based on the characteristics and needs of the problem, so that the performance of different solutions or models can be evaluated and compared to support decision-making. Typical models include: the analytic hierarchy process, fuzzy comprehensive evaluation, the entropy method, the TOPSIS method, data envelopment analysis, the rank sum ratio method, and the gray correlation method.

1. Analytic hierarchy process

(1) Basic idea

The analytic hierarchy process (AHP) is a research method that combines qualitative judgment with quantitative calculation of decision weights to solve complex multi-objective problems. It decomposes a complex decision problem into multiple levels by building a hierarchical structure, and uses expert judgment and pairwise comparison to determine the weight of each factor, from which the final decision result is derived. It is particularly effective for problems that are difficult to solve with purely quantitative methods.

(2) Analysis steps

  • Step 1: Construct a judgment matrix;
  • Step 2: Calculate the weight;
  • Step 3: Consistency check.

(3) Software operation: use SPSSAU to perform the analytic hierarchy process; simply input the judgment matrix:

Interpreting the judgment matrix: tickets are more important than scenery, so that cell scores 3 points; conversely, scenery relative to tickets scores the reciprocal, 1/3 ≈ 0.33333 points. Transportation is more important than scenery, scoring 2 points, and the remaining cells follow the same logic.
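
As a minimal sketch of the underlying calculation (not the SPSSAU implementation), the Python code below derives weights from a judgment matrix with the eigenvector method and runs the consistency check from the steps above. The 3×3 matrix is hypothetical: it encodes the pairwise judgments in the text (tickets vs. scenery = 3, transportation vs. scenery = 2) plus an assumed tickets-vs.-transportation score of 2.

```python
import numpy as np

# Hypothetical judgment matrix over {tickets, transportation, scenery};
# the tickets-vs-transportation score of 2 is an assumption for illustration.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

# Step 2: weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Step 3: consistency check, CI = (lambda_max - n)/(n - 1), CR = CI/RI;
# RI = 0.58 is Saaty's random index for a 3x3 matrix.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print("weights:", w.round(4), "CR:", round(CR, 4))  # CR < 0.1 is acceptable
```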

For a detailed description of the AHP analytic hierarchy process and an interpretation of the case operation, please click to view the help manual below:

AHP Analytical Hierarchy Process Help Manual

2. Fuzzy comprehensive evaluation

(1) Basic idea

Fuzzy comprehensive evaluation is an evaluation method for handling fuzzy information. Fuzzy evaluation indicators are converted into membership degrees through membership functions, and each indicator is then weighted according to its importance. Finally, a comprehensive evaluation result is obtained by the weighted summation of the membership degrees. The fuzzy comprehensive evaluation method can effectively handle uncertainty and ambiguity, and is suitable for complex real-world decision-making.

(2) Analysis steps

  • Step 1: Determine the evaluation indicators and the comment set;
  • Step 2: Determine the weight vector A and construct the fuzzy evaluation (membership) matrix R;
  • Step 3: Calculate the comprehensive evaluation result and make the decision (see the sketch below).
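
As a minimal sketch under hypothetical data (three indicators, a comment set of {good, fair, poor}), the composition B = A ∘ R can be computed with the weighted-average operator M(·, +):

```python
import numpy as np

# Hypothetical weight vector A (three indicators) and membership matrix R
# (rows = indicators, columns = comments {good, fair, poor}).
A = np.array([0.5, 0.3, 0.2])
R = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.2, 0.5, 0.3],
])

# Weighted-average operator M(., +): B = A @ R, then apply the
# maximum-membership principle to pick the final comment.
B = A @ R
comments = ["good", "fair", "poor"]
print("B =", B.round(3), "->", comments[int(B.argmax())])
```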

(3) Software operation: upload the data to the SPSSAU system, select [Fuzzy Comprehensive Evaluation] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, and click "Start Analysis". The operation is as follows:

For detailed instructions on fuzzy comprehensive evaluation and interpretation of case operations, please click to view the help manual below.

Fuzzy comprehensive evaluation help manual

3. Entropy method

(1) Basic idea

The entropy method is an objective weighting method used to determine the weight of each indicator in a comprehensive evaluation. Entropy is a measure of uncertainty: the greater the amount of information, the smaller the uncertainty and the entropy; the smaller the amount of information, the greater the uncertainty and the entropy. The method therefore uses information entropy, combined with the degree of variation of each indicator, to calculate each indicator's weight, providing a basis for the comprehensive evaluation of multiple indicators.

(2) Analysis steps

  • Step 1: Data standardization;
  • Step 2: Non-negative translation;
  • Step 3: Calculate the weights and conduct the decision evaluation (see the sketch below).
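
A minimal sketch of these three steps, assuming a hypothetical 4 × 3 matrix of positive indicators (larger is better):

```python
import numpy as np

# Hypothetical data: 4 evaluation objects x 3 positive indicators.
X = np.array([
    [8, 60, 0.2],
    [7, 75, 0.4],
    [9, 50, 0.3],
    [6, 80, 0.5],
], dtype=float)

# Step 1: min-max standardization, so each indicator lies in [0, 1].
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Step 2: non-negative translation by a tiny constant so log(0) never occurs.
Z = Z + 1e-6

# Step 3: proportions p_ij, entropy e_j, and weights w_j = (1 - e_j) / sum.
P = Z / Z.sum(axis=0)
m = X.shape[0]  # number of evaluation objects
e = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1 - e) / (1 - e).sum()
print("weights:", w.round(4))
```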

(3) Software operation

Upload the data to the SPSSAU system, select [Entropy Method] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, and click "Start Analysis". The operation is as follows:

For a detailed explanation of the entropy method and interpretation of case operations, please click to view the help manual below.

Entropy method help manual

4. TOPSIS method

(1) Basic idea

The TOPSIS method is a multi-attribute decision-making method based on distance and similarity measures. It compares multiple alternatives with the ideal solution, calculating the distance and similarity between each alternative and the ideal solution; the alternatives are then evaluated and ranked on that basis, and the best one is selected. The TOPSIS method handles multi-attribute decision problems well and is especially suitable when multiple evaluation indicators must be considered.

(2) Analysis steps

  • Step 1: Prepare the data and apply same-trend processing so all indicators point in the same direction (the researcher handles this step);
  • Step 2: Normalize the data to remove dimensional effects (data processing -> generate variables; 'sum of squares normalization' is the usual choice);
  • Step 3: Find the optimal and worst vectors, i.e. the positive and negative ideal solutions (SPSSAU handles this automatically);
  • Step 4: Calculate the distance D+ between each evaluation object and the positive ideal solution, and the distance D- to the negative ideal solution;
  • Step 5: Calculate the relative closeness value C from the distances, sort by C, and draw conclusions (see the sketch below).
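
A minimal sketch of steps 2-5 on a hypothetical decision matrix whose indicators are already same-trended (larger is better):

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 same-trend indicators.
X = np.array([
    [9, 7, 8],
    [7, 8, 7],
    [8, 9, 6],
    [6, 6, 9],
], dtype=float)

# Step 2: sum-of-squares (vector) normalization.
Z = X / np.sqrt((X ** 2).sum(axis=0))

# Step 3: positive and negative ideal solutions.
z_plus, z_minus = Z.max(axis=0), Z.min(axis=0)

# Step 4: Euclidean distances D+ and D- to the two ideal solutions.
d_plus = np.sqrt(((Z - z_plus) ** 2).sum(axis=1))
d_minus = np.sqrt(((Z - z_minus) ** 2).sum(axis=1))

# Step 5: relative closeness C = D- / (D+ + D-); larger C is better.
C = d_minus / (d_plus + d_minus)
print("C:", C.round(4))
print("alternatives from best to worst:", (-C).argsort() + 1)
```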

(3) SPSSAU software operation

Upload the data to the SPSSAU system, select [TOPSIS method] on the right side of the analysis page, drag the variables to the analysis box on the right, and click "Start Analysis". The operation is as follows:

For a detailed explanation of the TOPSIS method and interpretation of case operations, please click to view the help manual below.

TOPSIS Method Help Manual

5. Data Envelopment Analysis

(1) Basic idea

Data envelopment analysis (DEA) is a research method for multi-indicator input-output evaluation. It uses mathematical programming models to calculate and compare the relative efficiency of decision-making units (DMUs), and evaluates the objects on that basis.

(2) Analysis steps

  • Step 1: Determine the decision-making units and evaluation indicators;
  • Step 2: Select the DEA model;
  • Step 3: Calculate the efficiency scores;
  • Step 4: Analyze the efficiencies and identify improvements (see the sketch below).
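
As a minimal sketch (under hypothetical data, and using the classical input-oriented CCR model rather than whatever variant SPSSAU runs by default), each DMU's efficiency can be obtained by solving a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 DMUs with 2 inputs (columns of X) and 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])  # DMU x inputs
Y = np.array([[1.0], [1.0], [1.2]])                  # DMU x outputs

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of DMU k (envelopment form):
    min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
                     sum_j lam_j y_j >= y_k,  lam >= 0."""
    n, m = X.shape          # number of DMUs, number of inputs
    s = Y.shape[1]          # number of outputs
    c = np.r_[1.0, np.zeros(n)]       # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[k], X.T]          # input rows:  -theta*x_ik + X lam <= 0
    A_out = np.c_[np.zeros(s), -Y.T]  # output rows: -Y lam <= -y_rk
    res = linprog(c,
                  A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (1 + n))
    return res.fun

for k in range(len(X)):
    print(f"DMU {k + 1}: efficiency = {ccr_efficiency(k):.4f}")
```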

(3) SPSSAU software operation

Upload the data to the SPSSAU system, select [DEA] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, select "DEA Type", and click "Start Analysis". The operation is as follows:

For a detailed description of data envelopment analysis (DEA) and interpretation of case operations, please click to view the help manual below.

Data Envelopment Analysis DEA Help Manual

6. Rank sum ratio method

(1) Basic idea

The rank sum ratio (RSR) method is a ranking-based comprehensive evaluation method. Its essential principle is to convert the original indicator values into ranks and perform the subsequent calculations on the resulting RSR values. The RSR value is continuous and lies between 0 and 1; generally, the larger the value, the better the evaluation object.

(2) Analysis steps

  • Step 1: List the original data, with each row representing an evaluation object and each column an evaluation indicator, forming an m×n matrix;
  • Step 2: Compute the rank of each element of the m×n original data matrix (ranking within each indicator column);
  • Step 3: Use the ranks from Step 2 to calculate the RSR values and their ranking;
  • Step 4: List the distribution table of the RSR values and obtain the Probit values;
  • Step 5: Fit the regression equation of RSR on Probit;
  • Step 6: Sort and classify (steps 1-3 are illustrated in the sketch below).
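
A minimal sketch of steps 1-3 on hypothetical data (all indicators assumed positive, i.e. larger is better); the Probit distribution table, regression, and grading steps are omitted:

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical data: 5 evaluation objects x 3 positive indicators.
X = np.array([
    [85, 90, 78],
    [76, 88, 92],
    [90, 70, 85],
    [65, 95, 80],
    [80, 82, 88],
], dtype=float)
m, n = X.shape

# Step 2: rank within each indicator column (ties get average ranks).
Rank = np.column_stack([rankdata(X[:, j]) for j in range(n)])

# Step 3: integral RSR value, RSR_i = sum_j R_ij / (m * n).
RSR = Rank.sum(axis=1) / (m * n)
print("RSR:", RSR.round(4))
print("objects from best to worst:", (-RSR).argsort() + 1)
```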

(3) Software operation

Upload the data to the SPSSAU system, select [Rank Sum Ratio] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, select the "Compilation Method" and "Number of Grades", and click "Start Analysis". The operation is as follows:

For detailed description of rank sum ratio and interpretation of case operations, please click to view the help manual below.

Rank Sum Ratio Help Manual

7. Gray correlation method

(1) Basic idea

Gray correlation analysis is a research method that assists decision-making by studying the degree of correlation between data sequences, namely between a parent (reference) sequence and one or more feature (comparison) sequences. The correlation degree measures how closely each feature sequence tracks the parent sequence: the larger the value, the stronger the association.

(2) Analysis steps

  • Step 1: Determine the parent sequence and the feature sequences, and prepare the data format;
  • Step 2: Perform dimensionless processing of the data (usually required);
  • Step 3: Find the gray correlation coefficients between the parent sequence and the feature sequences;
  • Step 4: Solve for the correlation degree values;
  • Step 5: Sort the correlation degrees and draw conclusions (see the sketch below).
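
A minimal sketch of steps 3-5 on hypothetical sequences (dimensionless processing assumed already done), using the customary resolution coefficient ρ = 0.5:

```python
import numpy as np

# Hypothetical parent sequence x0 and two feature sequences (columns of X).
x0 = np.array([1.0, 1.1, 1.3, 1.4])
X = np.array([
    [0.9, 1.0],
    [1.0, 1.2],
    [1.2, 1.1],
    [1.5, 1.3],
])

# Step 3: gray correlation coefficients,
# xi_ij = (d_min + rho*d_max) / (|x0_i - x_ij| + rho*d_max), rho = 0.5.
d = np.abs(X - x0[:, None])
rho = 0.5
xi = (d.min() + rho * d.max()) / (d + rho * d.max())

# Steps 4-5: correlation degree = column mean of coefficients; rank by it.
r = xi.mean(axis=0)
print("correlation degrees:", r.round(4))
print("feature sequences from strongest to weakest:", (-r).argsort() + 1)
```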

(3) Software operation

Upload the data to the SPSSAU system, select [Gray Correlation Method] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, select the "Dimensionalization Method", and click "Start Analysis". The operation is as follows:

For a detailed description of the gray correlation method and interpretation of case operations, please click to view the help manual below.

Gray correlation method help manual

2. Classification model

In mathematical modeling, a classification model is a data mining method that assigns the cases of an input data set to classes based on known classification labels. The goal of classification is to predict the target class of each case as accurately as possible. Typical models include K-means clustering, Fisher discriminant analysis, binary logistic regression, decision trees, random forests, neural network classification, and the K nearest neighbor algorithm.

1. K-means clustering

(1) Basic idea

The K-means algorithm is a typical distance-based clustering algorithm. It uses distance as the similarity measure: the closer two objects are, the greater their similarity. The algorithm assumes that clusters are composed of objects that are close together, so its ultimate goal is to obtain compact and well-separated clusters. Because it relies on distance calculations, the K-means algorithm can only handle numerical data, not categorical attribute data.

(2) Analysis steps

  • Step 1: Select K initial cluster centers;
  • Step 2: Calculate the distance between each data object and the K cluster centers, and assign each object to the set of its nearest center; once all objects are assigned, K data sets (i.e. K clusters) are formed;
  • Step 3: Recalculate the mean of the data objects in each cluster and use it as the new cluster center;
  • Step 4: Calculate the distance between each data object and the new K cluster centers, and reassign the objects;
  • Step 5: Recompute the cluster centers after each reassignment, and repeat until no data object moves to a different cluster (a from-scratch sketch follows below).
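
A minimal from-scratch sketch of these steps on hypothetical 2-D data (SPSSAU users would not write this themselves; it is only to make the iteration concrete):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means mirroring the steps above (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    # Step 1: pick K initial cluster centers at random from the data.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Steps 2 and 4: assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Steps 3 and 5: recompute each center as its cluster mean
        # (keep the old center if a cluster happens to be empty).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # no object moved: stop
            break
        centers = new_centers
    return labels, centers

# Hypothetical 2-D data with two obvious groups.
X = np.array([[1, 1], [1.2, 0.9], [0.8, 1.1],
              [5, 5], [5.1, 4.8], [4.9, 5.2]])
labels, centers = kmeans(X, k=2)
print("labels:", labels, "centers:", centers.round(2))
```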

(3) Software operation

Upload the data to the SPSSAU system, select [Clustering] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, and click "Start Analysis". The operation is as follows:


Supplement: when SPSSAU performs cluster analysis, place the corresponding data in the analysis column on the right and it will automatically identify whether to run a cluster analysis for quantitative, categorical, or mixed data:

  • When only quantitative data analysis is performed, SPSSAU uses the K-means clustering method for clustering by default;
  • When only analyzing categorical data, SPSSAU uses the K-modes clustering method for clustering by default;
  • When performing mixed (quantitative + categorical) data analysis, SPSSAU will use the K-prototype clustering method for clustering.

For detailed description of cluster analysis and interpretation of case operations, please click to view the help manual below.

Cluster Analysis Help Manual

2. Fisher discriminant analysis

(1) Basic idea

The basic idea of Fisher discriminant analysis is to classify samples by projecting them onto a straight line so that samples of the same class end up as close together as possible and samples of different classes end up as far apart as possible.
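
A minimal two-class sketch on hypothetical data: the projection direction w = Sw⁻¹(μ₁ − μ₀) maximizes between-class separation relative to within-class scatter, and a new sample is classified by comparing its projection to the midpoint of the projected class means.

```python
import numpy as np

# Hypothetical two-class training data (rows = samples, columns = features).
X0 = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2]])
X1 = np.array([[3.0, 4.0], [3.2, 3.8], [2.8, 4.2]])

# Class means and pooled within-class scatter matrix S_w.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)

# Fisher projection direction w = Sw^{-1} (mu1 - mu0).
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify a new sample: project it and compare against the midpoint
# of the projected class means.
x_new = np.array([2.0, 3.0])
threshold = w @ (mu0 + mu1) / 2
print("predicted class:", 1 if w @ x_new > threshold else 0)
```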

(2) Software operation

Upload the data to the SPSSAU system, select [Discriminant Analysis] on the right side of the analysis page, drag the variables into the corresponding analysis box on the right, and click "Start Analysis".

For detailed explanations of discriminant analysis and interpretation of case operations, please click to view the help manual below.

Discriminant Analysis Help Manual

3. Binary logistic regression

(1) Basic idea

Binary logistic regression analysis is a commonly used classification method. Its basic idea is to classify samples by building a logistic regression model: a linear combination of the predictor variables is converted into a probability value between 0 and 1, and this probability is then used as the basis for classification. Compared with other classification methods, binary logistic regression has the advantages of a simple model and strongly interpretable parameters, and it is widely used in practice.

(2) Analysis steps

  • Step 1: Establish the binary logistic regression model;
  • Step 2: Evaluate the model;
  • Step 3: Apply the model for classification prediction (see the sketch below).
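
A minimal sketch of the three steps using scikit-learn on hypothetical data (SPSSAU performs the equivalent analysis without code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: two predictors, binary label.
X = np.array([[1, 2], [2, 1], [2, 3], [5, 4], [6, 5], [5, 6]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# Step 1: fit the model; w'x + b is passed through the sigmoid to give
# a probability in (0, 1).
model = LogisticRegression().fit(X, y)

# Step 2: evaluate (training accuracy here; use held-out data in practice).
print("accuracy:", model.score(X, y))

# Step 3: classify a new case from its predicted probability.
x_new = [[3.0, 3.0]]
p = model.predict_proba(x_new)[0, 1]
print(f"P(y=1) = {p:.3f} -> class {int(model.predict(x_new)[0])}")
```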

(3) Software operation

Upload the data to the SPSSAU system, select [Binary Logit Regression] on the right side of the analysis page, drag the variables to the corresponding analysis box on the right, and click "Start Analysis". The operation is as follows:


For detailed instructions on binary logistic regression analysis and interpretation of case operations, please click to view the help manual below.

Binary logistic regression analysis help manual

4. Machine learning

Decision trees, random forests, neural networks, the K nearest neighbor algorithm, naive Bayes, and support vector machines all fall into the category of machine learning classifiers. For an introduction to these six typical machine learning algorithms, you can read this earlier article:

The secrets of six machine learning algorithms: from decision trees to neural networks, even beginners can easily master them!

3. Prediction model

1. ARIMA forecast

(1) Basic idea

The ARIMA model is the most common time series forecasting method and is suitable for stationary time series data. It consists of three parts: autoregression (AR), differencing (I), and moving average (MA). SPSSAU can automatically search for the best AR order, differencing order, and MA order, and then report the prediction results of the best model. Researchers can of course also set the autoregressive order p, the differencing order d, and the moving average order q themselves and build the model manually.
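
A minimal sketch using statsmodels on a hypothetical simulated series; the order (p, d, q) = (1, 1, 1) is chosen for illustration, not by model selection:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical series: a random walk with drift (60 monthly observations).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=60))

# Fit ARIMA(p=1, d=1, q=1) and forecast the next 5 periods.
model = ARIMA(y, order=(1, 1, 1)).fit()
print(model.forecast(steps=5))
```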

(2) Software operation

For detailed explanations of ARIMA forecasts and case operation interpretation, please click to view the help manual below.

ARIMA Forecasting Help Manual

2. Exponential smoothing method

(1) Basic idea

The exponential smoothing method is often used when the data series is short, and it is generally suitable only for short- and medium-term forecasts; it may perform poorly on data with long-term trends or complex nonlinear relationships. Exponential smoothing is further divided into single, double, and triple smoothing: single smoothing is a weighted prediction based on historical data, double smoothing suits data with a roughly linear trend, and triple smoothing suits data with a curvilinear relationship. If no smoothing method is specified, SPSSAU automatically runs all three methods and selects the one with the best fit.

In the exponential smoothing method, the initial value S0 and the smoothing coefficient alpha are the two parameters that determine the initial state of the prediction model and the weight given to past observations.
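
A minimal from-scratch sketch of single exponential smoothing on a hypothetical short series; S0 defaults to the first observation, and a larger alpha weights recent data more heavily:

```python
import numpy as np

def simple_exp_smoothing(y, alpha, s0=None):
    """Single exponential smoothing: S_t = alpha*y_t + (1 - alpha)*S_{t-1}.
    The last smoothed value serves as the one-step-ahead forecast."""
    s = y[0] if s0 is None else s0  # initial value S0
    smoothed = []
    for y_t in y:
        s = alpha * y_t + (1 - alpha) * s
        smoothed.append(s)
    return np.array(smoothed)

# Hypothetical short series.
y = np.array([12.0, 13.0, 12.5, 14.0, 13.5])
smoothed = simple_exp_smoothing(y, alpha=0.4)
print("forecast for the next period:", smoothed[-1].round(3))
```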

(2) Software operation

For a detailed explanation of the exponential smoothing method and interpretation of case operations, please click to view the help manual below:

Exponential Smoothing Help Manual

3. Gray prediction model

(1) Basic idea

The gray prediction model can effectively predict from data sequences that are very short (as few as 4 points) and of low completeness and reliability. It uses a differential equation to extract the essential pattern of the data. Modeling requires little information, achieves relatively high accuracy, is simple to operate and easy to test, and does not require assumptions about the data's distribution or trend. However, the gray prediction model is generally suitable only for short-term prediction of data with a roughly exponential growth trend; long-term prediction is not recommended.
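
A minimal from-scratch sketch of the classic GM(1,1) gray model on a hypothetical 4-point series: the series is accumulated, the whitening equation dx/dt + ax = b is fitted by least squares, and forecasts are restored by differencing.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1): fit dx/dt + a*x = b on the accumulated (1-AGO) series,
    then restore forecasts of the original series by differencing."""
    n = len(x0)
    x1 = np.cumsum(x0)                 # 1-AGO accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])      # background values (adjacent means)
    B = np.c_[-z1, np.ones(n - 1)]
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]         # restore by differencing
    return x0_hat[n:]

# Hypothetical 4-point series with a roughly exponential growth trend.
x0 = np.array([100.0, 112.0, 126.0, 141.0])
print("next 2 predictions:", gm11_forecast(x0, steps=2).round(2))
```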

(2) Software operation

For a detailed description of the gray prediction model and interpretation of case operations, please click to view the help manual below.

Gray prediction model help manual

4. Markov prediction

(1) Basic idea

Markov prediction is a prediction method based on Markov chains. A Markov chain is a stochastic process with the Markov property: the probability distribution of the future state depends only on the current state and is independent of past states. Markov prediction exploits this property to forecast future events.

Markov prediction involves three key technical terms.
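
As a minimal sketch on a hypothetical 3-state chain: given the state transition probability matrix P and the current state distribution π₀, the predicted distribution after k steps is π_k = π₀ Pᵏ.

```python
import numpy as np

# Hypothetical 3-state transition probability matrix (rows sum to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

# Current state distribution: the system starts in state 1.
pi0 = np.array([1.0, 0.0, 0.0])

# Predicted distribution after 3 steps: pi_3 = pi_0 @ P^3.
pi3 = pi0 @ np.linalg.matrix_power(P, 3)
print("distribution after 3 steps:", pi3.round(4))
```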

(2) Software operation

For a detailed description of Markov prediction and case operation interpretation, please click to view the help manual below.

Markov Prediction Help Manual

5. Machine learning prediction

Machine learning is a powerful technique for learning patterns and regularities from data and using this knowledge to make predictions.

Regarding the six types of machine learning algorithms, a detailed introduction was already given in an earlier article and will not be repeated here. You can click the article below to learn more: The secrets of six machine learning algorithms: from decision trees to neural networks, even beginners can easily master them!

