2023 Certification Cup International Competition (Little America Competition): Analysis of Questions A, B, C, and D

The difficulty ranking and topic-selection popularity for the 2023 Certification Cup International Competition (Little America Competition) are as follows:

Difficulty: D>C=B>A

Topic selection: A>B=C>D

Question A (MCM): Sunspot Prediction

Complete ideas and code: https://www.jdmm.cc/file/2709944/

Question A restatement

Restatement of Question A of the 2023 Certification Cup International Competition (Little America Competition): Sunspots are phenomena on the solar photosphere: temporary spots that appear darker than the surrounding area. They are regions of reduced surface temperature caused by concentrations of magnetic flux that inhibit convection. Sunspots occur within active regions, usually in pairs of opposite magnetic polarity, and their number varies according to the approximately 11-year solar cycle.

A single sunspot or group of sunspots may persist anywhere from a few days to a few months before eventually decaying. Sunspots expand and contract as they move across the Sun's surface, ranging from 16 kilometers (10 miles) [1] to 160,000 kilometers (100,000 miles) in diameter. Larger sunspots can be seen from Earth without a telescope [2]. When they first emerge, they may move at relative speeds of a few hundred meters per second in addition to their normal proper motion.

The solar cycle usually lasts about 11 years, ranging from just under 10 to just over 12 years. The point of highest sunspot activity in a cycle is called the solar maximum, and the point of lowest activity the solar minimum. This cycle is also observed in most other solar phenomena and is associated with changes in the Sun's magnetic field, whose polarity reverses with each cycle.

The number of sunspots also varies over longer periods. For example, during the period from 1900 to 1958, known as the Modern Maximum, the solar-maximum sunspot numbers trended upward; over the following 60 years the trend was mainly downward [3]. Overall, the last time the Sun was this active was more than 8,000 years ago [4].

Because sunspots are related to other types of solar activity, sunspots can be used to help predict space weather, the state of the ionosphere, and conditions related to shortwave radio propagation or satellite communications. Many models based on time series analysis, spectral analysis and neural networks have been used to predict sunspot activity, but the results are often poor. This may be related to the fact that most predictive models are phenomenological at the data level. Although we generally know the length of the solar activity cycle, this cycle is not completely stable, the maximum intensity of activity varies over time, and the timing of the peak and the duration of the peak are difficult to predict accurately.

We need to predict sunspot activity, usually reported as monthly averages. You and your team are asked to develop sound mathematical models that make sunspot predictions as credible as possible. Relevant observational data are publicly available from many observatories and space science research organizations, including historical records of sunspot numbers, sunspot areas, and other possibly related indicators.

Task:

  1. Please predict the beginning and end of the current and the next solar cycle;

  2. Please predict the start time and duration of the solar maximum of the next solar cycle;

  3. Predict the number and area of sunspots during the current and next solar cycle, and explain the reliability of your model in your paper.

Analysis of ideas for question A

The analysis of the ideas for Question A of the Certification Cup is as follows:

Question 1: Predicting the start and end of the current and next solar cycle

To predict the start and end of the solar cycle, one can use time series analysis and statistical models such as ARIMA (Autoregressive Integrated Moving Average) or similar methods. The key is to use historical solar activity data, including sunspot counts and other related indicators. Averages can be taken on a monthly or annual basis, and the trend and periodicity modeled to estimate the time window of future solar cycles. Machine learning methods, such as neural networks, can also be used to capture more complex patterns.
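A minimal sketch of this time-series approach, using the yearly sunspot series bundled with statsmodels as a stand-in for the monthly SILSO data (the ARIMA order (9, 0, 1) is an illustrative assumption, not a tuned choice):

```python
# Sketch: fit an ARIMA model to historical sunspot numbers and extrapolate it
# to look for the boundaries of the current and next cycle.
import pandas as pd
import statsmodels.api as sm

data = sm.datasets.sunspots.load_pandas().data          # columns: YEAR, SUNACTIVITY
index = pd.period_range(start=str(int(data["YEAR"].iloc[0])),
                        periods=len(data), freq="Y")
series = pd.Series(data["SUNACTIVITY"].to_numpy(), index=index)

model = sm.tsa.ARIMA(series, order=(9, 0, 1)).fit()     # AR terms span most of an ~11-year cycle
forecast = model.forecast(steps=24)                     # extrapolate roughly two cycles ahead

# Minima of the forecast suggest cycle boundaries, its maximum the next solar maximum.
print("lowest forecast year:", forecast.idxmin(), " highest:", forecast.idxmax())
```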

Question 2: Predicting the onset and duration of the solar maximum of the next solar cycle

Solar maxima are associated with peaks in the number of sunspots. Time series analysis can be used to predict these peaks and, from them, to estimate the onset and duration of the solar maximum. Machine learning models can also be used to capture the various factors that influence the solar maximum more accurately. Given the complexity of solar activity, combining several models or using deep learning methods may help improve prediction accuracy.
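One simple way to locate past maxima and estimate their typical spacing is to smooth the series and detect its peaks; the smoothing window, minimum peak distance, and prominence below are illustrative assumptions:

```python
# Sketch: mark the solar maximum of each historical cycle with a peak detector.
import statsmodels.api as sm
from scipy.signal import find_peaks

data = sm.datasets.sunspots.load_pandas().data
smoothed = data["SUNACTIVITY"].rolling(window=13, center=True, min_periods=1).mean()

peaks, _ = find_peaks(smoothed.to_numpy(), distance=8, prominence=20)
maxima_years = data["YEAR"].iloc[peaks].astype(int)

print("historical solar maxima (approx. years):", list(maxima_years))
# The spacing between successive maxima gives cycle lengths; projecting the mean
# spacing forward from the latest maximum crudely dates the next one.
print("mean spacing between maxima (years):", maxima_years.diff().mean())
```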

Question 3: Predicting the number and area of sunspots in the current and next solar cycle, and explaining the reliability of the model

For predicting the number and area of sunspots, time series analysis, statistical models, and machine learning methods can be combined. The model is trained on historical data and adjusted for the various drivers of solar activity to improve its accuracy. Given the uncertainty of solar activity, a sensitivity analysis should also be performed, and the corresponding confidence intervals or probability distributions should be reported with the results.
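Reliability can be backed up by backtesting: hold out the most recent cycles, refit the model, and compare the forecast and its confidence interval with what was actually observed. A minimal sketch, with the hold-out length and model order as assumptions:

```python
# Sketch: hold-out evaluation of an ARIMA forecast with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = sm.datasets.sunspots.load_pandas().data
index = pd.period_range(start=str(int(data["YEAR"].iloc[0])),
                        periods=len(data), freq="Y")
series = pd.Series(data["SUNACTIVITY"].to_numpy(), index=index)

train, test = series[:-33], series[-33:]                # hold out roughly three cycles
fit = sm.tsa.ARIMA(train, order=(9, 0, 1)).fit()
pred = fit.get_forecast(steps=len(test))
ci = pred.conf_int(alpha=0.05)

rmse = np.sqrt(np.mean((pred.predicted_mean.to_numpy() - test.to_numpy()) ** 2))
coverage = np.mean((test.to_numpy() >= ci.iloc[:, 0].to_numpy()) &
                   (test.to_numpy() <= ci.iloc[:, 1].to_numpy()))
print(f"hold-out RMSE: {rmse:.1f}, 95% interval coverage: {coverage:.2f}")
```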

Question B (MCM): Industrial Surface Defect Detection

Question B restatement

Restatement of Question B of the 2023 Certification Cup International Competition (Little America Competition): Surface defects in metal or plastic products not only affect the appearance of the product but may also seriously degrade its performance or durability. Automatic surface anomaly detection has therefore become an interesting and promising research area with a very high and direct impact on visual inspection applications. The Kolektor Group provides a dataset of images of defective production items, and we would like to use this dataset as an example to study mathematical models that automatically detect product surface defects from photographs.

Domen Tabernik, Danijel Skočaj, and their co-authors built a model that uses deep learning to detect surface defects and that is claimed to provide good recognition even with a small amount of training data. However, our problem here is slightly different. First, we want the model to be deployable on cheap handheld devices. Such devices have very limited storage space and computing power, so the model must be economical in both computation and required storage. Second, since this dataset does not contain all defect patterns, we expect the model to generalize reasonably well when it encounters other defect types. You and your team are asked to build easy-to-use mathematical models to complete the following tasks.

Task:

  1. Determine whether surface defects appear in photographs and measure the computational effort and storage required for the model to do so;

  2. Automatically mark locations or areas where surface defects occur and measure the computational effort, storage space and marking accuracy required for the model.

  3. Please make clear the generalization ability of your model, i.e., explain why your model remains feasible when the defect types it encounters are not all covered by the training data.

Analysis of ideas for question B

Establishing an automatic surface anomaly detection model suitable for cheap handheld devices with limited computing and storage resources is a challenging task that requires jointly considering the model's compactness, efficiency, and generalization ability. Suggestions for each task are given below.

Task 1: Detecting surface defects

Lightweight model selection: Considering the resource limitations of handheld devices, it is advisable to choose lightweight deep learning models such as MobileNet, ShuffleNet, or EfficientNet. These models have fewer parameters and lower computational requirements while maintaining high accuracy.
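A minimal sketch, assuming PyTorch/torchvision, of adapting MobileNetV3-Small into a binary defect/no-defect classifier and using its parameter count as a first proxy for the storage footprint Task 1 asks to report:

```python
# Sketch: lightweight defect/no-defect classifier with a parameter-count estimate.
import torch
from torchvision import models

model = models.mobilenet_v3_small(weights=None)          # lightweight backbone
in_features = model.classifier[-1].in_features
model.classifier[-1] = torch.nn.Linear(in_features, 2)   # defect vs. no defect

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.2f} M (~{n_params * 4 / 1e6:.1f} MB at float32)")

dummy = torch.randn(1, 3, 224, 224)                       # stand-in for a product photo
print(model(dummy).shape)                                 # torch.Size([1, 2])
```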

Model compression: Use model compression techniques such as quantization, pruning, and knowledge distillation to reduce the model's storage requirements and computational cost. These techniques can significantly reduce model size while largely preserving performance.
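As one example of the compression step, post-training dynamic quantization in PyTorch stores the classifier's linear layers in int8; the sketch below only measures the resulting size reduction (pruning or distillation would be separate steps):

```python
# Sketch: compare the serialized size of a model before and after dynamic quantization.
import io
import torch
from torchvision import models

def size_mb(m):
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

model = models.mobilenet_v3_small(weights=None).eval()
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

print(f"float32 model: {size_mb(model):.1f} MB, dynamically quantized: {size_mb(quantized):.1f} MB")
```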

Lightweight feature extraction: Consider using lightweight feature extractors to reduce computational burden. In deep learning models, the feature extraction layer is usually the computationally intensive part.

Storage and computing optimization: Optimize the inference process of the model, such as using a lightweight inference engine to reduce memory usage and accelerate inference speed.

Evaluation Metrics: Measure the accuracy of the model in detecting surface defects and record the amount of computation and storage required.

Task 2: Marking the locations of surface defects

Object detection model: Choose a lightweight object detection model suitable for embedded devices, such as Tiny YOLO or SSD Lite. These models can mark the locations of surface defects fairly accurately.
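A minimal sketch, assuming torchvision's SSDLite320 with a MobileNetV3-Large backbone, showing how the detector returns bounding boxes that can serve as defect-location marks:

```python
# Sketch: lightweight detector producing candidate defect boxes for a photo.
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

detector = ssdlite320_mobilenet_v3_large(weights=None, weights_backbone=None,
                                         num_classes=2).eval()   # background + defect

image = torch.rand(3, 320, 320)                    # placeholder for a product photo
with torch.no_grad():
    output = detector([image])[0]                  # dict with boxes, labels, scores
print(output["boxes"].shape, output["scores"].shape)
```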

Model Accuracy: Optimize the model to improve accuracy in marking defect locations. This may require more computational and storage resources, but can be balanced by further model compression and optimization.

Storage and Compute Optimization: Likewise, the inference process is optimized to accommodate the resource constraints of embedded devices.

Marking accuracy: Evaluate the accuracy of the model in marking surface defect locations, and record the computational effort, storage space, and marking accuracy.

Task 3: Model generalization ability

Data augmentation: Use data augmentation techniques such as rotation, flipping, and scaling to simulate more defect types and scenarios and thereby improve the model's generalization to unseen defect types.
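A minimal sketch of such an augmentation pipeline with torchvision transforms; the rotation, crop, and jitter parameters are illustrative assumptions:

```python
# Sketch: augmentation pipeline applied to training photos to simulate more
# defect appearances (flips, rotations, scale changes, lighting variation).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Pass it to the training Dataset, e.g. ImageFolder("defect_photos/", transform=augment)
# ("defect_photos/" is a hypothetical folder name).
```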

Transfer learning: Improve the model's ability to adapt to diverse defects by pre-training on a data set containing other defect types.

Model robustness: Pay attention to robustness when designing the model so that it can maintain relatively good performance in the face of unknown defect types.

Model interpretability: Increase model interpretability, allowing users to understand the model's behavior on different defect types and thereby better understand its generalization capabilities.

Taking the above suggestions together, a lightweight, efficient surface defect detection model with good generalization ability can be built and deployed on a handheld device. In practice, careful tuning and validation are required to ensure that the model meets the specific requirements.

Question C (ICM): Avalanche Protection

Question C restatement

Restatement of Question C of the 2023 Certification Cup International Competition (Little America Competition): Avalanches are an extremely dangerous phenomenon. Today we have a good understanding of how avalanches form, but we are not yet able to predict in detail exactly what will trigger an avalanche, or when and where it will be triggered. Villages and roads can be protected from avalanches in various ways. Some possibilities include restricting construction in vulnerable areas, preventing avalanches from forming by planting forests or installing barriers, reducing avalanche impacts through protective structures such as snow sheds, and using explosives to trigger avalanches artificially before too much snow accumulates [2].

Our focus here is on using explosives to trigger small artificial avalanches. What needs to be determined is the appropriate time and the associated parameters for triggering the explosion. While using more explosives provides better personal safety, it disturbs the animals that live in these areas. For human safety, making slopes safer by artificially triggering avalanches is of great value. However, conservationists object that artificially triggered avalanches over large areas, especially around ski resorts, are having an increasingly negative impact on animals. In addition, when snow falls on warm ground and is compacted by strong winds, it becomes hard [3]. With widespread heavy snowfall and strong winds, the snowpack has become increasingly stable, and the success rate of triggering has become lower and lower. That is why we need you and your team to build robust models to study this problem.

Task:

  1. Find useful and easily measurable parameters to measure the risk of avalanche occurrence.

  2. For slopes at risk of avalanches, we need to conduct simple field surveys to determine the appropriate timing for using blasting to induce small avalanches, the placement of the explosives, and the appropriate charge size.

Analysis of ideas for question C

Task 1: Avalanche risk assessment parameters

1.1 Snow layer structure and stability

Density distribution: By measuring the density distribution of the snow layer, the structure and stability of the snowpack can be assessed. Unstable snow makes avalanches more likely.

Temperature gradient: The temperature gradient within the snow layer will affect the structure of the snow, and large temperature gradients may increase the risk of avalanches.

1.2 Terrain and slope

Slope angle: Steep slopes increase avalanche risk. Slope can be measured from topographic maps and remote sensing data.

Terrain Curvature: Consider the curvature of the terrain, as snow is more likely to accumulate on uneven terrain.

1.3 Weather and snowfall

Snowfall: Heavy snowfall may increase avalanche risk. Measure the amount and frequency of snowfall.

Wind direction and speed: Strong winds can move and deposit snow, affecting avalanche formation.

1.4 Snow layer friction and structure

Friction: Assess the friction within the snow layer to determine how easily the snow slides.

Adhesion: Consider snow adhesion, or how tightly it sticks to the ground.
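As a concrete illustration of how the easily measurable parameters above could be combined into a single risk indicator, here is a minimal sketch; all normalization ranges and weights are assumptions that would have to be calibrated against historical avalanche records:

```python
# Sketch: a 0-1 avalanche risk index built from easily measurable quantities.
import numpy as np

def risk_index(slope_deg, new_snow_cm, wind_ms, temp_gradient_K_per_m, snow_density):
    """Weighted combination of normalized risk factors (weights are assumed)."""
    features = np.array([
        np.clip((slope_deg - 25) / 20, 0, 1),        # steep slopes are riskier
        np.clip(new_snow_cm / 50, 0, 1),             # recent snowfall loading
        np.clip(wind_ms / 20, 0, 1),                 # wind transport of snow
        np.clip(temp_gradient_K_per_m / 20, 0, 1),   # strong gradients weaken the pack
        np.clip((0.3 - snow_density) / 0.2, 0, 1),   # low-density, loose snow
    ])
    weights = np.array([0.30, 0.25, 0.15, 0.15, 0.15])
    return float(weights @ features)

print(risk_index(slope_deg=38, new_snow_cm=40, wind_ms=12,
                 temp_gradient_K_per_m=15, snow_density=0.18))
```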

Task 2: Field investigation of artificially induced small avalanches

2.1 Field survey

Avalanche frequency: Identify areas with a high frequency of avalanches through field surveys.

Topographic features: Examine the topographic features of the slope, such as concavities and convexities, vegetation distribution, etc.

2.2 Parameters for blasting-induced small avalanches

Timing: Based on the snow layer structure and weather conditions, determine the appropriate time window for blasting.

Explosive placement: Determine the best location for explosive placement, usually at a certain depth within the snow layer.

Charge size: Based on the size of the slope and the structure of the snow layer, determine the appropriate charge size to trigger a small-scale avalanche.

2.3 Environmental protection and impact on animals

Protection measures: Design protection measures to reduce negative impacts on the surrounding ecological environment.

Animal migration: Consider the seasons when animals migrate and avoid blasting during critical times.

2.4 Model establishment

Mathematical model: Use the collected data to establish a mathematical model, which can include terrain analysis, snow layer structure, weather data, etc., to predict avalanche risk and optimize the blasting parameters.
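A minimal sketch of such a data-driven model: fit a logistic regression on (here synthetic, in practice historical) observations of the features above to estimate the probability that a slope releases, which can then be used to rank candidate blasting windows. The feature set and the synthetic data are assumptions:

```python
# Sketch: logistic-regression release-probability model on synthetic survey data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: slope (deg), 24 h snowfall (cm), wind speed (m/s), temperature gradient (K/m)
X = rng.uniform([20, 0, 0, 0], [50, 80, 25, 25], size=(200, 4))
# synthetic labels: steeper, snowier, windier slopes release more often
score = 0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.03 * X[:, 2] + 0.02 * X[:, 3]
y = (score + rng.normal(0, 0.5, 200) > 2.6).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
candidate = np.array([[38, 45, 12, 15]])            # one surveyed slope / time window
print("estimated release probability:", clf.predict_proba(candidate)[0, 1])
```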

Question D (ICM): The Twilight Factor of Telescopes

Question D restatement

Restatement of Question D of the 2023 Certification Cup International Competition (Little America Competition): When we use an ordinary optical telescope to observe a distant target in dim light, the larger the entrance aperture, the more light enters the binoculars. The greater the magnification of a telescope, the narrower the field of view and the darker the image; but the higher the magnification, the larger the target appears and the more detail can be seen [1]. We need a comparative value for the suitability of binoculars. Zeiss uses an empirical formula called the twilight (dusk) factor, defined as

Z = √(m · d),

where m is the magnification and d is the lens diameter in mm.

The twilight factor is a number used to compare the effectiveness of binoculars or spotting scopes. The higher the twilight factor, the more detail you can see in low light. However, the twilight factor can also be misleading, as the following example shows: two binoculars, 8 x 56 and 56 x 8 (such a model does not exist, but is theoretically possible), have the same twilight factor of 21.2. While the 8 x 56 model is ideal at dusk, the 56 x 8 model would be completely unusable even during daylight hours [3].

We would like a more useful metric for expressing a telescope's performance in low light that uses only basic parameters. This would provide a normative reference for telescope selection. More detailed metrics reflecting image quality, such as contrast, transmission, and color reproduction, are beyond the scope of our discussion.

Task:

1. Please consider the visual characteristics of the human eye in dim light, establish a reasonable model, and propose a dusk coefficient algorithm suitable for direct observation by the human eye with binoculars.

2. If the visual receptor is not the human eye, but a CMOS video recording device, please consider the perception characteristics of CMOS in dim light and establish a reasonable mathematical model, and propose a twilight coefficient algorithm for lenses suitable for CMOS video recording.

Note: When studying the above issues, if the performance parameters of photoreceptors are involved, please find the required data yourself. Alternatively, you may work through some fictitious examples in your paper, but you should give reasonable definitions of the required parameters and an achievable, low-cost method of measuring them, so that we can carry out the measurement according to your plan and obtain the final result.

Analysis of ideas for question D

Task 1: Dusk coefficient algorithm directly observed by human eyes

1.1 Considerations on the visual characteristics of the human eye

  • Pupil diameter: The pupil of the human eye dilates in low light, allowing more light to enter the eye.

  • Contrast perception: The human eye is more sensitive to objects with higher contrast, so the impact of contrast on image perception can be considered in the model.

1.2 Dusk coefficient algorithm
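The problem leaves the exact functional form open; one possible formulation, assuming the two correction factors simply rescale the classical twilight factor √(m · d) (this multiplicative structure is an assumption, not part of the problem statement):

```latex
% Hypothetical human-eye dusk coefficient; the multiplicative form and the
% specific pupil correction are assumptions.
Z_{\mathrm{eye}} = \sqrt{m \cdot d} \cdot F_{\mathrm{pupil}} \cdot F_{\mathrm{contrast}},
\qquad
F_{\mathrm{pupil}} = \min\!\left(1,\ \frac{m \, d_{\mathrm{pupil}}}{d}\right)
```

Here d_pupil is the observer's dilated pupil diameter (about 7 mm in darkness), so F_pupil penalizes combinations whose exit pupil d/m is larger than the eye can actually use; F_contrast would be fixed by a contrast-sensitivity measurement.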

Where:

  • m is the magnification;
  • d is the lens diameter in mm;
  • Pupil Factor is a coefficient related to pupil diameter, accounting for how much the pupil dilates in low light;
  • Contrast Factor is a coefficient related to contrast, accounting for the impact of contrast on visual perception.

Task 2: Dusk coefficient algorithm for CMOS video equipment

2.1 Considerations on the CMOS sensing characteristics

  • Sensor sensitivity: Consider the sensitivity of the CMOS sensor in low light, generally expressed as an ISO value.

  • Noise level: Noise generated by CMOS sensors may affect image quality under low-light conditions.

2.2 Dusk coefficient algorithm
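Analogously, and again only as an assumed multiplicative form, the coefficient for a CMOS receiver could be written as:

```latex
% Hypothetical CMOS dusk coefficient; the multiplicative form and both
% normalizations are assumptions.
Z_{\mathrm{CMOS}} = \sqrt{m \cdot d} \cdot F_{\mathrm{ISO}} \cdot F_{\mathrm{noise}},
\qquad
F_{\mathrm{ISO}} = \sqrt{\frac{\mathrm{ISO}}{\mathrm{ISO}_{0}}},
\qquad
F_{\mathrm{noise}} = \frac{1}{1 + \sigma / \sigma_{0}}
```

ISO_0 and σ_0 are reference values for a baseline sensor (so both factors equal 1 for the reference device), and σ is the dark-frame noise level measured as described below.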

Where:

  • m is the magnification;
  • d is the lens diameter in mm;
  • ISO Factor is a coefficient related to the camera's ISO value, accounting for the sensor's sensitivity in low-light conditions;
  • Noise Factor is a coefficient related to the camera's noise level, accounting for the impact of noise on image quality.

Suggestions on parameter measurement methods

Dusk coefficient for direct observation by the human eye:

  • Pupil diameter measurement: Measure the pupil diameter in low-light conditions using a pupillometer or a camera.

  • Contrast measurement: Measure image contrast using image processing tools.

Twilight coefficient for CMOS video equipment:

  • ISO value measurement: Read the ISO value directly from the camera settings or obtain it through image processing tools.

  • Noise level measurement: Take a pure black (dark-frame) image in low-light conditions and measure the noise level in the image, as in the sketch below.

With the above recommendations, the measurements can be carried out and the twilight coefficient calculated from the resulting data, providing a standardized metric for comparing telescope performance in low light.
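A minimal sketch of the dark-frame noise measurement; "dark_frame.png" is a hypothetical file name for an exposure taken with the lens capped:

```python
# Sketch: estimate sensor noise from a dark frame as the pixel standard deviation.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("dark_frame.png").convert("L"), dtype=np.float64)
print(f"mean level: {frame.mean():.2f}, noise (std dev): {frame.std():.2f}")
```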


Origin blog.csdn.net/qq_45857113/article/details/134732107