[Paper Summary] Notes on the Application of Deep Learning in Agriculture (No. 10)

1. Optimized deep residual network system for diagnosing tomato pests

Tomato production is constantly threatened by several pests, mainly whitefly and cotton bollworm. Because these stubborn pests are present throughout the tomato growing season, timely detection and control help reduce economic losses. In recent years, deep learning has been widely used to identify plant pests and diseases. The performance of deep learning models is strongly affected by the network structure and hyperparameters, and the hyperparameters in particular often require manual selection. To obtain suitable hyperparameters, this paper improves the fruit fly optimization algorithm and uses the improved algorithm to optimize the learning rate of a deep residual network. Experimental results show that the improved fruit fly optimization algorithm finds a better learning rate. Compared with 6 other networks, the average diagnostic accuracy of IResNet50 across 7 tomato pests is 94.4%, better than the other models.
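The core idea of fruit-fly-style hyperparameter search can be sketched as follows: a swarm of candidate "flies" randomly perturbs the current best learning rate (in log space) and the swarm relocates to whichever candidate "smells" best, i.e. gives the lowest validation loss. This is a minimal generic sketch, not the paper's improved algorithm; `validation_loss` is a hypothetical stand-in for actually training and validating the network.

```python
import random

def validation_loss(log_lr):
    # Toy surrogate: pretend the network's validation loss is a smooth
    # bowl with its minimum near lr = 1e-3 (log10 lr = -3). Hypothetical.
    return (log_lr + 3.0) ** 2 + 0.1

def foa_search(n_flies=20, n_iter=60, seed=0):
    """Greedy fruit-fly-style random search over log10(learning rate)."""
    rng = random.Random(seed)
    best_pos = rng.uniform(-6.0, 0.0)      # swarm centre in log10(lr) space
    best_loss = validation_loss(best_pos)
    for _ in range(n_iter):
        for _ in range(n_flies):
            candidate = best_pos + rng.uniform(-0.5, 0.5)  # random "flight"
            loss = validation_loss(candidate)
            if loss < best_loss:           # better "smell": move the swarm
                best_loss, best_pos = loss, candidate
    return 10 ** best_pos, best_loss
```

In a real run, `validation_loss` would train the network for a few epochs at the candidate learning rate and return the validation loss.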

  • Summary: +1 point for the improved ResNet algorithm, -1 point for a well-worn topic, +1 point for the multi-method comparison.

2. Applying convolutional neural networks for detecting wheat stripe rust transmission centers under complex field conditions using RGB-based high spatial resolution images from UAVs

The use of drones provides a timely and low-cost way to acquire high-spatial-resolution imagery for crop disease detection. This study explored convolutional neural networks (CNNs) and RGB-based UAV high-spatial-resolution images to detect wheat stripe rust transmission centers (the infected area accounted for less than 1.35%) under complex field conditions in Hubei. To take full advantage of end-to-end learning, a CNN semantic segmentation architecture (DeepLabv3+) was applied to pixel-wise classify images into healthy wheat and stripe rust-infected wheat (SRIW). Using a rich dataset covering different field conditions and sunlight properties, the model detects SRIW accurately (rust-class F1 = 0.81). The study also evaluates the impact of the classification framework and spatial resolution on model training. The results show improved rust-class accuracy when CNNs with imbalanced classes are trained using a multi-branch binary framework rather than a multi-class framework, while a coarser spatial resolution (8 cm) significantly reduces the rust-class F1-score. Furthermore, a macroscopic disease index (MDI) was defined to quantitatively measure the occurrence of SRIW. The results demonstrate the power of ultra-high-spatial-resolution UAV imaging in detecting SRIW. With the end-to-end deep learning segmentation method greatly reducing the need for intensive preprocessing, the combination of CNNs and RGB-based UAV ultra-high-spatial-resolution images provides a simple and fast method for large-scale, accurate detection of crop diseases.
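The paper defines a macroscopic disease index (MDI) to quantify rust occurrence; its exact formula is not reproduced in the abstract, but one plausible minimal form, the fraction of rust-labelled pixels in the segmentation output, can be computed like this (the class id is an assumption):

```python
import numpy as np

RUST = 1  # hypothetical class id for stripe-rust pixels in the mask

def macroscopic_disease_index(mask):
    """Fraction of pixels labelled as rust in a segmentation mask.

    `mask` is an (H, W) integer array of class ids. This is a plausible
    minimal form of an MDI, not the paper's published definition.
    """
    return float((mask == RUST).sum()) / mask.size
```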

  • Summary: This paper is much more informative than the previous one. Although it also addresses pest/disease detection, it poses a specific scientific problem, +0.5 points. +2 points for semantic segmentation, +2 points for high-resolution georeferenced aerial images captured by drone flights, +1 point for the comparison of multiple semantic segmentation algorithms (DeepLabv3+, FPN, U-Net, LinkNet, MAnet, PAN, PSPNet).

3. Improving vegetation segmentation with shadow effects based on double input networks using polarization images

Fractional vegetation cover (FVC) plays an important role in studying vegetation growth state, and the key issue is to accurately segment and extract green vegetation from the background. However, shadows produced under natural light create extreme illuminance differences in the image, which greatly reduce the accuracy of vegetation extraction. The polarization information of a ground object is largely independent of its reflectance state; it can be used to suppress strong reflections in the image to a certain extent, reduce illumination differences under extreme sunlight, and help improve vegetation recognition under shadowed conditions. To improve the accuracy of vegetation segmentation under shaded conditions, this paper exploits vegetation polarization reflection information and proposes an improved semantic segmentation network, a dual-input residual network based on DeepLabv3plus (DIR_DeepLabv3plus), together with a fusion strategy based on cascade addition. The network independently extracts low-level and high-level features at different spatial scales from light-intensity (red-green-blue, RGB) images and degree-of-linear-polarization (DoLP) images through a deep residual network and the Atrous Spatial Pyramid Pooling (ASPP) structure, which effectively improves the accuracy of vegetation segmentation under shaded conditions. The results show that the mean intersection over union (mIoU) values for unshaded, lightly shaded, and shaded vegetation are 94.01%, 92.508%, and 90.969%, respectively. Compared with a color-index method and the shadow-resistant algorithm (SHAR-LABFVC) for extracting green fractional vegetation cover from digital images, the method greatly improves extraction accuracy; compared with the same method without polarization information, the mIoU values of vegetation under the different shading conditions are 0.18%, 1.00%, and 1.49% higher, respectively. This study provides a new vegetation segmentation method that improves the accuracy of FVC calculation in shaded conditions.
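The DoLP input to the dual-input network can be computed from four polarizer-angle images via the linear Stokes parameters; this is the standard textbook formula, and the paper's acquisition pipeline is not modelled here:

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-angle images
    (0, 45, 90, 135 degrees), via the linear Stokes parameters."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=float)
                          for a in (i0, i45, i90, i135))
    s0 = (i0 + i45 + i90 + i135) / 2.0   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # +45 vs -45 degree component
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```

Fully polarized light gives DoLP near 1, unpolarized light gives 0, so the DoLP image highlights specular (shadow/highlight-related) surfaces that the RGB branch struggles with.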

  • Summary: Semantic segmentation +2, shadow theme +0.5, improvement method +2 points, own data set +1 point.

4. 3D point cloud density-based segmentation for vine rows detection and localisation

The adoption of new sensors for crop monitoring has resulted in the acquisition of large amounts of data that are often not directly usable for agricultural applications. 3D point cloud maps of fields and plots generated from remote sensing data are an example of such big data, requiring specific algorithms to process and interpret them with the ultimate goal of obtaining valuable information about crop conditions.
This manuscript presents an innovative 3D point cloud processing algorithm for vine row detection and localization in vineyard maps, based on keypoint detection and density-based clustering. Vine row localization is a key stage in interpreting the complex and huge 3D point clouds of agricultural environments, making it possible to shift the focus from the macro level (plot scale) to the micro level (plants, fruits, and branches). The output of the algorithm fully describes the spatial position of each vine row in the 3D model of the whole agricultural environment through a set of keypoints and an interpolation curve. In particular, the algorithm is robust and: (i) independent of the onboard sensors used to acquire field data (models with color or spectral information are not required); (ii) able to manage any vineyard orientation (e.g., curved rows); and (iii) unhindered by gaps from missing plants. Experimental results obtained by processing the models of seven case plots prove the reliability and accuracy of the algorithm: automatic detection of vine rows is 100% consistent with manual detection, and the positioning metrics show an average error of 12 cm with a standard deviation of 10 cm, fully compatible with the considered agricultural applications. Additionally, the algorithm output can be used to enhance path planning for autonomous agricultural machines performing field operations.
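Density-based clustering of the kind used for vine row detection can be illustrated with a minimal DBSCAN over 2D points (think of a top-down projection of the point cloud, where each dense elongated cluster is one row). This is a generic sketch, not the paper's algorithm:

```python
import math

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN over 2D points (list of (x, y) tuples).
    Returns one label per point; -1 marks noise. Illustrative only."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # provisional noise
            continue
        cluster += 1                     # new dense region found
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:       # core point: keep expanding
                queue.extend(nb)
    return labels
```

Two parallel rows of points separated by more than `eps` come out as two clusters, which is the property the row-detection stage relies on.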

  • Summary: This article uses a 3D point cloud processing algorithm (+2.5 points), and uses drones to measure vines in 7 vineyards (a large amount of actual data +3 points), and the theme is novel (+1 point).

5. Design of smart seed sensor based on microwave detection method and signal calculation model

Most of the sensors currently used for sowing monitoring are photoelectric sensors. They are susceptible to dust and light and cannot accurately identify overlapping seeds. Monitoring accuracy still needs to be improved. In this study, a smart seed sensor based on a microwave detection method and a signal calculation model was designed to improve the dust resistance and monitoring accuracy of double-overlapping seeds. The smart seed sensor uses a microwave radio frequency front-end as a signal source to generate a 24 GHz electromagnetic wave signal. The relative motion between the sensor and the seed produces a weak IF (intermediate frequency) signal. After amplifying and filtering the IF signal, the sensor produces a raw pulse signal. Different from traditional sensors, this smart seed sensor adds a calculation and identification part: the original pulse signal is collected by the voltage acquisition module, the generated voltage signal is recorded, the number of seeds is determined according to the voltage signal calculation model, and then the corresponding seed signal is output. The seed signal is finally collected by the seed monitoring system, and the target parameters are calculated. The smart seed sensor designed in this study was compared with conventional photoelectric sensors and high-frequency radio wave type WaveVision seed sensors. The test results show that the smart seed sensor has a monitoring accuracy of more than 99% for a single seed, and a recognition accuracy of 56.3% for double overlapping seeds. However, the identification accuracy of the photoelectric sensor and the WaveVision seed sensor for double overlapping seeds is almost 0. Moreover, smart seed sensors are superior to photoelectric sensors in terms of dust resistance. Smart seed sensors have better comprehensive performance. This study has a positive effect on the further development of seeding monitoring technology.
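The pulse-counting step of the signal calculation model can be illustrated with a minimal rising-edge counter over the sampled voltage trace. The paper's actual model additionally resolves double-overlapping seeds from the pulse shape, which this sketch does not attempt; the threshold value is an assumption:

```python
def count_pulses(voltages, threshold=0.5):
    """Count rising edges (threshold crossings) in a sampled voltage
    trace - a minimal stand-in for the seed-counting step. Each pulse
    corresponds to one seed passing the sensor."""
    count, above = 0, False
    for v in voltages:
        if v >= threshold and not above:
            count += 1                   # new pulse begins
        above = v >= threshold
    return count
```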

  • Summary: This paper mainly studies a new type of sensor, and has done various experiments to verify that the sensor has improved accuracy in anti-dust, seed overlap, etc. compared with traditional sensors. The theme of the research is relatively new + 1 point, hardware design + 3 points, multi-faceted experimental design and detection effectiveness + 2 points.

6. A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings

Solanum rostratum Dunal is a common invasive weed that can damage native ecosystems and biodiversity. Detecting it at an early stage of growth allows it to be treated before serious damage is done. To this end, a convolutional neural network model, YOLO_CBAM, was constructed for detecting Solanum rostratum Dunal seedlings; it combines YOLO v5 with an attention mechanism. A method for slicing high-resolution images by calculating an overlap ratio is devised to construct the dataset, which reduces the loss of detail caused by compressing high-resolution images during training. Multi-scale training was used to further improve training performance. Comparative tests show that the precision and recall of the proposed YOLO_CBAM are higher than those of YOLO v5. After multi-scale training, network performance improves further, with the average precision (AP) of YOLO_CBAM increasing from 0.9017 to 0.9272. The trained model was deployed to a Jetson AGX Xavier for field trials, where its precision was 0.9465 and its real-time recognition recall was 0.9017. The detection speed and quality are suitable for real-time field detection of Solanum rostratum Dunal seedlings.
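The slicing of high-resolution images with a chosen overlap ratio can be sketched as a generic tiling routine (this mirrors the idea, not the paper's exact procedure):

```python
def tile_origins(length, tile, overlap=0.2):
    """Start offsets for slicing one image dimension of size `length`
    into `tile`-sized windows with the given overlap ratio. The final
    window is snapped back so the image edge is always covered."""
    step = max(1, int(tile * (1 - overlap)))
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:       # make sure the edge is covered
        starts.append(length - tile)
    return starts
```

Applying it to both image axes yields the crop windows; overlapping tiles ensure a seedling on a tile boundary appears whole in at least one crop.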

  • Summary: Weed detection with YOLOv5 plus an attention mechanism +3 points, hardware deployment +2 points = 5 points. When reading this paper, I can't help asking: SO WHAT? As shown in a figure in the paper, a robot carries the equipment to capture images of ground weeds for identification. Even if the recognition accuracy is very high, what practical problem does it solve?

7. N distribution characterization based on organ-level biomass and N concentration using a hyperspectral lidar

Accurate estimation of nitrogen (N) concentration and biomass (W) in plant organs provides information on nitrogen distribution mechanisms, which is crucial for improving nitrogen use efficiency (NUE) and optimizing nitrogen management. Passive remote sensing data suffer from asymptotic saturation when extracting W, while commercial lidar systems employing only one band have limited capability for N retrieval. Combining the advantages of passive remote sensing and conventional lidar, hyperspectral lidar (HSL) can simultaneously extract structural and spectral information of plants. The purpose of this study was to evaluate the ability of HSL to estimate N concentration and W in maize at the organ level, and to test whether HSL can characterize N distribution in maize at different growth stages and under different N fertilization conditions. Good HSL performance (R = 0.71–0.91) was observed for leaf and stem N extraction using partial least squares regression (PLSR) with spectral indices as input. A strong relationship (R ≥ 0.75) was established between the extracted height indicators (stem height and plant height) and organ-level W. Estimated W allocation, N concentration dynamics, and changes in N with W accumulation were successfully monitored, demonstrating the ability of HSL to characterize N distribution patterns within maize plants. These findings suggest that the novel HSL system has great potential for monitoring plant nitrogen distribution and serving precision agriculture.

  • Summary: There are many existing studies measuring plant nitrogen and biomass content, so the subject is relatively old, -0.5. Hyperspectral lidar used for prediction +3 points, and the authors' own data and experiments +2 points = 4.5 points.

8. Practical cucumber leaf disease recognition using improved Swin Transformer and small sample size

Convolutional neural network (CNN)-based deep learning methods have been widely used for dataset augmentation and identification of plant leaf diseases. Compared with CNNs, recently developed Transformer-based models such as the Swin Transformer (SwinT) show very competitive or even better performance on various vision benchmarks. In this paper, a modified SwinT-based backbone network is proposed and applied to data augmentation and identification of practical cucumber leaf diseases. First, SwinT's patch partitioning is improved by stepwise small-patch embedding, which enhances feature extraction without increasing the number of parameters. Second, a leaf extraction module composed of the proposed backbone and Grad-CAM is integrated into a generative adversarial network (GAN) to build STA-GAN (a GAN based on SwinT and attention guidance), which generates disease spots only on the leaf regions of healthy images with complex backgrounds, thereby augmenting the disease dataset. Finally, the cucumber leaf disease recognition model is trained with the proposed backbone using transfer learning and the augmented dataset. The experimental results show that STA-GAN generates high-quality images better than LeafGAN, and that disease recognition accuracy with the proposed backbone is higher than with the original SwinT, EfficientNet-B5, or ResNet-101 as the backbone. These results show that the improved SwinT-based method really helps to improve both data augmentation and practical cucumber leaf disease identification. The proposed method has the potential to address the common challenges of insufficient data and complex backgrounds in other, similar plant science tasks.

  • Summary: The topic of identifying diseased leaves - 1 point, but identifying with a small sample is a popular direction + 1 point. Improved Swin Transformer model +3 points. Unsupervised learning +1 point, own data including 4 leaf diseases +0-0.5 points.

9. A novel approach for the 3D localization of branch picking points based on deep learning applied to longan harvesting UAVs

Longan is a well-known specialty fruit and cultivated medicinal plant with important food and medicinal value, and improving harvest productivity is an important issue. At present, longan is mainly planted in hilly areas; given the complex site conditions and tall trees, ground harvesting machinery cannot work properly. Aiming at longan fruit picking by UAV, this study proposes a method combining an improved YOLOv5s model, an improved DeepLabv3+ model, and depth image information for three-dimensional (3D) localization of picking points in complex natural scenes. First, the improved YOLOv5s model quickly detects longan fruit clusters and main fruit branches in complex orchard environments; the correct main fruit branch is selected according to their relative positions and used as the input to the semantic segmentation model. Second, the improved DeepLabv3+ model semantically segments the extracted image to obtain the 2D coordinates of the main fruit branch. Finally, combined with the growth characteristics of longan fruit clusters, RGB-D information fusion is performed on the main fruit branch in 3D space to obtain its central axis and pose, as well as the 3D coordinates of the picking point, providing destination information for longan-harvesting UAVs. To verify the effectiveness of the proposed method, identification and localization tests of main fruit branches and picking points were carried out in a longan orchard. The experimental results show a detection accuracy of 85.50% for longan fruit clusters and main fruit branches, and a semantic segmentation accuracy of 94.52% for main fruit branches. The whole algorithm takes 0.58 s in an actual scene and can quickly and accurately locate the picking point.
To sum up, this paper exploits the combination of convolutional neural networks and RGB-D image information, further improving the efficiency with which longan-harvesting UAVs accurately locate picking points in 3D space.
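The final 3D localization step rests on back-projecting a picking-point pixel, together with its depth reading, into camera coordinates via the pinhole model. This is a generic RGB-D sketch consistent with the paper's pipeline; the camera intrinsics used in the example are made-up values:

```python
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera-frame
    XYZ using the pinhole model. fx, fy are focal lengths in pixels and
    (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Once the picking point is in camera coordinates, the UAV's pose transform takes it into world coordinates for flight planning.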

  • Summary: 3D target detection with a YOLOv5 + DeepLabv3+ model = 5 points, actual scene data +1 point, hardware +1 point.

10. Detection and classification of whiteflies and development stages on soybean leaves images using an improved deep learning strategy

This paper presents a new strategy for the detection and classification of five developmental stages of adult whiteflies. Whitefly is a major pest in soybean crops, and control-management decisions can be made by detecting, counting, and distinguishing its five life stages on leaves in the field. The solution is based on a deep learning object detection algorithm (YOLOv4) with innovations in data augmentation, image mosaicing, and fusion of hypothesized object categories, combined into a specific new learning strategy. A real, annotated image dataset was produced from a controlled experiment starting with whitefly eggs; it contains 121 images and 973 annotated objects. Experimental results show that the proposed strategy performs promisingly, with an F1-score of 0.87 versus 0.80 for the plain YOLOv4 algorithm, and the overall strategy can be extended to other similar image-based pest management tasks.

  • Summary: This study identified five life stages of whitefly, the main pest of soybean crops, by improving YOLOv4. The topic is relatively old, and I personally think the experimental dataset is small, making very accurate results difficult. In addition, the images are close-ups of leaves, and in actual detection it would be difficult to place leaves that close to the lens.

11. Multiple disease detection method for greenhouse-cultivated strawberry based on multiscale feature fusion Faster R_CNN

Diseases have a significant impact on strawberry quality and yield, and deep learning has become an important way to detect crop diseases. To address the problems of complex backgrounds and small disease spots in strawberry disease images in natural environments, we propose a new Faster R_CNN architecture. The multi-scale feature fusion network consists of ResNet, FPN, and CBAM blocks, which can effectively extract rich strawberry disease features. A dataset of strawberry leaves, flowers, and fruits was established. Experimental results show that the model can effectively detect healthy strawberries and seven kinds of strawberry diseases under natural conditions, with an mAP of 92.18% and an average detection time of only 229 ms. Comparing this model with Mask R_CNN and YOLO-v3, we find that it meets the requirements of both high accuracy and fast detection. Our method provides an effective solution for crop disease detection and can improve farmers' management of the strawberry cultivation process.

  • Summary: This study uses an attention mechanism and an improved Faster R_CNN architecture to identify strawberry pests and diseases, +3 points; the data come from the authors' own photos and open-source images from the Internet.

12. TomatoScan: An Android-based application for quality evaluation and ripening determination of tomato fruit

In this study, new methods such as contact imaging and spot beam injection were used to predict tomato fruit quality-related indicators and determine the ripening period. A total of 220 tomato samples were used, divided into six ripening stages and two storage stages. Contact images were captured by an RGB smartphone camera. After selecting the best features of the contact images using stepwise regression, a multilayer perceptron artificial neural network was used to create prediction and classification models. The best predictive performance was obtained using white light for a* (CIELAB color space), titratable acidity, and soluble solids content; a 650 nm laser for carotenoids; a combination of the 532 and 650 nm lasers for L* (CIELAB color space), elasticity, and lycopene; and a combination of the 650 and 780 nm wavelengths. White light was also the best light source for sorting tomatoes by ripening stage. Based on the architecture of the prediction/classification models created in MATLAB and the bias and weight values of the neurons, an application called TomatoScan was developed for Android smartphones. The results of the TomatoScan application are almost identical to those obtained during the testing phase of the neural network models in MATLAB. On the test dataset, the estimated correlation coefficients (R) for L*, a*, elasticity, total chlorophyll, carotenoids, lycopene, titratable acidity, and soluble solids content are 0.901, 0.964, 0.856, 0.664, 0.824, 0.923, 0.816, and 0.792, respectively, while the corresponding mean square errors are 3.549, 13.485, 0.000, 14.070, 0.065, 39.198, 0.058, and 0.259, respectively. TomatoScan was also able to determine the ripening stage of tomatoes with an overall accuracy of 75.00%.

  • Summary: This study predicts tomato fruit quality and ripeness using contact imaging and spot beam injection, and develops a tomato quality inspection app that runs on a smartphone with a CMOS camera, +4 points.

13. A low-cost integrated sensor for measuring tree diameter at breast height (DBH)

The measurement of tree diameter at breast height (DBH) is the basis for estimating forest wood volume, biomass, and carbon fluxes. Traditional contact methods for measuring DBH are time-consuming and laborious, so a low-cost, fast measurement method is very important. This paper proposes a contactless approach integrating a passive optical sensor (smartphone) and an active one (laser ranger). Using this device, the horizontal distance from the sensor to the tree trunk, acquired by the laser range finder, and the image of the target tree, acquired by the smartphone, are collected simultaneously. An automatic detection algorithm identifies the tree trunk in the image, and the diameter is then measured by photogrammetry combined with the horizontal distance. The performance of the proposed method was verified against a tape measure on 371 trees, the main species being Italian poplar (Populus euramericana) and pine (Pinus tabuliformis), with diameters ranging from 6 to 51 cm. To investigate factors that might influence the method, the results were further analyzed under different light conditions, in urban versus natural forest settings, and across tree species with different surface texture characteristics. The results show that the measurements of the proposed device agree well with those of the traditional contact method, with a mean absolute error (MAE) of 1.12 cm and an RMSE of 1.55 cm. The appeal of the method is that it is low-cost, portable, easy to use, and sufficiently accurate. The proposed method is also expected to facilitate measurement of DBH-related canopy structural parameters, such as tree volume, as well as other parameters, such as tree height.
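The photogrammetric core of the method is similar triangles: the trunk's apparent width in pixels, scaled by the laser-measured distance over the camera focal length in pixels, gives the real diameter. A simplified sketch that ignores trunk curvature and lens distortion (the example numbers are made up):

```python
def dbh_from_image(trunk_px, distance_m, focal_px):
    """Estimate trunk diameter (m) from its apparent width in pixels,
    the laser-measured horizontal distance (m), and the camera focal
    length in pixels, via similar triangles."""
    return trunk_px * distance_m / focal_px
```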

  • Summary: This research adopts a brand-new method to quickly measure tree diameter at breast height, supported by both hardware and software. Two experimental sites were selected, measuring the DBH of 37 + 210 trees in a park and a plantation, and the results were tested for validity, though a 10% validation set is too small. The topic is relatively new and solves a practical problem +2 points, hardware +2 points, self-collected data from two different environments but not a large amount +2 points = 6 points.

14. MS-DNet: A mobile neural network for plant disease identification

From a food security perspective, plant disease identification has recently attracted considerable attention. Due to the complexity and diversity of plant diseases, plant disease identification using image processing techniques is a challenging task. While deep neural networks hold great promise for identifying various plant diseases, they suffer from some drawbacks, such as their need for a large number of parameters, which requires a large amount of annotated data to train the model. To overcome this challenge, this study proposes a novel lightweight network architecture named MS-DNet for crop disease recognition; the network has a small model size and high computational speed. The method achieved satisfactory performance in comparative experiments, with an average accuracy of 98.32% in identifying different crop disease types. Experimental results further demonstrate that the proposed method outperforms other existing methods and demonstrates its efficiency and scalability. Our code can be found at https://github.com/xtu502/Automatic-crop-disease-identification-under-field-conditions.

  • Summary: This study collected 966 crop disease images: 500 of rice and 466 of maize plant diseases. The model is first pre-trained on the PlantVillage diseased-leaf dataset of 54,306 plant leaf images, then transferred to the authors' own dataset. An improved DenseNet-based model, which the authors name MS-DNet, is compared with other methods (+2 points). Additionally, the research open-sources its code (+2 points).

15. An online machine learning-based sensors clustering system for efficient and cost-effective environmental monitoring in controlled environment agriculture

Sensors are crucial in controlled environment agriculture for measuring the parameters needed for effective decision-making. Currently, most growers install a limited number of sensors at arbitrary locations because of cost and data management concerns. The microclimate in protected cultivation systems is constantly influenced by the macroclimate (ambient environment), which further complicates decisions about optimal sensor placement: the influence of ambient weather on the indoor microclimate makes it challenging to predict or capture the true state of the system from a few sensors. This study proposes and implements a machine learning (K-Means++) algorithm to select optimal sensor locations by clustering. Temperature and relative humidity data were collected over a year from 56 locations within a greenhouse, covering the four seasons (spring, summer, autumn, and winter). Quartiles were used to remove outliers and noise from the data. The raw temperature and relative humidity data were converted to other air properties (e.g., enthalpy, specific volume) and used in the simulations. The results show that the optimal number of sensor locations is between 3 and 5. A web-based online machine learning system was developed to systematically determine the optimal number and placement of sensors.
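The clustering step can be sketched with scikit-learn's K-Means using k-means++ initialization: group sensor locations by their climate profiles, then keep one representative sensor per cluster. The data below are synthetic; the study clustered a year of air properties derived from 56 greenhouse locations:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic sensor profiles: 3 microclimate "zones" of 8 sensors each,
# 4 features per sensor (stand-ins for temperature/humidity-derived
# air properties). All values are made up for illustration.
rng = np.random.default_rng(1)
profiles = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(8, 4))
    for c in (20.0, 24.0, 28.0)
])

km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(profiles)        # cluster id per sensor location
```

In practice one would sweep `n_clusters` and pick the elbow (here the study found 3 to 5 to be optimal), then place one physical sensor per cluster, e.g. nearest to each centroid.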

  • Summary: This study analyzed temperature, relative humidity, specific volume, and other indicators from 56 sensors in a greenhouse, then used the K-Means++ algorithm to cluster the collected data for optimal sensor placement. Finally, a web-based machine learning tool was built for optimized sensor placement and model validation. The research poses a good problem, applies practical methods to solve it, and develops a web page, though the interface is not open source. 5 points.

16. Machine learning-based prediction of nutritional status in oil palm leaves using proximal multispectral images

This study evaluates the application of proximal multispectral imagery combined with 4 machine learning methods to estimate the nutritional status of oil palm leaves. The imagery records five bands: blue, green, red, red edge, and near-infrared, with center wavelengths of 475, 560, 668, 717, and 840 nm. Mean and standard deviation (SD) values were extracted from the leaf pixels in each band, yielding 5 mean and 5 SD values from the 5 bands. From these mean and SD values, 34 vegetation variables were generated. A total of 44 variables (the 10 mean- and SD-based features plus the 34 vegetation variables) were used as input candidates for analyses targeting 10 target variables: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), iron (Fe), manganese (Mn), zinc (Zn), boron (B), and chlorophyll (SPAD). Stepwise selection found no suitable inputs for modeling P and Zn, so eight nutritional models were proposed in this study. Each target was modeled using a training set of 50 samples, and model performance was evaluated on a test set of 15 samples. With modeling based on Random Forest (RF), Support Vector Regression (SVR), Partial Least Squares Regression (PLSR), and Artificial Neural Network (ANN), the chlorophyll, N, and Ca models qualified for screening, while the K and Mg models qualified for coarse screening. The chlorophyll model developed with RF had prediction statistics of 0.752, 5.46 SPAD, and 5.65 SPAD in terms of the correlation coefficient of prediction (r), root mean square error of prediction (RMSEP), and standard error of prediction (SEP), respectively. The other two screening models, developed for N and Ca based on SVR and RF, gave r, RMSEP, and SEP performance ranging from 0.655 to 0.718, 0.12 to 0.17%, and 0.12 to 0.18%, respectively.
In the two coarse-screening models built with the RF algorithm, r ranged from 0.496 to 0.530, and RMSEP and SEP were 0.07–0.16%. The Fe, Mn, and B models gave poor results, with r, RMSEP, and SEP ranging from 0.308–0.491, 2.39–72.9 ppm, and 2.45–62.8 ppm, respectively. Based on these findings, the study confirmed that proximal multispectral information from oil palm leaves is meaningful enough to explain the status of chlorophyll and macronutrients (N, K, Ca, and Mg) in the leaves.
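The evaluation pipeline above can be sketched as follows: fit a regressor on band mean/SD features and report r, RMSEP, and SEP as the paper does. The 50-train/15-test split follows the paper; the features, target, and model settings are illustrative assumptions on synthetic data.

```python
# Illustrative sketch (not the paper's code): RF regression on synthetic
# band mean/SD features, scored with r, RMSEP, and SEP as in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(65, 10))            # 5 band means + 5 band SDs per sample
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=65)  # synthetic target

X_tr, y_tr = X[:50], y[:50]              # 50 training samples (as in the paper)
X_te, y_te = X[50:], y[50:]              # 15 test samples

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

resid = y_te - pred
r = np.corrcoef(y_te, pred)[0, 1]        # correlation coefficient of prediction
rmsep = np.sqrt(np.mean(resid ** 2))     # root mean square error of prediction
sep = np.std(resid, ddof=1)              # standard error of prediction
print(f"r={r:.3f} RMSEP={rmsep:.3f} SEP={sep:.3f}")
```

RMSEP includes any systematic bias in the predictions, while SEP measures only the spread of the residuals, which is why the paper reports both.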

  • Summary: This study used 4 machine learning methods to estimate the status of 8 nutrition-related targets from multispectral images of oil palm leaves of different ages, and quantitatively determined the actual contents at different ages through experiments. Comparing predicted and actual values shows that different elements are best predicted by different methods. A large number of elemental-analysis experiments across different leaf ages: +3 points. Theme + multispectral imagery: +2 points.

17. A deep learning image segmentation model for agricultural irrigation system classification

Effective water management requires a large-scale understanding of agricultural irrigation systems and how they change in response to various stressors. The authors leverage advances in machine learning and the availability of high-resolution remote sensing imagery to help address this long-standing problem, developing a deep learning model to classify regional-scale irrigation systems from remote sensing imagery. After testing different model architectures, hyperparameters, class weights, and image sizes, a U-Net architecture with a ResNet-34 backbone was chosen, and transfer learning was applied to improve training efficiency and model performance. In a case study in Idaho, USA, the authors considered four irrigation systems plus urban and background areas as land use/cover categories and applied the model to 8,600 high-resolution (1 m) images labeled with ground-truth observations of irrigation type. The images come from the USDA's National Agriculture Imagery Program. The model achieves state-of-the-art performance in segmenting the different classes on training data (85% to 94%), validation data (72% to 86%), and test data (70% to 86%), demonstrating its effectiveness in segmenting images based on spatial features. In addition to using deep learning and remote sensing to solve the real-world problem of multi-irrigation-type segmentation, this study develops and publicly shares labeled data and trained deep learning models for irrigation-type segmentation, which can also be applied/transferred to other regions around the world. Additionally, the study provides insights into the impact of transfer learning, imbalanced training data, and the effectiveness of various model structures for multi-irrigation-type segmentation.
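The per-class scores reported above can be computed from predicted and ground-truth label masks. Below is a minimal sketch of pixel-wise per-class F1 scoring; the 6-class setup and 400x400 patch size follow the paper, while the masks themselves are synthetic.

```python
# Minimal sketch of per-class segmentation scoring: pixel-wise F1 per
# land-use class from integer label masks. Synthetic data for illustration.
import numpy as np

def per_class_f1(y_true, y_pred, n_classes):
    """Pixel-wise F1 score for each class from integer label masks."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

rng = np.random.default_rng(1)
truth = rng.integers(0, 6, size=(400, 400))   # one 400x400 patch, 6 classes
pred = truth.copy()
noise = rng.random(truth.shape) < 0.1         # corrupt 10% of pixels
pred[noise] = rng.integers(0, 6, size=noise.sum())

scores = per_class_f1(truth, pred, 6)
print([round(s, 3) for s in scores])
```

Reporting F1 per class, rather than overall pixel accuracy, is what makes performance on the minority irrigation classes visible despite class imbalance.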

  • Summary: This study used high-resolution (1 m) imagery from the National Agriculture Imagery Program (NAIP), which acquires aerial imagery every three years during the agricultural growing season. The imagery contains 4 bands: red, green, blue, and near-infrared. The study area was divided into 6 categories, and more than 8,600 coordinates were randomly generated as center points of NAIP image patches, each sized 400x400 pixels (400x400 meters). The workload is heavy and the research has practical significance (+4 points). The authors state that all data and models are open source (+2 points), at https://github.com/ehsanraei/Irrigation, but no information can be viewed there.

18. Grapevine stem water potential estimation based on sensor fusion

In viticultural management and irrigation planning, estimating grapevine water status is critical to achieving the desired balance between wine grape quality and yield. Smart agriculture has a growing need to extract meaningful information from field data to support irrigation decisions, which can be facilitated through the use of field monitoring techniques and advanced modeling algorithms. The study was carried out in experimental plots of Vitis vinifera cv. "Sauvignon Blanc" within a commercial vineyard. The main goal was to generate a water stress estimation model for wine grape vines based on the fusion of data from multiple sensors. Sensors collected data from five monitored vines, each subjected to a different water stress treatment, providing measurements of trunk and fruit growth, leaf temperature, and soil water content (SWC) at depths of 20 and 40 cm. In addition, weather stations at the vineyard recorded temperature, relative humidity, wind speed, and solar radiation. A daily multivariate time series was formed from the sensor data. Stem water potential (SWP) values of the monitored vines were measured, and the factors in the multivariate time series were analyzed to determine their correlations and interactions. Finally, a predictive model for SWP estimation was built using the boosted regression tree (BRT) algorithm, and the set of factors was optimized. Validation used comparative statistics between a randomly selected test set and the predicted SWP values from the trained BRT model. After processing the data and removing most of the factors affected by multicollinearity, the model consisted of daily maximum leaf temperature (LT), midday SWC at 40 cm, tree water deficit (TWD), daily amplitude of SWC at 20 cm, water input, vapor pressure deficit, and phenological period.
The largest contributor to the BRT model was the maximum LT (44.5%), followed by midday SWC at 40 cm (16.9%) and TWD (16%). Model performance was estimated using various measures and showed a correlation of 0.9 between the estimated SWP values and the test set. No significant differences were found between their means (t = -0.31, p-value = 0.76) or distributions (D = 0.13, p-value = 0.77). The RMSE was 0.16 MPa, a 12% error when normalized to the range of the test set. Therefore, sensor fusion for SWP estimation is a promising technique that integrates representative factors of the soil-water-plant-atmosphere continuum; it should be further investigated under different climatic conditions and with multiple grape varieties.
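A hedged sketch of the BRT step described above, using scikit-learn's gradient-boosted regression trees as a stand-in for the paper's BRT implementation. The seven retained factors are the paper's; the data, target, and resulting importances are synthetic illustrations.

```python
# Hedged sketch of the BRT step: gradient-boosted regression trees trained
# on synthetic stand-ins for the seven retained factors. The dominance of
# max_LT is built into the synthetic target for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

features = ["max_LT", "SWC40_midday", "TWD", "SWC20_amplitude",
            "water_input", "VPD", "phenology"]

rng = np.random.default_rng(7)
X = rng.normal(size=(200, len(features)))
# Synthetic SWP-like target dominated by the first feature
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
brt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = brt.predict(X_te)
r = np.corrcoef(y_te, pred)[0, 1]
ranked = sorted(zip(features, brt.feature_importances_), key=lambda t: -t[1])
print(f"r={r:.2f}", ranked[0][0])
```

The `feature_importances_` ranking plays the role of the paper's contribution percentages (44.5% for max LT, etc.), identifying which sensor streams drive the SWP estimate.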

  • Conclusion: Sensors monitored vines under 5 different irrigation strategies (low, medium, high, low-then-high, high-then-low), collecting indicators such as trunk growth, leaf temperature, berry growth, climate, and actual irrigation volume to determine the interactions between these factors and their impact on vine water potential. The study focuses on estimating water stress in a vineyard during the 2021 growing season using sensor fusion, and estimation from multiple aspects proved feasible. Many indicators were measured, the experimental design is complete, and the topic has practical significance. A large number of physiological experiments +4 points, good topic with practical significance +1 point, sensor fusion model building +2 points = 7 points.

19. A visual identification method for the apple growth forms in the orchard

The work aims to visually recognize the growth form of the fruit so that a robot can subsequently apply different harvesting mechanisms to the different growth forms. Using an improved YOLOv5 deep learning algorithm, a visual recognition method for the growth form of apples in an orchard is proposed. Specifically, the feature extraction module of the YOLOv5 algorithm imitates the BiFPN model, and a BiFPN-S structure is proposed; feature expansion and feature reuse are enhanced to better fuse features. The improved algorithm is called YOLOv5-B. The network's SiLU activation function was replaced by the ACON-C activation function to improve performance. The network is pre-trained on the COCO dataset and then trained on the working dataset via transfer learning. After training, the resulting optimal model was applied to a visual recognition test of apple fruit growth forms. The results show that the improved model balances high precision and real-time performance, reaching 98.4% precision with an F1 value of 0.928. The test device identified the growth form of apples with an average accuracy of 98.45% at a processing speed of 71 FPS.
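For reference, the ACON-C activation that replaces SiLU here can be written, following the form from "Activate or Not" (Ma et al.), as f(x) = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x. In the paper p1, p2, and beta are learnable; the fixed values in this NumPy sketch are assumptions for illustration.

```python
# Illustrative NumPy sketch of ACON-C. With p1=1, p2=0, beta=1 it reduces
# to SiLU (x * sigmoid(x)); as beta grows it approaches max(p1*x, p2*x).
# p1, p2, beta are learnable in the paper; fixed here for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def acon_c(x, p1=1.0, p2=0.0, beta=1.0):
    d = p1 - p2
    return d * x * sigmoid(beta * d * x) + p2 * x

x = np.linspace(-3, 3, 7)
print(np.round(acon_c(x), 3))
```

Learning beta lets each channel interpolate between a linear and a ReLU-like response, which is the claimed advantage over a fixed SiLU.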

  • Summary: This research uses the improved YOLOv5 algorithm to detect 4 states of fruit on apple trees (a: a single apple not occluded by branches and stems; b: a single apple occluded by branches and stems; c: overlapping apples not occluded by branches and stems; d: overlapping apples occluded by branches and stems) and achieves high accuracy. But the practical significance of the topic is limited: the 4 states are better understood as 4 occlusion states rather than growth forms of the apple itself. Reference value: 3 points.

20. Improved Na+ estimation from hyperspectral data of saline vegetation by machine learning

Using remote sensing to monitor vegetation growth status is a current trend in agricultural research. This study aimed to identify an optimal hyperspectral variable-extraction framework for improving leaf Na monitoring in Northwest China, based on hyperspectral data of saline-alkaline vegetation. Partial least squares (PLS), support vector machine (SVM), and random forest (RF) models were constructed to model leaf Na, and the aggregated boosting tree (ABT) and random forest (RF) variable-importance screening methods were used to optimize the extracted variables. An optimal variable screening method and leaf Na inversion model were then identified. The results showed that 33 vegetation indices met the requirements; the RF (R = 0.73, RMSE = 0.50) and PLS (R = 0.72, RMSE = 0.59) models were relatively good, followed by the SVM (R = 0.68, RMSE = 0.53) model, so constructing spectral indices to estimate leaf Na content in saline vegetation is feasible. All three models were improved by the ABT variable-importance screening method, among which the RF (R = 0.81, RMSE = 0.42) model performed best. Likewise, with the RF importance screening method, all three models showed significant improvements, the most effective being the SVM (R = 0.82, RMSE = 0.45) model. The study demonstrated that ABT-RF and RF-SVM are the most suitable combined frameworks for inverting Na content in saline plant leaves, combining variable screening with model building to improve the accuracy of hyperspectral sensors in monitoring changes in vegetation chemical characteristics.
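The RF-SVM combined framework above can be sketched in two steps: rank spectral variables by Random Forest importance, keep the top-ranked ones, then fit an SVR on the reduced set. The 51-sample, 33-index shape follows the paper; the screening threshold, SVR settings, and data are illustrative assumptions.

```python
# Hedged sketch of the RF-SVM framework: RF-importance variable screening
# followed by SVR on the retained variables. Synthetic data; the top-10
# cutoff and SVR hyperparameters are assumptions, not the paper's values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(51, 33))              # 51 samples x 33 vegetation indices
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=51)  # synthetic leaf Na

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: RF variable-importance screening, keep the top 10 variables
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:10]

# Step 2: SVR on the screened variables only
svr = SVR(kernel="rbf", C=10.0).fit(X_tr[:, top], y_tr)
pred = svr.predict(X_te[:, top])

r = np.corrcoef(y_te, pred)[0, 1]
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"R={r:.2f} RMSE={rmse:.2f}")
```

Screening before regression is what the paper credits for the accuracy gain: discarding low-importance indices reduces multicollinearity and noise before the SVR fit.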

  • Method: 45 sampling points with 56 vegetation samples were set up, each sampling point covering typical saline vegetation. Vegetation spectra were collected in the 350–2500 nm range using a portable field spectrometer (FieldSpec 3, ASD), with the sensor positioned 15 cm directly above the leaf surface and each observation repeated 10 times. The 56 vegetation samples were screened against the normal vegetation spectral curve, and leaf Na data were measured; 51 qualified samples were finally selected to construct the models.
  • Personal summary: The method of this study has great reference value and addresses a good scientific question. A large number of experiments quantified the Na content of 56 vegetation samples at 45 locations; a hyperspectral sensor recorded the vegetation spectra, and different machine learning methods were combined to find the optimal prediction. Reference value: a large number of experiments +3 points, hyperspectral sensing data +2 points, theme with good practical significance +0.5 points = 5.5 points.

Origin blog.csdn.net/LuohenYJ/article/details/126106628