HALCON template matching - isotropically scaled shape templates (detailed explanation)

Requirement

Consider the following image, which contains three patterns of the same shape but different sizes. By inspection, the three patterns are related by an isotropic (uniform) scaling. What if we want to use template matching to identify all three patterns? HALCON provides the create_scaled_shape_model() operator for exactly this scenario.
(Figure: three patterns of the same shape at different scales)

Operator signature

create_scaled_shape_model(Template : : NumLevels, AngleStart, AngleExtent, AngleStep, ScaleMin, ScaleMax, ScaleStep, Optimization, Metric, Contrast, MinContrast : ModelID)

Parameter Description

Template : image; the input image whose domain (region of interest) will be used to create the model.

NumLevels : The maximum number of pyramid levels.
The number of pyramid levels is determined by the parameter NumLevels. It should be chosen as large as possible, because this greatly reduces the time required to find the object. On the other hand, NumLevels must be chosen so that the model is still recognizable and contains a sufficient number of points (at least 4) at the highest pyramid level. This can be checked using the output of inspect_shape_model. If not enough model points are generated, the number of pyramid levels is reduced internally until enough model points are found on the highest pyramid level. If this process would result in a model with no pyramid levels at all, i.e. if the number of model points is already too small on the lowest pyramid level, create_scaled_shape_model returns an error. If NumLevels is set to 'auto' (or 0 for backward compatibility), create_scaled_shape_model determines the number of pyramid levels automatically. The automatically computed number of pyramid levels can be queried using get_shape_model_params. In rare cases, the number of pyramid levels that create_scaled_shape_model determines may be too large or too small. If it is too large, the model may not be recognized in the image, or very low values for MinScore or Greediness may be required in find_scaled_shape_model to find the model. If it is too small, the time required to find the model in find_scaled_shape_model may increase. In these cases, the output of inspect_shape_model should be used to choose the number of pyramid levels.
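For illustration, a minimal hedged sketch of letting HALCON determine NumLevels automatically and querying the result (it reuses the ImageReduced template region from the example below; the other parameters are left automatic where possible):

* Create the model with an automatically determined number of pyramid levels.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
* Query the automatically determined parameters, including NumLevels.
get_shape_model_params (ModelID, NumLevels, AngleStart, AngleExtent, AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, MinContrast)
* Visually inspect the model points, e.g. on 4 pyramid levels with contrast 40.
inspect_shape_model (ImageReduced, ModelImages, ModelRegions, 4, 40)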

AngleStart : The minimum rotation angle of the pattern.
AngleExtent : The extent of the rotation angle.
AngleStep : The step size within the rotation angle range.
The parameters AngleStart and AngleExtent determine the range of possible rotations at which the model can appear in the image. Note that the model can only be found within this angle range by find_scaled_shape_model. The parameter AngleStep determines the step size within the selected angle range; hence, if subpixel accuracy is not specified in find_scaled_shape_model, it also specifies the accuracy achievable for the angles. AngleStep should be chosen based on the size of the object: smaller models do not have many different discrete rotations in the image, so a larger AngleStep should be chosen for smaller models. If AngleExtent is not an integer multiple of AngleStep, AngleStep is modified accordingly. To ensure that find_scaled_shape_model can return model instances with a rotation angle of exactly 0.0, the possible rotation range is modified as follows: if there is no integer n >= 0 such that AngleStart + n * AngleStep is exactly 0.0, AngleStart is decreased by up to AngleStep, and AngleExtent is increased by up to AngleStep.
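As a hedged illustration, restricting the search to a narrow angle range with an explicit step (the values are arbitrary choices, and ImageReduced is again the template region from the example below):

* Search only within -10° to +10° of the reference orientation, in 1° steps;
* coarser steps like this suit small models.
create_scaled_shape_model (ImageReduced, 'auto', rad(-10), rad(20), rad(1), 0.8, 1.0, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)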

ScaleMin : The minimum scale of the pattern.
ScaleMax : The maximum scale of the pattern.
ScaleStep : The step size within the scale range.
The parameters ScaleMin and ScaleMax determine the range of possible scales (sizes) of the model; a scale of 1 corresponds to the original size of the model. The parameter ScaleStep determines the step size within the selected range; hence, if subpixel accuracy is not specified in find_scaled_shape_model, it also specifies the accuracy achievable for the scale. Like AngleStep, ScaleStep should be chosen based on the size of the object. If the scale range is not an integer multiple of ScaleStep, ScaleStep is modified accordingly. To ensure that find_scaled_shape_model can return model instances with a scale of exactly 1.0, the possible scale range is modified as follows: if there is no integer n >= 0 such that ScaleMin + n * ScaleStep is exactly 1.0, ScaleMin is decreased by up to ScaleStep, and ScaleMax is increased by up to ScaleStep, thus extending the possible scale range.
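Analogously, a hedged sketch of a wider scale range with an explicit step (again with arbitrary values):

* Find instances from 50% to 150% of the model size, in 2% steps.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.5, 1.5, 0.02, 'auto', 'use_polarity', 'auto', 'auto', ModelID)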

Optimization : An optional optimization method when generating the model.
For particularly large models, it can be useful to reduce the number of model points by setting Optimization to a value other than 'none'. If Optimization = 'none', all model points are stored; in all other cases, the number of points is reduced according to the value of Optimization. If the number of points is reduced, it may be necessary to set the Greediness parameter in find_scaled_shape_model to a smaller value, e.g. 0.7 or 0.8. For small models, reducing the number of model points does not result in a faster search, because in this case more potential model instances usually need to be examined. If Optimization is set to 'auto', create_scaled_shape_model determines the reduction of the number of model points automatically.
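A hedged sketch of explicit point reduction paired with a lower Greediness during the search ('point_reduction_medium' is one of the documented reduction levels; the other values are arbitrary):

* Reduce the number of model points for a large template.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.2, 'auto', 'point_reduction_medium', 'use_polarity', 'auto', 'auto', ModelID)
* Compensate for the reduced point set with a lower Greediness, e.g. 0.7.
find_scaled_shape_model (ImageSearch, ModelID, rad(-45), rad(90), 0.8, 1.2, 0.5, 0, 0.5, 'least_squares', 0, 0.7, Row, Column, Angle, Scale, Score)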

Metric : The parameter Metric determines the conditions under which the model is recognized in the image.
If Metric = 'use_polarity', the object in the image and the model must have the same contrast. For example, if the model is a bright object on a dark background, the object is found only if it is also brighter than the background.

If Metric = 'ignore_global_polarity', the object is also found if the contrast is inverted globally. In the above example, the object is also found if it is darker than the background. The runtime of find_scaled_shape_model increases slightly in this case.

If Metric = 'ignore_local_polarity', the model is found even if the contrast changes locally. This mode can be useful, for example, if the object consists of a part with medium gray values on which darker or lighter sub-objects lie. Because the runtime of find_scaled_shape_model increases significantly in this case, it is usually better to create several models with create_scaled_shape_model that reflect the possible contrast changes of the object, and to match them simultaneously with find_scaled_shape_model. The above three metrics can only be applied to single-channel images. If a multi-channel image is used as the model image or search image, only the first channel is used (and no error message is returned).

If Metric = 'ignore_color_polarity', the model is found even if the color contrast changes locally, for example if parts of the object can change their color, say from red to green. This mode is particularly useful if it is not known in advance in which channels the object is visible. The runtime of find_scaled_shape_model also increases significantly in this mode. The metric 'ignore_color_polarity' can be used for images with any number of channels; on single-channel images it has the same effect as 'ignore_local_polarity'. Note that for Metric = 'ignore_color_polarity', the number of channels may differ between the model created with create_scaled_shape_model and the image searched with find_scaled_shape_model. This can be used, for example, to create models from synthetically generated single-channel images. Also note that the channels need not contain a spectral subdivision of light (as in an RGB image); a channel can, for example, also contain images of the same object illuminated from different directions.
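To make the first two modes concrete, a hedged sketch creating two models from the same template that differ only in the metric (the handle names ModelIDPolarity and ModelIDInverted are illustrative):

* Model that requires the original contrast polarity.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelIDPolarity)
* Model that also tolerates a globally inverted contrast, at slightly higher runtime.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'auto', 'ignore_global_polarity', 'auto', 'auto', ModelIDInverted)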

Contrast : Threshold or hysteresis thresholds for the contrast and, optionally, the minimum size of the model components in the template image.
The parameter Contrast determines the contrast the model points must have. Contrast is a measure of the local gray value difference between the object and the background and between different parts of the object. It should be chosen so that only the significant features of the template are used for the model. Contrast can also contain a tuple of two values; in this case, the model is segmented with a method similar to the hysteresis thresholding used in edges_image. Here, the first element of the tuple determines the lower threshold, while the second element determines the upper threshold (see hysteresis_threshold for details on hysteresis thresholding). Optionally, Contrast can contain a third value as the last element of the tuple. This value determines a threshold for selecting significant model components based on their size: components with fewer points than the specified minimum size are suppressed. This minimum size threshold is divided by 2 for each successive pyramid level. If small model components are to be suppressed but no hysteresis thresholding is desired, three values must nevertheless be specified in Contrast; in this case, the first two values can simply be set to the same value. The effect of this parameter can be inspected in advance using inspect_shape_model. If Contrast is set to 'auto', create_scaled_shape_model determines all three values described above automatically. Alternatively, only the contrast ('auto_contrast'), the hysteresis thresholds ('auto_contrast_hyst'), or the minimum size ('auto_min_size') can be determined automatically; the remaining values that are not determined automatically can additionally be passed in the tuple. Various combinations are allowed: if, for example, ['auto_contrast', 'auto_min_size'] is passed, both the contrast and the minimum size are determined automatically; if ['auto_min_size', 20, 30] is passed, the minimum size is determined automatically, while the hysteresis thresholds are set to 20 and 30, and so on. In some cases, the automatic determination of the contrast thresholds may be unsatisfactory; for example, if certain model components should be included or suppressed for application-specific reasons, or if the object contains several different contrasts, the parameters should be set manually. In these cases, the contrast thresholds can be determined automatically with determine_shape_model_params and checked with inspect_shape_model before calling create_scaled_shape_model.
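A hedged sketch of the tuple forms of Contrast described above (the numeric values are arbitrary examples):

* Hysteresis thresholds 20 and 30, plus suppression of components with fewer than 10 points.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'auto', 'use_polarity', [20,30,10], 'auto', ModelID)
* Minimum component size without hysteresis: set both thresholds to the same value.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'auto', 'use_polarity', [40,40,10], 'auto', ModelID)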

MinContrast : The minimum contrast of matching objects in the image to be searched.
MinContrast determines the minimum contrast the model must have in the images in which it is recognized by find_scaled_shape_model. In other words, this parameter separates the model from noise in the image; a good choice is therefore the range of gray value fluctuations caused by noise. For example, if the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If multi-channel images are used for the model and search images and the parameter Metric is set to 'ignore_color_polarity' (see above), the noise in one channel must be multiplied by the square root of the number of channels to determine MinContrast. For example, if the gray values in one channel fluctuate within 10 gray levels and the image is a three-channel image, MinContrast should be set to 17 (10 * sqrt(3) ≈ 17.3). Obviously, MinContrast must be smaller than Contrast. If the model is to be recognized in very low-contrast images, MinContrast must be set to a correspondingly small value. If the model should also be recognized under heavy occlusion, and to ensure that find_scaled_shape_model extracts the position and rotation of the model robustly and accurately, MinContrast should be somewhat larger than the range of gray value fluctuations caused by noise. If MinContrast is set to 'auto', the minimum contrast is determined automatically based on the noise in the model image; the automatic determination therefore only makes sense if the noise during recognition is similar to the noise in the model image. Furthermore, in some cases it may be advisable to increase the automatically determined value in order to improve robustness against occlusion (see above). The automatically computed minimum contrast can be queried using get_shape_model_params.
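The square-root rule as a small HDevelop calculation (the variable names are illustrative only):

* Noise of roughly 10 gray levels per channel.
NoisePerChannel := 10
* Single-channel image: MinContrast equals the noise range.
MinContrastMono := NoisePerChannel
* Three-channel image with Metric = 'ignore_color_polarity':
* multiply the per-channel noise by the square root of the number of channels.
MinContrastColor := int(ceil(NoisePerChannel * sqrt(3)))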

ModelID : The handle of the model; it is used in subsequent calls to find_scaled_shape_model.

Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-generated completely; for this, the second value of Optimization must be set to 'pregeneration' or 'no_pregeneration'. If only one value is passed, the mode set with set_system('pregenerate_shape_models', ...) is used. With the default value ('pregenerate_shape_models' = 'false'), the model is not pre-generated completely. Complete pre-generation of the model usually results in a slightly lower runtime, because the model does not need to be transformed at runtime. However, in this case the memory requirements and the time needed to create the model are much higher. It should also be noted that the two modes cannot be expected to return exactly identical results, because the model transformed at runtime necessarily differs internally from the pre-generated transformed model. For example, if the model is not pre-generated completely, find_scaled_shape_model usually returns slightly lower scores, which may require setting MinScore to a slightly lower value than for a completely pre-generated model. Furthermore, the poses obtained by interpolation may differ slightly between the two modes. If maximum accuracy is required, the pose of the model should be determined with a least-squares adjustment.
If complete model pre-generation is selected, the model is pre-generated for the selected angle and scale range and stored in memory. The memory required to store the model is proportional to the number of angle steps, the number of scale steps, and the number of points in the model. Therefore, if AngleStep or ScaleStep is too small, or AngleExtent or the scale range is too large, the model may no longer fit into (virtual) memory. In this case, AngleStep or ScaleStep must be enlarged, or AngleExtent or the scale range must be reduced. In any case, it is desirable that the model fits completely into main memory, because this avoids paging by the operating system, so the time to find the object is much smaller. Since angles can be determined with subpixel resolution by find_scaled_shape_model, AngleStep >= 1° and ScaleStep >= 0.02 can be chosen for models of a diameter smaller than about 200 pixels. If AngleStep = 'auto' or ScaleStep = 'auto' is selected (or 0 in both cases for backward compatibility), create_scaled_shape_model automatically determines a suitable angle or scale step, respectively, based on the size of the model. The automatically computed angle and scale steps can be queried using get_shape_model_params.
If complete pre-generation is not selected, the model is created only in the reference pose on each pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in find_scaled_shape_model, which may make recognizing the model somewhat slower.
Note that pre-generated shape models are tied to a specific image size. For runtime reasons, searching images of different sizes in parallel with the same model is not supported; in this case, a copy of the model must be used for each image size, otherwise the program may crash!
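A hedged sketch of requesting complete pre-generation explicitly, either per model via the second Optimization value or globally:

* Pre-generate this model completely for the whole angle and scale range.
create_scaled_shape_model (ImageReduced, 'auto', rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', ['auto','pregeneration'], 'use_polarity', 'auto', 'auto', ModelID)
* Alternatively, change the default for all subsequently created shape models.
set_system ('pregenerate_shape_models', 'true')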

Worked example

1. Read the image

read_image (Image, 'green-dot')

2. Blob analysis to generate the ROI that will create the template

* Binarize the image
threshold (Image, Region, 0, 128)
* Split the result into connected components
connection (Region, ConnectedRegions)
* Filter the components by area
select_shape (ConnectedRegions, SelectedRegions, 'area', 'and', 10000, 20000)
* Fill up the regions
fill_up (SelectedRegions, RegionFillUp)
* Dilate with a circular structuring element
dilation_circle (RegionFillUp, RegionDilation, 5.5)
* Reduce the image domain to the ROI
reduce_domain (Image, RegionDilation, ImageReduced)

3. Create a template

create_scaled_shape_model (ImageReduced, 5, rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'none', 'ignore_global_polarity', 40, 10, ModelID)

4. Get the XLD contour of the model

get_shape_model_contours (Model, ModelID, 1)

5. Transform the XLD contour to the model's reference position

* Get the area and center of gravity of the ROI
area_center (RegionFillUp, Area, RowRef, ColumnRef)
* Compute the rigid transformation from the origin to the reference point
vector_angle_to_rigid (0, 0, 0, RowRef, ColumnRef, 0, HomMat2D)
* Apply the affine transformation to the XLD contour
affine_trans_contour_xld (Model, ModelTrans, HomMat2D)

6. Read the search image and perform the template matching

read_image (ImageSearch, 'green-dots')
find_scaled_shape_model (ImageSearch, ModelID, rad(-45), rad(90), 0.8, 1.0, 0.5, 0, 0.5, 'least_squares', 5, 0.8, Row, Column, Angle, Scale, Score)

7. Transform the model XLD contour to each match found

for I := 0 to |Score| - 1 by 1
    * Translate the model contour (centered at the origin) to the match position,
    * then rotate and scale it around that position.
    hom_mat2d_identity (HomMat2DIdentity)
    hom_mat2d_translate (HomMat2DIdentity, Row[I], Column[I], HomMat2DTranslate)
    hom_mat2d_rotate (HomMat2DTranslate, Angle[I], Row[I], Column[I], HomMat2DRotate)
    hom_mat2d_scale (HomMat2DRotate, Scale[I], Scale[I], Row[I], Column[I], HomMat2DScale)
    * Transform the model contour to the match and display it.
    affine_trans_contour_xld (Model, ModelTrans, HomMat2DScale)
    dev_display (ModelTrans)
endfor

8. Clear the model handle

clear_shape_model (ModelID)

Result:

(Figure: the search image with the transformed model contours overlaid on the three matches)

Full code:

read_image (Image, 'green-dot')
threshold (Image, Region, 0, 128)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, 'area', 'and', 10000, 20000)
fill_up (SelectedRegions, RegionFillUp)
dilation_circle (RegionFillUp, RegionDilation, 5.5)
reduce_domain (Image, RegionDilation, ImageReduced)
create_scaled_shape_model (ImageReduced, 5, rad(-45), rad(90), 'auto', 0.8, 1.0, 'auto', 'none', 'ignore_global_polarity', 40, 10, ModelID)
get_shape_model_contours (Model, ModelID, 1)
area_center (RegionFillUp, Area, RowRef, ColumnRef)
vector_angle_to_rigid (0, 0, 0, RowRef, ColumnRef, 0, HomMat2D)
affine_trans_contour_xld (Model, ModelTrans, HomMat2D)
read_image (ImageSearch, 'green-dots')
find_scaled_shape_model (ImageSearch, ModelID, rad(-45), rad(90), 0.8, 1.0, 0.5, 0, 0.5, 'least_squares', 5, 0.8, Row, Column, Angle, Scale, Score)
for I := 0 to |Score| - 1 by 1
    hom_mat2d_identity (HomMat2DIdentity)
    hom_mat2d_translate (HomMat2DIdentity, Row[I], Column[I], HomMat2DTranslate)
    hom_mat2d_rotate (HomMat2DTranslate, Angle[I], Row[I], Column[I], HomMat2DRotate)
    hom_mat2d_scale (HomMat2DRotate, Scale[I], Scale[I], Row[I], Column[I], HomMat2DScale)
    affine_trans_contour_xld (Model, ModelTrans, HomMat2DScale)
    dev_display (ModelTrans)
endfor
clear_shape_model (ModelID)

