Matlab Simulation of Target Detection Algorithm Based on Gaussian Mixture Model

Table of contents

1. Theoretical basis

2. Core program

3. Simulation conclusion


1. Theoretical basis

       A Gaussian model uses the Gaussian probability density function (the normal distribution curve) to quantify a phenomenon precisely, decomposing it into several component models, each described by such a density. The principle behind building a Gaussian model of an image background is as follows. The gray-level histogram of an image records how often each gray value occurs, and can be regarded as an estimate of the image's gray-level probability density. If the image contains a target region that differs noticeably in gray level from the background, the histogram shows a two-peaks-with-a-valley shape: one peak corresponds to the central gray level of the target, the other to that of the background. Complex images, medical images in particular, are generally multimodal. By treating the multimodal histogram as a superposition of several Gaussian distributions, the image segmentation problem can be solved. In an intelligent surveillance system, detecting moving targets is the central task, and background modeling is an essential part of moving-target detection and extraction, since a good background model is crucial for recognizing and tracking the target.
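As a quick illustration of the two-peaks-with-a-valley idea, the sketch below builds a synthetic bimodal gray-level histogram and takes the emptiest bin between the two peaks as a segmentation threshold. Everything here (the peak positions, sample counts, and the simple valley search) is invented for the example, not taken from the original program:

```python
import numpy as np

# Synthetic bimodal image data: a dark background peak and a bright target peak.
rng = np.random.default_rng(0)
background = rng.normal(60, 10, 5000)     # background gray levels
target = rng.normal(180, 10, 2000)        # target gray levels
pixels = np.clip(np.concatenate([background, target]), 0, 255).astype(int)

# Gray-level histogram = estimate of the gray-level probability density.
hist = np.bincount(pixels, minlength=256)

# Locate the two peaks, then the valley between them.
p1 = hist[:128].argmax()                  # background peak
p2 = 128 + hist[128:].argmax()            # target peak
threshold = p1 + hist[p1:p2].argmin()     # emptiest bin between the peaks
```

Thresholding at the valley separates the two modes; with real multimodal images the same idea generalizes to fitting several Gaussians to the histogram.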

        The mixture-of-Gaussians method is a widely used background-extraction algorithm. It models the brightness of each pixel with several Gaussian components, which gives it a degree of scene adaptivity during background extraction and lets it overcome a shortcoming of other background-extraction algorithms (such as multi-frame averaging or the statistical median method): their inability to adapt to environmental change. In this multimodal Gaussian model, each pixel of the image is described by several Gaussian distributions with different weights. Each Gaussian corresponds to one state that may produce the observed color of the pixel, and the weight and distribution parameters of every Gaussian are updated over time.
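The per-pixel update just described can be sketched as a small online procedure, in the spirit of the Stauffer–Grimson rules that the MATLAB program below implements. This is a Python/NumPy approximation for a single scalar pixel value; the function name, the simplified fixed learning rate `rho = alpha`, and the `init_var`/`init_weight` defaults are illustrative assumptions, not the original code:

```python
import numpy as np

def update_pixel_model(x, mus, vars_, weights,
                       alpha=0.01, lam=2.5,
                       init_var=36.0, init_weight=0.05):
    """One online update of a per-pixel Gaussian mixture (sketch)."""
    # A component "matches" if x lies within lam standard deviations of its mean.
    matched = np.abs(x - mus) < lam * np.sqrt(vars_)
    m = np.zeros_like(weights)              # match indicator per component
    if matched.any():
        k = int(np.flatnonzero(matched)[0]) # take the first matching component
        m[k] = 1.0
        rho = alpha                         # simplified learning rate
        mus[k] = (1 - rho) * mus[k] + rho * x
        vars_[k] = (1 - rho) * vars_[k] + rho * (x - mus[k]) ** 2
    else:
        # No match: replace the component we are least confident in
        # (smallest weight/stddev ratio) with a wide distribution centred on x.
        k = int(np.argmin(weights / np.sqrt(vars_)))
        mus[k], vars_[k], weights[k] = x, init_var, init_weight
    # Reward the matched component, decay the others, then renormalise.
    weights = (1 - alpha) * weights + alpha * m
    weights /= weights.sum()
    return mus, vars_, weights
```

The same three steps (match test, parameter update, weight renormalisation) appear in the MATLAB code in section 2, applied to all pixels at once.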

       When dealing with color images, the three color channels R, G, and B of a pixel are assumed to be mutually independent and to share the same variance. For an observation sequence {x1, x2, ..., xN} of the random variable X, let xt = (rt, gt, bt) be the sample of a pixel at time t. The probability density of the Gaussian mixture obeyed by a single sample xt is then

    P(xt) = Σ_{i=1}^{K} w_{i,t} · η(xt; μ_{i,t}, Σ_{i,t}),

where K is the number of components, w_{i,t} is the weight of the i-th Gaussian at time t, η(xt; μ, Σ) = (2π)^(-n/2) |Σ|^(-1/2) exp(-(xt - μ)ᵀ Σ⁻¹ (xt - μ) / 2) is the Gaussian density with mean μ and covariance Σ, and the independence and equal-variance assumptions give Σ_{i,t} = σ_{i,t}² I.
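Under these assumptions the mixture density can be evaluated directly, since each covariance is a scaled identity. A minimal Python sketch (the function name and parameterisation are illustrative):

```python
import numpy as np

def mixture_density(x, mus, sigma2s, weights):
    """P(xt) = sum_i w_i * eta(xt; mu_i, sigma_i^2 * I) for an RGB sample x,
    assuming independent channels with a shared variance per component."""
    x = np.asarray(x, dtype=float)
    n = x.size                               # n = 3 for (r, g, b)
    p = 0.0
    for w, mu, s2 in zip(weights, mus, sigma2s):
        d2 = np.sum((x - mu) ** 2)           # squared distance; Sigma = s2 * I
        p += w * (2 * np.pi * s2) ** (-n / 2) * np.exp(-d2 / (2 * s2))
    return p
```

With Σ = σ²I the determinant and inverse reduce to (σ²)^n and I/σ², which is why only a per-component scalar variance needs to be stored, exactly as the `Sigmas` array in the program below does.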

2. Core program

.............................................................................
    % Now update the weights. Increment weight for the selected Gaussian (if any),
    % and decrement weights for all other Gaussians.   
    Weights = (1 - ALPHA) .* Weights + ALPHA .* matched_gaussian;

    % Adjust Mus and Sigmas for matching distributions.
    for kk = 1:K
        pixel_matched = repmat(matched_gaussian(:, kk), 1, C);
        pixel_unmatched = abs(pixel_matched - 1); % Inverted and mutually exclusive

        Mu_kk = reshape(Mus(:, kk, :), D, C);
        Sigma_kk = reshape(Sigmas(:, kk, :), D, C);

        Mus(:, kk, :) = pixel_unmatched .* Mu_kk + ...
             pixel_matched .* (((1 - RHO) .* Mu_kk) + ...
  			  (RHO .* double(image)));

        % Get updated Mus; Sigmas is still unchanged
        Mu_kk = reshape(Mus(:, kk, :), D, C); 
        
        Sigmas(:, kk, :) = pixel_unmatched .* Sigma_kk + ...
                     pixel_matched .* (((1 - RHO) .* Sigma_kk) + ...
		     repmat((RHO .* sum((double(image) - Mu_kk) .^ 2, 2)), 1, C));       
    end
    
    % Maintain an indicator matrix of those components that were replaced because no component matched. 
    replaced_gaussian = zeros(D, K); 
    
    % Find those pixels which have no Gaussian that matches
    mismatched = find(sum(matched_gaussian, 2) == 0);
    
    % A method that works well: Replace the component we            
    % are least confident in. This includes weight in the choice of 
    % component.        
    for ii = 1:length(mismatched)
        [junk, index] = min(Weights(mismatched(ii), :) ./ sqrt(Sigmas(mismatched(ii), :, 1)));
        % Mark that this Gaussian will be replaced
        replaced_gaussian(mismatched(ii), index) = 1;
        
        % With a distribution that has the current pixel as mean
        Mus(mismatched(ii), index, :) = image(mismatched(ii), :);
        % And a relatively wide variance
        Sigmas(mismatched(ii), index, :) = ones(1, C) * INIT_VARIANCE;

        % Also set the weight to be relatively small
        Weights(mismatched(ii), index) = INIT_MIXPROP;  
    end
    
    % Now renormalise the weights so they still sum to 1
    Weights = Weights ./ repmat(sum(Weights, 2), 1, K);
    active_gaussian = matched_gaussian + replaced_gaussian;
    %----------------------------------------------------------------------
    
    %--------------------------background Segment--------------------------
    
    % Find maximum weight/sigma per row. 
    [junk, index] = sort(Weights ./ sqrt(Sigmas(:, :, 1)), 2, 'descend');
   
    % Record, for each pixel, the index of the component we are most
    % confident in, so that we can display a single background estimate
    % later. While our model allows for a multi-modal background, this is
    % a useful visualisation when something goes wrong.
    best_background_gaussian = index(:, 1);
    
    linear_index = (index - 1) * D + repmat([1:D]', 1, K);
    weights_ordered = Weights(linear_index);
    for kk = 1:K
        accumulated_weights(:, kk) = sum(weights_ordered(:, 1:kk), 2);
    end
    
    background_gaussians(:, 2:K) = accumulated_weights(:, 1:(K-1)) < BACKGROUND_THRESH;
    background_gaussians(:, 1) = 1;            % The first will always be selected
    % Those pixels that have no active background Gaussian are considered foreground.
    background_gaussians(linear_index) = background_gaussians;
    active_background_gaussian = active_gaussian & background_gaussians;
    
    foreground_pixels = abs(sum(active_background_gaussian, 2) - 1);
    foreground_map = reshape(sum(foreground_pixels, 2), HEIGHT, WIDTH);
    foreground_with_map_sequence(:, :, tt) = foreground_map;   
    %----------------------------------------------------------------------
     
    %---------------------Connected components-----------------------------
    objects_map = zeros(size(foreground_map), 'int32');
    object_sizes = [];
    object_positions = [];
    new_label = 1;
    
    [label_map, num_labels] = bwlabel(foreground_map, 8);

    for label = 1:num_labels 
       object = (label_map == label);
       object_size = sum(sum(object));
       if(object_size >= COMPONENT_THRESH)
           
          %Component is big enough, mark it
          objects_map = objects_map + int32(object * new_label);
          object_sizes(new_label) = object_size;
          
          [X, Y] = meshgrid(1:WIDTH, 1:HEIGHT);    
          object_x = X .* object;
          object_y = Y .* object;
    
          object_positions(:, new_label) = [sum(sum(object_x)) / object_size;
				      sum(sum(object_y)) / object_size];

          new_label = new_label + 1;
       end
    end
    num_objects = new_label - 1;
    
    %---------------------------Shadow correction--------------------------
    % Produce an image of the means of those mixture components which we are most
    % confident in using the weight/stddev tradeoff.
    index = sub2ind(size(Mus), reshape(repmat([1:D], C, 1), D * C, 1), ...
    reshape(repmat(best_background_gaussian', C, 1), D * C, 1), repmat([1:C]', D, 1));

    background = reshape(Mus(index), C, D);
    background = reshape(background', HEIGHT, WIDTH, C); 
    background = uint8(background);
    background_sequence(:, :, :, tt) = background;
    
    background_hsv = rgb2hsv(background);
    image_hsv = rgb2hsv(image_sequence(:, :, :, tt));
    
    shadow_mark = zeros(HEIGHT, WIDTH);   % preallocate the shadow mask
    for i = 1:HEIGHT
        for j = 1:WIDTH
            % A foreground pixel is marked as shadow when its hue and
            % saturation stay close to the background while its value
            % (brightness) is attenuated. Note that MATLAB does not
            % support chained comparisons such as 0.85 <= x <= 0.95;
            % the value-ratio test must be written as two conditions.
            value_ratio = image_hsv(i,j,3) / background_hsv(i,j,3);
            if objects_map(i, j) && (abs(image_hsv(i,j,1) - background_hsv(i,j,1)) < 0.7)...
                  && (image_hsv(i,j,2) - background_hsv(i,j,2) < 0.25)...
                  && (value_ratio >= 0.85) && (value_ratio <= 0.95)
               shadow_mark(i, j) = 1;
            else
               shadow_mark(i, j) = 0;
            end
        end
    end
   
    foreground_map_sequence(:, :, tt) = objects_map;
    % Remove the detected shadow pixels from the foreground objects
    objects_adjust_map = objects_map & (~shadow_mark);
    foreground_map_adjust_sequence(:, :, tt) = objects_adjust_map;
    %----------------------------------------------------------------------
end
%--------------------------------------------------------------------------


% -----------------------------Result display-------------------------------
figure;
while 1
for tt = 1:T
%   tt = 30;
  subplot(2,2,1),imshow(image_sequence(:, :, :, tt));title('original image');
  subplot(2,2,2),imshow(uint8(background_sequence(:, :, :, tt)));title('background image');
  subplot(2,2,3),imshow(foreground_map_sequence(:, :, tt)); title('foreground map without shadow removing');
%   subplot(2,2,4),imshow(uint8(background_sequence(:, :, :, tt)));
  subplot(2,2,4),imshow(foreground_map_adjust_sequence(:, :, tt));title('foreground map with shadow removing');
  drawnow;pause(0.1);
end
end

3. Simulation conclusion


Origin blog.csdn.net/ccsss22/article/details/130187462