OpenCV C++ 6. Grayscale transformation: linear transformation, grayscale inversion, logarithmic transformation, gamma transformation, (adaptive) histogram equalization

1. Principle of grayscale transformation:

The original image pixel gray value r is mapped to a gray value s through the transformation function T: s=T(r).

2. Grayscale transformation methods:

  1. Linear transformation (brightness and contrast adjustment) :

    • Principle: Linear transformation is a simple way to adjust brightness and contrast by applying a linear formula to each pixel's gray level: output_pixel = input_pixel * alpha + beta, where alpha controls contrast and beta controls brightness. Increasing alpha increases contrast, and increasing beta increases brightness.
  2. Logarithmic transformation :

    • Principle: Logarithmic transformation modifies each pixel value by applying a logarithmic function. It is well suited to enhancing the dark (low gray level) regions of an image because it stretches the differences between low gray levels. The formula is output_pixel = c * log(1 + input_pixel), where c is a scaling constant.
  3. Gamma correction :

    • Principle: Gamma correction modifies each pixel value by applying a power function, and can be used to adjust the contrast and brightness of an image. The formula is output_pixel = c * (input_pixel ^ gamma), where c is a scaling constant and gamma is the gamma value. With the input normalized to [0, 1], gamma > 1 darkens the image (stretching detail in bright regions), while gamma < 1 brightens it (stretching detail in dark regions).
  4. Histogram equalization :

    • Principle: Histogram equalization stretches the gray level distribution of the image to make it more uniform. This is achieved by redistributing pixel values, which enhances the contrast of the image and highlights detail. The method remaps pixel values through the image's cumulative distribution function (CDF) so that the resulting histogram is approximately uniform (a minimal manual sketch of this mapping appears after this list).
  5. Adaptive histogram equalization :

    • Principle: Adaptive histogram equalization divides the image into small tiles and performs histogram equalization on each tile separately. This evens out the gray level distribution in different areas of the image and is especially useful when the image has strong local brightness differences. In OpenCV this is provided by CLAHE (Contrast Limited Adaptive Histogram Equalization).
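
To make the CDF remapping described in item 4 concrete, here is a minimal manual sketch for a single-channel 8-bit image (the file path and variable names are only illustrative). OpenCV's equalizeHist, used later in this article, performs essentially this computation internally.

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    Mat gray = imread("D://lena.png", IMREAD_GRAYSCALE);  // illustrative path
    if (gray.empty()) {
        cout << "Cannot load image" << endl;
        return -1;
    }

    // 1. Count how many pixels fall on each of the 256 gray levels
    int hist[256] = { 0 };
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
            hist[gray.at<uchar>(y, x)]++;

    // 2. Accumulate the histogram into a CDF and map it onto [0, 255]
    double total = (double)gray.rows * gray.cols;
    double cdf = 0.0;
    uchar lut[256];
    for (int k = 0; k < 256; k++) {
        cdf += hist[k] / total;
        lut[k] = saturate_cast<uchar>(cdf * 255.0);
    }

    // 3. Remap every pixel through the lookup table
    Mat equalized = gray.clone();
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
            equalized.at<uchar>(y, x) = lut[gray.at<uchar>(y, x)];

    imshow("manually equalized", equalized);
    waitKey(0);
    return 0;
}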

3. Linear transformation

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main() {
    // Load the image
    Mat image = imread("D://lena.png");

    if (image.empty()) {
        cout << "无法加载图像" << endl;
        return -1;
    }

    // User-defined brightness and contrast parameters
    double alpha = 1.5; // controls contrast
    int beta = 30;      // controls brightness

    // Linear transformation
    Mat adjusted_image = Mat::zeros(image.size(), image.type());

    for (int y = 0; y < image.rows; y++) {
        for (int x = 0; x < image.cols; x++) {
            for (int c = 0; c < image.channels(); c++) {
                adjusted_image.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(alpha * image.at<Vec3b>(y, x)[c] + beta);
            }
        }
    }

    // Show the original image and the adjusted image
    imshow("Original image", image);
    imshow("Brightness/contrast adjusted image", adjusted_image);
    waitKey(0);

    return 0;
}

Function introduction:

    // Linear transformation
    Mat adjusted_image = Mat::zeros(image.size(), image.type());

Mat::zeros is an OpenCV function that creates a matrix (image) and initializes all of its elements to zero. In this particular case, Mat::zeros(image.size(), image.type()) creates a matrix with the same dimensions, data type, and number of channels as image and initializes all of its elements to zero.

Specific explanation:

  • Mat is the data structure OpenCV uses to represent images and matrices.
  • image.size() returns the dimensions (number of rows and columns) of the original image image.
  • image.type() returns the data type and channel information of the original image image.

Mat::zeros(image.size(), image.type()) therefore creates an image with the same dimensions and number of channels as the original image, but with all pixel values initialized to zero. It is used to store the adjusted image in the subsequent operations, so that the output image starts from a known all-zero state before the linear transformation is applied.

In fact, Mat::zeros is often used in image processing to create matrices that hold intermediate or result images, ensuring that their initial values are zero. This helps avoid junk values or unpredictable results.


    for (int y = 0; y < image.rows; y++) {
        for (int x = 0; x < image.cols; x++) {
            for (int c = 0; c < image.channels(); c++) {
                adjusted_image.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(alpha * image.at<Vec3b>(y, x)[c] + beta);
            }
        }
    }
  1. for (int y = 0; y < image.rows; y++): This is the outer loop that iterates through each row of the image.

  2. for (int x = 0; x < image.cols; x++): This is the inner loop that iterates through each column (pixel) of the image.

  3. for (int c = 0; c < image.channels(); c++): This is the innermost loop that iterates through the channels of the image (for a color image, the channels are B, G, R).

  4. adjusted_image.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(alpha * image.at<Vec3b>(y, x)[c] + beta);: This line of code performs the actual linear transformation operation. Specifically, it does the following for each channel of each pixel:

    • image.at<Vec3b>(y, x)[c]: obtains the value of channel c at pixel position (y, x) in the original image image.
    • alpha * image.at<Vec3b>(y, x)[c] + beta: the linear transformation formula. alpha controls the contrast adjustment and beta controls the brightness adjustment; the current pixel value is multiplied by alpha and then beta is added.
    • saturate_cast<uchar>(...): a saturation cast that keeps the adjusted pixel value between 0 and 255. If the result is less than 0 it is clipped to 0; if it is greater than 255 it is clipped to 255.
    • adjusted_image.at<Vec3b>(y, x)[c]: the adjusted value is stored in channel c at the same position (y, x) in adjusted_image.
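
As a side note, OpenCV can apply the same per-pixel linear mapping without writing the loops by hand. The snippet below is a minimal sketch using Mat::convertTo with the alpha and beta variables from the code above; passing -1 as the output type keeps the source type, and saturation to [0, 255] is applied automatically.

    // Equivalent linear transformation using convertTo (output = alpha * input + beta)
    Mat adjusted_image2;
    image.convertTo(adjusted_image2, -1, alpha, beta);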

4. Grayscale inversion:

#include<iostream>
#include<opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
	Mat image1, output_image, image1_gray;   // input image, output image, grayscale image
	image1 = imread("D://lena.png");  // read the image
	if (image1.empty())
	{
		cout << "读取错误" << endl;
		return -1;
	}

	cvtColor(image1, image1_gray, COLOR_BGR2GRAY);  // convert to grayscale
	imshow("image1_gray", image1_gray);   // show the grayscale image

	output_image = image1_gray.clone();
	for (int i = 0; i < image1_gray.rows; i++)
	{
		for (int j = 0; j < image1_gray.cols; j++)
		{
			output_image.at<uchar>(i, j) = 255 - image1_gray.at<uchar>(i, j);  // grayscale inversion
		}
	}
	imshow("output_image", output_image);  //显示反转图像


	waitKey(0);  // pause, keep the windows open until a key is pressed
	return 0;
}

The result is:

The original image is on the left and the inverted image is on the right

Function introduction:

cvtColor(image1, image1_gray, COLOR_BGR2GRAY);

The general syntax of the cvtColor function is as follows:

void cvtColor(InputArray src, OutputArray dst, int code, int dstCn = 0);

Here are some common code values and their meanings:

  • COLOR_BGR2GRAY: BGR to grayscale conversion.
  • COLOR_BGR2HSV: BGR to HSV (hue, saturation, value) conversion.
  • COLOR_BGR2Lab: BGR to Lab conversion.
  • COLOR_BGR2YUV: BGR to YUV conversion.
  • COLOR_RGB2BGR: RGB to BGR conversion.
  • COLOR_GRAY2BGR: grayscale to BGR conversion.

The code above uses COLOR_BGR2GRAY, i.e., it converts BGR to grayscale.

output_image = image1_gray.clone();
	for (int i = 0; i < image1_gray.rows; i++)
	{
		for (int j = 0; j < image1_gray.cols; j++)
		{
			output_image.at<uchar>(i, j) = 255 - image1_gray.at<uchar>(i, j);  //灰度反转
		}
	}
  1. output_image = image1_gray.clone();: First, create an output image output_image with the same size and type as the input image image1_gray and initialize it with a copy of the input image.

  2. Next, the program uses nested loops to iterate over each pixel of the input image image1_gray.

  3. output_image.at<uchar>(i, j) = 255 - image1_gray.at<uchar>(i, j);: In the inner loop, for each pixel (i, j), it does the following:

    • image1_gray.at<uchar>(i, j): gets the grayscale value of pixel (i, j) from the input image. at<uchar> indicates that the pixel value is an unsigned char (8-bit grayscale image).
    • 255 - image1_gray.at<uchar>(i, j): subtracts the obtained grayscale value from 255 to achieve grayscale inversion. This causes lighter pixels to become darker and darker pixels to become lighter, inverting the appearance of the image.

Through this process, the gray value of each pixel in the input image is inverted, and the final result is stored in output_image. This operation can be used to create a negative effect or to convert a negative into a positive to achieve special effects on an image.
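
For reference, the same inversion can also be written without explicit loops. Both options below are minimal sketches using standard OpenCV operations on the image1_gray matrix from the code above (the output names are illustrative).

    // Option 1: matrix expression, computes 255 - value for every pixel
    Mat inverted1 = 255 - image1_gray;

    // Option 2: bitwise NOT, identical to 255 - value for 8-bit images
    Mat inverted2;
    bitwise_not(image1_gray, inverted2);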

5. Gamma correction

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main() {
    // Load the image
    Mat image = imread("D://lena.png");

    if (image.empty()) {
        cout << "无法加载图像" << endl;
        return -1;
    }

    // Gamma value
    double gamma = 2; // should be >= 0; gamma > 1 darkens the image, gamma < 1 brightens it

    // Gamma correction
    Mat gamma_corrected_image = Mat::zeros(image.size(), image.type());

    for (int y = 0; y < image.rows; y++) {
        for (int x = 0; x < image.cols; x++) {
            for (int c = 0; c < image.channels(); c++) {
                double pixel_value = image.at<Vec3b>(y, x)[c] / 255.0;
                double corrected_value = pow(pixel_value, gamma) * 255.0;
                gamma_corrected_image.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(corrected_value);
            }
        }
    }

    // Show the original image and the gamma-corrected image
    imshow("Original image", image);
    imshow("Gamma-corrected image", gamma_corrected_image);
    waitKey(0);

    return 0;
}

Result:
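
Since the gamma curve depends only on the 256 possible input values, a common optimization is to precompute them once and apply the result with cv::LUT instead of calling pow for every pixel. The sketch below assumes the image and gamma variables from the code above.

    // Build a 256-entry lookup table: lut[i] = 255 * (i / 255)^gamma
    Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; i++) {
        lut.at<uchar>(i) = saturate_cast<uchar>(pow(i / 255.0, gamma) * 255.0);
    }

    // Apply the table to every channel of every pixel in a single call
    Mat gamma_lut_image;
    LUT(image, lut, gamma_lut_image);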

6. Logarithmic transformation

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

using namespace cv;
using namespace std;

int main() {
    // Load the image
    Mat image = imread("D://lena.png");

    if (image.empty()) {
        cout << "无法加载图像" << endl;
        return -1;
    }

    // Logarithmic transformation parameters
    double c = 1.0;     // scaling constant
    double gamma = 0.5; // parameter controlling the strength of the log curve

    // Logarithmic transformation
    Mat log_transformed_image = Mat::zeros(image.size(), image.type());

    for (int y = 0; y < image.rows; y++) {
        for (int x = 0; x < image.cols; x++) {
            for (int ch = 0; ch < image.channels(); ch++) {  // use 'ch' so the constant c above is not shadowed
                double pixel_value = image.at<Vec3b>(y, x)[ch] / 255.0;
                double corrected_value = c * log(1 + pixel_value) / log(1 + gamma);
                log_transformed_image.at<Vec3b>(y, x)[ch] = saturate_cast<uchar>(corrected_value * 255.0);
            }
        }
    }

    // Show the original image and the log-transformed image
    imshow("Original image", image);
    imshow("Log-transformed image", log_transformed_image);
    waitKey(0);

    return 0;
}

Result:
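
For reference, a log transform can also be applied with OpenCV matrix operations instead of per-pixel loops. This is a minimal sketch assuming the image variable from the code above; the scale factor 255 / log(256) is the classic choice that maps input 255 to output 255.

    // Classic log transform s = c * log(1 + r), with c = 255 / log(256)
    Mat float_image, log_image;
    image.convertTo(float_image, CV_32F);     // work in floating point
    float_image += Scalar::all(1.0);          // add 1 to every channel
    log(float_image, log_image);              // element-wise natural logarithm (cv::log)
    log_image *= 255.0 / std::log(256.0);     // scale so that 255 maps to 255
    log_image.convertTo(log_image, CV_8U);    // back to 8-bit with saturation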

7. Histogram equalization

In this example, we load the image in grayscale mode using the IMREAD_GRAYSCALE flag. We then call the equalizeHist function to perform histogram equalization and enhance the contrast of the image.

Histogram equalization is a simple yet effective method that improves the contrast of an image by making its pixel values more evenly distributed. This example demonstrates how to use OpenCV to implement histogram equalization to improve the visual quality of an image.

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main() {
    // Load the image
    Mat image = imread("D://lena.png", IMREAD_GRAYSCALE);  // load as a grayscale image

    if (image.empty()) {
        cout << "无法加载图像" << endl;
        return -1;
    }

    // Histogram equalization
    Mat equalized_image;
    equalizeHist(image, equalized_image);

    // Show the original image and the equalized image
    imshow("Original image", image);
    imshow("Histogram equalized image", equalized_image);
    waitKey(0);

    return 0;
}

Result:

Why do we need to convert the image to grayscale mode before histogram equalization?

Histogram equalization is usually used for grayscale image processing, rather than color images, mainly for the following reasons:

  1. Simplified processing: Histogram equalization is a very basic and common image enhancement technique. In grayscale images, each pixel has only one grayscale value, so the processing is relatively simple. When processing color images, histogram equalization needs to be performed on each channel separately, adding complexity.

  2. Principle of histogram equalization: The core idea of histogram equalization is to stretch the range of pixel values by redistributing them so that the histogram of the image becomes more uniform. In a grayscale image, this means adjusting the gray levels so that they are more evenly distributed between 0 and 255. In a color image, this operation must be done separately for each channel.

  3. Color information redundancy: Color images contain more information, including color and brightness information. In some cases, only the brightness information needs to be equalized since the color information may not need to be changed. By converting a color image to grayscale, you can better control the effects of equalization to maintain the overall appearance of the image.

  4. Common applications: Histogram equalization is commonly used in applications such as medical image processing, computer vision, and image enhancement, which often use grayscale images. Therefore, in these scenarios, histogram equalization is often applied directly to grayscale images to improve contrast.

Although histogram equalization is usually used for grayscale images, there are also extended methods for color images, such as equalizing the luminance channel while leaving the color information unchanged. This type of method is often called color histogram equalization or color equalization. In this case, luminance information is processed, while color information is preserved.
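
A minimal sketch of that luminance-only approach, assuming a BGR color input named color_image (the variable name is illustrative): convert to YCrCb, equalize only the Y (luminance) channel, and convert back.

    // Equalize only the luminance of a color image, leaving the chroma channels untouched
    Mat ycrcb;
    cvtColor(color_image, ycrcb, COLOR_BGR2YCrCb);

    vector<Mat> channels;
    split(ycrcb, channels);                   // channels[0] is the Y (luminance) plane
    equalizeHist(channels[0], channels[0]);
    merge(channels, ycrcb);

    Mat equalized_color;
    cvtColor(ycrcb, equalized_color, COLOR_YCrCb2BGR);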

8. Adaptive histogram equalization

#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main() {
    // Load the image
    Mat image = imread("D://lena.png", IMREAD_GRAYSCALE);  // load as a grayscale image

    if (image.empty()) {
        cout << "无法加载图像" << endl;
        return -1;
    }

    // Adaptive histogram equalization (CLAHE: Contrast Limited Adaptive Histogram Equalization)
    Mat adaptive_equalized_image;
    Ptr<CLAHE> clahe = createCLAHE(2.0, Size(8, 8));  // clip limit 2.0, 8x8 tile grid
    clahe->apply(image, adaptive_equalized_image);

    // Show the original image and the adaptive histogram equalized image
    imshow("Original image", image);
    imshow("Adaptive histogram equalized image", adaptive_equalized_image);
    waitKey(0);

    return 0;
}

In this example, we load the image in grayscale mode using the IMREAD_GRAYSCALE flag. We then create a CLAHE object with createCLAHE and call its apply method to perform adaptive histogram equalization and enhance the contrast of the image.

Adaptive histogram equalization is an improved histogram equalization method that applies equalization separately to different regions (tiles) of the image to cope with local lighting differences. The clip limit passed to createCLAHE bounds how much contrast may be amplified in each tile, which prevents noise from being over-enhanced. This example demonstrates how to use OpenCV to implement adaptive histogram equalization to improve the visual quality of images.


Origin blog.csdn.net/w2492602718/article/details/134020528