How to use and configure Basler industrial cameras for machine vision (C++)

Basler industrial cameras are used here for binocular (stereo) vision. Many problems came up along the way and are recorded below.

First, read the manuals: https://zh.docs.baslerweb.com/software

The manual contains all the source code and reference examples; in practice, most of the code you actually use comes from these samples. Pick the example code that matches your specific project.

1. Camera and lens selection

You can use Basler's lens selection tool to configure and select a lens according to the distance to the target you need to measure, the target size, and similar information. The address is here; the interface is as follows:
The focal length is generally chosen according to the distance to the measured target. The selection rules can be summarized as follows:
(1) The smaller the focal length, the wider the field of view; correspondingly, the target looks small, similar to shooting at minimum zoom on a mobile phone.
(2) The larger the focal length, the narrower the field of view; it is better for inspecting small targets at long distances, but too long a focal length easily means a large target does not fit in the frame, similar to shooting at high zoom on a mobile phone.
(3) There is a specific calculation formula for reference (a rough sketch is given below), but it is recommended to simply use the Basler tool for selection.
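
As a rough sketch of that formula, a simple pinhole approximation (it ignores lens distortion; the sensor width below is a hypothetical example, take the real value from the camera's datasheet):

// Rough pinhole approximation: the required focal length scales the working
// distance by the ratio of sensor width to scene (field-of-view) width.
double approxFocalLengthMm(double sensorWidthMm, double workingDistanceMm, double sceneWidthMm)
{
    return sensorWidthMm * workingDistanceMm / sceneWidthMm;
}
// e.g. a ~8.8 mm wide sensor, 1000 mm working distance, 500 mm wide scene
// -> approxFocalLengthMm(8.8, 1000.0, 500.0) ≈ 17.6 mm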

2. Camera installation and use

(1) Basler cameras require a network cable and a power cable (12 V supply); the corresponding cables can be found on the official website.
(2) Connect the camera to the power supply and connect the network cable to the computer, then download the camera software and driver from the official website. Under Windows, the installer creates the corresponding folders, including the C++ libraries and reference code examples. Under Linux, download the tar.gz package, extract it to a suitable directory, and run source <pylon directory>/bin/pylon-setup-env.sh <pylon directory>. For example, I installed pylon 7.3.0 and extracted it to /opt/pylon7, which directly contains the include folder and so on, so under Linux I run:

source /opt/pylon7/bin/pylon-setup-env.sh /opt/pylon7

This completes the installation of the camera driver and environment under Linux.
(3) Configure the camera IP. Under Windows you can use Basler's pylon IP Configurator tool; under Linux it is also available if the installed system has a GUI. Set the camera IP address so that it is in the same network segment as the network interface it is connected to. The recommended subnet mask is 255.255.255.0; according to the manufacturer, other masks may cause abnormal disconnection errors.
(4) After the IP configuration is complete, start the pylon Viewer tool under Windows and check the camera. If the camera shows up, the IP configuration is correct and the computer and camera are in the same network segment.
(5) To maintain the frame rate of a Basler camera, a Gigabit network is required. With two cameras behind a switch, 2000 Mbit/s of bandwidth is needed, and so on, to reach the maximum frame rate; otherwise problems such as dropped frames will occur. A camera connected directly to the computer is not affected; just enable jumbo frames (a packet-size sketch follows this list).
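
On the camera side, a minimal sketch of the bandwidth settings, assuming GigE cameras that expose the standard GevSCPSPacketSize and GevSCPD features (check the node map in pylon Viewer for your model; the values here are examples, not recommendations):

#include <pylon/PylonIncludes.h>

using namespace Pylon;
using namespace GenApi;

// After enabling jumbo frames on the NIC, raise the camera's packet size and add
// an inter-packet delay so two cameras can share one link. Call after cameras.Open().
void ConfigureGigEBandwidth(CInstantCamera& camera)
{
    INodeMap& nodemap = camera.GetNodeMap();
    CIntegerParameter packetSize(nodemap, "GevSCPSPacketSize");
    CIntegerParameter interPacketDelay(nodemap, "GevSCPD");

    packetSize.TrySetValue(8192, IntegerValueCorrection_Nearest);        // jumbo frame payload
    interPacketDelay.TrySetValue(10000, IntegerValueCorrection_Nearest); // spread packets over time
}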

3. C++ calls camera environment deployment

1. After the camera driver and software are installed, the following folders are present. The development environment is under the Development folder; under Linux, it is inside the extracted directory.
2. You can use a C++ IDE such as Visual Studio for development; add the dependent libraries and header files, and once the include and lib paths are linked, the camera can be called (a minimal smoke test is sketched below).
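
To verify that the include and lib paths are wired up correctly, a small sketch like the following can be built first; it only enumerates the connected devices and prints their model and serial numbers (assuming the pylon headers are reachable as <pylon/PylonIncludes.h>):

#include <pylon/PylonIncludes.h>
#include <iostream>

int main()
{
    // Initializes the pylon runtime and terminates it when the object goes out of scope.
    Pylon::PylonAutoInitTerm autoInitTerm;

    Pylon::DeviceInfoList_t devices;
    Pylon::CTlFactory::GetInstance().EnumerateDevices(devices);
    for (size_t i = 0; i < devices.size(); ++i)
        std::cout << devices[i].GetModelName() << " (" << devices[i].GetSerialNumber() << ")" << std::endl;
    return 0;
}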

4. C++ camera call

1. Camera initialization

Camera initialization includes finding the cameras, matching each device to the correct camera, and setting the exposure, frame rate, gain, and so on. Based on the API documentation (see the Grab_MultipleCameras sample), the initialization of the two cameras is organized as follows:

void initCamera()
{
    try
    {
        PylonInitialize();	// initialize the pylon runtime
        CTlFactory& tlFactory = CTlFactory::GetInstance();
        pTL = dynamic_cast<IGigETransportLayer*>(tlFactory.CreateTl( BaslerGigEDeviceClass ));
        if (pTL == NULL)
        {
            throw RUNTIME_EXCEPTION( "No GigE cameras available." );
        }

        DeviceInfoList_t allDeviceInfos;
        if (pTL->EnumerateDevices( allDeviceInfos ) == 0)
        {
            throw RUNTIME_EXCEPTION( "No GigE cameras available." );
        }

        DeviceInfoList_t usableDeviceInfos;
        string left_camera_ip = "172.16.105.21";
        // left_camera_ip distinguishes the two cameras; the configured IP, the serial number, etc. can be used for this
        if (string(allDeviceInfos[0].GetIpAddress()) == left_camera_ip) {
            usableDeviceInfos.push_back(allDeviceInfos[0]);
            subnet = allDeviceInfos[0].GetSubnetAddress();	// primary camera
            usableDeviceInfos.push_back(allDeviceInfos[1]);
            LOG(INFO) << "Primary camera: " << allDeviceInfos[0].GetIpAddress() << endl;
            LOG(INFO) << "Secondary camera: " << allDeviceInfos[1].GetIpAddress() << endl;
        }
        else if (string(allDeviceInfos[1].GetIpAddress()) == left_camera_ip) {
            usableDeviceInfos.push_back(allDeviceInfos[1]);
            subnet = allDeviceInfos[1].GetSubnetAddress();	// primary camera
            usableDeviceInfos.push_back(allDeviceInfos[0]);
            LOG(INFO) << "Primary camera IP: " << allDeviceInfos[1].GetIpAddress() << endl;
            LOG(INFO) << "SubnetAddress: " << allDeviceInfos[1].GetSubnetAddress() << endl;
            LOG(INFO) << "DefaultGateway: " << allDeviceInfos[1].GetDefaultGateway() << endl;
            LOG(INFO) << "SubnetMask: " << allDeviceInfos[1].GetSubnetMask() << endl;

            LOG(INFO) << "Secondary camera IP: " << allDeviceInfos[0].GetIpAddress() << endl;
            LOG(INFO) << "SubnetAddress: " << allDeviceInfos[0].GetSubnetAddress() << endl;
            LOG(INFO) << "DefaultGateway: " << allDeviceInfos[0].GetDefaultGateway() << endl;
            LOG(INFO) << "SubnetMask: " << allDeviceInfos[0].GetSubnetMask() << endl;
        }
        else {
            LOG(INFO) << "Camera IP is wrong, please set the IP" << endl;
        }

//        srand( (unsigned) time( NULL ) );
//        DeviceKey = rand();
//        GroupKey = 0x112233;

        // Attach the two cameras (cameras is a CInstantCameraArray of size 2)
        for (size_t i = 0; i < cameras.GetSize(); ++i)
        {
            cameras[i].Attach( tlFactory.CreateDevice( usableDeviceInfos[i] ) );
            //cameras[i].RegisterConfiguration( new CActionTriggerConfiguration( DeviceKey, GroupKey, AllGroupMask ), RegistrationMode_Append, Cleanup_Delete );
            //cameras[i].SetCameraContext( i );
            const CBaslerGigEDeviceInfo& di = cameras[i].GetDeviceInfo();
            cout << "Using camera " << i << ": " << di.GetSerialNumber() << " (" << di.GetIpAddress() << ")" << endl;
        }

        cameras.Open();

        // Basic camera settings
        SetCamera(cameras[0], Type_Basler_ExposureTimeAbs, expore_time_l);			// exposure time
        SetCamera(cameras[0], Type_Basler_GainRaw, gain_l);						// gain
        SetCamera(cameras[0], Type_Basler_AcquisitionFrameRateAbs, fps_l);			// frame rate
        SetCamera(cameras[0], Type_Basler_Width, 2448);
        SetCamera(cameras[0], Type_Basler_Height, 2048);

        SetCamera(cameras[1], Type_Basler_ExposureTimeAbs, expore_time_r);			// exposure time
        SetCamera(cameras[1], Type_Basler_GainRaw, gain_r);						// gain
        SetCamera(cameras[1], Type_Basler_AcquisitionFrameRateAbs, fps_r);			// frame rate
        SetCamera(cameras[1], Type_Basler_Width, 2448);
        SetCamera(cameras[1], Type_Basler_Height, 2048);

        // Trigger mode settings (TriggerSelector / TriggerSoftware)
        // Primary camera: software trigger, output line set to "exposure active"
        //SetCamera(cameras[0], Type_Basler_Freerun, 0);

        // Secondary camera: hardware trigger on Line1
        //SetCamera(cameras[1], Type_Basler_Line1, 0);
    }
    catch (const GenericException& e)
    {
        if (cameras.IsGrabbing())
            cameras.StopGrabbing();
        // Error handling
        LOG(INFO) << "init: an exception occurred." << endl
                  << e.GetDescription() << endl;
    }
}
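
Note that SetCamera, the Type_Basler_* constants, and variables such as expore_time_l, gain_l, and fps_l come from the surrounding project, not from the pylon SDK. As a rough sketch of what such a helper might look like using pylon's generic parameter classes (feature names such as ExposureTimeAbs and GainRaw apply to older GigE models; newer models use ExposureTime and Gain, so check the node map in pylon Viewer):

#include <pylon/PylonIncludes.h>

using namespace Pylon;
using namespace GenApi;

// Hypothetical helpers in the spirit of SetCamera above: write one float or
// integer camera feature by its GenICam name. The camera must already be open.
void SetCameraFloat(CInstantCamera& camera, const char* featureName, double value)
{
    CFloatParameter param(camera.GetNodeMap(), featureName);
    param.TrySetValue(value);
}

void SetCameraInt(CInstantCamera& camera, const char* featureName, int64_t value)
{
    CIntegerParameter param(camera.GetNodeMap(), featureName);
    param.TrySetValue(value, IntegerValueCorrection_Nearest);
}

// Example usage (values are placeholders):
//   SetCameraFloat(cameras[0], "ExposureTimeAbs", 20000.0);          // µs
//   SetCameraInt(cameras[0], "GainRaw", 100);
//   SetCameraFloat(cameras[0], "AcquisitionFrameRateAbs", 20.0);
//   SetCameraInt(cameras[0], "Width", 2448);
//   SetCameraInt(cameras[0], "Height", 2048);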

2. Call the camera

When calling the camera, the most common problem is that grabbing loses frames, mainly because the frame rate is set too high, the bandwidth is insufficient, and similar issues.
Note: the if (cameras.IsGrabbing()) below can be changed into a while loop so that output continues (a continuous variant is sketched after the function); whether the one-shot if or a loop is appropriate depends on actual usage. Grabbing one frame takes roughly 50 ms here. Also, cameras.StartGrabbing() can be moved into the initialization so that grabbing runs continuously and about 20 frames per second can be sustained; there is then no need to start and stop grabbing repeatedly, which is actually quite time-consuming.

void GetCameraImage() {
	try {
        //pTL->IssueActionCommand(DeviceKey, GroupKey, AllGroupMask, subnet );
        // maximum time to wait for one frame (1 s)
        int skiptime = 1000;
        //LOG(INFO) << "Maximum time for acquiring an image: " << skiptime << " ms" << endl;
        cameras.StartGrabbing(GrabStrategy_OneByOne, GrabLoop_ProvidedByUser);

        if (cameras.IsGrabbing()) {
            std::chrono::high_resolution_clock::time_point tStartTime(std::chrono::high_resolution_clock::now());
            double lTimeAloInterval = 0;
            count_grab_once++;
            cameras[0].RetrieveResult(skiptime, ptrGrabResultl, TimeoutHandling_ThrowException);
            cameras[1].RetrieveResult(skiptime, ptrGrabResultr, TimeoutHandling_ThrowException);

            if (ptrGrabResultl->GrabSucceeded() && ptrGrabResultr->GrabSucceeded()) {
                intptr_t cameraContextValuel = ptrGrabResultl->GetCameraContext();
                intptr_t cameraContextValuer = ptrGrabResultr->GetCameraContext();
                const uint8_t *pImageBufferl = (uint8_t *) ptrGrabResultl->GetBuffer();
                const uint8_t *pImageBufferr = (uint8_t *) ptrGrabResultr->GetBuffer();
                // Wrap the pylon buffers in OpenCV images (Mono8 assumed).
                // Note: these Mats reference the grab buffers; clone() them if they must outlive the grab results.
                Mat SaveImagel = cv::Mat(ptrGrabResultl->GetHeight(), ptrGrabResultl->GetWidth(), CV_8UC1,
                                         (uint8_t *) pImageBufferl);
                Mat SaveImager = cv::Mat(ptrGrabResultr->GetHeight(), ptrGrabResultr->GetWidth(), CV_8UC1,
                                         (uint8_t *) pImageBufferr);
            }

            lTimeAloInterval = std::chrono::duration_cast<std::chrono::duration<double, std::ratio<1, 1000> >>(std::chrono::high_resolution_clock::now() - tStartTime).count();
            LOG(INFO) << "------------ single grab image cost time: ----------------" << lTimeAloInterval << " ms" << endl;
        }
        cameras.StopGrabbing();
	}
	catch (const GenericException& e)
	{
        if (cameras.IsGrabbing())
            cameras.StopGrabbing();
		// Error handling
		LOG(INFO) << "An exception occurred." << endl
			<< e.GetDescription() << endl;
	}
}
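
As mentioned above, when StartGrabbing is moved into the initialization, the retrieval can run in a while loop instead of the one-shot if. A minimal sketch of that continuous variant, reusing the same globals (cameras, ptrGrabResultl, ptrGrabResultr) as above:

// Continuous variant: StartGrabbing(GrabStrategy_OneByOne) is assumed to have been
// called once during initialization; this loop then keeps retrieving stereo pairs.
void GrabLoop()
{
    try
    {
        while (cameras.IsGrabbing())
        {
            cameras[0].RetrieveResult(1000, ptrGrabResultl, TimeoutHandling_ThrowException);
            cameras[1].RetrieveResult(1000, ptrGrabResultr, TimeoutHandling_ThrowException);
            if (ptrGrabResultl->GrabSucceeded() && ptrGrabResultr->GrabSucceeded())
            {
                // process the pair here (convert to cv::Mat as in GetCameraImage above)
            }
        }
    }
    catch (const GenericException& e)
    {
        if (cameras.IsGrabbing())
            cameras.StopGrabbing();
        LOG(INFO) << "An exception occurred." << endl << e.GetDescription() << endl;
    }
}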

3. Turn off the camera

Close the camera promptly when it is no longer needed:

void CloseCamera()
{
	// Finally shut down pylon, i.e. call PylonTerminate. // close the cameras
	try
	{
		if (cameras.IsOpen()) {
			cameras.DetachDevice();
			cameras.Close();
			cameras.DestroyDevice();
			// shut down the pylon runtime
			LOG(INFO) << "SBaslerCameraControl deleteAll: PylonTerminate";
			PylonTerminate();
		}
	}
	catch (const Pylon::GenericException& e)
	{
		LOG(INFO) << "close camera failed..." << e.what();
	}
}

5. Frequently Asked Questions

1. The camera cannot be connected

This is usually an incorrect IP configuration. Make sure the camera and the computer's network interface are in the same network segment and use the same subnet mask.

2. Frame loss after connection

Error: e1004 The buffer was incompletely grabbed. This can be caused by performance problems of the network hardware used. Buffer underruns can also cause image loss. To fix this, use the pylonGigEConfigurator tool to optimize your setup and use more buffers for grabbing in your application to prevent buffer underruns.

This often occurs when multiple cameras are connected. Make sure that jumbo frames are enabled and that the camera frame rate fits within the available network bandwidth; refer to the API manual. One mitigation is sketched below.
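
Following the hint in that error message, one possible mitigation (a sketch, not a guaranteed fix) is to give the grab engine more buffers per camera before StartGrabbing; MaxNumBuffer is a standard CInstantCamera parameter, and the value below is only an example:

// Allocate more grab buffers per camera to make buffer underruns less likely
// (tune the value for your memory budget; the default is 10).
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
    cameras[i].MaxNumBuffer = 20;
}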

3. After the camera is connected, the camera can be found but is stuck during use.

I am not sure what exactly causes this problem. Since the camera can be found and connected, there are a few likely possibilities:
(1) The camera has not been used for a long time and has gone into a sleep-like state; powering it off and restarting it solves this.
(2) The camera was interrupted abnormally while it was open, i.e. the handle was not released. In this case, open and close the camera once with the official pylon Viewer software to recover.

4. Grab timeout

This is usually caused by the wait time (waittime) being set too short; change it to a larger value:

 cameras[1].RetrieveResult(waittime, ptrGrabResultr, TimeoutHandling_ThrowException);

Another possibility is that no frame ever arrives after the camera is connected, so the call waits until it times out. In that case the code needs to be checked; this is often caused by the trigger configuration (see the sketch below).
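
For the trigger case, a hedged sketch of a software-triggered grab on the primary camera is below; WaitForFrameTriggerReady and ExecuteSoftwareTrigger are standard CInstantCamera calls, but whether they apply depends on how TriggerMode and TriggerSource are actually configured on your cameras:

// Make sure a trigger is actually issued before waiting for a result,
// otherwise RetrieveResult will simply run into its timeout.
if (cameras[0].WaitForFrameTriggerReady(1000, TimeoutHandling_ThrowException))
{
    cameras[0].ExecuteSoftwareTrigger();
}
cameras[0].RetrieveResult(5000, ptrGrabResultl, TimeoutHandling_ThrowException);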

Original post: blog.csdn.net/llsplsp/article/details/132848545