Using OpenCL in CV applications based on the Android camera preview

This tutorial is designed to help you use OpenCL™ in CV applications based on the Android camera preview. The sample was written with the Eclipse-based ADT tools (no longer supported by Google), but it can easily be reproduced in Android Studio.

This tutorial assumes you have the following installed and configured:

  • JDK
  • Android SDK and NDK
  • Eclipse IDE with ADT and CDT plugins

It also assumes that you are familiar with Android Java and JNI programming basics. If you need help setting up the environment above, you may consult the "Introduction into Android Development" tutorial.
This tutorial also assumes you have an Android device with OpenCL enabled.
The related source code is located at opencv/samples/android/tutorial-4-opencl in the OpenCV samples directory.

Preface

Using GPGPU via OpenCL to improve application performance is a modern trend. Some CV algorithms (e.g. image filtering) run much faster on a GPU than on a CPU, and recently this has become possible on the Android operating system as well.

The most popular CV application scenario for an Android device is starting the camera in preview mode, applying some CV algorithm to every frame, and displaying the preview frames modified by that algorithm.

Let's consider how we can use OpenCL in this scenario. In particular, we'll try two ways: direct calls to the OpenCL API, and the recently introduced OpenCV T-API (aka Transparent API), which implicitly accelerates some OpenCV algorithms with OpenCL.

Application structure

Starting with Android API level 11 (Android 3.0), the Camera API allows the use of OpenGL textures as a target for preview frames. Android API level 21 brings the new Camera2 API, which provides much more control over camera settings and usage modes; it allows several targets for preview frames, OpenGL textures in particular.

Having a preview frame in an OpenGL texture is a good starting point for using OpenCL, because there is an OpenGL-OpenCL interoperability API (cl_khr_gl_sharing) that lets OpenCL functions share OpenGL texture data instead of copying it (with some restrictions, of course).

Let's create a base for our application: configure the Android camera to send preview frames to an OpenGL texture and display those frames on screen without any processing.
A minimal Activity class looks like this:

public class Tutorial4Activity extends Activity {
    private MyGLSurfaceView mView;
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
                WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        mView = new MyGLSurfaceView(this);
        setContentView(mView);
    }
    @Override
    protected void onPause() {
        mView.onPause();
        super.onPause();
    }
    @Override
    protected void onResume() {
        super.onResume();
        mView.onResume();
    }
}

And add a minimal View class as well:

public class MyGLSurfaceView extends GLSurfaceView {
    MyGLRendererBase mRenderer;
    public MyGLSurfaceView(Context context) {
        super(context);
        if(android.os.Build.VERSION.SDK_INT >= 21)
            mRenderer = new Camera2Renderer(this);
        else
            mRenderer = new CameraRenderer(this);
        setEGLContextClientVersion(2);
        setRenderer(mRenderer);
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        super.surfaceCreated(holder);
    }
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        super.surfaceDestroyed(holder);
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        super.surfaceChanged(holder, format, w, h);
    }
    @Override
    public void onResume() {
        super.onResume();
        mRenderer.onResume();
    }
    @Override
    public void onPause() {
        mRenderer.onPause();
        super.onPause();
    }
}

Note: we use two renderer classes: one for the legacy Camera API and another for the modern Camera2 API.


A minimal Renderer class can be implemented in Java (OpenGL ES 2.0 is available in Java), but since we are going to modify the preview texture via OpenCL, let's move all the OpenGL stuff to JNI. Here is a simple Java wrapper for our JNI stuff:

public class NativeGLRenderer {
    static
    {
        System.loadLibrary("opencv_java3"); // comment this when using OpenCV Manager
        System.loadLibrary("JNIrender");
    }
    public static native int initGL();
    public static native void closeGL();
    public static native void drawFrame();
    public static native void changeSize(int width, int height);
}

Since the Camera and Camera2 APIs differ significantly in camera setup and control, let's create a base class for the two corresponding renderers:

public abstract class MyGLRendererBase implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    protected final String LOGTAG = "MyGLRendererBase";
    protected SurfaceTexture mSTex;
    protected MyGLSurfaceView mView;
    protected boolean mGLInit = false;
    protected boolean mTexUpdate = false;
    MyGLRendererBase(MyGLSurfaceView view) {
        mView = view;
    }
    protected abstract void openCamera();
    protected abstract void closeCamera();
    protected abstract void setCameraPreviewSize(int width, int height);
    public void onResume() {
        Log.i(LOGTAG, "onResume");
    }
    public void onPause() {
        Log.i(LOGTAG, "onPause");
        mGLInit = false;
        mTexUpdate = false;
        closeCamera();
        if(mSTex != null) {
            mSTex.release();
            mSTex = null;
            NativeGLRenderer.closeGL();
        }
    }
    @Override
    public synchronized void onFrameAvailable(SurfaceTexture surfaceTexture) {
        //Log.i(LOGTAG, "onFrameAvailable");
        mTexUpdate = true;
        mView.requestRender();
    }
    @Override
    public void onDrawFrame(GL10 gl) {
        //Log.i(LOGTAG, "onDrawFrame");
        if (!mGLInit)
            return;
        synchronized (this) {
            if (mTexUpdate) {
                mSTex.updateTexImage();
                mTexUpdate = false;
            }
        }
        NativeGLRenderer.drawFrame();
    }
    @Override
    public void onSurfaceChanged(GL10 gl, int surfaceWidth, int surfaceHeight) {
        Log.i(LOGTAG, "onSurfaceChanged("+surfaceWidth+"x"+surfaceHeight+")");
        NativeGLRenderer.changeSize(surfaceWidth, surfaceHeight);
        setCameraPreviewSize(surfaceWidth, surfaceHeight);
    }
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(LOGTAG, "onSurfaceCreated");
        String strGLVersion = GLES20.glGetString(GLES20.GL_VERSION);
        if (strGLVersion != null)
            Log.i(LOGTAG, "OpenGL ES version: " + strGLVersion);
        int hTex = NativeGLRenderer.initGL();
        mSTex = new SurfaceTexture(hTex);
        mSTex.setOnFrameAvailableListener(this);
        openCamera();
        mGLInit = true;
    }
}

As you can see, the inheritors for the Camera and Camera2 APIs should implement the following abstract methods:

protected abstract void openCamera();
protected abstract void closeCamera();
protected abstract void setCameraPreviewSize(int width, int height);

Let's leave the details of their implementation beyond this tutorial; please refer to the source code to see them.

Preview frames modification

The details of OpenGL ES 2.0 initialization are straightforward but too verbose to quote here. The important point is that the OpenGL texture used as the target for the camera preview must be of type GL_TEXTURE_EXTERNAL_OES (not GL_TEXTURE_2D); internally it keeps picture data in YUV format. That makes it impossible to share it via the CL-GL interop (cl_khr_gl_sharing) or to access its pixel data from C/C++ code. To overcome this restriction we have to perform an OpenGL rendering from this texture to another regular GL_TEXTURE_2D one, using a FrameBuffer Object (FBO).
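To illustrate why the YUV layout matters, here is a minimal sketch (not part of the sample) of converting a single YUV pixel to RGB, assuming the NV21 format common on Android cameras and BT.601-style coefficients; the actual layout behind GL_TEXTURE_EXTERNAL_OES is device-dependent, which is exactly why its pixels cannot be accessed directly:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp an int into the 0..255 byte range.
static uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Convert one NV21 pixel (luma + interleaved chroma) to RGB.
// Coefficients are the usual full-range BT.601 approximation.
void nv21PixelToRGB(uint8_t luma, uint8_t u, uint8_t v, uint8_t rgb[3])
{
    int c = (int)luma;
    int d = (int)u - 128;
    int e = (int)v - 128;
    rgb[0] = clamp8(c + (int)(1.402f * e));              // R
    rgb[1] = clamp8(c - (int)(0.344f * d + 0.714f * e)); // G
    rgb[2] = clamp8(c + (int)(1.772f * d));              // B
}
```

The FBO render pass mentioned above effectively performs this conversion on the GPU, producing an RGBA GL_TEXTURE_2D that the CL-GL interop can share.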

C/C++ code

After that we can read (copy) the pixel data from C/C++ via glReadPixels() and, after modification, write it back to the texture via glTexSubImage2D().
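The modification applied between the read and the write-back in this sample is a Laplacian filter (see the performance notes below, where cv::Laplacian is called on a cv::Mat). A minimal CPU sketch of such a 3x3 Laplacian on a single-channel buffer, written here without OpenCV for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// 4-neighbour Laplacian on a w x h single-channel image; border pixels
// are left at zero for simplicity. The real sample uses cv::Laplacian.
std::vector<uint8_t> laplacian3x3(const std::vector<uint8_t>& src, int w, int h)
{
    std::vector<uint8_t> dst(src.size(), 0);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            int c = src[y * w + x];
            int sum = src[y * w + x - 1] + src[y * w + x + 1]
                    + src[(y - 1) * w + x] + src[(y + 1) * w + x] - 4 * c;
            dst[y * w + x] = (uint8_t)std::min(255, std::abs(sum));
        }
    return dst;
}
```

Flat regions produce zero response and edges produce large values, which is what makes the modified preview visibly different from the raw camera frames.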

Direct OpenCL calls

That GL_TEXTURE_2D texture can also be shared with OpenCL without copying, but we have to create the OpenCL context in a special way for that:

void initCL()
{
    EGLDisplay mEglDisplay = eglGetCurrentDisplay();
    if (mEglDisplay == EGL_NO_DISPLAY)
        LOGE("initCL: eglGetCurrentDisplay() returned 'EGL_NO_DISPLAY', error = %x", eglGetError());
    EGLContext mEglContext = eglGetCurrentContext();
    if (mEglContext == EGL_NO_CONTEXT)
        LOGE("initCL: eglGetCurrentContext() returned 'EGL_NO_CONTEXT', error = %x", eglGetError());
    cl_context_properties props[] =
    {   CL_GL_CONTEXT_KHR,   (cl_context_properties) mEglContext,
        CL_EGL_DISPLAY_KHR,  (cl_context_properties) mEglDisplay,
        CL_CONTEXT_PLATFORM, 0,
        0 };
    try
    {
        cl::Platform p = cl::Platform::getDefault();
        std::string ext = p.getInfo<CL_PLATFORM_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by PLATFORM");
        props[5] = (cl_context_properties) p();
        theContext = cl::Context(CL_DEVICE_TYPE_GPU, props);
        std::vector<cl::Device> devs = theContext.getInfo<CL_CONTEXT_DEVICES>();
        LOGD("Context returned %d devices, taking the 1st one", (int)devs.size());
        ext = devs[0].getInfo<CL_DEVICE_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by DEVICE");
        theQueue = cl::CommandQueue(theContext, devs[0]);
        // ...
    }
    catch(cl::Error& e)
    {
        LOGE("cl::Error: %s (%d)", e.what(), e.err());
    }
    catch(std::exception& e)
    {
        LOGE("std::exception: %s", e.what());
    }
    catch(...)
    {
        LOGE( "OpenCL info: unknown error while initializing OpenCL stuff" );
    }
    LOGD("initCL completed");
}

Note:
To build this JNI code you need the OpenCL 1.2 headers from Khronos and the libOpenCL.so binary library; the latter has to be pulled from your device.

Then the texture can be wrapped into a cl::ImageGL object and processed via OpenCL calls:

cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY,  GL_TEXTURE_2D, 0, texIn);
cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texOut);
std::vector < cl::Memory > images;
images.push_back(imgIn);
images.push_back(imgOut);
theQueue.enqueueAcquireGLObjects(&images);
theQueue.finish();
cl::Kernel Laplacian = ...
Laplacian.setArg(0, imgIn);
Laplacian.setArg(1, imgOut);
theQueue.finish();
theQueue.enqueueNDRangeKernel(Laplacian, cl::NullRange, cl::NDRange(w, h), cl::NullRange);
theQueue.finish();
theQueue.enqueueReleaseGLObjects(&images);
theQueue.finish();
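The kernel source itself is elided above (`cl::Kernel Laplacian = ...`). For illustration only, a hypothetical OpenCL C kernel with a matching signature could look like the sketch below; this is not the sample's actual kernel, and on a real device it would be compiled with cl::Program from this source string:

```cpp
#include <cassert>
#include <string>

// Illustrative OpenCL C source: a 4-neighbour Laplacian reading the shared
// input image and writing the shared output image, one work-item per pixel.
static const char* kLaplacianKernelSrc = R"CLC(
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
__kernel void Laplacian(__read_only image2d_t src, __write_only image2d_t dst)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    float4 c = read_imagef(src, smp, (int2)(x, y));
    float4 sum = read_imagef(src, smp, (int2)(x - 1, y))
               + read_imagef(src, smp, (int2)(x + 1, y))
               + read_imagef(src, smp, (int2)(x, y - 1))
               + read_imagef(src, smp, (int2)(x, y + 1))
               - 4.0f * c;
    write_imagef(dst, (int2)(x, y), fabs(sum));
}
)CLC";
```

The cl::NDRange(w, h) in the enqueue call above maps one work-item to each output pixel, so the kernel needs no explicit loops.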

OpenCV T-API

However, instead of writing OpenCL code yourself, you may want to use the OpenCV T-API, which calls OpenCL implicitly. All you need is to pass the created OpenCL context to OpenCV (via cv::ocl::attachContext()) and somehow wrap the OpenGL texture with a cv::UMat. Unfortunately, UMat keeps an OpenCL buffer internally, which can't be wrapped over either an OpenGL texture or an OpenCL image, so we have to copy the image data:

cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY,  GL_TEXTURE_2D, 0, tex);
std::vector < cl::Memory > images(1, imgIn);
theQueue.enqueueAcquireGLObjects(&images);
theQueue.finish();
cv::UMat uIn, uOut, uTmp;
cv::ocl::convertFromImage(imgIn(), uIn);
theQueue.enqueueReleaseGLObjects(&images);
cv::Laplacian(uIn, uTmp, CV_8U);
cv::multiply(uTmp, 10, uOut);
cv::ocl::finish();
cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, tex);
images.clear();
images.push_back(imgOut);
theQueue.enqueueAcquireGLObjects(&images);
cl_mem clBuffer = (cl_mem)uOut.handle(cv::ACCESS_READ);
cl_command_queue q = (cl_command_queue)cv::ocl::Queue::getDefault().ptr();
size_t offset = 0;
size_t origin[3] = { 0, 0, 0 };
size_t region[3] = { w, h, 1 };
CV_Assert(clEnqueueCopyBufferToImage (q, clBuffer, imgOut(), offset, origin, region, 0, NULL, NULL) == CL_SUCCESS);
theQueue.enqueueReleaseGLObjects(&images);
cv::ocl::finish();

Note:
We have to make one more image data copy when placing the modified image back into the original OpenGL texture via the OpenCL image wrapper.
Note:
By default, OpenCL support (T-API) is disabled in OpenCV builds for Android (so it's absent from the official packages as of version 3.0), but it is possible to rebuild OpenCV for Android locally with OpenCL/T-API enabled: pass the -DWITH_OPENCL=YES option to CMake:
cd opencv-build-android
path/to/cmake.exe -GNinja -DCMAKE_MAKE_PROGRAM="path/to/ninja.exe" -DCMAKE_TOOLCHAIN_FILE=path/to/opencv/platforms/android/android.toolchain.cmake -DANDROID_ABI="armeabi-v7a with NEON" -DCMAKE_BUILD_WITH_INSTALL_RPATH=ON path/to/opencv
path/to/ninja.exe install/strip
To use your own modified libopencv_java3.so you have to keep it inside your APK, avoid using OpenCV Manager, and load the library manually via System.loadLibrary("opencv_java3").
Performance notes

To compare performance we measured the FPS of the same preview-frame modification (a Laplacian) done by C/C++ code (a call to cv::Laplacian with cv::Mat), by direct OpenCL calls (using OpenCL images for input and output), and by OpenCV T-API (a call to cv::Laplacian with cv::UMat), on a Sony Xperia Z3 at 720p camera resolution:

  • the C/C++ version shows 3-4 fps
  • direct OpenCL calls show 25-27 fps
  • OpenCV T-API shows 11-13 fps (due to the extra copying from cl_image to cl_buffer and back)
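Taking the midpoints of the ranges above gives a rough sense of scale (a back-of-the-envelope calculation from the reported figures, not an additional measurement):

```cpp
#include <cassert>

// Midpoints of the FPS ranges reported above.
constexpr double cpuFps  = 3.5;  // C/C++ version, 3-4 fps
constexpr double clFps   = 26.0; // direct OpenCL, 25-27 fps
constexpr double tapiFps = 12.0; // T-API, 11-13 fps

// Direct OpenCL is roughly 7x the CPU path; T-API roughly 3.4x,
// with the gap to direct OpenCL explained by the extra image copies.
constexpr double clSpeedup   = clFps / cpuFps;
constexpr double tapiSpeedup = tapiFps / cpuFps;
```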

Source: http://docs.opencv.org/master/d7/dbd/tutorial_android_ocl_intro.html


Reprinted from blog.csdn.net/kingroc/article/details/70792658