Intelligent sign language digit real-time translation based on Android + OpenCV + CNN + Keras: deep learning algorithm application (including Python and ipynb project source code) + dataset (Part 1)



Preface

This project is built around a Keras deep learning model and is designed to classify and recognize sign language digits in real time. To achieve this goal, it incorporates algorithms from the OpenCV library to capture the position of the hands, enabling real-time recognition of sign language in video streams and images.

First, the project uses algorithms from the OpenCV library to capture hand positions in video streams or images. Techniques such as skin color detection, motion detection, or gesture detection can be used to locate the sign language gestures.
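As an illustration of the skin color detection idea, the following sketch thresholds pixels in the YCrCb color space, a common heuristic for locating hands. The threshold ranges and the `skin_mask` helper are assumptions for illustration, not the project's actual code; in a real OpenCV pipeline, `cv2.cvtColor` and `cv2.inRange` would typically do this work on each frame.

```python
import numpy as np

# Illustrative skin-color thresholds in the YCrCb color space; the exact
# ranges and the helper name `skin_mask` are assumptions, not project code.
CR_RANGE = (133.0, 173.0)
CB_RANGE = (77.0, 127.0)

def skin_mask(rgb_image):
    """Return a boolean mask of likely skin pixels for an H x W x 3 RGB image."""
    rgb = rgb_image.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard ITU-R BT.601 RGB -> Cr/Cb conversion (the luma channel is not needed).
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return ((cr > CR_RANGE[0]) & (cr < CR_RANGE[1]) &
            (cb > CB_RANGE[0]) & (cb < CB_RANGE[1]))
```

The resulting mask is usually cleaned with morphological operations and contour extraction before the hand region is cropped and passed to the classifier.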

Next, the project uses a CNN deep learning model to classify the captured sign language. After training, it can recognize different sign language gestures as specific categories or characters.
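To make the classification step concrete, the core operations of a CNN, convolution and max pooling, can be sketched in plain NumPy. This is a toy illustration, not the project's Keras model; in the project these operations come from Keras layers such as `Conv2D` and `MaxPooling2D`.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the operation CNN libraries call convolution."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit a full window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge kernel responds strongly at the boundary of a hand silhouette.
frame = np.zeros((4, 4))
frame[:, :2] = 1.0                      # left half "hand", right half background
feature_map = conv2d(frame, np.array([[1.0, -1.0]]))
pooled = max_pool(feature_map)
```

Stacking many such filtered-and-pooled feature maps, followed by dense layers, is what lets the CNN map a cropped hand image to a gesture class.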

During real-time recognition, the sign language gestures in the video stream or image are passed to the CNN deep learning model, which performs inference and maps each gesture to its corresponding category. This enables the system to recognize sign language gestures in real time and convert them into text or other forms of output.
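A minimal sketch of this decoding step, assuming ten digit classes and a hypothetical confidence threshold (the label list, threshold value, and function names are illustrative, not taken from the project):

```python
import numpy as np

LABELS = [str(d) for d in range(10)]  # hypothetical: sign language digits 0-9

def softmax(logits):
    """Convert raw model outputs into a probability distribution."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def decode_prediction(logits, threshold=0.6):
    """Map one frame's model output to a label; return None when uncertain."""
    probs = softmax(np.asarray(logits, dtype=np.float64))
    best = int(np.argmax(probs))
    return LABELS[best] if probs[best] >= threshold else None
```

Rejecting low-confidence frames (returning `None`) is a common way to keep the on-screen text stable while the hand is moving between gestures.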

Overall, this project combines computer vision and deep learning to provide a real-time sign language recognition solution, a useful tool that helps hearing-impaired people and sign language users communicate and be understood more easily.

Overall design

This part includes the overall system structure diagram and system flow chart.

Overall system structure diagram

The overall structure of the system is shown in the figure.

(Figure: overall system structure diagram)

System flow chart

The system flow is shown in the figure.

(Figure: system flow chart)

Operating environment

This part includes Python environment, TensorFlow environment, Keras environment and Android environment.

Python environment

Python 3.6 or later is required. On Windows, it is recommended to install Anaconda to set up the environment Python needs; the download address is https://www.anaconda.com/ . You can also run the code in a Linux virtual machine.
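As a small illustrative guard (not part of the project code), the stated Python 3.6+ requirement can be checked at startup:

```python
import sys

MIN_VERSION = (3, 6)  # minimum version stated above

def check_python(version=sys.version_info):
    """Return True when the interpreter meets the Python 3.6+ requirement."""
    return tuple(version[:2]) >= MIN_VERSION

if not check_python():
    raise SystemExit("Python 3.6 or later is required for this project.")
```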

TensorFlow environment

Replace the Anaconda mirror source: open cmd and enter the following commands directly:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes

Then delete the default mirror source for the corresponding environment in the Anaconda graphical interface.

Search for TensorFlow in the package manager of the corresponding Anaconda environment and select version 1.12.0.

Keras environment

Search for the Keras packages in the package manager of the corresponding Anaconda environment and select version 2.2.4, which matches TensorFlow 1.12.0.

Android environment

This part includes installing Android Studio and importing TensorFlow's jar package and so library.

1. Install Android Studio

The installation reference tutorial is at https://developer.android.google.cn/studio/install.html . To create a new Android project, open Android Studio and select the menu item File→New→New Project→Empty Activity→Next.

Name can be chosen freely; Save location is where the project is stored and can also be chosen freely; Minimum SDK is the lowest Android API level the project supports and should be at least 18. Click the Finish button to complete.

2. Import TensorFlow’s jar package and so library

The download address is https://github.com/PanJinquan/Mnist-tensorFlow-AndroidDemo/tree/master/app/libs .

Create a new armeabi-v7a folder under /app/libs and add libtensorflow_inference.so to it. Place libandroid_tensorflow_inference_java.jar under /app/libs, right-click it, and choose Add as Library.

Configure app/build.gradle by adding the following inside defaultConfig:

	multiDexEnabled true
	ndk {
		abiFilters "armeabi-v7a"
	}

Add sourceSets under the android node to specify the path of jniLibs:

sourceSets {
	main {
		jniLibs.srcDirs = ['libs']
	}
}

If the dependency does not appear automatically, add the jar file compiled by TensorFlow to the dependencies block:

implementation files('libs/libandroid_tensorflow_inference_java.jar')

3. Import the OpenCV library

Go to the OpenCV official website https://opencv.org/releases/ , download the Android package for the corresponding version, and unpack it, as shown in the figure.

(Figure: OpenCV Android release download page)
In the Android Studio menu, click File→New→Import Module and select the sdk folder in the unpacked Android package, as shown in the following two figures.

(Figures: Import Module dialogs)

Click the menu item File→Project Structure, select Dependencies, select app in the Modules column, click the [+] icon in the third column from the left, choose Module Dependency, and click the OK button to exit.

Open the build.gradle file in the root directory of the imported sdk module and note the values of compileSdkVersion, buildToolsVersion, minSdkVersion, and targetSdkVersion. Change them to match the values in the app module's build.gradle, as shown in the figure.

(Figure: matching SDK versions in build.gradle)

Create a new jniLibs folder under app/src/main and copy the contents of sdk/native/libs into jniLibs.

Other related blogs

Intelligent sign language digital real-time translation based on Android+OpenCV+CNN+Keras - deep learning algorithm application (including Python, ipynb engineering source code) + data set (2)

Intelligent sign language digital real-time translation based on Android+OpenCV+CNN+Keras - deep learning algorithm application (including Python, ipynb engineering source code) + data set (3)

Intelligent sign language digital real-time translation based on Android+OpenCV+CNN+Keras - deep learning algorithm application (including Python, ipynb engineering source code) + data set (4)

Intelligent sign language digital real-time translation based on Android+OpenCV+CNN+Keras - deep learning algorithm application (including Python, ipynb engineering source code) + data set (5)

Project source code download

For details, please see my blog resource download page


Download other information

If you want to learn more about artificial intelligence learning routes and knowledge systems, you are welcome to read my other blog "Heavyweight | Complete Artificial Intelligence AI Learning: Basic Knowledge Learning Route". All materials can be downloaded directly from the network disk without any tricks.
That blog draws on well-known open source platforms on GitHub, AI technology platforms, and experts in related fields, including Datawhale, ApacheCN, AI Youdao, and Dr. Huang Haiguang, and collects nearly 100 GB of related materials. I hope it can help all my friends.


Origin blog.csdn.net/qq_31136513/article/details/133064374