Article Directory
- Primer
- Hands-on APP development: camera translation
- 1 Development preparation
- 1.1 Add the Huawei Maven repository in the project-level build.gradle
- 1.2 Add the SDK dependencies in the app-level build.gradle
- 1.3 Apply for camera and storage permissions in the AndroidManifest.xml file
- 2 Key code development steps
- 2.1 Request permissions dynamically
- 2.2 Create a cloud-side text analyzer. The analyzer can be configured through "MLRemoteTextSetting".
- 2.3 Create an "MLFrame" object from an android.graphics.Bitmap; this is the image the analyzer detects.
- 2.4 Call the "asyncAnalyseFrame" method to perform text recognition.
- 2.5 Create a text translator. The translator can be configured through "MLRemoteTranslateSetting".
- 2.6 Call the "asyncTranslate" method to translate the recognized text.
- 2.7 Release resources when translation is complete.
- 3 Source code
- 4 Demo effect
- 5 Closing remarks
- For a more detailed development guide, see the official Huawei Developer website:
- Past links
- Next issue
Primer
Many of us love to travel, and traveling abroad is even better. Before setting off, though, we have to research food, accommodation, transport, and sightseeing routes, and then we set out full of anticipation...
The imagined trip
Before departure, you picture the beautiful sights of your destination:
delicious food:
pretty young ladies:
a leisurely life:
The actual trip
But in reality, if we don't speak the local language, there is a good chance we will run into problems like these:
Baffling maps
Indecipherable menus
Mystifying road signs
Shelves of unidentifiable goods in the mall
It's just too hard!
Camera translation to the rescue
With the text recognition and translation services of Huawei HMS ML Kit, none of this is a problem. Today I'll show you how to use the SDKs provided by HMS ML Kit to develop a camera translation feature. A simple photo-translation app needs only two big steps:
Text recognition
Take a photo, then send the captured image frame to the Huawei HMS ML Kit text recognition service for recognition.
The text recognition service comes in two flavors: an offline (on-device) SDK and a cloud-side service. The on-device version is free and can detect text in real time; the cloud-side version supports more languages with higher accuracy. This walkthrough uses the cloud-side capability.
| Text recognition capability | Specification (HMS 4.0) |
| --- | --- |
| On-device | Supports CJK languages |
| Cloud-side multilingual | 19 languages, including Chinese, English, French, Spanish, and Thai |
| Tilt recognition | Text tilted up to 30 degrees can be recognized |
| Curved text | Text bent up to 45 degrees can still be recognized |
| Text tracking | Supported on device |
The specifications above are for reference only; the official Huawei Developer website prevails.
Text translation
Send the recognized text to the Huawei HMS ML Kit translation service, and you get the translated result you want.
Translation is currently provided as a cloud-side service.
| Text translation capability | Specification (HMS 4.0) |
| --- | --- |
| Multilingual | 7 languages: Chinese, English, French, Spanish, Turkish, Arabic, and Thai |
| Latency | 300 ms / 100 words |
| BLEU score | > 30 |
| Dynamic term configuration | Supported |
The specifications above are for reference only; the official Huawei Developer website prevails.
Hands-on APP development: camera translation
That's enough preamble; let's get straight to the point.
1 Development preparation
Because cloud-side services are used, you need to register a developer account on the Huawei Developer website and enable these services for your app in the cloud. I won't go into detail here; just follow the official AppGallery Connect configuration and service-enabling steps:
For developer registration and service enabling, please see:
1.1 Add the Huawei Maven repository in the project-level build.gradle
Open the project-level build.gradle file in Android Studio.
Add the following Maven repository address:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add the SDK dependencies in the app-level build.gradle
Integrate the SDKs. (Since only cloud-side capabilities are used, only the base SDK packages need to be imported.)
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-translate:1.0.2.300'
}
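Because the cloud-side services are tied to your app's AppGallery Connect configuration, the official setup also has you place the agconnect-services.json file in the app directory and apply the AGC Gradle plugin. A hedged sketch of what that typically adds (the plugin version below is illustrative; check the current version on the official site):

```gradle
// Project-level build.gradle, inside buildscript { dependencies { ... } }
// (version number is illustrative):
classpath 'com.huawei.agconnect:agcp:1.2.1.301'

// App-level build.gradle, applied alongside the Android application plugin:
apply plugin: 'com.huawei.agconnect'
```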
1.3 Apply for camera and storage permissions in the AndroidManifest.xml file
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
2 Key code development steps
2.1 Request permissions dynamically
private static final int CAMERA_PERMISSION_CODE = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Check the camera permission and request it at runtime if it has not been granted yet
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
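The `allPermissionsGranted()` and `getRuntimePermissions()` helpers are referenced above but not shown. A minimal sketch of what they might look like, using the standard `ContextCompat`/`ActivityCompat` APIs (the permission list and method bodies here are my assumption, not taken from the original demo):

```java
// Illustrative helpers for the runtime permission check above (assumed, not from the demo).
private static final String[] REQUIRED_PERMISSIONS = {
        Manifest.permission.CAMERA,
        Manifest.permission.WRITE_EXTERNAL_STORAGE
};

private boolean allPermissionsGranted() {
    // True only if every required permission has already been approved.
    for (String permission : REQUIRED_PERMISSIONS) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    // The user's choice is delivered to onRequestPermissionsResult() with the same request code.
    ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, CAMERA_PERMISSION_CODE);
}
```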
2.2 Create a cloud-side text analyzer. The analyzer can be configured through "MLRemoteTextSetting".
MLRemoteTextSetting setting = new MLRemoteTextSetting.Factory()
        .setTextDensityScene(MLRemoteTextSetting.OCR_LOOSE_SCENE)
        .create();
this.textAnalyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting);
2.3 Create an "MLFrame" object from an android.graphics.Bitmap; this is the image the analyzer detects.
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the "asyncAnalyseFrame" method to perform text recognition.
Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
    @Override
    public void onSuccess(MLText mlText) {
        // Recognition succeeded; hand the result over, or report failure on an empty result.
        if (mlText != null) {
            RemoteTranslateActivity.this.remoteDetectSuccess(mlText);
        } else {
            RemoteTranslateActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failed.
        RemoteTranslateActivity.this.displayFailure();
    }
});
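The success callback hands the result to `remoteDetectSuccess`, which is where the recognized string is pulled out for the translation step below. A sketch, assuming the `MLText` result exposes the full recognized text via `getStringValue()` (and with `startTranslate` as an illustrative helper name, not from the original demo):

```java
private void remoteDetectSuccess(MLText mlText) {
    // Assumption: MLText.getStringValue() returns all recognized text as one string.
    this.sourceText = mlText.getStringValue();
    // Feed the recognized text into the translation step of section 2.6
    // (startTranslate is an illustrative name).
    startTranslate();
}
```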
2.5 Create a text translator. The translator can be configured through "MLRemoteTranslateSetting".
MLRemoteTranslateSetting.Factory factory = new MLRemoteTranslateSetting.Factory()
        // Set the target language code. The ISO 639-1 standard is used.
        .setTargetLangCode(this.dstLanguage);
if (!this.srcLanguage.equals("AUTO")) {
    // Set the source language code. The ISO 639-1 standard is used.
    factory.setSourceLangCode(this.srcLanguage);
}
this.translator = MLTranslatorFactory.getInstance().getRemoteTranslator(factory.create());
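The codes passed to `setSourceLangCode` and `setTargetLangCode` are two-letter ISO 639-1 codes. As a plain-Java illustration, here is a lookup table for the seven languages listed in the translation specification above (the exact supported set should be confirmed on the official website):

```java
import java.util.HashMap;
import java.util.Map;

public class LangCodes {
    // ISO 639-1 codes for the languages in the translation spec table.
    public static final Map<String, String> ISO_639_1 = new HashMap<>();
    static {
        ISO_639_1.put("Chinese", "zh");
        ISO_639_1.put("English", "en");
        ISO_639_1.put("French", "fr");
        ISO_639_1.put("Spanish", "es");
        ISO_639_1.put("Turkish", "tr");
        ISO_639_1.put("Arabic", "ar");
        ISO_639_1.put("Thai", "th");
    }

    public static void main(String[] args) {
        // e.g. pass "en" to setTargetLangCode to translate into English
        System.out.println(ISO_639_1.get("English")); // prints "en"
    }
}
```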
2.6 Call the "asyncTranslate" method to translate the recognized text.
final Task<String> task = translator.asyncTranslate(this.sourceText);
task.addOnSuccessListener(new OnSuccessListener<String>() {
    @Override
    public void onSuccess(String text) {
        if (text != null) {
            RemoteTranslateActivity.this.remoteDisplaySuccess(text);
        } else {
            RemoteTranslateActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        RemoteTranslateActivity.this.displayFailure();
    }
});
2.7 Release resources when translation is complete.
if (this.textAnalyzer != null) {
    try {
        this.textAnalyzer.close();
    } catch (IOException e) {
        SmartLog.e(RemoteTranslateActivity.TAG, "Stop analyzer failed: " + e.getMessage());
    }
}
if (this.translator != null) {
    this.translator.stop();
}
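A common place to run this cleanup is the activity's `onDestroy()` callback, so the analyzer and translator are released even if the user leaves mid-translation. A sketch (the helper name wrapping the close/stop calls above is illustrative, not from the original demo):

```java
@Override
protected void onDestroy() {
    super.onDestroy();
    // releaseTranslateResources() wraps the close()/stop() calls shown above.
    releaseTranslateResources();
}
```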
3 Source code
As usual, I've uploaded the simple demo source code to GitHub (project directory: Photo-Translate). Feel free to use it as a reference and optimize it for your own scenario.
4 Demo effect
5 Closing remarks
This small demo shows how to use two cloud-side capabilities of Huawei HMS ML Kit in your own app: text recognition and translation. These capabilities can also help developers build many other interesting and powerful features, such as:
[General text recognition]
1. Bus license plate recognition
2. Text recognition in document-reading scenarios
[Card and certificate text recognition]
1. Bank card number recognition, for example in card-binding scenarios
2. Besides bank cards, it can also recognize the numbers on all kinds of everyday cards, such as membership cards and discount cards
3. It can also recognize the numbers on ID cards, exit-entry permits, and other documents
[Translation]
1. Road sign translation
2. Document translation
3. Web page translation, for example recognizing the language of a website's review section and translating it into the local language
4. Product description translation for cross-border shopping
5. Restaurant menu translation
For a more detailed development guide, see the official Huawei Developer website:
Huawei Developer Machine Learning Service Development Guide
Past links
Part 1: Develop a smile-snapshot gadget in thirty minutes with the Huawei HMS ML Kit Android SDK
Part 2: Hands-on Android development: make a DIY ID-photo applet with the Huawei HMS ML Kit image segmentation SDK
Next issue
As in the past, Huawei will continue to share a series of hands-on machine learning service tutorials. Stay tuned!