uni-app practical tutorial

1. Preparation

2. Introduction

  • Development tool: HBuilderX
  • Cross-platform framework (uni-app wraps a cross-terminal framework around an API aligned with the WeChat mini-program API, and can target ten-plus mini-program platforms)
  • HTML5+ (provides native iOS and Android capabilities)
  • Cloud development

3. What is NVue

The App side of uni-app has a built-in native rendering engine, improved from weex, which provides native rendering capability.

On the App side, a vue page renders in a webview, while an nvue page (short for native vue) renders natively. Both page types can be used in one app at the same time; for example, the home page can use nvue while second-level pages use vue, as the hello uni-app example does.

Although nvue can also render on multiple terminals and output to H5 and mini programs, its CSS support is limited, so if you are not building an App there is no need to use nvue.

4. Framework selection

a. Rendering efficiency ranking

Native > Flutter > RN > uni-app

b. Mobile ecosystem ranking

Native > RN > uni-app > Flutter

c. Ranking of learning costs

Native > Flutter > RN > uni-app (in practice, since the arrival of hooks, the learning cost of RN is not much higher than that of uni-app)

d. Brief description of the framework

**[Flutter]** Built on the Flutter SDK from Google, it completely overturns the Android development model. Note that both Android and Flutter come from Google, and Android apps are built on the Android SDK, but Flutter is different: it ships its own SDK and uses the GPU rendering mechanism directly, essentially drawing views onto a canvas on the user's phone, which is very impressive.

**[React Native]** Its bridge technology is also very powerful. It bridges between Android and JavaScript: the JS is translated in the middle layer and ultimately converted into native Android controls, a bit like javac compiling source into class files, so native controls are what finally get drawn on the user's phone. Performance is therefore not a big problem, since a genuinely native page is produced, which is why it is popular.
By comparison, Flutter is the most direct, but I should also point out that on today's phones (Android has already reached version 11), the conversion losses in the middle layer are basically imperceptible to users.

**[uni-app]** This framework adopts Vue, and uses cloud packaging to generate the apk for release.
I am not very optimistic about it as a native-app framework. Turn on layout-boundary debugging and you can see the UI is presented to the user entirely as a web page; it barely touches Android directly, and where it does, the official team has already wrapped the calls into APIs for you. Even though it is
built entirely as a web page, performance is not too bad, because a lot of web-page optimization has been done officially; for now, though, it falls short of Flutter and React Native.

5. Practical goals

  1. Take a photo or select a picture from the album, and display it on the page
  2. Convert the image to base64 encoding
  3. Call the Baidu AI interface to identify the main content of the picture
  4. Show the recognition results
  5. Use the recognition result to query which garbage category it belongs to, and display the result
  6. Package and publish as ipa/apk

6. Development

1. Create a project

New project steps: File > New > Project

After the creation is complete, the directory is as follows:


pages.json is equivalent to app.json in a WeChat mini program; it holds basic configuration such as routing.
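For illustration, a minimal pages.json might look like this (a sketch, not the generated file verbatim; the title text is an assumption):

```json
{
	"pages": [
		{
			"path": "pages/index/index",
			"style": { "navigationBarTitleText": "垃圾分类" }
		}
	],
	"globalStyle": {
		"navigationBarBackgroundColor": "#F8F8F8"
	}
}
```

The first entry in `pages` is the app's start page.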

2. Debugging

Ordinary interface debugging can be done through the preview in HBuilderX. However, WeChat APIs can only be called normally inside the WeChat mini-program host, so we need to run the code in the WeChat mini-program simulator.


The platform compiles and packages the code into WeChat mini-program code and starts hot-update mode. An `unpackage` directory is created to hold the compiled code.


Then we use the WeChat mini-program developer tools to import the `mp-weixin` project.

3. Upload the garbage picture

Open pages/index/index.vue, the page interaction design is like this:

  1. Take a picture or choose a picture from the album

  2. Submit pictures to Baidu AI to identify object information

  3. According to the photo object information returned by Baidu AI, determine which type of garbage the object belongs to

Let's implement step 1 first and modify the `template` code as follows:

<template>
	<view class="content">
		<button type="primary" @click="btnTaskPhoto">识别通用物体</button>
		<!-- use the <image> component; a bare <img> does not render in mini programs -->
		<image :src="imageSrc"></image>
	</view>
</template>

<script>
	export default {
		data() {
			return {
				imagePath: "",
				imageSrc: ""
			}
		},
		onLoad() {

		},
		methods: {
			btnTaskPhoto() {
				uni.chooseImage({
					count: 1,
					success: (res) => {
						console.log(res);
						this.imagePath = res.tempFilePaths[0];
						this.images2base64();
					}
				});
			},
			images2base64() {
				wx.getFileSystemManager().readFile({
					filePath: this.imagePath,
					encoding: "base64",
					success: (res) => {
						this.imageSrc = 'data:image/png;base64,' + res.data
						console.log(res);
					}
				})
			}
		}
	}
</script>

<style>
	.content {
		display: flex;
		flex-direction: column;
		align-items: center;
		justify-content: center;
	}
</style>

As shown in the figure:


Open the mini-program developer tools and debug as follows:

After the upload is successful, the picture is displayed on the page:

4. Recognize objects in pictures

  • First, enter Baidu Smart Cloud and register an account, select Product > Artificial Intelligence Application > Image Recognition.

  • Then enter the application list and create an application. After creation, you can get the application's API Key and Secret Key.

  • Then get the access token through the authentication interface.

https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=【your application's API Key】&client_secret=【your application's Secret Key】
  • Call the general object and scene recognition API. Modify the `images2base64` method:

    images2base64() {
    	wx.getFileSystemManager().readFile({
    		filePath: this.imagePath,
    		encoding: "base64",
    		success: async (result) => {
    			this.imageSrc = 'data:image/png;base64,' + result.data;
    			// Get the access_token
    			const params = {
    				'client_id': 'llwgkZX3f5hYcP2xiaK7ewdf', // your own API Key
    				'client_secret': 'GaQgeSPhjX8Sv21uAxG0G3NpGGVELvYn' // your own Secret Key
    			};
    			const res1 = await uni.request({
    				url: `https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=${params.client_id}&client_secret=${params.client_secret}`
    			})
    			const {
    				access_token
    			} = res1.data;
    			// Submit the image to Baidu AI for recognition
    			const res2 = await uni.request({
    				method: "POST",
    				url: `https://aip.baidubce.com/rest/2.0/image-classify/v2/advanced_general`,
    				header: {
    					"Content-Type": "application/x-www-form-urlencoded"
    				},
    				data: {
    					access_token: access_token,
    					image: encodeURI(result.data)
    				}
    			})
    			console.log(res2);
    		}
    	})
    }
    

    However, it prompts an error: `Open api qps request limit reached`


When trying to recognize again, there are already recognized results:

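The QPS error comes from requesting a fresh access token before every recognition call. A minimal sketch of a workaround (the `getAccessToken` name and the injected `requestFn` are my own, not from the original post): cache the token, whose lifetime Baidu reports in `expires_in` (seconds, roughly 30 days).

```javascript
// Cache the Baidu access_token instead of requesting a new one each time.
// `requestFn` is injected so the helper works with uni.request or any client.
let cachedToken = null;
let tokenExpireAt = 0;

async function getAccessToken(requestFn, clientId, clientSecret) {
	const now = Date.now();
	if (cachedToken && now < tokenExpireAt) {
		return cachedToken; // reuse until shortly before expiry
	}
	const res = await requestFn(
		'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials' +
		`&client_id=${clientId}&client_secret=${clientSecret}`
	);
	cachedToken = res.data.access_token;
	tokenExpireAt = now + (res.data.expires_in - 60) * 1000; // refresh 1 min early
	return cachedToken;
}
```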

5. Display the recognition result

// Call site: res2 is Baidu's recognition result
this.parseResult(res2.data.result);

// Add to methods
parseResult(result) {
	let itemList = result.map(item => item.keyword + item.score);
	uni.showActionSheet({
		itemList: itemList
	})
}
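The plain `keyword + score` concatenation above produces labels like "dog0.923". A small optional helper (my own naming, not from the post) makes them readable:

```javascript
// Turn Baidu results into readable action-sheet labels, e.g. "dog (92.3%)".
function formatRecItems(result) {
	return result
		.filter(item => item && item.keyword) // drop entries without a keyword
		.map(item => `${item.keyword} (${(item.score * 100).toFixed(1)}%)`);
}
```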

The page then shows the recognition results in an action sheet.

7. uni-cloud

1. Create an Alibaba Cloud development environment


Enter the uni-cloud back-end service space list management interface and click [New Service Space]. In the pop-up creation dialog you can choose Alibaba Cloud or Tencent Cloud, and each Alibaba Cloud account can create one free service space. As shown in the figure:
Here we create an Alibaba Cloud space.
Then go back to HBuilderX, right-click the `uniCloud` directory, and click [Associate Cloud Service Space or Project].

After the association succeeds, the name of the associated cloud service space appears to the right of the `uniCloud` directory.

2. Create a new cloud function

Right-click the `uniCloud` directory and select [New Cloud Function/Cloud Object].


An `ImageClassify` directory now appears under the `uniCloud` directory, containing two files, `index.js` and `package.json`, a file structure similar to a WeChat mini program's.
The code of `/ImageClassify/index.js` is as follows:

'use strict';
exports.main = async (event, context) => {
	const params = {
		'client_id': 'llwgkZX3f5hYcP2xiaK7ewdf',
		'client_secret': 'GaQgeSPhjX8Sv21uAxG0G3NpGGVELvYn'
	};
	// Get the access_token
	const res = await uniCloud.httpclient.request(
		`https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=${params.client_id}&client_secret=${params.client_secret}`,
		{
			method:"GET",
			dataType: "json",
		}
	)
	const access_token = res.data.access_token;

	// Submit the image to Baidu AI for recognition
	let res2 = await uniCloud.httpclient.request(
		"https://aip.baidubce.com/rest/2.0/image-classify/v2/advanced_general", {
			method: "POST",
			header: {
				"Content-Type": "application/x-www-form-urlencoded"
			},
			dataType: "json",
			data: {
				access_token: access_token,
				image: event.image
			}

		});
	// Return the data to the client
	return res2.data;
};
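Before spending a Baidu API call, the cloud function could also validate its input. A defensive sketch (the `validateEvent` helper and its error shape are my assumptions, not part of the original code):

```javascript
// Reject bad payloads before calling the paid Baidu API.
function validateEvent(event) {
	if (!event || typeof event.image !== 'string' || event.image.length === 0) {
		return { errCode: 1, errMsg: 'missing image (base64 string) in event' };
	}
	return null; // null means the payload is usable
}
```

A check like this could run at the top of `exports.main`, returning the error object directly to the client.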

Upload deployment:
After the upload is complete, you can see the cloud function in the cloud service space:


If you want to debug, just enable breakpoint debugging in the upper-right corner of the console; the next request will then hit the breakpoints in the cloud function.

3. Call the cloud function

Modify the `images2base64` method in `pages/index/index.vue`:

images2base64() {
	let that = this;
	wx.getFileSystemManager().readFile({
		filePath: this.imagePath,
		encoding: "base64",
		success: async (result) => {
			this.imageSrc = 'data:image/png;base64,' + result.data;
			uniCloud.callFunction({
				name: "ImageClassify",
				data: {
					image: encodeURI(result.data)
				},
				success: (res) => {
					console.log(res);
					that.parseResult(res.result.result);
				}
			})

		}
	})
},
parseResult(result) {
	let itemList = result.map(item => item.keyword + item.score);
	uni.showActionSheet({
		itemList: itemList
	})
}
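A hedged sketch on top of the call above (the helper name is mine): wrapping a callback-style cloud call in a Promise keeps success and failure handling in one place. `callFunction` is injected so the helper can be exercised outside the uni-app runtime.

```javascript
// Promise wrapper around a success/fail callback API like uniCloud.callFunction.
function callCloud(callFunction, name, data) {
	return new Promise((resolve, reject) => {
		callFunction({
			name,
			data,
			success: (res) => resolve(res.result),
			fail: (err) => reject(err)
		});
	});
}
```

Usage would then be `const result = await callCloud(uniCloud.callFunction, "ImageClassify", { image });`.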

But when running, it prompts: 应用未关联服务空间,请在uniCloud目录右键关联服务空间 (the application is not associated with a service space; right-click the uniCloud directory to associate one).


We need to rerun it from HBuilderX.

4. Debugging

a. Connect to the local cloud function


Recognition works normally again in the mini-program simulator.


Check the network panel and you can see that the local IP is being accessed:

b. Connect to the cloud function in the cloud


As you can see, this time the cloud endpoint is accessed:


8. Conditional compilation

Conditional compilation is marked with special comments; at compile time, the code inside them is compiled only into the matching platforms. Almost everything can be conditionally compiled; see the official documentation.

// #ifdef APP-PLUS
	console.log("I am in the native App environment");
// #endif

// #ifdef MP-WEIXIN
	console.log("I am in the WeChat mini-program environment");
// #endif
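Two more directive forms from the same mechanism (syntax per the uni-app documentation): `#ifndef` excludes a platform, and `||` targets several platforms at once.

```javascript
// #ifndef H5
console.log("compiled everywhere except H5");
// #endif

// #ifdef APP-PLUS || MP-WEIXIN
console.log("compiled into the App and the WeChat mini program");
// #endif
```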

9. Adapt to the iOS environment

1. Conditional compilation

When converting the selected photo to base64 above, we used `wx.getFileSystemManager`, which does not work properly on iOS. Here we can use conditional compilation to run different APIs in different environments.

We still modify pages/index/index.vue:

images2base64() {
	let that = this;

	// #ifdef APP-PLUS
	// HTML5+ io module docs: https://www.html5plus.org/doc/zh_cn/io.html
	plus.io.resolveLocalFileSystemURL(this.imagePath, entry => {
		entry.file(file => {
			let reader = new plus.io.FileReader();
			reader.onloadend = (e) => {
				// e.target.result is the base64-encoded file, and it carries a
				// data-URL header (e.g. "data:image/jpeg;base64,") that must be stripped.
				console.log(e.target.result);
				const base64 = e.target.result.split(',')[1]; // strip the header safely
				uniCloud.callFunction({
					name: "ImageClassify",
					data: {
						image: base64
					},
					success: (res) => {
						console.log(res);
						that.parseResult(res.result.result);
					}
				})
			}
			reader.readAsDataURL(file);
		});
	})
	// #endif
	
	// #ifdef MP-WEIXIN
	wx.getFileSystemManager().readFile({
		filePath: this.imagePath,
		encoding: "base64",
		success: async (result) => {
			this.imageSrc = 'data:image/png;base64,' + result.data;
			uniCloud.callFunction({
				name: "ImageClassify",
				data: {
					image: encodeURI(result.data)
				},
				success: (res) => {
					console.log(res);
					that.parseResult(res.result.result);
				}
			})

		}
	})
	// #endif
},

2. Debugging

Open: Run > Run to Phone or Simulator > Run to iOS Simulator. The first time you open it, you need to install the simulator plug-in.

With React Native and Flutter, you must use a Mac to develop iOS applications.

With uni-app a Mac is not required, because cloud packaging exists; however, we then cannot debug on device. If debugging is needed, a Mac is still required.

10. Garbage classification

1. Use the plug-in market cloud function

Open the plug-in market and search for [garbage classification]

Import the plug-in. (Internally, this plug-in uses a third-party HTTP API to look up which category of garbage an object belongs to. That API has been deprecated, so the plug-in no longer actually works; it is used here only for the step-by-step demonstration.)

The page automatically opens HBuilderX, which shows a prompt:

Click OK to download the plug-in into the local `uniCloud` cloud-function directory.

We can then upload and deploy it manually, and then adjust the business logic that runs after the garbage is identified:

parseResult(result) {
	if (!result || !result.length) {
		return null;
	}
	this.selectRecResult(result[0].keyword);
},
selectRecResult(name) {
	uniCloud.callFunction({
		name: "TrashClassify",
		data: {
			keyword: name
		},
		success: (res) => {
			console.log(res);
			// The classification info arrives here; follow-up business logic can go next
		}
	})
}
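Since the plug-in's remote lookup is dead (as noted above), a hypothetical local fallback table can stand in during the demo. The entries and category names below are my assumptions for illustration only; Baidu returns Chinese keywords, hence the Chinese keys.

```javascript
// Hypothetical local fallback for the deprecated TrashClassify lookup.
const LOCAL_TRASH_TABLE = {
	'苹果': 'kitchen waste',
	'电池': 'hazardous waste',
	'塑料瓶': 'recyclable waste'
};

function classifyLocally(keyword) {
	return LOCAL_TRASH_TABLE[keyword] || 'unknown';
}
```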

11. Packaging

For native Apps, HBuilderX provides cloud packaging. If we do not have a Mac and want to release for iOS, we can use it directly.



Origin blog.csdn.net/bobo789456123/article/details/129073157