Infant Intelligent Monitoring Robot Based on Python+Raspberry Pi

Outline Design Specification

1 Introduction

1.1 Purpose of writing

In the previous stage of the infant intelligent monitoring robot project, namely the requirements analysis stage, the users' requirements for the system were elaborated in detail, and those requirements were described and clarified in the requirements specification.

At this stage, on the basis of the system requirements analysis, a general (outline) design has been produced for the infant intelligent monitoring robot system. It mainly addresses the design of the program modules that realize the system requirements, including how to divide the system into modules, determining the interfaces between modules and the information transmitted between them, and designing the data structures and the module structure. All of the general design work of this stage is described in detail in the following report.

In the next stage, detailed design, programmers can refer to this outline design report and carry out the detailed design of the system on the basis of the module structure designed here. This document can also be consulted during the subsequent software testing and maintenance stages, in order to understand the design of each module completed during the outline design, or to identify shortcomings of this stage's design when making modifications.

1.2 Development purpose

In the context of the full implementation of the national two-child policy, the number of newborn babies in China is rising year by year. At the same time, with the accelerating pace of life, how to care for babies scientifically and healthily has become a focus of many people's attention. However, the variety of service robots in China's children's smart-hardware market is relatively limited and cannot meet current market demand. A more complete and easy-to-operate robot is therefore needed to help people share the burden of looking after infants.

The intelligent care robot of this project uses computer vision and natural language processing technology to overcome the shortcomings of human care in different modes. Functions such as facial image recognition and artificial-intelligence question answering keep pace with the development of the times, improve people's quality of life, and provide different kinds of convenience for different families.

Intended readers: developers and maintainers of this system.

1.3 Background

Infant care products have been used in European and American countries for more than 30 years and are widely used in family care, supported by advanced research technology and strong innovation capability. The world's first intelligent robot was invented by Joseph Engelberger, the "father of robotics", in the United States. The original intelligent nursing robots could assist disabled people to live normally and provided professional escort, nursing services, barrier-free travel and barrier-free home functions, such as medication reminders and blood-pressure monitoring. They are suitable not only for elderly-care institutions but also for the elderly living alone without their children.

In contrast, the overall level of China's industry is still at the initial stage of imitation; investment in research and development, the focus of market positioning, and the stimulation of innovation awareness all still need to improve. Intelligent nursing in China is now used mostly in the medical industry, the service objects are mostly the sick and the elderly, and relatively little care is devoted to babies.

Therefore, this project addresses the lack of domestic infant intelligent monitoring robots and innovates on existing projects to realize scientific baby care as far as possible.

National conditions of the two-child policy: according to surveys, the number of newborn babies in China reached 10.16 million in 2019, but research on smart home service robots in China is still relatively weak. The development of home-service intelligent robots will therefore be a mainstream direction of intelligent hardware development.

Description:

  • Name of the software system to be developed: intelligent monitoring robot for infants and young children

  • Task proposer for this project: Weather Portfolio Team

  • Developer: Weather Portfolio Team

  • User: young parents

  • Computing station (centre) running the software:

1.4 Description

The infant intelligent monitoring robot of this project adopts computer vision and natural language processing technology, calls basic interfaces to develop new functions, and overcomes the shortcomings of human care in different situations.

For example, the facial image recognition function in the static nursing mode can analyze the emotions and needs of infants according to their facial expressions; the following function in the dynamic nursing mode realizes basic obstacle avoidance; and the artificial-intelligence question-and-answer function provides a simple and convenient way to operate the robot. Indoor environment monitoring covers humidity, temperature and pressure, and the readings are refreshed automatically at a fixed interval. The monitoring results are fed back through the Alibaba Cloud IoT platform to the web interface, which acts as the subscriber. The research and development of this project keeps pace with the development of the times, can effectively improve people's quality of life, and provides different levels of convenient services for different families.

1.5 Related Technologies

● Raspberry Pi: a Linux-based single-board computer developed by the Raspberry Pi Foundation in the UK to promote basic computer science education in schools with low-cost hardware and free software. It is an ARM-based microcomputer motherboard that uses an SD/MicroSD card as its storage. Around the board there are 1/2/4 USB ports and a 10/100 Ethernet port (the Type A model has no network port), to which a keyboard, mouse and network cable can be connected; it also has a TV output interface for analog video and an HDMI high-definition video output interface. Hardware information: CPU: Broadcom BCM2837, quad-core 64-bit ARM Cortex-A53 (ARMv8) at 1.2 GHz; GPU: Broadcom VideoCore IV, OpenGL ES 2.0, 1080p30 H.264/MPEG-4 AVC HD decoder; RAM: 1 GB LPDDR2.

● Natural language processing (NLP): the discipline that studies linguistic problems in human-to-human and human-computer communication. Natural language processing needs to develop models that represent linguistic competence and linguistic performance, establish computing frameworks to implement such language models, propose methods to continuously improve them, design practical systems based on them, and study evaluation techniques for those practical systems.

● Facial expression recognition (FER): an emerging research topic in the field of artificial intelligence. The research goal is to enable artificial-intelligence products such as robots to automatically recognize human expressions and thereby analyze human emotions. Automatic facial expression recognition by machines can further enhance the friendliness and intelligence of human-computer interaction; it has very important research and application value and is an important part of computer vision research.

● Touch sensor: a detection device that can sense the measured information and transform it, according to certain rules, into an electrical signal or another required form of output, so as to meet the requirements of information transmission, processing, storage, display, recording and control.

● I2C bus: a simple, bidirectional, two-wire synchronous serial bus developed by Philips. Only two wires are required to transfer information between the devices connected to the bus.

● I2C interface: the interface of the I2C bus is an open-collector (OC) or open-drain (OD) output, mainly to prevent signal contention on the I2C bus. When the output of the I2C bus has no pull-up resistor, it can only output a low level; to ensure normal operation of the I2C bus, pull-up resistors R1 and R2 must be added.

● IoT application development (IoT Studio): productivity tools provided by Alibaba Cloud for IoT scenarios, covering the core application scenarios of various IoT industries and helping developers complete device, service and application development efficiently and economically. The IoT development service provides a series of convenient tools such as mobile visual development, web visual development, service development and device development, solving the problems of long development chains, complex technology stacks, high collaboration costs and difficult solution migration in the IoT field, and redefining IoT application development.

2. Overall Design

2.1 Requirements Specification

NLP natural language processing module (speech recognition / speech synthesis):

Input:

  • Speech recognition: the microphone, or a local audio file, serves as the input of the program segment.
  • Speech synthesis: a local text file serves as the input of the program segment.

Processing:

  • Speech recognition: judge whether the microphone is turned on, whether the audio file exists, and whether the device is connected to the Internet.
  • Speech synthesis: judge whether the microphone is turned on, whether the text file exists, and whether the device is connected to the Internet.

Output:

  • Speech recognition: the text generated from the input speech.
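The speech-recognition path can be illustrated with a short, hedged sketch. It uses the open-source SpeechRecognition package and Google's free web recognizer as stand-ins for whichever speech service the robot actually calls; the microphone/file choice and the connectivity check mirror the processing steps listed above.

```python
# Minimal sketch of the speech-recognition input path. The SpeechRecognition
# package and Google's free web recognizer are assumed stand-ins here; the
# robot may call a different cloud speech service.
import speech_recognition as sr

def recognize(audio_path=None, language="zh-CN"):
    recognizer = sr.Recognizer()
    try:
        if audio_path:                      # local audio file as input
            with sr.AudioFile(audio_path) as source:
                audio = recognizer.record(source)
        else:                               # otherwise call the microphone
            with sr.Microphone() as source:
                recognizer.adjust_for_ambient_noise(source)
                audio = recognizer.listen(source, timeout=5)
    except (OSError, sr.WaitTimeoutError) as err:   # mic missing, file absent, or silence
        print("audio input unavailable:", err)
        return None
    try:
        # Needs an Internet connection; raises RequestError when offline.
        return recognizer.recognize_google(audio, language=language)
    except sr.RequestError:
        print("not connected to the Internet or service unreachable")
    except sr.UnknownValueError:
        print("speech could not be understood")
    return None

if __name__ == "__main__":
    print(recognize())                      # text generated from the input speech
```

Speech synthesis follows the same pattern in reverse: a local text file is read and handed to whichever text-to-speech interface the robot uses.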

Image recognition module:

Input:

  • The camera, or a local video file, serves as the input of the program segment.

Processing:

  • Judge whether the camera is turned on, whether each captured picture is saved normally, and whether the device is connected to the Internet; otherwise a prompt message is output.
  • Determine whether there is a human face in the frame; if not, discard this group of data.

Output:

  • No output if nothing is abnormal.
  • When the baby's state is abnormal, output a message according to the situation, such as "the child may have urinated" or "the child may be hungry". (A capture-and-check sketch follows this list.)
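A minimal sketch of the capture-and-check step is given below. It assumes OpenCV for frame capture and OpenCV's bundled Haar cascade as a placeholder face check; the emotion analysis itself goes through the Face++ interface described in section 4.2, so frames without a face are simply discarded here.

```python
# Sketch of the image-recognition input path: open the camera (or a local
# video file), grab a frame roughly every 50 ms, and discard frames that
# contain no face. The Haar cascade is only a placeholder face check; the
# emotion analysis itself goes through the Face++ interface (section 4.2).
import time
import cv2

def capture_frames(source=0, interval_ms=50):
    cap = cv2.VideoCapture(source)            # 0 = default camera, or a file path
    if not cap.isOpened():
        print("camera/video could not be opened")
        return
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        ok, frame = cap.read()
        if not ok:                            # end of video or camera error
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            time.sleep(interval_ms / 1000.0)
            continue                          # no face: discard this group of data
        cv2.imwrite("frame.jpg", frame)       # saved picture for later analysis
        yield frame
        time.sleep(interval_ms / 1000.0)      # grab the next frame after ~50 ms
    cap.release()

if __name__ == "__main__":
    for _ in capture_frames():
        break                                 # demo: stop after the first usable frame
```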

Environmental information real-time monitoring module:

Input:

Temperature, humidity and pressure of the environment, as detected by the sensors.

Processing:

Define the device address and registers for I2C communication and define the initial voltage value; install and import the paho-mqtt library; register the device on the Alibaba Cloud platform and obtain the device triple (certificate); download the development SDK, define the attributes of the object model, and bring the device online. (A publishing sketch follows this module description.)

Output:

Data is published every 10 s; it includes temperature, humidity, pressure and the magnetic field strength in three dimensions. Real-time video can also be transmitted through the RTMP protocol.
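A hedged sketch of the 10-second publishing loop is shown below, using the paho-mqtt library named above. The broker address, topic and credential strings are placeholders; on Alibaba Cloud they are derived from the device triple (ProductKey, DeviceName, DeviceSecret) according to the platform's signing rules, so treat those strings as illustrative only.

```python
# Sketch of the 10-second publishing loop with paho-mqtt (1.x constructor
# shown; with paho-mqtt 2.x pass mqtt.CallbackAPIVersion.VERSION1 first).
# BROKER, TOPIC and the credential strings are placeholders; the real values
# are derived from the Alibaba Cloud device triple (ProductKey, DeviceName,
# DeviceSecret) according to the platform's signing rules.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "your-productkey.iot-as-mqtt.cn-shanghai.aliyuncs.com"        # placeholder
PORT = 1883
TOPIC = "/sys/your-productkey/your-device/thing/event/property/post"  # placeholder

def read_sensors():
    # Placeholder for the real Sense HAT (B) readings obtained over I2C.
    return {"temperature": 24.6, "humidity": 51.2, "pressure": 1012.3,
            "mag_x": 0.1, "mag_y": 0.0, "mag_z": 0.4}

client = mqtt.Client(client_id="baby-monitor-demo")
client.username_pw_set("device-username", "device-password")          # placeholder
client.connect(BROKER, PORT, keepalive=60)
client.loop_start()                           # handle network traffic in the background

while True:
    payload = {"params": read_sensors()}      # object-model property post
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(10)                            # publish data every 10 s
```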

Simple obstacle avoidance module:

Input:

Readings from the Sense HAT (B) sensor expansion board and the touch sensor module.

Processing:

Define the device addresses and registers for I2C communication and define the initial voltage value, covering the ADS1015, the wiring of the four-way infrared obstacle avoidance module, and the maximum detection distance. (A GPIO-based sketch follows this module description.)

Output:

The motion action, plus the indicator lights on the four-way infrared obstacle avoidance module, which light up and go out according to whether an obstacle is present.
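The sketch below illustrates one possible shape of the obstacle-avoidance loop, assuming the four-way infrared module is wired to ordinary GPIO inputs. The BCM pin numbers and the motion helpers are hypothetical; the real board may instead read the sensors through the ADS1015 ADC over I2C.

```python
# Hedged sketch of the simple obstacle-avoidance loop using RPi.GPIO.
# The BCM pin numbers and the motion helpers are hypothetical placeholders;
# the real robot may instead read the IR sensors through the ADS1015 over I2C.
import time
import RPi.GPIO as GPIO

IR_PINS = {"front": 5, "left": 13, "right": 19}    # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
for pin in IR_PINS.values():
    GPIO.setup(pin, GPIO.IN)

def blocked(direction):
    # Typical IR obstacle modules pull their output LOW when an obstacle is seen.
    return GPIO.input(IR_PINS[direction]) == GPIO.LOW

# Stand-ins for the real motor-driver calls.
def move_forward(): print("move forward")
def turn_left():    print("turn left")
def turn_right():   print("turn right")
def stop():         print("stop")

def step():
    if not blocked("front"):
        move_forward()
    elif not blocked("left"):
        turn_left()
    elif not blocked("right"):
        turn_right()
    else:
        stop()

try:
    while True:
        step()
        time.sleep(3)          # 3 s action interval (see section 4.3)
finally:
    GPIO.cleanup()
```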

2.2 Operating environment

Hardware Environment

  • A high-performance PC with an RTX 2080 Ti graphics card
  • A server
  • The robot

Software Environment

  • Operating system: Raspbian (Debian-based Linux system)
  • Integrated development environment: Visual Studio 2016, Xcode 4.2
  • Development language: Python3
  • Browser: IE, Chrome

2.3 Basic design concept and processing flow

With the introduction of the national two-child policy and the accelerating pace of life, how to care for babies scientifically and without worry is a focus of many people's attention. The infant intelligent monitoring robot uses computer vision and natural language processing technology to compensate for young parents' lack of experience in caring for their children. The robot is started through question and answer, and the static nursing or dynamic nursing mode can be selected. In the static nursing mode, if the baby is hungry or unwell, the robot recognizes the cause of the baby's discomfort through image recognition and gives timely voice feedback to the parents. In the dynamic mode, the robot can dynamically follow the baby and avoid obstacles. The system also monitors the indoor temperature, humidity and pressure to provide the most comfortable environment for the baby. The specific workflow is shown in Figure 1.

​ Figure 1 Working mode diagram of the intelligent monitoring robot for infants and young children
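A minimal sketch of how the top-level control flow in Figure 1 might be wired together is shown below; the functions it dispatches to (ask, listen, static_care, dynamic_care) are hypothetical names standing in for the speech and nursing modules described in the following sections.

```python
# Hedged sketch of the top-level mode-selection flow in Figure 1.
# ask(), listen(), static_care() and dynamic_care() are hypothetical
# stand-ins for the speech and nursing modules described in later sections.
def ask(prompt):
    print("robot:", prompt)             # speech synthesis on the real robot

def listen():
    return input("parent: ").strip()    # speech recognition on the real robot

def static_care():
    print("image recognition + environment monitoring ...")

def dynamic_care():
    print("following + obstacle avoidance ...")

def main():
    ask("Which mode should I use, static or dynamic nursing?")
    while True:
        answer = listen().lower()
        if "static" in answer:
            static_care()
        elif "dynamic" in answer:
            dynamic_care()
        elif answer in {"quit", "exit"}:
            break
        else:
            ask("Please answer 'static' or 'dynamic'.")

if __name__ == "__main__":
    main()
```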

2.4 Structure

The system is divided into a static mode and a dynamic mode. The static mode comprises the voice recognition module, the image recognition module and the environmental monitoring module. Under the voice module, the robot can hold voice question-and-answer sessions with users; under the image recognition module, the cause of the baby's discomfort can be judged from the baby's expression; under the environmental monitoring module, the indoor environment can be monitored to provide the most comfortable environment for the baby. The dynamic mode is the obstacle avoidance module, which can dynamically track the baby's whereabouts and reduce the burden on parents while the baby is learning to walk. The specific structure is shown in Figure 2.

​ Figure 2 Structural diagram of the infant intelligent monitoring robot

2.5 Relationship between functions and programs

The relationship between the functions and programs of each module is shown in the table below:

2.6 Open questions

Still to be determined: the specific algorithm implemented by each module, the name of each function, and the specific variable names.

2.7 Robot Functional Process Analysis

System use case diagram

The system use case diagram shows more clearly how the functions of the system's business scenarios are realized, and it is also a bridge for communicating with customers: a single diagram is often more intuitive than many words, and it lets programmers face the goals of the project more directly. The user and developer use case diagrams are shown in Tables 1 and 2 below. User use case diagram: facial expression recognition, voice question answering, environmental monitoring, etc. Developer use case diagram: developers need third-party system interfaces, such as the interface provided by Face++, the speech synthesis interface and the I2C interface, and use these interfaces to develop new functions.

Table 1 System Use Case Diagram—User

Table 2 System Use Case Diagram—Developer

2.8 Data Flow Diagram

From the perspective of data transfer and processing, the diagram below describes the logical functions of the robot system, the logical flow of data through the system, and the logical transformations applied to it.

Table 3 System data flow diagram

3. Interface design

3.1 User interface

After the user starts the machine, the Alibaba Cloud IoT permission (credential) file is opened, and the system automatically starts the process that transmits environment information in real time. The NLP natural language processing (speech recognition / speech synthesis) module is opened for human interaction. The static-nursing-mode image recognition module is opened to monitor prepared video material or the real scene, draw the corresponding conclusions, and return them to the system for display.

3.2 External interface

Sense HAT (B) sensor expansion board (onboard gyroscope, accelerometer, magnetometer, barometer, temperature and humidity sensor, etc.; I2C interface communication), together with the touch sensor module. The board carries:

  • An ADS1015 chip, a 4-channel 12-bit ADC, which extends the analog-input capability so that additional sensor modules such as the touch sensor module can be connected.
  • An ICM-20948 (3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer), which can detect motion attitude, orientation and the ambient magnetic field.
  • An SHTC3 digital temperature and humidity sensor, which senses the temperature and humidity of the environment.
  • An LPS22HB atmospheric pressure sensor, which senses the atmospheric pressure of the environment.
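All of these sensors are read over I2C; the sketch below shows the general register-read pattern with the smbus2 package. The device address and register offset are assumptions for illustration, not values taken from the board's documentation, and the raw bytes still need the conversion formulas from each sensor's datasheet.

```python
# Generic I2C register-read sketch with smbus2. The device address and the
# register offset below are illustrative assumptions, not values taken from
# the Sense HAT (B) documentation; the raw bytes still need the conversion
# formulas from each sensor's datasheet.
from smbus2 import SMBus

I2C_BUS = 1            # /dev/i2c-1 on a Raspberry Pi
DEVICE_ADDR = 0x5C     # placeholder 7-bit device address
WHO_AM_I_REG = 0x0F    # placeholder register offset

def read_register_block(addr, reg, length=1):
    """Read `length` bytes starting at register `reg` of the device at `addr`."""
    with SMBus(I2C_BUS) as bus:
        return bus.read_i2c_block_data(addr, reg, length)

if __name__ == "__main__":
    raw = read_register_block(DEVICE_ADDR, WHO_AM_I_REG, 1)
    print("raw register bytes:", raw)
```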

4. Operation design

4.1 Combination of operating modules

The infant intelligent monitoring robot has two operating modes: a dynamic mode and a static mode. The static mode comprises the voice recognition module, the image recognition module and the environmental monitoring module. Under the voice module, the robot can hold voice question-and-answer sessions with the user; under the image recognition module, the cause of the baby's discomfort can be judged from the baby's expression; under the environmental monitoring module, the indoor environment can be monitored to provide the most comfortable environment for the baby. The dynamic mode is the obstacle avoidance module, which can dynamically track the baby's whereabouts and reduce the burden on parents while the baby is learning to walk.

4.2 Operation Control

In static mode, the robot analyzes the cause of the baby's discomfort. It first obtains the video stream from a local video or the camera, then grabs a frame every 50 ms, saves it as a picture and projects it to the screen for subsequent processing. The interface provided by Face++ is called to obtain face data. The returned JSON data is analyzed to extract the useful part, namely the composition of emotions. By analyzing the composition of the expression, the cause of the baby's abnormal emotion is inferred.
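The Face++ call and the emotion analysis described above might look like the sketch below. The endpoint and the `emotion` attribute follow Face++'s public Detect API, but the key/secret are placeholders and the discomfort rule is an illustrative guess, not the project's actual threshold.

```python
# Hedged sketch of the Face++ emotion step: send a saved frame to the Detect
# API and infer a likely cause of discomfort from the returned emotion
# composition. API_KEY / API_SECRET and the rule threshold are placeholders,
# not the project's real credentials or logic.
import requests

DETECT_URL = "https://api-cn.faceplusplus.com/facepp/v3/detect"
API_KEY = "your-api-key"          # placeholder
API_SECRET = "your-api-secret"    # placeholder

def analyze_frame(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(
            DETECT_URL,
            data={"api_key": API_KEY, "api_secret": API_SECRET,
                  "return_attributes": "emotion"},
            files={"image_file": f},
            timeout=10,
        )
    faces = resp.json().get("faces", [])
    if not faces:
        return None                               # no face in the frame
    emotion = faces[0]["attributes"]["emotion"]   # sadness, anger, happiness, ...
    # Illustrative rule only: strong negative emotion -> alert the parents.
    if emotion.get("sadness", 0) + emotion.get("anger", 0) > 60:
        return "the child may be hungry or may have urinated"
    return None

if __name__ == "__main__":
    print(analyze_frame("frame.jpg"))
```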

4.3 Running time

The maximum monitoring interval for real-time IoT environment transmission is 10 s, and the average collection-and-upload time is between 5 s and 10 s. No erroneous values appeared in the data transmission test, and the update processing time is set to 10 s.

The obstacle avoidance action interval is 3 s: the time from reading the infrared obstacle avoidance module to computing the path and issuing the action is 3 s, the response time after the touch sensor is touched is also 3 s, and the interval between dance actions after the touch is completed is 3 s.

5. System data structure design

Key points of logical structure design

Physical structure design points

The input data must meet the following requirements:

Video: 1080p; real-life pictures and videos, including material affected by lighting and shot from different angles in daily life.

Voice data: after noise processing, the voice should be close to that of an ordinary person speaking in daily life.

Environmental information: consistent with changes in the actual living environment and with the numerical requirements.

The relationship between data structures and programs

6. System error handling design

6.1 Error messages

The robot's various peripheral modules may be damaged or burned out due to improper operation.

6.2 Remedial measures

Possible workarounds after a fault occurs:

  • Backup technology: when the original system data are lost, a copy is created and brought online; for disk media this means periodically recording disk information to tape.

  • Fallback technology: use another, less efficient system or method to obtain part of the desired result; for example, the fallback for an automatic system can be manual operation and manual recording of data.

  • Recovery and restart technology: resume execution of the software from the point of failure, or restart the software from scratch.

6.3 System Maintenance Design

Regularly inspect and maintain the robot.

7. References

  • Karl E. Wiegers. Software Requirements (2nd Edition). Beijing: Tsinghua University Press, 2014.

  • Zhang Haifan, Mou Yongmin. Introduction to Software Engineering (4th Edition). Beijing: Tsinghua University Press, 2013.

  • Zheng Huizhen. Speech Detection Research Based on LSTM Network and GMM [D]. Shandong Normal University, 2019.

  • Zheng Yun, Xu Kaishou, He Lu, Li Jinling, Zheng Yu'ai, Liu Liru. Study on head shape characteristics and correlation of plagiocephaly infants [J]. Chinese Journal of Practical Pediatrics, 2017, 32(21): 1674-1678.

  • Zhan Yuxian, Xu Shujie, Ling Huolong, Liu Changye, Chen Geng, Zhu Jiahao, Ding Fan, Chen Jinghua. Intelligent Home Robot Based on Raspberry Pi [J]. Electronic World, 2020(10): 206.

  • Wu Qida, Huang Dongsheng, Jiang Zhenying, Li Weijun. A Multifunctional Household Monitoring Robot and Its System Design [J]. Fujian Agricultural Machinery, 2020(01): 40-45+49.

  • Zhao Mengxing, Xia Quanhui. Characteristics and Cultivation Strategies of Emotional Development of 0-3-year-old Infants [J]. Journal of Changchun Institute of Education, 2018, 34(04): 18-21.
