Paper: Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform (Sensors)

Publication: IEEE SENSORS JOURNAL, VOL. 20, NO. 18, SEPTEMBER 15, 2020

Objective: a radar-based gesture recognition system with low computational complexity

Existing problem: previously proposed 2-D CNN, 3-D CNN and LSTM models for gesture classification require large amounts of memory and are computationally inefficient.

Method: a feature cube (range, Doppler, azimuth, elevation and magnitude) serves as the input of a shallow CNN

Performance: classifies 12 gestures in real time with a high F1-score.

Keywords: gesture classification, edge-computing platform


Main contributions:

  1. The proposed signal-processing framework recognizes more gestures (12) than those reported in other works in the literature, and it runs in real time on an edge-computing platform with limited memory and computational capability.
  2. We develop a multi-feature encoder that constructs the gesture profile by encoding range, Doppler, azimuth, elevation and temporal information into a feature cube with reduced dimensions, for the sake of data-processing efficiency.
  3. We develop a hand activity detection (HAD) algorithm based on the short-term average/long-term average (STA/LTA) concept to reliably detect the tail of a gesture.
  4. Since the multi-feature encoder packs all necessary information into a compact representation, a shallow CNN taking the feature cube as input suffices to achieve promising classification performance.
  5. The proposed framework is evaluated twofold: its performance is compared with benchmarks in an off-line scenario, and its recognition ability in the real-time case is assessed as well.

STEP1: Feature cube

We encode the range, Doppler, azimuth, elevation and magnitude of the K points with the largest magnitudes in the range-Doppler map RD(p, q), collected along IL measurement cycles, into a feature cube V of dimension IL × K × 5.
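A minimal NumPy sketch of how such an encoder could be realized, assuming complex range-Doppler maps and per-bin azimuth/elevation estimates are already computed; the function name and array layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def build_feature_cube(rd_maps, aoa_az, aoa_el, K=16):
    """Hypothetical sketch of the multi-feature encoder (STEP 1).

    rd_maps : complex range-Doppler maps, shape (IL, P, Q), one per cycle
    aoa_az, aoa_el : per-bin azimuth/elevation estimates, same shape
    Returns a feature cube V of shape (IL, K, 5).
    """
    IL, P, Q = rd_maps.shape
    V = np.zeros((IL, K, 5), dtype=np.float32)
    for i in range(IL):
        mag = np.abs(rd_maps[i])
        # indices of the K strongest cells in this cycle's range-Doppler map
        flat = np.argpartition(mag.ravel(), -K)[-K:]
        p, q = np.unravel_index(flat, (P, Q))
        V[i, :, 0] = p                   # range bin index
        V[i, :, 1] = q                   # Doppler bin index
        V[i, :, 2] = aoa_az[i, p, q]     # azimuth of each selected point
        V[i, :, 3] = aoa_el[i, p, q]     # elevation of each selected point
        V[i, :, 4] = mag[p, q]           # magnitude of each selected point
    return V
```

Keeping only the K strongest points per cycle is what shrinks the raw range-Doppler data into a cube small enough for a shallow network.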

STEP2: Hand activity detection (analogous to voice activity detection, VAD)

The proposed STA/LTA-based gesture detector detects when a gesture finishes, i.e., the tail of the gesture, rather than its start time-stamp.
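A minimal sketch of a short-term average/long-term average (STA/LTA) trigger of this kind, assuming a per-measurement-cycle power sequence as input; window lengths and thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def detect_gesture_tail(power, sta_len=5, lta_len=40, on_thr=2.0, off_thr=0.8):
    """Hypothetical STA/LTA tail detector (STEP 2).

    power : 1-D array of per-cycle signal power (e.g. summed RD magnitudes).
    Returns the index of the cycle where activity decays back toward the
    background level, i.e. the tail of the gesture, or None if no gesture.
    """
    active = False
    for n in range(lta_len, len(power)):
        sta = np.mean(power[n - sta_len:n])   # short-term average: recent activity
        lta = np.mean(power[n - lta_len:n])   # long-term average: background level
        ratio = sta / (lta + 1e-12)
        if not active and ratio > on_thr:     # activity rises above background
            active = True
        elif active and ratio < off_thr:      # activity decays -> gesture tail
            return n
    return None
```

Detecting the tail rather than the start means a complete gesture segment can be handed to the classifier as soon as the motion ends.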

STEP3: Supervised learning

A shallow CNN takes the feature cube as input to classify the 12 gestures; refer to Fig. 6 in the paper.
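A minimal PyTorch sketch of what such a shallow CNN could look like, treating the five feature dimensions as input channels; layer widths and the cube size are assumptions for illustration, not the paper's exact architecture (that is what Fig. 6 specifies).

```python
import torch
import torch.nn as nn

class ShallowGestureCNN(nn.Module):
    """Illustrative shallow CNN over an IL x K x 5 feature cube (STEP 3).

    The 5 features (range, Doppler, azimuth, elevation, magnitude) are
    treated as channels; all layer sizes here are assumed, not from the paper.
    """
    def __init__(self, il=32, k=16, num_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # two 2x poolings shrink IL and K by a factor of 4 each
        self.classifier = nn.Linear(32 * (il // 4) * (k // 4), num_classes)

    def forward(self, x):              # x: (batch, 5, IL, K)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a feature cube V of shape (IL, K, 5), permuted to channels-first
model = ShallowGestureCNN()
v = torch.randn(1, 5, 32, 16)          # one cube as a 5-channel "image"
logits = model(v)                      # shape (1, 12), one score per gesture
```

Because the encoder has already compacted the gesture profile, two convolutional layers and one linear layer are plausibly enough, which is what makes real-time inference on a device like the Jetson Nano feasible.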

Experiment:

The radar is connected to an edge-computing platform, the NVIDIA Jetson Nano, which is equipped with a quad-core ARM A57 CPU running at 1.43 GHz, a 128-core Maxwell GPU and 4 GB of memory.

Results:

First, its performance is thoroughly compared with benchmarks from the literature through off-line cross-validation; second, its real-time capability is investigated with an on-line performance test.

OFFLINE TEST 

1) Classification Accuracy and Training Loss Curve (Table II)

2) Confusion Matrix (Fig. 11)

3) Computational Complexity and Memory (Table III)

ONLINE TEST

1) Precision, Recall and F1-Score (Table IV)

2) Detection Matrix (Table V)

3) Run time (Table VI)
