Radar-based Sensor and Data Fusion for Human Activity Recognition (HAR) 

Human Activity Recognition (HAR) has been an active area of research for decades. It plays an important role in applications such as Smart Homes (SH), smart environments, and Ambient Assisted Living (AAL). In an SH environment, HAR aims to identify high-level activities such as cooking, dressing, bathing, walking, and sitting, performed in a ubiquitous environment with a real-time response. In the case of AAL, HAR plays a vital role by enabling real-time patient monitoring. Such systems allow inhabitants to live autonomously in their own homes and can be used to infer behavioral patterns for health and security interventions. In a smart environment, HAR enables applications involving real-time surveillance of public areas such as airports, train stations, subways, and shopping malls. Together with intelligent AI-assisted behavioral analysis, HAR can be used to detect abnormal activities in public places and can hence be employed in the context of smart surveillance.

In the literature, HAR approaches can be divided into two categories: wearable-based and observation-based. In the case of the former, researchers use wearable devices and sensors such as accelerometers and gyroscopes, whereas in the case of the latter, camera sensors are typically used. Vision-based HAR has several advantages, such as accurately modeling a smart-home environment and its occupant's activities; however, it compromises the privacy of the occupant, which is a key concern. In this work, we make use of multiple RAdio Detection and Ranging (RADAR) sensors for the purpose of indoor HAR in a smart-home environment. Unlike camera images, raw radar data cannot be interpreted by the naked eye; as a result, the radar sensor preserves the privacy of the target.
The overall goal of this research is to develop and apply novel machine learning approaches for radar-based HAR in the context of an SH environment. We formulate the problem of HAR in a hierarchical fashion, where the high-level activities of an occupant are described by an ordered sequence of actions and action-primitives. The latter, in the context of radar-based HAR, can be detected and recognized using various radar-echo representations, such as time-range (range-profile) or time-Doppler (micro-Doppler). Once detected and recognized successfully, action-primitives may be combined with actions and with the context of time, place, and person, allowing us to identify the hierarchical activity of an occupant. In the first phase of this research, we consider a single-target/single-sensor scenario and use supervised machine learning with multi-model and multi-modal data-fusion approaches for action classification. In the second phase, we plan to investigate a multi-sensor/single-target scenario, where we will employ sensor- and data-fusion approaches for HAR. Finally, in the last phase of this research, we will extend the developed methods to a multi-sensor/multi-target scenario.
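To illustrate the time-Doppler (micro-Doppler) representation mentioned above, the following is a minimal sketch of how such a map can be computed from a slow-time radar signal via the short-time Fourier transform (STFT). The signal here is simulated, and all parameter values (pulse repetition frequency, window length, modulation frequencies) are illustrative assumptions rather than properties of any specific radar used in this work.

```python
# Hypothetical sketch: a time-Doppler (micro-Doppler) map from a simulated
# complex radar return, computed with the STFT. Parameters are assumptions.
import numpy as np
from scipy.signal import stft

prf = 1000.0                     # pulse repetition frequency in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / prf)

# Simulated return: a constant-Doppler body component plus a sinusoidally
# modulated component standing in for a moving limb (the micro-Doppler part).
body = np.exp(1j * 2 * np.pi * 50 * t)
limb = 0.5 * np.exp(1j * 2 * np.pi * (50 * t + 20 * np.sin(2 * np.pi * 1.5 * t)))
signal = body + limb

# STFT over slow time: each column of Z is the Doppler spectrum of one
# short window, so |Z| over time forms the micro-Doppler signature.
f, tau, Z = stft(signal, fs=prf, nperseg=128, noverlap=96,
                 return_onesided=False)
micro_doppler = 20 * np.log10(np.abs(Z) + 1e-12)  # magnitude in dB
```

In practice, such maps (together with time-range profiles) would serve as input representations for the action-primitive classifiers described in the first phase.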