Activity recognition
Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations of the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study, such as medicine.
To understand activity recognition better, consider the following scenario. An elderly man wakes up at dawn in his small studio apartment, where he lives alone. He lights the stove to make a pot of tea, switches on the toaster oven, and takes some bread and jelly from the cupboard. After he takes his morning medication, a computer-generated voice gently reminds him to turn off the toaster. Later that day, his daughter accesses a secure website and scans a check-list, which was created by a sensor network in her father's apartment. She finds that her father is eating normally, taking his medicine on schedule, and continuing to manage his daily life on his own. That information puts her mind at ease.
Researchers in activity recognition have studied many different applications, such as assisting the sick and disabled. For example, Pollack et al. [Pollack, M. E., et al. "Autominder: an intelligent cognitive orthotic system for people with memory impairment". "Robotics and Autonomous Systems" 44(3-4):273-282, 2003.] show that by automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. Other applications range from security-related applications and logistics support to location-based services. Due to its many-faceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation, and location-based services.
Types of activity recognition
Sensor-based activity recognition
Sensor-based activity recognition integrates the emerging area of sensor networks with novel data mining and machine learning techniques to model a wide range of human activities. Sensor-based activity recognition researchers believe that by empowering ubiquitous computers and sensors to monitor the behavior of agents (with their consent), these computers will be better suited to act on our behalf.
Levels of sensor-based activity recognition
Sensor-based activity recognition is a challenging task due to the inherently noisy nature of the input. Thus, statistical modeling has been the main thrust in this direction, organized in layers: recognition is conducted at several intermediate levels, and the levels are connected. At the lowest level, where the sensor data are collected, statistical learning concerns how to find the detailed locations of agents from the received signal data. At an intermediate level, statistical inference may be concerned with how to recognize individuals' activities from the inferred location sequences and environmental conditions at the lower levels. At the highest level, a major concern is to find out the overall goal or subgoals of an agent from the activity sequences through a mixture of logical and statistical reasoning.
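As a concrete illustration of the intermediate layer, the following sketch decodes an activity sequence from an already-inferred location sequence using a hidden Markov model and the Viterbi algorithm. The activities, locations, and all probabilities are invented for illustration; a real system would learn them from labelled sensor traces.

 # A minimal sketch of the intermediate inference layer: recognizing
 # activities from an already-inferred location sequence with a hidden
 # Markov model. All states, observations, and probabilities below are
 # illustrative assumptions, not values from the literature.
 ACTIVITIES = ["cooking", "sleeping", "watching_tv"]   # hidden states
 
 # P(first activity), P(activity_t | activity_{t-1}), P(location | activity)
 initial = {"cooking": 0.4, "sleeping": 0.3, "watching_tv": 0.3}
 transition = {
     "cooking":     {"cooking": 0.7, "sleeping": 0.1, "watching_tv": 0.2},
     "sleeping":    {"cooking": 0.2, "sleeping": 0.7, "watching_tv": 0.1},
     "watching_tv": {"cooking": 0.2, "sleeping": 0.2, "watching_tv": 0.6},
 }
 emission = {
     "cooking":     {"kitchen": 0.8,  "bedroom": 0.1, "living_room": 0.1},
     "sleeping":    {"kitchen": 0.05, "bedroom": 0.9, "living_room": 0.05},
     "watching_tv": {"kitchen": 0.1,  "bedroom": 0.1, "living_room": 0.8},
 }
 
 def viterbi(locations):
     """Return the most likely activity sequence for a location sequence."""
     # best[t][a] = probability of the best path ending in activity a at time t
     best = [{a: initial[a] * emission[a][locations[0]] for a in ACTIVITIES}]
     back = []
     for obs in locations[1:]:
         scores, pointers = {}, {}
         for a in ACTIVITIES:
             prev, p = max(((b, best[-1][b] * transition[b][a])
                            for b in ACTIVITIES), key=lambda x: x[1])
             scores[a] = p * emission[a][obs]
             pointers[a] = prev
         best.append(scores)
         back.append(pointers)
     # Trace back from the most likely final activity.
     state = max(best[-1], key=best[-1].get)
     path = [state]
     for pointers in reversed(back):
         state = pointers[state]
         path.append(state)
     return list(reversed(path))
 
 print(viterbi(["kitchen", "kitchen", "living_room", "bedroom"]))
 # -> ['cooking', 'cooking', 'watching_tv', 'sleeping']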
Vision-based activity recognition
Tracking and understanding the behavior of agents through videos taken by various cameras is a very important and challenging problem. The primary technique employed is computer vision. Vision-based activity recognition has found many applications, such as human-computer interaction, user interface design, robot learning, and surveillance. Conferences where vision-based activity recognition work often appears include ICCV and CVPR.
In vision-based activity recognition, a great deal of work has been done. Researchers have attempted a number of methods, such as optical flow, Kalman filtering, and hidden Markov models, under different modalities such as single camera, stereo, and infra-red. In addition, researchers have considered multiple aspects of this topic, including single pedestrian tracking, group tracking, and detecting dropped objects.
Levels of vision-based activity recognition
In vision-based activity recognition, the computational process is often divided into four steps, namely human detection, human tracking, human activity recognition, and high-level activity evaluation.
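As an illustration of these four steps, the following skeleton uses OpenCV's stock HOG pedestrian detector for the detection step and deliberately naive stand-ins for the remaining steps. The video filename, the speed threshold, and the walking/standing rule are assumptions made for illustration, not a method from the literature.

 # A skeleton of the four-step vision pipeline using OpenCV's stock HOG
 # pedestrian detector. Real systems replace the naive tracking and
 # rule-based labelling below with learned trackers and classifiers.
 import cv2
 
 hog = cv2.HOGDescriptor()
 hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
 
 def centroid(rect):
     x, y, w, h = rect
     return (x + w / 2, y + h / 2)
 
 cap = cv2.VideoCapture("hallway.avi")   # hypothetical input video
 prev = None
 while True:
     ok, frame = cap.read()
     if not ok:
         break
     # Step 1: human detection.
     rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
     if len(rects) == 0:
         continue
     # Step 2: human tracking (here: naive single-target centroid tracking).
     c = centroid(rects[0])
     # Step 3: activity recognition from the track (a stub rule).
     if prev is not None:
         speed = abs(c[0] - prev[0]) + abs(c[1] - prev[1])
         label = "walking" if speed > 2.0 else "standing"
         # Step 4: high-level evaluation would aggregate labels over time,
         # e.g. flagging "loitering" if "standing" persists for many frames.
         print(label)
     prev = c
 cap.release()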
Approaches to activity recognition
Activity recognition through logic and reasoning
Logic-based approaches keep track of all logically consistent explanations of the observed actions. Thus, all possible and consistent plans or goals must be considered. Kautz [H. Kautz. "A formal theory of plan recognition". PhD thesis, University of Rochester, 1987.] provided a formal theory of plan recognition. He described plan recognition as a logical inference process of circumscription. All actions and plans are uniformly referred to as goals, and a recognizer's knowledge is represented by a set of first-order statements, called an event hierarchy, encoded in first-order logic, which defines abstraction, decomposition, and functional relationships between types of events. Kautz's general framework for plan recognition has exponential time complexity in the worst case, measured in the size of the input hierarchy. Lesh and Etzioni [N. Lesh and O. Etzioni. "A sound and fast goal recognizer". In "Proceedings of the International Joint Conference on Artificial Intelligence", 1995.] went one step further and presented methods for scaling up goal recognition computationally. In contrast to Kautz's approach, where the plan library is explicitly represented, Lesh and Etzioni's approach enables automatic plan-library construction from domain primitives. Furthermore, they introduced compact representations and efficient algorithms for goal recognition on large plan libraries.
Inconsistent plans and goals are repeatedly pruned as new actions arrive. They also presented methods for adapting a goal recognizer to handle individual idiosyncratic behavior, given a sample of an individual's recent behavior. Pollack et al. described a direct argumentation model that can reason about the relative strength of several kinds of arguments for belief and intention description.
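The following toy sketch illustrates this consistency-based pruning: each candidate goal is paired with the action sequences (plans) that achieve it, and goals with no plan matching the observed action prefix are discarded as actions arrive. The goals and plans are invented for illustration and are far simpler than a real event hierarchy.

 # A toy illustration of consistency-based goal pruning. Each goal maps to
 # the plans (action sequences) that achieve it; all names are invented.
 PLAN_LIBRARY = {
     "make_tea":    [["boil_water", "add_teabag", "pour_water"]],
     "make_coffee": [["boil_water", "grind_beans", "pour_water"]],
     "make_toast":  [["slice_bread", "toast_bread"]],
 }
 
 def consistent_goals(observed):
     """Goals with at least one plan whose prefix matches the observed actions."""
     return sorted(goal for goal, plans in PLAN_LIBRARY.items()
                   if any(plan[:len(observed)] == observed for plan in plans))
 
 # Goals are pruned incrementally as each new action arrives.
 observed = []
 for action in ["boil_water", "add_teabag"]:
     observed.append(action)
     print(action, "->", consistent_goals(observed))
 # boil_water -> ['make_coffee', 'make_tea']
 # add_teabag -> ['make_tea']

Note that after "boil_water" the recognizer cannot prefer tea over coffee; both remain consistent, which is exactly the limitation discussed next.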
A serious problem with logic-based approaches is their inability, or inherent infeasibility, to represent uncertainty. They offer no mechanism for preferring one consistent explanation to another and are incapable of deciding whether one particular plan is more likely than another, as long as both are consistent enough to explain the observed actions. Logic-based methods also lack learning ability.
Activity recognition through probabilistic reasoning
Probability theory and statistical learning models have more recently been applied in activity recognition to reason about actions, plans, and goals.
Charniak and Goldman [E. Charniak and R. P. Goldman. "A Bayesian model of plan recognition". "Artificial Intelligence", 64:53-79, 1993.] argued convincingly that plan recognition should be treated as a process of reasoning under uncertainty: any model that does not incorporate some theory of uncertain reasoning cannot be adequate. In the literature, there have been several approaches which explicitly represent uncertainty in reasoning about an agent's plans and goals.
Using sensor data as input, Hodges and Pollack designed machine-learning-based systems for identifying individuals as they perform routine daily activities such as making coffee [M. R. Hodges and M. E. Pollack. "An 'object-use fingerprint': The use of electronic sensors for human identification". In "Proceedings of the 9th International Conference on Ubiquitous Computing", 2007.]. The Intel Research (Seattle) Lab and the University of Washington at Seattle have done important work on using sensors to detect human plans [Mike Perkowitz, Matthai Philipose, Donald J. Patterson and Kenneth P. Fishkin. "Mining models of human activities from the web". In "Proceedings of the Thirteenth International World Wide Web Conference" (WWW 2004), pages 573-582, May 2004.] [Matthai Philipose, Kenneth P. Fishkin, Mike Perkowitz, Donald J. Patterson, Dieter Fox, Henry Kautz and Dirk Hähnel. "Inferring activities from interactions with objects". "IEEE Pervasive Computing", pages 50-57, October 2004.] [Lin Liao, Donald J. Patterson, Dieter Fox and Henry A. Kautz. "Learning and inferring transportation routines". "Artif. Intell.", 171(5-6):311-331, 2007.]. Some of these works infer user transportation modes from readings of radio-frequency identifiers (RFID) and global positioning systems (GPS).
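A minimal sketch in the Bayesian spirit of Charniak and Goldman is given below: it maintains a posterior distribution over candidate goals and updates it with each observed action. The prior, the likelihoods, and the smoothing constant are invented for illustration and do not come from any of the cited systems.

 # A minimal Bayesian sketch of plan recognition: keep a posterior over
 # goals and update it with each observed action. All numbers are
 # illustrative assumptions, not values from the literature.
 priors = {"make_tea": 0.5, "make_coffee": 0.3, "make_toast": 0.2}
 # P(action | goal); unlisted actions get a small smoothing probability.
 likelihood = {
     "make_tea":    {"boil_water": 0.5, "add_teabag": 0.4},
     "make_coffee": {"boil_water": 0.5, "grind_beans": 0.4},
     "make_toast":  {"slice_bread": 0.5, "toast_bread": 0.4},
 }
 EPS = 0.01
 
 def update(posterior, action):
     """Bayes rule: P(goal | action) is proportional to P(action | goal) P(goal)."""
     unnorm = {g: posterior[g] * likelihood[g].get(action, EPS) for g in posterior}
     z = sum(unnorm.values())
     return {g: p / z for g, p in unnorm.items()}
 
 posterior = dict(priors)
 for action in ["boil_water", "add_teabag"]:
     posterior = update(posterior, action)
     print(action, {g: round(p, 3) for g, p in posterior.items()})
 # After "add_teabag", nearly all probability mass shifts to make_tea.

Unlike the logic-based recognizer above, this model can prefer tea over coffee even while both remain logically consistent with the observations.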
Wi-Fi-based activity recognition
When activity recognition is performed indoors and in cities using widely available Wi-Fi signals and 802.11 access points, there is much noise and uncertainty. This uncertainty is modeled using a dynamic Bayesian network model in [Jie Yin, Xiaoyong Chai and Qiang Yang. "High-level Goal Recognition in a Wireless LAN". In "Proceedings of the Nineteenth National Conference on Artificial Intelligence" (AAAI-04), San Jose, CA, USA, July 2004. Pages 578-584.]. A multiple-goal model that can reason about a user's interleaving goals is presented in [Xiaoyong Chai and Qiang Yang. "Multiple-Goal Recognition From Low-level Signals". In "Proceedings of the Twentieth National Conference on Artificial Intelligence" (AAAI 2005), Pittsburgh, PA, USA, July 2005. Pages 3-8.], where a deterministic state transition model is applied. A better model, which captures concurrent and interleaving activities in a probabilistic approach, is proposed in [Derek Hao Hu and Qiang Yang. "CIGAR: Concurrent and Interleaving Goal and Activity Recognition". To appear in AAAI 2008.]. A user action discovery model is presented in [Jie Yin, Dou Shen, Qiang Yang and Ze-Nian Li. "Activity Recognition through Goal-Based Segmentation". In "Proceedings of the Twentieth National Conference on Artificial Intelligence" (AAAI 2005), Pittsburgh, PA, USA, July 2005. Pages 28-33.], where the Wi-Fi signals are segmented to produce possible actions.
A fundamental problem in Wi-Fi-based activity recognition is to estimate the user's location (a minimal sketch of this subproblem follows this paragraph). Two important issues are how to reduce the human labelling effort and how to cope with changing signal profiles when the environment changes. [Jie Yin, Qiang Yang and Lionel Ni. "Adaptive Temporal Radio Maps for Indoor Location Estimation". In "Proceedings of the 3rd Annual IEEE International Conference on Pervasive Computing and Communications" (IEEE PerCom 2005), Kauai Island, Hawaii, March 2005. Pages 85-94.] dealt with the second issue by transferring the labelled knowledge between time periods. [Xiaoyong Chai and Qiang Yang. "Reducing the Calibration Effort for Location Estimation Using Unlabeled Samples". In "Proceedings of the 3rd Annual IEEE International Conference on Pervasive Computing and Communications" (IEEE PerCom 2005), Kauai Island, Hawaii, March 2005. Pages 95-104.] proposed a hidden Markov model based method to extend labelled knowledge by leveraging unlabelled user traces. [Jeffrey Junfeng Pan, Qiang Yang and Sinno Jialin Pan. "Online Co-Localization in Indoor Wireless Networks". In "Proceedings of the 22nd AAAI Conference on Artificial Intelligence" (AAAI-07), Vancouver, British Columbia, Canada, July 2007. Pages 1102-1107.] proposed to perform location estimation through online co-localization, and [Sinno Jialin Pan, James T. Kwok, Qiang Yang and Jeffrey Junfeng Pan. "Adaptive localization in a dynamic WiFi environment through multi-view learning". In "Proceedings of the 22nd AAAI Conference on Artificial Intelligence" (AAAI-07), Vancouver, British Columbia, Canada, July 2007. Pages 1108-1113.] proposed to apply multi-view learning to migrate the labelled data to a new time period.
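The sketch below illustrates the basic radio-map ("fingerprinting") formulation of the location-estimation subproblem: a live received-signal-strength (RSS) vector is compared against labelled calibration fingerprints and the nearest location in signal space is returned. The access point names, RSS values, and locations are invented for illustration, and none of the adaptive methods cited above are implemented here.

 # A minimal radio-map ("fingerprinting") sketch of Wi-Fi location
 # estimation. All access points, locations, and RSS values (in dBm)
 # are invented for illustration.
 import math
 
 # Calibration data: location -> mean RSS from each access point.
 radio_map = {
     "office_a": {"ap1": -40, "ap2": -70, "ap3": -60},
     "office_b": {"ap1": -65, "ap2": -45, "ap3": -55},
     "corridor": {"ap1": -55, "ap2": -55, "ap3": -50},
 }
 
 def locate(reading):
     """Nearest-neighbour match between a live reading and the radio map."""
     def dist(fingerprint):
         return math.sqrt(sum((fingerprint[ap] - reading[ap]) ** 2
                              for ap in reading))
     return min(radio_map, key=lambda loc: dist(radio_map[loc]))
 
 print(locate({"ap1": -42, "ap2": -68, "ap3": -62}))   # -> office_a

The labelling-effort and signal-drift issues discussed above arise precisely because such a radio map must be calibrated by hand and becomes stale as the environment changes.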
Labs in the world
* [http://ihome.ust.hk/~derekhh/ActivityRecognition/index.html Derek Hao Hu's Activity Recognition Page]
* [http://www.intel.com/research/exploratory/activity_recognition.htm Intel Research Lab at Seattle]
* [http://www.eecs.umich.edu/~pollackm/ Martha Pollack's research group]
* [http://www.cse.ust.hk/~qyang/ Prof Qiang Yang's research group]
* [http://www.cs.washington.edu/ai/Mobile_Robotics/ RSE Lab @ University of Washington, led by Dieter Fox]
* [http://www.pancube.com/MLMC/MLWSN.html Jeffrey Junfeng Pan's Sensor-based Localization and Tracking Project]
* [http://www.cuslab.com/eng/template/vba.php Ajou University CUSLAB Vision-based Activity Awareness]
Related conferences
* [http://www.aaai.org/ AAAI]
* [http://vision.eecs.ucf.edu/ CVPR]
* [http://www.iccv2009.org/ ICCV]
* [http://www.ijcai.org/ IJCAI]
* [http://nips.cc/ NIPS]
* [http://www.pervasive2008.org/ PERVASIVE]
* [http://www.ubicomp.org/ Ubicomp]
* [http://www.percom.org/ PerCom]
* [http://www.iswc.net/ ISWC]
See also
* Planning
* Naive Bayes classifier
* Support vector machines
* Hidden Markov model
* Conditional random field