Activity recognition in meetings with one and two Kinect sensors
Chapter in Scopus


  • © Springer International Publishing Switzerland 2016. Knowing the activities that users perform is an essential part of their context, which becomes increasingly important in modern context-aware applications, yet determining these activities can be a daunting task. Many sensors have been used as information sources for inferring human activity, such as accelerometers and video cameras, but the recent availability of a sophisticated sensor designed specifically for tracking humans, the Microsoft Kinect, has opened new opportunities. The aim of this paper is to recognize human activities, such as eating, reading, and drinking, while a group of people is seated, using the Kinect skeleton structure as input. Furthermore, because of occlusion problems, it can be hypothesized that a combination of two Kinect sensors would give an advantage in activity recognition tasks, especially in meeting settings. In this paper, we compare the performance of a two-Kinect system against a single Kinect to determine whether there is a significant advantage in using two sensors. We also compare several classifiers for the activity recognition task, namely Naive Bayes, Support Vector Machines, and k-Nearest Neighbors.
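The abstract names three classifiers to be compared on skeleton-derived features. A minimal sketch of such a comparison is shown below, using scikit-learn; the feature layout (flattened joint coordinates), the synthetic data, and the cross-validation setup are illustrative assumptions, not the paper's actual dataset or protocol.

```python
# Hypothetical sketch: comparing the three classifiers named in the abstract
# (Naive Bayes, SVM, k-NN). The "skeleton" features here are synthetic
# stand-ins for flattened Kinect joint coordinates, not real recordings.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 200 synthetic frames; 20 joints x 3 coordinates = 60 features per frame.
X = rng.normal(size=(200, 60))
# Two made-up activity labels (e.g. "eating" vs. "reading"), kept
# deliberately separable so the sketch produces sensible accuracies.
y = (X[:, 0] + X[:, 1] > 0).astype(int)

CLASSIFIERS = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

def compare_classifiers(X, y, cv=5):
    """Return mean cross-validated accuracy for each classifier."""
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in CLASSIFIERS.items()}

scores = compare_classifiers(X, y)
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

In practice the features would come from the Kinect skeleton stream (and, for the two-sensor condition, from a fused pair of skeletons), but the comparison loop itself would look much like this.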

Publication date

  • January 1, 2016