Point and Control

Point & Control Module

This module supports gesture-based selection of devices in physical environments. It currently uses the Microsoft Kinect depth camera to enable Point&Click interaction for controlling appliances in smart environments. A backend server determines via collision detection which device the user is pointing at and sends the corresponding control interface to the user's smartphone. Any commands the user issues are sent back to the server, which in turn controls the appliance. New devices can be registered either manually or by means of markers such as QR codes, which identify them and provide their position at the same time.
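
Conceptually, the collision detection amounts to casting a ray from the user's arm (as reported by the Kinect skeleton) into the room and testing it against the stored device positions. The following C# sketch illustrates this idea with a simple ray/bounding-sphere test; the types Vector3 and Device and the method FindPointedDevice are illustrative assumptions and are not taken from the repository.

  // Minimal sketch of ray-based device selection (illustrative only; the names
  // Vector3, Device and FindPointedDevice are not taken from the repository).
  using System;
  using System.Collections.Generic;
  
  struct Vector3
  {
      public double X, Y, Z;
      public Vector3(double x, double y, double z) { X = x; Y = y; Z = z; }
      public static Vector3 operator -(Vector3 a, Vector3 b)
      {
          return new Vector3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
      }
      public double Dot(Vector3 o) { return X * o.X + Y * o.Y + Z * o.Z; }
      public Vector3 Normalized()
      {
          double l = Math.Sqrt(Dot(this));
          return new Vector3(X / l, Y / l, Z / l);
      }
  }
  
  class Device
  {
      public string Name;       // e.g. "Lamp"
      public Vector3 Position;  // centre of the device in room coordinates
      public double Radius;     // rough bounding sphere used for collision detection
  }
  
  static class PointingSelector
  {
      // Casts a ray from the elbow through the hand (both taken from the Kinect
      // skeleton) and returns the closest device whose bounding sphere it hits.
      public static Device FindPointedDevice(Vector3 elbow, Vector3 hand, IEnumerable<Device> devices)
      {
          Vector3 direction = (hand - elbow).Normalized();
          Device best = null;
          double bestDistance = double.MaxValue;
  
          foreach (var device in devices)
          {
              Vector3 toDevice = device.Position - hand;
              double along = toDevice.Dot(direction);   // distance along the pointing ray
              if (along < 0) continue;                  // device lies behind the user
              double offAxisSq = toDevice.Dot(toDevice) - along * along;
              if (offAxisSq <= device.Radius * device.Radius && along < bestDistance)
              {
                  best = device;
                  bestDistance = along;
              }
          }
          return best;
      }
  }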


Keywords: remote control, smart home, user location
Technologies: Kinect, Windows, C#
License: MIT
FurtherInfo: Code Repository: https://github.com/teco-kit/PointAndControl, Contact: [1]

NOTE: If you find this component useful or want to comment, leave a short message on the discussion page of this component.


Potential Applications

It could be used with all systems supported in the scope of URC, i.e. in smart home scenarios, and generally improves the user experience.

It replaces browsing through lists of devices with spatial selection gestures, which in turn requires the motor abilities and spatial cognition to perform them.

In particular, we expect the system to require less digital literacy, and (e.g. in combination with speech output) we expect benefits for users with visual impairments or users who generally prefer not to use small screens on mobile devices.

Technologies

The platform support is currently limited by the dependency on the Kinect sensor. The current implementation is based on the official Kinect for Windows SDK by Microsoft, which is only available for Windows 7 and above. The complete SDK is not needed for execution, but the redistributable files and drivers are currently not included in the repository. The code is written in C# and tested with .NET 4.0 and above. The configuration and the locations of the devices in a 3D room layout are stored in an XML file that is read during initialization; changes made in the program are written back to the file. HTTP is used for communication between client and server, and the remote control user interfaces are designed in HTML.
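
To illustrate the configuration handling described above, a device layout of this kind could be read and written with LINQ to XML roughly as follows. The element and attribute names (devices, device, name, x, y, z) are assumptions for the sake of the example and do not reflect the project's actual schema.

  // Illustrative sketch of loading and saving a device layout from XML; the
  // element and attribute names are assumptions, not the repository's schema.
  using System.Collections.Generic;
  using System.Linq;
  using System.Xml.Linq;
  
  class DeviceEntry
  {
      public string Name;
      public double X, Y, Z;   // position of the device in the 3D room layout
  }
  
  static class DeviceLayout
  {
      // Assumed (hypothetical) file format:
      // <devices>
      //   <device name="Lamp" x="1.2" y="0.8" z="2.5" />
      // </devices>
      public static List<DeviceEntry> Load(string path)
      {
          var doc = XDocument.Load(path);
          return doc.Root.Elements("device")
              .Select(e => new DeviceEntry
              {
                  Name = (string)e.Attribute("name"),
                  X = (double)e.Attribute("x"),
                  Y = (double)e.Attribute("y"),
                  Z = (double)e.Attribute("z")
              })
              .ToList();
      }
  
      // Writes the (possibly modified) layout back to the file, mirroring how
      // changes made in the program are persisted.
      public static void Save(string path, IEnumerable<DeviceEntry> devices)
      {
          var doc = new XDocument(
              new XElement("devices",
                  devices.Select(d => new XElement("device",
                      new XAttribute("name", d.Name),
                      new XAttribute("x", d.X),
                      new XAttribute("y", d.Y),
                      new XAttribute("z", d.Z)))));
          doc.Save(path);
      }
  }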

A client on the user's smartphone is required to interact with the system. The current implementation uses a native Android application with an integrated WebView (a simple HTML renderer) to display the remote control user interface. The application manages the connection to the server and provides the UI for registration and device selection.
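
To give a feel for the exchange between the smartphone client and the server, the sketch below shows how an HTML control page could be served over HTTP using System.Net.HttpListener. The endpoint, port and markup are assumptions made for illustration and do not mirror the project's actual implementation.

  // Minimal sketch of serving an HTML control interface over HTTP; the URL,
  // port and markup are purely illustrative and not taken from the project.
  using System;
  using System.Net;
  using System.Text;
  
  class ControlUiServer
  {
      public static void Main()
      {
          var listener = new HttpListener();
          listener.Prefixes.Add("http://+:8080/control/");   // assumed endpoint
          listener.Start();
          Console.WriteLine("Waiting for the smartphone client...");
  
          while (true)
          {
              HttpListenerContext context = listener.GetContext();   // blocks until a request arrives
  
              // A real server would return the UI of the device the user points at;
              // here we always return the same placeholder page.
              string html = "<html><body><h1>Lamp</h1>" +
                            "<a href=\"/control/lamp/on\">On</a> " +
                            "<a href=\"/control/lamp/off\">Off</a>" +
                            "</body></html>";
  
              byte[] body = Encoding.UTF8.GetBytes(html);
              context.Response.ContentType = "text/html";
              context.Response.ContentLength64 = body.Length;
              context.Response.OutputStream.Write(body, 0, body.Length);
              context.Response.OutputStream.Close();
          }
      }
  }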

Licence Information

The code of the system is released under the MIT License, as stated in the repository.

Status, Known Issues & Planned Work

The system works reliably with the first version of the Kinect sensor. Currently, the device repository, device selection and device control are integrated in a single component.

Future Work:

  • Test performance of Kinect One sensor
  • Separate repository, selection and control into individual components, communicating via REST/JSON
  • Structured API with documentation
  • Migrate Android native app to web app

Further Resources

Videos/Demos

A video demonstrating the system can be found here: http://www.youtube.com/watch?v=RqlCYBIUMos

Documentation/FAQs

A paper abstract with more information on the system can be found here: http://www.teco.edu/~berning/papers/ubicomp2013_video.pdf

Related

Kinect for Windows SDK and Documentation

Getting Involved

The code is hosted on GitHub at https://github.com/teco-kit/PointAndControl — this is a public repository, so everybody can contribute. If you find any bugs, you can also open a new issue.