HTML5-based Learning and Generation for Virtual Sensors


jActivity

Commonly, web sites are adapted only based on static context parameters (e.g. screen size), preference sets, or location information. This module adds the possibility to classify the user context during browsing, based on the acceleration or orientation of the smartphone or on the user's touch gestures.

jActivity leverages the HTML5 events "DeviceMotion" and "DeviceOrientation", as well as "TouchEvent". The corresponding specifications are the DeviceOrientation Event Specification (W3C Working Draft) and the Touch Events specification (W3C Recommendation).
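
As an illustration, the sketch below shows how a web page can subscribe to these events with standard browser APIs; the actual feature extraction performed by jActivity may differ.

// Subscribe to motion, orientation and touch events (standard HTML5 APIs).
window.addEventListener("devicemotion", function (event)
{
   // Acceleration including gravity, in m/s^2, per axis.
   var acc = event.accelerationIncludingGravity;
   console.log("Acceleration:", acc.x, acc.y, acc.z);
});

window.addEventListener("deviceorientation", function (event)
{
   // Orientation angles in degrees: alpha (z), beta (x), gamma (y).
   console.log("Orientation:", event.alpha, event.beta, event.gamma);
});

document.addEventListener("touchstart", function (event)
{
   // Page coordinates of the first touch point.
   var touch = event.touches[0];
   console.log("Touch at:", touch.pageX, touch.pageY);
});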

The learning system is the start of a reimplementation of the actiServ system (http://www.teco.edu/~berch/publications/ISWC10) using open source components.

It already supports crowdsourcing, i.e. the collection of training data from different users. The training data is differentiated with respect to the user agent (i.e. browser, operating system, and device). This allows not only a generic classifier but also user agent-specific classifiers, which is important because different operating systems and browsers support sensor access differently, e.g. by firing sensor events at different rates. The system also allows users to log in with a Google account and thereby provide personalized training data.
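
For illustration only, a training sample could be annotated with the user agent string before upload roughly as follows; the field names are hypothetical and do not reflect jActivity's actual data schema.

// Illustrative sketch: attach the user agent string to a recorded sample
// so that user agent-specific classifiers can be trained later.
function buildTrainingSample(sensorValues, label)
{
   return {
      label: label,                  // e.g. "Walking" or "Standing"
      values: sensorValues,          // e.g. a window of acceleration readings
      userAgent: navigator.userAgent // identifies browser, OS and device
   };
}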


Keywords: Activity Recognition, HTML5, Machine Learning
Technologies: HTML5
License: MIT
FurtherInfo: https://github.com/teco-kit/jActivity/, Contact exler@teco.edu

NOTE: If you find this component useful or want to comment, leave a short message on the discussion page of this component.


Potential Applications

The system can be used to adapt the font size during different activities.

// To use the results of the classifier, register a callback function and
// react to the classification result according to your needs.
// For example:
function callback(result)
{
   if (result === "Walking")
   {
      makeFontSizeBigger();
   } else {
      makeFontSizeSmaller();
   }
}

The system might also alter the design of a website, trigger alerts, or show pop-ups.


Technologies

The current version relies on standards and is modular and extensible. It supports a wide range of HTML5 events, e.g. DeviceOrientation, DeviceMotion and TouchEvent, and can be extended with any other HTML5 event and the corresponding features. The system contains three main components:

1. A database that manages the crowd-sourced data, including the specific user agent of the device used to collect it, so that different users and devices can be differentiated.
2. An OpenCPU implementation that runs R code to build classifiers from the data available in the database and exports them as PMML.
3. A PMML-to-JavaScript converter that, eventually, brings the JavaScript classifier into the web browser.

All three components offer REST APIs. The OpenCPU / R component might be replaced by any other component that accepts external input via REST and outputs a PMML file via another REST API. The web designer includes a JavaScript source in the website code that calls the converter with a GET request whose parameters specify the dataset to be selected, the sampling strategy to be applied and the classifier to be used. The converter communicates with the OpenCPU component via REST, handing over these parameters and requesting the corresponding PMML classifier. The OpenCPU component fetches the training data matching the specified parameters from the database and builds a classifier from it. The classifier is exported as PMML and handed back to the converter. The converter translates the PMML file into JavaScript and delivers the JavaScript code to the website. The website can now score incoming sensor values against the model to make a decision about the user context. This context might be the activity of the user or the hand that was used to navigate the website.
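
The exact endpoint and parameter names are defined by the converter's REST API; the following sketch only illustrates the general pattern of loading a generated classifier and scoring sensor features with it. The URL, the parameter names and the scoreSensorValues function are hypothetical.

// Hypothetical example: request the generated JavaScript classifier from the
// converter and score sensor features once the script has been loaded.
var params = "dataset=devicemotion&sampling=window1s&classifier=decisionTree";
var script = document.createElement("script");
script.src = "https://example.org/converter?" + params; // hypothetical URL
script.onload = function ()
{
   // The generated code is assumed to expose a scoring function; its actual
   // name and signature depend on the converter's output.
   var result = scoreSensorValues({ meanAcc: 9.8, varAcc: 3.1 });
   callback(result); // reuse the callback from the example above
};
document.head.appendChild(script);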


Licence Information

MIT Licence

See https://github.com/teco-kit/jActivity/blob/master/LICENSE for more information.


Status, Known Issues & Planned Work

Further sensor sources in addition to deviceMotion, deviceOrientation and touch might be implemented, e.g. microphone access. This would also allow a wider range of activities to be recognized.

The generalization and personalization features for classifiers need to be ported from the proprietary Matlab code.

More classifiers need to be implemented in JavaScript. Currently, decision trees are supported.
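
For illustration, a small PMML decision tree translated to JavaScript might look roughly like the sketch below; the feature names and thresholds are made up, and the actual output format is defined by the converter.

// Illustrative only: a tiny decision tree over the mean and variance of the
// acceleration magnitude, as the converter might emit it in JavaScript.
function scoreSensorValues(features)
{
   if (features.varAcc > 2.5)
   {
      return "Walking";
   }
   if (features.meanAcc > 10.5)
   {
      return "Standing";
   }
   return "Sitting";
}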


Code Repository and Online Documentation

https://github.com/teco-kit/jActivity/