Context

Context and Cloud4All

Note: This page will contain all of the outputs of WP103 from Cloud4All, but please feel free to add as much related content as you wish.

Meetings

Related Work

Outcomes

under construction

Components

TBD. In the meantime, please see D103.1 for requirements and specification, and D103.2 for design.

Environmental Reporter

Developed in collaboration with WP302.2: https://github.com/JChaconTechnosite/Cloud4ALL-Enviromental-reporter

Context Aware Server

https://github.com/barcelonadigital/Cloud4All---Context-Aware-Server

More on the Context Aware Server page.

File:Context Aware Server - Installation Guide v0.3.docx

MiniMatchMaker

https://github.com/JChaconTechnosite/Cloud4ALL-MiniMatchMaker

Rules

TBD. In the meantime, please see D103.1 for an ECA (Event-Condition-Action) version of the rules and D103.2 for a JENA version of them.
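
To make the ECA flavour concrete, here is a minimal sketch in TypeScript of what such a rule might look like, using the glare scenario discussed under A103.1 below. The rule shape, the field names and the 5000-lux threshold are illustrative assumptions; the actual rules are defined in D103.1 and D103.2.

```typescript
// A minimal, hypothetical ECA (Event-Condition-Action) rule shape; see
// D103.1 for the actual rule definitions.
interface EcaRule {
  event: string;                                       // what triggers evaluation
  condition: (ctx: Record<string, number>) => boolean; // guard over context values
  action: string;                                      // adaptation to request
}

// Glare scenario: if the ambient light on the screen exceeds a threshold,
// switch to an auditory user interface.
const glareRule: EcaRule = {
  event: "luminance-changed",
  condition: (ctx) => ctx.luminance > 5000, // lux; the threshold is an assumption
  action: "switch-to-auditory-ui",
};

function evaluate(rule: EcaRule, event: string, ctx: Record<string, number>): string | null {
  return event === rule.event && rule.condition(ctx) ? rule.action : null;
}

console.log(evaluate(glareRule, "luminance-changed", { luminance: 8000 }));
// -> "switch-to-auditory-ui"
```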

Environment

Context/Note on the confidence interval for data from sensors

Device

Platform

Application

WP103 workflow

A103.1 and A103.2 are the inputs of A103.3, which creates the input for A103.4, which in turn creates the output needed for the Needs and Preferences Server.

  • A103.1: Context-related profile modification rules will be done by Technosite.
  • A103.2: Application and platform identification rules will be done by CERTH.
  • A103.3: Service continuity and profile prioritisation rules will be done by Barcelona Digital.
  • A103.4: Context-related profile building will be done by CERTH.


A103.1

From the DoW, the starting point is the User Profile Ontology. Context is highly dynamic; for example, if a beam of light hits the screen, the glare may make it impossible to continue the visual interaction, so an auditory user interface is needed. The current architecture is planned to adapt to the user and his/her static environment, but dynamic changes are not addressed at the moment. A103.1 can join efforts with other activities in WP103.

Nowadays the Flow Manager inspects the User Needs & Preferences Server just once, when a user interaction is initiated, yet context changes often. On the other hand, if we send the information from every sensor connected to Cloud4All to the Flow Manager, we will experience an overload of information coming from sensors with no users in the same place as the sensor.

Going back to context, we are talking about a very dynamic environment, as the mere movement of the user can provoke a major change in his/her conditions (the glare is not a problem if the user can move the screen to a shaded place). However, inspecting the context too often is costly in terms of computational capabilities, network overhead and even battery power (many sensors, such as Berkeley motes, run on batteries instead of being plugged into mains power). Also, many sensors will be in places where no users are interacting, so we should avoid wasting time and power inspecting context that is of no interest to any user. At the very least, a notion of a threshold is to be added, in order to emit only the context changes that will provoke a UI change, saving network time and power.
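
As a rough illustration of this threshold idea, the sketch below shows a sensor-side filter that emits a reading only when it differs sufficiently from the last value emitted. It is a minimal sketch in TypeScript; the class name, the callback and the 500-lux figure are assumptions for illustration, not part of any Cloud4All specification.

```typescript
// Hypothetical sensor-side filter: a reading is forwarded only when it
// differs from the last emitted value by at least a configured threshold,
// saving network traffic and battery power on unplugged sensors.
class ThresholdEmitter {
  private lastEmitted: number | null = null;

  constructor(
    private threshold: number,             // minimum change worth reporting
    private emit: (value: number) => void, // e.g. push to the context-aware server
  ) {}

  onReading(value: number): void {
    if (this.lastEmitted === null || Math.abs(value - this.lastEmitted) >= this.threshold) {
      this.lastEmitted = value;
      this.emit(value);
    }
  }
}

// Example: only luminance changes of 500 lux or more reach the architecture.
const luminance = new ThresholdEmitter(500, (v) => console.log("emit", v));
luminance.onReading(1000); // emitted (first reading)
luminance.onReading(1200); // suppressed: change below threshold
luminance.onReading(2000); // emitted
```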

If we create a separate flow for the context, several readings during user interaction time can be taken into account. A threshold for emitting only significant context changes from the sensors to the architecture is also planned.

If we create a separate context-aware server, we can optimize it for frequent readings and temporarily store all of the data emitted by sensors. It will also serve as a translation point between the sensor data language and the matchmaker decision language (like the semantic layer approach suggested by Andreas Zimmermann et al. in "An Operational Definition of Context"), and it will provide the last sensed information if a sensor is not available at the moment its data is needed.
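
A minimal sketch, assuming a simple in-memory store keyed by sensor ID, of the "last sensed value" behaviour such a server could provide (all identifiers are hypothetical):

```typescript
// Hypothetical in-memory cache for the context-aware server: it keeps the
// most recent reading per sensor, so a query can still be answered when the
// sensor itself is temporarily unavailable.
interface Reading {
  value: number;
  timestamp: number; // ms since epoch, so the caller can judge staleness
}

class ContextCache {
  private readings = new Map<string, Reading>();

  // Called whenever a sensor pushes a (significant) change.
  store(sensorId: string, value: number): void {
    this.readings.set(sensorId, { value, timestamp: Date.now() });
  }

  // Returns the last sensed value, even if the sensor is offline right now.
  lastSensed(sensorId: string): Reading | undefined {
    return this.readings.get(sensorId);
  }
}

const cache = new ContextCache();
cache.store("office-light-1", 850);
console.log(cache.lastSensed("office-light-1")); // { value: 850, timestamp: ... }
```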

Taking advantage of the work being tackled by the AccessForAll group on Context Definition, we can follow a bottom-up approach and leave the top-down approach to AccessForAll. We can proceed in incremental steps:

  1. plan which part of the context to inspect,
  2. create the rules,
  3. check with the needs and preferences group that everything they want to express is supported, and
  4. test with a real device and check whether it sends data to the server.

Go back to step 1 until all of the desired context is covered. Going bottom-up carries the risk of being tied to a "device-oriented" approach, so more abstraction is needed to avoid that (the context-aware server would help by adding the abovementioned semantic layer). A condition in the ontology allows categorizing conditions as cases for defining the applicability of specific user needs and preferences: environmental (day/night, light, etc.), location (where I am), temporal aspects (time of the day), activity (has to do with user conditions), user conditions (behavioral conditions, etc.) and functional conditions (whether I use glasses, etc.). The ontology has to cover the widest possible range of cases, or at least facilitate modeling them, even if not all of them are covered in the implementation, e.g. because of the absence of sensors to measure what was modeled.
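
Purely for illustration, the condition categories listed above could be sketched in code as follows; this is a toy model under our own naming assumptions, not the actual User Profile Ontology:

```typescript
// Toy model of the condition categories named above; the real definitions
// live in the User Profile Ontology, not in code.
type ConditionCategory =
  | "environmental" // day/night, light, etc.
  | "location"      // where I am
  | "temporal"      // time of the day
  | "activity"      // has to do with user conditions
  | "user"          // behavioral conditions, etc.
  | "functional";   // whether I use glasses, etc.

// A condition gates the applicability of a set of user needs and preferences.
interface Condition {
  category: ConditionCategory;
  property: string;                         // e.g. "luminance" (hypothetical)
  appliesWhen: (value: unknown) => boolean; // guard evaluated against sensed data
}
```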

Currently, work from Dr. Dey and Zimmermann has been studied, and some other initiatives, such as Winograd's, are known. But a deeper search for mature frameworks is needed in order to save time, especially for communicating information from sensors to a semantic framework, though our duty is not only to send data but also rules. Creating a framework from scratch could require a huge amount of effort, since the frameworks reviewed so far are the output of Ph.D. theses.

Ideas extracted from Dey's work (Application, Aggregator, Widget and Sensor) and Zimmermann's work (Sensor, Semantic, Control and Actuator layers; context divided into five categories: Individuality, Time, Location, Activity and Relations).


A103.1 agreements

Open questions and the answers agreed upon:

  • Should we use the same rule description language in all WP103 activities?
    • All of the activities of WP103 will look for a common rules description language, ideally for an easy parsing by the Matchmaker.
  • Should we create a context-aware server?
    • Needed to check with the Architecture team.
  • Should we employ thresholds as a decision mechanism to send information?
    • Yes.
  • Should we track the user's last connection in order to decide which sensors receive or generate context information?
    • Not necessarily the user’s last connection, but a mechanism to decide that will be incorporated.


A103.1 next steps

Preliminary Workplan

  • Key path variants to the use cases. Taking the use cases developed within Cloud4All as a starting point, we analyze how context can cause deviations from the "key path" or "best case scenario" workflow.
  • Study of sensors already existing in the market. Which sensors can be used to capture context information, whether embedded in the devices or stand-alone.
  • Choose a rule language to express the conditions extracted from the key path variants (WP103).
  • Choose a methodology to build rules (WP103).
  • Set up a virtual machine and/or an intermediate server able to monitor any changes in the status of the sensors and to generate information for the matchmaker (in JSON; see the sketch after this list).
  • Develop iterative proofs-of-concept. Try out some of the sensors.
  • Re-conceptualization of the concept taxonomy.
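
As a hedged sketch of the JSON such an intermediate server might send to the matchmaker, the snippet below builds a hypothetical payload; every field name is an assumption, since the actual format is yet to be agreed:

```typescript
// Hypothetical shape of a context update sent to the matchmaker; all field
// names are illustrative assumptions, not an agreed Cloud4All format.
interface ContextUpdate {
  sensorId: string;
  category: string;  // e.g. "environmental"
  property: string;  // e.g. "luminance"
  value: number;
  unit: string;      // e.g. "lux"
  timestamp: string; // ISO 8601
}

const update: ContextUpdate = {
  sensorId: "office-light-1",
  category: "environmental",
  property: "luminance",
  value: 8000,
  unit: "lux",
  timestamp: new Date().toISOString(),
};

console.log(JSON.stringify(update, null, 2)); // wire format for the matchmaker
```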