Context and Cloud4All
Note: This page will contain all of the outputs of WP103 from Cloud4All, but please feel free to include as much content as you wish
- Context / Cloud4all WP103 meetings.
- F2F Madrid
- SAB Madrid
- Context Properties part of User Preferences Context task force.
- Context Profile (part of "Scenario with Conditional Items in a User Profile").
- Currently cited as reqB.1 of Functional Requirements.
- Context/D103.1 - Materials and methods.
TBD. In the meantime, please see D103.1 for requirements and specification and D103.2 for design.
In collaboration with WP302.2 https://github.com/JChaconTechnosite/Cloud4ALL-Enviromental-reporter
Context Aware Server
TBD. In the meantime, please see D103.1 for an ECA version of the rules and D103.2 for a JENA version of them.
- A103.1: Context-related profile modification rules will be done by Technosite.
- A103.2: Application and platform identification rules will be done by CERTH.
- A103.3: Service continuity and profile prioritisation rules will be done by Barcelona Digital.
- A103.4: Context-related profile building will be done by CERTH.
From the DoW, the starting point is the User Profile Ontology. Context is highly dynamic; e.g., if a beam of light hits the screen, the glare may make it impossible to maintain visual interaction, so an auditory user interface is needed. The current architecture is planned to adapt to the user and his/her static environment, but dynamic changes are not addressed at the moment. A103.1 can join efforts with other activities in WP103.
Going back to context, we are talking about a very dynamic environment, as the mere movement of the user can provoke a major change in his/her conditions (glare is not a problem if the user can move the screen to a shaded place). However, inspecting the context too often is costly in terms of computational capability, network overhead and even battery power (many sensors, e.g. Berkeley motes, run on batteries instead of being plugged into AC power). Also, many sensors will be in places where no users are interacting, so we should avoid losing time and power inspecting context that is of no interest to any user. At a minimum, a notion of a threshold should be added, so that only context changes that would provoke a UI change are emitted, saving network time and power.
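As a rough sketch of the threshold idea, a filter on the sensing side could suppress readings that would not change the UI. The property names and threshold values below are illustrative assumptions, not part of any Cloud4all specification:

```python
# Sketch of a threshold filter: a sensor reading is forwarded only when it
# differs from the last *emitted* value by more than a per-property threshold,
# so small fluctuations never cost network time or battery power.
# Property names and threshold values are illustrative assumptions.

THRESHOLDS = {"illuminance_lux": 200.0, "noise_db": 5.0}

class ThresholdFilter:
    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.last_emitted = {}  # property -> last value actually sent

    def should_emit(self, prop, value):
        threshold = self.thresholds.get(prop, 0.0)
        last = self.last_emitted.get(prop)
        if last is None or abs(value - last) > threshold:
            self.last_emitted[prop] = value
            return True
        return False

f = ThresholdFilter(THRESHOLDS)
print(f.should_emit("illuminance_lux", 500))  # first reading -> True
print(f.should_emit("illuminance_lux", 600))  # change of 100 <= 200 -> False
print(f.should_emit("illuminance_lux", 900))  # change of 400 > 200 -> True
```

Note that the comparison is against the last value that was actually sent, not the last value sensed; otherwise a slow drift could cross a threshold without ever being reported.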
If we create a separate context-aware server, we can optimize it for frequent readings and temporarily store all of the data emitted by sensors. It will also serve as a translation point between the sensor data language and the matchmaker decision language (like the semantic-layer approach suggested by Andreas Zimmermann et al. in "An Operational Definition of Context"), and it will provide the last sensed information if the sensor is not available at the moment its data is needed.
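The "last sensed information" role of such a server could be sketched as a timestamped cache; the property names and the idea of returning the reading's age are assumptions for illustration:

```python
import time

# Sketch of the "last sensed value" cache a context-aware server could keep:
# every sensor report is stored with a timestamp, and a query returns the most
# recent reading even when the sensor is currently unreachable.
# Property names and the returned (value, age) shape are assumptions.

class ContextCache:
    def __init__(self):
        self._store = {}  # property -> (value, timestamp)

    def update(self, prop, value):
        """Record a fresh sensor reading."""
        self._store[prop] = (value, time.time())

    def last_known(self, prop):
        """Return (value, age_in_seconds), or None if never sensed."""
        entry = self._store.get(prop)
        if entry is None:
            return None
        value, ts = entry
        return value, time.time() - ts

cache = ContextCache()
cache.update("illuminance_lux", 750)
value, age = cache.last_known("illuminance_lux")
print(value)  # 750, served even if the sensor goes offline afterwards
```

A real server would also let consumers decide how stale a reading they are willing to accept, which is why the age is returned alongside the value.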
Taking advantage of the work being tackled by the AccessForAll group on Context Definition, we can follow a bottom-up approach and leave the top-down one to AccessForAll. We can proceed in incremental steps:
- plan which part of the context to inspect,
- create the rules,
- check with the needs and preferences group that all the things they want to say are supported and
- test with a real device and check if it sends data to the server.
Go back to the first step until all of the desired context is covered. Going bottom-up carries the risk of being tied to a "device-oriented" approach, so more abstraction is needed to avoid that (the context-aware server would help by adding the above-mentioned semantic layer).

Conditions in the ontology categorize the cases that define the applicability of specific user needs and preferences: environmental (day/night, light, etc.), location (where the user is), temporal aspects (time of the day), the activity (related to user conditions), user conditions (behavioral conditions, etc.) and functional conditions (whether the user wears glasses, etc.). The ontology has to cover the widest possible range of cases, or at least facilitate modeling them, even if not all are covered in the implementation -- e.g. because of the absence of sensors to measure what was modeled.
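The condition categories listed above could be modeled, very roughly, as tagged conditions guarding a preference. The category names follow the list above; the `Condition` class itself is a hypothetical sketch, not the ontology:

```python
from dataclasses import dataclass
from enum import Enum

# The six condition categories named in the text, as an enumeration.
class ConditionCategory(Enum):
    ENVIRONMENTAL = "environmental"  # day/night, light, etc.
    LOCATION = "location"            # where the user is
    TEMPORAL = "temporal"            # time of the day
    ACTIVITY = "activity"            # related to user conditions
    USER = "user"                    # behavioral conditions, etc.
    FUNCTIONAL = "functional"        # e.g. whether the user wears glasses

# Hypothetical sketch of a condition that defines when a specific
# user need or preference applies.
@dataclass
class Condition:
    category: ConditionCategory
    prop: str       # e.g. "illuminance"
    operator: str   # e.g. ">", "<", "=="
    value: object

# Example: an auditory-UI preference that applies only under strong glare.
glare = Condition(ConditionCategory.ENVIRONMENTAL, "illuminance", ">", 10000)
```

Keeping the category explicit on each condition is one way to let the model stay wider than the implementation: a category can exist in the ontology even when no sensor currently feeds it.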
Currently, work from Dr. Dey and Zimmermann has been studied, and some other initiatives, such as Winograd's, are known. But a deeper search for mature frameworks is needed in order to save time, especially for communicating information from sensors to a semantic framework, though our duty is not only about sending data but also rules. Creating a framework from scratch could require a huge amount of effort, since the frameworks examined so far are the outputs of Ph.D. theses.
- Should we use the same rule description language in all WP103 activities?
  - All of the activities of WP103 will look for a common rule description language, ideally one that the Matchmaker can parse easily.
- Should we create a context-aware server?
  - This needs to be checked with the Architecture team.
- Should we employ thresholds as a decision mechanism to send information?
- Should we track the user's last connection in order to decide which ones receive or generate context information?
  - Not necessarily the user's last connection, but a mechanism to decide this will be incorporated.
A103.1 next steps
- Key path variants to the use cases. Taking the use cases developed within Cloud4all as a starting point, we analyze how context can cause deviations from the "key path" or "best case scenario" workflow.
- Study of already existing (in the market) sensors. Which sensors can be used to capture context information, whether embedded in the devices or stand-alone.
- Choose a rule language to express the conditions extracted from the key path variants (WP103).
- Choose a methodology to build rules (WP103).
- Set up a virtual machine and/or an intermediate server able to monitor any changes in the status of the sensors and to generate information for the matchmaker (in JSON).
- Develop iterative proofs-of-concept. Try some of the sensors.
- Re-conceptualization of the concept taxonomy.
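The intermediate-server step above mentions generating information for the matchmaker in JSON. A minimal sketch of such a report follows; the payload field names are assumptions, since no message format is specified here:

```python
import json
import time

# Hypothetical sketch of the message an intermediate server might send to the
# matchmaker when a monitored sensor value changes. The field names
# ("source", "property", etc.) are assumptions; no payload format is
# defined in this page.
def build_context_report(sensor_id, prop, value, unit):
    return json.dumps({
        "source": sensor_id,      # which sensor produced the reading
        "property": prop,         # the context property being reported
        "value": value,
        "unit": unit,
        "timestamp": time.time(), # when the reading was taken
    })

report = build_context_report("desk-light-01", "illuminance", 750, "lux")
print(report)
```

Whatever the final format, carrying the unit and timestamp explicitly spares the matchmaker from guessing how fresh or comparable two readings are.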