Cloud4all WP103 A103.1 2012-07-17


Back to meetings list

Time and date

2012-07-17, 15:30 CEST


Important Information for the Meeting

This is a meeting of WP103 within the European project Cloud4all. However, the minutes of the meeting will be made available to the public to allow people outside Cloud4all to follow our thoughts. If, during the meeting, you want to share confidential information that should not appear in the public minutes, you need to say so in the meeting. This is in line with the Cloud4all Consortium Agreement, section 10.

Please connect to the audio using GoToMeeting:

1. To talk for free over the Web, click https://global.gotomeeting.com/join/638565741. Be sure to set AUDIO to Mic and Speaker (VoIP); a headset is recommended.

2. Or, call in using your telephone.

Meeting ID: 638-565-741

  • Australia: +61 3 9008 7854
  • Austria: +43 (0) 7 2088 1036
  • Belgium: +32 (0) 28 08 4345
  • Canada: +1 (647) 497-9372
  • France: +33 (0) 182 880 162
  • Germany: +49 (0) 811 8899 6928
  • Ireland: +353 (0) 19 030 053
  • Italy: +39 0 693 38 75 53
  • Netherlands: +31 (0) 208 080 212
  • Spain: +34 931 81 6713
  • Switzerland: +41 (0) 225 3311 20
  • United Kingdom: +44 (0) 207 151 1801
  • United States: +1 (805) 309-0027

Access Code: 638-565-741

Participants

  • Andrés Iglesias from Technosite
  • Ignacio Peinado from Technosite
  • Vassilis Koutkias from CERTH/IBBR
  • Andy Heath from Axelrod
  • Jim Tobias (JT)
  • Maria Gemou from CERTH/HIT


Detection of Related Work


WP103 workflow

[Figure: WP103 workflow] A103.1 and A103.2 are the inputs of A103.3, which creates the input for A103.4, which in turn creates the output needed for the Needs & Preferences Server.

  • A103.1: Context-related profile modification rules will be done by Technosite.
  • A103.2: Application and platform identification rules will be done by CERTH.
  • A103.3: Service continuity and profile prioritisation rules will be done by Barcelona Digital.
  • A103.4: Context-related profile building will be done by CERTH.


A103.1

From the DoW, the starting point is the User Profile Ontology. Context is highly dynamic; for example, if a beam of light hits the screen, the glare may make it impossible to maintain visual interaction, so an auditory user interface is needed. The current architecture is designed to adapt to the user and his/her static environment, but dynamic changes are not addressed at the moment. A103.1 can join efforts with other activities in WP103.
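
To make the rule idea concrete, here is a minimal sketch in TypeScript of how a context-related profile modification rule for the glare scenario might be expressed; every identifier and preference key below is invented for illustration and is not a term from the actual ontology.

  // A hypothetical context reading as delivered by a light sensor.
  interface ContextReading {
    sensor: string;   // e.g. "ambient-light"
    value: number;    // e.g. illuminance in lux
  }

  // A rule that modifies the user's preference set when its condition holds.
  interface ModificationRule {
    description: string;
    applies: (reading: ContextReading) => boolean;
    modifications: Record<string, unknown>;  // illustrative preference keys
  }

  const glareRule: ModificationRule = {
    description: "Switch to an auditory UI when glare defeats the screen",
    applies: (r) => r.sensor === "ambient-light" && r.value > 10000,
    modifications: { "ui-mode": "auditory", "self-voicing": true },
  };

  // Apply all matching rules to a copy of the user's preferences.
  function applyRules(prefs: Record<string, unknown>,
                      rules: ModificationRule[],
                      reading: ContextReading): Record<string, unknown> {
    const adapted = { ...prefs };
    for (const rule of rules) {
      if (rule.applies(reading)) {
        Object.assign(adapted, rule.modifications);
      }
    }
    return adapted;
  }

  // A bright reading triggers the glare rule.
  console.log(applyRules({ "font-size": 18 }, [glareRule],
                         { sensor: "ambient-light", value: 25000 }));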

[Figure] At present, the Flow Manager inspects the Needs & Preferences Server just once, when a user interaction starts; context, however, changes often. On the other hand, if every sensor connected to Cloud4all sent its information to the Flow Manager, we would experience an overload of information coming from sensors with no users in the same place as the sensor.

Going back to context, we are talking about a very dynamic environment, as the mere movement of the user can provoke a major change in his/her conditions (glare is not a problem if the user can move the screen to a shaded place). But inspecting the context too often is costly in terms of computation, network overhead and even battery power (many sensors, e.g. Berkeley motes, run on batteries instead of being plugged into AC power). Also, many sensors will be in places where no users are interacting, so we should avoid wasting time and power inspecting context that is not interesting for any user. At a minimum, a notion of threshold should be added, so that a sensor emits only context changes that would provoke a UI change, saving network time and power.
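
A minimal sketch of that threshold mechanism, assuming a simple numeric delta; the class name, the delta value and the callback are all hypothetical:

  // Wraps a sensor so that a reading is forwarded to the architecture
  // only when it differs from the last forwarded one by at least a
  // configurable threshold, saving network traffic and battery power.
  class ThresholdEmitter {
    private lastEmitted: number | undefined;

    constructor(private threshold: number,
                private emit: (value: number) => void) {}

    // Called on every raw reading; suppresses insignificant changes.
    onReading(value: number): void {
      if (this.lastEmitted === undefined ||
          Math.abs(value - this.lastEmitted) >= this.threshold) {
        this.lastEmitted = value;
        this.emit(value);
      }
    }
  }

  // Only light changes of at least 500 lux reach the architecture.
  const lightSensor = new ThresholdEmitter(500,
      (v) => console.log(`significant change: ${v} lux`));
  [300, 320, 900, 950, 1600].forEach((v) => lightSensor.onReading(v));
  // Emits 300, 900 and 1600; the small fluctuations are dropped.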

[Figure] If we create a separate flow for the context, several readings can be taken into account during user interaction time. A threshold for emitting only significant context changes from the sensors to the architecture is also planned.

If we create a separate context-aware server, we can optimize it for frequent readings and temporarily store all of the data emitted by sensors. It will also serve as a parsing point between the sensor data language and the matchmaker decision language (like the semantic layer approach suggested by Andreas Zimmermann et al. in "An Operational Definition of Context"), and it will provide the last sensed information if the sensor is not available at the moment its data is needed.
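
The following sketch illustrates the two roles proposed for such a server: caching the last reading of each sensor so it can still be served when the sensor is unreachable, and translating raw values into coarser semantic terms for the matchmaker. The names, the lux thresholds and the vocabulary are assumptions made for illustration only.

  type RawReading = { sensorId: string; value: number; timestamp: number };

  class ContextAwareServer {
    // Last known reading per sensor, refreshed at a high frequency.
    private cache = new Map<string, RawReading>();

    // Sensors push readings here at their own (frequent) rate.
    store(reading: RawReading): void {
      this.cache.set(reading.sensorId, reading);
    }

    // Stale data is better than none when the sensor is unavailable.
    lastKnown(sensorId: string): RawReading | undefined {
      return this.cache.get(sensorId);
    }

    // Illustrative "semantic layer": map a lux value to a coarse term
    // that a matchmaker rule could consume.
    describeLight(sensorId: string): "dark" | "indoor" | "glare" | "unknown" {
      const r = this.cache.get(sensorId);
      if (!r) return "unknown";
      if (r.value < 50) return "dark";
      if (r.value < 10000) return "indoor";
      return "glare";
    }
  }

  const server = new ContextAwareServer();
  server.store({ sensorId: "desk-light", value: 25000, timestamp: Date.now() });
  console.log(server.describeLight("desk-light"));  // "glare"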

Taking advantage of the work being tackled by the AccessForAll group on context definition, we can follow a bottom-up approach and leave the top-down approach to AccessForAll. We can proceed in incremental steps:

  1. plan which part of the context to inspect,
  2. create the rules,
  3. check with the needs and preferences group that all the things they want to say are supported and
  4. test with a real device and check if it sends data to the server.

Repeat from step 1 until all of the desired context is covered. Going bottom-up carries the risk of being tied to a "device-oriented" approach, so more abstraction is needed to avoid that (the context-aware server would help by adding the abovementioned semantic layer). A condition in the ontology allows categorizing the cases that define the applicability of specific user needs and preferences: environmental (day/night, light, etc.), location (where I am), temporal aspects (time of day), activity (which has to do with user conditions), user conditions (behavioral conditions, etc.), and functional conditions (whether I use glasses, etc.). The ontology has to cover the widest possible range of cases, or at least facilitate modeling them, even if not all of them are covered in the implementation, e.g. because no sensors exist to measure what was modeled.
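
To illustrate how these categories could guard the applicability of specific preferences, here is a sketch in which the category names follow the list above, while the fields inside each category and the preference keys are invented:

  // The six condition categories named above, modelled as a tagged union.
  type Condition =
    | { kind: "environmental"; lightLevel?: number; dayTime?: boolean }
    | { kind: "location"; place: string }
    | { kind: "temporal"; timeOfDay: "morning" | "afternoon" | "evening" | "night" }
    | { kind: "activity"; name: string }
    | { kind: "user"; behavioral: string }
    | { kind: "functional"; usesGlasses?: boolean; usesHearingAid?: boolean };

  // A preference becomes applicable only when its conditions hold.
  interface ConditionalPreference {
    preference: Record<string, unknown>;
    conditions: Condition[];
  }

  const nightHighContrast: ConditionalPreference = {
    preference: { "high-contrast": true },
    conditions: [{ kind: "temporal", timeOfDay: "night" }],
  };
  console.log(JSON.stringify(nightHighContrast, null, 2));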

So far, work by Dr. Dey and by Zimmermann has been studied, and some other initiatives, such as Winograd's, are known. But a deeper search for mature frameworks is needed in order to save time, especially for communicating information from sensors to a semantic framework, though our duty is not only about sending data but also rules. Creating a framework from scratch could require a huge amount of effort, since the frameworks reviewed so far are each the output of a Ph.D. thesis.

[Figure] Ideas extracted from Dey's work (Application, Aggregator, Widget and Sensor) and from Zimmermann's work (Sensor, Semantic, Control and Actuator layers; context divided into five categories: Individuality, Time, Location, Activity and Relations).


A103.1 agreements

[Figure] Open questions; the answers are recorded below.

  • Should we use the same rule description language in all WP103 activities?
    • All of the activities of WP103 will look for a common rule description language, ideally one that is easy for the Matchmaker to parse.
  • Should we create a context-aware server?
    • This needs to be checked with the Architecture team.
  • Should we employ thresholds as a decision mechanism for sending information?
    • Yes.
  • Should we track the user's last connection in order to decide which sensors receive or generate context information?
    • Not necessarily the user's last connection, but a mechanism to decide this will be incorporated.


A103.1 next steps

These tasks were planned *after* the meeting, but it makes sense to record them here; they will probably change in the future.


Preliminary Workplan

  • Key path variants to the use cases. Taking the use cases developed within Cloud4all as a starting point, we will analyze how context can cause deviations from the "key path" or "best case scenario" workflow.
  • Choose a methodology to build rules (WP103)
  • Study of sensors already existing in the market: which sensors can be used to capture context information, whether embedded in devices or stand-alone.
  • Choose a rule language to express the conditions extracted from the key path variants (WP103).
  • Set up a virtual machine and/or an intermediate server able to monitor any changes in the status of the sensors and to generate information for the matchmaker in JSON (a sketch of such a message follows this list).
  • Develop iterative proofs-of-concept. Try some of the sensors.
  • Re-conceptualization of concept taxonomy
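
Since the exact JSON format was still open at the time of writing, the following is only a guess at what a message from the intermediate server to the matchmaker could look like; every field name is invented:

  // A hypothetical context update as the intermediate server might
  // send it to the matchmaker; no field here is an agreed format.
  const contextUpdate = {
    source: "context-aware-server",
    timestamp: "2012-07-17T15:30:00+02:00",
    sensor: { id: "room-42-light", type: "ambient-light" },
    reading: { value: 25000, unit: "lux" },
    semantic: { lightCondition: "glare" },  // output of the semantic layer
  };

  console.log(JSON.stringify(contextUpdate, null, 2));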