Cloud4all WP103 2015-02-19

From wiki.gpii


Time and date

Important Information for the Meeting

This is a meeting of WP103 within the European Project Cloud4All. However, the minutes of the meeting will be made available to the public to allow people outside Cloud4All to follow our thoughts. If, in the meeting, you want to share confidential information that should not appear in the public minutes, you need to state so in the meeting. This is in line with the Cloud4All Consortium Agreement, section 10.

Please connect to the audio using gotomeeting:

1. Please join my meeting. https://global.gotomeeting.com/join/619028605

2. Use your microphone and speakers (VoIP) - a headset is recommended. Or, call in using your telephone.

  • United States: +1 (213) 493-0618
  • Australia: +61 2 8355 1036
  • Austria: +43 (0) 7 2088 1036
  • Belgium: +32 (0) 28 08 4345
  • Canada: +1 (647) 497-9372
  • Denmark: +45 (0) 69 91 89 24
  • Finland: +358 (0) 942 41 5788
  • France: +33 (0) 170 950 586
  • Germany: +49 (0) 811 8899 6928
  • Ireland: +353 (0) 19 030 053
  • Italy: +39 0 693 38 75 53
  • Netherlands: +31 (0) 208 080 212
  • New Zealand: +64 (0) 4 974 7243
  • Norway: +47 21 04 29 76
  • Spain: +34 931 81 6713
  • Sweden: +46 (0) 852 500 182
  • Switzerland: +41 (0) 225 3311 20
  • United Kingdom: +44 20 3657 6777

Access Code: 619-028-605 Audio PIN: Shown after joining the meeting

Meeting ID: 619-028-605

Participants

Andres


Agenda

http://piratepad.net/c4a-context


  1. Read previous meeting minutes and make corrections if needed
  2. Topics
    • <add your topic here>

Minutes


The meeting could not take place due to several health and other issues.

Notes for a future meeting:

A good starting point for task decisions would be my last slide from the review:

  • Core B conference – We have to talk about writing papers. Core B seemed too low to Prof. Calvary, so I am trying to reach the deadline for a Core A conference. Sadly, it is the "short papers" deadline, so we only have 8 pages. Maybe we could split the results into two papers, one with the MMM and the other with the CAS. Or write the MMM paper for this deadline and start working on a large comprehensive paper including everything (definition, thresholds, MMM, CAS, rules in the RB-MM, …), or … (needs a discussion)
  • Always-evolving code – I need your guidance in moving the code to the master branch in GPII, and we have to deal with a number of issues, namely:
    • The XML settings handler only works if at least two settings of the same type are set
    • The acoustic noise sensor makes the system crash
    • Reports from the environment arriving too fast stack up changes before they are applied, making the system crash
    • Maybe the new temporal reporter can help, which brings us to:
    • Extensive use of temporal conditions (once at 9 pm, every day at 9 pm, only on weekends at 9 pm, …)
    • Whatever happened to the mote during the review
    • GPS and other location mechanisms
    • Complex handshaking between the user's smartphone, the CAS and eventually the target machine (e.g. the ATM)
    • Unit testing (yeah, I'm such a n00b…)
    • Acceptance tests
    • The PCP showing the messages from the MMM
  • Cooperation between apps and the operating system – E.g. who takes care of the font size, Android or Smart Twitter (or both)? Also thinking of the desktop
  • Provide a how-to for external developers coding systems other than Z-Wave – Having a how-to for Z-Wave would be an excellent starting point
  • On demand: more motes, more sensors, more rules – Remember our first Stuttgart meeting? (October 2012)
    • Classify the numeric value into a qualified category (rules, finally!)
    • Decide a UI action based on the brightness category (more rules!)
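The last two items can be sketched as a two-layer rule set: one layer qualifies a raw reading into a category, another maps the category to a UI action. This is a minimal illustrative sketch only; the function names, thresholds, and actions are invented for this example and do not come from the Cloud4All codebase.

```python
# Hypothetical sketch: classify a numeric luminosity value into a qualified
# category, then decide a UI action from that category. All names and
# thresholds are illustrative assumptions, not Cloud4All code.

def classify_brightness(lux):
    """First rule layer: map a raw luminosity reading (lux) to a category."""
    if lux < 50:
        return "dark"
    elif lux < 500:
        return "normal"
    else:
        return "bright"

# Second rule layer: decide UI settings based on the brightness category.
UI_RULES = {
    "dark":   {"high_contrast": True,  "screen_brightness": 0.4},
    "normal": {"high_contrast": False, "screen_brightness": 0.7},
    "bright": {"high_contrast": True,  "screen_brightness": 1.0},
}

def ui_action_for(lux):
    """Combine both layers: raw reading in, UI settings out."""
    return UI_RULES[classify_brightness(lux)]

print(ui_action_for(30))  # a dark room selects high contrast, low backlight
```

Keeping the two layers separate means new sensors only need to feed the classifier, while the UI rules stay unchanged.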
Steps to full context:

  • 1st phase: sensors attached to a mobile device
    • Noise
    • Luminosity
    • Proximity...
    • Some proxy sensors:
      – Time of the day
      – Day of the week
      – Location
  • 2nd phase: human as a sensor
    • Some information can be provided directly by the user.
    • Inference: status on one of the messaging services (WhatsApp, Google Talk)
    • Direct asking: an ad-hoc app; if done, it must be easier for the user to employ the app than to change N&P manually (must be studied)
  • 3rd phase: sensors separate from the mobile device
    • WSN
    • Some trust model starts to be needed:
      – Several sensors stating different information about the same topic.
      – Luminosity: two motes in the room, plus a mobile device carried by the user (by the way, in their pocket or in their hand?) [ref Dey]
  • 4th phase: full inference of user state
    • Emotional context, the task the user is engaged in...
    • Commitment (pig style) to the 1st phase
    • Involvement with the 2nd phase
    • On our way to achieving the 3rd phase
    • We can study the 4th phase, to happen after Cloud4All
  • Logic to move logic between the RB-MM and the MMM transparently – The starting point has to be the MMM's "asking for help" scenario
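The 3rd-phase trust problem (several sensors reporting conflicting values for the same topic) could be approached, in its simplest form, as a trust-weighted fusion of readings. The following is a hypothetical sketch under assumed trust weights, not the trust model the project would actually adopt.

```python
# Hypothetical sketch of the simplest possible trust model: each luminosity
# reading carries a trust weight in [0, 1] (e.g. a fixed mote is trusted more
# than a phone that may be sitting in the user's pocket). Weights and values
# below are invented for illustration.

def fuse_readings(readings):
    """Trust-weighted average of (value, trust) pairs; None if no trust."""
    total_trust = sum(trust for _, trust in readings)
    if total_trust == 0:
        return None  # no trustworthy data available
    return sum(value * trust for value, trust in readings) / total_trust

readings = [
    (420.0, 0.9),  # mote 1, fixed in the room
    (450.0, 0.9),  # mote 2, fixed in the room
    (5.0,   0.2),  # phone sensor, possibly in the user's pocket
]
print(fuse_readings(readings))  # pocketed phone barely shifts the estimate
```

A real model would likely also adjust trust dynamically (e.g. lowering the phone's weight when its reading diverges strongly from the fixed motes, along the lines of the Dey reference above).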