Cloud4All Testing


Revision as of 15:06, 24 February 2014

Cloud4all follows a user-centred design (UCD) methodology, so prospective users are involved in all stages of the conceptualization, design and evaluation of the products developed under it. The organization and management of the testing falls under sub-project SP4 (Pilot testing), which focuses on testing the ideas, concepts and methods identified or developed by the project in real-world settings. This means they must work across platforms and with a sufficient variety of solutions and contexts to test the robustness of the approaches. Much more extensive testing will be needed than can be done through this IP alone. However, within the project there are three iterative testing phases scheduled, based on predefined UCD methodologies and UI style guides, that will accompany project prototypes and tools from proof of concept (1st iterative phase), to Lo/Me-Fi prototype assessment of SP3 applications and SP2 tools and algorithms (2nd iterative phase), and finally to Hi-Fi prototype assessment in the 3rd iteration phase, before the final demonstration events of A403.3. All types of users will participate in all three phases, including beneficiaries of various groups (i.e. older people and people with disabilities), developers and key stakeholders.

The activities in SP4 are centred on overall project guidance, guidance for user testing, and prototype development in relation to user testing and involvement. SP4 will prepare documentation for a style guide, testing plan, pilot evaluation, user testing and prototype development.

Links to relevant pages

Pilot scenarios:

PCP and PMT:

SP3 applications and their settings:

Three planned iterations

The three planned iterations are detailed below. The pilot sites are Greece (CERTH), Spain (FONCE) and Germany (SDC); the user numbers listed apply to each pilot site.

1st Iteration

Prototypes to be tested:
  • WP102: Interfaces (ID 102.1, ID 102.2, ID 102.3). Input to MS4.
  • WP104: Security gateway mockup (ID.104.3). Input to MS7.
  • Early version of Cloud semantic infrastructure (D.201.1). Input to MS10.
  • Semantic alignment mechanism (ID201.2)
  • WP204: Comparative testing of rule-based and statistics-based matchmakers, plus expert-provided adaptations. Input to MS17. Success criterion: all algorithms score at least 50% of the expert-provided solution.
  • First round of SP3 applications. Input to MS12.

Users at each pilot site:
  • 30 beneficiaries of various groups
  • 10 developers
  • 5 key stakeholders (in expert walkthroughs)

2nd Iteration

Prototypes to be tested:
  • WP103: Context related profile tests (D103.2). Input for MS9.
  • Security gateway prototype (D104.3)
  • Business rules inference and trust ontology model (D201.2)
  • Metadata tool boxes and utility (D202.1, D202.2, D202.4). Input for MS13.
  • Repository federation algorithms (D203.1). Input for MS15.
  • WP204: Comparative testing of rule-based, statistics-based and hybrid matchmakers, plus expert-provided adaptations. Input for MS17. Success criterion: all algorithms score at least 65%, and one algorithm at least 75%, of the expert-provided solution.
  • 2nd group of SP3 applications. Input for MS23.

Users at each pilot site:
  • 40 beneficiaries of various groups
  • 10 developers
  • 5 key stakeholders (in expert walkthroughs)

3rd Iteration

Prototypes to be tested:
  • WP204: Comparative testing of rule-based, statistics-based and hybrid matchmakers, plus expert-provided adaptations. Success criterion: all algorithms score at least 75%, and one algorithm at least 80%, of the expert-provided solution.
  • Matching tools of WP205 (D205.1, D205.2).
  • Remaining SP3 applications plus some repeats. Input for MS24.

Users at each pilot site:
  • 50 beneficiaries of various groups
  • 10 developers
  • 5 key stakeholders (in expert walkthroughs)
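The WP204 success criteria compare each matchmaker's output against an expert-provided solution. As an illustration only (the actual scoring metric is defined by the MM teams, and the setting names below are invented), a simple "fraction of expert settings reproduced" score could be sketched like this:

```javascript
// Illustrative scoring of a matchmaker's proposed settings against an
// expert-provided solution: the score is the fraction of expert-chosen
// settings that the matchmaker reproduced exactly. This is a hypothetical
// metric, not the project's agreed WP204 scoring method.
function scoreAgainstExpert(proposed, expert) {
    var keys = Object.keys(expert);
    if (keys.length === 0) { return 1; }
    var matches = keys.filter(function (key) {
        return proposed[key] === expert[key];
    }).length;
    return matches / keys.length;
}

// Example: expert solution with four settings, matchmaker got three right.
var expert = { fontSize: 24, highContrast: true, magnification: 2, volume: 80 };
var proposed = { fontSize: 24, highContrast: true, magnification: 1.5, volume: 80 };
console.log(scoreAgainstExpert(proposed, expert)); // 0.75
```

Under a metric of this shape, 0.75 would meet the 3rd-iteration "at least 75%" threshold but not the "one algorithm at least 80%" target.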

Testing Discussion

What we would like to test at these pilots

  1. A rule-based and a statistical matchmaker that will be able to infer settings between different
  • devices (desktop and mobile)
  • platforms (Windows, Linux, Android, Symbian)
  • applications (SP3 apps)
  2. Multiple Cloud4all-compatible implementations (SP3 apps) in all pilot sites' languages, ready and connected to the architecture.
  3. A functional PMT with all the basic and some extended functions, connected to the architecture and used by the user to create and save his/her initial set of preferences (create an account).
  • A tool that will allow the automatic creation of the user's token
  4. A first Low-Fi PCP ready, working on desktop and mobile, that will be used by the user to fine-tune and save these fine-tuned settings.
  • A way to automatically save the user's optimum settings as these have been fine-tuned using the PCP, or manually through the application settings.
  5. A first Low-Fi functional implementation of the MMs' function to propose a new solution/AT to the users
  6. An updated version of the SAT with federated repositories integrated, ready to be tested.
  7. A first Me-Fi prototype of the context-related profile adaptations.
  8. Mock-up prototypes of the final PMT design, with the sharing-among-peers function.
  9. Mock-up prototypes of the final PCP design and workflow.

What we can test

Auto-configuration scenario

The basic auto-configuration scenario, as well as the detailed scenarios per user group, are available in the pilots' scenarios report.

Questions we have





What we are waiting for, from whom, and by when

How are we going to create the token?


 We will have both USB and NFC tags. The architecture team will discuss this. If there is no solution, the facilitators of the pilots will have to do it manually.

The architecture team has to provide us with guidelines on how to create USB and NFC tags on Windows and Linux, and which s/w & h/w we should have for this purpose.

End of October

Which of the extended functions will be ready to be tested?



PMT team has to inform us which functions will be included in addition to the ones prepared for the 1st pilot iteration.

End of October

Which settings (in common terms) will be used?


 ?? See http://wiki.gpii.net/index.php/Cloud4all_Testing:_Essential_Registry_Terms ??

This link provides all the CT but will all these be supported by the PCP and the PMT?

PMT/PCP team has to provide us with the common terms that will be included in the tools to be tested.

End of October

The architecture will not be able to log the app-specific settings of some platforms (e.g. Android and Java phones), so the facilitators have to do it manually.

MMs and Architecture

We need a template from the MMs team specifying the format in which they want this information delivered to them.

The MMs teams should provide us with a template for manually "capturing" the optimum preferences of users in Platform B (for Android and Java phones). The format of this template should comply with the logs of the other platforms so the data will be easily comparable by the MMs teams.

End of October
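As a sketch of what such a manual-capture template could look like, shaped to resemble a settings log so it can be compared with automatic logs from the other platforms. All field names here are illustrative assumptions, not a format agreed with the MMs teams:

```javascript
// Hypothetical manual-capture template for Platform B (Android / Java
// phones), filled in by the facilitator. Field names are invented for
// illustration; the real template comes from the MMs teams.
var manualCapture = {
    userToken: "pilot-user-042",      // assumed identifier format
    platform: "android",
    capturedBy: "facilitator",        // manual capture, not architecture logging
    timestamp: "2013-11-25T10:30:00Z",
    settings: {                       // expressed in common terms where possible
        "fontSize": 18,
        "highContrast": true
    }
};

console.log(JSON.stringify(manualCapture, null, 2));
```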

We need to define the research questions per tool/scenario.

What do the developers want to get from users at this phase?

Check https://docs.google.com/document/d/1rwNaRYfagOY8IWl0Xa4UNS0K2qQWwSgGgb0OSAkqo10/edit# -page 12 and add your thoughts.








The SP4 people will identify this per tool and task.

End of October

How are we going to tackle security issues?



 We are going to have a SP3 application (Easit4all) integrated with the Security Gateway and the Preferences Server in the cloud (http://preferences.gpii.net/user/)

Here we talk about the second iteration. Easit4all will not be available for testing.

Easit4all is just an adapted client application to test the security gateway.

The security teams will have to provide us with details of their work and how this will be tested at the 2nd pilots.

25 October

For functional and technical information about the security gateway, see: http://wiki.gpii.net/index.php/Security_Gateway_page

What does the architecture team need to know from the MMs teams for developing the logging tool?




The architecture team has to meet with the MMs team to define the format and the data included in the logs.

The user will save the changes made at the PCP by clicking on a button. Nevertheless, the changes will be shown to user as he/she makes them, using the PCP, even if they are not saved.
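This behaviour (live preview of changes, explicit save on a button click) can be sketched as follows; the names are illustrative, not the real PCP API:

```javascript
// Sketch of the PCP behaviour described above: changes are applied (shown)
// immediately as the user makes them, but only persisted when the user
// clicks Save. Names are invented for illustration.
function makePcpState(initialPrefs) {
    var saved = Object.assign({}, initialPrefs); // last persisted preference set
    var live = Object.assign({}, initialPrefs);  // what the user currently sees
    return {
        change: function (key, value) { live[key] = value; }, // shown immediately
        save: function () { saved = Object.assign({}, live); }, // explicit button click
        savedPrefs: function () { return Object.assign({}, saved); },
        livePrefs: function () { return Object.assign({}, live); }
    };
}

var pcp = makePcpState({ fontSize: 12 });
pcp.change("fontSize", 18);
console.log(pcp.livePrefs().fontSize);  // 18 - visible at once
console.log(pcp.savedPrefs().fontSize); // 12 - not saved yet
pcp.save();
console.log(pcp.savedPrefs().fontSize); // 18
```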




For the "proposing a new solution" scenario: when the MMs propose a solution (which is auto-launched), is it auto-configured with the user's settings?

If the user chooses to use another solution, the facilitator will manually close the launched one and open a new one. Will the new one be auto-configured?



How are we going to trigger the MMs?

Are we using a shortcut in Platform B, or will it be done through the PMT?


What does the facilitator have to do to trigger the log & save, when the preferences are fine-tuned manually?





Which authentication methods will be available?


 USB and NFC only

How is the PCP going to be launched?
With an icon on the desktop.
How is the PMT going to be launched?


 We will have an open URL with the web page.
How will the language of each pilot be identified by the PMT/PCP?


Both the PMT and the PCP will automatically identify the language of the system's browser as the preferred language. This means that for the pilots we need to have browsers installed in the native languages. (Is this decision also for the long term?)
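A minimal sketch of the browser-language detection described above. It assumes the pilot languages are Greek, Spanish and German (plus English as fallback); the real PMT/PCP implementation may differ:

```javascript
// Sketch: pick the UI language from the browser's Accept-Language header
// (or navigator.language), falling back to English when no pilot language
// matches. The pilot language list is an assumption based on the pilot sites.
var PILOT_LANGUAGES = ["el", "es", "de", "en"];

function pickUiLanguage(acceptLanguage) {
    // "el-GR,el;q=0.9,en;q=0.8" -> ["el", "el", "en"] (primary subtags)
    var candidates = (acceptLanguage || "").split(",").map(function (part) {
        return part.split(";")[0].trim().toLowerCase().split("-")[0];
    });
    var match = candidates.filter(function (lang) {
        return PILOT_LANGUAGES.indexOf(lang) !== -1;
    })[0];
    return match || "en";
}

console.log(pickUiLanguage("el-GR,el;q=0.9,en;q=0.8")); // "el"
console.log(pickUiLanguage("fr-FR"));                   // "en"
```

This also shows why the pilots need browsers installed in the native languages: the tools read the language preference from the browser, not from the OS.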

Is it possible to have a functional PCP for mobile, tablet and desktop?

If yes, by when?

Architecture and PCP  ?? Depends on whether we are able to implement WebSockets communication - in this case we will have the PCP in a browser for mobile devices ??

When we switch devices we don’t evaluate the MMs. What do we evaluate? The transformer? Is there a point in doing this by the means of development? Is it reasonable to test the transformer? Does anyone need this evaluation data?

MMs and Architecture

 The architecture and the transformation in general

What do we want to capture from the user when we demonstrate the context scenario?


 Ignacio and Andres have discussed this, and it will be added directly to D402.2.2.

Which SP3 will be available for the pilots and in which language?

SP3 developers and Ignacio (template already sent)

 information available here

How are the settings gathered from the PMT applied to the platform? Is this an automatic procedure, or does the user have to key out and then key back in?


 An "apply the settings button" will be developed.

Pilots roadmap

This roadmap has been integrated in the Cloud4all roadmap for test - Iteration II.

From SP4, we need to take the following actions:

  • Talk with architecture team and discuss the plan we have with them. --> Eleni, Ignacio on 9/10/2013
  • Define concrete pilot scenarios for the functional tools (auto-configuration - PMT, PCP, MMs- and the SAT) --> Eleni, by 11/10/2013
  • Ask the SP3 developers to provide us with specific scenarios, comprised of 3-4 tasks for each application --> Eleni, by 18/10/2013
  • Create a plan for the tech validation of the end-to-end system --> Eleni and Ignacio, by 18/10/2013
  • Define the research topics/objectives/hypothesis/questions for each pilot sub-scenario --> Eleni, by 18/10/2013
  • Identify how we will answer each research question --> TECH, UPMM CERTH????, by 25/10/2013
  • Create the tools we will use for the evaluation --> TECH, UPMM CERTH????, by 1/11/2013
  • Define how many users will test what --> TECH, by 1/11/2013

Cloud4all roadmap for test - Iteration II

  • 2-Oct: Ignacio will pass the planning document on Google Doc - with the SP3 apps and their proposed settings to Kostas - Including the common terms.
  • 4-Oct: Kostas, Christophe and Claudia will insert all the info gathered until now into the SAT.
  • 4-Oct: On Friday Gianna will send an email to SP3 devs who have not filled in Ignacio’s doc yet and ask them to fill in the SAT with the assistance of Kostas (within a week - 7/10-11/10) - scheduled appointments. It should be done by 9-Oct
  • 9-Oct. Talk with the architecture team and discuss our plan with them (Eleni, Ignacio)
  • 10-Oct. Kostas adds 2 new fields (string) in the SAT to enter the transformation rules from Pilot Common Terms to Specific Terms and vice versa. Kostas meets Gianna to explain how to introduce the transformation rules. Gianna will pursue SP3 developers to introduce this information in the SAT.
  • 11-Oct. Define concrete pilot scenarios for the functional tools (auto-configuration - PMT, PCP, MMs- and the SAT) --> Eleni
  • 11-Oct. Define the final list of solutions/platforms/components that are part of the test. Eleni, Gianna and Kasper
  • 14-Oct. SP3 developers to provide SP4 with specific scenarios, comprising 3-4 tasks for each application. SP3 developers (coordinated by Gianna) to Eleni
  • 14-Oct. Decide which Pilot common terms we will use at the 2nd pilots. Kostas, Christophe and Claudia. Dependencies of PMT/PCP design force us to come up with the common terms before 14-Oct. This can be a first iteration to help the PMT team do their job. Publish it on the wiki at Pilots iteration 2. Christophe and Claudia do the matching of the Pilot common terms with the specific terms entered by SP3 developers. Output: list of Pilot Common terms.
  • 14-Oct. Draft of the technical/functional pre-test plan. Kasper, Colin, Ignacio, Christophe
  • 14-Oct. Check of the Pilot Common terms by SP3 and SP4. Output: approval/rejection/suggestion of Pilot Common Terms. Eleni
  • 15-Oct. Final version of the common terms. Christophe, Claudia and Kasper
    • Update 14 October: common terms categorised and presented to Jess Mitchell in SP4 meeting.
  • 15-Oct. PMT/PCP teams get all the information about the pilot scenarios and the common terms and start working in some wireframes for the Pilots PCP – PMT (PCP, PMT, Pilot teams)
    • Update 15 October: default values added to common terms (needed by design team)
  • 15-Oct. Final version of the technical/functional pre-test plan. Kasper, Colin, Ignacio, Christophe
  • 18-Oct. Define the research topics/objectives/hypothesis/questions for each pilot sub-scenario --> Eleni
  • 18-Oct. Architecture-SP3 meeting to help SP3 create the transformers (json files).
  • 21-Oct. All solution developers have added the transformations in pseudocode in the SAT.
  • 21-Oct. The SAT is completely finished. Kostas exports to json files + pseudocode transformations.
  • 22-Oct. PCP/PMT teams to discuss the Pilot-PCP/PMT wireframes and estimate development dates.
  • 22-Oct. Architecture-SP3 meeting to solve the last questions; SP3 produce the final version of the transformers (json files).
  • 24-Oct. Identify how we will answer each research question --> TECH, UPMM CERTH????
  • 24-Oct. Make the transformation of the common terms to app specific terms (Claudia, Christophe, Ignacio, Colin - by end of October)
  • 28-Oct. Coding integration tests (SILO) Example: (https://github.com/GPII/windows/blob/v0.2/tests/AcceptanceTests.js#L475-L511)
  • 28-Oct. Start of technical/functional validation. MMs, PMT/PCP and SP3 solutions must be ready at this moment.
    • According to discussion in the RBMM meeting, the deadline for the MMs was 30 November??
  • 01-Nov. Create the tools we will use for the evaluation --> TECH, UPMM CERTH????
  • 01-Nov. Define how many users will test what --> TECH
  • 19-20-Nov. Plenary board and Consortium workshop
  • 21-Nov. Scientific Advisory Board.
  • 25-Nov. Start of Pre-test with first users and support from all development teams.
  • 13-Dec. End of Pre-tests.
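The transformer files mentioned in the roadmap map Pilot Common Terms to application-specific terms (and back). A hypothetical sketch of such a mapping follows; the term names, target keys and scaling rules are invented for illustration, and the real transformers were being defined in the Architecture-SP3 meetings:

```javascript
// Illustrative common-term -> application-specific-term transformation,
// of the kind the SP3 transformer (json) files describe. All names and
// conversion rules here are invented assumptions.
var transformer = {
    "fontSize": {
        target: "screenreader.textScale",
        toApp: function (points) { return points / 12; } // e.g. 24pt -> 2.0x
    },
    "highContrast": {
        target: "display.contrastTheme",
        toApp: function (on) { return on ? "white-on-black" : "default"; }
    }
};

function applyTransformer(commonSettings, transformer) {
    var appSettings = {};
    Object.keys(commonSettings).forEach(function (term) {
        var rule = transformer[term];
        if (rule) {
            appSettings[rule.target] = rule.toApp(commonSettings[term]);
        }
    });
    return appSettings;
}

console.log(applyTransformer({ fontSize: 24, highContrast: true }, transformer));
// { 'screenreader.textScale': 2, 'display.contrastTheme': 'white-on-black' }
```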

What MMs and PCP/PMT will (not) do in Iteration II

  • The RB-MM will not work for Android, it will only be deployed locally.
  • The RB-MM is available in desktop environments, and it enhances the transformation terms of constraints, etc.
  • The ST-MM will run wherever the real-time framework can run -> it should run on Android
  • The ST-MM will work on Symbian -> Eleni to ask Kostas
  • The hybrid MM development starts in M23
  • For the main test scenario, we will not use any recommendations -> only pre-defined conditions.
  • PCP will show you the current value of your preference set
  • The PMT will only save settings that the user has actually adjusted
  • The goal of the Settings Snapshotter for this pilot would be to provide a kind of "logging" or "research facility," unconnected to any user interfaces. The user will tweak their settings in the UI of all their native applications (e.g. NVDA's settings dialog, Windows control panels, etc) and then the pilot facilitator will take a "snapshot" (by running a script or something), which will cause these settings to be saved to a log.
  • Next week, after we get the common terms and the pilot scenarios, the design team will work on wireframes of the PMT and PCP (Oct 18). The PCP/PMT teams will estimate how much time they will need to implement these prototypes.
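The Settings Snapshotter idea above amounts to diffing two captures of the native settings. This sketch is illustrative only (the setting names are invented, and the real Snapshotter reads settings from the platform rather than from in-memory objects):

```javascript
// Sketch of the facilitator-triggered "snapshot" logging: compare the
// current native settings against the previous snapshot and log only
// what changed. Setting names are invented for illustration.
function diffSnapshots(previous, current) {
    var changes = {};
    Object.keys(current).forEach(function (key) {
        if (previous[key] !== current[key]) {
            changes[key] = { from: previous[key], to: current[key] };
        }
    });
    return changes;
}

var before = { "nvda.speechRate": 50, "windows.magnifier": false };
var after = { "nvda.speechRate": 70, "windows.magnifier": false };
console.log(diffSnapshots(before, after));
// { 'nvda.speechRate': { from: 50, to: 70 } }
```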

Critical Next Steps

These two steps are blockers for designing the necessary "pilot wireframes" for the PCP & PMT

  • Pilot scenarios need to be clearly defined by 11-Nov
  • All the common terms need to be listed by 14-Nov

Risks, bugs and stuff to panic about

  • Something wrong with font sizes on Windows (Flow Manager seems to freeze)
  • RFID reading doesn't work on Windows
  • Kasper has a three-person workload

Minutes from the SP4 calls

Wednesday 16/10/2013


  • Do the PCP and PMT design teams want to test with users the paper (or clickable) prototypes of the tools they intend to develop in order to gather the users’ needs?
    • If no please say so, so we can exclude it from the evaluation plan
    • If yes, we would need the prototypes (including all the alternatives you have thought for the design of the interface for each functionality) by the end of October.
  • What will the facilitator do to activate the device reporter? (Architecture)
  • Does the facilitator have to do a specific task in Platform B to trigger the MMs? (Architecture)
  • Maavis will NOT be automatically launched when the user keys-in. Is it possible for Maavis to capture and apply the settings of the user from the N&P server without being automatically launched?  Is it possible for the user to use the PCP to fine-tune Maavis and the changes to be applied automatically (before saving)? (Architecture, PCP)


  • PMT team to provide SP4 team with screenshots and the description of the final functionalities of the tool that will be tested during the pilots by end of October. We can use for the moment the mockups for PMT iteration 2 available at fluid wiki (http://wiki.fluidproject.org/display/fluid/%28C4A%29+Preference+Editor+frame+mockups+for+iteration+2) - added to useful gdocs document
  • The PMT and PCP design teams to provide SP4 team with mockups (clickable or paper) with the final advanced PMT (with sharing among peers function)
  • The architecture team needs to create 2 fake device reporter reports to launch Mobile Accessibility or TalkBack in Android when a blind user is keyed in.
  • People tend to get confused when testing an app they have not seen before, i.e. Maavis. We should check the "acceptance" of these apps at the pre-pilots and if users are confused we should keep them out of the pilots.
  • At this iteration we will have to exclude deaf people, because there are no settings in any SP3 apps regarding their needs. We need to make sure that for the 3rd iteration some SP3 apps also have settings for deaf people (subtitles in video).
  • All the sections of D402.2.2 have been allocated to specific teams
  • The user will save the changes made at the PCP by clicking on a button. Nevertheless, the changes will be shown to user as he/she makes them, using the PCP, even if they are not saved.

Wednesday 23/10/2013


  • Results gathering and analysis tool
    • We need to start thinking of the tools that we will use to gather and validate the results. Juan is checking the onlinesurvey.
    • Eleni will trigger the discussion about this with an email to Juan, Katerina and Silvia.
    • A good tool to gather and analyse the results is an online survey
  • Papers for HCII
    • Silvia is preparing a paper, sent an abstract
    • We will send this paper to the general track, ask Manuel and Gregg for the possibility of submitting other papers or presentations to the special tracks
  • ID403.1.1
    • Everybody should check ID403.1.1 and send their comments to Laia
  • Use Forum
    • We will use the user forum as a creativity session to update and refine the Generic Story (Use Case)
    • We will not create a new scenario, but create variations of the scenario.
    • The use cases will be updated and will be used as a baseline to create scenarios for the 3rd pilots iteration phase.
    • Nacho will send an email with thoughts to trigger the discussion. People involved will be Eleni, Katerina, Nacho and Ignacio.

Questions/Open issues

  • We need to have a pilots meeting. Is it possible (technically and administratively) to have it during the Madrid meeting?
    • This meeting should start with the presentation of the pilot scenario
    • All components of the scenario should work (80% at least)
    • All the developers should demonstrate their apps and let the facilitators play with them
    • Open discussion should follow covering the following issues:
      • Integration issues
      • End-to-end technical validation plan
      • Updates at the pilot scenario


  • We will not support changes in text size in Windows for the 2nd pilots
    • We will drop the 3 font size bugs. Tomorrow afternoon we will decide when things will be done.
    • Challenge to the MM's: if font size is selected, maybe activate magnifier
  • We would like to have volume support under Windows for the 2nd pilots
    • Kasper to estimate how long it will take to implement a setting handler for volume in Windows
  • PCP does not need to work in Android for the 2nd pilots
  • For people with visual impairments we should check if they could use Linux (Platform A) and go to Windows (Platform B)

Wednesday 30/10/2013


  1. ID 403.1
  2. Updates on D402.2.2
  • MMs scenario for proposing a new solution (test scenario or demo?)
  • Context demo
  • Tech val plans

Questions/Open issues

  • Which non-functional mock ups will we have? PMT, PCP, MMs proposing a new solution


  • In Chapter 2.6 we will include the tests/demos which will not be functional
    • PMT (final version), PCP (final version), context, proposing a new solution
  • The protocol will be the same for all pilot sites, but we will make clear that each pilot site may have specific differences, based on the 1st pilots' results.
  • We need to have the deliverable ready on the 15th of November.
  • The blind-deaf users will be blind people with mild hearing impairments.
  • User forum -> results and methodology in ID502.1.2. User forums meeting proceedings
  • In ID403.1 we need to add a section to validate the hypothesis and another chapter (in the conclusions) to explain the results of the pilots according to the hypothesis.

Wednesday 11/12/2013


  1. Summary results from the user forum in Greece
  • 30 participants with various disabilities
  • Participants totally understood and were happy about the concept, but also expressed concerns and asked questions
  • Gathered high-priority needs and preferences
  • Validated how realistic the use cases are: very realistic, exactly what they want
  • Some additional wants:
    •  Cloud4all implemented in their home appliances
    • Cloud4all implemented in devices without interfaces

 1.- Review meeting (Luxembourg, 15-16th January 2014)

  • General information about SP4 activities during last year (WPs, deliverables) (Presenter: Nacho)
  • General evaluation framework and main results 1st pilot results (Presenter: Nacho)
  • Evaluation framework for the 2nd pilot iteration (Presenter: Eleni)

2.- ID403.1: Some information still missing

  • Objective data (e.g. task timing). What happened with the facilitators' diaries?


    • Quantitative data was collected, but only qualitative data was reported
    • Objective data should be collected and reported during the 2nd iteration (specific suggestion from the SAB)
    • Log data
    • It seems that the logs were not usable to calculate a metric.
    • Eleni: logs were taken randomly - the system takes long to log the files. The people from the MM teams don't know exactly what the users were doing when the logs were taken.
    • This is something we should try to prevent in the 2nd evaluation
      • During the 2nd iteration the logs are also taken manually by the facilitator.
      • Additional technical validation for the snapshot app
    • Summary of 1st pilot results to be presented in the review meeting, there are concerns about how to answer if reviewers ask questions about missing data.

    3.- Ethics audit, implications

    • Received Ethics follow-up report, three main suggestions:

    a) send ethical approvals as soon as possible to EC

    b) send specific project information and informed consents

    c) document how mock-up or organizational implementation decisions meet ethical requirements (should be reviewed by E&LAC and provide a report to EC)

    • Suggestion b) affects the ethical procedures in D402.2.2 (the procedure should be better described)

    4.- Status of D402.2.2.

    • Deliverable has been sent to E&LAC and local ethical committees, also to peer reviewers in the QCB.
    • E&LAC asked for a review before December 18th
    • Final deliverable to be sent to the EC by 23rd December


    • ACTION (Nacho): Ask Christophe to provide summary of results and rationale for the lack of MM logs metrics during 1st iteration.
    • ACTION (Nacho & All): Additional internal review in D402.2.2. and proof reading should be done in parallel to peer review (should be sent to EC before end of year) :
      • Check that all questionnaires are covering metrics
      • Check lessons learned about objective measures (what happened during the first iteration? Will they be collected in a similar way?)
      • Check information about logs (prevent failures during first pilot)
      • Include a subsection describing ethical procedures

    Decisions from other calls and Groups


    • The token is given by the system to the user.
    • There will be an "apply my preferences" button on the PMT.

    Open issues

    • We still don't know how the PCP will be launched to the user (automatically, or manually by the facilitator)
    • We still don't know if we will test or only demonstrate scenario 1.2.2.a
    • The PMT/PCP is only used in Platform A
    • In Platform B the user tweaks the settings manually.
    • In Platform B the optimum settings set B are saved manually by the facilitator

    Installation instructions for 2nd pilots

    Hardware Requirements

    GPII Framework installation

    Matchmaker Dependencies

    Installing PMT/PCP


    Device reporter

    Log settings


    Manual logging

    Snapshot shortcut

    SP3 apps instructions

