GPII Roadmap

From wiki.gpii

Areas of GPII needing research and development

  1. Computer Assisted Evaluation and User Preference Profile Creation and Editing
    1. One of the key aspects of the GPII is the ability for users to discover that there are things that would help them and to identify which techniques or services would help them.
    2. Research Area(s):
      1. Research on computer-assisted evaluation of users with (and without) disability, literacy, digital literacy, or aging-related access issues, to determine which types of features or services would address their needs. Tools for use by professionals to aid their evaluations, as well as tools for people who do not need, cannot afford, or cannot reach a professional, to do a best-effort self- or friend/family-assisted evaluation. Research on the effectiveness of the tools identified/developed in increasing the accuracy of professionals. Comparison of tool results and professional results for user- or family/friend-administered evaluation. Evaluation of the importance of user confirmation (trial) of automated results – and the best mechanisms for user control of profile elements. Research on computer-assisted evaluation/recommendation for all users -- what works, what doesn't.
      2. This would probably take the form of an "N&P discovery aid" of sorts. (the word "Wizard" for this is deprecated). Very friendly to user and professional (if present).
      3. Includes the ability for users to try out the features generically – and later specifically with particular commercial or open source packages.
      4. Editing and refinement -- how do users add more 'conditionals' (e.g., "don't give me that pill reminder alert tone if I'm watching TV -- I won't do it -- wait for a commercial, then give me the tone right away; I'll get up and take my pill right away."); what do we know about refinement hints (e.g., "do you want to change all instances, or just this one?")
      5. How can we import information from other tools? E.g., a person becomes aware of GPII during an eval at a low vision clinic. Do we really want them to go through 'our own' vision eval at that point? (It wastes the user's time, and it is insulting to low vision professionals not to use their methodology.)
      6. How can we import information from real-time performance tools (e.g., keystroke error, typing rate, apparent decision delay time), or at least use input from such tools to trigger a re-eval ("Judy is taking longer breaks from her terminal; is she tired? eye strain?") [don't forget privacy issues]
      7. How many profile creator/editors do we need? Certainly more than 1.
      8. Other models than a single tool or session. E.g., GPII has a partnership with a cable company to serve its customers. Users who turn the caption feature on may soon get a message: "Do you want captions in other situations, such as at a movie theatre or when a public announcement is made at an airport?"
      9. Peer-based assistance (AssistMeLive) with both preference creation and in-use editing. For example, user is at a different ATM than usual; preferences don't work quite right, user initiates a help session with peer (paid or volunteer) to either edit the preferences, create a new conditional, or otherwise solve not only the current transaction but the settings for future ones.
    3.  END USERS AND OTHER STAKEHOLDERS
      1. This is about a tool to assist Users and Professionals in discovering what things would help a user
        1. So this is a tool to be used by users to create their profiles
        2. Cumulative information from all profiles – would be useful to developers in planning and marketing. Also to funders and planners and policy people. Educational evaluators, etc.
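The kind of conditional preference described in the list above ("don't give me that pill reminder tone if I'm watching TV") can be pictured as data plus a tiny evaluator. This is only a hypothetical sketch: the setting names, context keys, and rule format are invented for illustration and are not a GPII schema.

```python
# Hypothetical sketch of conditional preference rules (invented names,
# not a real GPII schema). Each rule pairs a context condition with an
# override of a base preference; the first matching rule wins.

base_prefs = {"pill-reminder.alert": "tone"}

conditionals = [
    # "Don't give me the tone while I'm watching TV -- wait for a commercial."
    {"when": {"watching-tv": True, "commercial-break": False},
     "override": {"pill-reminder.alert": "defer"}},
]

def resolve(prefs, conditionals, context):
    """Return the effective preferences for the given context."""
    effective = dict(prefs)
    for rule in conditionals:
        if all(context.get(k) == v for k, v in rule["when"].items()):
            effective.update(rule["override"])
            break
    return effective

print(resolve(base_prefs, conditionals,
              {"watching-tv": True, "commercial-break": False}))
# -> {'pill-reminder.alert': 'defer'}
```

Representing conditionals as data rather than code is one way a profile editor (or a peer helper, as in the ATM example) could add and refine rules without reprogramming anything.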
  2. Assistance on Demand
    1. Infrastructure and techniques for providing a wide range of assistance-on-demand (AOD). [first we need a list of all such services, including the ones already in mass operation such as relay -- good for economic modeling]
      1. What is needed to support scalable deployment of such systems? Study services with full (TRS/VRS) and limited (VizWiz, WebAnywhere) implementation in vivo -- what do the users and operators say themselves; what do funders say? Coleman Inst. is connected to some companies that provide remote monitoring support services & has done some research on this.
      2. What is the range of assistance that can be deployed on an 'on-demand' basis?
      3. What is the effectiveness/impact of AOD? Do specific measurable changes in user behavior or benefit result? For which types of AOD?
      4. What computer based AOD is possible and effective today? Tomorrow? How can these services evolve towards higher degrees of automation and efficiency? What about hybrid service models -- one center provides relay, captioning, navigational assistance, etc. -- when it's a human service, occupancy is everything (% of agent time that is billable)
      5. Can AOD make users more independent?
        1. Allow them to live more independently – at lower cost? Reduce load on spouse?
        2. Allow them to travel more independently (distance and around town)
        3. Begin with study of triggering events for institutionalization ("Grandma couldn't cook for herself any more, so she got depressed and went into a home")
    2. END USERS AND OTHER STAKEHOLDERS
      1. This is about creating an infrastructure for both USERS and VENDORS to provide AOD to users.
        1. Vendors – can use the infrastructure to offer services commercially to users (or students of schools, employees of companies, etc.). They may just want to use GPII to increase their efficiency, not necessarily to add a new service. For example, GPII could allow a relay provider without its own call center to provide service (analogy to telecom carriers who are not 'facilities-based' but are good at marketing or customer service, or have an enterprise client).
        2. Users – can use it to create volunteer or peer support systems for providing assistance on demand to other users
  3. Automated and Crowd sourced Media Access
    1. People who are deaf, hard of hearing, blind, have low vision, or have learning disabilities benefit from media that has captions and/or audio descriptions. Although we know how to manually create captions and descriptions, the cost of doing them for everything they are needed for (e.g., for education) is more than budgets allow. Research and tools are needed in order to create more effective and affordable mechanisms for doing captions and descriptions.
      1. What are the best mechanisms for creating international federated searches to find captions in any language (e.g., transcript -> translation -> caption synchronization vs. translate as you go)
      2. Mechanisms to translate captions into the language needed.
      3. Role of crowd sourcing for correction of captions
      4. Use of speech recognition, re-voicing, and knowledgeable-participant-aided caption correction in creating inexpensive captioning
      5. Situations where transcripts are sufficient, or where an auto-scrolling transcript is sufficient (or even better than captions for users). [R&D for media players that include scrolling transcript option.]
      6. What's going on in this space already (Amara, VoiceStream, rifftrax, DVX) -- not just captioning, but user commenting of all sorts.
      7. Coincidental contribution: students at a lecture are taking notes; notes are real-time processed to create captions. maybe students are rewarded somehow, or the rating of their notes becomes part of their grade. Another: student is asked to describe a chart or graph -- a good test of comprehension that becomes an audio description.
    2.  END USER AND OTHER STAKEHOLDERS
      1. again this is an infrastructure activity. It is a way for other people to find or provide accessible media
      2. users can use this infrastructure to find accessible media if it exists
      3. vendors can use this infrastructure to provide services to make media accessible (e.g. adding captions or descriptions)
      4. users can also volunteer to make media accessible for other users (e.g. volunteer effort to caption or describe media)
      5. content owner -- may have legal responsibility for alternate formats; what are the implications of crowdsourcing on this, esp. quality assurance? Can tools handle both crowdsourcing and a "final approval" role? E.g., what about changes *after* final approval by the owner?
      6. also on the policy side: can owner object to crowdsourced captions if they are not integrated into content but only provided separately, integrated only at the player?
      7. strong value in education and employment
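One of the pipeline options named above (transcript -> translation -> caption synchronization) is easy to sketch: if a time-coded transcript already exists, its timing can be reused and only the text needs translating. The function and data below are an invented illustration; `translate` stands in for any real machine-translation call.

```python
# Sketch of the "transcript -> translation -> caption synchronization"
# pipeline option. All names here are illustrative; `translate` is a
# stand-in for any machine-translation service.

def captions_from_transcript(segments, translate):
    """segments: list of (start_sec, end_sec, text) in the source language.
    Returns caption cues in the target language with timing preserved."""
    return [(start, end, translate(text)) for (start, end, text) in segments]

# Toy demo using a dictionary as the "translator":
glossary = {"hello": "hola", "goodbye": "adios"}
cues = captions_from_transcript(
    [(0.0, 1.5, "hello"), (1.5, 3.0, "goodbye")],
    lambda text: glossary.get(text, text))
print(cues)  # [(0.0, 1.5, 'hola'), (1.5, 3.0, 'adios')]
```

The alternative "translate as you go" option would instead run recognition and translation live, paying a latency and accuracy cost but needing no pre-existing transcript; comparing the two is exactly the research question posed above.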
  4. Document Conversion and translation pipelines [possibly combine with previous item?]
    1. Government and other agencies and companies are increasingly asked to make their documents available in multiple accessible formats.
      1. It would be much better if they could create one document that would act as a single source for all formats. (DAISY has done this to first order – more is needed.)
    2. However it is too hard to teach people to mark up documents properly (although it could be used for training -- feedback/'grading')
      1. It would be better if they could just use any printed doc (or the print file for a doc) and have it scanned and auto-formatted
      2. This is a machine-recognition task that is beyond the current state of the market / state of the art – when complex pages are fed into it, like those generated by government agencies using old COBOL-based systems.
    3. Users also get documents in a wide variety of forms.
    4. The key deliverable would be a “feed it any document – even with invisible table formatting that is broken across pages with footers – and have it properly pull out the text and structure and semantics and create a fully accessible HTML5 page with ARIA markup” capability.
    5. If this is done, it would save hundreds of millions of dollars worldwide
    6. It would also advance machine vision and semantic extraction.
    7. Coupled with an advanced DAISY pipeline, it would revolutionize document access and simplify the life of every person trying to create accessible documents.
    8.  END USER AND OTHER STAKEHOLDERS
      1. This is a parallel to number 3 above except this deals with documents rather than media.
      2. It is a way for other people to find or provide accessible documents.
      3. users can use this infrastructure to find an accessible document if it exists, or route it to a remediation service
      4. vendors can use this infrastructure to provide services to make documents accessible (change to braille or audio)
      5. agencies, employers, and businesses can use it to rate document creators, improve their performance
      6. users can also volunteer to make documents accessible for other users (change to braille or audio)
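Only the last stage of the deliverable described above (emitting accessible HTML5 from already-extracted structure) is easy; the upstream layout and semantics extraction is the research problem. The sketch below shows just that trivial last stage, with made-up inputs, to make the pipeline's target output concrete.

```python
# Sketch of the *final* stage of an "any document in, accessible HTML5
# out" pipeline: emitting HTML5 with landmark structure from text that
# has already been extracted. The hard research problems (layout
# analysis, table reconstruction, semantics extraction) happen upstream
# and are not shown. Inputs here are invented for illustration.
from html import escape

def to_accessible_html(title, paragraphs):
    """Render an extracted title and paragraph list as a minimal
    accessible HTML5 page (landmark <main>, real heading)."""
    body = "\n".join(f"<p>{escape(p)}</p>" for p in paragraphs)
    return (
        "<!DOCTYPE html>\n"
        '<html lang="en"><head><meta charset="utf-8">'
        f"<title>{escape(title)}</title></head>\n"
        f"<body><main><h1>{escape(title)}</h1>\n{body}\n</main></body></html>"
    )

page = to_accessible_html("Annual Report", ["First paragraph.", "Second paragraph."])
print(page)
```

In a full pipeline, the upstream stages would also have to tag tables, lists, and form fields with the proper elements and ARIA roles so screen readers and braille converters can navigate the structure, not just the text.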
  5. Development Environment/ WorkBench for Access Technologies and services
    1. The goal is to create an environment for developers of access technologies that parallels the environment created by Apple and Android.
    2. Key components of this Environment / workbench would need to be
      1. Tools that make it easy to create access solutions (also easy to offer components to AT developers: e.g., free speech technologies already offered by AT&T Labs)
      2. Components and services for making access solutions -- B2B marketplace
      3. Localization tools to make it easy to translate into different languages,
      4. A marketplace to make it easy to sell and reach worldwide (CLOUD4All is creating some of this)
      5. Consumer and Expert network to make it easy to get good advice and input when developing access solutions and services [I think this is so big it deserves to be its own top-level item -- the same network can support end users, purchasers, etc.; pre- and post-sale]
    3. This one can be a boon to developers and also greatly increase the number of research projects that actually get into users' hands.
    4. DEVELOPER VS END USER
      1. this one is purely developer-focused. It is about making it easier for developers to create, market, and support new solutions
      2. users, however, can also be developers if we give them the right tools and the ability to link to programmers, etc., who can realize their visions
  6. Secure anonymous identification
    1. This would look at techniques that users could use to authenticate themselves without revealing their identity, except when they want to
      1. Should not require any thinking on the part of the user – so it can be used by elders and people with cognitive disabilities
      2. Should allow pulling prefs from cloud based on this key – OR passing the prefs from the key
      3. Probably wearable so elder doesn’t lose it
      4. Work with computers today – but with everything down to a thermostat in the future
      5. horizon scan -- what are the mainstream trends that we can/should jump aboard? Everyone hates passwords, captchas, and SecurID cards.
    2. DEVELOPER VS END USER
      1. this is something that would be created by developers
      2. but the primary target is users: the ability for a user to invoke their preferences without giving up their identity, unless everyone has to give up their identity for that activity.
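The core idea, a key that pulls preferences from the cloud without revealing who the user is, can be sketched as an opaque random token mapped to a preference set, with no identity stored anywhere. This is a toy illustration only, not a GPII protocol: a real design would also need revocation, transport security, and protection against token correlation across sites.

```python
# Toy sketch of "secure anonymous identification": the wearable key holds
# only an opaque random token; the server maps token -> preferences and
# never learns who the user is. Invented illustration, not a GPII design.
import secrets

class PreferenceServer:
    def __init__(self):
        self._prefs_by_token = {}

    def enroll(self, prefs):
        """Issue a fresh opaque token for a preference set. No name,
        account, or other identity ever reaches the server."""
        token = secrets.token_hex(16)  # 128-bit random, unguessable
        self._prefs_by_token[token] = dict(prefs)
        return token

    def lookup(self, token):
        """Return the preferences for a token, or None if unknown."""
        return self._prefs_by_token.get(token)

server = PreferenceServer()
token = server.enroll({"font-size": "large", "high-contrast": True})
print(server.lookup(token))  # the prefs come back; identity was never sent
```

The same token could alternatively carry the preferences itself (the "OR passing the prefs from the key" option above), trading cloud dependence for a larger, harder-to-update key.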
  7. Mechanism for people who need physical Interfaces
    1. This would look at the use of URC to allow users who need to use their own physical interfaces to automatically connect and use them in conjunction with cloud technologies
      1. Based on ISO/IEC 24752 (the same standard used in i2Home and a dozen other EC grants)
      2. Important for those needing interfaces that go beyond what the target device can provide [what is this limit? maybe best begin with reasonable assessment of where target devices are going, and what they'll have/do in 5 years]
      3. Good linkage to smarthome work.
    2. DEVELOPER VS END USER
      1. for this one it is important to distinguish between 3 players; mainstream manufacturers, accessibility vendors, and users
      2. this would give mainstream manufacturers a way to address a much wider range of people with disabilities without having to understand either the disabilities or how to accommodate them. By creating an interface socket, they would allow people to bring their own interfaces along and plug them into the functionality of their product. Automatic interface-generation tools could look at the interface socket and create an interface that would match the needs of a particular user.
      3. Accessibility vendors can create different tools for automatically generating accessible interfaces. For common devices such as televisions, ovens, thermostats, etc., they could also create interfaces that are custom-tuned to different types of disabilities but that can be used across products from different vendors.
      4. Users would then be able to access a wide range of products in their environment from a single interface that they would carry with them. Since this device would generate the different interfaces it would provide individuals with a much more consistent way of accessing different products. This would include both different products of the same type (TVs), and different types of products.
      5. This is also a good mechanism for providing access to people with physical disabilities where you are not able to “download from the cloud” their interface because it includes physical components.
  8. Ways to use the cumulative information from all users to improve individual performance
    1. With the tremendous amount of information about users with different disabilities that will be in the system, anonymous analysis can determine new features and capabilities that would benefit a user
      1. Wisdom of the crowd
      2. Provides advice or makes suggestions to a user based on what other users like them have found successful [like a recommender service -- there is lots of literature on this already, I think. But a large measure will remain human-based because of the highly contextual and idiosyncratic nature of disability.]
      3. Good for novice or non-technical users who would otherwise not discover new things.
      4. A lot of work is needed, but this will be increasingly important as the number of possible solutions grows so large that there are too many for most users to look through.
      5. Useful for 'early warning' of new technologies that jeopardize accessibility.
      6. Integrate with Needs & Numbers
    2.  END USERS AND OTHER STAKEHOLDERS
      1. 'AT & mainstream ICT developers', marketers, policy makers, and AT funders could use this mass collection of information to better understand what consumers are actually doing. They could use this to design future products better, or they could use it to improve the runtime algorithms in their products on a live basis
      2. users could also (using features built by developers) tap into this information to help them make better decisions about access, where to go
      3. peer support providers could use this information to populate their support scenarios: "Well, most people with the barrier you're facing try X."
      4. could inform public-funded R&D efforts: "a solution to X is missing, or judged largely ineffective by 50% of users."
      5. could be used to train clinical staff or other professionals.
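The "users like you have found X successful" idea above can be sketched with a toy co-occurrence recommender: count which features are used by profiles that overlap with yours and suggest the most common ones you lack. All the feature names are invented; a real system would use proper collaborative filtering and, as noted above, keep a large human-in-the-loop component.

```python
# Toy "wisdom of the crowd" recommender over anonymized preference
# profiles. Feature names are invented; real deployments would use
# proper collaborative-filtering techniques on much richer data.
from collections import Counter

def recommend(my_features, all_profiles, top_n=3):
    """Suggest features used by profiles that share at least one
    feature with mine, excluding what I already use."""
    mine = set(my_features)
    counts = Counter()
    for profile in all_profiles:
        other = set(profile)
        if mine & other:              # "users like me"
            counts.update(other - mine)  # ...use these things I don't
    return [feature for feature, _ in counts.most_common(top_n)]

profiles = [
    ["screen-magnifier", "high-contrast", "large-cursor"],
    ["screen-magnifier", "high-contrast"],
    ["screen-reader", "braille-display"],
]
print(recommend(["screen-magnifier"], profiles))
# -> ['high-contrast', 'large-cursor']
```

The same aggregate counts, inverted, would serve the other stakeholders listed above: a feature that many similar users try and then abandon is exactly the "judged largely ineffective" signal for funders and developers.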

Areas of Research once phase 1 of GPII is up

  1. Library “AT as a service” Implementation
    1. Development and deployment of an AT as service package for libraries and public access points.
      1. This would be a great application of GPII and would save public agencies large amounts of money while providing greater service to the diversity of patrons.
      2. Patrons would get an eval (see #1 above) and get a profile (at a library or clinic)
      3. Libraries would have a pay-as-you-go lease on a wide range of software and services. They only pay for those that patrons use – and only while they are using them.
      4. Users would sit down or roll up to a computer, and it would automatically find the apps needed, load them, configure them, and leave the user ready to go.
    2. DEVELOPER VS END USER
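The "sit down and it configures itself" step amounts to matching the needs in a patron's profile against whatever the library's pay-as-you-go catalog can supply. The sketch below is an invented illustration (app names, need names, and the catalog format are all hypothetical), not a GPII interface.

```python
# Hypothetical sketch of the library workstation setup step: match a
# patron's profile needs against a leasable app catalog. All names and
# the catalog format are invented for illustration.

catalog = {
    "magnifier-pro": {"provides": "magnification"},
    "free-reader": {"provides": "screen-reading"},
}

def setup_workstation(profile, catalog):
    """Return (apps_to_launch, unmet_needs) for a patron profile.
    Unmet needs would be flagged so staff can assist instead."""
    launch, unmet = [], []
    for need in profile["needs"]:
        match = next((app for app, meta in catalog.items()
                      if meta["provides"] == need), None)
        if match is not None:
            launch.append(match)
        else:
            unmet.append(need)
    return launch, unmet

apps, unmet = setup_workstation({"needs": ["magnification", "captioning"]}, catalog)
print(apps, unmet)  # ['magnifier-pro'] ['captioning']
```

Because the lease is pay-as-you-go, the same launch list doubles as the metering record: the library is billed only for the apps actually launched, and only for the duration of the session.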
  2. Initial rollout and test of GPII in 3 – 5 countries
    1. By the end of the CLOUD4All grant, we should, with the CLOUD4All and other research, have the components needed for an initial implementation of the GPII. It will not be a complete GPII, but it will be phase 1, and it would be sufficient for an initial test in some early-adopter European countries. The goal would be to test the utility of this approach in constrained areas across languages and cultures. Potential areas for the implementation would be:
      1. With elders – to provide a much simpler way for them to invoke the features they need. This could demonstrate the ease of use and the ability to invoke alternate, simpler interfaces tuned to the abilities of the users.
      2. Libraries – see above
    2. DEVELOPER VS END USER
  3. Two other rollout items – things that could be online fast. Especially the AnyDoc conversion, though it requires completion of the research first.
    1. Assistance on Demand (if this is funded in time to be ready – it is one of the most powerful).
    2. AnyDoc conversion service (again, if the research is ready in time under separate funding – this has the potential to save the most money for government agencies in making their materials accessible).

The first two might be in a single implementation grant. The latter two could be in a separate one. I don’t think these grants are large enough to be implementation by themselves