Continued Research on Possible Approaches for Protecting Communication between LFM and CBFM
- 1 Introduction
- 2 The Workflow to Store OAuth2 Client ID/Secret into a native GPII application
- 3 The Workflow of Using OAuth2 Client ID/Secret
- 4 Environments
- 5 Detailed Requirements
- 6 Ten Immutable Laws of Security
- 7 Explored Approaches
- 8 Other Explored but Dropped Ideas
- 9 Results
Introduction
This wiki page continues the research after the discussion on the wiki page "Protect communication between Local Flow Manager and Cloud Based Flow Manager". The discussion notes can be found here.
In the discussion, we agreed that the spec of OAuth 2.0 for Native Apps may not be helpful, because the Local Flow Manager is aware of the user's credentials, namely user tokens. That spec is for native applications that should never learn the user's credentials. As a result, the browser redirection described in the spec, using pre-registered redirect URLs, is not required for the GPII use case.
A more suitable solution is the OAuth2 Resource Owner Grant, where the native GPII application acts as a highly privileged application that receives GPII user tokens and uses them to access user information from GPII Cloud. However, this grant only uses a client id and secret to identify a native GPII application, without the need for a pre-registered redirect URL. This raises the question of how OAuth2 client ids and secrets can be securely stored on users' machines.
The Workflow to Store OAuth2 Client ID/Secret into a native GPII application
Each GPII native application is assigned a unique pair of OAuth2 client id and secret that is used to verify that this application is a valid native GPII application. Before a native GPII application requests any user information, it first needs to provide its OAuth2 client id/secret to GPII Cloud in exchange for an access token. The access token can then be used to access user information from GPII Cloud.
Store OAuth2 Client Credential at Installation
1. To install a native GPII application, first download GPII installer;
2. The installer (a human) visits a website provided by GPII cloud;
3. The installer logs into his GPII account. Register if he doesn't have an account;
4. This installer registers this GPII installation and requests an OAuth2 client id/secret for it;
5. The OAuth2 client ID/secret is given on the web UI;
6. The installer starts the GPII installer;
7. One GPII installation step asks the installer to input the OAuth2 client ID/secret. This step can be skipped;
8. GPII installed.
Store OAuth2 Client Credential after GPII is installed
If the client credential is not input at installation, the installed native GPII application can continue to be used to perform local adaptations, such as reading and reacting to GPII Bearer tokens. However, this native GPII application cannot perform any action that involves GPII Cloud. To allow the native GPII application to communicate with GPII Cloud, an OAuth2 client credential needs to be input into this GPII application via the application menu.
The Workflow of Using OAuth2 Client ID/Secret
According to the OAuth 2 specification for Resource Owner Grant, before a native GPII application requests any user information, it first needs to request an access token by providing its OAuth client credential:
1. The native GPII application sends a request with these request parameters: OAuth2 client id, OAuth2 client secret, GPII token;
2. GPII Cloud verifies this information. If it matches, GPII Cloud sends back an access token;
3. The native GPII application sends requests with the assigned access token to access information associated with the GPII token.
Environments
We consider security for three environments in which the GPII Local Flow Manager could be installed:
- Private computers and devices
The client id and secret will be saved directly in the file system. For example, if a user loses his phone in an unlocked state, the user is responsible for logging into his/her GPII cloud account and revoking this client id.
- Shared computers and devices
A typical use case is a family-shared environment. Users themselves decide whether the machine needs to be protected as in the private environment use case or the public environment use case.
- Public computers
The main goal of this wiki page is to explore possible approaches to protecting client ids/secrets in public environments, as well as potential attacks for each approach that could compromise the client id/secret.
Detailed Requirements
Answers to these requirement questions will help determine whether associations should be made between user tokens, native GPII applications and physical machines. However, we can continue to explore secure ways to store client ids/secrets without having these answers.
1. Can any user tokens be used on any GPII installed machine?
2. If the answer to 1 is yes for public machines, shall users be allowed to define what user tokens can be used on a particular native GPII application, e.g. for their personal devices?
To determine whether a particular local flow manager can only request data for a set of user tokens.
3. Shall the physical information of GPII-installed machines be tracked?
If yes, when the request is from an unknown machine, reject.
The tracked machine hardware information could be: IP or MAC address, CPU and motherboard info, etc.
4. The requests received at GPII cloud must be from a local flow manager.
The attack: an attacker sends requests from a machine that has GPII installed, using the client id/secret of that machine. Is there a way to tell that requests are not from the local flow manager?
5. For user personal tokens only, not applicable to pilot tokens: can one user's personal token be used on multiple devices at the same time?
To determine whether checking for one user token being used on multiple machines at the same time can identify a stolen user token.
6. If yes to 5, what if these devices are located at different locations?
To determine whether this detection can be used to detect the loss of a user token.
Ten Immutable Laws of Security
Before we start, an interesting read on the Ten Immutable Laws of Security. This list describes the security vulnerabilities that result from the way computers work, rather than from anything software can fix.
Law #1: If a bad guy can persuade you to run his program on your computer, it's not solely your computer anymore.
Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore.
Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.
Law #4: If you allow a bad guy to run active content in your website, it's not your website any more.
Law #5: Weak passwords trump strong security.
Law #6: A computer is only as secure as the administrator is trustworthy.
Law #7: Encrypted data is only as secure as its decryption key.
Law #8: An out-of-date antimalware scanner is only marginally better than no scanner at all.
Law #9: Absolute anonymity isn't practically achievable, online or offline.
Law #10: Technology is not a panacea.
Conclusion: with computers that a user has full control over, for example, a user with admin privileges, nothing can be kept secret from this user. The goal of the explorations in this wiki is to protect client credentials on computers that are run (logged into) with a regular user account, such as computers in AJC or school labs.
Explored Approaches
All approaches below are for the public environment use case, where client ids/secrets are assigned to native GPII applications in public spaces such as libraries, schools, etc.
Dedicated Private Computer
Description Have a separate, non-publicly-accessible machine at each public location to store the client id/secret. This machine is managed by local administrators and is the only machine at that public space that communicates with GPII Cloud, sending in the client id/secret and receiving access tokens. This machine is configured with networking and proxying to interact with the other public machines. The lifespan of access tokens for public spaces should be shortened.
1. A user inserts a USB drive into a public computer where GPII is installed;
2. The local flow manager reads the user token from the USB;
3. The local flow manager sends an http request to the dedicated private machine to request an access token;
4. The private machine checks if a valid access token exists. If not, the private machine sends an https request to GPII Cloud with the client id/secret to request an access token;
5. The private machine receives the access token and sends it back in an http response to the public machine;
6. The local flow manager on the public machine uses this access token to communicate with GPII cloud.
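The token-caching behaviour of the private machine (steps 4-5) can be sketched as follows. `fetchFromCloud` is a hypothetical stand-in for the real https exchange of client id/secret for an access token, and the lifespan value is illustrative; the point is that the client secret never leaves the private machine.

```javascript
// Sketch of the private machine's access token cache. fetchFromCloud is
// a stand-in for the https request to GPII Cloud carrying the client
// id/secret; lifespanMs models the shortened token lifespan mentioned
// above.
function makeTokenCache(fetchFromCloud, lifespanMs) {
    let cached = null;
    let expiresAt = 0;
    return function getAccessToken() {
        const now = Date.now();
        if (cached && now < expiresAt) {
            return cached;              // a valid token exists: reuse it
        }
        cached = fetchFromCloud();      // exchange client id/secret for a new token
        expiresAt = now + lifespanMs;
        return cached;
    };
}
```

Public machines would only ever see the short-lived access token returned by `getAccessToken()`, never the client credential itself.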
Possible Attacks If attackers figure out:
- the address/port of the private machine
- the format of http requests sent from public machines to the dedicated private machine
Attackers can perform the attack by:
1. Creating an html page that sends such http requests;
2. Opening this html in a browser on a public machine to send requests to the private machine, triggering the process of retrieving access tokens.
Cons This approach cannot be applied to a shared computer environment.
References Discussions last year on using proxying server for UIO + GPII integration
OS-own Security Storage for Keys
Description Each operating system has its own way to store credentials:
- Windows: Credential Manager, or a combined use of Isolated Storage and the Windows Data Protection API
- Mac OS: Keychain Services
- Linux: Secret Service API (through libsecret), GNOME Keyring, The use of /etc/shadow
However, with Windows, there are security concerns with using either solution:
- Credential Manager: desktop applications will have unrestricted access to all data stored through Credential Manager APIs (See here).
- Windows Data Protection API: the key used to encrypt data via this API can be provided in 2 ways:
Method 1: Use the logon user's password. The drawback is that all applications running under the same user account can access any protected data that they know about.
Method 2: GPII provides an additional "secret", called secondary entropy, to encrypt data. The drawback is that GPII must be careful about how it uses and stores this entropy, which leads back to the starting point: where and how to store secrets.
Linux provides a better security guard: according to the GNOME developers, the keyring manager provides access control lists for each keyring item, controlling which applications are allowed access to that item. If an unknown application attempts to access a keyring item, the keyring manager will prompt the user to allow or deny that application access. This helps prevent malicious or poorly-written programs from accessing the user's sensitive data.
Possible Attacks On Windows,
- When using Credential Manager, once attackers figure out the resource name that the native GPII application uses (from the source code) and the client id, they can write code to retrieve the client secret via Credential Manager APIs.
- When using Windows Data Protection API with method 1, applications running under the same user can access the protected client secrets.
- When using the Windows Data Protection API with method 2, attackers can steal the secondary entropy.
References
- A stackoverflow discussion on "Best practice for saving sensitive data in Windows 8": it suggests using the Credential Manager API
- How to store client secret securely on application base in Windows: it suggests using a combination of Isolated Storage and the Windows Data Protection API
- Best way to store a client secret in a native app: it suggests using the OS cryptographic store
Dedicated Process/Service Running on a Different Account for Storing the GPII Client Secret
With this approach, GPII starts 2 processes:
- One is a privileged process that stores the GPII client secret in its own file system so it is not accessible by the logged-in user.
- The other is the regular GPII local flow manager process that communicates with GPII cloud and the privileged process.
The implementation and workflow on Windows systems
On windows systems, the privileged process can be a windows service started by a system account so that no account password is required.
1. When the machine starts, the windows service is automatically started;
2. The service will automatically listen for any user desktop logon;
3. When user desktop logon occurs, the service will automatically launch the native GPII application with associated task tray and flow manager as a child process, connected to itself via TCP sockets.
The methods that Steve Grundell has tried, or is currently trying, to allow the windows service and its child process to talk to each other:
JIRA ticket for this work: https://issues.gpii.net/browse/GPII-2399
1. Named pipes
Not doable because:
(1) a proper implementation for the windows service to validate that the process at the other end of the pipe is the GPII application is too complicated (see IRC discussions);
(2) named pipes have a worrying security model, in that it seems any process on the machine can read and write to the pipe just by knowing its name.
2. Anonymous pipes
Not doable because the handles of anonymous pipes can't be shared across different sessions.
3. TCP sockets
This is the method that Steve is currently experimenting with. Steps are:
(1) The windows service starts the child process and gets a process id.
(2) Once the tcp connection is established, the service looks up the tcp table to find the process id at the other end and ensure it's the process that the service launches.
(3) The problem with this method is that the process may be launched via a shell process which might terminate quickly, leading to difficulties in establishing the process id exactly.
Steve Grundell's proof of concept work to try out above methods: https://github.com/stegru/service-poc.
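Step (2) of the TCP socket method can be sketched as below. `lookupPidByPort` is a hypothetical stand-in for reading the OS TCP table (e.g. via GetExtendedTcpTable on Windows), since Node has no built-in API for this.

```javascript
// Sketch of the service-side peer check: after accepting a TCP
// connection, the service maps the peer's source port to a process id
// via the OS TCP table and compares it with the pid of the child it
// spawned. lookupPidByPort is injected because the TCP table lookup is
// platform-specific and not available in core Node.
function verifyPeer(socket, spawnedPid, lookupPidByPort) {
    const peerPid = lookupPidByPort(socket.remotePort);
    return peerPid === spawnedPid;
}
```

The shell-process problem in (3) shows up here as `spawnedPid` not matching the pid that actually opened the connection.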
The other possible implementation with electron IPC
This approach could probably be implemented with Electron IPC. Although the IPC documentation says that ipcMain handles asynchronous and synchronous messages sent from a renderer process (web page), Antranig and Tony found Electron IPC to be a more generalised mechanism than what is described in its documentation. More experiments are needed with Electron IPC.
The adjusted workflow to request an access token from GPII cloud using this approach
With 2 separate processes involved in this approach, the windows service and the GPII local flow manager process, the windows service is the only process that can access the client secret, and it should not expose this secret to any other process in plain text. Steve Grundell suggested this adjusted workflow:
1. The GPII local flow manager contacts GPII cloud for a salt;
2. GPII cloud returns a random hex string as a salt to the GPII local flow manager;
3. The GPII local flow manager gives this salt to the windows service;
4. The windows service encrypts the client secret with the salt: bcrypt_pbkdf("client_secret", "salt") and returns the result to the GPII local flow manager;
5. The GPII local flow manager sends the client id and the encrypted client secret for this client back to the GPII cloud together with the salt received at step 2. Note that the salt is sent along because there might be multiple salts being assigned to different users for the same client id;
6. GPII cloud uses the salt to perform the same bcrypt_pbkdf() calculation on the client secret stored in Cloud for the client id received at step 5, and compares it with the encrypted client secret received at step 5;
7. If a match is found, meaning this native GPII application is valid, the GPII cloud returns an access token.
Possible Attacks If the admin access on this machine is compromised, attackers can replace the executable of the GPII application with their own malicious application. As a result, the windows service will start and communicate with that bad process, which can then use the same workflow to request access tokens from GPII Cloud.
Cons Requires refactoring GPII code to implement:
- 3 separate processes: the windows service, the GPII launcher process and the GPII local flow manager process
- The communication between the windows service and the GPII local flow manager process
References
- Start a service only when it's needed: Mozilla Windows Service Silent Update
- Using Pipes for IPC to communicate btw the parent process and the child process
- Hash-based message authentication code
- Node.js crypto.createHmac(algorithm, key) API
- Windows Runas command
- Node.js child processes API
- What are requirements for a HMAC secret key
- Why prefer bcrypt or pbkdf2 over sha256 in password hashes
Yubikey OTP (One Time Password)
Description The client secret is saved securely in GPII Cloud. A local admin uses a Yubikey to retrieve it from the cloud when starting GPII on a local machine.
1. Someone turns on the computer;
2. Login as a computer user;
3. Starts the GPII application;
4. GPII application asks for OTP (by using yubikey);
5. This person inserts the Yubikey into a USB port and touches the touch area;
6. GPII application receives the OTP;
7. GPII application sends OTP and the client id to GPII cloud (HTTPS)
8. GPII Cloud contacts the Yubico Authentication Server (sends the OTP);
9. The Yubico Authentication Server sends the client id and the confirmation of the OTP back to GPII Cloud; otherwise, a bad OTP ends the process;
10. Based on the client id and the confirmation of the OTP, GPII Cloud sends the client secret to the native GPII application (HTTPS);
11. The client secret is saved in memory and used by the native GPII application.
Possible Attacks The loss of the Yubikey
Pros Good for the shared computer environment
Cons This approach sacrifices local admins' and even users' convenience if:
1. The public location has many public machines: someone needs to insert the Yubikey into each machine every time the GPII application starts;
2. Some public machines require users to log in to use the machine. At each logout, the state of that machine, including running applications, data, etc., is restored to the initial state so the next user has a clean machine. This means every login needs to restart the GPII application, and needs that Yubikey.
What is YubiHSM The YubiHSM processes the encryption, decryption, and storage of keys. When called to validate a Yubico OTP, it will load the OTP and the associated encrypted key into its onboard processor and perform the decryption and comparison. Subsequently, it will only pass the validation results and associated data (such as usage counters) back to the host machine; the decrypted key and plaintext OTP never leave the YubiHSM hardware.
The Workflow to request an access token from GPII cloud
First of all, the client secret is encrypted and stored on YubiHSM.
1. Before sending requests to GPII Cloud, the native GPII application provides everything except the client secret to the YubiHSM;
2. The YubiHSM decrypts the client secret, adds it to the request and sends it directly to GPII Cloud;
3. The YubiHSM receives the access token and passes it back to the native GPII application;
4. The access token is saved in the memory of the native GPII application and used.
- Client secrets never leave YubiHSM to prevent human from reading them.
- Expensive and might be an overkill.
- YubiHSM has the processing power to perform the decryption and comparison, then return the validation results to the host machine. However, its product website does not mention whether it's possible to send http requests from it and receive responses.
Possible Attacks A malicious application that simulates the process of providing everything except the client secret to the YubiHSM, triggering the YubiHSM to request an access token from GPII Cloud and hand it back to the attacker's application.
Track Machine Information
Description A typical solution used by software licensing: associate each native GPII application with its machine hardware information. The local flow manager generates a form of image code based upon the hardware. Each request requires the presence of both the client id/secret and this image code.
Possible Attacks Reading the source code reveals how the image code is generated; attackers can then use the algorithm to generate the code manually. This adds one more barrier for attackers but doesn't provide much extra security.
Usage To be used in combination with other protection methods to associate native GPII applications with physical machines.
Other Explored but Dropped Ideas
Digital Rights Management (DRM)
DRM Protection Methods
1. A common DRM encryption scheme provides an encryption key that works forever. In this case, the key must be tied to the ID number of the user's machine. The key will only decode the file when it's accessed from the computer it was originally installed on. Otherwise, the user could simply forward the key along with the encrypted software to everyone he knows.
This method leads to the same question of how to securely store the encryption key on the client's machine.
2. A Web-based permission scheme to prevent illegal use of the content. When a user installs the software, his computer contacts a license-verification server to get permission (the access key) to install and run a program. If the user's computer is the first to request permission to install this particular piece of software, the server returns the key. If the user gives the software to his friend and the friend tries to install it, the server will deny access. In this type of scheme, a user typically has to contact the content provider to get permission to install the software on another machine.
This method contacts a server to verify the license key at the first time. Using it for continuous client authorization at every request leads to the same question of how to securely store the key on the client's machine.
3. A less common DRM method is the digital watermark. The FCC is trying to require a "broadcast flag" that lets a digital video recorder know if it's allowed to record a program or not. The flag is a piece of code sent out with a digital video signal. If the broadcast flag says a program is protected, a DVR or DVD recorder won't be able to record it. This DRM proposal is one of the more disruptive ones out there, because it requires media and equipment that can read the broadcast flag. This is where Philips' Video Content Protection System (VCPS) format comes in. The VCPS technology reads the FCC broadcast flag and determines whether or not a device can record a program. A disc with unprotected video can play on any DVD player, but video with a broadcast flag will only record and play on VCPS-prepared players.
This method leads to the same question of storing and protecting the "broadcast flag" on the client's machine.
Transport Layer Security (TLS)
How TLS Works TLS Client Authentication, also known as two-way TLS authentication, consists of both browser and server sending their respective TLS certificates during the TLS handshake process. Just as you can validate the authenticity of a server by using the certificate and asking a well known Certificate Authority (CA) if the certificate is valid, the server can authenticate the user by receiving a certificate from the client and validating it against a third party CA or its own CA. To do this, the server must provide the user with a certificate generated specifically for him, assigning values to the subject so that these can be used to determine what user the certificate should validate. The user installs the certificate on a browser and then uses it for the website. (See OWASP Authentication Cheat Sheet - TLS Client Authentication)
Leads to the same question of storing and protecting the TLS certificate on the client's machine.
- TLS protocol specification
- OWASP Authentication Cheat Sheet - TLS Client Authentication
- The consequence of a stolen client-side private key
Results
The first step is to restrict users' access and privileges on public machines to disallow:
- scripting and running unauthorized programs;
- copying, downloading then running unauthorized applications;
- the use of command line tools: Powershell and cmd on Windows, terminal on Mac and Unix;
- the use of programming tools.
The second is to use https for all http requests and GPII website.
Once these prerequisites are satisfied, some approaches are feasible:
1. OS-own security storage for keys
- Requires fairly strict restrictions on user's access to the computer
- Easy to implement
2. A dedicated process/service running on a different account for storing the GPII client secret
- Requires the protection on admin access.
- Fewer restrictions on users' access to the computer
- A good amount of implementation work
3. Yubikey OTP (One Time Password) for the shared computer environments
- A good approach for private or shared environments that only have 1 or 2 GPII enabled devices
- Not good for an environment that has many GPII-enabled devices, since every device will have its own Yubikey; someone needs to be responsible for carrying these Yubikeys and matching/inserting them into each machine.