Potential Work Items
Benefits of such a policy include:
1. Users could read the policy once and determine their own standards for what terms they will accept. Currently, most users do not read policies at all because each is unique, lengthy, and written in complex legalese. A standard policy could be appended with plain-language explanations of its various terms and conditions. This would facilitate understanding and, once the model is in widespread use, make it worthwhile for users to review it.
2. Thereafter, users need only verify that other sites employing the model policy conform to their standards.
3. Standardization could lend itself to iconic representation of terms, which would further simplify end-user review.
4. Standardization would facilitate competition among offerors. If one site/vendor uses the model policy and conforms to the user's preferences, it may be preferred over another site/vendor that has a custom policy. If two sites/vendors both use the model policy and offer similar services, the user can compare their selections of standard terms to choose between them.
5. By establishing a basis for competition among sites/vendors on the stringency of the privacy terms they offer, overall privacy can be expected to increase.
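The verification step in points 2 and 4 can be sketched in code. This is a minimal illustration only: the term names and stringency levels below are hypothetical placeholders, not terms from any actual model policy.

```python
# Each standard term takes one of a fixed set of stringency levels;
# higher numbers are assumed to be more privacy-protective.
STRINGENCY = {"none": 0, "opt_out": 1, "opt_in": 2, "prohibited": 3}

def conforms(site_policy: dict, user_standards: dict) -> bool:
    """Return True if every term the user cares about is at least as
    stringent as the user's minimum acceptable level."""
    return all(
        STRINGENCY[site_policy.get(term, "none")] >= STRINGENCY[minimum]
        for term, minimum in user_standards.items()
    )

# Hypothetical user standards and site policies expressed in standard terms.
user = {"third_party_sharing": "opt_in", "data_retention_limit": "opt_out"}
site_a = {"third_party_sharing": "prohibited", "data_retention_limit": "opt_in"}
site_b = {"third_party_sharing": "opt_out", "data_retention_limit": "opt_in"}

print(conforms(site_a, user))  # True
print(conforms(site_b, user))  # False: sharing is only opt-out
```

Because every site expresses its policy in the same fixed vocabulary, the comparison reduces to a mechanical check the user (or a user agent) performs once per site, rather than a fresh reading of bespoke legalese.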
The proposal is that P3 collect examples of consent anti-patterns: real instances of poor practice in the collection of user data, presumed consent, making service provision conditional on acceptance of privacy-hostile terms, and so on. These instances would be recorded (not with the intent of alienating the service provider concerned).
Out of the process of collection and categorization should come a list of common mistakes, to which P3 could then propose alternatives.
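The collection-and-categorization step could look something like the sketch below. The categories and recorded instances are invented examples, assumed purely for illustration.

```python
from collections import Counter

# Each recorded instance: (anti-pattern category, anonymized description).
# All entries here are hypothetical examples.
instances = [
    ("presumed_consent", "pre-ticked marketing checkbox"),
    ("conditional_service", "account creation requires ad tracking"),
    ("presumed_consent", "consent inferred from continued browsing"),
    ("poor_collection", "form collects data beyond stated purpose"),
]

# Tally instances by category to surface the most common mistakes.
common_mistakes = Counter(category for category, _ in instances)
for category, count in common_mistakes.most_common():
    print(f"{category}: {count}")
```

Ranking categories by frequency gives P3 an evidence-based ordering for which anti-patterns most urgently need proposed alternatives.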
Privacy Risk Assessment
P3wg can make a valuable contribution to privacy by crafting a Privacy Risk Assessment. To date, this type of assessment has not been done, even though it is fundamental to any privacy risk analysis. Current discussions of risk rely on citing examples of breaches but have not evaluated which data items subject a person to the most risk.
Assessing risk at the data item level (e.g., first name, last name, street address, social insurance number) would allow us to prioritize data items according to risk and provide Identity Providers and Relying Parties with a basis for optimizing their selection of data items to meet both their needs for assurance and the user's need for privacy. For example, we know from experience that certain data items (e.g., last name, ethnicity, street address) have been used to cause physical harm (e.g., kidnapping, murder, genocide). Other data items (e.g., US social security number) have been used to cause financial harm (e.g., stealing of banking and credit card accounts). Other impacts include reputational harm and harm to national security.
The risk assessment would begin by attempting to identify the impacts associated with each data item and their associated likelihoods. Once a method for collecting/measuring these data is devised and the information is collected, we would then need to find ways to categorize/summarize our findings to make them useful. For example, we may find that certain data items should not be used for identification purposes because they pose too great a risk. Having data behind such a conclusion aids in convincing Identity Providers and Relying Parties to use alternative selections.
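The impact-times-likelihood prioritization described above can be sketched as follows. The impact scores (an assumed 1-5 severity scale) and likelihoods are invented placeholders; the real assessment would derive them from collected incident data.

```python
# Hypothetical per-data-item risk inputs: impact severity (assumed 1-5 scale)
# and assumed likelihood of misuse. These numbers are illustrative only.
RISK_DATA = {
    "street_address":        {"impact": 5, "likelihood": 0.02},  # physical harm
    "us_social_security_no": {"impact": 4, "likelihood": 0.10},  # financial harm
    "last_name":             {"impact": 3, "likelihood": 0.05},
    "first_name":            {"impact": 1, "likelihood": 0.05},
}

def risk_score(item: str) -> float:
    """Expected risk: impact severity weighted by likelihood."""
    d = RISK_DATA[item]
    return d["impact"] * d["likelihood"]

# Prioritize data items by expected risk, highest first.
ranked = sorted(RISK_DATA, key=risk_score, reverse=True)
print(ranked)
```

Even with placeholder numbers, the exercise shows how the assessment would let an Identity Provider or Relying Party compare candidate data items and substitute a lower-risk item that meets the same assurance need.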