Benefits of such a policy include:
1. Users could read the policy once and determine their own standards for what terms they will accept. Currently, most users do not read policies at all because each one is unique, lengthy, and written in complex legalese. A standard policy could be appended with plain-language explanations of the various terms and conditions. This would facilitate understanding and – once the model is in widespread use – make it worthwhile for users to review it.
2. Thereafter, users will only need to verify that other sites employing the model policy conform to their standards.
3. Standardization could lend itself to iconic representation of terms which would further simplify end-user review.
4. Standardization would facilitate competition among offerors. If one site/vendor uses the model policy and conforms to the user's preferences, it may be preferred over another site/vendor that has a custom policy. If two sites/vendors use the model policy and offer similar services, the user can compare their selections of standard terms to choose between them.
5. By establishing a basis for sites/vendors to compete on the stringency of the privacy terms they offer, overall privacy can be expected to increase.
The proposal is that P3 collect examples of consent anti-patterns: where we see real instances of poor practice in the collection of user data, presumed consent, making service provision conditional on acceptance of privacy-hostile terms, and so on, we would record these instances (not with the intent of alienating the service provider concerned).
In the first instance, we will need to collect some examples of policies which are not necessarily "poor", but which serve as discussion points for the topic.
Hopefully, a list of common mistakes would emerge from the process of collection and categorization. P3 could then propose alternatives.
A link to a page with Consent and AntiPattern examples
P3wg can make a valuable contribution to privacy by crafting a Privacy Risk Assessment. To date, this type of assessment has not been done, even though it is fundamental to any privacy risk analysis. Current discussions of risk rely on citing examples of breaches, but have not evaluated which data items subject a person to the most risk.
To date, the only risks that have been considered in the area of identity are the initial vetting risk to Identity Providers and the authentication risk to Relying Parties. Because Identity Providers and Relying Parties are commonly large enterprises (commercial and government), they have had the resources to invest in assessing their exposures. This has not been the case for the public at large, who are typically the Users of services.
Assessing risk at the data item level (e.g., first name, last name, street address, social insurance number) would allow us to prioritize data items according to risk and provide Identity Providers and Relying Parties a basis for optimizing their selection of data items to meet both their needs for assurance and the user's need for privacy. For example, we know from experience that certain data items (e.g., last name, ethnicity, street address) have been used to cause physical harm (e.g., kidnapping, murder, genocide). Other data items (e.g., US social security number) have been used to cause financial harm (e.g., theft from banking and credit card accounts). Other impacts include reputational harm and harm to national security.
The risk assessment would begin by attempting to identify the impacts associated with each data item and their associated likelihood. Once a method for collecting/measuring these data is devised and the information is collected, we would then need to find ways to categorize/summarize our findings to make them useful. For example, we may find that certain data items should not be used for identification purposes because they pose too great a risk. Having data behind such a conclusion aids in convincing Identity Providers and Relying Parties to use alternative selections.
One of the challenges to this effort is the lack of hard data with which to measure the impact and probability of privacy breaches. But there are techniques to overcome this and still develop ordinal measures of risk, which can be replaced by hard numbers as data become available.
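As a discussion aid, the ordinal approach described above can be sketched in a few lines of Python. The data items, harm categories, and ratings below are purely illustrative assumptions (not findings of the working group), and the scoring rule (risk = impact × likelihood, taking the worst harm category per item) is just one of several reasonable choices:

```python
# Hypothetical sketch of an ordinal privacy-risk assessment at the data-item
# level. All ratings are illustrative placeholders, to be replaced by
# collected data as it becomes available.

# Ordinal scales (1 = low, 3 = high) stand in for hard numbers.
IMPACT = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

# Each data item maps harm categories to (impact, likelihood) ratings.
data_items = {
    "last name":      {"physical": ("medium", "possible"),
                       "financial": ("low", "possible")},
    "street address": {"physical": ("high", "possible"),
                       "financial": ("low", "rare")},
    "SSN":            {"physical": ("low", "rare"),
                       "financial": ("high", "likely")},
}

def risk_score(item):
    """Ordinal risk = worst (impact x likelihood) across harm categories."""
    return max(IMPACT[i] * LIKELIHOOD[l]
               for i, l in data_items[item].values())

# Prioritize data items from highest to lowest risk.
ranked = sorted(data_items, key=risk_score, reverse=True)
print(ranked)  # ['SSN', 'street address', 'last name']
```

Even with made-up numbers, this kind of ranking illustrates how the assessment could give Identity Providers and Relying Parties a concrete basis for preferring lower-risk data items.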
Performing this assessment allows us to answer questions such as:
1. Is the privacy risk of establishing a national ID program in country X worth the reduction in the risk of terrorism? Does a national ID program in country X actually increase the risk of terrorism through increased risk to privacy?
2. Can the privacy risk exposures to Company Y of holding various Personally Identifiable Information (PII) on its clients be reduced by selecting different data items for authentication and/or marketing purposes?
3. What level of regulatory penalties would be effective in compelling enterprises to better protect employee/customer PII?
4. Will the resulting risk reduction justify Company Z's investment in implementing better PII protection policies?