
Potential Work Items


  • Title
  • Contributor (name/affiliation)
  • Scope of Work
  • Desired output (e.g., position paper, technical paper)
  • Intended Audience (e.g., submission to a policy body, publication on the P3 Working Group site, etc.)
  • Editor, co-editor, contributors
  • Target date for completion

Model Privacy Policy

Contributor

Jeff Stollman / Iain Henderson

Scope of Work

P3wg can make a valuable contribution to privacy by crafting a model Privacy Policy. This model policy would consist of multiple-choice options for the various standard elements of a privacy policy (e.g., what information we collect, with whom we share the information, how we protect the information). This would allow sites adopting the model policy to rapidly craft comprehensive policies. More importantly, the use of a standard model would have extensive benefits for users asked to accept the policy.

Benefits of such a policy include:

1. Users could read the policy once and determine their own standards for what terms they will accept. Currently, most users do not read policies at all because each is unique, lengthy, and written in complex legalese. A standard policy could be accompanied by plain-language explanations of its various terms and conditions. This would facilitate understanding and, once the model is in widespread use, make it worthwhile for users to review it.
2. Thereafter, users need only verify that other sites employing the model policy conform to their standards.
3. Standardization could lend itself to iconic representation of terms, which would further simplify end-user review.
4. Standardization would facilitate competition among offerors. If one site/vendor uses the model and conforms to the user's preferences, it may be preferred over another site/vendor that has a custom policy. If two sites/vendors use the model policy and offer similar services, the user can compare their selections of standard terms to choose between them.
5. By establishing the stringency of privacy terms as a basis for competition among sites/vendors, overall privacy can be expected to increase.
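To make the fixed-menu idea concrete, here is a minimal sketch of how the model policy's multiple-choice elements might be represented in machine-readable form, and how a user's once-established standards could be checked automatically against a site's selections. All element names and menu values are hypothetical illustrations, not proposed terms.

```python
# Hypothetical fixed menus for standard policy elements, ordered from
# strictest (index 0) to most permissive. Purely illustrative.
POLICY_MENUS = {
    "data_collected": ["none", "account_only", "account_and_usage"],
    "sharing": ["never", "service_providers_only", "third_parties"],
    "retention": ["session_only", "one_year", "indefinite"],
}

def conforms(site_policy, user_limits):
    """Return True if every site selection is at or below the user's
    maximum acceptable option in each menu (lower index = stricter)."""
    for element, choice in site_policy.items():
        menu = POLICY_MENUS[element]
        if menu.index(choice) > menu.index(user_limits[element]):
            return False
    return True

site = {"data_collected": "account_only",
        "sharing": "service_providers_only",
        "retention": "one_year"}
user = {"data_collected": "account_and_usage",
        "sharing": "service_providers_only",
        "retention": "one_year"}
print(conforms(site, user))  # True: each selection is within the user's limits
```

Because the menus are shared, a user sets `user_limits` once (benefit 1) and every model-policy site can be checked mechanically thereafter (benefit 2); the same ordering could drive the iconic representation in benefit 3.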

Note (from Iain): the Information Sharing Group will be developing equivalent 'information sharing agreements' as seen from the individual's perspective, i.e. 'I will let you have data type X for purpose Y, subject to constraint Z'. If the two work groups collaborate, we would have icon-based, machine-readable policy agreements at both ends of the data-sharing pipe.
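The note's 'data type X for purpose Y, subject to constraint Z' pattern lends itself to a simple machine-readable form on the individual's end of the pipe. The sketch below assumes a plain tuple representation; the field names, data types, and purposes are hypothetical.

```python
from collections import namedtuple

# One individual-side sharing rule: "I will let you have data type X
# for purpose Y, subject to constraint Z". Names are illustrative.
SharingRule = namedtuple("SharingRule", ["data_type", "purpose", "constraint"])

user_rules = [
    SharingRule("email", "account_recovery", "no_third_party_disclosure"),
    SharingRule("postcode", "service_delivery", "delete_after_fulfilment"),
]

def permitted(request, rules):
    """A site request (data_type, purpose) is permitted only if a user
    rule covers it; the rule's constraint travels with the grant.
    Returns the constraint, or None if no rule matches."""
    for rule in rules:
        if rule.data_type == request[0] and rule.purpose == request[1]:
            return rule.constraint
    return None

print(permitted(("email", "account_recovery"), user_rules))  # no_third_party_disclosure
print(permitted(("email", "marketing"), user_rules))         # None
```

With the model policy expressed in the same vocabulary on the enterprise side, both ends of the data-sharing pipe could be matched automatically in this way.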

Desired Output

Output 1

Privacy policy template(s) that can be used by enterprises collecting Personally Identifiable Information (PII), that cover the most common policy considerations and offer a fixed menu of choices.

Output 2

Consumer guidance on the impact of their decisions in accepting/rejecting the various terms of the privacy policy.

Intended Audience

Output 1

Enterprises collecting Personally Identifiable Information (PII).

Output 2

Consumers (end users) whose PII is collected.


Editor, co-editor, contributors

Jeff Stollman

Target date for completion

Output 1

The first draft privacy policy template will be developed by the end of Q1, 2010.

Output 2

Draft consumer guidance for the first privacy template will be developed 60 days after the template is completed.

Consent and Anti-Patterns

The proposal is that P3 collect examples of consent anti-patterns: where we see real instances of poor practice in the collection of user data, presumed consent, service provision made conditional on acceptance of privacy-hostile terms, etc., we would record these instances (not with the intent of alienating the service provider concerned).

In the first instance, we will need to collect some examples of policies which are not necessarily "poor", but which serve as discussion points for the topic.

Hopefully out of the process of collection and categorization would come a list of common mistakes. P3 could then propose alternatives.

A link to a page with Consent and AntiPattern examples

Privacy Risk Assessment

P3wg can make a valuable contribution to privacy by crafting a Privacy Risk Assessment. To date, this type of assessment has not been done, even though it is fundamental to any privacy risk analysis. Current discussions of risk rely on citing examples of breaches, but have not evaluated which data items subject a person to the most risk.

To date, the only risks that have been considered in the area of identity are the initial vetting risk to Identity Providers and the authentication risk to Relying Parties. Because Identity Providers and Relying Parties are commonly large enterprises (commercial and government), they have had the resources to invest in assessing their exposures. This has not been the case for the public at large, who are typically the Users of services.

Assessing risk at the data item level (e.g., first name, last name, street address, social insurance number) would allow us to prioritize data items according to risk and provide Identity Providers and Relying Parties a basis for optimizing their selection of data items to meet both their needs for assurance and the user's need for privacy. For example, we know from experience that certain data items (e.g., last name, ethnicity, street address) have been used to cause physical harm (e.g., kidnapping, murder, genocide). Other data items (e.g., US Social Security number) have been used to cause financial harm (e.g., theft from banking and credit card accounts). Other impacts include reputational harm and threats to national security.

The risk assessment would begin by attempting to identify the impacts associated with each data item and their associated likelihoods. Once a method for collecting/measuring these data is devised and the information is collected, we would then need to find ways to categorize/summarize our findings to make them useful. For example, we may find that certain data items should not be used for identification purposes because they pose too great a risk. Having data behind such a conclusion aids in convincing Identity Providers and Relying Parties to use alternative selections.

One of the challenges to this effort is the lack of hard data with which to measure the impact and probability of privacy breaches. But there are techniques to overcome this and still develop ordinal measures of risk, which can be replaced by hard numbers as data become available.
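As one possible realization of the ordinal approach, each data item could be scored on ordinal impact and likelihood scales and ranked with a standard risk matrix. The items, scales, and scores below are purely illustrative placeholders, not assessed values.

```python
# Ordinal scales (1 = low, 3 = high). Scores are illustrative
# placeholders, not findings of the assessment.
items = {
    # data item: (impact, likelihood)
    "last_name":              (2, 3),
    "street_address":         (3, 2),
    "social_security_number": (3, 3),
    "first_name":             (1, 3),
}

def risk_level(impact, likelihood):
    """Classic ordinal risk matrix: multiply the levels and bucket the product."""
    score = impact * likelihood
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Rank data items from highest to lowest combined risk.
ranking = sorted(items, key=lambda i: items[i][0] * items[i][1], reverse=True)
for item in ranking:
    print(item, risk_level(*items[item]))
```

The ordinal buckets can later be replaced with measured frequencies and monetized impacts as hard data become available, without changing the ranking machinery.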

Performing this assessment allows us to answer questions such as:

1. Is the privacy risk of establishing a national ID program in country X worth the reduction in the risk of terrorism? Or does a national ID program in country X actually increase the risk of terrorism through increased risk to privacy?
2. Can the privacy risk exposures to Company Y of holding various Personally Identifiable Information (PII) on its clients be reduced by selecting different data items for authentication and/or marketing purposes?
3. What level of regulatory penalties would be effective in compelling enterprises to better protect employee/customer PII?
4. Will the resulting risk reduction justify Company Z's investment in implementing better PII protection policies?
