[WG-UMA] Delegating access management to custodians

Paul C. Bryan email at pbryan.net
Thu Feb 18 14:52:51 EST 2010


Per our discussion on the conference call today, the ball is in my court
to address the issue of delegation, especially with regard to the
pending use case in the Wiki.

The documented use case

First, a play-by-play commentary of the use case as currently
documented:


> Alice, a 14 year old girl, wants to have an account on a popular
> social networking application.


For now, let's call this application FaceSpace. FaceSpace is the host in
our terminology.


> She wants to create a network of her friends with whom she wants to
> share photos and discuss her hobbies. She wants to keep in touch with
> them and does not want to be left behind with new technologies that
> have been used by her peers for some time now.
> 
> When Alice sets up the account at a popular social networking
> application she needs to provide a variety of information. This
> includes providing information about her age. When Alice states that
> she’s 14 then the application detects that she is very young. It then
> informs her that she will need parental control over all the
> information that she submits and wishes to share with other users of
> the application. What the application means is that Alice will not be
> able to control information dissemination by herself but will rely on
> an adult (e.g. her father) to make access control decisions. Alice
> then asks her father Bob for some help with setting up her account and
> setting up the required parental control functionality.


The responsibility for detecting the need for delegation falls squarely
on the shoulders of the host. So far, so good. Some clarifications I'd
like to ask for:

1. Does this adult need to be the legal guardian of Alice, or can any
adult fit the bill? If the former, how will he prove this? Just by
stipulating it?

2. Is this adult accepting liability for Alice's actions?

3. Is this adult accepting liability for any injury Alice suffers?

4. Will this adult need to stipulate this when Alice registers?

5. Will this adult need to register at FaceSpace himself?

6. Will this adult have to agree to terms at FaceSpace before he can
accept responsibility for Alice's account?

7. If Alice pretends to be this adult, does this immunize FaceSpace from
any liability, should Alice suffer any injury?


> Bob is happy that his daughter will be able to communicate with her
> friends but he is concerned with what information will be released and
> how this information might be used by legitimate or malicious users.
> He knows that the social networking application has been certified to
> support parental control and allows third party access control systems
> to be used for that purpose.
> 
> Bob is already using a specialized Authorization Manager – CopMonkey –
> for his own purposes. He uses this component to define access control
> policies for his personal data (home address, telephone number) so
> that he can point other Web services to pull such data as required
> (e.g. during registration process). Moreover, he protects his online
> calendar service using CopMonkey and he established necessary
> relationships between his calendar feeds and his Visa payment service.
> He wants to be sure that Visa knows where Bob is and can correlate
> such data with data regarding his credit and debit card usage.
> 
> Bob decides that CopMonkey will be perfect for parental control over
> Alice's information. Therefore, he introduces this AM to the social
> networking application (establishes a trust relationship between those
> two services). As such, he is able to control how different
> information published by his daughter is accessed by her friends or
> other users of the application.


Does he perform this introduction during registration of Alice's account
at FaceSpace?


> When the account is set up then Alice is able to use the social
> networking application just as any other user. She writes comments
> about her day, posts links to interesting movies. Additionally, she
> uploads some of her photos and short video clips with her friends. She
> knows that her father is very concerned with privacy and that all her
> published data is not shared automatically with other users of the
> application. She’s not sure how that works but leaves this issue to
> her father – after all he is more proficient in defining privacy and
> security settings than she is.


She probably also has some idea that this implies he can see all of the
information she's posting, giving her an early introduction to the
surveillance/big-nanny society. Soon, she'll probably go looking for a
venue her father doesn't monitor.


> Alice uploads a photo from her birthday party and wishes to share it
> with her friends on her social networking application. When the upload
> is finished, she clicks on the “Share” link next to the photo. As
> access control is delegated to Bob’s preferred Authorization Manager,
> the social networking application says that access control rules
> cannot be defined but a request has been sent to CopMonkey.


There's nothing in UMA yet that's designed to allow a host to notify an
AM of a resource that "needs to be protected". Is this required?
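
For concreteness, here is a rough sketch of what such a notification
could look like if we decided it were needed. To be clear, the
endpoint, payload fields, and token handling below are all invented for
illustration; none of this appears in any UMA draft.

    # Hypothetical sketch only; not part of any UMA draft. The endpoint
    # path and JSON fields are invented for illustration.
    import requests

    AM_BASE = "https://copmonkey.example.com"        # assumed AM location
    HOST_TOKEN = "token-from-host-am-introduction"   # assumed credential

    def notify_resource_created(resource_uri, owner_id):
        """Tell the AM a newly created resource is awaiting policy."""
        response = requests.post(
            AM_BASE + "/resources/pending",          # invented endpoint
            json={"resource": resource_uri, "owner": owner_id},
            headers={"Authorization": "Bearer " + HOST_TOKEN},
            timeout=10,
        )
        response.raise_for_status()

    # FaceSpace would call this when Alice uploads her photo, giving
    # CopMonkey something to show Bob ("policy needed") at next login.
    notify_resource_created(
        "https://facespace.example.com/alice/photos/birthday.jpg",
        owner_id="alice",
    )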


> Bob knows his daughter has uploaded a photo. Therefore, he logs in to
> his Authorization Manager and sees that he must define terms and/or
> access control policies for this newly created Web resource. He
> specifies terms that only those who can assert to be friends of Alice
> can access the photo. CopMonkey will require an assertion from the
> social networking application. Moreover, Bob decides to apply
> additional constraints in order to prevent those photos being passed on
> to other people. Therefore, before accessing the photo it is necessary
> to confirm that it will not be sent to other possibly non-legitimate
> users.
> 
> Jane, one of Alice’s best friends, heard that a new photo from a
> birthday party has been uploaded and wishes to see it immediately. She
> logs in to her social networking application and clicks on Alice’s
> photo album link. She can see a link to the newly uploaded photo. When
> she clicks the link, the application detects that a request has been
> made to a UMA-protected resource. The “magic happens” and Jane is
> redirected to a page stating terms that she has to meet in order to
> see this picture. She ticks the box to assert that she will not send
> the photo to other people. After all, she only wants to see it for
> herself. Moreover, she sees that she needs to assert that she’s a
> friend of Alice. She clicks on the “Validate” link next to this term
> and she gets redirected back to the Social Networking Application. This
> app states that CopMonkey wants to get such an assertion. She confirms
> that immediately and the assertion is sent back to CopMonkey. After
> the entire process is done, CopMonkey gets the required proofs and can
> issue a token that is later used by Jane to access Alice’s photo. Jane
> does not know anything about that. She only knows she had to confirm
> she’s Alice’s friend and that she won’t share the picture.


1. It would be good to use consistent terminology. We haven't been
saying "assertions"; we've been using the word "claims".

2. In the current protocol, a token is not issued that is later used by
Jane after sufficient claims are supplied.

3. How is the claim-of-a-friend verified? Is it simply self-asserted?
Is Bob going to allow anyone who claims to be Alice's friend, without
proof, to have access? (See the sketch after this list.)
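
To make the distinction in point 3 concrete, here is a toy illustration
of a bare self-asserted claim versus one backed by evidence from the
host, which actually knows Alice's friend list. The claim structures
are invented, not a format from any UMA draft.

    # Toy illustration; these claim structures are invented, not a
    # claims format from any UMA draft.
    self_asserted = {
        "type": "friend-of",
        "subject": "jane",
        "object": "alice",
        # No evidence: anyone could present this claim.
    }

    host_backed = {
        "type": "friend-of",
        "subject": "jane",
        "object": "alice",
        # Hypothetical signed statement from FaceSpace, which actually
        # knows Alice's friend list.
        "evidence": {"issuer": "https://facespace.example.com",
                     "signature": "..."},
    }

    def claim_acceptable(claim, require_evidence):
        """Bob's policy decides whether a bare self-assertion suffices."""
        return (not require_evidence) or ("evidence" in claim)

    print(claim_acceptable(self_asserted, require_evidence=True))  # False
    print(claim_acceptable(host_backed, require_evidence=True))    # True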


> Over time, Alice learns that allowing her father to have an impact on
> the security of the resources that she shares with her friends is not a
> bad thing. She feels safe and knows that everything she submits to her
> social networking application is secure.


This seems very different from my parenting technique. :-\


> Over time, Alice also learns more about security and sees what
> information is prevented from being shared with her friends. In the
> future she hopes to make better security decisions by herself. At some
> point she'll be fully responsible for controlling access to her
> resources. Her father Bob is also happy as he knows that his daughter
> can communicate with her friends in a safe and secure way.


I'm not sure these passages above provide much utility in driving our
protocol specification.


> He checks his Authorization Manager on a daily basis and composes
> access control policies and defines terms as new photos and other
> resources are uploaded by his daughter. Moreover, he audits all access
> requests and sees how Alice's friends access her photos and video
> clips. He hasn't noticed any abuses and is confident in whatever his
> daughter does. After all, he's fully responsible for her privacy and
> security and he puts much effort into ensuring that his daughter stays
> safe and still enjoys the benefits of social networking on the Web.


Okay, so, some additional points about the use case flow:

1. There's no discussion of Alice actually controlling access to
resources. Bob's doing all of the policy work here.

2. Somehow, Bob has to get involved in Alice's registration process, and
establish the relationship with his AM.


Okay, now the assumptions I've made so far about the UMA protocol with
regard to delegation...

MAC vs. DAC

I have spoken about mandatory vs. discretionary access control with
regard to data ownership and delegation, and I think it's worth
fleshing out what I mean in more detail.

Mandatory access controls (MACs) are unconditionally enforced and
cannot be overridden by discretionary access controls. Discretionary
access controls (DACs) can further restrict what existing MACs permit,
but can never override them.

You could in theory have a hierarchy of access controls. Let's say for
example my employer, my department, and ultimately me.

My employer is exercising mandatory access control relative to all. My
department exercises discretionary access control relative to my
employer, but mandatory relative to me. I exercise discretionary access
control relative to all. The point here is that mandatory-ish vs.
discretionary-ish is not absolute; it's always relative to something.
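
A toy sketch of that layering, with all names invented: a request is
permitted only if every layer permits it, so a discretionary layer can
restrict further, but can never grant what a layer above it has denied.

    # Toy sketch of layered access control; all names are invented.
    # A request passes only if every layer permits it: a discretionary
    # layer can further restrict, but never override a denial from a
    # layer above it.

    def employer_policy(req):    # mandatory relative to all
        return req["resource"] != "payroll-db"

    def department_policy(req):  # discretionary vs. employer, mandatory vs. me
        return req["user"] in {"me", "boss"}

    def my_policy(req):          # discretionary relative to all
        return req["purpose"] == "work"

    LAYERS = [employer_policy, department_policy, my_policy]

    def decide(req):
        return all(layer(req) for layer in LAYERS)

    req_ok = {"resource": "wiki", "user": "me", "purpose": "work"}
    req_bad = {"resource": "payroll-db", "user": "me", "purpose": "work"}
    print(decide(req_ok))   # True
    print(decide(req_bad))  # False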

The UMA protocol does not distinguish between any of these forms of
access control. Between host and AM, we specify only how a host is
introduced to an AM and how a host requests policy decisions from an
AM; a sketch of the latter follows.
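
In this sketch, the endpoint and payload are invented, not actual UMA
wire details; it only illustrates the shape of a host-side
policy-decision request.

    # Illustrative sketch of a host asking its AM for a policy decision;
    # the endpoint and payload are invented, not actual UMA wire details.
    import requests

    def am_permits(decision_url, host_token, resource, requester_token):
        """Ask the AM whether this requester may access this resource."""
        response = requests.post(
            decision_url,
            json={"resource": resource, "requester_token": requester_token},
            headers={"Authorization": "Bearer " + host_token},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("decision") == "permit"

    # The host would gate access to Alice's photo on this answer, e.g.:
    # am_permits("https://copmonkey.example.com/decision", host_token,
    #            "https://facespace.example.com/alice/photos/birthday.jpg",
    #            requester_token="...")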

Host's responsibility

So, as pointed out early in the commentary, the host realized it had
responsibilities as to how it must govern access to resources. This is
application-specific and jurisdiction-specific. It's up to the host, at
its complete discretion, to determine which AMs it will engage with,
how many, and when to use which. The protocol intentionally doesn't get
involved in this aspect.

If the host is run by my employer, then it will either enforce
mandatory policies itself or use an AM to enforce them. We shouldn't
know, or care. If I, as an employee, am offered the chance to exercise
discretionary access control over resources, that control doesn't
override policies the employer has already put in place.

In the case of Alice and Bob, suppose the host chooses to treat Bob's
AM as mandatory access control (because FaceSpace wants to avoid legal
liability), and later, when Alice comes along and adds her own AM,
treats her policies as discretionary, relatively speaking. No problem:
this falls strictly in the domain of the host, and shouldn't have any
impact on our current protocol.

Paul
