[WG-OTTO] FW: [WG-UMA] Legal Use Case - User Managed vs. Controlled Access

Mike Schwartz mike at gluu.org
Wed Aug 19 17:43:19 CDT 2015


Keith,

We certainly won't be the last group not to solve healthcare! I need 
these tools to help my customers who need a higher assurance trust 
model.

What aspects concern you about Adrian's comments?

thx,

Mike


On 2015-08-19 15:56, Keith Hazelton wrote:
> This post to the UMA WG challenges our OTTO presumption that
> federations are an inescapable precondition for the solution. The
> specific challenge comes from what Adrian says about the health record
> domain. —k
> 
> --
> email & jabber: keith.hazelton at wisc.edu
> calendar: http://go.wisc.edu/i6zxx0 [4]
> 
>  From: <wg-uma-bounces at kantarainitiative.org> on behalf of Adrian
> Gropper
> Date: Wednesday, August 19, 2015 at 14:39
> To: Eve Maler
> Cc: "wg-uma at kantarainitiative.org UMA"
> Subject: Re: [WG-UMA] Legal Use Case - User Managed vs. Controlled
> Access
> 
> Eve,
> 
> I really don't see how to introduce UMA in healthcare or anywhere else
> if the use-case is as in the university e-transcript case study. That
> model is unrealistic, at least in healthcare:
> 
>  	* Presumes adoption of shared data models and scopes (the HEAR in
> the demo) to a practical extent for authorization management. FHIR is
> moving in that direction and promises standardization for interchange
> purposes but authorization is a higher bar because it presumes that
> Alice's comprehension, state, and federal data protection mandates
> (42CFR) will align with the interchange standards. There is no reason
> to believe this alignment will happen. FHIR is governed by a group of
> industry peers for their interchange purposes. Authorization is not
> necessarily on their agenda. My example is healthcare specific, but I
> suspect it applies to most other verticals, probably even education.
> 
>  	* Presumes adoption of identity and other federations. There are
> absolutely no ID federations in healthcare and none are even on the
> horizon. Healthcare may be a more extreme case but we see similar
> behavior in many other industries that serve consumers. In finance,
> consumer ID federation is limited to small transactions at ATMs.
> Education is a misleading outlier because the participants are peer
> higher education institutions. ID federation will happen sooner or
> later but the path is far from clear and UMA should not wait if we
> want real-world adoption for IoT and selected verticals.
> 
>  	* The outsourced model for general purpose authorization management
> is currently the Apple App Store and they have no reason to adopt
> standards in the near term. We see the Apple authorization domain
> moving from the regular apps, to HealthKit apps, to payment, and now
> to HomeKit. UMA will enter the market as the standard for businesses
> that want to compete with Apple's strong privacy protections.
> Substitutability of the Authorization Server will be essential to
> competing with Apple and other walled gardens of authorization.
> 
> I'm not as close to other verticals as I am to healthcare but it seems
> to me that the evidence points in the direction of dynamic
> registration of the UMA Authorization Server first, followed by
> dynamic registration of the client second. Although I'd like to see
> every implementation of UMA include OIDC by default, like MITRE ID
> Connect does, the more we rely on federation of identity and standard
> authorization data models, the less likely we are to succeed.
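> 
> To make that ordering concrete, here is a rough Python sketch of the
> flow I have in mind, assuming an UMA 1.0 style discovery document and
> OAuth dynamic client registration; the endpoint and field names are
> from memory and may not be exact, and the hostnames are invented:
> 
>     import requests
> 
>     # Step 1: the RS learns which AS Alice has chosen and pulls its UMA
>     # configuration document (the start of binding RS to AS).
>     # The hostname is hypothetical.
>     as_base = "https://as.alice-chooses.example"
>     uma_config = requests.get(
>         as_base + "/.well-known/uma-configuration").json()
> 
>     # Step 2: dynamic client registration (RFC 7591 style) against that
>     # AS, so no out-of-band federation agreement is required first.
>     registration = requests.post(
>         uma_config["dynamic_client_endpoint"],  # field name may differ
>         json={
>             "client_name": "Example health client",
>             "redirect_uris": ["https://client.example/callback"],
>             "grant_types": ["authorization_code"],
>         },
>     ).json()
> 
>     client_id = registration["client_id"]
>     client_secret = registration.get("client_secret")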
> 
> Adrian
> 
> On Wed, Aug 19, 2015 at 12:18 AM, Eve Maler <eve at xmlgrrl.com> wrote:
> 
>> I hear where you’re coming from, Adrian, but I don’t want to
>> leave the RS use case variants just yet. For our legal subgroup
>> purposes, I think they:
>> 
>> - Demonstrate that data provenance can be usefully known by the
>> recipient, without expensive digital signature solutions, by virtue
>> of the data (or APIs, anyway, for POST operations by the client)
>> residing authoritatively at some original resource server. This has
>> always been touted as a benefit of UMA; see this university
>> e-transcript case study [1] from Maciej.
>> 
>> - Demonstrate that Alice mostly doesn’t own the resource server;
>> she has an account on a resource server that someone else operates.
>> It’s really rare for an individual to run one, though nothing is
>> stopping Alice from doing it in cases where the data is
>> self-asserted. Most of my examples involve third-party-operated
>> RS’s. This can help us align the needs of (what the Binding
>> Obligations draft called) the Authorizing Party and Resource Server
>> Operator roles (and possibly others).
>> 
>> - May help us explore “data controller/processor” regulations
>> wrt UMA (though I’m guessing about this).
>> 
>> Now, when it comes to authorization servers, which is your
>> particular concern here, we could equally explore similar use case
>> variants, keeping the RS and client elements constant. E.g.:
>> 
>> What happens to the other parties’ adoption willingness,
>> liability, etc. when Alice:
>> 
>> * Chooses her own outsourced (e.g., “social”) AS?
>> * Runs her own AS in a cloud?
>> * Builds her own AS and runs it at home, hosted by her ISP?
>> * Something else?…
>> 
>> Eve
>> 
>> On 18 Aug 2015, at 7:46 PM, Adrian Gropper <agropper at healthurl.com>
>> wrote:
>> 
>> Eve,
>> 
>> You may be right that UMA does not inject a new "data by reference"
>> solution but your use-cases are completely different from mine and I
>> reach a very different conclusion.
>> 
>> In my use-cases, Alice owns her AS vs. all of yours where she owns
>> the RS.
>> 
>> The situation in healthcare has shown little value for Alice owning
>> her RS or outsourcing it. We call Alice's RS a Personal Health
>> Record (PHR). PHRs have failed spectacularly in the marketplace (I'm
>> responsible for $4.2 M and 7 years of that failed market myself)
>> because processing data from the PHR is very expensive for the
>> recipient client. The data has lost provenance (because digital
>> signatures are still uncommon) and it's always stale. Worst of all,
>> the "scope" problem is practically insoluble. The vast majority of
>> data has been munged through two scope filters: first when it was
>> grabbed from the source RS to the PHR and second when it goes from
>> the PHR to the client. The lack of a consistent data model for the
>> PHR as intermediary RS doesn't help either. The result of this scope
>> problem is twofold. First, because the in and out scopes don't match
>> in the temporal sense, the PHR has a lot of redundancy and lacks the
>> authority (such as a professional license) to eliminate the
>> redundancy. Second, and much more expensive, the client that gets
>> data from the PHR receives a lot of abnormal results that it did not
>> order and now has the liability of dealing or not dealing with these
>> abnormalities. No doctor is paid to deal with this kind of thing and
>> no patient or payer wants to have repeat follow-up for things that
>> have already been addressed in a prior context.
>> 
>> The reason UMA is going to take over healthcare is because it solves
>> all of the problems of PHRs as intermediaries.
>> 
>> Why UMA and not health information exchanges (HIE)? States and the
>> feds have spent more than a decade and many $Billions trying to map
>> the interoperability problem onto a "trusted" intermediary called a
>> HIE. Some of these HIEs act as an RS, transacting the data by value,
>> and have most of the same issues as the PHR above. Many HIEs, however,
>> have adopted the "by reference" model and only manage consent to
>> participate, discovery, and authorization for access. This maps into
>> the AS role in UMA, with the AS operated by a "trusted"
>> institution, the HIE, as part of a federation with RSs and clients.
>> 
>> The problem with the institutional HIE as AS is different from that of
>> the PHR or HIE "by value" approach: it's _governance_. When it comes
>> to data about human beings, the governance of the AS intermediary
>> may be impossible. The reason is that society is not well equipped
>> to govern activities related to unlicensed actors. Patients are
>> unlicensed actors. This governance problem first shows up as
>> difficulty deciding whether to use an "opt-in" or an "opt-out"
>> consent model for participation in the HIE. Then it shows up in
>> trying to federate access to the HIE over broad ranges of clients
>> ranging from federal facilities (the VA, Medicare), state
>> facilities, multi-$Billion hospitals, solo MDs in another state,
>> nursing homes, pharmacies, home health aides, .... All of these are
>> potential clients of the HIE and federations of such strange
>> bedfellows are difficult to govern. It gets worse when you add IoT.
>> 
>> My thesis is that the only solution is to enable Alice to build,
>> run, or outsource her AS. This avoids the PHR scopes problem and
>> much of the HIE governance problem. The federations, be they
>> authentication or authorization federations, still add significant
>> value, but they have to compete with Alice building or running her
>> own AS and that keeps the federated system honest, market-based, and
>> potentially governable.
>> 
>> As I see it, the problem for UMA and HEART is relatively obvious:
>> ensure that the RS is implemented in a way that makes the AS
>> substitutable. This is what I'm hoping HEART will figure out and
>> it's something a couple of us are building around the MITREid
>> Connect implementation - with very limited resources.
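>> 
>> As a purely illustrative sketch of what "AS substitutable" could mean
>> on the RS side (in Python, with every name and URL here hypothetical):
>> instead of hard-coding a single authorization server, the RS keeps a
>> per-account pointer to whatever AS Alice has designated and discovers
>> that AS's endpoints at runtime.
>> 
>>     import requests
>> 
>>     # Hypothetical per-account mapping kept by the RS: each user points
>>     # the RS at the AS she has chosen to build, run, or outsource.
>>     USER_AS_REGISTRY = {
>>         "alice": "https://as.alice.example",
>>         "bob": "https://hie-as.state.example",
>>     }
>> 
>>     def discover_as(username):
>>         """Fetch the UMA configuration of the AS this user designated."""
>>         issuer = USER_AS_REGISTRY[username]
>>         resp = requests.get(issuer + "/.well-known/uma-configuration")
>>         resp.raise_for_status()
>>         return resp.json()
>> 
>>     # The RS then uses the discovered endpoints (resource set
>>     # registration, permission registration, token introspection)
>>     # instead of baked-in URLs.
>>     config = discover_as("alice")
>>     resource_reg = config["resource_set_registration_endpoint"]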
>> 
>> It's not clear to those of us working on this whether this prospect
>> of millions of potential ASs is compatible with UMA 1.0. Apparently
>> this is related to issue #154, which I'm still trying to
>> understand.
>> 
>> Adrian
>> 
>> On Tue, Aug 18, 2015 at 7:59 PM, Eve Maler <eve at xmlgrrl.com> wrote:
>> 
>> (I’m going to snip the lower part of this thread to focus on the
>> “data by reference” point. I’m also going to inject UMA
>> technical terms so we can be very clear about our mappings.)
>> 
>> UMA does not inject a new “data by reference” solution where
>> before there was none. So I don’t know if we have a super-duper
>> new set of tools at our disposal. Some concrete examples:
>> 
>> 1. Alice sets up a resource server RS1 at home to host her
>> self-asserted personal information (she prefers “aisle”,
>> “nonsmoking”, “room near the elevator”, and nickname
>> “Allie”). RS1 is at alice.com [2], managed entirely by her,
>> hosted by her ISP. She hooks it up to an authorization server AS1 to
>> control release of this information to her travel agent, requesting
>> party Bob, using client app C1 for making travel arrangements.
>> 
>> Importantly, the client app really does “GET” her data. It may
>> cache or store it for short or long periods of time, possibly
>> depending on her (nontechnically imposed) constraints, and it may
>> refresh what it stored periodically, if her policies allow that.
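>> 
>> (To make the "really does GET her data" point concrete, here is a
>> minimal Python sketch of the UMA 1.0 dance as I understand it: C1 tries
>> the GET, receives a permission ticket, trades the ticket for an RPT at
>> AS1, and retries. The exact header and field names may differ from the
>> spec, and the URLs and tokens are invented placeholders.)
>> 
>>     import requests
>> 
>>     RESOURCE = "https://alice.com/api/travel-preferences"  # at RS1
>>     RPT_ENDPOINT = "https://as1.example/rpt"  # from AS1's config
>>     AAT = "..."  # C1's token for talking to AS1 on Bob's behalf
>> 
>>     # First attempt without an RPT: RS1 refuses and hands back a ticket.
>>     first = requests.get(RESOURCE)
>>     ticket = first.json()["ticket"]
>> 
>>     # Exchange the ticket for an RPT; AS1 evaluates Alice's policies.
>>     rpt = requests.post(
>>         RPT_ENDPOINT,
>>         headers={"Authorization": "Bearer " + AAT},
>>         json={"ticket": ticket},
>>     ).json()["rpt"]
>> 
>>     # Retry the GET with the RPT; this is where C1 actually receives
>>     # (and may cache) Alice's self-asserted data.
>>     data = requests.get(
>>         RESOURCE, headers={"Authorization": "Bearer " + rpt}).json()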
>> 
>> 2. Same, except alice.com [2] is managed by Google.
>> 
>> Meant to highlight the “cloud” aspect of hosting.
>> 
>> 3. Alice uploads a photo she took to RS2, Flixr.com [3]. The
>> requesting party is Charlie at the framing shop and the client app
>> is C2 for printing photos on canvas, for mounting. Otherwise the
>> same.
>> 
>> Meant to highlight the “joint data rights ownership” aspect, and
>> that she has nothing to do with the hosting.
>> 
>> 4. Alice uses RS3, which hosts her credit score and credit record,
>> to check out her financial picture. The requesting party is
>> financial officer David and the client app is C3 for assessing bank
>> clients’ suitability for personal loans. Otherwise the same.
>> 
>> Meant to highlight that Alice “owns” even fewer aspects of the
>> data, in that she didn’t even contribute anything to the
>> “value” of the data.
>> 
>> 5. Alice is a video game community manager, and for work she uses
>> RS4, which is Twitter — a modern Twitter that is UMA-enabled. Its
>> API is very rich, and it allows calls for both GETting and POSTing
>> status updates. The requesting party is her colleague Eric, and he
>> uses a client app C4, a third-party Twitter app that posts status
>> updates to the corporate account she controls. Otherwise the same.
>> 
>> Meant to highlight that clients don’t just receive data, they can
>> insert data into a supposedly “authoritative source” RS.
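>> 
>> (A quick sketch, in Python, of how RS4 might express that in UMA 1.0
>> terms: it registers the corporate account's status stream as one
>> resource set whose scopes cover both reading and posting, so that
>> Alice's policies at the AS can grant C4 the posting scope explicitly.
>> Endpoint and field names are approximate and the URLs are invented.)
>> 
>>     import requests
>> 
>>     PAT = "..."  # RS4's protection API token at the AS Alice uses
>>     REG_ENDPOINT = "https://as.example/rs/resource_set"
>> 
>>     # One resource set, two scopes: GET-style reads and POST-style
>>     # status updates. The AS can then issue RPTs bound to either scope.
>>     resp = requests.post(
>>         REG_ENDPOINT,
>>         headers={"Authorization": "Bearer " + PAT},
>>         json={
>>             "name": "Corporate account status updates",
>>             "scopes": [
>>                 "https://rs4.example/scopes/read_timeline",
>>                 "https://rs4.example/scopes/post_update",
>>             ],
>>         },
>>     )
>>     resource_set_id = resp.json()["_id"]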
>> 
>> ====
>> 
>> I realize that in today’s pre-UMA environment, there’s a robust
>> understanding of data controllers and data processors (in various
>> jurisdictions), but I’m not sure exactly how the lines are drawn.
>> In an environment with UMA in the picture, does anything change?
>> What roles would the AS, the not-Alice requesting parties, and the
>> resource servers and client applications play?
>> 
>> Eve
>> 
>> On 18 Aug 2015, at 10:50 AM, Mark Lizar <mark at smartspecies.com>
>> wrote:
>> 
>> Hi Jeff,
>> 
>> [some comments inline]
>> 
>> I think you are suggesting that there needs to be a scenario in
>> which Alice controls certain data and authorizes specific uses of
>> the data without transferring the data to Bob. Bob can view or print
>> (as in a label), but cannot electronically save the data. (Of
>> course, printing the data is a form of saving the data, because the
>> label can be copied or OCRed to recover Alice's address in
>> electronic form.)
> 
>  Perhaps Bob’s Health Widget uses a delivery company, which uses a
> 3rd-party trust framework that is verified and audited by another
> independent third party to assure Alice that her address is not
> accessed, saved, or copied by Bob’s Health Widgets. So the name, the
> contents of the package, and the address are separated so that no one
> party can have all three bits of data?
> 
> How does Bob’s Widgets advertise that it has these privacy and
> security practices, which are different from Dave’s Widget company’s?
> Is Bob’s Widgets more trustworthy than Dave’s?
> 
> In one context, Privacy by Design is a container for the trusted
> processes that Dave’s company asserts when collecting Alice’s consent
> and data (to effectively control the data rights management). Because
> Dave’s company holds Alice’s data, it is then subject to data
> protection laws, and Privacy by Design certifies that it encrypts
> Alice’s data and doesn’t leak it.
> 
> In the context of Bob’s Health Widgets, he doesn’t need Privacy by
> Design and is not liable under data protection law, because Bob may
> never hold Alice’s data.
> 
>>> Notionally, this sounds like a good idea, but enforcement would be
>>> tricky. If Bob is actually Bob's Widget company and Alice orders a
>>> widget and provides her address under this scenario, what happens
>>> if Alice's widget never arrives? Bob cannot tell Alice what
>>> address the widget shipped to, because he no longer has a record
>>> of the address.
> 
> Enforcement can happen in a number of ways:
> - fines by law
> - breach of contract
> - reputation damage
> - 3rd party audit for compliance
> - trust framework enrolment process or customer software
> 
> and so on.
> 
>>> The issue that we are running into full speed is that some data
>>> does not have a single "owner". When Alice transacts with Bob,
>>> both are parties to the transaction. Whether or not Bob is an
>>> individual or an institution, I would assert that the transaction
>>> data is as much his as it is Alice's. In fact, in many
>>> jurisdictions, there are legal reasons (e.g., "Know Your Customer"
>>> in the US) for Bob to maintain certain information about Alice.
>>> And when a third-party payment system is involved (e.g., a credit
>>> card or PayPal), they would also have a stake in the transaction,
>>> giving them a stake in (some of) the data, as well.
>>> 
>>> This problem has not been solved, yet. And I don't think that
>>> there is anything in UMA that takes on this challenge. UMA solves
>>> several use cases, but does not claim to solve this one.
> 
>>> I think we need to be careful to avoid applying UMA to
>>> problems that are beyond its scope just because it is such an
>>> elegant solution to portions of the problem.
> 
>  I think what we are exploring here is the transference of liability
> through consent, access control, and data control scenarios. If
> Alice has the freshest copy of her own aggregate data, and she sets a
> notice that Dave no longer has accurate data, then legally, under the
> proposed EU laws, I believe Dave will no longer be allowed to process
> that data.
> 
> In this regard I can imagine IoT scenarios where data is only valid
> when it’s live data (but that’s just me).
> 
> Best,
> 
> Mark
> 
>>> Jeff
> 
> Eve Maler | cell +1 425.345.6756 [5] | Skype: xmlgrrl | Twitter:
> @xmlgrrl | Calendar: xmlgrrl at gmail.com
> 
>  --
> 
> Adrian Gropper MD
> 
> RESTORE Health Privacy!
> HELP us fight for the right to control personal health data.
> DONATE: http://patientprivacyrights.org/donate-2/ [6]
> 
> 
> Links:
> ------
> [1] 
> https://smartjisc.files.wordpress.com/2012/10/smart_hears_draft012.pdf
> [2] http://alice.com/
> [3] http://flixr.com/
> [4] http://go.wisc.edu/i6zxx0
> [5] tel:%2B1%20425.345.6756
> [6] http://patientprivacyrights.org/donate-2/
> 
> _______________________________________________
> WG-OTTO mailing list
> WG-OTTO at kantarainitiative.org
> http://kantarainitiative.org/mailman/listinfo/wg-otto

-- 
-------------------------------------
Michael Schwartz
Gluu
Founder / CEO
mike at gluu.org

