Thursday, December 1
- Discuss end-stage recommendations
Attending: Thomas, Eve, John W, Susan
Susan attended the Wall St. Blockchain event, which featured a lot of discussion of smart contract standards. Cook County had announced a land records project, but that didn't have a wide enough scope. Illinois has now announced something. This suggests a need for guidance. At banking conferences, there's an assumption that banks can/should serve as IdPs, but then there's the unbanked. OTOH, in the actual identity world, in some countries this has "been solved" through legislation, where governments either do or don't contract with banks to provide IdP services. The IdP value proposition hasn't looked that great for banks over the last 15 years. (We also note that R3 is losing members...)
Should we try to provide recommendations around the societal/political implications, or just the technical ones? (See the Traveling Salesman movie for a good take, and the paper The Moral Character of Cryptographic Work.) Our use cases do already have societal, social, and cultural implications. Is this about autonomy, or consent? Let's not hash this out here! See the new paper A Typology of Privacy if you want to go all Socratic.
Eve shared her recent talk on where user-centric identity went wrong (link forthcoming) and how to improve such technology. The "sharper-edged criteria", most of them from 2008, could be useful in assessing how well a solution empowers people in transacting. The notion of fostering more "peer-like" relationships, in a metaphorical sense, is behind the criteria:
- Does the solution make the right thing to do be the easiest thing to do?
- Does the solution enable unilateral user actions that have unambiguously positive outcomes?
- Does the solution make what people actually want to do possible?
- Does the solution respect and balance all ecosystem parties' needs?
- Does the solution make consent more meaningful?
- Is the system's architecture applicable to multiple or future problems in a clean way?
What is the definition of self-sovereign, actually? Interestingly, Phil W has mentioned that he considers UMA to be a self-sovereign technology. Is the definition "I get to host it (what?) where I want to host it"? Or is it "I get to move it (what?) whenever I want/to wherever I want"? Or simply "I have high (significantly higher than before) leverage/negotiating power with the other side"? Is that last one a definition of being able to act as a (metaphorical) peer? Do we need more criteria, or crisper criteria?
AI: Eve: Provide the rest of the information backing up her user-centric/self-sovereign analysis, and also distribute the newest Sovrin answers to followup questions posed by Eve.
AI: John W: Next week, take a look at Eve's materials and at his own "broken"-themed blog post and essay definitions.
No meetings next week; let's all write our assigned pieces instead!
Tuesday, November 29