Authors: Geambasu, Cheung, Moshchuk, Gribble, Levy (Univ of Washington)
Venue: WWW, April 2008
My initial comments: Allow me to warn you that in all likelihood you will have an easier time digesting this material if you go straight to the source. In what follows, I wrestle with what the hell the authors are talking about in certain places, which is sure to confuse you further if the same questions didn't arise in your own mind as you were trying to understand how the system works. I.e., caveat emptor. On the other hand, if you want to understand how to become a better writer by seeing firsthand how many different ways something can be interpreted, then, by all means, eat your heart out.
The problem: A user's data is scattered across the Web and lives under the control of different domains (Flickr, Google Docs, Google Calendar, Picasa, YouTube, ...) which expose data differently (in some cases at a stable, predictable URL and, in other cases, not) and provide different ways to share that data. How can a user: (1) organize, search, and archive that data; (2) create a collection of data that consists of objects owned by different players AND share that collection "in a protected way"; (3) manipulate these data objects using standard applications or scripts (similar to the way that a user can manipulate each file on his desktop system using hundreds of utilities, such as cat, ls, grep).
With desktop computers, a file can be operated on by multiple different applications regardless of which application created the file, whereas in the web service world, data created as part of one web service (e.g., docs authored at Google Docs) cannot be operated on by other web services or by standard utilities (such as ls and grep). Also, on the desktop it's easy to compose applications or utilities in order to get new, more interesting functionality; e.g., think of all of the command-line shell utilities (such as grep, find, ls, cat, ...), which can be combined using various operators (pipe, input or output redirection, and so on).
Their solution: Have each Web service implement a standard interface that enables accessing and modifying objects that live on that Web service. Then a composite application can aggregate objects from these various services in order to provide a user with a single logical view of his scattered Web data. Note that in their conceptualization, the composite app isn't actually downloading and managing the various objects but rather embedding pointers to those objects and presenting thumbnails of them. But the solution principally is a standard API, implemented by each Web service that houses user data. The API deals with how one refers to objects (naming), how Web services expose objects via stable URLs, how access to objects is mediated, and how data objects are shared.
Thus, the solution requires the various Web services to do something in order to support it; as an intermediate deployment strategy, one can create a proxy for each target Web service where the proxy effectively implements the API on behalf of the service. Naturally, each proxy would be very Web-service-specific.
Goals: want a user to be able to create a collection of objects and specify policy for that collection as a whole despite the fact that the objects live under the control of different web services which export different access control policy frameworks. Another goal is to enable a user U to share her data with user V even if some of that data lives on web services where V does not have an account (and does not want to get one); i.e., want the ability to share data that is part of some web service with a user who does not have a relationship with that web service.
Challenges
- Not all Web services expose the objects they house in a way that enables referring to such objects externally.
- Every Web service has its own access control, authentication, authorization framework. No single uniform way to configure access control policy. A Web service may only support coarse-grained (all-or-nothing) access control policies.
- To access a person's data on Web service S, you may be required to have your own account with S.
- The access control frameworks of Web services might not support write privileges and might not support revocation of rights.
- In addition to possibly not providing a stable URL to access a particular object, Web services might not provide a way to programmatically manipulate such objects.
What will web services need to do to support this solution?
Each origin Web service must export an interface that enables other web services to access data that lives on the origin Web service. Each origin Web service must also provide metadata for each data object O (that lives on the service) such that O can be rendered (and operated on) within an arbitrary web page.
What does the solution require?
- Need a single global namespace. Every object and object collection must have a unique and permanent name in this namespace.
- The access control interface (to a user's heterogeneous data/object collection) should enable that user to share some portion of that data (rather than forcing all-or-nothing access). This interface should also enable an entity to access an object on a Web service even if that entity has no account with that service.
- Each object should support some set of functions which will be invoked on the object by other Web services or utilities or directly by the end user. One such function is that each object must be able to be embedded and rendered within an arbitrary web page.
Menagerie
Consists of Menagerie Service Interface (MSI) and Menagerie File System (MFS).
Menagerie Service Interface (MSI)
At first blush this is sort of confusing: who implements the interface? Who uses it? These are incredibly simple questions with only a limited range of answers, given the context, and yet the answers are not explicitly stated, so we'll have to reason them out. Let's look at the described API functions and the players, which include the origin web services (e.g., Flickr, Picasa, KodakGallery), which own users' data objects, and the non-origin web services (also referred to as composite web services), which access objects owned by other web services. A few types of functions are mentioned:
- An origin web service calls export in order to add the name of one of its objects to the global namespace. Presumably, this function is implemented by a special Web service, namely the global namespace Web service which maintains a directory of all objects exported by any Web service. This function returns the stable URL for the object?
Questions: So is this like a directory service where I can obtain a URL given some object description? Where does the URL point? To the origin web service? I.e., who actually hosts the various data objects? Presumably, the origin web service hosts its objects and must provide an externalized link to such objects (i.e., must make a stable URL available which can be used to refer to / name the given object).
Comments: I think I was taking the term "export" too literally; they just mean that if you query a Web service (using some MSI API function), it will provide a unique name for each object, which can be combined with the service's domain name to obtain a globally unique object ID.
- A capability is provided by an origin web service (e.g., Flickr) to a non-affiliated user (e.g., my brother, who doesn't have an account on Flickr) to access a particular object (e.g., my album of 4th of July photos) for some period of time.
Questions: But who actually asks for the capability? The object owner? (E.g., my brother asks me, and I interact with Flickr to obtain the capability and then give it to my brother?) Otherwise, how does Flickr authenticate the person asking, since Flickr doesn't have a relationship with that person?
- The third type of API function mentioned as part of MSI is the set of object-independent access functions, e.g., functions to read, write, or render a particular object or to obtain an object's metadata. These are perhaps the most straightforward in terms of who implements them and who invokes them: presumably, an origin web service provides an implementation of these access functions, which are invoked by a non-origin web service in order to interact with an object owned by the origin Web service.
For example, let's say we have a photo aggregator app (i.e., a non-origin web service) which creates a virtual file system containing thumbnails of all of my photos (which are stored, say, on various photo sites such as Flickr, Shutterfly, Picasa, and KodakGallery). This photo aggregator app (PAA) would invoke these functions on individual photo objects. For example, if p1, p2, and p3 all live on Flickr, the PAA will call p1.getThumbnail(), p2.getThumbnail(), and p3.getThumbnail() to obtain the thumbnails of photos p1, p2, and p3, respectively.
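To make this concrete, here's a minimal sketch in Python of what the PAA's calls might look like; the helper function, object IDs, and capability value are all made up, and I use get_summary (the MSI access function that appears later in these notes) in place of my invented getThumbnail.

```python
# Minimal sketch (not from the paper): how the photo aggregator app (PAA) might
# fetch thumbnails for photos p1..p3 that live on Flickr. The helper msi_call,
# the object IDs, and the capability value are hypothetical.

def msi_call(service_domain, function, *args):
    # Placeholder: a real client would issue an XML-RPC request to the
    # service's MSI endpoint (see the xmlrpc.client sketch further below).
    return f"<img alt='thumbnail of {args[-1]} from {service_domain}'/>"

flickr = "flickr.com"
capa = "opaque-capability-token"      # obtained out of band from the photos' owner
photo_ids = ["n1", "n2", "n3"]        # service-local ObjectIds for p1, p2, p3

# get_summary returns an HTML snippet that renders a thumbnail of the object.
thumbnails = [msi_call(flickr, "get_summary", capa, oid) for oid in photo_ids]
```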
Let's recap; below is our conjecture:
| Function | Does what? | Who implements? | Who invokes? |
|---|---|---|---|
| export | Exports a name for an object into the global namespace. | The web service which acts as the global namespace. | The origin web service. |
| generateCapability | Obtains a token to present to an origin web service in order to gain access to some object stored on that web service. | The origin web service. | Some non-origin web service, or perhaps the user who owns the object. |
| readObject | Returns an object's contents. | The origin web service. | Some non-origin web service. |
Menagerie File System (MFS)
Allows an app to mount a remote MSI object hierarchy into a local file system namespace. For example, I could mount Ann's entire collection of photos (which lives on KodakGallery) and operate on those photos locally; thereafter my app can operate on Ann's photos as if they were files stored on my computer. (Would the effects of my edit or tagging operations be pushed back to Ann's version on KodakGallery?)
Implementation particulars
After all of that speculation, let's get down to the details. They describe three types of operations: object naming (list, getattr, mkdir), protection (create_capa, revoke_capa), and access (read, write, get_summary, get_URL).
Object Naming
Each thing (object) has two names:
- One provided by the user; this name is meaningful to the entity who owns the data. For example, a photo on Flickr might be referred to using the path to the photo (i.e., the name of the album that contains it) along with the actual photo name.
- One provided to composite web services for them to use in invoking operations on that object; this is referred to as the service-local ObjectId or ObjectId for short. Menagerie generates globally unique object IDs by combining an object's service-local ObjectId with the domain name of the service hosting that object. So if Flickr contains objects with service-local ObjectIds n1, n2, ... then the Menagerie names for these objects will be flickr.com:n1, flickr.com:n2, ...
Presumably, the origin web service generates this service-local ObjectId (they don't say). I assume that the origin web service creates the ObjectIds because the origin web service implements the MSI interface and most functions in that interface take this ObjectId argument. Hence, that value must be meaningful for the service.
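To make the naming rule concrete, here is a tiny sketch (mine, not the paper's) of composing and splitting a globally unique Menagerie name; the colon separator follows the flickr.com:n1 examples above.

```python
# Sketch of the global naming rule: a Menagerie name is the hosting service's
# domain name combined with the object's service-local ObjectId.

def make_global_id(service_domain: str, local_object_id: str) -> str:
    return f"{service_domain}:{local_object_id}"

def split_global_id(global_id: str) -> tuple[str, str]:
    domain, local_id = global_id.split(":", 1)
    return domain, local_id

assert make_global_id("flickr.com", "n1") == "flickr.com:n1"
assert split_global_id("flickr.com:n1") == ("flickr.com", "n1")
```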
Calling list returns the mapping between the two names. A user's object name hierarchy is the collection of his names for all of the objects he has access to; note that a user's hierarchy may contain the names of objects that belong to someone else. The service-local ObjectIds can be independent of where the object lives within its owner's hierarchy.
Why need two different names for a single object? Because we don't necessarily want to expose a user's organization of her photo albums to the people with whom that user shares a photo? So that the name for an object is stable, even across operations such as the user renaming/moving that object somewhere else in the hierarchy (this is probably why). Also, by using a single ObjectId to refer to a single object, they can identify all accesses to that object and cache it as appropriate. By contrast, if an object is reachable by many different users and the object lives somewhere different in each user's hierarchy and we used those hierarchical paths to identify accesses of the object, we might miscount the number of accesses to an object since that object has different names according to different people.
They describe operations which can be performed on a name hierarchy; presumably, these operations are implemented by each origin web service.
- list: given a directory's ObjectId, return the name and ObjectId for every object within that directory.
- getattr: given an object's ObjectId, return metadata about that object (e.g., its type, size, last modified date, etc.) as well as a capability for that object.
(So does this function always give out a generic capability to whoever asks? Or does it perform a privilege check? Or is the access control done earlier; for example, a user is only given the ObjectId for objects that this user is allowed to access, so knowing the ObjectId implies that one has sufficient privileges to read the object?
ANSWER: They assume that capabilities are generic (rather than specific to the holder of the capability) and that capabilities are distributed only to people who should have them. A web service can, however, require that a user authenticate himself to the service in addition to requiring that the user possess the necessary capabilities.)
- mkdir: add a collection of objects to the hierarchy (to the user-specific hierarchy?).
This is a little strange. Presumably if a user wants to organize his objects on the target Web service, he can do so using that Web service's available controls. Not sure where the created directory lives. (Or is this for augmenting some other directory hierarchy structure, for example one maintained by the service itself for its own purposes?)
Can definitely imagine a composite web service maintaining a user-created directory structure; perhaps that's the intended context for this API call (rather than thinking of mkdir as creating a directory on the origin Web service). Not sure.
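Putting list and getattr together, a client could walk a user's name hierarchy roughly as follows; this is a sketch of my own, assuming a hypothetical msi client object and dictionary-shaped metadata (these notes don't pin down the actual return formats).

```python
# Sketch (my guess at usage, not the paper's code): recursively walk a name
# hierarchy using list and getattr. Assumes a hypothetical `msi` client whose
# list(capa, object_id) returns [(name, child_object_id), ...] and whose
# getattr(capa, object_id) returns a metadata dict with a "type" field.

def walk(msi, capa, dir_object_id, prefix=""):
    for name, child_id in msi.list(capa, dir_object_id):
        meta = msi.getattr(capa, child_id)
        print(f"{prefix}{name}  (id={child_id}, type={meta.get('type')})")
        if meta.get("type") == "directory":
            walk(msi, capa, child_id, prefix + "  ")
```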
Protection
They have a hybrid capability-based protection system. What's the "hybrid" part? With normal (non-hybrid) capabilities, possessing a capability alone suffices to gain the described access to the specified object. But in Menagerie, an origin web service can require that a user provide a capability as well as authenticate himself in order to get access. A Menagerie capability is an unforgeable token that contains the globally unique ID for some object as well as a set of access rights. The semantics are that a user who holds a capability is allowed to access the specified object in the specified way(s). Note that a capability does not contain any identifying information as to who is allowed to access the specified thing in the specified way; the right applies to any user who holds the capability.
As mentioned, a web service can also exert some control over who exercises a capability. In particular, a web service issues two types of object rights: (1) open-access, which gives the capability holder direct access to the specified object without further authentication, and (2) closed-access, wherein the web service can also require that a user authenticate himself.
Implementing capabilities
If a user possesses a capability for some object O, he actually possesses access rights for all objects rooted at O. So the idea is that a capability might be given for a directory D, enabling the holder to access any object within D or within a subdirectory of D, and so on. Each capability consists of an 8-byte "root node global ID" (which is presumably the object's service-local ObjectId along with the domain name of the service where that object lives) and a 16-byte password. These two values are also stored at the web service (that issues the capability) along with the applicable open- and closed-access rights for that object. The password field is to ensure that an attacker who knows the global ID for an object cannot successfully forge a capability for that object.
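Here is a minimal sketch of how I picture the capability record and the service-side check; the field sizes (8-byte root node global ID, 16-byte password) come from the paper, while the names and storage layout are my own invention.

```python
import secrets

# Sketch of a capability and the service-side CapTable, using my own names.
cap_table = {}  # (root_id, password) -> set of rights, e.g. {"read", "revoke"}

def create_capa(root_id: bytes, rights: set) -> tuple:
    password = secrets.token_bytes(16)    # 16-byte random password, hard to guess
    cap_table[(root_id, password)] = set(rights)
    return (root_id, password)            # this pair is the capability itself

def check(capa: tuple, wanted_right: str) -> bool:
    # Knowing an object's global ID is not enough to forge a capability: the
    # password must also match an entry stored in the CapTable.
    return wanted_right in cap_table.get(capa, set())
```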
Most MSI functions require a capability as well as the ObjectId:
* list( capa, objectId )
* getattr( capa, objectId )
* create_capa( capa, objectId, rights )
* read( capa, objectId ): to read an object
* write( capa, objectId, name, content ): to write an object
* get_summary( capa, objectId ): to get an HTML thumbnail of an object
* get_URL( capa, objectId ): to get the URL of an object on its home Web service
The exceptions are:
* mkdir( capa, parentId, name )
* revoke_capa( object_capa, revoke_capa )
Revoking a capability for an object O requires a valid capability on O which has the revocation right enabled. Revocation entails zeroing the rights field in that capability's CapTable entry.
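Continuing the CapTable sketch from above (still my own names), revocation might look like this:

```python
# Revocation requires a capability on the same object that carries the revoke
# right, and it simply zeroes out the rights of the capability being revoked.

def revoke_capa(object_capa: tuple, capa_to_revoke: tuple) -> bool:
    same_object = object_capa[0] == capa_to_revoke[0]
    if same_object and check(object_capa, "revoke") and capa_to_revoke in cap_table:
        cap_table[capa_to_revoke] = set()   # zero the rights field of its entry
        return True
    return False
```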
Then they give some real-world examples of web services using capabilities.
Accessing Objects
MSI provides a few ways to access objects:
- Can embed an object from a remote service within a page. The service that owns the object is responsible for its presentation and interaction controls.
- Can get object metadata via get_summary, which returns an HTML code snippet that displays a thumbnail of the object, and get_URL, which returns the object's URL within its parent service.
- Can invoke an object-independent access function (e.g., read, write, delete) on an object.
Implementation
The MSI is an XML RPC layer on top of HTTP which means that you can invoke an MSI function by making an XML RPC call, which can be done in various programming languages, including JavaScript.
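For instance, from Python an MSI call would look roughly like this; the endpoint URL is a guess, since these notes don't reproduce the paper's wire details.

```python
import xmlrpc.client

# Sketch: since MSI is XML-RPC over HTTP, calling an MSI function from Python
# takes only a ServerProxy. The endpoint URL ("https://flickr.com/msi") is
# hypothetical; each participating service (or its proxy) would publish its
# own MSI endpoint.
server = xmlrpc.client.ServerProxy("https://flickr.com/msi")

# For example, fetching the HTML thumbnail snippet for an object, given a
# capability (left commented out because the endpoint above does not exist):
# snippet = server.get_summary(capa, object_id)
```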
To build a composite Menagerie app, you need web services that support MSI. As an incremental deployment strategy, they created MSI proxies for non-MSI services (including Gmail, Yahoo! mail, Flickr, YouTube, and Google Spreadsheets), each of which implements the MSI functions and the Menagerie protection model on behalf of a service. So each proxy is service-specific. For services that provide a developer API, creating the proxy was straightforward. For other services, they had to resort to screen scraping, which probably wouldn't scale and is kind of a mess generally.
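I picture the proxies as service-specific adapters behind a uniform MSI-facing surface, something like the skeleton below (all names here are my own framing, not the paper's code).

```python
from abc import ABC, abstractmethod

# Sketch of the proxy idea: the MSI-facing surface is uniform, while each
# per-service subclass supplies the translation into that service's developer
# API (or, failing that, screen scraping).

class MSIProxy(ABC):
    @abstractmethod
    def list(self, capa, object_id): ...
    @abstractmethod
    def getattr(self, capa, object_id): ...
    @abstractmethod
    def get_summary(self, capa, object_id): ...

class FlickrProxy(MSIProxy):
    """Service-specific: would map each MSI call onto Flickr's developer API."""
    def list(self, capa, object_id): ...
    def getattr(self, capa, object_id): ...
    def get_summary(self, capa, object_id): ...
```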
The Menagerie File System lets an application mount a user's MSI name hierarchy and then operate on objects within that hierarchy as if they were local (e.g., invoking system calls or utilities such as cp and tar on such objects). A system call (such as fopen, getattr, readdir, read, or write) executed on a mounted object gets passed via VFS to MFS, which translates that system call into the corresponding MSI call on the Web service that owns the object. They use a couple of caches to speed up access to the mounted FS.
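Once a hierarchy is mounted, ordinary file code works on remote objects; here's a sketch assuming a hypothetical mount point (MFS would do the MSI translation behind the scenes).

```python
import os
import shutil

# Sketch of what "operate on remote objects as if local" means in practice.
# The mount point below is hypothetical; each os/shutil call is routed through
# VFS to MFS, which issues the matching MSI call against the owning service
# (readdir -> list, stat -> getattr, file reads -> read, and so on).

mount = "/mnt/menagerie/flickr.com"

for name in os.listdir(mount):                   # readdir   -> MSI list
    info = os.stat(os.path.join(mount, name))    # stat      -> MSI getattr
    print(name, info.st_size)

shutil.copy(os.path.join(mount, "photo1.jpg"),   # file read -> MSI read
            "/tmp/photo1.jpg")
```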
Menagerie Applications
The Menagerie Web Object Manager (WOM): use this to create a virtual file system into which one can drag and drop various objects stored at various web services (without actually affecting how those objects are stored at their home services). One can also share a view of this file system (i.e., export some portion of it) with other users, to give them access to some part of the virtual hierarchy. In this way, the composite application is both a client (of each web service that owns some object contained in the virtual hierarchy) and a server (for other applications that request to see part of this virtual file system). The kinds of objects one can store in a folder include videos, emails, pictures, spreadsheets, and so on. Moreover, if a user clicks on an object within a folder, the user is directed to the object's home location (on its parent web service). So if a folder contains a Google Docs spreadsheet, the user can click on the spreadsheet icon, which opens the spreadsheet (possibly for editing) in Google Docs. As mentioned, WOM exports MSI, which means a user can get a capability for a WOM virtual folder and give that capability to others.
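The key point is that a WOM folder holds references, not copies; I imagine a folder entry as something like the record below (my own framing, with hypothetical field names).

```python
from dataclasses import dataclass

# Sketch of what a WOM folder entry might carry: a pointer to the object at its
# home service plus what is needed to render and reach it, rather than a copy
# of the object's bytes.

@dataclass
class FolderEntry:
    global_id: str      # e.g. "docs.google.com:<service-local ObjectId>"
    capability: tuple   # lets the folder's viewers access the object via MSI
    summary_html: str   # thumbnail snippet, as returned by get_summary
    home_url: str       # as returned by get_URL; clicking goes to the home service
```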
The Menagerie Group Sharing Service (MGS): instead of a single user creating a collection of Web objects and then optionally sharing that collection (read-only) with others, MGS is all about a group collectively owning a collection into which each group member adds various objects from different Web services. It lets users share a single virtual desktop.
Some questions
- Are capabilities the right model for this? Will a user understand that a capability (i.e., this thing which looks like a URL) should only be forwarded to or shared with someone to whom you want to grant access? When we repurpose things that users know in one context (e.g., URLs) to have security relevance, this can sometimes cause problems.
- Once we have some Web service that uses closed-access rights for its objects, what's the benefit of a capability in that scenario? Is the idea that we can partition the various rights that a user might have for an object so that only a small slice of them require authenticating with an object's home Web service? For example, broadly allow users to read an object but writing an object requires authentication.
- Somewhat out of the scope of this document is what the typical life cycle of a capability will be. E.g., some user wants to share her 4th of July photos on Flickr with family and friends so she sends an email with a capability which identifies both how to access the photos and provides the privileges necessary to do so. Then her sister Sarah runs an application (like WOM) which invokes MSI functions in order to download thumbnails of the given photos; Sarah adds these photos to her own WOM virtual file system. Naturally, exercising a capability can only be done by a program that invokes MSI functions.
- So a Menagerie capability provides access to all objects rooted at some node (objectId). This is hard-coded into their system and seems not very general. Or at the least, this is a topic of discussion: are there common cases where a user might like to share access to an object without providing access to all of that object's children (and grandchildren and great-...).