I read Fred Wilson's post on Explicit vs Implicit Groups tonight. It struck me that the right model for Google Plus (and any group-based social network) is the concept of "aged groups". I can toss anyone into a group, but over time a lack of interaction or engagement "ages" them out of it, so groups remain small and focused on the active members. The asynchronous nature of Google+ groups is ideally suited to this model, and it could be an option for users to set on their accounts.
With aged groups I’d be much more likely to allow Google+ to parse my address book and add members based on groups I already have setup or even suggest groups based on email conversations I’ve had in the past.
The key issue is for groups to reflect the dynamic nature of relationships. I work at a company and then I leave; guess what? Most of those relationships die within a year or so. I don't want those (now) strangers still shot through my Facebook and Google+ accounts.
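A minimal sketch of what aging-out could look like. Everything here is illustrative — the class, the six-month window, and the method names are my assumptions, not anything Google has described:

```python
import time

# Hypothetical sketch of "aged groups": members who haven't interacted
# within a time window are aged out. The window length is an assumption.
AGE_OUT_SECONDS = 180 * 24 * 3600  # roughly six months of inactivity


class AgedGroup:
    def __init__(self, name):
        self.name = name
        self.last_interaction = {}  # member -> timestamp of last interaction

    def record_interaction(self, member, when=None):
        # Any engagement (comment, share, message) refreshes the timestamp
        self.last_interaction[member] = time.time() if when is None else when

    def active_members(self, now=None):
        # Members whose last interaction falls inside the window
        now = time.time() if now is None else now
        return [m for m, t in self.last_interaction.items()
                if now - t <= AGE_OUT_SECONDS]

    def prune(self, now=None):
        # Permanently age out everyone who has gone quiet
        active = set(self.active_members(now))
        for m in list(self.last_interaction):
            if m not in active:
                del self.last_interaction[m]
```

The point is that group membership becomes a function of recent engagement rather than a one-off decision, which is what makes automatic address-book import tolerable.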
Google made its Google Storage API available to all today. This is the service that is likely to make Google AppEngine useful, as the existing BigTable storage system was just too painful for words and not designed for large blobs. It certainly feels a bit slicker in execution than AWS S3 and has a more polished getting-started experience. It was several months after S3 launched before somebody built a third-party browser that would let you look at your buckets online.
They use a similar model to S3 of globally unique bucket names, so get in quick if you want a bucket called "test" or "src" 🙂 The naming conventions for objects are restricted, so you cannot upload an arbitrary directory of files and expect it to succeed. Any uploading tool must perform some kind of name mapping that converts illegal names to legal ones.
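A rough sketch of the kind of name mapping an uploader would need. The "safe" character set below is my assumption for illustration, not the service's actual rules — check the documentation before relying on it:

```python
import re

# Hypothetical whitelist of characters assumed safe in object names;
# everything else gets replaced. This set is an illustrative guess.
UNSAFE_CHARS = re.compile(r'[^A-Za-z0-9._/-]')


def sanitize_object_name(name):
    # Replace each disallowed character with an underscore
    safe = UNSAFE_CHARS.sub('_', name)
    # Collapse repeated slashes that would create empty path segments
    safe = re.sub(r'/+', '/', safe)
    # Drop leading/trailing slashes so the key is relative
    return safe.strip('/')
```

The mapping is lossy (distinct local names can collide after sanitising), so a real tool would also need to detect and resolve collisions.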
They do make a big deal about allowing developers to specify whether buckets and their contents are located in Europe (where in Europe?) or not, but read the T&Cs carefully. Section 2.2 makes it clear that Google can process your data just about anywhere, including the US. It's only "data at rest" that can be specified as stored in Europe, so all your data is essentially available to US agencies should they choose to take a peek. The lamest restriction in the T&Cs is the one on using Google Storage to create a "Google Storage like" system. Let's put a layer in front of Google Storage that is like Google Storage but slower, less resilient and more expensive — oh, I definitely want to sign up for that service 🙂
They provide a version of the excellent Boto library that has been repurposed for use against Google Storage; this is the best indication yet that the Google APIs must be pretty close to the S3 APIs in structure. The main difference is the use of OAuth to give fine-grained access to storage objects. This is the biggest win for Google, and I hope to see Microsoft Azure, RackSpace CloudFiles and AWS S3 following suit fairly quickly.
It would also be great to see the Boto changes for Google Storage rolled back into the Boto mainline.