Thrift provides a great framework for developing and accessing remote services. It allows developers to create services that can be consumed by any application written in a language that has Thrift bindings (which is…just about every mainstream one, and more).
This is great for systems that are heterogeneous – for example, you could write a user authentication service in Java, but call it from your Ruby web application.
Thrift manages serialization of data to and from a service, as well as the protocol that describes a method invocation, response, etc. This is great because instead of writing all of the RPC code yourself, you can get straight to your service logic. Thrift uses TCP (not sure if UDP is/will be supported), so a given service is bound to a particular port.
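To make that concrete, here is roughly what a bare Thrift client looks like in Python. UserService and getUser are hypothetical names standing in for whatever your Thrift IDL generates, and the host and port are hardcoded, which is exactly the problem described next.

    # Minimal sketch of a plain Thrift client in Python.
    # UserService/getUser are hypothetical generated names; host/port are hardcoded.
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from user import UserService  # hypothetical module generated by the Thrift compiler

    transport = TTransport.TBufferedTransport(TSocket.TSocket('10.1.1.10', 5656))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = UserService.Client(protocol)

    transport.open()
    user = client.getUser(42)  # hypothetical method defined in the service's IDL
    transport.close()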
As you start to scale an infrastructure with Thrift services, you may find, as I have, that putting all of your Thrift server IP/port combinations in a configuration file that your clients read is just not…the best. If you have an environment where Thrift servers go down for maintenance (or crash), or where you add more capacity on the fly, you need a dynamic way to manage the state of all of the services.
Enter Apache ZooKeeper, a distributed, fault tolerant, highly available system for managing configuration information, naming, and more (distributed synchronization, anyone?).
To clients, ZooKeeper looks like a filesystem: a hierarchy of znodes, which are analogous to directories or files, and each of which can hold a small amount of data.
ZooKeeper can be used to store Thrift service location information, allowing clients to discover Thrift services dynamically. ZooKeeper even provides ephemeral znodes, which are deleted automatically when the session that created them ends: when your Thrift service goes down, its znode is removed from ZooKeeper automatically. And if that isn’t cool enough, ZooKeeper also supports watches, where clients ask to be notified whenever a znode changes. This means that clients can begin using new Thrift service capacity instantly, and when failures happen, clients will stop attempting to contact a down Thrift service.
How you lay out your Thrift services in ZooKeeper is important, because clients need to know the layout when performing service discovery. For example, you could do something like:
/services/user/user0000000001
/services/user/user0000000002
....
Clients can list the znodes under /services/user to get the set of servers for the user service. The only problem is…getting the list of znodes is only half the battle. user0000000001 doesn’t really tell you how to access the user service on that node. This is why it’s important to store some kind of service location data with the znode.
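Here is a rough sketch of that discovery step using the kazoo ZooKeeper client for Python; the connection string is a placeholder, and the path matches the layout above.

    # Rough sketch of listing service znodes plus a watch, using the kazoo client.
    # The ZooKeeper hosts below are placeholders.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
    zk.start()

    # The callback fires immediately and again whenever servers register or disappear.
    @zk.ChildrenWatch('/services/user')
    def on_user_servers_changed(children):
        # children is the current list of znode names, e.g. ['user0000000001', ...]
        print('user service nodes:', children)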
ZooKeeper allows you to set data on a znode, so when a node comes up, it just needs to set data at its znode describing how to locate or access the service. I’ve settled on using a URI as the znode data – this is very flexible and easy to parse and read from most languages. For Thrift services I am using a URI with a thrift scheme:
thrift://10.1.1.10:5656
The URI can be easily adapted to your application, such as adding query string parameters with any extra custom metadata.
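Putting the pieces together, here is a rough sketch of both sides of this convention, again with kazoo. The addresses, the /services/user path, and the generated UserService module are illustrative stand-ins, not a definitive implementation.

    # Rough sketch: a server registers an ephemeral, sequential znode whose data
    # is a thrift:// URI, and a client reads that URI back and connects to it.
    from urllib.parse import urlparse
    from kazoo.client import KazooClient
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from user import UserService  # hypothetical module generated by the Thrift compiler

    zk = KazooClient(hosts='zk1:2181')
    zk.start()

    # Server side: register under /services/user. ephemeral=True means ZooKeeper
    # deletes the znode when this process's session ends; sequence=True appends
    # the increasing suffix (user0000000001, user0000000002, ...).
    zk.create('/services/user/user',
              b'thrift://10.1.1.10:5656',
              ephemeral=True,
              sequence=True,
              makepath=True)

    # Client side: pick one registered server (naively, the first), read its URI,
    # and open a Thrift connection to it.
    children = zk.get_children('/services/user')
    data, _stat = zk.get('/services/user/' + children[0])
    uri = urlparse(data.decode('utf-8'))  # e.g. thrift://10.1.1.10:5656

    transport = TTransport.TBufferedTransport(TSocket.TSocket(uri.hostname, uri.port))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = UserService.Client(protocol)
    transport.open()

Any query string parameters you tack onto the URI would come back on the parsed result as well, which is what makes this scheme easy to extend with extra metadata.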
Thrift is great, and with ZooKeeper, it’s even better. I’m in the process of implementing this integration now and am looking forward to all of the benefits it has to offer. I would love to hear any feedback about this approach if anyone has personal experience, or just some good ideas. What is everyone else using to solve this type of problem?
Oh and also, if ZooKeeper isn’t your thing, you should definitely check out Doozer. It has very similar features to ZooKeeper, and although new, it is definitely on my list of projects to watch. Oh, and did I mention that it is written in golang?
Doozer is a highly-available, completely consistent store for small amounts of extremely important data. When the data changes, it can notify connected clients immediately (no polling), making it ideal for infrequently-updated data for which clients want real-time updates. Doozer is good for name service, database master elections, and configuration data shared between several machines.