The Hypercall API of the Rump Kernel project

Today I’m at the New Directions In Operating Systems conference in Shoreditch in London. It’s a cool thing about working for Codethink that they let you go to conferences like these and even pay for the extortionate hotel and train fares involved!

There is fascinating stuff being talked about. I think Robert Watson summed up the theme well as “diagrams with boxes pointing at each other”, which is pretty much what we seem to do.

One of the more dramatic rearrangements of the traditional diagram of an operating system is the rump kernel concept, which a couple of people spoke about. Broadly there are some great implications, such as being able to run and debug kernel drivers from user space. In practice right now you are limited to NetBSD kernel drivers, so it’s not so immediately relevant to me (but perhaps it’ll make NetBSD more relevant to me in future, who knows…!).

The other implication of a rump kernel is that you can add a libc and an “application” and you’ve created a really minimal kind of container. Other talks at the conference seemed to be expanding the concept of the “application” to encompass large, distributed networks of systems, but here the “application” needs to be something simple enough to not even need fork().

I wanted to make this fuzzy “container” concept a bit more concrete. All of the diagrams in Antti’s talk slides labelled the bit between the rump kernel and whatever it’s running on as the “hypercall API”, but didn’t dig into the concept. I did a little digging myself and found the rumpuser(3) manual page, which documents the POSIX implementation of the interface.

Other implementations exist for bare metal and Xen. I think the hypercall interface is one of the key insights of the rump kernel project: it’s the most minimal definition of what could be called a type of ‘container’ that I’ve seen. All the applications that can run on top of a NetBSD rump kernel ultimately boil down to that.

There seems to be one big rabbit hole in it:

int rumpuser_open(const char *name, int mode, int *fdp)

Open name for I/O and associate a file descriptor with it.  Notably,
there needs to be no mapping between name and the host's file system
namespace.  For example, it is possible to associate the file descriptor
with device I/O registers for special values of name.

That’s basically your communication channel with the world. What if you want to talk to a graphics device? I guess if the ‘hypercall host’ wants to let you, it can let you rumpuser_open() it and send data…

If this is a type of container interface we can compare it with the container interface used by Docker: that is to say, a fairly large subset of the enormous Linux syscall API. Whatever the future holds for the rump kernel project and its ‘hypercall’ API, it’s unlikely to grow as large as Linux! By developing an app that targets the hypercall interface (whether this involves the NetBSD anykernel or something else doesn’t really matter) you reduce the attack surface between the application and the host OS to a very small amount of code: there’s currently 9,248 sLOC in the POSIX implementation of librumpuser, for example.

This whole concept is still fairly cloudy in my head and I doubt it’s something I’m going to work with any time soon. All the information above may be totally wrong and misleading, too, so please help by correcting my understanding if you are able!


About Sam Thursfield

Who's that kid in the back of the room? He's setting all his papers on fire! Where did he get that crazy smile? We all think he's really weird.

2 Responses to The Hypercall API of the Rump Kernel project

  1. justincormack says:

    The IO interfaces are not all documented there, and need revising. For eg graphics you probably want to use a virtio interface over an emulated PCI bus I think… I am working on related things now…

  2. Antti Kantee says:

    Nice pondering.

    Note that the hypercall API is not a security barrier — I’m not sure what you meant by “rabbit hole”. There’s also talk about removing rumpuser_open() completely, and letting all I/O type devices do whatever they want with private hypercalls. Pushing all I/O activity through open/read/write is too constrained and clumsy.

    I’m not sure where you got the SLoC number from. My directory includes ~7k lines of *.[hc]. Dissecting that number further, there’s autogenerated code, support for remote clients (i.e. support for making syscalls to a rump kernel remotely), multiple implementations of threads, etc. The real minimal number is probably closer to 2k, with half of it being comments.

If you find yourself wanting to get even less cloudy on the details at some point, please post to the mailing list (linked from “Community”), and we’ll be happy to clarify.
