GUADEC 2017: timeline

After the statistics, perhaps you are interested in reading a timeline of GUADEC 2017! In particular, you can compare it to the burn-down chart from the GUADEC HowTo and see how that interacts with reality.

Of course lots of details are excised from this overview, but it gives a general sense of the timings. In some follow-up posts I’ll go into more detail about what I think went well and what didn’t. We also welcome your feedback on the event (if you can still remember it 🙂)

Summer 2014: At some point during GUADEC 2014 I start going on about doing a Manchester edition.

August 2015: Alberto and Allan both float the idea of doing a Manchester bid with me; it seems like there’s just about enough of a team to go for it. I was already planning to be away in summer 2016 at this point so we decided to target 2017.

Alberto has a friend working at MIDAS who gives us a good start and we end up meeting with the Marketing Manchester conference bureau, the University of Manchester and Manchester Metropolitan University.

The meeting with the University of Manchester was discouraging (to be honest, they seemed to be geared up only for corporate conferences rather than volunteer-driven events) but Manchester Metropolitan were much more promising.

Winter 2015: We lost touch with MMU for a few months (presumably as the university term started back up), but we eventually got a proper contact in the conferences department and started moving forwards with the bid.

Spring 2016: Our bid is produced, with Marketing Manchester doing most of the content and layout (as you might be able to tell). Normally I would be worried to see only one GUADEC bid on the table but, having been thinking about our bid for almost a year already, I was also glad that it looked like we’d be the main option.

Summer 2016: GUADEC 2016 in Karlsruhe; Manchester is selected as the location for 2017. Much rejoicing (although I am on a 9000 mile road trip at the time).

August 2016: Talks begin with the venue about drawing up contracts for the venue and accommodation. The venue was reasonably painless to sort out but we spent lots of time figuring out accommodation; the University townhouses required final numbers and payment 6 months in advance of the event, so we spent a lot of time looking into other options (but ended up deciding that the townhouses would be best even though we would inevitably lose a bit of money on them).

September 2016: We begin holding monthly-ish meetings with myself, Alberto, Allan and Javier present. Work begins on the sponsorship brochure (which is complicated by needing to coordinate with GNOME.Asia and potentially LAS), and talks continue with the venue.

December 2016: Contracts finally signed for venue and accommodation (4 months later!), conference dates finalized. We apply for a UK bank account as an “unincorporated association”. Discussion begins about the website; we decide to hold off on announcing the dates until we have some kind of website in place.

January 2017: Basic website finished, dates announced. Lots of work on getting the registration system ready. We begin meeting each week on a Monday evening. Initial logo made by Jakub and Allan.

February 2017: Trip to FOSDEM, where we put up a few GUADEC posters. Summer still seems a long way off. Codethink sponsorship confirmed. We start thinking about keynote speakers. Javier and Lene look into social event venues, including somewhere for the 20th birthday party (with hearts already set on MOSI). The search for a new Executive Director for GNOME finally comes to a close with Neil McGovern being hired, and he soon starts joining the GUADEC calls and helping out (in particular with the search for sponsors, which up till now has been nearly all Alberto’s work).

March 2017: After 4 months of bureaucracy, our bank account is finally approved. After much hacking and design work, we can finally open registration and the call for papers. We have to finalize room numbers at the University already, although most rooms are still unbooked. Investigation into getting GNOME Beer brewed (which ended up going nowhere, sadly). Requests for visa invites begin to arrive.

April 2017: Lots of planning for social events, the talk days and the unconference days. PIA sponsorship confirmed. Posters being designed. Call for papers closes, voting begins and Kat starts putting together the talks schedule.

May 2017: Birthday planning with help from the engagement team (in particular Nuritzi). The University temporarily decide that we’ll have to pay staff costs of £500 per day to have the canteen open; we do a bunch of research into alternatives but then we go back to the previous agreement of having the canteen open with just a minimum spend. Planning of video recording and design. Schedule and social events planning.

June and July 2017: Continual planning and discussion of everything. More sponsors confirmed. Allan does prodigious amounts of graphic design and organizing printing. Travel sponsorship finally confirmed and lots of visa invitation requests start to arrive. Accommodation bookings continue to come in, along with an increasing amount of queries, changes and cancellations that become quite time-consuming to keep track of and respond to. Evening events being booked and finalized, including more planning of the birthday party with Nuritzi. Discussions of how to make sure the conference is inclusive to newcomers. Water bottles, cake and T-shirts ordered. Registrations keep coming in until we actually hit and go over 200 registrations. We contact volunteers and come up with a timetable.

Finally, the day before GUADEC we collect the last of the printing, bring everything to the venue and hole up in a room on the 2nd floor ready to pre-print names on badges and stuff the lanyard pouches with gift bags. We discover two major issues: firstly, the ink on the badges gets completely smudged when we run them through the printer to print names on them; and secondly, the emergency telephone number that we’ve printed on the badges has actually been recycled — the SIM card was inactive for a while — and now goes through to some poor unsuspecting third party.

We lay out all the badges to try and dry the ink out, but 3 hours later the smudging is still happening. We realise that the names will just have to be drawn on with marker pens. As for the emergency telephone… if you look closely at a GUADEC 2017 badge you’ll notice that there’s a sticky label with the correct number covering up the old number on the badge. Each one of these was printed onto stickyback paper and lovingly chopped out and stuck on by hand. You’re welcome! (Nobody actually called the emergency phone during the event.)

Javier pointed out that we should be at the registration event at least an hour early (it started at 18:00). I said this was nonsense because most people wouldn’t get there till later anyway. How wrong I was!!! I’m used to organizing music events where people arrive about an hour after you tell them to, but we got to Kro Bar about 17:45 and it was already full to bursting with eager GNOME contributors, many of whom of course hadn’t seen each other for months. This was not the ideal environment to try and set up a registration desk for the first time, and I mostly just stood around looking at boxes feeling confused and occasionally moving things around. Thankfully Kat and Benjamin soon arrived and made registration a reality, leaving me free to drink a beer and remain confused.

And the rest is history!


GUADEC 2017 by numbers

I’m finally getting around to doing a bit of a post-mortem for the 2017 edition of GUADEC that we held in Manchester this year. Let’s start with some statistics!

GUADEC 2017 had…

  • 264 registrations (up from 186 last year)
  • 209 attendees (up from 160 last year)
  • 72 people staying at the University (30 of whom had sponsorship awarded by the travel committee)
  • 7 people who were sadly unable to attend because their visa application was refused at the last minute

We put four optional questions on the registration form asking for your country of residence, your age, your gender identity and how you first heard about GUADEC. The full set of responses (anonymous, of course) is available here.

I don’t plan to do much data mining of this, but here are some interesting stats:

  • 61 attendees said they are resident in the UK, roughly 32%.
  • The most common age of attendees was 35 (the full age range was between 11 years and 65 years)
  • 14 attendees said they heard about the conference through working at Codethink

We asked for an optional, “pay as you feel” donation towards the costs of the conference at registration time and we suggested payments of £15/€15 for students, £40/€40 for hobbyists and £150/€150 for professionals.

  • 47 attendees (22%) chose to donate nothing
  • 29 attendees (13%) chose 1-15
  • 75 attendees (36%) chose 16-40
  • 51 attendees (24%) chose >40
  • 7 attendees somehow chose “NULL” (I think these were on-site registrations, which followed a different process)

Note that we told Codethink staff that they shouldn’t feel required to donate from their company-provided conference budget as Codethink was already sponsoring at Platinum level, which should account for 15 or more of the people who chose to donate nothing with their registration.

The financial side of things is tricky for me to summarize as the sponsor money and registration donations mostly went straight to the Foundation’s bank account, which I don’t have access to. The fluctuation of GBP against the US dollar makes my own budget spreadsheet even less reliable, but I estimate that we raised around $10,000 USD for the GNOME Foundation from GUADEC 2017. This is of course only possible due to the generosity of our sponsors, and through the great work that Alberto and Neil did in this area.

My van did 94 miles around Manchester during the week of GUADEC. My house is only 4 miles from the centre so this is surprisingly high!

 


BuildStream and host tools

It’s been a while since I had to build a whole operating system from source. In fact I’ve mostly been working on compilers at Codethink so far this year, but my new project is to bring up some odd target systems that aren’t supported by any mainstream distros.

We did something similar about 4 years ago using Baserock and it worked well; this time we are using the Baserock OS definitions again, but with BuildStream as the build tool. I’ve not had any chance to get involved in BuildStream until now (beyond observing it) so this will be good.

The first thing I’m getting my head around is the “no host tools” policy. The design of BuildStream is that every build is run in a sandbox that’s isolated from the host. Older Baserock tools took a similar approach too and it makes a lot of sense: it’s a lot easier to maintain build instructions if you limit the set of environments in which they can run, and you are much more likely to be able to reproduce them later or on other people’s machines.

However, your sandbox is going to need a compiler and a shell environment if it’s going to be able to build anything, and BuildStream leaves open the question of where those come from. It’s simple to find a prebuilt toolchain, at least for mainstream architectures — pretty much every Linux distro can provide one — so the only questions are which one to use and how to get it into BuildStream’s sandbox.

GNOME and Freedesktop base runtime and SDK

The Flatpak project has a similar need for a controlled runtime and build environment, and is producing a GNOME SDK and a lower-level Freedesktop SDK. These are at present built on top of Yocto.

Up-to-date versions of these are made available in an OSTree repo at http://sdk.gnome.org/repo. This makes it easy to import them into BuildStream using an ‘import’ element and the ‘ostree’ source:

kind: import
description: Import the base freedesktop SDK
config:
  source: files
  target: usr
host-arches:
  x86_64:
    sources:
      - kind: ostree
        url: gnomesdk:repo/
        track: runtime/org.freedesktop.BaseSdk/x86_64/1.4
        gpg-key: keys/gnome-sdk.gpg
        ref: 0d9d255d56b08aeaaffb1c820eef85266eb730cb5667e50681185ccf5cd7c882
  i386:
    sources:
      - kind: ostree
        url: gnomesdk:repo/
        track: runtime/org.freedesktop.BaseSdk/i386/1.4
        gpg-key: keys/gnome-sdk.gpg
        ref: 16036b747c1ec8e7fe291f5b1f667cb942f0267d08fcad962e9b7627d6cf1981

The main downside to using these is that they are pretty large — the GNOME 3.18 SDK weighs in at 1.5 GB uncompressed and around 63,000 files. Creating a hardlink tree using `ostree checkout` takes up to a minute on my (admittedly rather old) laptop. The Freedesktop SDK is smaller but still not ideal. They are also only built for a small set of architectures — I think just some x86 and ARM families at the moment.
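For reference, fetching and checking out the runtime by hand looks roughly like this (just a sketch — not exactly what BuildStream’s ostree source does internally, and the local repo name is made up):

# Mirror the runtime into a local bare-user repo, then time a hardlink checkout
ostree --repo=sdk init --mode=bare-user
ostree --repo=sdk remote add --no-gpg-verify gnomesdk http://sdk.gnome.org/repo
ostree --repo=sdk pull gnomesdk runtime/org.freedesktop.BaseSdk/x86_64/1.4
time ostree --repo=sdk checkout -U runtime/org.freedesktop.BaseSdk/x86_64/1.4 sdk-checkout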

Debian in OSTree

As part of building GNOME’s JHBuild modulesets inside BuildStream, Tristan created a script to produce Debian chroots for various architectures and commit them to an OSTree repo. The GNOME components are then built on top of these base Debian images, with the idea that in future they can be tested on top of a whole variety of distros in addition to Debian, helping us catch platform-specific regressions more quickly.

The script, which uses the awesome Multistrap tool to do most of the heavy lifting, lives here and pushes its results to a repo that is temporarily housed at https://gnome7.codethink.co.uk/repo/ and signed with this key.
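For a rough idea of what that involves, here’s a minimal Multistrap sketch (the suite, package and branch choices are illustrative, not what the actual script uses):

cat > multistrap.conf <<'EOF'
[General]
arch=amd64
directory=debian-sysroot
unpack=true
bootstrap=Debian
aptsources=Debian

[Debian]
packages=build-essential
source=http://deb.debian.org/debian
suite=stretch
EOF

# Build the chroot (run as root or under fakeroot), then commit it to a local OSTree repo
multistrap -f multistrap.conf
ostree --repo=repo init --mode=archive-z2
ostree --repo=repo commit -s "Debian stretch amd64 sysroot" --branch=sysroots/debian-stretch-amd64 debian-sysroot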

The resulting sysroot is 2.7 GB in size with 105,320 different files. This again takes up to a minute to check out on my laptop. Like the GNOME SDK, this sysroot contains every external dependency of GNOME, which adds up to a lot of stuff.

Alpine Linux Toolchain

I want a lighter weight set of host tools to put in my build sandbox. Baserock’s OS images can be built with just a C++ toolchain and a minimal shell environment, so there’s no need to start copying gigabytes of dependencies around.

Ultimately the Baserock project could build its own set of host tools, but to save faff while prototyping things I decided to try Alpine Linux, which is a minimal distribution.

Alpine Linux provides “mini root filesystem” tarballs. These can’t be used directly as they contain device nodes (so require privileges to extract) and don’t contain a toolchain.

Here’s how I produced a workable host tools sysroot. I’m using Bubblewrap (the same tool used by BuildStream to create build sandboxes) as a simple container driver to run the `apk` package tool as root without needing special host privileges. This won’t work on every OS; you can use something like Docker or plain old `chroot` instead if needed.

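# Download the Alpine mini root filesystem and unpack it, skipping /dev (extracting device nodes needs privileges)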
wget https://nl.alpinelinux.org/alpine/v3.6/releases/x86_64/alpine-minirootfs-3.6.1-x86_64.tar.gz
mkdir -p sysroot
tar -x -f alpine-minirootfs-3.6.1-x86_64.tar.gz -C sysroot --exclude=./dev

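# Run apk as (namespaced) root inside the sysroot using Bubblewrap, sharing only the network and the host's DNS configuration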
alias alpine_exec='bwrap --unshare-all --share-net --setenv PATH /usr/bin:/bin:/usr/sbin:/sbin  --bind ./sysroot / --ro-bind /etc/resolv.conf /etc/resolv.conf --uid 0 --gid 0'
alpine_exec apk update
alpine_exec apk add bash bc gcc g++ musl-dev make gawk gettext-dev gzip linux-headers perl e2fsprogs mtools

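# Repack the result as the host tools sysroot tarball that BuildStream will import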
tar -z -c -f alpine-host-tools-3.6.1-x86_64.tar.gz -C sysroot .

This produces a 219 MB host tools sysroot containing 11,636 files. This is not as minimal as you can go with a GNU C/C++ toolchain, but it’s around the right order of magnitude and it checks out from BuildStream’s artifact store into the build directory in a matter of seconds.

We include gawk as it is needed during the GCC build (BusyBox awk is not enough), and gettext-dev is needed by glibc (at least, libintl.h is needed, and in Alpine only gettext provides that header). Bash is needed by scripts/config from linux.git, and bc, GNU gzip, linux-headers and Perl are also needed for building Linux. e2fsprogs and mtools are useful for creating disk images.

I’ve integrated this into my builds in a pretty lazy way for now:

kind: import
description: Import an Alpine Linux C/C++ toolchain
host-arches:
  x86_64:
    sources:
    - kind: tar
      url: file:///home/sam/src/buildstream-bootstrap/alpine-host-tools-3.6.1-x86_64.tar.gz
      base-dir: .
      ref: e01d76ef2c7e3e105778e2aa849a42d38dc3163f8c15f5b2de8f64cd5543cf29

This element is obviously not something I can share with others — I’d need to upload the tarball somewhere or set up a public OSTree repo that others could pull from, and then have the element reference that.
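Publishing it as an OSTree repo would look roughly like this (a sketch with a made-up branch name; the resulting repo directory can then be served by any static web server):

# Commit the sysroot into an archive-mode repo that can be served over plain HTTP
ostree --repo=repo init --mode=archive-z2
ostree --repo=repo commit -s "Alpine 3.6.1 x86_64 host tools" --branch=sysroots/alpine-3.6-x86_64 sysroot
ostree --repo=repo summary -u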

However, this is just the first step towards some much deeper work which will result in me needing to move beyond Alpine in any case. In future I hope that it’ll be pretty straightforward to obtain a minimal toolchain as a sysroot that can be pulled into a sandbox using OSTree. The work required to produce such a thing is simple enough to automate, but it requires a server to host the binaries, which then requires ongoing maintenance for security updates, so I’m not yet going to commit to doing it…


GUADEC Talks Schedule Now Available

As mentioned on the GUADEC 2017 blog, we’ve just published the talks schedule for this year’s edition.

Thanks to everyone who submitted a talk this year; we think there’s a fantastic mix of interesting topics on each day. We sadly had to make some tough decisions and turn down some great submissions as well — we received around 70 talk submissions this year, which is a lot to fit into a 3-day, 2-track conference.

It’s now less than 6 weeks until the conference starts! If you’re planning on attending and haven’t yet registered, please register now at registration.guadec.org. It will be possible to register on the day, but we have to finalize room allocations, party venues and food orders way in advance so it will cause problems if everyone waits until the last minute to sign up.

If you’ve not yet booked a room, we have a number of rooms still available at the conference venue. You can book these when you register, or if you already registered then just log into registration.guadec.org and click ‘Edit registration’.

We also have hotel rooms available at fixed prices through Visit Manchester’s hotel booking service. The deadline for booking these rooms is Thursday 29th June.

Thanks to everyone who has opted to volunteer during the conference using the “Keep me informed” box on the registration form. We will be in touch with you very soon to discuss how you can get involved. There’s still time to edit your registration to tick this box if you’ve decided you want to help out on the day!


Manchester

Last night a suicide attack took place in Manchester, killing at least 22 people. I don’t have much to comment on that, apart from saying that everyone’s thoughts are with those who have been injured or lost friends and family in the attack, and to quote a friend of mine:

If you think you can sow disunity in Manchester with a bomb, you don’t know Manchester.


Tracker 💙 Meson

A long time ago I started looking at rewriting Tracker’s build system using Meson. Today those build instructions landed in the master branch in Git!

Meson is becoming pretty popular now so I probably don’t need to explain why it’s such a big improvement over Autotools. Here are some key benefits:

  • It takes 2m37s for me to build from a clean Git tree with Autotools, but only 1m08s with Meson.
  • There are 2573 lines of meson.build files, vs. 5013 lines of Makefile.am, a 2898-line configure.ac file, and various other bits of debris needed for Autotools
  • Only compile warnings are written to stdout by default, so they’re easy to spot
  • Out of tree builds actually work

Tracker is quite a challenging project to build, and I hit a number of issues in Meson along the way plus a few traps for the unwary.

We have a huge number of external dependencies — Meson handles this pretty neatly, although autodetection of backends requires a bit of boilerplate.

There’s a complex mix of Vala and C code in Tracker, including some libraries that are written in both. The Meson developers have put a lot of work into supporting Vala, which is much appreciated considering it’s a fairly niche language. In fact the only major problem we have left is something that’s just as broken with Autotools: failing to generate a single introspection repo for a combined C + Vala library.

Tracker also has a bunch of interdependent libraries. This caused continual problems because Meson does very little deduplication in the command lines it generates, so I’d get combinatorial explosions hitting fairly ridiculous errors like the command line being too long (the limit is 262 KB) or too many open files inside the ld process. This is a known issue. For now I work around it by manually specifying some dependencies for individual targets instead of relying on them getting pulled in as transitive dependencies of a declare_dependency target.

A related issue was that if the same .vapi file ends up on the valac command line more than once, it triggers an error. This required some trickery to avoid. Newer versions of Meson work around this issue anyway.

One pretty annoying issue is that generated files in the source tree cause Meson builds to fail. Out-of-tree builds don’t seem to work with our Autotools build system — something to do with the Vala integration — with the result that you need to run `make clean` before running a Meson build, even if the Meson build is in a separate build dir. If you see errors about conflicting types or duplicate definitions, that’s probably the issue. While developing the Meson build instructions I had a related problem of forgetting about certain files that needed to be generated, because the Autotools build system had already generated them. Be careful!
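Concretely, the workaround amounts to something like this (assuming the default Ninja backend):

# Clear out anything a previous in-tree Autotools build generated,
# then configure and build out of tree with Meson
make clean
meson build
ninja -C build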

Meson users need to be aware that the rpath is not set automatically for you. If you previously used Libtool you probably didn’t need to care what an rpath was, but with Meson you have to manually set install_rpath for every program that depends on a library that you have installed into a non-standard location (such as a subdirectory of /usr/lib). I think rpaths are a bit of a hack anyway — if you want relocatable binary packages you need to avoid them — so I like that Meson is bringing this implementation detail to the surface.

There are a few other small issues: for example, we have a Gtk-Doc build that depends on the output of a program, which Meson’s gtk-doc module currently doesn’t handle, so we have to rebuild that documentation on every build as a workaround. There are also some workarounds in the current Tracker Meson build instructions that are no longer needed — for example, installing generated Vala headers used to require a custom install script, but now it’s supported more cleanly.

Tracker’s Meson build rules aren’t quite ready for prime time: some tests fail when run under Meson that pass when run under Autotools, and we have to work out how best to create release tarballs. But it’s pretty close!

All in all this took a lot longer to achieve than I originally hoped (about 9 months of part-time effort), but in the process I’ve found some bugs in both Tracker and Meson, fixed a few of them, and hopefully made a small improvement to the long process of turning GNU/Linux users into GNU/Linux developers.

Meson has come a long way in that time and I’m optimistic for its future. It’s a difficult job to design and implement a new general purpose build system (plus project configuration tool, test runner, test infrastructure, documentation, etc. etc), and the Meson project have done so in 5 years without any large corporate backing that I know of. Maintaining open source projects is often hard and thankless. Ten thumbs up to the Meson team!


Night Bus: simple SSH-based build automation


My current project at Codethink has involved testing and packaging GCC on several architectures. As part of this I wanted nightly builds of ‘master’ and the GCC 7 release branch, which called for some kind of automated build system.

What I wanted was a simple CI system that could run some shell commands on different machines, check if they failed, and save a log somewhere that can be shared publicly. Some of the build targets are obsolete proprietary OSes where modern software doesn’t work out of the box, so simplicity is key. I considered using GitLab CI, for example, but it requires a runner written in Go, which is not something I can just install on AIX. And I really didn’t have time to maintain a Jenkins instance.

So I started by trying to use Ansible as a CI system, and it kind of worked, but the issue is that there’s no way to get the command output streamed back to you in real time. GCC builds take hours and its test suite can take a full day to run on an old machine, so it’s essential to be able to see how things are progressing without waiting a full day for the command to complete. If you can’t see the output, the build could be hanging somewhere and you’d not realise. I discovered that Ansible isn’t going to support this use case, and so I ended up writing a new tool: Night Bus.

Night Bus is written in Python 3 and runs tasks across different machines, similarly to Ansible but with the use case of doing nightly builds and tests as opposed to configuration management. It provides:

  • remote task execution via SSH (using the Parallel-SSH library)
  • live logging of output to a specified directory
  • an overall report written once all tasks are done, which can contain status messages from the tasks
  • parametrization of the tasks (to e.g. build 3 branches of the same thing)
  • a support library of helper functions to make your task scripts more readable

Scheduled execution can be set up using cron or systemd timers. You can set up a webserver (I’m using lighttpd) on the machine that runs Night Bus to make the log output available over HTTP.
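For example, a crontab entry along these lines does the job (the wrapper script is hypothetical — it just contains whatever command you normally use to launch Night Bus):

# Run the nightly tasks at 02:00; stdout/stderr from the (hypothetical) wrapper go to a separate log
0 2 * * * /home/automation/run-night-bus.sh >> /home/automation/night-bus-cron.log 2>&1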

You control it by creating two YAML (or JSON) files:

  • hosts describes the SSH configuration for each machine
  • tasks lists the sequence of tasks to run

Here’s an example hosts file:

host1:
  user: automation
  private_key: ssh/automation.key

host2:
  proxy_host: 86.75.30.9
  proxy_user: jenny
  proxy_private_key: ssh/jenny.key

Here’s an example tasks file:

tasks:
- name: counting-example
  commands: |
    echo "Counting to 20."
    for i in `seq 1 20`; do
      echo "Hello $i"
      sleep 1
    done

You might wonder why I didn’t just write a shell script to automate my builds as many thousands of hackers have done in the past. Basically I find maintaining shell scripts over about 10 lines to be a hateful experience. Shell is great as a “live” programming environment because it’s very flexible and quick to type. But those strengths turn into weaknesses when you’re trying to write maintainable software. Every CI system ultimately ends up with you writing shell scripts (or if you’re really unlucky, some XML equivalent) so I don’t see any point hiding the commands that are being run under layers of other stuff, but at the same time I want a clear separation between the tasks themselves and the support aspects like remote system access, task ordering, and logging.

Night Bus is released as a random GitHub project that may never get much in the way of updates. My aim is for it to fall into the category of software that doesn’t need much ongoing work or maintenance because it doesn’t try to do anything special. If it saves one person from having to maintain a Jenkins instance then the week I spent writing it will have been worthwhile.
