Calliope 10.0: creating music playlists using Tracker Miner FS

I just published version 10.0 of the open source playlist generation toolkit, Calliope. This fixes a couple of long-standing issues I wanted to tackle.

SQLite Concurrency

The first of these manifested itself only as intermittent Gitlab CI failures when you submitted pull requests. Calliope uses SQLite to cache data, and a cache may be used by multiple concurrent processes. SQLite has a “Write-Ahead Log” journalling mode that should excel at concurrency, but somehow I kept seeing “database is locked” errors from a test that verified the cache with multiple writers. The lesson: make sure to explicitly *close* database connections in your Python threads.
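For anyone hitting the same thing, here is a minimal sketch of the pattern (the database path and schema are made up for illustration, not Calliope's actual cache layout):

```python
import os
import sqlite3
import tempfile
import threading

# A throwaway database path for this sketch; Calliope's real cache lives elsewhere.
DB_PATH = os.path.join(tempfile.gettempdir(), "cache-demo.sqlite")
for suffix in ("", "-wal", "-shm"):
    if os.path.exists(DB_PATH + suffix):
        os.remove(DB_PATH + suffix)

def init_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA journal_mode=WAL")  # enable the Write-Ahead Log
    conn.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT)")
    conn.commit()
    conn.close()

def worker(n):
    # Every thread gets its own connection: sqlite3 connections must not
    # be shared across threads.
    conn = sqlite3.connect(DB_PATH, timeout=30)
    try:
        conn.execute("INSERT INTO cache VALUES (?, ?)", (f"key-{n}", f"value-{n}"))
        conn.commit()
    finally:
        # The fix: a connection left open when a thread exits can keep a
        # lock alive and trigger "database is locked" in other writers.
        conn.close()

init_db()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the explicit close in the `finally` block, eight concurrent writers complete without locking errors; drop it and the same test becomes flaky.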

Content resolution with Tracker Miner FS

The second issue was content resolution using Tracker Miner FS, which worked nicely but very slowly. Some background here: “content resolution” involves finding a playable URL for a piece of music, given metadata such as the artist name and track name. Calliope can resolve content against remote services such as Spotify, and can also resolve against a local music collection using the Tracker Miner FS index. The “special mix” example, which generates nightly playlists of excellent music, takes a large list of songs from ListenBrainz and tries to resolve each one locally, to check it’s available and to get its duration. Until now this took over 30 minutes at 100% CPU.

Why so slow? The short answer is: cpe tracker resolve was not using the Tracker FTS (Full Text Search) engine. Why? Because there are some limitations in Tracker FTS that mean we can’t use it in all cases.

About Tracker FTS

The full-text search engine in Tracker uses the SQLite FTS5 module. Any resource type marked with tracker:fullTextIndexed can be queried using a special fts:match predicate. This is how Nautilus search and the tracker3 search command work internally. Try running this command to search your music collection locally for the word “Baby”:

tracker3 sparql --dbus-service org.freedesktop.Tracker3.Miner.Files \
-q 'SELECT ?title { ?track a nmm:MusicPiece; nie:title ?title; fts:match "Baby" }'

This feature is great for desktop search, but it’s not quite right for resolving music content based on metadata.

Firstly, it is doing a substring match. So if I search for the artist named “Eagles”, it will also match “Eagles of Death Metal” and any other artist that contains the word “Eagles”.

Secondly, symbol matching is very complicated, and the current Tracker FTS code doesn’t always return the results I want. There are at least two open issues (400 and 171) about these bugs. It is tricky to get this right: is ' (U+0027) the same as ʽ (U+02BD)? What about ՚ (U+055A, the “Armenian Apostrophe”)? This might require some investment of time and money in Tracker SPARQL before we get a fully polished implementation.

My solution in the meantime is as follows:

  1. Strip all words with symbols from the “artist name” and “track name” fields.
  2. If one of the fields is now empty, run the slow lookup_track_by_name query, which uses FILTER to do string matching against every track in the database.
  3. Otherwise, run the faster lookup_track_by_name_fts query. This uses both FTS *and* FILTER string matching. If FTS returns extra results, the FILTER query still picks the right one, but we are only doing string matching against the FTS results rather than the whole database.
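As a rough illustration of those three steps, something like the sketch below. Only the two query names come from Calliope; the helper functions and the exact definition of a “safe” word are mine, for illustration:

```python
import re

# Words containing only letters, digits or underscores are treated as safe
# to pass to FTS; anything with other symbols gets stripped (step 1).
SAFE_WORD = re.compile(r"^\w+$")

def strip_symbol_words(text):
    """Drop every word that contains a symbol or punctuation mark."""
    return " ".join(word for word in text.split() if SAFE_WORD.match(word))

def choose_query(artist_name, track_name):
    """Pick between the fast FTS query and the slow FILTER-only query."""
    artist = strip_symbol_words(artist_name)
    track = strip_symbol_words(track_name)
    if not artist or not track:
        # Step 2: a field lost all its words, so fall back to string
        # matching against every track in the database.
        return "lookup_track_by_name"
    # Step 3: FTS narrows the candidates, then FILTER picks the exact match.
    return "lookup_track_by_name_fts"
```

So “Eagles” / “Hotel California” takes the fast path, while anything like “Svefn-g-englar”, where every word carries a symbol, falls back to the slow one.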

Some unscientific profiling shows the special_mix script took 7 minutes to run last night, down from 35 minutes the night before. Success! And it’d be faster still if people would stop writing songs with punctuation marks in the titles.

Screenshot of a Special Mix playlist
Yesterday’s Special Mix.

Standalone SPARQL queries

You might think Tracker SPARQL and Tracker Miners have stood still since the Tracker 3.0 release in 2020. Not so. Carlos Garnacho has done huge amounts of work all over the two codebases bringing performance improvements, better security and better developer experience. At some point we need to do a review of all this stuff.

Anyway, precompiled queries are one area that improved, and it’s now practical to store all of an app’s queries in separate files. Today most Tracker SPARQL users still build queries by string concatenation, so each query is hidden away in Python or C code as string fragments, and can’t easily be tested or verified independently. That’s not necessary any more. In Tracker Miners we already migrated to using standalone query files (here and here). I took the opportunity to do the same in Calliope.

The advantages are clear:

  • no danger of “SPARQL injection” attacks, nor bugs caused by concatenation mistakes
  • a query is compiled to SQLite bytecode just once per process, instead of happening on every execution
  • you can check and lint the queries at build time (to do: actually write a SPARQL linter)
  • you can run and test the queries independently of the app, using tracker3 sparql --file. (Support for setting query parameters is due to land Real Soon.)
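The first bullet is the same injection risk familiar from SQL. As a rough analogy in plain SQLite (which is what Tracker SPARQL compiles queries to underneath), here is the difference between splicing user input into the query text and binding it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (title TEXT)")
conn.execute("INSERT INTO track VALUES ('Baby')")

user_input = "Baby' OR '1'='1"  # a hostile search term

# String concatenation: the input becomes part of the query text, so
# quoting mistakes turn into injection bugs.
unsafe = "SELECT COUNT(*) FROM track WHERE title = '%s'" % user_input
assert conn.execute(unsafe).fetchone()[0] == 1  # matches every row!

# Parameter binding: the query text is fixed (and compiled once); the
# value travels separately and can never change the query's meaning.
safe = "SELECT COUNT(*) FROM track WHERE title = ?"
assert conn.execute(safe, (user_input,)).fetchone()[0] == 0
```

Standalone query files with bound parameters give you the second behaviour by construction.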

The only catch is some apps have logic in C or Python that affects the query functionality, which will need to be implemented in SPARQL instead. It’s usually possible but not entirely obvious. I got ChatGPT to generate ideas for how to change the SPARQL. Take the easy option! (But don’t trust anything it generates).


Next steps for Calliope

Version 10.0 is a nice milestone for the project. I have some ideas for more playlist generators but I am not sure when I’ll get more time to experiment. In fact I only got time for the work above because I was stuck on the sofa with a head-cold. In a way this is what success looks like.

Status update, 17/11/2023

In Santiago we are just coming out of 33 consecutive days of rain. The all-time record here is 41 consecutive days, but in some less rainy parts of Galicia there have been actual records broken this autumn. It feels like I have been unnaturally busy but at least with some interesting outcomes.

I spent a lot of time helping Outreachy applicants get started with openQA testing for GNOME. I have been very impressed with the effort some candidates have put in. We have come a long way since October when some participants had yet to even install Linux. We already have a couple of contributions merged in openqa-tests (well done to Reece and Riya), and some more which will hopefully land soon. The final participants are announced next Monday 20th November.

Helping newcomers took up most of my free time, but I did also make some progress on openQA testing for apps, in that it’s now actually possible 🙂 Let me know if you maintain an app and are interested in becoming an early adopter of openQA!

Improvements to my helper tool for VM-based openQA testing

It’s two years since I started looking into end-to-end testing of GNOME using openQA. While developing the end-to-end tests I found myself running them locally on my machine a lot, and the experience was fiddly, so I wrote a simple helper tool named ssam_openqa to automate my workflow.

Having chosen to write ssam_openqa in Rust, it’s now really fun to hack on, and I somewhat gratuitously gave it an interactive frontend using the indicatif Rust library.

Here’s what it looks like to run the GNOME OS end-to-end tests with --frontend=interactive in the newly released 1.1.0rc2 release (video):

You can pause the tests while they run by pressing CTRL+C, or use the --pause-test or --pause-event options to pause on certain events. This lets you open a VNC viewer and access the VM itself, which makes debugging test failures much nicer.

I’m giving a couple more talks about end-to-end testing with openQA this year. This month at OSSEU 2023 in Bilbao, I’m filling in for my colleague James Thomas, talking about openQA in automotive projects. And in October, at XDC 2023 in A Coruña, I’ll be speaking about using openQA as a way to do end-to-end testing of graphical apps. See you there!

Automated point and click

I have been watching GNOME’s testing story get better and better for a long time. The progress we made since we first started discussing the GNOME OS initiative is really impressive, even when you realize that GUADEC in A Coruña took place nine years ago. We got nightly OS images, Gitlab CI and the gnome-build-meta BuildStream project, not to mention Flatpak and Flathub.

Now we have another step forwards with the introduction of OpenQA testing for the GNOME OS images. Take a look at the announcement on GNOME Discourse to find out more about it.

Automated testing is quite tedious and time consuming to set up, and there is significant effort behind this – from chasing regressions that broke the installer build, and debugging VM boot failures to creating a set of simple, reliable tests and integrating OpenQA with Gitlab CI. A big thanks to Codethink for sponsoring the time we are spending on setting this up. It is part of a wider story aiming to facilitate better cooperation upstream between companies and open source projects, which I wrote about in this Codethink article “Higher quality of FOSS”.

It’s going to take time to figure out how to get the most out of OpenQA, but I’m sure it’s going to bring GNOME some big benefits.

New faces in the Tracker project

The GSoC 2021 cohort has just been announced. There’s a fantastic list of organisations involved, including GNOME, and I’m happy that this year two of those projects will be based around Tracker.

The two interns working on Tracker are:

We were lucky to have several promising candidates. I want to shout out Nitin in particular for getting really involved with Tracker and making some solid contributions too. I want to remind all GSoC applicants of two things. Firstly that a track record of high quality open source contributions is something very valuable and always an advantage when applying for jobs and internships. Including next year’s GSoC 🙂 And secondly that if 5 folk propose the same project idea, only one can be chosen, but if 5 different project ideas arrive then we may be able to choose two or even three of them.

I also want to highlight the great work Daniele Nicolodi has been doing recently on the database side of Tracker. If you want a SPARQL 1.1 database and don’t want to go EnterpriseTM Scale, your options are surprisingly limited, and one goal of the Tracker 3 work was to make libtracker-sparql into a standalone database option. Daniele has moved this forward, already getting it running on Mac OS X and cleaning up a number of neglected internal codepaths.

I hope the increased involvement shows our developer experience improvements are starting to pay dividends. More eyes on the code that powers search in GNOME is always a good thing.

Return to Codethink

2020 was a year full of surprises, so, fittingly, I finished it by returning to work in the same job that I left exactly three years ago.

There are a few reasons I did that! I will someday blog in more detail about working as a language teacher. It’s a fun job but to make the most of it you have to move around regularly, and I unexpectedly found a reason to settle in Santiago. Codethink kindly agreed that I could join the ongoing remote-work revolution and work from here.

Three years is a long time. What changed since I left? There’s a much bigger and nicer office in Manchester, with nobody in it due to the pandemic. The company is now grouped into 4 internal divisions. This is still an experiment and it adds some management overhead, but it also helps to maintain a feeling of autonomy in a company that’s now almost 100 people. (When I started there ten years ago, I think there were seventeen employees?!)

I also want to mention some research projects that my colleagues are working on. Codethink is a services company, but has always funded some non-customer work, including, in the past, work on dconf, Baserock, BuildStream and the Freedesktop SDK. These are termed ‘internal investments’ but they are far from internal: the goal is always to contribute to open software and hardware projects. The process for deciding where to invest has improved somewhat in my absence; it still requires some business case for the investment (I’m still thinking how to propose that I get paid to work on music recommendations and desktop search tools all day), but there is now a process!

Here are two things that are being worked on now:

RISC-V

My contribution to Codethink’s RISC-V research was writing an article about it. The tl;dr is we are playing with some RISC-V boards, mainly in the context of Freedesktop SDK. Since writing that article the team tracked down a thorny bug in how qemu-user uses GLib that had been blocking progress, and got GNOME OS running in qemu-system-riscv. Expect to see a video soon. You can thank us when you get your first RISC-V laptop 🙂

Bloodlight

I never worked on a medical device but some of my colleagues have, and this led to the Bloodlight project. It’s an open hardware device for measuring your heart rate, aiming to avoid some pitfalls that existing devices fall into:

Existing technology used in smart watches suffers various shortcomings, such as reduced effectiveness on darker skin tones and tattoos.

There is a lot of technical info on the project on Github, including an interesting data processing pipeline. Or for a higher level overview, the team recently published an article at coruzant.com.

As is often the case, I can’t say exactly what I’m working on right now, other than it’s an interesting project and I am learning more than I ever wanted about Chromium.

Every Contribution Matters

GNOME is lucky to have a healthy mix of paid and volunteer contributors. Today’s post looks at how we can keep it that way.

I had some time free last summer and worked on something that crossed a number of project boundaries. It was a fun experience. I also experienced how it feels to volunteer time on a merge request which gets ignored. That’s not a fun experience, it’s rather demotivating, and it got me thinking: how many people have had the same experience, and not come back?

I wrote a script using the Gitlab API to find open merge requests with no feedback, and I found a lot of them. I started to think we might have a problem.
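The core of such a script is small. This is a sketch, not the script I actually ran: the group path and the “no feedback” heuristic here are simplified, though the endpoint and the state and user_notes_count fields are standard parts of the Gitlab REST API:

```python
import json
from urllib.request import urlopen

GITLAB_API = "https://gitlab.gnome.org/api/v4"

def needs_attention(mr):
    """An open merge request with no comments at all is still waiting
    for a first response."""
    return mr["state"] == "opened" and mr.get("user_notes_count", 0) == 0

def fetch_unreviewed(group="GNOME", per_page=100):
    # One page of open MRs across the whole group; a real script would
    # follow the pagination headers to fetch them all.
    url = (f"{GITLAB_API}/groups/{group}/merge_requests"
           f"?state=opened&per_page={per_page}")
    with urlopen(url) as response:
        return [mr for mr in json.load(response) if needs_attention(mr)]
```

Run against a group the size of GNOME, a filter this simple already returns a worryingly long list.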

GANGSTER CAT - Do we have a problem?

Code Reviews are Whose Job?

I’ve never seen a clear breakdown within GNOME of who is responsible for what. That’s understandable: we’re an open, community-powered project and things change fast. Even so, we have too much tribal knowledge and newcomers may arrive with expectations that don’t match reality.

Each component of GNOME lists one or more maintainers. In principle, the maintainers review new contributions. Many GNOME maintainers volunteer their time, though. If they aren’t keeping up with review, nobody can force them to abandon their lives and spend more time reviewing patches, nor should they; so the solution to this problem can’t be “force maintainers to do X”.

Can we crowdsource a solution instead? Back in 2020 I proposed posting a weekly list of merge requests that need attention. There was a lot of positive feedback so I’ve continued doing this, and now mostly automated the process.

So far a handful of MRs have been merged as a result. The list is limited to MRs marked as “first contribution”, which happens when the submitter doesn’t have anything merged in the relevant project yet. So each success may have a high impact, and hopefully sends a signal that GNOME values your contributions!

Who can merge things, though?

Back to tribal knowledge, because now we have a new problem. If I’m not the maintainer of package X, can I review and merge patches? Should I?

If you are granted a GNOME account, you get ‘developer’ permission to the gitlab.gnome.org/GNOME group. This means you can commit and merge in every component, and this is deliberate:

The reason why we have a shared GNOME group, with the ability to review/merge changes in every GNOME project, is to encourage drive by reviews and contributions. It allows projects to continue improving without blocking on a single person.

— Emmanuele Bassi on GNOME Discourse

Those listed as module maintainers have extra permissions (you can see a comparison between Gitlab’s ‘developer’ and ‘maintainer’ roles here).

On many active projects the culture is that only a few people, usually the maintainers, actually review and merge changes. There are very good reasons for this. Those who regularly dedicate time to keeping the project going should have the final say on how it works, or their role becomes impossible.

Is this documented anywhere? It depends on the project. GTK is a good example, with a clear CONTRIBUTING.md file and list of CODEOWNERS too. GTK isn’t my focus here, though: it does have a (small) team of active maintainers, and patches from newcomers do get reviewed.

I’m more interested in smaller projects which may not have an active maintainer, nor a documented procedure for contributors. How do we stop patches being lost? How do you become a maintainer of an inactive project? More tribal knowledge, unfortunately.

Where do we go from here?

To recap, my goal is that new contributors feel welcome to GNOME, by having a timely response to their contributions. This may be as simple as a comment on the merge request saying “Thanks, I don’t quite know what to do with this.” It’s not ideal, but it’s a big step forwards for the newcomer who was, until now, being ignored completely. In some cases, the request isn’t even in the right place — translation fixes go to a separate Gitlab project, for example — it’s easy to help in these cases. That’s more or less where we’re at with the weekly review-request posts.

We still need to figure out what to do with the merge requests we get which look correct, but where it’s not immediately obvious whether they can be merged.

As a first step, I’ve created a table of project maintainers. The idea is to make it a little easier to find who to ask about a project:

Searchable table of project maintainers, at https://gnome-metrics.afuera.me.uk/maintainers.html

I have some more ideas for this initiative:

  • Require each project to add a CONTRIBUTING.md.
  • Agree a GNOME-wide process for when a project is considered “abandoned” — without alienating our many part-time, volunteer maintainers.
  • Show the world that it’s easy and fun to join the global diaspora of GNOME maintainers.

Can you think of anything else we could do to make sure that every contribution matters?

Calliope: Music recommendations for hackers

I started thinking about playlist generation software about 15 years ago. In that time, so much happened that I can’t possibly summarize it all here. I’ll just mention two things. Firstly, Spotify appeared, and proceeded to hire or buy most of the world’s music recommendation experts and make automatic playlists into a commodity. Secondly, I spent a lot of time iterating on a music tool I call Calliope.

Spotify or not?

Spotify’s discovery features can be a great way to find new music, but I’ve always felt like something was missing. The recommendations are opaque. We know broadly how they work, but there’s no way to know why it’s suggesting I listen to ska punk all day, or I try a podcast titled ‘Tu Inglés’, or play some 80’s alternative classics I’m already familiar with. It gets repetitive.

Some of the most original new music isn’t even available on Spotify. Most folk don’t realise that small artists have to pay a distributor to get their music to appear on streaming services like Spotify and Apple Music, a dubious investment when the return for the artist might be a cheque for $0.10 and a little exposure. No wonder some artists use music purchase sites like Bandcamp exclusively. Of course, this means they’ll never appear in your Discover Weekly playlist.

Algorithms decide which social media posts I see, whether I can get a credit card, and how much I would pay to insure a car. Spotify’s recommendation system is another closed system like the others. But unlike credit agencies and big social networks, the world of music has some very successful repositories of open data. I’ve been saving my listen history to Last.fm since 2006. Shouldn’t I do something with it?

Introducing Calliope

Calliope is an open source tool for hackers who want to generate playlists. Its primary goals are to be a fun side project for me and to produce interesting playlists from my digital music collection. Recently it has begun fulfilling both of those goals, so I decided it’s time to share some details.

Querying my music collection with Calliope

The project consists of a set of commandline tools which operate on playlist data. You use a shell pipeline to define the data pipeline. Your local music collection is queried from Tracker or Beets. You can mix in data from Last.fm, Musicbrainz and Spotify. You can output the results as XSPF playlists in your music player. The implementation is Python, but the commandline focus means it can interact with tools in any language that parses JSON.
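Because everything in the pipeline is JSON, custom stages are trivial to write in any language. As a hypothetical example, here is a stage that keeps only longer tracks; the field names follow XSPF conventions (“title”, “duration” in milliseconds), but check Calliope’s documentation for its exact schema:

```python
import json

def filter_long_tracks(lines, min_duration_ms=180_000):
    """Keep only playlist items at least min_duration_ms long."""
    for line in lines:
        item = json.loads(line)
        if item.get("duration", 0) >= min_duration_ms:
            yield item

# In a real pipeline this would read sys.stdin and write sys.stdout,
# sitting between two cpe commands in a shell pipeline.
demo = [
    '{"title": "Long Song", "duration": 240000}',
    '{"title": "Interlude", "duration": 45000}',
]
kept = list(filter_long_tracks(demo))
```

Drop a script like this into the middle of a pipeline and the stages on either side never need to know it isn’t a built-in command.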

The goal is not to replace Spotify here. The goal is to make recommendations open and transparent. That means you’re going to see the details of how they work. My dream would be that this becomes an educational tool to help us understand more about what “algorithms” (used in the journalistic sense) actually do.

I’m developing a series of example playlist generation scripts. I’m particularly enjoying “Music I haven’t listened to in over a year” — that one requires over a year of listen history data to be useful, of course. But even the “One hour random shuffle” playlist is fun.

A breakthrough this month was the start of a constraints-based approach for selecting songs. I found a useful model in a paper from 2006 titled “Fast Generation of Optimal Music Playlists using Local Search”, and implemented a subset using the Python simpleai library. Simple things can produce great results. I’m only scratching the surface of what’s possible with this model, using constraints on the duration property to ensure songs and playlists are a suitable length. I expect to show off some more sophisticated examples in future.
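To make the constraints idea concrete, here is a dependency-free sketch of the same local-search shape, without the simpleai library Calliope actually uses: start from a random playlist, repeatedly swap one track, and keep any change that moves the total duration closer to one hour. The track library is invented and duplicates are allowed, so this is a toy, not the paper’s algorithm:

```python
import random

def playlist_cost(playlist, target_seconds=3600):
    """Constraint violation: distance of the total duration from one hour."""
    return abs(sum(track["duration"] for track in playlist) - target_seconds)

def hill_climb(tracks, size=12, iterations=5000, seed=1):
    """Swap one track at a time, keeping any change that reduces the cost."""
    rng = random.Random(seed)
    current = rng.sample(tracks, size)
    for _ in range(iterations):
        candidate = list(current)
        # Local move: replace one slot with a random track from the pool.
        candidate[rng.randrange(size)] = rng.choice(tracks)
        if playlist_cost(candidate) < playlist_cost(current):
            current = candidate
    return current

# A made-up library of 200 tracks between 2 and 7 minutes long.
library = [{"id": i, "duration": random.Random(i).randint(120, 420)}
           for i in range(200)]
playlist = hill_climb(library)
```

Even this crude version lands within a few seconds of the one-hour target; the constraints model in the paper generalises the same move-and-score loop to many constraints at once.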

I’m not going to talk much more about it here — if it sounds interesting, read the documentation which I’ve recently been working on, clone the source code, and ask me if there’s any questions. I’m keen to hear what ideas you have.

Tracker 3.0: It’s Here!

This is part 1 of a series. Come back next week to find out more about the changes in Tracker 3.0.

It’s too early to say “Job done”. But we’ve passed the biggest milestone on the project we announced last year: version 3.0 of Tracker is released and the rollout has begun!

We wanted to port all the core GNOME apps in a single release, and we almost achieved this ambitious goal. Nautilus, Boxes, Music, Rygel and Totem all now use Tracker 3. Photos will require 2.x until the next release. Outside of GNOME core, some apps are ported and some are not, so we are currently in a transitional period.

The important thing is only Tracker Miner FS 3 needs to run by default. Tracker Miner FS is the filesystem indexer which allows apps to do instant search and content discovery.

Since Photos 3.38 still uses Tracker 2.x we have modified it to start Tracker Miner FS 2 along with the app. This means the filesystem index in the central Tracker 2 database is kept up-to-date while Photos is running. This will increase resource usage, but only while you are using Photos. Other apps which are not yet ported may want to use the same method while they finish porting to Tracker 3 — see Photos merge request 142 to see how it’s done.

Flatpak apps can safely use Tracker Miner FS 3 on the host, via Tracker’s new portal which guards access to your data based on the type of content. It’s up to the app developer whether they use the system Tracker Miner service, or whether they run another instance inside the sandbox. There are upsides and downsides to both approaches.

We published some guidance for distributors in this thread on discourse.gnome.org.

Gratitude

We all owe thanks to Carlos for his huge effort re-thinking and re-implementing the core of Tracker. We should also thank Red Hat for sponsoring some of this work.

I also want to thank all the maintainers who collaborated with us. Marinus and Jean were early adopters in GNOME Music and gave valuable feedback including coming to the regular meetings, along with Jens who also ported Rygel early in the cycle. Bastien dug into reviewing the tracker3 grilo plugin, and made some big improvements for building Tracker Miners inside a Flatpak. In Nautilus, Ondrej and Antonio did some heroic last minute review of my branch and together we reworked the Starred Files feature to fix some long standing issues.

The new GNOME VM images were really useful for testing and catching issues early. The chat room is very responsive and friendly; Abderrahim, Jordan and Valentin all helped me a lot to get a working VM with Tracker 3.

GNOME’s release team were also responsive and helpful, right up to the last minute freeze break request which was crucial to avoiding a “Tracker Miner FS 2 and 3 running in parallel” scenario.

Thanks also to GNOME’s translation teams for keeping up with all the string changes in the CLI tool, and to distro packagers who are now working to make Tracker 3 available to you.

Coming soon to your distro.

It takes time for a new GNOME release to reach users, because most distros have their own testing phase.

We can use Repology to see where Tracker 3 is available. Note that some distros package it in a new tracker3 package while others update the existing tracker package.

Let’s see both:

Packaging status Packaging status

Coming up…

I have a lot more to write about following the Tracker 3.0 release. I’ll be publishing a series of blog posts over the next month. Make sure you subscribe to my blog or to Planet GNOME to see them all!

Twitter without Infinite Scroll

I like reading stuff on twitter.com because a lot of interesting people write things there which they don’t write anywhere else.

But Twitter is designed to be addictive, and a key mechanism they use is the “infinite scroll” design. Infinite scroll has been called the Web’s slot machine because of the way it exploits our minds to make us keep reading. It’s an unethical design.

In an essay entitled “If the internet is addictive, why don’t we regulate it?”, the writer Michael Schulson says:

… infinite scroll has no clear benefit for users. It exists almost entirely to circumvent self-control.

Hopefully Twitter will one day consider the ethics of their design. Until then, I made a Firefox extension to remove the infinite scroll feature and replace it with a ‘Load older tweets’ link at the bottom of the page, like this:

example

The Firefox extension is called Twitter Without Infinite Scroll. It works by injecting some JavaScript code into the Twitter website which disconnects the ‘uiNearTheBottom’ event that would otherwise automatically fetch new data.

Quoting Michael Schulson’s article again:

Giving users a chance to pause and make a choice at the end of each discrete page or session tips the balance of power back in the individual’s direction.

So, if you are a Twitter user, enjoy your new-found power!

GUADEC call for talks ends this Sunday, 23rd April

GUADEC 2017 is just over three months away, which is a very long time in the future and leaves lots of time to organise everything (at least that’s what I keep telling myself).

However, the call for papers is closing this Sunday so if you have something you want to talk about in front of the GNOME community and you haven’t already submitted a talk then please head to the registration site and do so!

Once the call for papers closes, the Papers Committee will fetch their ceremonial robes and make their way to a cave deep in the Peak District for two weeks. There they will drink fresh spring water, hunt grouse on the moors and study your talk submissions in great detail. When the two weeks are up, their votes are communicated back to Manchester using smoke signals, and by Sunday 7th May you’ll be notified by email if your talk was accepted. From there we can organise travel sponsorship and finalize the schedule of the first 3 days of the conference, which should be available late next month.

We’ll put a separate call out for BoF sessions, workshops, and tutorial sessions to take place during the second half of GUADEC — the 23rd April deadline only applies to talks.

GUADEC accommodation

At this year’s GUADEC in Manchester we have rooms available for you right at the venue in lovely modern student townhouses. As I write this there are still some available to book along with your registration. In a couple of days we have to give final numbers to the University for how many rooms we want, so it would help us out if all the folk who want a room there could register and book one now if you haven’t already done so! We’ll have some available for later booking but we have to pay up front for them now so we can’t reserve too many.

Rooms for sponsored attendees are reserved separately so you don’t need to book now if your attendance depends on travel sponsorship.

If you are looking for a hotel, we have a hotel booking service run by Visit Manchester where you can get the best rates from various hotels right up til June 2017. (If you need to arrive before Thursday 27th July then you can contact Visit Manchester directly for your booking at abs@visitmanchester.com).

We have had some great talk submissions already but there is room for plenty more, so make sure you also submit your idea for a talk before 23rd April!

GUADEC 2017: Friday 28th July to Wednesday 2nd August in Manchester, UK

I'm going to GUADEC 2017

The GUADEC 2017 team is happy to officially announce the dates and location of this year’s conference.

GUADEC 2017 will run from Friday 28th July to Wednesday 2nd August. The first three days will include talks and social events, as well as the GNOME Foundation’s AGM. This part of the conference will also include a 20th anniversary celebration for the GNOME project.

The second three days (from Monday 31st July to Wednesday 2nd August) are unconference-style and will include space for hacking, project BoF sessions and possibly training workshops.

The conference days will be at Manchester Metropolitan University’s Brooks Building. The unconference days will be in a nearby University building named The Shed.

Registration and a call for papers will be open later this month. More details, including travel and accommodation tips, are available now at the conference website: https://2017.guadec.org/

We are interested in running training workshops on Monday 31st July but nothing is planned yet. We would like to hear from anyone who is interested in helping to organise a training workshop.

Inside view of MMU Brooks Building


Manchester GNOME 3.22 Release Party – Friday 23rd Sept. @ MADLab

We are hosting a party for the new GNOME release this Friday (23rd September).

The venue is MADLab in Manchester city centre (here’s a map). We will be there between 18:00 and 21:00. There will be some free refreshments, an overview of the new features in 3.22, advice on how to install a free desktop OS on your computer, and how to contribute to GNOME or a related Free Software project.

Everyone is welcome, including users of rival desktop environments & operating systems 🙂

release-party-invite

Codethink is hiring!

We are looking for people who can write code, who match one of these job descriptions at least slightly, and who are willing to relocate to Manchester, UK (so you must either be an EU resident, or able to get a work permit for the UK.) Manchester is number 8 in Lonely Planet’s Best In Travel list for 2016, so really you’d be doing yourself a favour to move here. Remote working is possible if you have lots of contributions to public software projects that demonstrate your amazingness.

There is a nice symmetry to this blog post: I remember reading a similar one quite a few years ago, which led to me applying for a job at Codethink, and I’ve been here ever since, with various trips to exotic countries in between.

If you’re interested, send a CV & cover letter to jobs@codethink.co.uk.