Status update, 19/05/2022

I had some ambitious travel plans last month – ambitious by 2020’s standards, anyway – and somehow they came off without any major issues or contagions. In Cologne I was amazed to go to Freedom Sounds festival and witness the return of the Singers ATX alongside a host of other ska legends. And then came my first ever trip to Italy, where I attended Linux App Summit. It was a treat to be around people who are as enthusiastic as I am about the rather niche topic of Linux desktop apps… but hopefully it’s a topic which is becoming less niche.

Once safely back home I managed to contract COVID-19, so this status update is going to be fairly short.

Things I spent time on that may be of interest to someone include:

  • a minimal “GUI toolkit” for the monome grid, written partly as another Rust exercise, and partly to make experimental beats. Fun to write. A lot of this was developed offline while travelling, and I discovered Rust’s cargo doc feature which is an incredible tool for the disconnected hacker. Code dump here.
  • related to the above, packaging the user space driver for the monome grid as a systemd portable service.
  • updating Fedora on my partner’s laptop. This particular laptop was lacking a Wi-Fi driver, but that is now solved in Fedora 36. I can provide yet another datapoint that the GNOME 42 screenshot experience creates joy and happiness.
  • watching the various new synthesizers unveiled at the Superbooth 2022 event.
  • a single experimental beat, plus some production on a new song due in a few months

Trying out systemd’s Portable Services

I recently acquired a monome grid, a set of futuristic flashing buttons which can be used for controlling software, making music, and/or playing the popular 90’s game Lights Out.

Lights Out game

There’s no sound from the device itself; all it outputs is a USB serial connection. Software instruments connect to the grid to receive button presses and control the lights via the widely supported Open Sound Control protocol. I am using monome-rs to convert the grid signals into MIDI and send them to Bitwig Studio to make interesting noises, which I am excited to share with you in the future, but first we need to talk about software packaging.

Monome provide a system service named serialosc, which connects to the grid hardware (over USB-serial) and provides the Open Sound Control endpoint. This program is not packaged by Linux distributions, and that is fine: it’s rather niche hardware and distro maintainers shouldn’t have to support every last weird device. On the other hand, it’s rather crude to build it from source myself, install it into /usr/local, add a system service, etc. etc. Is there a better way?

First I tried bundling serialosc along with my control app using Flatpak, which is semi-possible – you can build and run the service, but it can’t see the device unless you enable the super-insecure “--device=all” mode, and even then it can’t detect the device because udev is not available, so you would have to hardcode the device node /dev/ttyACM0 and hotplug no longer works … basically this is not a good option.

Then I read about systemd’s Portable Services. This is a way to ship system services in containers, which sounds like the correct way to treat something like serialosc. So I followed the portable walkthrough and within a couple of hours the job was done: here’s a PR that could add Portable Services packaging upstream: https://github.com/monome/serialosc/pull/62
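
For anyone curious what is actually inside a portable service: the image is just a root filesystem (a directory tree or disk image) containing the service binary, an os-release file, and unit files whose names start with the image’s name. A rough sketch, with illustrative file names rather than the exact contents of my PR:

serialosc/                                    # the image: a plain directory tree or disk image
    usr/lib/os-release                        # identifies the OS the service was built against
    usr/lib/systemd/system/serialosc.service  # unit names must share the image's name prefix
    usr/bin/serialoscd                        # the daemon and whatever libraries it needs

portablectl attach then copies the matching units onto the host and runs them with their root directory set to the image.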

I really like the concept here; it has some pretty clear advantages as a way to distribute system services:

  • Upstreams can deliver Linux binaries of services in a (relatively) distro-independent way
  • It works on immutable-/usr systems like Fedora Silverblue and Endless OS
  • It encourages containerized builds, which can lower the barrier for developing local improvements.

This is quite a new technology and I have mixed feelings about the current implementation. Firstly, when I went to screenshot the service for this blog post, I discovered it had broken:

screenshot of terminal showing "Failed at step EXEC - permission denied"

Fixing the error was a matter of – disabling SELinux. On my Fedora 35 machine the SELinux profile seems totally unready for Portable Services, as evidenced in this bug I reported, and this similar bug someone else reported a year ago which was closed without fixing. I accept that you get a great OS like Fedora for free in return for being a beta tester of SELinux, but this suggests portable services are not yet ready for widespread use.

That’s better:

I used the recommended mkosi build to create the serialosc container, which worked as documented and was pretty quick. All mkosi operations have to run as root. This is unfortunate and also interacts badly with the Git safe.directory feature (the source trees are owned by me, not root, so Git’s default config raises an error).
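
If you hit the same thing, one workaround is to add the source tree to root’s Git allowlist, something like this (adjust the path, of course):

sudo git config --system --add safe.directory /path/to/serialosc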

It’s a little surprising the standard OCI container format isn’t supported, only disk images or filesystem trees. Initially I built a 27MB squashfs, but this didn’t work as the container rootfs has to be read/write (another surprise), so I deployed a tree of files in the end. The service container is Fedora-based and comes out at 76MB across 5,200 files – that’s significant bloat around a service implemented in a few thousand lines of C code. If mkosi supported Alpine Linux as a base OS we could likely reduce this overhead significantly.

The build/test workflow could be optimised but is already workable; the following steps take about 30 seconds with a warm mkosi.cache directory:

  • sudo mkosi -t directory -o /opt/serialosc --force
  • sudo portablectl reattach --enable --now /opt/serialosc --profile trusted

We are using the “trusted” profile because the service needs access to udev and /dev, by the way.

All in all, the core pieces are already in place for a very promising new technology that should make it easier for 3rd parties to provide Linux system-level software in a safe and convenient way. Well done to the systemd team for a well-executed concept; all it lacks is some polish around the tooling and integration.

Linux App Summit 2022

Engineers at Codethink get some time and money each year to attend conferences, and part of the deal is that we have to write a report afterwards. Having written mine, I thought… this could be a bit more widely shared! So, excuse the slightly formal tone, but here are some thoughts on LAS 2022.

Talk highlights

1. Energy Conservation and Energy Efficiency with Free Software – Joseph De Veaugh-Geiss

Video, Abstract

If you follow climate science you’ll know we have some major problems coming. Solving these will take big societal changes towards degrowth, which was not the topic of this talk at all. This talk was rather about what Free Software developers might do to help out in the wider context of reducing energy use.

I recommend watching in full as it was well delivered and interesting. Here are some of the points I noted:

  • Aviation was estimated to cause 2.5% of CO₂ emissions in 2017, and “ICT” between 1.5 and 2.8%.
    • The speaker didn’t know if that number includes Bitcoin or other “proof-of-work” things.
    • (It does include Bitcoin – thanks to Joseph for clarifying 🙂)
    • Two things that make the number high: short-lifespan electronic devices, and video streaming/conferencing.
  • Software projects could be more transparent about how much energy they use.
    • Imagine 3 word processors which each provide a GitHub badge showing energy consumption for specific use cases, so you can choose the one which uses the least power
  • KDE’s PDF reader (Okular) recently achieved the Blauer Engel eco-certification.
  • There is an ongoing initiative in KDE to measure and improve energy consumption of software – a “measurement party” in Berlin is the next step.
    • Their method of measuring power consumption requires a desktop PC and a special plug that can measure power at a reasonably high frequency.
  • Free software already has a good story here (as we are often the ones keeping old hardware alive after manufacturers drop support), and there’s an open letter asking the EU to legislate such that manufacturers have to give this option.

2. Flathub – now and next

Video, Abstract

There has been big growth in the number of users. The talk had various graphs, including a download chart showing something like 10PB of data downloaded.

The goal of Flathub is not to package other people’s apps, but rather to reduce barriers for Linux app developers. One remaining barrier is money, as there’s no simple way to finance development of a Linux app at present. So, together with Codethink, GNOME and Flathub are working on a way for app users to make donations to app authors.

Notes

There was a larger presence from Canonical than I am used to, and it seems they are once again growing their desktop team and bringing back the face-to-face Ubuntu Developer Summit (rebranded as the Ubuntu Summit). All good news.

COVID safety:

  • all participants except the speaker were masked during talks
  • the seats were spaced out, and our “green pass” (vaccination or negative test) was checked on day 1
  • at the social events the masks all came off, though these were mostly held in the outside terrace area

Online participation:

  • maybe 30% of talks were given remotely and these worked well – in fact the Flathub talk had one speaker on stage and one projected behind him as a giant head, and even that went smoothly.
  • the online attendees were effectively invisible in the venue; it would have been better if a screen had projected the online chat so we could have some interaction with them.

The organisation was top notch, the local team did a great job and the venue was also perfect for this kind of event. Personally I met more folks from KDE than I ever have and felt like a lot of important desktop-related knowledge sharing took place. Definitely recommend the conference for anyone with an interest in this area.

I don’t have any good photos to share, but check https://twitter.com/hashtag/las2022 for the latest. Hope to see you there next year!

Status update, 15/04/2022

As I mentioned last month, I bought one of these Norns audio-computers and a grey grid device to go with it. So now I have this lap-size electronic music apparatus.

It’s very fun to develop for – the truth is I’ve never got on well with the “standard” tools of Max/MSP and Pure Data, and I suspect flow-based programming just isn’t for me. Yet within an hour of opening the web-based Lua editor on the Norns device I already had a pretty cool prototype of something I hadn’t even intended to make. More on that when I can devote more time to it.

There is some work to make the open-source Norns software run on the Linux desktop in a container. It seems the status is “working but awkward.” I thought to myself – “How hard could it be to package this as a Flatpak?”

As soon as I saw use of Golang and Node.js I knew the answer – very hard.

In the world of Flatpak apps we are rigorous about doing reproducible builds from controlled inputs. The flatpak-builder tool requires that all the Go and JavaScript dependencies are specified ahead of time, and build tools are prevented from connecting to the internet at build time. This is a good thing, as the results are otherwise unpredictable. The packaging tools in the Go and Node.js ecosystems make this kind of attention to detail difficult to achieve.
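
To give an idea of what flatpak-builder expects, here is a sketch of a manifest module with invented names, URLs and checksums. Every input, including each individual Go module or npm package, has to be listed as a pinned source like these before the sandboxed, offline build starts:

{
  "name": "some-go-tool",
  "buildsystem": "simple",
  "build-commands": [
    "go build -mod=vendor -o /app/bin/some-go-tool ."
  ],
  "sources": [
    {
      "type": "git",
      "url": "https://example.com/some-go-tool.git",
      "commit": "0123456789abcdef0123456789abcdef01234567"
    },
    {
      "type": "archive",
      "url": "https://example.com/some-go-dependency-1.2.3.tar.gz",
      "sha256": "<checksum of the tarball>"
    }
  ]
}

There are community tools that can generate these source lists from lockfiles, but it is still a lot of moving parts.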

There is a solution, which would be to build the components using BuildStream and develop plugins to integrate with the Node.js and Go worlds, so it can fetch dependencies ahead of time. There’s an open issue for a Go plugin. For Node.js I am not sure if anything exists.

This led me to thinking – how come the open-source, community-developed world of Flatpaks is so much better at doing reliable, reproducible builds than the corporate environments that I see in my day job?

One last cool thing – thanks to instructions here by @infinitedigits I was able to produce this amazing music video!

Status update, 18/03/2022

This month has had a big focus on music! I just released a 4 track EP called Rust In Peace, which you can listen to on various popular music platforms and (better still) download it from Bandcamp.

The COVID-19 pandemic is not over (I can name 5 folk who have been COVID+ just this week), but its effects on society are lessening, to the point that I could even do a launch gig for the EP in the amazing Café Arume in Santiago – my first real “gig” since 2018.

A similar effect is happening at work, in that work-related travel is becoming possible. My current project, on a huge codebase that resembles a famous painting by Hieronymus Bosch, involves making changes that affect hundreds or thousands of developers, who we meet via Slack DMs and audio-only Zoom calls. For the first time in years, on-site visits are a possibility again, and while I’m genuinely not keen to make a habit of intercontinental air travel, meeting the organisation face to face at least once would make a huge difference to my day-to-day work – the issues are much more about managing processes and people than any specific technologies, and this is super hard to do with strangers.

I am also hoping to travel to Italy next month to attend Linux App Summit 2022 in person – so, see you there? Talk submissions for LAS (online + face-to-face) are open until midnight tonight (18th March), so perhaps it’s not too late to submit a talk or lightning talk!

A few other interesting steps I’ve taken this month:

  • Learn GNOME Shell keyboard shortcuts for moving and resizing windows. You can do a lot with Meta+F7/F8, it turns out.
  • Implement the first “Special mix” playlist generator for Calliope; it takes a histogram of my Listenbrainz history, picks a year and finds 60 minutes of songs that I first discovered in that year. Like the old “Mystery years” radio show, but with better music. (There’s a rough sketch of the idea just after this list.)
  • Buy a Norns audio-computer; basically an RPi 3 with a rich ecosystem of experimental audio effects, written in Lua and SuperCollider. I have been tempted recently by some of the amazing modern guitar pedals like the Chase Bliss Mood and Hologram Microcosm, and this is my attempt to avoid buying any of those 🙂
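
Here is the rough sketch of the “Special mix” idea I promised above. It is plain Python over invented data structures, not Calliope’s actual code; listens is a list of (track_id, listened_at) pairs of the kind you can export from Listenbrainz, and durations maps track IDs to lengths in seconds.

import collections
import random

def special_mix(listens, durations, target_minutes=60):
    # When did I *first* hear each track?
    first_heard = {}
    for track, when in sorted(listens, key=lambda listen: listen[1]):
        first_heard.setdefault(track, when)

    # Histogram of "discoveries" per year, then pick a year weighted by it.
    histogram = collections.Counter(when.year for when in first_heard.values())
    year = random.choices(list(histogram), weights=histogram.values())[0]

    # Fill roughly an hour with tracks first discovered in that year.
    candidates = [t for t, when in first_heard.items() if when.year == year]
    random.shuffle(candidates)
    playlist, total = [], 0
    for track in candidates:
        playlist.append(track)
        total += durations.get(track, 240)   # guess 4 minutes if the length is unknown
        if total >= target_minutes * 60:
            break
    return year, playlist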

Status update, 16/02/2022

January 2022 was the sunniest January I’ve ever experienced. So I spent its precious weekends mostly climbing around in the outside world, and the weekdays preparing for the enormous Python 3 migration that one of Codethink’s clients is embarking on.

Since I discovered Listenbrainz, I’ve wanted to integrate it with Calliope, with two main goals. The first: to use an open platform to share and store listen history, rather than the proprietary Last.fm. The second: to have an open, neutral place to share playlists, rather than pushing them to a private platform like Spotify or Youtube. Over the last couple of months I found time to start that work, and you can now sync listen history and playlists with two new commands, cpe listenbrainz-history and cpe listenbrainz. So far playlists can only be exported *from* Listenbrainz, and the necessary changes to the pylistenbrainz binding are still in review, but it’s a nice start.

Status update, 17/01/2022

Happy 2022 everyone! Hope it was a safe one. I managed to travel a bit and visit my family while somehow dodging Omicron each step of the way. I guess you can’t ask for much more than that.

I am keeping busy at work integrating BuildStream with a rather large, tricky set of components, Makefiles and a custom dependency management system. I am appreciating how much flexibility BuildStream provides. As an example, some internal tools expect builds to happen at a certain path. BuildStream makes it easy to customize the build path by adding this stanza to the .bst file:

variables:
    build-root: /magic/path

I am also experimenting with committing whole build trees as artifacts, as a way to distribute tests which are designed to run within a build tree. I think this will be easier in Bst 2.x, but it’s not impossible in Bst 1.6 either.

Besides that I have been mostly making music, stay tuned for a new Vladimir Chicken EP in the near future.

Status update, 19/12/2021

It’s a time to be thankful for what you can do, rather than be pissed off about the things you can’t do because we’re in the 3rd year of a global pandemic.

I made it home to Shropshire, in time to cast an important vote, and to spend Christmas with my folks… something I couldn’t do last year.

Work involves a client with software integration difficulties. Our goal is to enable a Python 3 migration in the company, which involves a tangle of dependencies in various languages. The interesting aspect is that we’re trialling BuildStream as the solution. We know BuildStream can control the mix of C/C++/Go/Java/etc. dependencies in a way which Python-only tools like virtualenv cannot, and we hope it will involve less friction than introducing a full-blown packaging system like DPKG. The project is challenging for a number of reasons and I am not enjoying working over VPN+SSH to another continent, but I’m sure we will learn a lot.

I was excited to see Your Year in Music available on Listenbrainz this year. I’d love to be able to export the generated playlists with Calliope, but I don’t have the time to implement it myself.

Besides that I am mostly concentrating on relaxing and finishing off some music. Here’s the weather in Wales today – cloud or snow?

Status update, November 2021

I am impressed with the well-deserved rise of Sourcehut, a minimalist and open source alternative to Github and Gitlab. I like their unbiased performance comparison with other JavaScript-heavy Git forges. I am impressed by their substantial contributions to Free Software. And I like that the main developers, Drew DeVault and Simon Ser, both post monthly status update blog posts on their respective blogs.

I miss blog posts.

So I am unashamedly copying the format. I am mostly not paid to work on Free Software, but sometimes I am, so the length of the report will vary wildly.

This month I got a little more Codethink time to work on openqa.gnome.org (shout out to Javier Jardón for getting me that time). Status report here.

I spoke at the first ever PackagingCon to spread the good word about Freedesktop SDK and BuildStream.

As always, I did a little review and issue triage for Tracker and Tracker Miners.

And I have been spending time working on an audio effect. More about that in another post.

Automated point and click

I have been watching GNOME’s testing story get better and better for a long time. The progress we have made since we first started discussing the GNOME OS initiative is really impressive, especially when you realize that the GUADEC in A Coruña took place nine years ago. We got nightly OS images, Gitlab CI and the gnome-build-meta BuildStream project, not to mention Flatpak and Flathub.

Now we have another step forwards with the introduction of OpenQA testing for the GNOME OS images. Take a look at the announcement on GNOME Discourse to find out more about it.

Automated testing is quite tedious and time-consuming to set up, and there is significant effort behind this – from chasing regressions that broke the installer build and debugging VM boot failures, to creating a set of simple, reliable tests and integrating OpenQA with Gitlab CI. A big thanks to Codethink for sponsoring the time we are spending on setting this up. It is part of a wider story aiming to facilitate better cooperation upstream between companies and open source projects, which I wrote about in this Codethink article, “Higher quality of FOSS”.

It’s going to take time to figure out how to get the most out of OpenQA, but I’m sure it’s going to bring GNOME some big benefits.

New faces in the Tracker project

The GSoC 2021 cohort has just been announced. There’s a fantastic list of organisations involved, including GNOME, and I’m happy that this year two of those projects will be based around Tracker.

The two interns working on Tracker are:

We were lucky to have several promising candidates. I want to shout out Nitin in particular for getting really involved with Tracker and making some solid contributions too. I want to remind all GSoC applicants of two things. Firstly that a track record of high quality open source contributions is something very valuable and always an advantage when applying for jobs and internships. Including next year’s GSoC 🙂 And secondly that if 5 folk propose the same project idea, only one can be chosen, but if 5 different project ideas arrive then we may be able to choose two or even three of them.

I also want to highlight the great work Daniele Nicolodi has been doing recently on the database side of Tracker. If you want a SPARQL 1.1 database and don’t want to go Enterprise™ Scale, your options are surprisingly limited, and one goal of the Tracker 3 work was to make libtracker-sparql into a standalone database option. Daniele has moved this forward, already getting it running on Mac OS X and cleaning up a number of neglected internal codepaths.

I hope the increased involvement shows our developer experience improvements are starting to pay dividends. More eyes on the code that powers search in GNOME is always a good thing.

Calliope, slowly building steam

I wrote in December about Calliope, a small toolkit for building music recommendations. It can also be used for some automation tasks.

I added a bandcamp module which lists the albums in your Bandcamp collection. I sometimes buy albums and then don’t download them, because maybe I forgot, or I wasn’t at home when I bought them. So I want to compare my Bandcamp collection against my local music collection and check if something is missing. Here’s how I did it:

# Albums in your online collection that are missing from your local collection.

ONLINE_ALBUMS="cpe bandcamp --user ssssam collection"
LOCAL_ALBUMS="cpe tracker albums"
#LOCAL_ALBUMS="cpe beets albums"

cpe diff --scope=album <($ONLINE_ALBUMS | cpe musicbrainz resolve-ids -) <($LOCAL_ALBUMS) 


Like all things in Calliope this outputs a playlist as a JSON stream, in this case, a list of all the albums I need to download:

{
  "album": "Take Her Up To Monto",
  "bandcamp.album_id": 2723242634,
  "location": "https://roisinmurphy.bandcamp.com/album/take-her-up-to-monto",
  "creator": "Róisín Murphy",
  "bandcamp.artist_id": "423189696",
  "musicbrainz.artist_id": "4c56405d-ba8e-4283-99c3-1dc95bdd50e7",
  "musicbrainz.release_id": "0a79f6ee-1978-4a4e-878b-09dfe6eac3f5",
  "musicbrainz.release_group_id": "d94fb84a-2f38-4fbb-971d-895183744064"
}
{
  "album": "LA OLA INTERIOR Spanish Ambient & Acid Exoticism 1983-1990",
  "bandcamp.album_id": 3275122274,
  "location": "https://lesdisquesbongojoe.bandcamp.com/album/la-ola-interior-spanish-ambient-acid-exoticism-1983-1990",
  "creator": "Various Artists",
  "bandcamp.artist_id": "3856789729",
  "meta.warnings": [
    "musicbrainz: Unable to find release on musicbrainz"
  ]
}

There are some interesting complexities to this, and in 12 hours of hacking I didn’t solve them all. Firstly, Bandcamp artist and album names are not normalized. Some artist names have a spurious “The”, some album names have “(EP)” or “(single)” appended, so they don’t match your tags. These details are of interest only to librarians, but how can software tell the difference?

The simplest approach is to use Musicbrainz, specifically cpe musicbrainz resolve-ids. By comparing IDs where possible we get mostly good results. There are many albums not on Musicbrainz, though, which for now turn up as false positives. Resolving Musicbrainz IDs is a tricky process, too — how do we distinguish Multi-Love (album) from Multi-Love (single) if we only have an album name?

If you want to try it out, great! It’s still aimed at hackers — you’ll have to install from source with Meson and probably fix some bugs along the way. Please share the fixes!

Return to Codethink

2020 was a year full of surprises, so it is perhaps no surprise that I finished it by returning to work in the same job that I left exactly three years ago.

There are a few reasons I did that! I will someday blog in more detail about working as a language teacher. It’s a fun job but to make the most of it you have to move around regularly, and I unexpectedly found a reason to settle in Santiago. Codethink kindly agreed that I could join the ongoing remote-work revolution and work from here.

Three years is a long time. What changed since I left? There’s a much bigger and nicer office in Manchester, with nobody in it due to the pandemic. The company is now grouped into 4 internal divisions. This is still an experiment: it adds some management overhead, but it also helps to maintain a feeling of autonomy in a company that’s now almost 100 people. (When I started there ten years ago, I think there were seventeen employees?!)

I also want to mention some research projects that my colleagues are working on. Codethink is a services company, but it has always funded some non-customer work, including, in the past, work on dconf, Baserock, BuildStream and the Freedesktop SDK. These are termed ‘internal investments’ but they are far from internal: the goal is always to contribute to open software and hardware projects. The process for deciding where to invest has improved somewhat in my absence; it still requires some business case for the investment (I’m still thinking how to propose that I get paid to work on music recommendations and desktop search tools all day), but there is now a process!

Here are two things that are being worked on now:

RISC-V

My contribution to Codethink’s RISC-V research was writing an article about it. The tl;dr is we are playing with some RISC-V boards, mainly in the context of Freedesktop SDK. Since writing that article the team tracked down a thorny bug in how qemu-user uses GLib that had been blocking progress, and got GNOME OS running in qemu-system-riscv. Expect to see a video soon. You can thank us when you get your first RISC-V laptop 🙂

Bloodlight

I never worked on a medical device but some of my colleagues have, and this led to the Bloodlight project. It’s an open hardware device for measuring your heart rate, aiming to avoid some pitfalls that existing devices fall into:

Existing technology used in smart watches suffers various shortcomings, such as reduced effectiveness on darker skin tones and tattoos.

There is a lot of technical info on the project on Github, including an interesting data processing pipeline. Or for a higher level overview, the team recently published an article at coruzant.com.

As is often the case, I can’t say exactly what I’m working on right now, other than it’s an interesting project and I am learning more than I ever wanted about Chromium.

Every Contribution Matters

GNOME is lucky to have a healthy mix of paid and volunteer contributors. Today’s post looks at how we can keep it that way.

I had some time free last summer and worked on something that crossed a number of project boundaries. It was a fun experience. I also experienced how it feels to volunteer time on a merge request which gets ignored. That’s not a fun experience; it’s rather demotivating, and it got me thinking: how many people have had the same experience, and not come back?

I wrote a script with the Gitlab API to find open merge requests with no feedback, and I found a lot of them. I started to think we might have a problem.
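
I haven’t published that script here, but a rough sketch of the approach, assuming the python-gitlab library and anonymous read access to gitlab.gnome.org, looks something like this:

# Sketch only: list open merge requests in the GNOME group that nobody has
# commented on yet. Walking every project makes a lot of API requests, so a
# real version wants caching and some politeness about rate limits.
import gitlab

gl = gitlab.Gitlab("https://gitlab.gnome.org")
group = gl.groups.get("GNOME")

for group_project in group.projects.list(all=True, archived=False):
    project = gl.projects.get(group_project.id)
    for mr in project.mergerequests.list(state="opened", all=True):
        if mr.user_notes_count == 0:
            print(f"{project.path_with_namespace}!{mr.iid}: {mr.title}")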

GANGSTER CAT - Do we have a problem?

Code Reviews are Whose Job?

I’ve never seen a clear breakdown within GNOME of who is responsible for what. That’s understandable: we’re an open, community-powered project and things change fast. Even so, we have too much tribal knowledge and newcomers may arrive with expectations that don’t match reality.

Each component of GNOME lists one or more maintainers, and in principle the maintainers review new contributions. Many GNOME maintainers volunteer their time, though. If they aren’t keeping up with review, nobody can force them to abandon their lives and spend more time reviewing patches, nor should they; so the solution to this problem can’t be “force maintainers to do X”.

Can we crowdsource a solution instead? Back in 2020 I proposed posting a weekly list of merge requests that need attention. There was a lot of positive feedback so I’ve continued doing this, and now mostly automated the process.

So far a handful of MRs have been merged as a result. The list is limited to MRs marked as “first contribution”, which happens when the submitter doesn’t have anything merged in the relevant project yet. So each success may have a high impact, and hopefully sends a signal that GNOME values your contributions!

Who can merge things, though?

Back to tribal knowledge, because now we have a new problem. If I’m not the maintainer of package X, can I review and merge patches? Should I?

If you are granted a GNOME account, you get ‘developer’ permission to the gitlab.gnome.org/GNOME group. This means you can commit and merge in every component, and this is deliberate:

The reason why we have a shared GNOME group, with the ability to review/merge changes in every GNOME project, is to encourage drive by reviews and contributions. It allows projects to continue improving without blocking on a single person.

— Emmanuele Bassi on GNOME Discourse

Those listed as module maintainers have extra permissions (you can see a comparison between Gitlab’s ‘developer’ and ‘maintainer’ roles here).

On many active projects the culture is that only a few people, usually the maintainers, actually review and merge changes. There are very good reasons for this. Those who regularly dedicate time to keeping the project going should have the final say on how it works, or their role becomes impossible.

Is this documented anywhere? It depends on the project. GTK is a good example, with a clear CONTRIBUTING.md file and list of CODEOWNERS too. GTK isn’t my focus here, though: it does have a (small) team of active maintainers, and patches from newcomers do get reviewed.

I’m more interested in smaller projects which may not have an active maintainer, nor a documented procedure for contributors. How do we stop patches being lost? How do you become a maintainer of an inactive project? More tribal knowledge, unfortunately.

Where do we go from here?

To recap, my goal is that new contributors feel welcome to GNOME, by having a timely response to their contributions. This may be as simple as a comment on the merge request saying “Thanks, I don’t quite know what to do with this.” It’s not ideal, but it’s a big step forwards for the newcomer who was, up til now, being ignored completely. In some cases, the request isn’t even in the right place — translation fixes go to a separate Gitlab project, for example — it’s easy to help in these cases. That’s more or less where we’re at with the weekly review-request posts.

We still need to figure out what to do with merge requests we get which look correct, but it’s not immediately obvious if they can be merged.

As a first step, I’ve created a table of project maintainers. The idea is to make it a little easier to find who to ask about a project:

Searchable table of project maintainers, at https://gnome-metrics.afuera.me.uk/maintainers.html

I have some more ideas for this initiative:

  • Require each project to add a CONTRIBUTING.md.
  • Agree a GNOME-wide process for when a project is considered “abandoned” — without alienating our many part-time, volunteer maintainers.
  • Show the world that it’s easy and fun to join the global diaspora of GNOME maintainers.

Can you think of anything else we could do to make sure that every contribution matters?

Search Joplin notes from GNOME Shell

One of my favourite discoveries of 2020 is Joplin, an open, comprehensive notebook app. I’m slowly consolidating various developer journals, Zettelkasten inspired notes, blog drafts, Pinboard bookmarks and abstract doodles into Joplin notebooks.

Now it’s there I want to search it from the GNOME Shell overview, and that’s pretty fun to implement.

It’s available from here and needs to be installed manually with Meson. Perhaps one day this can ship with Joplin itself, but there are a few issues to overcome first:

  • It’s not yet possible to open a specific note in Joplin. I suggested adding a commandline option and discovered they plan to add x-callback-url support, which will be great once there’s a design for how it should work.
  • The search provider also appears as an application in the Shell. I think a change in GNOME Shell is needed before we can hide this.

Here’s to the end of 2020. If you’re bored, here’s a compilation of unusual TV news events from the year, including (my favourite) #9, a guy playing piano to monkeys.

Calliope: Music recommendations for hackers

I started thinking about playlist generation software about 15 years ago. In that time, so much happened that I can’t possibly summarize it all here. I’ll just mention two things. Firstly, Spotify appeared, and proceeded to hire or buy most of the world’s music recommendation experts and make automatic playlists into a commodity. Secondly, I spent a lot of time iterating on a music tool I call Calliope.

Spotify or not?

Spotify’s discovery features can be a great way to find new music, but I’ve always felt like something was missing. The recommendations are opaque. We know broadly how they work, but there’s no way to know why it’s suggesting I listen to ska punk all day, or I try a podcast titled ‘Tu Inglés’, or play some 80’s alternative classics I’m already familiar with. It gets repetitive.

Some of the most original new music isn’t even available on Spotify. Most folk don’t realise that small artists have to pay a distributor to get their music to appear on streaming services like Spotify and Apple Music, a dubious investment when the return for the artist might be a cheque for $0.10 and a little exposure. No wonder that some artists use music purchase sites like Bandcamp exclusively. Of course, this means they’ll never appear in your Discover Weekly playlist.

Algorithms decide which social media posts I see, whether I can get a credit card, and how much I would pay to insure a car. Spotify’s recommendation system is another closed system like the others. But unlike credit agencies and big social networks, the world of music has some very successful repositories of open data. I’ve been saving my listen history to Last.fm since 2006. Shouldn’t I do something with it?

Introducing Calliope

Calliope is an open source tool for hackers who want to generate playlists. Its primary goals are to be a fun side project for me and to produce interesting playlists from my digital music collection. Recently it has begun fulfilling both of those goals, so I decided it’s time to share some details.

Querying my music collection with Calliope

The project consists of a set of commandline tools which operate on playlist data. You use a shell pipeline to define the data pipeline. Your local music collection is queried from Tracker or Beets. You can mix in data from Last.fm, Musicbrainz and Spotify. You can output the results as XSPF playlists in your music player. The implementation is Python, but the commandline focus means it can interact with tools in any language that parses JSON.

The goal is not to replace Spotify here. The goal is to make recommendations open and transparent. That means you’re going to see the details of how they work. My dream would be that this becomes an educational tool to help us understand more about what “algorithms” (used in the journalistic sense) actually do.

I’m developing a series of example playlist generation scripts. I’m particularly enjoying “Music I haven’t listened to in over a year” — that one requires over a year of listen history data to be useful, of course. But even the “One hour random shuffle” playlist is fun.

A breakthrough this month was the start of a constraints-based approach for selecting songs. I found a useful model in a paper from 2006 titled “Fast Generation of Optimal Music Playlists using Local Search”, and implemented a subset using the Python simpleai library. Simple things can produce great results. I’m only scratching the surface of what’s possible with this model, using constraints on the duration property to ensure songs and playlists are a suitable length. I expect to show off some more sophisticated examples in future.
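
To make that concrete, here is a minimal sketch, not Calliope’s actual code and with invented track data, of using simpleai’s local search to pick a playlist close to a one-hour target:

# Choose 12 tracks whose total duration is close to one hour, using hill
# climbing from the simpleai library. TRACKS is made-up data.
import random
from simpleai.search import SearchProblem
from simpleai.search.local import hill_climbing

TRACKS = [("Track %d" % i, random.randint(120, 420)) for i in range(200)]  # (title, seconds)
TARGET_SECONDS = 60 * 60

class PlaylistProblem(SearchProblem):
    def actions(self, state):
        # Each action swaps one slot in the playlist for an unused track.
        unused = sorted(set(range(len(TRACKS))) - set(state))
        return [(slot, new) for slot in range(len(state))
                for new in random.sample(unused, 5)]

    def result(self, state, action):
        slot, new = action
        state = list(state)
        state[slot] = new
        return tuple(state)

    def value(self, state):
        # Higher is better: penalise distance from the target duration.
        total = sum(TRACKS[i][1] for i in state)
        return -abs(total - TARGET_SECONDS)

initial = tuple(random.sample(range(len(TRACKS)), 12))
result = hill_climbing(PlaylistProblem(initial_state=initial), iterations_limit=500)
print([TRACKS[i][0] for i in result.state],
      sum(TRACKS[i][1] for i in result.state), "seconds")

Other constraints, say on artist variety or tempo, can be folded into the same value() function.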

I’m not going to talk much more about it here — if it sounds interesting, read the documentation which I’ve recently been working on, clone the source code, and ask me if there’s any questions. I’m keen to hear what ideas you have.

Beginning Rust

I have the privilege of some free time this December and was unexpectedly inspired to do the first few days of the Advent of Code challenge, thanks to a number of inspiring people including Philip Chimento, Daniel Silverstone and Ed Cragg.

The challenge can be completed in any language, but it’s a great excuse to learn something new. I have read a lot about Rust but never used it until a few days ago.

Most of my recent experience is with Python and C, and Rust feels like it has many of the best bits of both languages. I didn’t get on well with Haskell, but the things I liked about that language are also there in Rust. It’s done very well at taking the good parts of these languages and leaving out the bad parts. There’s no camelCaseBullshit, in particular.

As a C programmer, it’s a pleasure to see all of C’s invisible traps made explicit and visible in the code. Even integer overflow is a compile time error. As a Python programmer, I’m used to writing long chains of operations on iterables, and Rust allows me to do pretty much the same thing. Easy!

Rust does invent some new, unique bad parts. I wanted to use Ed’s cool Advent of Code helper crate, but somehow installing this tiny library using Cargo took up almost 900MB of disk space. This appears to be a known problem. It makes me sad that I can’t simply use Meson to build my code. I understand that Cargo’s design brings some cool features, but these are big tradeoffs to make. Still, for now I can simply avoid using 3rd party crates which is anyway a good motivation to learn to work with Rust’s standard library.

I also spent a lot of time figuring out compiler errors. Rust’s compiler errors are very good. If you compare them to C++ compiler errors, then there’s really no comparison at all. In fact, they’re so good that my expectations have increased, and paradoxically this makes me more critical! (Sometimes you have to measure success by how many complaints you get). When the compiler tells me ‘you forgot this semicolon’, part of me thinks “Well you know what I meant — you add the semicolon!”. And while some errors clearly tell you what to fix, others are still pretty cryptic. Here’s an example:

error[E0599]: no method named `product` found for struct `Vec<i64>` in the current scope
   --> day3.rs:82:34
    |
82  |       let count: i64 = tree_counts.product();
    |                                    ^^^^^^^ method not found in `Vec<i64>`
    |
    = note: the method `product` exists but the following trait bounds were not satisfied:
            `Vec<i64>: Iterator`
            which is required by `&mut Vec<i64>: Iterator`
            `[i64]: Iterator`
            which is required by `&mut [i64]: Iterator`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0599`.

What’s the problem here? If you know Rust, maybe it’s obvious that my tree_counts variable is a Vec (list), and I need to call .iter() to produce an iterator. If you’re a beginner, this isn’t a huge help. You might be tempted to call rustc --explain E0599, which will tell you that you might, for example, need to implement the .chocolate() method on your Mouth struct. This doesn’t get you any closer to knowing why you can’t iterate across a list, which is something that you’d expect to be iterable.
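
For the record, the fix is the one the trait bounds are dancing around:

// Iterate over the Vec explicitly before calling product().
let count: i64 = tree_counts.iter().product();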

Like I said, Rust is lightyears ahead of other compilers in terms of helpful error messages. However, if it’s a goal that “Rust is for students”, then there is still lots of work to do to improve them.

I know enough about software development to know that the existence of Rust is nothing short of a miracle. The Rust community are clearly amazing and deserve ongoing congratulations. I’m also impressed with Advent of Code. December is a busy time, which is why I’ve never got involved before, but if you are looking for something to do then I can recommend it!

You can see my Advent of Code repo here. It may, or may not proceed beyond day 4. It’s useful to check your completed code against some kind of ‘model’, and I’ve been using Daniel’s repo for that. Who else has some code to show?

Search Pinboard.in from GNOME Shell

Pinboard.in is a bookmarking and archival website run by Maciej Ceglowski, also a noted public speaker, Antarctic explorer and political activist. I use Pinboard as a way to close browser tabs, by pretending to myself that I’ll one day revisit the 11,000 interesting links that I’ve bookmarked.

Hoping to make better use of this expansive set of thought-provoking articles, hilarious videos and expired domain names, I wrote a minimal search provider for GNOME Shell.

If you’re happy to meson install some Python scripts, then you can use it too! The search provider installs a systemd timer unit which checks for new bookmarks every hour, and downloads them into a Whoosh search index. I chose Whoosh because it’s fun to try new search engines, and my opinion is that it’s fast, powerful, and a little heavy on the disk space usage — my bookmarks take up 7MB in a JSON file but 17MB in the Whoosh index.
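
If you haven’t seen Whoosh before, the core of an indexer like this is only a few lines. Here is a simplified sketch with an invented schema, not the exact code from the repo:

# Build a Whoosh index of bookmarks and search it. Field names are
# illustrative rather than the search provider's real schema.
import os
from whoosh import index
from whoosh.fields import ID, TEXT, Schema
from whoosh.qparser import MultifieldParser

schema = Schema(url=ID(stored=True, unique=True),
                title=TEXT(stored=True),
                description=TEXT,
                tags=TEXT(stored=True))

os.makedirs("bookmarks-index", exist_ok=True)
ix = index.create_in("bookmarks-index", schema)

writer = ix.writer()
writer.add_document(url="https://pinboard.in",
                    title="Pinboard: social bookmarking for introverts",
                    description="Bookmarking and archival site",
                    tags="bookmarks archive")
writer.commit()

# Query across several fields, the way a search provider would.
with ix.searcher() as searcher:
    query = MultifieldParser(["title", "description", "tags"], ix.schema).parse("bookmarks")
    for hit in searcher.search(query, limit=5):
        print(hit["title"], hit["url"])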

A secondary purpose of this effort is to show how easy it is to make search providers. It took me about 8 hours to make this. Feel free to copy and paste for your own search providers, and use the CLI test tool in my desktop-search repo when testing them.
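
For anyone who hasn’t looked at how these work: a search provider is a D-Bus service implementing the org.gnome.Shell.SearchProvider2 interface, plus a small keyfile that tells the Shell where to find it. Something along these lines, with illustrative names:

# /usr/share/gnome-shell/search-providers/org.example.PinboardSearch.search-provider.ini
[Shell Search Provider]
DesktopId=org.example.PinboardSearch.desktop
BusName=org.example.PinboardSearch
ObjectPath=/org/example/PinboardSearch
Version=2

The Shell then calls methods like GetInitialResultSet, GetResultMetas and ActivateResult on that bus name whenever you type in the overview.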

What other services would you like to see integrated into GNOME Shell as search providers? Next on my list might be notes from Joplin.

Tracker 3.0: Where do we go from here?

This is part 5 of a series. Part 1 is here, part 2 is here, part 3 is here and part 4 is here.

The question of whether machines can think is about as relevant as the question of whether submarines can swim.

Edsger W. Dijkstra

In the previous post we looked at the Semantic Web, the semantic desktop, and how it can be that after 20 years of development, most desktop search engines still provide little more than keyword matching in your files.

History shows that the majority of users aren’t excited by star ratings, manual tagging or inference. App developers mostly don’t want to converge on a single database, especially if the first step is to relinquish control of their database schema.

So where do we go from here? Let’s first take a look at what happened in search outside the desktop world in the last 20 years.

Twenty Years in Online Search

I’m sadly reminded of this quote:

Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web.

Clay Shirky

The companies listed above spend billions of US dollars per year on research, so it’d be surprising if they didn’t have the edge over those of us who prioritize open technologies. We need to accept that we’re unlikely to out-innovate them. But we can learn from them.

The way people interact with a search engine is remarkably similar to 20 years ago. We type our thoughts into a text box. Increasingly people speak a question to a voice assistant instead of typing it, but internally it’s processed as text. There are exceptions to this, such as Reverse Image Search and Shazam, and if you work for the Government you have advanced and dangerous tools to search for people. But we’re no nearer to Minority Report style interfaces, despite many of the film’s tech predictions coming true, and Dynamicland is still a prototype.

The biggest change therefore is how the search engines and assistants process your query once they receive it.

Dynamicland: A fascinating prototype for the future of computing

Behind the <input type="text">

Text is still our primary search interface, but these days there is a lot more than keyword matching going on when you type something into a search engine, or speak into a voice assistant.

Operators and query expansion

Google documents a few ‘common search techniques‘ such as the OR operator, or numeric ranges like €50-€100, and they provide many more. Other engines provide similar features; here’s DuckDuckGo’s list.

These tools are useful but I rarely see them used by non-experts. However everyone benefits from query expansion, which includes word stemming, spelling correction, automated translation and more.

Natural language processing

In 2013, Google announced the “Hummingbird” update, described as the biggest change to its search algorithm in its history. Google claimed that it involves “paying more attention to all the words in the sentence”, and used the epithet “things, not strings”. SEO site moz.com gives an interesting overview, and uses the term Google seem to avoid — Semantic Search.

How does it work? We can get some idea by studying a service Google provides for developers called Dialogflow, described as “a natural language understanding platform”, and presumably built out of the same pieces as Hummingbird. The most interesting part is the intent matching engine, which combines machine learning and manually-specified rules to convert “How is the weather in Cambridge” into two values, one representing the intent of “query weather” and the other representing the location as “Cambridge, UK.” That is, if you’re British. If you’re based elsewhere, you’ll get results for one of the world’s many other Cambridges. We know Google collects data about us, which is a controversial topic, but it is also helpful for translating subjective phrases into objective data, such as which Cambridge I’m referring to.

As we enter the age of voice assistants, natural language processing becomes a bigger and bigger topic. As well as Google’s Dialogflow, you can use Microsoft’s LUIS or Amazon’s Comprehend, all of which are proprietary cloud services. Mycroft maintain some open source intent parsing tools, and perhaps there are more — let me know!

Example of intent analysis from luis.ai

Instant answers and linked data

The language processing in the Hummingbird update brought to the forefront a feature launched a year earlier called Knowledge Graph, which provided results as structured data rather than blue hyperlinks. It wasn’t the first mainstream semantic search engine — maybe Wolfram Alpha gets that crown — but it was a big change.

Wolfram Alpha claims to work with manually curated data. To a point, so does Google’s Knowledge Graph. Google’s own SEO guide asks us to mark up our websites with JSON-LD to provide ‘rich search results’.

DuckDuckGo was also early to the ‘instant answers’ party. Interestingly their support was developed largely in the open. You can see a list of their instant answers here, and see the code on GitHub.

Back to the Desktop Future

Web search involves querying an index of every site on the Web, while desktop search involves searching through just one computer. However, it’s important to see search in context. Nearly every desktop user interacts with a web search engine too, and this informs our expectations.

Unsurprisingly, I’m going to recommend that GNOME continues with Tracker as its search engine and that more projects try it out. I maintain a Wishlist for Tracker (which used to be a roadmap, but Jeff inspired me to change it ;). Our wishlist goals are mostly for Tracker to get better at doing the things it already does. Our main focus has to be stability, of course. I want to start a performance testing initiative during the GNOME 40 cycle. This would help us test out the speedups in Tracker Miners master branch, and catch some extractor bugs in the process. Debian/Ubuntu packaging and finishing the GNOME Photos port are also high on the list.

When we look at where desktop search may go in the next 10 years, Tracker itself may not play the biggest role. The possibilities for search are a lot deeper. Six months ago I asked on Reddit and Discourse for some “crazy ideas” about the future of search. I got plenty of responses but nothing too crazy. So here’s my own attempt.

Automated Tagging

Maybe because I grew up with del.icio.us, I think tagging is the answer to many problems. But I’ve realised that manually tagging your own photos and music is a long process that even I’m not very interested in doing.

A forum commenter noted that AI-based image classification is now practical on desktop systems, using onnxruntime. The ONNX Model Zoo contains various neural nets able to classify an image into 1,000 categories, which we could turn into tags. Another option is the FANN library if suitable models can be found or trained.
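
As a rough illustration of how little code the inference step needs, here is a sketch of classifying an image with onnxruntime. The model file, input name, input size and preprocessing are assumptions that depend on which Model Zoo network you pick; real models usually also want specific mean/std normalisation.

import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("squeezenet1.1.onnx")        # hypothetical model file

image = Image.open("photo.jpg").convert("RGB").resize((224, 224))
data = np.asarray(image, dtype=np.float32).transpose(2, 0, 1) / 255.0  # HWC -> CHW
data = np.expand_dims(data, axis=0)                          # add a batch dimension

input_name = session.get_inputs()[0].name
scores = session.run(None, {input_name: data})[0][0]
print("Top class index:", int(np.argmax(scores)))            # map this to a label, then a tag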

This would be great to experiment with and could one day be a part of apps like GNOME Photos. In fact, Tobias already proposed it here.

Time-based Queries

The Zeitgeist engine hasn’t seen much development recently but it’s still around, and crvi recently rejuvenated the Activity Journal app to remind us what Zeitgeist can do.

GNOME Activity Journal, from discourse.gnome.org

We talked about integrating Zeitgeist with Tracker back in 2011, although it got stuck at the nitpicking stage. With advancements in Tracker 3.0 this would be easier to do.

The coolest way to integrate this would be via a natural language query engine that would allow searches like "documents yesterday" or "photos from 2017". A more robust way would be to use an operator like 'documents from:yesterday'. Either way, at some layer we would have to integrate data from Zeitgeist and Tracker Miner FS and produce a suitable database query.
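
On the Tracker side, the generated query might look roughly like the following. Treat the ontology terms as approximate, I am writing them from memory rather than checking the Nepomuk definitions:

# "Documents modified since yesterday", more or less.
SELECT ?url ?modified WHERE {
  ?doc a nfo:Document ;
       nie:isStoredAs ?file .
  ?file nie:url ?url ;
        nfo:fileLastModified ?modified .
  FILTER (?modified >= "2020-10-01T00:00:00Z"^^xsd:dateTime)
}
ORDER BY DESC(?modified)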

Query Operators

Users are currently given a basic option — match keywords in a file via Nautilus or the Shell — and an advanced option of opening the Terminal to write a SPARQL query.

We should provide something in between. I once saw a claim that Tracker isn’t as powerful as Recoll as it doesn’t support operators. The problem is actually that we expose a less powerful interface: support for operators would be relatively simple to implement, and could be done incrementally — a good learning project for a search engineer.

Suggestions

The full-text search engine in Tracker SPARQL does some basic query expansion in the form of stemming (turning ‘drawing’ into ‘draw’, for example) and removing stop words like ‘the’ and ‘a’. Could it become smarter?

We are used to autocomplete suggestions these days. This requires the user interface to ask the search backend to suggest terms matching a prefix, for example when I type mon the backend returns money, monkey, and any other terms that were found in my documents. This is supported by other engines like Solr. The SQLite FTS5 engine used in Tracker supports prefix matching, so it would be possible to query the list of keywords matching a prefix. Performance testing would be vital here as it would increase the cost of running a search.
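
Here is a tiny sketch of what FTS5 prefix matching looks like, using Python’s sqlite3 module and assuming your SQLite build has FTS5 enabled (the one bundled with Python usually does). This is obviously not Tracker’s real schema:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE VIRTUAL TABLE docs USING fts5(title, body);
    INSERT INTO docs VALUES ('Money',  'invoice for the money I owe');
    INSERT INTO docs VALUES ('Monkey', 'photos of a monkey at the zoo');
    INSERT INTO docs VALUES ('Garden', 'notes about the garden');
    -- fts5vocab exposes the indexed terms themselves, handy for suggestions.
    CREATE VIRTUAL TABLE docs_vocab USING fts5vocab('docs', 'row');
""")

print(db.execute("SELECT title FROM docs WHERE docs MATCH 'mon*'").fetchall())
print(db.execute("SELECT term FROM docs_vocab WHERE term GLOB 'mon*'").fetchall())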

Autocomplete suggestions from Google for 'gnome is ...'

Google takes autocomplete a step further by predicting queries based on other people’s search history and page contents. This makes less sense on the desktop. We could make predictions based on your own search history, but this requires us to record search history which brings in a responsibility to keep it private and secure. The most secure data is the data you don’t have.

Typo tolerance is also becoming the norm, and is a selling point of the Typesense search engine. This requires extra work to take a term like gnomr and create a list of known terms (e.g. gnome) before running the search. SQLite provides a ‘spellfix1’ module that should make this possible.

Suggestion engines can do lots more cool things, like recommending an album to listen to or a website to visit. This is getting beyond the scope of search, but it can reuse some of the data collected to power search.

The Cloud

When Google Desktop joined the list of discontinued Google products, the rationale was this:

People now have instant access to their data, whether online or offline. As this was the goal of Google Desktop, the product will be discontinued.

The life of a desktop developer is much simpler if we assume that all the user’s content lives on the Web. There are several reasons not to accept that as the status quo, though! There is a loss of privacy and agency that comes from giving your data to a 3rd party. Google no longer read your email for advertising purposes, but you still risk not being able to access important data when you need it.

Transferring data between cloud services is something we have tried to solve before: the Conduit project was a notable attempt, and later the GNOME Online Miners, and it’s still a goal today. The biggest blocker is that most Cloud services have a disincentive to let users access their own data via 3rd party APIs. For freemium, ad-supported services it doesn’t make sense to let users use your service while sidestepping the adverts that fund it.

Conduit, an early data sync app. Screenshot from Wikipedia

Another problem is that maintaining a local index of remote content is risky. It may result in unwanted network traffic — what if an indexer updates the index while the computer is tethered to an expensive 3G connection? It may result in huge local caches of documents the user doesn’t actually care about. It’s difficult to get defaults that work for everyone.

My suggestion is to make online sync features as explicit and opt-in as possible. We should avoid ‘invisible design’ here. If a user has content in Dropbox that they want available locally, they will search online for “how to search dropbox content in gnome.” We don’t need to enable it by default. If they want to search playlists from Spotify and Youtube locally, we can add an interface to do this in GNOME Music.

Another challenge is testing our integrations. It’s hard to automatically test code that runs against a proprietary web service, and at the moment we just don’t bother.

It’s early days for the SOLID project but their aim is to open up data storage and login across web services. I’d love it if this project makes progress.

Mashups, The Revenge

When I first got involved in Tracker, I was doing the classic thing of implementing a music player app, as if the world didn’t have enough. I hoped Tracker could one day pull data from Musicbrainz to correct the metadata in my local media files. Ten years ago we might have called this a “mashup”.

The problem with Tracker Miner FS correcting tags in the background is that it’s not always clear what the correct answer is. Later I discovered Beets, a dedicated tool that does exactly what I want. It’s an interactive CLI tool for managing a music database. GNOME Music has work-in-progress support for using metadata from Musicbrainz, and Picard is still available too.

Tag editor in GNOME Music with suggestions from MusicBrainz, from app mockups

I think we’ll see more such organisation tools, focused on being semi-automated rather than fully automated.

GNOME Shell search

GNOME Shell’s search is super cool. Did you know you can do maths and unit conversions, via the Calculator search provider? Did you know you can extend it by installing apps that provide other search providers? You probably knew it’s completely private by default. Your searches don’t leave the machine, unlike on Mac OS. That’s why the Shell can’t integrate Web search results — we don’t want to send all your local searches to a 3rd party.

GNOME Shell search result for "52 + 12 miles in km"

We can integrate more offline data sources there. I recently installed the Quick Lookup dictionary app which provides Wiktionary results. It doesn’t integrate with local search, but it could do, by providing an optional bundle of common Wiktionary definitions to download. Endless OS already does this with Wikipedia.

The DBus-federated API of Shell search has one main drawback — a search can wake up every app on your system. If a majority of apps store data with libtracker-sparql, the shell could use libtracker-sparql to query these databases directly and save the overhead of spawning each app. The big advantage of the federated approach, though, is that search providers can store data in the most suitable method for that data. I hope we continue to provide that option.

Natural Language

Could search get smarter still? Could I type "Show me my financial records from last year" and get a useful answer?

Natural language queries are here already; they just haven’t made it to the desktop outside of Mac OS. The best intent-parsing engines are provided as Software as a Service, which isn’t an option for GNOME to use. Time is on our side, though. The Mycroft AI project maintains two open source intent parsers, Adapt and Padatious, and while they can’t yet compete with the proprietary services, they are only getting better.

What we don’t want to do is re-invent all of this technology ourselves. I would keep the natural language processing separate from Tracker SPARQL, so we can integrate today’s technology, which is mostly Python and JavaScript. A more likely goal would be a new “GNOME Assistant” app that could respond to natural language voice and text queries. When stable, this could integrate closely with the Shell in order to spawn apps and control settings. Mycroft AI already integrates with KDE. Why not GNOME?

This sounds like an ambitious goal, and it is, so let me propose a smaller step forwards. Mycroft allows developers to write Skills, as Alexa does. Can we provide skills that integrate content from the desktop? Why should a smart speaker play music from Spotify but not from my hard drive, after all?
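To make the idea concrete, here is a rough sketch of what such a skill might look like, assuming Mycroft’s Python skill API; the intent file name and the search_local_music() helper are made up for illustration:

    from mycroft import MycroftSkill, intent_file_handler

    def search_local_music(artist):
        """Hypothetical helper: would query the local index (for example via
        Tracker SPARQL) and return a list of matching file URLs."""
        return []

    class LocalMusicSkill(MycroftSkill):
        """Answer 'play something by <artist>' from files on this machine."""

        @intent_file_handler("play.local.music.intent")
        def handle_play_local_music(self, message):
            artist = message.data.get("artist")
            tracks = search_local_music(artist)
            if tracks:
                self.speak("Playing {} from your music library".format(artist))
                # Hand the first match over to a media player here.
            else:
                self.speak("I couldn't find any local music by {}".format(artist))

    def create_skill():
        # Mycroft loads skills through this module-level factory function.
        return LocalMusicSkill()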

End to End Testing

Tracker SPARQL and Tracker Miners have automated tests to spot regressions, but we do very little testing from a user perspective. It requires a design for how search should really work in all of its corner cases — what happens if I search for "5", or "title:Blog", "12 monkeys", or "albums from 2015", or "documents I edited yesterday"?

As more complex features are added, testing becomes increasingly important, not just as “unit tests” but as whole-system integration tests, with realistic sets of data. We have the first step — live VM images — and now we need the next one: testing infrastructure, and tests.
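To make that concrete, here is a sketch of what such tests might look like; pytest is an arbitrary choice, and desktop_search() is a hypothetical harness entry point standing in for whatever drives Shell search inside a test VM seeded with realistic data:

    import pytest

    def desktop_search(query):
        """Hypothetical harness entry point: a real implementation would seed a
        test VM with realistic content, drive GNOME Shell search (or the search
        provider D-Bus interfaces) and return the merged result list."""
        return []

    CORNER_CASES = [
        "5",
        "title:Blog",
        "12 monkeys",
        "albums from 2015",
        "documents I edited yesterday",
    ]

    @pytest.mark.parametrize("query", CORNER_CASES)
    def test_search_returns_cleanly(query):
        # We don't yet have a design saying what the *right* results are, but
        # every query should at least complete and return a result list.
        results = desktop_search(query)
        assert isinstance(results, list)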

Funding

Funding is often key to big changes like the ones I’m going to describe below. A lot of development in GNOME happens due to commercial and charitable sponsorship, where contributors develop and maintain GNOME as part of their employment. Contribution also happens through volunteer effort, as part of research funding (see NLnet’s call for “Next Generation Search and Discovery” proposals), or sponsored efforts like GSoC and Outreachy. This is often on a smaller scale, though.

I want this series to highlight that search is a vital part of user experience and an important area to invest design and engineering effort. To answer the question “When can we have all this?”… you’re probably going to have to follow the money.

In conclusion…

It was an unusual situation: I had a free summer, but a pandemic stopped me from going very far. Between river swims, beach trips and bike rides I still had a lot of free time for hacking – I counted more than 160 hours donated to the Tracker 3 effort in July and August.

I’m now looking for a new job in the software world. I’ll continue as a maintainer of Tracker and I promise to review your patches, but the future might come from somewhere else.

Tracker 3.0: The Good and the Bad

This is part 4 of a series. Part 1 is here, part 2 is here, and part 3 is here. Come back next week for my thoughts on the future of desktop search.

I thought this was going to be the last blog post about Tracker 3.0, but it got rather long and I decided to turn this appraisal of the project and its design into a post of its own. So, get ready for some praise and some criticism!

To criticise the design and implementation of a software project we first need to understand the project requirements. What is it trying to do? I’ve written some goals and non-goals for the Tracker search engine, and proposed them for inclusion in the README. Now I’m going to compare the goals with the reality. I am of course a biased observer, and I welcome you to make your own assessment, but let’s dive in!

Goal 1. Provide real-time searching and browsing of desktop content.

Tracker achieves this with a background indexing service and a fast SQLite database. Queries are fast and SQLite works well.

For apps that want to use search, we provide a flexible API but not a simple one. You can see this in use in Nautilus, and yes, that’s 3 screenfuls of code.
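For a flavour of the API, here is a minimal sketch, assuming the Tracker 3 client library via GObject Introspection, that asks the filesystem miner for files whose URL contains a word. A production-quality query like the one in Nautilus also handles full-text matching, ranking, MIME types and portal access, which is where the three screenfuls come from:

    import gi
    gi.require_version("Tracker", "3.0")
    from gi.repository import Tracker

    # Connect to the index maintained by tracker-miner-fs over D-Bus.
    conn = Tracker.SparqlConnection.bus_new(
        "org.freedesktop.Tracker3.Miner.Files", None, None)

    cursor = conn.query(
        "SELECT ?url WHERE { "
        "  ?f a nfo:FileDataObject ; nie:url ?url . "
        "  FILTER (CONTAINS (LCASE (?url), 'report')) "
        "}", None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])  # get_string() returns (value, length)
    cursor.close()
    conn.close()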

I have seen complaints about slow queries, which can happen if the database becomes enormous (multiple GBs) or becomes corrupted, but these are caused by things mostly out of our control — configuration mistakes, filesystem issues, or SQLite bugs. Where Tracker generates performance complaints, it’s almost always related to the background indexing. More on that later.

Searching via GNOME Shell is not quite as fast as Tracker itself, as it relies on spawning many app processes to return results. If more apps start storing data using Tracker SPARQL then we may be able to optimise this in future, by having one process that queries all available databases.

Goal 2. Provide searchable data storage for desktop apps.

This was always a goal, but pre-3.0 it was never widely adopted. I wrote about the drawbacks of the old ‘central data store’ model in the first post of the series. A big blocker for apps was that a central data store means a central database schema. It discourages innovation if adding a feature to your app requires first designing it, then negotiating a schema change with the Ontology Overlords, and then trying it out to see how it works.

Tracker 3.0 brings independent app databases, so our story has improved a lot, but would you store app data with Tracker SPARQL? Why not use SQLite directly?

It’s true that SQLite is a great storage solution which you should definitely use if it suits you. Here’s what Tracker SPARQL provides that SQLite does not (a minimal sketch of an app-local database follows the list):

  • Implicitly handled database migrations. These are good for simple cases but come with some restrictions; in particular, any migration that may cause data loss is prohibited.
  • The ability to publish app data as a D-Bus endpoint. This is a speculative benefit for the moment, but may allow richer search results and more efficient Shell searches in future.
  • Similar read/write performance to raw SQLite. The SPARQL translation overhead isn’t huge and can often be mitigated altogether with prepared queries.
  • Noticeably higher disk space usage compared to SQLite. See below for more about why this happens.
  • A GObject-based API that is simpler than SQLite’s own.
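To show what this looks like in practice, here is a minimal sketch of an app-local database, assuming the Tracker 3 API (the connection flags, the bundled Nepomuk ontology helper) and an example path; exact signatures may differ slightly between versions:

    import gi
    gi.require_version("Tracker", "3.0")
    from gi.repository import Gio, Tracker

    # Open (or create) a private database; the path here is just an example.
    store = Gio.File.new_for_path("/tmp/example-app/db")
    conn = Tracker.SparqlConnection.new(
        Tracker.SparqlConnectionFlags.NONE,
        store,
        Tracker.sparql_get_ontology_nepomuk(),  # use the stock Nepomuk ontology
        None)

    # Schema changes are migrated implicitly when the ontology changes.
    conn.update(
        'INSERT DATA { <urn:example:note1> a nie:InformationElement ; '
        'nie:title "Shopping list" }', None)

    cursor = conn.query("SELECT ?title WHERE { ?n nie:title ?title }", None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])
    cursor.close()
    conn.close()

Publishing that same database on D-Bus for other processes to query is then, roughly speaking, a few more lines with TrackerEndpointDBus.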

Goal 3. Allow full-text search within common document types.

To do full-text search we need to load data from different file formats. The tracker-extract-3 miner is responsible for this, and you can see the supported formats in the code. It’s rare that I see bugs about this part of the code aside from obscure crashes; in fact, recently we’ve removed more formats than we added (goodbye DVI, MIDI, and source code formats).

Tracker’s full-text search capability is built on the SQLite FTS5 module. We currently don’t expose the full power of the FTS5 query language, only supporting basic keyword matches. And FTS5 is not particularly powerful compared to larger-scale engines like Lucene. However, we are in line with the state of the art in other desktop search engines.
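As an illustration of how an app uses this, full-text matching is exposed through the fts:match pseudo-property, here combined with a prepared statement; the ontology shape around files is simplified, so treat the query as a sketch:

    import gi
    gi.require_version("Tracker", "3.0")
    from gi.repository import Tracker

    conn = Tracker.SparqlConnection.bus_new(
        "org.freedesktop.Tracker3.Miner.Files", None, None)

    # Prepared statement with a ~term placeholder; fts:match does the FTS5 work.
    stmt = conn.query_statement(
        "SELECT ?url WHERE { "
        "  ?content fts:match ~term ; nie:isStoredAs ?file . "
        "  ?file nie:url ?url . "
        "}", None)
    stmt.bind_string("term", "invoice")
    cursor = stmt.execute(None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])
    cursor.close()
    conn.close()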

Goal 4. Allow advanced queries using a standard query language.

It’s fundamental to the current design of Tracker that we use SPARQL to query and update the database. The nice thing about that is that it’s a well-maintained standard, reasonably intuitive, with tutorials available online. It even supports Federated Queries out of the box, something neither SQL nor GraphQL manage.
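For a taste of federation, here is what the standard SPARQL 1.1 SERVICE clause looks like, pulling rows from Wikidata’s public endpoint; whether this runs inside a given Tracker build depends on remote-service support, so treat it as an illustration of the language rather than a tested Tracker feature:

    # This query string could be passed to TrackerSparqlConnection.query();
    # it is shown here purely to illustrate the SERVICE clause.
    FEDERATED_QUERY = """
    PREFIX wd:   <http://www.wikidata.org/entity/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?label WHERE {
      SERVICE <https://query.wikidata.org/sparql> {
        wd:Q42 rdfs:label ?label .          # Q42 is Douglas Adams
        FILTER (LANG (?label) = "en")
      }
    }
    """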

However, SPARQL is not the easiest language to work with — there are many ways to typo a query so that it produces no results without telling you why. It’s also verbose. When porting GNOME Photos I ended up creating a templating system and some page-long queries.

The Tracker Miners don’t have special access to the database — they use SPARQL like everyone else, and we could in principle replace the current database engine with one of the many other SPARQL databases in existence, if there were any suitable alternatives, which there aren’t. SPARQL is quite a high barrier for a database engine to support and those that do support it are either too heavy for desktop/embedded, like JENA and Virtuoso, or unfinished / abandoned like 4store and Oxigraph. In the medium term we are likely to stick with our SQLite + SPARQL translation layer approach.

Removing the “advanced query language” requirement would allow switching to a different query API, but that switch would require rewriting Tracker Miner FS and every app that uses Tracker, and it wouldn’t open up many more options anyway. Xapian is a possibility, but its query engine is much more limited than our current one, and it has its own issues. The state of the art in open-source web search is the Lucene search engine, but it’s written in Java and, for better or worse, there would be controversy if GNOME came to depend on the Java runtime. A C++ port of Lucene exists, but is sufficiently abandoned as to still be hosted on SourceForge.net.

Goal 5. Be secure and private by default.

Despite the name, Tracker’s privacy credentials are strong. Your data never leaves your machine. Tracker’s index is as private as the rest of your home directory.

Tracker Miner FS scans files in the background, including inside the Downloads folder, which means a bug in tracker-extract or tracker-miner-fs could be exploited by an attacker who tricks you into downloading a file that triggers the bug. The risk is highest in tracker-extract, which actually opens the files and parses them. To mitigate this, tracker-extract is sandboxed using SECCOMP, preventing it from making network connections or reading files other than the one it’s processing. SECCOMP isn’t an ideal solution and causes occasional breakages, but it’s an important safeguard.

Of course there is also content-based sandboxing provided for Flatpak apps, which I wrote about in Tracker 3.0: What’s New?

Goal 6. Be efficient enough for desktop and mobile use.

Tracker Miner FS does well in profiling and is being optimised even more. You will find many reports online about “Tracker high CPU usage”, however. It’s a common enough question that we mention it in the FAQ. So is Tracker slow or not?

In almost all cases, reports of high CPU usage are due to internal bugs rather than problems in the design. And these bugs can be terrible! It’s disappointing to hear about Tracker Miners causing people’s laptop fans to turn on for hours. Such issues are tricky to reproduce and have varied causes — bugs in the complex miner-fs/extractor code, bugs in dependencies, or misguided attempts to index huge dumps of text like source code. I’ve yet to see one that indicates a fundamental design flaw, so for now the solution is to rely on users who experience these problems taking the time to help diagnose, reproduce and fix them.

As for disk space, I am not happy about the amount of space the index can consume, but it’s part of a speed/size tradeoff we made which is currently balanced towards speed. An empty Tracker SPARQL database can start at 3.2MB of disk space. We store the database schema in the database as RDF data, which is where most of this initial overhead comes from. We also use a data storage structure that’s optimised for RDF querying and serialisation, and it’s not the most efficient way to store data in SQLite.

Goal 7. Be maintainable by a small team.

This is the last item on the list but it should be the first, because an unmaintainable project will inevitably meet a sad end.

The bus factor of Tracker is a real concern: the bulk of the code is maintained by a very small number of people.

There are around 30,000 lines of code in libtracker-sparql, mostly devoted to translating SPARQL to SQL. Carlos has focused a lot on rewriting this and it’s impressively clean — see the SPARQL parser in particular — but debugging it can be tricky as it generates very long, hard to read SQL queries. It has fairly good test coverage, but more tests would be great.

The Tracker Miner FS code is also complex with about 15,000 lines of code, and it is written with a combination of event callbacks, signals and virtual functions that I like to call “object-oriented spaghetti code”. I tried to graph the flow of execution once and gave up, so instead I spend an hour or more re-discovering how it works every time I do some work there. Deferring to the main loop as much as possible was important in the 2000s when single-core CPUs were still common and a long-running task in a daemon process could cause freezes in the graphical desktop. Now it just overcomplicates the code.

The multiple layers of inheritance are due to an unrealised goal of creating different kinds of content crawlers on top of the existing miner-fs code. In theory a TrackerMinerFS could crawl anything that can be represented as a GFile; in practice this has never been done. GNOME Online Miners do their own thing, and indeed online resources often don’t have a filesystem-like structure. In Tracker 3.0 we made libtracker-miner private, which lets us refactor it without worrying about API stability, and makes it even less likely that anyone will reuse it. Perhaps it’s time to simplify this part of the code.

I’ve tried to focus the majority of my energy on making the project more maintainable, something I wrote about already. In the Bugzilla days it was painful to review and test merge requests, but now that we have GitLab and automated testing, most merge requests are simple and fun to review. And test coverage?

Test coverage badges for Tracker and Tracker Miners

The situation is OK but there is a long way to go.

Is Tracker the right answer?

A healthy project needs a stream of new contributors. I don’t currently see this happening.

One issue is that Tracker is not widely used outside GNOME. I try to maintain a list of apps and platforms using Tracker which illustrates this. (Please tell me if there’s something missing from the list!). That said, you can never be sure if more widespread use would help spread the work of maintenance, or simply bring more bug reports.

I write these blog posts in the hope that telling the reality of search in GNOME will bring some more understanding and involvement. On screen, the search interface is a small box, which makes it seem like the backend might be small and simple too — trust me that it isn’t.

It’s my opinion that Tracker remains the most suitable option for powering search in GNOME and further afield. (I’d be happy to be proved wrong, if the result is a better desktop search experience.) The main issue is fixing the last of the bugs that cause high resource usage. Look out soon for a testing initiative, in which we’ll try out recent speed improvements and try to catch and reproduce any remaining performance issues.

Come back next week for the final post, my thoughts on the next ten years of desktop search.