The GSoC 2021 cohort has just been announced. There’s a fantastic list of organisations involved, including GNOME, and I’m happy that this year two of those projects will be based around Tracker.
We were lucky to have several promising candidates. I want to shout out Nitin in particular for getting really involved with Tracker and making some solid contributions too. I want to remind all GSoC applicants of two things. Firstly, a track record of high-quality open source contributions is very valuable and always an advantage when applying for jobs and internships, including next year’s GSoC 🙂 Secondly, if five people propose the same project idea, only one can be chosen, but if five different project ideas arrive then we may be able to choose two or even three of them.
I also want to highlight the great work Daniele Nicolodi has been doing recently on the database side of Tracker. If you want a SPARQL 1.1 database and don’t want to go Enterprise™ Scale, your options are surprisingly limited, and one goal of the Tracker 3 work was to make libtracker-sparql into a standalone database option. Daniele has moved this forward, already getting it running on Mac OS X and cleaning up a number of neglected internal codepaths.
I hope the increased involvement shows our developer experience improvements are starting to pay dividends. More eyes on the code that powers search in GNOME is always a good thing.
One of my favourite discoveries of 2020 is Joplin, an open, comprehensive notebook app. I’m slowly consolidating various developer journals, Zettelkasten inspired notes, blog drafts, Pinboard bookmarks and abstract doodles into Joplin notebooks.
Now it’s there I want to search it from the GNOME Shell overview, and that’s pretty fun to implement.
It’s available from here and needs to be installed manually with Meson. Perhaps one day this can ship with Joplin itself, but there are a few issues to overcome first:
It’s not yet possible to open a specific note in Joplin. I suggested adding a commandline option and discovered they plan to add x-callback-url support, which will be great once there’s a design for how it should work.
The search provider also appears as an application in the Shell. I think a change in GNOME Shell is needed before we can hide this.
Hoping to make better use of this expansive set of thought-provoking articles, hilarious videos and expired domain names, I wrote a minimal search provider for GNOME Shell.
If you’re happy to meson install some Python scripts, then you can use it too! The search provider installs a systemd timer unit which checks for new bookmarks every hour, and downloads them into a Whoosh search index. I chose Whoosh because it’s fun to try new search engines, and my opinion is that it’s fast, powerful, and a little heavy on the disk space usage — my bookmarks take up 7MB in a JSON file but 17MB in the Whoosh index.
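For the curious, the core of it is only a handful of lines. Here is a minimal sketch of the indexing and searching side using Whoosh; the field names, paths and sample bookmark are simplified stand-ins rather than what the real provider uses.

    # Minimal sketch: index Pinboard-style bookmarks with Whoosh, then search them.
    import os
    from whoosh import index
    from whoosh.fields import Schema, ID, TEXT
    from whoosh.qparser import MultifieldParser

    schema = Schema(url=ID(stored=True, unique=True),
                    title=TEXT(stored=True),
                    tags=TEXT(stored=True))

    index_dir = os.path.expanduser("~/.cache/pinboard-search/index")
    os.makedirs(index_dir, exist_ok=True)
    ix = (index.open_dir(index_dir) if index.exists_in(index_dir)
          else index.create_in(index_dir, schema))

    # In the real provider this data comes from the Pinboard API, on a timer.
    writer = ix.writer()
    writer.update_document(url="https://example.com/",
                           title="An expired domain, probably",
                           tags="humour archive")
    writer.commit()

    with ix.searcher() as searcher:
        query = MultifieldParser(["title", "tags"], ix.schema).parse("archive")
        for hit in searcher.search(query, limit=5):
            print(hit["title"], hit["url"])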
A secondary purpose of this effort is to show how easy it is to make search providers. It took me about 8 hours to make this. Feel free to copy and paste for your own search providers, and use the CLI test tool in my desktop-search repo when testing them.
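For reference, GNOME Shell discovers a provider through a small key file installed under /usr/share/gnome-shell/search-providers/, pointing at a D-Bus name that implements the org.gnome.Shell.SearchProvider2 interface. It looks roughly like this, with hypothetical names:

    [Shell Search Provider]
    DesktopId=com.example.PinboardSearch.desktop
    BusName=com.example.PinboardSearch
    ObjectPath=/com/example/PinboardSearch
    Version=2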
What other services would you like to see integrated into GNOME Shell as search providers? Next on my list might be notes from Joplin.
In the previous post we looked at the Semantic Web, the semantic desktop, and how it can be that after 20 years of development, most desktop search engines still provide little more than keyword matching in your files.
History shows that the majority of users aren’t excited by star ratings, manual tagging or inference. App developers mostly don’t want to converge on a single database, especially if the first step is to relinquish control of their database schema.
So where do we go from here? Let’s first take a look at what happened in search outside the desktop world in the last 20 years.
The companies listed above spend billions of US dollars per year on research, so it’d be surprising if they didn’t have the edge over those of us who prioritize open technologies. We need to accept that we’re unlikely to out-innovate them. But we can learn from them.
The way people interact with a search engine is remarkably similar to 20 years ago. We type our thoughts into a text box. Increasingly people speak a question to a voice assistant instead of typing it, but internally it’s processed as text. There are exceptions to this, such as Reverse Image Search and Shazam, and if you work for the Government you have advanced and dangerous tools to search for people. But we’re no nearer to Minority Report style interfaces, despite many of the film’s tech predictions coming true, and Dynamicland is still a prototype.
The biggest change therefore is how the search engines and assistants process your query once they receive it.
Dynamicland: A fascinating prototype for the future of computing
Behind the <input type="text">
Text is still our primary search interface, but these days there is a lot more than keyword matching going on when you type something into a search engine, or speak into a voice assistant.
These tools are useful but I rarely see them used by non-experts. However, everyone benefits from query expansion, which includes word stemming, spelling correction, automated translation and more.
Natural language processing
In 2013, Google announced the “Hummingbird” update, described as the biggest change to its search algorithm in its history. Google claimed that it involves “paying more attention to all the words in the sentence”, and used the epithet “things, not strings”. SEO site moz.com gives an interesting overview, and uses the term Google seem to avoid — Semantic Search.
How does it work? We can get some idea by studying a service Google provides for developers called Dialogflow, described as “a natural language understanding platform”, and presumably built out of the same pieces as Hummingbird. The most interesting part is the intent matching engine, which combines machine learning and manually-specified rules to convert “How is the weather in Cambridge” into two values, one representing the intent of “query weather” and the other representing the location as “Cambridge, UK.” That is, if you’re British. If you’re based elsewhere, you’ll get results for one of the world’s many other Cambridges. We know Google collects data about us, which is a controversial topic but also helpful to translate subjective phrases into objective data, such as which Cambridge I’m referring to.
As we enter the age of voice assistants, natural language processing becomes a bigger and bigger topic. As well as Google’s Dialogflow, you can use Microsoft’s LUIS or Amazon’s Comprehend, all of which are proprietary cloud services. Mycroft maintain some open source intent parsing tools, and perhaps there are more — let me know!
The language processing in the Hummingbird update brought to the forefront a feature launched a year earlier called Knowledge Graph, which provided results as structured data rather than blue hyperlinks. It wasn’t the first mainstream semantic search engine — maybe Wolfram Alpha gets that crown — but it was a big change.
Wolfram Alpha claims to work with manually curated data. To a point, so does Google’s Knowledge Graph. Google’s own SEO guide asks us to mark up our websites with JSON-LD to provide ‘rich search results’.
DuckDuckGo was also early to the ‘instant answers’ party. Interestingly their support was developed largely in the open. You can see a list of their instant answers here, and see the code on GitHub.
Back to the Desktop Future
Web search means querying an index built from millions of machines across the Web, while desktop search deals with just one. However, it’s important to see search in context. Nearly every desktop user interacts with a web search engine too, and this informs our expectations.
Unsurprisingly, I’m going to recommend that GNOME continues with Tracker as its search engine and that more projects try it out. I maintain a Wishlist for Tracker (which used to be a roadmap, but Jeff inspired me to change it ;). Our wishlist goals are mostly for Tracker to get better at doing the things it already does. Our main focus has to be stability, of course. I want to start a performance testing initiative during the GNOME 40 cycle. This would help us test out the speedups in Tracker Miners master branch, and catch some extractor bugs in the process. Debian/Ubuntu packaging and finishing the GNOME Photos port are also high on the list.
When we look at where desktop search may go in the next 10 years, Tracker itself may not play the biggest role. The possibilities for search are a lot deeper. Six months ago I asked on Reddit and Discourse for some “crazy ideas” about the future of search. I got plenty of responses but nothing too crazy. So here’s my own attempt.
Automated Tagging
Maybe because I grew up with del.icio.us, I think tagging is the answer to many problems, but even I’ve realised that manually tagging your own photos and music is a long process, and one I’m not very interested in doing.
A forum commenter noted that AI-based image classification is now practical on desktop systems, using onnxruntime. The ONNX Model Zoo contains various neural nets able to classify an image into 1,000 categories, which we could turn into tags. Another option is the FANN library if suitable models can be found or trained.
This would be great to experiment with and could one day be a part of apps like GNOME Photos. In fact, Tobias already proposed it here.
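To give a feel for how little glue code is involved, here is a rough sketch of classifying a single image with onnxruntime. It assumes you have downloaded a classifier and a matching label list from the Model Zoo; the file names and the simplified preprocessing are placeholders, and a real implementation would need the exact normalisation each model expects.

    import json
    import numpy as np
    import onnxruntime as ort
    from PIL import Image

    session = ort.InferenceSession("squeezenet1.1.onnx")    # hypothetical model file
    labels = json.load(open("imagenet_labels.json"))        # 1,000 class names

    def tags_for_image(path, top_n=3):
        # Resize and convert to the NCHW float32 layout most Model Zoo nets expect.
        image = Image.open(path).convert("RGB").resize((224, 224))
        data = np.asarray(image, dtype=np.float32).transpose(2, 0, 1) / 255.0
        data = np.expand_dims(data, axis=0)

        input_name = session.get_inputs()[0].name
        scores = session.run(None, {input_name: data})[0].flatten()
        best = np.argsort(scores)[::-1][:top_n]
        return [labels[i] for i in best]

    print(tags_for_image("holiday.jpg"))   # e.g. ['seashore', 'sandbar', 'lakeside']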
Time-based Queries
The Zeitgeist engine hasn’t seen much development recently but it’s still around, and crvi recently rejuvenated the Activity Journal app to remind us what Zeitgeist can do.
We talked about integrating Zeitgeist with Tracker back in 2011, although it got stuck at the nitpicking stage. With advancements in Tracker 3.0 this would be easier to do.
The coolest way to integrate this would be via a natural language query engine that would allow searches like "documents yesterday" or "photos from 2017". A more robust way would be to use an operator like 'documents from:yesterday'. Either way, at some layer we would have to integrate data from Zeitgeist and Tracker Miner FS and produce a suitable database query.
Query Operators
Users are currently given a basic option — matching keywords in a file via Nautilus or the Shell — and an advanced option of opening the Terminal to write a SPARQL query.
We should provide something in between. I once saw a claim that Tracker isn’t as powerful as Recoll because it doesn’t support operators. The problem is actually that we expose a less powerful interface. Support for operators would be relatively simple to add, could be implemented incrementally, and would make a good learning project for a search engineer.
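To illustrate the idea, and only that, here is a hypothetical sketch of the parsing step: split the query into plain keywords plus operator filters, which the frontend could then turn into a SPARQL filter. The operator names and date handling are invented for the example.

    from datetime import date, timedelta

    OPERATORS = {"from", "type", "tag"}        # hypothetical operator set

    def parse_query(text):
        keywords, filters = [], {}
        for token in text.split():
            op, _, value = token.partition(":")
            if value and op in OPERATORS:
                filters[op] = value
            else:
                keywords.append(token)
        return keywords, filters

    def resolve_date(value):
        if value == "yesterday":
            return date.today() - timedelta(days=1)
        return date.fromisoformat(value)       # e.g. from:2017-06-01

    keywords, filters = parse_query("documents from:yesterday")
    print(keywords, filters)                   # ['documents'] {'from': 'yesterday'}
    print(resolve_date(filters["from"]))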
Suggestions
The full-text search engine in Tracker SPARQL does some basic query expansion in the form of stemming (turning ‘drawing’ into ‘draw’, for example) and removing stop words like ‘the’ and ‘a’. Could it become smarter?
We are used to autocomplete suggestions these days. This requires the user interface to ask the search backend to suggest terms matching a prefix, for example when I type mon the backend returns money, monkey, and any other terms that were found in my documents. This is supported by other engines like Solr. The SQLite FTS5 engine used in Tracker supports prefix matching, so it would be possible to query the list of keywords matching a prefix. Performance testing would be vital here as it would increase the cost of running a search.
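As a rough illustration of what the backend side could look like, here is a sketch using plain SQLite from Python, assuming a build with the FTS5 module enabled. The table names are made up and Tracker’s real schema is quite different.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE VIRTUAL TABLE docs USING fts5(content);
        INSERT INTO docs(content) VALUES ('monkey money monsoon'), ('market monday');
        -- fts5vocab exposes the indexed terms, one row per term.
        CREATE VIRTUAL TABLE docs_terms USING fts5vocab(docs, 'row');
    """)

    prefix = "mon"
    suggestions = [row[0] for row in db.execute(
        "SELECT term FROM docs_terms WHERE term >= ? AND term < ? ORDER BY cnt DESC",
        (prefix, prefix + "\uffff"))]
    print(suggestions)   # e.g. ['monday', 'money', 'monkey', 'monsoon']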
Google takes autocomplete a step further by predicting queries based on other people’s search history and page contents. This makes less sense on the desktop. We could make predictions based on your own search history, but this requires us to record search history which brings in a responsibility to keep it private and secure. The most secure data is the data you don’t have.
Typo tolerance is also becoming the norm, and is a selling point of the Typesense search engine. This requires extra work to take a term like gnomr and create a list of known terms (e.g. gnome) before running the search. SQLite provides a ‘spellfix1’ module that should make this possible.
Suggestion engines can do lots more cool things, like recommending an album to listen to or a website to visit. This is getting beyond the scope of search, but it can reuse some of the data collected to power search.
The Cloud
When Google Desktop joined the list of discontinued Google products, the rationale was this:
People now have instant access to their data, whether online or offline. As this was the goal of Google Desktop, the product will be discontinued.
The life of a desktop developer is much simpler if we assume that all the user’s content lives on the Web. There are several reasons not to accept that as the status quo, though! There is a loss of privacy and agency that comes from giving your data to a 3rd party. Google no longer read your email for advertising purposes, but you still risk not being able to access important data when you need it.
Transferring data between cloud services is something we have tried to solve before (the Conduit project was a notable attempt, and later the GNOME Online Miners), and it’s still a goal today. The biggest blocker is that most Cloud services have a disincentive to let users access their own data via 3rd party APIs. For freemium, ad-supported services it doesn’t make sense to let users use your service while sidestepping the adverts that fund it.
Conduit, an early data sync app. Screenshot from Wikipedia
Another problem is that maintaining a local index of remote content is risky. It may result in unwanted network traffic — what if an indexer updates the index while the computer is tethered to an expensive 3G connection? It may result in huge local caches of documents the user doesn’t actually care about. It’s difficult to get defaults that work for everyone.
My suggestion is to make online sync features as explicit and opt-in as possible. We should avoid ‘invisible design’ here. If a user has content in Dropbox that they want available locally, they will search online for “how to search dropbox content in gnome.” We don’t need to enable it by default. If they want to search playlists from Spotify and Youtube locally, we can add an interface to do this in GNOME Music.
Another challenge is testing our integrations. It’s hard to automatically test code that runs against a proprietary web service, and at the moment we just don’t bother.
It’s early days for the SOLID project but their aim is to open up data storage and login across web services. I’d love it if this project makes progress.
Mashups, The Revenge
When I first got involved in Tracker, I was doing the classic thing of implementing a music player app, as if the world didn’t have enough. I hoped Tracker could one day pull data from Musicbrainz to correct the metadata in my local media files. Ten years ago we might have called this a “mashup”.
The problem with Tracker Miner FS correcting tags in the background is that it’s not always clear what the correct answer is. Later I discovered Beets, a dedicated tool that does exactly what I want. It’s an interactive CLI tool for managing a music database. GNOME Music has work-in-progress support for using metadata from Musicbrainz, and Picard is still available too.
Tag editor in GNOME Music with suggestions from MusicBrainz, from app mockups
I think we’ll see more such organisation tools, focused on being semi-automated rather than fully automated.
GNOME Shell search
GNOME Shell’s search is super cool. Did you know you can do maths and unit conversions via the Calculator search provider? Did you know you can extend it by installing apps that provide other search providers? You probably knew it’s completely private by default. Your searches don’t leave the machine, unlike on Mac OS. That’s why the Shell can’t integrate Web search results — we don’t want to send all your local searches to a 3rd party.
We can integrate more offline data sources there. I recently installed the Quick Lookup dictionary app which provides Wiktionary results. It doesn’t integrate with local search, but it could do, by providing an optional bundle of common Wiktionary definitions to download. Endless OS already does this with Wikipedia.
The DBus-federated API of Shell search has one main drawback — a search can wake up every app on your system. If a majority of apps store data with libtracker-sparql, the shell could use libtracker-sparql to query these databases directly and save the overhead of spawning each app. The big advantage of the federated approach, though, is that search providers can store data in the most suitable method for that data. I hope we continue to provide that option.
Natural Language
Could search get smarter still? Could I type "Show me my financial records from last year" and get a useful answer?
Natural language queries are here already; they just haven’t made it to the desktop outside of Mac OS. The best intent parsing engines are provided as Software as a Service, which isn’t an option for GNOME to use. Time is on our side though. The Mycroft AI project maintain two open source intent parsers, Adapt and Padatious, and while they can’t yet compete with the proprietary services, they are only getting better.
What we don’t want to do is re-invent all of this technology ourselves. I would keep the natural language processing separate from Tracker SPARQL, so we can integrate today’s technology, which is mostly written in Python and JavaScript. A more likely goal would be a new “GNOME Assistant” app that could respond to natural language voice and text queries. When stable, this could integrate closely with the Shell in order to spawn apps and control settings. Mycroft AI already integrates with KDE. Why not GNOME?
This sounds like an ambitious goal, and it is, so let me propose a smaller step forwards. Mycroft allows developers to write Skills, as does Alexa. Can we provide skills that integrate content from the desktop? Why should a smart speaker play music from Spotify but not from my hard drive, after all?
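As a taste of what that could look like, here is a rough skeleton of a Mycroft Skill in Python. The intent file, dialog names and the search helper are hypothetical placeholders rather than an existing skill.

    from mycroft import MycroftSkill, intent_file_handler

    def search_local_library(artist):
        # Placeholder: a real skill might query Tracker Miner FS here.
        return []

    class LocalMusicSkill(MycroftSkill):

        @intent_file_handler('play.local.music.intent')
        def handle_play_local_music(self, message):
            # Padatious fills message.data from the utterance, e.g.
            # "play something by {artist} from my music folder".
            artist = message.data.get('artist')
            if search_local_library(artist):
                self.speak_dialog('playing.local', {'artist': artist})
            else:
                self.speak_dialog('nothing.found', {'artist': artist})

    def create_skill():
        return LocalMusicSkill()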
End to End Testing
Tracker SPARQL and Tracker Miners have automated tests to spot regressions, but we do very little testing from a user perspective. It requires a design for how search should really work in all of its corner cases — what happens if I search for "5", or "title:Blog", "12 monkeys", or "albums from 2015", or "documents I edited yesterday"?
As more complex features are added, testing becomes increasingly important, not just as “unit tests” but as whole-system integration tests, with realistic sets of data. We have the first step — live VM images — now we need the next step of testing infrastructure, and tests.
Funding
Funding is often key to big changes like the ones described in this series. A lot of development in GNOME happens due to commercial and charitable sponsorship, where contributors develop and maintain GNOME as part of their employment. Contribution also happens through volunteer effort, as part of research funding (see NLnet’s call for “Next Generation Search and Discovery” proposals), or sponsored efforts like GSoC and Outreachy. This is often on a smaller scale, though.
I want this series to highlight that search is a vital part of user experience and an important area to invest design and engineering effort. To answer the question “When can we have all this?”… you’re probably going to have to follow the money.
In conclusion…
This was an unusual summer: I had plenty of free time, but a pandemic stopped me from going very far. Between river swims, beach trips and bike rides I still had a lot of time for hacking – I counted more than 160 hours donated to the Tracker 3 effort in July and August.
I’m now looking for a new job in the software world. I’ll continue as a maintainer of Tracker and I promise to review your patches, but the future might come from somewhere else.
This is part 4 of a series. Part 1 is here, part 2 is here, and part 3 is here. Come back next week for my thoughts on the future of desktop search.
I thought this was going to be the last blog post about Tracker 3.0, but it got rather long and I decided to turn this appraisal of the project and its design into a post of its own. So, get ready for some praise and some criticism!
To criticise the design and implementation of a software project we first need to understand its requirements. What is it trying to do? I’ve written some goals and non-goals for the Tracker search engine, and proposed them for inclusion in the README. Now I’m going to compare the goals with the reality. I am of course a biased observer, and I welcome you to make your own assessment, but let’s dive in!
Goal 1. Provide real-time searching and browsing of desktop content.
Tracker achieves this with a background indexing service and a fast SQLite database. Queries are fast and SQLite works well.
For apps that want to use search, we provide a flexible API but not a simple API. You can see this in use in Nautilus, and yes that’s 3 screenfuls of code.
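To be fair, a basic query on its own is short; it’s the async plumbing, error handling and result marshalling that add up. Here is a hedged sketch of a minimal synchronous query from Python, using Tracker’s fts:match extension. Exact method signatures can vary a little between Tracker 3.x releases.

    import gi
    gi.require_version('Tracker', '3.0')
    from gi.repository import Tracker

    # Connect to the filesystem index that Tracker Miner FS publishes on D-Bus.
    conn = Tracker.SparqlConnection.bus_new(
        'org.freedesktop.Tracker3.Miner.Files', None, None)

    cursor = conn.query('SELECT ?r WHERE { ?r fts:match "holiday" } LIMIT 10', None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])   # get_string() returns a (value, length) pair

    cursor.close()
    conn.close()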
I have seen complaints about slow queries, which can happen if the database becomes enormous (multiple GBs) or becomes corrupted, but these are caused by things mostly out of our control — configuration mistakes, filesystem issues, or SQLite bugs. Where Tracker generates performance complaints it’s almost always related to the background indexing. More on that later.
Searching via GNOME Shell is not quite as fast as Tracker itself, as it relies on spawning many app processes to return results. If more apps start storing data using Tracker SPARQL then we may be able to optimise this in future, by having one process that queries all available databases.
Goal 2. Provide searchable data storage for desktop apps.
This was always a goal, but pre-3.0 it was never widely adopted. I wrote about the drawbacks of the old ‘central data store’ model in the first post of the series. A big blocker for apps was that a central data store means a central database schema. It discourages innovation if adding a feature to your app requires first designing it, then negotiating a schema change with the Ontology Overlords, and then trying it out to see how it works.
Tracker 3.0 brings independent app databases, so our story has improved a lot, but would you store app data with Tracker SPARQL? Why not use SQLite directly?
It’s true that SQLite is a great storage solution which you should definitely use if it suits you. Here’s what Tracker SPARQL provides that SQLite does not:
Implicitly handled database migrations. These are good for simple cases but come with some restrictions; in particular, any migration that may cause data loss is prohibited.
The ability to publish app data as a D-Bus endpoint. This is a speculative benefit for the moment, but may allow richer search results and more efficient Shell searches in future.
Similar read/write performance to raw SQLite. The SPARQL translation overhead isn’t huge and can often be mitigated altogether with prepared queries.
Noticeably higher disk space usage compared to SQLite. See below for more about why this happens.
A GObject-based API simpler than SQLite’s own.
Goal 3. Allow full-text search within common document types.
To do full-text search we need to load data from different file formats. The tracker-extract-3 miner is responsible for this, and you can see the supported formats in the code. It’s rare that I see bugs about this part of the code aside from obscure crashes; in fact, recently we’ve removed more formats than we added (goodbye DVI, MIDI, and source code formats).
Tracker’s full-text search capability is built on the SQLite FTS5 module. We currently don’t expose the full power of the FTS5 query language, only supporting basic keyword matches. And FTS5 is not particularly powerful compared to larger scale engines like Lucene. However, we are in line with state-of-the-art in other desktop search engines.
Goal 4. Allow advanced queries using a standard query language.
It’s fundamental to the current design of Tracker that we use SPARQL to query and update the database. The nice thing about that is it’s a well-maintained standard, reasonably intuitive and with tutorials available online. It even supports Federated Queries out-of-the-box, something neither SQL nor GraphQL manage.
However, SPARQL is not the easiest language to work with — there are many ways to typo a query so that it produces no results without telling you why. It’s also verbose. When porting GNOME Photos I ended up creating a templating system and some page-long queries.
The Tracker Miners don’t have special access to the database — they use SPARQL like everyone else, and we could in principle replace the current database engine with one of the many other SPARQL databases in existence, if there were any suitable alternatives, which there aren’t. SPARQL is quite a high barrier for a database engine to support and those that do support it are either too heavy for desktop/embedded, like JENA and Virtuoso, or unfinished / abandoned like 4store and Oxigraph. In the medium term we are likely to stick with our SQLite + SPARQL translation layer approach.
Removing the “advanced query language” requirement would allow switching to a different query API, but that would require rewriting Tracker Miner FS and all apps that use Tracker. It also wouldn’t open up many more options. Xapian is a possibility, but its query engine is much more limited than our current one, and it has its own issues. The state of the art in open-source web search is the Lucene search engine, but it’s written in Java and, for better or worse, there would be controversy if GNOME came to depend on the Java runtime. A C++ port of Lucene exists, but is sufficiently abandoned as to still be hosted on SourceForge.net.
Goal 5. Be secure and private by default.
Despite the name, Tracker’s privacy credentials are strong. Your data never leaves your machine. Tracker’s index is as private as the rest of your home directory.
Tracker Miner FS scans files in the background, including inside the Downloads folder, which means a bug in tracker-extract or tracker-miner-fs could be exploited by an attacker who tricks you into downloading a file that triggers the bug. The risk is highest in tracker-extract, which actually opens the files and parses them. To mitigate this, tracker-extract is sandboxed using SECCOMP, preventing it from making network connections or reading files other than the one it’s processing. SECCOMP isn’t an ideal solution and causes occasional breakages, but it’s an important safeguard.
Of course there is also content-based sandboxing provided for Flatpak apps which I wrote about in Tracker 3.0: What’s New?
Goal 6. Be efficient enough for desktop and mobile use.
Tracker Miner FS does well in profiling and is being optimised even more. You will find many reports online about “Tracker high CPU usage”, however. It’s a common enough question that we mention it in the FAQ. So is Tracker slow or not?
In almost all cases, reports of high CPU usage are due to internal bugs, rather than problems in the design. And these bugs can be terrible! It’s disappointing to hear about Tracker Miners causing people’s laptop fans to turn on for hours. Such issues are tricky to reproduce and have varied causes — bugs in the complex miner-fs/extractor code, bugs in dependencies, or misguided attempts to index huge dumps of text like source code. I’m yet to see one that indicates a fundamental design flaw, so for now the solution seems to be relying on users who experience these problems taking the time to help diagnose, reproduce and fix them.
As for disk space, I am not happy about the amount of space the index can consume, but it’s part of a speed/size tradeoff we made which is currently balanced towards speed. An empty Tracker SPARQL database can start at 3.2MB of disk space. We store the database schema in the database as RDF data, which is where most of this initial overhead comes from. We also use a data storage structure that’s optimised for RDF querying and serialisation, and it’s not the most efficient way to store data in SQLite.
Goal 7. Be maintainable by a small team.
This is the last item on the list but it should be the first, because an unmaintainable project will inevitably meet a sad end.
The bus factor of Tracker is currently a concern.
There are around 30,000 lines of code in libtracker-sparql, mostly devoted to translating SPARQL to SQL. Carlos has focused a lot on rewriting this and it’s impressively clean — see the SPARQL parser in particular — but debugging it can be tricky as it generates very long, hard-to-read SQL queries. It has fairly good test coverage, but more tests would be great.
The Tracker Miner FS code is also complex with about 15,000 lines of code, and it is written with a combination of event callbacks, signals and virtual functions that I like to call “object-oriented spaghetti code”. I tried to graph the flow of execution once and gave up, so instead I spend an hour or more re-discovering how it works every time I do some work there. Deferring to the main loop as much as possible was important in the 2000s when single-core CPUs were still common and a long-running task in a daemon process could cause freezes in the graphical desktop. Now it just overcomplicates the code.
The multiple layers of inheritance are due to an unrealized goal of creating different kinds of content crawlers using the existing miner-fs code. In theory a TrackerMinerFS could crawl anything that can be represented as a GFile. In practice this has never been done. GNOME Online Miners do their own thing, and indeed online resources often don’t have a filesystem-like structure. In Tracker 3.0 we made libtracker-miner private, which lets us refactor it without worrying about API stability, though it also makes it less likely that anyone will reuse it. Perhaps it’s time to simplify this part of the code.
I’ve tried to focus the majority of my energy on making the project more maintainable, something I wrote about already. In the Bugzilla days it was painful to review and test merge requests, but now that we have GitLab and automated testing, most merge requests are simple and fun to review.
As for testing, both Tracker and Tracker Miners now have test coverage reports. The situation is OK but there is a long way to go.
Is Tracker the right answer?
A healthy project needs a stream of new contributors. I don’t currently see this happening.
One issue is that Tracker is not widely used outside GNOME. I try to maintain a list of apps and platforms using Tracker which illustrates this. (Please tell me if there’s something missing from the list!). That said, you can never be sure if more widespread use would help spread the work of maintenance, or simply bring more bug reports.
I write these blog posts in the hope that telling the reality of search in GNOME will bring some more understanding and involvement. On screen, the search interface is a small box, which makes it seem like the backend might be small and simple too — trust me that it isn’t.
It’s my opinion that Tracker remains the most suitable option for powering search in GNOME and further afield. (I’d be happy to be proved wrong, if the result is a better desktop search experience.) The main issue is fixing the last of the bugs that cause high resource usage. Look out soon for a testing initiative, in which we’ll try out recent speed improvements and try to catch and reproduce any remaining performance issues.
Come back next week for the final post, my thoughts on the next ten years of desktop search.
Gradually, the automated indexers got better. Google’s PageRank algorithm was a breakthrough, using a statistical approach to determine ‘high quality’ websites and list those ones first.
The holy scriptures of the Web already proposed a different solution: not only should documents form webs, but the data inside them should too. Instead of a statistical model of ‘relevance’, the information would be present as structured data which you could search and query with a semantic search engine. In 1998 (the same year Google was founded) this idea was formalised as the Semantic Web.
Semantic What?
At this point, perhaps you are curious what ‘semantic search’ means, or perhaps your eyes glaze over at the sight of the word ‘semantic’, or perhaps memories of the 2000s have already caused your fists to clench in fury.
The article “Why Machine Learning Needs Semantics Not Just Statistics” gives a good introduction: the word ‘semantic’ is usually used to highlight that, for information retrieval tasks, our current statistical approaches are primitive.
In essence, they are akin to a human shown patterns in a pile of numbers and asked to flag future occurrences without any understanding of what those numbers represent or what the decision involves.
This is one of the reasons that current deep learning systems have been so brittle and easy to fool despite their uncanny power. They search for correlations in data, rather than meaning.
You may be thinking: I know that you can implement today’s machine learning using match boxes, but that link only tells me what semantic search isn’t, it doesn’t tell me what it is. If so, you’re on the money. In the years following its inception, a frequent criticism of the Semantic Web was that it was under-specified and too “utopian”.
Teaching machines to understand meaning
Does that sound utopian? Well, maybe. A lot of digital ink has been spilled on this topic over the last 20 years in often heated debates. However, there are some level-headed voices.
One in particular was that of Aaron Swartz in the book “A Programmable Web”, published unfinished after his tragic death in 2013. In the introduction he notes:
The idea’s proponents do not escape culpability for these utopian perceptions… Instead of the “let’s just build something that works” attitude that made the Web (and the Internet) such a roaring success … they’ve convinced people interested in these ideas that the first thing we need to do is write standards.
Certainly, the most visible output of the Semantic Web effort has been various standards. Some early efforts are laughable: you would win an ‘obfuscated data format’ contest with JSON Triples, and all the data formats that use XML (a markup language) as a syntax are questionable at best, or, to quote Swartz again, “scourges on the planet, offenses against hardworking programmers”.
Later standards, particularly the JSON-LD data format and the SPARQL query language, are nice to use. But the effort’s most significant output is the RDF data model.
Here is some fan-mail that RDF has received over the years.
“It would be nice as a universal publishing format… far preferable to XML” (Aaron Swartz),
“A deceptively simple data model [which] trivializes merging of data from multiple sources” (Ian Davis)
“RDF is a shitty data model. It doesn’t have native support for lists. LISTS for fuck’s sake!” (Manu Sporny, creator of JSON-LD)
“Someone should describe RDF in 500 words or less as a generalization of INI. That note would spread understanding of RDF, which is simple but often described so abstractly that it seems complicated.” (Mark Evans, Lambda the Ultimate)
I think RDF is a reasonable data model which maps closely to the more intuitive document/key/value model. (Until you want to make a list). A more important criticism is whether a data model is what we really needed.
Clay Shirky discussed this in a scathing criticism of the Semantic Web from 2003:
Since it’s hard to make machines think about the world, the new goal is to describe the world in ways that are easy for machines to think about… The Semantic Web takes for granted that many important aspects of the world can be specified in an unambiguous and universally agreed-on fashion, then spends a great deal of time talking about the ideal XML formats for those descriptions.
For a detailed history of the Semantic Web, I recommend this Two Bit History article. Meanwhile, we need to go back to the desktop world.
From desktop search…
The 2000s were also a busy time for GNOME and KDE. During the 90s desktop search was an afterthought, but in the new millennium, perhaps driven by advances on the web, lots of research took place.
Microsoft introduced WinFS (described by Gates as his biggest disappointment), Apple released Spotlight, even Google briefly weighed in, and the open source world responded as we always do: with several incompatible projects all trying to do the same thing.
Here’s a release timeline of some free desktop search projects:
I was still in school when Eazel created Nautilus and went bust shortly after. They created Medusa to provide full-text search for Nautilus, but without funding the project didn’t get past a 0.5 release. The Xapian library also formed around this time from a much older project. Both aimed to provide background indexing and full-text search, as did the later Beagle.
Tracker began in late 2005, introduced by Jamie McCracken and focusing on a “non-bloated implementation, high performance and low memory usage”. This was mostly a response to Beagle’s dependency on the Mono C# runtime. Tracker 0.1 used MySQL or SQLite, but instead of exposing the SQL engine directly it would translate queries from RDF Query, an XML format which predates SPARQL and is not something you want to type out by hand.
…to the Semantic Desktop
In 2006 the NEPOMUK project began. The goal was not a search engine but “a freely available open-source framework for social semantic desktops”, or, put even less simply, a “Networked environment for personal ontology-based management of unified knowledge”. The project had €17 million of funding, much of it from the EU. The Semantic Web mindset had reached the free desktop world.
I don’t know where all the money went! But one output was NEPOMUK-KDE, which aimed to consolidate all your data in a single database to enable new ways of browsing and searching. The first commit was late 2006. Some core KDE apps adopted it, and some use cases, ideas and prototypes emerged.
Meanwhile, Nokia were busy contracting everyone in the Free Software world to work on Maemo, an OS for phones and tablets which would have marked the start of the smartphone era had a certain fruit-related company not beaten them to it.
Nokia began funding six developers to work on Tracker (rather a rare event for a small open source project), and planned to use it for media indexing, search, and app data storage. There was a hackfest where many search projects were represented, and a standardisation effort called XESAM which produced a query language still in use today by Recoll.
Presentations about Tracker from this era show a now-familiar optimism. There’s a plea to store all app data in Tracker’s database, with implied future benefits: data sharing between apps, tagging, and the vague promise of “mashups”. There are the various diagrams of RDF graphs and descriptions of what SPARQL is. But there’s also an increasing degree of pragmatism.
By 2009 it was clear that there was no easy route to the Semantic Desktop valhalla. As search engines and desktop databases became more widely deployed, more and more complaints about performance started to appear, and as a project destined for low-powered mobile devices, Tracker had to be extra careful in this regard.
Where are they now?
A decade since all this great tech was developed, why aren’t you using a Nokia smartphone today? The so-called Elopocalypse marked the end of ‘semantic desktop’ investment. Twenty years later, the biggest change to search in GNOME came from the GNOME Shell overview design, which uses a simple D-Bus API with no ‘semantic desktop’ tech in sight.
Tracker is still here, powering full-text search behind the scenes for many apps, and its longevity is a testament to Nokia’s decision to work fully upstream and share their improvements with everyone. Writing a filesystem indexer is hard and we’re lucky to build on the many years of investment from them. Credit also lies with volunteer maintainers who kept it going since Nokia gave up, particularly Martyn Russell and Carlos Garnacho, and everyone who has contributed to fixing, testing, translating and packaging it.
The Nepomuk data model is still used in Tracker. There was an attempt to form a community to maintain it after the funding ended, but their official home hasn’t seen an update in years and so Tracker keeps its own copy with our local modifications.
NEPOMUK-KDE did not make it to 2020. An LWN commentator summarizes the issue:
Nepomuk was 1 big, powerful, triplet-capable database that was meant to hold everything … it got too big and would corrupt sometimes and was slow and unstable…
So when the funding ran out, different ppl worked on it for a long time, trying to make it perform better. They got, well, somewhere, pretty much made the pig fly, but the tech was inherently too powerful to be efficient at the ‘simple’ use case it had to do most of the time: file name and full text search.
Tracker has had its share of performance issues, of course, but the early focus on mobile meant that these were mostly due to coding errors, rather than a fundamentally unsuitable design built around an enterprise-scale database. In 2014 KDE announced the replacement, Baloo, a Xapian-based search engine that provides full-text search and little else.
So why does Tracker use RDF and SPARQL, when you can provide full text search without it?
It’s partly for “historical reasons” — it seemed a good idea at the time, and it’s still a good enough idea that there’s no point creating some new and non-standard interface from scratch. SPARQL is a good standard which suits its purpose and is in wide use in government and sciences.
RDF is still widely used too. The idea of providing structured data in websites caught on where there’s a business case for it. This mostly means adding schema.org markup so your content appears in Google, and Open Graph tags so it displays nicely in Facebook. There are also some big open data repositories published as RDF.
Whatever your perspective, it’s pretty clear that machines still don’t understand meaning, and the majority of data on the web is not open or structured. But that doesn’t mean there’s nothing we can do to improve the desktop search experience!
Come back next week for the final part of this series, my thoughts on the next ten years of Tracker and desktop search in general.
However, the shortcomings of the 2000s-era design have been clear for a while.
Back in 2005, around the time your grandparents first met each other on Myspace, it seemed a great idea to aggregate all the metadata we could find into a single database. The old tracker-store database from Tracker 2.x includes the search index created by Tracker Miner FS right next to user data stored by apps like Notes, Photos and Contacts. This was going to allow cool features like tagging people in your photo collection with their phone number and online status (before the surveillance-advertising industry showed how creepy that actually is). Ivan Frade’s decade-old talk “Semantic social desktop & mobile devices” is a great insight into the thinking of the time.
I’m going to dig into Tracker’s origins in a future article, but for now — note that “Security on graphs” is listed in Ivan’s presentation as a “to-do” item.
Security
In a world of untrusted Flatpak apps, “to-do” isn’t good enough. Any app that uses the system search service, even any app that stores RDF data with Tracker, requests a Flatpak D-Bus permission for org.freedesktop.Tracker1. This gives access to the entire tracker-store database, right down to the search terms indexed from the ‘Documents’ folder. Imagine your documents as a savoury snack, stored in a big monolithic building. You accidentally install and run a malicious app, represented here as a hungry seagull…
To solve this, we had to make access control more granular. It didn’t make sense to retrofit this onto tracker-store. During 2019, Carlos casually eliminated the monolithic tracker-store altogether and in its place implemented a desktop-wide distributed database, taking Tracker from a “public-by-default” model to “private-by-default”.
The new libtracker-sparql-3 API lets apps store SPARQL data anywhere they like. You can keep it private, if you just want a lightweight database. Nautilus and Notes are already doing this, to store starred files and note data respectively.
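Here is a hedged sketch of what that can look like from Python: create a private store using the bundled Nepomuk ontology, write some data, and query it back. The storage path and the data are placeholders, and exact signatures may differ slightly between 3.x releases.

    import gi
    gi.require_version('Tracker', '3.0')
    from gi.repository import Gio, Tracker

    # A private store under a hypothetical path, with the built-in Nepomuk ontology.
    store = Gio.File.new_for_path('/tmp/my-app-data')
    conn = Tracker.SparqlConnection.new(
        Tracker.SparqlConnectionFlags.NONE,
        store,
        Tracker.sparql_get_ontology_nepomuk(),
        None)

    conn.update('INSERT DATA { <urn:example:note1> a nie:InformationElement ; '
                'nie:title "Shopping list" }', None)

    cursor = conn.query('SELECT ?title WHERE { ?n nie:title ?title }', None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])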
If and when you want to publish data, it’s done by creating a TrackerEndpoint on DBus. Using a SPARQL federated query, one Tracker SPARQL database can pull data from multiple others in a single query. This, for example, allows Photos to merge photo metadata from the search index with album metadata stored in its own database. (I wrote more about this back in March).
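Continuing the sketch above, publishing that private connection and querying it from another process might look roughly like this. The bus name is hypothetical, and the app is assumed to already own it on the session bus.

    from gi.repository import Gio, Tracker

    # 'conn' is the private Tracker.SparqlConnection created above.
    bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
    endpoint = Tracker.EndpointDBus.new(conn, bus, None, None)

    # Any process that can see com.example.MyApp on the bus can now merge this
    # data with its own in a single federated query:
    federated_query = '''
        SELECT ?title WHERE {
            SERVICE <dbus:com.example.MyApp> {
                ?note nie:title ?title .
            }
        }
    '''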
The search index created by Tracker Miner FS is published at org.freedesktop.Tracker3.Miner.Files, but we don’t let Flatpak apps access this directly. A new Flatpak portal gates access to search based on content type. You can now install a music player app and let it search ONLY your music collection, where previously your options were “let the app search everything” or “break it”.
A clearer architecture
“I’m finally starting to “get” tracker 3. And it’s like an epiphany. “
If someone asked “What actually is Tracker?” I used to find it tricky to answer. We narrowed the focus down to two things: a lightweight database, and a search engine.
For the last 3 years we worked on separating these two concerns, and as of Tracker 3.0 we are done. Were we starting from scratch, we could find clearer names for the two parts than ‘tracker’ and ‘tracker-miners’, but we kept the repos and package names the same to avoid making the 2.x to 3.x transition harder for distributors.
The name “Tracker” refers to the overall project. You can use “GNOME Tracker” for clarity where needed. The project maintains two code repositories:
Tracker SPARQL: a distributed database, provided as a GObject C library and implementing the full SPARQL 1.1 query standard.
Tracker Miners: a content indexer for the desktop, providing the Tracker Miner FS system service and its companion Tracker Extract.
The tracker3 commandline tool can operate on any Tracker SPARQL database, and it has some extensions for searching and managing the Tracker Miner FS indexer.
Decentralisation
The headline feature is there’s one less reason to claim “Flatpak’s sandbox is a lie!”. Decentralisation brings more benefits too:
You can backup app data by running tracker3 export on the app’s SPARQL database. Useful for Notes, Photos and more.
Apps can bundle Tracker Miners inside Flatpak, allowing them to run on platforms that don’t ship a suitable version of Tracker Miners in the base OS.
Apps are no longer limited to the Nepomuk data model when storing data. Tracker Miner FS still uses the Nepomuk ontologies, but apps can write their own. Distributed queries work even across different data models.
Tracker’s test suite now sets up a private database for testing using public API, avoiding some hideous hacks.
Apps’ test suites can also set up private databases and even a private instance of the indexer. GNOME’s search and content apps have rather low test coverage at present. I suspect this is partly because the old design of Tracker made it hard to write good tests.
A distributed database is fundamentally a cool thing that you definitely need.
Stability
A system service, like a Victorian child, should be “seen and not heard”. Nobody wants the indexer to drain the battery, burn out the fan or lock up the desktop.
We prioritize any issue which reports the Tracker daemons have been behaving badly. In collaboration with many helpful bug reporters, we removed two codepaths in 3.0 that could trigger high CPU usage. One major change is we no longer index all plain text files, only those with an allowed extension. If you unpack Linux kernel tarballs in your Music folder, this is for you! (Remember Tracker isn’t designed to index source code). We also dropped a buggy and pointless codepath that tried and mostly failed to extract metadata from random image/* type files using GStreamer’s Discoverer API.
Tracker Extract is designed for robustness but it also needs to report errors. If extraction of foo.flac fails it may indicate a bug in Tracker, or in GStreamer, or libflac, or (more likely) the file is corrupt or mis-labelled. In Tracker Miners 3 we have improved how extraction errors are reported — instead of using the journal, we log errors to disk (at ~/.cache/tracker3/files/errors/). This prevents any ‘spamming’ of the journal when many errors are detected. You can check for errors by running tracker3 status. Perhaps Nautilus could make these errors visible in future too.
Since Tracker Miners 3.0.0 was released, distro beta testers found two issues that could cause high CPU usage. These are fixed in the 3.0.1 release. If you see any issues with Tracker Miners 3.0.1, please report them on GitLab!
Here I also want to mention Benjamin’s excellent work to improve resource management for system services. Tracker Miner FS tries to avoid heavy resource use but filesystem IO is infinitely complicated and we cannot defend against every possible situation. Strange filesystems or bugs in dependencies can cause high CPU or IO consumption. If the kernel’s scheduler is not smart, it may focus on these tasks at the expense of the important shell and app processes, leaving the desktop effectively locked. Benjamin’s work lets the kernel know to prioritize a responsive desktop above any system services like Tracker Miner FS. Mac OS X has been able to do this since 2013.
Whatever this decade brings, it should be free from desktop lockups!
Standards
The 2.x to 3.x transition was difficult partly because Tracker missed some big pieces of the SPARQL standard. Implementing a 3.x-to-2.x translation layer was out of the question — we had no motivation to re-implement the quirks of 2.x just so apps could avoid porting to 3.x.
I don’t see another major version break in Tracker’s near future, but we are now prepared. Tracker implements almost all of the SPARQL 1.1 standard.
SPARQL is not without its drawbacks — more on that in a future article — but aside from a few simple C and DBus interfaces, all of Tracker’s functionality is accessible through this W3C standard query language. Better to reuse standards than to make our own.
…and more
We have a new website and improved documentation. The tracker3 commandline tool saw loads of cleanups and improvements. Files are automatically re-processed when the relevant tracker-extract module changes — a ten-year-old feature request. Debugging is nicer as a keyword-enabled TRACKER_DEBUG variable replaces the old TRACKER_VERBOSITY. Deprecated APIs and dependencies are gone, including the venerable intltool. The core and Nepomuk ontologies are slimmed down and better organised. We measure test coverage, and coverage is higher than ever. We enabled Coverity static analysis too, which has found some obscure bugs. I’m no doubt forgetting some things.
A few of these changes impact everyone, but mostly the improvements benefit power users, app developers, and ourselves as maintainers. It’s crucial that a volunteer-driven project like Tracker is easy and fun to maintain, otherwise it can only fail. I think we have paved the way for a bright future.
Come back next week to find out more about the background and future of Tracker.
This is part 1 of a series. Come back next week to find out more about the changes in Tracker 3.0.
It’s too early to say “Job done”. But we’ve passed the biggest milestone on the project we announced last year: version 3.0 of Tracker is released and the rollout has begun!
We wanted to port all the core GNOME apps in a single release, and we almost achieved this ambitious goal. Nautilus, Boxes, Music, Rygel and Totem all now use Tracker 3. Photos will require 2.x until the next release. Outside of GNOME core, some apps are ported and some are not, so we are currently in a transitional period.
The important thing is only Tracker Miner FS 3 needs to run by default. Tracker Miner FS is the filesystem indexer which allows apps to do instant search and content discovery.
Since Photos 3.38 still uses Tracker 2.x we have modified it to start Tracker Miner FS 2 along with the app. This means the filesystem index in the central Tracker 2 database is kept up-to-date while Photos is running. This will increase resource usage, but only while you are using Photos. Other apps which are not yet ported may want to use the same method while they finish porting to Tracker 3 — see Photos merge request 142 to see how it’s done.
Flatpak apps can safely use Tracker Miner FS 3 on the host, via Tracker’s new portal which guards access to your data based on the type of content. It’s up to the app developer whether they use the system Tracker Miner service, or whether they run another instance inside the sandbox. There are upsides and downsides to both approaches.
We all owe thanks to Carlos for his huge effort re-thinking and re-implementing the core of Tracker. We should also thank Red Hat for sponsoring some of this work.
I also want to thank all the maintainers who collaborated with us. Marinus and Jean were early adopters in GNOME Music and gave valuable feedback including coming to the regular meetings, along with Jens who also ported Rygel early in the cycle. Bastien dug into reviewing the tracker3 grilo plugin, and made some big improvements for building Tracker Miners inside a Flatpak. In Nautilus, Ondrej and Antonio did some heroic last-minute review of my branch, and together we reworked the Starred Files feature to fix some long-standing issues.
The new GNOME VM images were really useful for testing and catching issues early. The project’s chat room is very responsive and friendly; Abderrahim, Jordan and Valentin all helped me a lot to get a working VM with Tracker 3.
GNOME’s release team were also responsive and helpful, right up to the last minute freeze break request which was crucial to avoiding a “Tracker Miner FS 2 and 3 running in parallel” scenario.
Thanks also to GNOME’s translation teams for keeping up with all the string changes in the CLI tool, and to distro packagers who are now working to make Tracker 3 available to you.
Coming soon to your distro.
It takes time for a new GNOME release to reach users, because most distros have their own testing phase.
We can use Repology to see where Tracker 3 is available. Note that some distros package it in a new tracker3 package while others update the existing tracker package, so it’s worth checking both.
Coming up…
I have a lot more to write about following the Tracker 3.0 release. I’ll be publishing a series of blog posts over the next month. Make sure you subscribe to my blog or to Planet GNOME to see them all!
GNOME’s conference is online this year, for obvious reasons. I spent the last 3 months teaching online classes, so hopefully I’m prepared! I’m sad that there’s no Euro-trip this year and we can’t hang out in the pub, but it’s nice that we’re saving hundreds of plane journeys.
There will be two talks related to Tracker: Carlos and I speaking about Tracker 3 (Friday 23rd July, 16.45 UTC), and myself on how to deal with the challenges of working on GNOME’s session-wide daemons (Thursday 22nd July, 16.45 UTC). There are plenty of other fascinating talks, including inevitably one scheduled at the same time as ours, which you should, of course, watch as a replay during the break 🙂
Self-contained Tracker 3 apps
Let’s go back one year. The plan for Tracker 3 emerged when I spoke to Carlos Garnacho at GUADEC 2019 in Thessaloniki, probably over a Freddo coffee.
We had lots of improvements we wanted to make, but we knew we were at the limit of what we could do to Tracker while keeping compatibility with the 10+ year old API. Changing a system service isn’t easy though (hence the talk). I’m a fan of the ‘Flatpak model’ of app deployment, and one benefit is that it can allow the latest apps to run on older LTS distributions. But there’s no magic there – this only works if the system and session-wide services follow strict compatibility rules.
Tracker 3.0 adds important features for apps and users, but these changes require apps to use a new D-Bus API which won’t be available on older operating systems such as Ubuntu 20.04.
We’re considering various ways around this, and one that I prototyped recently is to bundle Tracker 3 inside the sandbox. The downside is that some folders will be double-indexed on systems where we can’t use the host’s Tracker, but the upside is that the app actually works on all systems.
I created a branch of gnome-music demoing this approach. GNOME’s CI is so cool now that you can just go to that page, click ‘View exposed artifact’, then download and install a Flatpak bundle of gnome-music using Tracker 3! If you do, please comment on the MR about whether it works for you 🙂 Next on my list is GNOME Photos, but this is more complex for various reasons.
Blocklists and Allowlists
Black people don't care if u stop saying "master bedroom" or if u remove problematic shows from decades ago from hulu, Black people want an end to police brutality and institutionalized racism. everything else is useless pandering
The world needs major changes to stamp out racism, and renaming variables in code isn’t a major change. That said, the terms ‘blacklist’ and ‘whitelist’ rely on and reinforce an association of ‘black bad, white good’. I’m happy to see a trend to replace these terms including Google, Linux, the IETF, and more.
It was simple to switch Tracker 3 to use the more accurate terms ‘blocklist’ and ‘allowlist’. I also learned something about stable releases — I merged a change to the 2.3 branch, but I didn’t realise that we consider the stable branch to be in ‘string freeze’ forever. (It’s obvious in hindsight 🙂) We’ve now reverted that, but a few translation teams already updated their translations, so to the Spanish, Brazilian Portuguese and Romanian translators – sorry for creating extra work for you!
Acknowledging merge requests
I’ve noticed while working on app porting that some GNOME projects are quite unresponsive to merge requests. I’ve been volunteering my time as a GNOME contributor for longer than I want to remember, but it still impacts my motivation if I send a merge request and nobody comments. Part of the fun of contributing to GNOME is being part of such a huge and talented community. How many potential contributors have we lost simply by ignoring their contributions?
This started me thinking about how to improve the situation. Being a GNOME maintainer is not easy and is in most cases unpaid, so it’s not constructive to simply complain about the situation. Better if we can mobilise people with free time to look at whatever uncommented merge requests need attention! In many cases you can give useful feedback even if you don’t know the details of the project in question – if there’s a problem then it doesn’t need a maintainer to raise it.
So my idea, which I intend to raise somewhere other than my blog when I have the time, is we could have a bot that posts to discourse.gnome.org every Friday with a list of merge requests that are over a week old and haven’t received any comments. If you’re bored on a Friday afternoon or during the weekend you’ll be able to pick a merge request from the list and give some feedback to the contributor – a simple “Thanks for the patch, it looks fine to me” is better than silence.
Let me know what you think of the idea! Can you think of a better way to make sure we have speedy responses to merge requests?
Lots of effort is going into Tracker at the moment. I was waiting for a convenient time to blog about it all, but there isn’t a convenient moment on a project like this, just lots of interesting tasks all blocked on different things.
App porting
With the API changes mostly nailed down, our focus moved to making initial Tracker 3 ports of the libraries and apps that use Tracker. This is a crucial step to prove that the new design works as we expect, and has helped us to find and fix loads of rough edges. We want to work with the maintainers of each app to finish off these ports.
If you want to help, or just follow along with the app porting, the process is being tracked in this GNOME Initiatives issue.
The biggest success story so far is GNOME Music. The maintainers Jean and Marinus are regular collaborators in #tracker and in our video meetings, and we’ve already got a (mostly) working port to Tracker 3. You can download a Flatpak build from that merge request, but note that it requires tracker-miners 3.0 installed on your host.
We’re hoping we can work around the host dependency in most cases, but I got excited and made unofficial Fedora packages of Tracker 3 which allowed me to try it out on my laptop.
We are also happy that GTK can be built against Tracker 3 already, and excited for the work in progress on Rygel. At the time of writing, the other apps with Tracker 3 work in progress are Boxes, Files, Notes, Photos and Videos. Some of these use the new tracker3 Grilo plugin, which we hope a Grilo maintainer will be able to review and merge soon. All help with finishing these branches and the remaining apps will be very welcome.
Release strategy
We have been putting thought into how to release Tracker 3. We need collaboration on two sides: from app maintainers, who need to volunteer their time and energy to review, test and merge the Tracker 3 changes in their apps, and from distros, who need to volunteer their time to package the new version and release it.
We have some tricky puzzles to solve, the main one being how an app might switch to Tracker 3 without breaking on Ubuntu 20.04 and other distros that are unlikely to include Tracker 3, but are likely to host the latest Flatpak apps.
We are hoping to find a path forward that satisfies everyone; again, you can follow the discussion in Initiative issue #17.
As you can see, we are volunteering a lot of our time at the moment to make sure this complicated project is a success.
Data exporting
We made it more convenient to export data from Tracker databases, with the tracker export command. It’s nice to have a quick way to see exactly what is stored there. This feature will also be crucial for exporting app data such as photo albums and starred files from the centralized Tracker 2 database.
Hardware testing with umockdev
The removable device support in Tracker goes largely untested, because you need to actually plug and unplug a real USB device to exercise it. As always, for a volunteer-driven project like Tracker it’s vital that testing the code is as easy as possible.
I recently discovered umockdev and decided to give it a spin. I started with the power management code because it’s super simple – on a low battery notification, we stop the indexer. I’m happy with the test code, but unfortunately it fails on GNOME’s CI runners with an error from umockdev:
sendmsg_one: cannot connect to client's event socket: Permission denied
I’m not sure when I’ll be motivated to dig into why this fails, since the problem only reproduces on the CI runners, so if anyone has a pointer on what’s wrong then please comment on the MR.
The pandemic also means I’m likely to be spending the whole summer here in Galicia which can hardly be seen as bad luck. Here’s a photo of a beautiful spot I discovered recently about 30km from where I live:
Next steps
Carlos is working on some final API tweaks before we make another Tracker 2.99 beta release, after which the API should be fully stable. The Flatpak portal is also nearly ready.
We hope to see progress with app ports. This depends more and more on when app developers can volunteer their time to collaborate with us. Progress in the next few weeks will decide whether we target GNOME 3.38 (September 2020) or GNOME 3.40 (March 2021) for switching everything over to Tracker 3.
Unlike GTK 4, I can’t show any cool screenshots. I do have some ideas about how to demonstrate the improvements over Tracker 2, however … watch this space!
As always, we are available on IRC/Matrix in #tracker and you are welcome to join our online meetings.
It’s cool storing stuff in a database, but what if you shared the database schema so other tools can work with the data? That’s the basic idea of Linked Data which Tracker tries to follow when indexing your content.
In a closed music database, you might see a “Music” table with a “name” column. What does that mean? Is it the name of a song, an artist, an album, … ? You will have to do some digging to find out.
When Tracker indexes your music, it will create a table called nmm:MusicAlbum. What does that mean? You can click the link to find out, because the database schema is self-documenting. The abbreviation nmm:MusicAlbum expands to a URL, which clearly identifies the type of data being stored.
By formalising the database schema, we create a shared vocabulary for talking about the data. This is very powerful – have you seen GMail Highlights, where a button appears in your email inbox to check in for a flight and such things? These are powered by the https://schema.org/ shared vocabulary. Google don’t manually add support to GMail for each airline in the world. Instead, the airlines embed a https://schema.org/FlightReservation resource in the confirmation email, which GMail uses to show the information. The vocabulary is an open standard, so other email providers can use the same data and even propose improvements. Everyone wins!
Recent improvements
Tracker began 5 years before the creation of schema.org, and we use an older vocabulary from a project called Nepomuk. Tracker may now be the only user of the Nepomuk vocabularies, but to avoid a huge porting effort we have opted to keep using them for 3.0.
Inspired by schema.org documentation, I changed the formatting of Tracker’s schema documentation trying to pack the important information more densely. Compare the 2.x documentation to the 3.x documentation to see what has changed – I think it’s a lot more readable now.
One thing you will notice if you followed the nmm:MusicAlbum link above is that the contents of the documentation still require some improvement. I hope to see incremental improvements here; if you think you can make it better, please send us a merge request!
CLI documentation
We maintain documentation for the tracker CLI tool in the form of man pages. These were a bit neglected. We now publish the man pages online, making it easier to read them and harder to forget they exist.
There is a well-written but quite outdated set of documentation at https://wiki.gnome.org/Projects/Tracker. It’s mostly aimed at setting up Tracker on systems where it doesn’t come ready-integrated – which is a use case we don’t really want to support. I’m a bit stuck, as I don’t want to delete what is quite good content, but I also don’t want to maintain documentation for things that nobody should need to do…
Documentation hosting
This is a periodic reminder that the library-web script that manages developer.gnome.org needs a major reworking, such as the one proposed here. All the Tracker documentation on http://developer.gnome.org/ is years out of date, because we switched to Meson which requires us to do extra effort on each release to post the documentation. Much kudos is awaiting the people who can resolve this.
Stay tuned
Work is proceeding nicely on Tracker 3.0 and we hope to have the first beta release ready within the next couple of weeks. At that point, there will be opportunities to help with testing app ports and making sure performance is good – I will keep you posted here!
This article has been updated to correct a misunderstanding I had about the CONSTRAINT feature. Apps will not need to explicitly add this to their queries; it will be added implicitly by the xdg-tracker-portal process.
Lots has happened in the 2 months since my last post, most notably the global coronavirus pandemic … in Spain we’re in week 3 of quarantine lockdown already and no one knows when it is going to end.
Let’s take our mind off the pandemic and talk about Tracker 3.0. At the start of the year Carlos worked on some key API changes which are now merged. It’s a good opportunity to recap what’s really changing in the new version.
I made the developer documentation for Tracker 3.0 available online. Thanks to GitLab, this can be updated every time we merge a change in Git. The documentation is a work in progress and we would appreciate your help improving it.
The documentation contains a migration guide, but let’s have a broader look at some common use cases.
Tracker 3.0 is still in development and things may change! We very much welcome feedback from app developers who are going to use this API.
Browsing and searching
The big news in Tracker 3.0 is decentralization. Each app can now manage its own private database! There’s no single “Tracker store” any longer.
Tracker 3.0 will index content from the filesystem to facilitate searching and browsing, as it does now. The filesystem miner will keep this in its own database, and Flatpak apps will access this database through a portal (currently in development).
Apps access this data using a TrackerSparqlConnection just like now, but when we create the connection we need to specify that we want to connect to the filesystem miner’s database.
Here’s a Python example of listing all the music files in the user’s ~/Music directory:
from gi.repository import Tracker

conn = Tracker.SparqlConnection.bus_new(
    "org.freedesktop.Tracker3.Miner.Files", None, None)

cursor = conn.query(
    'SELECT ?url { ?r a nmm:MusicPiece ; nie:url ?url }', None)

print("Found music files:\n")
while cursor.next():
    print(cursor.get_string(0)[0])
Running a full text search will be similar. Here’s how you’d look for “bananas” in every file in the user’s ~/Documents folder:
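A sketch of what that might look like, reusing the connection from above (fts:match is Tracker’s full-text-search predicate; restricting results to ~/Documents is left out here for brevity):

cursor = conn.query(
    'SELECT ?url { ?r a nfo:Document ; nie:url ?url ; fts:match "bananas" }', None)
while cursor.next():
    print(cursor.get_string(0)[0])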
If you are running inside a Flatpak sandbox then there will be a portal between you and the org.freedesktop.Tracker3.Miner.Files database. The read-only /.flatpak-info file inside the sandbox, which is created when building the Flatpak, declares which graphs your app can access. The xdg-tracker-portal will add that information to the SPARQL query using a Tracker-specific CONSTRAINT GRAPH syntax, and the database will enforce the constraint, ensuring that your app really does only see the graphs it has requested access to.
Storing your own data
Tracker can be used as a data store by applications. One principle behind the design of Tracker 1.x was that by using a centralized store and a common vocabulary, different apps could easily share data. For example, when you create an album in GNOME Photos, it’s stored in the Tracker database using the standard nfo:DataContainer class. Any other app, perhaps a file manager, or a photos app from a different platform, can show and edit albums stored in this way without having to know specifics about GNOME Photos. Playlists in GNOME Music and starred files in Nautilus are also stored this way.
This approach had some downsides. Having all data in a single database creates a single point of failure. It’s hard to backup the valuable user data without backing up the search and indexing data too – but since the index can be recreated from the filesystem, it’s a waste of resources to include that in a backup. Apps were also forced to share a single database schema which was maintained in the tracker.git repository.
In Tracker 3.0, each app creates a private database for storing its own data. It can use the ontology (database schema) from Tracker, or it can provide its own version. Here’s how a photos app written in Python could store photo albums:
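Something along these lines, as a sketch only, since the 3.0 API was still settling at the time of writing; the database path, album URN and title below are invented:

import os
from gi.repository import Gio, Tracker

# Open (or create) the app's own private Tracker database.
store = Gio.File.new_for_path(
    os.path.expanduser('~/.local/share/my-photos-app/db'))
conn = Tracker.SparqlConnection.new(
    Tracker.SparqlConnectionFlags.NONE, store,
    Tracker.sparql_get_ontology_nepomuk(), None)

# Store an album using the standard nfo:DataContainer class.
conn.update(
    'INSERT { <urn:example:album:summer-2020> a nfo:DataContainer ; '
    '         nie:title "Summer 2020" }',
    None)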
Now let’s insert a photo into this album. Remember that the user’s photos are indexed by the filesystem miner. We can use the SERVICE statement to connect the filesystem miner’s database to our app’s private database, like this:
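One way to do it, again just a sketch (the dbus: service URI scheme and the nie:isLogicalPartOf link are my assumptions here):

# Look up the photo's resource in the filesystem miner's database via SERVICE,
# then record its album membership in our own private database. Under Flatpak,
# the portal decides which of the miner's graphs this query can actually see.
cursor = conn.query(
    'SELECT ?photo { '
    '  SERVICE <dbus:org.freedesktop.Tracker3.Miner.Files> { '
    '    ?photo a nmm:Photo ; nie:url <file:///home/sam/Pictures/beach.jpg> '
    '  } '
    '}', None)
if cursor.next():
    photo = cursor.get_string(0)[0]
    conn.update(
        'INSERT { <%s> nie:isLogicalPartOf <urn:example:album:summer-2020> }' % photo,
        None)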
Notice again that the app has to request permission to access the Photos graph. If our example app is running in Flatpak, this will require a special permission.
It’s still possible for one app to share data with another, but it will require coordination at the app level. Using the example of photo albums, GNOME Photos can opt to make its database available to other apps. If a different app wants to see the user’s photo albums, they’ll need to connect to the org.gnome.Photos database over D-Bus. As usual, Flatpak apps would need permission to do this.
Is it a good time to port my app to Tracker 3.0?
It’s a good time to start porting your app. You will definitely be able to help us with testing and stabilising the library and the documentation if you start now.
Some functionality is no longer exposed in C libraries, due to libtracker-control and libtracker-miner becoming private. As far as we know libtracker-miner is unused outside Tracker, but some apps are currently using libtracker-control to display status updates for the Tracker daemons and to trigger indexing of removable devices. We have an open issue about improving the story for on-demand removable device indexing. For status monitoring you may use the underlying D-Bus signals, and I’m also hoping to make these more useful.
Ideally I’d like to add a new helper library for Tracker 3.0 which would conveniently wrap the high level features that apps use. My volunteer time is limited though. I can share ideas for this if you are looking for a way to contribute!
What about a hackfest?
At some point we need to finish the Tracker 3.0 work and make sure that apps that use Tracker are all ported and working. The best case is that we do this in time for the upcoming GNOME 3.38 release. We discussed holding a hackfest at some point between now and GNOME 3.38 to make sure things are settled; it now seems that an in-person hackfest won’t be feasible in light of the coronavirus pandemic, but a series of online meetings would be a good alternative. We can only wait and see!
Tracker’s database is now up to date with the latest SPARQL 1.1 standards, including the magical SERVICE statement that lets you combine results from multiple databases in a single query. Now we’re converting the database from a service into a library, and turning the previously monolithic architecture into something more flexible.
Carlos has already done most of this work and the code is pushed as #172 (tracker.git) and #136 (tracker-miners.git). At times it feels like we’re carving a big block of stone into a sculpture — just look at the diffstats on those merge requests.
Read merge request #172 for full details, but the highlights are that there’s no more tracker-store daemon, and the libtracker-sparql library which was previously only used for querying and inserting data can now be used to create and manage your own database. You can keep the database private, or you can expose it over D-Bus.
The code in tracker.git is now only about managing data. We may rename it to tracker-sparql in due course, or even to SPARQLite if this is okayed by the developers of SQLite. There’s perhaps a niche for a desktop-scale database that supports SPARQL queries, and it’s a niche that Tracker’s database fits in nicely.
All the code related to desktop indexing and search is now in tracker-miners.git. The tracker-miner-fs daemon will maintain the index in its own database, which you’ll be able to query by connecting over D-Bus just like you used to connect to tracker-store in Tracker 2.0. However, apps running inside Flatpak will not be able to talk directly to the tracker-miner-fs daemon — communication will go through a new portal that Carlos is currently working on, allowing us to implement per-app access controls to your data for the first time.
A Tracker 2.3.2 bugfix release is still pending too! This month Victor Gal solved an issue that was causing photo geolocation metadata to be ignored. Rasmus Thomsen also added Alpine Linux to our CI, and the GNOME translation teams have been hard at work too.
If you want to help out by testing, developing, documenting Tracker – get in touch on GNOME Discourse (use the ‘tracker’ tag) or irc.gnome.org #tracker.
Rasmus Thomsen has been testing on Alpine Linux, fixing one issue and finding several more. Alpine Linux uses musl libc instead of the more common GNU libc, which triggers bugs that we don’t usually see. Finding and fixing these issues could be a great learning experience for someone who wants to dig deep into the platform!
There’s an ongoing issue reported by many Ubuntu users which seems to be due to SQLite database corruption. SQLite is rather a black box to me, so I don’t know how or when we might get to the bottom of why this corruption is happening.
Ubuntu CI
We now test each commit on Ubuntu as well as Fedora. This is a nice step forward. It’s also triggering more intermittent failures in the CI — we’ve made huge progress in the last few years on bringing the CI up from zero, but there are some latent issues like these which we need to get rid of.
Tracker 3.0
Carlos has done more architectural work in the ‘master’ branch, working towards having a generic SPARQL store in tracker.git, and all GNOME/desktop/filesystem related code in tracker-miners.git.
As part of this, the tracker CLI tool is now split between tracker.git and tracker-miners.git (MR1, MR2).
We also moved the libtracker-control and libtracker-miner libraries into tracker-miners.git, and made the libtracker-control API private. As far as I know, the libtracker-control library is only being used by GNOME Photos to manage indexing of removable devices. We want to keep track of which apps need porting to 3.0, so please let me know if this is going to affect anything else.
New website
Tracker is famous enough that it merits a real website, not just an outdated set of wiki pages. So I made a real Tracker website, aiming to collect links to relevant user and developer documentation and to have a minimal overview and FAQ section. We can build and deploy this straight from the tracker.git repo, so whereas the wiki is easily forgotten, the new website lives in the same repo as the source code. The next step will be to merge this and then tidy up most of the old wiki pages.
I’ve continued looking at the developer experience of Tracker. Recently we modernized the README.md file (as several GNOME projects have done recently). I want the README to document a simple “build and test Tracker from git” workflow, and that led into work making it simpler to run Tracker from the build tree, and also a bunch of improvements to the test suite.
The design of Tracker has always meant that it’s a pain in the ass to build and test, because to do anything useful you need to have 3 different daemons running and talking to each other over D-Bus, reading and writing data in the same location, and communicating with the CLI or an app. We had a method for running Tracker from the build tree for use by automated tests, whose code was duplicated in tracker.git and tracker-miners.git, and then we had a separate script for developers to test things manually, but you still had to install Tracker to use that one. It was a bit of a mess.
The first thing I fixed was the code duplication. Now we have a Python module named trackertestutils. We install it, so we don’t need to duplicate code between tracker.git and tracker-miners.git any more. Thanks to Marco Trevisan we also install a pkgconfig file.
Then I added a ./run-uninstalled script to tracker-miners.git. The improvement in developer experience I think is huge. Now you can do this to try out the latest Tracker code:
git clone tracker-miners.git
cd tracker-miners && mkdir build && cd build
meson .. && ninja
./run-uninstalled --wait-for-miner=Files --wait-for-miner=Extract -- tracker index --file ~/Documents
./run-uninstalled -- tracker search "Hello"
The script is a small wrapper around trackertestutils, which takes care of spawning a private D-Bus daemon, collecting and filtering logs, and setting up the environment so that the Tracker cache is written to `/tmp/tracker-data`. (At the time of writing there are still some bugs, and ./run-uninstalled actually still requires you to install Tracker first.)
I also improved logging for Tracker’s functional-test suite. We’ve been running these tests in CI for a year now, but there have been some intermittent failures, which were hard to debug because log output from the tests was very messy. When you run a private D-Bus session, all kinds of daemons spawn and dump stuff to stdout. Now we set G_MESSAGES_PREFIXED in the environment, so the test harness can separate the messages that come from Tracker processes. It’s already allowed me to track down some of these annoying intermittent failures, and to increase the default log verbosity in CI.
Another neat thing about installing trackertestutils is that downstream projects can use it too. Rishi mentioned at GUADEC that gnome-photos has a test which starts the photos app and ends up displaying the actual photo collection of the user who is running the test. Automated tests should really be isolated from the real user data. And using trackertestutils, it’s now simple to do that: here’s a proof of concept for gnome-photos.
When I have some free time I like to be creative but sometimes I need a push of inspiration to take me in the right direction.
Interior designers and people who are about to get married like to create inspiration boards by gluing magazine cutouts to the wall.
‘Mood board for a Tuscan Style Interior’ by Design Folly on Flickr
I find a lot of inspiration online, so I want a digital equivalent. I looked for one, and I found various apps for iOS and Mac which act like digital inspiration boards, but I didn’t find anything I can use with GNOME. So I began planning an elaborate new GTK+ app, but then I remembered that I get tired of such projects before they actually become useful. In fact, there’s already a program that lets you manage a collection of images and text! It’s known as Files (Nautilus), and for me it only lacks the ability to store web links amongst the other content.
Then, I discovered that you can create .desktop files that point to web locations, the equivalent of .url files on Microsoft Windows. Would a folder full of URL links serve my needs? I think so!
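For the record, such a shortcut is just a tiny .desktop file using the Type=Link entry type from the Desktop Entry specification. Something like this (the name and URL are made up):

[Desktop Entry]
Type=Link
Name=GNOME
URL=https://www.gnome.org/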
Nautilus had some crufty code paths to deal with these shortcut files, which were removed in 2018. Firefox understands them directly, so if you set Firefox as the default application for the application/x-desktop file type then they work nicely: click on a shortcut and it opens in Firefox.
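Setting that default is a one-liner, assuming Firefox’s desktop file is named firefox.desktop on your system:

$ xdg-mime default firefox.desktop application/x-desktop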
There is no convenient way to create these .desktop files: dragging and dropping a tab from Epiphany will create a text file containing the URL, which is tantalisingly close to what I want, but the resulting file can’t be easily opened in a browser. So, I ended up writing a simple extension that adds a ‘Create web link…’ dialog to Nautilus, accessed from the right-click menu.
Now I can use Nautilus to easily manage collections of links and I can mix in (or link to) any local content easily too. Here’s me beginning my ‘inspiration board’ for recipes …
I became interested in the Tracker project in 2011. I was looking at media file scanning and was happy to discover an active project that was focused on the same thing. I wanted to contribute, but I found it very hard to test my changes; and since Tracker runs as a daemon I really didn’t want to introduce any crazy regressions.
In those days Tracker already had a set of tests written in Python that tested the Tracker daemons as a whole, but they were a bit unfinished and unreliable. I focused some spare-time effort on improving those. Surprisingly enough it’s taken eight years to get to the point where I’m happy with how they work.
The two biggest improvements parallel changes in many other GNOME projects. Last year Tracker stopped using GNU Autotools in favour of Meson, after a long incubation period. I probably don’t need to go into detail of how much better this is for developers. Also, we set up GitLab CI to automatically run the test suite, where previously developers and maintainers were required to run the test suite manually before merging anything. Together, these changes have made it about 100000% easier to review patches for Tracker, so if you were considering contributing code to the project I can safely say that there has never been a better time!
The Tracker project is now divided into two parts, the ‘core’ (tracker.git) and the ‘miners’ (tracker-miners.git). The core project contains the database and the application interface libraries, while the miners project contains the daemons that scan your filesystem and extract metadata from your interesting files.
Let’s look at what happens automatically when you submit a merge request on GNOME GitLab for the tracker-miners project:
The .gitlab-ci.yml file specifies a Docker image to be used for running tests. The Docker images are built automatically from this project and are based on Fedora.
The script in .gitlab-ci.yml clones the ‘master’ version of Tracker core.
The tracker and tracker-miners projects are configured and built using Meson. There is a special build option in tracker-miners that makes it include Tracker core as a Meson subproject, instead of building against the system-provided version. (It still depends on a few files from the host at the time of writing.)
The script starts a private D-Bus session using dbus-run-session, sets a fixed en_US.UTF8 locale, and runs the test suite for tracker-miners using meson test (there’s a rough local equivalent sketched just after this list).
Meson runs the tests that are defined in meson.build files. It tries to run them in parallel with one test per CPU core.
The libtracker-miners-common tests exercise some utility code, which is duplicated from libtracker-common in Tracker core.
The libtracker-extract tests exercise libtracker-extract, which is a private library with helper code for accessing file metadata. It mainly focuses on standard metadata formats like XMP and EXIF.
The functional-300-miner-basic-ops and functional-301-resource-removal tests check the operation of the tracker-miner-fs daemon, mostly by copying files in and out of a specific path and then waiting for the corresponding changes to the Tracker database to take effect.
The functional-310-fts-basic test tries some full-text search operations on a text file. There are a couple of other FTS tests too.
The functional/extract/* tests effectively run tracker extract on a set of real media files, and test that the expected metadata is extracted. The tests are defined by JSON files such as this one.
The functional-500-writeback tests exercise the tracker-writeback daemon (which allows updating things like MP3 tags following changes in the Tracker database). These tests are not particularly thorough. The writeback feature of Tracker is not widely used, to my knowledge.
Finally, the functional-600-* tests simulate the behaviour of some MeeGo phone applications. Yes, that’s how old this code is 🙂
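For reference, reproducing the build and test steps locally boils down to roughly this (a sketch, assuming a Meson build directory called build):

$ meson build && ninja -C build
$ env LANG=en_US.UTF-8 dbus-run-session -- meson test -C build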
There is plenty of room for more testing of course, but this list is very comprehensive when compared to the total lack of automated testing that the project had just a year ago!
The way we organize content on computers hasn’t really evolved since the arrival of navigational file managers in the late 1980s. We have been organizing files into directories for decades. Perhaps the biggest change anyone has managed since then is that we now call directories “folders” instead, and that we obscure the full directory tree, pointing users instead towards certain entry points such as the “Music”, “Downloads” and “Videos” folders inside their home directory.
It’s 2018 already. There must be a better way to find content than to grope around in a partially obscured tree of files and folders?
GNOME has been innovating in this area for a while, and one of the results is the Tracker search and indexing tool, which creates a database of all the content it finds on the user’s computer and allows you to run arbitrary queries over it. In principle this is quite cool as you can, for example, search for all photos taken within a given time period, all songs by a specific artist, all videos above a certain resolution ordered by title, or whatever else you can think of (where the necessary metadata is available). However, the caveat is that for this to be at all useful you currently have to enjoy writing SPARQL queries on the commandline: Tracker itself is a “plumbing” component, and the only interface it provides is the tracker commandline tool.
There is ongoing work on content-specific user interfaces that can work with Tracker to access local content, so for photos for example you can use GNOME Photos to view and organize your whole photo collection. However, there isn’t a content-agnostic tool available that might let you view and organize all the content on your computer… other than Nautilus which is limited to files and folders.
I’m interested in organizing content using tags, which are nothing but freeform textual category labels. On the web, tags are a very common way of categorizing content. (The name hashtags is probably more widely understood than tags among web users, but hashtag has connotations of social media and sharing which don’t necessarily apply when talking about desktop content, so I will call them tags here.) Despite their popularity on the web, desktop support is sparse: Tagspaces seems to be the only option, and its free edition is very limited in what it can do. Within GNOME, we have had support for storing tags in the Tracker database for many years, but I don’t know of any applications that allow viewing or editing file tags.
Around the time of GUADEC 2017 I read Alexandru’s blog post about tags in Nautilus, in which he announced that Nautilus wasn’t going to get support for organizing files using tags because it would conflict too much with the existing organization principle in Nautilus of putting files into folders. I agree with the logic there, but it leaves open a question: when will GNOME get an interface that allows me to organize files using tags?
As it happened I had a bit of free time after GUADEC 2017 was finished and I started sketching out an application designed specifically for organizing content using tags.
The result so far looks like this:
This is really just a prototype, there are lots more features I’d like to add or improve too if I get the time, but it does support the basic use case of “add tags to my files” at this point and so I’ve started a stable release branch. The app is named Tagcloud and you can get it as a Flatpak .bundle of the 0.2.1 release from here. Note that it won’t autoupdate as this isn’t a proper Flatpak repo, just a bundle file.
Tagcloud is written using Python and PyGObject, and of course GTK+. I encountered several G-I bindings issues during development which mean that Tagcloud currently requires very new versions of GLib and GTK+ but the good news is that by using the Flatpak bundle you don’t need to care about any of that. Tagcloud uses Tracker internally and I’ve been thinking a lot about how to make Tracker work better for application developers; these thoughts are quite lengthy and not really complete yet so I will save them for a separate blog post.
One of the key principles of Tagcloud is that it should recognize any type of content, so for example you can group together photos, documents and videos related to a specific project. In future I would also like to see GNOME’s content-specific applications such as Photos and Documents recognize tags; this shouldn’t require too much plumbing work since everything seems to be tending towards using Tracker as a backend, but it would of course affect the user interfaces of those apps.
I haven’t yet mentioned in this blog that a couple of months ago I quit my job at Codethink, and right now I’m training to be a language teacher. So I imagine that I will have very little time available to work on Tagcloud for a while, but please do send issue reports and patches to https://gitlab.com/samthursfield/tagcloud if you like. I will be at GUADEC 2018 and hopefully we can have lots of exciting discussions about applying tags to things. And for the future … while I would like Tagcloud to become a fully fledged application, I will also be happy if it serves simply as a prototype and as a way of driving improvements in Tracker which will then benefit all of GNOME’s content apps.
A long time ago I started looking at rewriting Tracker’s build system using Meson. Today those build instructions landed in the master branch in Git!
Meson is becoming pretty popular now so I probably don’t need to explain why it’s such a big improvement over Autotools. Here are some key benefits:
It takes 2m37s for me to build from a clean Git tree with Autotools, but only 1m08s with Meson.
There are 2573 lines of meson.build files, vs. 5013 lines of Makefile.am, a 2898 line configure.ac file, and various other bits of debris needed for Autotools
Only compile warnings are written to stdout by default, so they’re easy to spot
Out of tree builds actually work
Tracker is quite a challenging project to build, and I hit a number of issues in Meson along the way plus a few traps for the unwary.
We have a huge number of external dependencies — Meson handles this pretty neatly, although autodetection of backends requires a bit of boilerplate.
There’s a complex mix of Vala and C code in Tracker, including some libraries that are written in both. The Meson developers have put a lot of work into supporting Vala, which is much appreciated considering it’s a fairly niche language and in fact the only major problem we have left is something that’s just as broken with Autotools: failing to generate a single introspection repo for a combined C + Vala library
Tracker also has a bunch of interdependent libraries. This caused continual problems because Meson does very little deduplication in the commandlines it generates, and so I’d get combinatorial explosions, hitting fairly ridiculous errors like commandline too long (the limit is 262KB) or too many open files inside the ld process. This is a known issue. For now I work around it by manually specifying some dependencies for individual targets instead of relying on them getting pulled in as transitive dependencies of a declare_dependency target.
A related issue was that if the same .vapi file ends up on the valac commandline more than once it would trigger an error. This required some trickery to avoid. New versions of Meson work around this issue anyway.
One pretty annoying issue is that generated files in the source tree cause Meson builds to fail. Out of tree builds seem to not work with our Autotools build system — something to do with the Vala integration — with the result that you need to make clean before running a Meson build even if the Meson build is in a separate build dir. If you see errors about conflicting types or duplicate definitions, that’s probably the issue. While developing the Meson build instructions I had a related problem of forgetting about certain files that needed to be generated because the Autotools build system had already generated them. Be careful!
Meson users need to be aware that the rpath is not set automatically for you. If you previously used Libtool you probably didn’t need to care what an rpath was, but with Meson you have to manually set install_rpath for every program that depends on a library that you have installed into a non-standard location (such as a subdirectory of /usr/lib). I think rpaths are a bit of a hack anyway — if you want relocatable binary packages you need to avoid them — so I like that Meson is bringing this implementation detail to the surface.
There are a few other small issues: for example we have a Gtk-Doc build that depends on the output of a program, which Meson’s gtk-doc module currently doesn’t handle so we have to rebuild that documentation on every build as a workaround. There are also some workarounds in the current Tracker Meson build instructions that are no longer needed — for example installing generated Vala headers used to require a custom install script, but now it’s supported more cleanly.
Tracker’s Meson build rules aren’t quite ready for prime time: some tests fail when run under Meson that pass when run under Autotools, and we have to work out how best to create release tarballs. But it’s pretty close!
All in all this took a lot longer to achieve than I originally hoped (about 9 months of part-time effort), but in the process I’ve found some bugs in both Tracker and Meson, fixed a few of them, and hopefully made a small improvement to the long process of turning GNU/Linux users into GNU/Linux developers.
Meson has come a long way in that time and I’m optimistic for its future. It’s a difficult job to design and implement a new general purpose build system (plus project configuration tool, test runner, test infrastructure, documentation, etc. etc), and the Meson project have done so in 5 years without any large corporate backing that I know of. Maintaining open source projects is often hard and thankless. Ten thumbs up to the Meson team!
Tracker 1.9.1 was released last month, and it comes with some work we did to improve the various extract modules (the code which looks at files on disk and extracts what we think is useful information). The extract modules are no longer hardcoded to generate SPARQL commands, instead they now use the new TrackerResource API which is a simple way of describing resources programmatically.
TrackerResource is hopefully fairly self-explanatory; you can read the unstable documentation for it here. Basically a TrackerResource object represents a resource (for example a Video) and the API lets you describe properties of that resource (such as its title, who directed it, etc.). When you’re done you can serialize it to some kind of interchange format. The RDF working committee have invented many of these over the years (most of which are absurd); Tracker generally uses Turtle, which is both efficient and human-friendly, so TrackerResource lets you serialize information about one or more resources either to Turtle or to a series of SPARQL commands that you can use to update a database (such as the Tracker-store!) with the new information.
What’s the point? One highlight of this work was removing 2,000 lines of code from the extract modules, and making them (in my opinion) a whole lot more readable and maintainable in the process … but more interesting to me is that the code in the extract modules should be useful to a lot more people. Scanning files and then outputting a series of SPARQL INSERT statements is really only useful to folk who want to use a SPARQL database to track file metadata, and most people (at least in the desktop & embedded world) are not asking for that at all. Data serialized as Turtle can be easily parsed by other programs, inserted into tables in an SQL database, and/or converted into any other serialization format you want; the most interesting one to me is JSON-LD, which gets you the convenience of JSON and (optionally) the rigour of Linked Data too. In fact I’m hoping TrackerResource will be able to serialize to JSON-LD directly in future, the work is partly done already.
I’ve often felt like there’s a lot of solid, well-tested code sat in tracker.git that would be useful to a lot more people if it wasn’t quite so Tracker-specific. Hopefully this will help to “open up” all that code that parses different file formats. I should note that there’s nothing really Tracker-specific in the TrackerResource code; it could be moved out into a generic “RDF for GObject folk” library if there was any demand for such a thing. Likewise, the Tracker extract modules could be moved into their own “Generic metadata extraction” library if it looked like the extra maintenance burden would be worthwhile (splitting up Tracker has been under discussion for some time).
Unless someone speaks up, the TrackerSparqlBuilder API will be deprecated in due course in favour of TrackerResource. The TrackerSparqlBuilder API was never flexible enough to build the kind of advanced queries that SPARQL is really designed for, it was basically created so we could create correct INSERT statements in the extractors, and it’s no longer needed there.
This is quite a big change internally, although functionality-wise nothing should have changed, so please test out Tracker 1.9.1 if you can and report any issues you see to GNOME Bugzilla.
Example usage of TrackerResource
Object Resource Mapping (the resource-centric equivalent of Object Relational Mapping in the SQL database world) is not a new idea… Adrien Bustany had a go at an ORM for Tracker many years ago, named Hormiga. The Hormiga approach was to generate code to deal with each new type of resource, which is fiddly in practice, although it does mean validation code can be generated automatically.
The TrackerResource approach was inspired by the Python RDFLib library which has a similar rdflib.Resource class. It’s simple and effective, and suits the “open world”, “there are no absolute truths except that everything is a resource” philosophy of RDF better.
To describe something with TrackerResource, you might do this:
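A minimal sketch in Python. The resource is imaginary, and the full namespace URLs are written out only to make the point, so treat them as illustrative rather than authoritative:

from gi.repository import Tracker

video = Tracker.Resource.new('file:///home/sam/cat-on-stairs.avi')
video.set_uri('http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
              'http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#Video')
video.set_string('http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title',
                 'My cat walking downstairs')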
Properties are themselves resources so they have URIs, which you can follow to find a description of said property… but it’s quite painful reading and writing long URIs everywhere, so you should of course take advantage of Compact URIs and do this instead:
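The same sketch using compact URIs:

video = Tracker.Resource.new('file:///home/sam/cat-on-stairs.avi')
video.set_uri('rdf:type', 'nfo:Video')
video.set_string('nie:title', 'My cat walking downstairs')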
You can now serialize this information to Turtle or to SPARQL, ready to send it or save it somewhere. For it to make sense to the receiver, the compact URI prefixes need to be kept with the data. The TrackerNamespaceManager class tracks the mappings between prefixes and the corresponding full URIs, and you can call tracker_namespace_manager_get_default() to receive a TrackerNamespaceManager that already knows about Tracker’s built-in prefixes. So to generate some Turtle you’d do this:
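Something like this should do it (a sketch, assuming the print_turtle API that TrackerResource had at the time):

namespaces = Tracker.NamespaceManager.get_default()
print(video.print_turtle(namespaces))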
If you want to add a 2nd value to a property, rather than removing existing values, you can do that too. The number of values that a property can have is called its “cardinality” for some reason. If a property has a cardinality of more than 1, you can add values, e.g. maybe you want to associate some new artwork you found:
INSERT {
<file:///home/sam/cat-on-stairs.avi> nie:title "My cat walking downstairs" .
}
There’s no DELETE statement first because we are adding a value to the property, not setting the property.
By the way, it’s up to the Tracker Store to make sure that the data you are inserting is valid. TrackerResource doesn’t have any way right now to check that the data you give it makes sense according to the ontologies (database schemas). So you can give the video 15 different titles, but if you try to execute the INSERT statement that results then the Tracker Store will raise an error, because the nie:title property has a cardinality of 1 — it can only have 1 value.
Right now TrackerResource doesn’t support querying resources. It’s never going to gain any kind of advanced querying features that try to hide SPARQL from developers; the best way to write advanced data queries is using a query language, and that’s really what SPARQL is best at. It is possible we could add a way to read information about known resources in the Tracker Store into TrackerResource objects, and if that sounds useful to you please get hacking 🙂
Example usage of the new Tracker extractors
It’s always been possible to run `tracker-extract` on a single file, by running `/usr/libexec/tracker-extract --file` explicitly. Try it:
$ /usr/libexec/tracker-extract --file ~/Downloads/Best\ Coast\ -\ The\ Only\ Place.mp3 --verbosity=0
Locale 'TRACKER_LOCALE_LANGUAGE' was set to 'en_GB.utf8'
Locale 'TRACKER_LOCALE_TIME' was set to 'en_GB.utf8'
Locale 'TRACKER_LOCALE_COLLATE' was set to 'en_GB.utf8'
Locale 'TRACKER_LOCALE_NUMERIC' was set to 'en_GB.utf8'
Locale 'TRACKER_LOCALE_MONETARY' was set to 'en_GB.utf8'
Setting priority nice level to 19
Loading extractor rules... (/usr/share/tracker/extract-rules)
Loaded rule '10-abw.rule'
Loaded rule '10-bmp.rule'
Loaded rule '10-comics.rule'
Loaded rule '10-dvi.rule'
Loaded rule '10-ebooks.rule'
Loaded rule '10-epub.rule'
Loaded rule '10-flac.rule'
Loaded rule '10-gif.rule'
Loaded rule '10-html.rule'
Loaded rule '10-ico.rule'
Loaded rule '10-jpeg.rule'
Loaded rule '10-msoffice.rule'
Loaded rule '10-oasis.rule'
Loaded rule '10-pdf.rule'
Loaded rule '10-png.rule'
Loaded rule '10-ps.rule'
Loaded rule '10-svg.rule'
Loaded rule '10-tiff.rule'
Loaded rule '10-vorbis.rule'
Loaded rule '10-xmp.rule'
Loaded rule '10-xps.rule'
Loaded rule '11-iso.rule'
Loaded rule '11-msoffice-xml.rule'
Loaded rule '15-gstreamer-guess.rule'
Loaded rule '15-playlist.rule'
Loaded rule '15-source-code.rule'
Loaded rule '90-gstreamer-audio-generic.rule'
Loaded rule '90-gstreamer-image-generic.rule'
Loaded rule '90-gstreamer-video-generic.rule'
Loaded rule '90-text-generic.rule'
Extractor rules loaded
Initializing media art processing requirements...
Found '37 GB Volume' mounted on path '/media/TV'
Found mount with volume and drive which can be mounted: Assuming it's removable, if wrong report a bug!
Adding mount point with UUID: '7EFAE0646A3F8E6E', removable: yes, optical: no, path: '/media/TV'
Found 'Summer 2012' mounted on path '/media/Music'
Found mount with volume and drive which can be mounted: Assuming it's removable, if wrong report a bug!
Adding mount point with UUID: 'c94b153c-754a-48d4-af67-698bb8972ee2', removable: yes, optical: no, path: '/media/Music'
MIME type guessed as 'audio/mpeg' (from GIO)
Using /usr/lib64/tracker-1.0/extract-modules/libextract-gstreamer.so...
GStreamer backend in use:
Discoverer/GUPnP-DLNA
Retrieving geolocation metadata...
Processing media art: artist:'Best Coast', title:'The Only Place', type:'album', uri:'file:///home/sam/Downloads/Best%20Coast%20-%20The%20Only%20Place.mp3', flags:0x00000000
Album art already exists for uri:'file:///home/sam/Downloads/Best%20Coast%20-%20The%20Only%20Place.mp3' as '/home/sam/.cache/media-art/album-165d0788d1c055902e028da9ea6db92a-b93cfc4ae665a6ebccfb820445adec56.jpeg'
Done (14 objects added)
SPARQL pre-update:
--
INSERT {
<urn:artist:Best%20Coast> a nmm:Artist ;
nmm:artistName "Best Coast" .
}
INSERT {
<urn:album:The%20Only%20Place:Best%20Coast> a nmm:MusicAlbum ;
nmm:albumTitle "The Only Place" ;
nmm:albumArtist <urn:artist:Best%20Coast> .
}
DELETE {
<urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> nmm:setNumber ?unknown .
}
WHERE {
<urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> nmm:setNumber ?unknown .
}
DELETE {
<urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> nmm:albumDiscAlbum ?unknown .
}
WHERE {
<urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> nmm:albumDiscAlbum ?unknown . }
INSERT {
<urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> a nmm:MusicAlbumDisc ;
nmm:setNumber 1 ;
nmm:albumDiscAlbum <urn:album:The%20Only%20Place:Best%20Coast> .
}
--
SPARQL item:
--
a nfo:Audio , nmm:MusicPiece ;
nie:title "The Only Place" ;
nie:comment "Free download from http://www.last.fm/music/Best+Coast and http://MP3.com" ;
nmm:trackNumber 1 ;
nfo:codec "MPEG-1 Layer 3 (MP3)" ;
nfo:gain 0 ;
nfo:peakGain 0 ;
nmm:performer <urn:artist:Best%20Coast> ;
nmm:musicAlbum <urn:album:The%20Only%20Place:Best%20Coast> ;
nmm:musicAlbumDisc <urn:album-disc:The%20Only%20Place:Best%20Coast:Disc1> ;
nfo:channels 2 ;
nfo:sampleRate 44100 ;
nfo:duration 164 .
--
SPARQL where clause:
--
--
SPARQL post-update:
--
--
All the info is there, plus way more logging output that you didn’t ask for, but it’s really hard to make use of that and this feature was really only intended for testing.
In the Tracker 1.9.1 unstable release things have improved. The `tracker` commandline tool has an `extract` command which you can use to see the metadata of any file that has a corresponding extract module, and the output is readable both for other programs and for humans (at least, for developers):
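For example, something like this prints the metadata for the file we looked at earlier (output omitted here):

$ tracker extract ~/Downloads/Best\ Coast\ -\ The\ Only\ Place.mp3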
Turtle is still a bit of an esoteric format; I think this will be more fun when it can output JSON, but you can already pipe this into a tool like `rapper` or `rdfpipe` and convert it to whatever format you like.