Blog about what you do!

Am I the first to blog from GUADEC 2019? It has been a great conference: huge respect to the organization team for volunteering significant time and energy to make it all run smoothly.

The most interesting thing at GUADEC is talking to community members old and new. I discovered that I don’t know much about what people are doing in GNOME. I discovered Antonio is doing user support / bug triage and more in Nautilus. I discovered that Bastian is posting GNOME-related questions and answers on StackOverflow. I discovered Britt is promoting us on Twitter and moderating discussions on Reddit. I discovered Felipe is starting to do direct user support for Boxes. I wouldn’t know any of this if I hadn’t been to GUADEC.

So here’s my plea — if you contribute to GNOME, please blog about it! If everyone reading this wrote just one blog post a year… I’d have a much better idea of what you’re all doing!

Don’t forget: Planet GNOME is not only for announcing cool new projects and features – it’s “a window into the world, work and lives of GNOME hackers and contributors.” Blog about anything GNOME-related, and be yourself — we’re not a corporation; we’re an underground network with a global, diverse, free-thinking membership, and that’s our strength.

Remember that there’s much more to GNOME than software development — read this long list of skillsets that you’re probably using. Write about translations, user support, testing, documentation, packaging, outreach, foundation work, event organization, bug triage, product management, release management, design, infrastructure operations. Write about why you enjoy contributing to GNOME, write about why it’s important to you. Write about what you did yesterday, or what you did last month. Write about your friends in GNOME. Make some graphs about your project to show how much work you do. Write short posts, write them quickly. Don’t worry about minor errors — it’s a blog, not a magazine article. Don’t be scared that readers won’t be interested — we are! We’re a distributed team and we need to keep each other posted about what we’re doing. Show links, screenshots, discussions, photos, graphs, anything. Don’t write reports, write stories.

If you contribute to GNOME but don’t have a blog… please start one! Write some nice posts about what you do. Become a Foundation member if you haven’t already*, and ask to join Planet GNOME.

And even if you forget all that, remember this: positive feedback for contributions encourages more contributions. Writing a blog post, like any other form of contribution, can sometimes feel like shouting into an abyss. If you read an interesting post, leave a positive comment & thank the author for taking the time to write it.

* The people using and reviewing your contributions will be happy to vouch for you, don’t worry about that!


Twitter without Infinite Scroll

I like reading stuff on twitter.com because a lot of interesting people write things there which they don’t write anywhere else.

But Twitter is designed to be addictive, and a key mechanism they use is the “infinite scroll” design. Infinite scroll has been called the Web’s slot machine because of the way it exploits our minds to make us keep reading. It’s an unethical design.

In an essay entitled “If the internet is addictive, why don’t we regulate it?”, the writer Michael Schulson says:

… infinite scroll has no clear benefit for users. It exists almost entirely to circumvent self-control.

Hopefully Twitter will one day consider the ethics of their design. Until then, I made a Firefox extension to remove the infinite scroll feature and replace it with a ‘Load older tweets’ link at the bottom of the page, like this:

Screenshot: the ‘Load older tweets’ link at the bottom of the timeline

The Firefox extension is called Twitter Without Infinite Scroll. It works by injecting some JavaScript code into the Twitter website which disconnects the ‘uiNearTheBottom’ event that would otherwise automatically fetch new data.

Quoting Michael Schulson’s article again:

Giving users a chance to pause and make a choice at the end of each discrete page or session tips the balance of power back in the individual’s direction.

So, if you are a Twitter user, enjoy your new-found power!


Tools I like for creating web apps

I used to dislike creating websites. CSS confused me and JavaScript annoyed me.

In the last year I’ve grown to like web development again! The developer tools available in the web browsers of 2019 are incredible to work with. The desktop world is catching up, but the browser world is ahead. I rarely have to write CSS at all. JavaScript has gained a lot of the features that it was inexplicably missing.

Here’s a list of the technologies I’ve recently used and liked.

First, Bootstrap. It’s really a set of CSS classes which turn HTML into something that “works out of the box” for creating decently styled webapps. To me it feels like a well-designed widget toolkit, a kind of web counterpart to GTK. Once you know what Bootstrap looks like, you realize that everyone else is already using it. Thanks to Bootstrap, I don’t really need to understand CSS at all anymore. Once you get bored of the default theme, you can try some others.

Then, jQuery. I guess everyone in the world uses jQuery. It provides powerful methods to access JavaScript functionality that is otherwise horrible to use. One of its main features is the ability to select elements from an HTML document using CSS selectors. Normally jQuery provides a function named $, so for example you could get the text of every paragraph in a document like this: $('p').text(). Open up your browser’s inspector and try it now! Now, try to do the same thing without jQuery — you’ll need at least 6 times more code.1

After that, Riot.js. Riot is a UI library which lets you create a web page using building blocks which they call custom tags. Each custom tag is a snippet of HTML. You can attach JavaScript properties and methods to a custom tag as well, and you can refer to them in the snippet of HTML, giving you a powerful client-side templating framework.

There are lots of “frameworks” which provide similar functionality to Riot.js. I find frameworks a bit overwhelming, and I’m suspicious of tools like create-react-app that need to generate code for me before I can even get started. I like that Riot can run completely in the browser without any special tooling required, and that it has one, specific purpose. Riot isn’t perfect; in particular I find the documentation quite hard to understand at times, but so far I’m enjoying using it.

Finally, lunr.js. Lunr provides a powerful full-text search engine, implemented completely as JavaScript that runs in your users’ web browsers. “Isn’t that a terrible idea?” you think. For large data sets, Lunr is not at all appropriate (you might consider its larger sibling Solr). For a small webapp or prototype, Lunr can work well and can save you from having to write and run a more complex backend service.

If, like I did, you think web development is super annoying due to bad tooling, give it another try! It’s (kind of) fun now!

1. Here’s how it looks without jQuery: Array.from(document.getElementsByTagName('p'), function(e) { return e.textContent; })


The Lesson Planalyzer

I’ve now been working as a teacher for 8 months. There are a lot of things I like about the job. One thing I like is that every day brings a new deadline. That sounds bad, right? It’s not: one day I prepare a class, the next day I deliver the class one or more times, and I get instant feedback on it right there and then from the students. I’ve seen enough of the software industry, and the music industry, to know that such a quick feedback loop is a real privilege!

Creating a lesson plan can be a slow and sometimes frustrating process, but the more plans I write the more I can draw on things I’ve done before. I’ve planned and delivered over 175 different lessons already. It’s sometimes hard to know if I’m repeating myself or not, or if I could be reusing an activity from a past lesson, so I’ve been looking for easy ways to look back at all my old lesson plans.

Search

GNOME’s Tracker search engine provides a good starting point for searching a set of lesson plans: I can put the plans in my ~/Documents folder, open the folder in Nautilus, and then type a term like “present perfect” into the search bar.

Screenshot of Nautilus showing search results

The results aren’t as helpful as they could be, though. I can only see a short snippet of the text in each document, when I really need to see the whole paragraph for the result to be directly useful. Also, the search returns anything where the words ‘present’ and ‘perfect’ appear, so we could be talking about tenses, or birthdays, or presentation skills. I wanted a better approach.

Reading .docx files

My lesson plans have a fairly regular structure. An all-purpose search tool doesn’t know anything about my personal approach to writing lesson plans, though. I decided to try writing my own tool to extract more structured information from the documents. The plans are in .docx format1, which is remarkably easy to parse — you just need Python’s ‘zipfile’ and ‘xml’ modules, and some guesswork to figure out what the XML elements mean. I was surprised not to find a Python library that already did this for me, but in the end I wrote a very basic .docx helper module, and I used this to create a tool that read my existing lesson plans and dumped the data as a JSON document.
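Here is a minimal sketch of the approach (not the actual tool): it pulls the paragraph text out of a .docx using only the standard library. The file name is a made-up example, and real documents may need more care with tables and other structures.

```python
# Minimal sketch: extract paragraph text from a .docx file.
# A .docx is a ZIP archive; the main text lives in word/document.xml,
# where <w:p> elements are paragraphs and <w:t> elements hold the text runs.
import zipfile
import xml.etree.ElementTree as ET

W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

def docx_paragraphs(path):
    with zipfile.ZipFile(path) as docx:
        document = ET.fromstring(docx.read('word/document.xml'))
    for para in document.iter(W + 'p'):
        text = ''.join(t.text or '' for t in para.iter(W + 't'))
        if text:
            yield text

# Hypothetical file name, for illustration only.
for paragraph in docx_paragraphs('lesson-plan.docx'):
    print(paragraph)
```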

It works reliably! In a few cases I chose to update documents rather than add code to the tool to deal with formatting inconsistencies. Also, the tool currently throws away all formatting information, but I barely notice.

Web and desktop apps

From there, of course, things got out of control and I started writing a simple web application to display and search the lesson plans. Two months of sporadic effort later, and I just made a prototype release of The Lesson Planalyzer. It remains to be seen how useful it is for anyone, including me, but it’s very satisfying to have gone from an idea to a prototype application in such a short time. Here’s an ugly screenshot, which displays a couple of example lesson plans that I found online.

The user interface is HTML5, made using Bootstrap and a couple of other cool JavaScript libraries (which I might mention in a separate blog post). I’ve wrapped that up in a basic GTK application, which runs a tiny HTTP server and uses a WebKitWebView to display its output. The desktop application has a couple of features that can’t be implemented inside a browser: one is the ability to open plan documents directly in LibreOffice, and the other is a dedicated entry in the alt+tab switcher.
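For the curious, the wrapper boils down to something like the following sketch, using PyGObject and WebKit2GTK. The port, window title and served directory are placeholders, not the real application’s code.

```python
# Hypothetical sketch of the GTK wrapper: serve the web app from a tiny
# local HTTP server, then point a WebKitWebView at it.
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')
from gi.repository import Gtk, WebKit2

PORT = 8707  # placeholder port

# Serve the current directory (where the HTML/JS lives) in a background thread.
server = HTTPServer(('127.0.0.1', PORT), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

window = Gtk.Window(title='Lesson Planalyzer')
webview = WebKit2.WebView()
webview.load_uri('http://127.0.0.1:%d/' % PORT)
window.add(webview)
window.set_default_size(1000, 700)
window.connect('destroy', Gtk.main_quit)
window.show_all()
Gtk.main()
```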

If you’re curious, you can see the source at https://gitlab.com/samthursfield/planalyzer/. Let me know if you think it might be useful for you!

1. I need to be able to print the documents on computers which don’t have LibreOffice available, so they are all in .docx format.


Inspire me, Nautilus!

When I have some free time I like to be creative but sometimes I need a push of inspiration to take me in the right direction.

Interior designers and people who are about to get married like to create inspiration boards by gluing magazine cutouts to the wall.


‘Mood board for a Tuscan Style Interior’ by Design Folly on Flickr

I find a lot of inspiration online, so I want a digital equivalent. I looked for one, and I found various apps for iOS and Mac which act like digital inspiration boards, but I didn’t find anything I can use with GNOME. So I began planning an elaborate new GTK+ app, but then I remembered that I get tired of such projects before they actually become useful. In fact, there’s already a program that lets you manage a collection of images and text! It’s known as Files (Nautilus), and for me it only lacks the ability to store web links amongst the other content.

Then, I discovered that you can create .desktop files that point to web locations, the equivalent of .url files on Microsoft Windows. Would a folder full of URL links serve my needs? I think so!
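Such a link file is only a few lines of text. Here is a made-up example; the Type=Link entry plus a URL key is what makes it a web shortcut rather than an application launcher:

```
[Desktop Entry]
Type=Link
Name=GNOME
URL=https://www.gnome.org/
```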

Nautilus had some crufty code paths to deal with these shortcut files, which were removed in 2018. Firefox understands them directly, so if you set Firefox as the default application for the application/x-desktop file type then they work nicely: click on a shortcut and it opens in Firefox.

There is no convenient way to create these .desktop files: dragging and dropping a tab from Epiphany will create a text file containing the URL, which is tantalisingly close to what I want, but the resulting file can’t be easily opened in a browser. So, I ended up writing a simple extension that adds a ‘Create web link…’ dialog to Nautilus, accessed from the right-click menu.
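Here is a rough sketch of the mechanism, not the actual extension code: the class and item names are hypothetical, the real extension shows a dialog rather than writing a hard-coded file, and the MenuProvider signature shown is the one nautilus-python used around the GNOME 3.x releases.

```python
# Hypothetical sketch of a nautilus-python extension that adds a background
# right-click item which writes out a Type=Link .desktop file.
import os

import gi
gi.require_version('Nautilus', '3.0')
from gi.repository import GObject, Nautilus

class WebLinkExtension(GObject.GObject, Nautilus.MenuProvider):
    def get_background_items(self, window, folder):
        item = Nautilus.MenuItem(name='WebLinkExtension::create_web_link',
                                 label='Create web link…')
        item.connect('activate', self.create_web_link, folder)
        return [item]

    def create_web_link(self, menu_item, folder):
        # The real extension asks for a name and URL in a dialog;
        # here they are hard-coded for brevity.
        path = os.path.join(folder.get_location().get_path(), 'GNOME.desktop')
        with open(path, 'w') as f:
            f.write('[Desktop Entry]\n'
                    'Type=Link\n'
                    'Name=GNOME\n'
                    'URL=https://www.gnome.org/\n')
```

Dropping a file like this into ~/.local/share/nautilus-python/extensions/ and restarting Nautilus should be enough for the menu item to appear.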

Now I can use Nautilus to easily manage collections of links and I can mix in (or link to) any local content easily too. Here’s me beginning my ‘inspiration board’ for recipes …

Screenshot: the beginnings of an ‘inspiration board’ of recipe links in Nautilus



Paying money for things

Sometimes it’s hard to make money from software. How do you make money from something that can be copied infinitely?

Right now there are 3 software tools that I pay for. Each one is supplied by a small company, and each one charges a monthly or annual fee. I prefer software with this business model because it creates an incentive for careful, ongoing maintenance and improvement. The alternative (pay a large fee, once) encourages a model that is more like “add many new features, sell the new version and then move onto something else”.

The 3 tools are:

  1. Feedbin, which is a tool that collects new content from many different blogs and shows them all in a single interface. This is done with a standard called RSS. The tool is a pleasure to use, and best of all, it’s Free Software released under a permissive license.
  2. Pinboard, a bookmarking and archival tool. The interface doesn’t spark joy and the search tool leaves a lot to be desired too. However, Pinboard carefully archives a copy of every single website that I bookmark, just at the time that I bookmark it. Since the Web is changing all the time and interesting content comes and goes, I find this very valuable. I don’t know if I’ll actually use this archive for much, as I don’t particularly enjoy writing articles, but I use the existence of the archive as a way to convince myself to close browser tabs.
  3. Checkvist, a “to do list” tool that supports nesting items, filtering by tags, styling with Markdown, and keyboard-only operation. I use this not as a to-do list but as a way of categorising activities and resources that I use when teaching. To be honest, the “free” tier of this tool is generous enough that I don’t really need to pay, but I like to support the project.

Music can also be copied infinitely, and historically I’ve not been keen to buy it because I didn’t like the very shady operations of many record companies. Now I use Bandcamp, which has an incredible library of music with rich, manually curated recommendations, and a clear, sustainable business model.

What digital goods do you pay for on a regular basis?



How Tracker is tested in 2019

I became interested in the Tracker project in 2011. I was looking at media file scanning and was happy to discover an active project that was focused on the same thing. I wanted to contribute, but I found it very hard to test my changes; and since Tracker runs as a daemon I really didn’t want to introduce any crazy regressions.

In those days Tracker already had a set of tests written in Python that tested the Tracker daemons as a whole, but they were a bit unfinished and unreliable. I focused some spare-time effort on improving those. Surprisingly enough, it’s taken eight years to get to the point where I’m happy with how they work.

The two biggest improvements parallel changes in many other GNOME projects. Last year Tracker stopped using GNU Autotools in favour of Meson, after a long incubation period. I probably don’t need to go into detail of how much better this is for developers. Also, we set up GitLab CI to automatically run the test suite, where previously developers and maintainers were required to run the test suite manually before merging anything. Together, these changes have made it about 100000% easier to review patches for Tracker, so if you were considering contributing code to the project I can safely say that there has never been a better time!

The Tracker project is now divided into two parts, the ‘core’ (tracker.git) and the ‘miners’ (tracker-miners.git). The core project contains the database and the application interface libraries, while the miners project contains the daemons that scan your filesystem and extract metadata from your interesting files.

Let’s look at what happens automatically when you submit a merge request on GNOME GitLab for the tracker-miners project:

  1. The .gitlab-ci.yml file specifies a Docker image to be used for running tests. The Docker images are built automatically from this project and are based on Fedora.
  2. The script in .gitlab-ci.yml clones the ‘master’ version of Tracker core.
  3. The tracker and tracker-miners projects are configured and built using Meson. There is a special build option in tracker-miners that makes it include Tracker core as a Meson subproject, instead of building against the system-provided version. (It still depends on a few files from the host at the time of writing.)
  4. The script starts a private D-Bus session using dbus-run-session, sets a fixed en_US.UTF8 locale, and runs the test suite for tracker-miners using meson test.
  5. Meson runs the tests that are defined in meson.build files. It tries to run them in parallel with one test per CPU core.
  6. The libtracker-miners-common tests exercise some utility code, which is duplicated from libtracker-common in Tracker core.
  7. The libtracker-extract tests exercise libtracker-extract, which is a private library with helper code for accessing file metadata. It mainly focuses on standard metadata formats like XMP and EXIF.
  8. The functional-300-miner-basic-ops and functional-301-resource-removal tests check the operation of the tracker-miner-fs daemon, mostly by copying files in and out of a specific path and then waiting for the corresponding changes to the Tracker database to take effect (a simplified sketch of this copy-and-wait pattern appears after the list).
  9. The functional-310-fts-basic test tries some full-text search operations on a text file. There are a couple of other FTS tests too.
  10. The functional/extract/* tests effectively run tracker extract on a set of real media files, and test that the expected metadata is extracted. The tests are defined by JSON files such as this one.
  11. The functional-500-writeback tests exercise the tracker-writeback daemon (which allows updating things like MP3 tags following changes in the Tracker database). These tests are not particularly thorough. The writeback feature of Tracker is not widely used, to my knowledge.
  12. Finally, the functional-600-* tests simulate the behaviour of some MeeGo phone applications. Yes, that’s how old this code is 🙂

There is plenty of room for more testing of course, but this list is very comprehensive when compared to the total lack of automated testing that the project had just a year ago!
