Twitter without Infinite Scroll

I like reading stuff on twitter.com because a lot of interesting people write things there which they don’t write anywhere else.

But Twitter is designed to be addictive, and a key mechanism they use is the “infinite scroll” design. Infinite scroll has been called the Web’s slot machine because of the way it exploits our minds to make us keep reading. It’s an unethical design.

In an essay entitled “If the internet is addictive, why don’t we regulate it?”, the writer Michael Schulson says:

… infinite scroll has no clear benefit for users. It exists almost entirely to circumvent self-control.

Hopefully Twitter will one day consider the ethics of their design. Until then, I made a Firefox extension to remove the infinite scroll feature and replace it with a ‘Load older tweets’ link at the bottom of the page, like this:

[Screenshot: the timeline ending with a ‘Load older tweets’ link]

The Firefox extension is called Twitter Without Infinite Scroll. It works by injecting some JavaScript code into the Twitter website which disconnects the ‘uiNearTheBottom’ event that would otherwise automatically fetch new data.
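Conceptually, the injected code is tiny. Here is a sketch of the idea (an assumption on my part, not the extension’s actual source: it supposes the page dispatches the event through its own jQuery object, and the real extension also has to add the ‘Load older tweets’ link):

    // Detach whatever handler automatically fetches more tweets when
    // you scroll near the bottom of the page. '$' here is assumed to
    // be the page's own jQuery object.
    $(document).off('uiNearTheBottom');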

Quoting Michael Schulson’s article again:

Giving users a chance to pause and make a choice at the end of each discrete page or session tips the balance of power back in the individual’s direction.

So, if you are a Twitter user, enjoy your new-found power!

Tools I like for creating web apps

I used to dislike creating websites. CSS confused me and JavaScript annoyed me.

In the last year I’ve grown to like web development again! The developer tools available in the web browsers of 2019 are incredible to work with. The desktop world is catching up, but the browser world is ahead. I rarely have to write CSS at all. JavaScript has gained a lot of the features that it was inexplicably missing.

Here’s a list of the technologies I’ve recently used and liked.

First, Bootstrap. It’s really a set of CSS classes which turn HTML into something that “works out of the box” for creating decently styled webapps. To me it feels like a well-designed widget toolkit, a kind of web counterpart to GTK. Once you know what Bootstrap looks like, you realize that everyone else is already using it. Thanks to Bootstrap, I don’t really need to understand CSS at all anymore. Once you get bored of the default theme, you can try some others.
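For instance, a decent-looking button and alert need nothing more than class names:

    <!-- Bootstrap styling comes entirely from CSS classes. -->
    <button class="btn btn-primary">Save</button>
    <div class="alert alert-warning">Unsaved changes</div>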

Then, jQuery. I guess everyone in the world uses jQuery. It provides powerful methods to access JavaScript functionality that is otherwise horrible to use. One of its main features is the ability to select elements from an HTML document using CSS selectors. Normally jQuery provides a function named $, so for example you could get the text of every paragraph in a document like this: $('p').text(). Open up your browser’s inspector and try it now! Now, try to do the same thing without jQuery: you’ll need at least 6 times more code.1

After that, Riot.js. Riot is a UI library which lets you create a web page using building blocks which they call custom tags. Each custom tag is a snippet of HTML. You can attach JavaScript properties and methods to a custom tag as well, and you can refer to them in the snippet of HTML giving you a powerful client-side template framework.
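As an illustration, here is what a (hypothetical) custom tag might look like, in Riot 3 syntax:

    <!-- greeting.tag: a snippet of HTML plus the JavaScript driving it. -->
    <greeting>
      <h1>Hello, { opts.name }!</h1>
      <button onclick={ shout }>Shout</button>
      <script>
        this.shout = function () {
          alert('HELLO, ' + opts.name.toUpperCase() + '!')
        }
      </script>
    </greeting>

Mounting it from the page is a single call: riot.mount('greeting', { name: 'world' }).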

There are lots of “frameworks” which provide similar functionality to Riot.js. I find frameworks a bit overwhelming, and I’m suspicious of tools like create-react-app that need to generate code for me before I can even get started. I like that Riot can run completely in the browser without any special tooling required, and that it has one, specific purpose. Riot isn’t perfect; in particular I find the documentation quite hard to understand at times, but so far I’m enjoying using it.

Finally, lunr.js. Lunr provides a powerful full-text search engine, implemented completely as JavaScript that runs in your users’ web browsers. “Isn’t that a terrible idea?” you think. For large data sets, Lunr is not at all appropriate (you might consider its larger sibling Solr). For a small webapp or prototype, Lunr can work well and can save you from having to write and run a more complex backend service.
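A minimal sketch of how it is used (the documents and field names here are just an example):

    // Build a search index over some documents, then query it.
    // Everything happens client-side.
    var documents = [
      { id: 1, title: 'Present perfect', body: 'Talking about experiences' },
      { id: 2, title: 'Presentations', body: 'Giving a talk to an audience' }
    ];

    var idx = lunr(function () {
      this.ref('id');
      this.field('title');
      this.field('body');
      documents.forEach(function (doc) { this.add(doc) }, this);
    });

    idx.search('talk');  // returns matches, ranked by relevance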

If, like me, you used to think web development was super annoying because of bad tooling, give it another try! It’s (kind of) fun now!

1. Here’s how it looks without jQuery: Array.prototype.map.call(document.getElementsByTagName('p'), function(e) { return e.textContent; }).join('')

The Lesson Planalyzer

I’ve now been working as a teacher for 8 months. There are a lot of things I like about the job. One thing I like is that every day brings a new deadline. That sounds bad, right? It’s not: one day I prepare a class, the next day I deliver the class one or more times, and I get instant feedback on it right there and then from the students. I’ve seen enough of the software industry, and the music industry, to know that such a quick feedback loop is a real privilege!

Creating a lesson plan can be a slow and sometimes frustrating process, but the more plans I write the more I can draw on things I’ve done before. I’ve planned and delivered over 175 different lessons already. It’s sometimes hard to know if I’m repeating myself or not, or if I could be reusing an activity from a past lesson, so I’ve been looking for easy ways to look back at all my old lesson plans.

Search

GNOME’s Tracker search engine provides a good starting point for searching a set of lesson plans: I can put the plans in my ~/Documents folder, open the folder in Nautilus, and then type a term like “present perfect” into the search bar.

Screenshot of Nautilus showing search results

The results aren’t as helpful as they could be, though. I can only see a short snippet of the text in each document, when I really need to see the whole paragraph for the result to be directly useful. Also, the search returns anything where the words present and perfect appear, so we could be talking about tenses, or birthdays, or presentation skills.  I wanted a better approach.

Reading .docx files

My lesson plans have a fairly regular structure. An all-purpose search tool doesn’t know anything about my personal approach to writing lesson plans, though. I decided to try writing my own tool to extract more structured information from the documents. The plans are in .docx format1, which is remarkably easy to parse: you just need Python’s ‘zipfile’ and ‘xml’ modules, and some guesswork to figure out what the XML elements mean. I was surprised not to find a Python library that already did this for me, but in the end I wrote a very basic .docx helper module, and I used this to create a tool that read my existing lesson plans and dumped the data as a JSON document.
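The core of the approach is small enough to sketch here (this is not the actual helper module, but it shows the idea):

    import json
    import zipfile
    import xml.etree.ElementTree as ET

    # The WordprocessingML namespace used inside .docx files.
    W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

    def docx_paragraphs(path):
        """Yield the plain text of each paragraph in a .docx file."""
        with zipfile.ZipFile(path) as docx:
            root = ET.fromstring(docx.read('word/document.xml'))
        for p in root.iter(W + 'p'):
            # A paragraph (w:p) contains runs (w:r) containing text (w:t).
            yield ''.join(t.text or '' for t in p.iter(W + 't'))

    print(json.dumps(list(docx_paragraphs('lesson-plan.docx'))))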

It works reliably! In a few cases I chose to update documents rather than add code to the tool to deal with formatting inconsistencies. Also, the tool currently throws away all formatting information, but I barely notice.

Web and desktop apps

From there, of course, things got out of control and I started writing a simple web application to display and search the lesson plans. Two months of sporadic effort later, and I just made a prototype release of The Lesson Planalyzer. It remains to be seen how useful it is for anyone, including me, but it’s very satisfying to have gone from an idea to a prototype application in such a short time. Here’s an ugly screenshot, which displays a couple of example lesson plans that I found online.

The user interface is HTML5, made using Bootstrap and a couple of other cool JavaScript libraries (which I might mention in a separate blog post). I’ve wrapped that up in a basic GTK application, which runs a tiny HTTP server and uses a WebKitWebView to display its output. The desktop application has a couple of features that can’t be implemented inside a browser: one is the ability to open plan documents directly in LibreOffice, and the other is a dedicated entry in the alt+tab menu.
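The wrapper itself is tiny. A minimal sketch, assuming Python with GTK 3 and WebKit2 (the real application also starts the HTTP server, and the port number here is made up):

    import gi
    gi.require_version('Gtk', '3.0')
    gi.require_version('WebKit2', '4.0')
    from gi.repository import Gtk, WebKit2

    # A plain GTK window showing the web UI served by the app's local
    # HTTP server, so it gets its own icon and alt+tab entry.
    window = Gtk.Window(title='Lesson Planalyzer')
    window.set_default_size(1024, 768)
    window.connect('destroy', Gtk.main_quit)

    view = WebKit2.WebView()
    view.load_uri('http://127.0.0.1:8000/')
    window.add(view)

    window.show_all()
    Gtk.main()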

If you’re curious, you can see the source at https://gitlab.com/samthursfield/planalyzer/. Let me know if you think it might be useful for you!

1. I need to be able to print the documents on computers which don’t have LibreOffice available, so they are all in .docx format.

Inspire me, Nautilus!

When I have some free time I like to be creative but sometimes I need a push of inspiration to take me in the right direction.

Interior designers and people who are about to get married like to create inspiration boards by gluing magazine cutouts to the wall.

‘Mood board for a Tuscan Style Interior’ by Design Folly on Flickr

I find a lot of inspiration online, so I want a digital equivalent. I looked for one, and I found various apps for iOS and Mac which act like digital inspiration boards, but I didn’t find anything I can use with GNOME. So I began planning an elaborate new GTK+ app, but then I remembered that I get tired of such projects before they actually become useful. In fact, there’s already a program that lets you manage a collection of images and text! It’s known as Files (Nautilus), and for me it only lacks the ability to store web links amongst the other content.

Then, I discovered that you can create .desktop files that point to web locations, the equivalent of .url files on Microsoft Windows. Would a folder full of URL links serve my needs? I think so!
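A web link .desktop file is just a few lines of text, for example:

    [Desktop Entry]
    Type=Link
    Name=GNOME
    URL=https://www.gnome.org/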

Nautilus had some crufty code paths to deal with these shortcut files, which were removed in 2018. Firefox understands them directly, so if you set Firefox as the default application for the application/x-desktop file type then they work nicely: click on a shortcut and it opens in Firefox.

There is no convenient way to create these .desktop files: dragging and dropping a tab from Epiphany will create a text file containing the URL, which is tantalisingly close to what I want, but the resulting file can’t be easily opened in a browser. So, I ended up writing a simple extension that adds a ‘Create web link…’ dialog to Nautilus, accessed from the right-click menu.
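Here is a sketch of the extension’s skeleton, assuming the nautilus-python bindings (the real extension pops up a dialog to ask for the name and URL; here they are hard-coded):

    import os
    from gi.repository import GObject, Nautilus

    DESKTOP_TEMPLATE = '[Desktop Entry]\nType=Link\nName={name}\nURL={url}\n'

    class WebLinkProvider(GObject.GObject, Nautilus.MenuProvider):
        def get_background_items(self, window, folder):
            # Add an entry to the folder's right-click menu.
            item = Nautilus.MenuItem(name='WebLinkProvider::create',
                                     label='Create web link…')
            item.connect('activate', self._create_link, folder)
            return [item]

        def _create_link(self, menu_item, folder):
            # The real extension asks for these two values in a dialog.
            name, url = 'GNOME', 'https://www.gnome.org/'
            path = os.path.join(folder.get_location().get_path(),
                                name + '.desktop')
            with open(path, 'w') as f:
                f.write(DESKTOP_TEMPLATE.format(name=name, url=url))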

Now I can use Nautilus to easily manage collections of links and I can mix in (or link to) any local content easily too. Here’s me beginning my ‘inspiration board’ for recipes …

[Screenshot: my ‘inspiration board’ folder of recipe links in Nautilus]


Developing with Packer and Docker

I spent last Friday looking at setting up an OpenID provider for the Baserock project. This is kind of a prerequisite to having our own Storyboard instance, which we’d quite like to use for issue and task tracking.

I decided to start by using some existing tools that have nothing to do with Baserock. Later we can move the infrastructure over to using Baserock, to see what it adds to the process. We have spent quite a lot of time “eating our own dogfood” and while it’s a good idea in general, a balanced diet contains more than dogfood.

The Baserock project has an OpenStack tenancy at DataCentred which can host our public infrastructure. The goal is to deploy my OpenID provider system there. However, for prototyping it’s much easier to use a container on my laptop, because then I don’t need to work across an internet connection.

The morph deploy command in Baserock allows deploying systems in multiple ways and I think it’s pretty cool. Another tool which seems to do this is Packer.

Differences between `morph deploy` and Packer

Firstly, I notice Packer takes JSON as input, whereas Morph uses YAML. So with Packer I have to get the commas and brackets right, and I can’t use comments. That’s a bit mean!

A bigger difference is that Morph considers ‘building and configuring’ the image separately from ‘writing the image somewhere’. By contrast, Packer ties ‘builder’ and ‘type of hosting infrastructure’ together. In my case I need to use the Docker builder for my prototype and the OpenStack builder for the production system. There can be asymmetry here: in Docker my Fedora 20 base comes from the Docker registry, but for OpenStack I created my own image from the Fedora 20 cloud image. I could have created my own Docker image too if I didn’t trust the semi-official Fedora image, though.

The downside to the `morph deploy` approach is that it can be less efficient. To deploy to OpenStack, Morph first unpacks a tarball locally, then runs various ‘configuration’ extensions on it, then converts it to a disk image and uploads it to OpenStack. Packer starts by creating the OpenStack instance and then configures it ‘live’ via SSH. This means it doesn’t need to unpack and repack the system image, which can be slow if the system being deployed is large.

Packer has separate types of ‘provisioner’ for different configuration management frameworks. This is handy, but the basic ‘shell’ and ‘file’ provisioners actually give you all you need to implement the others. The `morph deploy` command doesn’t have any special helpers for different configuration management tools, but that doesn’t prevent one from using them.

Building Packer

Packer doesn’t seem to be packaged for Fedora 20, which I use as my desktop system, so I had a go at building it from source:

    sudo yum install golang
    mkdir gopath && cd gopath
    export GOPATH=`pwd`
    go get -u github.com/mitchellh/gox
    git clone https://github.com/mitchellh/packer \
        src/github.com/mitchellh/packer
    cd src/github.com/mitchellh/packer
    make updatedeps
    make dev

There’s no `make install`, but you can run the tool as `$GOPATH/bin/packer`. Note there’s also a program called `packer` provided by the CrackLib package in /usr/sbin on Fedora 20, which can cause a bit of confusion!

Prototyping with Docker

I’m keen on using Docker for prototyping and for managing containers in general. I’m not really sold on the idea of the Dockerfile, as it seems like basically a less portable incarnation of shell scripting. I do think it’s cool that Docker takes a snapshot after each line of the file is executed, but I’ve no idea if this is useful in practice. I’d much prefer to use a configuration management system like Ansible. And rather than installing packages every time I deploy a system, it’d be nice to just build the right one to begin with, like I can do with Morph in Baserock.

Using Packer’s Docker builder doesn’t require me to write a Dockerfile, and as the Packer documentation points out, all of the configuration that can be described in a Dockerfile can also be specified as arguments to the docker run command.

As a Fedora desktop user it makes sense to use Fedora for now as my Docker base image. So I started with this template.json file:

    {
        "builders": [
            {
                "type": "docker",
                "image": "fedora:20",
                "commit": true
            }
        ],
        "post-processors": [
            {
                "type": "docker-tag",
                "repository": "baserock/openid-provider",
                "tag": "latest",
            }
        ]
    }

I ran packer build template.json and (after some confusion with /usr/sbin/packer) waited for a build. Creating my container took less than a minute, including downloading the Fedora base image from the Docker hub. Nice!

I could then enter my image with docker run -i -t baserock/openid-provider and check out my new generic Fedora 20 system.

Initially I thought I’d use Vagrant, which is a companion tool to Packer, to get a development environment set up. That would have required me to use VirtualBox rather than Docker for my development deployment, though, which would be much slower and more memory-hungry than a container. I realised that all I really wanted was the ability to share the Git repo I was developing things in between my desktop and my test deployments anyway, which could be achieved with a Docker volume just as easily.

Edit: I have since found out that there are about four different ways to use Vagrant with Docker, but I’m going to stick with my single docker run command for now.

So I ended up with the following command line to enter my development environment:

    docker run -i -t --rm \
        --volume=`pwd`:/src/test-baserock-infrastructure \
        baserock/openid-provider

The only issue is that because I’m running as ‘root’ inside the container, files from the development Git repo that I edit inside the container become owned by root in my home directory. It’s no problem to always edit and run Git from my desktop, though (and since the container system lacks both vim and git, it’s easy to remember!).

Running a web service

I knew of two OpenID providers I wanted to try out. Since Baserock is mostly a Python shop I thought the first one to try should be the Python-based Django OpenID Provider (the alternative being Masq, which is for Ruby on Rails).

Fedora 20 doesn’t ship with Django, so the first step was to install it in my system. The easiest (though far from the best) way is using the Packer ‘shell’ provisioner and running the following:

    yum install python-pip
    pip install django

Next step was to follow the Django tutorial and get a demo webserver running. Port forwarding is nobody’s friend: it took me a bit of time to be able to talk to the webserver (which was in a container) from my desktop. The Django tutorial advises running the server on 127.0.0.1:80, but this doesn’t make so much sense in a container. Instead, I ran the server with the following:

    python ./manage.py runserver 0.0.0.0:80

And I ran the container as follows:

    docker run -i -t --rm \
        --publish=127.0.0.1:80:80 \
        --volume=`pwd`:/src/test-baserock-infrastructure \
        baserock/openid-provider

So inside the container the web server listens on all interfaces, but it’s forwarded only to localhost on my desktop, so that other computers can’t connect to my rather insecure test webserver.

I then spent a while learning Django and setting up Django OpenID Provider in my Django project. It was actually quite easy and fun! Eventually I got to the point where I wanted to put my demo OpenID server on the internet, so I could test it against a real OpenID consumer.
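From memory, wiring the provider into a Django project amounts to little more than this (a sketch: treat the exact module names as an assumption on my part rather than gospel):

    # settings.py: register the OpenID provider app.
    INSTALLED_APPS += ('openid_provider',)

    # urls.py: route /openid/ requests to the app.
    from django.conf.urls import include, url

    urlpatterns = [
        url(r'^openid/', include('openid_provider.urls')),
    ]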

Deploying to OpenStack

Deploying to OpenStack with Packer proved a bit more tricky than deploying to Docker. It turns out that fields like ‘source_image’, ‘flavor’ and ‘networks’ take IDs rather than names, which is a pain (although I understand the reason). I had to give my instance a floating IP, too, as Packer needs to contact it via SSH and I’m deploying from my laptop, which isn’t on the cloud’s internal network. It took a while to get a successful deployment, but we got there.
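For reference, the builder entry ended up looking roughly like this (a sketch from memory: the IDs are placeholders, and the field names are those of the Packer OpenStack builder of the time):

    {
        "type": "openstack",
        "image_name": "baserock-openid-provider",
        "source_image": "<UUID of my Fedora 20 image>",
        "flavor": "<UUID of the instance flavor>",
        "networks": ["<UUID of the internal network>"],
        "floating_ip_pool": "<name of the external network>",
        "ssh_username": "fedora"
    }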

I found that using the ‘file’ provisioner to copy in the code of the Django application didn’t work: the files were there in the resulting system, but corrupted. It may be better to try to deploy this from Git anyway, but I’m a little confused about what went wrong there.

Edit: I found that there was quite a bit of corruption in the files that were added during provisioning, and while I didn’t figure out the cause, I did find that adding a call to ‘sleep 10’ as the first step in provisioning made the issue go away. Messy.

I’m fairly happy with what I managed to get done in a single day: we now have a way of developing and deploying infrastructure which requires minimal fluff in the Git repository and a pretty quick turnaround time. As for the Django OpenID provider, I’ve not yet managed to get it to serve me an OpenID — I guess next Friday I shall start debugging it.

The code is temporarily available at http://github.com/ssssam/test-baserock-infrastructure. If we make use of this in Baserock it will no doubt move to git.baserock.org.

Facebook is the new phone book

A while back there used to be this website which was a fun alternative to actual work. Lots of real people were on there and you could share jokes, photos and meaningless rants. I realised today that Facebook has reached a crucial turning point, because it’s not actually any fun any more. With more and more eyes on it, the atmosphere has shifted from a playground to a marketplace, and the company’s profile has shifted from expedition leaders heading for a brave new world to just another services company.

They have always been a company without precedent. I think it’s worth noting just how many big decisions they have taken without the slightest idea of how they would turn out, and the big effect those decisions have had on the way the entire world interacts with the Web. They have been super proactive! Imagine if the first social network had resembled the Facebook of 2011 – it would have been impenetrable and impossible. They created the market for themselves.

I can’t tell if the current Timeline move is genius, desperation or insanity, but I can tell what’s going to happen – with the increasing exposure of public data and “sharing by default”, Facebook will become the 21st-century phone book. The momentum is such that not having an account is limiting, because events and announcements pass you by. I think it’s unlikely that anyone will manage to supplant them. But at the same time – is this where they wanted to be?

We’ve had to become experts in privacy and identity protection (here’s an article on how to use Facebook for professional advantage, for example). Scammers and spammers are working hard to steal your identity and to sell things that I don’t understand. This is not fun!! Digital media by default has led me to start using a film camera, because having actual photographs has become an interesting thing in itself. The age of social media is fully upon us, but do they realise that, by becoming ubiquitous, they are taking up a position somewhere between the print shop and the telephone company?

Edit: this article tells you much the same.

Privacy advice

This post was inspired by a great example of the not-fun I’m describing:

“I like to keep my FB private for obvious reasons except to those I am friends with. So if you all would do the following, I’d appreciate it. With the new FB timeline on its way this week for EVERYONE. . . please do both of us a favour: Hover over my name above. In a few seconds you’ll see a box that says “Subscribed”. Hover over that, then go to “Comments and Likes” and unclick it. That will stop my posts and yours to me from showing up on the side bar for everyone to see, but most importantly it limits hackers from invading our profiles. If you repost this I will do the same for you. You’ll know I’ve acknowledged you because if you tell me that you’ve done it I’ll “like” it”

I had a quick think about this one. Of course subscriptions don’t allow “hackers” to “invade” our profiles (this is a reference to the multiple “remove timeline from your profile” apps, which are apparently all scams; there’s actually no way to do that). I don’t particularly feel like having an automatically generated personal website that’s out of my control either, though. Is there a way to – not show the timeline?

The nearest I’ve found so far is:

  • Privacy Settings > Limit the Audience for Past Posts, which I guess removes any public posts from public view
  • Manually selecting “Hide all” for every type of activity in the “recent activity” section on your timeline, which hides them for everyone looking at your profile as well