Writing well

We rely on written language to develop software. I used to joke that I worked as a professional email writer rather than a computer programmer (and it wasn’t really a joke). So if you want to be a better engineer, I recommend that you focus some time on improving your written English.

I recently bought 100 Ways to Improve Your Writing by Gary Provost, which is a compact and rewarding book full of simple and widely applicable guidelines for writers. My advice is to buy a copy!

You can also find plenty of resources online. Start by improving your commit messages. Since we love to automate things, try these shell scripts that catch common writing mistakes. And every time you write a paragraph simply ask yourself: what is the purpose of this paragraph? Is it serving that purpose?
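
For flavour, here’s a toy version of that idea in Python (the word list is my own invention, not taken from those scripts):

#!/usr/bin/env python3
# Toy writing checker: flag common weasel words in the files given
# on the command line. The word list is arbitrary; extend to taste.
import re
import sys

WEASELS = r'\b(very|fairly|quite|several|various|mostly|basically)\b'

for path in sys.argv[1:]:
    with open(path) as f:
        for number, line in enumerate(f, start=1):
            for match in re.finditer(WEASELS, line, re.IGNORECASE):
                print(f'{path}:{number}: weasel word "{match.group()}"')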

Native speakers and non-native speakers will both find useful advice in Gary Provost’s book. In the UK school system we aren’t taught this stuff particularly well. Many English-as-a-second-language courses don’t teach how to write on a “macro” level either, which is sad because there are many differences from language to language that non-natives need to be aware of. I have seen “Business English” courses that focus on clear and convincing communication, so you may want to look into one of those if you want more than just a book.

Code gets read more than it gets written, so it’s worth taking extra time so that it’s easy for future developers to read. The same is true of emails that you write to project mailing lists. If you want to make a positive change to development of your project, don’t just focus on the code — see if you can find 3 ways to improve the clarity of your writing.

Natural Language Processing

This month I have been thinking about good English sentence and paragraph structure. Non-native English speakers who are learning to write in English will often think of what they want to say in their first language and then translate it. This generally results in a mess. The precise structure of the mess will depend on the rules of the student’s first language. The important thing is to teach the conventions of good English writing; but how?

Visualizing a problem helps to solve it. However, there doesn’t seem to be a tool available today that can clearly visualize the various concerns writers have to deal with. A paragraph might contain 100 words, each of which relates to the others in some way. How do you visualize that clearly… not like this, anyway.

I did find some useful resources, though. I discovered the Paramedic Method through this blog post from helpscout.net. The Paramedic Method was devised by Richard Lanham and consists of these six steps:

  1. Highlight the prepositions.
  2. Highlight the “is” verb forms.
  3. Find the action. (Who is kicking whom?)
  4. Change the action into a simple active verb.
  5. Start fast—no slow windups.
  6. Read the passage out loud with emphasis and feeling.

This is good advice for anyone writing English. It’ll be particularly helpful in my classes in Spain, where we need to clean up long strings of relative clauses. (For example, a sentence such as “On the way I met one of the workers from the company where I was going to do the interview that my friend got for me”. I would rewrite this as: “On the way I met a person from Company X, where my friend had recently got me an interview.”)

I found a tool called Write Music which I like a lot. The idea is simple: to illustrate and visualize the rule that varying sentence length is important when writing. The creator of the tool, Titus Wormer, seems to be doing some interesting and well documented research.
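
The measurement behind Write Music is easy to reproduce, too. Here is a quick sketch that prints one bar per sentence, so you can see the shape of a paragraph at a glance (‘draft.txt’ is a stand-in for whatever text you want to check):

import re

# Crude sentence splitter: break after '.', '!' or '?' followed by space.
text = open('draft.txt').read()
sentences = re.split(r'(?<=[.!?])\s+', text.strip())

# One bar per sentence; a run of similar bars suggests a monotonous rhythm.
for sentence in sentences:
    words = len(sentence.split())
    print(f'{words:3d} {"#" * words}')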

I looked at a variety of open source tools for natural language processing. These provide good ways to tokenize a text and to identify the “part of speech” (noun, verb, adjective, adverb, etc.), but I haven’t yet found one that could analyze the types of clauses being used, which is a shame. My understanding of this area of English grammar is still quite weak, and I was hoping my laptop might be able to teach me by example, but it seems not.
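
That said, part-of-speech tagging alone gets you a surprisingly long way. For example, steps 1 and 2 of the Paramedic Method above can be roughed out in a few lines of spaCy (a library I mention below; this sketch assumes the small English model has been downloaded):

import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
doc = nlp('The decision was made by the committee in the autumn.')

for token in doc:
    if token.pos_ == 'ADP':        # prepositions (step 1)
        print('preposition:', token.text)
    elif token.lemma_ == 'be':     # "is" verb forms (step 2)
        print('be-form:    ', token.text)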

I found some surprisingly polished libraries that I’m keen to use for … something. One day I’ll know what. The compromise library for JavaScript can do all kinds of parsing and wordplay, and is refreshingly honest about its limitations; spaCy for Python also looks exciting.

People like to interact with a computer through text. We hide the UNIX command line, and yet one of the most popular user interfaces in the world is the Google search engine: a text box that accepts any kind of natural language and gives the impression of understanding it. In many cases this works brilliantly — I check spellings and convert measurements all the time using this “search engine” interface. Did you realize GNOME Shell can also do unit conversions? Try typing “50lb in kg” into the GNOME Shell search box and look at the result. Very useful! More apps should do helpful things like this.

I found some other amazing natural language technologies too. Inform 7 continues to blow my mind whenever I look at it. Commercial services like IBM Watson can promise incredible things like analysing the sentiments and emotions expressed in a text, and even the relationships expressed between the subjects and objects. It’s been an interesting day of research!

Enormous Git Repositories

If you had a 100GB Subversion repository, where a full checkout came to about 10GB of source files, how would you go about migrating it to Git?

One thing you probably wouldn’t do is import the whole thing into a single Git repo: it’s pretty well known that Git isn’t designed for that. But, you know, Git does have some tools that let you pretend it’s a centralised version control system, and huge monolithic repos are cool, and it works in Mercurial… Evidence is worth more than hearsay, so I decided to create a Git repo with 10GB of text files to see what happened. I did get told in #git on Freenode that Git will not cope with a repo that’s larger than available RAM, but I was a little suspicious given the number of multi-gigabyte Git repos in existence.

I adapted a Bash script from here to create random filenames, and used the csmith program to fill those files with nonsense C++ code, until I had 7GB of such gibberish. (I had aimed for 10GB, but realised that, having used du -s instead of du --apparent-size -s to check the size of my test data, I actually had only 7GB of content, which was using 10GB of disk space.)
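
The generation loop was roughly this shape. This is a sketch rather than the real script: the filename scheme is invented, and it assumes csmith is on your $PATH:

#!/usr/bin/env python3
import os
import random
import string
import subprocess

TARGET = 7 * 1024 ** 3   # ~7GB of actual content

total = 0
while total < TARGET:
    # Random 12-letter filename, filled with a random C program.
    name = ''.join(random.choices(string.ascii_lowercase, k=12)) + '.c'
    with open(name, 'wb') as f:
        subprocess.run(['csmith'], stdout=f, check=True)
    total += os.path.getsize(name)   # apparent size, not disk usage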

The test machine was an x86 virtual machine with 2GB of RAM, 1 CPU and no swap. The repo was on a 100GB ext4 volume. Doing a performance benchmark on a virtual machine on shared infrastructure is a bad idea, but I’m testing a bad idea, so whatever. The machine ran Git version 2.5.0.

Results

Generating the initial data: this took all night, perhaps because I included a call to du inside the loop that generated the data, which would take an increasing amount of time on each iteration.

Creating the initial 7GB commit: 95 minutes

$ time git add .
real    90m0.219s
user    84m57.117s
sys     1m6.932s

$ time git status
real    1m15.992s
user    0m4.071s
sys     0m20.728s

$ time git commit -m "Initial commit"
real    4m22.397s
user    0m27.168s
sys     1m5.815s

The git log command is pretty much instant. A git show of this commit takes a minute the first time I run it, and about 5 seconds if I run it again.

Doing git add and git rm to create a second commit is really quick; git status is still slow, but git commit is quick:

$ time git status
real    1m19.937s
user    0m5.063s
sys     0m16.678s

$ time git commit -m "Put all z files in same directory"
real    0m11.317s
user    0m1.639s
sys     0m5.306s

Furthermore, git show of this second commit is quick too.

Next I used git daemon to serve the repo over git:// protocol:

$ git daemon --verbose --export-all --base-path=`pwd`

Doing a full clone from a different machine (with Git 2.4.3, over the intranet): 22 minutes

$ time git clone git://172.16.20.95/huge-repo
Cloning into 'huge-repo'...
remote: Counting objects: 339412, done.
remote: Compressing objects: 100% (33351/33351), done.
remote: Total 339412 (delta 5436), reused 0 (delta 0)
Receiving objects: 100% (339412/339412), 752.12 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (5436/5436), done.
Checking connectivity... done.
Checking out files: 100% (46345/46345), done.

real    22m17.734s
user    2m12.606s
sys     0m54.603s

Doing a sparse checkout of a few files: 15 minutes

$ mkdir sparse-checkout
$ cd sparse-checkout
$ git init .
$ git config core.sparsecheckout true
$ echo z-files/ >> .git/info/sparse-checkout

$ time git pull  git://172.16.20.95/huge-repo master
remote: Counting objects: 339412, done.
remote: Compressing objects: 100% (33351/33351), done.
remote: Total 339412 (delta 5436), reused 0 (delta 0)
Receiving objects: 100% (339412/339412), 752.12 MiB | 2.58 MiB/s, done.
Resolving deltas: 100% (5436/5436), done.
From git://172.16.20.95/huge-repo
 * branch            master     -> FETCH_HEAD

real    14m26.032s
user    1m9.133s
sys     0m22.683s

This is rather unimpressive. I pulled only a 55MB subset of the repo (a single directory), but it still took nearly 15 minutes. Pulling the same subset again from the same git-daemon process took a similar time, and the .git directory of the sparse checkout is the same size as that of a full clone.

I think these numbers are interesting. They show that the sky doesn’t fall if you put a huge amount of code into Git. At the same time, the ‘sparse checkouts’ feature doesn’t really let you pretend that Git is a centralised version control system, so you can’t actually avoid the consequences of having such a huge repo.

Also, I learned that if you are profiling file size, you should use du --apparent-size to measure that, because file size != disk usage!
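
You can see the same distinction from Python, incidentally. st_blocks is always counted in 512-byte units, and ‘somefile’ here is any file you like:

import os

st = os.stat('somefile')
print('apparent size:', st.st_size)          # what du --apparent-size reports
print('disk usage:   ', st.st_blocks * 512)  # what plain du reports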

Disclaimer: there are better ways to spend your time than trying to use a tool for things that it’s not designed for (sometimes).

Cleaning up stale Git branches

I get bored looking through dozens and dozens of stale Git branches. If git branch --remote takes up more than a screenful of text then I am unhappy!

Here are some shell hacks that can help you when trying to work out what can be deleted.

This shows you all the remote branches which are already merged; those can probably be deleted right away!

git branch --remote --merged

These are the remote branches that aren’t merged yet.

git branch --remote --no-merged

Best not to delete those straight away. But some of them are probably totally stale. This snippet will loop through each unmerged branch and tell you (a) when the last commit was made, and (b) how many commits it contains which are not merged to ‘origin/master’.

for b in $(git branch --remote --no-merged); do
    echo $b;
    git show $b --pretty="format:  Last commit: %cd" | head -n 1;
    echo -n "  Commits from 'master': ";
    git log --oneline $(git merge-base $b origin/master)..$b | wc -l;
    echo;
done

The output looks like this:

origin/album-art-to-libtracker-extract
  Last commit: Mon Mar 29 17:22:14 2010 +0100
  Commits from 'master': 1

origin/albumart-qt
  Last commit: Thu Oct 21 11:10:25 2010 +0200
  Commits from 'master': 1

origin/api-cleanup
  Last commit: Thu Feb 20 12:16:43 2014 +0100
  Commits from 'master': 18

...

Two of those haven’t been touched for five years and only contain a single commit, so they are probably good targets for deletion.

You can also get the above info sorted, with the oldest branches first. First you need to generate a list: this loop outputs each branch and the date of its newest commit (as a Unix timestamp), sorts numerically, then strips out the timestamp and writes the result to a file called ‘unmerged-branches.txt’:

for b in $(git branch --remote --no-merged); do
    git show $b --pretty="format:%ct $b" | head -n 1;
done | sort -n | cut -d ' ' -f 2 > unmerged-branches.txt

Then you can run the formatting command again, but replace the first line with:

for b in $(cat unmerged-branches.txt); do

OK! You have a list of all the unmerged branches and you can send a mail to people saying you’re going to delete all of them older than a certain point unless they beg you not to.
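
If you want to automate that last step as well, here is a hypothetical sketch that reads the file and prints only the branches whose newest commit is older than some cutoff (two years, chosen arbitrarily):

#!/usr/bin/env python3
# Assumes 'unmerged-branches.txt' was written by the loop above.
import subprocess
import time

CUTOFF = time.time() - 2 * 365 * 24 * 60 * 60   # two years ago

with open('unmerged-branches.txt') as f:
    for branch in (line.strip() for line in f if line.strip()):
        # %ct is the committer date as a Unix timestamp.
        out = subprocess.check_output(
            ['git', 'log', '-1', '--format=%ct', branch])
        if int(out) < CUTOFF:
            print(branch)

When the grace period expires, each remaining branch can be deleted with git push origin --delete <branch> (strip the ‘origin/’ prefix from the name first).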

.yml files are an anti-pattern

A lot of people are representing data as YAML these days. That’s good! It’s an improvement over the days when everything seemed to be represented as XML, anyway.

But one thing about the YAML format is that it doesn’t require you to embed any information in the file about how the data should be interpreted. So now we have projects where there are hundreds of different .yml files committed to Git and I have no idea what any of them are for.

YAML is popular because it’s minimal and convenient, so I don’t think that requiring everyone to suddenly create an ontology for the data in these .yml files would be practical. But I would really like to see a convention that the first line of any .yml file be a comment describing what the file is for, e.g.

# This is a BOSH deployment manifest, see http://bosh.io/ for more information
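
If the convention ever caught on, checking for it would be trivial. A toy checker, for example:

#!/usr/bin/env python3
# Warn about .yml files whose first line isn't a descriptive comment.
import os

for dirpath, dirnames, filenames in os.walk('.'):
    for filename in filenames:
        if filename.endswith(('.yml', '.yaml')):
            path = os.path.join(dirpath, filename)
            with open(path) as f:
                first = f.readline()
            if not first.startswith('#'):
                print(f'{path}: no comment saying what this file is for')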

That’s all!

Running Firefox in a cgroup (using systemd)

This blog post is very out of date. As of 2020, you can find up-to-date information about this topic in the LWN article “Resource management for the desktop”.

I’m a long time user of Firefox and it’s a pretty good browser but you know how sometimes it eats all of the memory on your computer and uses lots of CPU so the whole thing becomes completely unusable? That is incredibly annoying!

I’ve been using a build system with a web interface recently and it is really a problem there, because build logs can be quite large (40MB) and Firefox handles them really badly.

Linux has theoretically been able to limit how much CPU, RAM and IO a process can use for some time now, via the cgroups mechanism. The default behaviour, at the time of writing, is to let Firefox starve out all other processes, so that I am totally unable to kill it and have to force power-off on my computer and restart it. It would make much more sense for Linux’s scheduler to ensure that the user interface always gets some CPU time, so that I can kill programs that are going nuts, and for the out-of-memory killer to actually work properly.

There is a proposal to integrate gnome-session with systemd which I hope will solve this problem for me. In the meantime, here’s a fairly hacky way of making sure that Firefox always runs in a cgroup with a fixed amount of memory, so that it crashes when it tries to use too much RAM instead of making your computer completely unusable.

I’m using Fedora 20 right now, but probably any operating system with Linux and systemd will work the same.

First, you need to create a ‘slice’. The documentation for this stuff is quite dense, but the concept is simple: your system’s resources get divided up into slices. Slices are hierarchical, and systemd provides some predefined ones, including user.slice (for user applications) and system.slice (for system services). So I made a user-firefox.slice:

[Unit]
Description=Firefox Slice
Before=slices.target

[Slice]
MemoryAccounting=true
MemoryLimit=512M
# CPUQuota isn't available in systemd 208 (Fedora 20).
#CPUAccounting=true
#CPUQuota=25%

This should be saved as /etc/systemd/system/user-firefox.slice. Then you can run systemctl daemon-reload && systemctl restart user-firefox.slice and your slice is created with its resource limit!

You can now run a command in this slice using the systemd-run command, as root.

sudo systemd-run --slice user-firefox.slice --scope xterm

The xterm process and anything you run from it will be limited to using 512MB of RAM, and memory allocations will fail for them if more than that is used. Most programs crash when this happens because nobody really checks the result of malloc() (or they do check it, but they never tested the code path that runs if an allocation fails so it probably crashes anyway). If you want to be confident this is working, change the MemoryLimit in user-firefox.slice to 10M and run a desktop application: probably it will crash before it even starts (you need to daemon-reload and restart the .slice after you edit the file for the changes to take effect).
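
Another way to watch the limit bite is with a toy memory hog; run it inside the slice with systemd-run as above, and it should die long before it reaches 4GB (either with a MemoryError or killed outright, depending on how the kernel feels about it):

#!/usr/bin/env python3
# Allocate 10MB chunks until the 512MB slice limit kills us.
chunks = []
for i in range(400):
    chunks.append(bytearray(10 * 1024 * 1024))   # 10MB of zeroes
    print('allocated', (i + 1) * 10, 'MB')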

About the --scope argument: a ‘scope’ is basically a way of identifying one or more processes that aren’t being managed directly by systemd. By default, systemd-run would start xterm as a system service, which wouldn’t work because it would be isolated from the X server.

So now you can run Firefox in a cgroup, but it’s a bit shit because you can only do so as the ‘root’ user. You’ll find if you try to use `sudo` or `su` to become your user again that these create a new systemd user session that is outside the user-firefox.slice cgroup. You can use `systemd-cgls` to show the hierarchy of slices, and you’ll see that commands run under `sudo` or `su` show up in a new scope called something like session-c3.scope, whereas the scope created by systemd-run that is in the correct slice is called run-1234.scope.

There are various nice ways that we could go about fixing this, but today I am not going to be a nice person; instead I just wrote this Python wrapper that becomes my user and then runs Firefox from inside the same scope:

#!/usr/bin/env python3

# Hacky wrapper: drop from root back to my own user, then exec Firefox,
# so that it stays inside the scope (and cgroup) created by systemd-run.

import os
import pwd

# Look up my user account; the username is hardcoded.
user_info = pwd.getpwnam('sam')
os.setuid(user_info.pw_uid)

# Fix up $HOME, which otherwise still points at root's home directory.
env = os.environ.copy()
env['HOME'] = user_info.pw_dir

# exec() rather than spawning a child, so no extra process sticks around.
# The second argument is argv[0], which here is just a display name.
os.execle('/usr/bin/firefox', 'Firefox (tame)', env)

Now I can run:

sudo systemd-run --slice user-firefox.slice --scope ./user-firefox

This is a massive hack and don’t hold me responsible for anything bad that may come of it. Please contribute to Firefox if you can.

Viruses

I just did a bit of virus removal for a friend – one of those inventive ‘police warning’ ones. I discovered a couple of useful tricks, since it’s been a while since I had to do this.

http://pogostick.net/~pnh/ntpasswd/ hosts a tool to manipulate a Windows registry from a Linux system (Wine itself doesn’t use the real Windows registry format, so it can’t be used for this as I originally expected).

The virus did some trickery to prevent any .exe programs from running (other than some whitelisted ones), blocking access to cmd.exe, taskmgr.exe, regedit.exe or anything else you might like to use to remove the virus. I forget the mechanism it used to do this, but one simple fix is to rename regedit.exe to regedit.com.

It was then a disappointingly simple search through the registry to the classic HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run key where the file tpl_0_c.exe had done a rather poor job of hiding.

I ran Fedora 17 from a memory stick before going back into Windows, and the transition from GNOME Shell on a netbook to Windows XP on a netbook made me understand a lot better some of the design decisions that went into the shell. Windows XP is really fiddly to use on such tiny and crappy hardware, whereas the shell felt really comfortable. I’m still not at all convinced that we should be making GNOME run on tablets, but for netbooks it makes perfect sense to maximise things by default and make the buttons bigger.

Sadly some of this is coming at the expense of usability on big desktops. I’m OK with having to configure stuff to get a comfortable UI, but the recent thread where side-by-side windows were suggested as a replacement for Nautilus’ split view makes me worried that we’re losing sight of desktop users completely. Side-by-side windows on a 25″ monitor aren’t remotely comparable to a split window, though they might be on a netbook. OS X has always been dreadful on large screens; we shouldn’t take GNOME down the same path.

Little tidbits of wisdom #9117

When you are writing code with GCC on Windows, and you pass a callback to a Windows API function, your callback must have the __stdcall calling convention (this is exactly what the CALLBACK macro in the Windows headers expands to). Otherwise, something in a random part of your application quite distant from the callback will fail in a weird way, and you will spend two days learning lots of things you never wanted to know about the internals of Windows when in fact you just needed to add one word to your callback definition.

Here’s a netlabel that puts out mostly free releases of ambient and electroacoustic albums and EPs, so you can relax after a hard day of banging your head against a wall and nerdrage.

Asus motherboards and USB boot

I’m writing this mainly for Google’s benefit. If you’re trying to get an ASUS motherboard, such as the M3N78-VM I have, to boot from a memory stick, it turns out you have to do it in a weird way: turn on the PC with the memory stick plugged in, go into BIOS Setup, then into the Boot section, and then into Hard Disk Drives. Delete all the entries except USB (maybe you can just move it to the top; I didn’t try). Now you can go to the normal Boot Device Priority list, and “USB” will be an entry which you can put where you like.

This is all pretty counterintuitive, because the boot device list has “Removable media” as an entry, which is apparently useless – worse than useless, in fact, or I might have worked this out faster. Hopefully writing about it will save others from wasting time.

In other news, since I’m here writing: I finished my degree in music & music technology recently (which is why I finally have time to fix my computer). It’s been a fun ride, and I achieved a bunch of things I’d always wanted to do, like mixing for bad metal bands, writing and recording crazy dub tunes, and playing sounds too quiet for anyone to hear in a gallery with some free wine. After a summer getting some programaction done (more on that later), my girlfriend and I are going to South America for a while. We fly into Buenos Aires in September and out of Lima in January (hopefully later), and so far that is the plan. I’ve never been out of the UK for more than a few weeks before, and I’m really looking forward to finally doing some proper travelling in a very beautiful part of the world.

On Cheapness

Both of my IBM Thinkpad power adapters are now working only because of ample solder and insulating tape. I have a third, but that’s disintegrated altogether.

How, after over 100 years of development, can we still not manufacture power cables properly? You’d think the Thinkpad, especially, might come with adapters and cables that could last more than a few years.

August 17th ~19:00 UTC

Hi everyone! I thought I would write my final SoC report on my blog, so that it gets everyone excited about a really cool new API that you can’t easily use for anything at the moment, but which is going to be awesome when you can!

My original idea was basically to take the gconf-peditor widgets and make them GtkBuildable, saving a lot of wasted effort, and to do some other stuff to make GConf more bearable as well. By the time I started coding, this had become ignoring GConf entirely (which is on its way out sooner than I realised), essentially closing bug 494329, and related work.

So here are the various things I have written over the last few months:

  • GLib
    I added a pretty printer for GVariant objects, which is now in the main gvariant branch of glib.

  • GTK+
    My Gtk+ branch can read the following:

    <object class="GtkCheckButton" id="foo">
      <property name="active" setting="foo">true</property>
      ...

    and later on, you can call gtk_builder_bind_settings_full (...) and it will call a function for each of these bound properties. To avoid a GSettings dependency, nothing happens automatically yet: you have to pass g_settings_bind as the callback.

  • gsettings
    I have a branch with some small changes to GSettings, such as loading schemas from outside the system schemas database. My main contribution is that I just wrote a Windows registry backend, with .. wait for it .. full change notification support. So hopefully (I haven’t actually tried this in the real world, but it works in my tests) you can update your app’s UI as the user edits the settings in regedit, and crazy nonsense like that.

    (This is all done just with Windows API functions – the API is smart enough to tell you that a registry key has changed, but not which value changed, nor to do so asynchronously. So the code currently caches the settings in memory and works out what has changed the hard way. A month ago I wouldn’t have predicted even that to be possible 🙂)

  • gsettings-gtk
    Some of my stuff is still in its own repo at the moment. Here we have g_settings_bind() and an (incomplete) GtkBuildable version of GSettingsList. There is also a script which will read a GtkBuilder file and output a GSettings schema, based on which object properties are bound. Given that you will always need a settings schema, code to control the settings and code that is controlled by the settings, this seems like the best way to minimise duplication of effort; the default values are taken from the default values in the UI. Another option would be to generate the schema from the properties of the object it controls, but this is left as an exercise for now …

  • GLADE
    Finally, so people can actually use this stuff easily, I did some GLADE hacking. This branch has a simple GladeSettingsEditor dialog which can edit property bindings .. and even has some incomplete GtkSettingsList editing. Although the dialog works fine, it needs some more stupidity checks; at the moment, for example, you can bind checkbutton.active and set a related action at the same time.

I think this is a reasonable amount of work for 3 or so months .. especially counting the time I wasted on autotools issues and git confusion 🙂 Compared to my original proposal, some big things on the gsettings-ui side are still unfinished .. but I have done some other things not in the proposal at all, so it balances out I guess. The inspiration for this project was the fact that working with GConf sucks and you end up wasting a lot of time, so I’ll keep working on this until I can use it in my apps in place of the GConf API. My future plans include (ignoring the obvious things such as testing and merging into the main repos) supporting flag and enum values in various useful ways, converting some apps which currently have a lot of code dedicated to preferences to a much smaller amount of gsettings code (mainly for the satisfaction), and writing a tutorial entitled “How to use GSettings and GTK+ to make your life much easier”.

Thanks to ryan for sorting me out all the time, and also to tristan for help on the GLADE side, and to everyone who spends time making GNOME rock.

what a gwan?

  • I heard some talk a while back about ways to get automake to shut up. It turns out it now can, using a new silent mode.

    It involves adding the following to configure.ac:

    m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES])

    (using m4_ifdef so that the script continues to work with automake 1.10 and older)

    And passing the --enable-silent-rules flag to configure. Of course, it also involves updating your infrastructure to support automake 1.11, and prepending something like $(AM_V_GEN) to any custom Makefile rules you have .. but these are technicalities

    In fact, this autotools mythbuster document has a couple of other gems, such as how to implement non-recursive automake correctly.

  • Been using my new Thinkpad X40 for a while. It’s nice having a computer new enough to run compiz. It’s also nice having a notebook with a 2 hour battery life, so I can take it up a hill and sit coding for longer than it took me to get there. I installed the Karmic beta, which keeps warning me that the disk is about to die, presumably because it reports a load cycle count of 92 billion. I take this value with a pinch of salt.
  • Less than 24 hours before the summer of code ‘pencils down’ time. Tomorrow I will post a nice report on all the stuff I have done; it will be very exciting!

Some more jhbuild on windows

  • Discovered something that was up with my jhbuild branch: zlib installed its import library with the wrong name, so libtool didn’t build DLLs of its deps. This is now fixed in my git.
  • Finally bought a new laptop. I’ve had the same Thinkpad T20 for 3 years, and I imagine it had been around for 3 or 4 years before I got hold of it. For a while the trackpoint controller has been a bit crazy, the screen is unreliable and the case is cracked, so I found an X40 on eBay this morning for £160. I was hoping to get something better suited to working outside (black isn’t the ideal colour for use in the sun, never mind the transmissive screen) but this doesn’t seem to be a priority of laptop makers, so my odds of getting something cheap in this line were low 🙂 I also worry about the 12″ screen for coding on, but I guess spare monitors aren’t too hard to find if I am stuck without a desktop for a while. I looked at some other makes, but it’s hard to justify buying anything other than a Thinkpad – there are so many niceties I would miss, and I worry that no other make has comparable build quality. Long live the Thinkpad!

Pretty view from my windows

Springtime is here in the win32 world. First GCC 4.4.0 and now MSYS 1.0.11. There’s magic in the air. And a few nights ago I built gtk+ completely from source on Windows using jhbuild, mingw and MSYS.

I tried to repeat this yesterday using gcc 4.4.0 instead of gcc 3.4.5, and this caused many new problems 🙂 I’ve just about run out of time to fix them up for now, so I thought I would present my findings as they are. I wrote up a nice set of instructions which live at http://afuera.me.uk/jhbuild-windows/. They have worked for me with gcc 3.4.5 and are very close to working with gcc 4.4.0. There are various pitfalls, but all the ones I have come across are documented and hacked around. So take it for a spin!! .. I will happily accept more/improved -windows modulesets, fixes for the various things I’ve had to hack around, or any sort of drugs that might help me recover from the mental trauma.

I have also started work on an MS Windows registry backend for GSettings .. my SOC stuff has been neglected recently while I’ve been away, but I think with a few days of solid effort it’s all going to start really coming together. It’s going to be exciting to start converting some apps and tearing out big chunks of newly-redundant code 🙂

Autohell, part 995

  • I’m putting in a little time today on my windows branch of jhbuild. Running git now works (using a .bat file to call MSYSgit in its own shell – it’s all messy, but works fine once it’s set up).
  • I spent the past hour or so wondering why ACLOCAL_FLAGS was being ignored. I finally realised that it’s not actually honoured by aclocal at all, and never has been. autogen.sh scripts tend to execute aclocal $ACLOCAL_FLAGS, which makes it work often enough that I assumed it was meant to.

    Now I wonder whether autoreconf would accept a patch to make it honour $ACLOCAL_FLAGS, or if I should patch Pixman’s autogen.sh to call autoreconf $ACLOCAL_FLAGS .. and any others that don’t ..

  • Highlights of Glastonbury were definitely Blur, and a more obscure band called Edward II who I last saw aged about 12.

    Best wishes for everyone in Gran Canaria!

GLADEs

This week I started exploring the wild world of GLADE!! After losing a day to the fact that I hadn’t done a ‘make install’ after changing the API in some way (the plugins were still being loaded from PREFIX/lib/glade3, so all hell was subtly breaking loose), I managed to implement the following provisional UI for binding settings to properties:

GLADE with GSettings integration #1

Most properties can be bound, including many that you would never want to – but the ‘Bind to’ widget is normally hidden for these. It defaults to shown on the ‘data’ properties you’d normally bind to, such as GtkEntry.text and GtkCheckButton.active.

A bonus of this is you get ‘guards’ (where one toggle affects the sensitivity of an area of the dialog) basically for free. Just bind the ‘sensitive’ property of the container widget to the same key as the toggle, and all of its children will be disabled/enabled appropriately.
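
In today’s finished GSettings API this kind of binding is a couple of lines. For example, from Python via PyGObject (the schema ID and key name here are made up for illustration):

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gio, Gtk

# Hypothetical schema with a boolean 'enable-extras' key.
settings = Gio.Settings.new('org.example.myapp')

toggle = Gtk.CheckButton(label='Enable extras')
extras_box = Gtk.Box()   # the area of the dialog to guard

# Bind the toggle and the container's sensitivity to the same key; when
# the key is false, everything inside extras_box becomes insensitive.
settings.bind('enable-extras', toggle, 'active',
              Gio.SettingsBindFlags.DEFAULT)
settings.bind('enable-extras', extras_box, 'sensitive',
              Gio.SettingsBindFlags.DEFAULT)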

Last week I stayed a few days with my parents in Wales. Generously the sun came out and I got out a bit.

Aqueduct 3

This week I am headed to Glastonbury! I like to get to lots of festivals every year, and Glastonbury is far and away the best in the country. I often work at festivals, but I’d hate to do that at Glastonbury; there’s already not nearly enough time to see the whole of the festival. It just so happens there are several recently reformed big-name bands playing this summer, so I think it is going to be one to remember!

The summer of code, first week

Hello! I haven’t posted here so far (although I think a couple of really old blog posts of mine appeared a while ago for some reason) .. so I thought I would write a note about my plans for this summer.

GSettings & GTK
I took on the task this summer of closing bug 494329, and in the process helping Ryan to make gsettings rock extra hard. You can track my progress, if this sort of thing interests you, on Gitorious: http://gitorious.org/gsettings-gtk/. This includes hopefully adding GLADE support for gsettings bindings and converting some existing applications, so that the API is actually useful in real-world situations. I’m really excited that I will be saving people hours of future time spent adding new preferences and settings keys, which for simple situations should be as simple as adding the control widget to the dialog and writing a changed handler for the settings key it’s bound to. This, I think, makes up for the fact that I can’t show what I’m doing to my girlfriend and make her say “wow, that is cool”.

jhbuild on windows
I’ve been using jhbuild for a while to manage building dependencies on Win32. Recently I decided not to leave all of my alterations bitrotting in Launchpad but to merge what I can with the master repo, and maintain the rest in a Gitorious branch or some such. When I say “my alterations” I am mostly talking about things I didn’t do, of course; I just added some cruft on top of all the stuff John Stowers did. Anyway, it’s at the point where, with git HEAD and this ugly patch, the infrastructure basically works. Only two unit tests fail (or only one in the right build environment). In the next few days I’ll push (separately) some other niceties like a separate bootstrap moduleset for MSYS, the infamous binary moduletype, and a hack to make it possible to use MSYS-git. These will go into a separate branch.

So if you’re interested in building GTK+ really, really slowly on a substandard OS, this work should be of interest to you. I haven’t had many comments about this stuff so far, perhaps because it’s a very painful thing to try to do and you have to be a little bit crazy .. I don’t know where all this work will end up; I really just want an easy way to build new gstreamer releases for mingw and compile my gtk+ branch, but on Win32 there’s a lot more for jhbuild to do because the rest of the infrastructure is so bad.

Sorting my life out
Recent days have not seen much productivity because here in England, the clouds seem to have gone on holiday for a couple of weeks! Back to work though, and I have various things to sort out, such as the university revoking my library card even though I have another year of study left, tax forms for the GSoC money, and other time sinks ..

Hello vegetables

I just noticed I still have a LiveJournal. I think it’s going to be syndicated on planet.gnome.org (as regards my programming habit) for a while, so I thought I would add this note about all of the previous posts, which don’t make a lot of sense and were written a number of years ago.

Revitalise your day.

Last night I watched the movie “Pirates of the Caribbean 2”, which I thought was a really good film! I expected it not to be good, and certainly not as good as the first one, but it actually turned out good. My favourite part is where the guy with a squid for a face is playing the organ. I really think if I was making a film with a guy who had a squid for a face, I’d have him do the exact same thing. My only complaint is that the plot is too complicated for someone who chooses which movies to see basically by how many pirates there are in them.

Other things I have done recently: work, drink