I was recently frustrated by very slow tests and timeouts in my Java code, which would often show a similar stack trace (if they actually timed out):

    io.vertx.core.VertxException: Thread blocked
     at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
     at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
     at java.net.InetAddress.getLocalHost(InetAddress.java:1500)

The critical part is the lookupAllHostAddr call at the top of the trace. The Vertx bit is just what implements the timeout in my case. If you’re using a Mac and you see this, you’re probably having the same problem. You might also see Inet4AddressImpl in the stack trace instead of Inet6AddressImpl.

After a lot of web trawling and some help from a colleague (thanks Tim) I got to the bottom of it. I’m writing it up here, for my own benefit when I run into this again in the future, and because a lot of the existing resources weren’t clear and direct enough to solve my problem easily.

The fix

The slowness is caused by a domain name lookup that takes a few seconds each time, because for some reason your computer is asking the network about its own address, and timing out. Frankly I don’t fully understand the mechanics, but the fix was simple in my case.

First, figure out what your computer thinks its hostname is, by running hostname in the terminal. Then use the value returned from that to add lines like this to your /etc/hosts file:

    # This works around slow lookup that we sometimes see in
    # java.net.Inet6AddressImpl.lookupAllHostAddr
    127.0.0.1 Sams-MacBook-Pro.local
    ::1 Sams-MacBook-Pro.local

This provides a direct answer for both IPv4 and IPv6, avoiding the slowness. This had the nice effect of bringing my Gradle build time down from 2 minutes to just 44 seconds, including all the tests.
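A quick way to sanity-check the fix from the terminal (a rough sketch; the grep just confirms that an entry now exists for your hostname):

```shell
# The name your machine will try to resolve in InetAddress.getLocalHost()
name="$(hostname)"
echo "checking: $name"

# Verify /etc/hosts now answers for it; if nothing matches,
# the lines above still need adding
grep "$name" /etc/hosts || echo "no entry for $name yet"
```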

Apple Watch battery

Do Apple Watch owners suffer “range anxiety”, in the same manner as electric car owners? In my experience, yes, but it fades quite quickly once expectations and experience collide and settle down.

For my usage so far – albeit just a week and a half – my 42mm watch generally has more than 50% left when I hit the sack, having been on my wrist approx 06:45 to 23:00. Given that I have no qualms at all about charging it every night, that’s pretty good, and better than a lot of scare stories had led me to expect. Of course if I was going camping for a week it would be utterly useless, but I accept that it’s just not the right product for that scene.

How much do I really use it though? I don’t stare at it all day long, especially as the novelty starts to wear off. Apple are correct: its rightful place in the world is for fleeting interactions lasting just a few seconds, and I have quickly settled into that very casual relationship with it. Right now, I use it to:

  • check the time (obviously)
  • check the weather
  • see what song is playing on my iPhone when I don’t recognise it
  • see incoming messages and tweets (but very rarely to respond to them)
  • snooze/dismiss calendar alerts
  • quickly set a timer for an ad-hoc reminder
  • take incoming calls, before getting my iPhone out and switching to that – but I hope to get out of that habit
  • track my runs (much more about that in a future post)
  • keep tabs on general activity via Apple’s ‘Activity’ app with its all-knowing three circles

I’ve had one day where the battery ran out prematurely. Very prematurely, at 1830! That morning I’d gone for a 5km run using Apple’s built-in ‘Workout’ app – my first and only time with that app so far – which had knocked the battery down to 84% by 0700. That actually didn’t seem too bad for the run itself, since it was working hard keeping track of heart rate etc., but I still don’t understand how it came to expire later on, from being a mere 16% down at the start. Perhaps the battery level reporting was poor and when it said 84% it was actually much lower. Indeed, when it flaked out it was reporting 13%, so maybe calibration was poor, and maybe I’m closer to the wire than I think when I go to bed with an apparently healthy percentage left. We’ll see how future experiments pan out.


I’d decided to get an Apple Watch to develop apps against, despite never having actually liked the look of them. The allure of being able to walk into an Apple Store, try one on and get instant purchasing gratification was just too much, and I caved – and my opinion changed the moment I fondled one in the store. The quality feel is simply exceptional and it was immediately comfortable on my wrist, but the best and most surprising thing was the size.

I don’t have large wrists but the larger 42mm watch sits very nicely indeed. It is definitively not a hefty lump of technology struggling to masquerade as a watch. And I say that because I had expected it to be ungainly and oversized, based on my limited experience of Android devices.

But how have they achieved this? Wonders of electronic miniaturisation of course, with miserly power consumption allowing for a tinier battery than the competition. But that’s not the most cunning part in my opinion.

The crucial trick was to employ a rectangular screen and a user interface with a black background. This affords many subtle wins over much of the competition!

Rectangular screen

Many Android watches have gone for a circular screen – which best mirrors the classic round watch look. But it’s hard to use the circular space efficiently for displaying information other than a clock face. Apple’s rectangular screen is (comparatively) easy to fill up with well-spaced information, even though it’s tiny. To show a block of text on a circular screen means lots of wasted curved scraps of space. As a developer I’m glad I don’t have to create apps for circular screens!

Black background – black bezel

The Apple Watch user interface employs a black background throughout, which merges seamlessly into the black bezel around the screen, with the glass wrapping over both. The shine of the curved glass edge and the super-deep black of the OLED screen means it really is an invisible transition. This means that the user interface elements can run right out to the edges and corners of the screen, without requiring any padding to space them pleasingly away from those edges. The physical bezel outside the screen is that padding.

Again, this maximises the usable space whilst keeping the package small. Competing watches that have a distinct bezel have to inset UI elements and so a surprising amount of power-draining screen real-estate is wasted.

Also, with an OLED screen, a black background is directly better for power consumption, as each pixel is individually illuminated (there is no separate backlight) and black pixels consume the least power.

UI tricks to maintain the illusion

The ‘home screen’ is a hexagonal grid of circular icons, which immediately diffuses the rectilinear reality. The most cunning part of all though is how the icons smoothly shrink down to nothing as they approach the physical limits of the screen. This stops them being chopped off at the straight edge and so maintains the inky black illusion.


Elsewhere in the user interface, elements with non-black backgrounds are heavily rounded. Of course the screen edges reveal themselves when scrolling vertically through content, or sideways between glances, but those are usually fairly brief transitions, and the illusion can only go so far.

Bonus: no lugs

Unrelated to the points above, but worth mentioning for its vital impact on the sense of size: the strap connects to the case seamlessly via Apple’s custom attachment (all the better to sell you expensive replacements), which obviates the need for two lugs top and bottom, sticking out and increasing the height of the device.

Finally, some amateur prognostication

As smart phones evolved it turned out that telephony was way down on the list of real users’ activities. There will probably be a similar story with ‘smart watches’, with years of exciting evolution ahead, not just in the raw technology but in establishing a successful form and function. It’s anthropology as much as it is technology. People’s habits and expectations, and fashion too, will evolve alongside the gadgets.

Right now many vendors are trying to replicate the traditional watch in form, it being the obvious starting point, but I predict that we’ll quickly move on from that as people get used to having their digital lives reflected in miniature on their wrists. One of the clock faces that Apple provides is named “Modular” – shown in the image at the top of this post – and is a very utilitarian grid of configurable information. At first I didn’t like it, but already it has won me over and I find it striking how far I’ve already been moved away from the traditional watch. Once again, the rectangular format plays well to this direction.

On the fashion front, I notice in the mirror that the Apple Watch on my wrist is a featureless, glossy black blob on a black strap when it’s not illuminated. This is quite a departure from the aesthetic of a traditional watch, and right now I’m not especially keen on it. But before long that will be an accepted norm that doesn’t seem strange or out of place.

Who knows – maybe my thoughts here are completely out of whack with what Apple were thinking. It’ll be interesting to see how things develop.


I like to live life on the edge, and helping me to do that is my Mac's Calendar app with its predilection for silently discarding all alert settings for Google calendar entries. This stresses me out as I'm just waiting for the first time I embarrassingly fail to turn up for something. Today, and frankly not for the first time, I set out on a serious Google hunt to find a resolution. I now have one that's just about tolerable.

It seems the basic problem is that when you modify a Google calendar entry via Calendar, it then immediately syncs back from Google and wipes out any alert that you just configured. But you didn't want to go to that meeting anyway, right?

I can't claim original credit for my solution, but I can endeavour to explain it very clearly and add a few extra tidbits. The trick/workaround is to configure Google Calendar itself, via its web UI, to have a default alert for all newly created events. Then it seems that you can modify the alert successfully from within Calendar, to a different number of minutes ahead. However you can't remove it, so you have to just live with alerts for everything. To me that's far preferable to none at all.

You can change the setting for Google Calendar as follows.

  • Sign in to your Google calendar at https://www.google.com/calendar.
  • Open the settings page by selecting "Settings" from the gear dropdown in the top right.
  • Select the "Calendars" tab at the top.
  • In the list of calendars (which probably only contains your one calendar) select "Reminders and notifications".
  • In the "Event reminders" section at the top, select "Add a reminder".
  • Select "Popup" and however many minutes you want, for the default reminder.
  • Don't forget to click the Save button at the top left.

Allow me to wax philosophical for a moment with an observation about where computers and their operating systems are heading.

In the world of software development, CQRS stands for Command Query Responsibility Segregation, which in its simplest sense recognises that it's sometimes better to use a different mechanism for reading data than for writing it. See Martin Fowler's exposition of the concept if you want to know more, but this post isn't actually about software development at all!

I reckon that we're at a critical juncture in the evolution of personal computing devices and that the CQRS principle is necessarily coming to the fore to save the human race.

Tablet computers are taking the world by storm, in case you hadn't noticed. Apple could barely make enough iPad Minis for me to be able to get my wife one for Xmas, though I did manage it at the very last minute, and shortly thereafter bagged one for myself too. Frankly it's bloody brilliant, but I use it predominantly for consuming rather than creating and I'm far from alone. This is partly because the human populace is inexorably dumbing down towards being fat blobs with brains wired directly into the 'net, consuming inane banter, amusing pictures of cats and the latest celebrity news, 140 characters at a time. But that aside, it's just not very pleasant to write large quantities of text, manipulate images or perform other expansive creative works by prodding a tiny screen. Or even a big screen.

To write software, construct lengthy blog posts (ahem), edit movies, sequence the human genome or design great buildings requires a proper computer! On that basis I posit that there will always be a place for desktops and laptops, or indeed whatever replaces them but which necessarily has a non-trivial input mechanism. I genuinely worry that the market for serious computers will be increasingly neglected by the manufacturers, refocussing as they are on the mass consumer market, inevitably leading to the downfall of humankind. Perhaps I exaggerate – at least I hope so.

Now I've never used Windows 8 – indeed I shudder at having to use Windows 7 on a daily basis at work – but I understand it represents something of a chimera. It is best known for its shiny, touchy, slidey 'Metro' UI, beckoning your greasy fingers to caress its tiles. However it also allows you to fall back into the more staid world of traditional Windows where presumably you can get some proper work done, as long as you have a keyboard and a pointing device other than your finger. I understand critics are conflicted about this hybrid approach, but it's CQRS writ large and may therefore be the way forwards. One way or another, at least some people will need to create great works. I do hope to be one of them, and to have the equipment to be able to do it.

I've been doing some trivial benchmarking of Play 2 with ab (Apache Bench) just to get an idea of its raw capabilities for serving simple requests – and because it's what I always do when picking up a new framework so I know what I'm dealing with. In doing so I ran into a bit of a puzzler that had me thinking Play 2 was bugged – but my spidey sense soon kicked in and told me it was more likely to be an OS or ab issue. I had done approximately the following, using Play 2.0.1 on OS X 10.7.3, and I'm pretty certain you'll see the same results if you do this on a Mac:

> play new hello  [select option 1 - basic Scala app]
> cd hello
> play start
> ab -c 50 -n 16000 http://localhost:9000/ [Runs fine - about 3700rps]
> ab -c 50 -n 16000 http://localhost:9000/ [Gives up with timeout]
> ab -c 50 -n 16000 http://localhost:9000/ [Runs fine - about 3700rps]
> ab -c 50 -n 16000 http://localhost:9000/ [Gives up with timeout]

It took me a bit of experimentation to establish that it's about 16000 requests that work fine, followed by timeouts, in a reliable pattern. That's a suspicious number, being near enough a power of 2, which is what clued me into it being an OS limit that I was running into. I ran the same ab test (with the same result) against the built-in Apache httpd serving a static file, confirming that Play 2 probably wasn't to blame.

Sure enough, a quick Google turns up the goods. My OS was running out of the approximately 16000 ephemeral ports available and having to wait for them to be released before it could reuse them. So not Play 2 or ab's fault at all. Actually in some senses it is Play 2's fault for being so fast that I've run into this limit.

I'm not going to go into the details of what ephemeral ports really are, as others have done that perfectly well, and there is a good StackOverflow answer with some key ways to work around the problem by modifying parameters of the OS' network stack – but be careful and make sure you understand what you're doing.
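The back-of-envelope arithmetic fits the pattern nicely. On OS X the default ephemeral range is 49152–65535 (inspectable via `sysctl net.inet.ip.portrange.first` and `net.inet.ip.portrange.last`), and closed connections linger in TIME_WAIT before their ports are released. A rough sketch, using the throughput ab reported:

```shell
# Default ephemeral port range on OS X (also the IANA-recommended range)
first=49152
last=65535
ports=$((last - first + 1))   # 16384 - near enough my observed ~16000
rps=3700                      # roughly what ab reported against Play 2

echo "ephemeral ports available: $ports"
echo "pool exhausted after about $((ports / rps)) seconds at $rps req/s"
```

So the whole pool burns down in a few seconds, then everything stalls until TIME_WAIT sockets start expiring.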

However, one very simple way to work around the issue is to pass the -k option to ab, to use HTTP keepalive (assuming the server you're testing supports it). Note that this changes the nature of your test though, as you're no longer really simulating large numbers of separate connections – but for basic sanity check testing it may help. For the record `ab -c 50 -n 100000 -k http://localhost:9000/` benchmarked Play 2 at about 7000 requests per second on my 2.4GHz Core Duo MacBook.

I've been having a lot of trouble with my Ruby 1.9.1 install on Mac OS X. Mostly it works fine, but I struggle when installing gems that require native extensions. I think this is because the way my install was built causes linkage problems, perhaps due to 32 vs 64 bit issues, or due to linkage with other libraries. I'm not entirely sure what's causing the problems, but recently I decided enough was enough and tried out rvm since I've heard a lot of good things about it. I got the impression that by compiling from my own source I was stubbornly making a lot of my own trouble.

Rvm is trivial to install: it's a gem that installs some of its own executables. I did hack my PATH first, to remove /usr/local/bin (where my custom Ruby lived) so that I'd be using the stock Mac OS X Ruby for the rvm install.

> sudo gem install rvm
> rvm-install

Note that rvm-install added the following to the end of ~/.bash_profile automatically, so I could ignore the instruction it gave me about adding it myself:

    if [ -s ~/.rvm/scripts/rvm ] ; then source ~/.rvm/scripts/rvm ; fi
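As an aside, the -s test in that guard means rvm's setup script is only sourced when it actually exists and is non-empty, so the line is harmless on a machine without rvm. A quick illustration of the operator:

```shell
# [ -s FILE ] succeeds only when FILE exists and has non-zero size
touch /tmp/empty-file
echo "content" > /tmp/full-file

[ -s /tmp/empty-file ] && echo "empty: would source" || echo "empty: skipped"
[ -s /tmp/full-file ]  && echo "full: would source"  || echo "full: skipped"
```

The first line prints "empty: skipped" and the second prints "full: would source".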

I then used rvm to install a fresh version of Ruby 1.9.1:

> rvm install 1.9.1

Actually that failed with an error about libsqlite3.dylib being the wrong architecture – perhaps another hangover from my old manual installs, or a problem I'm going to have to solve sometime in the future! For now I moved the old version of that file and tried again:

> sudo mv /usr/local/lib/libsqlite3.dylib /usr/local/lib/libsqlite3.dylibOLD
> rvm install 1.9.1

And that left me with a decent ruby 1.9.1 install. Which brought me back to one of the things that I was originally frustrated by: getting NetBeans Ruby debugging working with the fast debugger. With my old install the ruby-debug-ide gem would not install, but I'm pleased to report that it does with this new setup.

However getting NetBeans to actually use my new rvm ruby required a bit of a trick. The Ruby Platform management GUI in NetBeans doesn't show you hidden folders in its file picker, so you can't navigate to the ~/.rvm/ruby-1.9.1-p243/bin/ruby file that it wants. The trick is to create a non-hidden symlink, so you can then find it from NetBeans (and it's also handy to get at your rvm files from Finder):

> ln -s ~/.rvm ~/rvm

One word of warning: once you're using an RVM Ruby install, do not use sudo for gem installs. The gems (and every part of rvm) live in ~/.rvm, so sudo is not required – in fact using sudo will knacker your gems quite badly, as it gets its PATH and permissions wrong, and you'll end up deleting a bunch of stuff to get back to a known good state. I learnt this the hard way!

I was very excited to see MacRuby 0.5 beta 1 had been announced, complete with ahead-of-time compilation via LLVM. It has been a long while since the previous update on the MacRuby blog in March, but clearly a lot of work has been taking place. At the moment this beta shows the promise of things to come, but isn't yet fit for much more than anticipatory experimentation. If you want to try the macrubyc compiler, Antonio Cangiano's blog post on the topic is a must-read.

The MacRuby notes suggest that compiled ahead of time or not, it uses LLVM for a big speed win, but my own quick experiment showed the macruby interpreter to be about 3 times slower than the standard MRI Ruby 1.9.1. This was with a single small benchmark app only though, just to prove things were working, so I can't draw conclusions. I can't pretend I wasn't a little disappointed not to see MRI blown out of the water though, even though I know it's unscientific and wrong of me!

I couldn't get a fully compiled version to produce any output, though it appeared to run without barfing, so I couldn't tell if it was really working or not. It was notable that the compiled binary was nearly 15MB, so there must be a lot of statically linked code being included to swell my couple of KB of Ruby code so much. I'm hopeful that this can be improved in the future in order to support my dream of iPhone apps being written with Ruby hooking into Cocoa. In fact more than a dream – I'm hopeful and optimistic that in the long run Apple will make Ruby a heavily promoted first-class citizen for Mac and iPhone development, sitting on top of Objective-C but hiding it for the most part. The whole world has moved on from primitive C-based languages to higher levels of abstraction and I think Apple really needs a successor to Objective-C within the next 5 years. Is MacRuby it?