Friday, April 18, 2014

The perils of micro-optimisations

A debate has been raging on my website over the use of StringBuilder.AppendFormat in my exception logger code. OK, raging is something of an exaggeration; there have been two comments in two years. But the point made by both commenters is that rather than

error.AppendLine("Application: " + Application.ProductName);

I should be using

error.AppendFormat("Application: {0}\n", Application.ProductName);

since this means I wouldn’t be using string concatenation, which is considered bad for performance reasons. My main reason for not doing anything about this is that I’m lazy, but also because the whole point of this code is that it only runs when an exception is thrown, which hopefully is a pretty rare event, so performance is not a major concern.

But then I wondered: what is the difference in performance between these two approaches? So I wrote a little test application that looks like this.

    using System;
    using System.Diagnostics;
    using System.Text;
    // Application.ProductName needs a reference to System.Windows.Forms
    using System.Windows.Forms;

    class Program
    {
      static void Main(string[] args)
      {
        for (int j = 0; j < 10; j++)
        {
          // try using AppendLine
          Console.WriteLine("AppendLine");
          StringBuilder error = new StringBuilder();
          Stopwatch sw = new Stopwatch();
          sw.Start();
          for (int i = 0; i < 1000000; i++)
          {
            error.AppendLine("Application: " + Application.ProductName);
          }
          sw.Stop();
          Console.WriteLine(sw.ElapsedMilliseconds);

          // try using AppendFormat
          Console.WriteLine("AppendFormat");
          error.Clear();

          sw.Restart();
          for (int i = 0; i < 1000000; i++)
          {
            error.AppendFormat("Application: {0}\n", Application.ProductName);
          }
          sw.Stop();
          Console.WriteLine(sw.ElapsedMilliseconds);
        }

        Console.ReadKey();
      }
    }

The results from this app, in milliseconds, are as follows (reformatted for clarity):

AppendLine 307 315 321 372 394 370 289 298 300 296
AppendFormat 366 360 362 471 353 359 354 365 365 350

So which is quicker? Well, it looks like AppendLine might be marginally quicker. But, much more importantly, who the feck cares? We are repeating each operation 1 million times and the time to execute is still less than half a second. Maybe you can pick holes in my test application, but again I would ask, who the feck cares? Either approach is really fast.

And this is the main problem with trying to optimise this kind of stuff. We can spend huge amounts of time figuring out if one approach is quicker than another, but a lot of the time it doesn’t matter. Either the code runs quickly enough using any sensible approach, or it’s hit so infrequently that even a really poor implementation will work.

Of course we should consider performance whilst writing code, but we should only use particular approaches when we know they are going to produce more performant code. A good example is the StringBuilder class. We can be pretty sure this is going to be better than using string concatenation, otherwise it wouldn’t exist in the first place. That said, if you’re concatenating two strings I really wouldn’t worry about it.
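
To make that concrete, here’s a minimal sketch (the loop and variable names are mine, purely for illustration, not from my logger) of the one case where StringBuilder clearly earns its keep: building up a big string in a loop.

    using System;
    using System.Text;

    class ConcatSketch
    {
      static void Main()
      {
        // Illustration only: += creates a brand new string every time round,
        // copying everything built so far, so this loop does O(n^2) work
        string slow = "";
        for (int i = 0; i < 10000; i++)
        {
          slow += "line " + i + Environment.NewLine;
        }

        // StringBuilder appends into an internal buffer instead, roughly O(n)
        StringBuilder fast = new StringBuilder();
        for (int i = 0; i < 10000; i++)
        {
          fast.Append("line ").Append(i).AppendLine();
        }

        Console.WriteLine(slow.Length == fast.ToString().Length); // True
      }
    }

For a one-off concatenation of two or three strings, as in the exception logger, neither loop exists and the difference disappears into the noise.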

But the key to writing efficient code is to understand what is slow on a computer. Network operations are slow. Disk access is slow. Because of that, anything that requires large amounts of memory (meaning virtual memory i.e. disk access) is slow. Twiddling bits in memory is quick. Fast code is achieved by avoiding the slow stuff and not worrying about the quick stuff.

And once you’ve written your code and found it doesn’t run as quickly as you’d hoped, don’t jump in and replace calls to AppendLine with calls to AppendFormat: profile your application! Every time I profile an application, I’m amazed at the causes of the performance bottleneck; it’s rarely where I thought it would be.

If you don’t have a profiler, use poor man’s profiling. There are also free profilers available; I quite liked the Eqatec Profiler, which seems to be available from various download sites, although it’s no longer available from Eqatec. But whatever you do, don’t get into Cargo Cult Programming.

Saturday, March 29, 2014

Land Registry data for Feb 2014

I have uploaded the Land Registry house price data to doogal.co.uk.

There’s been a lot of talk in the press recently about there being a two speed housing market, London and the rest of the UK. You can see this illustrated fairly clearly if you first look at house prices in Blackburn then compare them with prices in West London.

Prices in Blackburn can be broken into three distinct periods. Prior to 2003, they were gently rising, probably in line with wage increases. Then in 2003 things went ballistic (I’m not sure of the trigger for that, although I’d guess it was easier access to mortgages). Five years later, in 2008, things ground to a halt: sales fell off a cliff and prices have been flat-lining ever since.

But look at West London and the only similarity is that sales volumes dropped off rapidly in 2008; otherwise you’d never know that there was a financial crisis at all. Houses have been a one-way bet for nearly 20 years. You’ve got to wonder how long it can go on.

Thursday, March 27, 2014

Bluffer’s Guide to responsive design

A while back I spent a bit of time making doogal.co.uk more mobile friendly. I’d put it off for a long time, primarily because I thought it would be really hard. But actually it turned out to be not too tricky. So here is my not-so-comprehensive guide to making your site mobile friendly with responsive design.

The first thing to do is decide at what screen size your design will change from the normal design to the mobile design. My decision was that tablets should see the standard design but mobile phones should see the mobile design. So the media query I use for all my mobile CSS is

@media handheld, only screen and (max-width: 600px), only screen and (max-device-width: 600px) 
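
So, for example, the mobile-only rules simply sit inside that query (the selectors below are made up for illustration; they’re not from my actual stylesheet):

@media handheld, only screen and (max-width: 600px), only screen and (max-device-width: 600px)
{
  /* these rules only kick in on small screens */
  .sidebar { display: none; }
  .mainContent { width: auto; margin: 0; }
}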

Better menus – My standard menus were tricky to use on a small screen, but fortunately they were built using styled unordered lists, i.e. <ul></ul>, so I was able to use the Mobilemenu JavaScript library. This converts the list into a dropdown <select>, which is much more usable on mobile devices.

Hide stuff – display:none is your friend. Most websites have bits on the page that may be useful but aren’t entirely essential. On a big screen we can get away with that, but on mobile devices it’s necessary to concentrate on the essential information. So hide anything that isn’t needed. I have several tables with many columns, but each row links through to more information, so I hid several of the columns since the tables didn’t render very well. The best approach to this appears to be CSS like the following

.postcodeTable  td + td { display: none;} 
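
If you’re not familiar with that selector, td + td matches any cell that immediately follows another cell, so it hides every column except the first. The same trick chains if you want to keep more columns; e.g. to keep the first two (reusing the class from the snippet above):

/* keep the first two columns, hide the rest */
.postcodeTable td + td + td { display: none; }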

No more table layouts – We’ve been told for years not to use tables for layout of our websites. But I’ve certainly been guilty of it, since it’s always easier to build a multi-column layout using a table. But now is when the chickens come home to roost. You may want two columns on the desktop, but on a mobile device you’ll probably want the two columns stacked on top of each other, so those tables will need to be converted to divs which float on the desktop and don’t on mobile devices.
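
As a sketch of what that conversion looks like (class names invented for illustration), the desktop CSS floats the two divs side by side and the mobile media query removes the float so they stack:

/* desktop: two columns side by side */
.leftCol { float: left; width: 65%; }
.rightCol { float: left; width: 35%; }

@media handheld, only screen and (max-width: 600px), only screen and (max-device-width: 600px)
{
  /* mobile: stack the columns on top of each other */
  .leftCol, .rightCol { float: none; width: auto; }
}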

Server-side control – The solutions so far have all been client-side. I think this is generally the easiest way to deal with the issue, but there is certainly a good argument for saying content shouldn’t be pushed to the client if the client will never actually display it. If you’re using PHP on the back-end, the Mobile Detect Library can be used to tailor your HTML before it leaves the server. One place where you may need to do this is adverts: Google’s T&Cs say you can’t hide adverts, so using display:none for them is probably a bad idea.
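
Something along these lines should do it (a sketch only, assuming the library’s standard isMobile/isTablet checks; the variable name is hypothetical):

<?php
require_once 'Mobile_Detect.php';
$detect = new Mobile_Detect;

// tablets get the full page, matching the CSS breakpoint decision above
if ($detect->isMobile() && !$detect->isTablet()) {
  // phone: don't send the adverts or other non-essential markup at all
} else {
  echo $nonEssentialMarkup; // hypothetical variable holding the extras
}
?>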

Sunday, March 09, 2014

The beginning of the end of Metastorm BPM

It looks like development of Metastorm BPM has, if not stopped completely, at least slowed down. So I thought I’d write down some thoughts on what was a big part of my professional career. If you want the full history, this isn’t it; have a look at Jerome’s book.

For me, it all started sometime in 1997. I was writing software for a firm called Bacon and Woodrow that sold actuarial software. It wasn’t really my kind of thing, but it was my first proper job, something to add to the CV. I got a call from a former colleague, Richard Kluczynski, who’d gone off to write his own software, then got a gig with a software house called Sysgenics. Before the days of mobile phones (or at least before I had one), I remember having to wander the streets of Epsom to find a phone box to call him back to discuss properly, away from the office. It sounded interesting but it wasn’t the right time for me as we were trying to get the first Windows version of our software out the door.

A few weeks or months later I got a call from a recruiter asking if I was looking for a new job and telling me about a company called Sysgenics. We were still trying to get our Windows version out the door, but I guess getting called about the same company twice piqued my interest. I remember looking at their website and getting pretty excited about the screenshots of some kind of graphical tool for building workflows called e-work. Before I knew it, I was in Wimbledon for an interview with Steve Brown and Jerome Pearce. And an hour later I was in the pub. This was obviously a great place to work!

And it was. We had no customers but some VC money to keep us ticking over. We were using the latest technologies (Delphi and MS Access!). Having no customers meant we could build stuff and break stuff without having to worry too much about upsetting people using the software, so we were always making a lot of progress.

Before long we were bought out by Metastorm. Looking back, that was actually a bit weird. Initially they seemed to be some massive software company, but looking closer, the one product they sold, InForms, was clearly coming towards the end of its life, since it was tied into the dying Novell Groupware. But they had what we needed, money, and I guess we had what they needed, some modern software to sell.

The years flew by, six in fact. By that point we had quite a few customers, and the little startup was a proper software house. There was structure and rules, forms to fill in; basically not really my scene anymore. So I flew the nest to work for a financial software house in central London. But a couple of years after that, Jerome asked me to join his little band of Metastorm consultants. So I built a shed and got to work building stuff on top of Metastorm e-work. Metastorm e-work became Metastorm BPM, but we carried on calling it e-work… Metastorm started rewriting it from scratch in .NET, releasing it as version 9 (missing out version 8, some kind of off-by-one error I think).

I’d probably still be working in my shed for Jerome had the financial crisis not hit, causing major stress to our main client, who then couldn’t pay us. So I went to Croydon, regretted it almost immediately and started working for myself. Back to the shed…

Then I started doing some work for Steve’s new company, Business Optix. That eventually became a full-time job and is where I am now. Meanwhile Metastorm got bought out by OpenText. Given the price they paid, you’d think anyone who’d taken up their share options would have done well out of the deal, but you’d be wrong. Somebody must have made a nice chunk of money out of it, but it wasn’t the people who’d originally developed the software (this isn’t bitterness on my part; I never exercised the option on my shares).

But not content with one BPM tool, OpenText also bought Cordys and Global 360. I guess the writing was on the wall for two of those products at that point; why would a company want three BPM tools? Anyway, it looks like Metastorm BPM is one of the victims. You have to wonder why OpenText bought them in the first place, presumably not for the customer base, already fed up with having to rewrite their processes for version 9 and now fuming that they need to rewrite again in some other system.

Saturday, March 01, 2014

Land Registry sales data uploaded to doogal.co.uk

I’ve uploaded the latest Land Registry sales data, covering sales for January 2014, to my website. The good or bad news, depending on your point of view, is that prices continue to creep up. 

Thursday, January 30, 2014

Land Registry December 2013 data on doogal.co.uk

I’ve imported the latest Land Registry data to doogal.co.uk. As ever, little can be inferred from a single month of data, but draw your own conclusions. And let me know if you’d like to see this data presented in some other way.