Tuesday, June 30, 2009

Day 18 – Bell, Doogal Bell

So little happened today that I actually answered the door to the Jehovah’s Witnesses and tried to convert them to Atheism. I did have a call from an agent wondering if I’d be interested in a contract position in Sweden and an email asking if I spoke French and would be interested in a job in Paris. Being an international man of hacking does appeal I guess, flying from nation to nation with only a laptop loaded up with Visual Studio.

Improving the Local Search .NET call

Following on from my post about using Google Local Search from C#, I thought I’d try to improve it. Deserializing the JSON data ended up with some ugly typecasts and manipulation of Dictionary objects. The first thing to notice is that the JavaScriptSerializer class has a Deserialize<T> method, so all that is needed is a class to hold the returned data. Here’s a simple implementation of this.

  public class Results
  {
    public double lat;
    public double lng;
  }

  public class ResponseData
  {
    public Results[] results;
  }

  public class LocalSearchData
  {
    public ResponseData responseData;
  }

OK, I know, there’s a bit of a lack of OO encapsulation going on there, but it seems like public fields with names matching the data returned in the JSON are required. They can be replaced with properties, but these must have public getters and setters, so this doesn’t really buy you much except to stop FxCop moaning at you.
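For what it’s worth, here’s a rough sketch of what the property-based version of one of these classes might look like, using auto-implemented properties (the names still have to match the JSON fields):

  public class Results
  {
    // Auto-implemented properties also work with JavaScriptSerializer,
    // but the names still need to match the JSON
    public double lat { get; set; }
    public double lng { get; set; }
  }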

Then the deserializing code looks much nicer

        LocalSearchData searchData = serializer.Deserialize<LocalSearchData>(response);
        latitude = searchData.responseData.results[0].lat;
        longitude = searchData.responseData.results[0].lng;

This still isn’t perfect. We have to use the same names as used in the JSON, which doesn’t really match up with .NET naming conventions, and we have a class hierarchy that doesn’t really serve a purpose. It looks like the JavaScriptConverter class might help out here, but that’s something to look at another day. Another alternative might be to just use these classes as an intermediate step, moving the data into another class that has a better interface.
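As a rough sketch of that last idea, the deserialization classes could be kept as an implementation detail and the interesting bits copied into a small, hypothetical GeocodeResult class with more conventional names:

  public class GeocodeResult
  {
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    // Copy the data out of the classes used purely for deserialization
    public static GeocodeResult FromLocalSearchData(LocalSearchData data)
    {
      Results first = data.responseData.results[0];
      return new GeocodeResult { Latitude = first.lat, Longitude = first.lng };
    }
  }

The caller then only ever sees GeocodeResult and the JSON-shaped classes never leak out.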

Monday, June 29, 2009

Day 17 – A sweaty interview

Croydon is “Manhattan as imagined by Le Corbusier” apparently, or so my mate Jethro says. I’d agree to an extent, except I’d have to say it’s Manhattan without the glamour. And today it was a particularly sweaty unglamorous place.

I was faced with a technical test, which I quite liked since I could do it. But as is always the case with these kinds of tests, there are a hundred ways to implement a solution, so it all depends on whether what I’ve done resonates with the person looking at the code.

And then onto an interview, where the interviewers basically said they are looking for someone who can be a team leader, top quality developer, project manager, software architect and product manager. Oh and they don’t want to pay very much. Of course I said I could do all of these things, although I wanted to shout “you will never find anybody who meets all those requirements with the money you’ve got to offer!”

Local Search web service

All these cool pieces of AJAX code are great but what if you want to use them from some server-side code? Local Search doesn’t provide any kind of web service API as far as I’m aware, but all AJAX calls eventually have to resolve down to simple HTTP calls. So it should be possible to use them from a server-side piece of code. To test out this theory, I thought I’d see if I could write some C# code to use Google’s Local Search AJAX API to get the latitude and longitude for a postcode as if it was a web service call.

So to see what is happening under the hood, we need to fire up Fiddler and use a page that uses the Local Search API, like this one. If we issue a query using that page, we can see the URL used is something like this.

http://www.google.com/uds/GlocalSearch?callback=google.search.LocalSearch.RawCompletion&context=0&lstkp=0&rsz=small&hl=en-GB&gss=.uk&sig=b211652959f1f93330a3286c1a81eab6&q=KT1%203EG%2C%20UK&sll=37.77916,-122.42009&gll=37747397,-122451853,37810922,-122388328&llsep=500,500&key=ABQIAAAAjtZCgAx5i04BiZDO6HlxhRQUdBDpWCOMRMbgTcqadX0jQ8HOERSxXxhk24TIBUpivovAKLrnpSio9w&v=1.0&nocache=1246220526894

The returned data is in JSON format; click on that link in a browser and you should see it. Moving to the .NET world, we can make the same call from C#, like this

      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        "http://www.google.com/uds/GlocalSearch?callback=google.search.LocalSearch.RawCompletion&context=0&lstkp=0&rsz=small&hl=en-GB&gss=.uk&sig=b211652959f1f93330a3286c1a81eab6&q=KT1%203EG%2C%20UK&sll=37.77916,-122.42009&gll=37747397,-122451853,37810922,-122388328&llsep=500,500&key=ABQIAAAAjtZCgAx5i04BiZDO6HlxhRQUdBDpWCOMRMbgTcqadX0jQ8HOERSxXxhk24TIBUpivovAKLrnpSio9w&v=1.0&nocache=1246220526894");
      WebResponse resp = req.GetResponse();
      Stream respStream = resp.GetResponseStream();
      StreamReader reader = new StreamReader(respStream);
      string response = reader.ReadToEnd();
      MessageBox.Show(response);

This works as well, which is good since it suggests Google aren’t doing anything to stop people using the API from ‘browsers’ that aren’t really browsers. The next thing to figure out is which bits of the URL are actually required. After removing all the parameters that aren’t needed, we are left with

http://www.google.com/uds/GlocalSearch?q=KT1%203EG%2C%20UK&v=1.0

I was somewhat surprised at how few of the parameters are actually required for the call to still work. Even the user’s API key isn’t needed. Of course, since this is completely undocumented, this may change in the future. In fact last time I tried to do this, I’m fairly certain it was a lot harder to get the HTTP call to work from .NET.

So now we know what URL is required, we just need to parse the returned JSON data into something more .NET-friendly. Fortunately .NET 3.5 provides the JavaScriptSerializer class to serialize and deserialize JSON strings. Putting it all together, we get a fairly simple implementation

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Web.Script.Serialization;

namespace LocalSearch
{
  public static class Postcode
  {
    public static void Geocode(string postcode, out double latitude, out double longitude)
    {
      // Escape the postcode in case it contains spaces or other reserved characters
      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        string.Format("http://www.google.com/uds/GlocalSearch?q={0}%2C%20UK&v=1.0",
          Uri.EscapeDataString(postcode)));
      using (WebResponse resp = req.GetResponse())
      using (Stream respStream = resp.GetResponseStream())
      using (StreamReader reader = new StreamReader(respStream))
      {
        string response = reader.ReadToEnd();
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        // Walk down through the JSON structure: responseData -> results[0] -> lat/lng
        Dictionary<string, object> deserialized =
          (Dictionary<string, object>)serializer.DeserializeObject(response);
        Dictionary<string, object> responseData =
          (Dictionary<string, object>)deserialized["responseData"];
        object[] results = (object[])responseData["results"];
        Dictionary<string, object> resultsData =
          (Dictionary<string, object>)results[0];

        latitude = Convert.ToDouble(resultsData["lat"]);
        longitude = Convert.ToDouble(resultsData["lng"]);
      }
    }
  }
}

This could be improved by using Deserialize<T> instead of DeserializeObject but that would involve writing some classes to hold the returned JSON data I think. I might look at that some other time.

Of course this provides exactly the same functionality as my StreetMap screen scraping code of many moons ago, so what’s the point? Really just to show a fairly generic method of using AJAX calls from .NET server-side code.

Friday, June 26, 2009

Day 14 – my first real interview

To be frank I thought this would be a complete train crash of an interview. I’d been asked to do a presentation on some of the work I’d done during my last job and I generally don’t feel overly confident doing presentations. So I guess it was a success because my presentation wasn’t awful. It was helped by the fact that the interviewer seemed to be checking his email most of the time I was talking.

The rest of the interview seemed to go OK, although my interview assessing antennae seem to be a little off at the moment. I met the agent afterwards and he said because there are so many candidates going after jobs, employers are getting much choosier about who they take on. Whereas in the past being a 75% match may have been enough, now they only accept a 100% match. Which may explain my incorrect assumptions about how well interviews have gone so far.

Thursday, June 25, 2009

Day 13 – where my hopes are dashed

So today I had a telephone interview with a potential employer and things seemed to be going pretty well. I think you can generally tell how well an interview is going. Sometimes it’s clear the two of you are not really on the same wavelength, but other times it’s like you’re talking to someone who you could imagine as a friend, or at least a work mate. Then he mentioned a company I’ve done some work for recently and I thought things were going even better. In the past, quite a few of my jobs have come not necessarily from my knowledge or experience but from having some kind of connection with the company I’m interviewing with. At Metastorm, one of my former colleagues worked there. At APT, I knew the person who would become my boss. At Process Mapping, the boss was a former work mate. Of course, I’m sure that’s not the only reason I got those jobs, but it certainly didn’t reduce my chances.

The thing is, having that connection made me think I’d at least get a face-to-face interview. But then the agency told me they’d decided not to go any further and I was somewhat disappointed. If it had been one of those awkward interviews where there is no common ground I’d have been cool with it, but now I’m sitting here wondering what I missed, what I said that came across badly and how I can rectify it. But then of course the next interview will be with someone completely different, who has completely different requirements and a completely different perspective on who they are looking for. So it’s a different game, one where you find out the rules after the game has finished.

Wednesday, June 24, 2009

Day 12 – where I go to the Job Centre

I dunno, I was kind of expecting the Job Centre to be full of terrifying men shouting ‘gisajob!’, but it was actually quite a pleasant experience: mostly normal people looking for jobs, and the staff were perfectly friendly, not the intimidating bunch I’d been led to expect. There were some burly security guards, presumably there in case it all kicks off. In fact I’d imagine they are probably hoping it does kick off, so at least they’ve got something to do.

I had a telephone interview which seemed to go OK, but I haven’t heard anything back from them yet.

Postcode geocoding in ASP.NET with live update

I put up an example of postcode geocoding using the Google Local Search AJAX API and somebody asked if it was possible to populate an ASP.NET GridView with the data in real time. So since I currently have some spare time, I thought I’d give it a go. First, a disclaimer: I have no idea if this breaks the terms and conditions for Local Search usage, so check before doing it yourself.

First, let’s create a table to store the postcode information

CREATE TABLE Postcodes(
    Postcode nchar(10) NOT NULL,
    Latitude float NOT NULL,
    Longitude float NOT NULL,
 CONSTRAINT PK_Postcodes PRIMARY KEY CLUSTERED 
 (
    Postcode ASC
 )
)

I won’t post all the code here, but the basic process goes like this

  • When the user presses the ‘Get lat/long’ button, execute the Local Search query
  • When/if that returns the latitude and longitude for the postcode, send the results off, using an XMLHttpRequest object, to a generic handler that puts the data into the Postcodes table (a rough sketch of such a handler is shown below)
  • When that call returns, update the GridView, which sits in an UpdatePanel, so only the grid gets updated.
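As a rough sketch, the generic handler might look something like the code below. The handler name, connection string name and request parameter names are all assumptions for illustration; the real thing needs to match whatever the client-side XMLHttpRequest call actually sends (and would probably want to handle duplicate postcodes rather than relying on the primary key to reject them).

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

namespace LocalSearch
{
  // Hypothetical generic handler (e.g. SavePostcode.ashx) that the client-side
  // XMLHttpRequest call posts the geocoded postcode to
  public class SavePostcode : IHttpHandler
  {
    public void ProcessRequest(HttpContext context)
    {
      string postcode = context.Request["postcode"];
      double latitude = double.Parse(context.Request["lat"]);
      double longitude = double.Parse(context.Request["lng"]);

      string connectionString =
        ConfigurationManager.ConnectionStrings["Postcodes"].ConnectionString;
      using (SqlConnection conn = new SqlConnection(connectionString))
      using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO Postcodes (Postcode, Latitude, Longitude) " +
        "VALUES (@Postcode, @Latitude, @Longitude)", conn))
      {
        cmd.Parameters.AddWithValue("@Postcode", postcode);
        cmd.Parameters.AddWithValue("@Latitude", latitude);
        cmd.Parameters.AddWithValue("@Longitude", longitude);
        conn.Open();
        cmd.ExecuteNonQuery();
      }

      // Let the calling JavaScript know the insert worked
      context.Response.ContentType = "text/plain";
      context.Response.Write("OK");
    }

    public bool IsReusable
    {
      get { return false; }
    }
  }
}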

Anyway, this is what it looks like (yeah, I know, not too pretty) and you can download the source here.

Postcode geocoding in ASP.NET

Tuesday, June 23, 2009

Day 11 – where I wonder how List<T> is implemented

Just over 5 years ago I had an interview at APT and, after a mammoth interview (the longest I’ve ever experienced), the interviewer told me he thought I’d done well but was a little disappointed that I didn’t know how the TList class in Delphi was implemented (an internal array as it happens, as opposed to a linked list). Fast forward to now and I was having a telephone interview today where the discussion turned to data structures and the difference between an array and a linked list. We discussed how a list like this could be improved to provide faster random access. I probably didn’t do too well on this, since the internals of collection classes are generally not something I’ve needed to worry about too much, but I probably should think about them some more since they do seem to turn up in interviews a lot.

But after all this talk of linked lists, how is the List<T> class (or the ArrayList class for that matter) actually implemented? Well, after firing up Reflector I discovered that, just like in Delphi, these classes actually use an array internally. I can only presume the overhead of having to resize the array when more items are added is outweighed by the memory fragmentation and slow random access of a linked list. Of course if you really want a linked list, there is a LinkedList<T> class available.
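Just to illustrate the idea, here’s a very stripped-down, hypothetical version of a growable-array list. The real List<T> does considerably more, but the doubling-the-internal-array trick is the interesting bit:

  public class SimpleList<T>
  {
    // Items live in an internal array, just like List<T> (and Delphi's TList)
    private T[] items = new T[4];
    private int count;

    public int Count
    {
      get { return count; }
    }

    // Random access is a straight array index, so it's fast
    public T this[int index]
    {
      get { return items[index]; }
    }

    public void Add(T item)
    {
      // When the array is full, double it and copy the old items across.
      // The occasional copy is the price paid for fast random access
      if (count == items.Length)
      {
        T[] bigger = new T[items.Length * 2];
        Array.Copy(items, bigger, count);
        items = bigger;
      }
      items[count++] = item;
    }
  }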

Monday, June 22, 2009

Looking for work - Day 10

When I started this series of blog posts I was thinking it might be interesting because we are meant to be in the middle of a big recession. But as far as I can tell there seem to be plenty of jobs out there for people like myself. It could just be down to my impressive CV (ahem) or my tactic of throwing enough crap at the wall that some of it is bound to stick, or it could just be there are still a good number of jobs available. I’ve had another two telephone interviews confirmed today, so I’ve now got three telephone interviews and a face-to-face interview lined up. I’m guessing that employers will probably interview the same number of people as in the past; even if there are more CVs available to look at, they still won’t want to spend all day interviewing people, so my chances of actually being offered a job from one of these interviews are probably the same as they were in the past. This is just conjecture on my part of course, but it certainly makes me feel better.

Talking of my blanket bombing technique, here are the sites I have submitted my CV to. It seems that different recruiters use different websites to find candidates, so it is probably worthwhile submitting a CV to as many as possible

  • jobserve.co.uk
  • totaljobs.com
  • jobsite.co.uk
  • monster.co.uk
  • planetrecruit.com, jobsearch.co.uk, gisajob.com (I think these three share the same back end)
  • cwjobs.co.uk
  • technojobs.co.uk
  • jobs.ac.uk

Friday, June 19, 2009

Day 7 – where nothing happens

A few phone calls from agents, but nothing of note.

Running a client-side script when a form segment loads

Form segments don’t provide a way to run a client script when the form segment is loaded. This can be somewhat limiting, but it can be solved quite easily. Add a label to the form segment and set the label’s caption to

<script type="text/javascript">window.attachEvent("onload", Setup);</script> 

Then add a script to your form segment to do whatever you want

function Setup()
{
    alert("hello");
}

The problem with flags in Metastorm BPM

Flags are a great way to pass data between processes or to pass data from an external application to Metastorm BPM. There are several ways to raise flags: via the eRaiseFlag executable, using the Raise Flag ActiveX control, via the engine’s XML interface or through the engine’s COM interface. FreeFlow provides a wrapper around the last two approaches. Usage is pretty simple

      Connection conn = new Connection();
      // use to switch between TP and COM
      conn.RaiseFlagBy = RaiseFlagBy.TransactionProtocol;
      conn.RaiseFlag("New Data", new string[] {"some data", "1"});

As an aside, eagle eyed C# coders may be wondering why the last parameter of RaiseFlag doesn’t use the params keyword to simplify usage even further. The problem is there are several overloads of this method (taking user name, password, folder ID etc) so adding params would confuse the compiler since there would be multiple matches for a call to RaiseFlag. One solution would be to give the method a different name but this would make the API less discoverable since different versions of the same method would have different names. Another solution would be to just have one version of the method, with all the required parameters, but that wouldn’t really make life any simpler since the simple usage above would require passing in all parameters. API design isn’t an exact science and sometimes compromises are required.
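To see the problem, here’s a sketch with a couple of hypothetical overloads (these aren’t FreeFlow’s real signatures):

  public class FlagRaiser
  {
    // Hypothetical overloads, for illustration only
    public void RaiseFlag(string flagName, params string[] flagData) { }
    public void RaiseFlag(string flagName, string userName, string password,
      params string[] flagData) { }
  }

Now a call like raiser.RaiseFlag("New Data", "some data", "1") is meant to pass two items of flag data, but it also matches the second overload, with "some data" treated as the user name and "1" as the password, so it can quietly end up bound to the method you didn’t intend.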

But back to the main point of this post. Say we are going to create a new folder in Metastorm BPM using the flag data passed to populate the custom variables. Typically this might happen when somebody fills in an online ASP.NET form and we want to kick off some kind of process in Metastorm based on that data. So we’ll have a flagged creation action in the process and add some code to read the data, like so.

%tText:=%Session.FlagData[1]
%iNumber:=%Session.FlagData[2]

Which works fine. OK, say we change the data passed in to

conn.RaiseFlag("New Data", new string[] {"some\tdata", "1"});

Now when we run the code we don’t get a folder created. Instead we get an error in the Designer Log saying “'%inumber' failed while evaluating expression '%iNumber:=data' Error setting value for custom folder field 'inumber'”. This is because flag data passed to the engine is tab-delimited, so if your actual data contains tabs, everything gets screwed up.

We have two problems to solve here. First, how do we handle data with tabs in it, since we probably can’t stop tabs being entered by the user of the ASP.NET form? Second, how do we deal with any kind of failure to parse the data passed? This is more of a problem, because currently if we fail to parse the data we lose it all, since the folder never gets created.

So, tackling the second problem first, we want to do as little work as possible in the flagged creation action. We will just assign the flag data to a temporary memo variable, since the flag data will not be available outside the flagged action.

%xmData:=%Session.FlagData 

You may find this doesn’t work in earlier versions; at some point only individual items of flag data could be accessed, but it looks to have been fixed in version 7.6. If it doesn’t work, you’ll need to manually combine each piece of flag data.

Next in the Parse conditional action (with no condition), we attempt to assign the flag data to the variables, like so

%tText:=%xmData[1]
%iNumber:=%xmData[2]

This will still fail and will stay at the ‘Got Data’ stage, but at least we haven’t lost the data. The Edit action can then be used to manually fix up the data and get the folder on its way.

So back to the first problem, handling tabs in flag data. Really the only solution is to use a different delimiter when raising the flag. No delimiter is perfect, since potentially any character could turn up in the data, but %CHR(160) has worked well for us in the past. Another solution might be to pass your data in some other format such as XML. That would be more complicated but more robust.
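As a rough sketch of what the delimiter approach might look like on the C# side, reusing the conn object from the earlier snippet (the flagged action then has to split the single item of flag data back into its individual values using the same character):

      // Join the fields ourselves with character 160 (a non-breaking space), which is
      // unlikely to turn up in user-entered data, and pass a single item of flag data
      char delimiter = (char)160;
      string[] fields = new string[] { "some\tdata", "1" };
      string combined = string.Join(delimiter.ToString(), fields);
      conn.RaiseFlag("New Data", new string[] { combined });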

Thursday, June 18, 2009

Day 6 – where I take a test

I spent the morning taking a programming test, sent it off to the agent (the second in a chain of agents I’ve had to pass through to get my CV to the actual employer) and he told me they’d probably get back to me in a week. It seems like no-one is in too much of a hurry to actually employ people. My one interview is over a week away, although I’ve now got a telephone interview lined up for next week which will be followed by a real interview the next day if the phone interview goes well.

I did like the agent’s story of how one person who took this test (for a C# position) actually implemented his solution in Perl… It was quicker apparently. I’m sure he probably solved the problem in 3 incomprehensible lines of code, but he didn’t get the job. 

Wednesday, June 17, 2009

Looking for work – day 5

Perhaps the job market isn’t as bad as people are saying, I’ve managed to get myself an interview. Admittedly not the best paying job in the world, but it’s enough to cover the bills. And I’ve still got a few other potentials looking positive.

And going off at a tangent, I’ve got an idea for an application somebody could write. I’d do it myself, but I don’t have time right now. Logging into all these job websites to update my CV is a right pain. Keeping track of all the jobs I’ve applied for and all the conversations I’ve had with agents is also hard work. So an application that helps manage all these things would be most useful. It should store all my login details for the job websites, let me search across all the sites and update my CV on all of them in one action. It should also be able to keep track of job applications, link them to emails in Outlook and so on.

Tuesday, June 16, 2009

Looking for work – day 4

It occurs to me that having Metastorm BPM plastered all over my CV isn’t really helping in my job search. If a Metastorm job does come up then I’d hope I’d be in a good position to get it, but the fact is there aren’t any Metastorm jobs around at the moment. And most agencies (and potential employers for that matter) don’t have a clue what it is so they probably drop my CV in the bin, since they’ve almost certainly got a big pile of CVs that match their requirements more closely. To be fair, I’d do the same thing myself. So more CV tweaking is required I think. It seems like uploading a new CV to the job websites might trigger an email to agents anyway, so it may make sense to keep updating it, to keep myself at the front of the queue.

Agency quote of the day - “Winsocks are also important as well”

Monday, June 15, 2009

Looking for work – day 3

Things I learned today

  • There are still some jobs available for hackers like myself
  • Some of them are potentially quite interesting. In fact one actually seemed pretty exciting, not necessarily for the work itself but for the company who do something I think could change the face of one of our industries, in a good way.
  • Apparently having an A in Maths A-Level is considered a good thing, even though I took the exam 19 years ago and have forgotten everything about it. Take heed young uns!

Some stats

  • Number of agents I’ve talked to – lots
  • Number of jobs I’ve put myself forward for – several
  • Number of interviews lined up – none…

Sunday, June 14, 2009

Looking for work – day 2

I dunno, but this may be interesting to people, so I’ve decided to blog about my attempts to find a new job. Given the current economic situation, I suspect it may take a while and, as my life may start to mirror ‘Fun with Dick and Jane’ (a film I ironically saw just a couple of months ago at the ‘Baltimore or bust’ conference), it could get entertaining.

It will be interesting for me, for at least one reason: I’ve never had to look for a job during a recession. I was having a wild time at university during the last one; it all passed me by in a drunken haze.

So what have my first moves been? I’ve uploaded my latest CV to Jobserve, Jobsite and TotalJobs and applied for a few jobs. I’ve also asked a few people on LinkedIn to recommend me. Being a Sunday, I obviously haven’t heard back from any of the agencies, but hopefully tomorrow things will start to kick off for real.

Debugging authentication scripts in Metastorm BPM

I talked previously about how to simplify editing authentication scripts in Metastorm BPM by using the FreeFlow Administrator. What I didn’t talk about was how to debug what is actually going on in your authentication scripts. There isn’t a way to debug these scripts from within Visual Studio, or at least not that I’m aware of, so the only way to debug them is to write some kind of trace statements. This can help clarify which scripts are being executed and what path is being taken through the scripts.

I guess you could just write to a text file but the way I’ve done this in the past is to use the createLogEntry function. This function is provided in most of the authentication scripts supplied by Metastorm and usage of it is as follows.

    var err = new Object; 
    err.eDetectedByMethod = "eLogin()";
    err.eDetails = "logging in via web";
    createLogEntry( err );

This will write an entry to the eLog table. Typically I’ll add something similar to the above code as the first lines in each eLogin method in each authentication script, to see which script is actually being called, then add it where necessary to see why scripts are failing.

Saturday, June 13, 2009

Looking for work

I am in the market for a job. Ideally I’m after a .NET development role in London, but I’ve also got a lot of Metastorm BPM experience so anything in that area would suit me. You can find my CV here.

Thursday, June 11, 2009

Paste from Visual Studio naughtiness

There’s a nice add-in for Windows Live Writer called Paste from Visual Studio that lets you copy and paste syntax highlighted code from Visual Studio into Live Writer. I’ve used it for a while and had no problems with it. But the other day I looked at the source of the generated HTML and was a little disappointed to find a link to the author’s website with no link text, meaning it never shows up on the page.

I guess it’s OK wanting to get some links to his site, but to do it in such a way as to be invisible to the end user is a little underhand. I’m not sure what a search engine would make of a link like that. Very possibly it will be considered some kind of black hat SEO and my pages will be downgraded as a result.

So I guess I’ll have to re-implement the plug-in myself. The guy does kindly provide the source code on his website so it shouldn’t be too difficult to get my own version built. Or I guess I can go back to using the excellent C# code format site, which does the right thing by just adding a comment to the generated HTML that includes the URL for the site.

Wednesday, June 10, 2009

Using a different SAP to login to Metastorm BPM via FreeFlow

Something that comes up quite frequently is how to use FreeFlow to log in to Metastorm BPM using something other than the default authentication mechanism. This typically happens when SSO has been installed and, although it’s possible to use FreeFlow with SSO, it can be a hassle to set up, so the user wants to use the standard eUser authentication. This is pretty easy, using the SAP property as below. Note that the SAP property is zero-based, so if the eUser script is second in your list of authentication scripts, its value needs to be 1.

  class Program
  {
    static void Main(string[] args)
    {
      Connection conn = new Connection();
      conn.HttpServer = "NEWDOOGAL";
      conn.Engine = "NEWDOOGAL";
      conn.ConnectionType = ConnectionType.HTTP;
      conn.SAP = 1; // zero-based: use the second authentication script in the list
      conn.LogOn("Doogal", "");
      Console.WriteLine("Session : " + conn.SessionId);
      Console.ReadLine();
    }
  }

There have been some reports of this not working in e-Work version 6. I suspect this is a bug on the Metastorm end, but since version 6 isn’t supported any more, I haven’t investigated too much. I suggest upgrading, it’s not too painful! And if you need help upgrading, you know who to call.

Thursday, June 04, 2009

IIS Search Engine Optimization Toolkit

I love Google Webmaster Tools since it tells me things I’m doing wrong on the websites I maintain (in fact I love any tools that tell me what I’m doing wrong: FxCop, the HTML editor in Visual Studio, compilers… I’m sure a psychiatrist would draw some scary conclusions from this admission). But the problem with Google Webmaster Tools is that it isn’t very responsive. If I fix an issue, I have to wait for the Google bot to crawl that page again before I know it’s been fixed.

So I was pleased to be directed to the Search Engine Optimization Toolkit by ScottGu’s post. It’s only in beta but it’s already a powerful tool. It will tell you about lots of potential problems on your site, such as missing alt tags on images, missing description meta tags, broken links, no h1 tag etc… I’ve spent the day trying it out with the Process Mapping site and have cleaned up a lot of potential problems that no other tools had told me about. Only time will tell if this improves our ranking a lot, but I’m guessing even in the worst case it won’t cause our ranking to drop, since all the suggestions seem perfectly sensible.

It’s simple to install, requires nothing special on the website to get it working and provides instant feedback, so I’m sold on it. The only possible downside is that it will only install on IIS 7 I guess (although that doesn’t mean your website needs to be running IIS 7, just the machine running the toolkit). 

Monday, June 01, 2009

The search for some page rank

I played around with Bing today to see if it’s any good. I was pleased to see that a search for Metastorm brought up our forum and the FreeFlow web page on the first page, so clearly Bing is a complete success. Or perhaps not: the image search brought up a photo of Metastorm’s former CEO, who hasn’t been there for many years.

But it did make me go off and check the same search against some other search engines (if you try this yourself, you may well get different results, I’d be interested to know if they are wildly different). Yahoo brings up the Process Mapping website on the first page, Cuil has Jerome’s book on the first page and puts the Process Mapping website on the second page and Ask has the forum on the second page (although for some reason they think I’m Dutch).

Finally I checked the other search engine that you may have heard of. OK, it’s the only search engine that matters. And we are way down the search rankings (page 5 for the forums, page 4 for FreeFlow). But the weird thing is if I search on Google’s UK site, FreeFlow and the forums are on the first two pages. That may make some sense for the FreeFlow site since it’s hosted in the UK, but the forums are hosted in Australia. Next I changed my search term to ‘Meta storm’ and was asked ‘Did you mean: Metastorm’, and although all the results were related to Metastorm, both the FreeFlow website and the forums now appeared on the first two pages. We also rank highly on searches for ‘Metastorm development’ and ‘Metastorm consultancy’ so it seems odd that we are so low down on that one search.

So is it a cock-up, conspiracy, some weirdness in the ranking algorithm or have we been marked down for doing something bad? I don’t think we’ve employed any bad practices in our attempts to improve our ranking and if we had, I would imagine that would impact all our rankings, not just on one search, so I think we can eliminate the last option. But I have no idea about the other three.