Tuesday, December 22, 2009

Getting boundary data out of OS OpenSpace

County boundaries in SQL Server

I’ve been able to pull out a single county’s boundary data from OS OpenSpace using its JavaScript API, but I wanted the data for all counties. I could have used my little example webpage to do this manually for each county, but that would have been extremely tedious. Since any AJAX API is ultimately just a series of HTTP requests, I wrote a little piece of C# code that made the required HTTP request and dumped the data into my SQL Server table. It’s very rough and ready but it met my needs. Some time soon I will probably put together a class to handle all the OS OpenSpace interaction a bit more nicely and with more functionality.

      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        "http://openspace.ordnancesurvey.co.uk/osmapapi/boundary?geometry=true&"+
        "key=74B560DED619715AE0405F0AF060615A&f=xml&" +
        "url=http%3A%2F%2Flocalhost%2FOpenSpaceBoundary%2FDefault.aspx&area_code=CTY&" +
        "bbox=0%2C0%2C1400000%2C1400000&resolution=1000&dojo.preventCache=1261317566652&" +
        "callback=dojo.io.script.jsonp_dojoIoScript4._jsonpCallback");
      using (WebResponse resp = req.GetResponse())
      using (Stream respStream = resp.GetResponseStream())
      using (StreamReader reader = new StreamReader(respStream))
      {
        string response = reader.ReadToEnd();
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.LoadXml(response);

        using (SqlConnection conn = new SqlConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=OGC;Data Source=NEWDOOGAL"))
        {
          conn.Open();
          using (SqlCommand command = conn.CreateCommand())
          {
            command.CommandText = "DELETE FROM CountyBoundaries";
            command.ExecuteNonQuery();
          }

          JavaScriptSerializer serializer = new JavaScriptSerializer();
          XmlNodeList nodes = xmlDoc.SelectNodes("BoundaryResultVO/items/item/geojson");
          foreach (XmlElement element in nodes)
          {
            Dictionary<string, object> data = 
              (Dictionary<string, object>)serializer.DeserializeObject(element.InnerText);
            Dictionary<string, object> properties = (Dictionary<string, object>)data["properties"];
            string countyName = (string)properties["NAME"];

            // get polygon data
            Dictionary<string, object> geometry = (Dictionary<string, object>)data["geometry"];
            object[] coordinates = (object[])geometry["coordinates"];
            
            for (int i = 0; i < coordinates.Length; i++)
            {
              StringBuilder wktBuilder = new StringBuilder();
              object[] polygonParts = (object[])coordinates[i];
              wktBuilder.Append("POLYGON(");
              for (int j = 0; j < polygonParts.Length; j++)
              {
                if (j>0)
                  wktBuilder.Append(",");
                wktBuilder.Append("(");
                object[] points = (object[])polygonParts[j];
                for (int k = 0; k < points.Length; k++)
                {
                  object[] point = (object[])points[k];
                  if (k > 0)
                    wktBuilder.Append(",");
                  wktBuilder.Append(point[0]);
                  wktBuilder.Append(" ");
                  wktBuilder.Append(point[1]);
                }
                wktBuilder.Append(")");
              }
              wktBuilder.Append(")");

              // put into database
              using (SqlCommand command = conn.CreateCommand())
              {
                // NB: building SQL by string formatting is OK here because the
                // county names come straight from the OS; use parameters for
                // anything resembling untrusted input
                command.CommandText = string.Format(
                  "INSERT INTO CountyBoundaries (County, Boundary) VALUES ('{0}', " +
                  "geometry::STPolyFromText('{1}', 4277))", countyName, wktBuilder.ToString());
                command.ExecuteNonQuery();
              }
            }
          }
        }
      }
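The heart of that loop is just turning nested GeoJSON coordinate arrays into WKT. Here’s the same idea sketched in JavaScript; this is purely illustrative (the function name is mine) and assumes a single GeoJSON Polygon whose coordinates property is an array of rings:

```javascript
// Convert a GeoJSON Polygon's coordinates (an array of rings, each
// ring an array of [x, y] points) into a WKT POLYGON string.
function polygonToWkt(rings) {
  var ringText = rings.map(function (ring) {
    return "(" + ring.map(function (pt) {
      return pt[0] + " " + pt[1];
    }).join(",") + ")";
  });
  return "POLYGON(" + ringText.join(",") + ")";
}

// A square with no holes
var wkt = polygonToWkt([[[0, 0], [0, 10], [10, 10], [10, 0], [0, 0]]]);
// wkt is "POLYGON((0 0,0 10,10 10,10 0,0 0))"
```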

Thursday, December 10, 2009

Getting all available geographic reference systems out of SQL Server 2008

As a reminder to myself, the query required to get a list of all available geographic reference systems out of SQL Server 2008 is

SELECT * FROM sys.spatial_reference_systems

Wednesday, December 09, 2009

uDig – free GIS

To be frank I don’t much like anything Java based but uDig is the exception that probably proves the rule. It’s a free GIS based on Eclipse and it is pretty user friendly. Unfortunately it doesn’t handle data stored in SQL Server 2008, which marks it down a bit in my eyes, but it does support most other data sources.

Saturday, November 28, 2009

Using OGC functions in SQL Server 2008 part 5 – Google Maps and polygons

[Image: County boundary]

In this series of posts I’ve so far managed to pull some spatial data into SQL Server, query it in a number of ways and display some of that data in Google Maps. The next thing I wanted to do was to display the county boundary for Lancashire, since it was the postcodes of that great county I was displaying. I’ve already got the boundary data from OS OpenSpace, but the problem is that this data is in UK OS coordinates rather than the latitude and longitude used by Google Maps. There are two ways to deal with this: convert the data to latitude/longitude before displaying it, or implement the GProjection interface to handle the conversion in the web page. The latter is something I want to look into at some point, but for now I decided to do the conversion server-side.

So the first thing I did was to write a simple generic handler to generate an XML document containing the points of the county boundary. This code uses the .NET Coordinates library I converted from the Java original. This is what the handler looks like

  public void ProcessRequest (HttpContext context) 
  {
    context.Response.ContentType = "text/xml";
    StringWriter stringWriter = new StringWriter();
    XmlTextWriter writer = new XmlTextWriter(stringWriter);
    writer.WriteStartElement("points");
    using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["postcodesDatabase"].ConnectionString))
    {
      conn.Open();
      using (SqlCommand comm = conn.CreateCommand())
      {
        comm.CommandText = "SELECT Boundary.STNumPoints() FROM CountyBoundaries WHERE County='Lancashire'";
        int numPoints = (int)comm.ExecuteScalar();
        // NB: this makes two round trips to the database per point,
        // which is slow for big boundaries, but it keeps the code simple
        for (int i = 1; i <= numPoints; i++)
        {
          writer.WriteStartElement("point");
          // get X
          comm.CommandText = "SELECT Boundary.STPointN(" + i + ").STX FROM CountyBoundaries WHERE County='Lancashire'";
          double x = (double)comm.ExecuteScalar();

          // get Y
          comm.CommandText = "SELECT Boundary.STPointN(" + i + ").STY FROM CountyBoundaries WHERE County='Lancashire'";
          double y = (double)comm.ExecuteScalar();

          // convert to lat/long
          OSRef osRef = new OSRef(x, y);
          LatLng latLong = osRef.ToLatLng();

          writer.WriteAttributeString("x", latLong.Longitude.ToString());
          writer.WriteAttributeString("y", latLong.Latitude.ToString());
          
          writer.WriteEndElement();
        } 
      }
    }
    writer.WriteEndElement(); // points
    context.Response.Write(stringWriter.ToString());
  }

Next up was the JavaScript required in the web page to display the data. This is pretty simple

        // load boundary
        var boundReq = new XMLHttpRequest();
        boundReq.open("GET", "GetBoundary.ashx", true);
        boundReq.onreadystatechange = function() {
          if (boundReq.readyState == 4) {
            if (boundReq.status == 200) {
              var nodes = boundReq.responseXML.selectNodes("points/point");
              var latLongs = new Array();
              for (var i = 0; i < nodes.length; i++) {
                var lng = nodes[i].getAttribute("x");
                var lat = nodes[i].getAttribute("y");
                var latLong = new GLatLng(lat, lng);
                latLongs.push(latLong);
              }
              var polygon = new GPolygon(latLongs, "#FF0000", 3);
              map.addOverlay(polygon);
            }
          }
        };
        boundReq.send(); 
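One caveat with the code above: selectNodes on responseXML only works in IE. The usual cross-browser fallback is getElementsByTagName, sketched here (the readPoints name is mine) against the same points/point structure:

```javascript
// Cross-browser alternative to selectNodes: getElementsByTagName
// works on any XML DOM document.
function readPoints(xmlDoc) {
  var result = [];
  var nodes = xmlDoc.getElementsByTagName("point");
  for (var i = 0; i < nodes.length; i++) {
    result.push({
      lng: parseFloat(nodes[i].getAttribute("x")),
      lat: parseFloat(nodes[i].getAttribute("y"))
    });
  }
  return result;
}
```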

If you look at the image above you may think the boundary displayed is incorrect, since it doesn’t include some places that are quite obviously in Lancashire (Blackburn and Darwen, for instance), and this confused me for a while. But the boundary displayed is that of Lancashire County Council rather than the full county, and it excludes areas that are run by unitary authorities but are still within the county.

Wednesday, November 25, 2009

Using OGC functions in SQL Server 2008 part 4 – Google Maps

[Image: Spatial data displayed in Google Maps]

So I’d now got to the stage where I’d pulled some postcode data into SQL Server, got the county boundaries for one county (the mighty Lancashire) and figured out how to do a spatial query to tell me which postcodes were in that county. Now I wanted to display those postcodes on a map. So time to turn to Google Maps and Visual Studio.

First I created a simple generic handler in C# that returned an XML document containing the geometries of the postcodes. This could return JSON if that floats your boat but here is what my version looks like.

    public void ProcessRequest (HttpContext context) 
    {
      context.Response.ContentType = "text/xml";
      StringWriter stringWriter = new StringWriter();
      XmlTextWriter writer = new XmlTextWriter(stringWriter);
      writer.WriteStartElement("points");
      using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["postcodesDatabase"].ConnectionString))
      {
        conn.Open();
        using (SqlCommand comm = conn.CreateCommand())
        {
          comm.CommandText = "SELECT Location.STX AS X, Location.STY AS Y, City, Region, Postcode FROM Postcodes WHERE " +
            "(SELECT Boundary FROM CountyBoundaries).STContains(OsLocation)=1";
          using (SqlDataReader reader = comm.ExecuteReader())
          {
            while (reader.Read())
            {
              writer.WriteStartElement("point");
              writer.WriteAttributeString("description", reader["Postcode"].ToString() + ", " + reader["City"].ToString() + ", " + reader["Region"].ToString());
              writer.WriteAttributeString("x", reader["X"].ToString());
              writer.WriteAttributeString("y", reader["Y"].ToString());
              writer.WriteEndElement();
            }
          }
        }
      }
      writer.WriteEndElement(); // points
      context.Response.Write(stringWriter.ToString());
    }

Next I created an HTML page with a bit of JavaScript to load up the postcode data, which looked like this.

      function loadMap() {
      var map;
      if (GBrowserIsCompatible()) {
        map = new GMap2(document.getElementById("map"));
        map.setCenter(new GLatLng(53.756830663572174, -2.73834228515625), 9);
        map.addControl(new GSmallMapControl());
        map.addControl(new GMapTypeControl());
        map.enableDoubleClickZoom();
        map.enableScrollWheelZoom();

        // load data
        var req = new XMLHttpRequest();
        req.open("GET", "GetPoints.ashx", true);
        req.onreadystatechange = function() {
          if (req.readyState == 4) {
            if (req.status == 200) {
              var nodes = req.responseXML.selectNodes("points/point");
              for (var i = 0; i < nodes.length; i++) {
                var lng = nodes[i].getAttribute("x");
                var lat = nodes[i].getAttribute("y");
                var marker = new GMarker(new GLatLng(lat, lng), { title: nodes[i].getAttribute("description") });
                map.addOverlay(marker);
              }
            }
          }
        };
        req.send();
      }
    }

Again nothing groundbreaking, but the next stage could be more challenging: pulling out the data for the county boundary and plotting that on the map. And what is this all for? All part of my plans for world domination.

Sunday, November 15, 2009

Using OGC functions in SQL Server 2008 part 3

It’s been a while since I last tried to do anything with the spatial features in SQL Server 2008. My next plan was to query the database to see which postcodes were in a particular county. To do this I got hold of the boundary data for some counties using the OpenSpace API. I just added one county, the one and only Lancashire, with the following code.

INSERT INTO CountyBoundaries (County, Boundary) VALUES ('Lancashire', 
geometry::STPolyFromText('POLYGON((335390.6 402509.3,335318.8 405919.1,337285.8 405717.9,341638.1 403012.7,341198.7 401158.3,
345355.3 399036.6,348231.1 404146.9,349111.4 402150.3,351662.6 402905.5,353409.1 407181.8,352474 409073.9,354484.8 412190.4,
358109.6 412599.6,358670 410970.3,360670.3 412859.2,362419.8 411151.7,366280.6 414615.7,364921.9 418843.3,366064.7 421813,
363060.8 427981.2,363748.5 429031.4,369403 431708.6,372966.5 423266.5,375369.8 421263.6,375568.6 419018,379162.2 417655.1,
380342.3 418953.7,382117.1 413139.9,383850.1 418586.1,385712 419187.4,387839.3 415759.4,389429.8 416106,390432.6 420649.6,
388662.6 425190.8,391447.8 428359.4,392703.4 434383.5,396065.9 436596.5,397063 439322.3,394185.1 441332.3,392652.8 446613.3,
388130.5 448821.7,387988.3 450699.5,387239.2 452627.7,385286.7 451690.4,384991.7 453964.6,380792.9 453190.5,379173.9 455357.1,
376997.3 455460,377844.2 457098.1,375605.7 461521.8,372209.8 460317.7,369352.3 461239,369714.2 464470.9,363191.2 470277.6,
363491 473169.9,365227.3 473672.8,370052.2 481319.2,370144.8 482748.7,362512.5 477911.7,358350.8 478665.3,355703.4 474137.7,
347972.3 478292.1,345446.8 475698.7,342749 476087.5,342784.7 474615.5,340414.8 475469.9,342882.8 469480.8,341304.5 465488.7,
344238.6 465331.3,340055 463217.2,340345.5 460141.4,335601.4 455045.4,336057.5 453597.1,332947.7 450366,329952.5 450443.2,
330737.1 442705.2,333041.8 441857.2,333572.8 437130.7,335319.6 433501.9,333963.4 433247,334425.2 431366.8,329729.3 431451.9,
329199.8 427517.3,328505.3 422791.4,330070.5 422095.3,336522 423531.2,337967.3 418422.4,332016.4 412241.1,332374.5 408594.8,
330736.4 405654.3,335390.6 402509.3))', 4277))

This is where things got a little tricky. The county boundaries used the OS coordinate system, whereas my postcodes were stored in latitude/longitude format. I was hopeful that SQL Server would do some magic for me and handle the different coordinate systems, but it would appear this isn’t the case.

So it looked like the only option was to convert my postcode points into the OS coordinate system. I’ve used phpCoord to do this conversion in the past, but this time I was looking for a .NET solution, so I took the Java implementation and converted it to C#. That took a while (and I’ll upload the code at some point, once I’ve OKed it with the original author) but after modifying my code to insert postcodes into the database I was able to run the following query and successfully pull out postcodes in the Lancashire area.

SELECT * FROM Postcodes WHERE (SELECT Boundary FROM CountyBoundaries).STContains(OsLocation)=1
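As an aside, the containment test that STContains performs can be approximated in plain code with the classic ray-casting (crossing-number) algorithm. This sketch is purely illustrative: it handles a single ring with no holes, ignores points exactly on the boundary, and is certainly not what SQL Server does internally:

```javascript
// Ray casting: count how many polygon edges a horizontal ray from
// (x, y) crosses; an odd count means the point is inside.
function pointInPolygon(x, y, ring) {
  var inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1];
    var xj = ring[j][0], yj = ring[j][1];
    if (((yi > y) !== (yj > y)) &&
        (x < (xj - xi) * (y - yi) / (yj - yi) + xi)) {
      inside = !inside;
    }
  }
  return inside;
}

// Unit square: (0.5, 0.5) is inside, (2, 2) is not
var square = [[0, 0], [1, 0], [1, 1], [0, 1]];
```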

Saturday, October 17, 2009

Legal action against free postcode lookup

I only discovered the Ernest Marples website today. Ernest Marples was the man who introduced the UK postcode system. The website was providing a useful service that took a UK postcode and returned its latitude and longitude, which seems kind of familiar… But they’ve recently received a threatening legal letter from the Royal Mail’s lawyers telling them to remove this service from their site.

This raises so many questions. When will I get my threatening letter? When will Google get theirs, since they provide this exact same feature, just through a JavaScript API rather than via a URL? And why is the publicly owned Royal Mail charging for this data anyway, when it was taxpayers’ money that paid for its creation in the first place?

Monday, October 12, 2009

Google Maps tools

This is an interesting tool for seeing what is possible with the Google Maps API. Something I feel is missing is real-time updating of the shapes being drawn as the user moves the mouse around, something our mapping tool will provide when we ship the next version very soon.

Sunday, October 11, 2009

Micro Men

BBC3 and BBC4 often have some interesting programmes that are easily missed. I remembered to record Micro Men, a dramatisation of the work of Clive Sinclair and Chris Curry who produced the ZX Spectrum and BBC Micro respectively. It was good fun, although Sinclair didn’t come across as a particularly likeable character, which I found disappointing since he was always something of a hero for me as a child. Acorn lives on with its ARM processor powering most of the world’s mobile phones and Sinclair continues to invent things, although with somewhat less success than the ZX Spectrum. Anyway, Micro Men is available to watch on BBC iPlayer for a while.

Zopa – 18 months on

About 18 months ago, I started to siphon some of my savings into Zopa in an attempt to get a slightly better return on my money. So I thought now would be a good time to look back on how well it has gone. First up, it certainly provides better returns than having your money in a bank account, with interest rates as low as they are. On the downside, this increased return does mean increased risk, the old risk premium again. Those loans can sometimes turn bad and when they do, all the outstanding debt is written off. Well I guess some of it may be returned, but I’ve not had any bad debt returned yet.

But the main downside is that the money is tied up for a long time, and by my calculations the return from part paying off my mortgage is still better than the returns on Zopa. And although doing that means that the money is gone forever, in the future I’ll probably be more likely to do that than put more money into Zopa.

Thursday, October 01, 2009

Checking a latitude/longitude is in the UK

I found quite a lot of duff data appearing in my incomplete database of UK postcodes, so I thought I’d better filter out some of the rubbish. The obvious thing to do was to figure out which latitudes and longitudes weren’t in the UK, stop them being put into the database in the first place, and delete the ones already in there. There are definitely better ways of achieving this, but my simple solution seems to have done the trick: first check the latitude is greater than 49 and less than 61, then check the longitude is greater than –12 and less than 3. It’s not perfect but I think it’s probably good enough for many applications.
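As code, the check is a one-liner. A sketch using the bounds above (the function name is mine, and the bounds are deliberately rough):

```javascript
// Crude bounding-box test for whether a lat/long could plausibly be
// in the UK: latitude between 49 and 61, longitude between -12 and 3.
function isRoughlyInUK(lat, lng) {
  return lat > 49 && lat < 61 && lng > -12 && lng < 3;
}
```

Anything failing the test gets rejected before it reaches the database; a matching DELETE with the same bounds cleans up rows already in there.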

Tuesday, September 29, 2009

OS OpenSpace API

I’m pretty new to the world of GIS but one thing seems blindingly obvious to me: maps are becoming commoditised. With Google Maps and Virtual Earth already on the block, who is ever going to pay for maps anymore? New to the fray is the OS OpenSpace API from the Ordnance Survey. It looks pretty decent. It includes some things not available from the other map providers, such as administrative boundaries, but misses others, like aerial imagery. When I get a chance I will play with the API to see how good it is and get some ideas for our own JavaScript API.

Sunday, September 27, 2009

Using OGC functions in SQL Server 2008 part 2

In part 1 I created a table that contained some spatial UK postcode data. Now the next task is to work out how to query this data. A typical question we may want to answer is ‘what are the nearest postcodes to a particular postcode?’. OK, what we probably want to answer is ‘what is the nearest station/cash machine/post office etc to a particular postcode?’ but the only spatial data I’ve got is for postcodes, so we’ll use that. And the query to answer this question is pretty straightforward.

SELECT TOP 100 P.Postcode, P.Location.STDistance(PL.Location) AS Distance 
FROM Postcodes P, Postcodes PL
WHERE PL.Postcode='KT1 3EG'
ORDER BY Distance

The MSDN documentation for the STDistance function is a bit sparse but from what I can find from other sources, it will return the distance between two locations in metres.

Obviously this has all been possible in the past by writing your own code to do the calculation, but executing it directly against the database makes life a lot simpler. I have no idea if it will be quicker but simpler generally wins out for me.
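For reference, “writing your own code to do the calculation” usually means the haversine formula. Here’s a sketch assuming a spherical Earth of radius 6,371 km, so it won’t agree with STDistance to the metre (SQL Server computes distance on an ellipsoid):

```javascript
// Great-circle distance in metres between two lat/long points,
// using the haversine formula on a spherical Earth.
function haversineMetres(lat1, lng1, lat2, lng2) {
  var R = 6371000; // mean Earth radius in metres
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLng = (lng2 - lng1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// One degree of latitude is roughly 111 km
var oneDegree = haversineMetres(53, -2, 54, -2);
```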

Saturday, September 26, 2009

XML encoding in JavaScript

As far as I know there is no cross-browser way to generate valid XML documents, so it seems like string concatenation is the way to go. So here’s a little function to do the required encoding of the characters that need special treatment in XML. As ever, I’ve not done any kind of thorough testing on this…

    function XmlEncode(text) {
      // ampersands must be replaced first, otherwise the entities
      // added below would themselves get re-encoded
      text = text.replace(/&/g, '&amp;');
      text = text.replace(/"/g, '&quot;');
      text = text.replace(/'/g, '&apos;');
      text = text.replace(/</g, '&lt;');
      text = text.replace(/>/g, '&gt;');
      return text;
    }

Friday, September 25, 2009

Using OGC functions in SQL Server 2008 part 1

[Image]

One great thing about having a new job is being faced with new technologies. I’ve just come across the new geography and geometry data types in SQL Server 2008, which allow you to store geographical data in a database. In some ways this is nothing new; you could always store latitude and longitude, or eastings and northings if you only cared about the UK. What is new is that the database actually understands what these values are, meaning it’s possible to write queries based on geographical location, which to my mind is absolutely frigging awesome.

So I thought I’d play around with these new features in SQL Server. Oracle also now has spatial features, but frankly I try and avoid using Oracle wherever possible. First up I created a new table with a geography column, as shown above. Next I had to populate it. Fortunately I knew where I could find some data. With a little help from C# and Visual Studio, it was pretty easy to get this into the database.

  using System;
  using System.Data.SqlClient;
  using System.IO;

  class Program
  {
    static void Main(string[] args)
    {
      string[] lines = File.ReadAllLines(@"C:\Users\Doogal\Desktop\Postcodes.txt");
      using (SqlConnection conn = new SqlConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=OGC;Data Source=NEWDOOGAL"))
      {
        conn.Open();
        foreach (string line in lines)
        {
          string[] data = line.Split(',');
          using (SqlCommand command = conn.CreateCommand())
          {
            command.CommandText = string.Format("INSERT INTO Postcodes (Postcode, Location, City, Region) " +
              "VALUES ('{0}', geography::STPointFromText('POINT({2} {1})', 4326), '{3}', '{4}')",
              SqlEscape(data[0]), data[1], data[2], SqlEscape(data[3]), SqlEscape(data[4]));
            command.ExecuteNonQuery();
          }
        }
      }
    }

    private static string SqlEscape(string sql)
    {
      if (sql == null)
        return null;
      return sql.Replace("'", "''");
    }
  }
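One detail in that INSERT that’s easy to trip over: WKT coordinates are x y, so longitude comes before latitude in a POINT, which is presumably why the format string is 'POINT({2} {1})' rather than '{1} {2}'. A trivial helper (the name is mine) makes the ordering explicit:

```javascript
// Build a WKT POINT from a latitude/longitude pair. WKT is x y,
// i.e. longitude first - the reverse of the "lat, long" order
// people usually say out loud.
function latLngToWktPoint(lat, lng) {
  return "POINT(" + lng + " " + lat + ")";
}
```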

[Image]

So after getting the data into the database, the next step was to query it and see what we got. After running a query against a spatial table in Management Studio, the first thing you notice is the new ‘Spatial results’ tab. Clicking on that showed the diagram shown on the left, which looked remarkably like a map of the UK.

I guess that in itself is not hugely impressive, but it certainly pleased me. Next I have to move on to real queries, such as ‘give me all the postcodes within 1km of this location’, which is where spatial databases potentially come into their own.

Monday, September 07, 2009

When will IE6 die?

[Image: IE6 usage]

Just like everybody else who has to do any kind of web development, I’m quite keen to see the back of IE6 so I thought I’d make an estimate of when IE6 would die. I took the Google Analytics data from the Random Pub Finder and pulled out the monthly percentage of IE users who were using IE6, which produced the graph to the left. It looked pretty linear, except for the initial drop off after IE7 was released, so I added a linear trend line. And voila, IE6 will die at the beginning of 2011. Well, one can live in hope…
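For anyone who wants to reproduce the estimate, it’s just a least-squares line fit extrapolated down to zero. A sketch with made-up numbers standing in for the real monthly percentages:

```javascript
// Fit y = slope * x + intercept by least squares, then find
// where the fitted line crosses zero.
function linearFit(xs, ys) {
  var n = xs.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (var i = 0; i < n; i++) {
    sx += xs[i]; sy += ys[i];
    sxx += xs[i] * xs[i]; sxy += xs[i] * ys[i];
  }
  var slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  var intercept = (sy - slope * sx) / n;
  return { slope: slope, intercept: intercept };
}

// Hypothetical data: IE6 share dropping 2 points a month from 60%
var months = [0, 1, 2, 3, 4];
var share = [60, 58, 56, 54, 52];
var fit = linearFit(months, share);
var zeroMonth = -fit.intercept / fit.slope; // month 30 for this data
```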

Wednesday, September 02, 2009

RBS. Make it happen

I’m constantly amazed at the number of adverts I see around for RBS. I guess all the sponsorship deals they signed before they blew up have to be honoured, so they still appear all over the place. I’ve passed through Glasgow airport a couple of times recently and it is plastered in ads for RBS saying ‘RBS. Make it happen’. I’m not sure if this is a command to readers of the ad or whether they are just telling the reader that RBS make ‘it’ happen. I’m assuming it’s the latter. So what is it they make happen? Global financial meltdown? A massive public deficit from bailing them out? Either way, I think perhaps they should update their ads to say something like ‘Look, we are really really fecking sorry’. It might be more effective.

Friday, August 28, 2009

Installing and debugging a .NET service built in Visual Studio 2008

There seem to be several tutorials on the web on how to create a Windows service using .NET in Visual Studio 2005 and before but I was unable to find one for Visual Studio 2008, so thought I’d write my own since I got tripped up a couple of times when I tried to install it.

The first step is easy, create a new project using New Project/Windows Service. The next obvious thing to try and do is to build it and run it. You will then be presented with the following error -

Cannot start service from the command line or a debugger.  A Windows Service must first be installed (using installutil.exe) and then started with the ServerExplorer, Windows Services Administrative tool or the NET START command.

So then you fire up installutil from the command line passing the EXE’s full path as the only parameter. If you make my schoolboy error, you’ll be faced with the following error message -

Exception occurred while initializing the installation: System.BadImageFormatException: The format of the file 'WindowsService1.exe' is invalid.

This is because I ran installutil from the standard command prompt, rather than the .NET command prompt and it picked up installutil from the .NET Framework 1.1. So when I figured that out I ran the correct version of installutil and got the following error message -

No public installers with the RunInstallerAttribute.Yes attribute could be found in the C:\Source\dotNET2\WindowsService1\WindowsService1\bin\Debug\WindowsService1.exe assembly.

So the next thing to do is to add an installer to the service. This is achieved by right-clicking on the service’s design surface and selecting ‘Add Installer’. After doing this and running installutil once again, the service should appear in the Services applet.

Finally, to debug it you’ll need to start it from the Services applet and then use the ‘Debug/Attach to Process…’ menu item in Visual Studio, making sure the ‘Show processes in all sessions’ checkbox is checked if the service is running under a different account to your own.

Saturday, August 22, 2009

Regards WTF?

When I was a young lad and was taught how to write a letter (a skill that has turned out to be not especially useful) I was told to end the letter with “yours sincerely”, which kind of baffled me at the time and still does. What does it mean? Why would I want to be owned by the reader of the letter? Why would I not be sincere?

And now, when most of my communication is via email, I’m receiving more and more of them ending with “regards” or “kind regards”. And I’m asking myself the same question: what does that mean? It’s even happening with people I’ve known for years, who never put any strange incantations at the end of their emails in the past but have now started this odd practice. I guess emails are the new letters and we have to have some way of signing off from them. I’d always thought of email as a fairly informal medium, but perhaps not anymore.

Regards Doogal

Saturday, August 15, 2009

Virtual PC on Windows 7

I decided to jump in and upgrade to Windows 7, because I keep hearing it’s what Vista should have been. And it’s OK. I think I’m getting too old to get excited about OS upgrades, these days I just get annoyed about what’s been broken.

But today I needed to fire up a virtual machine, so I launched Virtual PC 2007. It told me it was unable to get the network connection up and running and offered to fix it, but then couldn’t find the location of my setup files. Not to worry, I thought; probably the best plan is to just reinstall the whole thing. So I uninstalled and then attempted to re-install, but was told Virtual PC 2007 doesn’t work on Windows 7 and it refused to install (even though it was clearly working at least a little bit for me). But then I noticed there was a new version of Virtual PC available that would work on Windows 7, so I downloaded that to try and fix my problem. Unfortunately this new version of Virtual PC requires a CPU with virtualisation support built in. Sadly my CPU lacks that feature, so I couldn’t install the new version either.

So if you have Windows 7 and a CPU that doesn’t have the right features then you are basically stuffed. I have managed to get round the problem by rolling back my Windows installation a couple of days using System Restore. Virtual PC 2007 seems to work fine except for the lack of network support, but if you don’t already have Virtual PC installed then you will be somewhat stuck.

Using the StateDropDownEditor in state machine workflows

[Image: Dropdown]

Many moons ago I wrote about creating custom state machine activities. I’ve finally got back to doing some more work on this and thought it would be nice to add a dropdown for choosing the target state for my custom activity. This looks pretty easy to implement: just decorate the property with the following attribute

[Editor(typeof(StateDropDownEditor), typeof(UITypeEditor)), DefaultValue((string) null)]

Unfortunately, StateDropDownEditor is an internal class, so this won’t compile. The way round this is to fire up Reflector and copy the code for the StateDropDownEditor class. I’m not sure what the copyright issues are with this, but it starts to solve the problem. Unfortunately the StateDropDownEditor class references a StateMachineHelpers class which is also internal. We can again use Reflector to get hold of the implementation of this class, but things start to get messy at this point: we end up having to pull in pretty much the whole state machine infrastructure code. The solution is simple though. Just remove the GetCurrentState method and no more code needs to be ‘borrowed’. I now have a nice dropdown for selecting the target state.

Sunday, August 09, 2009

Poor man's profiling

If you don't have access to a profiler, or can't be bothered to fire one up, there's a pretty simple way to find performance bottlenecks in your code: just keep breaking into it in the debugger. If a particular piece of code is causing problems, chances are you will keep being taken to that piece of code.

Thursday, July 09, 2009

More on string.Concat vs the + operator in C#

A post of mine from 3 years ago about the performance differences between using string.Concat and the string class’s + operator got its first comment yesterday so I thought I’d flesh out what I said there to clarify what happens. First, here’s a little test program to show different ways to concatenate strings.

  using System;

  class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("a" + "b" + "c" + "d");
      Console.WriteLine(string.Concat("a", "b", "c", "d"));

      string a = "a";
      string b = "b";
      string c = "c";
      string d = "d";
      Console.WriteLine(string.Concat(a, b, c, d));
      Console.WriteLine(a + b + c + d);
    }
  }

So now let’s look at the IL generated from that, using our old friend Reflector.

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 4
    .locals init (
        [0] string a,
        [1] string b,
        [2] string c,
        [3] string d)
    L_0000: nop 
    L_0001: ldstr "abcd"
    L_0006: call void [mscorlib]System.Console::WriteLine(string)
    L_000b: nop 
    L_000c: ldstr "a"
    L_0011: ldstr "b"
    L_0016: ldstr "c"
    L_001b: ldstr "d"
    L_0020: call string [mscorlib]System.String::Concat(string, string, string, string)
    L_0025: call void [mscorlib]System.Console::WriteLine(string)
    L_002a: nop 
    L_002b: ldstr "a"
    L_0030: stloc.0 
    L_0031: ldstr "b"
    L_0036: stloc.1 
    L_0037: ldstr "c"
    L_003c: stloc.2 
    L_003d: ldstr "d"
    L_0042: stloc.3 
    L_0043: ldloc.0 
    L_0044: ldloc.1 
    L_0045: ldloc.2 
    L_0046: ldloc.3 
    L_0047: call string [mscorlib]System.String::Concat(string, string, string, string)
    L_004c: call void [mscorlib]System.Console::WriteLine(string)
    L_0051: nop 
    L_0052: ldloc.0 
    L_0053: ldloc.1 
    L_0054: ldloc.2 
    L_0055: ldloc.3 
    L_0056: call string [mscorlib]System.String::Concat(string, string, string, string)
    L_005b: call void [mscorlib]System.Console::WriteLine(string)
    L_0060: nop 
    L_0061: ret 
}

OK, so looking at the first example, where we concatenate string literals using the + operator, we can see the compiler helps us out by concatenating the strings at compile time, which is as optimal as it gets. The second example shows that the compiler doesn’t perform this magic when we use string.Concat, so string.Concat is actually slower in this scenario.

Now if we look at the next two examples, where we concatenate string variables, the generated IL is exactly the same, so the performance will be identical. Things get more interesting beyond 4 strings, since there is no overload of string.Concat that takes more than 4 parameters and the strings have to be pushed into an array, but the result is the same: the + operator generates exactly the same code as string.Concat.

So I can’t see a scenario where you’d want to use string.Concat (unless you’re particularly fond of it) and if string concatenation performance is an issue, you probably should be using the StringBuilder class.
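To illustrate that last point, here’s a minimal sketch (the loop and its contents are invented for the example) of the kind of repeated concatenation where StringBuilder pays off, because + in a loop allocates a brand new string on every iteration:

```csharp
using System;
using System.Text;

class StringBuilderDemo
{
    static void Main()
    {
        // Repeated s += ... in a loop copies the whole string each time,
        // giving O(n^2) behaviour overall. StringBuilder appends into an
        // internal buffer instead, so each Append is amortised O(1).
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++)
        {
            sb.Append(i).Append(',');
        }
        string result = sb.ToString();
        Console.WriteLine(result.Length); // prints 3890
    }
}
```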

Wednesday, July 08, 2009

Encryption in Metastorm BPM

A question came up on the Metastorm forums about encrypting sensitive data contained in custom variables so I thought I’d see what I could come up with. I took this C# code and translated it to JScript.NET, which looks something like this

import System;
import System.IO;
import System.Security.Cryptography;
import System.Text;
import eWork.Engine.ScriptObject;

package Encrypt.Encrypt
{
    public class Encryption
    {
        private static const password : String = "password";
        public static function Encrypt( ework: SyncProcessData, args: Object[] ) : Object
        {
            // args[0] - string to encrypt
            // returns the encrypted string

            var encrypt : Encryption = new Encryption(password);
            return encrypt.Encrypt(args[0]);
        }

        public static function Decrypt( ework: SyncProcessData, args: Object[] ) : Object
        {
            // args[0] - string to decrypt
            // returns the decrypted string
            if (args[0] == "")
                return "";

            var encrypt : Encryption = new Encryption(password);
            return encrypt.Decrypt(args[0]);
        }

        function Encryption(password : String)
        {
            GenerateKey(password);
        }

        private var Key : byte[];
        private var Vector : byte[];

        private function GenerateKey(password : String)
        {
            var sha : SHA384Managed  = new SHA384Managed();
            var b : byte[] = sha.ComputeHash(new ASCIIEncoding().GetBytes(password));

            Key = new byte[32];
            Vector = new byte[16];

            System.Array.Copy(b, 0, Key, 0, 32);
            System.Array.Copy(b, 32, Vector, 0, 16);
        }

        public function Encrypt(plainText : String) : String
        {
            var data : byte[] = new ASCIIEncoding().GetBytes(plainText);

            var crypto : RijndaelManaged = new RijndaelManaged();
            var encryptor : ICryptoTransform = crypto.CreateEncryptor(Key, Vector);

            var memoryStream : MemoryStream = new MemoryStream();
            var cryptoStream : CryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

            cryptoStream.Write(data, 0, data.Length);
            cryptoStream.FlushFinalBlock();

            cryptoStream.Close();
            memoryStream.Close();

            return Convert.ToBase64String(memoryStream.ToArray());
        }

        public function Decrypt(encryptedText : String) : String
        {
            var cipher : byte[] = Convert.FromBase64String(encryptedText);

            var crypto : RijndaelManaged = new RijndaelManaged();
            var encryptor : ICryptoTransform = crypto.CreateDecryptor(Key, Vector);

            var memoryStream : MemoryStream = new MemoryStream(cipher);
            var cryptoStream : CryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Read);

            var data : byte[] = new byte[cipher.Length];
            var dataLength : int = cryptoStream.Read(data, 0, data.Length);

            memoryStream.Close();
            cryptoStream.Close();

            return (new ASCIIEncoding()).GetString(data, 0, dataLength);
        }
    }
}

Then all that is required is to decrypt the string when the form is loaded and encrypt it when the form is saved, like so

%sensitive:=%ScriptEval(JScript.NET,,%Procedure.Name,%MapName,"Encrypt.Encrypt.Encryption.Decrypt",%sensitive )

%sensitive:=%ScriptEval(JScript.NET,,%Procedure.Name,%MapName,"Encrypt.Encrypt.Encryption.Encrypt",%sensitive)

Now the user should see the unencrypted text and the encrypted version will be stored in the database. You will also need to decrypt the data anywhere else you need to use it.

One thing to realise at this point is that the system is still not secure. Although it will stop casual viewers who just run a query against the custom variable table, it won’t stop a more professional hacker. The script text is also stored in the database, so a hacker can have a look at that and find the password used to encrypt/decrypt the data. A more secure implementation would store the password in a location that only the engine account has access to, assuming the engine account is also locked down.
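As a sketch of that last idea (this is not part of the original post, just one possible approach), the Windows Data Protection API can encrypt the password under the engine account’s credentials, so only a blob that is useless to other accounts ever needs to be stored:

```csharp
using System;
using System.Security.Cryptography; // requires a reference to System.Security.dll
using System.Text;

class KeyProtection
{
    // Protect the password so that only the current Windows account (the
    // engine service account, if run under it) can decrypt it again.
    public static string Protect(string password)
    {
        byte[] data = Encoding.UTF8.GetBytes(password);
        byte[] blob = ProtectedData.Protect(data, null, DataProtectionScope.CurrentUser);
        return Convert.ToBase64String(blob);
    }

    public static string Unprotect(string protectedPassword)
    {
        byte[] blob = Convert.FromBase64String(protectedPassword);
        byte[] data = ProtectedData.Unprotect(blob, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(data);
    }
}
```

The protected blob could then live in a file or registry key locked down to the engine account, and the script would call Unprotect at runtime instead of containing the password itself.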

Download the demo procedure

Saturday, July 04, 2009

MooZoom with image maps

MooZoom is a nice piece of JavaScript, built on top of MooTools, that adds zoom and pan functionality to images. It wasn’t exactly what I needed though: I didn’t want the zoom/pan constrained to the original size, I wanted the ability to zoom out beyond the original size, and I wanted image maps to be handled correctly. I’m quite pleased with the results and you can download the source here.

To use it, simply set this class on your image

class="moozoom"

Friday, July 03, 2009

Day 21 – where I get offered a job

3 weeks in and I get a job offer, pretty good going I think. I have the weekend to think about it. And given that a bird in the hand is worth two in the bush (even if the birds in the bush have really nice plumage) I will probably accept it.

For any other IT job hunters in the current climate, here’s my advice.

  • Throw your CV onto every job website out there.
  • Make sure everyone you know is aware you are looking for work, have no shame!
  • If you have a website or blog, make it obvious you are looking for work. Play the percentages game, every person who sees you are looking for work may be a potential employer.
  • Don’t demand to get paid as much as you were paid before. Assuming you’re out of work like me, your current income is zero or thereabouts, so your previous salary is pretty much irrelevant.
  • Accept all interviews. Even if the job isn’t a perfect fit, it’s good to get back into the interviewing groove, which will help when a better role turns up. And who knows, a job that doesn’t appear perfect on paper may turn out to be better than expected.
  • When you have an interview and you get asked about something you have no knowledge of, go off and investigate it. OK, it’s too late for that particular interview but it might come up again.
  • Learn about other technologies which may have passed you by in the past. Today the guy I spoke to said he was impressed with my use of the JavaScriptSerializer class, which I’d only started playing with the day before my technical test.

Wednesday, July 01, 2009

Resizing an image map when zooming an image

There are some cool libraries out there for zooming images in an HTML document, but none that I’ve seen handle resizing an image map attached to the image. My research may be incomplete, so I might be reinventing the wheel here, but this simple solution seems to fix the problem. Now I need to integrate it with one of those libraries.

First up my HTML looks like this

      <map id="map" name="map">
        <area coords="15,92,568,247" alt="Blah" href="javascript:alert('hello');" />
        <area coords="18,259,546,432" alt="Blah" href="javascript:alert('hello 2');" />
      </map>
      <input type="button" value="+" onclick="javascript:ZoomIn();"/>
      <input type="button" value="-" onclick="javascript:ZoomOut();"/><br />
      <img src="Highlight cells.png" usemap="#map" id="image" />

And then there is some JavaScript to do the resizing, that looks like this

  <script type="text/javascript">
    function ZoomIn() {
      Zoom(1.1);
    }

    function ZoomOut() {
      Zoom(0.9);
    }

    function Zoom(amount) {
      // resize image
      var image = document.getElementById('image');
      image.height = image.height * amount;

      // resize image map
      var map = document.getElementById('map');
      for (var i = 0; i < map.areas.length; i++) {
        var area = map.areas[i];
        var coords = area.coords.split(',');
        for (var j = 0; j < coords.length; j++) {
          coords[j] = coords[j] * amount;
        }
        area.coords = coords.join(','); // handles any number of coordinates, not just rectangles
      }
    }
  </script>

Day 19 – More calls

Quite a few calls from agents today, including some jobs that sound very interesting. What I’ve noticed is that I never hear anything back about the roles that sound particularly interesting, just the run of the mill stuff. I guess the interesting jobs get absolutely deluged with CVs, so the employers can pick and choose people who look to be perfect. My first call about Metastorm work as well, so there is some work out there on that side of my skills. And I will return to Croydon for yet another interview on Friday.

Tuesday, June 30, 2009

Day 18 – Bell, Doogal Bell

So little happened today that I actually answered the door to the Jehovah’s Witnesses and tried to convert them to Atheism. I did have a call from an agent wondering if I’d be interested in a contract position in Sweden and an email asking if I spoke French and would be interested in a job in Paris. Being an international man of hacking does appeal I guess, flying from nation to nation with only a laptop loaded up with Visual Studio.

Improving the Local Search .NET call

Following on from my post about using Google Local Search from C#, I thought I’d try to improve it. Deserializing the JSON data ended up with some ugly typecasts and manipulation of Dictionary objects. The first thing to notice is that the JavaScriptSerializer class has a Deserialize&lt;T&gt; method, so all that is needed is a class to hold the returned data. Here’s a simple implementation.

  public class Results
  {
    public double lat;
    public double lng;
  }

  public class ResponseData
  {
    public Results[] results;
  }

  public class LocalSearchData
  {
    public ResponseData responseData;
  }

OK, I know, there’s a bit of a lack of OO encapsulation going on there but it seems like public fields with names matching the data returned from the JSON are required. They can be replaced with properties, but these must have getters and setters so this doesn’t really buy you much except to stop FxCop moaning at you.

Then the deserializing code looks much nicer

        LocalSearchData searchData = serializer.Deserialize<LocalSearchData>(response);
        latitude = searchData.responseData.results[0].lat;
        longitude = searchData.responseData.results[0].lng;

This still isn’t perfect. We have to use the same names as used in the JSON, which doesn’t really match up with .NET naming conventions and we have a class hierarchy that doesn’t really serve a purpose. It looks like the JavaScriptConverter class might help out here but that’s something to look at another day. Another alternative might be to just use these classes for moving the data into yet another class that has a better interface.
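As a sketch of that last alternative (GeocodeResult and its members are invented names, not any real API), the JSON-shaped classes become private plumbing behind a type with conventional .NET naming:

```csharp
using System;

// JSON-shaped class from above: field names must match the JSON keys
public class Results
{
    public double lat;
    public double lng;
}

// Hypothetical wrapper with .NET-style naming; callers never need to
// see the JSON-shaped types directly.
public class GeocodeResult
{
    public double Latitude { get; private set; }
    public double Longitude { get; private set; }

    public GeocodeResult(Results jsonResult)
    {
        Latitude = jsonResult.lat;
        Longitude = jsonResult.lng;
    }
}

class Demo
{
    static void Main()
    {
        GeocodeResult result = new GeocodeResult(new Results { lat = 51.4, lng = -0.3 });
        Console.WriteLine(result.Latitude);
    }
}
```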

Monday, June 29, 2009

Day 17 – A sweaty interview

Croydon is “Manhattan as imagined by Le Corbusier” apparently, or so my mate Jethro says. I’d agree to an extent, except I’d have to say it’s Manhattan without the glamour. And today it was a particularly sweaty unglamorous place.

I was faced with a technical test, which I quite liked since I could do it. But as is always the case with this kind of test, there are a hundred ways to implement a solution, so it all depends on whether what I’ve done resonates with the person looking at the code.

And then onto an interview, where the interviewers basically said they are looking for someone who can be a team leader, top quality developer, project manager, software architect and product manager. Oh and they don’t want to pay very much. Of course I said I could do all of these things, although I wanted to shout “you will never find anybody who meets all those requirements with the money you’ve got to offer!”

Local Search web service

All these cool pieces of AJAX code are great but what if you want to use them from some server-side code? Local Search doesn’t provide any kind of web service API as far as I’m aware, but all AJAX calls eventually have to resolve down to simple HTTP calls. So it should be possible to use them from a server-side piece of code. To test out this theory, I thought I’d see if I could write some C# code to use Google’s Local Search AJAX API to get the latitude and longitude for a postcode as if it was a web service call.

So to see what is happening under the hood, we need to fire up Fiddler and use a page that uses the Local Search API, like this one. If we issue a query using that page, we can see the URL used is something like this.

http://www.google.com/uds/GlocalSearch?callback=google.search.LocalSearch.RawCompletion&context=0&lstkp=0&rsz=small&hl=en-GB&gss=.uk&sig=b211652959f1f93330a3286c1a81eab6&q=KT1%203EG%2C%20UK&sll=37.77916,-122.42009&gll=37747397,-122451853,37810922,-122388328&llsep=500,500&key=ABQIAAAAjtZCgAx5i04BiZDO6HlxhRQUdBDpWCOMRMbgTcqadX0jQ8HOERSxXxhk24TIBUpivovAKLrnpSio9w&v=1.0&nocache=1246220526894

And the returned data is in JSON format. Click on that link in a browser and you should see the returned JSON data. And if we move to the .NET world and make the call from C#, like this

      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        "http://www.google.com/uds/GlocalSearch?callback=google.search.LocalSearch.RawCompletion&context=0&lstkp=0&rsz=small&hl=en-GB&gss=.uk&sig=b211652959f1f93330a3286c1a81eab6&q=KT1%203EG%2C%20UK&sll=37.77916,-122.42009&gll=37747397,-122451853,37810922,-122388328&llsep=500,500&key=ABQIAAAAjtZCgAx5i04BiZDO6HlxhRQUdBDpWCOMRMbgTcqadX0jQ8HOERSxXxhk24TIBUpivovAKLrnpSio9w&v=1.0&nocache=1246220526894");
      using (WebResponse resp = req.GetResponse())
      using (Stream respStream = resp.GetResponseStream())
      using (StreamReader reader = new StreamReader(respStream))
      {
        string response = reader.ReadToEnd();
        MessageBox.Show(response);
      }

This works as well. Which is good since it suggests Google aren’t doing anything to stop people using the API from ‘browsers’ that aren’t really browsers. So the next thing to figure out is which bits of the URL are required. So after removing all the parameters that aren’t needed, we are left with

http://www.google.com/uds/GlocalSearch?q=KT1%203EG%2C%20UK&v=1.0

I was somewhat surprised at how few of the parameters are actually required for the call to still work. Even the user’s API key isn’t needed. Of course, since this is completely undocumented, this may change in the future. In fact last time I tried to do this, I’m fairly certain it was a lot harder to get the HTTP call to work from .NET.

So now we know what URL is required, we just need to be able to parse the returned JSON data into something more .NET friendly. Fortunately .NET 3.5 provides the JavaScriptSerializer class to serialize and deserialize JSON strings. So putting it all together, we get a fairly simple implementation

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Web.Script.Serialization;

namespace LocalSearch
{
  public static class Postcode
  {
    public static void Geocode(string postcode, out double latitude, out double longitude)
    {
      HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        string.Format("http://www.google.com/uds/GlocalSearch?q={0}%2C%20UK&v=1.0",
          Uri.EscapeDataString(postcode)));
      using (WebResponse resp = req.GetResponse())
      using (Stream respStream = resp.GetResponseStream())
      using (StreamReader reader = new StreamReader(respStream))
      {
        string response = reader.ReadToEnd();
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        Dictionary<string, object> deserialized = 
          (Dictionary<string, object>)serializer.DeserializeObject(response);
        Dictionary<string, object> responseData =
          (Dictionary<string, object>)deserialized["responseData"];
        object[] results = (object[])responseData["results"];
        Dictionary<string, object> resultsData =
          (Dictionary<string, object>)results[0];

        latitude = Convert.ToDouble(resultsData["lat"]);
        longitude = Convert.ToDouble(resultsData["lng"]);
      }
    }
  }
}

This could be improved by using Deserialize<T> instead of DeserializeObject but that would involve writing some classes to hold the returned JSON data I think. I might look at that some other time.

Of course this provides exactly the same functionality as my StreetMap screen scraping code of many moons ago, so what’s the point? Really just to show a fairly generic method of using AJAX calls from .NET server-side code.

Friday, June 26, 2009

Day 14 – my first real interview

To be frank I thought this would be a complete train crash of an interview. I’d been asked to do a presentation on some of the work I’d done during my last job and I generally don’t feel overly confident doing presentations. So I guess it was a success because my presentation wasn’t awful. It was helped by the fact that the interviewer seemed to be checking his email most of the time I was talking.

The rest of the interview seemed to go OK, although my interview assessing antennae seem to be a little off at the moment. I met the agent afterwards and he said because there are so many candidates going after jobs, employers are getting much choosier about who they take on. Whereas in the past being a 75% match may have been enough, now they only accept a 100% match. Which may explain my incorrect assumptions about how well interviews have gone so far.

Thursday, June 25, 2009

Day 13 – where my hopes are dashed

So today I had a telephone interview with a potential employer and things seemed to be going pretty well. I think you can generally tell how well an interview is going. Sometimes it’s clear the two of you are not really on the same wavelength, but other times it’s like you’re talking to someone you could imagine as a friend, or at least a work mate. Then he mentioned a company I’ve done some work for recently and I thought things were going even better. In the past, quite a few of my jobs have come not necessarily from my knowledge or experience but from having some kind of connection with the company I was interviewing with. At Metastorm, one of my former colleagues worked there. At APT, I knew the person who would become my boss. At Process Mapping, the boss was a former work mate. Of course, I’m sure that’s not the only reason I got those jobs, but it certainly didn’t reduce my chances.

So the thing was, having that connection made me think I’d at least get a face to face interview. But then the agency told me they’d decided not to go any further and I was somewhat disappointed. If it had been one of those awkward interviews where there is no common ground I’d have been cool with it, but now I’m sitting here wondering what I missed, what I said that came across badly and how to rectify it. But then of course the next interview will be with someone completely different, who has completely different requirements and a completely different perspective on who they are looking for. So it’s a different game, one where you find out the rules after the game has finished.

Wednesday, June 24, 2009

Day 12 – where I go to the Job Centre

I dunno, I was kind of expecting the Job Centre to be full of terrifying men shouting ‘gisajob!’, but it was actually quite a pleasant experience: mostly normal people looking for jobs, and the staff were perfectly friendly, not the intimidating bunch I’d been led to believe. There were some burly security guards there, presumably in case it all kicks off. In fact I’d imagine they are probably hoping it does kick off, so at least they’d have something to do.

I had a telephone interview which seemed to go OK, but I haven’t heard anything back from them yet.

Postcode geocoding in ASP.NET with live update

I put up an example of postcode geocoding using the Google Local Search AJAX API and somebody asked if it was possible to populate an ASP.NET GridView with the data in real-time. So since I currently have some spare time, I thought I’d give it a go. First a disclaimer, I have no idea if this breaks the terms and conditions for Local Search usage so check before doing it yourself.

First, let’s create a table to store the postcode information

CREATE TABLE Postcodes(
    Postcode nchar(10) NOT NULL,
    Latitude float NOT NULL,
    Longitude float NOT NULL,
 CONSTRAINT PK_Postcodes PRIMARY KEY CLUSTERED 
 (
    Postcode ASC
 )
)

I won’t post all the code here, but the basic process goes like this

  • When the user presses the ‘Get lat/long’ button, execute the Local Search query
  • When/if that returns the latitude and longitude for the postcode, send the results off to a generic handler using a XMLHttpRequest object that puts the data into the Postcodes table
  • When that call returns, update the GridView, which sits in an UpdatePanel so only the grid gets updated.
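The generic handler in the second step might look something like this. This is a sketch only: the class name, request parameter names and connection string are my assumptions, not the actual downloadable code.

```csharp
using System.Data.SqlClient;
using System.Globalization;
using System.Web;

public class SavePostcode : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        using (SqlConnection conn = new SqlConnection(
            "Integrated Security=SSPI;Initial Catalog=OGC;Data Source=localhost"))
        {
            conn.Open();
            using (SqlCommand command = conn.CreateCommand())
            {
                // Parameterised so the posted values can't break the SQL
                command.CommandText =
                    "INSERT INTO Postcodes (Postcode, Latitude, Longitude) " +
                    "VALUES (@postcode, @lat, @lng)";
                command.Parameters.AddWithValue("@postcode", context.Request["postcode"]);
                command.Parameters.AddWithValue("@lat",
                    double.Parse(context.Request["lat"], CultureInfo.InvariantCulture));
                command.Parameters.AddWithValue("@lng",
                    double.Parse(context.Request["lng"], CultureInfo.InvariantCulture));
                command.ExecuteNonQuery();
            }
        }
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```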

Anyway, this is what it looks like (yeh I know, not too pretty) and you can download the source here.

Postcode geocoding in ASP.NET

Tuesday, June 23, 2009

Day 11 – where I wonder how List<T> is implemented

Just over 5 years ago I had an interview at APT and after a mammoth interview (the longest I’ve ever experienced) the interviewer told me he thought I’d done well but he was a little disappointed that I didn’t know how the TList class in Delphi was implemented (an internal array as it happens, as opposed to a linked list). Fast forward to now and I was having a telephone interview today where the discussion turned to data structures and the difference between an array and a linked list. We discussed how a list like this could be improved to provide faster random access. I probably didn’t do too well on this, since the internals of collection classes are generally not something I’ve needed to worry about too much but I probably should think about them some more since they do seem to turn up in interviews a lot.

But after all this talk of linked lists, how is the List&lt;T&gt; class (or the ArrayList class for that matter) actually implemented? Well, after firing up Reflector I discovered that, just like in Delphi, these classes use an array internally. I can only presume that the overhead of resizing the array as items are added is outweighed by the costs a linked list would incur, namely memory fragmentation and slow random access. Of course, if you really want a linked list, there is a LinkedList&lt;T&gt; class available.
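A stripped-down sketch of that internal array-doubling strategy (nothing like the real List&lt;T&gt; source, which also tracks a version number and implements various interfaces) might look like this:

```csharp
using System;

public class TinyList<T>
{
    private T[] items = new T[4];
    private int count;

    public int Count { get { return count; } }

    public void Add(T item)
    {
        if (count == items.Length)
        {
            // Grow by doubling: amortises the copy cost to O(1) per Add
            T[] bigger = new T[items.Length * 2];
            Array.Copy(items, bigger, count);
            items = bigger;
        }
        items[count++] = item;
    }

    // Random access is a direct array index: O(1), unlike a linked list
    public T this[int index]
    {
        get { return items[index]; }
    }
}
```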

Monday, June 22, 2009

Looking for work - Day 10

When I started this series of blog posts I thought it might be interesting because we are meant to be in the middle of a big recession. But as far as I can tell there seem to be plenty of jobs out there for people like myself. It could be down to my impressive CV (ahem), or my tactic of throwing enough crap at the wall that some of it is bound to stick, or it could just be that there are still a good number of jobs available. I’ve had another two telephone interviews confirmed today, so I’ve now got three telephone interviews and a face-to-face interview lined up. I’m guessing that employers will interview the same number of people as in the past; even if there are more CVs available to look at, they still won’t want to spend all day interviewing people, so my chances of actually being offered a job from one of these interviews should be much the same as they ever were. This is just conjecture on my part of course, but it certainly makes me feel better.

Talking of my blanket bombing technique, here are the sites I have submitted my CV to. It seems that different recruiters use different websites to find candidates, so it is probably worthwhile submitting a CV to as many as possible

  • jobserve.co.uk
  • totaljobs.com
  • jobsite.co.uk
  • monster.co.uk
  • planetrecruit.com, jobsearch.co.uk, gisajob.com (I think these three share the same back end)
  • cwjobs.co.uk
  • technojobs.co.uk
  • jobs.ac.uk

Friday, June 19, 2009

Day 7 – where nothing happens

A few phone calls from agents, but nothing of note.

Running a client-side script when a form segment loads

Form segments don’t provide a way to run a client script when the form segment is loaded. This can be somewhat limiting but it can be solved quite easily. Add a label to the form segment and set the label’s caption to

<script type="text/javascript">window.attachEvent("onload", Setup);</script> 

Then add a script to your form segment to do what ever you want

function Setup()
{
    alert("hello");
}

The problem with flags in Metastorm BPM

Flags are a great way to pass data between processes or to pass data from an external application to Metastorm BPM. There are several ways to raise flags, via the eRaiseFlag executable, using the Raise Flag ActiveX control, via the engine’s XML interface or through the engine’s COM interface. FreeFlow provides a wrapper around the last two approaches. Usage is pretty simple

      Connection conn = new Connection();
      // use to switch between TP and COM
      conn.RaiseFlagBy = RaiseFlagBy.TransactionProtocol;
      conn.RaiseFlag("New Data", new string[] {"some data", "1"});

As an aside, eagle eyed C# coders may be wondering why the last parameter of RaiseFlag doesn’t use the params keyword to simplify usage even further. The problem is there are several overloads of this method (taking user name, password, folder ID etc) so adding params would confuse the compiler since there would be multiple matches for a call to RaiseFlag. One solution would be to give the method a different name but this would make the API less discoverable since different versions of the same method would have different names. Another solution would be to just have one version of the method, with all the required parameters, but that wouldn’t really make life any simpler since the simple usage above would require passing in all parameters. API design isn’t an exact science and sometimes compromises are required.

But back to the main point of this post. Say we are going to create a new folder in Metastorm BPM using the flag data passed to populate the custom variables. Typically this might happen when somebody fills in an online ASP.NET form and we want to kick off some kind of process in Metastorm based on that data. So we’ll have a flagged creation action in the process and add some code to read the data, like so.

%tText:=%Session.FlagData[1]
%iNumber:=%Session.FlagData[2]

Which works fine. OK, say we change the data passed in to

conn.RaiseFlag("New Data", new string[] {"some\tdata", "1"});

Now when we run the code we don’t get a folder created. Instead we get an error in the Designer Log saying “'%inumber' failed while evaluating expression '%iNumber:=data' Error setting value for custom folder field 'inumber'”. This is because flag data passed to the engine is tab-delimited, so if your actual data contains tabs, everything gets screwed up.

We have two problems to solve here. First, how do we handle data with tabs in it, since we probably can’t stop tabs being entered by the user of the ASP.NET form. Secondly, how do we deal with any kind of failure to parse the data passed. This is more of a problem, since currently if we fail to parse the data, we lose it all since the folder never gets created.

So tackling the second problem first, we want to do as little work as possible in the flagged creation action. We will just assign the flag data to a temporary memo variable, since the flag data will not be available outside the flagged action.

%xmData:=%Session.FlagData 

You may find this doesn’t work in earlier versions (at one point only individual items of flag data could be accessed), but it looks to have been fixed in version 7.6. If it doesn’t work for you, you’ll need to manually combine each piece of flag data.

Next in the Parse conditional action (with no condition), we attempt to assign the flag data to the variables, like so

%tText:=%xmData[1]
%iNumber:=%xmData[2]

This will still fail and will stay at the ‘Got Data’ stage, but at least we haven’t lost the data. The Edit action can then be used to manually fix up the data and get the folder on its way.

So back to the first problem, handling tabs in flag data. Really the only solution to this is to use a different delimiter when raising the flag. None of them are perfect, since potentially any of them could be in the data, but %CHR(160) has worked well for us in the past. Another solution might be to pass your data in some other format such as XML. That will be more complicated but more robust.
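As a sketch of the delimiter approach (the packing itself is plain .NET, so it’s shown standalone; in practice the packed string would be passed as the flag data and split apart again in the Metastorm expression):

```csharp
using System;

class FlagDataDemo
{
    static void Main()
    {
        // User data containing a tab, which would corrupt tab-delimited flag data
        string[] values = { "some\tdata", "1" };

        // Non-breaking space, matching the %CHR(160) suggestion above
        char delimiter = (char)160;

        // Pass this single string as one flag data item...
        string packed = string.Join(delimiter.ToString(), values);

        // ...and split it again on the receiving side
        string[] unpacked = packed.Split(delimiter);
        Console.WriteLine(unpacked[0]); // the original value, tab intact
    }
}
```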