Many moons ago I discovered a bug in some software I use on a fairly regular basis (I won’t say which, since it isn’t relevant to the story and I don’t want this to read as a critique of the company or their development process, though you can probably work out who it is if you look round the rest of this site). The application lets you write server scripts in JScript.NET, and it does some validation of those scripts. In certain circumstances, though, the validation incorrectly flags problems in scripts that are perfectly valid. It was a fairly innocuous bug, so I reported it but didn’t push for it to be fixed.
Fast forward a year or so, and a new version of the software comes out with a feature that validates your scripts when you try to deploy a project to the server. Generally a useful feature, except the original bug is still there, which means I now can’t deploy perfectly valid projects because of the interaction between that old bug and the new feature. A minor unfixed bug has blown up into a major issue.
Which got me thinking. The bugs that don’t get fixed tend to be exactly this sort: they only occur in particular, rare scenarios. If they happened all the time, the chances are they would get fixed, provided they were serious enough. But they still add complexity to the product and to testing. We can no longer assume that doing A will cause B to happen; most of the time it will, but occasionally doing A will cause C instead. Any testing of new features has to take this into account. In fact, testing of any new feature has to account for every unfixed bug that might affect the new functionality.
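To make that concrete, here’s a minimal sketch of the situation. All the names (validate, deploy, rare_construct) are hypothetical stand-ins, not anything from the real product: a known, unfixed quirk in the validator means the test suite for the new deployment feature has to encode the bug’s behaviour explicitly, or a perfectly valid script looks like a regression.

```python
def validate(script: str) -> bool:
    """Pretend validator: normally accepts valid scripts (A leads to B),
    but a known, unfixed bug rejects scripts containing one rare construct
    (A occasionally leads to C)."""
    if "rare_construct" in script:  # the unfixed edge-case bug
        return False
    return True


def deploy(script: str) -> str:
    """New feature: deployment refuses anything the validator flags."""
    return "deployed" if validate(script) else "rejected"


def test_deploy_valid_script():
    # The common case: doing A causes B.
    assert deploy("ordinary valid script") == "deployed"


def test_deploy_hits_known_validator_bug():
    # The rare case: the test has to acknowledge the unfixed bug,
    # otherwise this valid script would look like a deployment failure.
    assert deploy("valid script with rare_construct") == "rejected"
```

The second test is the cost in miniature: the new feature can’t be tested purely on its own terms, because the old bug is now part of its observable behaviour.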
There are already plenty of reasons to fix as many bugs as possible (keeping customers happy, fighting the constant entropy inherent in software development). May I add this one to the list?