I came across this interesting video, which walks from DOS 5 all the way to Windows 7. Microsoft generally does a pretty good job of preserving backward compatibility.
With the popularity of Test Driven Development, more and more teams are running automated regression tests. Psychologically, it is gratifying to know that our tests pass. Continuous builds are a great step forward.
However, after an initial honeymoon period, reality sets in: without proper management, automated tests soon become an unbearable burden. The testing tax simply becomes too much, the process starts to break down, products are delayed, and customers and management get frustrated.
Once again, I'm not saying that automated tests are bad. Not at all. All I am saying is that it is very easy to mismanage them.
Mistake 1: Trusting Developer Unit Tests
I'm not saying we should not trust developers as good human beings. All I'm saying is that to err is human.
Let us assume I follow Test Driven Development by the book. I write tests before coding, and when I check in, all my tests pass. I have coded almost every test case I can think of. My code coverage is 100%. I move on to the next story/feature/work item. Does this mean my code is solid? Not really. Despite my good intentions, I might still have missed some important edge cases. The result is simple: I missed them in the code, and I also missed them in the unit tests.
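A toy sketch of what I mean (the function and test are made up for illustration): the test below executes every line, so coverage reports 100%, yet an edge case I never thought of slips through both the code and the test.

```python
def average(xs):
    # Looks complete, and every line is executed by the test below.
    return sum(xs) / len(xs)

def test_average():
    assert average([2, 4, 6]) == 4  # passes, and coverage reads 100%

# The edge case missing from both the code and the tests:
# average([]) raises ZeroDivisionError.
```

Coverage measures which lines ran, not which inputs I thought of; the blind spot is shared between the code and its tests.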
Mistake 2: Preferring regression testing over exploratory testing
Let's be real: we cannot test our product exhaustively, so we must choose wisely how we go about testing it. Automated tests are a great way of making sure you don't introduce bugs as you add new features. However, many teams concentrate on creating new regression tests for each new feature rather than spending time actually finding bugs. The distinction is subtle but important. Bugs are not created equal: many bugs are shallow, and only a few are really deep. Shallow bugs are caused by simple oversights (e.g., not checking a null parameter). They are easy to fix, and once fixed they don't normally come back. Deep bugs are relatively rare, but like a hydra they come back again and again; they are generally a symptom of a complex design. We should concentrate on finding as many shallow bugs as possible, because once fixed they aren't coming back. Don't rush to automate them yet. Instead, spend time hunting for the deep bugs. They are hard to find, but more important. All the busywork and activity of finding shallow bugs is no substitute for the real progress that comes from uncovering the deep bugs.
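A shallow bug and its one-line fix might look like this (a contrived example of mine, not from any real codebase):

```python
def greet(name):
    return "Hello, " + name.upper()  # shallow bug: crashes when name is None

def greet_fixed(name):
    if name is None:                 # the simple oversight, fixed in one line;
        return "Hello, stranger"     # this bug does not come back
    return "Hello, " + name.upper()
```

Once the null check is in place, there is little reason for this bug to ever resurface, which is exactly why it is not the best use of your automation budget.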
Use random monkeys, real monkeys, or whatever works for you. The aim is not to write automated tests; the aim is first to find as many bugs as possible and then to make sure they don't come back.
Mistake 3: Not scaling continuous build process
In theory, continuous build sounds really great, and in practice it works great for small teams. However, as the team grows, it becomes a single bottleneck that can cripple your team. By definition, a continuous build serializes check-ins: only one goes through at a time, and for each check-in you run the tests, which takes time. You have to wait in a queue to be able to check in. The alternative is to check in at any time and fix things only when the regular test run fails. Either way, as the team size increases (say, to 50+), you end up waiting a long time, either to be able to check in or simply for others to clean up their mess. I have seen a team on which the wait time was about 3-4 days! Worse yet, by the time your code is ready to be checked in, it is so far out of sync that it fails to compile, and you spend an enormous amount of time on merge conflicts and waiting. The bigger the team, the bigger the mess. Soon you are in a fix: damned if you do continuous integration, damned if you don't.
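The arithmetic behind the bottleneck is simple. A rough back-of-envelope sketch (the numbers are mine, purely illustrative): if check-ins are serialized and each one runs the full suite, daily throughput is capped no matter how big the team gets.

```python
run_minutes = 30                            # assumed build + test time per check-in
workday_minutes = 8 * 60
capacity = workday_minutes // run_minutes   # at most 16 serialized check-ins a day

for team_size in (5, 20, 50):
    # If every developer wants one check-in a day, anything beyond
    # capacity piles up as a growing queue.
    backlog_per_day = max(0, team_size - capacity)
    print(f"{team_size} devs -> backlog grows by {backlog_per_day}/day")
```

Under these assumptions a 5-person team never queues, but a 50-person team falls 34 check-ins further behind every single day, which is how multi-day wait times happen.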
Mistake 4: On a version 1 product, spending all your capital on paying the testing tax
Version one products, or for that matter any major overhauls of a system, are characterized by fluid design and a flow of creative ideas. There is too much uncertainty and so much at stake. Experimentation is inevitable: no risk, no gain! You must follow the maxim: fail early and often to succeed sooner. However, many teams forget this and institute a rigid regime of continuous and exhaustive testing far too early in the game. The result is that it becomes increasingly hard to change and improve the design, because doing so also means changing lots of tests. Developers get frustrated with the burden put on them, the test team gets frustrated by constant breaking changes, and managers get frustrated by delays and a lack of innovation. In the end, many teams spend most of their precious capital paying the testing tax.
Mistake 5: Hiring good developers and turning them into bad testers
Many managers mistakenly think that automation requires a full-time SDET. So they hire a bright developer and try to turn him into a brilliant tester. That strategy often backfires. First, many SDETs are brilliant and better developers themselves; they are coders first and foremost, make no mistake. More often than not, they have wider and deeper experience than the "regular" developers on their team. The problem is that they often think like a developer, which in fact they are. They see automated testing as just another kind of programming and concentrate on the programming, not so much on the testing. Management has a very different view: they want them to be testers first, which they simply can't be. The result is missed opportunities on every side, the waste of another developer's precious talent, and high rates of disappointment and attrition. A better solution is to hire that person as a developer, use his skills to write product code, and ask everyone on the team to share responsibility for writing automated tests. Your new hire is a great asset because he can mentor all the other developers in how to write good test automation. For the testing part, hire a professional tester who enjoys the thrill of finding obscure bugs.