Last week at work, I was given the task of rewriting an IMHO crappy piece of code dealing with deletion of objects in an Eclipse-based editor. Actually, I just had to fix a bug and add a small new feature, but things turned out FUBAR, so after two changelist reverts I realized refactoring just wouldn't do the job, and that both the tests and the production code had to be rewritten.
Half of the task came down to implementing a 'Delete' command for the Metamodel module upon which our editor, and our tool, is based.
So, trying to be a TDD guy, I sat down and started to devise a test. I built an elaborate test fixture mocking all the functionality I had to interact with in the Metamodel module. This way, I figured, I would abstract my code from this module as much as possible and protect it from failures there - even if the module failed, my test would still pass, and my code would still be correct.
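To make the trap concrete, here is a minimal sketch of that style of test. All names (`Metamodel`, `DeleteCommand`) are hypothetical stand-ins, and a hand-rolled fake takes the place of EasyMock; the point is the same - the fake can never throw, so the test passes no matter how the command actually calls the real module.

```java
interface Metamodel {
    boolean exists(String id);
    void remove(String id);
}

class DeleteCommand {
    private final Metamodel model;

    DeleteCommand(Metamodel model) { this.model = model; }

    // Returns true if the object was found and removed.
    boolean execute(String id) {
        if (!model.exists(id)) {
            return false;
        }
        model.remove(id);
        return true;
    }
}

public class MockedDeleteTest {
    public static void main(String[] args) {
        // Hand-rolled fake: always claims the object exists and silently
        // accepts any removal. Unlike the real module, it can never throw,
        // so this test tells us nothing about the real interaction.
        Metamodel fake = new Metamodel() {
            public boolean exists(String id) { return true; }
            public void remove(String id) { /* no-op */ }
        };

        boolean deleted = new DeleteCommand(fake).execute("obj-1");
        System.out.println(deleted ? "PASS" : "FAIL");
    }
}
```

The test is green by construction: every answer it checks was scripted into the fake a few lines earlier.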
So I spent three hours mocking the Metamodel and one more hour devising my tests, then implemented the functionality in the Command; and everything was tip-top - or so I thought.
The next day I got myself a cup of coffee, sat at my desk and tried my code manually against the production Metamodel module. To my surprise, my command worked for only 20% of the input data. It turned out that the branches of the class responsible for the remaining 80% of the cases invoked the Metamodel in ways that would always make it throw exceptions. And still my fine-grained, carefully devised JUnits were passing.
So all the JUnits I had written really were a prerequisite for my code being correct, but they still added little value to the product.
I sat down and wrote integration tests of my class against the real Metamodel module. Almost all of them failed on the existing code. The assertions were all based on problems I had seen in the live behaviour of the editor. Once I get them all green tomorrow morning, the code will be next to rocket-stable.
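For contrast, here is the same command tested integration-style. The names are again hypothetical, and a small in-memory class stands in for the real Metamodel - but it has realistic semantics: removing an unknown id throws, just as the production module did for most of the inputs. Against such a collaborator, the command's guard logic is actually exercised.

```java
import java.util.HashMap;
import java.util.Map;

public class IntegrationDeleteTest {
    interface Metamodel {
        boolean exists(String id);
        void remove(String id);
    }

    // Stand-in with realistic semantics: unlike a compliant mock,
    // removing an unknown id is an error here.
    static class RealisticMetamodel implements Metamodel {
        private final Map<String, Object> objects = new HashMap<>();
        void add(String id) { objects.put(id, new Object()); }
        public boolean exists(String id) { return objects.containsKey(id); }
        public void remove(String id) {
            if (!objects.containsKey(id)) {
                throw new IllegalStateException("unknown id: " + id);
            }
            objects.remove(id);
        }
    }

    static class DeleteCommand {
        private final Metamodel model;
        DeleteCommand(Metamodel model) { this.model = model; }
        boolean execute(String id) {
            if (!model.exists(id)) return false;
            model.remove(id);
            return true;
        }
    }

    public static void main(String[] args) {
        RealisticMetamodel model = new RealisticMetamodel();
        model.add("obj-1");
        DeleteCommand cmd = new DeleteCommand(model);

        // Real behaviour surfaces: an existing object is removed, a
        // missing one is reported as a failure, and no exception leaks.
        System.out.println(cmd.execute("obj-1"));   // true
        System.out.println(cmd.execute("missing")); // false
        System.out.println(model.exists("obj-1"));  // false
    }
}
```

A command whose branches called `remove` without checking `exists` would pass the mocked test above but blow up here - which is exactly the gap I ran into.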
Unit testing, and mocking for that matter, is not always the answer. Think about it - why would you want to abstract your Command from module 'A' when the Command exists for the sole purpose of interacting with module 'A', and not module 'B', 'C' or any other? Having your unit tests pass while the integration fails just means having code that is correct from your point of view, but incorrect from the System / Customer / User point of view.
Furthermore, this Metamodel module was so widely reused throughout our application stack that the fixture for the integration test consisted of just calling a couple of handy lookup and creation services - it took me less than 20 minutes to write, and its value for the product was many times what I achieved with three hours of EasyMocking. (What's so easy about it anyway?!)
Unit testing in isolation has real value for units that are low in the application stack. Mocking has real value when real object interactions are VERY expensive or VERY hard to set up. But a module high in the application stack often has no real meaning without the modules it relies upon; it was likely designed with those modules in mind, and so it should be - we shouldn't abstract for the sole sake of abstraction. Likewise, tests abstracted from those modules can become meaningless too.
So, if you have a class relying on lots of other functionality, don't rush to mock that functionality when testing. If it is not hard or expensive, set up the real functionality, feed it to the class, and feed the test with lots of test data. Data-drive it, giving it boundary cases, normal cases, illegal inputs and anything in between. If the test passes then, you get real confidence that the class does the job.
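The data-driving idea can be sketched like this - again with hypothetical names and a simple in-memory collaborator instead of a mock. One command is fed a table of inputs paired with expected results: the normal case, a boundary case, the legitimate failure path, and an illegal input.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DataDrivenDeleteTest {
    interface Metamodel {
        boolean exists(String id);
        void remove(String id);
    }

    static class InMemoryMetamodel implements Metamodel {
        private final Map<String, Object> objects = new LinkedHashMap<>();
        void add(String id) { objects.put(id, new Object()); }
        public boolean exists(String id) { return objects.containsKey(id); }
        public void remove(String id) { objects.remove(id); }
    }

    static class DeleteCommand {
        private final Metamodel model;
        DeleteCommand(Metamodel model) { this.model = model; }
        boolean execute(String id) {
            // Guard against illegal input and missing objects alike.
            if (id == null || !model.exists(id)) return false;
            model.remove(id);
            return true;
        }
    }

    public static void main(String[] args) {
        InMemoryMetamodel model = new InMemoryMetamodel();
        model.add("normal");
        model.add("");  // boundary: an object with an empty id

        DeleteCommand cmd = new DeleteCommand(model);

        // One command, a table of inputs and expected outcomes.
        Map<String, Boolean> cases = new LinkedHashMap<>();
        cases.put("normal", true);   // normal case
        cases.put("", true);         // boundary case
        cases.put("missing", false); // legitimate failure path
        cases.put(null, false);      // illegal input

        int failures = 0;
        for (Map.Entry<String, Boolean> c : cases.entrySet()) {
            if (cmd.execute(c.getKey()) != c.getValue()) {
                failures++;
                System.out.println("FAIL for input: " + c.getKey());
            }
        }
        System.out.println(failures == 0 ? "ALL PASS" : failures + " failures");
    }
}
```

In JUnit this shape of test maps naturally onto a parameterized runner; the table of inputs is the part that grows as you learn more about the live behaviour.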
The only remaining problem is that if the modules you rely upon fail, your integration test will fail too, which makes root-cause analysis harder and test run reports misleading. On the other hand, if A is designed as something on top of B and C, and B and C fail, does it really make sense for A's tests to pass?
The described approach has one more advantage. All the tests of classes high in the usage stack also exercise the low-level modules under conditions that match their real requirements, so as the project evolves, the low-level modules should become very robust.
So perhaps unit testing a class against its live environment is not such a bad idea. Hesitantly, I will call this approach 'Integration Unit Testing'. I'll gather more practical experience, give it some more thought, and perhaps write about it in later blog posts.