Monday, November 30, 2009

Quick tip to reduce hierarchical complexity.

This week it is going to be a short and simple one.

Say we build a set of classes implementing a particular interface. For the sake of example, imagine these classes are abstractions of test applications used in integration-testing fixtures. We may have various implementers: web service provider applications, consumer applications, EJB applications, web applications, and so forth.
Inevitably, at some point we will realize there is common logic between the implementers - for example, creating some J2EE modules and assigning them to an EAR module; deploying the application; cleaning up after the application on fixture teardown, etc.
We may be tempted to remove the code duplication with inheritance. This is a very common approach, but not a sound one in my opinion.
So, we create a base class to hold the common logic, and the implementers extend it. Sounds fine, right? Well, the problem is that the classes extending this base class have little in common in terms of domain modeling. The only thing they have in common is code. Even the name of the base class will suck, which reveals a code smell. Such classes are usually called (in our example) "BaseApplication", "AbstractApplication" or something like that.
So, what to do? Simple: create a utility class or classes to hold the common code and reference them inside your hierarchy. Now, if certain implementers have special work to do to extend the common-task logic, the utilities themselves might follow some hierarchy.
Some may say that utility objects are also a code smell, but this approach is cleaner in terms of object modeling and simpler in terms of type hierarchy than having a "BaseApplication". It is also more flexible: since Java does not support multiple inheritance (thank goodness), not extending the 'base application' leaves you the option to extend some real class from your domain model.
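A minimal sketch of the idea, with invented names standing in for the running example ("DeploymentUtil", "WebServiceProviderApp" and the module names are all hypothetical): the shared logic lives in a utility, and each implementer delegates to it instead of extending a "BaseApplication".

```java
// The domain interface the implementers share.
interface TestApplication {
    void deploy();
    void cleanUp();
}

// Utility holding the common logic; no "BaseApplication" needed.
final class DeploymentUtil {
    private DeploymentUtil() {} // not meant to be instantiated

    // Assemble module names into a (toy) EAR descriptor.
    static String assembleEar(String... modules) {
        return "ear[" + String.join(",", modules) + "]";
    }
}

// An implementer remains free to extend a real domain class;
// it simply delegates the common tasks to the utility.
class WebServiceProviderApp implements TestApplication {
    private String ear;

    @Override
    public void deploy() {
        ear = DeploymentUtil.assembleEar("provider-ejb", "provider-web");
    }

    @Override
    public void cleanUp() {
        ear = null;
    }

    String deployedEar() {
        return ear;
    }
}
```

The implementers stay flat in the type hierarchy, and the utility can be replaced or specialized without touching their supertypes.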

Friday, June 19, 2009

Integration Testing vs Unit Testing

In my last blog post I described an approach to testing which I called 'Integration Unit Testing'. The basic idea was to unit-test a class against its real-life environment rather than relying on a mocking infrastructure. I cannot tell whether and when this approach is good to follow; I guess that is a question of personal taste and intuition, as are many aspects of programming.
Another approach I would like to address, this time briefly, is 'Integration Testing instead of Unit Testing'.
In a nutshell, here's what this is about:
You are given some task to accomplish. First you write class ImportantClass and classes B and C. ImportantClass does some importantStuff(), and with the aid of B and C it seems to get the job done. Being a TDD guy, you of course write the unit tests for B, C and ImportantClass beforehand.
A few hours or days later, you discover that what you came up with is bullcrap and decide to revert your changelist. You write some new aid classes D and E, and alter ImportantClass to use them. Of course, BTest and CTest are no longer relevant, and you have to write DTest and ETest in their place.
Now the really bad thing is that ImportantClassTest is also no longer valid. The problem is that, being a unit test, for the most part it does not assert that some high-level business functionality is achieved. Rather, it very much asserts that, for example, certain methods of B and C are called with certain parameters.
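To make the brittleness concrete, here is a hand-rolled stand-in for what such an interaction-asserting test looks like (the class names come from the running example; all method bodies are invented). B records how it is called, mimicking what a mocking framework verifies:

```java
import java.util.ArrayList;
import java.util.List;

// Records its own invocations, like a mock with expectations.
class B {
    final List<String> recordedCalls = new ArrayList<>();

    String prepare(String input) {
        recordedCalls.add("prepare(" + input + ")");
        return input.trim();
    }
}

class ImportantClass {
    private final B b;

    ImportantClass(B b) { this.b = b; }

    String importantStuff(String input) {
        return b.prepare(input).toUpperCase();
    }
}

class BrittleImportantClassTest {
    public static void main(String[] args) {
        B b = new B();
        String result = new ImportantClass(b).importantStuff(" hi ");
        // behaviour assertion: survives internal redesigns
        if (!"HI".equals(result)) throw new AssertionError(result);
        // interaction assertion: breaks as soon as B gives way to D and E
        if (!b.recordedCalls.equals(List.of("prepare( hi )"))) throw new AssertionError();
        System.out.println("green - until the next epiphany");
    }
}
```

The first assertion is about the business outcome and would keep working after the rewrite; the second one is welded to today's helper classes.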
Being a patient guy, you say: 'O.K. Patience, my young padawan.' (You say it to yourself, knowing it is healthy for a programmer to talk to themselves.) So you rewrite ImportantClassTest.
Of course, a day later you have another epiphany and discover that the concept of classes D and E is bullcrap too. So you do another revert and end up in the same ordeal, rewriting ImportantClassTest yet again and very much wanting to bite your arms off by now.
If you end up in such a situation, where classes B, C, D, E and so forth are likely to change or be dropped with each epiphany you have or each new developer working on the code, it is likely an indication that these classes are not key aspects of the business functionality to begin with. If they are not, perhaps they are not worth the effort of unit-testing at all. What you could do instead is write an ImportantClassIntegrationTest, test only through ImportantClass's public API, and assert that some business objectives are really achieved. Then you measure the test's code coverage against the code of ImportantClass and the current B, C and so forth classes. If the metric is OK, you have done OK too.
To further emphasize the small importance and transience of the B, C and so forth classes, you can either:
  • nest them inside the ImportantClass
  • create them inside the package of ImportantClass and give them package visibility.
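A sketch of what the integration test looks like, assuming invented bodies for the running example's classes (only the names ImportantClass and ImportantClassIntegrationTest come from the post; "Normalizer" stands in for today's transient helper):

```java
// Only the public API below is considered stable.
class ImportantClass {
    private final Normalizer normalizer = new Normalizer(); // today's helper

    public int importantStuff(int raw) {
        return normalizer.normalize(raw) * 2;
    }
}

// Package-private and transient: may be swapped out at the next epiphany,
// and has no test of its own.
class Normalizer {
    int normalize(int raw) {
        return Math.max(raw, 0);
    }
}

class ImportantClassIntegrationTest {
    public static void main(String[] args) {
        ImportantClass sut = new ImportantClass();
        // assert business outcomes, never which helpers were invoked
        check(10, sut.importantStuff(5));
        check(0, sut.importantStuff(-3));
        System.out.println("business objectives met");
    }

    static void check(int expected, int actual) {
        if (expected != actual) throw new AssertionError(expected + " != " + actual);
    }
}
```

Replacing Normalizer with new helpers D and E leaves this test untouched, and a coverage run over ImportantClass plus the current helpers tells you whether the public-API cases exercise them sufficiently.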

Sunday, June 7, 2009

A few words on Unit Testing, Integration Testing and Mocks

Last week at work, I was given the task of rewriting an (IMHO) crappy piece of code dealing with deletion of objects in an Eclipse-based editor. Actually, I just had to fix a bug and add a small new feature, but things turned out FUBAR, so after two changelist reverts I realized refactoring just wouldn't do the job, and both the tests and the productive code had to be rewritten.
Half of the task came down to implementing a 'Delete' command for the Metamodel module upon which our editor, and our tool, is based.
So, trying to be a TDD guy, I sat down and started to devise a test. I built an elaborate test fixture mocking all the functionality I had to interact with in the Metamodel module. This way, I figured, I would abstract my code from that module as much as possible and protect myself from failures in it: even if it failed, my test would still pass, and my code would still be correct.
So I spent 3 hours mocking the Metamodel, one more hour devising my tests, then implemented the functionality in the command; and everything was tip-top, or so I thought.
The next day I got myself a cup of coffee, sat at my desk and tried to manually test my code against the productive Metamodel module. To my surprise, my command worked for only 20% of the input data. It turned out that the branches of the class responsible for the remaining 80% of the cases invoked the Metamodel in ways that made it always throw exceptions. And still my fine-grained, carefully devised JUnit tests were passing.
So all the JUnit tests I had written really were a prerequisite for my code to be correct, but they still gave little value to the product.
I sat down and wrote integration tests of my class against the real Metamodel module. Almost all of them failed on the existing code. The assertions were all based on problems I saw in the live behaviour of the editor. After I get them all green tomorrow morning, the code will be next to rocket-stable.
Unit testing, and mocking for that matter, is not always the answer. Think about it: why would you want to abstract your command from module 'A', when this command exists for the sole purpose of interacting with module 'A', and not with module 'B', 'C' or some other? Having your unit tests pass while the integration fails just means having code that is correct from your point of view, but incorrect from the system's, customer's or user's point of view.
Furthermore, the Metamodel module was so widely reused throughout our application stack that the test fixture for the integration test consisted of just a couple of calls to handy lookup and creation services. It took me less than 20 minutes to write, and its value for the product was many times the value I achieved with 3 hours of EasyMocking. (What's so easy about it, anyway?!)
Unit testing in isolation has real value for units which are low in the application stack. Mocking has real value when real object interactions are VERY expensive or VERY hard to set up. A module high in the stack of usages often has no real meaning without the modules it relies upon; it was likely designed with those modules in mind, and rightly so: we shouldn't abstract for the sole purpose of abstraction. Likewise, tests abstracted from those modules can become meaningless too.
So, if you have a class relying on lots of other functionality, do not rush to mock that functionality when testing. If it is not hard or expensive, set up the real functionality, feed it to the class, and feed the test with lots of test data. Data-drive it, giving it boundary cases, normal cases, illegal inputs and everything in between. If the test passes then, you get real confidence that the class gets the job done.
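A data-driven sketch of this, using toy stand-ins for the story's classes (the names DeleteCommand and Metamodel come from the post, but both bodies are invented; the real Metamodel is of course far richer). The point is the shape of the test: one real collaborator, one table of inputs spanning normal, boundary and illegal cases:

```java
import java.util.Arrays;
import java.util.List;

// A toy stand-in for the real module: real behaviour, no mock.
class Metamodel {
    boolean exists(String id) {
        return id != null && !id.isEmpty();
    }
}

class DeleteCommand {
    private final Metamodel metamodel;

    DeleteCommand(Metamodel metamodel) {
        this.metamodel = metamodel;
    }

    // true when the object was found and deleted, false otherwise
    boolean execute(String id) {
        return metamodel.exists(id);
    }
}

class DeleteCommandDataDrivenTest {
    public static void main(String[] args) {
        DeleteCommand command = new DeleteCommand(new Metamodel());
        // normal cases, boundary cases and illegal inputs in one table
        List<String> deletable = Arrays.asList("order-1", "x");
        List<String> rejected = Arrays.asList("", null);
        for (String id : deletable) {
            if (!command.execute(id)) throw new AssertionError(id);
        }
        for (String id : rejected) {
            if (command.execute(id)) throw new AssertionError(id);
        }
        System.out.println("all cases handled");
    }
}
```

Because the command runs against the real collaborator, the illegal-input rows catch exactly the kind of exception-throwing branches that the mocked fixture in the story let slip through.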
The only remaining problem is that if the modules you rely upon fail, your integration test will fail too. You will then have harder root-cause analysis and misleading test-run reports. On the other hand, if A is designed as something on top of B and C, and B and C fail, does it really make sense for A's tests to pass?
The described approach has one more advantage. All the tests of classes high in the usage stack also exercise the low-level modules in various circumstances matching their real requirements, so as the project evolves, the low-level modules should become very robust.
So perhaps unit-testing a class against its live environment is not such a bad idea. Hesitantly, I will call this approach 'Integration Unit Testing'. I shall gather more practical experience, give it some more thought, and perhaps write about it in later posts.