UnsupportedOperationException controversy
In my view, there is some controversy surrounding the very existence of the UnsupportedOperationException class.
The LSP, one of the SOLID principles of object-oriented design, suggests that correct OO systems should exhibit behavioural subtyping. Informally, this means that replacing occurrences of, say, Dog objects in your program with Pitbull objects should leave the program functioning as it did before, provided Pitbull is a subtype of Dog.
If a Dog class correctly implements the bark() method, and Pitbull throws an UnsupportedOperationException in its bark() method, behavioural subtyping no longer holds. Clients of Dog relied on it being perfectly safe to invoke its bark() method, and replacing occurrences of Dog with Pitbull will make client code fail everywhere the method in question is invoked.
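A minimal sketch of this broken substitution (Dog, Pitbull, and makeItBark are hypothetical illustration names, not from any real API):

```java
class Dog {
    String bark() {
        return "Woof!";
    }
}

class Pitbull extends Dog {
    @Override
    String bark() {
        // Violates behavioural subtyping: no caller of Dog.bark()
        // expects this exception.
        throw new UnsupportedOperationException("bark");
    }
}

public class LspDemo {
    // Safe for any well-behaved Dog subtype -- but blows up for Pitbull.
    static String makeItBark(Dog dog) {
        return dog.bark();
    }

    public static void main(String[] args) {
        System.out.println(makeItBark(new Dog()));    // fine
        makeItBark(new Pitbull());                    // throws at runtime
    }
}
```

Nothing in the static types warns the caller: the program compiles cleanly and fails only at runtime, which is exactly why the LSP treats this as a design error rather than a type error.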
Such an OO system obviously exhibits bad design. When the behavioural is-a relationship between parent and child class is broken, polymorphism cannot be utilized to its full power.
An extreme position would be that the very existence of UnsupportedOperationException hands junior programmers a tool for violating the LSP.
So why does UnsupportedOperationException exist in the first place, and what is its correct usage?
- There could be interfaces for lots of specific concepts - modifiability, mutability, etc. Combining each new interface (concept) with the already existing ones doubles the number of artifacts in the hierarchy. It also creates a parallel hierarchy, which is an OO antipattern (ModifiableLinkedList on one side, UnmodifiableLinkedList on the other). Repeating the process a second time produces, in the worst case, four times as many artifacts and two parallel hierarchies, each in turn consisting of a pair of parallel hierarchies. Ultimately, the size of the hierarchy blows up exponentially. Needless to say, implementing code reuse across the parallel branches becomes a major pain, since Java has no multiple inheritance. You would have to introduce delegation to circumvent this, but when you add one more concept to the framework, you again end up with a parallel hierarchy in the delegate objects, and you need one more layer of delegation to circumvent the problem again. Such a design soon becomes unmanageable, even for simple enough domain areas.
- There was no way to envision all the sets of concepts (interfaces) that would have to be introduced to suit every possible real-world Collection implementer.
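The blow-up described in the first bullet can be sketched with hypothetical interface names (none of these exist in the JDK):

```java
// One concept (modifiability) already forks the hierarchy in two:
interface ReadableList<T> {
    T get(int i);
    int size();
}

interface ModifiableList<T> extends ReadableList<T> {
    void set(int i, T value);
}

// Adding a second, independent concept (say, growability) doubles it again:
interface GrowableModifiableList<T> extends ModifiableList<T> {
    void add(T value);
}

interface FixedSizeModifiableList<T> extends ModifiableList<T> {
}

// ...and every combination still needs its own LinkedList- and
// ArrayList-style implementation, so with n independent concepts the
// hierarchy holds on the order of 2^n artifacts.
```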
So the approach of generic interfaces with optional operations was chosen instead. Implementors for which a certain method is not relevant throw an UnsupportedOperationException from it.
Such an approach arguably violates the Interface Segregation Principle on the provider side, but it is a compromise that avoids the exponential growth of the hierarchy. In OO design, one always needs to compromise - that is the tricky part of it.
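This is exactly how the JDK's collection framework behaves: immutable lists implement the full java.util.List interface and throw UnsupportedOperationException from the mutating "optional operations":

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OptionalOperationsDemo {
    public static void main(String[] args) {
        // List.of(...) (Java 9+) returns an immutable List; mutating
        // "optional operations" like add() throw UnsupportedOperationException.
        List<String> immutable = List.of("a", "b");
        try {
            immutable.add("c");
        } catch (UnsupportedOperationException e) {
            System.out.println("add() rejected on immutable list");
        }

        // Collections.unmodifiableList wraps a mutable list in a read-only
        // view; mutators on the view throw the same exception.
        List<String> view = Collections.unmodifiableList(new ArrayList<>(immutable));
        try {
            view.remove(0);
        } catch (UnsupportedOperationException e) {
            System.out.println("remove() rejected on unmodifiable view");
        }
    }
}
```

Note that the List Javadoc explicitly labels these methods "optional operations" and documents the exception, so callers are at least warned by the contract, if not by the type system.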
To sum up, it is permissible in some cases to use UnsupportedOperationException to signal that a certain method of an interface is not relevant for the implementation in question. However, throwing this exception in a method which is correctly implemented in the parent class is always a mistake and a marker of bad design, and must be avoided. Strengthening the throws clause in a child ( @signals UnsupportedOperationException if (true) ) is a violation of the LSP and hinders the use of polymorphism.
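For contrast, a permissible use, sketched as a hypothetical ArrayIterator: a read-only iterator for which remove() is genuinely irrelevant, so Iterator's default remove() implementation (which throws UnsupportedOperationException, as its Javadoc documents) is the correct behaviour. No parent class ever implemented remove() correctly, so no client contract is broken.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical read-only iterator over an array. hasNext()/next() are
// implemented; remove() is deliberately left to Iterator's default
// method, which throws UnsupportedOperationException.
class ArrayIterator<T> implements Iterator<T> {
    private final T[] items;
    private int index = 0;

    ArrayIterator(T[] items) {
        this.items = items;
    }

    @Override
    public boolean hasNext() {
        return index < items.length;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return items[index++];
    }
}
```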
Child classes that throw UnsupportedOperationException in methods their parent implements correctly most likely realize the more general (anti)pattern of implementation inheritance. This means they do not have an easily defensible is-a relationship with the parent class, but rather extend it to reuse some of its code. Implementation inheritance is almost always a bad sign, indicating that the domain was not object-modeled in the best way possible. It has fallen out of favor for the same reason UnsupportedOperationException can be considered controversial - polymorphism cannot be safely used in such cases.