Thursday, 28 March 2013

HTTP Session attributes considered evil

I've been dealing with a Struts 1.x based application in a Java 1.4 environment.

Struts actions, where the majority of the business logic is implemented, are not capable of returning results. Hence the predominant anti-pattern, used when one action needs to return a value to another, is setting attributes in the request or session scope.

This quickly gets out of hand, and the code becomes what I'd term "Java with a mixture of the JavaScript bad parts": a global object with untyped access, where all actors read and write as they deem appropriate.
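
To make this concrete, here is a minimal sketch of how the pattern typically looks in a Struts 1.x code base. The action classes, the attribute key and the forward names are made up for illustration; only the Struts and Servlet APIs are real.

import java.math.BigDecimal;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// The "writer" action: it cannot return the total, so it parks it in the session.
public class CalculateTotalAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response) throws Exception {
        BigDecimal total = new BigDecimal("42.00"); // stand-in for the real business logic
        request.getSession().setAttribute("orderTotal", total);
        return mapping.findForward("success");
    }
}

// The "reader" action (a separate source file): it must know the magic key, cast
// blindly, and run strictly after the writer - otherwise it silently gets null.
public class ShowSummaryAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response) throws Exception {
        BigDecimal total = (BigDecimal) request.getSession().getAttribute("orderTotal");
        request.setAttribute("summary", "Order total: " + total);
        return mapping.findForward("success");
    }
}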

Here are the most notable disadvantages to this approach:

- Inconvenient, untyped (raw) object access
- Complete reliance on side effects, as opposed to returning values, inhibits a functional coding style and makes the application hard to reason about.
- The fact that one actor relies on another to have written data into the session/request before it can read it introduces strong temporal coupling between the actors.


I would go so far as to argue that any framework which relies on session and request objects to pass data between actors is evil.




Thursday, 19 January 2012

Central workflow with Git

Abstract: This post will describe the characteristics of a development workflow in a traditional software enterprise, and prescribe a Git workflow which implements it.

More and more large software enterprises are basing their development infrastructure on Git. (The possible reasons will not be discussed in this post.)

An interesting thing about this phenomenon is that traditional software development in these companies follows a very centralized process. More precisely, there is one central repository per project, and everybody "submits" to and "syncs" from one and the same "development" / "stream" branch in that repository.
Furthermore, people do not want to isolate themselves from others' changes, or isolate others from their changes, when in the "stream" branch. On the contrary, they want to share changes with others as often as possible, and always work with the latest and greatest from other team members, effectively practicing what is known as "Continuous Integration".

On the other hand, Git beginners do not know how to implement this workflow with Git; they work with it ineffectively and abuse it. This is so because Git is very complex - in the author's humble opinion, much more complex than it should be.
A typical sign of this inefficiency is people merging their own changes - or, more generally, commits which contain no diff, only merge history.



The Workflow
This was first suggested to me by Borislav Kapukaranov.

Peter wants to develop feature "Adding validation logic to editor"
Steps:

1. Create a branch "validation"
2. Develop, commit, develop, commit, develop, commit. Done, ready to submit.
3. Pull master to obtain latest changes from the team
4. Rebase "validation" on top of master. This replays your local work on top of the new origin/master history. It rewrites your commits as if you were always developing on top of the latest version of the remote. This is exactly your intention in a centralized / continuous integration workflow.
Please make sure your working tree is clean before rebasing, otherwise it won't work. A clean working tree means you have committed all your local changes.

5. (Optional) The rebase fails because of conflicts with the remote history.
6. (Optional) Peter resolves the conflicts in an IDE or text editor, then adds the modified files to the staging area with "git add".
7. (Optional) Peter continues the rebase:
git rebase --continue
The rebase finishes successfully.

8. Peter is ready to push their work to the remote master:
git push origin validation:refs/heads/master
(push my local branch "validation" to the remote branch "master" of remote "origin")
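
For reference, here is the same workflow condensed into plain commands. The branch name "validation" and the remote name "origin" are the ones used in the example above; the conflict-resolution lines apply only when step 5 actually happens.

git checkout -b validation                      (step 1: create and switch to the local branch)
... develop, git add, git commit, repeat ...    (step 2)
git checkout master
git pull origin master                          (step 3: obtain the latest changes from the team)
git checkout validation
git rebase master                               (step 4: replay your commits on top of master)
git add <resolved files>                        (steps 5-6: only if the rebase stops on conflicts)
git rebase --continue                           (step 7)
git push origin validation:refs/heads/master    (step 8: publish to the remote master)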

The Lazy Workflow
This is "the fucking short version" for people who don't wanna bother creating a local branch for everything they develop. Okay, working on the master branch is not good practice - and I do it too.

Why working on the master branch is inferior to the approach described above:
- If you create a dedicated branch for every task, you can work on many pieces of work independently just by switching branches;
- If you work on a dedicated local branch, your local master stays untouched, so pulling master will never produce conflicts.

Anyways, here is the lazy workflow


1. Develop, commit, develop, commit, develop, commit
2. git pull --rebase
3. (Optional) The rebase fails. Resolve the conflicts, add the files to the index, continue the rebase.
4. git push

Sunday, 16 January 2011

Why not to use intrinsic locking

package crap;

// Demonstrates why synchronizing on a publicly reachable object (its intrinsic
// lock) is risky: client code can grab the same monitor and deadlock it, while
// a private explicit monitor (ExplicitLock below) is immune to that.
public class Deadlock {

    public static void main(String[] args) throws InterruptedException {
        Deadlock deadlock = new Deadlock();
        deadlock.go();
    }

    private void go() throws InterruptedException {
        ImplicitLock lock = new ImplicitLock();
        FooRunnable r1 = new FooRunnable(lock);
        // ExplicitLock lock = new ExplicitLock();
        // FooRunnable r1 = new FooRunnable(lock);

        // Client code happens to synchronize on the lock object itself...
        synchronized (lock) {
            Thread t1 = new Thread(r1);
            t1.start();
            // ...so the main thread holds lock's intrinsic monitor while waiting for t1,
            // while t1 is blocked trying to enter the synchronized foo(), which needs the
            // same monitor: the program hangs. Swap in ExplicitLock above and it finishes.
            t1.join();
        }
    }

    // A synchronized method locks on 'this' - a monitor that every client holding
    // a reference to this object can also lock on (and interfere with).
    class ImplicitLock implements Fooable {
        public synchronized void foo() {
            int x = 2;
            x++;
        }
    }

    // Locks on a private monitor object that no client code can reach, so external
    // synchronized blocks cannot interfere with it.
    class ExplicitLock implements Fooable {
        private final Object monitor = new Object();

        public void foo() {
            synchronized (monitor) {
                int x = 2;
                x++;
            }
        }
    }

    interface Fooable {
        void foo();
    }

    class FooRunnable implements Runnable {
        private Fooable fooable;

        public FooRunnable(Fooable fooable) {
            this.fooable = fooable;
        }

        @Override
        public void run() {
            fooable.foo();
        }

    }
}

Wednesday, 15 December 2010

TDD Antipatterns

This time I just want to link to something really rich I encountered - a catalogue of TDD antipatterns.

It really relates to my current work on a legacy system, where a lot of tests exhibit "The Liar" and "The Greedy Catcher". Because of this, even a successful run of the full suite (which happens quite rarely) gives you a false sense of safety.
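
For illustration, here is a minimal JUnit 4 sketch of what "The Greedy Catcher" looks like; the method under test and all names are made up. The catch block swallows the failure, so the test stays green even though the code under test blows up.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreedyCatcherTest {

    // The "production" code under test - deliberately broken for the demo.
    static int parsePort(String value) {
        return Integer.parseInt(value); // throws NumberFormatException for "oops"
    }

    @Test
    public void parsesThePortFromConfiguration() {
        try {
            assertEquals(8080, parsePort("oops"));
        } catch (Exception e) {
            // Swallowed: the NumberFormatException never reaches JUnit, so the
            // test passes and the suite lies about the state of the code.
        }
    }
}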

Saturday, 25 September 2010

It's surprising how simple concepts can evade one's mind for incredible amounts of time. I had never fully explained to myself what a static class or interface means - I simply never took the time to read about it.

Wednesday, 1 September 2010

Open-source and code-quality

Over the past few months - first at the Eclipse demo camp in Sofia - I have been hearing about the Virgo server,
part of the EclipseRT project.
Although the main current use case of Virgo is deploying and running JEE web modules and Spring apps, at its core it is just an OSGi runtime. This means it can be distributed without the Tomcat servlet container, and since it has OSGi modularity, you can potentially create different 'servers' out of it to suit your needs, provided you deploy the bundles you need for the job.
To be honest, I haven't given it a try yet. The reason I am writing this is that last night I took a quick glimpse at the source code of the kernel, and... well, at first look the code is really high quality. It leaves you with the impression that if you check it out, you can start working with it immediately, with little to no knowledge of the domain. To a person with no knowledge of the product, everything seems cleanly designed, hidden behind interfaces, documented, unit and integration tested, and so forth. I don't think I encountered a class over 500 LOC, which was quite a pleasant surprise.
The only thing that really puzzled me and turned me off a bit :) "The use of the 'final' qualifier on method parameters and local variables is strongly discouraged." (from the product's code conventions) Ah well. A question of taste...
Anyways, at a first glimpse, at least the internal quality of this product seems to showcase what open source can achieve. (Well, it's been open source for some time now - the initial contribution was made by SpringSource, so kudos to you guys.)

You can take a look at what is being planned for the product here. If you are interested in knowing more about the Virgo community, or in becoming a contributor, check this out.

I'll take the time to take her for a spin as soon as I get some school work off my back this week...
In relation to one of my previous blog posts (in Bulgarian), one should be able to use Virgo as a nice RAP runtime by deploying the RAP platform bundles on it. No servlet bridge or anything of the sort.

Sunday, 29 August 2010

UnsupportedOperationException controversy

In my view, there is some controversy around the very existence of the UnsupportedOperationException class.
The LSP (Liskov Substitution Principle), one of the SOLID principles of object-oriented design, suggests that correct OO systems should exhibit behavioural subtyping. Informally, this means that replacing occurrences of, say, Dog objects in your program with Pitbull objects should leave the program functioning the way it did before, provided Pitbull is a subtype of Dog.
If the Dog class correctly implements its bark() method and Pitbull throws an UnsupportedOperationException from its bark() method, behavioural subtyping no longer holds. Clients of Dog relied on it being perfectly safe to invoke bark(), and replacing occurrences of Dog with Pitbull will make client code fail everywhere that method is invoked.
Such an OO system obviously exhibits bad design: when the behavioural is-a relationship between parent and child class is broken, polymorphism cannot be utilized to its full power.
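
A minimal sketch of the violation, using the post's own Dog/Pitbull example (the method bodies are of course made up):

class Dog {
    public String bark() {
        return "Woof";
    }
}

class Pitbull extends Dog {
    public String bark() {
        // The child refuses an operation its parent happily supports.
        throw new UnsupportedOperationException("pitbulls are above barking");
    }
}

public class LspDemo {
    static void greet(Dog dog) {
        // Written and tested against Dog; has no reason to expect an exception here.
        System.out.println(dog.bark());
    }

    public static void main(String[] args) {
        greet(new Dog());     // prints "Woof"
        greet(new Pitbull()); // blows up with UnsupportedOperationException
    }
}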

An extreme opinion would be that the very existence of UnsupportedOperationException gives junior programmers a tool to violate the LSP.

So why does UnsupportedOperationException exist in the first place, and what is its correct usage?

Good insight can be gained by studying what the Java Collections Framework designers have to say. They chose to create fairly generic interfaces with optional operations, instead of a large set of specific interfaces, in order to circumvent two problems:

- There could be interfaces for lots of specific concepts - modifiability, mutability, etc. Combining each new interface (concept) with the existing ones doubles the number of artifacts in the hierarchy. It also creates a parallel hierarchy, which is an OO antipattern (ModifiableLinkedList on the one side, UnmodifiableLinkedList on the other). Repeating the process a second time creates, in the worst case, four times as many artifacts and two parallel hierarchies, each in turn consisting of a pair of parallel hierarchies - ultimately the size of the hierarchy blows up exponentially, as sketched below. Needless to say, code reuse across the parallel branches becomes a major pain in the ass, since Java has no multiple inheritance. You have to introduce delegation to work around this, but when you add one more concept to the framework you again end up with a parallel hierarchy in the delegate objects, and you need one more layer of delegation to work around the problem again. Such a design soon becomes unmanageable, even for simple enough domains.
- There was no way to envision all the sets of concepts (interfaces) that would have to be introduced to suit every possible real-world Collection implementer.
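
A rough sketch of that blow-up, with entirely hypothetical interface names - each new concept doubles the number of artifacts and adds another pair of parallel branches:

interface SimpleList {}

// First concept (modifiability): two artifacts, one parallel pair.
interface ModifiableList extends SimpleList {}
interface UnmodifiableList extends SimpleList {}

// Second concept (sortedness): four artifacts, parallel pairs inside parallel pairs.
interface SortedModifiableList extends ModifiableList {}
interface UnsortedModifiableList extends ModifiableList {}
interface SortedUnmodifiableList extends UnmodifiableList {}
interface UnsortedUnmodifiableList extends UnmodifiableList {}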

So the approach of generic interfaces with optional operations was selected. Implementors for which a certain method is not relevant simply throw an UnsupportedOperationException from it.
Such an approach arguably violates the Interface Segregation Principle on the provider side, but it is a compromise made to avoid an exponentially sized hierarchy. In OO design one always needs to compromise - that is the tricky part of it.
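
The Collections Framework itself is the canonical illustration: List.add() is documented as an "optional operation", and the unmodifiable wrapper simply opts out of it.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OptionalOperationDemo {
    public static void main(String[] args) {
        List<String> readOnly = Collections.unmodifiableList(Arrays.asList("a", "b"));
        System.out.println(readOnly.get(0)); // reading is supported
        readOnly.add("c");                   // throws UnsupportedOperationException
    }
}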

To sum up, it is permissible in some cases to use UnsupportedOperationException to signal that a certain method of an interface is not relevant for the implementation in question. However, throwing this exception from a method which is correctly implemented in the parent class is always a mistake and a marker of bad design, and must be avoided. Strengthening the throws clause in a child class (effectively documenting "@throws UnsupportedOperationException always") is a violation of the LSP and hinders the use of polymorphism.

The child classes described above most likely exemplify the more general (anti-)pattern of implementation inheritance: they do not have an easily justifiable is-a relationship with the parent class, but rather extend it to reuse some of its code. Implementation inheritance is almost always a bad sign, indicating that the domain was not object-modeled in the best possible way. It has fallen out of favor for the same reason UnsupportedOperationException can be considered controversial - polymorphism cannot be safely used in such cases.
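
As a hedged sketch of that smell (the class is made up): a list that extends LinkedList purely to reuse its storage code, and promptly becomes coupled to the parent's implementation details.

import java.util.LinkedList;

// Not conceptually a LinkedList - it extends one only to reuse its code.
class CountingList<E> extends LinkedList<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    // Whether elements added via addAll() get counted depends on whether the parent
    // implements addAll() in terms of add() - an implementation detail the subclass
    // now silently relies on.
    public int getAddCount() {
        return addCount;
    }
}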