warpedjavaguy

Imperative by day and functional by night

How I defeated the maven-release-plugin in a flat structured multi module project

Rules are made to be broken so that paradoxes can be created.

Maven is a handy tool and has a lot of available plugins. It adopts the convention over configuration philosophy and provides a standard build lifecycle out of the box. This makes it very easy to automate a build and release process without writing a single line of script. But there’s a catch! It only works if you do things the “maven way” and follow the maven rules.

Most projects are made up of multiple smaller projects. In the maven world, such projects are called multi module projects. A multi module project has a parent project and one or more nested child projects known as modules. When you build the parent project the child projects are also built. The recommended maven way of structuring a multi module project is to mirror the parent child hierarchy using a nested project structure.

So maven recommends that you create a parent project that contains the child projects using a physical project structure like this:

workspace/parent/pom.xml
workspace/parent/child1/pom.xml
workspace/parent/child2/pom.xml

Modules are then declared in the parent POM like this:

<modules>
  <module>child1</module>
  <module>child2</module>
</modules>

This was good for maven but it was not good for eclipse. The eclipse IDE does not support nested projects. This was clearly a problem! I wanted to import all my projects (parent and children) into eclipse but the nested structure made this impossible. So I decided to use a flat project structure instead and moved all my child projects out of the parent project.

Now my parent and child projects were organised in a flat physical structure like this:

workspace/parent/pom.xml
workspace/child1/pom.xml
workspace/child2/pom.xml

And I then redefined the maven modules in the parent POM like this:

<modules>
  <module>../child1</module>
  <module>../child2</module>
</modules>

Now I could import all my projects into eclipse. This worked well and life was good until I decided to use the maven release plugin to automate the release process. I learned the hard way that the release plugin only supports the nested project structure recommended by maven. Reverting back to the nested structure was not an option. I had broken a maven rule and was being punished for it! I needed a paradoxical solution that would support both the nested and flat structures at the same time.

It was then that I realised that my parent POM was responsible for two things: POM inheritance and module composition. It served two “parental” roles. In one role it provided all the common properties, dependencies, plugins, and profiles to all children through inheritance, and in the other it defined itself as the parent project of all child projects. In OO terms, this was akin to defining a superclass that contains a list of all its subclasses.

My parent POM had violated the single responsibility principle. So I decided to split it up into two separate parent POMs. I removed the modules declaration from the original POM in my parent project. This POM was now to be used for inheritance purposes only. All child POMs continued to reference this POM as the parent POM. Nothing changed there. I then created a new POM that inherited this modified POM and aggregated all the other child POMs. I placed this new top level POM file in the workspace root alongside all my existing projects. My flat project structure now had a top level POM file that defined all the child projects as modules.

The final project structure looked like this:

workspace/pom.xml
workspace/parent/pom.xml
workspace/child1/pom.xml
workspace/child2/pom.xml

The workspace/parent/pom.xml was inherited by all child POMs and also the top level workspace/pom.xml. It was the parent POM for inheritance purposes. The top level workspace/pom.xml aggregated all the child projects into one container project. It was the (root) parent POM for composition purposes. It defined the parent and child modules like this:

<parent>
  <groupId>?</groupId>
  <artifactId>?</artifactId>
  <version>?</version>
  <relativePath>parent/pom.xml</relativePath>
</parent>
<modules>
  <module>parent</module>
  <module>child1</module>
  <module>child2</module>
</modules>

Both the maven release plugin and the eclipse IDE were happy with this structure. It was flat enough for eclipse and hierarchical enough for the maven release plugin.

Note: After experiencing and resolving this problem first hand I later discovered that the issue has already been reported and discussed, and is mentioned at the very bottom of the maven eclipse plugin page. But I still cannot find any mention of this limitation on the maven release plugin page itself. I suspect that this is a well known issue in the maven community. If anyone is aware of any fixes or better solutions, please let me know. Interestingly, the title of the reported issue suggests that the problem has been fixed, but the actual contents state otherwise.

Sample POM snippets – Posted on 22 Aug 2011 by request

workspace/pom.xml (The top level root POM)

  <parent> 
    <groupId>maven.demo</groupId> 
    <artifactId>parent</artifactId> 
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>parent/pom.xml</relativePath>
  </parent> 

  <groupId>maven.demo</groupId>
  <artifactId>root</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <modules>
    <module>parent</module>
    <module>child1</module>
    <module>child2</module>
  </modules>
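One practical aside: before the release plugin will run at all, the POM being released also needs an <scm> section so the plugin knows where to tag. A placeholder sketch (the repository URLs here are hypothetical, not from my project):

```xml
<scm>
  <connection>scm:svn:http://example.com/repo/maven-demo/trunk</connection>
  <developerConnection>scm:svn:https://example.com/repo/maven-demo/trunk</developerConnection>
</scm>
```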

workspace/parent/pom.xml (The parent POM)

  <groupId>maven.demo</groupId>
  <artifactId>parent</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>pom</packaging>

workspace/child1/pom.xml (The child1 POM)

  <parent> 
    <groupId>maven.demo</groupId> 
    <artifactId>parent</artifactId> 
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../parent/pom.xml</relativePath>
  </parent> 

  <groupId>maven.demo</groupId>
  <artifactId>child1</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>jar</packaging>

workspace/child2/pom.xml (The child2 POM)

  <parent> 
    <groupId>maven.demo</groupId> 
    <artifactId>parent</artifactId> 
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../parent/pom.xml</relativePath>
  </parent> 

  <groupId>maven.demo</groupId>  
  <artifactId>child2</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>war</packaging>

Written by warpedjavaguy

August 8, 2011 at 11:21 pm

Posted in automation, maven

Hey Null, Don’t Touch My Monads!


Null handling.. we’ve been doing it completely wrong!

Nulls are a big problem! They are a constant annoyance and the root cause of every NullPointerException. It is common practice in Java to defensively check for nulls everywhere. It would be nice if they could just go away. But they won’t because we’ve been using them on purpose to denote missing values. Almost every Java API uses nulls in this way.

When we integrate with Java APIs we have to check for nulls. Fortunately, Scala provides the Option monad to help deal with nulls. It is basically a container that has two cases: Some or None (some value or no value). But for Java interoperability purposes, Some can also hold null. Is this a problem? Let’s explore.

First, we’ll define a User class like this (overly simple, I know):

case class User (name: String)

Now we’ll define a list of users and filter it

val users = List(User("Wendy"), User("Bob"))
users filter { _.name == "Bob" }

This yields List(User(Bob)). So far so good! Now let’s redefine the user list by including a null entry.

val users = List(User("Wendy"), null, User("Bob"))
users filter { _.name == "Bob" }

Bang! NullPointerException! Let’s fix it by checking for null.

users filter { u => u != null && u.name == "Bob" }

This yields List(User(Bob)) again. Good, but this is ugly. We don’t want to check for nulls. Let’s wrap every element in an Option instance.

users flatMap { Option(_) } filter { _.name == "Bob" }

Option(null) resolves to None and the expression yields List(User(Bob)) again. Nice! Now let’s try and wrap every element in a Some instance instead and see what happens.

users flatMap { Some(_) } filter { _.name == "Bob" }

Some(null) is a Some that contains null, so the flatMap lets the null through and oh no, we get a NullPointerException! This is a problem. We want Some(null) to resolve to None. So let’s define our own Some function that overrides this behavior.

def Some[T](x: T) = Option(x)
users flatMap { Some(_) } filter { _.name == "Bob" }

Now Some(null) returns Option(null) which resolves to None and the expression yields List(User(Bob)) as expected. OK, so we’ve solved the Some(null) problem. Now let’s look at another null problem. Let’s blindly extract the second user element in the list (the null entry) and print the name property.

val user = users(1)
println(user.name)

The user reference is null and we get a NullPointerException. Oh dread, we didn’t check for null. Let’s do that.

if (user != null) {
  println(user.name)
}

Now we skip the printing of the name if the reference is null. But we don’t want to check for nulls. So let’s wrap the null user in an Option and use foreach.

Option(user) foreach { u => println(u.name) }

Option(null) resolves to None and all is good. With our overriding Some function still in scope, we can do the same using Some also.

Some(user) foreach { u => println(u.name) }

Just like before, our call to Some(null) returns Option(null) which resolves to None and we’re good. But we want cleaner code that looks like this:

user foreach { u => println(u.name) }

This results in a compile error because foreach is not a member of User. But we can fix that by making our overriding Some function implicit. With this implicit in scope, Scala converts user to Option(user) and the above will now work.

implicit def Some[T](x: T) = Option(x)
user foreach { u => println(u.name) }

Taking this a step further, we can map the user name and print it like this. If the user is null, the name is never mapped and never printed.

user map { _.name } foreach println

Nulls be gone! 🙂

Now of course there are many cases in Java where nulls are legitimate values, and for this reason Scala allows Some(null) to be a Some that holds null. But as shown above, that doesn’t mean we can’t override that behavior so that Some(null) resolves to None.
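For Java readers following along: the same null-screening idea can be expressed with the streams and lambdas Java gained later, in Java 8 (a comparison sketch, not part of the Scala example; `findBobs` is an illustrative name):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class NullDemo {
    static class User {
        final String name;
        User(String name) { this.name = name; }
    }

    // Counterpart of `users flatMap { Option(_) } filter { _.name == "Bob" }`:
    // filter(Objects::nonNull) drops the nulls before the name check runs.
    static List<User> findBobs(List<User> users) {
        return users.stream()
                .filter(Objects::nonNull)
                .filter(u -> "Bob".equals(u.name))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(new User("Wendy"), null, new User("Bob"));
        System.out.println(findBobs(users).size()); // one Bob, no NPE
    }
}
```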

Written by warpedjavaguy

June 8, 2011 at 4:35 am

Posted in java, scala

The “Scala is too Complex” Conspiracy


They don’t want pragmatic Scala programmers.

 

The “Scala is too complex for the average programmer” movement is disturbing. It claims that Scala is too difficult for the average programmer to learn and that it is too academic. Scala is a hybrid programming language that is object oriented and functional. Java is a programming language that is object oriented and imperative. This means that Java programmers have no direct functional programming support in the language.

This year I read Programming in Scala and practiced some functional programming. I converted some Java code I had lying around to Scala and gained some hands on experience with case classes, pattern matching, lazy evaluation, implicit conversions, lambdas, and closures. I also reduced and folded lists and wrote curried functions. I was pleasantly shocked and surprised!

It is true that moving from object oriented to functional programming requires a shift in mindset. It is true also that many Java programmers are already thinking functionally but are unaware of it. In Java we use immutable objects when programming for concurrency. We use anonymous inner classes to simulate lambdas and closures. We use iterators and predicates to simulate list comprehensions. We recognize these and other functional concepts but implement them in roundabout ways because there is no direct support for them in the Java language.

Fortunately Java 7 is looking to add lambda support to the language so we will soon no longer have to write anonymous inner classes wherever single method interfaces and abstract classes (SAM types) are expected. In the meantime Scala has emerged as a functional language that Java programmers can learn and transition to without sacrificing their object oriented skills and without leaving the JVM platform.

For any programmer who has not looked at Scala or who has been deterred by a “too complex” conspirator, here are some code samples..

Case classes

Let’s create a class named Path that accepts a source and destination city as two separate characters and exposes them as public read only properties.

case class Path (origin: Char, destination: Char)

Prefixing a class definition with the “case” keyword automatically exposes constructor arguments as public read only properties. It also adds a factory method to your class so you don’t have to instantiate it with new: Path('A', 'B') will suffice, for example. It also provides a toString method that returns a string literal like Path(A,B). You also get a natural implementation of the hashCode and equals methods. You get constructor pattern matching support too. All this for free with a one-liner case class.
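To appreciate what that one line buys, here is roughly the same bundle written out by hand in Java (a sketch; the `of` factory name is mine, standing in for Scala’s companion apply):

```java
import java.util.Objects;

// What `case class Path(origin: Char, destination: Char)` generates for free,
// spelled out manually: read-only properties, a factory, equals/hashCode/toString.
public final class Path {
    private final char origin;
    private final char destination;

    private Path(char origin, char destination) {
        this.origin = origin;
        this.destination = destination;
    }

    // the companion-object factory: Path.of('A', 'B') instead of new
    public static Path of(char origin, char destination) {
        return new Path(origin, destination);
    }

    public char origin() { return origin; }
    public char destination() { return destination; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Path)) return false;
        Path p = (Path) o;
        return origin == p.origin && destination == p.destination;
    }

    @Override public int hashCode() { return Objects.hash(origin, destination); }

    @Override public String toString() { return "Path(" + origin + "," + destination + ")"; }
}
```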

Factory method with pattern matching

Now lets create a factory method that accepts a string, parses it, and returns a Path instance. For example, passing the string “AB” should return a Path(‘A’, ‘B’) instance whereas passing the string “ABC” should fail.

object Path {
  val PathRE = "^([A-Z])([A-Z])$".r
  def apply(pathStr: String): Path = pathStr match {
    case PathRE(origin, destination) =>
      Path(origin(0), destination(0))
    case _ =>
      throw new IllegalArgumentException(pathStr)
  }
}

Now we can instantiate a Path as Path("AB") in addition to Path('A', 'B'). Any string that does not consist of exactly two characters between A and Z will result in an IllegalArgumentException. So the strings "a", "1", "A1", and "ABC" will all fail construction. As a safeguard we can add an assert statement to the Path constructor to ensure that the source and destination cities are never equal like this:

case class Path (origin: Char, destination: Char) {
  assert (origin != destination, "origin and destination are same")
}

Implicit conversion

Now lets make it possible to assign the string literal “AB” directly to any Path type anywhere so that we don’t have to call the factory method explicitly. We do this by prefixing our apply(String) factory method with the keyword implicit as shown below:

implicit def apply(pathStr: String): Path

Now the string literal “AB” can be accepted anywhere where a Path instance is expected.

Folding Lists

Now suppose we want to write an application that accepts a list of Path string literals from the command line. We can convert the incoming list of Path strings to a Set of Path instances by using a fold left operation. The following creates a new empty Set and adds to it every Path in the incoming list. Each string in the list is automatically converted to a Path instance through implicit conversion.

def main(args: Array[String]) {
  val pathSet = (Set[Path]() /: args) (_+_)
}
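For comparison, the same accumulate-into-a-set fold can be sketched in Java (plain Strings stand in for Path here, and `toPathSet` is an illustrative name):

```java
import java.util.HashSet;
import java.util.Set;

public class FoldDemo {
    // Equivalent of (Set[Path]() /: args)(_ + _): start with an empty set
    // and fold each argument in; duplicates collapse as in the Scala version.
    static Set<String> toPathSet(String[] args) {
        Set<String> acc = new HashSet<>();
        for (String s : args) {
            acc.add(s);
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(toPathSet(new String[] {"AB", "BC", "AB"}).size()); // 2
    }
}
```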

Lambda expressions

Now lets say we have already written a function named find, that finds all the routes from one city to another based on some route condition. This function accepts two arguments, a Path containing the from and to cities, and a predicate lambda expression. The signature looks like this:

def find(path: Path, predicate: Route => Boolean): List[Route]

We can invoke this function to find (for example) all the routes from city ‘A’ to city ‘E’ having less than 3 stops like this:

val routes = find("AE", route => route.stops < 3)

Currying

We can curry the find function by splitting its two argument parameter list into two one argument parameter lists like this:

def find(path: Path)(predicate: Route => Boolean): List[Route]

Now when we invoke the find function with a Path argument we get a second function that we can then invoke with the predicate argument to get the result. We can invoke our curried function like this:

val routes = find("AE")(route => route.stops < 3)

Scala allows us to optionally wrap the sole argument to a single argument function in curly braces instead of parentheses. So we can also invoke our curried function like this:

val routes = find("AE") { route => route.stops < 3 }

Now our call to the find function looks like a built in Scala construct.
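Java has no special currying syntax, but the same split can be expressed with nested Function types (a sketch using a simple add, since Route and find are specific to this post):

```java
import java.util.function.Function;

public class CurryDemo {
    // A two-argument function split into two one-argument parameter lists,
    // mirroring find(path)(predicate): applying the first argument yields
    // a function that waits for the second.
    static Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;

    static int addBoth(int x, int y) {
        return add.apply(x).apply(y);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> add2 = add.apply(2); // partially applied
        System.out.println(add2.apply(3)); // 5
    }
}
```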

Is Scala too Complex?

If you think that the above Scala code is too complex then I urge you to try and achieve the same in Java with less complexity.

Written by warpedjavaguy

August 2, 2010 at 11:14 pm

Posted in java, programming, scala

Polymorphism by Closure Injection?


Instead of defining many classes to do the same thing differently through polymorphism, you could define just one class and achieve the same through closure injection!
 

I recently downloaded the feature complete BGGA closures prototype and had a bit of a play. My first experiment was to refactor an existing codebase and replace all anonymous inner classes with closures. The ones that extended single abstract method (SAM) interfaces were easy to convert. The ones that extended multiple method (non-SAM) types were a bit trickier. In the end though, I managed to successfully convert all of them, and with very little effort.

In converting the non-SAM types, I discovered that the functional equivalent of polymorphism can be achieved by injecting closures into a single concrete implementation having no subclasses.

The codebase I refactored was a small (fictitious) railroad application that provided customers with information about routes. In particular, it calculated things like the number of routes between two cities, the number of routes that do not exceed a given number of stops, the distances of individual routes, and the like. It did this by deriving routes on the fly from a given set of legs and dynamically yielding only those that match a given predicate (or condition).

Here is the original predicate code using anonymous inner classes:

public abstract class Predicate<T> {

  public boolean eligible(T item) {
    return yield(item);
  }

  public abstract boolean yield(T item);

  /* Pre-defined predicates follow */

  public static Predicate<Route> routesWithMaxStops(final int maxStops) {
    return new Predicate<Route>() {
      public boolean yield(Route route) {
        return route.getLegs().size() <= maxStops;
      }
    };
  }

  public static Predicate<Route> routesWithStops(final int stops) {
    return new Predicate<Route>() {
      public boolean eligible(Route route) {
        return route.getLegs().size() <= stops;
      }
      public boolean yield(Route route) {
        return route.getLegs().size() == stops;
      }
    };
  }

}

In the above code only two predefined predicates are shown for brevity (the actual code has got more).

Note that the predicate has two methods. Only routes for which both methods return true are yielded.

  • eligible – determines if a route satisfies a boundary condition
  • yield – determines if a route should be yielded

Here is the converted closure equivalent:

public class Predicate<T> {

  private {T => boolean} eligible;
  private {T => boolean} yield;

  public Predicate({T => boolean} yield) {
    this(yield, yield);
  }

  public Predicate({T => boolean} eligible, {T => boolean} yield) {
    this.eligible = eligible;
    this.yield = yield;
  }

  public boolean eligible(T item) {
    return eligible.invoke(item);
  }

  public boolean yield(T item) {
    return yield.invoke(item);
  }

  /* Pre-defined predicates follow */

  public static Predicate<Route> routesWithMaxStops(int maxStops) {
    return new Predicate<Route>(
      {Route route => route.getLegs().size() <= maxStops});
  }

  public static Predicate<Route> routesWithStops(int stops) {
    return new Predicate<Route>(
      {Route route => route.getLegs().size() <= stops},
      {Route route => route.getLegs().size() == stops});
  }

}

If you look carefully, you will notice that both the original and converted code produce a redundant invocation when the eligible and yield expressions are identical. Fortunately, this is a flaw that can be very easily corrected in the closure version.

A possible correction is shown below (note the new isEligible field and the changes to the eligible and yield methods):

public class Predicate<T> {

  private {T => boolean} eligible;
  private {T => boolean} yield;
  private boolean isEligible;

  public Predicate({T => boolean} yield) {
    this(yield, yield);
  }

  public Predicate({T => boolean} eligible, {T => boolean} yield) {
    this.eligible = eligible;
    this.yield = yield;
  }

  public boolean eligible(T item) {
    isEligible = eligible.invoke(item);
    return isEligible;
  }

  public boolean yield(T item) {
    return (isEligible && eligible == yield) ? true : yield.invoke(item);
  }

  /* Unchanged pre-defined predicates not shown here */

}

Applying the same correction to the original closureless version is not nearly as simple. It would require a more complex solution and could even involve changing the interface.

It is very interesting that polymorphic functionality can be achieved by injecting closures into one concrete class instead of defining multiple subclasses of an abstract class. Closures and anonymous inner classes, injection and inheritance, deferred execution and late binding. So many great choices!
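BGGA itself never shipped, but the closure-injection idea translates directly to the lambdas Java eventually got in Java 8 (a sketch; Route is replaced by a plain stop count, and the class name is mine):

```java
import java.util.function.Predicate;

// One concrete class with no subclasses; the differing behaviours are
// injected as java.util.function.Predicate closures.
public class InjectedPredicate<T> {
    private final Predicate<T> eligible;
    private final Predicate<T> yield;

    public InjectedPredicate(Predicate<T> yield) {
        this(yield, yield);
    }

    public InjectedPredicate(Predicate<T> eligible, Predicate<T> yield) {
        this.eligible = eligible;
        this.yield = yield;
    }

    public boolean eligible(T item) { return eligible.test(item); }
    public boolean yield(T item) { return yield.test(item); }

    /* Pre-defined predicates, as in the post (stop counts stand in for Route) */

    public static InjectedPredicate<Integer> routesWithMaxStops(int maxStops) {
        return new InjectedPredicate<>(stops -> stops <= maxStops);
    }

    public static InjectedPredicate<Integer> routesWithStops(int stops) {
        return new InjectedPredicate<>(s -> s <= stops, s -> s == stops);
    }
}
```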

Written by warpedjavaguy

September 4, 2008 at 11:31 pm

Posted in java, programming

Tagged with

Java Closures – The Fundamental Benefit


Avoiding redundant code is a basic programming instinct.
 

How many times have you written a piece of code and thought “I wish I could reuse this block of code”? How many times have you refactored existing code to remove redundancies? How many times have you written the same block of code more than once before realising that an abstraction exists? How many times have you extracted such abstractions successfully? How difficult was it to do and how much effort was involved? How constrained were you by the current Java language constructs? These are all questions pertaining to the everyday challenges we face as Java programmers in our struggles to achieve zero code redundancy.

Closures are reusable blocks of code that capture the environment and can be passed around as method arguments for immediate or deferred execution. Why do we need them in Java? There are many reasons for and against, but the fundamental benefit they provide is the facilitation of redundant code avoidance. The current Java closures specification makes a strong point of this in the first paragraph where it states: “they allow one to more easily extract the common parts of two almost-identical pieces of code”.

Closures provide an elegant means of reusing blocks of code and avoiding code duplication without the boilerplate. This is the fundamental benefit that closures bring to Java. All other benefits are derived benefits inherited from this fundamental benefit. In fact, all the needs addressed by the Java closures proposal are all derived benefits.

I know I am just stating the obvious and reiterating my point here, but it is just too easy to overlook this one fundamental benefit and be overwhelmed by all others. Java closures make it easier to eliminate redundant code and avoid it altogether!
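As a small concrete illustration of “extracting the common parts of two almost-identical pieces of code” (sketched with the lambdas Java later gained in Java 8; the names here are mine):

```java
import java.util.function.Supplier;

public class Timed {
    // The timing boilerplate that would otherwise be duplicated around every
    // measured block is written once; only the varying body is passed in.
    static <T> T timed(String label, Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            System.out.println(label + " took " + (System.nanoTime() - start) + "ns");
        }
    }

    public static void main(String[] args) {
        int sum = timed("sum", () -> 1 + 2 + 3);
        System.out.println(sum); // 6
    }
}
```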

Written by warpedjavaguy

June 25, 2008 at 10:32 pm

Posted in java, programming

Tagged with

Closure Syntax Wars


Closures look weird and complicated but so does all other code in general.
 

There seems to be some disagreement amongst the Java community about what the exact syntax for Java closures should be. The two currently most popular closure proposals are BGGA and FCM, and they each use a different syntax. Neither syntax is final, and looking at the discussions and comments in recent polls and many blogs it is evident that we all have our own opinions, preferences, and personal favourites (myself included). And as developers, why shouldn’t we? We are the ones that will potentially have to write and maintain code that uses closures. That’s why we care about the syntax.

The current BGGA proposal uses the ‘=>’ syntax. Although this looks like an arrow that points to a block of code, it can sometimes trick the eye and easily be mistaken for the ‘>=’ and ‘<=’ comparison operators. Consider the example below, which defines a function that returns a boolean, accepts a parameter of type int, and compares it against two variables.

{int => boolean} f = { int x => y <= x && x <= z };

Now consider the same example using the current FCM ‘#’ syntax. This style is designed to look and feel like Java method signatures, and is arguably less confusing and easier to grasp.

#(boolean(int)) f = #(int x) { y <= x && x <= z };

In my previous post I questioned why we shouldn’t consider Java Method Pointers (JMP). The inspiration for this was a familiar but variant form of the C/C++ function pointer syntax. The same example would look something like this:

boolean* (int) f = (int x) { y <= x && x <= z };

Closures are indeed an alien concept to Java and they sure look alien too. Throw generics into the mix and they can look even more weird and complicated. Take a look at the following two examples:

Neal Gafter’s closures puzzler which transforms a list of objects of type T to a list of objects of type U.

static <T, U> List<U> map(List<T> list, {T => U} transform) {
  List<U> result = new ArrayList<U>(list.size());
  for (T t : list) {
    result.add(transform.invoke(t));
  }
  return result;
}

Stephen Colebourne’s evaluating BGGA example which converts an object of type T to an object of type U.

public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}
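For comparison with both proposals, here is how that converter reads in the lambda syntax Java 8 eventually settled on (a sketch for perspective; this was not one of the proposals under discussion at the time):

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class Converters {
    // converter({=> T} a, {=> U} b, {T => U} c) in Java 8 terms:
    // if the input equals a's value, return b's value, else convert via c.
    static <T, U> Function<T, U> converter(Supplier<T> a, Supplier<U> b, Function<T, U> c) {
        return t -> a.get().equals(t) ? b.get() : c.apply(t);
    }

    public static void main(String[] args) {
        Function<Integer, String> f = converter(() -> 1, () -> "one", t -> "other:" + t);
        System.out.println(f.apply(1)); // one
        System.out.println(f.apply(2)); // other:2
    }
}
```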

At the end of the day, we really want closures in Java for doing these nice things:

  • Simplifying anonymous class instance creation
  • Automating resource management/termination (ARM blocks)
  • Using for-each style loop constructs
  • Writing less boilerplate code

Closures would make it all possible. A lot of the complicated closure constructs involving generics would be integrated into the API. Developers would not have to write those. Java would provide them out of the box.

The more you look at closures and read about them the more familiar and less alien they become. Just like everything else, they do take a little getting used to. The question is what closure syntax do we want to get used to? Will it be good for Java? One day we may have to decide. Until then, the syntax war will continue.

Written by warpedjavaguy

February 28, 2008 at 12:14 pm

Posted in java, programming

Tagged with

Pointer Syntax for Java Closures


If Java closures are pointers to methods then we should make them look like pointers to methods.
 

Closures for Java are a hot topic yet again. The BGGA proposal is currently leading in the polls. It seems that most people who want closures want full closure support. I like the BGGA proposal, but I think the syntax can be improved to look more familiar and less verbose.

I generally tend to think of Java closures in the same way that I think of C/C++ function pointers (or member function pointers). A lot of programmers like myself transitioned from C++ to Java back in 1999. Java borrows a lot of its syntax from C++ and it was a natural transition for C++ programmers to move to Java. Given this, I can’t help but question whether or not we should adopt the familiar C++ function pointer syntax for Java closures. That way the syntax (at least) would not be too alien to us.

Borrowing the examples from Stephen Colebourne’s weblog entry on evaluating BGGA, we have the following syntax comparisons:

BGGA syntax:

int y = 6;
{int => boolean} a = {int x => x <= y};
{int => boolean} b = {int x => x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

Java Method Pointer syntax (simplified C/C++ pointer syntax for Java):

int y = 6;
boolean* (int) a = (int x) {x <= y};
boolean* (int) b = (int x) {x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> U* (T) converter(T* a, U* b, U* (T) c) {
  return (T t) {a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

In addition to this, we could also borrow the & syntax and reference existing static and instance methods like this:

boolean* (int) a = class&method(int);
boolean* (int) b = instance&method(int);
boolean* (int) c = this&method(int);
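For what it’s worth, when Java 8 finally arrived it addressed this referencing need with the :: method-reference syntax rather than ‘&’ (shown here purely for comparison; the class and method names are illustrative):

```java
import java.util.function.IntPredicate;

public class Refs {
    static boolean isPositive(int x) { return x > 0; }

    public static void main(String[] args) {
        // class&method(int)  became  ClassName::method
        IntPredicate a = Refs::isPositive;
        // closing over an instance instead of referencing one of its methods
        String s = "abc";
        IntPredicate b = n -> s.length() > n;
        System.out.println(a.test(5)); // true
        System.out.println(b.test(2)); // true
    }
}
```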

If we want closures in Java, why not borrow the C/C++ pointer syntax and think of them as pointers to methods if that’s what they are? Most of the Java syntax is borrowed from C++ after all. I understand that function pointers do not fit well in the OO paradigm but I’m not talking about pointers to functions here. I’m instead talking about pointers to methods in the OO world of Java.

Written by warpedjavaguy

February 21, 2008 at 9:51 pm

Posted in java, programming

Tagged with
