warpedjavaguy

Imperative by day and functional by night


The “Scala is too Complex” Conspiracy


They don’t want pragmatic Scala programmers.

 

The “Scala is too complex for the average programmer” movement is disturbing. It alleges that Scala is too difficult for the average programmer to learn and that it is too academic. Scala is a hybrid programming language that is both object oriented and functional. Java is object oriented and imperative, which leaves Java programmers with no functional programming power.

This year I read Programming in Scala and practiced some functional programming. I converted some Java code I had lying around to Scala and gained some hands-on experience with case classes, pattern matching, lazy evaluation, implicit conversions, lambdas, and closures. I also reduced and folded lists and wrote curried functions. I was pleasantly surprised!

It is true that moving from object oriented to functional programming requires a shift in mindset. It is also true that many Java programmers are already thinking functionally but are unaware of it. In Java we use immutable objects when programming for concurrency. We use anonymous inner classes to simulate lambdas and closures. We use iterators and predicates to simulate list comprehensions. We recognize these and other functional concepts but implement them in roundabout ways because there is no direct support for them in the Java language.

Fortunately, Java 7 is looking to add lambda support to the language, so we will soon no longer have to write anonymous inner classes wherever single-method interfaces and abstract classes (SAM types) are expected. In the meantime, Scala has emerged as a functional language that Java programmers can learn and transition to without sacrificing their object oriented skills and without leaving the JVM platform.

For any programmer who has not looked at Scala, or who has been deterred by a “too complex” conspirator, here are some code samples:

Case classes

Let’s create a class named Path that accepts a source and a destination city as two separate characters and exposes them as public read-only properties.

case class Path (origin: Char, destination: Char)

Prefixing a class definition with the “case” keyword automatically exposes constructor arguments as public read-only properties. It also adds a factory method to your class so you don’t have to instantiate it with new: Path('A', 'B') will suffice, for example. It also provides a toString method that returns a string literal like Path(A,B), natural implementations of the hashCode and equals methods, and constructor pattern matching support. All this for free with a one-liner case class.

Factory method with pattern matching

Now let’s create a factory method that accepts a string, parses it, and returns a Path instance. For example, passing the string "AB" should return a Path('A', 'B') instance, whereas passing the string "ABC" should fail.

object Path {
  val PathRE = "^([A-Z])([A-Z])$".r
  def apply(pathStr: String): Path = pathStr match {
    case PathRE(origin, destination) =>
      Path(origin(0), destination(0))
    case _ =>
      throw new IllegalArgumentException(pathStr)
  }
}

Now we can instantiate a Path as Path("AB") in addition to Path('A', 'B'). Any string that does not consist of exactly two uppercase letters (A to Z) will result in an IllegalArgumentException, so the strings "a", "1", "A1", and "ABC" will all fail construction. As a safeguard, we can add an assert statement to the Path constructor to ensure that the source and destination cities are never equal, like this:

case class Path (origin: Char, destination: Char) {
  assert (origin != destination, "origin and destination are same")
}

Implicit conversion

Now let’s make it possible to assign the string literal "AB" directly to any Path type anywhere, so that we don’t have to call the factory method explicitly. We do this by prefixing our apply(String) factory method with the keyword implicit, as shown below:

implicit def apply(pathStr: String): Path

Now the string literal "AB" is accepted anywhere a Path instance is expected.

Folding Lists

Now suppose we want to write an application that accepts a list of Path string literals from the command line. We can convert the incoming list of Path strings to a Set of Path instances by using a fold left operation. The following creates a new empty Set and adds to it every Path in the incoming list. Each string in the list is automatically converted to a Path instance through implicit conversion.

def main(args: Array[String]) {
  val pathSet = (Set[Path]() /: args) (_+_)
}

Lambda expressions

Now let’s say we have already written a function named find that finds all the routes from one city to another based on some route condition. This function accepts two arguments: a Path containing the from and to cities, and a predicate lambda expression. The signature looks like this:

def find(path: Path, predicate: Route => Boolean): List[Route]

We can invoke this function to find (for example) all the routes from city ‘A’ to city ‘E’ having less than 3 stops like this:

val routes = find("AE", route => route.stops < 3)

Currying

We can curry the find function by splitting its two-argument parameter list into two one-argument parameter lists, like this:

def find(path: Path)(predicate: Route => Boolean): List[Route]

Now when we invoke the find function with a Path argument, we get back a second function that we can then invoke with the predicate argument to get the result. We can invoke our curried function like this:

val routes = find("AE")(route => route.stops < 3)

Scala allows us to optionally wrap the sole argument of a single-argument function in curly braces instead of parentheses. So we can also invoke our curried function like this:

val routes = find("AE") { route => route.stops < 3 }

Now our call to the find function looks like a built-in Scala construct.

Is Scala too Complex?

If you think that the above Scala code is too complex, then I urge you to try to achieve the same in Java with less complexity.
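For a rough sense of that comparison, here is a sketch of what just the one-line case class, its assertion, and its string factory would cost in plain Java (the names valueOf, getOrigin, and getDestination are my choices; constructor pattern matching has no Java equivalent at all):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class Path {

    private static final Pattern PATH_RE = Pattern.compile("^([A-Z])([A-Z])$");

    private final char origin;
    private final char destination;

    public Path(char origin, char destination) {
        if (origin == destination) {
            throw new IllegalArgumentException("origin and destination are same");
        }
        this.origin = origin;
        this.destination = destination;
    }

    // hand-written equivalent of Scala's apply(String) factory
    public static Path valueOf(String pathStr) {
        Matcher m = PATH_RE.matcher(pathStr);
        if (!m.matches()) {
            throw new IllegalArgumentException(pathStr);
        }
        return new Path(m.group(1).charAt(0), m.group(2).charAt(0));
    }

    public char getOrigin() { return origin; }

    public char getDestination() { return destination; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Path)) return false;
        Path p = (Path) o;
        return origin == p.origin && destination == p.destination;
    }

    @Override public int hashCode() {
        return 31 * origin + destination;
    }

    @Override public String toString() {
        return "Path(" + origin + "," + destination + ")";
    }
}
```

And there is still no implicit conversion: every call site must spell out Path.valueOf("AB") itself.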


Written by warpedjavaguy

August 2, 2010 at 11:14 pm

Posted in java, programming, scala

Polymorphism by Closure Injection?


Instead of defining many classes to do the same thing differently through polymorphism, you could define just one class and achieve the same through closure injection!
 

I recently downloaded the feature complete BGGA closures prototype and had a bit of a play. My first experiment was to refactor an existing codebase and replace all anonymous inner classes with closures. The ones that extended single abstract method (SAM) interfaces were easy to convert. The ones that extended multiple method (non-SAM) types were a bit trickier. In the end though, I managed to successfully convert all of them, and with very little effort.

In converting the non-SAM types, I discovered that the functional equivalent of polymorphism can be achieved by injecting closures into a single concrete implementation having no subclasses.

The codebase I refactored was a small (fictitious) railroad application that provided customers with information about routes. In particular, it calculated things like the number of routes between two cities, the number of routes that do not exceed a given number of stops, the distances of individual routes, and the like. It did this by deriving routes on the fly from a given set of legs and dynamically yielding only those that match a given predicate (or condition).

Here is the original predicate code using anonymous inner classes:

public abstract class Predicate<T> {

  public boolean eligible(T item) {
    return yield(item);
  }

  public abstract boolean yield(T item);

  /* Pre-defined predicates follow */

  public static Predicate<Route> routesWithMaxStops(final int maxStops) {
    return new Predicate<Route>() {
      public boolean yield(Route route) {
        return route.getLegs().size() <= maxStops;
      }
    };
  }

  public static Predicate<Route> routesWithStops(final int stops) {
    return new Predicate<Route>() {
      public boolean eligible(Route route) {
        return route.getLegs().size() <= stops;
      }
      public boolean yield(Route route) {
        return route.getLegs().size() == stops;
      }
    };
  }

}

In the above code only two predefined predicates are shown for brevity (the actual code has more).

Note that the predicate has two methods. Only routes for which both methods return true are yielded.

  • eligible – determines if a route satisfies a boundary condition
  • yield – determines if a route should be yielded

Here is the converted closure equivalent:

public class Predicate<T> {

  private {T => boolean} eligible;
  private {T => boolean} yield;

  public Predicate({T => boolean} yield) {
    this(yield, yield);
  }

  public Predicate({T => boolean} eligible, {T => boolean} yield) {
    this.eligible = eligible;
    this.yield = yield;
  }

  public boolean eligible(T item) {
    return eligible.invoke(item);
  }

  public boolean yield(T item) {
    return yield.invoke(item);
  }

  /* Pre-defined predicates follow */

  public static Predicate<Route> routesWithMaxStops(int maxStops) {
    return new Predicate<Route>(
      {Route route => route.getLegs().size() <= maxStops});
  }

  public static Predicate<Route> routesWithStops(int stops) {
    return new Predicate<Route>(
      {Route route => route.getLegs().size() <= stops},
      {Route route => route.getLegs().size() == stops});
  }

}

If you look carefully, you will notice that both the original and converted code produce a redundant invocation when the eligible and yield expressions are identical. Fortunately, this is a flaw that can be very easily corrected in the closure version.

A possible correction is shown below (note the new isEligible field and the changes to the eligible and yield methods):

public class Predicate<T> {

  private {T => boolean} eligible;
  private {T => boolean} yield;
  private boolean isEligible;

  public Predicate({T => boolean} yield) {
    this(yield, yield);
  }

  public Predicate({T => boolean} eligible, {T => boolean} yield) {
    this.eligible = eligible;
    this.yield = yield;
  }

  public boolean eligible(T item) {
    isEligible = eligible.invoke(item);
    return isEligible;
  }

  public boolean yield(T item) {
    return (isEligible && eligible == yield) ? true : yield.invoke(item);
  }

  /* Unchanged pre-defined predicates not shown here */

}

Applying the same correction to the original closureless version is not nearly as simple. It would require a more complex solution and could even involve changing the interface.

It is very interesting that polymorphic functionality can be achieved by injecting closures into one concrete class instead of defining multiple subclasses of an abstract class. Closures and anonymous inner classes, injection and inheritance, deferred execution and late binding. So many great choices!
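The BGGA function types never shipped, but the injection idea stands on its own. As a rough modern analogue (not the prototype code above), the same single concrete class can be written against Java 8's java.util.function.Predicate, with a plain stop count standing in for Route:

```java
import java.util.function.Predicate;

// One concrete class, no subclasses: behavior is injected, not inherited.
public class InjectedPredicate<T> {

    private final Predicate<T> eligible;
    private final Predicate<T> yield;

    public InjectedPredicate(Predicate<T> yield) {
        this(yield, yield);
    }

    public InjectedPredicate(Predicate<T> eligible, Predicate<T> yield) {
        this.eligible = eligible;
        this.yield = yield;
    }

    public boolean eligible(T item) {
        return eligible.test(item);
    }

    public boolean yield(T item) {
        return yield.test(item);
    }

    // analogue of routesWithStops: eligible while not over the limit,
    // yielded only at exactly the requested number of stops
    public static InjectedPredicate<Integer> stopsExactly(int stops) {
        return new InjectedPredicate<Integer>(
            s -> s <= stops,
            s -> s == stops);
    }
}
```

The lambda literals play the role the BGGA {Route route => ...} expressions played in the code above.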

Written by warpedjavaguy

September 4, 2008 at 11:31 pm

Posted in java, programming


Java Closures – The Fundamental Benefit


Avoiding redundant code is a basic programming instinct.
 

How many times have you written a piece of code and thought “I wish I could reuse this block of code”? How many times have you refactored existing code to remove redundancies? How many times have you written the same block of code more than once before realising that an abstraction exists? How many times have you extracted such abstractions successfully? How difficult was it to do and how much effort was involved? How constrained were you by the current Java language constructs? These are all questions pertaining to the everyday challenges we face as Java programmers in our struggles to achieve zero code redundancy.

Closures are reusable blocks of code that capture the environment and can be passed around as method arguments for immediate or deferred execution. Why do we need them in Java? There are many reasons for and against, but the fundamental benefit they provide is the facilitation of redundant code avoidance. The current Java closures specification makes a strong point of this in the first paragraph where it states: “they allow one to more easily extract the common parts of two almost-identical pieces of code”.

Closures provide an elegant means of reusing blocks of code and avoiding code duplication without the boilerplate. This is the fundamental benefit that closures bring to Java. All other benefits derive from it; indeed, all the needs addressed by the Java closures proposal are derived benefits.
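As a concrete illustration (my own toy example, not one from the proposal), here are two almost-identical loops collapsed into a single method whose varying part is passed in as a closure, written with the lambdas that eventually arrived in Java 8:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.IntBinaryOperator;

public class Reduce {

    // The common part, written once: iterate and accumulate.
    // The varying part (how two values combine) is injected.
    public static int reduce(List<Integer> values, int seed, IntBinaryOperator op) {
        int acc = seed;
        for (int v : values) {
            acc = op.applyAsInt(acc, v);
        }
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        System.out.println(reduce(xs, 0, (a, b) -> a + b)); // prints 10
        System.out.println(reduce(xs, 1, (a, b) -> a * b)); // prints 24
    }
}
```

Without closures, a sum loop and a product loop are two almost-identical pieces of code; with them, the common part exists exactly once.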

I know I am just stating the obvious and reiterating my point here, but it is just too easy to overlook this one fundamental benefit and be overwhelmed by all others. Java closures make it easier to eliminate redundant code and avoid it altogether!

Written by warpedjavaguy

June 25, 2008 at 10:32 pm

Posted in java, programming


Closure Syntax Wars


Closures look weird and complicated but so does all other code in general.
 

There seems to be some disagreement amongst the Java community about what the exact syntax for Java closures should be. The two currently most popular closure proposals are BGGA and FCM and they each use a different syntax. Neither syntax is final and looking at the discussions and comments in recent polls and many various blogs it is evident that we all have our own opinions, preferences, and personal favourites (myself included). And as developers why shouldn’t we? We are the ones that will potentially have to write and maintain code that uses closures. That’s why we care about the syntax.

The current BGGA proposal uses the ‘=>’ syntax. Although this looks like an arrow that points to a block of code, it can sometimes trick the eye and easily be mistaken for the ‘>=’ and ‘<=’ relational operators. Consider the example below, which defines a function that returns a boolean, accepts a parameter of type int, and compares it against two variables.

{int => boolean} f = { int x => y <= x && x <= z };

Now consider the same example using the current FCM ‘#’ syntax. This style is designed to look and feel like a Java method signature, and it is arguably less confusing and easier to grasp.

(boolean(int)) f = #(int x) { y <= x && x <= z };

In my previous post I questioned why we shouldn’t consider Java Method Pointers (JMP). The inspiration for this was a familiar but variant form of the C/C++ function pointer syntax. The same example would look something like this:

boolean* (int) f = (int x) { y <= x && x <= z };

Closures are indeed an alien concept to Java and they sure look alien too. Throw generics into the mix and they can look even weirder and more complicated. Take a look at the following two examples:

Neal Gafter’s closures puzzler, which transforms a list of objects of type T to a list of objects of type U:

static <T, U> List<U> map(List<T> list, {T=>U} transform) {
  List<U> result = new ArrayList<U>(list.size());
  for (T t : list) {
    result.add(transform.invoke(t));
  }
  return result;
}
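For comparison, the same puzzler compiles in any Java version if the {T=>U} function type is replaced with a hand-rolled interface (the Transform name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class Mapper {

    // stand-in for the BGGA {T => U} function type
    public interface Transform<T, U> {
        U invoke(T t);
    }

    public static <T, U> List<U> map(List<T> list, Transform<T, U> transform) {
        List<U> result = new ArrayList<U>(list.size());
        for (T t : list) {
            result.add(transform.invoke(t));
        }
        return result;
    }
}
```

The price, pre-closures, is an anonymous inner class at every call site, which is exactly the boilerplate the proposals aim to remove.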

Stephen Colebourne’s evaluating-BGGA example, which converts an object of type T to an object of type U:

public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

At the end of the day, we really want closures in Java for doing these nice things:

  • Simplifying anonymous class instance creation
  • Automating resource management/termination (ARM blocks)
  • Using for-each style loop constructs
  • Writing less boilerplate code

Closures would make it all possible. A lot of the complicated closure constructs involving generics would be integrated into the API. Developers would not have to write those. Java would provide them out of the box.

The more you look at closures and read about them the more familiar and less alien they become. Just like everything else, they do take a little getting used to. The question is what closure syntax do we want to get used to? Will it be good for Java? One day we may have to decide. Until then, the syntax war will continue.

Written by warpedjavaguy

February 28, 2008 at 12:14 pm

Posted in java, programming


Pointer Syntax for Java Closures


If Java closures are pointers to methods then we should make them look like pointers to methods.
 

Closures for Java are a hot topic yet again. The BGGA proposal is currently leading in the polls. It seems that most people who want closures want full closure support. I like the BGGA proposal, but I think the syntax can be improved to look more familiar and less verbose.

I generally tend to think of Java closures in the same way that I think of C/C++ function pointers (or member function pointers). A lot of programmers like myself transitioned from C++ to Java back in 1999. Java borrows a lot of its syntax from C++, and it was a natural transition for C++ programmers to move to Java. Given this, I can’t help but question whether or not we should adopt the familiar C++ function pointer syntax for Java closures. That way the syntax (at least) would not be too alien to us.

Borrowing the examples from Stephen Colebourne’s weblog entry on evaluating BGGA, we have the following syntax comparisons:

BGGA syntax:

int y = 6;
{int => boolean} a = {int x => x <= y};
{int => boolean} b = {int x => x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

Java Method Pointer syntax (simplified C/C++ pointer syntax for Java):

int y = 6;
boolean* (int) a = (int x) {x <= y};
boolean* (int) b = (int x) {x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> U* (T) converter(T* a, U* b, U* (T) c) {
  return (T t) {a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

In addition to this, we could also borrow the & syntax and reference existing static and instance methods like this:

boolean* (int) a = class&method(int);
boolean* (int) b = instance&method(int);
boolean* (int) c = this&method(int);

If we want closures in Java, why not borrow the C/C++ pointer syntax and think of them as pointers to methods if that’s what they are? Most of the Java syntax is borrowed from C++ after all. I understand that function pointers do not fit well in the OO paradigm but I’m not talking about pointers to functions here. I’m instead talking about pointers to methods in the OO world of Java.
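As it happens, the idea of referencing existing methods did survive: the method references that eventually shipped in Java 8 realize much the same thing as the borrowed & syntax above (a sketch, with names of my own):

```java
import java.util.function.IntPredicate;

public class MethodRefs {

    static boolean positive(int x) {
        return x > 0;
    }

    public static void main(String[] args) {
        // the shipped equivalent of class&method(int): a method reference
        IntPredicate a = MethodRefs::positive;
        // the shipped equivalent of the closure literals: a lambda
        IntPredicate b = x -> x % 2 == 0;
        System.out.println(a.test(3) && b.test(4)); // prints true
    }
}
```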

Written by warpedjavaguy

February 21, 2008 at 9:51 pm

Posted in java, programming


The Mind of a Programmer


Good programmers are good at solving problems because they consider all the dimensions.
 

A short while ago I published a post about natural born programmers and questioned whether or not they exist. When I was writing it I was thinking about how programmers go about coding solutions to problems that they already know how to solve. I was trying to tap into the mind of a programmer and discover how it thinks. I demonstrated two approaches to solving the simple problem of determining whether a number is odd or even. One solution used a ‘smart’ human approach and the other a ‘simple’ math approach. It generated some good discussions and triggered various reactions. The most interesting reaction discussed my first response to the first comment posted by Matt Turner.

The simple math solution that I suggested was this:

// if divisible by 2 then is even else is odd
boolean isEven = number % 2 == 0;

The alternative and bitwise solution that Matt suggested was this:

// if last bit is 0 then is even else is odd
boolean isEven = (number & 1) == 0;

I knew that the &1 solution was functionally equivalent to the %2 solution, but I wasn’t exactly sure if it was also equally as efficient (or even better). I had to prove it. So I knocked up a quick test program to compare the execution times. When I ran the program I observed that both of them took the same time to execute. I also analysed the generated bytecode and found that the %2 solution used the IREM bytecode instruction and that the &1 operation used the IAND instruction.

My bytecode findings attracted these comments by Michael Speer and Charlie Chan respectively (and other anonymous comments too). Having done a bit of assembler, C, and C++ in my not too old school days, I do have some experience in binary arithmetic and native code optimisation. So I can appreciate the &1 solution.

But I never expect to have to apply this knowledge directly to Java programming. I always expect Java to perform all trivial optimisations on my behalf. I should not have to optimise my Java code at this level. I should just write code that expresses what I want in the simplest and clearest way possible.
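A minimal sketch of such an equivalence check (timing harness omitted; note that the two expressions also agree for negative numbers, where n % 2 yields -1 rather than 1 for odd n in Java):

```java
public class EvenCheck {

    static boolean isEvenMod(int n) { return n % 2 == 0; }

    static boolean isEvenAnd(int n) { return (n & 1) == 0; }

    public static void main(String[] args) {
        // verify the two solutions agree, negatives included
        for (int n = -1000; n <= 1000; n++) {
            if (isEvenMod(n) != isEvenAnd(n)) {
                throw new AssertionError("disagree at " + n);
            }
        }
        System.out.println("equivalent over [-1000, 1000]");
    }
}
```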

It has often been said by many programmers that there is no one right solution to a problem. That is definitely the case here. But there is something more interesting here though. If you study the &1 solution you will find that it very closely resembles the ‘smart’ human optimised approach to solving the problem. How? Remember that smart humans can tell whether a number is even or odd just by looking at the last digit. The &1 solution emulates exactly that, yes indeed! The difference is that humans look at the right-most digit whereas machines look at the right-most bit. We have identified a pattern here that is common to both humans and machines. Interesting!

But wait! The %2 approach is a more natural and readable expression of the mathematical definition and is also more easily understood by us humans as adopters of the base 10 decimal number system. It directly expresses at a high level the definition that all even numbers are divisible by two and that all odd numbers are not. At the lower level it expresses it as an IREM bytecode instruction. At the even lower machine code level it expresses it as native assembly instructions. We are now in the world of base 2 binary numbers where operations like ‘divide by two’ involve shifting all bits one place to the right and carrying the right most bit (the remainder). Bits can be checked, set, and toggled with bitwise AND, OR, and XOR operations. At this stage, the %2 instruction is optimised for the machine and more closely resembles the &1 operation. So at the binary level, one can deduce that the %2 operation is essentially equivalent to &1.

By solving problems and comparing solutions we are able to identify common patterns. Gifted programmers are naturally good at it! These are the things we learn and the things that go on in our minds as programmers.

Written by warpedjavaguy

January 18, 2008 at 11:41 am

Posted in java, programming


Test Driven Rewrites (for Programmers)


Why write mock objects manually when you can emulate them virtually?
 

In my previous post on Test Driven Rewrites I described at a high level how I used virtual mock objects to deliver a regression test suite for an existing component and how I used that suite to test and develop a rewrite of that component. In it you’ll recall that the existing codebase had no test suite at all and that I had very little knowledge of the business domain. I had to come up with a quick and easy way to create a test suite from scratch. Here I present some code that shows how I did it using AspectJ, XML, and JUnit.

The first thing I had to do was identify the following points in the codebase:

  1. The test point
    • This is the point in the application where the call to the component is made. It is the call to the component method that will be put under test.
  2. All the mock points
    • These are all the points in the component where calls are made to other components and/or services. They are the calls to the methods that need to be mocked.

Next, a TestCaseGenerator aspect was written to capture the arguments and return values of the test and mock points identified above. This aspect defined two pointcuts for matching the test and mock points respectively. “Around” advice was used on each to capture the arguments and return values of every call and persist them to an XML file. A MethodCall POJO was created to encapsulate the name, arguments, and return values of individual method calls, and Castor XML was used to marshal it to XML. The aspect code is shown below:

public aspect TestCaseGenerator {

    pointcut testPoint() :
        execution (/* test-point method pattern */);

    pointcut mockPoints() :
        call (/* mock-points method pattern */)
        && withincode(/* test-point method pattern*/);

    // output XML document
    private Document xmldoc;

    Object around() : testPoint() {

        /* instantiate new XML document here */

        // encapsulate invoked method data into POJO
        MethodCall methodCall = new MethodCall();
        methodCall.setMethod(
            thisJoinPoint.getSignature().toString());
        methodCall.setArgs(thisJoinPoint.getArgs());
        Object retValue = proceed();
        methodCall.setRetValue(retValue);

        /* marshal POJO into test-point XML element here */

        /* persist XML to file here */

        // pass back return value
        return retValue;
    }

    Object around() : mockPoints() {

        // encapsulate invoked method data into POJO
        MethodCall methodCall = new MethodCall();
        methodCall.setMethod(
            thisJoinPoint.getSignature().toString());
        methodCall.setArgs(thisJoinPoint.getArgs());
        Object retValue = proceed();
        methodCall.setRetValue(retValue);

        /* marshal POJO into mock-point XML element here */

        // pass back return value
        return retValue;
    }
}

This aspect was woven into the existing codebase which was then deployed locally. So now an XML test case file was automatically generated by the aspect on the file system for every scenario that was executed on the deployed application. The business identified all the scenarios that needed to be tested and a test case XML file was generated for each one in real time as it was executed on the system by a user. The basic structure of the XML that was generated is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<test-case>
    <test-point method="method pattern">
        <arguments>
            ...
        </arguments>
        <return-value>
            ...
        </return-value>
    </test-point>
    <mock-point method="method pattern">
        <arguments>
            ...
        </arguments>
    </mock-point>
    <mock-point method="method pattern">
        <arguments>
            ...
        </arguments>
        <return-value>
            ...
        </return-value>
    </mock-point>
    ...
</test-case>
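The MethodCall POJO that carries this data is not shown in the post; a minimal sketch (field names inferred from the getters and setters used in the aspects and test classes) would be:

```java
// Encapsulates one intercepted method call: its signature, arguments,
// and return value, for marshaling to and from XML (e.g. via Castor).
public class MethodCall {

    private String method;
    private Object[] args;
    private Object retValue;

    public String getMethod() { return method; }
    public void setMethod(String method) { this.method = method; }

    public Object[] getArgs() { return args; }
    public void setArgs(Object[] args) { this.args = args; }

    public Object getRetValue() { return retValue; }
    public void setRetValue(Object retValue) { this.retValue = retValue; }
}
```

The no-argument constructor and bean-style accessors are exactly what an XML binding framework like Castor needs for marshaling.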

Once the data for all the known use case scenarios had been captured in XML files, it was then time to start writing the JUnit test suite that would execute the tests. This involved creating the following:

  1. A MockedTestCase class (extends JUnit TestCase)
  2. A MockedTestSuite class (extends JUnit TestCase)
  3. A VirtualMocker aspect

The MockedTestCase class was implemented with a constructor that accepted a given test case XML file. All the mock point data in the file was loaded into individual MethodCall POJO instances and stored in a Map keyed by the method signature pattern. The map was bound to a thread local variable to make it thread safe and accessible to the virtual mocker aspect for mocking purposes. The test point data was used to invoke the component in the test and to assert the returned result. The Java code is shown below:

public class MockedTestCase extends TestCase {

    public static final
        ThreadLocal<Map<String,MethodCall>> MOCKS =
           new ThreadLocal<Map<String,MethodCall>>();

    private File xmlFile;
    private MethodCall testPoint;

    public MockedTestCase(File xmlFile) {
        super("testIt");
        this.xmlFile = xmlFile;
    }

    // overrides super to return test case XML file name
    public String getName() {
        return xmlFile.getName();
    }

    protected void setUp() throws Exception {

        testPoint = new MethodCall();
        /* Unmarshal XML test-point into POJO */

        Map<String,MethodCall> mocks =
            new HashMap<String,MethodCall>();
        /* Unmarshal XML mock-points and load into Map */

        // store loaded mocks in thread local var
        MOCKS.set(mocks);
    }

    public void testIt() throws Exception {

        // invoke the test
        // - pass in test-point args and capture the result
        Object args = testPoint.getArgs();
        Object result = /* invocation goes here */;

        // assert the result
        assertValues(
            "Unexpected result returned by "
                + testPoint.getMethod(),
            testPoint.getRetValue(),
            result);
    }

    public static void assertValues(
        String msg, Object expected, Object actual) {
        /* Compare the two values here and throw assertion
            error if not the same. One generic way of doing
            this might involve marshaling both objects to XML
            and then comparing the two with an XML diff utility. */
    }
}

The MockedTestSuite class was implemented to load and run every test case. This was done by loading all the XML test case files from the file system, instantiating individual MockedTestCase instances for each one, and adding them all to the suite of tests to be executed.

The VirtualMocker aspect was defined to intercept all the mock-points using an “around” advice which asserted the incoming arguments against the mocked arguments for the call before returning the mocked return value. Access to all mock point data was provided in a Map contained in the thread local variable as loaded by the currently executing MockedTestCase. The aspect code is shown below:

public aspect VirtualMocker {

    pointcut mockPoints() :
        call (/* mock-points method pattern */)
        && withincode(/*test-point method pattern*/);

    Object around() : mockPoints() {

        // retrieve mock data
        Map<String,MethodCall> mocks =
            MockedTestCase.MOCKS.get();
        MethodCall mock = mocks.get(
            thisJoinPoint.getSignature().toString());

        // assert input and return output
        MockedTestCase.assertValues(
            "Unexpected input arguments passed to "
                + mock.getMethod(),
            mock.getArgs(),
            thisJoinPoint.getArgs());
        return mock.getRetValue();
    }
}

The VirtualMocker aspect was woven into the existing codebase and the MockedTestSuite was executed over it to perform all the tests. The same aspect was then woven into the new codebase and used to facilitate the test driven development of the rewrite.

Each XML test case file contained all the mock data that was required to test the same scenario that generated it. The VirtualMocker aspect used the data to do all the mocking in all the tests. No mock objects or test data fixtures were manually coded 🙂

Written by warpedjavaguy

December 20, 2007 at 9:54 pm

Posted in java, programming

