warpedjavaguy

Imperative by day and functional by night

Archive for the ‘java’ Category

Closure Syntax Wars


Closures look weird and complicated, but then so does a lot of other code.
 

There seems to be some disagreement amongst the Java community about what the exact syntax for Java closures should be. The two most popular closure proposals at present are BGGA and FCM, and they each use a different syntax. Neither syntax is final, and looking at the discussions and comments in recent polls and various blogs it is evident that we all have our own opinions, preferences, and personal favourites (myself included). And as developers, why shouldn’t we? We are the ones that will potentially have to write and maintain code that uses closures. That’s why we care about the syntax.

The current BGGA proposal uses the ‘=>’ syntax. Although this looks like an arrow that points to a block of code, it can trick the eye and easily be mistaken for the ‘>=’ and ‘<=’ relational operators. Consider the example below, which defines a function that accepts an int parameter, compares it against two variables, and returns a boolean.

{int => boolean} f = { int x => y <= x && x <= z };

Now consider the same example using the current FCM ‘#’ syntax. This style is designed to look and feel like Java method signatures, which makes it less confusing and easier to grasp.

(boolean(int)) f = #(int x) { y <= x && x <= z };

In my previous post I questioned why we shouldn’t consider Java Method Pointers (JMP). The inspiration for this was a familiar but variant form of the C/C++ function pointer syntax. The same example would look something like this:

boolean* (int) f = (int x) { y <= x && x <= z };

Closures are indeed an alien concept to Java and they sure look alien too. Throw generics into the mix and they can look even more weird and complicated. Take a look at the following two examples:

Neal Gafter’s closures puzzler which transforms a list of objects of type T to a list of objects of type U.

static <T, U> List<U> map(List<T> list, {T=>U} transform) {
  List<U> result = new ArrayList<U>(list.size());
  for (T t : list) {
    result.add(transform.invoke(t));
  }
  return result;
}

Stephen Colebourne’s evaluating BGGA example which converts an object of type T to an object of type U.

public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

At the end of the day, we really want closures in Java for doing these nice things:

  • Simplifying anonymous class instance creation
  • Automating resource management/termination (ARM blocks)
  • Using for-each style loop constructs
  • Writing less boilerplate code

Closures would make it all possible. A lot of the complicated closure constructs involving generics would be integrated into the API. Developers would not have to write those. Java would provide them out of the box.
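
As a rough illustration of the first point above, here is the kind of anonymous class boilerplate we write today, followed by what a BGGA-style closure could reduce it to (the closure line is proposal syntax shown as a comment, not compilable Java):

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("banana", "fig", "apple");

        // today: a whole anonymous class instance just to pass one method
        Collections.sort(names, new Comparator<String>() {
            public int compare(String a, String b) {
                return a.length() - b.length();
            }
        });
        System.out.println(names);

        // with BGGA closures (illustrative only):
        // Collections.sort(names, {String a, String b => a.length() - b.length()});
    }
}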

The more you look at closures and read about them the more familiar and less alien they become. Just like everything else, they do take a little getting used to. The question is what closure syntax do we want to get used to? Will it be good for Java? One day we may have to decide. Until then, the syntax war will continue.


Written by warpedjavaguy

February 28, 2008 at 12:14 pm

Posted in java, programming


Pointer Syntax for Java Closures


If Java closures are pointers to methods then we should make them look like pointers to methods.
 

Closures for Java are a hot topic yet again. The BGGA proposal is currently leading in the polls. It seems that most people who want closures want full closure support. I like the BGGA proposal, but I think the syntax can be improved to look more familiar and less verbose.

I generally tend to think of Java closures in the same way that I think of C/C++ function pointers (or member function pointers). A lot of programmers like myself transitioned from C++ to Java back in 1999. Java borrows a lot of its syntax from C++, and it was a natural transition for C++ programmers to move to Java. Given this, I can’t help but question whether or not we should adopt the familiar C++ function pointer syntax for Java closures. That way the syntax (at least) would not be too alien to us.

Borrowing the examples from Stephen Colebourne’s weblog entry on evaluating BGGA, we have the following syntax comparisons:

BGGA syntax:

int y = 6;
{int => boolean} a = {int x => x <= y};
{int => boolean} b = {int x => x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> {T => U} converter({=> T} a, {=> U} b, {T => U} c) {
  return {T t => a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

Java Method Pointer syntax (simplified C/C++ pointer syntax for Java):

int y = 6;
boolean* (int) a = (int x) {x <= y};
boolean* (int) b = (int x) {x >= y};
boolean c = a.invoke(3) && b.invoke(7);
public <T, U> U* (T) converter(T* a, U* b, U* (T) c) {
  return (T t) {a.invoke().equals(t) ? b.invoke() : c.invoke(t)};
}

In addition to this, we could also borrow the & syntax and reference existing static and instance methods like this:

boolean* (int) a = class&method(int);
boolean* (int) b = instance&method(int);
boolean* (int) c = this&method(int);

If we want closures in Java, why not borrow the C/C++ pointer syntax and think of them as pointers to methods if that’s what they are? Most of the Java syntax is borrowed from C++ after all. I understand that function pointers do not fit well in the OO paradigm but I’m not talking about pointers to functions here. I’m instead talking about pointers to methods in the OO world of Java.
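
For comparison, here is roughly what the first pointer-style example above amounts to in the Java we write today, using a hand-rolled single-method interface (IntPredicate is a name I have made up for illustration, not a JDK type):

interface IntPredicate {
    boolean invoke(int x);
}

public class PointerEquivalent {
    public static void main(String[] args) {
        final int y = 6;
        // each "pointer" becomes an anonymous class instance
        IntPredicate a = new IntPredicate() {
            public boolean invoke(int x) { return x <= y; }
        };
        IntPredicate b = new IntPredicate() {
            public boolean invoke(int x) { return x >= y; }
        };
        boolean c = a.invoke(3) && b.invoke(7);
        System.out.println(c); // true
    }
}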

Written by warpedjavaguy

February 21, 2008 at 9:51 pm

Posted in java, programming


The Mind of a Programmer


Good programmers are good at solving problems because they consider all the dimensions.
 

A short while ago I published a post about natural born programmers and questioned whether or not they exist. When I was writing it I was thinking about how programmers go about coding solutions to problems that they already know how to solve. I was trying to tap into the mind of a programmer and discover how it thinks. I demonstrated two approaches to solving the simple problem of determining whether a number is odd or even. One solution used a ‘smart’ human approach and the other a ‘simple’ math approach. It generated some good discussions and triggered various reactions. The most interesting reaction discussed my first response to the first comment posted by Matt Turner.

The simple math solution that I suggested was this:

// if divisible by 2 then is even else is odd
boolean isEven = number % 2 == 0;

The alternative and bitwise solution that Matt suggested was this:

// if last bit is 0 then is even else is odd
boolean isEven = (number & 1) == 0;

I knew that the &1 solution was functionally equivalent to the %2 solution, but I wasn’t exactly sure whether it was also equally efficient (or even better). I had to prove it. So I knocked up a quick test program to compare the execution times. When I ran the program I observed that both took the same time to execute. I also analysed the generated bytecode and found that the %2 solution used the IREM bytecode instruction and that the &1 operation used the IAND instruction. My bytecode findings attracted comments by Michael Speer and Charlie Chan respectively (and other anonymous comments too). Having done a bit of assembler, C, and C++ in my not too old school days, I do have some experience in binary arithmetic and native code optimisation. So I can appreciate the &1 solution. But I never expect to have to apply this knowledge directly to Java programming. I always expect Java to perform all trivial optimisations on my behalf. I should not have to optimise my Java code at this level. I should just write code that expresses what I want in the simplest and clearest way possible.
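
For the curious, a rough sketch of the kind of timing comparison I mean is shown below. This is not my original test program, and micro-benchmarks like this are only indicative:

public class OddEvenTiming {

    private static final int N = 100000000;

    public static void main(String[] args) {
        // warm up both loops so the JIT gets a chance to compile them
        runRem();
        runAnd();

        long t0 = System.nanoTime();
        int evensRem = runRem();
        long t1 = System.nanoTime();
        int evensAnd = runAnd();
        long t2 = System.nanoTime();

        System.out.println("%2: " + evensRem + " evens in " + ((t1 - t0) / 1000000) + " ms");
        System.out.println("&1: " + evensAnd + " evens in " + ((t2 - t1) / 1000000) + " ms");
    }

    // count evens using the modulo approach
    private static int runRem() {
        int evens = 0;
        for (int i = 0; i < N; i++) {
            if (i % 2 == 0) evens++;
        }
        return evens;
    }

    // count evens using the bitwise approach
    private static int runAnd() {
        int evens = 0;
        for (int i = 0; i < N; i++) {
            if ((i & 1) == 0) evens++;
        }
        return evens;
    }
}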

It has often been said by many programmers that there is no one right solution to a problem. That is definitely the case here. But there is something more interesting going on. If you study the &1 solution you will find that it very closely resembles the ‘smart’ human optimised approach to solving the problem. How? Remember that smart humans can tell whether a number is even or odd just by looking at the last digit. The &1 solution emulates exactly that, yes indeed! The difference is that humans look at the right-most digit whereas machines look at the right-most bit. We have identified a pattern here that is common to both humans and machines. Interesting!

But wait! The %2 approach is a more natural and readable expression of the mathematical definition and is also more easily understood by us humans as adopters of the base 10 decimal number system. It directly expresses at a high level the definition that all even numbers are divisible by two and that all odd numbers are not. At the lower level it is expressed as an IREM bytecode instruction. At the even lower machine code level it is expressed as native assembly instructions. We are now in the world of base 2 binary numbers, where an operation like ‘divide by two’ involves shifting all bits one place to the right and carrying the right-most bit (the remainder). Bits can be checked, set, and toggled with bitwise AND, OR, and XOR operations. At this stage, the %2 operation is optimised for the machine and more closely resembles the &1 operation. So at the binary level, one can deduce that the %2 operation is essentially equivalent to &1.
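
A small snippet makes the binary view concrete: shifting right by one divides by two, the carried-out bit is the remainder, and both expressions agree on evenness for every int. A quick sanity check, not a proof:

public class BinaryView {
    public static void main(String[] args) {
        for (int n : new int[] {-4, -3, 0, 1, 7, 8}) {
            // right shift divides by two; the carried-out bit is the remainder
            boolean rebuilt = (((n >> 1) << 1) | (n & 1)) == n;
            // %2 and &1 give the same answer to "is n even?"
            boolean sameAnswer = (n % 2 == 0) == ((n & 1) == 0);
            System.out.println(n + ": rebuilt=" + rebuilt + ", sameAnswer=" + sameAnswer);
        }
    }
}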

By solving problems and comparing solutions we are able to identify common patterns. Gifted programmers are naturally good at it! These are the things we learn and the things that go on in our minds as programmers.

Written by warpedjavaguy

January 18, 2008 at 11:41 am

Posted in java, programming


Test Driven Rewrites (for Programmers)


Why write mock objects manually when you can emulate them virtually?
 

In my previous post on Test Driven Rewrites I described at a high level how I used virtual mock objects to deliver a regression test suite for an existing component and how I used that suite to test and develop a rewrite of that component. In it you’ll recall that the existing codebase had no test suite at all and that I had very little knowledge of the business domain. I had to come up with a quick and easy way to create a test suite from scratch. Here I present some code that shows how I did it using AspectJ, XML, and JUnit.

The first thing I had to do was identify the following points in the codebase:

  1. The test point
    • This is the point in the application where the call to the component is made. It is the call to the component method that will be put under test.
  2. All the mock points
    • These are all the points in the component where calls are made to other components and/or services. They are the calls to the methods that need to be mocked.

Next, a TestCaseGenerator aspect was written to capture the arguments and return values of the test and mock points identified above. This aspect defined two pointcuts for matching the test and mock points respectively. “Around” advice was used on each to capture the arguments and return values of each call and persist them to an XML file. A MethodCall POJO was created to encapsulate the name, arguments, and return values of individual method calls, and Castor XML was used to marshal it to XML. The aspect code is shown below:

public aspect TestCaseGenerator {

    pointcut testPoint() :
        execution (/* test-point method pattern */);

    pointcut mockPoints() :
        call (/* mock-points method pattern */)
        && withincode(/* test-point method pattern*/);

    // output XML document
    private Document xmldoc;

    Object around() : testPoint() {

        /* instantiate new XML document here */

        // encapsulate invoked method data into POJO
        MethodCall methodCall = new MethodCall();
        methodCall.setMethod(
            thisJoinPoint.getSignature().toString());
        methodCall.setArgs(thisJoinPoint.getArgs());
        Object retValue = proceed();
        methodCall.setRetValue(retValue);

        /* marshal POJO into test-point XML element here */

        /* persist XML to file here */

        // pass back return value
        return retValue;
    }

    Object around() : mockPoints() {

        // encapsulate invoked method data into POJO
        MethodCall methodCall = new MethodCall();
        methodCall.setMethod(
            thisJoinPoint.getSignature().toString());
        methodCall.setArgs(thisJoinPoint.getArgs());
        Object retValue = proceed();
        methodCall.setRetValue(retValue);

        /* marshal POJO into mock-point XML element here */

        // pass back return value
        return retValue;
    }
}
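
For completeness, a minimal sketch of the MethodCall POJO referred to above might look like the following. The exact fields are an assumption; it just needs bean-style accessors so that Castor XML can marshal it:

public class MethodCall {

    private String method;    // method signature
    private Object[] args;    // captured input arguments
    private Object retValue;  // captured return value

    public String getMethod() { return method; }
    public void setMethod(String method) { this.method = method; }

    public Object[] getArgs() { return args; }
    public void setArgs(Object[] args) { this.args = args; }

    public Object getRetValue() { return retValue; }
    public void setRetValue(Object retValue) { this.retValue = retValue; }
}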

This aspect was woven into the existing codebase which was then deployed locally. So now an XML test case file was automatically generated by the aspect on the file system for every scenario that was executed on the deployed application. The business identified all the scenarios that needed to be tested and a test case XML file was generated for each one in real time as it was executed on the system by a user. The basic structure of the XML that was generated is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<test-case>
    <test-point method="method pattern">
        <arguments>
            ...
        </arguments>
        <return-value>
            ...
        </return-value>
    </test-point>
    <mock-point method="method pattern">
        <arguments>
            ...
        </arguments>
    </mock-point>
    <mock-point method="method pattern">
        <arguments>
            ...
        </arguments>
        <return-value>
            ...
        </return-value>
    </mock-point>
    ...
</test-case>

Once the data for all the known use case scenarios had been captured in XML files, it was then time to start writing the JUnit test suite that would execute the tests. This involved creating the following:

  1. A MockedTestCase class (extends JUnit TestCase)
  2. A MockedTestSuite class (extends JUnit TestSuite)
  3. A VirtualMocker aspect

The MockedTestCase class was implemented with a constructor that accepted a given test case XML file. All the mock point data in the file was loaded into individual MethodCall POJO instances and stored in a Map keyed by the method signature pattern. The map was bound to a thread local variable to make it thread safe and accessible to the virtual mocker aspect for mocking purposes. The test point data was used to invoke the component in the test and to assert the returned result. The Java code is shown below:

public class MockedTestCase extends TestCase {

    public static final
        ThreadLocal<Map<String,MethodCall>> MOCKS =
           new ThreadLocal<Map<String,MethodCall>>();

    private File xmlFile;
    private MethodCall testPoint;

    public MockedTestCase(File xmlFile) {
        super("testIt");
        this.xmlFile = xmlFile;
    }

    // overrides super to return test case XML file name
    public String getName() {
        return xmlFile.getName();
    }

    protected void setUp() throws Exception {

        testPoint = new MethodCall();
        /* Unmarshal XML test-point into POJO */

        Map<String,MethodCall> mocks =
            new HashMap<String,MethodCall>();
        /* Unmarshal XML mock-points and load into Map */

        // store loaded mocks in thread local var
        MOCKS.set(mocks);
    }

    public void testIt() throws Exception {

        // invoke the test
        // - pass in test-point args and capture the result
        Object args = testPoint.getArgs();
        Object result = /* invocation goes here */;

        // assert the result
        assertValues(
            "Unexpected result returned by "
                + testPoint.getMethod(),
            testPoint.getRetValue(),
            result);
    }

    public static void assertValues(
        String msg, Object expected, Object actual) {
        /* Compare the two values here and throw assertion
            error if not the same. One generic way of doing
            this might involve marshaling both objects to XML
            and then comparing the two with an XML diff utility. */
    }
}

The MockedTestSuite class was implemented to load and run every test case. This was done by loading all the XML test case files from the file system, instantiating individual MockedTestCase instances for each one, and adding them all to the suite of tests to be executed.
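
A minimal sketch of that suite loader is shown below. The testcases directory name and the .xml filename filter are assumptions about the layout, not the actual project structure:

import java.io.File;
import java.io.FilenameFilter;
import junit.framework.Test;
import junit.framework.TestSuite;

public class MockedTestSuite extends TestSuite {

    public MockedTestSuite(String name) {
        super(name);
    }

    // standard JUnit 3 entry point
    public static Test suite() {
        MockedTestSuite suite = new MockedTestSuite("Mocked regression tests");

        // pick up every generated XML test case file
        File[] files = new File("testcases").listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.endsWith(".xml");
            }
        });

        if (files != null) {
            for (File xmlFile : files) {
                suite.addTest(new MockedTestCase(xmlFile));
            }
        }
        return suite;
    }
}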

The VirtualMocker aspect was defined to intercept all the mock-points using an “around” advice which asserted the incoming arguments against the mocked arguments for the call before returning the mocked return value. Access to all mock point data was provided in a Map contained in the thread local variable as loaded by the currently executing MockedTestCase. The aspect code is shown below:

public aspect VirtualMocker {

    pointcut mockPoints() :
        call (/* mock-points method pattern */)
        && withincode(/*test-point method pattern*/);

    Object around() : mockPoints() {

        // retrieve mock data
        Map<String,MethodCall> mocks =
            MockedTestCase.MOCKS.get();
        MethodCall mock = mocks.get(
            thisJoinPoint.getSignature().toString());

        // assert input and return output
        MockedTestCase.assertValues(
            "Unexpected input arguments passed to "
                + mock.getMethod(),
            mock.getArgs(),
            thisJoinPoint.getArgs());
        return mock.getRetValue();
    }
}

The VirtualMocker aspect was woven into the existing codebase and the MockedTestSuite was executed over it to perform all the tests. The same aspect was then woven into the new codebase and used to facilitate the test driven development of the rewrite.

Each XML test case file contained all the mock data that was required to test the same scenario that generated it. The VirtualMocker aspect used the data to do all the mocking in all the tests. No mock objects or test data fixtures were manually coded 🙂

Written by warpedjavaguy

December 20, 2007 at 9:54 pm

Posted in java, programming


Test Driven Rewrites


Rewritten applications are best tested in the laziest ways possible.
 

Rewriting existing applications or components can be a risky exercise. Consider an existing component that provides critical business functionality to an enterprise. The component has been tweaked and fine-tuned over the years and has evolved to provide the exact functionality required by the business. It is a core component that consists of very fine grained and delicate operations, and it is used by multiple applications. Although it functions correctly, a rewrite has been requested after a review identified some major performance and maintainability issues. The functions take too long to execute, and it has become too difficult to add, modify, or replace any functionality without incurring negative side effects. The component has become very volatile and fragile in the face of change.

I was part of a team that was given the task of rewriting such a component. My role was to come up with an automated regression test suite that would test both the old and the new software. The first problem I had was that there was no existing test suite at all that I could use as a base. I was not familiar with the software, and my general understanding of the business and its domain was very poor. I was a newbie who had just joined the project and was thrown into the deep end. I had to quickly find a way to start creating tests that would cover all possible scenarios. In my mind I was thinking “I wish I had a test generator”. That way I could get a business person to run through all the scenarios in the application and have the tests automatically generated on the file system (mock data, assertions, and all). The same tests could then be used to test the rewrite. So I set upon creating a test generator. It wasn’t going to be easy, but I knew that once I had it going the remaining exercise of running the same tests over the rewritten code base would be a breeze. It was a challenge that I could not refuse and failure was not an option.

I immediately started researching mocking frameworks and was looking for one that would involve minimal overhead from a coding perspective. I wanted one that would help generate all the mock objects and data that I would need for all my tests. The last thing I wanted to do was to have to manually start coding mock objects and data fixtures. I wanted all the data to be automatically captured and all the tests to be automatically generated. After about half an hour of researching online, I stumbled upon this beauty:

Virtual Mock Objects using AspectJ with JUNIT

Using aspect-oriented programming in AspectJ to facilitate isolation of testable units, without hand-crafting Mock Objects or using a specialized Mock Object generation tool.

With aspects you can intercept method invocations and access the input parameters and return values both before and after the call. This was exactly what I needed. The existing component we were replacing made several calls to a rules engine and other EJB services. I immediately realised that I could write an aspect to intercept those calls and capture the data going in and the data coming out. I could then persist this data to an XML file and have all the mock data I needed for all my tests. So I wrote a test generator aspect that captured all the data going in and out of the calls that I needed to mock. I separated the input and output data of each method call into separate XML elements in the generated file and associated them with the call. I wove the aspect into the code and asked a business person to use the application as they normally would and run some scenarios. For each scenario that they ran, an XML test file was automatically generated in real time on the file system. The existing code was not changed at all; the generator aspect was simply woven into it.

The test case generator was complete and all test cases were identified by the business and generated in less than two weeks. We managed to generate some 600+ test cases in that time. It was now time to start writing the JUnit test runner and virtual mocker aspect using the pattern described in the virtual mock objects article (quoted above). The mocker aspect was written to intercept every method invocation for which mock data was provided in the generated XML test files. It asserted the input parameters passed to each mocked call and overrode the method to return the output for that call as captured in the XML test file. The aspect was woven into the existing code, and a JUnit test runner was written to invoke the component using the captured test input and to assert the returned result against the captured test output. The aspect handled all the mocked calls in between. Again, the existing code was not modified in any way; the virtual mocker aspect was simply woven into it. As expected, all the tests passed when executed over the existing code base. I now had a complete regression test suite that I could use to test both the existing component and the newly rewritten one that the rest of the team was busy developing. I had some time on my hands and helped them complete it.

When the rewrite was ready for testing I wove the same virtual mocker aspect into the new code and ran the same tests using the same JUnit test runner. The tests all ran fast, and about 60% of them passed first go! The other 40% failed due to various bugs that had been introduced into the new code, missing logic, and other miscellaneous anomalies. It took less than a week to fix the new code and achieve a 100% test pass rate. That 40% of the rewrite was completely test driven. The entire exercise was a huge success and the tests were integrated into the automated build process. The entire test suite was recorded in XML form and could be used to verify both the old and the rewritten component. The rewritten component continued to evolve and the tests were kept in sync. Whenever the data model changed, the tests were also updated. XSL stylesheets made it easy to transform and restructure the data contained in all the XML test files as required.

When it came to running the tests in isolation though, empty stub implementations of all EJB interfaces had to be used. Those empty stubs were the only little bit of manual coding we had to do. But as far as mocking goes, nothing was manually coded. Using aspects in this way made it possible to test both existing and rewritten code “as is” without having to manually write any mock objects or test data fixtures 🙂
 

Written by warpedjavaguy

December 11, 2007 at 10:01 pm

Posted in java


The Day I Timed Out


There is no more time left for waiting after timing out.
 

I work as a contractor (freelancer) and am always on the lookout for new and interesting job opportunities. Not long ago I found and applied for a Java developer role that was advertised online. My current contract was about to expire and so it was the ideal time to actively start looking around. There was no real need for me to do so though because my current client had already offered me a renewal. But I could not resist a new opportunity and I had not yet agreed to accept the renewal. It was the perfect time to make my next transition. That’s what us contractors do. We have the freedom of choice and we move around. It’s the beauty of contracting.

The next morning on my way to work I got a phone call from the recruiter. It was about 8:30 AM and I had just bought my usual morning coffee. Normally I would sit down and drink my coffee in peace without any distractions but I made an exception in this case. The conversation I had was a thirty minute phone interview. I didn’t enjoy my coffee the way I normally would have liked to but I did manage to secure a face to face interview for 12:30 PM the same day. And that’s always a good thing.

Now, 12:30 PM is usually when I prefer to go out for lunch, but I was willing to sacrifice that too. I promptly arrived for the interview and the receptionist led me to a waiting room. It was a tiny room with no windows. There was one small round table with a form and a pen on it, and there were three chairs. I was asked to sit down, fill in the form, and wait for the interviewer. I filled in the form in five minutes and sat there waiting. Five more minutes passed and I was still sitting there waiting. Yet another five minutes passed and I was still sitting there waiting! I had been there for 15 minutes all alone without anyone to speak to or even a glass of water to drink. I am not one for demanding respect, but a little bit of hospitality would have been nice. By this time both my patience and my respect for that recruiter had run out. I didn’t really need to be there and I was not desperate for the job either. I flipped over the form that I had filled in and wrote the following message on the back of it:

<timed-out millisecs="900000"/>

I left it on the table and walked straight out. Yes, that’s right. I had timed out and I knew it.

Some two minutes later as I was walking away from the building, my phone rang. It was the interviewer. I didn’t answer it. I grabbed a quick lunch and went back to my current workplace, signed into my computer, and resumed working. I soon received an email message that read:

I am sorry for the mix-up that occurred today with your interview. Our reception did not notify me of your arrival and by the time I checked to see if you had arrived, you had left. Is there a chance we could re-schedule for tomorrow?

Please accept my apologies for keeping you waiting today.

To which I replied:

Sorry I left the way I did. I guess I’m just a little bit impatient sometimes. In the meantime I’ve decided to accept the renewal offered to me by my current client.

I wish you good luck in filling the position.

Written by warpedjavaguy

October 17, 2007 at 10:34 pm

Posted in java, programming


Renegading Java Classics


Anything that is deprecated and superseded can be considered classic and should be renegaded.
 

The Java API has undergone many changes since the original 1.0 and classic 1.1 and 1.2 versions. The changes include both enhancements and deprecations. Any classes, interfaces, and methods that have been marked as deprecated should no longer be used. They are supported for backwards compatibility purposes and are likely to never be removed from future Java releases. It is recommended (but not enforced) that programmers never use them in their programs. I can understand the rationale behind this. Backwards compatibility is important and all reasonable measures should be taken to ensure that it is always preserved. Deprecation provides a means of providing this compatibility. The problem is that in doing so, it forever supports something that is deprecated without strictly enforcing that it no longer be used.

I would like to propose the idea of introducing a new and specialised type of deprecation called renegation, whereby anything that is superseded and unsupported can be physically demoted to ‘classic’ status and supported not in source code but in bytecode only. A Java compiler that supports this would raise an error when attempting to compile any code that makes direct use of anything marked as renegaded. Anything that is already marked and compiled as renegaded would be fully supported at runtime but strictly controlled at compile time. This would ensure that all bytecode remains valid and that any source code that has no compile time dependencies on anything renegaded still compiles. In this way, all newly written and compiled code would be clean and free of renegaded code and would still be able to make use of any new or existing precompiled binaries. Programmers would then be forced not to write or compile any code that makes direct use of anything that is renegaded. They would be forced to use the preferred alternatives instead. Compatibility would still be supported through binaries, and any existing code that requires recompilation would have to be updated so that all renegaded code fragments are replaced with the preferred alternatives first.

Renegation should not have any impact on deprecation and should not be applied to everything that is already deprecated. It should instead only be applied to types that may or may not already be deprecated and are suitable candidates because they are obsolete and superseded by preferred alternatives (like the java.awt.List.clear() method for example). Furthermore, it would also make it easier to deprecate and avoid using certain ‘classic’ types that have not yet been deprecated but should have been (like the java.util.Enumeration interface for example). The Enumeration interface is a primary candidate for renegation, especially when you consider that it is only reactively used and has a special note included in its Javadoc comments stating that Iterator should be considered in preference.

Enumeration Javadoc comment:

NOTE: The functionality of this interface is duplicated by the Iterator interface. In addition, Iterator adds an optional remove operation, and has shorter method names. New implementations should consider using Iterator in preference to Enumeration.
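
To illustrate the kind of migration renegation would force, here is a ‘classic’ Enumeration loop alongside its preferred Iterator replacement. A renegation-aware compiler would reject the first form:

import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

public class EnumerationMigration {
    public static void main(String[] args) {
        Vector<String> v = new Vector<String>();
        v.add("a");
        v.add("b");

        // classic style: would no longer compile if Enumeration were renegaded
        for (Enumeration<String> e = v.elements(); e.hasMoreElements();) {
            System.out.println(e.nextElement());
        }

        // preferred alternative
        for (Iterator<String> it = v.iterator(); it.hasNext();) {
            System.out.println(it.next());
        }
    }
}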

In order to support renegation and preserve deprecation, the existing @Deprecated annotation would need to be extended, or a new core Java annotation type or type modifier would need to be introduced.

Here are some suggestions:

/* Option 1:
Extending the existing @Deprecated annotation type */

@Deprecated (renegation = "classic")
public interface Enumeration<E> {
...
}

/* Option 2:
Introducing a new @Renegaded annotation type */

@Renegaded (status = "classic")
public interface Enumeration<E> {
...
}

/* Option 3:
Introducing a new classic type modifier */

public classic interface Enumeration<E> {
...
}

Of course, renegation should be supported at the type, method, and attribute levels. Its use should not be limited to just the Java API either. Any code in any codebase should be able to be renegaded if deemed suitable.

Written by warpedjavaguy

October 10, 2007 at 11:33 am

Posted in java, programming

