
I am the founder and CEO of Data Geekery GmbH, located in Zurich, Switzerland. With our company, we have been selling database products and services around Java and SQL since 2013. Ever since my Master's studies at EPFL in 2006, I have been fascinated by the interaction of Java and SQL. Most of this experience I have obtained in the Swiss E-Banking field through various variants (JDBC, Hibernate, mostly with Oracle). I am happy to share this knowledge at various conferences, JUGs, in-house presentations and on our blog.

The Dark Side of Java 8

04.08.2014


So far, we’ve been showing the thrilling parts of this new major release. But there are also caveats. Lots of them. Things that

  • … are confusing
  • … are "wrong"
  • … are omitted (for now)
  • … are omitted (for long)

There are always two sides to Java major releases. On the bright side, we get lots of new functionality that most people would say was overdue. Other languages and platforms had generics long before Java 5. Other languages and platforms had lambdas long before Java 8. But now, we finally have these features. In the usual, quirky Java way.

Lambda expressions were introduced quite elegantly. The idea of being able to write every anonymous SAM instance as a lambda expression is very compelling from a backwards-compatibility point of view. So what are the dark sides to Java 8?

Overloading gets even worse

Overloading, generics, and varargs aren’t friends. We’ve explained this in a previous article, and also in this Stack Overflow question. These might not be every day problems in your odd application, but they’re very important problems for API designers and maintainers.

With lambda expressions, things get “worse”. So you think you can provide some convenience API, overloading your existing run() method that accepts a Callable to also accept the new Supplier type:

static <T> T run(Callable<T> c) throws Exception {
    return c.call();
}
static <T> T run(Supplier<T> s) throws Exception {
    return s.get();
}

What looks like perfectly useful Java 7 code is now a major pain in Java 8, because you cannot simply call these methods with a lambda argument:

public static void main(String[] args) throws Exception {
    run(() -> null);
    //  ^^^^^^^^^^ ambiguous method call
}

Tough luck. You’ll have to resort to either of these “classic” solutions:

run((Callable<Object>) (() -> null));
run(new Callable<Object>() {
    @Override
    public Object call() throws Exception {
        return null;
    }
});

So, while there’s always a workaround, these workarounds always “suck”. That’s quite a bummer, even if things don’t break from a backwards-compatibility perspective.

Not all keywords are supported on default methods

Default methods are a nice addition. Some may claim that Java finally has traits. Others clearly dissociate themselves from the term, e.g. Brian Goetz:

The key goal of adding default methods to Java was “interface evolution”, not “poor man’s traits.”

As found on the lambda-dev mailing list.

Fact is, default methods are quite an irregular feature, orthogonal to everything else in Java. Here are a couple of critiques:

They cannot be made final

Given that default methods can also be used as convenience methods in an API:

public interface NoTrait {
    // Run the Runnable exactly once
    default final void run(Runnable r) {
        //  ^^^^^ modifier final not allowed
        run(r, 1);
    }

    // Run the Runnable "times" times
    default void run(Runnable r, int times) {
        for (int i = 0; i < times; i++)
            r.run();
    }
}

Unfortunately, the above is not possible, and so the first overloaded convenience method could be overridden in subtypes, even if that makes no sense to the API designer.
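
To illustrate the consequence, here is a minimal sketch (the implementing class name is made up): nothing stops an implementor from overriding the convenience overload and silently breaking its documented contract:

public class NoBrainer implements NoTrait {
    // Overrides the "run exactly once" convenience method, against the
    // API designer's intent - and the compiler cannot prevent it
    @Override
    public void run(Runnable r) {
        run(r, 2);
    }
}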

They cannot be made synchronized

Bummer! Would that have been difficult to implement in the language?

public interface NoTrait {
    default synchronized void noSynchronized() {
        //  ^^^^^^^^^^^^ modifier synchronized
        //  not allowed
        System.out.println("noSynchronized");
    }
}

Yes, synchronized is used rarely, just like final. But when you have that use-case, why not just allow it? What makes interface method bodies so special?
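
For what it's worth, a synchronized block inside the default method body still compiles. A minimal sketch of the obvious workaround (interface name made up):

public interface SortOfSynchronized {
    default void runSynchronized() {
        // The modifier is rejected, but explicit synchronization on "this" is not
        synchronized (this) {
            System.out.println("synchronized on the instance");
        }
    }
}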

The default keyword

This is maybe the weirdest and most irregular of all features: the default keyword itself. Let's compare interfaces and abstract classes:

// Interfaces are always abstract
public /* abstract */ interface NoTrait {

    // Abstract methods have no bodies
    // The abstract keyword is optional
    /* abstract */ void run1();

    // Concrete methods have bodies
    // The default keyword is mandatory
    default void run2() {}
}

// Classes can optionally be abstract
public abstract class NoInterface {

    // Abstract methods have no bodies
    // The abstract keyword is mandatory
    abstract void run1();

    // Concrete methods have bodies
    // The default keyword mustn't be used
    void run2() {}
}

If the language were re-designed from scratch, it would probably do without either the abstract or the default keyword. Both are unnecessary. The mere fact that there is or is not a body is sufficient information for the compiler to assess whether a method is abstract. I.e., this is how things could be:

public interface NoTrait {
    void run1();
    void run2() {}
}

public abstract class NoInterface {
    void run1();
    void run2() {}
}

The above would be much leaner and more regular. It's a pity that doing without the default keyword was never really an option for the EG. Well, it was debated, but the EG never wanted to accept it as an option. I tried my luck and got this response:

I don’t think #3 is an option because interfaces with method bodies are unnatural to begin with. At least specifying the “default” keyword gives the reader some context why the language allows a method body. Personally, I wish interfaces would remain as pure contracts (without implementation), but I don’t know of a better option to evolve interfaces.

Again, this is a clear commitment by the EG not to commit to the vision of "traits" in Java. Default methods were purely a necessary means to implement one or two other features. They weren't well-designed from the beginning.

Other modifiers

Luckily, the static modifier made it into the specs late in the project, so it is now possible to specify static methods in interfaces. For some reason, though, these methods do not need (nor allow!) the default keyword, which must've been a totally random decision by the EG, just like the fact that you apparently cannot define static final methods in interfaces.
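
A minimal sketch of the resulting irregularity (interface and method names made up):

public interface WithStatic {

    // Allowed in Java 8: a static interface method, written without default
    static void utility() {
        System.out.println("utility");
    }

    // Not allowed: static and default cannot be combined
    // default static void utility2() {}

    // Not allowed: final is not permitted on interface methods
    // static final void utility3() {}
}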

Visibility modifiers were discussed on the lambda-dev mailing list, but they were out of scope for this release. Maybe we can get them in a future release.

Few default methods were actually implemented

Some methods would have sensible default implementations on interfaces, one might guess. Intuitively, the collections interfaces, like List or Set, would have them on their equals() and hashCode() methods, because the contract for these methods is well-defined on the interfaces. It is also implemented in AbstractList, using listIterator(), which is a reasonable default implementation for most tailor-made lists.

It would’ve been great if these API were retrofitted to make implementing custom collections easier with Java 8. I could make all my business objects implement List for instance, without wasting the single base-class inheritance on AbstractList.

Probably, though, there was a compelling reason related to backwards compatibility that prevented the Java 8 team at Oracle from implementing these default methods. Whoever sends us the reason why this was omitted will get a free jOOQ sticker :-)

The wasn’t invented here – mentality

This, too, was criticised a couple of times on the lambda-dev EG mailing list. And while writing this blog series, I can only confirm that the new functional interfaces are very confusing to remember. They’re confusing for these reasons:

Some primitive types are more equal than others

The int, long, and double primitive types are preferred over all the others, in that they get dedicated functional interfaces in the java.util.function package and throughout the Streams API. boolean is a second-class citizen: it still made it into the package, but only in the form of a BooleanSupplier or a Predicate, or worse, an IntPredicate.

All the other primitive types don't really exist in this area. I.e. there are no special types for byte, short, float, and char. While the argument of meeting deadlines is certainly a valid one, this quirky status quo will make the language even harder to learn for newbies.
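
A minimal sketch of the resulting asymmetry (all types shown do exist in java.util.function):

import java.util.function.BooleanSupplier;
import java.util.function.IntPredicate;
import java.util.function.Predicate;

public class PrimitivePreferences {
    public static void main(String[] args) {
        // int, long and double get dedicated specialisations throughout the package
        IntPredicate isEven = i -> i % 2 == 0;

        // boolean only shows up as BooleanSupplier (or as the result of a Predicate)
        BooleanSupplier alwaysTrue = () -> true;

        // byte, short, float and char get nothing: they fall back to boxing
        Predicate<Character> isDigit = c -> Character.isDigit(c);

        System.out.println(isEven.test(4));            // true
        System.out.println(alwaysTrue.getAsBoolean()); // true
        System.out.println(isDigit.test('7'));         // true
    }
}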

The types aren’t just called Function

Let’s be frank. All of these types are simply “functions”. No one really cares about the implicit difference between a Consumer, a Predicate, aUnaryOperator, etc.

In fact, when you’re looking for a type with a non-void return value and two arguments, what would you probably be calling it? Function2? Well, you were wrong. It is called a BiFunction.

Here’s a decision tree to know how the type you’re looking for is called:

  • Does your function return void? It’s called a Consumer
  • Does your function return boolean? It’s called a Predicate
  • Does your function return an int, long, or double? It's called XXToIntYY, XXToLongYY, XXToDoubleYY something
  • Does your function take no arguments? It’s called a Supplier
  • Does your function take a single int, long, or double argument? It's called an IntXX, LongXX, DoubleXX something
  • Does your function take two arguments? It’s called BiXX
  • Does your function take two arguments of the same type? It's called BinaryOperator
  • Does your function return the same type as it takes as a single argument? It’s called UnaryOperator
  • Does your function take two arguments of which the first is a reference type and the second is a primitive type? It's called ObjXXConsumer (only consumers exist with that configuration)
  • Else: It’s called Function

Good lord! We should certainly go over to Oracle Education to check whether the prices for Oracle Certified Java Programmer courses have drastically increased recently… Thankfully, with lambda expressions, we hardly ever have to remember all these types!

More on Java 8

Java 5 generics have brought a lot of great new features to the Java language. But there were also quite a few caveats related to type erasure. Java 8's default methods, Streams API and lambda expressions will again bring a lot of great new features to the Java language and platform. But we're sure that Stack Overflow will soon burst with questions by confused programmers that are getting lost in the Java 8 jungle.

Learning all the new features won't be easy, but the new features (and caveats) are here to stay. If you're a Java developer, you had better start practicing now, while you have the chance. Because we have a long way to go.

Nonetheless, things are exciting, so stay tuned for more Java 8 content in this blog series.

Are you in for another critique of Java 8? Read "New Parallelism APIs in Java 8: Behind the Glitz and Glamour".

Published at DZone with permission of Lukas Eder, author and DZone MVB. (source)



Comments

matt inger replied on Tue, 2014/04/08 - 1:37pm

In regard to your "decision tree", there is a certain amount of niceness in having some of these specializations already defined for you.


Predicate<T> extends Function<T, Boolean>

BinaryOperator<T> extends BiFunction<T,T,T>

UnaryOperator<T> extends Function<T,T>

Personally, I do agree that Function0, Function1, Function2 would have been better than Supplier, Function and BiFunction.

To me, Consumer makes sense, as it's a side-effecting construct with no return type (in Scala this would be Unit). If you modelled Consumer in terms of Function, it would have to be Function<T, Void> and you'd be forced to return null from every implementation, which would add code cruft.
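
A minimal sketch of that point (class name made up):

import java.util.function.Consumer;
import java.util.function.Function;

public class ConsumerVsFunction {
    public static void main(String[] args) {
        // Modelled as Function<T, Void>, every body has to return null
        Function<String, Void> printAsFunction = s -> {
            System.out.println(s);
            return null;
        };

        // The dedicated Consumer type avoids that cruft
        Consumer<String> print = System.out::println;

        printAsFunction.apply("hello");
        print.accept("hello");
    }
}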

The only annoyance I really see is dealing with the primitives. However, this can hardly be blamed on the functional aspects of Java 8. It is a result of the (insert adjective here) design decision made in Java 1 to separate the primitives from the object hierarchy. That's the source of a lot of ugliness in lots of libraries, and it rears its ugly head in the form of autoboxing.

Overall, I think they did an OK job. Could it have been better? Possibly. But remember, they're trying to maintain full backward compatibility while at the same time trying to bring the language forward. Last time I checked, Python had to do a compatibility-breaking release in 3.0, and it has happened in Scala more than once. I don't recall that ever happening in Java (though I would argue maybe it needs to happen).



Lukas Eder replied on Wed, 2014/04/09 - 1:56am in response to: matt inger

I completely agree that most of this annoyance is due to primitive types. Don't get me wrong - the expert group made the best of possible choices / implementations given the project constraints. It would have been much better to implement value types (removal of primitive types) before lambda expressions and streams. But it is also easy to understand why priorities were shifted towards JSR-335.

Note that Edwin Dalorzo has written a very interesting follow-up article to mine, going a bit deeper into the subject of the JDK's new functional interfaces:

http://blog.informatech.cr/2014/04/04/jdk-8-interface-pollution/ 

Borislav Iordanov replied on Wed, 2014/04/16 - 1:39pm

Great article! Reading this, I can't help but conclude that language design should be a bit more uncompromising. That is, principles should be respected with a stronger fervor. For example, I'm sure Java was originally designed sort of with the C/C++ mindset of introducing keywords only when necessary. Then that principle was ignored for readability (a relatively subjective judgement). And your other examples are similarly the result of various compromises. There are enough opportunities for compromises in API designs or application development... 

Michele Mauro replied on Thu, 2014/04/17 - 3:09am

But when you have that use-case, why not just allow it?

Phrases like this were the fuel of many religious wars. If you were around in the '90s, you may recall the feuds between Python and Perl, or between Vi and Emacs.

In language design, it's often not wise to try to appeal to everyone. Scala (and maybe Ruby?) sometimes struggles with this, too, as some features have escaped, are now in the wild, and are difficult to remove. Python, on the other hand, has always followed the "I don't care about your use case, do it THIS way" philosophy, with some success.

Mario Fusco replied on Thu, 2014/04/17 - 7:34am

I find this article mostly FUD, and since it is of course not easy to explain why in one or two tweets (I mistakenly tried that path first), I decided to explain this claim in a bit more detail. The article also seems not to take into consideration the biggest constraint that the lambda project developers had to face: the introduction of this epochal change had to be made without breaking any backward compatibility of any Java code written in the last 20 years. So let me try to reply to the main points of this article, one by one:

1. Overloading gets even worse

I don't see how this is related to lambdas. Actually, I believe that this is just a further demonstration that overloading should be considered harmful and avoided whenever possible. In particular, I could reproduce the same problem without lambdas or even without generics at all: suppose you have the following two methods

void run(String s)

void run(Integer i)

What happens if you call run(null) without any explicit cast?
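
A minimal sketch of this (class name made up):

public class AmbiguousNull {

    static void run(String s)  { System.out.println("String"); }
    static void run(Integer i) { System.out.println("Integer"); }

    public static void main(String[] args) {
        // run(null);          // does not compile: reference to run is ambiguous
        run((String) null);    // an explicit cast resolves the ambiguity
    }
}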

2. Default methods cannot be made final

Are you forgetting the backward compatibility constraint? What would happen if you had your own List implementation and they declared the stream() method final in the Java 8 List interface?

3. Default methods cannot be made synchronized

This feature would have been totally useless: a default method cannot have any state, so what's the point in sync'ing it? Orthogonality of language features is great, but only when it makes sense.

4. Few default methods are actually implemented

Backward compatibility. What would happen if you had a List implementation that didn't declare the equals() method (relying on the Object one instead) and they added another implementation in the List interface?

5. Some primitive types are more equal than others

This is true, but it is also an understandable design choice. Better to have 3 Streams for the primitives you use 99% of the time than 8 of them. Conversely, primitive Streams for int, long and double are very necessary (I also criticized this at the very beginning). Just compare the performance of a Stream<Double> vs. a DoubleStream and you'll see why.
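
A minimal sketch of the two pipelines being compared (the boxed variant allocates a Double for every element):

import java.util.stream.IntStream;

public class BoxedVsPrimitive {
    public static void main(String[] args) {
        // Stream<Double>: every element is boxed, and the reduction
        // unboxes and re-boxes at each step
        double boxed = IntStream.rangeClosed(1, 1_000_000)
                                .mapToObj(i -> (double) i)
                                .reduce(0.0, Double::sum);

        // DoubleStream: plain double arithmetic throughout
        double primitive = IntStream.rangeClosed(1, 1_000_000)
                                    .asDoubleStream()
                                    .sum();

        System.out.println(boxed == primitive);   // same result, very different cost
    }
}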

6. Types aren't called just Function.

You're forgetting the default methods. E.g. Predicate has a negate() method; what should that method do on Function?

Lukas Eder replied on Fri, 2014/04/18 - 12:46am in response to: Mario Fusco

Thank you for your detailed feedback, Mario. I'm still very surprised how quickly you jumped to the conclusion that this article is FUD. This article is part of a larger series of articles, which is mostly pro-Java 8. At Data Geekery, we're great fans of Java, and we had been looking forward to this major release for a long time. I hope you will be able to put this article into a more appropriate perspective.

But you took the time to delve into the details of this article, so I will respond to you:

Mario: The article also seems not to take into consideration the biggest constraint that the lambda project developers had to face: the introduction of this epochal change had to be made without breaking any backward compatibility of any Java code written in the last 20 years.

Please re-read the introduction, which reads:

Article: Lambda expressions were introduced quite elegantly. The idea of being able to write every anonymous SAM instance as a lambda expression is very compelling from a backwards-compatibility point of view. So what are the dark sides to Java 8?

I'm fully aware that the expert group made the best of decisions given the immensely complex constraints modern Java imposes upon them, and I'm very curious about future evolutions - e.g. to see what sort of tricks they will pull off to introduce value types and reified generics. Given the constraints, this was a very good piece of work. But it is not FUD to raise the voice once more and remind ourselves that maybe we should eventually consider completely rewriting Java.

Every junior developer who learns Java nowadays will have to learn its complete history of backwards-compatibility, in order to master it. We're approaching a level where this starts being unbearable - even to seniors.

1. Overloading gets even worse

I don't see how this is related to lambdas.

:-) Well, you can of course insist that this point is only about overloading, not about lambdas. But as I told you already on Twitter, one needs to understand these things in depth to know why a cast is necessary, specifically when using a lambda. It is not immediately clear to an API designer using overloading that an introduced overload might break a call-site lambda expression, even though it was a perfectly clean Java 7 API increment.


So, yes. This point is very much related to lambdas.

2. Default methods cannot be made final

Are you forgetting the backward compatibility constraint?

No, I'm not. The article never insisted that the Collections default methods be final. But I, as an API designer, would like to leverage "final default" methods in my own APIs. There is hardly any reason why this should be generally impossible in situations where the API designer knows what they're doing.

3. Default methods cannot be made synchronized

This feature would have been totally useless

I agree that this critique doesn't pull much weight, as the use-cases are very remote.

4. Few default methods are actually implemented

Backward compatibility. What would happen if you had a List implementation that didn't declare the equals() method (relying on the Object one instead) and they added another implementation in the List interface?

Equals was a bad example, as it is not possible to implement default equals() methods in interfaces - the Object.equals() implementation will always be preferred.

A better example would have been List.listIterator() and many other List methods. Why isn't there any default implementation, unlike the one for Iterator.remove()?

And even if there's a good reason for this, don't you think that there should be such a default implementation?
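
For reference, this is roughly what the Java 8 Iterator.remove() default looks like; a similarly conservative default would have been conceivable for quite a few List methods:

public interface Iterator<E> {
    boolean hasNext();
    E next();

    // The Java 8 default: implementations that don't support removal
    // no longer have to write this boilerplate themselves
    default void remove() {
        throw new UnsupportedOperationException("remove");
    }
}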

5. Some primitive types are more equal than others

This is true, but it is also an understandable design choice. Better to have 3 Streams for the primitives you use 99% of the time than 8 of them.

Yes, I understand this motivation perfectly, and it was the right choice. But again, this shows that we're in for designing a quirkier and quirkier language that is harder and harder to maintain (and to learn).

(I also criticized this at the very beginning)

... which I suspect wasn't FUD then, it was merely criticism, right? ;-)

6. Types aren't called just Function.

You're forgetting the default methods. E.g. Predicate has a negate() method; what should that method do on Function?

OK, now I really must insist that we go through this thoroughly. Had Java already implemented value types (and perhaps reified generics), it might be possible to say that the mere fact of having a negate() method on an interface simply doesn't pull the weight of having such an interface. It would be just as simple as declaring some static method:

public static <T> Function<T, Boolean> not(Function<T, Boolean> function) {
    return t -> !function.apply(t);
}

Again, don't get me wrong. I understand that we have backwards compatibility, primitive types, generic type erasure, and all the other many constraints that made this the best possible solution. But I insist on saying that by the time we're using Java 10, we'll have added to the list of quirky constraints the need to support a panoply of weird functional interfaces.

Mario - I really don't understand your reproach. Again, we've blogged so many times about the good parts of Java 8. Now we've introduced our readers to a couple of bad parts, which deserve to be mentioned. I understand that your heart beats for functional programming, with passion. This doesn't mean that anyone criticising FP is spreading FUD. I'm sure you do understand.


Yours,

Lukas

Tomm Carr replied on Fri, 2014/04/18 - 1:29am in response to: matt inger

 Quote: "I don't recall that [breaking backwards compatibility] ever happening in Java (though i would argue maybe it needs to happen)"

I would have to agree. Backwards compatibility is a great, strong requirement for any language (or framework or whatever). Reliability is extremely important. However, since we cannot know what features we will need or no longer need in the future, I would submit that breaking backwards compatibility would be an occasional necessity. Sometimes trying to shoehorn a new feature into a backwards compatible setting can distort the usability and/or maintainability into unrecognizable forms. Maintaining an unbroken link to the past is no way to forge ahead to the future.

Java, indeed any language (framework, etc.), should have a specified minimum for backwards compatibility -- say, two major releases. So Java 8 would have to be backwards compatible with Java 6 & 7, but not necessarily with Java 5 and earlier. We can't have it change haphazardly, but we need to design in the ability to incorporate paradigm shifts and other nifty unknowable features.

Reliability does not mean "never changes." It means "changes predictably."

On the development side, it never hurts to revisit old code that may not have been cared for in years and bring it up to "current." Yeah, it can be a royal pain in the neck, but there are good reasons to do so.

- It maintains the relevance of the code. "Legacy" would no longer mean "pretty much useless for current needs."

- It increases the reliability of the code. One of the reasons for new features is reliability. For example, go back to all those "final int" implementations of enumerated data types and convert them to actual enums. All the range checking and other superfluous code can be removed.

- It increases the maintainability of the code. And not just as a concept. As our code ages, our pool of developers with knowledge of the legacy code, and the state of the technology used by that code, shrinks. I still see the occasional job listing for COBOL developers. I can only shake my head and wish them luck in finding someone.

As I said, this can be painful, but it should be scheduled as a periodic task with the resources already allocated. It is always, always a lot more painful and expensive when the need to do this hits us unexpectedly. It also inevitably comes with a drop-dead date way too short to allow for good work.

Plan for it and do it. A language that may periodically (but not too often) break backward compatibility may be just the thing to force this upon us when our own discipline fails us.
