You shouldn't have to worry about front-end optimization.

After writing yesterday's article on optimizing AngularJS apps with Grunt, I received an interesting reply from @markj9 on Twitter.

I clicked on the provided link, listened to the podcast (RR HTTP 2.0 with Ilya Grigorik) and discovered some juicy bits around 27:00. The text below is from the podcast's transcript at the bottom of the page.

AVDI:  Yeah. If I pushed back a little, it’s only because I just wonder, are the improvements coming solely from the perspective of a Google or Facebook, or are they also coming from the perspective of the hundreds of thousands of people developing smaller applications and websites?

ILYA:  Yeah. So, I think it’s the latter, which is to say the primary objective here is actually to make the browsers faster. So, if you open a webpage over HTTP 2.0, it should load faster. And that’s one kind of population that will benefit from it. And the second one is actually developers. We want to make developing web applications easier. You shouldn’t have to worry about things like spriting and concatenating files and doing all this stuff, domain sharding and all this other mess, which are just completely unnecessary and actually makes performance worse in many cases because each one of those has negative repercussions.

Things like, let’s say concatenating your style sheets or JavaScript. Why do we do that? Well, we do that because we want to reduce the number of requests because we have this connection limit with HTTP 1.0. But the downside then is let’s say you’ve — actually Rails does this, you concatenate all of your CSS into one giant bundle. Great, we reduced the number of requests. We can download it faster. Awesome. Then you go in and your designer changes one line from whatever, the background color from blue to red. And now, you have to download the entire bundle. You have to invalidate that certain file and you need to download the whole thing.

Chances are, if you’re doing sound software development today, you already have things split into modules. Like here is my base.css, here is my other page.css. Here are my JavaScript modules. And there’s no reason why we need to concatenate those into one giant bundle and invalidate on every request. This is something that we’ve automated to some degree, but it’s unnecessary. And it actually slows down the browser, too, in unexpected ways.

We recently realized that serving these large JavaScript files actually hurts your performance because we can’t execute the JavaScript until we get the entire file. So, if you actually split it into a bunch of smaller chunks, it actually allows us to execute them incrementally, one chunk at a time. And that makes the site faster. Same thing for CSS, splitting all that stuff. And this may sound trivial, but in practice, it’s actually a giant pain for a lot of applications.
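
The bundling Ilya is questioning here is exactly the kind of step that Grunt automates. For reference, a rough sketch of a typical concat-and-minify setup using grunt-contrib-concat and grunt-contrib-uglify (the file names and paths are purely illustrative):

```javascript
// Gruntfile.js -- a typical concat-and-minify pipeline (file names are made up)
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        // every script rolled into one bundle; touching any one source file
        // invalidates the whole concatenated artifact in the browser cache
        src: ['app/js/base.js', 'app/js/page.js', 'app/js/widgets.js'],
        dest: 'dist/js/app.js'
      }
    },
    uglify: {
      dist: {
        // minify the bundle produced by the concat task
        files: { 'dist/js/app.min.js': ['dist/js/app.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['concat', 'uglify']);
};
```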

The conversation goes on to explain that this change in thinking is largely driven by the fact that bandwidth is no longer the bottleneck; latency is.

JAMES:  That seems really, really weird to me though. Everything has been moving in that direction and you’re saying our data on that’s just wrong. It’s not faster?

ILYA:  Yeah. Part of it is the connectivity profiles are also changing. So when we first started advocating for those sorts of changes back in, whatever it was, 2005, 2007, when this stuff started showing up, the connection speeds were different. We were primarily maybe DSL was state of the art and bandwidth was really an issue there. So, you spend more time just downloading resources. Now that bandwidth is much less of an issue, latency is the problem. And because of that, these “best practices” are changing. And with HTTP 2.0, you actually don’t have to do that at all. And in fact, some of those things will actually hurt your performance.

As you can imagine, this news is quite surprising to me. Optimizations like gzip compression and Expires headers will continue to be important, but concatenating and minifying might become a "worst" practice? That seems crazy, especially when the tools I test with (the YSlow and Page Speed browser plugins) give me higher grades for minifying and reducing the number of requests.
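
The optimizations that do survive are cheap to keep, though. Here's a minimal sketch of gzip plus long-lived caching headers in an Express app (the express and compression npm packages are real; the paths and max-age value are illustrative):

```javascript
// A minimal Express sketch: gzip and long-lived cache headers stay worthwhile
// under any HTTP version (assumes the express and compression npm packages)
const express = require('express');
const compression = require('compression');

const app = express();

// gzip every compressible response
app.use(compression());

// serve built assets with a one-year max-age so browsers cache aggressively
// ('dist' is an illustrative output directory)
app.use(express.static('dist', { maxAge: '365d' }));

app.listen(3000);
```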

The good news is that there are lots of improvements coming in HTTP 2.0, and you can use it today.

ILYA:  ... any application that’s delivered over HTTP 1.0 will work over HTTP 2.0. There’s nothing changing there. The semantics are all the same. It could be the case that certain optimizations that you’ve done for HTTP 1.1 will actually hurt in HTTP 2.0. And when I say hurt, in practice at least from what I’ve seen today, it doesn’t mean that your site is actually going to be slower. It’s just that it won’t be any better than HTTP 1.0.
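
If you want to kick the tires on that claim yourself, here's a hedged sketch of serving content over HTTP/2 with Node's built-in http2 module (the TLS key and certificate paths are placeholders):

```javascript
// A hedged sketch of serving content over HTTP/2 with Node's built-in http2
// module; the key/cert paths are placeholders for a real TLS certificate
const fs = require('fs');
const http2 = require('http2');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),   // placeholder path
  cert: fs.readFileSync('server-cert.pem')  // placeholder path
});

server.on('stream', (stream) => {
  // same request/response semantics as HTTP/1.x, just multiplexed
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('hello over HTTP/2');
});

server.listen(8443);
```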

Upgrading my servers to support HTTP 2.0 raises an interesting dilemma: how do I verify that non-minified, non-concatenated assets actually perform better? Do I just benchmark page load times, or are there better tools for proving things are faster?
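
As a starting point, the browser's standard Navigation Timing API gives you raw numbers to compare before and after a change; a small sketch:

```javascript
// A rough way to compare page-load timing before and after a change, using
// the browser's standard Navigation Timing API (paste into the console or
// include as a small script on the page)
window.addEventListener('load', function () {
  // wait one tick so loadEventEnd has been recorded
  setTimeout(function () {
    const t = performance.timing;
    console.log('full page load (ms):', t.loadEventEnd - t.navigationStart);
    console.log('time to first byte (ms):', t.responseStart - t.navigationStart);
    console.log('DOM content loaded (ms):',
      t.domContentLoadedEventEnd - t.navigationStart);
  }, 0);
});
```

Single samples like this are noisy, so I'd average over many runs; tools like WebPageTest and the browsers' own developer tools report the same milestones without any hand-rolled code.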

