script loading solution #28
kyle: "@paul_irish i don't agree. http://bit.ly/9IfMMN cacheability (external CDN's), parallel downloading, script change-volatility..." |
james burke: "@paul_irish @fearphage @getify RequireJS has build tool to do script bundling/minifying, so can have best of both: dynamic and prebuilt" |
The easiest way for developers to get started with script loading would probably be $LAB.js, because it already uses a chaining syntax that a lot of jQuery users are familiar with. If they are building big enterprise apps they can always migrate to require.js if needed.
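For readers unfamiliar with the chaining syntax being referenced, a $LAB chain looks roughly like this (file names are made up for illustration):

```html
<script src="LAB.min.js"></script>
<script>
  $LAB
    .script('js/jquery.min.js').wait()  // .wait() = execute before moving on
    .script('js/plugin.js')             // these two download in parallel
    .script('js/app.js')
    .wait(function () {
      // all three scripts have executed
    });
</script>
```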
currently there are three main script-loading techniques. whether to use one at all, and which one, is kinda debatable: http://blog.getify.com/2010/12/on-script-loaders/
With the release of jQuery 1.5 and deferreds (http://www.erichynds.com/jquery/using-deferreds-in-jquery/), Boris Moore is utilizing them in DeferJS, a new script loader project: https://github.com/BorisMoore/DeferJS
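As a rough sketch of how deferreds apply to script loading (this assumes jQuery 1.5+ is already on the page; the URLs are illustrative):

```html
<script>
  // $.getScript returns a promise in jQuery 1.5+, so $.when can
  // coordinate several parallel script downloads
  $.when(
    $.getScript('js/plugins.js'),
    $.getScript('js/analytics.js')
  ).done(function () {
    // both scripts have loaded and executed
  });
</script>
```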
By default, script loading blocks all other downloads, so downloading Modernizr in the head is bad. Inlining a loader makes sense, because loaders can download scripts in parallel and in a non-blocking way. For example, if you do not need all Modernizr features, you can inline head.min.js, which is only 6kb, or a custom build of Modernizr (http://modernizr.github.com/Modernizr/2.0-beta/). Inlining CSS sometimes makes sense too. Google uses inlining: they inline CSS, JS, and empty 1x1 GIFs via data URIs.
LABjs is becoming pretty widely used and is a good solution. It can also be included asynchronously, so it doesn't need to block. http://blog.getify.com/2010/12/on-script-loaders/ is by the author.
http://yepnopejs.com/ just went 1.0 and doesn't break in new WebKit, unlike LAB and head.js. Script loading is hard. yepnope is also integrated into Modernizr as Modernizr.load, so we'll probably have a script loader in h5bp by way of Modernizr.load pretty soon. I don't think it'll make 1.0, but once i take Modernizr up to 1.8 we'll toss that into h5bp 1.1. Yeeeah
Hi Paul, I'm porting an existing site to use your H5BP and I want to use the yepnope.js script loader. It's really nice to see all the bits and bobs put together as you have done. What would you recommend using at the moment?
Regardless of how best to include it, how do you recommend loading the scripts with yepnope.js? I figure we should be doing it around here: https://github.com/paulirish/html5-boilerplate/blob/master/index.html#L52 and use yepnope to load the CDN / local copy of jQuery and our other scripts. But do you think it's best to use an external script include, or to render a script block within the HTML which then loads the scripts via yepnope.js? Many thanks, Andy
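One common way to express the CDN-with-local-fallback idea in yepnope looks roughly like this (paths and version numbers are placeholders, not h5bp's actual layout):

```html
<script src="js/libs/yepnope.min.js"></script>
<script>
  yepnope([{
    load: '//ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js',
    complete: function () {
      if (!window.jQuery) {
        // CDN failed or is blocked; fall back to the local copy
        yepnope('js/libs/jquery-1.5.1.min.js');
      }
    }
  }, {
    // the rest of the site's scripts load after the jQuery step
    load: ['js/plugins.js', 'js/script.js']
  }]);
</script>
```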
Oh, and another thing. As yepnope can load CSS as well, I would say it's best to include the main CSS as you would normally, and use yepnope only to include CSS for specific fixes: for example, CSS that is only applied to older versions of IE.
hokapoka, use the beta version of Modernizr and just include what you need. (The actual code for the jQuery fallback with yepnope is on http://yepnopejs.com/.) And yes, I like your idea of conditionally loading the IE CSS.
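The conditional IE-only CSS idea could be sketched with yepnope's test/yep form like this (the feature test and file name here are illustrative, not a recommendation):

```html
<script>
  yepnope({
    // rough old-IE (<9) check: attachEvent without addEventListener
    test: window.attachEvent && !window.addEventListener,
    yep: 'css/ie-fixes.css'  // hypothetical stylesheet with the IE-only rules
  });
</script>
```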
tbh there is too much blind faith around script loaders wrt performance, and i don't think we're ready to say THIS IS THE RIGHT WAY. we need more research around filesizes, bandwidth and network conditions that indicates smart recommendations on script loading, but right now the field is nascent and we'd be naive to recommend a blanket solution. so: closing this ticket, and asking anyone who cares to do the comprehensive research and publishing required to make it easier for developers to make a smart choice about this one
i have done quite a bit of research on concat vs. parallel loading. i still, without reservation, recommend combining all js into one file first, then chunking it up into 2-3 roughly equal-sized chunks, and loading those in parallel. I'd love to take that research widespread and to scale, so that it was viable as "fact" in this area. The problem is I've tried and tried to find hosting bandwidth where it won't cost me lots of $$ to actually run the tests at scale, and have failed to find that hosting provision yet. If I/we can solve the bandwidth issue for testing, I have the tests that can be run to find out whether the theory of parallel loading is in fact viable (as I believe it is).
@getify what do you need as far as a testing rig? |
I can do about 1.5TB more data out of my personal server than I'm currently using. I have Nginx installed and that can handle somewhere around 4 trillion quadrillion hits per microsecond. I don't feel like the technology is the barrier here. If we're worried about locations, we can spoof higher latency, and/or find a couple other people with a little extra room on their boxes. |
BTW, I take a little bit of issue with "blind faith". It is easy, provable, and almost without question true that if you have an existing site loading many scripts with script tags, using a parallel script loader (with no other changes) improves performance. This is true because even the newest browsers cannot (and never will, I don't think) unpin script loading from blocking DOM-ready. So even in best-case browser loading, if there's no other benefit, drastically speeding up DOM-ready on a site is pretty much always a win (for users and UX).

Your statement is a little bit of a false premise, because it assumes that we're trying to compare, for every site, parallel-loading to script-concat. Most sites on the web don't/can't actually use script-concat, so really the comparison (for them, the majority) is not quite as nuanced and complicated as you assume. If they don't/can't use script-concat (for whatever reason), the comparison is simple: parallel-loading is almost always a win over script tags. If they are open to script-concat (or already use it), then yes, it does get a bit more nuanced/complicated to decide if parallel-loading could help or not. But script-concat is not a one-size-fits-all silver bullet solution either, so there are plenty of sites for whom parallel-loading will remain the preferred and best approach.

Just because some sites deal with the nuances/complexities of deciding between parallel-loading vs. script-concat doesn't mean that the greater (more impactful) discussion of parallel-loading vs. script tags should be lost in the mix. The former is hard to prove, but the latter is almost a given at this point. All this is to say that, all things considered, IMHO a boilerplate should be encouraging a pattern which has the biggest impact in a positive direction.
If 80% of sites on the internet today use script tags, most of which would benefit from moving from script tags to parallel-loading, then parallel-loading is a very healthy thing to suggest as a starting point for the boilerplate. It's a much smaller (but important) subsection of those sites which can potentially get even more benefit from exploring script-concat vs. parallel-loading. But a minority use-case isn't what should be optimized for in a boilerplate. Just my few cents. |
As far as bandwidth needs, I estimated that to get 10,000 people (what I felt was needed for an accurate sampling) to run the test once (and many people would run it several times, I'm sure), it would be about 200GB of bandwidth spent. For some people, that's a drop in the bucket. For me, 200GB of bandwidth in a few days' time would be overwhelming to my server hosting costs. So I haven't pursued scaling the tests for that reason alone. Moreover, I have more than a dozen variations of this test that I think we need to explore. So dozens of runs at 100-200GB of bandwidth each would be quite cost-prohibitive for me to foot the bill on. I didn't want to start down that road unless I was sure that I had enough bandwidth to finish the task. They're just static files, and the tests don't require lots of concurrent users, so there are no real concerns about traditional scaling issues like CPU, etc. Just bandwidth, that's all.

We can take the rest of the discussion of the tests offline and pursue it over email or IM. I would very much like to finally scale the tests and "settle" this issue. It's been hanging around the back of my brain for the better part of a year now.
I can do unlimited TB on my dreamhost VPS so this won't be a problem. right now i'm doing 72gb/day and can handle way more. :) |
I agree with paul, and think there is quite a bit of misinformation about how and when script loaders are going to be of any benefit to anyone. Your first paragraph says it's 'easy', 'provable' and 'without question' true that script loaders improve performance. I made a similar postulation to @jashkenas a while back, and he and I put together some identical pages as best we could to try and measure the performance of our best techniques. He's a fan of 100% concat, and I tried two different script-loading techniques: https://github.com/SlexAxton/AssetRace

The code is all there. Obviously there wasn't a huge testing audience, but the results at best showed that the script loader was about the same speed as the concat method (with your similar-sized 3-file parallel-load guideline followed), and at worst showed that script loaders varied much more and were generally slower, within a margin of error. Feel free to fork and find a solution that beats one or both of ours, even if it's just on your machine in one browser.

As for the "false premise" that h5bp assumes people concat their js: this argument is entirely invalid, because h5bp offers a script build tool, complete with concat and minification. So the argument that parallel loading is almost always a win over multiple script tags may be true, but it's not better than what h5bp offers currently. That is the context of this discussion.

I think the worst-case scenario is people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading (of their 19 JS and 34 CSS files), as well as introduce a slew of backwards- and forwards-compatibility issues that they'll be completely unaware of. I think in the spirit of giving people the most sensible, performant and compatible default for a boilerplate, a build tool goes a lot further to ensure all three.
I'll happily find some time to take a look at the tests you put together. I'm sure you guys know what you're doing, so I'm sure your tests are valid and correct. OTOH, I have lots of contradictory evidence. If I had ever seen anything compelling to suggest that parallel script loading was a waste or unhelpful to the majority of sites, I would have long ago abandoned the crazy time sink that is LABjs.

I can say with 100% certainty that I have never, in 2 years of helping put LABjs out there for people, found a situation where LABjs was slower than the script tag alternative. Zero times has that ever occurred. There've been a few times that people said they didn't see much benefit. There've been a few times where people were loading 100+ files, and the crazy overhead of that many connections wiped out any benefits they might have otherwise seen. But I've never once had someone tell me that LABjs made their site slower.

I have literally myself helped 50+ different sites move from script tags to LABjs, and without fail sites saw performance improvements right off the bat. Early on in the efforts, I took a sampling of maybe 7 or 8 sites that I had helped, and they had collectively seen an average of about 15% improvement in loading speed. For the 4 or 5 sites that I manage, I of course implemented LABjs, and immediately saw as much as 3x loading speed.

Of course, when LABjs was first put out there, it was state-of-the-art for browsers to load scripts in parallel (only a few were doing that). So the gains were huge and visible then. Now we have almost all browsers doing parallel loading, so the gains aren't so drastic anymore. But the one thing that is undeniable is that browsers all block the DOM-ready event for loading of script tags.
They have to, because of the possibility of a script calling document.write. Take a look at the two diagrams on slide 10 of this deck: http://www.slideshare.net/shadedecho/the-once-and-future-script-loader-v2 and compare the placement of the blue line (DOM-ready). That's a drastic improvement in perceived performance (UX), even if overall page-load time (or the time for all assets to finish loading) isn't any better.
The faulty assumption here is that just because h5bp offers this tool, all (or even most) users of h5bp can use it. Even if 100% of the users of h5bp do use it, that doesn't mean that if h5bp were rolled out to the long tail of the internet, all of them would use that concat tool. There are a bunch of other factors that can easily prevent someone from using it. There are very few reasons why someone can't move from using script tags to using a parallel script loader. As such, parallel script loading still offers a broader appeal to the long tail of the internet. It is still easier for the majority of sites that do not use script-loading optimizations to move from nothing to something, and that something offers them performance wins. Few of those long-tail sites will ever spend the effort on (or have the skill to experiment with) automated script build tools in their cheap $6/mo, mass shared hosting, non-CDN'd web hosting environments.
I could not disagree with this statement more. LABjs is specifically designed as a script tag polyfill. And the improvements of LABjs over regular script tags (ignore script concat for the time being) are well established and have never been seriously refuted. If you have proof that most (or even a lot of) sites out there using LABjs would be better off going back to script tags, please do share. There is absolutely no reason why parallel script loading is going to result in slower loading than what the browser could accomplish with script tags. That makes no sense. And as I established above, script tags will always block DOM-ready, where parallel script loading will not.
What compatibility issues are you talking about? LABjs' browser support matrix has absolutely the vast majority of every web browser on the planet covered. The crazy small sliver of browsers it breaks in is far outweighed by the large number of browsers it has clear benefits in. LABjs 1.x had a bunch of crazy hacks in it, like cache-preloading, which indeed were major concerns for breakage with browsers. LABjs 2.x has flipped that completely upside down, and now uses reliable and standardized approaches for parallel loading in all cases, only falling back to the hack for the older webkit browser. In addition, LABjs 2.x already has checks in it for feature-tests of coming-soon script loading techniques (hopefully soon to be standardized) like "real preloading". I can't speak definitively for any other script loaders -- I know many still use hacks -- but as for LABjs, I'm bewildered by the claim that it introduces forward or backward compatibility issues, as I think this is patently a misleading claim. |
to elaborate slightly on why i intend for LABjs to in fact be a script tag polyfill...
So, for the newer browsers, LABjs is a "polyfill" in the sense that it's bringing "non-DOM-ready-blocking script loading" to the browser in a way that script tags cannot do. The only possible way you could approach doing that in modern browsers without a parallel script loader would be to use script tags with |
Honestly, I still think we should petition standards for a script-loading object. Having to create a script tag of a type other than text/javascript to trigger the cache (or worse, use an object tag, or an image object, or whatever a new version of a popular browser will require) is jumping a lot of hoops for nothing, and performance will vary depending on too many variables. I can understand that we still load stylesheets using DOM node insertion (but that's only because of order), but when it comes to scripts, I think it doesn't make sense at all anymore (I wish Google would stop using document.write in most of their scripts, but that's another story entirely). Also, I think we're missing the biggest point regarding script loaders here: being able to load js code on demand rather than loading everything up front (even with everything in cache, parsing and initializing takes time, and it can get pretty ugly with a non-trivial amount of concatenated scripts). Having some wait time after a UI interaction is much less of a problem than having the browser "hang" even a little at start-up (the DOM may be ready all right, but what good is it if the code to enhance the page and add interaction hasn't been executed yet: ever noticed how some sites load immediately and then something clunky occurs?). So strict performance measurement is all fine and dandy, but I still think perceived performance is the ultimate goal... and it is sadly far less easy to estimate/optimize/compute.
This is intense. |
@jaubourg--
There is much petitioning going on regarding how the standards/specs and browsers can give us better script-loading tech. The first big win in this category in years was "ordered async". The next debate, which I'm currently in ongoing discussions with Ian Hickson about, is what I call "real preloading". In my opinion, "real preloading" (which IE has supported since v4, btw) would be the nearest thing to a "silver bullet" that would solve nearly all script-loading scenarios rather trivially. I am still quite optimistic that we'll see something like this standardized. See this wiki for more info: http://wiki.whatwg.org/wiki/Script_Execution_Control
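For reference, "ordered async" can be used directly without a loader library. A rough sketch (file names are hypothetical, and this relies on newer browsers honoring async = false on dynamically inserted scripts):

```html
<script>
  // "ordered async": download in parallel, execute in insertion order,
  // without blocking the parser or DOM-ready
  ['js/vendor.js', 'js/app.js'].forEach(function (src) {
    var s = document.createElement('script');
    s.src = src;
    s.async = false; // preserve execution order for dynamic scripts
    document.head.appendChild(s);
  });
</script>
```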
This is called "cache preloading", and it's an admittedly ugly and horrible hack. LABjs heavily de-emphasizes it as of v2 (it's only used as a fallback for older WebKit). Other script loaders unfortunately still use it as their primary loading mechanism. But 90% of the need for "cache preloading" can be solved with "ordered async", which is standardized and isn't a hack, so well-behaved script loaders should prefer that over "cache preloading" now. So, I agree that "cache preloading" sucks, but there are much better ways to use
Very much agree that's an important benefit that script loaders bring. But it's sort of a moot argument in this thread, because the "script concat" folks simply cannot, without script loading, solve the use-case, so it makes no sense to "compare" the two. You can say as a "script concat" proponent "fine, we don't care about that use case", but you can't say "we can serve that use-case better using XYZ". Perceived performance is huge and important, I agree. On-demand loading is a huge part of making that happen. On-demand loading will also improve real actual performance (not just perception) because it tends to lead to less actually being downloaded if you only download what's needed (few page visits require 100% of the code you've written). Perceived performance is also why I advocate the DOM-ready argument above. Because how quickly a user "feels" like they can interact with a page is very important to how quick they think the page is (regardless of how fast it really loaded). That's a fact established by lots of user research. |
Gotta love the passionate, long comments by @getify If I can contribute in any way to the research, I would love to. |
Yep, I followed the script tag "enhancements" discussion regarding preloading, and I just don't buy the "add yet another attribute to the script tag" approach as viable. I've seen what it did to the XHR spec: a lot of complexity in regard to the little benefit we get in the end. What's clear is that we pretty much only need the preloading behaviour when doing dynamic insertion (i.e. we're in JavaScript already), so why on earth should we still use script tag injection? It's not like we keep the tag there or use it as a DOM node: it's just a means to an end that has nothing to do with document structure. I'd be much more comfortable with something along these lines:

```js
window.loadScript( url, function( scriptObject ) {
    if ( !scriptObject.error ) {
        scriptObject.run();
    }
});
```

This would do wonders. It's easy enough to "join" multiple script-loading events and then run those scripts in whatever order is necessary. It also doesn't imply the presence of a DOM, which makes it even more generic. I wish we would get away from script tag injection altogether asap. Besides, it's easy enough to polyfill this using the tricks we all know. It's also far less of a burden than a complete require system (but can be a building brick for a require system that is then not limited to browsers). That being said, I agree 100% with you on perceived performance. I just wanted to point it out because the "let's compact it all together" mantra is quickly becoming some kind of belief that blurs things far too much for my taste ;)
fwiw, So that means.... 98.5% of users have |
details plz? i haven't seen anything about this |
dude, chill |
@savetheclocktower-- Fair questions. I didn't start my participation in this thread strongly advocating for LABjs (or any script loader) to be included in h5bp. I think it's useful (see below), but it wasn't a major concern of mine that I was losing sleep over. Clearly, this thread has morphed into an all out attack on everything that is "script loading". That is, obviously, something I care a bit more about.
I advocate first for moving all your dozens of script tags to a parallel script loader like LABjs. This takes nothing more than the ability to adjust your markup. That's a far easier/less intimidating step than telling a mom&pop site to use an automated node.js-based build system, for instance. And for those who CAN do builds of their files, I advocate that LABjs still has benefit, because it can help you load those chunks in parallel. If you flat out disagree that chunks are in any way useful, then you won't see any reason to use LABjs over
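The "concat into 2-3 chunks, then load in parallel" approach advocated above might look like this with LABjs (the chunk names are hypothetical):

```html
<script src="LAB.min.js"></script>
<script>
  // three roughly equal-sized concatenated chunks, downloaded in parallel;
  // if the chunks must execute in order, add .wait() between the .script()
  // calls as well
  $LAB
    .script('js/chunk-a.js')
    .script('js/chunk-b.js')
    .script('js/chunk-c.js')
    .wait(function () {
      // all chunks have downloaded and executed
    });
</script>
```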
The only reason I think a script loader (specifically one which is designed, like LABjs, to have a one-to-one mapping between script tags and That's the only reason I started participating in this thread. It's the only reason I took issue with @paulirish's "blind faith" comment WAY above here in the thread. |
Sooooooooooo yeah. I think it's clear this discussion has moved on way past whether a script loader is appropriate for the h5bp project. But that's good, as this topic is worth exploring. regardless, I'm very interested in reproducible test cases alongside test results. It also seems the spec for We need straight up documentation on these behaviors that captures all browsers, different connection types and network effects. I'm not sure if a test rig should use cuzillion or assetrace, but that can be determined. I've set up a ticket to gather some interest in that paulirish/lazyweb-requests#42 Join me over there if you're into the superfun tasks of webperf research and documenting evidence. Let's consider this thread closed, gentlemen. |
Lazy loading isn't the core benefit of AMD modules, as @jrburke described in his comments. The main reason I choose AMD modules as much as I can is that they improve code structure: they keep the source files small and concise, which is easier to develop and maintain, the same way that using css. I feel that this post I wrote last year fits the subject: The performance dogma. It's not all about performance; make sure you aren't wasting your time "optimizing" something that doesn't make any real difference. And I'm with @SlexAxton: I want AMD, but simple script tags are probably enough for most people. Maybe a valid approach would be to add a new setting to pick an AMD project and run the RequireJS optimizer instead of the concat tasks (RequireJS optimizer Ant task); that would be pretty cool and probably not that hard to implement.
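To illustrate the code-structure benefit being described, here is a minimal sketch of the AMD pattern. The tiny `define` shim below exists only to make the example self-contained and is not how RequireJS is implemented; in a real project RequireJS supplies `define()`, and the module names/functions here are made up.

```javascript
// toy define() shim so this sketch runs standalone (illustration only)
var registry = {};
function define(name, deps, factory) {
  // resolve each dependency name to its already-registered exports
  var resolved = deps.map(function (d) { return registry[d]; });
  registry[name] = factory.apply(null, resolved);
}

// each module lives in its own small file and declares its dependencies,
// which keeps source files focused and easy to maintain
define('math', [], function () {
  return { square: function (x) { return x * x; } };
});

define('app', ['math'], function (math) {
  return { area: function (side) { return math.square(side); } };
});

console.log(registry.app.area(3)); // 9
```

The optimizer (r.js) then concatenates these modules for production, so the structural benefit during development costs nothing at deploy time.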
@paulirish What about including AMD support? Where should we discuss that? |
@benatkin open a new ticket bro. |
@paulirish OK, thanks. @jrburke would you please open up a new ticket to continue the discussion you started? I think I'll add a comment, but I don't think I can lay out a case for AMD support as well as you can. |
Entertaining and informative. Thanks guys. |
I think someone needs to start a new script loader project and called it "Issue28". :) |
For widest compatibility, fast performance can be had by putting scripts at the bottom, minifying and gzipping, but not deferring. At least not until browser compatibility has been consistent for a few years straight. Bottlenecks can come from ads, too much javascript, bloated HTML, too much CSS, too many iframes, too many requests, server latency, and inefficient javascript. Applications that use a lot of third-party libs don't just have too much javascript; they tend to also have many other problems, mostly bloated HTML, invalid HTML, too much css, and inefficient javascript. Twitter comes right to mind, having two versions of jQuery and two onscroll handlers that cause a bouncing right column on scroll. The kicker is that if you know what you're doing, you can avoid those problems. You don't need things like jQuery or underscore, and so your scripts are much smaller. You write clean, simple, valid HTML and CSS. Consequently, your pages load faster, the app is more flexible in terms of change, and SEO improves. And so using a script loader just adds unwarranted complexity and overhead.
https://github.com/BroDotJS/AssetRage BOOM! I close the clubs and I close the threads. |
What a thread ... wow. Imo, the discussion started in the context of the h5bp, which is intended to be a starting point for web devs. From the thread, and in this context, I think there is unfortunately not enough evidence to draw a final conclusion. |
Sidenote: I am not very familiar with AMD and, from a first look, it seems intimidating to me, or at least not something I can pick up very easily. I think most 'ordinary' web devs will agree.
And another comment .... And the other 9% have a single, concatenated JS file ... in the HEAD. Devs keep building sites like they have been for years. Changing a way of working, a build system, the code ... it has to be easy, very easy, or else it won't happen. I have worked on many sites where combining the JS in the HEAD into a single file and loading it a bottom of BODY broke the pages on the site. And then what? In most cases, it's not simply an hour work to fix that. Serious refactoring needs to take place ... and this does not happen because of the lack of knowledge and, especially, the lack of time. (oh right, the thread is closed...) |
We're talking about a library built on top of jQuery and Modernizr. Says it all, really. Who uses that? Oh, shit, I forgot: Twitter.com, which uses two jQuerys and also has, in its source code, the following:

Line 352, Column 6: End tag div seen, but there were open elements.
Error Line 350, Column 6: Unclosed element ul.
Error Line 330, Column 6: Unclosed element ul.

And the problem with expecting the browser to error-correct that is that HTML4 didn't define error-correction mechanisms, so you'll end up with who-knows-what, who-knows-where. Sure, HTML5 defines error handling, but it ain't retroactive; there are still plenty of "old" browsers out there. And speaking of shit, has anyone here had a look at jQuery's ES5 shims? BTW, do you have anything to add to that statement of yours "that the webdev using the h5bp will actually have clean HTML," aaronpeters?
@GarrettS ok, ok, I should have written "will probably have clean HTML" |
:-D we can always hope! |
Beating a dead horse, I know ... but it turns out that at the same time we were having this scintillating discussion, the current version of LABjs actually had a bug that caused JavaScript to execute in the wrong order in some browsers: getify/LABjs#36 Oh, the irony. |
must. resist. posting. totally. [in]appropriate. image. for. previous. statement.... aggggh! AGONY! |
My favorite part was when the dude that made dhtmlkitchen.com (currently totally messed up) started talking about markup errors. |
That site has been transferred to Paulo Fragomeni. Yes I made it and proud of what I wrote there, as here. Go take a screenshot of your weak avatar, jackass. |
...and after you're done with that, try to pull your head out of your ass and understand the difference between my old personal website (which is no longer maintained by me) and one that is developed by a team and financed by a profitable, multi-million dollar company (though Twitter may be worth billions AFAIK). |
Glad we're keeping this classy, and on topic, guys. |
jashkenas got the relevant bits of info out early on in this discussion. But then there was the backlash: No! It must not be! Souders said to do it! And there was the bad advice to use defer, not caring how it fails when it fails. And then, ironically, out of nowhere came the claim that h5bp users would be doing things properly. And this is very ironic, because that comment came after comments from its supporters who evidently produce invalid markup and use a load of third-party abstraction layers (and awful ones), and after the comment about using defer. And so what does any of this have to do with dhtmlkitchen.com being down? Nothing at all, obviously. That was just a weak jab from an h5bp forker who can't stand to hear criticism.
Bros. This thread is closed. Remember? You don't have to go home, but you can't flame here. |
Hey y'all remember that one time when we made an epic thread where there were multiple debates, personal flame wars, people getting angry all over the place, an obscene image or two, and an all-around good time? Can't believe it was free. We should do that again sometime. |
Updated the readme to reflect the availability of Java for older PPC bas...
# This issue thread is now closed.

## It was fun, but the conversations have moved elsewhere for now. Thanks

### In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.
Enjoy.
via labjs or require.
also how does this play into the expectation of a build script that concatenates and minifies all script? should script loading be an option?