Heya felix. It actually IS possible to get several times faster than jQuery in microbenchmarks on specific implementations when you can hit the native version of a method, or something your interpreter knows how to optimize. As an example, I did an optimizing pass through underscore.js specifically for node.js, mostly swapping in native calls and switching loops to the idiomatic for (var i = 0, ii = x.length; i < ii; i++) since V8 knows how to optimize those. The result is a severalfold speedup. Native methods are faster than JavaScript ones. This shouldn't be surprising.
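To make the kind of change concrete, here's an illustrative sketch (not underscore's actual internals; the function names are made up): a generic iterator rewritten as the cached-length for loop that V8 handles well.

```javascript
// Generic version: one extra function call per element.
function mapGeneric(list, fn) {
  var results = [];
  list.forEach(function (item, i) {
    results.push(fn(item, i));
  });
  return results;
}

// Tuned version: idiomatic loop with the length hoisted out of the condition,
// so the engine can keep everything in a tight, optimizable loop body.
function mapFast(list, fn) {
  var results = new Array(list.length);
  for (var i = 0, ii = list.length; i < ii; i++) {
    results[i] = fn(list[i], i);
  }
  return results;
}
```

Both produce the same output; the difference only shows up when you time them on large inputs, and it varies by engine.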
Despite the work being done for node, I wanted to see how it looked on the various browser engines, so I pasted together a set of screenshots [1] showing the underscore benchmark suite running in Chrome, Firefox 3.5, and a recent Minefield nightly. Results are paired: underscore before my tweaks, and after.
I believe the methods all have the same signatures and are operating on the same set of data. jQuery has the disadvantage here of having to work on a variety of interpreters, where I only care about V8. The interesting bit is that the jQuery.map() call on the newer TraceMonkey is FASTER than using my for loop. It all depends on what the interpreter can optimize and hitting those paths. I have notes [2] on my optimizations if you're interested in how the various approaches bench on node. I was going for low-hanging fruit, so these were done with a simple timing function on the node REPL.
Interesting! I've been using underscore myself, great stuff! Are you going to maintain your fork in the future?
As far as getting faster than jQuery goes, I agree with what you're saying. What is difficult is getting faster in a "meaningful" way - everything else seems like a waste of time to me.
We've also pulled a bunch of grayrest's excellent patches back into Underscore -- the ones that work cross-browser. The latest 0.5.1 version of Underscore includes them.
The article's mention is a pretty weak use of the "Cargo Cult" analogy. It would be better if they just talked about benchmark shenanigans in general. There is nothing particularly "Cargo Cult" about the situation. It's not as if the purported culprits are completely clueless at a paradigmatic level about what benchmarks are; rather, they aren't applying them in a rigorous enough way. That's a far cry from making driftwood mockups of airports and radar sets. (More like using working radar sets that are unreliable because they're cheap and trashy.)
RightJS is also not as cross-browser compatible as jQuery. Just look at their selector engine: it's an alias for querySelectorAll. Of course aliasing it directly is faster than checking to see if it exists and then using it if it does...
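For illustration (this is a sketch, not RightJS's or jQuery's actual source): the usual middle ground between a bare alias and a per-call existence check is to feature-test once at load time and bind whichever implementation exists.

```javascript
// Feature-test once, up front, instead of on every query. `legacyEngine`
// stands in for a Sizzle-style fallback for browsers without
// querySelectorAll; both names here are hypothetical.
function makeSelector(doc, legacyEngine) {
  if (doc && doc.querySelectorAll) {
    return function (selector) {
      return doc.querySelectorAll(selector);
    };
  }
  return legacyEngine;
}

// In a browser you would bind it once at startup:
//   var $ = makeSelector(document, sizzleFallback);
```

After that one-time check, every call is as cheap as the direct alias, without giving up the fallback path.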
It would be interesting to see someone convert a random jQuery-based app to RightJS and see what differences there are, especially in modern browsers.
>> "The jQuery example, from the beginning, was creating DOM elements from HTML strings, while RightJS was wrapping the document.createElement API. This is not the same thing and you cannot learn anything from comparing apples to oranges."
What you can learn, though, is that using the built-in DOM methods, or wrapping createElement if you need to, is far faster than using some other abstraction over the DOM.
Actually, using HTML strings is quite a bit faster than using the DOM methods in many browsers. Strange but true. (I guess it's because the extra time spent parsing the string is trivial compared to the extra time you spend mucking around in the JS interpreter when you use the DOM.)
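A hedged sketch of the two styles being compared (illustrative code, not from either library). The string-building half is pure JavaScript; the single innerHTML assignment is where the browser does one parse instead of many API crossings.

```javascript
// Style 1: build one HTML string; the browser parses it in a single step.
function buildListHtml(items) {
  var html = '<ul>';
  for (var i = 0, ii = items.length; i < ii; i++) {
    html += '<li>' + items[i] + '</li>';
  }
  return html + '</ul>';
}

// In a browser, one boundary crossing:
//   container.innerHTML = buildListHtml(data);
//
// Style 2: one DOM API call per element (the createElement-wrapping style):
//   var ul = document.createElement('ul');
//   items.forEach(function (item) {
//     var li = document.createElement('li');
//     li.appendChild(document.createTextNode(item));
//     ul.appendChild(li);
//   });
```

Which style wins depends on the browser, which is exactly why the parent's "strange but true" result shows up in some engines and not others.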
People, what is wrong with you! This post earned more votes than the main source!
I'm not a JavaScript expert, but RightJS looks like very solid work to me. Someone spent a lot of time building that free library for you. I will probably not use it because I'm used to jQuery, but I will say thanks to the author who built it and dedicated his free time to us.
Author of RightJS, thank you for your time, and keep up the good work.
Honestly, who cares? If your bottleneck is really your JS library, you need to re-evaluate what it is that you're doing. In a practical situation, you would never manipulate thousands of DOM elements at once. Benchmarks are fun, but largely irrelevant.
Okay, sure, for a document-based website with a few bells and whistles. If you want to build one of these new-fangled "web apps" that I hear so much about, you will absolutely need to manipulate hundreds or thousands of DOM elements and do so fast and consistently across god-knows-how-many versions of - to make the list short - IE, FF, Safari, Chrome, and Opera. There is a major push for JavaScript optimization going on industry-wide, and the libraries are a crucial component of this.
First, I own one of those new-fangled web apps (http://agilezen.com/). It's built on jQuery, has a fairly complex JavaScript-driven UI, and nowhere do I manipulate hundreds or thousands of DOM elements.
Second, I wasn't arguing against libraries -- I was saying that the difference in performance is probably not what you want to use to decide which library to use.
There are some pretty sluggish websites out there due to poor use of JS libraries. Browse them on a phone or netbook and it all adds up.
You'd certainly manipulate hundreds of DOM elements at a time. Consider, say, a Twitter stream, where each post has a "10 seconds ago" marker, and they all need updating.
The bottleneck in that situation would be finding the elements, not updating them. If there is any worry about performance, the change would be to cache that list of elements rather than searching again and again, regardless of the library used. (Unless your library was very clever, and could cache the results for you. I'm not sure what browser support there is for ondomupdated events, which you would need to watch for this to work in a general fashion.)
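A minimal sketch of that caching idea, using plain objects in place of DOM nodes (in a browser, the cached list would come from one initial selector query, and you'd set textContent rather than a text property; all names here are hypothetical).

```javascript
// Query once, cache the result, and update the cached list on every tick
// instead of re-running the selector each time.
function makeAgeUpdater(cachedElements, now) {
  return function update() {
    for (var i = 0, ii = cachedElements.length; i < ii; i++) {
      var ageSeconds = Math.round((now() - cachedElements[i].postedAt) / 1000);
      cachedElements[i].text = ageSeconds + ' seconds ago';
    }
  };
}

// In a browser, roughly:
//   var posts = document.querySelectorAll('.timestamp'); // run ONCE
//   setInterval(makeAgeUpdater(posts, Date.now), 1000);
```

The per-tick cost is then just the loop over cached nodes; the selector search, the expensive part, happens once.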
This was my own thinking. If this is another one of those "runs 0.1 seconds faster in the benchmark tests" situations, then I am not really impressed. On how many websites are you really going to notice that?
[1] http://gr.ayre.st/s/images/underscore_perf_benches.gif
[2] http://wiki.github.com/grayrest/underscore/node-conversion-i...