
node.js vs Java nashorn

Why did I test it?

In my current Java project, I make some use of JavaScript. Performance was quite horrendous with Rhino in Java 6 and 7, but Java 8 ships a brand new JavaScript runtime: Nashorn.

Its principal feature is that it's much faster than Rhino. At last, we're going to be able to execute some serious JavaScript on the server side. Given the richness of the node.js ecosystem, that's a whole new bunch of tools at our disposal. But will they run fast, or at least fast enough?

First, why not just run those scripts with node?

There are several reasons. When I do Java, I know that what I write is highly portable, much more so than in any other language, node.js included. So dragging in a dependency on node.js annoys me. Also, it's much easier to communicate with Nashorn (give it inputs, get outputs back or even callbacks) than with an external process. But most of all, the stability of the whole ecosystem kind of scares me. npm is broken every other day and, more often than not, upgrading anything leads to a whole bunch of problems. The simple fact that this whole thing is not even packaged for Ubuntu leaves me wondering.
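To illustrate that point, here is a minimal sketch of passing values in and out of a script engine (hypothetical class name, not code from this project; it needs a JDK that still bundles Nashorn, i.e. Java 8 to 14, or another JSR-223 JavaScript engine on the classpath):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class InOut {
    // Passes a Java value into the script engine and reads the result back,
    // with no process boundary or serialization involved.
    static String run() throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        if (engine == null) {
            return "no JavaScript engine on this JVM";
        }
        engine.put("input", 21);                  // give it inputs...
        Object result = engine.eval("input * 2"); // ...get outputs back
        return "result: " + result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

Compare that with spawning a node process, capturing stdout and parsing it back: the in-process route is far less plumbing.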

What did I test?

So, for a practical test, I wanted to minify JavaScript libs on the fly. I took a typical JavaScript payload from my website and applied UglifyJS to it. The initial payload is the blind concatenation of all the JS files in my repo, about 62kb of JavaScript. This is real JavaScript used in a real-world application, not some dummy routine.

First of all, the compression result. I have slightly different scripts to run on node.js and on Java, but they both give an output of 28909 bytes out of 62498. I used the same options for both: do it all, compress as hell.

Notes: I'm running this on Ubuntu 14.04 LTS with a Q6600, an old non-hyper-threaded quad-core from Intel. I've tested Java 1.7.0_11, 1.8.0_05 and 1.8.0_45 (the latter with the optimistic type system enabled in Nashorn).

The test

The results of the first run (that's after uglifyjs has been loaded):

  • node.js: 910ms
  • Java 8u05: 22s
  • Java 8u45: 28s
  • Java 7: 19s

So, Java 8 is slower than Java 7, and node crushes them both pretty hard - and that's quite an understatement. Note that the newer the version of Java, the worse the result. I surely did not expect that. I don't know much about V8 (node.js's engine), but I know a few things about Java: HotSpot takes its time to optimize the code it runs. Let's do a for loop with 10 iterations and see how much time the 10th iteration takes:

  • node.js: 600ms - one core used
  • Java 8u05: 3.6s - three cores used - about 10.8s cpu time
  • Java 8u45: 3s - three cores used - about 12s cpu time
  • Java 7: 17s - one core used

Now, this is more interesting. Java 8 is clearly ahead of Java 7, but still about 6 times slower than node. Java 8u45 is a bit ahead on that one. Java 7 is still dreadful, but that was expected.
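For reference, the per-iteration timing loop is nothing fancy. A minimal sketch of such a harness (hypothetical names; in the real test the task is the engine.eval of the UglifyJS run, not a dummy Runnable):

```java
public class IterationTimer {
    // Runs a task N times and records each iteration's wall time in ms,
    // which makes JIT warm-up visible as the numbers shrink over time.
    static long[] time(Runnable task, int iterations) {
        long[] millis = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            task.run();
            millis[i] = (System.nanoTime() - start) / 1_000_000;
        }
        return millis;
    }

    public static void main(String[] args) {
        long[] t = time(() -> { /* engine.eval(script, bindings) in the real test */ }, 10);
        System.out.println("10th iteration: " + t[9] + "ms");
    }
}
```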

A last note on the cores used. I noticed it during this test because it runs longer than the first one. The tests involving Java 7 and node.js both take about 100% CPU, which means they use one core (out of my quad core). The Java 8 tests use about 300% CPU, which means about three cores. This was also true for the first test of course, but I didn't notice it at first.

Let's do one more test: After 100 compressions, how much time does it take?

  • node.js: 560ms, one core used
  • Java 8u05: 1.5s, two cores used so about 3s cpu time
  • Java 8u45: 2s, three cores used so about 6s cpu time
  • Java 7: 17s - one core used

Well, this isn't much of a game changer, but it shows one thing: Java 8 takes its time to optimize its shit. The Java 8u05 process went from 300% to 200% CPU at around iteration 55. Java 8u45 went down to 200% CPU at iteration 140, getting the script done in around 1.3s, but that was too late for the iteration 100 threshold.

Even after the 10th iteration, it keeps getting faster. And after a while, it only uses two cores, not three. This round is less bad for Java 8 than the previous one: Java is now only 2.7 times slower than node (5.4 times if you count CPU time).

Edit: Let's do one more test: After 1000 compressions, how much time does it take?

  • node.js: 670ms, one core used
  • Java 8u05: 1.3s, 1.2 cores used
  • Java 8u45: 1.1s, 1 core used
  • Java 7: 17s - one core used

Yes, you read that correctly: node.js gets slower after 1000 iterations...

Well, the least one can say is that Java isn't fast... to become fast. The CPU consumption went from 200% to 120% at around iteration 155. My guess is that the extra cores used were due to the optimizer. It seems to be a heck of a lot of work.

Well, now we can say that Java 8, warmed up, is a bit less than 2 times slower than node. But it takes a hell of a lot of time to get there. That said, the result isn't bad at all, given all the resources Google has put into its V8 engine. Java is capable of executing JavaScript at a decent speed.

Wrap up

seconds   | Iteration 1 | Iteration 10 | Iteration 100 | Iteration 1000
Java 8u05 |          22 |          3.6 |           1.5 |            1.3
Java 8u45 |          28 |            3 |             2 |            1.1
Java 7    |          19 |           17 |            17 |             17

After discarding all results above 5s (so discarding Java 7 entirely) and discarding iterations above 200 (they don't seem to change anything, except for node, which keeps climbing a little to 670ms and then stabilizes), here are the numbers:

[Graph: time per iteration for node.js and Java 8, not reproduced here]
I don't know about you, but for my server, I'd rather have a curve that goes down than a curve that goes up like node's. It would seem that Java 8u45 takes longer to warm up but gives better results once warmed up. In the end, Java is 1.6x slower than node, so it can execute its share of JavaScript on the server side. Depending on what you need, it might do the trick. It has to warm up first, though, before being that performant.

The other thing I've learned is that if I want to minify my JavaScript on the fly, I'm going to need some sort of caching mechanism, because none of those runtimes gives me acceptable performance. This also mitigates the 1.6x factor between Java 8 and node.js.
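Such a cache can be as simple as memoizing the minified output per payload. A sketch under assumed names (not the actual code of this site):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class MinifyCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> minifier;

    public MinifyCache(Function<String, String> minifier) {
        this.minifier = minifier;
    }

    // computeIfAbsent runs the expensive minification only on a cache miss,
    // so each payload pays the Nashorn cost once.
    public String get(String source) {
        return cache.computeIfAbsent(source, minifier);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        MinifyCache cache = new MinifyCache(s -> { calls[0]++; return s.replace(" ", ""); });
        cache.get("var a = 1;");
        cache.get("var a = 1;"); // second call is a cache hit
        System.out.println("minifier calls: " + calls[0]);
    }
}
```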

I've also learned that I'll need some sort of warmup at startup, not for my app, but for my JVM. I mean, the first guy getting there would have to wait 22 seconds before getting his JavaScript served? No way, thankyouverymuch.
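That warmup could be as dumb as running the minification a couple hundred times before the server accepts traffic. A sketch, with a Runnable standing in for the real engine.eval call:

```java
public class Warmup {
    // Runs the expensive task a fixed number of rounds at startup so the
    // first real visitor is not the one paying the JIT/Nashorn warm-up cost.
    static void warmUp(Runnable task, int rounds) {
        for (int i = 0; i < rounds; i++) {
            task.run();
        }
    }

    public static void main(String[] args) {
        int[] count = {0};
        warmUp(() -> count[0]++, 200); // ~200 rounds: where 8u45 stabilized in the tests above
        System.out.println("warm-up rounds executed: " + count[0]);
    }
}
```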

I've played a little with Java 9, but it isn't really mature at this point so I won't disclose anything. Results are on par with Java 8u45. I strongly hope Oracle will pull their shit together and optimize this further.

That is unless they decide to enforce their copyrights on the Java API in which case they'll have to license this JavaScript API from Netscape. It seems only fair. Then again, they'll have to get a license for SQL from IBM, or else they'll go out of business.

Oracle, please spend less on lawyers who undermine your core business and more on engineers who will give your products a chance. My $.02.

A final note on performance

The warmup we have seen in the graph is a mix of JVM warmup and Nashorn engine warmup, but mostly Nashorn. This means that if you create a new engine every time you need to execute a script, you won't benefit from it. Here is the simplest way to create an engine:

ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine engine = factory.getEngineByName("JavaScript");

Those pesky ScriptEngine objects you created? Never discard them, or you'll discard all the accumulated optimizations with them.
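In practice that means holding on to one engine for the lifetime of the application, for instance via a simple static holder (a sketch; the null check matters on JVMs that no longer bundle Nashorn):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class EngineHolder {
    // One engine, created once and reused for every script execution,
    // so the optimizations Nashorn accumulates are never thrown away.
    private static final ScriptEngine ENGINE =
        new ScriptEngineManager().getEngineByName("JavaScript");

    public static ScriptEngine get() {
        return ENGINE; // may be null on JDKs without a JavaScript engine
    }
}
```

One caveat: Nashorn does not advertise itself as thread-safe, so concurrent use of a shared engine typically needs separate Bindings per thread.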

Categories: Java, JavaScript
Avatar: Axel Faust

Re: node.js vs Java nashorn

Would you mind disclosing some details concerning your Nashorn test setup? I guess some people (including me) are interested to know about the specific JDK version used (which update) and the parameters used in launching the Nashorn process, i.e. if optimistic typing was enabled... Did you compare "iteration 1" results against use of persistent code cache?
Avatar: pieroxy

Re: node.js vs Java nashorn

I updated the test to include Java 8u45, which includes optimistic typing. As for the command line, I just launched the test with java -cp . Interpreted

The code:

import java.nio.charset.*;
import java.nio.file.*;
import java.io.*;

import javax.script.*;

public class Interpreted {

  static String readFile(String path, Charset encoding)
      throws IOException {
    byte[] encoded = Files.readAllBytes(Paths.get(path));
    return new String(encoded, encoding);
  }

  public static void main(String[] args) throws Exception {
      ScriptEngineManager factory = new ScriptEngineManager();
      // Used for jdk8u05 and Java7:
      ScriptEngine engine = factory.getEngineByName("JavaScript");
      // Used for jdk8u45 (import jdk.nashorn.api.scripting.*):
      // ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(new String[] {"-ot=true"});

      Bindings bindings = new SimpleBindings();
      bindings.put("console", new Console()); // Console: small helper exposing console.log (not shown)
      System.out.println("Loading uglify");
      executeJs("uglifyjs-full.js", engine, bindings);
      System.out.println("Loading test");
      executeJs("test.js", engine, bindings);
  }

  private static void executeJs(String fileName, ScriptEngine engine, Bindings bindings) throws Exception {
    String test = readFile(fileName, StandardCharsets.UTF_8);
    engine.put(ScriptEngine.FILENAME, fileName);
    engine.eval(test, bindings);
  }
}
Avatar: nha

Re: node.js vs Java nashorn

It looks like it is not minifying the JS code, just executing it in nashorn (basically loading uglify and then loading the test code, but not minifying it).
Avatar: pieroxy

Re: node.js vs Java nashorn

Minification is done in test.js
Avatar: Georg

Re: node.js vs Java nashorn

Thanks pieroxy, great comparison. I checked the influence of -pcc, -ot=true (or =false) and the type info and code caches to see if they would help, but they are no real remedy: Nashorn performance? Just use default settings and very few engines

Re: node.js vs Java nashorn

Very interesting! Could you post the code to your JavaScript test harness as well? Thanks!

Re: node.js vs Java nashorn

Not much of a shocker. One thing to remember with Node.js is that V8 is awesome when it comes to arithmetic operations. Numbers can typically be converted to native ints, and those operations have native math instructions. Other JS objects, however, don't have true native equivalents, and their methods don't have native equivalent instructions, so the whole idea of converting them to machine code is a bit of a stretch. Strings are somewhat of an exception because chars are, after all, just ints, and char/string functions are typically some form of arithmetic. Long story short, V8 is able to crunch numbers (and chars) without fluff, just pure raw machine instructions. Dumping 10,000 strings into a database, however, will show that node.js isn't always quicker than alternatives.
Avatar: Peter

Re: node.js vs Java nashorn

I know I'm late to the party, but I'm betting that if you tied the Nashorn test to one core it would probably perform on par with node.js. You're getting bounced between cores, which forces a reload of each core's local cache. That's not efficient, which is why node.js runs on one core. This article on LMAX Disruptor thread and core affinity http://highscalability.com/blog/2015/9/30/strategy-taming-linux-scheduler-jitter-using-cpu-isolation-a.html walks through them trying to isolate their stuff to one core for this exact reason. At the end of the day, this repo does all the affinity magic https://github.com/OpenHFT/Java-Thread-Affinity

Re: node.js vs Java nashorn

I ran the test as well, and it seems that Nashorn is even faster than node. The thing is that the JVM is initially slow because some optimizations are only applied after a few thousand iterations - I think the JIT comes into play. Btw, try code including eval functions and you will see how slow node becomes.