Testing improvements to lz-string
This page summarizes the work I'm doing on lz-string to make it faster.
You can leave feedback on the blog.
1. Working with arrays instead of strings
I had a hunch that building the output of the compression algorithm with += wasn't the best option. I set up a small test which confirmed that there is a much faster way to build a string character by character: building an array of character codes and calling String.fromCharCode.apply on it is *much* faster, except on iOS, where it is only marginally faster. I modified the library with this in mind (sketches of both approaches follow the list) and here are the results:
- Compression. Things are a little faster, but not consistently so across browsers. Upon closer inspection, this implementation is a little faster but puts more strain on the GC, so it sometimes runs faster (no GC pause) and sometimes slower (GC under more pressure).
- Decompression. Well, it looks like Chrome doesn't like array.concat. While decompressing, I don't get one character at a time but chunks of characters, so I resorted to array.concat. That was a bad idea on all browsers and an excruciatingly bad one on Chrome.
- Wrap up: Decompression is a clear loss. Compression is not a clear win. I'll do nothing with this.
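For reference, here is a minimal sketch of the two string-building strategies compared above. The function names and the fake character codes are mine, for illustration only, not the library's actual code:

```js
// Naive version: grow the output string with +=,
// one character at a time.
function buildWithPlusEquals(n) {
  var output = "";
  for (var i = 0; i < n; i++) {
    output += String.fromCharCode(65 + (i % 26));
  }
  return output;
}

// Faster version (except on iOS, where the gain is marginal):
// collect the character codes in an array and convert them
// to a string in one call at the end.
// (Caveat: for very large inputs, apply can hit the engine's
// argument-count limit, so real code would convert in chunks.)
function buildWithFromCharCodeApply(n) {
  var codes = [];
  for (var i = 0; i < n; i++) {
    codes.push(65 + (i % 26));
  }
  return String.fromCharCode.apply(null, codes);
}
```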
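And here is the chunk-appending pattern that hurt decompression, next to an in-place alternative. Again the names are invented, and the push-based version is just one way such a rewrite could look, not what shipped, since this whole experiment was abandoned:

```js
// Decompression emits chunks (dictionary entries) of several
// character codes at once, not single characters.

// concat allocates a brand-new array for every chunk appended -
// the pattern that was slow everywhere and terrible on Chrome:
function appendWithConcat(result, entry) {
  return result.concat(entry); // new array + full copy each call
}

// An in-place alternative: push the chunk's elements onto the
// existing array instead of reallocating it.
function appendWithPush(result, entry) {
  for (var i = 0; i < entry.length; i++) {
    result.push(entry[i]);
  }
  return result;
}
```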
2. Inlining everything into a huge function
I had a second hunch: calling plenty of small functions and passing global variables around in objects was probably not very efficient. I set up a small test which confirmed that calling functions for simple things is inefficient and that wrapping global variables in objects is also slower than accessing local variables. This time, I expect this to represent a much bigger part of the compression time. So far, only compression has been tested. I modified the library with this in mind (drawback: the code is now completely and utterly unreadable - but the v1.0 is still there with readable code and binary compatibility; a sketch of the idea follows the list) and here are the results:
- Compression of a big string. Things are about 5% to 20% faster, depending on the browser.
- Compression of a small string. Same here: things are consistently faster.
- Update: applying the same idea to the decompression algorithm also provides marginally faster decompression.
- Wrap up: This change made it into the lib!
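To make the idea concrete, here is a minimal sketch contrasting the two styles. The names and the bit-writing logic are invented for illustration and do not match the library's internals:

```js
// Style 1: state kept in an object, logic split across tiny helpers.
function writeBit(state, bit) {
  state.value = (state.value << 1) | bit; // property lookups on every call
  state.position++;
}
function compressStyle1(bits) {
  var state = { value: 0, position: 0 };
  for (var i = 0; i < bits.length; i++) {
    writeBit(state, bits[i]);             // one function call per bit
  }
  return state;
}

// Style 2: everything inlined, state kept in local variables.
function compressStyle2(bits) {
  var value = 0, position = 0;            // locals are cheap to access
  for (var i = 0; i < bits.length; i++) {
    value = (value << 1) | bits[i];       // no call, no property lookup
    position++;
  }
  return { value: value, position: position };
}
```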
3. Arrays vs Objects
The decompression algorithm uses a dictionary whose keys are numbers. I set up a small test which shows that, except on Chrome, reading from an array under those conditions is more than twice as fast as reading from an object. I modified the decompression code with this in mind (the change consisted of replacing dictionary : {} with dictionary : [] - a tiny change; see the sketch after this list) and here are the results, compared with an implementation that already includes optimization #2 above:
- Decompression. Well, as expected, not much improvement on Chrome, but everything else is about twice as fast!
- Wrap up: This change made it into the lib!
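The change really is that small. Here is a sketch of the before/after pattern; the variable values are made up for illustration:

```js
var code = 3; // a numeric dictionary key, as produced during decompression

// Before: a plain object. Numeric keys are (per the spec) converted
// to string property names on every read.
var dictionary = {};
dictionary[code] = "aba";
var entry = dictionary[code]; // object property lookup

// After: an array. The same reads become plain indexed access,
// which the test above showed to be over twice as fast outside Chrome.
dictionary = [];
dictionary[code] = "aba";
entry = dictionary[code]; // indexed array read
```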