pieroxy

nodejs error?

I'm not worried about encoding problems here, but I am worried that a UTF-8 encoded string will be substantially bigger than the UTF-16 counterpart LZString is producing, resulting in wasted bandwidth. Your UTF-16 detection is going to return true all the time: compressToUTF16 *is* generating UTF-16 characters, as its name suggests.
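To illustrate the size difference, here is a minimal Node.js sketch (the sample string is hypothetical, just standing in for compressed output that uses high code points):

```javascript
// Compare the byte cost of the same string in UTF-8 vs UTF-16.
// A character like "\u8000" costs 3 bytes in UTF-8 but only 2 in
// UTF-16, so a string of such characters is 50% bigger over the wire
// when sent as UTF-8.
const s = "\u8000\u8001\u8002"; // stand-in for compressToUTF16 output
const utf8Bytes = Buffer.byteLength(s, "utf8");     // 3 bytes per char
const utf16Bytes = Buffer.byteLength(s, "utf16le"); // 2 bytes per char
console.log(utf8Bytes, utf16Bytes);
```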

The best way would probably be to generate ISO-8859-1 (Latin-1) characters, all 256 of them being valid, or UTF-16. But for this to be "optimal" you need the content type to be set correctly. Do you have control over the content encoding of these requests? If so, I suggest switching to "Content-Type: text/html; charset=utf-16" for better bandwidth usage.

If not, we'd need to write a compressToUTF8, using 7 bits per character. Not too hard.
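A possible shape for such a compressToUTF8-style packer (my sketch, not part of LZString; the function names are hypothetical): emit only code points 0–127, so each output character costs exactly one byte in UTF-8 and carries 7 bits of payload.

```javascript
// Pack an array of bytes into a string of 7-bit characters
// (code points 0-127), each encoding to one byte in UTF-8.
function packTo7Bit(bytes) {
  let out = "";
  let buffer = 0; // bit accumulator
  let nbits = 0;  // number of bits currently in the accumulator
  for (const b of bytes) {
    buffer = (buffer << 8) | b;
    nbits += 8;
    while (nbits >= 7) {
      nbits -= 7;
      out += String.fromCharCode((buffer >>> nbits) & 0x7f);
    }
    buffer &= (1 << nbits) - 1; // drop bits already emitted
  }
  if (nbits > 0) {
    // flush the remaining bits, left-aligned with zero padding
    out += String.fromCharCode((buffer << (7 - nbits)) & 0x7f);
  }
  return out;
}

// Inverse: rebuild the byte array; the caller must supply the
// original byte length, since the last character may carry padding.
function unpackFrom7Bit(str, byteLength) {
  const bytes = [];
  let buffer = 0, nbits = 0;
  for (const ch of str) {
    buffer = (buffer << 7) | (ch.charCodeAt(0) & 0x7f);
    nbits += 7;
    if (nbits >= 8) {
      nbits -= 8;
      bytes.push((buffer >>> nbits) & 0xff);
      buffer &= (1 << nbits) - 1;
    }
  }
  return bytes.slice(0, byteLength);
}
```

Note that the decoder needs the original byte count (or some end-of-stream marker) because the final character can contain padding bits.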

The ideal solution would be for socket.io to be able to transfer byte arrays instead of strings. After all, we're trying to transmit binary data, not text.
