There is a mismatch between the way C (the implementation language of Apache HTTPD) and JavaScript handle character data. Within C, characters are represented as arrays of small integer values (typically 8 bits per character, although 16 bits is also possible). C relies on the standard library to provide interpretation and rendering; within C itself, character data is just binary data.
JavaScript, on the other hand, treats the characters within a string as atomic entities. In particular, JavaScript will not compose character values from multi-byte encodings. So, given a Unicode Transformation Format (UTF)-8 encoding of characters outside the range 0–127, a program that does not handle the encoding and decoding explicitly can easily generate strings with inappropriate contents. For example, the character “π” corresponds to the code point U+03C0, which UTF-8 encodes as the two bytes 0xCF 0x80. A string containing the character “π” can be constructed by passing the value 0x3C0 to the String.fromCharCode( ) method, but passing the bytes 0xCF and 0x80 instead results in the two-character string “Ï” followed by U+0080 (the second character is actually an invisible control character).
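A minimal sketch of this pitfall, assuming a modern JavaScript environment in which the TextDecoder class is available (as it is in current browsers and Node.js):

```javascript
// Correct: pass the code point itself; U+03C0 fits in a single UTF-16 unit.
const good = String.fromCharCode(0x3c0);
console.log(good, good.length);              // "π" 1

// Incorrect: passing the UTF-8 bytes yields two unrelated characters.
const bad = String.fromCharCode(0xcf, 0x80);
console.log(bad.length);                     // 2
console.log(bad.charCodeAt(0).toString(16)); // "cf" (Ï)
console.log(bad.charCodeAt(1).toString(16)); // "80" (a control character)

// Recovering "π" from its UTF-8 bytes requires an explicit decoder.
const decoded = new TextDecoder("utf-8").decode(new Uint8Array([0xcf, 0x80]));
console.log(decoded === good);               // true
```

The fromCharCode method interprets each argument as a UTF-16 code unit, never as a byte of some encoding; any byte-level interpretation must be done explicitly.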
This mismatch is a problem in the context of cryptography, because most cryptographic algorithms operate on binary data without regard to character encodings, and rely on the external system to manage character data appropriately. No such external management exists in JavaScript; the program must perform the conversion between strings and bytes itself.