Because all well-formed sequences (and subsequences) are interpreted
according to the corresponding UTF. That is quite different from a random
byte stream with no declared semantics, or a byte stream with a different
declared semantics.
Thus if you are given a Unicode 8-bit string <61, 62, 80, 63>, you know
that the interpretation is <a, b, <fragment>, c>, and *not* the EBCDIC US
interpretation as </, Â, Ø, Ä> = U+002F U+00C2 U+00D8 U+00C4.
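For concreteness, a minimal sketch of that difference using Python's codecs
("cp037" is Python's name for EBCDIC US, code page 037; decoding the
ill-formed byte with replacement yields U+FFFD):

    data = bytes([0x61, 0x62, 0x80, 0x63])

    # Declared as a Unicode 8-bit string (UTF-8): 0x80 is an ill-formed
    # fragment, so it becomes U+FFFD under replacement-on-error decoding.
    print(data.decode('utf-8', errors='replace'))   # ab\ufffdc

    # Declared as EBCDIC US (code page 037): the same bytes read as /ÂØÄ.
    print(data.decode('cp037'))                     # /ÂØÄ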
Mark <https://plus.google.com/114199149796022210033>
— Il meglio è l'inimico del bene — (The best is the enemy of the good)
On Mon, Jan 7, 2013 at 12:44 PM, Philippe Verdy <verdy_p_at_wanadoo.fr> wrote:
> Well then I don't know why you need a definition of a "Unicode 16-bit
> string". For me it just means exactly the same as "16-bit string", and
>