From: Doug Ewell (dewell@adelphia.net)
Date: Sat Apr 23 2005 - 15:00:18 CST
Hans Aberg <haberg at math dot su dot se> wrote:
> We are essentially back at a discussion held here some time ago: The
> limit on the number of Unicode code points is due to a design flaw in
> the UTF-16 encoding, where the engineers who did it failed to properly
> separate the notions of character numbers and integer-to-binary
> encoding.
Whether this statement, or its underlying assumption, is true or false
is orthogonal to my point. There is no need for a universal character
encoding for every alphabet invented by a high-school kid, and no need
for Unicode to turn itself into a self-service registry for every nonce
glyph or variant that someone thinks is a "character."
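[For context on the quoted claim: the limit being discussed is the
U+10FFFF ceiling that UTF-16's surrogate mechanism imposes. A minimal
sketch, not part of the original mail, showing where that number comes
from:]

```python
# Illustrative sketch (not from the original mail): how UTF-16
# surrogate pairs cap Unicode at U+10FFFF. High surrogates
# (D800-DBFF) and low surrogates (DC00-DFFF) each span 0x400 values,
# so pairs can encode 0x400 * 0x400 = 0x100000 supplementary code
# points, on top of the 0x10000 BMP code points reachable with a
# single 16-bit unit.

def utf16_encode(cp: int) -> list[int]:
    """Encode a code point as a list of UTF-16 code units."""
    if cp > 0x10FFFF:
        raise ValueError("beyond what surrogate pairs can reach")
    if cp < 0x10000:
        return [cp]                      # BMP: one 16-bit unit
    v = cp - 0x10000                     # 20-bit offset
    return [0xD800 | (v >> 10),          # high surrogate
            0xDC00 | (v & 0x3FF)]        # low surrogate

# The highest encodable code point uses the last surrogate pair:
assert utf16_encode(0x10FFFF) == [0xDBFF, 0xDFFF]
# Total code points: 0x10000 (BMP) + 0x100000 (supplementary)
assert 0x10000 + 0x400 * 0x400 == 0x110000
```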
--
Doug Ewell
Fullerton, California
http://users.adelphia.net/~dewell/
This archive was generated by hypermail 2.1.5 : Sat Apr 23 2005 - 15:01:21 CST