From: Doug Ewell (dewell@adelphia.net)
Date: Fri Oct 28 2005 - 07:17:27 CST
The Roman numerals were included in Unicode 1.0 because they were part
of a legacy character set, probably an East Asian one, with which
Unicode had to maintain round-trip (1-to-1) convertibility. Nobody
likes them much, and there is not a chance in the world that they would
be in Unicode if not for that legacy convertibility requirement.
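A quick way to see the compatibility origin for yourself: every
character in the U+2160..U+217F block carries a <compat> decomposition
back to plain Latin letters, plus a numeric value. A minimal Python
sketch, using only the standard unicodedata module (the loop range
below is just the precomposed I..XII):

    import unicodedata

    # ROMAN NUMERAL ONE .. ROMAN NUMERAL TWELVE (U+2160..U+216B).
    # Each one NFKC-folds to ordinary Latin letters and carries a
    # Numeric_Value property.
    for cp in range(0x2160, 0x216C):
        ch = chr(cp)
        print("U+%04X %s -> %r (numeric %s)"
              % (cp, ch, unicodedata.normalize("NFKC", ch),
                 unicodedata.numeric(ch)))

For example, U+216B prints as 'XII' with numeric value 12.0, which is
the legacy round-trip mapping made visible.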
--
Doug Ewell
Fullerton, California
http://users.adelphia.net/~dewell/

----- Original Message -----
From: "Alexej Kryukov" <akrioukov@newmail.ru>
To: <unicode@unicode.org>
Sent: Friday, October 28, 2005 5:25
Subject: Re: Improper grounds for rejection of proposal N2677

> I am wondering why in this long thread nobody has mentioned the Roman
> digits in U+216*--217*. They look quite similar to hexadecimal digits,
> because they also have case pairs and are also normally represented
> with Latin letters. Moreover, I would understand if only I, V, X, L,
> C, D and M were separately encoded, but currently all Roman numerals
> from I to XII (and only these), which can be composed from the
> characters listed above, also have separate code points.
>
> So I am just wondering: why were these characters encoded? Should
> they actually be used to represent Roman numerals, or is it OK to
> replace them with the corresponding Latin letters? And how should
> numerals above XII be composed?
>
> To my mind, these characters look quite strange, but if encoding
> them is considered correct, then the hexadecimal digits should be
> encoded for the very same reasons...
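On the question of numerals above XII: the precomposed sequences stop
at XII (U+216B) and xii (U+217B), so anything larger has to be strung
together from the single-letter characters U+2160, U+2164, U+2169,
U+216C, U+216D, U+216E, and U+216F (or simply written with ordinary
Latin letters, which NFKC normalizes them to anyway). A sketch of that
composition, with a hypothetical to_roman() helper over the usual
subtractive value table:

    # Subtractive value table over the seven single-letter Roman
    # numeral characters: M D C L X V I = U+216F, U+216E, U+216D,
    # U+216C, U+2169, U+2164, U+2160.
    ROMAN = [(1000, "\u216F"), (900, "\u216D\u216F"),
             (500, "\u216E"), (400, "\u216D\u216E"),
             (100, "\u216D"), (90, "\u2169\u216D"),
             (50, "\u216C"), (40, "\u2169\u216C"),
             (10, "\u2169"), (9, "\u2160\u2169"),
             (5, "\u2164"), (4, "\u2160\u2164"),
             (1, "\u2160")]

    def to_roman(n):
        """Compose n (1..3999) from single-letter Roman numeral characters."""
        out = []
        for value, glyphs in ROMAN:
            while n >= value:
                out.append(glyphs)
                n -= value
        return "".join(out)

    print(to_roman(2005))  # -> U+216F U+216F U+2164 ("MMV")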