From: spir (denis.spir@free.fr)
Date: Fri Feb 19 2010 - 04:27:43 CST
On Thu, 18 Feb 2010 19:14:58 +0200
Apostolos Syropoulos <ijdt.editor@gmail.com> wrote:
> > Yes, it is absolutely necessary. Converting from a legacy encoding to
> > Unicode and back should be a lossless operation. How else would interchange
> > between legacy systems, and Unicode systems work?
> >
>
> That's a problem that should concern those who still use legacy systems.
> In addition, today, to the best of my knowledge, no one is using 8-bit
> Greek encodings. Finally, just because there are some people using
> legacy systems, should we continue supporting something that is wrong?
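For concreteness, the round trip quoted above can be sketched in a few lines of Python (ISO-8859-7 serves here purely as an example 8-bit Greek encoding; any legacy codec would behave the same):

    # Round trip: legacy bytes -> Unicode -> legacy bytes.
    # ISO-8859-7 maps 0xE1..0xE3 to Greek alpha, beta, gamma.
    legacy = bytes([0xE1, 0xE2, 0xE3])   # "αβγ" in ISO-8859-7
    text = legacy.decode("iso-8859-7")   # legacy -> Unicode
    assert text == "αβγ"
    back = text.encode("iso-8859-7")     # Unicode -> legacy
    assert back == legacy                # nothing was lost

Losslessness here simply means the legacy-to-Unicode mapping is injective, so the conversion can always be inverted.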
I am starting to get the impression that compatibility with legacy character sets was (and still is) the source of various Unicode design flaws (*). Typically, these compatibility requirements seem to add unneeded complication to an already complicated problem. But maybe it's only me.
Where can one find the rationales for these design decisions? I would happily change my mind given sensible reasons.
Denis
(*) Including the #1 flaw, in my opinion: precomposed characters -- but again, maybe it's only me. (Since legacy-encoded texts need to be "transcoded" anyway, mapping a single legacy code to a couple of Unicode code points in some cases is no big deal, is it? Also, this has to be done only once... And on the software side, Unicode-aware apps *must* be able to cope with decomposed characters anyway.)
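As a minimal sketch of what coping with both forms means in practice (standard-library Python, nothing else assumed):

    import unicodedata

    precomposed = "\u00E9"      # 'é' as one precomposed code point
    decomposed  = "e\u0301"     # 'e' + COMBINING ACUTE ACCENT

    # Canonically equivalent, yet unequal as code point sequences:
    assert precomposed != decomposed
    assert unicodedata.normalize("NFD", precomposed) == decomposed
    assert unicodedata.normalize("NFC", decomposed) == precomposed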
Ditto for, e.g., duplicate code points, and for allowing "unordered" combining marks. (But these issues are not as problematic as precomposed characters.)
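Both points can be sketched the same way (U+212B ANGSTROM SIGN is a well-known duplicate of U+00C5; dot below and dot above carry different combining classes):

    import unicodedata

    # Duplicate code point: ANGSTROM SIGN normalizes to Å (U+00C5).
    assert unicodedata.normalize("NFC", "\u212B") == "\u00C5"

    # Combining marks: dot below (class 220) and dot above (class 230)
    # may arrive in either order; normalization makes the orders equal.
    a = "q\u0323\u0307"
    b = "q\u0307\u0323"
    assert unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b)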
________________________________
life is strange