Markus Kuhn wrote:
>
> PCs are orders of magnitude too fast today anyway, and many
> applications are desperately looking for useful things to do between the
> keystrokes of the horribly slow users ... ;-)
Said tongue in cheek, but some people have to work in environments with
fewer computational resources, or support multiple users. I'm on a
project right now with massive CPU and memory requirements because the
software vendor wasn't too worried about efficiency.
> As far as PS and LS are concerned: since UTF-8 is in any case a new
> encoding with little backwards compatibility, there is nothing wrong with
> recycling the old C0 codes at the same time. I never understood what is
> solved by introducing two additional control characters when over two
> dozen of the existing 32 are already unused. Just let, say, 0x0a become
> the line separator and 0x0b the paragraph separator if you are defining
> a new set of formatting codes anyway. I wouldn't worry too much about
> the PS and LS codes of Unicode. It would have been nice if Unicode had
> given meaning to the existing C0 positions instead of adding even more
> control codes. We have to send everything through converters anyway, so
> backwards compatibility is not that much of an issue.
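For concreteness: Unicode defines LS as U+2028 and PS as U+2029, which take
three bytes each in UTF-8, whereas the recycled C0 codes 0x0a and 0x0b would
stay one byte. A rough C sketch of the byte sequences involved (the values
below are just the standard encodings written out by hand):

    #include <stdio.h>

    int main(void)
    {
        /* UTF-8 encodings of the Unicode separators */
        const unsigned char ls[] = { 0xE2, 0x80, 0xA8 };  /* U+2028 LINE SEPARATOR */
        const unsigned char ps[] = { 0xE2, 0x80, 0xA9 };  /* U+2029 PARAGRAPH SEPARATOR */

        /* The C0 codes proposed above as line/paragraph separators */
        const unsigned char c0_ls = 0x0A;  /* LF */
        const unsigned char c0_ps = 0x0B;  /* VT */

        printf("LS U+2028 -> %02X %02X %02X\n", ls[0], ls[1], ls[2]);
        printf("PS U+2029 -> %02X %02X %02X\n", ps[0], ps[1], ps[2]);
        printf("C0 line separator      -> %02X\n", c0_ls);
        printf("C0 paragraph separator -> %02X\n", c0_ps);
        return 0;
    }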
When you were implementing all the Unicode applications in the world,
you evidently missed the ones I worked on that needed all of the
ISO-6429 control codes (C0 and C1). Sure, I could have mapped them
up to private use space, but it would have made migrating the code
even bumpier than it was. If they had done that, I would hope a
number of other convenient compatibility features would also have
been thrown out, just to make sure all your text processing code
had to be rewritten at once. That would have stopped all the creeping
composed code points, at the possible cost of Unicode becoming another
marginal standard.
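To make the migration point concrete: because Unicode kept the ISO-6429
controls at their old positions, the C0 range stays at 0x00-0x1F and the C1
range at 0x80-0x9F in decoded text, and a C1 control such as NEL (U+0085)
round-trips through UTF-8 as the two-byte sequence 0xC2 0x85. A minimal C
sketch, assuming the code points have already been decoded from UTF-8; had
the controls instead been pushed into private use space (U+E000 and up),
tests like these would all have needed rewriting:

    #include <stdint.h>
    #include <stdio.h>

    /* ISO-6429 control ranges, expressed over Unicode code points */
    static int is_c0_control(uint32_t cp) { return cp <= 0x1F; }
    static int is_c1_control(uint32_t cp) { return cp >= 0x80 && cp <= 0x9F; }

    int main(void)
    {
        /* NEL (U+0085) arrives in UTF-8 as 0xC2 0x85; a two-byte
           sequence decodes as ((b0 & 0x1F) << 6) | (b1 & 0x3F). */
        unsigned char b0 = 0xC2, b1 = 0x85;
        uint32_t cp = ((uint32_t)(b0 & 0x1F) << 6) | (b1 & 0x3F);

        printf("decoded U+%04X  C0: %d  C1: %d\n",
               cp, is_c0_control(cp), is_c1_control(cp));
        return 0;
    }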
Geoffrey Waigh