Wednesday, July 16, 1997
It seems to me that there are topics for standardization of interest to
both "Internet protocol designers" (or those who get data based on their
designs) and to "word processing file" folks. I refer to the need to
render the various writing systems of South, Southeast and Western Asia,
e.g., Thai, Tamil, Devanagari, Arabic and Hebrew. While these require much
less memory than Han-based scripts, the rendering processing needed to
produce a minimal level of legibility is much greater, IMHO. While the
"character-glyph model" defines the distinction between the two, it does
little to bridge the gap between them. Unicode 2.0 explains many of the
difficulties rendering these scripts pose, but must each vendor then
tackle them separately? Is a "unirender" standard worth considering?
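[A concrete illustration of the character-glyph gap, added here as a minimal sketch in modern Python; the choice of example syllable is mine, not from the original message. In Devanagari, the vowel sign I is stored after its consonant in logical order, but a legible rendering must draw it to the left of the consonant, so naive character-by-character output fails.]

```python
import unicodedata

# Devanagari syllable "ki": consonant KA followed by vowel sign I.
# Logical (stored) order: KA, then VOWEL SIGN I.
# Visual order a renderer must produce: vowel sign to the LEFT of KA.
ki = "\u0915\u093F"

for ch in ki:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# Reordering like this (plus conjunct formation, contextual shaping in
# Arabic, mark stacking in Thai, etc.) is exactly the processing that a
# hypothetical "unirender" standard would have to specify.
```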
Jim Agenbroad ( jage@LOC.gov )
The above are purely personal opinions, not necessarily the
official views of any government or any agency thereof.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:35 EDT