Re: universal code and national code

From: Philippe Verdy (verdy_p@wanadoo.fr)
Date: Sat Jan 20 2007 - 13:03:19 CST


    The working group that managed the ISO-8859 standard (and also worked on the integration of other national 7/8-bit charsets) has now stopped its activities, because all its members agreed that their efforts would be better spent on the ISO 10646 standard itself.

    Anyway, there is little chance that legacy 7/8-bit codes will disappear soon. (Some new charsets may still be created nationally, but they won't be standardized at ISO, given that any such standard would immediately need an unambiguous mapping to ISO 10646/Unicode.)

    We have not seen any national standards body promoting the adoption of a new national 7/8-bit charset (single-byte or multibyte), although some still occasionally work on adapting their own national standards. Such attempts are, however, still made by some software vendors for their own needs (and in fact anyone can create such a custom charset, which will have a chance of being more or less supported by others ONLY if it has an unambiguous and completely defined mapping to ISO 10646).
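    The requirement above (every code of a custom charset resolving to exactly one ISO 10646 code point, and back) can be sketched in a few lines. The charset below is entirely invented for illustration; only the requirement of an unambiguous, complete, round-trippable mapping comes from the text.

```python
# Hypothetical private 8-bit charset: interoperable only because every
# byte maps unambiguously to a single ISO 10646 / Unicode code point.
# The assignments below are invented for illustration.
CUSTOM_TO_UNICODE = {
    0x41: 0x0041,  # 'A', kept from ASCII
    0xA0: 0x2D30,  # U+2D30 TIFINAGH LETTER YA (example assignment)
    0xA1: 0x2D31,  # U+2D31 TIFINAGH LETTER YAB (example assignment)
}
# Invert the table; a well-defined charset makes this inversion lossless.
UNICODE_TO_CUSTOM = {cp: b for b, cp in CUSTOM_TO_UNICODE.items()}

def decode_custom(data: bytes) -> str:
    # Every byte must resolve to exactly one code point; an ambiguous or
    # partial mapping would make reliable interchange impossible.
    return ''.join(chr(CUSTOM_TO_UNICODE[b]) for b in data)

def encode_custom(text: str) -> bytes:
    return bytes(UNICODE_TO_CUSTOM[ord(c)] for c in text)

sample = bytes([0x41, 0xA0, 0xA1])
assert encode_custom(decode_custom(sample)) == sample  # lossless round trip
```

    Without such a table there is no defined way for other software to interpret the bytes, which is exactly why an ISO 10646 mapping is the precondition for any support by others.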

    I don't see what the United Nations has to do with this; unlike ISO, it is not the place for discussing international technology standards.

    ISO-8859-15 was not so bad when it was created, because that was done while the ISO working group on 7/8-bit charsets was still active.
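    ISO-8859-15 also illustrates the point about complete mappings: its best-known change over ISO-8859-1 was assigning the euro sign (U+20AC in ISO 10646) to byte 0xA4, and modern runtimes expose that legacy mapping through their Unicode machinery. A minimal sketch using Python's built-in codecs:

```python
# ISO-8859-15 reassigned byte 0xA4 (the ISO-8859-1 currency sign)
# to the euro sign, which is U+20AC in ISO 10646 / Unicode.
euro_byte = b'\xa4'
assert euro_byte.decode('iso-8859-15') == '\u20ac'  # EURO SIGN
assert euro_byte.decode('iso-8859-1') == '\u00a4'   # CURRENCY SIGN (old meaning)
```

    The same byte means two different characters in the two charsets, and only the per-charset mapping tables into ISO 10646 make the difference unambiguous.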

    Only if there is strong interest in some country for a newer national charset may ISO eventually consider accepting it as a standard, but the most important point is that this will not happen unless the promoted charset has a complete mapping to ISO 10646. So the first target for national standards bodies is not to adopt such a charset, but to have the characters they need encoded in ISO 10646. Having a 7/8-bit charset built on top of this is then a secondary, non-blocking issue (for example, Morocco could adopt an 8-bit charset for the newly encoded Tifinagh, possibly modelled on the ISO-8859 series, but we have seen no such attempt so far, and if it ever happens, it won't be an ISO standard).

    So what is the issue? I see none. ISO 10646 is now effectively the central encoding for all existing, past, and future character sets, and it has been adopted for almost all new developments by public and private standards bodies and by vendors. There is no longer any justification for creating a new 7/8-bit charset, given that most new protocols depend on support for ISO/IEC 10646 (while support for almost all other legacy charsets is often only optional). With the technologies now widely available and deployed, we no longer need such extensions.
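    The "central encoding" role can be made concrete: converting between any two legacy charsets is just a decode into Unicode followed by an encode, with no pairwise conversion tables needed. A sketch, using two Cyrillic charsets chosen purely for illustration:

```python
# Unicode as the pivot: the same text has different byte representations
# in two legacy Cyrillic charsets, but both round-trip through Unicode.
cyrillic_text = 'привет'
iso_bytes = cyrillic_text.encode('iso-8859-5')
koi_bytes = cyrillic_text.encode('koi8-r')

assert iso_bytes != koi_bytes  # the legacy byte sequences differ
# Decoding each through its ISO 10646 mapping recovers identical text:
assert iso_bytes.decode('iso-8859-5') == koi_bytes.decode('koi8-r') == cyrillic_text
```

    With N legacy charsets, the pivot model needs only N mapping tables into ISO 10646 instead of N×(N-1) direct conversion tables, which is precisely why a complete mapping is the entry ticket for any charset.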

    However, every standard has a lifetime, which depends on the support it has from users and the industry, and on the money they are ready to invest to maintain it.

    The next VERY COSTLY evolution will only come when ISO 10646 needs a successor, i.e. if there are unsolvable issues with the existing policies such that avoiding them would create incompatibilities; and that is not expected soon.

    I think the ISO 10646 standard will be with us for a very long time, possibly even longer than the Unicode standard itself (which may get a successor, if a better way to handle international text is found that preserves ISO 10646 compatibility). But before that happens, the cost of changing the existing rules will have to be evaluated. Let's hope that, when the time comes, this will not create a split into multiple incompatible successors...

    Anyway, I can't see any specific technical problem in the 3 points you list. These problems have now been solved by adapting those areas to the ISO 10646 framework (with the great help of the additional Unicode properties and algorithms), and the necessary extensions and policies have been made (or are actively being made) to keep them compatible with it.

    So nothing now prevents operating systems, programming languages, computers, embedded systems, firmware, and software from supporting ISO 10646 and working with it using Unicode policies and algorithms... except lazy programmers, or people who don't want to learn how to use the standard and have not understood the benefits of using it for:
    * maximum interoperability
    * easier scaling (think about other technical evolutions)
    * immediate profits due to larger marketability
    * reduced development costs (by using the many implementations already available)
    * longer software lifetime
    * reduced future maintenance and upgrade costs in the long term.
      ----- Original Message -----
      From: Luke Onslow
      To: unicode@unicode.org
      Sent: Saturday, January 20, 2007 10:35 AM
      Subject: universal code and national code

      Dear all,

      What is the anticipation for all national codes to transition to a universal code? I.e., do countries have roadmaps to adopt Unicode, with support from the United Nations?

      One bad example is ISO-8859-15. I guess this was because of legacy systems. The European Union doesn't seem to have any targets. Does the United States have any? What about China, Russia, Japan...?

      The technical problems are:
      1. Programming Languages
      2. Operating Systems
      3. Computers



    This archive was generated by hypermail 2.1.5 : Sat Jan 20 2007 - 13:07:06 CST