NCR encode/decode, vs Unicode approach

From: Huo, Henry (Henry.Huo@aig.com)
Date: Mon Jun 19 2006 - 08:26:15 CDT

  • Next message: Addison Phillips: "RE: NCR encode/decode, vs Unicode approach"

    We are evaluating our legacy systems and would like to get you gurus' advice
    on the best approach to supporting multilingual web products.


    Currently, the legacy web applications run on WebSphere 5 and Sybase 12.5,
    which is set up with CP850 for the char and varchar columns.

    The web front-end does NCR encoding/decoding (&#nnnnn;) for double-byte
    characters, e.g. Japanese and Chinese, and no encoding/decoding for
    US-ASCII input.
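For what it's worth, the round trip described above can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not the actual front-end code; the class name `NcrCodec` is made up:

```java
// Hypothetical sketch of the NCR scheme described above: non-ASCII
// code points are stored as decimal character references (&#nnnnn;)
// so that the bytes reaching the CP850 columns are plain ASCII.
public class NcrCodec {

    // Replace every code point >= 128 with its &#nnnnn; reference;
    // ASCII passes through untouched, matching the described setup.
    public static String encode(String s) {
        StringBuilder out = new StringBuilder();
        s.codePoints().forEach(cp -> {
            if (cp < 128) {
                out.appendCodePoint(cp);
            } else {
                out.append("&#").append(cp).append(';');
            }
        });
        return out.toString();
    }

    // Turn &#nnnnn; references back into code points; anything that
    // does not parse as a reference is passed through unchanged.
    public static String decode(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            int end;
            if (s.startsWith("&#", i) && (end = s.indexOf(';', i + 2)) > i + 2) {
                String digits = s.substring(i + 2, end);
                if (digits.chars().allMatch(Character::isDigit)) {
                    out.appendCodePoint(Integer.parseInt(digits));
                    i = end + 1;
                    continue;
                }
            }
            out.append(s.charAt(i));
            i++;
        }
        return out.toString();
    }
}
```

One known weakness of this scheme: a user who literally types the text "&#26085;" cannot be distinguished from an encoded character after storage, since '&' itself is not escaped.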

    We are currently working on a plan to support all kinds of languages,
    including English, German (umlauts), Korean, Chinese, Japanese, etc. Could
    you please advise on the best approach? If we convert the Sybase database
    to use unichar/univarchar, then we need to change all of the legacy apps to
    use UTF-8 encoding/decoding, and the effort would be huge. If we would like
    to keep the current CP850 char/varchar on the Sybase database side, should
    we encode/decode with NCRs (&#nnnnn;) in all web applications handling
    different languages across different countries? Will the NCR
    encoding/decoding support all languages without issues? We have already
    noticed some problems, such as invalid characters ("??") in the database.
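Incidentally, the "??" is the classic symptom of characters that reach a single-byte codepage without being NCR-encoded first: the charset encoder substitutes '?' for each unmappable character. A small illustration of that failure mode, using ISO-8859-1 as a stand-in for CP850 (both are single-byte codepages with no CJK characters; the class name is hypothetical):

```java
import java.nio.charset.StandardCharsets;

public class LossyRoundTrip {

    // Round-trip a string through a single-byte charset, the way raw
    // (un-encoded) text would round-trip through a CP850 column.
    // String.getBytes replaces unmappable characters with '?'.
    public static String roundTrip(String s) {
        byte[] stored = s.getBytes(StandardCharsets.ISO_8859_1);
        return new String(stored, StandardCharsets.ISO_8859_1);
    }
}
```

German umlauts survive such a round trip (they exist in both Latin-1 and CP850), while Japanese or Chinese text comes back as "??" -- which is why NCR-encoding only the double-byte scripts mostly works, right up until some code path writes to the database without going through the encoder.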


    Thank you so much for your help; any input is highly appreciated.


    With best regards,

    - Henry



    This archive was generated by hypermail 2.1.5 : Mon Jun 19 2006 - 10:10:08 CDT