[Unicode] Technical Reports

Unicode Technical Standard #35

Locale Data Markup Language (LDML)

Version 1.1
Authors: Mark Davis
Date: 2004-06-07
This Version: http://www.unicode.org/reports/tr35/tr35-2.html
Previous Version: (see below)
Latest Version: http://www.unicode.org/reports/tr35/
Namespace: http://www.unicode.org/cldr/
DTDs: http://www.unicode.org/cldr/dtd/1.1/ldml.dtd
Tracking Number: 2

Summary
This document describes an XML format (vocabulary) for the exchange of structured locale data.

Status
This document has been reviewed by Unicode members and other interested parties, and has been approved by the Unicode Locale Data Technical Committee as a Unicode Technical Standard. This is a stable document and may be used as reference material or cited as a normative reference by other specifications.

A Unicode Technical Standard (UTS) is an independent specification. Conformance to the Unicode Standard does not imply conformance to any UTS. Each UTS specifies a base version of the Unicode Standard. Conformance to the UTS requires conformance to that version or higher.

Please submit corrigenda and other comments with the online reporting form [Feedback]. Related information that is useful in understanding this document is found in the References. For the latest version of the Unicode Standard see [Unicode]. For a list of current Unicode Technical Reports see [Reports]. For more information about versions of the Unicode Standard, see [Versions]. For possible errata for this document, see [Errata].

Previous Version

The 1.0 version of this document was hosted on the OpenI18N site:

1.0 Version: http://www.openi18n.org/spec/ldml/1.0/ldml-spec.htm
1.0 Namespace: http://www.openi18n.org/spec/ldml
1.0 DTDs: http://www.openi18n.org/spec/ldml/1.0/ldml.dtd
1.0 Mirror: http://www.unicode.org/cldr/dtd/1.0/


Introduction
Not long ago, computer systems were like separate worlds, isolated from one another. The internet and related events have changed all that. A single system can be built of many different components, hardware and software, all needing to work together. Many different technologies have been important in bridging the gaps; in the internationalization arena, Unicode has provided a lingua franca for communicating textual data. But there remain differences in the locale data used by different systems.

Common, recommended practice for internationalization is to store and communicate language-neutral data, and to format that data for the client. This formatting can take place on any of a number of components in a system; a server might format data based on the user's locale, or the formatting might be done on a client machine. The same goes for parsing data, and for locale-sensitive analysis of data.

But there remain significant differences across systems and applications in the locale-sensitive data used for such formatting, parsing, and analysis. Many of those differences are simply gratuitous: each choice is acceptable to human readers, yet the choices differ, so the same operation produces different results. In many other cases there are outright errors. Whatever the cause, the differences can cause discrepancies to creep into a heterogeneous system. This is especially serious in the case of collation (sort-order), where different collation sequences cause not only ordering differences, but also different results for queries. That is, given a query for customers with names between "Abbot, Cosmo" and "Arnold, James", if different systems have different sort orders, different lists will be returned. (For comparisons across systems formatted as HTML tables, see [Comparisons].)

There are a number of steps that can be taken to improve the situation. The first is to provide an XML format for locale data interchange. This provides a common format for systems to interchange data so that they can get the same results. The second is to gather up locale data from different systems, and compare that data to find any differences. The third is to provide an online repository for such data. The fourth is to have an open process for reconciling differences between the locale data used on different systems and validating the data, to come up with a useful, common, consistent base of locale data.

Note: There are many different equally valid ways in which data can be judged to be "correct" for a particular locale. The goal for the common locale data is to make it as consistent as possible with existing locale data, and acceptable to users in that locale.

This document describes one of those pieces, an XML format for the communication of locale data. With it, for example, collation rules can be exchanged, allowing two implementations to exchange a specification of collation. Using the same specification, the two implementations will achieve the same results in comparing strings.

For more information, see the Common XML Locale Repository project page [LocaleProject].

What is a locale?

Before diving into the XML structure, it is helpful to describe the model behind the structure. People do not have to subscribe to this model to use the data, but they do need to understand it so that the data can be correctly translated into whatever model their implementation uses.

The first issue is basic: what is a locale? In this document, a locale is an id that refers to a set of user preferences that tend to be shared across significant swaths of the world. Traditionally, the data associated with this id provides support for formatting and parsing of dates, times, numbers, and currencies; for measurement units, for sort-order (collation), plus translated names for timezones, languages, countries, and scripts. They can also include text boundaries (character, word, line, and sentence), text transformations (including transliterations), and support for other services.

Locale data is not cast in stone: the data used on someone's machine may generally reflect the US format, for example, but preferences can typically be set to override particular items, such as setting the date format to 2002.03.15, or using metric instead of Imperial measurement units. In the abstract, locales are simply one of many sets of preferences that, say, a website may want to remember for a particular user. Depending on the application, it may also want to remember the user's timezone, preferred currency, preferred character set, smoker/non-smoker preference, meal preference (vegetarian, kosher, etc.), music preference, religion, party affiliation, favorite charity, and so on.

Locale data in a system may also change over time: country boundaries change; governments (and currencies) come and go; committees impose new standards; bugs are found and fixed in the source data; and so on. Thus the data needs to be versioned for stability over time.

In general terms, the locale id is a parameter that is supplied to a particular service (date formatting, sorting, spell-checking, etc.). The format in this document does not attempt to collect together all the data that could conceivably be used by all possible services. Instead, it collects together data that is in common use in systems and internationalization libraries for basic services. The main difference among locales is in terms of language; there may also be some differences according to different countries or regions. However, the line between locales and languages, as commonly used in the industry, is rather fuzzy. For more information, see Appendix D: Language and Locale IDs.

We will speak of data as being "in locale X". That does not imply that a locale is a collection of data; it is simply shorthand for "the set of data associated with the locale id X". Each individual piece of data is called a resource, and a tag indicating the key of a resource is called a resource tag.

Locale IDs

A CLDR locale id consists of the following format:

locale_id := base_locale_id options?

base_locale_id := language_code ("_" script_code)? ("_" territory_code)? ("_" variant_code)?

options := "@" key "=" type ("," key "=" type )*

As usual x? means that x is optional; x* means that x occurs zero or more times.
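For illustration, the grammar above can be rendered as a small parser. This sketch is not part of the specification; the regular expression and the function name are assumptions based on the field definitions below (language: 2-3 letters; script: 4 letters; territory: 2 letters; variant: 1 letter or more than 4 letters).

```python
import re

# Illustrative parser for base_locale_id plus @key=type options (not normative).
BASE = re.compile(
    r"^(?P<language>[a-zA-Z]{2,3})"
    r"(?:_(?P<script>[a-zA-Z]{4}))?"
    r"(?:_(?P<territory>[a-zA-Z]{2}))?"
    r"(?:_(?P<variant>[a-zA-Z]|[a-zA-Z]{5,}))?$"
)

def parse_locale_id(locale_id: str) -> dict:
    """Split a CLDR locale id into its base fields plus @key=type options."""
    base, _, opts = locale_id.partition("@")
    m = BASE.match(base)
    if m is None:
        raise ValueError(f"not a valid locale id: {locale_id!r}")
    fields = {k: v for k, v in m.groupdict().items() if v is not None}
    if opts:
        fields["options"] = dict(kv.split("=", 1) for kv in opts.split(","))
    return fields
```

Because a script code is exactly four letters and a territory code exactly two, a segment such as "Latn" or "US" can be classified purely by length, which is why variant codes must avoid those lengths.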

Note: The successor to RFC 3066 is currently being developed. Once that standard has been approved, the goal is to update this locale id definition to correspond to it; the correspondence need not use precisely the same syntax.

The field values are given in the following table. All field values are case-insensitive, except for the type, which is case-sensitive. However, customarily the language code is lowercase, the territory and variant codes are uppercase, and the script code is titlecase (that is, first character uppercase and other characters lowercase). The type may also be referred to as a key-value, for clarity.

Locale Field Definitions

language_code
  Allowable characters: ASCII letters
  Allowable values: [ISO639] 2-letter codes where they exist, otherwise 3-letter codes (the mapping between 2-letter and 3-letter codes is not part of this format), or [RFC3066] codes that do not contain script or territory codes.

script_code
  Allowable characters: ASCII letters
  Allowable values: [ISO15924] 4-letter codes. In most cases the script is not necessary, since the language is customarily written in only a single script. Examples of usage are:
    az-Arab  Azerbaijani in Arabic script
    az-Cyrl  Azerbaijani in Cyrillic script
    az-Latn  Azerbaijani in Latin script
    zh-Hans  Chinese, in simplified script
    zh-Hant  Chinese, in traditional script

territory_code
  Allowable characters: ASCII letters
  Allowable values: [ISO3166] 2-letter codes. Also known as a country_code, although the territories may not be countries.

variant_code
  Allowable characters: ASCII letters
  Allowable values: Values used in CLDR are listed below. For information on the process for adding new standard variants or element/type pairs, see [LocaleProject].

key
  Allowable characters: ASCII letters and digits

type
  Allowable characters: ASCII letters, digits, and "-"



The locale id format generally follows the description in the OpenI18N Locale Naming Guideline [NamingGuideline], with some enhancements. The main differences from those guidelines are that the locale id:

  1. does not include a charset (the data in LDML format always provides a representation of all Unicode characters; the repository is stored in UTF-8, although it can be transcoded to other encodings as well);
  2. adds the ability to have a variant, as in Java;
  3. adds the ability to discriminate the written language by script (or script variant); and
  4. is a superset of [RFC3066] codes.

Note: The language + script + territory code combination can itself be considered simply a language code: For more information, see Appendix D: Language and Locale IDs.

A locale that only has a language code (and possibly a script code) is called a language locale; one with both language and territory codes is called a territory locale (or country locale).

The variant codes specify particular variants of the locale, typically with special options. So that they cannot overlap with script or territory codes, they must consist of either a single letter or more than four letters. The currently defined variants include:

Variant Definitions
variant Description
bokmal Bokmål, variant of Norwegian
nynorsk Nynorsk, variant of Norwegian
aaland Åland, variant of Swedish used in Finland

Note: The first two of the above variants are for backwards compatibility. Typically the entire contents of these are defined by an <alias> element pointing at the nb_NO (Norwegian Bokmål) and nn_NO (Norwegian Nynorsk) locale IDs.

The currently defined optional key/type combinations include the following. Additional type values are defined in the detail sections of this document.

Key/Type Definitions

key: collation
  phonebook: For a phonebook-style ordering (used in German).
  pinyin: Pinyin order for CJK characters (that is, an ordering for CJK characters based on a character-by-character transliteration into pinyin).
  traditional: For a traditional-style sort (as in Spanish).
  stroke: Stroke order for CJK characters.
  direct: Hindi variant.
  posix: A "C"-based locale.

key: calendar*
  gregorian: (default)
  islamic (alias: arabic): Astronomical Arabic calendar.
  chinese: Traditional Chinese calendar.
  islamic-civil (alias: civil-arabic): Civil (algorithmic) Arabic calendar.
  hebrew: Traditional Hebrew calendar.
  japanese: Imperial calendar (same as Gregorian except for the year, with one era for each Emperor).
  buddhist (alias: thai-buddhist): Thai Buddhist calendar (same as Gregorian except for the year).

  *For information on the calendar algorithms associated with the data used with these types, see [Calendars].

key: currency
  ISO 4217 code: Currency value identified by ISO code. See [Data Formats].

key: timezone
  Olson ID: Identification for the timezone according to the Olson database ID. See [Data Formats].

Note: There is no standard system (or rather, there are many standard systems) for defining locale ID syntax. This definition of Locale IDs may not match the locale ID syntax used on a particular system, so some process of ID translation may be required.

Locale Inheritance

The XML format relies on an inheritance model, whereby the resources are collected into bundles, and the bundles are organized into a tree. Data for the many Spanish locales does not need to be duplicated across all of the countries having Spanish as a national language. Instead, common data is collected in the Spanish language locale, and territory locales need only supply differences. The parent of all of the language locales is a generic locale known as root. Wherever possible, the resources in the root are language- and territory-neutral. For example, the collation order in the root is the UCA (see UAX #10). Since English-language collation has the same ordering, the 'en' locale data does not need to supply any collation data, nor does either the 'en_US' or the 'en_IE' locale data.

Given a particular locale id "en_US_someVariant", the search chain for a particular resource is the following:

  en_US_someVariant
  en_US
  en
  root
If a type and key are supplied in the locale id, then logically the chain from that id to the root is searched for a resource tag with a given type, all the way up to root. If no resource is found with that tag and type, then the chain is searched again without the type.

Thus the data for any given locale will only contain resources that are different from the parent locale. For example, most territory locales will inherit the bulk of their data from the language locale: "en" will contain the bulk of the data, and "en_US" will only contain a few items like currency. All data that is inherited from a parent is presumed to be valid, just as valid as if it were physically present in the file. This provides for much smaller resource bundles, and much simpler (and less error-prone) maintenance.
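The truncation-based search chain can be sketched in a few lines. The bundle representation below (a plain dictionary per locale id) is an assumption for illustration, not a CLDR API:

```python
def lookup_chain(locale_id: str) -> list[str]:
    """Parent chain for a locale id: en_US_someVariant -> en_US -> en -> root."""
    chain = [locale_id]
    while "_" in locale_id:
        locale_id = locale_id.rsplit("_", 1)[0]  # drop the last subtag
        chain.append(locale_id)
    chain.append("root")
    return chain

def find_resource(bundles: dict, locale_id: str, tag: str):
    """Return the first value for `tag` found along the inheritance chain."""
    for loc in lookup_chain(locale_id):
        if tag in bundles.get(loc, {}):
            return bundles[loc][tag]
    raise KeyError(tag)
```

With bundles for "root" and "en", a lookup in "en_US" falls back to "en" and then to "root", exactly as described above.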

Where this inheritance relationship does not match a target system, such as POSIX, the data logically should be fully resolved in converting to a format for use by that system, by adding all inherited data to each locale data set.

The locale data does not contain general character properties that are derived from the Unicode Character Database [UCD]. Since that data is common across locales, it is not duplicated in the bundles. Constructing a POSIX locale from this data therefore requires use of the UCD data. In addition, POSIX locales may also specify the character encoding, which requires the data to be transformed into that target encoding.

Multiple Inheritance

In clearly specified instances, resources may inherit from within the same locale. For example, currency format symbols inherit from the number format symbols; the Buddhist calendar inherits from the Gregorian calendar. This only happens where documented in this specification. In these special cases, the inheritance functions as normal, up to the root. If the data is not found along that path, then a second search is made, logically changing the element/attribute to the alternate values.

For example, for the locale "en_US" the month data in <calendar class="buddhist"> inherits first from <calendar class="buddhist"> in "en", then in "root". If not found there, then it inherits from <calendar type="gregorian"> in "en_US", then "en", then in "root".
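A self-contained sketch of this two-pass search, with bundles again modeled as plain dictionaries, here keyed by (calendar, resource) pairs; this illustrates the lookup order only and is not a CLDR API:

```python
def two_pass_lookup(bundles: dict, locale_id: str, calendar: str, tag: str):
    """First search the requested calendar up the chain to root; only if that
    fails, search the same chain again with the 'gregorian' fallback."""
    chain = [locale_id]
    while "_" in chain[-1]:
        chain.append(chain[-1].rsplit("_", 1)[0])
    chain.append("root")
    for cal in (calendar, "gregorian"):   # pass 1: requested; pass 2: fallback
        for loc in chain:
            value = bundles.get(loc, {}).get((cal, tag))
            if value is not None:
                return value
    raise KeyError((calendar, tag))
```

For "en_US" and the Buddhist months, this visits buddhist in en_US, en, and root before falling back to gregorian in en_US, en, and root, matching the order described above.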

XML Format

The following sections describe the structure of the XML format for locale data. To start with, the root element is <ldml>. That element contains the following elements:

The structure of each of these elements and their contents will be described below. The first few elements have little structure, while dates, numbers, and collations are more involved.

In general, all translatable text in this format is in element contents, while attributes are reserved for types and non-translated information (such as numbers or dates). The reason that attributes are not used for translatable text is that spaces are not preserved, and we cannot predict where spaces may be significant in translated material.

Note that the data in the examples given below is purely illustrative, and does not match any particular language. For a more detailed example of this format, see [Example]. There is also a DTD for this format, but remember that the DTD alone is not sufficient to understand the semantics, the constraints, or the interrelationships between the different elements and attributes. You may wish to have copies of each of these on hand as you proceed through the rest of this document.

Common Elements

At any level in any element, two special elements are allowed.

<special xmlns:yyy="xxx">

This element is designed to allow for arbitrary additional annotation and data that is product-specific. It has one required attribute, which specifies the XML namespace of the special data. For example, the following uses the version 1.0 POSIX special element.

<!DOCTYPE ldml SYSTEM "http://www.unicode.org/cldr/dtd/1.0/ldml.dtd" [
    <!ENTITY % posix SYSTEM "http://www.unicode.org/cldr/dtd/1.0/ldmlPOSIX.dtd">
    %posix;
]>
...
<special xmlns:posix="http://www.opengroup.org/regproducts/xu.htm">
        <!-- old abbreviations for pre-GUI days -->
        ...
</special>

<alias source="<locale_ID>"/>

The contents of any element can be replaced by an alias, which points to another source for the data. The resource is to be fetched from the corresponding location in the other source. Normal resource searching is to be used; take the following example:

    <collation type="phonebook">
      <alias source="de_DE" type="phonebook"/>
    </collation>

The resource bundle at "de_DE" will be searched for a resource element at the same position in the same tree with type "collation". If not found there, then the resource bundle at "de" will be searched, etc.

It is an error to have a circular chain of aliases. That is, a collection of LDML XML documents must not have situations where a sequence of alias lookups (including inheritance and multiple inheritance) can be followed indefinitely without terminating.
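Implementations typically guard against such circular chains while resolving aliases. A minimal sketch, with the alias graph modeled as a simple source-to-source mapping for brevity:

```python
def resolve_alias(aliases: dict, start: str) -> str:
    """Follow alias links until a non-alias source is reached, raising an
    error on any circular chain, as required above."""
    seen = set()
    current = start
    while current in aliases:
        if current in seen:
            raise ValueError(f"circular alias chain involving {current!r}")
        seen.add(current)
        current = aliases[current]
    return current
```

The `seen` set bounds the walk: any alias revisited during resolution proves a cycle, so termination is guaranteed even for malformed data.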


Many elements can have a display name. This is a translated name that can be presented to users when discussing the particular service. For example, a number format, used to format numbers using the conventions of that locale, can have a translated name for presentation in GUIs.


Where present, the display names must be unique; that is, two distinct codes would not get the same display name. Any translations should follow customary practice for the locale in question. For more information, see [Data Formats].

<default type="someID"/>

In some cases, a number of elements are present. The default element can be used to indicate which of them is the default, in the absence of other information. The value of the type attribute must match the value of the type attribute of the selected item.

  <default type="medium"/>
  <timeFormatLength type="full">
    <timeFormat type="standard">
      <pattern type="standard">h:mm:ss a z</pattern>
    </timeFormat>
  </timeFormatLength>
  <timeFormatLength type="long">
    <timeFormat type="standard">
      <pattern type="standard">h:mm:ss a z</pattern>
    </timeFormat>
  </timeFormatLength>
  <timeFormatLength type="medium">
    <timeFormat type="standard">
      <pattern type="standard">h:mm:ss a</pattern>
    </timeFormat>
  </timeFormatLength>

Like all other elements, the <default> element is inherited. Thus, it can also refer to inherited resources. For example, suppose that the above resources are present in fr, and that in fr_BE we have the following:

  <default type="long"/>

In that case, the default time format for fr_BE would be the inherited "long" resource from fr. Now suppose that we had in fr_CA:

  <timeFormatLength type="medium">
    <timeFormat type="standard">
      <pattern type="standard">...</pattern>
    </timeFormat>
  </timeFormatLength>

In this case, the <default> is inherited from fr, and has the value "medium". It thus refers to this new "medium" pattern in this resource bundle.

Escaping Characters

Unfortunately, XML 1.0 does not have the capability to contain all Unicode code points: certain code points, such as most C0 control characters, are forbidden even as character references. Extra syntax is therefore required to represent those code points in element content. This syntax must also be used where spaces are significant (since otherwise they can be stripped).

Escaping Characters
  Code Point:   U+0000
  XML Example:  <cp hex="0">

Note: If XML 1.1 is approved in the current state, then this would not be necessary -- except for NULL (U+0000), which is typically never tailored. However, for backwards compatibility with 1.0 systems it is best for some time to come to use these special escapes.
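Decoding the escape is straightforward; a sketch (the function name is illustrative):

```python
def decode_cp(hex_value: str) -> str:
    """Map the hex attribute of a <cp> escape element back to the code point
    it represents, e.g. <cp hex="0"> stands for U+0000."""
    codepoint = int(hex_value, 16)
    if not 0 <= codepoint <= 0x10FFFF:
        raise ValueError(f"not a Unicode code point: {hex_value!r}")
    return chr(codepoint)
```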

Common Attributes

<... type="stroke" ...>

The attribute type is also used to indicate an alternate resource that can be selected with a matching key=type option in the locale id modifiers, or be referenced by a default element. For example:

    <currency type="preEuro">...</currency>

If there is no type attribute present, then the value is assumed to be "standard".

<... draft="true" ...>

If this attribute is present, it indicates the status of all the data in this element and any subelements (unless they have a contrary draft value).

<... standard="..." ...>

The value of this attribute is a list of strings representing standards: international, national, organization, or vendor standards. The presence of this attribute indicates that the data in this element is compliant with the indicated standards. Where possible, for uniqueness, each string should be a URL that represents that standard. The strings are separated by commas; leading or trailing spaces on each string are not significant. Examples:

<collation standard="MSA 200:2002">
<dateFormatStyle standard="http://www.iso.ch/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=26780&ICS1=1&ICS2=140&ICS3=30">

<identity>
The identity element contains information identifying the target locale for this data, and general information about the version of this data.

<version number="1.1">

The version element provides, in an attribute, the version of this file. The contents of the element can contain textual notes about the changes between this version and the last. For example:

<version number="1.1">Various notes and changes in version 1.1</version>

<generation date="2002-08-28" />

The generation element contains the last modified date for the data. The date is in XML Schema format (yyyy-mm-dd).

<language type="en"/>

The language code is the primary part of the specification of the locale id, with values as described above.

<script type="Latn" />

The script field may be used in the identification of written languages, with values described above.

<territory type="US"/>

The territory code is a common part of the specification of the locale id, with values as described above.

<variant type="nynorsk"/>

The variant code is the tertiary part of the specification of the locale id, with values as described above.

<localeDisplayNames>
Display names for scripts, languages, countries, and variants in this locale are supplied by this element. These supply localized names for these items for use in user-interfaces for displaying lists of locales and scripts. Examples are given below.

Where present, the display names must be unique; that is, two distinct codes would not get the same display name. Any translations should follow customary practice for the locale in question. For more information, see [Data Formats].

<languages>
This contains a list of elements that provide the user-translated names for language codes from [ISO639], as described in Locale_IDs.

<language type="ab">Abkhazian</language>
<language type="aa">Afar</language>
<language type="af">Afrikaans</language>
<language type="sq">Albanian</language>

<scripts>
This element can contain a number of script elements. Each script element provides the localized name for a script, given by the value of the type attribute. The script IDs can be either the long or short forms from Scripts.txt in the UCD. (See UAX #24: Script Names [Scripts] for more information.) For example, in the language of this locale, the name for the Latin script might be "Romana", and the name for the Cyrillic script might be "Kyrillica". That would be expressed with the following.

<script type="Latn">Romana</script>
<script type="Cyrl">Kyrillica</script>

<territories>
This contains a list of elements that provide the user-translated names for territory codes from [ISO3166], as described in Locale_IDs.

<territory type="AF">Afghanistan</territory>
<territory type="AL">Albania</territory>
<territory type="DZ">Algeria</territory>
<territory type="AD">Andorra</territory>
<territory type="AO">Angola</territory>
<territory type="US">United States</territory>

<variants>
This contains a list of elements that provide the user-translated names for the variant_code values described in Locale_IDs.

<variant type="nynorsk">Nynorsk</variant>

<keys>
This contains a list of elements that provide the user-translated names for the key values described in Locale_IDs.

<key type="collation">Sortierung</key>

<types>
This contains a list of elements that provide the user-translated names for the type values described in Locale_IDs. Since the translation of an option name may depend on the key it is used with, the latter is optionally supplied.

<type type="phonebook" key="collation">Telefonbuch</type>

<layout>
This top-level element specifies general layout features. It currently only has one possible element (other than <special>, which is always permitted).

<orientation lines="top-to-bottom" characters="left-to-right" />

The lines and characters attributes specify the default general ordering of lines within a page, and characters within a line. The values are:

Orientation Attributes
  Vertical values: top-to-bottom, bottom-to-top
  Horizontal values: left-to-right, right-to-left

If the lines value is one of the vertical attributes, then the characters value must be one of the horizontal attributes, and vice versa. For example, for English the lines are top-to-bottom, and the characters are left-to-right. For Mongolian the lines are right-to-left, and the characters are top to bottom. This does not override the ordering behavior of bidirectional text; it does, however, supply the paragraph direction for that text (for more information, see UAX #9: The Bidirectional Algorithm [BIDI]).

<characters>
The <characters> element provides optional information about which characters are in common use in the locale, and information that can be helpful in choosing among the character encodings that are typically used to transmit data in the language of the locale. It typically only occurs in a language locale, not in a language/territory locale.

<exemplarCharacters>
This element indicates that normal usage of the language of this locale requires these letters. ("Letter" is interpreted broadly, as in the Unicode General Category, and also includes syllabaries and ideographs.) An encoding that cannot encompass at least these letters is inappropriate for encoding data in the language of this locale. The list of characters is in the Unicode Set format, which allows boolean combinations of sets of letters, including those specified by Unicode properties.

The list should not include non-letters: punctuation marks, digits, etc., although this policy may be changed in future versions of this standard.

The letters do not necessarily form a complete set (especially for languages using large character sets, such as CJK). Nor does the list necessarily include letters that are used in common foreign words in that language. The letters are only the lowercase alternatives, but implicitly include the normal "case-closure": all uppercase and titlecase variants. For the special case of Turkish, the dotted capital I should be included. Sequences of characters that are considered to be a single letter in the alphabet, such as "ch", can be included using curly braces (e.g., [[a-z{ch}{ll}{rr}] - [w]]). For more information, see [Data Formats].
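The implicit case-closure can be sketched as follows. This is a simplification: real case closure is locale-sensitive, as the Turkish dotted-I case above shows, and the function name is illustrative.

```python
def case_closure(exemplars: set[str]) -> set[str]:
    """Add the implicit uppercase and titlecase variants to a set of
    lowercase exemplar letters, including multi-letter units like "ch"."""
    closed = set(exemplars)
    for unit in exemplars:
        closed.add(unit.upper())   # e.g. "ch" -> "CH"
        closed.add(unit.title())   # e.g. "ch" -> "Ch"
    return closed
```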

<mapping registry="iana" type="windows-1252"/>

There can be multiple mapping elements. Each indicates the character conversion mapping table name for a character encoding that may be commonly used to encode data in the language of this locale. The version field of the mapping table is omitted. The ordering among the mapping elements is not significant. The mapping elements themselves are not inherited from parents.

The registry indicates the source of the encoding. Currently the only registry that can be used is "iana", which specifies use of an IANA name. Note: while IANA names are not precise for conversion (see UTR #22: Character Mapping Tables [CharMapML]), they are sufficient for this purpose.

<delimiters>
The delimiters supply common delimiters for bracketing quotations. The quotation marks are used with simple quoted text, such as:

He said, “Don’t be absurd!”

The alternate marks are used with embedded quotations, such as:

He said, “Remember what the Mad Hatter said: ‘Not the same thing a bit! Why you might just as well say that “I see what I eat” is the same thing as “I eat what I see”!’”


<measurement>
<measurementSystem type="US"/>

The measurement system is the normal measurement system in common everyday use (except for date/time). The values are "metric" (= ISO 1000), "US", or "UK"; others may be added over time.

Note: In the future, we may need to add display names for the particular measurement units (millimeter vs. millimetre vs. the Greek, Russian, etc. equivalents), and a message format for positioning those with respect to numbers, e.g. "{number} {unitName}" in some languages, but "{unitName} {number}" in others.

Note: Numbers indicating measurements should never be interchanged without known dimensions. You never want the number 3.51 interpreted as 3.51 feet by one user and 3.51 meters by another. However, this element can be used to convert dimensioned numbers into the user's desired notation: so the value of 3.51 meters can be formatted as 11.52 feet on a particular user's system.
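The conversion mentioned above is simple arithmetic; the constant and the rounding below are illustrative:

```python
FEET_PER_METER = 3.28084  # approximate conversion factor, for illustration

def meters_to_feet(meters: float) -> float:
    """Convert a dimensioned value for display in the user's preferred
    system, as in the 3.51 m to 11.52 ft example above."""
    return round(meters * FEET_PER_METER, 2)
```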


The paperSize element gives the height and width of paper used for normal business letters. The units for the numbers are always millimeters. For example, the paperSize in the root (the default) is A4, that is, 210 × 297 mm:

  <paperSize>
    <height>297</height>
    <width>210</width>
  </paperSize>

An example of locale data that differs from this would be en-US, which uses US letter paper (216 × 279 mm):

  <paperSize>
    <height>279</height>
    <width>216</width>
  </paperSize>
<dates>
This top-level element contains information regarding the format and parsing of dates and times. The data is based on the Java/ICU format. Most of these elements are fairly self-explanatory, except minDays and localizedPatternChars. For information on these, and more information on other elements and attributes, see [JavaDates].

Note: there is an on-line demonstration of date formatting and parsing at [LocaleExplorer] (pick the locale and scroll to "Date Patterns").

The <dates> element has three possible sub-elements: <localizedPatternChars>, <calendars>, and <timeZoneNames>.

<localizedPatternChars>
The interpretation of this is explained in [JavaDates].

<calendars>
This element contains multiple <calendar> elements, each of which specifies the fields used for formatting and parsing dates and times according to the given calendar. The month names are identified numerically, starting at 1. The day names are identified with short strings, since there is no universally-accepted numeric designation.

Many calendars will only differ from the Gregorian Calendar in the year and era values. For example, the Japanese calendar will have many more eras (one for each Emperor), and the years will be numbered within that era. All calendar data inherits from the Gregorian calendar in the same locale data (if not present in the chain up to root), so only the differing data will be present. See Multiple Inheritance.

Both month and day names may vary along two axes: the width and the context. The context is either format (the default), the form used within a date format string (such as "Saturday, November 12th"), or stand-alone, the form used independently (such as in calendar headers). The width can be wide (the default), abbreviated, or narrow. The latter is the shortest possible width, and is typically used in calendar headers. The values in the format context must be distinct; that is, "S" could not be used both for Saturday and for Sunday. The same is not true for stand-alone values; they might only be distinguished by context, especially in the narrow width.

If the stand-alone form does not exist (in the chain up to root), then it inherits from the format form. See Multiple Inheritance.
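The context fallback described above can be sketched as follows. This is a hypothetical simplification, not CLDR library code; the data layout and function names are invented for illustration:

```python
# Illustrative sketch of the stand-alone -> format context fallback described
# above. The dictionary layout is an assumed simplification of locale data.
calendar = {
    ("format", "wide"): {1: "January", 2: "February"},
    ("format", "abbreviated"): {1: "Jan", 2: "Feb"},
    # No stand-alone forms are present, so they inherit from the format context.
}

def month_name(month: int, context: str = "format", width: str = "wide") -> str:
    # Try the requested context first, then fall back to the format context.
    for ctx in (context, "format"):
        names = calendar.get((ctx, width))
        if names and month in names:
            return names[month]
    raise KeyError(f"no name for month {month}")

print(month_name(1, context="stand-alone", width="wide"))  # falls back to "January"
```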

The older monthNames, dayNames, monthAbbr, and dayAbbr elements are maintained for backwards compatibility. They are equivalent to using the months (or days) element with the context type="format" and the width type="wide" (for ...Names) or type="abbreviated" (for ...Abbr), respectively.


  <calendar type="gregorian">
      <default type="format"/>
      <monthContext type="format">
         <default type="wide"/>
         <monthWidth type="wide">
            <month type="1">January</month>
            <month type="2">February</month>
            <month type="11">November</month>
            <month type="12">December</month>
        <monthWidth type="abbreviated">
            <month type="1">Jan</month>
            <month type="2">Feb</month>
            <month type="11">Nov</month>
            <month type="12">Dec</month>
       <monthContext type="stand-alone">
         <default type="wide"/>
         <monthWidth type="wide">
            <month type="1">Januaria</month>
            <month type="2">Februaria</month>
            <month type="11">Novembria</month>
            <month type="12">Decembria</month>
        <monthWidth type="narrow">
            <month type="1">J</month>
            <month type="2">F</month>
            <month type="11">N</month>
            <month type="12">D</month>

      <default type="format"/>
      <dayContext type="format">
         <default type="wide"/>
         <dayWidth type="wide">
            <day type="sun">Sunday</day>
            <day type="mon">Monday</day>
            <day type="fri">Friday</day>
            <day type="sat">Saturday</day>
        <dayWidth type="abbreviated">
            <day type="sun">Sun</day>
            <day type="mon">Mon</day>
            <day type="fri">Fri</day>
            <day type="sat">Sat</day>
        <dayWidth type="narrow">
            <day type="sun">Su</day>
            <day type="mon">M</day>
            <day type="fri">F</day>
            <day type="sat">Sa</day>
      <dayContext type="stand-alone">
        <dayWidth type="narrow">
            <day type="sun">S</day>
            <day type="mon">M</day>
            <day type="fri">F</day>
            <day type="sat">S</day>
        <minDays count="1"/>
        <firstDay day="sun"/>
        <weekendStart day="fri" time="18:00"/>
        <weekendEnd day="sun" time="18:00"/>


        <era type="0">BC</era>
        <era type="1">AD</era>
        <era type="0">Before Christ</era>
        <era type="1">Anno Domini</era>
      <default type="medium"/>
      <dateFormatLength type="full">
          <pattern>EEEE, MMMM d, yyyy</pattern>
     <dateFormatLength type="medium">
       <default type="DateFormatsKey2"/>
       <dateFormat type="DateFormatsKey2">
        <pattern>MMM d, yyyy</pattern>
       <dateFormat type="DateFormatsKey3">
         <pattern>MMM dd, yyyy</pattern>
       <default type="medium"/>
       <timeFormatLength type="full">
           <displayName>DIN 5008 (EN 28601)</displayName>
           <pattern>h:mm:ss a z</pattern>
       <timeFormatLength type="medium">
           <pattern>h:mm:ss a</pattern>

       <default type="medium"/>
       <dateTimeFormatLength type="full">
            <pattern>{0} {1}</pattern>

  <calendar type="buddhist">
        <era type="0">BE</era>

Note: the weekendStart time defaults to "00:00:00" (midnight at the start of the day). The weekendEnd time defaults to "24:00:00" (midnight at the end of the day). (That is, Friday at 24:00:00 is the same time as Saturday at 00:00:00.) Thus the following are equivalent:

<weekendStart day="sat"/>
<weekendEnd day="sun"/>
<weekendStart day="sat" time="00:00"/>
<weekendEnd day="sun" time="24:00"/>
<weekendStart day="fri" time="24:00"/>
<weekendEnd day="mon" time="00:00"/>
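Why these pairs are equivalent can be checked by normalizing each day/time pair to a single point in the week. The sketch below is illustrative only; the day numbering is an assumption for the example, not part of the format:

```python
# A sketch of why the weekendStart/weekendEnd pairs above are equivalent:
# normalize each (day, "HH:MM") pair to minutes from the start of the week,
# so that 24:00 on one day and 00:00 on the next name the same instant.
DAYS = ["sun", "mon", "tue", "wed", "thu", "fri", "sat"]

def week_minutes(day: str, time: str = "00:00") -> int:
    hours, minutes = map(int, time.split(":"))
    total = DAYS.index(day) * 1440 + hours * 60 + minutes
    return total % (7 * 1440)   # Saturday 24:00 wraps around to Sunday 00:00

# Friday at 24:00 is the same time as Saturday at 00:00:
assert week_minutes("fri", "24:00") == week_minutes("sat", "00:00")
# The default start time (00:00) matches the explicit form:
assert week_minutes("sat") == week_minutes("sat", "00:00")
```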


The timezone IDs are language-independent, and follow the Olson data [Olson]. However, the display names for those IDs can vary by locale. The generic time is the so-called wall time: what clocks show when they are correctly switched from standard to daylight time at the mandated times of the year.

Note: The type field for each zone is the identification of that zone. It is not to be translated.

<zone type="America/Los_Angeles" >
        <generic>Pacific Time</generic>
        <standard>Pacific Standard Time</standard>
        <daylight>Pacific Daylight Time</daylight>
    <exemplarCity>San Francisco</exemplarCity>

<zone type="Europe/London">
        <generic>British Time</generic>
        <standard>British Standard Time</standard>
        <daylight>British Daylight Time</daylight>

Note: Transmitting "14:30" with no other context is incomplete unless it contains information about the time zone. Ideally one would transmit neutral-format date/time information, commonly in UTC, and localize as close to the user as possible. (For more about UTC, see [UTCInfo].)

The conversion from local time into UTC depends on the particular time zone rules, which will vary by location. The standard data used for converting local time (sometimes called wall time) to UTC and back is the Olson Data [Olson], used by UNIX, Java, ICU, and others. The data includes rules for matching the laws for time changes in different countries. For example, for the US it is:

"During the period commencing at 2 o'clock antemeridian on the first Sunday of April of each year and ending at 2 o'clock antemeridian on the last Sunday of October of each year, the standard time of each zone established by sections 261 to 264 of this title, as modified by section 265 of this title, shall be advanced one hour..." (United States Law - 15 U.S.C. §6(IX)(260-7)).

Each region that has a different timezone or daylight savings time rules, either now or at any time in the past, is given a unique internal ID, such as Europe/Paris. As with currency codes, these are internal codes that should be localized if exposed to a user (such as in the Windows Control Panels>Date/Time>Time Zone).

Unfortunately, laws change over time, and will continue to change in the future, both for the boundaries of timezone regions and the rules for daylight savings. Thus the Olson data is continually being augmented. Any two implementations using the same version of the Olson data will get the same results for the same IDs (assuming a correct implementation). However, if implementations use different versions of the data they may get different results. So if precise results are required then both the Olson ID and the Olson data version must be transmitted between the different implementations.

For more information, see [Data Formats].


The numbers element supplies information for formatting and parsing numbers and currencies. It has the following sub-elements: <symbols>, <decimalFormats>, <scientificFormats>, <percentFormats>, <currencyFormats>, and <currencies>. The data is based on the Java/ICU format. The currency IDs are from [ISO4217]. For more information, including the pattern structure, see [JavaNumbers].

Note: there is an on-line demonstration of number formatting and parsing at [LocaleExplorer] (pick the locale and scroll to "Number Patterns").

  <decimalFormatLength type="long">
  <default type="long"/>
  <scientificFormatLength type="long">
  <scientificFormatLength type="medium">
  <percentFormatLength type="long">
  <currencyFormatLength type="long">
      <pattern>¤ #,##0.00;(¤ #,##0.00)</pattern>
    <currency type="USD">
    <currency type ="JPY">
    <currency type ="INR">
        <symbol choice="true">0≤Rf|1≤Ru|1&lt;Rf</symbol>
    <currency type="PTE">

In formatting currencies, the currency number format is used with the appropriate symbol from <currencies>, according to the currency code. The <currencies> list can contain codes that are no longer in current use, such as PTE. The choice attribute can be used to indicate that the value uses a pattern which is to be interpreted as a Java ChoiceFormat [JavaChoice], with the 0 parameter being the numeric value.
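The choice pattern above can be interpreted along the lines of the following sketch, a simplified re-implementation in the spirit of Java's ChoiceFormat (not the Java class itself; the function name is invented for illustration):

```python
# A hedged sketch of interpreting a choice pattern: each "limit≤value" segment
# applies from its limit upward (inclusive), and "limit<value" applies strictly
# above the limit. With ascending limits, the last matching segment wins.
def choice_format(pattern: str, number: float) -> str:
    result = None
    for segment in pattern.split("|"):
        if "\u2264" in segment:                      # '≤' : limit <= number
            limit, value = segment.split("\u2264", 1)
            applies = number >= float(limit)
        else:                                        # '<' : limit <  number
            limit, value = segment.split("<", 1)
            applies = number > float(limit)
        if applies:
            result = value                           # last matching segment wins
    if result is None:
        raise ValueError("number below all limits")
    return result

pattern = "0\u2264Rf|1\u2264Ru|1<Rf"
print(choice_format(pattern, 0))   # Rf
print(choice_format(pattern, 1))   # Ru
print(choice_format(pattern, 2))   # Rf
```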

Currencies can also contain optional grouping, decimal data, and pattern elements. This data is inherited from the <symbols> in the same locale data (if not present in the chain up to root), so only the differing data will be present. See Multiple Inheritance.

Note: Currency values should never be interchanged without a known currency code. You never want the number 3.5 interpreted as $3.5 by one user and ¥3.5 by another. Locale data contains localization information for currencies, not a currency value for a country. A currency amount logically consists of a numeric value, plus an accompanying [ISO4217] currency code (or equivalent). The currency code may be implicit in a protocol, such as where USD is implicit. But if the raw numeric value is transmitted without any context, then it has no definitive interpretation.

Notice that the currency code is completely independent of the end-user's language or locale. For example, RUR is the code for Russian Rubles. A currency amount of <RUR, 1.23457×10³> would be localized for a Russian user into "1 234,57р." (using U+0440 (р) cyrillic small letter er). For an English user it would be localized into the string "Rub 1,234.57". The end-user's language is needed for this last localization step, but that language is completely orthogonal to the currency code needed in the data. After all, the same English user could be working with dozens of currencies. Notice also that the currency code is independent of whether currency values are inter-converted, which requires more interesting financial processing: the rate of conversion may depend on a variety of factors.

Thus logically speaking, once a currency amount is entered into a system, it should be logically accompanied by a currency code in all processing. This currency code is independent of whatever the user's original locale was. Only in badly-designed software is the currency code (or equivalent) not present, so that the software has to "guess" at the currency code based on the user's locale.
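The principle above can be sketched as follows: keep the ISO 4217 code with the value, and apply locale formatting only at display time. The class, function name, and symbol table below are invented stand-ins for real locale data:

```python
# Sketch of pairing a value with its ISO 4217 code throughout processing.
# The SYMBOLS table is assumed sample data, not actual CLDR content.
from dataclasses import dataclass

@dataclass(frozen=True)
class CurrencyAmount:
    value: float
    currency: str          # ISO 4217 code, independent of any locale

SYMBOLS = {("RUR", "en"): "Rub", ("RUR", "ru"): "\u0440."}

def localize(amount: CurrencyAmount, lang: str) -> str:
    symbol = SYMBOLS.get((amount.currency, lang), amount.currency)
    return f"{symbol} {amount.value:,.2f}"   # grouping/decimal rules simplified

print(localize(CurrencyAmount(1234.57, "RUR"), "en"))  # Rub 1,234.57
```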

Note: The number of decimal places and the rounding for each currency are not locale-specific data, and are not contained in the Locale Data Markup Language format. Those values override whatever is given in the currency numberFormat. For more information, see Supplemental Data.

For background information on currency names, see [CurrencyInfo].


The following are included for compatibility with POSIX.



This section contains one or more collation elements, distinguished by type. Each collation contains rules that specify a certain sort-order, as a tailoring of the UCA table defined in UTS #10: Unicode Collation Algorithm [UCA]. (For a chart view of the UCA, see Collation Chart [UCAChart].) This syntax is an XMLized version of the Java/ICU syntax. For illustration, the rules are accompanied by the corresponding basic ICU rule syntax [ICUCollation] (used in ICU and Java) and/or the ICU parameterizations, and the basic syntax may be used in examples.

Note: ICU provides a concise format for specifying orderings, based on tailorings to the UCA. For example, to specify that k and q follow 'c', one can use the rule: "& c < k < q". The rules also allow people to set default general parameter values, such as whether uppercase is before lowercase or not. (Java contains an earlier version of ICU, and has not been updated recently. It does not support any of the basic syntax marked with [...], and its default table is not the UCA.)

However, it is not necessary for ICU to be used in the underlying implementation. The features are simply related to the ICU capabilities, since that supplies more detailed examples. Note: there is an on-line demonstration of collation at [LocaleExplorer] (pick the locale and scroll to "Collation Rules").


The version attribute is used in case a specific version of the UCA is to be specified. It is optional, and is specified if the results are to be identical on different systems. If it is not supplied, then the version is assumed to be the same as the Unicode version for the system as a whole.

Note: For version 3.1.1 of the UCA, the version of Unicode must also be specified with any versioning information; an example would be "3.1.1/3.2" for version 3.1.1 of the UCA, for version 3.2 of Unicode. This has been changed by decision of the UTC, so that it will no longer be necessary as of UCA 4.0. So for 4.0 and beyond, the version just has a single number.


Like the ICU rules, the tailoring syntax is designed to be independent of the actual weights used in any particular UCA table. That way the same rules can be applied to UCA versions over time, even if the underlying weights change. The following describes the overall document structure of a collation:

 <settings caseLevel="on"/>
  <!-- rules go here -->

The optional base element <base>...</base>, contains an alias element that points to another data source that defines a base collation. If present, it indicates that the settings and rules in the collation are modifications applied on top of the respective elements in the base collation. That is, any successive settings, where present, override what is in the base as described in Setting Options. Any successive rules are concatenated to the end of the rules in the base. The results of multiple rules applying to the same characters is covered in Orderings.

Setting Options

In XML, these are attributes of <settings>. For example, <settings strength="secondary"/> will cause strings to be compared using only their primary and secondary weights.

If the attribute is not present, the default is used (or, if there is a base collation, the value of its attribute). The default is listed in italics.

Collation Settings
Attribute | Options | Basic Example | XML Example | Description
strength | primary (1), secondary (2), tertiary (3), quaternary (4), identical (5) | [strength 1] | strength="primary" | Sets the default strength for comparison, as described in the UCA.
alternate | non-ignorable, shifted | [alternate non-ignorable] | alternate="non-ignorable" | Sets alternate handling for variable weights, as described in the UCA.
backwards | on, off | [backwards 2] | backwards="on" | Sets the comparison for the second level to be backwards ("French"), as described in the UCA.
normalization | on, off | [normalization on] | normalization="off" | If on, the normal UCA algorithm is used. If off, all strings that are in [FCD] will sort correctly, but others won't; so it should only be set to off if the strings to be compared are in FCD.
caseLevel | on, off | [caseLevel on] | caseLevel="off" | If set to on, a level consisting only of case characteristics is inserted in front of the tertiary level. To ignore accents but take case into account, set strength to primary and caseLevel to on.
caseFirst | upper, lower, off | [caseFirst off] | caseFirst="off" | If set to upper, upper case sorts before lower case; if set to lower, lower case sorts before upper case. Useful for locales that already have an established ordering but require a different order of cases. Affects the case and tertiary levels.
hiraganaQ | on, off | [hiraganaQ on] | hiraganaQuarternary="on" | Controls special treatment of Hiragana code points on the quaternary level. If turned on, Hiragana code points get lower values than all other non-variable code points. The strength must be quaternary or greater for this attribute to take effect.
numeric | on, off | [numeric on] | numeric="on" | If set to on, any sequence of Decimal Digits (General_Category = Nd in the [UCD]) is sorted at a primary level with its numeric value. For example, "A-21" < "A-123".
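The effect of the numeric option can be sketched with a simple key function. This is an illustration of the idea, not the actual collation implementation, and is limited to ASCII digits:

```python
# Sketch of the "numeric" option: replace each run of decimal digits with its
# numeric value in the sort key, so that value, not code point order, decides
# the comparison. Simplified to ASCII digits for illustration.
import re

def numeric_key(s: str):
    parts = re.split(r"(\d+)", s)
    # Tag each part so digit runs compare as integers and text as strings.
    return [(1, int(p)) if p.isdigit() else (0, p) for p in parts]

assert numeric_key("A-21") < numeric_key("A-123")   # 21 < 123, despite "2" > "1"
print(sorted(["A-123", "A-21"], key=numeric_key))   # numeric order
```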

Collation Rule Syntax

The goal for the collation rule syntax is to have clearly expressed rules in a concise format that parallels the basic syntax as much as possible. The rule syntax uses abbreviated element names for primary (level 1), secondary (level 2), tertiary (level 3), and identical, to be as short as possible. The reason is that the tailorings for CJK characters are quite large (tens of thousands of elements), and the extra overhead would otherwise be considerable. Other elements and attributes do not occur as frequently, and have longer names.

Note: The rules are stated in terms of actions that cause characters to change their ordering relative to other characters. This is for stability; assigning characters specific weights would not work, since the exact weight assignment in UCA (or ISO 14651) is not required for conformance -- only the relative ordering of the weights. In addition, stating rules in terms of relative order is much less sensitive to changes over time in the UCA itself.


The following are the normal ordering actions used for the bulk of characters. Each rule contains a string of ordered characters that starts with an anchor point, or reset value. The reset value is an absolute point in the UCA that determines the ordering of other characters. For example, the rule & a < g places "g" after "a" in the tailored UCA; the "a" does not change place. Logically, each subsequent relation after a reset indicates a change to the ordering (and comparison strength) of the characters in the UCA. For example, the UCA has the following sequence (abbreviated for illustration):

... a <3 a <3 ⓐ <3 A <3 A <3 Ⓐ <3 ª <2 á <3 Á <1 æ <3 Æ <1 ɐ <1 ɑ <1 ɒ <1 b <3 b <3 ⓑ <3 B <3 B <3 ℬ ...

Whenever a character is inserted into the UCA sequence, it is inserted at the first point where the strength difference will not disturb the other characters in the UCA. For example, & a < g puts g in the above sequence with a strength of L1. Thus the g must go in after any lower strengths,  as follows:

... a <3 a <3 ⓐ <3 A <3 A <3 Ⓐ <3 ª <2 á <3 Á <1 g <1 æ <3 Æ <1 ɐ <1 ɑ <1 ɒ <1 b <3 b <3 ⓑ <3 B <3 B <3 ℬ ...

The rule & a << g, which uses a level-2 strength, would produce the following sequence:

... a <3 a <3 ⓐ <3 A <3 A <3 Ⓐ <3 ª <2 g <2 á <3 Á <1 æ <3 Æ <1 ɐ <1 ɑ <1 ɒ <1 b <3 b <3 ⓑ <3 B <3 B <3 ℬ ...

And the rule & a <<< g, which uses a level-3 strength, would produce the following sequence:

... a <3 g <3 a <3 ⓐ <3 A <3 A <3 Ⓐ <3 ª <2 á <3 Á <1 æ <3 Æ <1 ɐ <1 ɑ <1 ɒ <1 b <3 b <3 ⓑ <3 B <3 B <3 ℬ ...
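The insertion behavior in these examples can be modeled with a short sketch. This is illustrative only: the tailored sequence is represented as (strength, character) pairs, where strength is the level of difference from the preceding character (1 = primary, 2 = secondary, 3 = tertiary), and the function name is invented:

```python
# Illustrative model of inserting a tailored character at the first point
# where the strength difference does not disturb the other characters.
def tailor(seq, anchor, strength, ch):
    i = next(k for k, (_, c) in enumerate(seq) if c == anchor) + 1
    # Skip entries whose difference is weaker (a higher level number) than the
    # new relation; the new character must come after all of them.
    while i < len(seq) and seq[i][0] > strength:
        i += 1
    return seq[:i] + [(strength, ch)] + seq[i:]

# Abbreviated stand-in for the UCA fragment above: a, ⓐ, á, æ, b.
uca = [(0, "a"), (3, "\u24d0"), (2, "\u00e1"), (1, "\u00e6"), (1, "b")]
print(tailor(uca, "a", 1, "g"))   # & a < g   : g lands just before æ
print(tailor(uca, "a", 3, "g"))   # & a <<< g : g lands immediately after a
```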

Since resets always work on the existing state, the rule entries must be in the proper order. A character or sequence may occur multiple times; each subsequent occurrence causes a different change. The following shows the result of serially applying three rules.






& a < g

... a <1 g ...

Put g after a.


& a < h < k

... a <1 h <1 k <1 g ...

Now put h and k after a (inserting before the g).


& h << g

... a <1 h <1 g <1 k ...

Now put g after h (inserting before k).

Notice that characters can occur multiple times, and thus override previous rules.

Except for the case of expansion sequence syntax, every sequence after a reset is equivalent in action to breaking up the sequence into atomic rules: reset + relation pairs. The tailoring is then equivalent to applying each of the atomic rules to the UCA in order, according to the above description.



Equivalent Atomic Rules

Rules:
& b < q <<< Q
& a < x <<< X << q <<< Q < z

Atomic Rules:
& b < q
& q <<< Q
& a < x
& x <<< X
& X << q
& q <<< Q
& Q < z

In the case of expansion sequence syntax, the equivalent atomic sequence can be derived by first transforming the expansion sequence syntax into normal expansion syntax. (See Expansions.)
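The decomposition into atomic rules can be sketched mechanically. This is a toy parser for illustration only; it handles plain reset-plus-relation chains, with no expansions, contractions, or logical positions:

```python
# Sketch: break a rule chain into atomic reset + relation pairs, as in the
# "Equivalent Atomic Rules" table above.
import re

def atomic_rules(rule: str):
    tokens = re.findall(r"(&|<<<|<<|<|=)\s*(\S+)", rule)
    assert tokens and tokens[0][0] == "&", "rule must start with a reset"
    prev = tokens[0][1]
    out = []
    for op, ch in tokens[1:]:
        out.append(f"& {prev} {op} {ch}")
        prev = ch          # each relation resets to the character just placed
    return out

print(atomic_rules("& a < x <<< X << q <<< Q < z"))
```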

Specifying Collation Ordering
Basic Symbol | Basic Example | XML Symbol | XML Example | Description
& | & Z | <reset> | <reset>Z</reset> | Don't change the ordering of Z, but place subsequent characters relative to it.
< | & a < b | <p> | <reset>a</reset><p>b</p> | Make 'b' sort after 'a', as a primary (base-character) difference.
<< | & a << ä | <s> | <reset>a</reset><s>ä</s> | Make 'ä' sort after 'a', as a secondary (accent) difference.
<<< | & a <<< A | <t> | <reset>a</reset><t>A</t> | Make 'A' sort after 'a', as a tertiary (case/variant) difference.
= | & x = y | <i> | <reset>x</reset><i>y</i> | Make 'y' sort identically to 'x'.

Resets only need to be at the start of a sequence, to position the characters relative to a character that is in the UCA (or has already occurred in the tailoring). For example: <reset>z</reset><p>a</p><p>b</p><p>c</p><p>d</p>.

Some additional elements are provided to save space with large tailorings. The addition of a 'c' to the element name indicates that each character in the element's contents is to be handled as if it were a separate element with the corresponding strength:

Abbreviating Ordering Specifications
XML Symbol XML Example Equivalent
<pc> <pc>bcd</pc> <p>b</p><p>c</p><p>d</p>
<sc> <sc>àáâã</sc> <s>à</s><s>á</s><s>â</s><s>ã</s>
<tc> <tc>PpP</tc> <t>P</t><t>p</t><t>P</t>
<ic> <ic>VwW</ic> <i>V</i><i>w</i><i>W</i>
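The abbreviation can be sketched as a simple expansion. XML generation is simplified to string handling for illustration, and the function name is invented:

```python
# Sketch of the <pc>/<sc>/<tc>/<ic> abbreviation: expand each character of the
# element's content into its own single-character element.
def expand_abbreviation(tag: str, content: str) -> str:
    inner = tag[0]                     # 'pc' -> 'p', 'sc' -> 's', etc.
    return "".join(f"<{inner}>{ch}</{inner}>" for ch in content)

print(expand_abbreviation("pc", "bcd"))   # <p>b</p><p>c</p><p>d</p>
```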


To sort a sequence as a single item (contraction), just use the sequence, e.g.

Specifying Contractions
Basic Example | XML Example | Description
& k < ch | <reset>k</reset><p>ch</p> | Make the sequence 'ch' sort after 'k', as a primary (base-character) difference.


There are two ways to handle expansions (where a character sorts as a sequence), in both the basic syntax and the XML syntax. The first method is to reset to the sequence of characters; this is called the sequence expansion syntax. The second is to use an extension sequence; this is called the normal expansion syntax. The two are equivalent in practice (unless the reset sequence happens to be a contraction).

Specifying Expansions
Basic | XML | Description
& c << k / h | <x><s>k</s><extend>h</extend></x> | Normal expansion syntax: make 'k' sort after the sequence 'ch'; thus 'k' will behave as if it expands to a character after 'c' followed by an 'h'.
& ch << k | <reset>ch</reset><s>k</s> | Sequence expansion syntax: same effect as above (unless 'ch' is defined beforehand as a contraction).

If an <extend> element is necessary, it requires the rule to be embedded in an <x> element.

The sequence expansion syntax can be quite tricky, so it should be avoided where possible. In particular:

When expressing rules as atomic rules, the sequences must first be transformed into normal expansion syntax:

Expansion Sequence:
& ab << q <<< Q
& ad <<< AD < x <<< X

Normal Expansion:
& a << q / b <<< Q / b
& a <<< AD / d < x <<< X

Equivalent Atomic Rules:
& a << q / b
& q <<< Q / b
& a <<< AD / d
& AD < x
& x <<< X

Context Before

The context before a character can affect how it is ordered, such as in Japanese. This could be expressed with a combination of contractions and expansions, but is faster using a context. (The actual weights produced are different, but the resulting string comparisons are the same.) If a context element occurs, it must be the first item in the rule.

Specifying Previous Context
Basic XML
& [before 3] ァ
<<< ァ|ー
= ァ|ー
= ぁ|ー
<reset before="tertiary">ァ</reset>

If an <extend> element is necessary, it requires the rule to be embedded in an <x> element. There can also be a <context> at the same time. For example, the following are allowed:

Placing Characters Before Others

There are certain circumstances where characters need to be placed before a given character, rather than after. This is the case with Pinyin, for example, where certain accented letters are positioned before the base letter. That is accomplished with the following syntax.

Placing Characters Before Others
Item Options Basic Example   XML Example
before  primary
& [before 2] a
<< à
<reset before="secondary">a</reset>

It is an error if the strength of the before attribute does not match the strength of the relation after the reset. Thus the following are errors, since the value of the before attribute does not agree with the relation <s>:

Basic Example   XML Example
& [before 2] a
< à
<reset before="primary">a</reset>
& [before 2] a
<<< à
<reset before="tertiary">a</reset>

Logical Reset Positions

The UCA has the following overall structure for weights, going from low to high.

Specifying Logical Positions
Name | Description | UCA Examples
first tertiary ignorable / last tertiary ignorable | p, s, t = ignore | Control codes, format characters, Hebrew points, Tibetan signs
first secondary ignorable / last secondary ignorable | p, s = ignore | None in UCA
first primary ignorable / last primary ignorable | p = ignore | Most combining marks
first variable / last variable | if alternate = non-ignorable: p != ignore; if alternate = shifted: p, s, t = ignore |
first non-ignorable / last non-ignorable | p != ignore | Small number of exceptional symbols
implicits | p != ignore, assigned automatically | CJK, CJK compatibility (those that are not decomposed), CJK Extension A, B
first trailing / last trailing | p != ignore, used for trailing syllable components | Jamo Trailing, Jamo Leading

Each of the above Names (except implicits) can be used with a reset to position characters relative to that logical position. That allows characters to be ordered before or after a logical position rather than a specific character.

Note: The reason for this is so that tailorings can be more stable. A future version of the UCA might add characters at any point in the above list. Suppose that you set character X to be after Y. It could be that you want X to come after Y, no matter what future characters are added; or it could be that you just want X to come after a given logical position, e.g. after the last primary ignorable.

Here is an example of the syntax:

Sample Logical Position
Basic XML
& [first tertiary ignorable]
<< à 

For example, to make a character be a secondary ignorable, one can make it be immediately after (at a secondary level) a specific character (like a combining dieresis), or one can make it be immediately after the last secondary ignorable.

The last-variable element indicates the "highest" character that is treated as punctuation with alternate handling. Unlike the other logical positions, it can be reset as well as referenced. For example, it can be reset to be just above spaces if all visible punctuation is to be treated as having distinct primary values.

Specifying Last-Variable
Attribute Options Basic Example   XML Example
variableTop at & x
= [last variable]
after & x
< [last variable]
before & [before 1] x
< [last variable]
<reset before="primary">x</reset>

The default value for variable-top depends on the UCA setting. For example, in 3.1.1, the value is at:


The <last_variable/> cannot occur inside an <x> element, nor can there be any element content. Thus there can be no <context> or <extend> or text data in the rule. For example, the following are all disallowed:

Special-Purpose Commands

The suppress contractions tailoring command turns off any existing contractions that begin with those characters. It is typically used to turn off the Cyrillic contractions in the UCA, since they are not used in many languages and have a considerable performance penalty. The argument is a Unicode Set.

The optimize tailoring command is purely for performance. It indicates that those characters are sufficiently common in the target language for the tailoring that their performance should be enhanced.

Special-Purpose Commands
Basic XML
[suppress contractions [Љ-ґ]] <suppress_contractions>[Љ-ґ]</suppress_contractions>
[optimize [Ά-ώ]] <optimize>[Ά-ώ]</optimize>

The reason that these are not settings is so that their contents can be arbitrary characters.

Example Collation

The following is a simple example that takes portions of the Swedish tailoring plus part of a Japanese tailoring, for illustration. For more complete examples, see the actual locale data: Japanese, Chinese, Swedish, Traditional German are particularly illustrative.

<collation version="3.1.1">
  <settings caseLevel="on"/>
        <!-- following is equivalent to <p>亜</p><p>唖</p><p>娃</p>... -->

Appendix A: Sample Special Elements

The elements in this section are not part of the Locale Data Markup Language 1.0 specification. Instead, they are special elements used for application-specific data to be stored in the Common Locale Repository. They may change or be removed in future versions of this document, and are present here more as examples of how to extend the format. (Some of these items may move into a future version of the Locale Data Markup Language specification.)

These DTDs use namespaces and the special element. To include one or more, use the following pattern to import the special DTDs that are used in the file:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE ldml SYSTEM "http://www.unicode.org/cldr/dtd/1.1/ldml.dtd" [
    <!ENTITY % icu SYSTEM "http://www.unicode.org/cldr/dtd/1.1/ldmlICU.dtd">
    <!ENTITY % openOffice SYSTEM "http://www.unicode.org/cldr/dtd/1.1/ldmlOpenOffice.dtd">

Thus to include just the ICU DTD, one uses:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE ldml SYSTEM "http://www.unicode.org/cldr/dtd/1.1/ldml.dtd" [
    <!ENTITY % icu SYSTEM "http://www.unicode.org/cldr/dtd/1.1/ldmlICU.dtd">

Note: A previous version of this document contained a special element for ISO TR 14652 compatibility data. That element has been withdrawn, pending further investigation. Warning: 14652 is a Type 1 TR: "when the required support cannot be obtained for the publication of an International Standard, despite repeated effort". See the ballot comments on 14652 Comments for details on the 14652 defects. For example, most of these patterns make little provision for substantial changes in format when elements are empty, so are not particularly useful in practice. Compare, for example, the mail-merge capabilities of production software such as Microsoft Word or OpenOffice.


There are three main areas where ICU has capabilities that go beyond what is shown above.


The rule-based number format (RBNF) encapsulates a set of rules for mapping binary numbers to and from a readable representation. They are typically used for spelling out numbers, but can also be used for other number systems like roman numerals, or for ordinal numbers (1st, 2nd, 3rd,...). The rules are fairly sophisticated; for details see Rule-Based Number Formatter [RBNF].


    <special xmlns:icu="http://oss.software.ibm.com/icu/">
            <icu:ruleBasedNumberFormat type="spellout">
                    and =%default=;
                    100: =%default=;
                    ' and =%default=;
                    100: , =%default=;
                    1000: ,
            <icu:ruleBasedNumberFormat type="ordinal">
                    th; st; nd; rd; th;
                    20: &gt;&gt;;
                    100: &gt;&gt;;
            <icu:ruleBasedNumberFormat type="duration">
                0 seconds; 1 second; =0= seconds;
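As an illustration of what an "ordinal" rule set of this kind produces, the idea can be re-implemented in plain Python. This is a sketch of the output behavior only, not the ICU rule engine:

```python
# Toy sketch of ordinal formatting: choose a suffix by the trailing digits,
# with the 11-13 exception handled explicitly.
def ordinal(n: int) -> str:
    if 10 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print([ordinal(n) for n in (1, 2, 3, 4, 11, 21, 103)])
```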


Boundaries provide rules for grapheme-cluster ("user-character"), word, line, and sentence breaks. This format is the Java/ICU syntax, at the top level. For a description of that, see Rule-Based Break Iterator [RBBI]. The enclosing special element is a sub-element of <ldml>.

    <special xmlns:icu="http://oss.software.ibm.com/icu/">
            <!-- Boundary rules.
            Selected samples are given with no attempt to make them work.
            This format is the Java/ICU syntax, at the top level.
            For real data, see http://oss.software.ibm.com/developerworks/opensource/cvs/icu4j
            in BreakIteratorRules.java
            displayName attributes removed for now
            <icu:grapheme type="RuleBased" append="true">
                <!-- in addition to the normal rules, treat CH and RR as graphemes. -->
            <icu:word type="Dictionary" import="thaiDict.dat" >
                <!-- When doing Thai word break, check the normal word break rules first. -->
                $digit [[:Pd:]&#xAD;&#x2027;&apos;.]


There may be language-specific transformations, typically used in locale data for transliterations. Such transformations require far more than a simple list of matching characters, since the matches are highly context-sensitive. Each such transform is supplied in a <transform> element, whose content is a list of rules as described in the ICU documentation for [ICUTransforms]. The enclosing special element is a sub-element of <ldml>. The type value is either a script name (long or short), a locale id, or a pair of these separated by "-".

Note: there is an on-line demonstration of transforms at [ICUTransforms].

Example: The following is an abbreviated example for Greek to Latin and back, in a Greek locale. The target value can be a script ID or a locale ID.

    <special xmlns:icu="http://oss.software.ibm.com/icu/">
            <icu:transform type="Latin">
                # variables
                $gammaLike = [ΓΚΞΧγκξχϰ] ;
                ::NFD (NFC) ; # convert everything to decomposed for simplicity
                α ↔ a ;   Α ↔ A ;
                β ↔ v ;   Β ↔ V ;
                γ } $gammaLike ↔ n } $egammaLike ; # contextual transform
                Γ } $gammaLike ↔ N } $egammaLike ; # contextual transform
                γ ↔ g ;   Γ ↔ G ;
                δ ↔ d ;   Δ ↔ D ;
                ε ↔ e ;   Ε ↔ E ;
                ζ ↔ z ;   Ζ ↔ Z ;
                Θ } $beforeLower ↔ Th ; # contextual transform
                θ ↔ th ;  Θ ↔ TH ;
                ι ↔ i ;   Ι ↔ I ;
                κ ↔ k ;   Κ ↔ K ;
                λ ↔ l ;   Λ ↔ L ;
                μ ↔ m ;   Μ ↔ M ;
                ν } $gammaLike → n\' ; # contextual transform
                Ν } $gammaLike ↔ N\' ; # contextual transform
                ν ↔ n ;   Ν ↔ N ;
                ::NFC (NFD) ; # convert back to composed
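A minimal sketch of how such contextual rules behave, covering only a handful of the mappings above in the forward direction (the character table and helper are our own simplification, not the ICU transform engine):

```python
import unicodedata

# $gammaLike from the rules above; a following gamma-like letter changes
# how γ/Γ transliterate (γγ -> "ng", as in άγγελος).
GAMMA_LIKE = set("ΓΚΞΧγκξχϰ")
PLAIN = {"α": "a", "Α": "A", "β": "v", "Β": "V", "γ": "g", "Γ": "G",
         "δ": "d", "Δ": "D", "ε": "e", "Ε": "E", "κ": "k", "Κ": "K"}

def greek_to_latin(text: str) -> str:
    text = unicodedata.normalize("NFD", text)  # ::NFD, as the rules start
    out = []
    for i, ch in enumerate(text):
        nxt = text[i + 1] if i + 1 < len(text) else ""
        if ch == "γ" and nxt in GAMMA_LIKE:    # γ } $gammaLike -> n
            out.append("n")
        elif ch == "Γ" and nxt in GAMMA_LIKE:  # Γ } $gammaLike -> N
            out.append("N")
        else:
            out.append(PLAIN.get(ch, ch))      # unmapped characters pass through
    return unicodedata.normalize("NFC", "".join(out))  # ::NFC at the end
```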


A number of the elements above can carry extra information for OpenOffice.org, as in the following example:

    <special xmlns:openOffice="http://www.openoffice.org">

Appendix B: Transmitting Locale Information

In a world of on-demand software components, with arbitrary connections between those components, it is important to get a sense of where localization should be done, and how to transmit enough information so that it can be done at that appropriate place. End-users need to get messages localized to their languages, messages that not only contain a translation of text, but also contain variables such as date, time, number formats, and currencies formatted according to the users' conventions. The strategy for doing the so-called JIT localization is made up of two parts:

  1. Store and transmit neutral-format data wherever possible.
    • Neutral-format data is data that is kept in a standard format, no matter what the local user's environment is. Neutral-format is also (loosely) called binary data, even though it actually could be represented in many different ways, including a textual representation such as in XML.
    • Such data should use accepted standards where possible, such as for currency codes.
    • Textual data should also be in a uniform character set (Unicode/10646) to avoid possible data corruption problems when converting between encodings.
  2. Localize that data as "close" to the end-user as possible.

There are a number of advantages to this strategy. The longer the data is kept in a neutral format, the more flexible the entire system is. On a practical level, if transmitted data is neutral-format, then it is much easier to manipulate the data, debug the processing of the data, and maintain the software connections between components.

Once data has been localized into a given language, it can be quite difficult to programmatically convert that data into another format, if required. This is especially true if the data contains a mixture of translated text and formatted variables. Once information has been localized into, say, Romanian, it is much more difficult to localize that data into, say, French. Parsing is more difficult than formatting, and may run up against different ambiguities in interpreting text that has been localized, even if the original translated message text is available (which it may not be).

Moreover, the closer we are to the end-user, the more we know about that user's preferred formats. If we format dates, for example, at the user's machine, then it can easily take into account any customizations that the user has specified. If the formatting is done elsewhere, either we have to transmit whatever user customizations are in play, or we only transmit the user's locale code, which may only approximate the desired format. Thus the closer the localization is to the end user, the less we need to ship all of the user's preferences around to all the places where localization could possibly be done.

Even though localization should be done as close to the end-user as possible, there will be cases where different components need to be aware of whatever settings are appropriate for doing the localization. Thus information such as a locale code or timezone needs to be communicated between different components.

Message Formatting and Exceptions

Windows (FormatMessage, String.Format), Java (MessageFormat) and ICU (MessageFormat, umsg) all provide methods of formatting variables (dates, times, etc.) and inserting them at arbitrary positions in a string. This avoids the manual string concatenation that causes severe problems for localization. The question is where to do this. It is an especially important question because the code site that originates a particular message may be deep in the bowels of a component, with the message passed up to the top of the component in an exception. We will take that case as representative of this class of issues.

There are circumstances where the message can be communicated with a language-neutral code, such as a numeric error code or mnemonic string key, that is understood outside of the component. If there are arguments that need to accompany that message, such as a number of files or a datetime, those need to accompany the numeric code so that when the localization is finally done, the full information can be presented to the end-user. This is the best case for localization.

More often, the exact messages that could originate from within the component are not known outside of the component itself; or at least they may not be known by the component that is finally displaying text to the user. In such a case, the information as to the user's locale needs to be communicated in some way to the component that is doing the localization. That locale information does not necessarily need to be communicated deep within the component; ideally, any exceptions should bundle up some language-neutral message ID, plus the arguments needed to format the message (e.g. datetime), but not do the localization at the throw site. This approach has the advantages noted above for JIT localization.

In addition, exceptions are often caught at a higher level; they don't end up being displayed to any end-user at all. By avoiding localization at the throw site, we avoid the cost of formatting when that formatting is not really necessary. In fact, in many running programs most of the exceptions that are thrown at a low level never end up being presented to an end-user, so this can have considerable performance benefits.

Appendix C: Supplemental Data

The following represents the format for supplemental information. This is information that is important for proper formatting, but is not contained in the locale hierarchy. It is not localizable, nor is it overridden by locale data. It uses the following format, where the data here is solely for illustration:

    <currencyData>
      <fractions>
        <info iso4217="CHF" rounding="5"/>
        <info iso4217="ITL" digits="0"/>
        <info iso4217="FOO" digits="0" rounding="5"/>
      </fractions>
      <region iso3166="IT"> <!-- Italy -->
        <currency iso4217="EUR"/>
        <currency iso4217="EUR" before="2002-01-01">
          <alternate iso4217="ITL"/>
        </currency>
        <currency iso4217="ITL" before="2000-01-01"/>
      </region>
      <region iso3166="ET"> <!-- Ethiopia -->
        <currency iso4217="ITL" before="1945-03-01"/>
      </region>
      <region iso3166="DE"> <!-- Germany -->
        <currency iso4217="EUR"/>
      </region>
      <region iso3166="US"> <!-- USA -->
        <currency iso4217="USD"/>
      </region>
      <region iso3166="EC"> <!-- Ecuador -->
        <currency iso4217="USD"/>
        <currency iso4217="ECS" before="2000-01-01"/>
      </region>
      <region iso3166="CH"> <!-- Switzerland -->
        <currency iso4217="CHF"/>
      </region>
    </currencyData>

The only data currently represented is currency data. Each currencyData element contains one fractions element followed by one or more region elements. The fractions element contains any number of info elements, with the following attributes:

Each region element contains one attribute:

And can have any number of currency elements, with the following attributes. (Each currency element can also contain zero or more alternate elements. These are a list of alternate currencies, in preference order.)

Each before value governs the time up to the previous before value. That is, suppose that we have the following data for the region code R:

  <region iso3166="R">
    <currency iso4217="C01" before="1942"/>
    <currency iso4217="C02"/>
    <currency iso4217="C03" before="1927"/>
    <currency iso4217="none" before="1937-02-13"/>
  </region>

Logically, the currency elements are treated in sorted order, according to the before value. The default value for the before element is logically +∞. This results in the following mapping for region R, using a set of half-open intervals:


Currency   Condition (based on time t)
C02        1942-01-01 00:00:00 GMT ≤ t
C01        1937-02-13 00:00:00 GMT ≤ t < 1942-01-01 00:00:00 GMT
none       1927-01-01 00:00:00 GMT ≤ t < 1937-02-13 00:00:00 GMT
C03        t < 1927-01-01 00:00:00 GMT

Open issue: In the future, we should supply information for mapping locales to a normalized version, thus en_Latin_US would normalize to en_US.

Appendix D: Language and Locale IDs

People have very slippery notions of what distinguishes a language code vs. a locale code. The problem is that both are somewhat nebulous concepts.

In practice, many people use [RFC3066] codes to mean locale codes instead of strictly language codes. It is easy to see why this came about; because [RFC3066] includes an explicit region (territory) code, for most people it was sufficient for use as a locale code as well. For example, when typical web software receives an [RFC3066] code, it will use it as a locale code. Other typical software will do the same: in practice, language codes and locale codes are treated interchangeably. Some people recommend distinguishing on the basis of "-" vs "_" (e.g. zh-TW for language code, zh_TW for locale code), but in practice that does not work because of the free variation out in the world in the use of these separators. Notice that Windows, for example, uses "-" as a separator in its locale codes. So pragmatically one is forced to treat "-" and "_" as equivalent when interpreting either one on input.

Another reason for the conflation of these codes is that very little data in most systems is distinguished by region alone; currency codes and measurement systems being some of the few. Sometimes date or number formats are mentioned as regional, but that really doesn't make much sense. If people see the sentence "You will have to adjust the value to १,२३४.५६७ from ૭૧,૨૩૪.૫૬" (using Indic digits), they would say that sentence is simply not English. Number format is far more closely associated with language than it is with region. The same is true for date formats: people would never expect to see intermixed a date in the format "2003年4月1日" (using Kanji) in text purporting to be purely English. There are regional differences in date and number format — differences which can be important — but those are different in kind than other language differences between regions.

As far as we are concerned — as a completely practical matter — two languages are different if they require substantially different localized resources. Distinctions according to spoken form are important in some contexts, but the written form is by far and away the most important issue for data interchange. Unfortunately, this is not the principle used in [ISO639], which has the fairly unproductive notion (for data interchange) that only spoken language matters (it is also not completely consistent about this, however).

[RFC3066] can express a difference if the use of written languages happens to correspond to region boundaries expressed as [ISO3166] region codes, and it has recently added codes that allow it to express some important cases not distinguished by [ISO3166] codes. These include simplified and traditional Chinese (both used in Hong Kong S.A.R.); Serbian in Latin script; Azeri and Uzbek in both Cyrillic and Latin scripts; and Azeri in Arabic script.

Notice also that currency codes are different from currency localizations. The currency localizations should normally be in the language-based resource bundles, not in the territory-based resource bundles. Thus, the resource bundle en contains the localized mappings in English for a range of different currency codes: USD => $, RUR => Rub, etc. (In protocols, the currency codes should always accompany any currency amounts; otherwise the data is ambiguous, and software is forced to use the user's territory to guess at the currency. For some informal discussion of this, see JIT Localization.)

Written Language

Criteria for what makes a written language should be purely pragmatic; what would copy-editors say? If one gave them text like the following, they would respond that it is far from acceptable English for publication, and ask for it to be redone:

  1. "Theatre Center News: The date of the last version of this document was 2003年3月20日. A copy can be obtained for $50,0 or 1.234,57 грн. We would like to acknowledge contributions by the following authors (in alphabetical order): Alaa Ghoneim, Behdad Esfahbod, Ahmed Talaat, Eric Mader, Asmus Freytag, Avery Bishop, and Doug Felt."

So one would change it to either B or C below, depending on which orthographic variant of English was the target for the publication:

  1. "Theater Center News: The date of the last version of this document was 3/20/2003. A copy can be obtained for $50.00 or 1,234.57 Ukrainian Hryvni. We would like to acknowledge contributions by the following authors (in alphabetical order): Alaa Ghoneim, Ahmed Talaat, Asmus Freytag, Avery Bishop, Behdad Esfahbod, Doug Felt, Eric Mader."

  2. "Theatre Centre News: The date of the last version of this document was 20/3/2003. A copy can be obtained for $50.00 or 1,234.57 Ukrainian Hryvni. We would like to acknowledge contributions by the following authors (in alphabetical order): Alaa Ghoneim, Ahmed Talaat, Asmus Freytag, Avery Bishop, Behdad Esfahbod, Doug Felt, Eric Mader."

Clearly there are many acceptable variations on this text. For example, copy editors might still quibble with the use of first vs. last name sorting in the list, but clearly the first list was not acceptable English alphabetical order. And in quoting a name, like "Theatre Centre News", one may leave it in the source orthography even if it differs from the publication target orthography. And so on. However, just as clearly, there are limits on what is acceptable English, and "2003年3月20日", for example, is not.

Appendix E: Unicode Sets

A UnicodeSet is a set of Unicode characters determined by a pattern, following (proposed) UTS #18: Unicode Regular Expressions [URegex]. For a concrete implementation of this, see [ICUUnicodeSet].

Patterns are a series of characters bounded by square brackets that contain lists of characters and Unicode property sets. Lists are a sequence of characters that may have ranges indicated by a '-' between two characters, as in "a-z". The sequence specifies the range of all characters from the left to the right, in Unicode order. For example, [a c d-f m] is equivalent to [a c d e f m]. Whitespace can be freely used for clarity as [a c d-f m] means the same as [acd-fm].
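A sketch of how such list patterns expand, as a toy parser handling only literal characters, whitespace, and simple ranges (not the full UnicodeSet syntax):

```python
def expand(pattern: str) -> set:
    """Expand a simple bracketed list pattern like "[a c d-f m]" into a
    set of characters; whitespace is ignored, '-' denotes a range."""
    body = pattern.strip()[1:-1].replace(" ", "")
    chars, i = set(), 0
    while i < len(body):
        if i + 2 < len(body) and body[i + 1] == "-":
            lo, hi = body[i], body[i + 2]          # range in Unicode order
            chars.update(chr(c) for c in range(ord(lo), ord(hi) + 1))
            i += 3
        else:
            chars.add(body[i])                     # single literal character
            i += 1
    return chars
```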

Unicode property sets are specified by any Unicode property, such as [:Letter:], using the PropertyAliases and PropertyValueAliases files. The syntax for specifying the property names is an extension of either POSIX or Perl syntax with the addition of "=value". For example, you can match letters by using the POSIX syntax [:Letter:], or by using the Perl-style syntax \p{Letter}. The type can be omitted for the Category and Script properties, but is required for other properties.
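The effect of such property sets can be approximated with the Unicode character database that ships with Python; this sketch (helper name ours) scans only a small code-point range to keep it quick:

```python
import unicodedata

def category_set(prefix: str, limit: int = 0x250) -> set:
    """Approximate [:L:] / [:Lu:] etc.: all characters in [0, limit)
    whose general category starts with the given prefix."""
    return {chr(c) for c in range(limit)
            if unicodedata.category(chr(c)).startswith(prefix)}

letters = category_set("L")    # like [:Letter:] or \p{L}
upper = category_set("Lu")     # like [:Lu:] or \p{Lu}
```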

The table below shows the two kinds of syntax: POSIX and Perl style. Also, the table shows the "Negative", which is a property that excludes all characters of a given kind. For example, [:^Letter:] matches all characters that are not [:Letter:].




            POSIX-style Syntax    Perl-style Syntax
Positive    [:type=value:]        \p{type=value}
Negative    [:^type=value:]       \P{type=value}

The following low-level lists or properties can then be freely combined with the normal set operations (union, inverse, difference, and intersection):

The binary operators '&' and '-' have equal precedence and bind left-to-right. Thus [[:letter:]-[a-z]-[\u0100-\u01FF]] is equivalent to [[[:letter:]-[a-z]]-[\u0100-\u01FF]]. As another example, the set [[ace][bdf] - [abc][def]] is not the empty set, but the set [def].

Another caveat with the '&' and '-' operators is that they operate between sets. That is, they must be immediately preceded and immediately followed by a set. For example, the pattern [[:Lu:]-A] is illegal, since it is interpreted as the set [:Lu:] followed by the incomplete range -A. To specify the set of uppercase letters except for 'A', enclose the 'A' in a set: [[:Lu:]-[A]]. A multicharacter string can be in a Unicode set, to represent a tailored grapheme for a particular language. The syntax uses curly braces for that case.


[a]                The set containing 'a'

[a-z]              The set containing 'a' through 'z' and all letters in between, in Unicode order

[^a-z]             The set containing all characters but 'a' through 'z', that is, U+0000 through 'a'-1 and 'z'+1 through U+FFFF

[[pat1] [pat2]]    The union of sets specified by pat1 and pat2

[[pat1] & [pat2]]  The intersection of sets specified by pat1 and pat2

[[pat1] - [pat2]]  The asymmetric difference of sets specified by pat1 and pat2

[a{ab}{ac}]        The character 'a' and the multicharacter strings "ab" and "ac"

[:Lu:] or \p{Lu}   The set of characters belonging to the given Unicode category, as defined by Character.getType(); in this case, Unicode uppercase letters. The long form for this is [:UppercaseLetter:].

[:L:] or \p{L}     The set of characters belonging to all Unicode categories starting with 'L', that is, [[:Lu:][:Ll:][:Lt:][:Lm:][:Lo:]]. The long form for this is [:Letter:].
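The left-to-right evaluation described above can be mimicked with ordinary set operations; for instance, the [def] example works out as:

```python
# [[ace][bdf] - [abc][def]]: implicit union between adjacent sets, with
# '&' and '-' binding left-to-right, groups as (([ace] | [bdf]) - [abc]) | [def]
result = ((set("ace") | set("bdf")) - set("abc")) | set("def")

# [[:Lu:]-[A]]-style subtraction between two explicit sets:
upper_except_a = set("ABC") - set("A")
```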

In Unicode Sets, there are two ways to quote syntax characters and whitespace:

Single Quote

Two single quotes represent a single quote, either inside or outside single quotes. Text within single quotes is not interpreted in any way (except for two adjacent single quotes). It is taken as literal text (special characters become non-special).

Backslash Escapes

Outside of single quotes, certain backslashed characters have special meaning:


\uhhhh      Exactly 4 hex digits; h in [0-9A-Fa-f]

\Uhhhhhhhh  Exactly 8 hex digits

\xhh        1-2 hex digits

\ooo        1-3 octal digits; o in [0-7]

\a          U+0007 (BELL)

\b          U+0008 (BACKSPACE)

\t          U+0009 (HORIZONTAL TAB)

\n          U+000A (LINE FEED)

\v          U+000B (VERTICAL TAB)

\f          U+000C (FORM FEED)

\r          U+000D (CARRIAGE RETURN)

\N{name}    The Unicode character named "name".

Anything else following a backslash is mapped to itself, except in an environment where it is defined to have some special meaning. For example, \p{uppercase} is the set of uppercase letters in Unicode.

Any character formed as the result of a backslash escape loses any special meaning and is treated as a literal. In particular, note that \u and \U escapes create literal characters. (In contrast, for example, javac treats Unicode escapes as just a way to represent arbitrary characters in an ASCII source file, and any resulting characters are _not_ tagged as literals.)


Ancillary Information

To properly localize, parse, and format data requires ancillary information, which is not expressed in Locale Data Markup Language. Some of the formats for values used in Locale Data Markup Language are constructed according to external specifications. The sources for this data and/or formats include the following:
[Charts] The online code charts can be found at http://www.unicode.org/charts/ An index to characters names with links to the corresponding chart is found at http://www.unicode.org/charts/charindex.html
[DUCET] The Default Unicode Collation Element Table (DUCET)
For the base-level collation, of which all the collation tables in this document are tailorings.
[FAQ] Unicode Frequently Asked Questions
For answers to common questions on technical issues.
[FCD] As defined in UTN #5 Canonical Equivalences in Applications
[Feedback] Reporting Errors and Requesting Information Online
[Glossary] Unicode Glossary
For explanations of terminology used in this and other documents.
[JavaDates] Java DateFormat, DateFormatSymbols, SimpleDateFormat:
[JavaNumbers] Java NumberFormat, DecimalFormat, DecimalFormatSymbols:
[JavaChoice] Java ChoiceFormat
[Olson] The Olson Data
For timezone and daylight savings information.
[Reports] Unicode Technical Reports
For information on the status and development process for technical reports, and for a list of technical reports.
[UCA] UTS #10: Unicode Collation Algorithm
[UCD] The Unicode Character Database (UCD)
For character properties, casing behavior, default line-, word-, cluster-breaking behavior, etc.
[Unicode] The Unicode Consortium. The Unicode Standard, Version 4.0. Reading, MA, Addison-Wesley, 2003. 0-321-18578-1.
[Versions] Versions of the Unicode Standard
For information on version numbering, and citing and referencing the Unicode Standard, the Unicode Character Database, and Unicode Technical Reports.
Other Standards

Various standards define codes that are used as keys or values in Locale Data Markup Language. These include:
[ISO639] ISO Language Codes
Actual List:
[ISO3166] ISO Region Codes
Actual List
[ISO4217] ISO Currency Codes
Actual List (may not work in the future, since BSI wants £205 for the list)

[ISO15924] ISO Script Codes
Older version with Actual List:

[RFC3066] IETF Language Codes
Registered Exception List (those not of the form language + region)
General

The following are general references from the text:
[BIDI] UAX #9: The Bidirectional Algorithm
[Calendars] Calendrical Calculations: The Millennium Edition by Edward M. Reingold, Nachum Dershowitz; Cambridge University Press; Book and CD-ROM edition (July 1, 2001); ISBN: 0521777526
[CharMapML] UTR #22: Character Mapping Tables
[Comparisons] Comparisons between locale data from different sources
[CurrencyInfo] Currency Names
UNECE Currency Data
[DataFormats] CLDR Data Formats
[Example] A sample in Locale Data Markup Language
[ICUCollation] ICU rule syntax:
[ICUTransforms] Transforms
Transforms Demo
[ICUUnicodeSet] ICU UnicodeSet

Java Locale

[LocaleExplorer] ICU Locale Explorer
[LocaleProject] Common Locale Data Repository Project
[NamingGuideline] OpenI18N Locale Naming Guideline
[RBNF] Rule-Based Number Format
[RBBI] Rule-Based Break Iterator
(The format will be moved into the ICU User Guide soon.)
[Scripts] UAX #24: Script Names
[UCAChart] Collation Chart
[URegex] UTR #18: Unicode Regular Expression Guidelines
UTS #18: Unicode Regular Expressions (Proposed Update)
[UTCInfo] NIST Time and Frequency Division Home Page
U.S. Naval Observatory: What is Universal Time?

Windows Culture Info (with mappings from [RFC3066]-style codes to LCIDs)


Thanks to Patrick Andries, Philips Benjamin, Avery Chan, Alexis Cheng, Helena Shih Chapman, Lee Collins, Simon Dean, Sivaraj Doddannan, Doug Felt, Tom Garland, Deborah Goldsmith, Chris Hansten, Andy Heninger, Hideki Hiura, Jarkko Hietaniemi, Alexander Kachur, Karlsson Kent, Walter Keutgen, Akio Kido, Yuri Kirghisov, Rici Lake, Antoine Leca, Alan Liu, Steven R Loomis, Eric Mader, Sasha Maric, Eric Muller, Kentaroh Noji, Sandra O'Donnell, Åke Persson, Eike Rathke, George Rhoten, Markus Scherer, Baldev Soor, Michael Twomey, Philippe Verdy, Ram Viswanadha, Vladimir Weinstein for their contributions to LDML and/or CLDR.


The following summarizes modifications from the previous version of this document.

  • First UTS version 2004/03/08.
  • Rolled in 1.0 errata.
  • Added aliases for calendars
  • Added keywords for currency id, timezone id
  • Added note on the successor to RFC 3066
  • Removed Data Access, pending resolution
  • Added width and context to dayNames and monthNames, changing element structure
  • Added optional pattern for currencies
  • Clarified restriction on before attribute in collation, and order of rules (introducing new term "atomic")
  • Clarified multiple inheritance
  • Moved POSIX items into specification (from special)
  • Misc. other edits