Revision | 12 |
Authors | Mark Davis (mark@unicode.org) |
Date | 1999-03-09 |
This Version | http://www.unicode.org/unicode/reports/tr15/tr15-12.html |
Previous Version | http://www.unicode.org/unicode/reports/tr15/tr15-11.html |
Latest Version | http://www.unicode.org/unicode/reports/tr15 |
This document describes specifications for four normalized forms of Unicode text. The design is in the public review phase. We welcome review feedback.
This draft is published for review purposes. Previous versions of this draft have been considered by the Unicode Technical Committee, but no final decision has been reached. At its next meeting, the Unicode Technical Committee may approve, reject, or further amend this document.
In particular, the precise version of the character database referenced in the text may change. The current deadline for setting the version is April, 1999. It is expected that the version will change from 2.1.8 to 3.0.
The content of technical reports must be understood in the context of the latest version of the Unicode Standard. See http://www.unicode.org/unicode/standard/versions/ for more information.
This document does not, at this time, imply any endorsement by the Consortium's staff or member organizations. Please mail comments to unicore@unicode.org.
The Unicode Standard Version 2.1 describes several forms of normalization in Section 5.9. Two of these forms are precisely specified in Section 3.9. In particular, the standard defines a canonical decomposition format, which can be used as a normalization for interchanging text. This format allows for binary comparison while maintaining canonical equivalence with the original unnormalized text.
The standard also defines a compatibility decomposition format, which allows for binary comparison while maintaining compatibility equivalence with the original unnormalized text. The latter can also be useful in many circumstances, since it levels the differences between compatibility characters which are inappropriate in those circumstances. For example, the half-width and full-width katakana characters will have the same compatibility decomposition and are thus compatibility equivalents; however, they are not canonical equivalents.
Both of these formats are normalizations to decomposed characters. While Section 3.9 also discusses a normalization to composite characters (also known as decomposable or precomposed characters), it does not precisely specify the format. Because of the nature of the precomposed forms in the Unicode Standard, there is more than one possible specification for a normalized form with composite characters. This document provides a unique specification for those forms, and a label for each normalized form.
The four normalization forms are labeled as follows.
Title | Description | Specification
---|---|---
Normalization Form D | Canonical Decomposition | Sections 3.6, 3.9, and 3.10 of The Unicode Standard, also summarized under Decomposition
Normalization Form C | Canonical Decomposition, followed by Canonical Composition | see Specification
Normalization Form KD | Compatibility Decomposition | Sections 3.6, 3.9, and 3.10 of The Unicode Standard, also summarized under Decomposition
Normalization Form KC | Compatibility Decomposition, followed by Canonical Composition | see Specification
As with decomposition, there are two forms of normalization to composite characters, Form C and Form KC. The difference between these depends on whether the resulting text is to be a canonical equivalent to the original unnormalized text, or is to be a compatibility equivalent to the original unnormalized text. (In KC and KD, a K is used to stand for compatibility to avoid confusion with the C standing for canonical.) Both types of normalization can be useful in different circumstances.
Normalization Form C is basically the form of text which uses canonical composite characters where possible, and maintains the distinction between characters that are compatibility equivalents. Typical strings of composite accented Unicode characters are already in Normalization Form C. Implementations of Unicode which restrict themselves to a repertoire containing no combining marks (such as those that declare themselves to be implementations at Level 1 as defined in ISO/IEC 10646-1) are already using Normalization Form C. (Implementations of later versions of 10646 need to be aware of the versioning issues--see Versioning.)
Normalization Form KC additionally levels the differences between compatibility characters which are inappropriately distinguished in many circumstances. For example, the half-width and full-width katakana characters will normalize to the same strings, as will Roman Numerals and their letter equivalents. More complete examples are provided below. However, there is loss of information when text is transformed into Normalization Form KC, so it is not recommended for all circumstances.
To summarize the treatment of compatibility characters that were in the source text: in Normalization Forms C and D, compatibility characters are retained, so the distinctions between them and their compatibility equivalents are maintained; in Normalization Forms KC and KD, compatibility characters are replaced by their compatibility equivalents, so those distinctions are leveled.
Neither of the composition normalization forms C and KC are closed under string concatenation. For example, the strings "a" and "^" (combining circumflex) are both in form C, but the concatenation of the two ("a" + "^" => "a^") is not: the normalized form is the precomposed character "â". There is no way to produce a composition normalized form that is closed under simple string concatenation without disturbing other string operations. If desired, however, a specialized function could be constructed that produced a normalized concatenation. This does not occur with the decomposition normalization forms D and KD.
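For illustration, here is a minimal sketch of such a specialized concatenation function. The function normalizeC is hypothetical (standing for an implementation of the Form C specification given below); an optimized version would re-normalize only the characters adjacent to the join point rather than the whole string.

static String normalizedConcat(String a, String b) {
    // re-normalize across the boundary so composition can occur at the join point
    return normalizeC(a + b);
}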
We will use the following notation for brevity.
Unicode character names are shortened; for example, the characters in the examples below are written as dot_above, dot_below, macron, grave, ring, horn, ka, ten, and so on.
A sequence of characters may be represented by using plus signs between the character names, or by using string notation.
"...\uXXXX..." represents the Unicode character U+XXXX embedded within a string.
A single character which is equivalent to the sequence of characters B + C may be written as B-C.
The normalization forms for a string X are abbreviated as D(X), KD(X), C(X), and KC(X), respectively.
Conjoining jamo of various types (initial, medial, final) are represented by subscripts, such as ki, am, and kf.
Because additional composite characters may be added to future versions of the Unicode standard, composition is less stable than decomposition. Therefore, it is necessary to specify a fixed version for the composition process, so that implementations can get the same result for normalization even if they upgrade to a new version of Unicode.
Decomposition is only unstable if an existing character decomposition mapping changes. The Unicode Technical Committee has the policy of carefully reviewing proposed corrections in character decompositions, and only making changes where the benefits very clearly outweigh the drawbacks.
The fixed version of the composition process is defined by reference to a particular version of the Unicode Character Database, called the composition version. At this point, that version is specified to be the 2.1.8 version, the content of the file UnicodeData-2.1.8.txt (abbreviated as UCD 2.1.8); however, the final version is expected to be Unicode 3.0. For more information on versions of the Unicode Standard, see http://www.unicode.org/unicode/standard/versions/.
To see what difference the composition version makes, suppose that Unicode version 4.0 adds the composite Q-caron. For an implementation that uses Unicode version 4.0, strings in Normalization Forms C or KC will continue to contain the sequence Q + caron, and not the new character Q-caron, since a canonical composition for Q-caron was not defined in the composition version.
All of the following definitions depend on the rules for equivalence and decomposition found in Chapter 3 of The Unicode Standard, Version 2.0, and the decomposition mappings in the Unicode Character Database.
Decomposition must be done in accordance with these rules. In particular, the decomposition mappings found in the Unicode Character Database must be applied recursively, and then the string put into canonical order.
Hangul syllable decomposition is considered a canonical decomposition. See Technical Report #8: The Unicode Standard Version 2.1 (http://www.unicode.org/unicode/reports/tr8.html).
D1. A primary composite is a character that has a canonical decomposition mapping in the Unicode Character Database but is not in the Composition Exclusion Table.
D2. In a sequence of canonically decomposed characters S = <C0, C1,...,Cn,Cn+1>, the character Cn+1 can be primary canonically combined with C0 if there is a primary composite X that is canonically equivalent to the sequence <C0, Cn+1>, and Cn+1 is not blocked (see D3) by any of the characters C1 through Cn.
In such a case, X is said to be the primary canonical composition of C0 and Cn+1.
Note that because of D1, the contents of the Composition Exclusion Table, and the definition of canonical equivalence in the Unicode Standard, D2 has a number of implications.
D3. In the sequence of characters in D2, if some character Cj (0 <= j <= n) has the same combining class as Cn+1, then Cj blocks Cn+1.
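As an illustration (not part of the specification), the blocking test in D3 can be sketched as follows, assuming a charClass function that returns the canonical combining class, as in the implementation samples later in this document.

static boolean isBlocked(String preceding, char next) {
    // preceding holds the characters C0..Cn; next is Cn+1
    int nextClass = charClass(next);
    for (int j = 0; j < preceding.length(); ++j) {
        if (charClass(preceding.charAt(j)) == nextClass) {
            return true;   // some Cj has the same combining class, so it blocks Cn+1
        }
    }
    return false;
}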
A process that produces Unicode text that purports to be in a Normalization Form shall do so in accordance with the specifications in this document.
A process that tests Unicode text to determine whether it is in a Normalization Form shall do so in accordance with the specifications in this document.
The specifications for Normalization Forms are written in terms of a process for producing a decomposition or composition from an arbitrary Unicode string. This is a logical description--particular implementations can have more efficient mechanisms as long as they produce the same result. Similarly, testing for a particular Normalization Form does not require applying the process of normalization, so long as the result of the test is equivalent to applying normalization and then testing bit-for-bit identity.
The process of forming a composition in Normalization Form C or KC involves:
1. decomposing the string (canonically for Form C, compatibly for Form KC), and
2. composing the result by successively combining each character, where possible, with the last preceding character of combining class zero.
This is specified more precisely below.
The Normalization Form C for a string S is obtained by applying the following process, or any other process that leads to the same result:
1. Generate the canonical decomposition for the source string S, according to the decomposition mappings in the composition version of the Unicode Character Database.
2. Iterate through each character C in that decomposition, from first to last. If C can be primary canonically combined (see D2) with the last character B before it whose combining class is zero, then replace B by the primary composite B-C and remove C.
The result of this process is a new string S' which is in Normalization Form C.
By definition, there is at most one previous character B that C can primary canonically combine with. Moreover, the new character B-C cannot primary canonically combine with any characters between it and the last character before C. That means that the iterative process never has to back up and reconsider earlier pairs of characters. This feature is also true for Normalization Form KC.
The Normalization Form KC for a string S is obtained by applying the following process, or any other process that leads to the same result:
1. Generate the compatibility decomposition for the source string S, according to the decomposition mappings in the composition version of the Unicode Character Database.
2. Iterate through each character C in that decomposition, from first to last. If C can be primary canonically combined (see D2) with the last character B before it whose combining class is zero, then replace B by the primary composite B-C and remove C.
The result of this process is a new string S' which is in Normalization Form KC.
Normalization Form KC does not attempt to map characters to compatibility composites. For example, a compatibility composition of "office" does not produce "o\uFB03ce", even though "\uFB03" is a character that is the compatibility equivalent of the sequence of three characters 'ffi'.
 | Original | Decomposed | Composed | Notes |
---|---|---|---|---|
a | D-dot_above | D + dot_above | D-dot_above | Both decomposed and precomposed canonical sequences produce the same result. |
b | D + dot_above | D + dot_above | D-dot_above | |
c | D-dot_below + dot_above | D + dot_below + dot_above | D-dot_below + dot_above | By the time we have gotten to dot_above, it cannot be combined with the base character. There may be intervening combining marks (see f), so long as the result of the combination is canonically equivalent. |
d | D-dot_above + dot_below | D + dot_below + dot_above | D-dot_below + dot_above | |
e | D + dot_above + dot_below | D + dot_below + dot_above | D-dot_below + dot_above | |
f | D + dot_above + horn + dot_below | D + horn + dot_below + dot_above | D-dot_below + horn + dot_above |
g | E-macron-grave | E + macron + grave | E-macron-grave | Multiple combining characters are combined with successive base characters. |
h | E-macron + grave | E + macron + grave | E-macron-grave | |
i | E-grave + macron | E + grave + macron | E-grave + macron | Characters will not be combined if they would not be canonical equivalents because of their ordering. |
j | angstrom_sign | A + ring | A-ring | Since Å (A-ring) is the preferred composite, it is the form produced for both characters. |
k | A-ring | A + ring | A-ring | |
l | "Äffin" | "A\u0308ffin" | "Äffin" | The ffi_ligature (U+FB03) is not decomposed, since it has a compatibility mapping, not a canonical mapping. (See Normalization Form KC Examples.) |
m | "Ä\uFB03n" | "A\u0308\uFB03n" | "Ä\uFB03n" | |
n | "Henry IV" | "Henry IV" | "Henry IV" | Similarly, the ROMAN NUMERAL IV (U+2163) is not decomposed. |
o | "Henry \u2163" | "Henry \u2163" | "Henry \u2163" | |
p | ga | ka + ten | ga | Different compatibility equivalents of a single Japanese character will not result in the same string in Normalization Form C. |
q | ka + ten | ka + ten | ga | |
r | hw_ka + hw_ten | hw_ka + hw_ten | hw_ka + hw_ten | |
s | ka + hw_ten | ka + hw_ten | ka + hw_ten | |
t | hw_ka + ten | hw_ka + ten | hw_ka + ten | |
u | kaks | ki + am + ksf | kaks | Hangul syllables are maintained. |
Cases (a-k) above are the same in both Normalization Form C and KC, and are not repeated here.
 | Original | Decomposed | Composed | Notes |
---|---|---|---|---|
l' | "Äffin" | "A\u0308ffin" | "Äffin" | The ffi_ligature (U+FB03) is decomposed in Normalization Form KC (where it is not in Normalization Form C). |
m' | "Ä\uFB03n" | "A\u0308\ffin" | "Äffin" | |
n' | "Henry IV" | "Henry IV" | "Henry IV" | Similarly, the resulting strings here are identical in Normalization Form KC. |
o' | "Henry \u2163" | "Henry IV" | "Henry IV" | |
p' | ga | ka + ten | ga | Different compatibility equivalents of a single Japanese character will result in the same string in Normalization Form KC. |
q' | ka + ten | ka + ten | ga | |
r' | hw_ka + hw_ten | ka + ten | ga | |
s' | ka + hw_ten | ka + ten | ga | |
t' | hw_ka + ten | ka + ten | ga | |
u' | kaks | ki + am + ksf | kaks | Hangul syllables are maintained. In earlier versions of Unicode, jamo characters like ksf had compatibility mappings to kf + sf. These mappings were removed in Unicode 2.1.9 to ensure that Hangul syllables are maintained. |
The following were the design goals for the specification of the normalization forms, and are presented here for reference.
The first major design goal for the normalization forms is uniqueness: two equivalent strings will have precisely the same normalized form. More explicitly:
1.1 If two strings x and y are canonical equivalents, then C(x) = C(y) and D(x) = D(y).
1.2 If two strings x and y are compatibility equivalents, then KC(x) = KC(y) and KD(x) = KD(y).
This is an absolutely required goal.
The second major design goal for the normalization forms is stability of characters that are not involved in the composition or decomposition process.
There are four exceptions to Goal 2.2 in the Unicode Standard, Version 2.1, according to D2. Four new characters have been accepted for Unicode 3.0 to remedy this situation by the time the database version is fixed.
The third major design goal for the normalization forms is that they allow for efficient implementations.
There are a number of optimizations that can be made in programs that produce Normalization Form C. Rather than first decomposing the text fully, a quick check can be made on each character. If it is already in the proper precomposed form, then no work has to be done. Only if the current character is combining or in the Composition Exclusion Table does a slower code path need to be invoked. (This code path will need to look at previous characters, back to the last character with canonical class zero.)
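The quick check described above can be sketched as follows. The predicates isCombining and isInExclusionTable, and the function slowCompose, are hypothetical helpers: the first two would be derived from the Unicode Character Database and the Composition Exclusion Table, and the last stands for the full composition process specified in this document.

static String quickCompose(String source) {
    for (int i = 0; i < source.length(); ++i) {
        char ch = source.charAt(i);
        if (isCombining(ch) || isInExclusionTable(ch)) {
            // slow path: back up to the last character with canonical class zero
            // and apply the full composition process from there
            return slowCompose(source, i);
        }
    }
    return source;   // already in the proper precomposed form; no work needed
}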
The majority of the cycles spent in doing composition are spent looking up the appropriate data. The data lookup for Normalization Form C can be implemented very efficiently, since it only has to look up pairs of characters, not arbitrary strings. First a multi-stage table (as discussed on page 5-8 of The Unicode Standard, Version 2.0) is used to map a character c to a small integer i in a contiguous range from 0 to n. The code for doing this looks like:
i = data[index[c >> BLOCKSHIFT] + (c & BLOCKMASK)];
Then the pair of small integers is simply mapped through a two-dimensional array to get a resulting value. This yields much better performance than a general-purpose string lookup in a hash table.
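Putting the two steps together, the pairwise lookup used by a function such as pairwiseCombines (see the code sample below) might be sketched as follows. The tables index, data, and pairComposite, the block parameters, and the NOT_A_CHAR sentinel value are illustrative assumptions; the tables would be generated from the Unicode Character Database.

static final int BLOCKSHIFT = 7, BLOCKMASK = 0x7F;   // example block size of 128
static final char NOT_A_CHAR = '\uFFFF';
static int[] index, data;                            // multi-stage lookup tables
static char[][] pairComposite;                       // maps (i1, i2) to a primary composite, or NOT_A_CHAR

static char pairwiseCombines(char first, char second) {
    // map each character to a small integer in a contiguous range
    int i1 = data[index[first >> BLOCKSHIFT] + (first & BLOCKMASK)];
    int i2 = data[index[second >> BLOCKSHIFT] + (second & BLOCKMASK)];
    // then a simple two-dimensional array lookup yields the composite
    return pairComposite[i1][i2];
}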
Since the Hangul compositions and decompositions are algorithmic, memory storage can be significantly reduced if the corresponding operations are done in code rather than by simply storing the data in the general-purpose tables. Here is sample code illustrating algorithmic Hangul canonical decomposition and composition, done according to the specification in Section 3.10, Combining Jamo Behavior. Although coded in Java, the same structure can be used in other programming languages.
static final int
    SBase = 0xAC00, LBase = 0x1100, VBase = 0x1161, TBase = 0x11A7,
    LCount = 19, VCount = 21, TCount = 28,
    NCount = VCount * TCount,   // 588
    SCount = LCount * NCount;   // 11172
public static String decomposeHangul(char s) {
    int SIndex = s - SBase;
    if (SIndex < 0 || SIndex >= SCount) {
        return String.valueOf(s);
    }
    StringBuffer result = new StringBuffer();
    int L = LBase + SIndex / NCount;
    int V = VBase + (SIndex % NCount) / TCount;
    int T = TBase + SIndex % TCount;
    result.append((char)L);
    result.append((char)V);
    if (T != TBase) result.append((char)T);
    return result.toString();
}
Notice an important feature of Hangul composition. Whenever the source string is not in Normalization Form D, you can't just detect character sequences of the form <L, V> and <L, V, T>. You also must catch the sequences of the form <LV, T>. To guarantee uniqueness, these sequences must also be composed. This is illustrated in Step 2 below.
public static String composeHangul(String source) {
    int len = source.length();
    if (len == 0) return "";
    StringBuffer result = new StringBuffer();
    char last = source.charAt(0);           // copy first char
    result.append(last);

    for (int i = 1; i < len; ++i) {
        char ch = source.charAt(i);

        // 1. check to see if two current characters are L and V
        int LIndex = last - LBase;
        if (0 <= LIndex && LIndex < LCount) {
            int VIndex = ch - VBase;
            if (0 <= VIndex && VIndex < VCount) {
                // make syllable of form LV
                last = (char)(SBase + (LIndex * VCount + VIndex) * TCount);
                result.setCharAt(result.length()-1, last);   // reset last
                continue;                                    // discard ch
            }
        }

        // 2. check to see if two current characters are LV and T
        int SIndex = last - SBase;
        if (0 <= SIndex && SIndex < SCount && (SIndex % TCount) == 0) {
            int TIndex = ch - TBase;
            if (0 < TIndex && TIndex < TCount) {   // valid trailing consonant only
                // make syllable of form LVT
                last += TIndex;
                result.setCharAt(result.length()-1, last);   // reset last
                continue;                                    // discard ch
            }
        }

        // if neither case was true, just add the character
        last = ch;
        result.append(ch);
    }
    return result.toString();
}
Additional transformations can be performed on sequences of Hangul jamo for various purposes. For example, to regularize sequences of Hangul jamo into standard syllables, the choseong and jungseong fillers can be inserted, as described in Chapter 3. (In the text of the 2.0 standard, these standard syllables are called canonical syllables, but this has nothing to do with canonical composition or decomposition.) For keyboard input, additional compositions may be performed. For example, the trailing consonants kf + sf may be combined into ksf. In addition, some Hangul input methods do not require a distinction on input between initial and final consonants, and change between them on the basis of context. For example, in the keyboard sequence mi + em + ni + si + am, the consonant ni would be reinterpreted as nf, since there is no possible syllable nsa. This results in the two syllables men and sa.
However, none of these transformations are considered part of the Unicode Normalization Formats.
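As a small illustration of the input-method composition mentioned above (and, again, not part of the Unicode Normalization Forms), the trailing consonants kf (U+11A8 HANGUL JONGSEONG KIYEOK) and sf (U+11BA HANGUL JONGSEONG SIOS) can be combined into ksf (U+11AA HANGUL JONGSEONG KIYEOK-SIOS); NOT_A_CHAR is the sentinel used in the other samples in this document.

static char combineTrailingConsonants(char first, char second) {
    // keyboard-input composition only; not part of the Normalization Forms
    if (first == '\u11A8' && second == '\u11BA') {   // kf + sf
        return '\u11AA';                             // ksf (KIYEOK-SIOS)
    }
    return NOT_A_CHAR;   // other pairs would be handled by a fuller table
}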
Hangul decomposition is also used to form the character names for the Hangul syllables. Here is sample code that illustrates this process:
public static String getHangulName(char s) {
    int SIndex = s - SBase;
    if (0 > SIndex || SIndex >= SCount) {
        throw new IllegalArgumentException("Not a Hangul Syllable: " + s);
    }
    int LIndex = SIndex / NCount;
    int VIndex = (SIndex % NCount) / TCount;
    int TIndex = SIndex % TCount;
    return "HANGUL SYLLABLE " + JAMO_L_TABLE[LIndex]
        + JAMO_V_TABLE[VIndex] + JAMO_T_TABLE[TIndex];
}

static private String[] JAMO_L_TABLE = {
    // Value; Short Name; Unicode Name
    "G",    // U+1100; G; HANGUL CHOSEONG KIYEOK
    "GG",   // U+1101; GG; HANGUL CHOSEONG SSANGKIYEOK
    "N",    // U+1102; N; HANGUL CHOSEONG NIEUN
    "D",    // U+1103; D; HANGUL CHOSEONG TIKEUT
    "DD",   // U+1104; DD; HANGUL CHOSEONG SSANGTIKEUT
    "L",    // U+1105; L; HANGUL CHOSEONG RIEUL
    "M",    // U+1106; M; HANGUL CHOSEONG MIEUM
    "B",    // U+1107; B; HANGUL CHOSEONG PIEUP
    "BB",   // U+1108; BB; HANGUL CHOSEONG SSANGPIEUP
    "S",    // U+1109; S; HANGUL CHOSEONG SIOS
    "SS",   // U+110A; SS; HANGUL CHOSEONG SSANGSIOS
    "",     // U+110B; ; HANGUL CHOSEONG IEUNG
    "J",    // U+110C; J; HANGUL CHOSEONG CIEUC
    "JJ",   // U+110D; JJ; HANGUL CHOSEONG SSANGCIEUC
    "C",    // U+110E; C; HANGUL CHOSEONG CHIEUCH
    "K",    // U+110F; K; HANGUL CHOSEONG KHIEUKH
    "T",    // U+1110; T; HANGUL CHOSEONG THIEUTH
    "P",    // U+1111; P; HANGUL CHOSEONG PHIEUPH
    "H"     // U+1112; H; HANGUL CHOSEONG HIEUH
};

static private String[] JAMO_V_TABLE = {
    // Value; Short Name; Unicode Name
    "A",    // U+1161; A; HANGUL JUNGSEONG A
    "AE",   // U+1162; AE; HANGUL JUNGSEONG AE
    "YA",   // U+1163; YA; HANGUL JUNGSEONG YA
    "YAE",  // U+1164; YAE; HANGUL JUNGSEONG YAE
    "EO",   // U+1165; EO; HANGUL JUNGSEONG EO
    "E",    // U+1166; E; HANGUL JUNGSEONG E
    "YEO",  // U+1167; YEO; HANGUL JUNGSEONG YEO
    "YE",   // U+1168; YE; HANGUL JUNGSEONG YE
    "O",    // U+1169; O; HANGUL JUNGSEONG O
    "WA",   // U+116A; WA; HANGUL JUNGSEONG WA
    "WAE",  // U+116B; WAE; HANGUL JUNGSEONG WAE
    "OE",   // U+116C; OE; HANGUL JUNGSEONG OE
    "YO",   // U+116D; YO; HANGUL JUNGSEONG YO
    "U",    // U+116E; U; HANGUL JUNGSEONG U
    "WEO",  // U+116F; WEO; HANGUL JUNGSEONG WEO
    "WE",   // U+1170; WE; HANGUL JUNGSEONG WE
    "WI",   // U+1171; WI; HANGUL JUNGSEONG WI
    "YU",   // U+1172; YU; HANGUL JUNGSEONG YU
    "EU",   // U+1173; EU; HANGUL JUNGSEONG EU
    "YI",   // U+1174; YI; HANGUL JUNGSEONG YI
    "I",    // U+1175; I; HANGUL JUNGSEONG I
};

static private String[] JAMO_T_TABLE = {
    // Value; Short Name; Unicode Name
    "",     // filler, for LV syllable
    "G",    // U+11A8; G; HANGUL JONGSEONG KIYEOK
    "GG",   // U+11A9; GG; HANGUL JONGSEONG SSANGKIYEOK
    "GS",   // U+11AA; GS; HANGUL JONGSEONG KIYEOK-SIOS
    "N",    // U+11AB; N; HANGUL JONGSEONG NIEUN
    "NJ",   // U+11AC; NJ; HANGUL JONGSEONG NIEUN-CIEUC
    "NH",   // U+11AD; NH; HANGUL JONGSEONG NIEUN-HIEUH
    "D",    // U+11AE; D; HANGUL JONGSEONG TIKEUT
    "L",    // U+11AF; L; HANGUL JONGSEONG RIEUL
    "LG",   // U+11B0; LG; HANGUL JONGSEONG RIEUL-KIYEOK
    "LM",   // U+11B1; LM; HANGUL JONGSEONG RIEUL-MIEUM
    "LB",   // U+11B2; LB; HANGUL JONGSEONG RIEUL-PIEUP
    "LS",   // U+11B3; LS; HANGUL JONGSEONG RIEUL-SIOS
    "LT",   // U+11B4; LT; HANGUL JONGSEONG RIEUL-THIEUTH
    "LP",   // U+11B5; LP; HANGUL JONGSEONG RIEUL-PHIEUPH
    "LH",   // U+11B6; LH; HANGUL JONGSEONG RIEUL-HIEUH
    "M",    // U+11B7; M; HANGUL JONGSEONG MIEUM
    "B",    // U+11B8; B; HANGUL JONGSEONG PIEUP
    "BS",   // U+11B9; BS; HANGUL JONGSEONG PIEUP-SIOS
    "S",    // U+11BA; S; HANGUL JONGSEONG SIOS
    "SS",   // U+11BB; SS; HANGUL JONGSEONG SSANGSIOS
    "NG",   // U+11BC; NG; HANGUL JONGSEONG IEUNG
    "J",    // U+11BD; J; HANGUL JONGSEONG CIEUC
    "C",    // U+11BE; C; HANGUL JONGSEONG CHIEUCH
    "K",    // U+11BF; K; HANGUL JONGSEONG KHIEUKH
    "T",    // U+11C0; T; HANGUL JONGSEONG THIEUTH
    "P",    // U+11C1; P; HANGUL JONGSEONG PHIEUPH
    "H"     // U+11C2; H; HANGUL JONGSEONG HIEUH
};
This section discusses three different possible approaches to composition. These alternatives are fine composition (i.e., Normalization Form C), coarse composition, and medium composition. Code samples of the first two forms are provided for comparison.
The following code snippet shows a sample implementation of Normalization Form C. For comparison, this approach is contrasted with some alternative approaches below. For the purposes of discussion we can call this form fine composition. Although coded in Java, the same structure can be used in other programming languages. For a live demonstration of the code, see http://www.macchiato.com/mark/compose/.
/**
 * Implements the specification described in UTR#15.
 * To isolate the relevant features for this example,
 * source is presumed to already be in Normalization Form D.
 */
static void fineCompose(String source, StringBuffer target) {
    StringBuffer buffer = new StringBuffer();
    for (int i = 0; i < source.length(); ++i) {
        char ch = source.charAt(i);
        int currentClass = charClass(ch);
        int len = buffer.length();

        // check if the new character combines with the first buffer character
        if (len != 0) {
            char composite = pairwiseCombines(buffer.charAt(0), ch);
            if (composite != NOT_A_CHAR                                   // if combines,
                && (len == 1
                    || charClass(buffer.charAt(len-1)) != currentClass)) { // !blocked
                buffer.setCharAt(0, composite);   // then replace first
                continue;                         // done with char for this iteration
            }
        }
        if (charClass(ch) == 0) {     // if zero-class,
            target.append(buffer);    // add buffer to target
            buffer.setLength(0);      // clear buffer
        }
        buffer.append(ch);            // add character to buffer
    }
    // add last buffer
    target.append(buffer);
}

/**
 * Return the canonical combining class
 * derived from the Unicode Character Database.
 */
static int charClass(char ch) {...}

/**
 * Return the precomposed character corresponding to the two
 * component characters. Returns NOT_A_CHAR if no such
 * precomposed character exists. Based on the Unicode Character
 * Database, but doesn't include the primary excluded characters.
 */
static char pairwiseCombines(char first, char second) {...}
An alternative style of composition was considered, which for the purposes of discussion we can call coarse composition. With this mechanism, a combining character sequence only composes if the entire sequence can be represented by a single precomposed character. This may appear to be a simpler option, but it has the disadvantage that an irrelevant combining mark can cause a precomposed character to break down. As the code samples show, there is actually not much difference in complexity in practice. For a live demonstration of the code, see http://www.macchiato.com/mark/compose/.
/**
 * Coarse composition is presented here for comparison.
 * To isolate the relevant features for this example,
 * source is presumed to already be in Normalization Form D.
 */
static void coarseCompose(String source, StringBuffer target) {
    StringBuffer buffer = new StringBuffer();
    for (int i = 0; i < source.length(); ++i) {
        char ch = source.charAt(i);
        boolean isBase = isBaseChar(ch);

        // if the previous chars are a possible sequence,
        // either add them to target, or add the equivalent composite
        if (isBase && buffer.length() != 0) {
            char composite = coarseCombines(buffer);
            if (composite == NOT_A_CHAR) {    // doesn't combine, so
                target.append(buffer);        // add buffer to target
            } else {                          // does combine, so
                target.append(composite);     // add composite to target
            }
            buffer.setLength(0);              // clear buffer
        }
        buffer.append(ch);                    // add character to buffer
    }
    // check last buffer
    if (buffer.length() != 0) {
        char composite = coarseCombines(buffer);
        if (composite == NOT_A_CHAR) {        // doesn't combine, so
            target.append(buffer);            // add buffer to target
        } else {                              // does combine, so
            target.append(composite);         // add composite to target
        }
    }
}

/**
 * Returns true if the character is a base character.
 */
static boolean isBaseChar(char ch) {...}

/**
 * Returns the single precomposed character corresponding to the buffer,
 * or NOT_A_CHAR if there is none. Does not include the primary excluded
 * characters.
 */
static char coarseCombines(StringBuffer buffer) {...}
A second alternative style of composition is similar to coarse composition, except that it will combine initial subsequences, as long as there are no intervening combining marks. For the purposes of discussion we can call this medium composition. Although this produces better results than coarse composition, it does not do as well as fine composition, and is less efficient. A sketch is shown below.
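For comparison with the two code samples above, a sketch of medium composition might look like the following. It is illustrative only, again presumes the source is in Normalization Form D, and reuses the helpers isBaseChar, pairwiseCombines, and NOT_A_CHAR from the samples above.

static void mediumCompose(String source, StringBuffer target) {
    char last = NOT_A_CHAR;          // last character written to target
    boolean stillCombining = false;  // still extending the initial subsequence?
    for (int i = 0; i < source.length(); ++i) {
        char ch = source.charAt(i);
        if (isBaseChar(ch)) {
            target.append(ch);
            last = ch;
            stillCombining = true;   // a new combining sequence begins
            continue;
        }
        if (stillCombining) {
            char composite = pairwiseCombines(last, ch);
            if (composite != NOT_A_CHAR) {                       // extend the composite
                last = composite;
                target.setCharAt(target.length() - 1, composite);
                continue;
            }
            stillCombining = false;  // the first mark that does not combine
                                     // stops all further combining for this sequence
        }
        target.append(ch);           // remaining combining marks are copied unchanged
    }
}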
In the Unicode Character Database, two characters may have the same canonical decomposition. Here is an example of this:
Source | | Decomposition
---|---|---
212B ('Å' ANGSTROM SIGN) | => | 0041 ('A' LATIN CAPITAL LETTER A) + 030A ('°' COMBINING RING ABOVE)
00C5 ('Å' LATIN CAPITAL LETTER A WITH RING ABOVE) | => | 0041 ('A' LATIN CAPITAL LETTER A) + 030A ('°' COMBINING RING ABOVE)
However, in such cases, the Unicode Character Database will first decompose one of the characters to the other, and then decompose from there. That is, one of the characters (in this case ANGSTROM SIGN) will have a singleton decomposition.
Singleton decompositions are some of the decompositions excluded from primary composition. The characters having excluded decompositions are included in Unicode essentially for compatibility with certain pre-existing standards. They fall into three classes:
[Note: once this document is accepted, a machine readable form of the following table will be made available on the Unicode ftp site.]
Singletons

0340 COMBINING GRAVE TONE MARK
0341 COMBINING ACUTE TONE MARK
0343 COMBINING GREEK KORONIS
0374 GREEK NUMERAL SIGN
037E GREEK QUESTION MARK
0387 GREEK ANO TELEIA
1F71 GREEK SMALL LETTER ALPHA WITH OXIA
1F73 GREEK SMALL LETTER EPSILON WITH OXIA
1F75 GREEK SMALL LETTER ETA WITH OXIA
1F77 GREEK SMALL LETTER IOTA WITH OXIA
1F79 GREEK SMALL LETTER OMICRON WITH OXIA
1F7B GREEK SMALL LETTER UPSILON WITH OXIA
1F7D GREEK SMALL LETTER OMEGA WITH OXIA
1FBB GREEK CAPITAL LETTER ALPHA WITH OXIA
1FBE GREEK PROSGEGRAMMENI
1FC9 GREEK CAPITAL LETTER EPSILON WITH OXIA
1FCB GREEK CAPITAL LETTER ETA WITH OXIA
1FD3 GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA
1FDB GREEK CAPITAL LETTER IOTA WITH OXIA
1FE3 GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND OXIA
1FEB GREEK CAPITAL LETTER UPSILON WITH OXIA
1FEE GREEK DIALYTIKA AND OXIA
1FEF GREEK VARIA
1FF9 GREEK CAPITAL LETTER OMICRON WITH OXIA
1FFB GREEK CAPITAL LETTER OMEGA WITH OXIA
1FFD GREEK OXIA
2000 EN QUAD
2001 EM QUAD
2126 OHM SIGN
212A KELVIN SIGN
212B ANGSTROM SIGN
2329 LEFT-POINTING ANGLE BRACKET
232A RIGHT-POINTING ANGLE BRACKET
F900 CJK COMPATIBILITY IDEOGRAPH-F900 ..FA2D CJK COMPATIBILITY IDEOGRAPH-FA2D

Non-zeros

0E33 THAI CHARACTER SARA AM
0EB3 LAO VOWEL SIGN AM

Script-specifics

0958 DEVANAGARI LETTER QA ..095F DEVANAGARI LETTER YYA
FB1F HEBREW LIGATURE YIDDISH YOD YOD PATAH
FB2A HEBREW LETTER SHIN WITH SHIN DOT ..FB36 HEBREW LETTER ZAYIN WITH DAGESH
FB38 HEBREW LETTER TET WITH DAGESH ..FB3C HEBREW LETTER LAMED WITH DAGESH
FB3E HEBREW LETTER MEM WITH DAGESH
FB40 HEBREW LETTER NUN WITH DAGESH
FB41 HEBREW LETTER SAMEKH WITH DAGESH
FB43 HEBREW LETTER FINAL PE WITH DAGESH
FB44 HEBREW LETTER PE WITH DAGESH
FB46 HEBREW LETTER TSADI WITH DAGESH ..FB4E HEBREW LETTER PE WITH RAFE

Post Composition Version

This set is currently empty, but will be updated with each subsequent version of Unicode.
For those accessing this document without access to the Unicode Standard, the following summarizes the canonical decomposition process. For a complete discussion, see Sections 3.6, 3.9 and 3.10.
Canonical decomposition is the process of taking a string, replacing composite characters using the Unicode canonical decomposition mappings, and reordering the result according to the Unicode canonical ordering values. This reordering is done by swapping any two adjacent characters where the first has a higher canonical ordering value than the second and the second has a non-zero canonical ordering value (a code sketch of this reordering follows the example below). For example:
1. Take the string with the characters "ác´¸" (a-acute, c, acute, cedilla).

2. The data file contains the following relevant information:

0061;LATIN SMALL LETTER A;...;0;...
0063;LATIN SMALL LETTER C;...;0;...
00E1;LATIN SMALL LETTER A WITH ACUTE;...;0;...;0061 0301;...
0107;LATIN SMALL LETTER C WITH ACUTE;...;0;...;0063 0301;...
0301;COMBINING ACUTE ACCENT;...;230;...
0327;COMBINING CEDILLA;...;202;...

3. Applying the decomposition mappings, we get "a´c´¸" (a, acute, c, acute, cedilla). This is because 00E1 (a-acute) has a canonical decomposition mapping to 0061 0301 (a, acute).

4. Applying the canonical ordering, we get "a´c¸´" (a, acute, c, cedilla, acute). This is because cedilla has a lower canonical ordering value (202) than acute (230) does. The positions of 'a' and 'c' are not affected, since they have zero canonical ordering values.
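The reordering step can be sketched as follows; charClass is the same combining-class lookup assumed in the implementation samples above.

static void canonicalOrder(StringBuffer buffer) {
    // bubble the characters into canonical order: swap adjacent pairs where the
    // first has a higher combining class than the second and the second's
    // class is non-zero
    boolean swapped = true;
    while (swapped) {
        swapped = false;
        for (int i = 1; i < buffer.length(); ++i) {
            int first = charClass(buffer.charAt(i - 1));
            int second = charClass(buffer.charAt(i));
            if (second != 0 && first > second) {
                char tmp = buffer.charAt(i - 1);   // swap the pair
                buffer.setCharAt(i - 1, buffer.charAt(i));
                buffer.setCharAt(i, tmp);
                swapped = true;
            }
        }
    }
}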
Compatibility decomposition is the process of taking a string, replacing composite characters using both the Unicode canonical decomposition mappings and the Unicode compatibility decomposition mappings, and reordering the result according to the Unicode canonical ordering values.
Copyright © 1998-1998 Unicode, Inc. All Rights Reserved.
The Unicode Consortium makes no expressed or implied warranty of any kind, and assumes no liability for errors or omissions. No liability is assumed for incidental and consequential damages in connection with or arising out of the use of the information or programs contained or accompanying this technical report.
Unicode and the Unicode logo are trademarks of Unicode, Inc., and are registered in some jurisdictions.
Unicode Home Page: http://www.unicode.org
Unicode Technical Reports: http://www.unicode.org/unicode/reports/