
5 The universal charset

Standard ISO 10646 defines a universal character set, intended to encompass, in the long run, all languages written on this planet. It is based on wide characters and offers room for about two billion characters (2^31).

This charset was to become available in recode under the name UCS, with many external surfaces for it. In the current version, however, only surfaces of UCS are offered, each presented as a genuine charset rather than as a surface. Such surfaces are only meaningful for the UCS charset, so there is little point in drawing a line between the surfaces and the only charset to which they may apply.

UCS stands for Universal Character Set. UCS-2 and UCS-4 are fixed-length encodings, using two or four bytes per character respectively. UTF stands for UCS Transformation Format; the UTF formats are variable-length encodings dedicated to UCS. UTF-1, based on ISO 2022, did not succeed (9). UTF-2 replaced it; it has been called UTF-FSS (File System Safe) in Unicode or Plan9 contexts, but is better known today as UTF-8. To complete the picture, there is UTF-16, based on 16-bit units, and UTF-7, which is meant for transmissions limited to 7-bit bytes. Most often, UTF-8 is used for external storage, and UCS-2 for internal storage.
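As a rough illustration of these size trade-offs, here is a sketch using Python's codecs rather than recode itself; `utf-32-be` plays the role of UCS-4, and for characters inside the BMP, `utf-16-be` coincides with UCS-2:

```python
# Illustration (Python codecs, not recode): byte sizes of one string
# under the encodings discussed above.
text = "énergie"                     # seven characters, one non-ASCII

utf8 = text.encode("utf-8")          # variable length: 'é' needs 2 bytes
ucs2 = text.encode("utf-16-be")      # fixed 2 bytes per BMP character
ucs4 = text.encode("utf-32-be")      # fixed 4 bytes per character

print(len(utf8), len(ucs2), len(ucs4))   # 8 14 28
```

The one non-ASCII character costs an extra byte in UTF-8, while the fixed-length encodings pay two or four bytes for every character, ASCII included.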

When recode produces any representation of UCS, it uses the replacement character U+FFFD for any valid character which is not representable in the goal charset (10). This happens, for example, when UCS-2 is not capable of representing a wide UCS-4 character or, for a similar reason, an UTF-8 sequence using more than three bytes. The replacement character is meant to represent an existing character, so it is never produced to represent an invalid sequence or an ill-formed character in the input text. In such cases, recode just gets rid of the noise, while taking note of the error in its usual ways.
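The narrowing to UCS-2 can be sketched as follows; `to_ucs2` is a hypothetical helper for illustration, not part of the recode API, and it shows only the substitution of U+FFFD for valid but unrepresentable characters:

```python
# Sketch: a valid character beyond the BMP cannot fit in two bytes,
# so the replacement character U+FFFD is substituted for it
# (hypothetical helper; recode's real code also handles invalid input).
def to_ucs2(text):
    codes = [ord(c) if ord(c) <= 0xFFFF else 0xFFFD for c in text]
    return b"".join(code.to_bytes(2, "big") for code in codes)

print(to_ucs2("A\U00010000").hex())   # 0041fffd
```

'A' passes through as 0x0041, while U+10000, which needs more than sixteen bits, becomes 0xFFFD.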

Even though UTF-8 is really an encoding, it is the encoding of a single character set and nothing else. It is useful to distinguish between an encoding (a surface, within recode) and a charset, but only when the surface may be applied to several charsets. Specifying a charset is a bit simpler than specifying a surface in a recode request, and there would be no practical advantage in imposing a more complex syntax on recode users when it is simple to assimilate UTF-8 to a charset. Similar considerations apply to UCS-2, UCS-4, UTF-16 and UTF-7; these are all considered to be charsets.



5.1 Universal Character Set, 2 bytes

One surface of UCS is usable for the subset defined by its first sixty-three thousand or so characters (in fact, 31 * 2^11 codes), and uses exactly two bytes per character. It is a mere dump of the internal memory representation which is natural for this subset, and as such it carries endianness problems with it.

A non-empty UCS-2 file normally begins with a so-called byte order mark, having value 0xFEFF. The value 0xFFFE is not an UCS character, so if this value is seen at the beginning of a file, recode reacts by swapping all pairs of bytes. The library also reacts properly to occurrences of 0xFEFF or 0xFFFE elsewhere than at the beginning, because concatenating UCS-2 files should stay a simple matter, but such occurrences might trigger a diagnostic about non-canonical input.
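A minimal sketch of this byte order mark handling, assuming a hypothetical `read_ucs2` helper (this is not recode's actual implementation):

```python
# Sketch of BOM handling for a UCS-2 stream: a leading 0xFFFE means
# the producer used the other byte order, so every pair of bytes is
# swapped; a leading 0xFEFF is simply dropped.
def read_ucs2(data):
    if data[:2] == b"\xff\xfe":        # swapped byte order mark
        data = bytes(c for pair in zip(data[1::2], data[0::2]) for c in pair)
    if data[:2] == b"\xfe\xff":        # canonical byte order mark: drop it
        data = data[2:]
    return data

print(read_ucs2(b"\xff\xfe\x41\x00").hex())   # 0041
```

Either byte order of the mark thus yields the same canonical big-endian stream.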

By default, when producing an UCS-2 file, recode outputs the high order byte before the low order byte. This can easily be overridden through the 21-Permutation surface (see Permutations). For example, the command:

recode u8..u2/21 < input > output

asks for an UTF-8 to UCS-2 conversion, with swapped byte output.

UCS-2 is available in recode as a genuine charset, under the name ISO-10646-UCS-2. Accepted aliases are UCS-2, BMP, rune and u2.

The recode library is able to combine some UCS-2 sequences of codes into single-code characters, so as to represent a few diacriticized characters, ligatures or diphthongs which have been included to ease mapping with other existing charsets. It is also able to explode such single-code characters into the corresponding sequences of codes. The request syntax for triggering such operations is rudimentary and temporary. The combined-UCS-2 pseudo-charset is a special form of UCS-2 in which known combining sequences have been replaced by the simpler code. Using combined-UCS-2 instead of UCS-2 in the after position of a request forces a combining step, while using combined-UCS-2 instead of UCS-2 in the before position of a request forces an exploding step. For the time being, one has to resort to advanced request syntax to achieve other effects. For example:

recode u8..co,u2..u8 < input > output

copies an UTF-8 input to the output, still in UTF-8, merging combining characters into single codes whenever possible.
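Unicode normalization in Python gives a close analogue of these combining and exploding steps, though recode's own tables may differ in coverage; NFC merges a base character and a combining mark into a single code, and NFD explodes it back:

```python
# Analogue of recode's combining/exploding steps (illustration only):
# NFC plays the role of the combining step, NFD of the exploding step.
import unicodedata

exploded = "e\u0301"                           # 'e' + combining acute accent
combined = unicodedata.normalize("NFC", exploded)

print(combined, len(combined))                 # é 1  (a single code)
print(len(unicodedata.normalize("NFD", combined)))   # 2  (exploded again)
```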



5.2 Universal Character Set, 4 bytes

Another surface of UCS uses exactly four bytes per character. It is a mere dump of the internal memory representation which is natural for the whole charset, and as such it carries endianness problems with it.

It is available in recode as a genuine charset, under the name ISO-10646-UCS-4. Accepted aliases are UCS, UCS-4, ISO_10646, 10646 and u4.



5.3 Universal Transformation Format, 7 bits

UTF-7 comes from IETF rather than ISO, and is described by RFC 2152, in the MIME series. The UTF-7 encoding is meant to fit UCS-2 over channels limited to seven bits per byte. It blends the spirit of Quoted-Printable with the methods of Base64, adapted to Unicode contexts.
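For a quick feel of the seven-bit property, Python's own utf-7 codec can be used (an illustration only; recode's UTF-7 implementation is independent of it):

```python
# Illustration: Python's utf-7 codec.  Non-ASCII characters come out
# as a Base64-like '+...-' escape, and every byte stays below 128.
encoded = "café".encode("utf-7")
print(encoded)

assert all(b < 0x80 for b in encoded)          # fits 7-bit channels
assert encoded.decode("utf-7") == "café"       # round trip is lossless
```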

This charset is available in recode under the name UNICODE-1-1-UTF-7. Accepted aliases are UTF-7, TF-7 and u7.



5.4 Universal Transformation Format, 8 bits

Even though UTF-8 did not originally come from IETF, there is now RFC 2279 to describe it. In letters sent on 1995-01-21 and 1995-04-20, Markus Kuhn writes:

UTF-8 is an ASCII compatible multi-byte encoding of the ISO 10646 universal character set (UCS). UCS is a 31-bit superset of all other character set standards. The first 256 characters of UCS are identical to those of ISO 8859-1 (Latin-1). The UCS-2 encoding of UCS is a sequence of bigendian 16-bit words, the UCS-4 encoding is a sequence of bigendian 32-bit words. The UCS-2 subset of ISO 10646 is also known as “Unicode”. As both UCS-2 and UCS-4 require heavy modifications to traditional ASCII oriented system designs (e.g. Unix), the UTF-8 encoding has been designed for these applications.

In UTF-8, only ASCII characters are encoded using bytes below 128. All other non-ASCII characters are encoded as multi-byte sequences consisting only of bytes in the range 128-253. This avoids critical bytes like NUL and / in UTF-8 strings, which makes the UTF-8 encoding suitable for being handled by the standard C string library and being used in Unix file names. Other properties include the preserved lexical sorting order and that UTF-8 allows easy self-synchronisation of software receiving UTF-8 strings.
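The ASCII compatibility and sorting properties Kuhn mentions are easy to check; this illustration uses Python and is not tied to recode:

```python
# Pure ASCII text encodes to the identical bytes in UTF-8.
assert "plain ascii".encode("utf-8") == b"plain ascii"

# Sorting UTF-8 byte strings agrees with sorting by UCS code point.
words = ["z", "\u00e9", "\u20ac"]   # code points 0x7A < 0xE9 < 0x20AC
encodings = [w.encode("utf-8") for w in words]
assert sorted(encodings) == encodings

print("ok")
```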

UTF-8 is the most common external surface of UCS. Each character uses from one to six bytes, and the encoding is able to represent all 2^31 characters of UCS. In recode, it is implemented as a charset.

These properties also have a few nice practical consequences.

In some cases, when little processing is done on a lot of strings, one may choose for efficiency reasons to handle UTF-8 strings directly, even though they are of variable length, as it is easy to find the start of each character. Character insertion or replacement might require moving the remainder of the string in either direction, however. In most cases, it is faster and easier to convert from UTF-8 to UCS-2 or UCS-4 prior to processing.
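Finding character starts in a UTF-8 string can be sketched in a few lines: continuation bytes all match the bit pattern 10xxxxxx, so any byte that does not begins a character (illustration in Python, not recode code):

```python
# Indices where UTF-8 characters begin: continuation bytes have the
# form 0b10xxxxxx, so any byte whose top two bits are not '10' starts
# a new character.
def char_starts(data):
    return [i for i, b in enumerate(data) if b & 0xC0 != 0x80]

print(char_starts("aé€".encode("utf-8")))   # [0, 1, 3]
```

'a' takes one byte, 'é' two and '€' three, so characters begin at offsets 0, 1 and 3. This is also why software can resynchronize after a corrupted byte: the next character start is always recognizable on its own.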

This charset is available in recode under the name UTF-8. Accepted aliases are UTF-2, UTF-FSS, FSS_UTF, TF-8 and u8.



5.5 Universal Transformation Format, 16 bits

Another external surface of UCS is variable length as well, each character using either two or four bytes. It is usable for the subset defined by the first million characters (17 * 2^16) of UCS.

Martin J. Dürst writes (to comp.std.internat, on 1995-03-28):

UTF-16 is another method that reserves two times 1024 codepoints in Unicode and uses them to index around one million additional characters. UTF-16 is a little bit like former multibyte codes, but quite not so, as both the first and the second 16-bit code clearly show what they are. The idea is that one million codepoints should be enough for all the rare Chinese ideograms and historical scripts that do not fit into the Base Multilingual Plane of ISO 10646 (with just about 63,000 positions available, now that 2,000 are gone).
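The mechanism Dürst describes can be sketched as follows: a code point beyond the BMP is split into a high and a low surrogate, each drawn from one of the two reserved blocks of 1024 code points (illustration only, not recode's code):

```python
# Split a code point above U+FFFF into a UTF-16 surrogate pair.
# The offset 0x10000 leaves a 20-bit value; its high 10 bits index
# the 0xD800 block, its low 10 bits the 0xDC00 block.
def to_surrogates(code):
    code -= 0x10000
    return 0xD800 + (code >> 10), 0xDC00 + (code & 0x3FF)

print([hex(u) for u in to_surrogates(0x1D11E)])   # musical symbol G clef
```

Because the two blocks are disjoint, any 16-bit unit shows by itself whether it is a first half, a second half, or an ordinary BMP character.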

This charset is available in recode under the name UTF-16. Accepted aliases are Unicode, TF-16 and u6.



5.6 Frequency count of characters

A device may be used to obtain a list of the characters in a file, together with how many times each character appears. Each count is followed by the UCS-2 value of the character and, when known, the RFC 1345 mnemonic for that character.

This charset is available in recode under the name count-characters.

This count feature has been implemented as a charset. This may change in some later version, as it would sometimes be convenient to count the original bytes instead of their UCS-2 equivalents.
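The counting device behaves much like a frequency table over UCS values; here is an analogous sketch with Python's Counter (recode additionally prints the RFC 1345 mnemonics, which are omitted here):

```python
# Frequency count over the characters of an input, analogous to the
# count-characters device described above.
from collections import Counter

counts = Counter("Hello, world!")
print(counts["l"], counts["o"])   # 3 2
```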



5.7 Fully interpreted UCS dump

Another device may be used to get fully interpreted dumps of an UCS-2 stream of characters, with each UCS-2 character displayed on a full output line. Each line receives the RFC 1345 mnemonic for the character if it exists, the UCS-2 value of the character, and a descriptive comment for that character. As each input character produces its own output line, beware that the output file from this conversion may be much, much bigger than the input file.

This charset is available in recode under the name dump-with-names.

This dump-with-names feature has been implemented as a charset rather than a surface. This is surely debatable. The current implementation allows for dumping charsets other than UCS-2. For example, the command ‘recode l2..full < input’ implies a necessary conversion from Latin-2 to UCS-2, as dump-with-names is only connected out from UCS-2. In such cases, recode does not display the original Latin-2 codes in the dump, only the corresponding UCS-2 values. To give a simpler example, the command

echo 'Hello, world!' | recode us..dump

produces the following output:

UCS2   Mne   Description

0048   H     latin capital letter h
0065   e     latin small letter e
006C   l     latin small letter l
006C   l     latin small letter l
006F   o     latin small letter o
002C   ,     comma
0020   SP    space
0077   w     latin small letter w
006F   o     latin small letter o
0072   r     latin small letter r
006C   l     latin small letter l
0064   d     latin small letter d
0021   !     exclamation mark
000A   LF    line feed (lf)

The descriptive comment is given in English, using ASCII. If the English description is not available but a French one is, the French description is given instead, using Latin-1. Moreover, if the LANGUAGE or LANG environment variable begins with the letters ‘fr’, listing preference goes to French when both descriptions are available.

Here is another example. To get the long description of code 237 in the Latin-5 table, one may use the following command:

echo -n 237 | recode l5/d..dump

If your echo does not grok ‘-n’, use ‘echo 237\c’ instead. Here is how to see what Unicode U+03C6 means, while getting rid of the title lines:

echo -n 0x03C6 | recode u2/x2..dump | tail +3

Footnotes

(9)

It is not probable that recode will ever support UTF-1.

(10)

This is when the goal charset allows for 16-bit characters. For narrower charsets, the ‘--strict’ (‘-s’) option decides what happens: either the character is dropped, or a reversible mapping is produced on the fly.

