Path: news.bu.edu!taco.cc.ncsu.edu!news-server.ncren.net!news.duke.edu!godot.cc.duq.edu!hudson.lm.com!news.pop.psu.edu!news.cac.psu.edu!howland.reston.ans.net!cs.utexas.edu!not-for-mail
From: g53150@SAKURA.KUDPC.KYOTO-U.AC.JP (christian wittern)
Newsgroups: alt.chinese.computing
Subject: Introducing CEF (Chinese Encoding Framework)
Date: 18 Jan 1995 19:45:31 -0600
Organization: UTexas Mail-to-News Gateway
Lines: 226
Sender: nobody@cs.utexas.edu
Message-ID: <199501190145.TAA26980@mail.cs.utexas.edu>
Reply-To: christian wittern
NNTP-Posting-Host: news.cs.utexas.edu

The following is posted to several lists; my apologies for any noise caused by this.

Introducing CEF (Chinese Encoding Framework)

by Christian Wittern, Kyoto

The Problem

Due to the structure of the Chinese script and the tools available today for processing it on computers, almost any premodern Chinese text contains characters that cannot be input. They do not account for much of the text in percentage terms (in most cases I have seen, they amount to between 1 and 3% of a Big5-encoded text; your mileage may vary). Rather than defining an ad hoc private encoding for every character missing from the codeset in use, as is done in many cases today, I think it is advisable to use a standard reference for those characters wherever possible. This facilitates data exchange and the maintenance of databases with information about characters.

Looking for such a standard, I decided to use the Taiwanese CNS code, the Chinese national code of Taiwan. In the form published in 1992, it defines the glyph shape, stroke count, and radical heading for 48,027 characters. For all of these characters a reference font in a 40 by 40 grid (and for most of them also in a 24 by 24 grid) is available from the issuing body. The characters are assigned to seven levels, with the more frequent ones at the lower levels and the variant forms at the two top levels. The overall architecture reserves space for five more standard levels, and four further levels are reserved for non-standard, private encodings, bringing the total to 16 levels, with a hypothetical space for roughly 120,000 ideographs. Beyond the currently defined levels, one more level with about 7,000 characters is under revision and is expected to be published in the course of 1995. This will bring the total number of assigned characters to roughly 55,000.

The first two levels of CNS encode the same set of characters as Big5, currently the most widely used internal code in Taiwan. The internal arrangement is slightly different and duplicate encodings have been removed, but that need not concern us here. CNS from level 3 upward can therefore be seen as a true extension of Big5 without any overlap. This means that if a way were found to access those characters from Big5-based systems, these systems would suddenly be able to handle 48,027 characters without the need for a new operating system or word processor. Even high-end DTP incorporating those characters could then be done. Exactly such an extension is attempted here.

Introducing CEF

CEF stands for Chinese Encoding Framework and is a protocol for working with a number of different codesets in the same document. The details are explained below.
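Since Big5 remains the base codeset in this scheme, it helps to pin down its code space first. The following minimal sketch (in Python; the byte ranges are the commonly published Big5 ranges, and the function name is mine, not part of any specification) shows the test a processing tool might apply; any character outside this space has to travel as one of the placeholders described below:

    def is_big5(lead: int, trail: int) -> bool:
        """True if the two bytes form a syntactically valid Big5 code.
        Lead bytes run from 0xA1 to 0xF9; trail bytes from 0x40 to 0x7E
        or from 0xA1 to 0xFE."""
        return 0xA1 <= lead <= 0xF9 and (0x40 <= trail <= 0x7E or
                                         0xA1 <= trail <= 0xFE)

    assert is_big5(0xA4, 0x40)        # a code inside the Big5 space
    assert not is_big5(0x24, 0x21)    # a bare CNS table code is not Big5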
Having decided to use CNS to extend Big5, it is necessary to dig a bit deeper into CNS. The overall structure has already been outlined, but how does it relate to the other codesets in use in East Asia, e.g. the Korean KSC, the Japanese JIS, and the mainland Chinese GB? And what about Unicode? The answer is somewhat disappointing: although CNS defines roughly eight times as many characters, more than three hundred characters that are in the Japanese JIS are still missing. Compared to GB, the number of missing characters is roughly 1,800. This also makes clear that some characters of the Unicode Han character repertoire will be missing from CNS.

Upon closer examination the reason soon becomes obvious. CNS in its higher levels occasionally defines abbreviated forms, but in general it does not include characters generated as a result of the modern character reforms. I consider this a serious drawback and an obstacle to a truly universal character set, but it seems to have been a design principle of CNS. It is of course also understandable, as CNS was designed not for the use of researchers but for Taiwanese government agencies and their census registers. It should be noted, however, that I do not expect serious problems for the encoding of *premodern* texts to arise from the fact that these characters are missing. Nevertheless, for the sake of completeness, and to make sure that round-trip conversion from and to JIS is possible, some additional characters have been added in the CEF database for private use at the International Research Institute for Zen Buddhism (IRIZ) in Kyoto, where this framework has been developed. This reintroduces the above-mentioned private encodings and makes the whole picture less appealing, but on the other hand it still provides a working solution for real-world problems.

The Details

Now, what does all this mean in practice? If different codesets, or even just different levels within the same codeset, are to be mixed in one text, there must be a means of distinguishing between them. There are generally two ways of doing this: using an escape sequence which signals "here begins codeset X", or adding a flag to each character which identifies the codeset the character belongs to. In this proposal flags are used, since they are much easier to handle. This approach differs from the one described in the standard documents for CNS, which uses escape sequences.

Rather than encoding the characters in a binary, machine-only readable form with escape sequences or flags that might confuse processing software, a replacement with descriptive placeholders, as in SGML text, is recommended. The *terminus technicus* for this in an SGML context is "entity reference", and this is how characters that are not in the basic codeset of a document are handled in SGML. (Side note: SGML stands for Standard Generalized Markup Language and is a universal standard for handling text with arbitrary attributes. It also has a powerful scheme for dealing with characters not found in a document's base character set. More information on SGML can be found in the Usenet group comp.text.sgml or at the ftp site ftp.ifi.uio.no in /pub/SGML.)

A processing system (for example a word processor or DTP software) reading files containing such placeholders will need some information about which character to present to the user when a certain placeholder is encountered.
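To make this concrete, here is a minimal sketch of the recognition step such a system might perform, anticipating the exact placeholder syntax given in the next section (the regular expression and the function name are mine and merely illustrative):

    import re

    # One placeholder: "&", a codeset letter, a level digit, a hyphen,
    # four hex digits and ";" -- e.g. &C3-A4A1; for CNS level 3, code A4A1.
    CEF_REF = re.compile(r"&([A-Z])([0-9])-([0-9A-Fa-f]{4});")

    def scan_placeholders(text):
        """Yield (codeset, level, code) for every CEF placeholder in text."""
        for m in CEF_REF.finditer(text):
            yield m.group(1), int(m.group(2)), int(m.group(3), 16)

    for ref in scan_placeholders("a missing character: &C3-A4A1;"):
        print(ref)   # prints ('C', 3, 42145), i.e. CNS level 3, code 0xA4A1

A word processor macro of the kind described below would use the same recognition step and then replace each match with the corresponding character in the appropriate CNS font.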
The placeholders used in CEF carry the information about which character (or better: codepoint) they represent within themselves, so this information can be derived and processed automatically. The more general case in SGML systems is that this information is provided by a table. Although any convention could be used, I construct these placeholders as in the following example:

&C3-A4A1;

The first and last characters are SGML conventions, which begin and end a reference to codes outside the character set. The second character, "C", signals that we are dealing with CNS; what follows is a "3" for the third level of CNS, and after the hyphen comes the hex code of the character. It should be noted here that in CEF I use the CNS code with the high bits of both bytes set; that is, in the CNS table this character from the third level would actually have the code "2421" (see the sketch at the end of this post). The reason for this is that, in order to use those CNS characters in Chinese Windows (Big5 version), I created five TrueType fonts, one for each of the levels above Big5, and those fonts cannot use a totally different codespace.

SGML-aware systems can process such references automatically and are very flexible in this respect. For most word processors the conversion of these placeholders can be done by a simple macro, which is executed every time a plain text file is opened. The macro has to recognize the placeholder, delete it from the document, and insert the character represented by that code, in the correct CNS font, into the document. Part of the information about the character is thus encoded in the selected font. When searching the resulting formatted text, it is a good idea to include the desired font (= code) in the search request. The reverse transformation has to take place when a formatted text is saved as a plain text file; this can be done by a similar macro.

A number of other tools for processing such texts are under development at the IRIZ, namely code conversion tools and a tool for the automated production of concordances. For display and printing, a number of TrueType fonts have been developed (it is hoped that they can be made available to interested parties), generated at the IRIZ from the above-mentioned CNS bitmaps. Currently these work only under Chinese Windows (Big5 version), but every attempt is being made to make them available to users of the CLK (Chinese Language Kit) on the Macintosh as well.

Another most important aspect to consider is how such characters can be input into a text. For this purpose a database is under development which will allow queries over the whole character set of more than 48,000 characters. The only table completed so far (although still with draft status) is a Four-Corner table, which contains keys for all characters. To reduce the number of matches for a given key, this could be combined with the stroke count, for which a complete table has also been produced. Other input tables include a table for the Cangjie method, which includes roughly 46,000 keys, a radical/stroke-count table, and a pinyin table; these are still in early stages and may take some time to be completed.

Preliminary tests with the framework outlined here have been very satisfying, and it is hoped that CEF will prove to be a useful convention for the input of, and work with, all sorts of premodern texts.
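Finally, for readers who want to experiment, here is a minimal sketch of the high-bit transformation described above (the function names are mine, not part of any CEF specification):

    def make_placeholder(level: int, cns_code: int) -> str:
        """Build a CEF entity reference for a CNS character, given the code
        as printed in the CNS tables, e.g. 0x2421 on level 3. CEF stores
        the code with the high bit of both bytes set (0x2421 -> 0xA4A1)."""
        return "&C%d-%04X;" % (level, cns_code | 0x8080)

    def table_code(cef_code: int) -> int:
        """Inverse transformation: clear the high bits to recover the code
        as printed in the CNS tables (0xA4A1 -> 0x2421)."""
        return cef_code & ~0x8080

    assert make_placeholder(3, 0x2421) == "&C3-A4A1;"
    assert table_code(0xA4A1) == 0x2421

Christian Wittern, Kyoto
g53150@sakura.kudpc.kyoto-u.ac.jp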