=head1 NAME

perluniintro - Perl Unicode introduction

=head1 DESCRIPTION

This document gives a general idea of Unicode and how to use Unicode
in Perl.

=head2 Unicode

Unicode is a character set standard which plans to codify all of the
writing systems of the world, plus many other symbols.

Unicode and ISO/IEC 10646 are coordinated standards that provide code
points for characters in almost all modern character set standards,
covering more than 30 writing systems and hundreds of languages,
including all commercially-important modern languages. All characters
in the largest Chinese, Japanese, and Korean dictionaries are also
encoded. The standards will eventually cover almost all characters in
more than 250 writing systems and thousands of languages.
Unicode 1.0 was released in October 1991, and 4.0 in April 2003.

A Unicode I<character> is an abstract entity. It is not bound to any
particular integer width, especially not to the C language C<char>.
Unicode is language-neutral and display-neutral: it does not encode the
language of the text and it does not define fonts or other graphical
layout details. Unicode operates on characters and on text built from
those characters.

Unicode defines characters like C<LATIN CAPITAL LETTER A> or C<GREEK
SMALL LETTER ALPHA> and unique numbers for the characters, in this
case 0x0041 and 0x03B1, respectively. These unique numbers are called
I<code points>.

The Unicode standard prefers using hexadecimal notation for the code
points. If numbers like C<0x0041> are unfamiliar to you, take a peek
at a later section, L</"Hexadecimal Notation">. The Unicode standard
uses the notation C<U+0041 LATIN CAPITAL LETTER A>, to give the
hexadecimal code point and the normative name of the character.

Unicode also defines various I<properties> for the characters, like
"uppercase" or "lowercase", "decimal digit", or "punctuation";
these properties are independent of the names of the characters.
Furthermore, various operations on the characters like uppercasing,
lowercasing, and collating (sorting) are defined.
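
For instance, here is a minimal illustrative sketch of how these
properties and operations surface in Perl: C<uc()> and C<lc()> follow
the Unicode case mappings, and C<\p{...}> matches characters by
property:

    use charnames ':full';
    my $alpha = "\N{GREEK SMALL LETTER ALPHA}";   # U+03B1
    printf "U+%04X\n", ord(uc $alpha);            # U+0391, the capital alpha
    print "is a letter\n"    if $alpha =~ /\p{IsAlpha}/;
    print "is lowercase\n"   if $alpha =~ /\p{IsLower}/;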

A Unicode character consists either of a single code point, or a
I<base character> (like C<LATIN CAPITAL LETTER A>), followed by one or
more I<modifiers> (like C<COMBINING ACUTE ACCENT>). This sequence of
base character and modifiers is called a I<combining character
sequence>.

Whether to call these combining character sequences "characters"
depends on your point of view. If you are a programmer, you probably
would tend towards seeing each element in the sequences as one unit,
or "character". The whole sequence could be seen as one "character",
however, from the user's point of view, since that's probably what it
looks like in the context of the user's language.

With this "whole sequence" view of characters, the total number of
characters is open-ended. But in the programmer's "one unit is one
character" point of view, the concept of "characters" is more
deterministic. In this document, we take that second point of view:
one "character" is one Unicode code point, be it a base character or
a combining character.

For some combinations, there are I<precomposed> characters.
C<LATIN CAPITAL LETTER A WITH ACUTE>, for example, is defined as
a single code point. These precomposed characters are, however,
only available for some combinations, and are mainly
meant to support round-trip conversions between Unicode and legacy
standards (like the ISO 8859). In the general case, the composing
method is more extensible. To support conversion between
different compositions of the characters, various I<normalization
forms> to standardize representations are also defined.
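
As a brief sketch of what the normalization forms do in practice, the
standard L<Unicode::Normalize> module exports C<NFC()> (which composes
where possible) and C<NFD()> (which decomposes):

    use charnames ':full';
    use Unicode::Normalize;   # NFC() and NFD() are exported by default

    my $decomposed = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";
    my $composed   = NFC($decomposed);   # one code point, U+00C1
    my $expanded   = NFD($composed);     # back to two code points
    printf "%d %d\n", length($composed), length($expanded);   # 1 2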

Because of backward compatibility with legacy encodings, the "a unique
number for every character" idea breaks down a bit: instead, there is
"at least one number for every character". The same character could
be represented differently in several legacy encodings. The
converse is also not true: some code points do not have an assigned
character. Firstly, there are unallocated code points within
otherwise used blocks. Secondly, there are special Unicode control
characters that do not represent true characters.

A common myth about Unicode is that it would be "16-bit", that is,
that Unicode can only represent C<0x10000> (or 65536) characters, from
C<0x0000> to C<0xFFFF>. B<This is untrue.> Since Unicode 2.0 (July
1996), Unicode has been defined all the way up to 21 bits (C<0x10FFFF>),
and since Unicode 3.1 (March 2001), characters have been defined
beyond C<0xFFFF>. The first C<0x10000> characters are called the
I<Plane 0>, or the I<Basic Multilingual Plane> (BMP). With Unicode
3.1, 17 (yes, seventeen) planes in all were defined--but they are
nowhere near full of defined characters, yet.

Another myth is that the 256-character blocks have something to
do with languages--that each block would define the characters used
by a language or a set of languages. B<This is also untrue.>
The division into blocks exists, but it is almost completely
accidental--an artifact of how the characters have been and
still are allocated. Instead, there is a concept called I<scripts>,
which is more useful: there is C<Latin> script, C<Greek> script, and
so on. Scripts usually span varied parts of several blocks.
For further information see L<Unicode::UCD>.
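
For example, regular expressions can match characters by script with
the C<\p{...}> property syntax (a minimal sketch):

    use charnames ':full';
    my $string = "x\N{GREEK SMALL LETTER ALPHA}";
    print "contains Greek\n" if $string =~ /\p{Greek}/;
    print "contains Latin\n" if $string =~ /\p{Latin}/;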

The Unicode code points are just abstract numbers. To input and
output these abstract numbers, the numbers must be I<encoded> or
I<serialised> somehow. Unicode defines several I<character encoding
forms>, of which I<UTF-8> is perhaps the most popular. UTF-8 is a
variable-length encoding that encodes Unicode characters as 1 to 6
bytes (only 4 with the currently defined characters). Other encodings
include UTF-16 and UTF-32 and their big- and little-endian variants
(UTF-8 is byte-order independent). The ISO/IEC 10646 defines the UCS-2
and UCS-4 encoding forms.
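
As a rough sketch of how the same character serialises differently,
the L<Encode> module can encode a string into any of these forms and
you can then look at the byte length of the result:

    use Encode 'encode';
    my $snowman = chr(0x2603);   # one character, SNOWMAN
    printf "UTF-8:    %d bytes\n", length(encode("UTF-8",    $snowman)); # 3
    printf "UTF-16BE: %d bytes\n", length(encode("UTF-16BE", $snowman)); # 2
    printf "UTF-32BE: %d bytes\n", length(encode("UTF-32BE", $snowman)); # 4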

For more information about encodings--for instance, to learn what
I<surrogates> and I<byte order marks> (BOMs) are--see L<perlunicode>.

=head2 Perl's Unicode Support

Starting from Perl 5.6.0, Perl has had the capacity to handle Unicode
natively. Perl 5.8.0, however, is the first recommended release for
serious Unicode work. The maintenance release 5.6.1 fixed many of the
problems of the initial Unicode implementation, but for example
regular expressions still do not work with Unicode in 5.6.1.

B<Starting from Perl 5.8.0, the use of C<use utf8> is no longer
necessary.> In earlier releases the C<utf8> pragma was used to declare
that operations in the current block or file would be Unicode-aware.
This model was found to be wrong, or at least clumsy: the "Unicodeness"
is now carried with the data, instead of being attached to the
operations. Only one case remains where an explicit C<use utf8> is
needed: if your Perl script itself is encoded in UTF-8, you can use
UTF-8 in your identifier names, and in string and regular expression
literals, by saying C<use utf8>. This is not the default because
scripts with legacy 8-bit data in them would break. See L<utf8>.

=head2 Perl's Unicode Model

Perl supports both pre-5.6 strings of eight-bit native bytes, and
strings of Unicode characters. The principle is that Perl tries to
keep its data as eight-bit bytes for as long as possible, but as soon
as Unicodeness cannot be avoided, the data is transparently upgraded
to Unicode.

Internally, Perl currently uses either the native eight-bit character
set of the platform (for example Latin-1) or UTF-8 to encode Unicode
strings. Specifically, if all code points in the string are C<0xFF> or
less, Perl uses the native eight-bit character set. Otherwise, it uses
UTF-8.

A user of Perl does not normally need to know nor care how Perl
happens to encode its internal strings, but it becomes relevant when
outputting Unicode strings to a stream without a PerlIO layer -- one with
the "default" encoding. In such a case, the raw bytes used internally
(the native character set or UTF-8, as appropriate for each string)
will be used, and a "Wide character" warning will be issued if those
strings contain a character beyond 0x00FF.

For example,

    perl -e 'print "\x{DF}\n", "\x{0100}\x{DF}\n"'

produces a fairly useless mixture of native bytes and UTF-8, as well
as a warning:

    Wide character in print at ...

To output UTF-8, use the C<:utf8> output layer. Prepending

    binmode(STDOUT, ":utf8");

to this sample program ensures that the output is completely UTF-8,
and removes the program's warning.

You can enable automatic UTF-8-ification of your standard file
handles, default C<open()> layer, and C<@ARGV> by using either
the C<-C> command line switch or the C<PERL_UNICODE> environment
variable; see L<perlrun> for the documentation of the C<-C> switch.
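
As an illustrative sketch only (the C<-C> sub-option letters shown here
were added after 5.8.0, so check L<perlrun> for what your perl
accepts):

    # Treat STDIN/STDOUT/STDERR as UTF-8 (S), ditto the default
    # open() layers (D) and @ARGV (A):
    perl -CSDA -e 'print chr(0x263a), "\n"'

    # The same via the environment variable:
    PERL_UNICODE=SDA perl -e 'print chr(0x263a), "\n"'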

Note that this means that Perl expects other software to work, too:
if Perl has been led to believe that STDIN should be UTF-8, but then
STDIN coming in from another command is not UTF-8, Perl will complain
about the malformed UTF-8.

All features that combine Unicode and I/O also require using the new
PerlIO feature. Almost all Perl 5.8 platforms do use PerlIO, though:
you can see whether yours does by running "perl -V" and looking for
C<useperlio=define>.

=head2 Unicode and EBCDIC

Perl 5.8.0 also supports Unicode on EBCDIC platforms. There,
Unicode support is somewhat more complex to implement since
additional conversions are needed at every step. Some problems
remain; see L<perlebcdic> for details.

In any case, the Unicode support on EBCDIC platforms is better than
in the 5.6 series, which didn't work much at all for EBCDIC platforms.
On EBCDIC platforms, the internal Unicode encoding form is UTF-EBCDIC
instead of UTF-8. The difference is that UTF-8 is "ASCII-safe", in
that ASCII characters encode to UTF-8 as-is, while UTF-EBCDIC is
"EBCDIC-safe".

=head2 Creating Unicode

To create Unicode characters in literals for code points above C<0xFF>,
use the C<\x{...}> notation in double-quoted strings:

    my $smiley = "\x{263a}";

Similarly, it can be used in regular expression literals

    $smiley =~ /\x{263a}/;

At run-time you can use C<chr()>:

    my $hebrew_alef = chr(0x05d0);

See L</"Further Resources"> for how to find all these numeric codes.

Naturally, C<ord()> will do the reverse: it turns a character into
a code point.
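
For instance (a small sketch):

    my $code_point = ord("\x{263a}");
    printf "U+%04X\n", $code_point;   # U+263A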

Note that C<\x..> (no C<{}> and only two hexadecimal digits), C<\x{...}>,
and C<chr(...)> for arguments less than C<0x100> (decimal 256)
generate an eight-bit character for backward compatibility with older
Perls. For arguments of C<0x100> or more, Unicode characters are
always produced. If you want to force the production of Unicode
characters regardless of the numeric value, use C<pack("U", ...)>
instead of C<\x..>, C<\x{...}>, or C<chr()>.
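
A small sketch of the difference, using the internal flag (discussed
further under L</"Questions With Answers">) just to make it visible:

    my $byte_char    = chr(0xDF);         # eight-bit character
    my $unicode_char = pack("U", 0xDF);   # forced to a Unicode character
    print utf8::is_utf8($byte_char)    ? 1 : 0, "\n";   # 0
    print utf8::is_utf8($unicode_char) ? 1 : 0, "\n";   # 1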

You can also use the C<charnames> pragma to invoke characters
by name in double-quoted strings:

    use charnames ':full';
    my $arabic_alef = "\N{ARABIC LETTER ALEF}";

And, as mentioned above, you can also C<pack()> numbers into Unicode
characters:

    my $georgian_an = pack("U", 0x10a0);

Note that both C<\x{...}> and C<\N{...}> are compile-time string
constants: you cannot use variables in them. If you want similar
run-time functionality, use C<chr()> and C<charnames::vianame()>.
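
For example, a run-time equivalent of the C<\N{...}> above might look
like this (a minimal sketch):

    use charnames ':full';
    my $code_point  = charnames::vianame("ARABIC LETTER ALEF");
    my $arabic_alef = chr($code_point);   # same character as \N{...} above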

If you want to force the result to Unicode characters, use the special
C<"U0"> prefix. It consumes no arguments but forces the result to be
in Unicode characters, instead of bytes.

    my $chars = pack("U0C*", 0x80, 0x42);

Likewise, you can force the result to be bytes by using the special
C<"C0"> prefix.

=head2 Handling Unicode

Handling Unicode is for the most part transparent: just use the
strings as usual. Functions like C<index()>, C<length()>, and
C<substr()> will work on the Unicode characters; regular expressions
will work on the Unicode characters (see L<perlunicode> and L<perlretut>).

Note that Perl considers combining character sequences to be
separate characters, so for example

    use charnames ':full';
    print length("\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"), "\n";

will print 2, not 1. The only exception is that regular expressions
have C<\X> for matching a combining character sequence.
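
For example (a small sketch):

    use charnames ':full';
    my $seq = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";
    my $graphemes = () = $seq =~ /\X/g;   # count the \X matches
    print $graphemes, "\n";               # 1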

Life is not quite so transparent, however, when working with legacy
encodings, I/O, and certain special cases:

=head2 Legacy Encodings

When you combine legacy data and Unicode, the legacy data needs
to be upgraded to Unicode. Normally ISO 8859-1 (or EBCDIC, if
applicable) is assumed. You can override this assumption by
using the C<encoding> pragma, for example

    use encoding 'latin2'; # ISO 8859-2

in which case literals (string or regular expressions), C<chr()>,
and C<ord()> in your whole script are assumed to produce Unicode
characters from ISO 8859-2 code points. Note that the matching for
encoding names is forgiving: instead of C<latin2> you could have
said C<Latin 2>, or C<iso8859-2>, or other variations. With just

    use encoding;

the environment variable C<PERL_ENCODING> will be consulted.
If that variable isn't set, the encoding pragma will fail.

The C<Encode> module knows about many encodings and has interfaces
for doing conversions between those encodings:

    use Encode 'decode';
    $data = decode("iso-8859-3", $data); # convert from legacy to Unicode

=head2 Unicode I/O

Normally, writing out Unicode data

    print FH $some_string_with_unicode, "\n";

produces raw bytes that Perl happens to use to internally encode the
Unicode string. Perl's internal encoding depends on the system as
well as what characters happen to be in the string at the time. If
any of the characters are at code points C<0x100> or above, you will get
a warning. To ensure that the output is explicitly rendered in the
encoding you desire--and to avoid the warning--open the stream with
the desired encoding. Some examples:

    open FH, ">:utf8", "file";

    open FH, ">:encoding(ucs2)",      "file";
    open FH, ">:encoding(UTF-8)",     "file";
    open FH, ">:encoding(shift_jis)", "file";

and on already open streams, use C<binmode()>:

    binmode(STDOUT, ":utf8");

    binmode(STDOUT, ":encoding(ucs2)");
    binmode(STDOUT, ":encoding(UTF-8)");
    binmode(STDOUT, ":encoding(shift_jis)");

The matching of encoding names is loose: case does not matter, and
many encodings have several aliases. Note that the C<:utf8> layer
must always be specified exactly like that; it is I<not> subject to
the loose matching of encoding names.

See L<PerlIO> for the C<:utf8> layer, L<PerlIO::encoding> and
L<Encode::PerlIO> for the C<:encoding()> layer, and
L<Encode::Supported> for many encodings supported by the C<Encode>
module.

Reading in a file that you know happens to be encoded in one of the
Unicode or legacy encodings does not magically turn the data into
Unicode in Perl's eyes. To do that, specify the appropriate
layer when opening files

    open(my $fh, '<:utf8', 'anything');
    my $line_of_unicode = <$fh>;

    open(my $fh, '<:encoding(Big5)', 'anything');
    my $line_of_unicode = <$fh>;

The I/O layers can also be specified more flexibly with
the C<open> pragma. See L<open>, or look at the following example.

    use open ':utf8'; # input and output default layer will be UTF-8
    open X, ">file";
    print X chr(0x100), "\n";
    close X;
    open Y, "<file";
    printf "%#x\n", ord(<Y>); # this should print 0x100
    close Y;

With the C<open> pragma you can use the C<:locale> layer

    BEGIN { $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R' }
    # the :locale will probe the locale environment variables like LC_ALL
    use open OUT => ':locale'; # russki parusski
    open(O, ">koi8");
    print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
    close O;
    open(I, "<koi8");
    printf "%#x\n", ord(<I>); # this should print 0xc1
    close I;

or you can also use the C<':encoding(...)'> layer

    open(my $epic, '<:encoding(iso-8859-7)', 'iliad.greek');
    my $line_of_unicode = <$epic>;

These methods install a transparent filter on the I/O stream that
converts data from the specified encoding when it is read in from the
stream. The result is always Unicode.

The L<open> pragma affects all the C<open()> calls after the pragma by
setting default layers. If you want to affect only certain
streams, use explicit layers directly in the C<open()> call.

You can switch encodings on an already opened stream by using
C<binmode()>; see L<perlfunc/binmode>.

The C<:locale> does not currently (as of Perl 5.8.0) work with
C<open()> and C<binmode()>, only with the C<open> pragma. The
C<:utf8> and C<:encoding(...)> methods do work with all of C<open()>,
C<binmode()>, and the C<open> pragma.

Similarly, you may use these I/O layers on output streams to
automatically convert Unicode to the specified encoding when it is
written to the stream. For example, the following snippet copies the
contents of the file "text.jis" (encoded as ISO-2022-JP, aka JIS) to
the file "text.utf8", encoded as UTF-8:

    open(my $nihongo, '<:encoding(iso-2022-jp)', 'text.jis');
    open(my $unicode, '>:utf8', 'text.utf8');
    while (<$nihongo>) { print $unicode $_ }

The naming of encodings, both by the C<open()> and by the C<open>
pragma, is similar to the C<encoding> pragma in that it allows for
flexible names: C<koi8-r> and C<KOI8R> will both be understood.

Common encodings recognized by ISO, MIME, IANA, and various other
standardisation organisations are supported; for a more detailed
list see L<Encode::Supported>.
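
To check how a flexible name resolves, you can ask C<Encode> directly
(a small sketch):

    use Encode;
    print Encode::resolve_alias("latin2"), "\n";   # typically iso-8859-2
    print Encode::resolve_alias("KOI8R"),  "\n";   # typically koi8-r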

C<read()> reads characters and returns the number of characters.
C<seek()> and C<tell()> operate on byte counts, as do C<sysread()>
and C<sysseek()>.

Notice that because of the default behaviour of not doing any
conversion upon input if there is no default layer,
it is easy to mistakenly write code that keeps on expanding a file
by repeatedly encoding the data:

    # BAD CODE WARNING
    open F, "file";
    local $/; ## read in the whole file of 8-bit characters
    $t = <F>;
    close F;
    open F, ">:utf8", "file";
    print F $t; ## convert to UTF-8 on output
    close F;

If you run this code twice, the contents of the F<file> will be twice
UTF-8 encoded. A C<use open ':utf8'> would have avoided the bug, as
would explicitly opening the F<file> for input as UTF-8.

B<NOTE>: the C<:utf8> and C<:encoding> features work only if your
Perl has been built with the new PerlIO feature (which is the default
on most systems).

=head2 Displaying Unicode As Text

Sometimes you might want to display Perl scalars containing Unicode as
simple ASCII (or EBCDIC) text. The following subroutine converts
its argument so that Unicode characters with code points greater than
255 are displayed as C<\x{...}>, control characters (like C<\n>) are
displayed as C<\x..>, and the rest of the characters as themselves:

    sub nice_string {
        join("",
          map { $_ > 255 ?                   # if wide character...
                sprintf("\\x{%04X}", $_) :   # \x{...}
                chr($_) =~ /[[:cntrl:]]/ ?   # else if control character ...
                sprintf("\\x%02X", $_) :     # \x..
                quotemeta(chr($_))           # else quoted or as themselves
              } unpack("U*", $_[0]));        # unpack Unicode characters
    }

For example,

    nice_string("foo\x{100}bar\n")

returns the string

    'foo\x{0100}bar\x0A'

which is ready to be printed.

=head2 Special Cases

=over 4

=item *

Bit Complement Operator ~ And vec()

The bit complement operator C<~> may produce surprising results if
used on strings containing characters with ordinal values above
255. In such a case, the results are consistent with the internal
encoding of the characters, but not with much else. So don't do
that. Similarly for C<vec()>: you will be operating on the
internally-encoded bit patterns of the Unicode characters, not on
the code point values, which is very probably not what you want.

=item *

Peeking At Perl's Internal Encoding

Normal users of Perl should never care how Perl encodes any particular
Unicode string (because the normal ways to get at the contents of a
string with Unicode--via input and output--should always be via
explicitly-defined I/O layers). But if you must, there are two
ways of looking behind the scenes.

One way of peeking inside the internal encoding of Unicode characters
is to use C<unpack("C*", ...)> to get the bytes or C<unpack("H*", ...)>
to display the bytes:

    # this prints  c480  for the UTF-8 bytes 0xc4 0x80
    print join(" ", unpack("H*", pack("U", 0x100))), "\n";

Yet another way would be to use the Devel::Peek module:

    perl -MDevel::Peek -e 'Dump(chr(0x100))'

That shows the C<UTF8> flag in FLAGS and both the UTF-8 bytes
and Unicode characters in C<PV>. See also later in this document
the discussion about the C<utf8::is_utf8()> function.

=back

=head2 Advanced Topics

=over 4

=item *

String Equivalence

The question of string equivalence turns somewhat complicated
in Unicode: what do you mean by "equal"?

(Is C<LATIN CAPITAL LETTER A WITH ACUTE> equal to
C<LATIN CAPITAL LETTER A>?)

The short answer is that by default Perl compares equivalence (C<eq>,
C<ne>) based only on code points of the characters. In the above
case, the answer is no (because 0x00C1 != 0x0041). But sometimes, any
CAPITAL LETTER As should be considered equal, or even As of any case.

The long answer is that you need to consider character normalization
and casing issues: see L<Unicode::Normalize>, Unicode Technical
Reports #15 and #21, I<Unicode Normalization Forms> and I<Case
Mappings>, http://www.unicode.org/unicode/reports/tr15/ and
http://www.unicode.org/unicode/reports/tr21/

As of Perl 5.8.0, the "Full" case-folding of I<Case
Mappings/SpecialCasing> is implemented.
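
For instance, comparing after normalization (a small sketch using
L<Unicode::Normalize>):

    use charnames ':full';
    use Unicode::Normalize;
    my $composed   = "\N{LATIN CAPITAL LETTER A WITH ACUTE}";
    my $decomposed = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";
    print "eq as-is\n"     if $composed eq $decomposed;             # not printed
    print "eq after NFD\n" if NFD($composed) eq NFD($decomposed);   # printed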

=item *

String Collation

People like to see their strings nicely sorted--or as Unicode
parlance goes, collated. But again, what do you mean by collate?

(Does C<LATIN CAPITAL LETTER A WITH ACUTE> come before or after
C<LATIN CAPITAL LETTER A WITH GRAVE>?)

The short answer is that by default, Perl compares strings (C<lt>,
C<le>, C<cmp>, C<ge>, C<gt>) based only on the code points of the
characters. In the above case, the answer is "after", since
C<0x00C1> > C<0x00C0>.

The long answer is that "it depends", and a good answer cannot be
given without knowing (at the very least) the language context.
See L<Unicode::Collate>, and I<Unicode Collation Algorithm>
http://www.unicode.org/unicode/reports/tr10/
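
A minimal sketch of using L<Unicode::Collate> with its default
collation table:

    use Unicode::Collate;
    my $collator = Unicode::Collate->new();
    # Plain code point order would give: A, B, \x{C1}
    # The collator sorts the accented A next to the plain A instead:
    my @sorted = $collator->sort("B", "\x{C1}", "A");   # A, \x{C1}, B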

=back

=head2 Miscellaneous

=over 4

=item *

Character Ranges and Classes

Character ranges in regular expression character classes (C</[a-z]/>)
and in the C<tr///> (also known as C<y///>) operator are not magically
Unicode-aware. What this means is that C<[A-Za-z]> will not magically
start to mean "all alphabetic letters" (not that it means that even for
8-bit characters); if that is what you want, you should be using
C</[[:alpha:]]/>.

For specifying character classes like that in regular expressions,
you can use the various Unicode properties--C<\pL>, or perhaps
C<\p{Alphabetic}>, in this particular case. You can use Unicode
code points as the end points of character ranges, but there is no
magic associated with specifying a certain range. For further
information--there are dozens of Unicode character classes--see
L<perlunicode>.
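
For example (a small sketch): an e-acute and a Greek alpha are both
letters, but neither is inside C<[A-Za-z]>:

    print "letters\n" if "\x{E9}\x{3B1}" =~ /^\pL+$/;        # matches
    print "ascii\n"   if "\x{E9}\x{3B1}" =~ /^[A-Za-z]+$/;   # does not match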

=item *

String-To-Number Conversions

Unicode does define several other decimal--and numeric--characters
besides the familiar 0 to 9, such as the Arabic and Indic digits.
Perl does not support string-to-number conversion for digits other
than ASCII 0 to 9 (and ASCII a to f for hexadecimal).
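
If you do need the numeric value of such a digit, one simple sketch is
to take its offset from the zero of its own digit block (here the
ARABIC-INDIC digits, U+0660 to U+0669):

    my $arabic_indic_one = "\x{0661}";              # ARABIC-INDIC DIGIT ONE
    my $value = ord($arabic_indic_one) - 0x0660;    # offset from DIGIT ZERO
    print $value, "\n";                             # 1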

=back

=head2 Questions With Answers

=over 4

=item *

Will My Old Scripts Break?

Very probably not. Unless you are generating Unicode characters
somehow, old behaviour should be preserved. About the only behaviour
that has changed and which could start generating Unicode is the old
behaviour of C<chr()> where supplying an argument greater than 255
produced a character modulo 255. C<chr(300)>, for example, was equal
to C<chr(45)> or "-" (in ASCII); now it is LATIN CAPITAL LETTER I WITH
BREVE.

=item *

How Do I Make My Scripts Work With Unicode?

Very little work should be needed since nothing changes until you
generate Unicode data. The most important thing is getting input as
Unicode; for that, see the earlier I/O discussion.

=item *

How Do I Know Whether My String Is In Unicode?

You shouldn't care. No, you really shouldn't. No, really. If you
have to care--beyond the cases described above--it means that we
didn't get the transparency of Unicode quite right.

Okay, if you insist:

    print utf8::is_utf8($string) ? 1 : 0, "\n";

But note that this doesn't mean that any of the characters in the
string are necessarily UTF-8 encoded, or that any of the characters have
code points greater than 0xFF (255) or even 0x80 (128), or that the
string has any characters at all. All that C<is_utf8()> does is
return the value of the internal "utf8ness" flag attached to the
C<$string>. If the flag is off, the bytes in the scalar are interpreted
as a single-byte encoding. If the flag is on, the bytes in the scalar
are interpreted as the (multi-byte, variable-length) UTF-8 encoded code
points of the characters. Bytes added to a UTF-8 encoded string are
automatically upgraded to UTF-8. If mixed non-UTF-8 and UTF-8 scalars
are merged (double-quoted interpolation, explicit concatenation, and
printf/sprintf parameter substitution), the result will be UTF-8 encoded
as if copies of the byte strings were upgraded to UTF-8: for example,

    $a = "ab\x80c";
    $b = "\x{100}";
    print "$a = $b\n";

the output string will be UTF-8-encoded C<ab\x80c = \x{100}\n>, but
C<$a> will stay byte-encoded.

Sometimes you might really need to know the byte length of a string
instead of the character length. For that use either the
C<Encode::encode_utf8()> function or the C<bytes> pragma and its only
defined function C<length()>:

    my $unicode = chr(0x100);
    print length($unicode), "\n";                        # will print 1
    require Encode;
    print length(Encode::encode_utf8($unicode)), "\n";   # will print 2
    use bytes;
    print length($unicode), "\n";                        # will also print 2
                                                         # (the 0xC4 0x80 of the UTF-8)

=item *

How Do I Detect Data That's Not Valid In a Particular Encoding?

Use the C<Encode> package to try converting it.
For example,

    use Encode 'decode_utf8';
    if (decode_utf8($string_of_bytes_that_I_think_is_utf8)) {
        # valid
    } else {
        # invalid
    }

For UTF-8 only, you can use:

    use warnings;
    @chars = unpack("U0U*", $string_of_bytes_that_I_think_is_utf8);

If invalid, a C<Malformed UTF-8 character (byte 0x##) in unpack>
warning is produced. The "U0" means "expect strictly UTF-8 encoded
Unicode". Without that, C<unpack("U*", ...)> would also accept
data like C<chr(0xFF)>, similarly to the C<pack> as we saw earlier.

=item *

How Do I Convert Binary Data Into a Particular Encoding, Or Vice Versa?

This probably isn't as useful as you might think.
Normally, you shouldn't need to.

In one sense, what you are asking doesn't make much sense: encodings
are for characters, and binary data are not "characters", so converting
"data" into some encoding isn't meaningful unless you know what
character set and encoding the binary data is in, in which case it's
not just binary data, now is it?

If you have a raw sequence of bytes that you know should be
interpreted via a particular encoding, you can use C<Encode>:

    use Encode 'from_to';
    from_to($data, "iso-8859-1", "utf-8"); # from latin-1 to utf-8

The call to C<from_to()> changes the bytes in C<$data>, but nothing
material about the nature of the string has changed as far as Perl is
concerned. Both before and after the call, the string C<$data>
contains just a bunch of 8-bit bytes. As far as Perl is concerned,
the encoding of the string remains as "system-native 8-bit bytes".

You might relate this to a fictional 'Translate' module:

    use Translate;
    my $phrase = "Yes";
    Translate::from_to($phrase, 'english', 'deutsch');
    ## phrase now contains "Ja"

The contents of the string changes, but not the nature of the string.
Perl doesn't know any more after the call than before that the
contents of the string indicates the affirmative.

Back to converting data. If you have (or want) data in your system's
native 8-bit encoding (e.g. Latin-1, EBCDIC, etc.), you can use
pack/unpack to convert to/from Unicode.

    $native_string  = pack("C*", unpack("U*", $Unicode_string));
    $Unicode_string = pack("U*", unpack("C*", $native_string));

If you have a sequence of bytes you B<know> is valid UTF-8,
but Perl doesn't know it yet, you can make Perl a believer, too:

    use Encode 'decode_utf8';
    $Unicode = decode_utf8($bytes);

You can convert well-formed UTF-8 to a sequence of bytes, but if
you just want to convert random binary data into UTF-8, you can't.
B<Any random collection of bytes isn't well-formed UTF-8>. You can
use C<unpack("C*", $string)> for the former, and you can create
well-formed Unicode data by C<pack("U*", 0xff, ...)>.

=item *

How Do I Display Unicode? How Do I Input Unicode?

See http://www.alanwood.net/unicode/ and
http://www.cl.cam.ac.uk/~mgk25/unicode.html

=item *

How Does Unicode Work With Traditional Locales?

In Perl, not very well. Avoid using locales through the C<locale>
pragma. Use only one or the other. But see L<perlrun> for the
description of the C<-C> switch and its environment counterpart,
C<$ENV{PERL_UNICODE}> to see how to enable various Unicode features,
for example by using locale settings.

=back

=head2 Hexadecimal Notation

The Unicode standard prefers using hexadecimal notation because
that more clearly shows the division of Unicode into blocks of 256 characters.
Hexadecimal is also simply shorter than decimal. You can use decimal
notation, too, but learning to use hexadecimal just makes life easier
with the Unicode standard. The C<U+HHHH> notation uses hexadecimal,
for example.

The C<0x> prefix means a hexadecimal number; the digits are 0-9 I<and>
a-f (or A-F, case doesn't matter). Each hexadecimal digit represents
four bits, or half a byte. C<print 0x..., "\n"> will show a
hexadecimal number in decimal, and C<printf "%x\n", $decimal> will
show a decimal number in hexadecimal. If you have just the
"hex digits" of a hexadecimal number, you can use the C<hex()> function.

    print 0x0009, "\n";    # 9
    print 0x000a, "\n";    # 10
    print 0x000f, "\n";    # 15
    print 0x0010, "\n";    # 16
    print 0x0011, "\n";    # 17
    print 0x0100, "\n";    # 256

    print 0x0041, "\n";    # 65

    printf "%x\n",  65;    # 41
    printf "%#x\n", 65;    # 0x41

    print hex("41"), "\n"; # 65

=head2 Further Resources

=over 4

=item *

Unicode Consortium

http://www.unicode.org/

=item *

Unicode FAQ

http://www.unicode.org/unicode/faq/

=item *

Unicode Glossary

http://www.unicode.org/glossary/

=item *

Unicode Useful Resources

http://www.unicode.org/unicode/onlinedat/resources.html

=item *

Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications

http://www.alanwood.net/unicode/

=item *

UTF-8 and Unicode FAQ for Unix/Linux

http://www.cl.cam.ac.uk/~mgk25/unicode.html

=item *

Legacy Character Sets

http://www.czyborra.com/
http://www.eki.ee/letter/

=item *

The Unicode support files live within the Perl installation in the
directory

    $Config{installprivlib}/unicore

in Perl 5.8.0 or newer, and

    $Config{installprivlib}/unicode

in the Perl 5.6 series. (The renaming to F<lib/unicore> was done to
avoid naming conflicts with lib/Unicode in case-insensitive filesystems.)
The main Unicode data file is F<UnicodeData.txt> (or F<Unicode.301> in
Perl 5.6.1.) You can find the C<$Config{installprivlib}> by

    perl "-V:installprivlib"

You can explore various information from the Unicode data files using
the C<Unicode::UCD> module.
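
For example, a minimal sketch using its C<charinfo()> function:

    use Unicode::UCD 'charinfo';
    my $info = charinfo(0x263a);
    print $info->{name}, "\n";       # WHITE SMILING FACE
    print $info->{category}, "\n";   # So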

=back

=head1 UNICODE IN OLDER PERLS

If you cannot upgrade your Perl to 5.8.0 or later, you can still
do some Unicode processing by using the modules C<Unicode::String>,
C<Unicode::Map8>, and C<Unicode::Map>, available from CPAN.
If you have the GNU recode installed, you can also use the
Perl front-end C<Convert::Recode> for character conversions.

The following are fast conversions from ISO 8859-1 (Latin-1) bytes
to UTF-8 bytes and back; the code works even with older Perl 5 versions.

    # ISO 8859-1 to UTF-8
    s/([\x80-\xFF])/chr(0xC0|ord($1)>>6).chr(0x80|ord($1)&0x3F)/eg;

    # UTF-8 to ISO 8859-1
    s/([\xC2\xC3])([\x80-\xBF])/chr(ord($1)<<6&0xC0|ord($2)&0x3F)/eg;

=head1 SEE ALSO

L<perlunicode>, L<Encode>, L<encoding>, L<open>, L<utf8>, L<bytes>,
L<perlretut>, L<perlrun>, L<Unicode::Collate>, L<Unicode::Normalize>,
L<Unicode::UCD>

=head1 ACKNOWLEDGMENTS

Thanks to the kind readers of the [email protected],
[email protected], [email protected], and [email protected]
mailing lists for their valuable feedback.

=head1 AUTHOR, COPYRIGHT, AND LICENSE

Copyright 2001-2002 Jarkko Hietaniemi E<lt>[email protected]E<gt>

This document may be distributed under the same terms as Perl itself.