These are the basic Unicode object types used for the Unicode implementation in Python:
Note that UCS2 and UCS4 Python builds are not binary compatible. Please keep this in mind when writing extensions or interfaces.
The following APIs are really C macros and can be used to do fast checks and to access internal read-only data of Unicode objects:
Unicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.
These APIs can be used for fast direct character conversions:
To create Unicode objects and access their basic sequence properties, use these APIs:
Create a Unicode object from the Py_UNICODE buffer u of the given size. u may be NULL which causes the contents to be undefined. It is the user’s responsibility to fill in the needed data. The buffer is copied into the new object. If the buffer is not NULL, the return value might be a shared object. Therefore, modification of the resulting Unicode object is only allowed when u is NULL.
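For illustration, a minimal sketch of the two-step pattern in which u is NULL and the caller fills in the data afterwards (the entry is assumed here to describe PyUnicode_FromUnicode, with the PyUnicode_AS_UNICODE macro used to reach the internal buffer):

```c
/* Allocate an uninitialized Unicode object of length 3 and fill it in.
   Modification is allowed here because u was passed as NULL. */
PyObject *obj = PyUnicode_FromUnicode(NULL, 3);
if (obj == NULL)
    return NULL;                       /* allocation failed, exception is set */
Py_UNICODE *buf = PyUnicode_AS_UNICODE(obj);
buf[0] = 'a';
buf[1] = 'b';
buf[2] = 'c';
```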
Take a C printf()-style format string and a variable number of arguments, calculate the size of the resulting Python unicode string and return a string with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format string. The following format characters are allowed:
| Format Characters | Type | Comment |
|---|---|---|
| %% | n/a | The literal % character. |
| %c | int | A single character, represented as a C int. |
| %d | int | Exactly equivalent to printf("%d"). |
| %u | unsigned int | Exactly equivalent to printf("%u"). |
| %ld | long | Exactly equivalent to printf("%ld"). |
| %lu | unsigned long | Exactly equivalent to printf("%lu"). |
| %zd | Py_ssize_t | Exactly equivalent to printf("%zd"). |
| %zu | size_t | Exactly equivalent to printf("%zu"). |
| %i | int | Exactly equivalent to printf("%i"). |
| %x | int | Exactly equivalent to printf("%x"). |
| %s | char* | A null-terminated C character array. |
| %p | void* | The hex representation of a C pointer. Mostly equivalent to printf("%p") except that it is guaranteed to start with the literal 0x regardless of what the platform's printf yields. |
| %A | PyObject* | The result of calling ascii(). |
| %U | PyObject* | A Unicode object. |
| %V | PyObject*, char* | A Unicode object (which may be NULL) and a null-terminated C character array as a second parameter (which will be used if the first parameter is NULL). |
| %S | PyObject* | The result of calling PyObject_Str(). |
| %R | PyObject* | The result of calling PyObject_Repr(). |
An unrecognized format character causes all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded.
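For example, a short sketch combining several of the format characters above (the entry is assumed here to describe PyUnicode_FromFormat):

```c
PyObject *name = PyUnicode_FromString("spam");
if (name == NULL)
    return NULL;
/* Builds the string: "item spam: 3 of 7 (42%)" */
PyObject *msg = PyUnicode_FromFormat("item %U: %d of %d (%d%%)", name, 3, 7, 42);
Py_DECREF(name);
if (msg == NULL)
    return NULL;
```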
Coerce an encoded object obj to a Unicode object and return a reference with incremented refcount.
bytes, bytearray and other char buffer compatible objects are decoded according to the given encoding and using the error handling defined by errors. Both can be NULL to have the interface use the default values (see the next section for details).
All other objects, including Unicode objects, cause a TypeError to be set.
The API returns NULL if there was an error. The caller is responsible for decref’ing the returned objects.
Shortcut for PyUnicode_FromEncodedObject(obj, NULL, "strict") which is used throughout the interpreter whenever coercion to Unicode is needed.
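A brief sketch of explicit coercion, assuming the entry above describes PyUnicode_FromEncodedObject:

```c
PyObject *raw = PyBytes_FromString("caf\xc3\xa9");   /* UTF-8 encoded "café" */
if (raw == NULL)
    return NULL;
PyObject *text = PyUnicode_FromEncodedObject(raw, "utf-8", "strict");
Py_DECREF(raw);
if (text == NULL)
    return NULL;          /* decoding failed; an exception is set */
/* ... use text ... */
Py_DECREF(text);
```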
If the platform supports wchar_t and provides a header file wchar.h, Python can interface directly to this type using the following functions. Support is optimized if Python’s own Py_UNICODE type is identical to the system’s wchar_t.
To encode and decode file names and other environment strings, Py_FileSystemDefaultEncoding should be used as the encoding, and "surrogateescape" should be used as the error handler (PEP 383). To encode file names during argument parsing, the "O&" converter should be used, passing PyUnicode_FSConverter() as the conversion function:
Convert obj into result, using Py_FileSystemDefaultEncoding and the "surrogateescape" error handler. result must be a PyObject*; it is set to a bytes object, which must be released when it is no longer used.
New in version 3.1.
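A minimal sketch of the argument-parsing pattern described above; the extension function name open_path is a hypothetical placeholder:

```c
/* Hypothetical extension function that accepts a file system path. */
static PyObject *
open_path(PyObject *self, PyObject *args)
{
    PyObject *path_bytes = NULL;
    /* The "O&" converter calls PyUnicode_FSConverter(arg, &path_bytes);
       on success, path_bytes holds a new reference to a bytes object. */
    if (!PyArg_ParseTuple(args, "O&", PyUnicode_FSConverter, &path_bytes))
        return NULL;
    const char *path = PyBytes_AsString(path_bytes);
    /* ... use path ... */
    Py_DECREF(path_bytes);
    Py_RETURN_NONE;
}
```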
Decode a null-terminated string using Py_FileSystemDefaultEncoding and the "surrogateescape" error handler.
If Py_FileSystemDefaultEncoding is not set, fall back to UTF-8.
Use PyUnicode_DecodeFSDefaultAndSize() if you know the string length.
wchar_t support for platforms which support it:
Create a Unicode object from the wchar_t buffer w of the given size. Passing -1 as the size indicates that the function must itself compute the length, using wcslen. Return NULL on failure.
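A brief sketch, assuming the entry above describes PyUnicode_FromWideChar:

```c
wchar_t greeting[] = L"hello";
/* Passing -1 lets the function determine the length with wcslen(). */
PyObject *u = PyUnicode_FromWideChar(greeting, -1);
if (u == NULL)
    return NULL;
```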
Python provides a set of built-in codecs which are written in C for speed. All of these codecs are directly usable via the following functions.
Many of the following APIs take two arguments, encoding and errors, which have the same semantics as the parameters of the same name in the built-in str() constructor.
Setting encoding to NULL causes the default encoding to be used, which is ASCII. The file system calls should use PyUnicode_FSConverter() for encoding file names. This uses the variable Py_FileSystemDefaultEncoding internally. This variable should be treated as read-only: on some systems, it will be a pointer to a static string, on others, it will change at run-time (such as when the application invokes setlocale).
Error handling is set by errors which may also be set to NULL meaning to use the default handling defined for the codec. Default error handling for all built-in codecs is “strict” (ValueError is raised).
The codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.
These are the generic codec APIs:
Create a Unicode object by decoding size bytes of the encoded string s. encoding and errors have the same meaning as the parameters of the same name in the str() built-in function. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size and return a Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode encode() method. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.
Encode a Unicode object and return the result as Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode encode() method. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.
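As an illustration, a round-trip sketch; the generic decode and object-encode entries above are assumed here to correspond to PyUnicode_Decode and PyUnicode_AsEncodedString:

```c
const char raw[] = "gr\xfc\xdf";                 /* "grüß" encoded as Latin-1 */
PyObject *text = PyUnicode_Decode(raw, 4, "latin-1", "strict");
if (text == NULL)
    return NULL;
/* Re-encode the same text as UTF-8, yielding a bytes object. */
PyObject *utf8 = PyUnicode_AsEncodedString(text, "utf-8", "strict");
Py_DECREF(text);
if (utf8 == NULL)
    return NULL;
```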
These are the UTF-8 codec APIs:
Create a Unicode object by decoding size bytes of the UTF-8 encoded string s. Return NULL if an exception was raised by the codec.
If consumed is NULL, behave like PyUnicode_DecodeUTF8(). If consumed is not NULL, trailing incomplete UTF-8 byte sequences will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.
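A sketch of this stateful behaviour (the variant is assumed here to be PyUnicode_DecodeUTF8Stateful) when a chunk ends in the middle of a multi-byte sequence:

```c
/* The final byte starts a two-byte UTF-8 sequence that is not yet complete. */
const char chunk[] = "abc\xc3";
Py_ssize_t consumed = 0;
PyObject *u = PyUnicode_DecodeUTF8Stateful(chunk, 4, "strict", &consumed);
/* On success u is "abc" and consumed is 3; the trailing byte should be
   retried together with the next chunk of input. */
```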
Encode the Py_UNICODE buffer of the given size using UTF-8 and return a Python bytes object. Return NULL if an exception was raised by the codec.
These are the UTF-32 codec APIs:
Decode length bytes from a UTF-32 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. It defaults to “strict”.
If byteorder is non-NULL, the decoder starts decoding using the given byte order:
*byteorder == -1: little endian
*byteorder == 0: native order
*byteorder == 1: big endian
If *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output.
After completion, *byteorder is set to the current byte order at the end of input data.
In a narrow build codepoints outside the BMP will be decoded as surrogate pairs.
If byteorder is NULL, the codec starts in native order mode.
Return NULL if an exception was raised by the codec.
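A sketch, assuming the entry above describes PyUnicode_DecodeUTF32, decoding little-endian input that starts with a BOM:

```c
/* Little-endian BOM followed by "hi" encoded as UTF-32. */
const char data[] = "\xff\xfe\x00\x00"
                    "h\x00\x00\x00"
                    "i\x00\x00\x00";
int byteorder = 0;                /* start in native order, honour any BOM */
PyObject *u = PyUnicode_DecodeUTF32(data, 12, "strict", &byteorder);
/* On success u is "hi" and byteorder has been set to -1 (little endian). */
```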
Return a Python bytes object holding the UTF-32 encoded value of the Unicode data in s. Output is written according to the following byte order:
byteorder == -1: little endian
byteorder == 0: native byte order (writes a BOM mark)
byteorder == 1: big endian
If byteorder is 0, the output string will always start with the Unicode BOM mark (U+FEFF). In the other two modes, no BOM mark is prepended.
If Py_UNICODE_WIDE is not defined, surrogate pairs will be output as a single codepoint.
Return NULL if an exception was raised by the codec.
These are the UTF-16 codec APIs:
Decode length bytes from a UTF-16 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. It defaults to “strict”.
If byteorder is non-NULL, the decoder starts decoding using the given byte order:
*byteorder == -1: little endian
*byteorder == 0: native order
*byteorder == 1: big endian
If *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character).
After completion, *byteorder is set to the current byte order at the end of input data.
If byteorder is NULL, the codec starts in native order mode.
Return NULL if an exception was raised by the codec.
If consumed is NULL, behave like PyUnicode_DecodeUTF16(). If consumed is not NULL, PyUnicode_DecodeUTF16Stateful() will not treat trailing incomplete UTF-16 byte sequences (such as an odd number of bytes or a split surrogate pair) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.
Return a Python bytes object holding the UTF-16 encoded value of the Unicode data in s. Output is written according to the following byte order:
byteorder == -1: little endian
byteorder == 0: native byte order (writes a BOM mark)
byteorder == 1: big endian
If byteorder is 0, the output string will always start with the Unicode BOM mark (U+FEFF). In the other two modes, no BOM mark is prepended.
If Py_UNICODE_WIDE is defined, a single Py_UNICODE value may get represented as a surrogate pair. If it is not defined, each Py_UNICODE value is interpreted as a UCS-2 character.
Return NULL if an exception was raised by the codec.
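A sketch of the encoder described above (assumed here to be PyUnicode_EncodeUTF16, with the PyUnicode_AS_UNICODE and PyUnicode_GET_SIZE macros supplying the buffer and its length):

```c
PyObject *text = PyUnicode_FromString("hi");
if (text == NULL)
    return NULL;
/* A byteorder of 0 selects native order and prepends a BOM. */
PyObject *encoded = PyUnicode_EncodeUTF16(PyUnicode_AS_UNICODE(text),
                                          PyUnicode_GET_SIZE(text),
                                          "strict",
                                          0);
Py_DECREF(text);
if (encoded == NULL)
    return NULL;
```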
These are the UTF-7 codec APIs:
Encode the Py_UNICODE buffer of the given size using UTF-7 and return a Python bytes object. Return NULL if an exception was raised by the codec.
If base64SetO is nonzero, “Set O” (punctuation that has no otherwise special meaning) will be encoded in base-64. If base64WhiteSpace is nonzero, whitespace will be encoded in base-64. Both are set to zero for the Python “utf-7” codec.
These are the “Unicode Escape” codec APIs:
Create a Unicode object by decoding size bytes of the Unicode-Escape encoded string s. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size using Unicode-Escape and return a Python bytes object. Return NULL if an exception was raised by the codec.
These are the “Raw Unicode Escape” codec APIs:
Create a Unicode object by decoding size bytes of the Raw-Unicode-Escape encoded string s. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size using Raw-Unicode-Escape and return a Python bytes object. Return NULL if an exception was raised by the codec.
These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by the codecs during encoding.
Create a Unicode object by decoding size bytes of the Latin-1 encoded string s. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size using Latin-1 and return a Python bytes object. Return NULL if an exception was raised by the codec.
These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.
Create a Unicode object by decoding size bytes of the ASCII encoded string s. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size using ASCII and return a Python bytes object. Return NULL if an exception was raised by the codec.
These are the mapping codec APIs:
This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to obtain most of the standard codecs included in the encodings package). The codec uses mapping to encode and decode characters.
Decoding mappings must map single string characters to single Unicode characters, integers (which are then interpreted as Unicode ordinals) or None (meaning “undefined mapping” and causing an error).
Encoding mappings must map single Unicode characters to single string characters, integers (which are then interpreted as Latin-1 ordinals) or None (meaning “undefined mapping” and causing an error).
The mapping objects provided need only support the __getitem__ mapping interface.
If a character lookup fails with a LookupError, the character is copied as-is, meaning that its ordinal value will be interpreted as a Unicode or Latin-1 ordinal, respectively. Because of this, mappings need only contain those entries which map characters to different code points.
Create a Unicode object by decoding size bytes of the encoded string s using the given mapping object. Return NULL if an exception was raised by the codec. If mapping is NULL, Latin-1 decoding will be done. Otherwise, mapping can be a dictionary that maps bytes to Unicode, or a Unicode string that is treated as a lookup table. Byte values greater than the length of the string and U+FFFE "characters" are treated as "undefined mapping".
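A sketch, assuming the entry above describes PyUnicode_DecodeCharmap, using a Unicode string as the lookup table:

```c
/* A 16-character Unicode string used as a lookup table: byte value n
   decodes to table[n]; byte values >= 16 would be "undefined mapping". */
PyObject *table = PyUnicode_FromString("abcdefghijklmnop");
if (table == NULL)
    return NULL;
const char data[] = "\x02\x00\x01";
PyObject *u = PyUnicode_DecodeCharmap(data, 3, table, "strict");   /* "cab" */
Py_DECREF(table);
```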
Encode the Py_UNICODE buffer of the given size using the given mapping object and return a Python bytes object. Return NULL if an exception was raised by the codec.
Encode a Unicode object using the given mapping object and return the result as a Python bytes object. Error handling is "strict". Return NULL if an exception was raised by the codec.
The following codec API is special in that it maps Unicode to Unicode.
Translate a Py_UNICODE buffer of the given length by applying a character mapping table to it and return the resulting Unicode object. Return NULL when an exception was raised by the codec.
The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of the character).
Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause a LookupError) are left untouched and are copied as-is.
These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.
Create a Unicode object by decoding size bytes of the MBCS encoded string s. Return NULL if an exception was raised by the codec.
Encode the Py_UNICODE buffer of the given size using MBCS and return a Python bytes object. Return NULL if an exception was raised by the codec.
The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate.
They all return NULL or -1 if an exception occurs.
Concatenate two strings, giving a new Unicode string.
Split a string giving a list of Unicode strings. If sep is NULL, splitting will be done at all whitespace substrings. Otherwise, splits occur at the given separator. At most maxsplit splits will be done. If negative, no limit is set. Separators are not included in the resulting list.
Split a Unicode string at line breaks, returning a list of Unicode strings. CRLF is considered to be one line break. If keepend is 0, the line break characters are not included in the resulting strings.
Translate a string by applying a character mapping table to it and return the resulting Unicode object.
The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of the character).
Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause a LookupError) are left untouched and are copied as-is.
errors has the usual meaning for codecs. It may be NULL which indicates to use the default error handling.
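A sketch of the translation interface, assuming the entry above describes PyUnicode_Translate:

```c
PyObject *text = PyUnicode_FromString("cat");
PyObject *table = PyDict_New();
PyObject *key = PyLong_FromLong('c');        /* ordinal of 'c' */
PyObject *value = PyLong_FromLong('b');      /* ordinal of 'b' */
PyDict_SetItem(table, key, value);           /* does not steal references */
Py_DECREF(key);
Py_DECREF(value);
PyObject *result = PyUnicode_Translate(text, table, NULL);   /* "bat" */
Py_DECREF(table);
Py_DECREF(text);
```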
Join a sequence of strings using the given separator and return the resulting Unicode string.
Replace at most maxcount occurrences of substr in str with replstr and return the resulting Unicode object. maxcount == -1 means replace all occurrences.
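A sketch, assuming the entry above describes PyUnicode_Replace:

```c
PyObject *s = PyUnicode_FromString("one two two");
PyObject *old = PyUnicode_FromString("two");
PyObject *repl = PyUnicode_FromString("three");
/* A maxcount of -1 replaces every occurrence. */
PyObject *result = PyUnicode_Replace(s, old, repl, -1);   /* "one three three" */
Py_DECREF(s);
Py_DECREF(old);
Py_DECREF(repl);
```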
Rich compare two Unicode strings and return one of the following:
NULL in case an exception was raised
Py_True or Py_False for successful comparisons
Py_NotImplemented in case the type combination is unknown
Note that Py_EQ and Py_NE comparisons can cause a UnicodeWarning in case the conversion of the arguments to Unicode fails with a UnicodeDecodeError.
Possible values for op are Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, and Py_LE.
Return a new string object from format and args; this is analogous to format % args. The args argument must be a tuple.
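A sketch, assuming the entry above describes PyUnicode_Format:

```c
PyObject *fmt = PyUnicode_FromString("%s is %d");
PyObject *args = Py_BuildValue("(si)", "answer", 42);   /* the args tuple */
PyObject *result = PyUnicode_Format(fmt, args);         /* "answer is 42" */
Py_DECREF(fmt);
Py_DECREF(args);
```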
Check whether element is contained in container and return true or false accordingly.
element has to coerce to a one-element Unicode string. -1 is returned if there was an error.