Character (computing)
In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.
Examples of characters include letters, numerical digits, and common punctuation marks (such as '.' or '-'). The concept also includes control characters, which do not correspond to symbols in a particular natural language, but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.
Characters are typically combined into strings.
Character encoding
Main article: Character encoding
Computers and communication equipment represent characters using a character encoding that assigns each character to something (typically an integer quantity represented by a sequence of bits) that can be stored or transmitted through a network. Two examples of popular encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
Terminology
Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular physical appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.
With the advent and widespread acceptance of Unicode[1] and bit-agnostic encoding forms, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organisation, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things.
For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity, but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.
char
A char in the C programming language is a fixed-size one-byte entity, which at one time was large enough to store any character value from ASCII and similar encodings. Since a byte can hold only 256 different values, it is impossible to store an arbitrary character from Unicode and other modern character sets in a char. Instead, larger storage units such as wchar_t, or encodings that use more than one byte per character, such as UTF-8, are used. Unfortunately, the fact that a character was once stored in a single byte led to the two terms being used interchangeably in much documentation. This often makes the documentation confusing or misleading, has produced extremely inefficient implementations of UTF-8 in which offsets are replaced with repetitive counting of characters, and has led to bugs when different systems disagree on the count.
word character
A 'word character' has special meaning in some areas of computing: it typically means a letter A-Z (upper or lower case), a digit 0-9, or the underscore.[2][3]
References
- ^ Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Google Blog. http://googleblog.blogspot.com/2008/05/moving-to-unicode-51.html. Retrieved 2008-09-28.
- ^ http://www.regular-expressions.info/charclass.html
- ^ See also the [:word:] regular expression character class.
External links
- Characters: A Brief Introduction by The Linux Information Project (LINFO)
- ISO/IEC TR 15285:1998 summarizes the ISO/IEC's character model, focusing on terminology definitions and differentiating between characters and glyphs
Wikimedia Foundation. 2010.