Character Encoding
External
Internal
Overview
Character encoding is the process through which characters within a text document are represented by numeric codes. Depending on the character encoding used, the same text will end up with different binary representations. Common character encoding standards are ASCII, Unicode and UCS.
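The difference is easy to see from an interpreter. The following is a minimal sketch, assuming Python 3, that encodes the same string with several common encodings and prints the resulting byte sequences:

 # The same text, encoded with different character encodings, produces
 # different byte sequences.
 text = "café"
 print(text.encode("ascii", errors="replace"))  # b'caf?'   - 'é' has no ASCII code
 print(text.encode("latin-1"))                  # b'caf\xe9' - 4 bytes
 print(text.encode("utf-8"))                    # b'caf\xc3\xa9' - 5 bytes
 print(text.encode("utf-16-le"))                # b'c\x00a\x00f\x00\xe9\x00' - 8 bytes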
Concepts
Character Set
Character Code
Unicode is a character code.
Code Point
Code Space
Character Encoding Standards
ASCII
ASCII stands for American Standard Code for Information Interchange. It is a seven-bit encoding scheme that encodes letters, numerals, symbols, and device control codes as fixed-length integer codes.
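As an illustration, the following Python 3 sketch prints the ASCII code of a few characters, both as a decimal integer and as the seven-bit binary value:

 # Every ASCII character maps to a fixed-length, seven-bit integer code
 # in the range 0-127.
 for ch in ["A", "a", "0", " ", "\n"]:   # letters, a numeral, a symbol, a control code
     code = ord(ch)
     assert 0 <= code <= 127             # fits in seven bits
     print(repr(ch), code, format(code, "07b"))
 # 'A' 65 1000001
 # 'a' 97 1100001
 # '\n' 10 0001010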
Unicode
Unicode supports a much larger character set than ASCII: its code space contains 1,114,112 code points (U+0000 through U+10FFFF), compared with ASCII's 128 codes.
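For example, in Python 3, where str objects are sequences of Unicode code points:

 # Only 'A' (U+0041) falls in the ASCII range; the other characters
 # have code points above 127 and need Unicode.
 for ch in ["A", "é", "中", "😀"]:
     print(ch, "U+%04X" % ord(ch))
 # A U+0041
 # é U+00E9
 # 中 U+4E2D
 # 😀 U+1F600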
Unicode Transformation Format (UTF)
The binary representation of a text represented in Unicode depends on the "transformation format" used. UTF stands for "Unicode Transformation Format", and the number after the dash in the format name is the size, in bits, of a single code unit. A character maps to one or more code units, so the number of bits used per character is not fixed: UTF-8, for example, uses between one and four 8-bit code units per character.
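The distinction between code unit size and character size is visible when the same characters are encoded with both formats, as in this Python 3 sketch:

 # The number in the format name is the code unit size in bits, not the
 # number of bits per character; a character may need several code units.
 for ch in ["A", "€", "𝄞"]:              # U+0041, U+20AC, U+1D11E
     utf8  = ch.encode("utf-8")
     utf16 = ch.encode("utf-16-be")
     print(ch, len(utf8), "UTF-8 bytes,", len(utf16), "UTF-16 bytes")
 # 'A' needs one 8-bit code unit in UTF-8; '€' needs three; '𝄞' (outside
 # the Basic Multilingual Plane) needs four UTF-8 bytes and two 16-bit
 # UTF-16 code units (a surrogate pair, four bytes).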
UTF-8
UTF-16
UTF-16 supports the full Unicode character set and can therefore represent both Western and Eastern letters and symbols.
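As a quick Python 3 illustration, both Latin and CJK characters in the Basic Multilingual Plane fit into a single 16-bit UTF-16 code unit:

 # Western and Eastern characters encoded as UTF-16 (big-endian, no BOM).
 for ch in ["A", "é", "中", "日"]:
     print(ch, ch.encode("utf-16-be").hex())
 # A  0041
 # é  00e9
 # 中 4e2d
 # 日 65e5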