== Bits, Bytes, and Words ==
A bit is a binary digit, so a bit is a zero or a one. Bits are implemented in computer hardware using switches: if the switch is closed (on) the bit is one, and if the switch is open (off) the bit is zero. A bit is limited to representing two values, since binary is base 2.
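Since a bit can only be 0 or 1, larger values are built from groups of bits. As a minimal sketch in C (the value 0x41 here is just for illustration), you can inspect the individual bits of a byte with shifts and masks:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    unsigned char value = 0x41;  /* 0100 0001 in binary */

    /* Walk the 8 bits from most significant to least significant. */
    for (int i = 7; i >= 0; i--) {
        int bit = (value >> i) & 1;  /* shift the bit down to position 0 and mask it */
        printf("%d", bit);
    }
    printf("\n");  /* prints: 01000001 */
    return 0;
}
</syntaxhighlight>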
Since the English alphabet contains more than two letters, a letter cannot be represented by a single bit. A byte is a sequence of bits; since the mid-1960s a byte has been 8 bits in length. 01000001 is an example of a byte. Since there are 8 bits in a byte there are 2<sup>8</sup> different possible sequences for one byte, ranging from 00000000 to 11111111. This means that a byte can be used to represent any type of value with no more than 2<sup>8</sup> = 256 possible values. Since the number of things you can enter on a computer keyboard is smaller than 256 (including all keystroke pairs, like shift or control plus another key), each keystroke can be represented by a code that fits within a byte.[1]
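To make that concrete, here is a small C sketch showing that one byte is enough to hold an ASCII character; the character 'A' is stored as the bit pattern 01000001, which is 65 in decimal:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    unsigned char letter = 'A';  /* one byte: 0100 0001 */

    /* The same byte can be viewed as a character or as a number. */
    printf("character: %c\n", letter);  /* A  */
    printf("decimal:   %d\n", letter);  /* 65 */

    /* An unsigned byte can hold at most 2^8 = 256 distinct values. */
    printf("possible values: %d\n", 1 << 8);  /* 256 */
    return 0;
}
</syntaxhighlight>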
Note: Unicode was introduced to handle multiple languages; its first 128 code points are the same as ASCII.
Side note: ASCII was based on telegraph code, and started out as a 7-bit system.
You will often hear that all values are stored in hexadecimal format, but in reality everything is stored as binary bits; computers simply convert the stored binary to hexadecimal when displaying the data (and that will remain true until quantum computers are standard).
Note: [https://wikipedia.org/wiki/Hexadecimal Hexadecimal] is just a base 16 number system, [https://wikipedia.org/wiki/Decimal decimal] is base 10, and [https://wikipedia.org/wiki/Binary_number binary] is base 2.
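As a quick illustration (a minimal C sketch, not specific to any particular tool), the same stored value can be displayed in any of these bases:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    unsigned int value = 255;  /* stored in memory as the bits 1111 1111 */

    /* The stored value never changes; only the display format does. */
    printf("decimal:     %u\n", value);    /* 255  */
    printf("hexadecimal: 0x%X\n", value);  /* 0xFF */
    /* (C has no standard binary format specifier; displaying the raw
       bits takes a loop like the one shown earlier.) */
    return 0;
}
</syntaxhighlight>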
So bytes are the base data units, and we can store any ASCII character in a byte. You'll very often come across size names like ''WORD'' and ''DWORD''. "In computing, a word is the natural unit of data used by a particular processor design"[https://en.wikipedia.org/wiki/Word_(computer_architecture)], and that is the definition assembly used initially as well. When computers used 8 bit processors a ''WORD'' was 1 byte, and when they used 16 bit processors a ''WORD'' was 2 bytes. However, computers started becoming really popular around the time of 32 bit processors, and for maximum compatibility assemblers stopped using that definition and just stuck with a ''WORD'' being 2 bytes. So even though the natural unit for a 32 bit processor is 4 bytes, and 8 bytes for a 64 bit processor, a ''WORD'' is always 2 bytes and a ''DWORD'' (double word) is always 4 bytes.
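These fixed sizes map directly onto C's fixed-width integer types, as this sketch shows (''QWORD'', the 8-byte quad word, is an addition here for completeness; it is not discussed above):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Fixed-width types that match the assembler size names. */
    uint8_t  byteVal  = 0;  /* BYTE:  1 byte  */
    uint16_t wordVal  = 0;  /* WORD:  2 bytes, regardless of processor width */
    uint32_t dwordVal = 0;  /* DWORD: 4 bytes (double word) */
    uint64_t qwordVal = 0;  /* QWORD: 8 bytes (quad word)   */

    printf("BYTE:  %zu byte\n",  sizeof(byteVal));
    printf("WORD:  %zu bytes\n", sizeof(wordVal));
    printf("DWORD: %zu bytes\n", sizeof(dwordVal));
    printf("QWORD: %zu bytes\n", sizeof(qwordVal));
    return 0;
}
</syntaxhighlight>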