Fundamentals · 6 min read · Last updated: January 15, 2024

Introduction to Binary & Bits

If you strip away the screens, the mouse, the keyboard, and the sleek operating systems, every computer is ultimately a machine that switches electricity on and off.

This concept—On vs. Off—is the physical reality. In mathematics, we represent this as 1 and 0. This is Binary.

The Bit

The "Bit" (Binary Digit) is the smallest unit of data in computing. It can only be one of two values: 0 or 1.

  • 1 Bit = 2 possibilities (0, 1)
  • 2 Bits = 4 possibilities (00, 01, 10, 11)
  • 3 Bits = 8 possibilities
  • 8 Bits = 256 possibilities

This exponential growth is why computers are so powerful. Each bit we add doubles the number of possibilities, so chaining just 32 bits together lets us represent over four billion different numbers.
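The doubling pattern in the list above is just powers of two, which is easy to verify with a few lines of Python (a small illustrative sketch, not part of the original article):

```python
# Each additional bit doubles the number of representable values: 2**n
for n in [1, 2, 3, 8, 32]:
    print(f"{n} bits -> {2**n:,} possibilities")

# 32 bits already covers more than four billion distinct values
assert 2**32 == 4_294_967_296
```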

The Byte

A Byte is simply a group of 8 bits. It is the standard "unit" of storage for almost all modern computers.

Why 8? Historically, different computers used different grouping sizes (6-bit, 7-bit, 9-bit). However, 8 bits (which allows for 256 unique values) became the standard because it was just big enough to store a single letter of the alphabet (including punctuation and uppercase/lowercase) plus some control codes.
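You can see "one letter fits in one byte" directly in Python. The sketch below uses the ASCII encoding as an example; `ord` gives a character's numeric code, and encoding shows it occupies exactly one byte:

```python
# A basic Latin letter's code fits comfortably in one byte (0-255)
letter = "A"
code = ord(letter)           # the character's numeric value
print(code, bin(code))       # 65 0b1000001

# Encoding to ASCII shows the letter takes exactly one byte of storage
raw = letter.encode("ascii")
print(len(raw), raw)         # 1 b'A'
assert 0 <= code <= 255
```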

Terminology Reference

Name            Size           Range / Approximate Size
Bit             1 bit          0 or 1
Nibble          4 bits         0 to 15
Byte            8 bits         0 to 255
Kilobyte (KB)   1,024 bytes    ~1 thousand bytes
Megabyte (MB)   1,024 KB       ~1 million bytes
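The sizes in the table compose by repeated multiplication by 1,024 (the binary convention the table uses; note that drive manufacturers often use powers of 1,000 instead). A quick check:

```python
# Binary (1,024-based) storage units, as used in the table above
KB = 1024          # bytes in a kilobyte
MB = 1024 * KB     # bytes in a megabyte

print(KB)          # 1024
print(MB)          # 1048576 -- "about a million" bytes
```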

Why Binary?

Humans use the Decimal system (Base-10), likely because we have 10 fingers.
Computers use Binary (Base-2) because it is cheap and reliable to build hardware that only needs to distinguish two states: "Is there voltage?" (1) or "Is there not?" (0).

Trying to build a computer that could distinguish between 10 different voltage levels (0.1v, 0.2v, 0.3v...) would be incredibly expensive and prone to errors caused by electrical interference.
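Although the hardware works in Base-2, the quantities are the same ones we write in decimal; converting between the two is purely mechanical. Python's built-ins make the correspondence visible (illustrative only):

```python
# The same quantity written in decimal and in binary
n = 42
print(bin(n))                  # 0b101010

# Parsing the binary digits back recovers the decimal value
assert int("101010", 2) == 42
```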

Hexadecimal: The Developer's Shorthand

You will often see computer data represented like this: #FF00A2 or 0x4F. This is Hexadecimal (Base-16).

Hexadecimal is not how the computer stores data; it's how humans read binary. Because a Byte is 8 bits, writing it out in binary is tedious:
11111111

In Hexadecimal, we can write that same byte as just two characters:
FF

This makes reading raw data files, memory dumps, and color codes much easier for engineers.
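The binary/hex relationship above can be demonstrated in a few lines of Python; the color value reuses the article's #FF00A2 example:

```python
# One byte: tedious in binary, compact in hex
byte = 0b11111111
print(hex(byte))             # 0xff
print(format(byte, "02X"))   # FF

# A hex color like #FF00A2 is just three bytes: red, green, blue
r, g, b = 0xFF, 0x00, 0xA2
print(r, g, b)               # 255 0 162
```

Each hex digit corresponds to exactly one nibble (4 bits), which is why two hex characters always describe one byte.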