A computer does not think. It does not understand. It reacts to electricity. Every program you have ever run, every file you have ever saved, every pixel on your screen right now -- all of it reduces to electrical signals moving through circuits at a substantial fraction of the speed of light.
Before we can talk about how a computer starts up, we need to understand the language it speaks. That language has exactly two words.
Two Voltages, One Idea
Inside a modern computer, the processor and memory chips run on a supply voltage -- typically somewhere between 0.8 and 1.2 volts for a CPU core. The exact number depends on the chip, but the principle is the same everywhere.
The chip treats voltage levels as one of two states. A voltage near the supply level counts as "high." A voltage near ground counts as "low." Anything in between is undefined -- the hardware is not designed to work there, and if a signal lingers in that middle zone, the circuit may behave unpredictably.
This is not an arbitrary design choice. Engineers use two states because they are easy to distinguish electrically, even in the presence of noise, temperature drift, and manufacturing variation. A system with ten voltage levels would be more information-dense per wire, but vastly harder to build reliably. Two states give you the widest possible margin for error.
The Bit
We call each of these two-state signals a bit. The word is a contraction of "binary digit." A bit is the smallest possible unit of information: it distinguishes between exactly two alternatives.
By convention, we label these alternatives 1 and 0. High voltage maps to 1; low voltage maps to 0. But the mapping itself is arbitrary, and some circuits reverse it. What matters is that every conductor in a digital circuit carries exactly one bit of information at any given moment.
A single bit is not very expressive. It can represent on/off, yes/no, true/false. That is all. To say anything more interesting, you need more bits.
Counting in Binary
When you write a number like 347 in everyday decimal notation, each digit position represents a power of ten. The 3 means "three hundreds" (3 x 10^2). The 4 means "four tens" (4 x 10^1). The 7 means "seven ones" (7 x 10^0).
Binary works identically, but with powers of two. Each position is worth twice as much as the one to its right.
Consider the binary number 1101. Reading from right to left:
- Position 0: 1 x 2^0 = 1
- Position 1: 0 x 2^1 = 0
- Position 2: 1 x 2^2 = 4
- Position 3: 1 x 2^3 = 8
Add them up: 8 + 4 + 0 + 1 = 13.
That is all binary is. The same positional number system you already know, with a smaller alphabet.
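The positional arithmetic can be spelled out in a few lines of Python; the function name here is ours, purely for illustration:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2^position, reading positions from the right."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("1101"))  # 13, i.e. 8 + 4 + 0 + 1
```

Python's built-in `int("1101", 2)` performs the same conversion, which makes a handy cross-check.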
The Byte
Eight bits grouped together form a byte. This is the fundamental unit of storage and data transfer in virtually every modern computer. One byte can represent 2^8 = 256 different values, numbered 0 through 255.
Why eight? Partly convention, partly practical engineering. Eight bits is enough to represent a single character in Western text (the ASCII standard uses 7 bits, with the eighth available for error checking or extensions). It divides neatly into halves (called nibbles -- 4 bits each), and powers of two scale cleanly: 2 bytes = 16 bits, 4 bytes = 32 bits, 8 bytes = 64 bits.
The sizes you see in everyday computing are all built from bytes:
| Name | Bytes | Bits |
|---|---|---|
| Byte | 1 | 8 |
| Kilobyte | 1,024 | 8,192 |
| Megabyte | 1,048,576 | 8,388,608 |
| Gigabyte | 1,073,741,824 | 8,589,934,592 |
Notice these are powers of 1,024, not 1,000. That factor of 1,024 (which is 2^10) comes from the binary nature of memory addressing: a chip with 10 address lines can reach 1,024 locations. Storage manufacturers count in decimal instead -- partly because SI prefixes officially mean powers of 1,000, and partly because it makes the number on the box bigger. That is why a "500 GB" hard drive shows up as roughly 465 GB in your operating system: the manufacturer counted 500 billion bytes; the OS divides by powers of 1,024. (The IEC prefixes KiB, MiB, and GiB were introduced to make the binary versions unambiguous, though everyday usage still blurs them.)
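The 500 GB discrepancy is plain arithmetic, sketched here in Python:

```python
# A drive marketed as "500 GB" holds 500 billion bytes.
bytes_on_drive = 500 * 10**9

# An OS that counts in powers of 1,024 divides by 2^30 per "GB".
gib = bytes_on_drive / 1024**3
print(f"{gib:.1f}")  # 465.7 -- same bytes, bigger divisor
```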
Hexadecimal: A Shorthand for Humans
Binary numbers get long fast. The number 255 in binary is 11111111 -- eight characters to represent a fairly small value. For numbers in the millions, binary notation becomes unreadable.
Programmers use hexadecimal (base 16) as a compact way to write binary values. Hex uses sixteen symbols: 0-9 and A-F. Each hex digit represents exactly four bits -- one nibble.
This mapping is precise and mechanical:
| Hex | Binary | Decimal |
|---|---|---|
| 0 | 0000 | 0 |
| 1 | 0001 | 1 |
| 2 | 0010 | 2 |
| 3 | 0011 | 3 |
| 4 | 0100 | 4 |
| 5 | 0101 | 5 |
| 6 | 0110 | 6 |
| 7 | 0111 | 7 |
| 8 | 1000 | 8 |
| 9 | 1001 | 9 |
| A | 1010 | 10 |
| B | 1011 | 11 |
| C | 1100 | 12 |
| D | 1101 | 13 |
| E | 1110 | 14 |
| F | 1111 | 15 |
To convert hex to binary, replace each hex digit with its four-bit equivalent. 0xFF becomes 1111 1111. 0x4A becomes 0100 1010. No arithmetic required -- just substitution.
Hex numbers are usually prefixed with 0x to distinguish them from decimal. When you see 0xDEADBEEF in a debugger, that is a 32-bit value: 1101 1110 1010 1101 1011 1110 1110 1111.
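The digit-by-digit substitution is mechanical enough to fit in a short Python helper; `hex_to_binary` is our own name for it:

```python
def hex_to_binary(hex_str: str) -> str:
    """Replace each hex digit with its four-bit nibble -- no arithmetic."""
    return " ".join(f"{int(digit, 16):04b}" for digit in hex_str)

print(hex_to_binary("FF"))  # 1111 1111
print(hex_to_binary("4A"))  # 0100 1010
```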
Logic Gates: Where Meaning Begins
Voltage levels become useful when you start combining them. A logic gate is a circuit that takes one or more input bits and produces an output bit according to a fixed rule.
The three fundamental gates are:
AND: The output is 1 only if both inputs are 1. Think of two switches in series -- both must be closed for current to flow.
OR: The output is 1 if either input (or both) is 1. Think of two switches in parallel -- either one lets current through.
NOT: The output is the opposite of the input. One input, one output. A 1 becomes 0, a 0 becomes 1.
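The three rules are small enough to model directly. Here is a sketch in Python, treating each gate as a function on the bits 0 and 1 (the function names are ours):

```python
def AND(a, b):
    return a & b   # 1 only when both inputs are 1

def OR(a, b):
    return a | b   # 1 when either input is 1

def NOT(a):
    return 1 - a   # flips the single input bit

# Print the full truth table for the two-input gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b))
```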
Every computation your processor performs -- addition, comparison, memory access, encryption, video decoding -- decomposes into combinations of these three operations. A modern CPU contains billions of transistors, but each one acts as a switch inside a logic gate. The complexity comes from composition, not from any single component.
From Gates to Arithmetic
You can build an adder from AND, OR, and NOT gates. To add two single-bit numbers:
- The sum bit is the XOR of the two inputs (XOR is "exclusive or" -- true when exactly one input is 1).
- The carry bit is the AND of the two inputs.
XOR itself is built from AND, OR, and NOT: A XOR B = (A OR B) AND NOT (A AND B).
Extend each single-bit adder to accept a carry input as well (this three-input version is called a full adder), chain eight of them together, feeding each carry output into the next stage's carry input, and you have an 8-bit adder. It adds two bytes and produces a byte-sized result plus a carry flag. This is the same principle used in every ALU (arithmetic logic unit) ever built.
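The whole chain can be sketched in Python using nothing but AND, OR, and NOT as primitives; every function name here is ours, for illustration only:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # A XOR B = (A OR B) AND NOT (A AND B)
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    """One stage: a sum bit and a carry-out, from three input bits."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_bytes(x, y):
    """Ripple-carry: chain eight full adders, least significant bit first."""
    carry = 0
    result = 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry  # byte-sized sum plus the carry flag

print(add_bytes(200, 100))  # (44, 1): 300 overflows a byte, carry set
```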
How Memory Stores a Bit
A logic gate computes, but it does not remember. Once the input changes, the old output is gone. To store information, you need a circuit that can hold its state.
The simplest storage element is a bistable loop: two NOT gates connected in a ring. The output of the first feeds the input of the second, and the output of the second feeds back to the input of the first. This feedback holds either a 0 or a 1 indefinitely, as long as power is supplied.
Add control gates that decide when the stored bit may change, and you have a latch; arrange for the update to happen only on a clock edge, and you have a flip-flop -- a one-bit memory cell. Group eight flip-flops together and you have a register that stores one byte. Stack millions of such cells into a grid with row and column addressing, and you have RAM. (Static RAM works exactly this way; dynamic RAM swaps the feedback loop for a tiny capacitor, but the addressing idea is the same.)
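A behavioral sketch of the gated latch in Python -- it models what the feedback loop does, not the individual gates, and the class and method names are ours:

```python
class GatedLatch:
    """Holds one bit. The bit can change only while 'enable' is 1."""
    def __init__(self):
        self.q = 0  # the stored bit

    def update(self, data, enable):
        if enable:
            self.q = data  # gate open: the latch follows the input
        return self.q      # gate closed: the old value persists

latch = GatedLatch()
latch.update(1, enable=1)  # write a 1
latch.update(0, enable=0)  # input changed, but the gate is closed
print(latch.q)  # still 1 -- the loop remembers
```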
This will matter enormously when we talk about what happens at power-on. The processor wakes up with empty RAM. Every byte reads as garbage. The machine must find its instructions from somewhere that does not forget -- and that somewhere is the subject of the next few articles.
Everything Is a Number
Text, images, sound, video, executable programs, network packets, encryption keys -- inside the machine, all of it is bytes. There is no difference in the hardware between a byte that represents the letter "A" (0x41 in ASCII), a byte that represents the color of a pixel, and a byte that is part of a machine instruction.
The meaning of a byte depends entirely on context. The value 0x41 is the letter "A" if a text editor reads it, the number 65 if a calculator reads it, a shade of red if a graphics program reads it, or a processor instruction (INC ECX on x86) if the CPU executes it. Same pattern of voltages. Different interpretation.
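Python can show the same byte wearing its different hats:

```python
value = 0x41  # one byte: the bit pattern 0100 0001

print(value)           # 65 -- read as a number
print(chr(value))      # A  -- read as an ASCII character
print(f"{value:08b}")  # 01000001 -- the raw bits themselves
# Fed to a 32-bit x86 CPU as code, the same byte decodes as INC ECX.
```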
This is one of the most important ideas in computing: data and instructions live in the same memory, encoded the same way. The computer does not know whether a given byte is a number, a letter, or an instruction. It does whatever the program counter tells it to do with whatever bytes it finds. If the program counter points at your JPEG, the CPU will cheerfully try to execute your vacation photo. It will not end well, but nothing in the hardware prevents it.
Why This Matters for Cold Boot
When you press the power button, every piece of RAM in the system starts in an undefined state. The processor has no program loaded. There is no operating system. There are no files.
What exists is a set of circuits that can move voltages around according to fixed rules. Billions of transistors implementing AND, OR, and NOT. A bus that can carry bytes from one place to another. A small piece of non-volatile memory -- a chip that does not forget when the power goes off -- containing just enough instructions to get started.
The entire journey from power button to running operating system is the process of loading bytes into RAM, one layer at a time, each layer complex enough to load the next. Every step in that chain operates on the principles covered in this article: voltage levels, binary encoding, logic gates, and the fundamental interchangeability of data and instructions.
That is what Cold Boot is about. We start here, at the bottom, where electricity becomes information. The next article picks up the story at the exact moment you press the power button.
Next: The Moment Power Arrives