22

Your First Shell Script

A shell script is a text file that the shell reads as a sequence of commands.

You have been typing commands one at a time into the shell. That is fine for quick tasks, but when you need to run the same sequence of commands regularly -- or when the logic gets complex -- you write a shell script: a text file containing commands that the shell executes in order.

A shell script is not a compiled program. It is not written in a special language. It is the same commands you have been typing interactively, saved in a file. The shell reads the file line by line and executes each command exactly as if you had typed it.

The Shebang Line

Open your text editor and create a file called hello.sh:

#!/bin/bash
echo "Hello from a shell script"

The first line -- #!/bin/bash -- is called the shebang (or hashbang). It tells the operating system which program should interpret this file. When you execute the script, the kernel reads these two bytes (#!) at the start, sees the path /bin/bash, and launches bash with the script file as input.

Key term: Shebang The character sequence #! at the very beginning of a script file, followed by the path to the interpreter program. When the kernel sees #! at the start of an executable file, it runs the specified interpreter and passes the script as its argument. The name comes from "hash" (#) and "bang" (!).

The shebang must be the very first line. No blank lines before it. No spaces before the #. If it is missing or wrong, the system may try to interpret the file with the wrong shell, or fail to run it at all.

Common shebangs:

  • #!/bin/bash -- use bash specifically
  • #!/bin/sh -- use the system's POSIX shell (often dash)
  • #!/usr/bin/env python3 -- use whatever python3 is in PATH

The #!/usr/bin/env form is useful for portability because it searches PATH rather than hardcoding the interpreter's location.
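A quick sketch of the env form in action (the echoed path will vary by system):

```shell
#!/usr/bin/env bash
# env searches PATH for bash instead of assuming a fixed location,
# so this script works even where bash is not at /bin/bash
bash_path=$(command -v bash)
echo "This script is interpreted by bash found at: $bash_path"
```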

Fig. 22.0 -- How the kernel processes the shebang

$ ./hello.sh

Kernel reads the first two bytes (#!) → reads the interpreter path /bin/bash from the rest of line 1 → executes /bin/bash ./hello.sh → bash reads the file and runs each line

bash ignores the #! line because # is a comment character in shell syntax.

When you run a script, the kernel inspects the first two bytes. If they are #!, it reads the rest of line 1 as the path to the interpreter. It then launches that interpreter with the script file as an argument. The interpreter reads the file from the beginning, but since # starts a comment in most shells, the shebang line is harmlessly skipped.

Making a Script Executable

Before you can run ./hello.sh, you need to give the file execute permission:

$ chmod +x hello.sh
$ ./hello.sh
Hello from a shell script

The chmod +x command sets the executable bit on the file. Without it, the kernel will refuse to run the file even though it contains valid commands. You can also run it explicitly through the interpreter:

$ bash hello.sh
Hello from a shell script

This does not require the execute bit because you are running bash (which is already executable) and passing the script as an argument. But the ./ form is cleaner and more conventional for scripts you intend to reuse.

Variables in Scripts

Shell variables work the same in scripts as they do interactively. Assign with = (no spaces around the equals sign), and reference with $:

#!/bin/bash
name="Cold Boot"
count=22
echo "This is article $count of the $name series"

You can capture the output of a command into a variable using command substitution:

#!/bin/bash
today=$(date +%Y-%m-%d)
file_count=$(ls /usr/bin | wc -l)
echo "Today is $today"
echo "There are $file_count programs in /usr/bin"

The $(...) syntax runs the command inside and replaces itself with the command's output. The older backtick syntax `command` does the same thing but is harder to read and cannot be nested without awkward escaping.

Key term: Command substitution The syntax $(command) runs the enclosed command and replaces itself with the command's standard output. This lets you capture the result of a command into a variable or embed it in a string. It is one of the most commonly used shell features.
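Because $(...) nests cleanly, you can feed one command's output straight into another. A small sketch using a fixed example path:

```shell
#!/bin/bash
# Nested command substitution: the inner $(...) runs first.
# dirname strips the last path component; basename keeps only the last one.
parent=$(basename "$(dirname "/usr/share/doc")")
echo "Parent directory name: $parent"    # share
```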

Quoting Rules

Quoting is a source of endless bugs in shell scripts. Here are the rules:

Double quotes ("...") preserve the string as a single token but allow variable expansion:

$ name="Cold Boot"
$ echo "Welcome to $name"
Welcome to Cold Boot

Single quotes ('...') preserve the string literally. No variable expansion, no special characters:

$ echo 'The price is $5.00'
The price is $5.00

With no quotes, the shell splits the value on whitespace and expands wildcards. This is almost never what you want:

$ files="one two three"
$ echo $files       # three separate arguments to echo
one two three
$ echo "$files"     # one argument containing spaces
one two three

The practical rule: always double-quote your variables unless you have a specific reason not to. "$variable" is safe. $variable is a bug waiting to happen.

Always double-quote variables in shell scripts: "$variable", not $variable. Unquoted variables are subject to word splitting and glob expansion, which causes subtle, hard-to-debug failures when values contain spaces or special characters.
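You can watch word splitting happen directly. This sketch uses a hypothetical filename containing a space and counts how many arguments the shell actually produces:

```shell
#!/bin/bash
# "meeting notes.txt" is a hypothetical filename with a space in it
file="meeting notes.txt"
set -- $file            # unquoted: word splitting yields two arguments
echo "unquoted: $# arguments"
set -- "$file"          # quoted: one argument, space preserved
echo "quoted: $# arguments"
```

The unquoted expansion reports 2 arguments, the quoted one reports 1. Any command that received the unquoted version would see two unrelated words instead of one filename.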

Conditionals: if Statements

Shell scripts can make decisions. The if statement tests a condition and runs different commands based on the result:

#!/bin/bash
if [ -f /etc/hostname ]; then
    echo "Hostname file exists"
    cat /etc/hostname
else
    echo "No hostname file found"
fi

The [ ... ] is actually a command: [ is another name for the test command, which is why the spaces around the brackets are required. It evaluates the expression inside and exits with status 0 (true) or 1 (false). The if statement checks that exit status.

Common test expressions:

  • [ -f file ] -- true if file exists and is a regular file
  • [ -d dir ] -- true if directory exists
  • [ -z "$var" ] -- true if variable is empty or unset
  • [ -n "$var" ] -- true if variable is non-empty
  • [ "$a" = "$b" ] -- true if strings are equal
  • [ "$a" != "$b" ] -- true if strings differ
  • [ "$x" -eq "$y" ] -- true if integers are equal
  • [ "$x" -gt "$y" ] -- true if x is greater than y
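Tests can be chained with && (and) and || (or). A small sketch combining several expressions from the list above (/tmp is just an example path):

```shell
#!/bin/bash
# Chaining test commands with && -- both must exit 0 for the branch to run
dir="/tmp"
name=""
if [ -d "$dir" ] && [ -z "$name" ]; then
    echo "$dir exists and name is empty"
fi
if [ "$((2 * 11))" -eq 22 ] && [ "ab" != "ba" ]; then
    echo "integer and string tests both passed"
fi
```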
Fig. 22.1 -- How if/then/else executes

[ -f /etc/hostname ]
  exit 0 (true)  → then: echo "File exists"; cat /etc/hostname
  exit 1 (false) → else: echo "Not found"
Both paths converge at fi, and execution continues after it.

The if statement runs the test command (inside the brackets). If the test exits with status 0, the "then" block runs. If it exits with any other status, the "else" block runs. Both paths converge at "fi".

Exit Codes

Every command returns an exit code when it finishes -- a number between 0 and 255. By convention:

  • 0 means success
  • Anything else means failure

You can check the most recent exit code with $?:

$ ls /tmp
(file listing appears)
$ echo $?
0
$ ls /nonexistent
ls: cannot access '/nonexistent': No such file or directory
$ echo $?
2

In your own scripts, you set the exit code with the exit command:

#!/bin/bash
if [ ! -f "$1" ]; then
    echo "Error: file $1 not found" >&2
    exit 1
fi
echo "Processing $1..."
exit 0

Notice >&2 on the error message -- that sends the text to stderr, which is the correct channel for error output. The $1 is a special variable that holds the first command-line argument passed to the script.

Key term: Exit code A number from 0 to 255 returned by every process when it terminates. Zero means success; any non-zero value means failure. The shell stores the most recent exit code in the special variable $?. Exit codes are how if statements, && chains, and || chains make decisions.
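The && and || chains mentioned above are the most compact way to use exit codes. A sketch of how they behave:

```shell
#!/bin/bash
# && runs the next command only if the previous one exited 0;
# || runs it only after a non-zero exit.
true && echo "true exited 0, so this prints"
false || echo "false exited non-zero, so this prints"
result=$([ 5 -gt 3 ] && echo yes || echo no)
echo "Is 5 greater than 3? $result"    # yes
```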

Loops

The for Loop

The for loop iterates over a list of items:

#!/bin/bash
for fruit in apple banana cherry; do
    echo "I like $fruit"
done

Output:

I like apple
I like banana
I like cherry

You can loop over files:

#!/bin/bash
for file in /etc/*.conf; do
    echo "Config file: $file"
done

Or over command output:

#!/bin/bash
for user in $(cut -d: -f1 /etc/passwd); do
    echo "User: $user"
done

The while Loop

The while loop runs as long as a condition is true:

#!/bin/bash
count=1
while [ "$count" -le 5 ]; do
    echo "Count: $count"
    count=$((count + 1))
done

The $((...)) syntax performs arithmetic. Without it, the shell treats everything as strings.
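The $((...)) form handles the usual integer operators. A quick sketch:

```shell
#!/bin/bash
# $((...)) evaluates integer arithmetic: + - * / %
x=7
echo $((x * 3 + 1))     # 22
echo $((22 / 5))        # integer division truncates: 4
echo $((22 % 5))        # remainder: 2
```

Note that the arithmetic is integer-only; the shell has no built-in floating point.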

A common pattern is reading a file line by line:

#!/bin/bash
while read -r line; do
    echo "Line: $line"
done < /etc/hostname

The < /etc/hostname at the end redirects the file into the while loop's stdin. The read -r command reads one line at a time into the variable line; the -r flag stops read from treating backslashes in the input as escape characters.
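read can also split each line into fields by setting IFS to the delimiter. A self-contained sketch using an inline here-document as sample input (not a real passwd file):

```shell
#!/bin/bash
# IFS=: makes read split each line on colons instead of whitespace
while IFS=: read -r user uid; do
    echo "$user has UID $uid"
done <<'EOF'
root:0
daemon:1
EOF
```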

Fig. 22.2 -- for loop vs. while loop execution flow

for loop: list [a, b, c] → next item? yes: set item, run body, back to "next item?" → no: done
(iterates over a fixed list)

while loop: count=1 → count -le 5 ? yes: run body (echo, count++), re-test → no: done
(repeats while the condition is true)

A for loop steps through a fixed list of items, running the body once per item. A while loop re-tests its condition before each iteration and stops when the condition becomes false.

Putting It All Together

Here is a complete, practical shell script that combines everything from this article and the previous ones in the series. It checks disk usage on a set of directories and warns if any exceed a threshold:

#!/bin/bash
# disk-check.sh -- warn about directories using too much space

THRESHOLD=80   # percent
DIRS="/home /var /tmp"

echo "Disk usage check -- $(date)"
echo "Threshold: ${THRESHOLD}%"
echo "---"

warnings=0

for dir in $DIRS; do
    if [ ! -d "$dir" ]; then
        echo "SKIP: $dir does not exist" >&2
        continue
    fi

    usage=$(df "$dir" | tail -1 | awk '{print $5}' | tr -d '%')

    if [ "$usage" -gt "$THRESHOLD" ]; then
        echo "WARNING: $dir is at ${usage}% (exceeds ${THRESHOLD}%)"
        warnings=$((warnings + 1))
    else
        echo "OK: $dir is at ${usage}%"
    fi
done

echo "---"
if [ "$warnings" -gt 0 ]; then
    echo "$warnings warning(s) found"
    exit 1
else
    echo "All directories within limits"
    exit 0
fi

This script uses:

  • A shebang (#!/bin/bash)
  • Variables (THRESHOLD, DIRS, warnings, usage)
  • Command substitution ($(date), $(df ... | awk ...))
  • A for loop
  • Conditionals with if/else
  • File tests ([ ! -d "$dir" ])
  • Integer comparison ([ "$usage" -gt "$THRESHOLD" ])
  • Stderr for errors (>&2)
  • Arithmetic ($((warnings + 1)))
  • Exit codes (0 for success, 1 for warnings)
  • A pipeline (df | tail | awk | tr)
Fig. 22.3 -- Data flow through the disk-check script

for dir in /home /var /tmp → directory exists? no: skip → df "$dir" | tail -1 | awk '{print $5}' | tr -d '%' extracts the usage percentage as a plain number → usage > threshold? WARNING or OK → after the loop: exit 1 (warnings) or exit 0 (all clear)

The script loops over directories, runs a pipeline to extract usage percentages, compares each against a threshold, and exits with an appropriate status code. Every concept from the last five articles appears in this one script.

Script Arguments

Scripts can accept arguments from the command line. They are available as special variables:

  • $0 -- the script's own name
  • $1, $2, $3, ... -- positional arguments
  • $# -- the number of arguments
  • $@ -- all arguments as separate words

#!/bin/bash
echo "Script: $0"
echo "First argument: $1"
echo "Second argument: $2"
echo "Total arguments: $#"

Running it:

$ ./args.sh hello world
Script: ./args.sh
First argument: hello
Second argument: world
Total arguments: 2
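The "$@" form matters most when arguments contain spaces: quoted, each argument stays one word. A sketch (the script name in the comment is just an example):

```shell
#!/bin/bash
# "$@" keeps each argument as one word, even when it contains spaces.
# Try: ./each-arg.sh "one two" three
for arg in "$@"; do
    echo "argument: $arg"
done
echo "total: $#"
```

Called with "one two" three, this prints two argument lines, not three: the quoted first argument survives as a single word.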

A well-behaved script checks that it received the right number of arguments:

#!/bin/bash
if [ $# -lt 1 ]; then
    echo "Usage: $0 <filename>" >&2
    exit 1
fi

Debugging Scripts

When a script does not work, add set -x near the top. This makes bash print every command before it runs, showing you exactly what is happening:

#!/bin/bash
set -x
name="world"
echo "Hello, $name"

Output:

+ name=world
+ echo 'Hello, world'
Hello, world

Each line prefixed with + is bash showing you the command after variable expansion but before execution. This is invaluable for finding quoting bugs and logic errors.

Another useful setting is set -e, which makes the script stop immediately when any command fails (returns a non-zero exit code). Combine it with set -u (treat unset variables as errors) and set -o pipefail, and these three settings catch most common scripting mistakes. They are usually enabled together in a single line:

#!/bin/bash
set -euo pipefail

The pipefail option makes a pipeline return the exit code of the rightmost command that failed, rather than the exit code of the final command regardless of earlier failures.
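You can see the difference directly. In this sketch, the pipeline's last command succeeds even though an earlier stage failed:

```shell
#!/bin/bash
# false fails, true succeeds: which exit code does the pipeline report?
false | true
echo "without pipefail: $?"   # 0 -- only the last command counts
set -o pipefail
false | true
echo "with pipefail: $?"      # 1 -- the failing stage is reported
```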

Start your scripts with `set -euo pipefail` until you have a reason not to. These settings catch unset variables, failed commands, and broken pipes -- the three most common sources of silent script failures.

What You Have Learned

A shell script is a text file of commands with a shebang line that tells the kernel which interpreter to use. Variables hold values, and command substitution captures program output. Conditionals use [ ... ] (the test command) and check exit codes. Loops iterate over lists (for) or repeat while a condition holds (while). Exit codes communicate success (0) or failure (non-zero) to the calling program. Script arguments arrive in $1, $2, and so on.


This is the final article in the Cold Boot series. You have traveled from the first pulse of electricity through voltage rails and reset vectors, past the BIOS and bootloader, into the kernel and init system, through process management and permissions, and now into the shell where you write your own commands.

The journey from power-on to a running shell script touches every layer of a computer. Electricity becomes bits. Bits become instructions. Instructions become firmware. Firmware finds a bootloader. The bootloader loads a kernel. The kernel starts init. Init launches services. A terminal opens. A shell starts its read-execute loop. And now you can write a script that orchestrates all of it.

Every command you type from here forward sits on top of the entire stack you have just learned. You are not a passive user anymore. You understand what happens underneath.
