Big O Notation

Big O notation is a mathematical notation describing the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, it classifies algorithms by how they respond to changes in input size.

Because it expresses an upper bound on growth, it is most often used to describe the worst-case time complexity or space complexity of an algorithm.
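To make "how an algorithm responds to changes in input size" concrete, the following sketch (a hypothetical illustration, not from the source) counts the comparisons a linear scan performs at two input sizes; doubling the input doubles the worst-case step count, the hallmark of $O(n)$ growth:

```python
def linear_search_steps(items, target):
    """Linear search that also reports how many comparisons it made."""
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

# Searching for an absent element forces the worst case: every item is checked.
_, steps_small = linear_search_steps(list(range(1000)), -1)
_, steps_large = linear_search_steps(list(range(2000)), -1)
# steps_large is exactly twice steps_small: step count scales linearly with input size.
```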

Asymptotic Notations

Computer science uses three primary asymptotic notations to describe bounds on algorithmic growth:

- Big O ($O$): an asymptotic upper bound on a function's growth.
- Big Omega ($\Omega$): an asymptotic lower bound.
- Big Theta ($\Theta$): a tight bound, holding when a function is bounded both above and below by the same growth rate.

Common Complexity Classes

The table below lists common time complexities, ordered from most efficient to least efficient.

| Notation | Name | Description | Example |
| --- | --- | --- | --- |
| $O(1)$ | Constant | Execution time remains the same regardless of input size. | Array index lookup |
| $O(\log n)$ | Logarithmic | Execution time increases by a constant amount when the input size doubles. | Binary search |
| $O(n)$ | Linear | Execution time scales in direct proportion to the input size. | Simple iteration over an array |
| $O(n \log n)$ | Linearithmic | Execution time scales by $n \log n$; characteristic of efficient sorting algorithms. | Merge sort, heap sort |
| $O(n^2)$ | Quadratic | Execution time scales with the square of the input size. | Bubble sort, nested loops |
| $O(2^n)$ | Exponential | Execution time doubles with each addition to the input data set. | Recursive Fibonacci calculation |
| $O(n!)$ | Factorial | Execution time scales with the factorial of the input size. | Traveling salesman problem via brute force |
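The gap between these classes can be sketched with the binary search entry from the table. This hypothetical example (my own, not from the source) counts the iterations binary search needs on a million sorted elements; because each iteration halves the remaining interval, it finishes in roughly $\log_2 n \approx 20$ steps, where a linear scan could take up to a million:

```python
def binary_search(items, target):
    """O(log n) search over a sorted list; returns (index, iteration count)."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # halve the remaining interval each pass
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
index, steps = binary_search(data, 999_999)
# steps is bounded by ceil(log2(n + 1)) = 20, even for the worst-placed target.
```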

Mathematical Definition

Suppose $f(x)$ and $g(x)$ are two functions defined on some subset of the real numbers. We write:

$$ f(x) = O(g(x)) \text{ as } x \rightarrow \infty $$

if and only if there exist positive constants $M$ and $x_0$ such that:

$$ |f(x)| \le M |g(x)| \text{ for all } x \ge x_0 $$
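As a worked instance of this definition (an illustrative example, not from the source), take $f(x) = 3x^2 + 2x + 1$ and $g(x) = x^2$. For all $x \ge 1$, each lower-order term is at most $x^2$, so:

$$ |3x^2 + 2x + 1| \le 3x^2 + 2x^2 + x^2 = 6x^2 $$

Hence $f(x) = O(x^2)$, with the constants $M = 6$ and $x_0 = 1$ witnessing the definition.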