Big O notation is a mathematical notation describing the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, it classifies algorithms by how they respond to changes in input size.
It is most commonly used to give an upper bound on an algorithm's worst-case time or space complexity, though the notation itself simply bounds growth and can also describe best- or average-case behavior.
Computer science uses three primary asymptotic notations to bound algorithmic growth: Big O ($O$) for an upper bound, Big Omega ($\Omega$) for a lower bound, and Big Theta ($\Theta$) for a tight bound. The remainder of this section focuses on Big O.
The following table lists common time complexities, ordered from most to least efficient.
| Notation | Name | Description | Example |
|---|---|---|---|
| $O(1)$ | Constant | Execution time remains the same regardless of input size. | Array index lookup |
| $O(\log n)$ | Logarithmic | Execution time increases by a constant amount when the input size doubles. | Binary search |
| $O(n)$ | Linear | Execution time scales in direct proportion to the input size. | Simple iteration over an array |
| $O(n \log n)$ | Linearithmic | Execution time scales by $n \log n$. Characteristic of efficient sorting algorithms. | Merge sort, Heap sort |
| $O(n^2)$ | Quadratic | Execution time scales with the square of the input size. | Bubble sort, nested loops |
| $O(2^n)$ | Exponential | Execution time doubles with each additional element in the input. | Recursive Fibonacci calculation |
| $O(n!)$ | Factorial | Execution time scales with the factorial of the input size. | Traveling salesman problem via brute force |
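The contrast between $O(n)$ and $O(\log n)$ from the table can be made concrete by counting comparisons. The sketch below (illustrative helper names, not from any particular library) searches the same sorted array with a linear scan and with binary search:

```python
def linear_search(arr, target):
    """O(n): examine each element in turn, counting comparisons."""
    steps = 0
    for i, value in enumerate(arr):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(arr, target):
    """O(log n): halve the search interval each step (arr must be sorted)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
_, linear_steps = linear_search(data, 999_999)   # scans all 1,000,000 elements
_, binary_steps = binary_search(data, 999_999)   # at most ~20 halvings (log2 10^6 ≈ 19.9)
```

Doubling the array size adds one more halving step for binary search but doubles the work for the linear scan, which is exactly the behavior the table's logarithmic and linear rows describe.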
Suppose $f(x)$ and $g(x)$ are two functions defined on some subset of the real numbers. We write:
$$ f(x) = O(g(x)) \text{ as } x \rightarrow \infty $$
if and only if there exist positive constants $M$ and $x_0$ such that:
$$ |f(x)| \le M |g(x)| \text{ for all } x \ge x_0 $$
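As a worked instance of this definition, take the example functions $f(x) = 3x^2 + 5x + 2$ and $g(x) = x^2$ (chosen here for illustration). For $x \ge 1$, each term satisfies $3x^2 \le 3x^2$, $5x \le 5x^2$, and $2 \le 2x^2$, so $|f(x)| \le 10|g(x)|$ and the witnesses $M = 10$, $x_0 = 1$ show $f(x) = O(x^2)$. A quick numerical sanity check:

```python
def f(x):
    return 3 * x**2 + 5 * x + 2  # example function

def g(x):
    return x**2

M, x0 = 10, 1  # witnesses: for x >= 1, 3x^2 + 5x + 2 <= 3x^2 + 5x^2 + 2x^2 = 10x^2

# Verify |f(x)| <= M|g(x)| for all x >= x0 over a finite sample range.
assert all(abs(f(x)) <= M * abs(g(x)) for x in range(x0, 10_000))
```

Note that the witnesses are not unique; any larger $M$ or $x_0$ also satisfies the definition, which is why Big O describes a whole class of bounding constants rather than a single one.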