Big O Notation
Big O notation is a way of describing how fast an algorithm is: it tells us how the running time grows as the size of the input grows. This knowledge lets us design better algorithms.
Time complexity is the computational complexity that describes the amount of time it takes to run an algorithm, where n is the size of the input to the function.
There are two main types of complexity:
- Space Complexity: How much memory is used by the algorithm.
- Time Complexity: How much time is used by the algorithm.
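To see the difference, both functions below take O(n) time but need different amounts of memory. This is a minimal sketch with hypothetical names (squares_list, squares_sum) chosen only for illustration:

def squares_list(n):
    # O(n) space: builds and stores a list of n results
    return [i * i for i in range(n)]

def squares_sum(n):
    # O(1) extra space: keeps only a running total
    total = 0
    for i in range(n):
        total += i * i
    return total

print(squares_list(5))  # [0, 1, 4, 9, 16]
print(squares_sum(5))   # 30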
Variations of Big O Notation
- Big Omega (Ω) Notation: the best-case scenario (a lower bound on running time).
- Big Theta (Θ) Notation: the average-case scenario (a tight bound on running time).
- Big O (O) Notation: the worst-case scenario (an upper bound on running time).
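The same algorithm can hit its best and worst case on different inputs. Here is a rough sketch using a simple list search (the find function is a hypothetical example, not part of the text above):

def find(values, target):
    # Best case: target is the first element -- one comparison (Ω(1)).
    # Worst case: target is last or missing -- n comparisons (O(n)).
    for index, value in enumerate(values):
        if value == target:
            return index
    return -1

numbers = [3, 1, 4, 1, 5]
print(find(numbers, 3))  # best case: found on the first comparison
print(find(numbers, 9))  # worst case: the whole list is scanned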
Constant Time: O(1)
Constant time complexity is the best time complexity. It means that the time taken to run the algorithm is constant, regardless of the size of the input.
Constant-time operations do not scale with the size of the input.
x = [1, 2, 3, 4, 5]
print(x[3]) # O(1)
Logarithmic Time: O(log n)
Logarithmic time complexity is better than linear time complexity. It means that the time taken to run the algorithm increases logarithmically with the size of the input.
For example:
\[ \log_3(9) = 2 \]
The logarithm grows far more slowly than its argument: multiplying the input by the base only adds one more step.
A common example of logarithmic time complexity is binary search.
def binary_search(alist, item):
    first = 0
    last = len(alist) - 1
    found = False
    while first <= last and not found:
        midpoint = (first + last) // 2  # index of the middle element
        if alist[midpoint] == item:
            found = True
        elif item < alist[midpoint]:
            last = midpoint - 1   # the item must be in the left half
        else:
            first = midpoint + 1  # the item must be in the right half
    return found
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
print(binary_search(arr, target)) # O(log n)
In English, this is:
- Go to the middle of the list
- Check to see if that element is the answer
- If it’s not, check whether that element is greater than the item we want to find
- If it is, ignore the right-hand side of the list (all the numbers higher than the midpoint); if it is less, ignore the left-hand side instead
- Start over again, by finding the midpoint of the remaining half
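One way to see the logarithmic growth is to count iterations as the input doubles. The sketch below uses a hypothetical binary_search_steps helper (written only for this illustration) and searches for a missing value to force the worst case; each doubling of the list adds just one more iteration:

def binary_search_steps(alist, item):
    # Same logic as binary_search above, but counts loop iterations.
    first, last, steps = 0, len(alist) - 1, 0
    while first <= last:
        steps += 1
        midpoint = (first + last) // 2
        if alist[midpoint] == item:
            return steps
        elif item < alist[midpoint]:
            last = midpoint - 1
        else:
            first = midpoint + 1
    return steps

for n in (8, 16, 32, 64):
    data = list(range(n))
    print(n, binary_search_steps(data, -1))  # prints: 8 3, 16 4, 32 5, 64 6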
Linear Time: O(n)
Linear time complexity is one of the most common time complexities. It means that the time taken to run the algorithm grows in direct proportion to the size of the input.
def linear_search(arr, target):
    # check each element in turn until the target is found
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1  # target is not in the list
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
print(linear_search(arr, target)) # O(n)
Polynomial Time: O(n^c)
Polynomial time complexity is worse than linear time complexity. It means that the time taken to run the algorithm increases polynomially with the size of the input.
If one loop through a list is O(n), two nested loops are O(n^2). The outer loop goes over the list once, and for each item, the inner loop goes over the entire list again, resulting in n^2 operations.
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in a:
    for x in a:
        print("x")  # runs len(a) * len(a) = n^2 times
A large time complexity is often a sign that the algorithm can be sped up.
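As a sketch of that idea (the has_duplicates functions below are hypothetical examples, not from the text above), a quadratic pairwise check can often be replaced with a linear pass that trades a little memory for a lot of time:

def has_duplicates_quadratic(values):
    # O(n^2): compare every pair of elements
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def has_duplicates_linear(values):
    # O(n): remember what we have already seen in a set
    seen = set()
    for value in values:
        if value in seen:
            return True
        seen.add(value)
    return False

print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
print(has_duplicates_linear([1, 2, 3, 2]))     # True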
Exponential Time: O(2^n)
Exponential time complexity is the worst of the complexities covered here. It means that the time taken to run the algorithm increases exponentially with the size of the input: adding one element to the input roughly doubles the work.
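A classic illustration (a sketch, not taken from the text above) is the naive recursive Fibonacci function: every call for n > 1 makes two further calls, so the number of calls is bounded by roughly O(2^n).

def fibonacci(n):
    # Naive recursion: each call branches into two more calls,
    # so the work grows exponentially with n.
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55 -- small inputs finish quickly
# fibonacci(40) already takes a noticeable amount of time, because the
# number of recursive calls grows exponentially with n.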