Time and space complexity help you understand how your solution scales so you can make better decisions while coding.
What Does Time Complexity Mean?
Time complexity tells you how the amount of work increases when the input size increases. It is not about actual time in seconds; it is about how many operations your code performs. If the input doubles, how much does your work increase?
Same work → very efficient
Double work → manageable
4x or more → can become slow
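These growth rates can be made concrete by counting operations as the input doubles. Below is a minimal sketch (the function names are illustrative) comparing a linear and a quadratic routine:

```python
# Count operations for a linear and a quadratic routine
# as the input size doubles.
def linear_ops(n):
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_ops(n):
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

for n in (4, 8, 16):
    print(n, linear_ops(n), quadratic_ops(n))
```

Doubling n doubles the linear work but quadruples the quadratic work, which is exactly the "4x or more" case above.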
Common Time Complexities
| Complexity | Name | Meaning |
|---|---|---|
| O(1) | Constant Time | Work does not grow |
| O(log n) | Logarithmic Time | Work grows very slowly |
| O(n) | Linear Time | Work grows proportionally |
| O(n log n) | Linearithmic Time | Slightly more than linear |
| O(n²) | Quadratic Time | Work grows very fast |
| O(2ⁿ) | Exponential Time | Work grows extremely fast |
O(1) - Constant Time
This means the operation takes the same time, no matter how large the input is. Below is a Python program to access an element from an array using its index:
# Python program to show constant time
def get_element(arr, index):
    return arr[index]

arr = [10, 20, 30, 40, 50]
print(get_element(arr, 2))
Output:
30
Explanation:
This function directly accesses the element at a given index. The operation arr[index] always takes exactly one step. There is no loop, no repeated work, and no dependency on input size. This is why the number of operations remains constant and the complexity is O(1).
O(log n) - Logarithmic Time
This happens when the problem size is reduced again and again, usually by half. Below is a Python program to count how many steps it takes to reduce a number to 1 by dividing it by 2.
# Python program to show logarithmic time
def count_steps(n):
    steps = 0
    while n > 1:
        n = n // 2
        steps += 1
    return steps

print(count_steps(16))
Output:
4
Explanation:
In this code, the value of n is reduced by half in every iteration. Instead of decreasing one by one, it shrinks much faster. For example, 16 becomes 8, then 4, then 2, then 1. Even though the starting number is large, the number of steps required is very small. Repeatedly halving the input leads to logarithmic growth.
O(n) - Linear Time
Work increases directly with the input size. Below is a Python program that prints all elements of an array:
# Python program to show linear time
def print_elements(arr):
    for num in arr:
        print(num)

print_elements([1, 2, 3])
Output:
1
2
3
Explanation:
The loop runs once for each element in the array. If the array has 3 elements, the loop runs 3 times. If the array has 100 elements, it runs 100 times. This shows that the number of operations increases directly with the input size.
O(n log n) - Linearithmic Time
This is common in efficient sorting algorithms. Below is a Python program to sort an array:
# Python program to show linearithmic time
def sort_array(arr):
    return sorted(arr)

print(sort_array([3, 1, 2]))
Output:
[1, 2, 3]
Explanation:
Sorting algorithms like merge sort or Timsort (used in Python) divide the array into smaller parts and then combine them in a sorted way. The division happens in a logarithmic manner, and merging requires linear work. Because both processes happen together, the total complexity becomes n multiplied by log n. This is why efficient sorting algorithms are O(n log n) and not O(n²).
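The divide-and-merge idea can be sketched directly. Below is a minimal merge sort for illustration (Python's built-in sort is actually Timsort, a more sophisticated variant):

```python
# Minimal merge sort: halve the array (log n levels),
# then merge each pair of halves in linear time.
def merge_sort(arr):
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide
    right = merge_sort(arr[mid:])   # divide
    # Merge: pick the smaller front element each time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([3, 1, 2]))  # [1, 2, 3]
```

Each of the log n levels of splitting does a total of n units of merging work, which is where n × log n comes from.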
O(n²) - Quadratic Time
Work increases very fast due to nested loops. Below is a Python program to print all possible pairs of elements:
# Python program to show quadratic time
def print_pairs(arr):
    for i in arr:
        for j in arr:
            print(i, j)

print_pairs([1, 2])
Output:
1 1
1 2
2 1
2 2
Explanation:
Here, for every element in the outer loop, the inner loop runs completely. This means if there are n elements, the inner loop runs n times for each of those n elements. So the total number of operations becomes n multiplied by n. As input grows, the work increases very quickly. For example, 100 elements lead to 10,000 operations. That is why the complexity is O(n²).
O(2ⁿ) - Exponential Time
This happens in recursive problems with repeated work. Below is a Python program to find the nth Fibonacci number using recursion:
# Python program to show exponential time
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

print(fib(5))
Output:
5
Explanation:
This function calls itself multiple times for each value of n. For example, to calculate fib(5), it calls fib(4) and fib(3). Then fib(4) again calls fib(3) and fib(2), and this repetition continues. Many values are recomputed again and again. Because of this repeated branching, the number of function calls grows very rapidly. With each increase in n, the work roughly doubles. That is why the complexity is exponential, O(2ⁿ), which becomes very slow for large inputs.
Trick #1: Count Repetitions
Focus on how many times work happens. Below is a Python program to print numbers from 0 to n-1:
# Python program to print numbers from
# 0 to n-1
def count(n):
    for i in range(n):
        print(i)

count(3)
Output:
0
1
2
Explanation:
The loop runs exactly n times, and each iteration performs one print operation. There are no nested loops or extra work inside the loop. Since the number of operations grows directly with n, the complexity is O(n).
Trick #2: Ignore Constants
Below is a Python program that runs two loops of size n:
# Python program to show ignoring
# constants
def example(n):
    for i in range(n):
        print(i)
    for j in range(n):
        print(j)

example(2)
Output:
0
1
0
1
Explanation:
The first loop runs n times and the second loop also runs n times. So total operations are 2n. However, in complexity analysis, constant factors are ignored because they do not significantly affect growth for large inputs. Whether it is n or 2n, both grow at the same rate. That is why the final complexity is simplified to O(n).
Trick #3: Loop Patterns
Below is a Python program that keeps dividing a number until it becomes zero:
# Python program to show loop patterns
def halve(n):
    while n > 0:
        print(n)
        n = n // 2

halve(10)
Output:
10
5
2
1
Explanation:
The value of n is reduced by half in each step. This means the loop does not run n times but only a few times, depending on how many times you can divide n by 2. Even for very large values of n, the number of iterations remains small. This is the key property of logarithmic complexity, so the time complexity is O(log n).
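This slow growth is easy to verify by counting iterations for a much larger input; a quick sketch:

```python
# Count how many halvings it takes to bring n down to zero.
def halving_steps(n):
    steps = 0
    while n > 0:
        n = n // 2
        steps += 1
    return steps

print(halving_steps(10))         # 4 iterations
print(halving_steps(1_000_000))  # only 20 iterations
```

Growing the input a hundred-thousand-fold adds only 16 iterations: that is the signature of O(log n).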
How to Quickly Identify Complexity?
| Step | What to Check? | Complexity |
|---|---|---|
| 1 | Single loop | O(n) |
| 2 | Nested loops | O(n²) |
| 3 | Halving input | O(log n) |
| 4 | Independent loops | O(n) |
| 5 | Heavy work inside loop | Multiply |
| 6 | Recursion | Count calls |
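Step 5 of the table, multiplying when there is heavy work inside a loop, can be illustrated with a loop that performs a halving (O(log n)) operation for each of n elements, giving O(n log n) overall. A sketch:

```python
# Outer loop runs n times; the inner halving runs about log n
# times per iteration, so total operations are roughly n * log n.
def loop_with_halving(n):
    ops = 0
    for _ in range(n):
        m = n
        while m > 1:
            m = m // 2
            ops += 1
    return ops

print(loop_with_halving(16))  # 16 outer iterations * 4 halvings = 64
```

The same multiply rule applies to any loop body: a loop of n iterations each doing O(n) work is O(n²), and so on.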
What Does Space Complexity Mean?
Space complexity tells you how much extra memory your program uses apart from the input.
O(1) - Constant Space
In this, memory usage stays fixed. Below is a Python program to find sum of array of elements:
# Python program to show constant space
def find_sum(arr):
    total = 0
    for num in arr:
        total += num
    return total

print(find_sum([1, 2, 3]))
Output:
6
Explanation:
This code uses only one extra variable called total. No matter how large the input array is, the memory usage does not increase. The loop processes elements one by one without storing additional data. That is why the space complexity remains constant, O(1).
O(n) - Linear Space
In this space complexity, memory grows with input. Below is a Python program to create a new array with doubled values:
# Python program to show linear space
def double_array(arr):
    result = []
    for num in arr:
        result.append(num * 2)
    return result

print(double_array([1, 2, 3]))
Output:
[2, 4, 6]
Explanation:
Here, a new array called 'result' is created, and each element from the input is added to it after modification. If the input has n elements, the result array will also have n elements. This means memory usage increases with input size. That is why the space complexity is O(n).
O(log n) - Logarithmic Space
In this, memory grows slowly (often in the recursion stack). Below is a Python program for a recursive function that halves input:
# Python program to show logarithmic space
def recursive_log(n):
    if n <= 1:
        return
    recursive_log(n // 2)

recursive_log(16)
Output:
(no visible output)
Explanation:
Each recursive call uses memory on the call stack. Since input is halved each time, the number of calls grows logarithmically.
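The depth of the call stack can be made visible by threading a counter through the recursion; a sketch (the `depth` parameter is added here for illustration):

```python
# Measure the maximum recursion depth: each call adds one
# stack frame, and the input is halved each time.
def recursive_log(n, depth=0):
    if n <= 1:
        return depth
    return recursive_log(n // 2, depth + 1)

print(recursive_log(16))    # 4 frames on the stack
print(recursive_log(1024))  # 10 frames on the stack
```

Even for n = 1024 only 10 frames are live at once, so the extra memory is O(log n).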
O(n²) - Quadratic Space
Here memory grows very fast. Below is a Python program to create a 2D matrix:
# Python program to show quadratic space
def create_matrix(n):
    matrix = []
    for i in range(n):
        row = []
        for j in range(n):
            row.append(0)
        matrix.append(row)
    return matrix

print(create_matrix(2))
Output:
[[0, 0], [0, 0]]
Explanation:
A matrix of size n × n is created, so memory usage grows very quickly with input size.
Conclusion
Time complexity tells you how fast your solution grows when input increases. Space complexity tells you how much memory your solution needs. Once you start thinking in terms of the following:
- how many times work repeats
- how input size affects operations
- how memory grows
analyzing complexity becomes a natural part of writing code rather than an afterthought.
Frequently Asked Questions
1. Is time complexity dependent on programming language?
No, time complexity depends on logic, not language. The same algorithm has the same complexity in Python, C, or Java.
2. Why do we ignore constants in complexity?
Because constants do not affect growth for large inputs. O(2n) behaves the same as O(n).
3. Which is more important: time or space?
It depends on the problem. Some systems prefer faster execution, while others prefer less memory usage.
4. Can a solution have both high time and space complexity?
Yes, some inefficient solutions use both more time and more memory. The goal is to optimize both.
5. Why is O(n²) considered bad?
Because the number of operations increases very quickly. It becomes slow for large inputs.
6. Is recursion always bad for space?
Not always, but recursion uses stack memory. Iterative solutions can sometimes reduce space usage.