
# Introduction

When writing an algorithm or analysing a program, you should be able to calculate its Big O. This process is called amortised analysis.

In this article, we will perform this analysis on some programs and algorithms, and explain how to calculate the Big O of each of them.

## The Big O family formalised

First, let's formalise what exactly Big O is. If you've read the previous article, you should already have a good intuition about what Big O means.

A function $f(n)$ is $O(g(n))$ if and only if $\lim_{n \to \infty} \frac{f(n)}{g(n)}$ converges.

Why is this so?

From the previous post, we concluded that $f(n)$ is $O(n^k)$ if $k$ is the highest degree of $n$ in $f(n)$.
If we divide two functions with the same growth rate and let $n$ approach infinity, we will end up with a finite value.

Let's try this out on an example. Imagine we have a function, say $f(n) = 2n^2 + 3n$, and we want to check if it is $O(n^2)$. Certainly, this is the case, because both functions have the same highest degree $n^2$, but we will prove it mathematically.
We compute the limit as we approach an infinite input size:

$$\lim_{n \to \infty} \frac{2n^2 + 3n}{n^2} = \lim_{n \to \infty} \left( 2 + \frac{3}{n} \right) = 2$$

Because the limit converges to a constant, we can conclude that $f(n)$ is indeed $O(n^2)$.

You might notice that according to this definition of Big O, $f(n)$ is also $O(n^3)$: dividing by $n^3$ makes the limit converge to $0$. This is true. The Big O of a function actually specifies all functions that have a growth rate equal to or greater than the worst-case growth rate of $f(n)$. In Algorithms & Data Structures, we're only really interested in the tightest Big O.
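We can also see this convergence numerically by evaluating the ratio $f(n) / g(n)$ for growing values of $n$. A small sketch, using the hypothetical polynomial $f(n) = 2n^2 + 3n$:

```java
public class LimitDemo
{
	static double f(double n) { return 2 * n * n + 3 * n; }
	static double g(double n) { return n * n; }

	public static void main(String[] args)
	{
		// As n grows, f(n) / g(n) approaches the constant 2,
		// so f(n) is O(n^2).
		for (double n = 10; n <= 1_000_000; n *= 100)
		{
			System.out.println("n = " + n + ", f(n)/g(n) = " + f(n) / g(n));
		}
	}
}
```

Dividing by $n^3$ instead would drive the ratio towards $0$ — also convergent, which is why $O(n^3)$ is a valid (but not tight) bound.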

## Amortised Analysis

### Printing Squares

The time and space complexity of some functions can be easily determined, while it can be more difficult for others.

Let's start with a simple example. Can you figure out what the space and time complexity of the following function is?

```java
void
printAsciiSquareBorder(int n)
{
	for (int y = 0; y < n; y++)
	{
		for (int x = 0; x < n; x++)
		{
			// If this point is on an edge, print '#'

			if (x == 0 || x == n - 1 || y == 0 || y == n - 1)
			{
				System.out.print("#");
			}

			// Otherwise, print a space

			else
			{
				System.out.print(" ");
			}
		}

		// Print newline

		System.out.println();
	}
}
```

The time complexity is $O(n^2)$. We can easily see this from the fact that we have two nested loops, each iterating $n$ times.
The outer loop is executed once for each row, and the inner loop is executed once for each column in that row. So, the total number of iterations is $n \cdot n = n^2$.

The space complexity is constant ($O(1)$), however. This function does not use or allocate any additional memory. If we were to use a `String` or a `StringBuilder` to build the output before printing it, for example, the space complexity would be $O(n^2)$.
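For illustration, here is a hypothetical variant (the name `buildAsciiSquareBorder` is made up for this sketch) that builds the whole output in a `StringBuilder` first. The builder holds roughly $n \cdot (n + 1)$ characters before anything is printed, hence the $O(n^2)$ space:

```java
public class SquareDemo
{
	static String
	buildAsciiSquareBorder(int n)
	{
		StringBuilder sb = new StringBuilder();

		for (int y = 0; y < n; y++)
		{
			for (int x = 0; x < n; x++)
			{
				// '#' on the border, ' ' on the inside.
				boolean onEdge = x == 0 || x == n - 1 || y == 0 || y == n - 1;
				sb.append(onEdge ? '#' : ' ');
			}

			sb.append('\n');
		}

		return sb.toString();
	}

	public static void main(String[] args)
	{
		System.out.print(buildAsciiSquareBorder(4));
	}
}
```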

### Recursion: Factorial

When you have a recursive function, it is a bit harder to determine the space and time complexity.

```java
int
factorial(int n)
{
	if (n == 1)
	{
		return 1;
	}

	return n * factorial(n - 1);
}
```

The time complexity is $O(n)$.

Notice that we have a recursive function with a branch:

• Base case: If n is equal to 1, the function returns 1.
• Recursive case: The function returns n times the result of calling factorial(n - 1).
The base case has a time complexity of $O(1)$, because it only executes once. What's left to do is to figure out the time complexity of the recursive case.
Intuitively, we can see that each recursive call is made with `n` decremented by one, and the base case is reached when `n` is equal to 1. Therefore, the function must be called $n$ times.
From this we can conclude that the time complexity of the recursive case is $O(n)$.

The space complexity is also $O(n)$. This is because the recursive call stack is $n$ levels deep.

Note that each stack frame holds the return value, local variables, and parameters. Some functions might not have any local variables or parameters, but all functions have a return value. Therefore you always have to take the stack frame size into account when calculating the space complexity of recursive functions.
In the above example, each call to the function stores one parameter and the return value on its stack frame.
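For comparison, an iterative factorial (a sketch; the name `factorialIterative` is ours) needs no recursive call stack at all. The time stays $O(n)$, but the extra space drops to $O(1)$:

```java
public class FactorialDemo
{
	static int
	factorialIterative(int n)
	{
		int result = 1;

		// One multiplication per value of i: O(n) time, O(1) space.
		for (int i = 2; i <= n; i++)
		{
			result *= i;
		}

		return result;
	}

	public static void main(String[] args)
	{
		System.out.println(factorialIterative(5)); // prints 120
	}
}
```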

Consider the binary search algorithm you have seen in the previous post.
We can also use this algorithm recursively:

```java
boolean
sortedArrHasElement(int[] arr, int n, int left, int right)
{
	if (left == right)
	{
		return arr[left] == n;
	}

	int mid = (left + right) / 2;

	if (arr[mid] < n)
	{
		return sortedArrHasElement(arr, n, mid + 1, right);
	}
	else
	{
		return sortedArrHasElement(arr, n, left, mid);
	}
}
```

The time complexity is $O(\log n)$, which should come as no surprise, because we are implementing a binary search algorithm.

Let's see why this is the case.
The code has three branches:

• Base case: If our range consists of a single element (`left == right`), we check if that element is equal to the element we are looking for. If it is, we return true. Otherwise, we return false.
• Recursive case 1: If the middle element is less than the element we are looking for, we recursively continue our search in the right half of the array (all elements between `mid + 1` and `right`).
• Recursive case 2: If the middle element is greater than or equal to the element we are looking for, we recursively continue our search in the left half of the array (all elements between `left` and `mid`).
The base case has a time complexity of $O(1)$, because it only executes once.
In both recursive cases, the function is called again with half the range. The range is recursively halved until the base case is reached, which takes $\log_2 n$ halvings.
This means that the time complexity of the recursive case is $O(\log n)$.
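We can make the $\log_2 n$ halvings concrete with a small sketch that counts how often a range of size $n$ can be halved before a single element remains:

```java
public class HalvingDemo
{
	static int
	halvingSteps(int n)
	{
		int steps = 0;

		// Halve the range until one element remains.
		while (n > 1)
		{
			n /= 2;
			steps++;
		}

		return steps;
	}

	public static void main(String[] args)
	{
		System.out.println(halvingSteps(1024)); // prints 10, since 2^10 = 1024
	}
}
```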

The space complexity is also $O(\log n)$, because the recursive call stack is $\log_2 n$ levels deep.
Note that if we made this function iterative, the space complexity would be $O(1)$. An example can be found in the previous post.
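As a sketch of what such an iterative version could look like (our own variant, not necessarily the one from the previous post; it assumes a non-empty array), only a few local variables are needed regardless of the array size:

```java
public class BinarySearchDemo
{
	static boolean
	sortedArrHasElement(int[] arr, int n)
	{
		int left = 0;
		int right = arr.length - 1;

		// Halve the range [left, right] in place: O(log n) time, O(1) space.
		while (left < right)
		{
			int mid = (left + right) / 2;

			if (arr[mid] < n)
			{
				left = mid + 1;
			}
			else
			{
				right = mid;
			}
		}

		return arr[left] == n;
	}

	public static void main(String[] args)
	{
		int[] arr = { 1, 3, 5, 8, 13 };
		System.out.println(sortedArrHasElement(arr, 5)); // prints true
	}
}
```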

### Generating Subsets

Consider the following problem:

You get an input array of integers and a non-zero integer `n`. Find the subset whose sum is divisible by `n` and as large as possible. Return the sum of the found subset.
Note that such a subset always exists: an empty array is also a subset of the input array. The sum of the empty array is 0, which is divisible by `n`.

Example:
Consider an input array of `[ 3, 8, 6, 5 ]` and `n` = `3`.
The subset whose sum is divisible by `3` and maximal is `[ 3, 6 ]`. The sum is `9`, so we return `9`.

Try creating a function `int maxDivisibleSubsetSum(int[] arr, int n)` that returns the sum of the maximum divisible subset and analyse its time and space complexity. You may use helper functions and other constructs if you wish.

The problem can be solved with iteration or recursion. This implementation uses recursion to generate the subsets. There exist more efficient implementations, which are left as an exercise for the reader. (Competitive Programmer's Handbook, page 47)

```java
import java.util.ArrayList;

public class Solution
{
	int[] arr;
	int n;
	int maxSum;
	ArrayList<Integer> subset = new ArrayList<>();

	void
	processSubset()
	{
		int sum = 0;

		// Calculate the sum of all the elements in the subset.

		for (int i = 0; i < subset.size(); i++)
		{
			sum += subset.get(i);
		}

		// If the sum is divisible by n and bigger than the max sum,
		// update the max sum.

		if (sum % n == 0 && sum > maxSum)
		{
			maxSum = sum;
		}
	}

	void
	search(int k)
	{
		if (k == arr.length)
		{
			// We're at the end of the current search, process this subset

			processSubset();
		}
		else
		{
			// Here we branch our search in two segments:
			// One that contains the element at position k,
			// and one that does not.

			// Branch 1: don't include the element and keep searching.

			search(k + 1);

			// Branch 2: temporarily include the element and keep searching.
			// We will add the element at index k to the subset,
			// then we will continue our search,
			// and when the search is completed and the subset is processed,
			// we remove the element again.

			subset.add(arr[k]);
			search(k + 1);
			subset.remove(subset.size() - 1);
		}
	}

	int
	maxDivisibleSubsetSum(int[] arr, int n)
	{
		// Save the input to the class, so the other methods
		// can easily access the input.

		this.arr = arr;
		this.n = n;

		// Search all subsets of the array

		search(0);

		// Return the maximum sum

		return maxSum;
	}
}
```

The `maxDivisibleSubsetSum` function has a couple of lines that take constant time and space. The rest of the function is a recursive search.
Here you can see a visualisation of the `search` function:

The numbers in the subsets represent indices of elements in the array that are included in the subset.

The `search` function is called $O(2^n)$ times. We can easily derive this from the visualisation:
We have $n$ array elements, and for each of them we have to both include it and not include it. This splits the problem into two subproblems at every level. Since the `search` function recurses $n$ levels deep, the total number of recursive calls is $O(2^n)$.
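We can verify this count with a stripped-down sketch of `search` that does nothing but recurse and count its own calls (the exact count is $2^{n+1} - 1$, which is $O(2^n)$):

```java
public class CallCounter
{
	// Bare skeleton of search(): same branching, no subset bookkeeping.
	// Returns the number of calls a search over n elements makes.
	static int
	countCalls(int k, int n)
	{
		if (k == n)
		{
			return 1; // base case: one fully decided subset
		}

		// This call, plus the "exclude" and "include" branches.
		return 1 + countCalls(k + 1, n) + countCalls(k + 1, n);
	}

	public static void main(String[] args)
	{
		for (int n = 1; n <= 5; n++)
		{
			System.out.println("n = " + n + ": " + countCalls(0, n) + " calls");
		}
	}
}
```

The counts (3, 7, 15, 31, 63) roughly double with every extra element, which is exactly the $O(2^n)$ growth we derived.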

The base case of the `search` function calls the `processSubset` function. This function calculates the sum of the elements in the subset, and updates `maxSum` if the sum is divisible by `n`. It is trivial that this function takes linear time ($O(n)$) and constant space ($O(1)$).

Since we know the time complexity of the `processSubset` function, we can calculate the time complexity of the `search` function too.
The `search` function consists of two branches: the base case, which takes $O(n)$ time because it calls `processSubset`, and the recursive case, which itself takes $O(1)$ time. We take the highest of these two for the work per call, which is $O(n)$. Multiplied by the $O(2^n)$ recursive calls, the time complexity is $O(n \cdot 2^n)$.

The space complexity of the `search` function is $O(n)$, since we need to store the current subset and the recursive call stack, both of which grow to at most $n$ entries. In the visualisation you can see that the leaves of the recursion tree all have a depth of three. We can generalise this to an arbitrary input size $n$.

Since the `maxDivisibleSubsetSum` function simply wraps the `search` function, we can conclude that the time complexity of the `maxDivisibleSubsetSum` function is $O(n \cdot 2^n)$, and its space complexity is $O(n)$.
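The iterative alternative mentioned earlier can be sketched with bitmasks: each integer from $0$ to $2^n - 1$ encodes one subset (bit `k` set means element `k` is included). The class and method names below are hypothetical:

```java
public class SubsetDemo
{
	static int
	maxDivisibleSubsetSum(int[] arr, int n)
	{
		int maxSum = 0;

		// Each mask encodes one subset: bit k set = element k included.
		for (int mask = 0; mask < (1 << arr.length); mask++)
		{
			int sum = 0;

			for (int k = 0; k < arr.length; k++)
			{
				if ((mask & (1 << k)) != 0)
				{
					sum += arr[k];
				}
			}

			if (sum % n == 0 && sum > maxSum)
			{
				maxSum = sum;
			}
		}

		return maxSum;
	}

	public static void main(String[] args)
	{
		// The example from above: the subset [ 3, 6 ] sums to 9.
		System.out.println(maxDivisibleSubsetSum(new int[] { 3, 8, 6, 5 }, 3)); // prints 9
	}
}
```

The time complexity is the same $O(n \cdot 2^n)$, but the recursion stack disappears, so the extra space is $O(1)$.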

## Final Thoughts

I hope that this post has been helpful.

If you found this post helpful, feel free to share it! Any feedback is very welcome!