Understanding the time complexity of a function is the mark of a great programmer. We don’t measure the time complexity of an algorithm in actual time; we measure the number of operations it takes to complete. We want to know how many iterations the function could potentially execute at its max.

This is important because if a function with a horrible Big O notation is executed at a very high velocity, you could potentially hit a time limit error, which is a big problem. Writing a function with an excellent Big O instead can ensure stability during times of high user throughput.

These Big O notations will always fall under one of the eight time complexities, which we will step through in the following examples.

These time complexities can be ranked in the following order:

| Time Complexity | Complexity | Growth Rate |
| --- | --- | --- |
| O(1) | Constant | Good |
| O(log n) | Logarithmic | Good |
| O(n) | Linear | Good |
| O(n log n) | Log Linear | Fair |
| O(n^2) | Quadratic | Bad |
| O(n^3) | Cubic | Bad |
| O(2^n) | Exponential | Bad |
| O(n!) | Factorial | WTF! |

I will not explain the different complexities in this order, but understand that this is how their performance is ranked.

### How to find the Big O notation of a function

To identify the Big O notation of a function, you need to first find the Big O of the individual operations within the function. Add the Big O’s of the operations together and remove the constants. The highest order term will then be the Big O notation of the whole function.

- Break your code into the individual operations
- Calculate the Big O for each operation
- Sum the Big O of operations
- Negate any given constants from the equation
- Find the highest order term, which is your Big O

Count the steps

Let’s break the following function into individual operations and assign a Big O for each operation.

```kotlin
/**
 * Example of O(n^2)
 */
fun squaredFunc(n: IntArray) {
    // O(1)
    println("Hello World")
    // O(n)
    for (a in n) {
        println(a)
        // O(n)
        for (b in n) {
            println(b)
        }
    }
}
```

Sum the operations

The sum of the operations is O(1) + O(n) * O(n), which can also be seen as O(1) + O(n^2).

Negate constants

The O(1) term is removed because we negate the constants from the equation, so we’re left with O(n^2). This is now our highest order term, and the only term left.

Let’s look at another example. This function is nearly the same as the previous function except that it loops n times, before looping n*n times.

```kotlin
fun loopThenSquaredLoop(n: IntArray) {
    // O(n)
    for (num in n) {
        // O(1)
        println(num)
    }
    // O(n)
    for (a in n) {
        println(a)
        // O(n)
        for (b in n) {
            println(b)
        }
    }
}
```

Find the highest order term

The sum of the function operations gives us (O(n) * O(1)) + O(n) * O(n), or you could say O(n) + O(n^2). There are no constants to remove this time; however, O(n^2) is the highest order term in the sum, so this function has a Big O of O(n^2).
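We can sanity-check that sum with a counter. The sketch below (the `countLoopThenSquared` name and counter are illustrative, not part of the original function) tallies the iterations of the same loop structure, which should come to n + n^2:

```kotlin
// Illustrative counter: same loop structure as loopThenSquaredLoop,
// but it tallies iterations instead of printing values.
fun countLoopThenSquared(n: IntArray): Int {
    var count = 0
    // O(n)
    for (num in n) {
        count++
    }
    // O(n) * O(n)
    for (a in n) {
        for (b in n) {
            count++
        }
    }
    return count
}

fun main() {
    val n = intArrayOf(1, 2, 3, 4)
    // n.size + n.size * n.size = 4 + 16
    println(countLoopThenSquared(n)) // prints 20
}
```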

### Constant

O(1)

A function that is constant is going to execute the same number of iterations for any given input. There may still be iterations in an O(1) function, but they’re just not iterations of n. Let’s look at a function that is O(1).

```kotlin
/**
 * Example of O(1)
 */
fun constant(n: IntArray) {
    val a = intArrayOf(1, 2, 3, 4, 5, 6, 7, 8, 9)
    // Disregard the constants
    for (num in a) {
        println(num)
    }
}
```

We can break the operations of this simple function into two different notations. We get O(1) for this ƒ(n) because:

```kotlin
// This operation is removed from the equation because it is executed
// the same number of times agnostic to the value of n
for (num in a) {
    // ..
}
```

### Linear

O(n)

A function that is linear will, in its worst case, execute n times, as it is proportional to the size of its inputs. The function below checks to see if a given array contains a given integer. Big O looks for the worst case, so if a match is not found, the function’s operations will execute n times.

```kotlin
/**
 * Example of O(n)
 */
fun arrayContainsInt(a: IntArray, b: Int): Boolean {
    for (num in a) {
        if (num == b) return true
    }
    return false
}
```

This function can only be broken out into one operation, `for(num in a)`. That operation is O(n) and is the highest order term, thus this function is O(n).

O(a + b)

Realistically, your code will have more than one variable, in which case a function can no longer be described with a simple n. You will have to take every parameter into consideration when identifying the Big O of a given function. It doesn’t matter if your function is linear, logarithmic, quadratic, etc. You will still need to include the input parameters in your Big O notation when those inputs are a factor in calculating iterations. Let’s look at a linear function that is also O(a + b).

```kotlin
/**
 * Example of O(a + b)
 */
fun multipleLoops(a: IntArray, b: IntArray) {
    // O(a)
    for (num in a) {
        if (num == 99) return
    }
    // O(b)
    for (num in b) {
        println(num)
    }
}
```

If we sum the steps O(a) + O(b), we are left with O(a + b). We cannot use a simple n here because, in its worst case, the function will loop a + b times.
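To see the worst case concretely, here is an illustrative counter version (the `countMultipleLoops` name is hypothetical) that tallies iterations; when no element of `a` equals 99, both loops run to completion and the count is a.size + b.size:

```kotlin
// Illustrative counter for multipleLoops: tallies iterations instead of printing.
fun countMultipleLoops(a: IntArray, b: IntArray): Int {
    var count = 0
    for (num in a) {
        count++
        // The early return is the best case, not the worst case Big O measures
        if (num == 99) return count
    }
    for (num in b) {
        count++
    }
    return count
}

fun main() {
    // Worst case: no 99 in a, so a.size + b.size = 3 + 2 iterations
    println(countMultipleLoops(intArrayOf(1, 2, 3), intArrayOf(4, 5))) // prints 5
}
```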

O(max(a,b))

For a function that is O(max(a, b)), we cannot use O(n) either, because of the multiple inputs. Looking at the function below, there’s no way to tell whether n would correspond to a or b.

```kotlin
/**
 * Example of O(max(a,b))
 */
fun largestIteration(a: IntArray, b: IntArray) {
    if (a.size > b.size) {
        // O(a)
        for (num in a) {
            println(num)
        }
    } else {
        // O(b)
        for (num in b) {
            println(num)
        }
    }
}
```

The Math.max() function gives us the largest of the numbers provided, and if the numbers are equal that number is returned, which is why we can safely say that the function above is O(max(a, b)).
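A quick sketch can confirm the claim: the illustrative `countLargestIteration` below mirrors `largestIteration` but returns its iteration count, which always matches `maxOf(a.size, b.size)` (Kotlin’s equivalent of Math.max()):

```kotlin
// Illustrative counter: only the larger array is walked,
// so the iteration count equals maxOf(a.size, b.size).
fun countLargestIteration(a: IntArray, b: IntArray): Int {
    var count = 0
    if (a.size > b.size) {
        for (num in a) count++
    } else {
        for (num in b) count++
    }
    return count
}

fun main() {
    val a = intArrayOf(1, 2, 3, 4)
    val b = intArrayOf(5, 6)
    println(countLargestIteration(a, b)) // prints 4
    println(maxOf(a.size, b.size))       // prints 4
}
```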

O(ab)

A function that is a multiple of its inputs can still be a linear function. This is not to be confused with a function that is exponential or squared by its inputs. If a function multiplies its inputs, we simplify O(a * b) to O(ab). Let’s count and sum the Big O operations of the following function.

```kotlin
/**
 * Example of O(ab)
 */
fun multiplyLoops(a: IntArray, b: IntArray) {
    // O(a)
    for (numOne in a) {
        // O(b)
        for (numTwo in b) {
            println("num1: $numOne, num2: $numTwo")
        }
    }
}
```

Let’s give an example of a=[1,2,3], b=[4,5,6] and see if we get 9 iterations.

```
num1: 1, num2: 4
num1: 1, num2: 5
num1: 1, num2: 6
num1: 2, num2: 4
num1: 2, num2: 5
num1: 2, num2: 6
num1: 3, num2: 4
num1: 3, num2: 5
num1: 3, num2: 6
```

As you can see, we got 9 iterations because a.size * b.size = 9, so the function is O(ab).

### Logarithmic

O(log n)

A logarithm is used to find the exponent for a given formula. If I wanted to know 3 to the power of what gives me 27, you could write that as 3^x = 27. You can convert this to a logarithm by making the base and the result the inputs and solving for the exponent: x = log(3, 27) = 3.
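We can check that arithmetic directly: `kotlin.math` provides a `log(x, base)` function, so a minimal sketch looks like this (rounding because floating-point division may be a hair off an exact integer):

```kotlin
import kotlin.math.log
import kotlin.math.roundToInt

fun main() {
    // Solve 3^x = 27 for x: x = log base 3 of 27
    val x = log(27.0, 3.0)
    println(x.roundToInt()) // prints 3
}
```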

If a logarithm finds the exponent for a formula, then when it comes to time complexity, O(log n) is solving for the number of iterations: log n = iterations. A key factor in spotting a logarithmic function is when n is repeatedly divided inside a loop.

O(log n) {Base 2}

```kotlin
/**
 * Example of O(log n) Base 2
 */
fun divideTilZero(n: Int) {
    var i = n
    // O(log n)
    while (i > 0) {
        i /= 2
        println("i is now = $i")
    }
}
```

We can convert this to a log by saying our base = 2, because i /= 2, and n is the exponent to solve for, so this function is asking us to solve log(2, n) = iterations. If we plug n = 40 into this function, we will get the following output.

```
i is now = 20
i is now = 10
i is now = 5
i is now = 2
i is now = 1
i is now = 0
```
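To count those iterations rather than eyeball them, an illustrative counter version (the `countDivisions` name is hypothetical) tallies how many times the loop runs for n = 40:

```kotlin
// Illustrative counter: counts the loop iterations of divideTilZero.
fun countDivisions(n: Int): Int {
    var i = n
    var count = 0
    while (i > 0) {
        i /= 2
        count++
    }
    return count
}

fun main() {
    // 40 -> 20 -> 10 -> 5 -> 2 -> 1 -> 0 is six divisions
    println(countDivisions(40)) // prints 6
}
```

Six iterations for n = 40 lines up with log(2, 40) ≈ 5.3, rounded up to the nearest whole pass through the loop.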

We can change the Base of the logarithm by changing what n is divided by.

O(log n) {Base 5}

```kotlin
/**
 * Example of O(log n) Base 5
 */
fun divideTilZero(n: Int) {
    var i = n
    // O(log n)
    while (i > 0) {
        i /= 5
        println("i is now = $i")
    }
}
```

We could also convert our while loop into a recursive function. The mark of O(log n) is that the number being iterated keeps dividing by some base. Here is an example of the same function above, performed recursively.

O(log n) {Base 2}, recursive

```kotlin
import kotlin.math.floor

/**
 * Example of O(log n) base 2, recursive
 */
fun divideTilZeroRecursive(n: Double): Double {
    if (n <= 0) return 0.0 // base case so the recursion terminates
    val base = 2.0
    val a = floor(n / base)
    // O(log n)
    return divideTilZeroRecursive(a)
}
```

### Log Linear

Knowing how an O(log n) function looks, it’s simple to picture how O(n log n) looks, as it’s the same as n * log(n). The distinction between O(log n) and O(n log n) is significant because it moves the algorithm from a good growth rate to a fair growth rate, see the Big O cheat sheet. To achieve a function like this we could simply take one of our previous functions and nest it in a loop of n.

O(n log n)

```kotlin
import kotlin.math.floor

/**
 * Example of O(n log n)
 */
fun nLogN(n: Int) {
    // O(n)
    for (num in 0 until n) {
        var i = n
        // O(log n)
        while (i > 0) {
            i /= 2
            println("i is now = $i")
        }
    }
}

/**
 * Example of recursive O(n log n)
 */
fun nLogNRecursive(n: Int) {
    // O(n)
    for (num in 0 until n) {
        // O(log n)
        divideTilZeroRecursive(num.toDouble())
    }
}

/**
 * Example of O(log n), base 2, recursive
 */
fun divideTilZeroRecursive(n: Double): Double {
    if (n <= 0) return 0.0 // base case so the recursion terminates
    val base = 2.0
    val a = floor(n / base)
    // O(log n)
    return divideTilZeroRecursive(a)
}
```

### Quadratic

This is another way of saying squared iterations, or n * n. Any number multiplied by itself once is squared, so any function that loops through n with a second, nested loop through n is a quadratic function. That means that in a function’s worst case, it will iterate n^2 times.

O(n^2)

```kotlin
/**
 * Example of O(n^2)
 */
fun squared(n: Int) {
    var c = 0
    for (a in 0 until n) {
        for (b in 0 until n) {
            c++
        }
    }
    println("$n squared = $c iterations")
}
```

As this is a quadratic function, whatever value we provide for n will give us n^2, as we are always evaluating the worst case of n. Here’s a few examples of the output of our squared function.

```
3 squared = 9 iterations
4 squared = 16 iterations
5 squared = 25 iterations
```

### Cubic

If you understand that quadratic functions are n^2, you will also get that cubic functions are n^3. Like squared functions, cubic functions have a bad growth rate, except they grow by an extra factor of n. We can picture a simple cubic function by nesting our previous squared function in an additional loop.

O(n^3)

```kotlin
/**
 * Example of O(n^3)
 */
fun cubed(n: Int) {
    var d = 0
    for (a in 0 until n) {
        for (b in 0 until n) {
            for (c in 0 until n) {
                d++
            }
        }
    }
    println("$n cubed = $d iterations")
}
```

The output of such a function would give us cubed iterations for a given n input.

```
3 cubed = 27 iterations
4 cubed = 64 iterations
5 cubed = 125 iterations
```

### Exponential

An exponential function roughly doubles its work for each addition to n, which is why it is referred to as O(2^n). You will typically find exponential behavior in functions that use recursion, like the classic naive Fibonacci implementation below.

O(2^n)

```kotlin
/**
 * Example of O(2^n)
 */
fun fibonacci(n: Int): Int {
    return if (n <= 1) n else fibonacci(n - 1) + fibonacci(n - 2)
}
```

You could envision how this function will call itself recursively on the call stack.
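To make that growth visible, here is an illustrative instrumented version (the `fibonacciCalls` wrapper is hypothetical) that counts every recursive call; the call count climbs steeply as n grows:

```kotlin
// Illustrative instrumentation: counts every recursive call the naive
// fibonacci makes, returning the result paired with the call count.
fun fibonacciCalls(n: Int): Pair<Int, Int> {
    var calls = 0
    fun fib(n: Int): Int {
        calls++
        return if (n <= 1) n else fib(n - 1) + fib(n - 2)
    }
    val result = fib(n)
    return result to calls
}

fun main() {
    for (n in intArrayOf(5, 10, 15)) {
        val (result, calls) = fibonacciCalls(n)
        println("fibonacci($n) = $result in $calls calls")
    }
}
```

For n = 5, 10, and 15 the call counts are 15, 177, and 1973, a roughly constant multiplier for each step of 5 in n, which is the signature of exponential growth.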

### Factorial

A factorial of a whole number n is equal to the product of that number and each whole number less than or equal to n, down to 1. Picture the factorial of 4: 4 * 3 * 2 * 1, which is 24. Factorials are represented using the symbol ‘!’, meaning 4! is 24.
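That definition translates directly into code. The sketch below (the `factorialOf` name is illustrative, to avoid clashing with the O(n!) example further down) multiplies n down to 1:

```kotlin
// Minimal sketch: an iterative factorial, multiplying n down to 1.
fun factorialOf(n: Int): Int {
    var result = 1
    for (num in n downTo 1) {
        result *= num
    }
    return result
}

fun main() {
    println(factorialOf(4)) // prints 24, because 4 * 3 * 2 * 1 = 24
}
```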

We can convert a factorial into a function of O(n!) by looping through n and calling f(n - 1) until our call stack reaches zero. This gives us the same expected behavior as 4! == 4 * 3 * 2 * 1, and the function will iterate O(n!) times in its worst case.

O(n!)

```kotlin
/**
 * Example of O(n!)
 */
fun factorial(n: Int) {
    if (n == 0) return
    // O(n!)
    for (num in 0 until n) {
        factorial(n - 1)
    }
}
```

## Summary

Time complexity for functions is calculated in iterations, not the actual time it takes to execute a function. You want to refactor your functions to have the best Big O notation possible, to ensure that they will perform well with high inputs, the worst cases. To find the Big O of a function, sum the Big O of the individual operations of the function, remove the constants and coefficients, and the highest order term will be your Big O. I hope that you’ve gained a better understanding of Big O notation in Kotlin, have a nice day 🙂