Big O Notation for Binary Search Trees

Persis Randolph
5 min read · Apr 20, 2021
Image credit: Daniel Stori, http://turnoff.us/geek/binary-tree/

What’s a Binary Search Tree?

A Binary Search Tree is a tree-like data structure made up of uniquely valued nodes. Each node can have at most two children (or branches): a left child that holds a smaller value and a right child that holds a larger value.
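As a rough sketch, a single node might look like this in JavaScript (the class and field names are just illustrative, not from the original article):

// A minimal sketch of one BST node: a value and at most two children.
class TreeNode {
  constructor(value) {
    this.value = value;
    this.left = null;  // child holding a smaller value
    this.right = null; // child holding a larger value
  }
}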

Binary Search Trees are great for storing ordered data such as numbers because they provide very fast access, search, insert, and delete operations.

The Big O (notation)

The simplest way I can think to talk about big O notation is by stating its purpose. Big O notation is a mathematical notation that classifies algorithms by how much time they take to run (time complexity) or how much space they take up (space complexity) as the input size grows. Big O describes growth in broad classes rather than exact running times, but it is a useful way to reason about how scalable a data structure or operation is.

If a company were building a big application, they would want to store their data in a way that took the least amount of time to process or used the least amount of memory. They can use the terminology defined by big O notation to talk about these complexities in a way that is easy to compare and contrast.

Let’s look at some basic examples for the most common time complexities and how they relate to big O notation.

Constant Time: O(1)

function add(num1, num2) {
  return num1 + num2;
}

For the above function, there will always be two inputs: num1 and num2. The only operation ever performed on them is simple addition. This means that no matter what the inputs are, the function always takes the same small amount of time to complete, and that time does not vary with input size.

Linear time: O(n)

function loopThroughArray(array) {
  for (let i = 0; i < array.length; i++) {
    console.log(array[i]);
  }
}

For this function, we are looping through a very basic array. How long is the array? It's hard to tell from looking at this general code: you could pass in an array with 3 elements or one with hundreds. This is where linear time complexity comes into play. The number of elements in the array (n) directly determines how long the function takes to run. On a graph, this shows up as a straight line rising toward the opposite corner at a 45-degree angle. The running time is calculated in relation to n, the number of input elements.

Quadratic: O(n²)

function nestedArrayLoop(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array[i].length; j++) {
      console.log(array[i][j]);
    }
  }
}

For nested arrays, we need to loop through the outer array and, for each of its elements, loop through the inner array it holds. The time it takes to loop through the outer array is multiplied by the number of elements in each inner array. With n outer elements and n inner elements, that's n * n operations, which we can also write as n². The running time grows quadratically with the input size.

…and finally, Logarithmic: O(log(n))

Remind me what a logarithm is again?

“A logarithmic function is the opposite of an exponential function. When you say something grows exponentially, it’s being multiplied. When something grows logarithmically, it is being divided.” (Jasmine Webb, A Gentle Explanation of Logarithmic Time Complexity)

Each node in a Binary Search Tree can have at most 2 child nodes. That means each time we move one step further down the tree, we cut the number of remaining candidates roughly in half.
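To put a rough number on that halving, here's a tiny sketch (the helper name is just for illustration): in a reasonably balanced tree with n nodes, reaching any node takes about log₂(n) steps.

// Roughly how many steps it takes to reach a node in a balanced tree of n nodes.
const stepsNeeded = (n) => Math.ceil(Math.log2(n + 1));

console.log(stepsNeeded(7));    // 3  -> a full tree with 7 nodes is 3 levels deep
console.log(stepsNeeded(1000)); // 10 -> a thousand nodes, only about 10 steps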

I know this is a really basic example of a Binary Search Tree (sketched below), but let's consider it while exploring the basic operations of the tree and how they relate to big O notation.
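The example tree isn't reproduced here, but based on the walkthrough that follows it might look something like this sketch (the extra values 1 and 8 are assumptions added just to fill out the shape):

// A small example tree as plain objects: 5 at the root, 3 to its left,
// and 4 as the right child of 3 (the values used in the walkthrough below).
const tree = {
  value: 5,
  left: {
    value: 3,
    left:  { value: 1, left: null, right: null }, // assumed filler value
    right: { value: 4, left: null, right: null },
  },
  right: { value: 8, left: null, right: null },   // assumed filler value
};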

Access/Search: When trying to access or search for a particular node of a tree, you have to start at the top, or the root. Say we're trying to get to the node with the value of 4. First, we start at the root, 5. We know the value we're searching for is less than 5, so it must be somewhere in the left branch. This effectively gets rid of half our choices, allowing us to rule out the nodes with higher values.

Then, we are at the node with value 3. We know 4 is a higher value than 3, so we can search its right branch and cut off the possibilities from the left branch. By cutting off about half of the possible choices each time we move down the tree, we are seeing logarithmic time complexity in action. The further we go, the fewer choices we have. The number of nodes in the tree determines how long the search takes, resulting in an average time complexity of O(log(n)).
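As a sketch of that search in code (assuming nodes shaped like the tree object above), each comparison throws away one whole subtree:

function search(node, target) {
  if (node === null) return null;         // fell off the tree: value not present
  if (target === node.value) return node; // found it
  return target < node.value
    ? search(node.left, target)           // smaller values live down the left branch
    : search(node.right, target);         // larger values live down the right branch
}

// search(tree, 4) visits 5, then 3, then 4 -- one node per level of the tree.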

Insert/Delete: These work just like access/search. In order to insert or delete an element with any given value, we only need to follow one branch from each node on the way down. This makes the average time complexity the same as above, O(log(n)).
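Here's a sketch of insert under the same assumptions: walk down exactly one branch per level until an empty spot is found, then attach the new node there.

function insert(node, value) {
  if (node === null) return { value, left: null, right: null }; // empty spot: attach here
  if (value < node.value) {
    node.left = insert(node.left, value);   // smaller values go left
  } else if (value > node.value) {
    node.right = insert(node.right, value); // larger values go right
  }
  return node; // equal values are ignored, since BST nodes are uniquely valued
}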

When thinking about which time complexity is best, logarithmic is as close to constant as you can get without quite touching it. On a graph, the line tracking O(log(n)) hovers down near constant time, and it is far more efficient and scalable than linear time, which is the next best. The fact that each common operation on a Binary Search Tree is consistently logarithmic is a really good mark for its scaling, much better than if those operations had linear or even quadratic time complexity, and it makes the Binary Search Tree worth considering as an alternative to unsorted arrays or hash table structures.

Space Complexity

Just a final note here to round things out. When considering space complexity, we are asking how the number of input values (nodes, in this case) increases the amount of space taken up in memory. In the case of the Binary Search Tree, the more nodes added to the tree, the more space is used, and that growth is directly proportional. This makes the space complexity of a Binary Search Tree O(n), or linear.
