When I first started my coding journey, I always heard fellow programmers talk about how important algorithms and data structures are, and how much time they spent practicing and preparing for technical interviews. After graduating from the Flatiron School coding boot camp, I started learning more about algorithms and data structures. One of the fundamental concepts in algorithms is Big-O Notation. It is a very useful way of describing how long an algorithm takes to run as the size of its input grows. Time complexity is tied to Big-O Notation: it shows how the runtime of a function increases as the size of its input increases.
Before we dive deep into Big-O Notation and time complexity, let us take a broader look at what an algorithm is. An algorithm is a process or set of rules followed in calculations or other problem-solving operations, especially by a computer. Put simply, it means the operations we perform on data and the sets of instructions for executing them. Let's take a look at an example of an algorithm.
In the image shown, suppose we want to calculate the shortest distance from home to school. It is simple and intuitive enough that we can tell by looking at the graph right away: there are three different ways to get to school from home (Home -> Store A -> Store B -> School; Home -> Store B -> School; Home -> Intersection -> School). We can solve this problem with an algorithm by giving it a set of instructions: first, find all the places we can go from home; then find all the paths from each of those places, keeping track of the distances we have traveled as we go; repeat this process until we reach school; finally, compare the routes and their distances to find the shortest path to school.
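To make those instructions concrete, here is a minimal Python sketch of that process. The place names come from the routes above, but the distances are made-up stand-ins, since the real ones live in the image:

```python
# A minimal sketch of the route-finding steps described above.
# Distances are hypothetical; the real ones are in the image.
graph = {
    "Home":         {"Store A": 2, "Store B": 5, "Intersection": 3},
    "Store A":      {"Store B": 2},
    "Store B":      {"School": 3},
    "Intersection": {"School": 6},
    "School":       {},
}

def all_paths(graph, current, goal, path=None, distance=0):
    """Follow every road out of the current place, tracking distance as we go."""
    path = (path or []) + [current]
    if current == goal:
        return [(path, distance)]
    routes = []
    for neighbor, length in graph[current].items():
        if neighbor not in path:  # don't revisit a place we've already passed
            routes += all_paths(graph, neighbor, goal, path, distance + length)
    return routes

# Compare every route we traveled and pick the shortest one.
routes = all_paths(graph, "Home", "School")
for path, distance in routes:
    print(" -> ".join(path), "=", distance)
print("Shortest:", min(routes, key=lambda r: r[1]))
```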
Now that we understand what an algorithm is, how does Big-O Notation come into play? Knowing that an algorithm is a set of instructions (functions) for solving a particular problem, we also want to know how much time it takes to run those functions. How does the runtime of a function grow as the size of its input increases? This is where the concepts of Big-O Notation and time complexity come in. In the image shown below, we can see the Big-O Complexity Chart, which rates a function's runtime performance from excellent to horrible. For simplicity's sake, anything at or above O(n²) is considered horrible in terms of time complexity, and we want to avoid it at all costs when solving problems with algorithms.
The numbers on the chart might be confusing for someone who is non-mathematical, so let us break them down into three main categories. The first category is quadratic time complexity, expressed as O(n²): the runtime of the function grows with the square of the input size, which is exactly what we want to avoid as our input grows. The second category is linear time complexity, expressed as O(n): the runtime of the function increases linearly with the size of the input. From the chart above, we can see that linear time complexity is considered fair, meaning it is still not the best runtime, but it is usable in that it doesn't explode as the input grows. The last category is constant time complexity, expressed as O(1): no matter how much the input grows, the runtime of the function stays the same. Though it is considered the best case for an algorithm, sometimes we simply cannot achieve this perfect result with our function.
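To see just how differently these three categories grow, here is a quick sketch that prints a rough step count for each category at a few input sizes (these are idealized counts, not real measurements):

```python
# Rough step counts for each category as the input size n grows.
for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: O(1) = 1 step, O(n) = {n:>6} steps, O(n^2) = {n**2:>12,} steps")
```

By n = 10,000 the quadratic column has already reached one hundred million steps, which is why the chart labels it horrible.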
Now that we know the three main categories of time complexity, how do we calculate it, and how do we find out whether a function is linear, constant, or quadratic? Let us first take a look at how time complexity is calculated:
T = an + b
Let T be the time it takes the function to run, let a and b be constants, and let n be the size of our input. There are two steps to figuring out the time complexity of this equation. The first step is to find the fastest growing term. We already know that b won't grow, since it is a constant, while an increases with the size of our input. This means the fastest growing term in this equation is an. The second step is to drop the coefficient, which is the a in that term, leaving us with n. This is the time complexity equation for linear time, O(n).
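As a quick sanity check, we can watch T = an + b happen on a real machine. In this sketch, doubling n roughly doubles the measured time (the exact numbers will vary by machine, so treat them as illustrative):

```python
import time

def linear_work(n):
    total = 0
    for i in range(n):   # the loop body runs n times -> the "an" term
        total += i
    return total         # fixed setup/return cost -> the "b" term

for n in [1_000_000, 2_000_000, 4_000_000]:
    start = time.perf_counter()
    linear_work(n)
    print(f"n = {n:>9,}: {time.perf_counter() - start:.4f} s")
```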
A similar equation describes the time calculation for quadratic time complexity:
T = an² + bn + c
In this equation, we can tell right away that the fastest growing term is an², simply because n² carries the highest exponent. By dropping the coefficient a, we arrive at quadratic time complexity, O(n²). We can do the same for constant time complexity:
T = a
where a is still a constant. The runtime of our function is a × 1, which can be expressed as O(1).
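To see both reductions with concrete numbers (the constants here are made up for illustration): if T = 2n² + 4n + 7, the fastest growing term is 2n², and dropping the coefficient 2 leaves O(n²); if T = 5, nothing grows with n at all, so it reduces to O(1).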
So far we have learned what Big-O Notation and time complexity are, and how they are calculated. But how do we determine the time complexity of our own function when we try to solve a problem? Let us take a look at the code snippet shown below!
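The snippet itself is an image in the original post, so here is a reconstruction of the two functions as they are described in the walkthrough that follows (the names stupid_function, find_sum, and given_array come from that walkthrough):

```python
def stupid_function(given_array):
    total = 0       # O(1): runs once, never touches given_array
    return total    # O(1): runs once, never touches given_array

def find_sum(given_array):
    total = 0                # O(1)
    for i in given_array:    # runs n times, where n = len(given_array)
        total += i           # O(1) each time the loop body runs
    return total             # O(1)
```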
In this code snippet, there are two functions written in Python. The first, stupid_function, takes given_array as a parameter, sets a variable called total equal to 0, and then returns that total variable. How much time does it take to execute each line of this code? Notice that this function doesn't use given_array at all. The line
total = 0
runs in constant time, which means it is O(1). The second line of stupid_function, return total, is also O(1), since it doesn't use given_array either. Once we have the runtime for each line of code, we can add them up to find the runtime of the whole function:
T = O(1) + O(1)
This is just the sum of two constants, which we can express as
c1 + c2 = c3
where c3 is another constant. Since the result is a constant, stupid_function's time complexity is O(1).
The second function, find_sum, is the next one we can analyze for time complexity. The first and fourth lines of this function are the same kind of statements as in stupid_function, which means both lines are O(1). The second and third lines are where things get a little more complex. The loop line,
for i in given_array:
costs O(1) each time we move to the next number inside given_array, and the loop body,
total += i
simply adds each number in the array to the total, which is also O(1) each time it runs. In the end, when we add up the runtime of every line of code in find_sum, we get
T = O(1) + n * O(1) + O(1)
where the first and last O(1) are simply the first and fourth lines of the function, while n is the number of times we execute lines two and three, one iteration per element in given_array. Dropping the constants, this function comes out to O(n). Knowing this method, we can conclude that every for loop we use raises the time complexity of our function significantly (one for loop = O(n), one nested for loop = O(n²)), as the sketch below shows.
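To see where O(n²) comes from, here is a hypothetical nested-loop function in the same style as find_sum: the inner line runs n times for each of the n outer iterations, so it executes n * n = n² times in total.

```python
def find_pair_sums(given_array):
    """Hypothetical example: sums every (i, j) pair in the array."""
    total = 0                      # O(1)
    for i in given_array:          # runs n times
        for j in given_array:      # runs n times for EACH i -> n * n iterations
            total += i + j         # O(1), executed n^2 times in total
    return total                   # O(1)

# T = O(1) + n * n * O(1) + O(1), so find_pair_sums is O(n^2).
```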
In conclusion, Big-O Notation and time complexity are very important concepts to understand when it comes to algorithms. Knowing how the runtime of our function will grow as the size of our input increases is essential when structuring our code for maximum efficiency. For more information, check out the following links, and feel free to connect and chat with me on LinkedIn!