It’s important when you’re writing applications, especially those that are going to process a large amount of data, that you understand how your application is going to scale.
Big-O notation gives us a simple way to describe how many operations your application needs to perform as a function of how much data it has to process.
There are a few different Big-O notations that you’re likely to come across, but the main ones are:

- O(1) — constant
- O(log n) — logarithmic
- O(n) — linear
- O(n log n) — linearithmic
- O(n²) — quadratic
- O(2ⁿ) — exponential
- O(n!) — factorial
If we plot these on a graph, we can see quite clearly how the number of operations that need to be performed grows with the number of elements being processed.
You can see that with quadratic, exponential, and factorial complexity, the number of operations climbs rapidly as the number of elements grows. And as the operation count climbs, so does the time your application takes to run.
Constant, written O(1), describes anything that isn’t going to change depending on the number of elements you have to process.
If there’s a particular part of your code that runs the same number of times, no matter how big your dataset is, then that is considered constant.
Even a loop can be considered constant if it’s running the same number of times, no matter how big the dataset is.
for (int i = 0; i < 10; i++) {
    // always runs exactly 10 times, regardless of input size
}
Technically, every single line of code counts towards the time complexity, but when we’re calculating it, we don’t really care about constants.
Even if you have four lines of code that always run once, we don’t calculate this as O(4); we just consider it to be O(1). This is because we’re looking for the big-picture calculation, and a few additional lines of code with constant complexity aren’t going to affect the overall result that much.
We’re trying to get an overall sense of how your application is going to perform as the dataset it has to process grows.