Algorithm Complexity - What, How, Why

February 06, 2019 5 minutes

Let’s start with a few questions:

  • How do we say that one algorithm performs better than another?
  • Are there any set criteria on which we can compare two algorithms?
  • Or in other words, why is it such a big deal to compare two algorithms in the first place?

Before we answer the above-mentioned questions, let's first agree on the fact that comparing two algorithms is not as straightforward as it sounds. Even a single algorithm can perform differently across multiple executions (on the same or different machines) depending on various factors like:

  • Availability of resources like CPU, memory, disk, etc.
  • If executed on another machine, differences in system architecture can also impact performance.
  • The type of input data on which the algorithm is executed.

Due to the above-mentioned points, it becomes difficult to evaluate how fast or slow an algorithm is (in comparison to itself or to some other algorithm). A change in any of the above variables can change the execution time.

But then how do we evaluate an algorithm? We evaluate an algorithm based on the following two factors:

  1. Time Complexity of an algorithm defines the amount of time taken by an algorithm to run as a function of the length of the input.
  2. Space Complexity of an algorithm defines the amount of space or memory taken by an algorithm to run as a function of the length of the input.

But both of these again depend on the underlying execution platform, so how do they give a platform-independent view of an algorithm's efficiency? The answer lies in the way they are calculated: by counting the number of elementary operations performed by the algorithm, assuming that each elementary operation takes a fixed amount of time to perform.

Calculating these two purely in terms of the operations performed by the algorithm makes them independent of any of the variables that we discussed earlier, giving a platform-independent estimate of the algorithm's efficiency.
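
As a minimal sketch of this idea, the hypothetical function below sums a list while counting its elementary operations. The count depends only on the input length, never on the machine it runs on:

```python
def sum_with_count(values):
    """Sum a list while counting elementary operations.

    The operation count is a function of len(values) alone, never of
    the hardware, so it gives a platform-independent measure of cost.
    """
    ops = 0
    total = 0
    ops += 1                 # one assignment
    for v in values:
        total += v
        ops += 1             # one addition per element
    return total, ops

total, ops = sum_with_count([3, 1, 4, 1, 5])
print(total, ops)            # 14, 6 -> roughly n + 1 operations
```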

Order of Growth

Order of growth represents how the execution time depends on the length of the input. For example, for a linear search algorithm, we know that we have to compare every element present in the list with the key element that we need to find. This means that the execution time is directly proportional to the input size, i.e. linear in nature.
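
A minimal linear search sketch makes this concrete: the loop below may have to touch every element once, so in the worst case the number of comparisons equals the input size:

```python
def linear_search(items, key):
    """Return the index of key in items, or -1 if absent.

    Worst case: key is missing or last, so all n elements are
    compared -> the running time grows linearly with n.
    """
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1

print(linear_search([7, 2, 9, 4], 9))   # 2
print(linear_search([7, 2, 9, 4], 5))   # -1 (after n comparisons)
```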

Similarly, for binary search, as we always discard half of the elements in every iteration until we either find the element or there are no more elements to compare, the order of growth is NOT linear but logarithmic in the input size.
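
For comparison, here is a standard binary search over a sorted list; each iteration halves the remaining range, so the number of comparisons grows roughly as $\log_2 n$:

```python
def binary_search(sorted_items, key):
    """Return the index of key in sorted_items, or -1 if absent.

    Each iteration discards half of the remaining range, so at most
    about log2(n) iterations are needed.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == key:
            return mid
        elif sorted_items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 4, 7, 9, 15], 9))   # 3
print(binary_search([2, 4, 7, 9, 15], 5))   # -1
```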

The idea is to use these growth functions, instead of raw execution measurements, to compare two algorithms, as these functions depend only on the size of the input data and are not affected by the execution environment.

Asymptotic Notation

Asymptotic notations are mathematical functions used to represent these growth functions, for example $n$ for linear search and $\log_2 n$ for binary search. There are 3 common notations used to describe various execution cases: $\Theta$, $\Omega$ and $O$.

1. Big-O Notation:

Commonly written as $O$, this notation specifies the worst-case growth function of an algorithm as the input size changes. In other words, it gives you an upper bound on the order of growth in terms of input size. For an algorithm with order of growth $O(n^2)$, the actual growth function can be anything like $n$, $n^2$, $100n + 5$ or $50n^2 + 20n + 5$, as all of these are covered by $O(n^2)$.
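
Formally (following the standard textbook definition), $f(n) = O(g(n))$ when some constant multiple of $g$ eventually dominates $f$:

$$
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
$$

For instance, $50n^2 + 20n + 5 \le 75n^2$ for all $n \ge 1$, so that function is $O(n^2)$ with $c = 75$ and $n_0 = 1$.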

2. Theta Notation:

Written as $\Theta$, this notation defines a tight bound on the growth function. This means it defines the actual rate of growth of the algorithm in terms of the input size. For example, an algorithm with complexity $\Theta(n^2)$ can have growth functions like $5n^2$ or $67n^2 + 5n$, but NOT $n$, i.e. the highest-order term determines the Theta notation.
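
The standard formal definition captures this tight bound: $g$ must sandwich $f$ from above and below, up to constant factors:

$$
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 > 0 : c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \text{ for all } n \ge n_0
$$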

3. Big Omega Notation:

Denoted by the symbol $\Omega$, Big-Omega notation represents the best-case growth function and gives a lower bound on the growth in terms of the input size. In other words, an algorithm with complexity $\Omega(n^2)$ will have a growth function of the order of $n^2$ or anything higher, like $10n^3 + 5n^2$.
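
Formally, Big-Omega mirrors Big-O with the inequality reversed:

$$
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0
$$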

While using these notations to describe the order of growth, the idea is to keep only the highest-order term of the function and drop any constants or lower-order terms that may be present alongside it. For example, consider the function $5n^2 + 10n + 5$: after we drop the lower-order terms ($10n$, $5$) and the constant coefficient of the highest-order term ($5$), we are left with $n^2$, which we can use in the notations defined above.

  • As the Theta notation represents the exact bound on the growth function, in this specific case it becomes $\Theta(n^2)$.
  • Big-Omega, on the other hand, represents the lower bound and hence can be written as either $\Omega(n^2)$ or $\Omega(n)$.
  • Similarly, as Big-O notation represents an upper bound, the above example can be covered by $O(n^2)$, $O(n^3)$, etc.
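
As a quick check of the $\Theta(n^2)$ claim for $5n^2 + 10n + 5$, the constants $c_1 = 5$ and $c_2 = 6$ satisfy the formal definition:

$$
5n^2 \le 5n^2 + 10n + 5 \le 6n^2 \text{ for all } n \ge 11
$$

(the upper bound holds once $n^2 \ge 10n + 5$, i.e. from $n = 11$ onwards).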

Limitations of Asymptotic Analysis

We just mentioned a few good reasons to use asymptotic notations to analyse algorithms, but are there any limitations or disadvantages of using them as well? Let's try to answer this question using some examples:

  1. As asymptotic notation represents the order of growth in terms of input size, there can be scenarios where its applicability is questionable. For example, an algorithm with growth function $O(1000 \log n)$ is asymptotically better than $O(2n)$, but it is possible that the large values of $n$ for which this holds true never occur in a production scenario (see the sketch after this list).
  2. It does not consider other factors like memory layout, cache friendliness, etc., which can seriously hurt real-world performance if ignored.
  3. Another limitation of asymptotic analysis is that it ignores the constants involved. So asymptotically both $565n^2$ and $n^2$ are equivalent, but we surely know which one performs better in practice.
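
As a rough sketch of point 1 (using the hypothetical growth functions from that example), the snippet below finds where $1000 \log_2 n$ actually drops below $2n$. If your inputs never reach that size, the "asymptotically worse" algorithm wins in practice:

```python
import math

def crossover():
    """Find the smallest n where 1000*log2(n) < 2*n.

    Below this n, the 'asymptotically worse' 2n algorithm is
    actually faster than the 1000*log2(n) one.
    """
    n = 2
    while 1000 * math.log2(n) >= 2 * n:
        n += 1
    return n

print(crossover())   # roughly n ~ 6300: an input size that a given
                     # production workload may never reach
```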