Analyzing Algorithms 3/6: Asymptotic Notation

Big O, Little o, what begins with O?

Asymptotic notation refers to the mathematics used to describe the growth of functions. Most programmers have at least heard of the infamous “Big O” notation, but many have long forgotten its exact meaning or have never dived deep enough to gain a complete understanding. There is also quite a lot of misinformation floating around about its subtler points. Follow along as we analyze an algorithm and discuss it in terms of asymptotic notation.

Note: There are many different shorthands for asymptotic notation. First, I will try to show the proper syntax, followed by the less formal but frequently used syntax where applicable.

Insertion Sort

Let’s consider the famous insertion sort algorithm. It is often one of the first algorithms introduced to computer science students studying algorithms. Insertion sort is easy to visualize because it’s a very intuitive way for humans to sort. If you are unfamiliar with insertion sort, take a quick refresher before proceeding.

Suppose we are sorting a standard deck of 52 playing cards given to us in a random order. In the best case, the cards are already sorted, and it will only take n-1, or 51, comparisons to sort the deck. However, in the worst case, where the cards are in the exact opposite order, it will take (n^2-n)/2, or 1326, comparisons. Since the comparisons are the only significant work done in this algorithm, we will consider only the comparisons when discussing its running time.
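
To make those counts concrete, here is a minimal Python sketch of insertion sort with a comparison counter bolted on. The function name and the counter are illustrative additions for this post, not part of the algorithm itself.

    def insertion_sort(cards):
        # Sorts the list in place and returns the number of comparisons made.
        comparisons = 0
        for i in range(1, len(cards)):
            current = cards[i]
            j = i - 1
            # Shift larger cards to the right until the insertion point is found.
            while j >= 0:
                comparisons += 1
                if cards[j] > current:
                    cards[j + 1] = cards[j]
                    j -= 1
                else:
                    break
            cards[j + 1] = current
        return comparisons

    deck = list(range(52))
    print(insertion_sort(sorted(deck)))                # already sorted: 51 comparisons
    print(insertion_sort(sorted(deck, reverse=True)))  # reverse sorted: 1326 comparisons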

Note: Asymptotic notation doesn’t have to be applied to the running time of an algorithm. It can be applied to any function that grows. It is often applied to memory consumption as well. As far as terminology goes, if we are talking about the running time of an algorithm, it is often referred to as the algorithm’s time complexity. If we’re talking about the memory consumption of an algorithm, it is often referred to as the algorithm’s space complexity. More commonly, we are concerned about an algorithm’s time complexity, so if neither is explicitly mentioned, it’s usually safe to assume the author is talking about time complexity.

Definitions

Let’s look at the definitions of some common asymptotic notations:

  • little omega
    𝜔(g(n)) is the set of all functions f(n) whose order of growth is greater than that of g(n).

  • big omega
    𝛺(g(n)) is the set of all functions f(n) whose order of growth is greater than or equal to that of g(n).

  • theta
    Θ(g(n)) is the set of all functions f(n) whose order of growth is equal to that of g(n).

  • big O
    O(g(n)) is the set of all functions f(n) whose order of growth is less than or equal to that of g(n).

  • little o
    o(g(n)) is the set of all functions f(n) whose order of growth is less than that of g(n).

Name           Symbol   Meaning
little omega   𝜔        >
big omega      𝛺        >=
theta          Θ        =
big O          O        <=
little o       o        <
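
For readers who want the precise versions, these informal descriptions line up with the constant-based definitions found in standard algorithms textbooks. One common way to state them, written in LaTeX:

    \begin{aligned}
    f(n) \in O(g(n))      &\iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \\
    f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : 0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0 \\
    f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n)) \\
    f(n) \in o(g(n))      &\iff \forall\, c > 0\ \exists\, n_0 : 0 \le f(n) < c \cdot g(n) \text{ for all } n \ge n_0 \\
    f(n) \in \omega(g(n)) &\iff \forall\, c > 0\ \exists\, n_0 : 0 \le c \cdot g(n) < f(n) \text{ for all } n \ge n_0
    \end{aligned}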

Analyzing Insertion Sort

Because algorithms can have vastly different running times depending on the input, we must specify either worst case, best case, or average case whenever we are referring to an algorithm’s order of growth. For example, Θ(n^2) is the set of all functions whose order of growth is equal to that of f(n) = n^2. Given this, we can see that the function that represents insertion sort’s worst-case runtime is a member of the set Θ((n^2-n)/2). However, when considering orders of growth, we care only about the highest-order term, and we ignore constant factors. For example, even though f(n) = n^2/2 grows more slowly than f(n) = n^2, they both have the same order of growth. Therefore, we can say that the function that represents insertion sort’s worst-case runtime is also a member of the set Θ(n^2).

Or, less formally, we can simply say that insertion sort’s worst case has a time complexity of Θ(n^2). Or, even more simply, that its worst-case runtime is Θ(n^2). We could also say that insertion sort’s worst case runtime is O(n^2) and 𝛺(n^2). We could not, however, say that it is o(n^2) or 𝜔(n^2). Considering insertion sort’s best case runtime, we can say that it is o(n^2), Θ(n), O(n), or even O(n^2).
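
One way to convince yourself that (n^2-n)/2 and n^2 really do share an order of growth is to look at their ratio as n grows. A quick, illustrative Python check (not from the original text):

    # The ratio settles toward the constant 1/2, so (n^2 - n) / 2 stays within
    # a constant multiple of n^2 as n grows.
    for n in [10, 100, 1_000, 10_000, 100_000]:
        print(n, ((n * n - n) / 2) / (n * n))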

Enough with the Formalities

Oftentimes, this level of precision is not necessary, and there is a convention commonly used among computer scientists: if you do not specify worst case, best case, or average case, then the statement applies to all three cases.

For example, if you say an algorithm is O(n^2) without specifying the case, then this is equivalent to saying that the worst-case running time is O(n^2) (since that implies that the best and average cases are too). One way to think of it is the following:

  • An algorithm is O(n^2) if its worst case runtime is O(n^2).
  • An algorithm is 𝛺(n^2) if its best case runtime is 𝛺(n^2).
  • An algorithm is Θ(n^2) if its worst case and best case runtime are Θ(n^2).

Name           Symbol   Meaning
little omega   𝜔        best case >
big omega      𝛺        best case >=
theta          Θ        best and worst case =
big O          O        worst case <=
little o       o        worst case <

Note: If an algorithm has a constant time complexity, meaning its growth does not depend on the input size, then it is expressed as Θ(n^0) or, less formally, Θ(1).
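
As a small, illustrative example (the variable names are mine, not the article’s), looking up a card by position in a Python list takes the same amount of work no matter how many cards the list holds, so the lookup is Θ(1):

    deck = list(range(52))   # could just as well hold 52 million entries
    top_card = deck[0]       # indexing cost does not grow with len(deck)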

Asymptotic Growth is Just One Piece of the Puzzle

Just because an algorithm has a higher order of growth doesn’t mean it will always run slower. An algorithm that is Θ(n^2) could run faster than an algorithm that is Θ(n). However, this can only be true up to a certain size of n. Once n gets large enough, by definition, the Θ(n) algorithm will always outperform the Θ(n^2) algorithm. For example, the Θ(n^2) algorithm could actually be f(n) = 2n^2 and the Θ(n) algorithm could actually be f(n) = 20n. In this scenario, as seen below, the two functions cross at n = 10, and the Θ(n) algorithm is faster for every n > 10.

[Figure: algorithm comparison of f(n) = 2n^2 and f(n) = 20n, crossing at n = 10]
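
If you’d rather see the crossover numerically than on a plot, a tiny illustrative script (the function names are assumptions of mine) tells the same story:

    # Hypothetical running-time functions from the example above.
    def quadratic_cost(n):
        return 2 * n * n   # the Θ(n^2) algorithm

    def linear_cost(n):
        return 20 * n      # the Θ(n) algorithm

    for n in [5, 10, 11, 20, 100]:
        faster = "linear" if linear_cost(n) < quadratic_cost(n) else "quadratic (or tie)"
        print(n, quadratic_cost(n), linear_cost(n), faster)
    # Below n = 10 the quadratic algorithm does less work, at n = 10 they tie,
    # and for every n > 10 the linear algorithm wins.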