This week’s post is on Big-O notation. To cover the topic and see it applied to various algorithms, we will study:

- What Big-O notation is and why web developers should know about it.

- What a quadratic algorithm is, an example using JavaScript code, and how efficiency is affected.

- Linear search vs. binary search.

- A Fibonacci sequence exercise comparing algorithms, to observe the efficiency of an algorithm with complexity O(2^{n}).

#### What is Big-O Notation?

Straight to the point, **Big-O notation** tells how fast a function grows or declines. The O is used because the rate of growth of a function is also called its **order**. Some examples of notations you may encounter, from best to worst performance:

- **O(1)** — constant
- **O(log n)** — logarithmic
- **O(n)** — linear
- **O(n log n)** — linearithmic
- **O(n^{2})** — quadratic
- **O(2^{n})** — exponential

Big-O notation is important in computer science (and hence in web development) because it lets you analyze how an algorithm’s runtime will be affected as the input grows. It won’t give you the exact amount of time a function will take to run, but by describing how quickly the runtime increases with the size of the input (“n”, which can be a number, the number of items in an array, the size of a map, etc.) it lets you decide which algorithm will scale better with bigger data sets.

#### Quadratic Equation

Let’s take a look at one of the complexities expressed with Big-O notation, O(n^{2}). In a **quadratic algorithm** the runtime grows proportionally to the square of the size of the input data set (n). A good example of where this notation applies is nested loops. In a **nested loop**, the outer loop executes first and the inner loop executes fully on each iteration of the outer loop. In the example below numSample equals 25 because the input of 5 is squared by the single level of nesting; with three nested loops, for example, the answer would be 125.
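The snippet below is a minimal sketch of that nested-loop example, assuming an input size of 5 and a counter named numSample (both names taken from the description above):

```javascript
// O(n^2): the inner loop runs n times for each of the outer loop's n iterations.
const n = 5;
let numSample = 0;

for (let i = 0; i < n; i++) {        // outer loop: n iterations
  for (let j = 0; j < n; j++) {      // inner loop: n iterations per outer pass
    numSample++;                     // executes n * n times in total
  }
}

console.log(numSample); // 25
```

Doubling n to 10 would make numSample 100, which is the quadratic growth being described.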

So what does this mean in terms of runtime? How is the efficiency of the code affected as the input set of the algorithm increases? In a quadratic algorithm the total runtime of the operation is the product of the outer loop’s iterations and the inner loop’s iterations. This means that time grows proportionally to the square of the number of inputs. As seen in the graph, a linear algorithm O(n) would have a faster runtime, since “n” inputs require only “n” iterations.

#### Binary Search vs Linear Search Algorithms

To look at another complexity, **O(log n)**, let’s take binary search. **Binary search** requires the data set to be sorted. It takes the middle element and compares it against the value you’re looking for; based on the comparison it decides whether to search the upper half or the lower half, and if the value isn’t found there it halves that portion again, and so on until the value is found. What makes this search so efficient is that doubling the size of the data adds only one extra iteration, since the first comparison cuts the doubled set back down to the original size. Below, a very good example that illustrates binary search using JavaScript (the code was taken from here; I only modified the comments a bit for emphasis on the discussion):
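A minimal iterative version of that idea might look like the following (the function and variable names here are illustrative, not from the original snippet):

```javascript
// O(log n): each comparison discards half of the remaining elements.
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // middle element
    if (sortedArr[mid] === target) {
      return mid;                             // found the target
    } else if (sortedArr[mid] < target) {
      low = mid + 1;                          // search the upper half
    } else {
      high = mid - 1;                         // search the lower half
    }
  }
  return -1; // target is not in the array
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
```

Note that the array must already be sorted; on unsorted data the halving logic no longer holds.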

In comparison, a **linear search** doesn’t require the data set to be sorted; it only requires equality comparisons and sequential access, and it runs in linear time, **O(n)**. A linear search for a scenario like the one described above, where you’re seeking a target in an array, would look something like this:
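A minimal sketch of such a linear search (the names are illustrative):

```javascript
// O(n): in the worst case, every element is checked once, in order.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // found the target at index i
    }
  }
  return -1; // target is not in the array
}

console.log(linearSearch([4, 2, 9, 7, 1], 7)); // 3
```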

This is less efficient because, in the worst case, it has to check every item in the array before finding the value. With this in mind it’s easy to see why binary search scales better than linear search.

#### Exploring the Fibonacci Sequence with Big-O Notation

In this section we will do an exercise with the Fibonacci sequence, but in case you’re like me and forgot what it is, let’s first review it. The **Fibonacci sequence** is a series of numbers where each number is the sum of the two preceding ones. Ex. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34.

Using a **recursive function** (a function that calls itself) the complexity is **O(2^{n})**, which means the runtime grows exponentially as the size of the input increases: it roughly doubles with each addition to the input. Going back to the graph we saw at the beginning, this type of algorithm suffers a significant reduction in efficiency the more data it has to handle.

If we wanted to produce a number in the sequence using a recursive function, it would look something like this:
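One common recursive sketch, assuming the convention fib(0) = 0 and fib(1) = 1:

```javascript
// O(2^n): each call spawns two more calls until it hits a base case.
function fib(n) {
  if (n < 2) return n;              // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);   // two recursive calls per step
}

console.log(fib(10)); // 55
```

The two self-calls per step are exactly where the O(2^{n}) growth comes from: the same subproblems are recomputed over and over.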

On the other hand, if we wanted to obtain the same result but without using a recursive function:
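A minimal iterative sketch that reaches the same result in O(n) time (the name fibIterative is illustrative):

```javascript
// O(n): a single loop that carries the last two values forward.
function fibIterative(n) {
  let prev = 0; // fib(0)
  let curr = 1; // fib(1)
  for (let i = 0; i < n; i++) {
    [prev, curr] = [curr, prev + curr]; // advance the pair one step
  }
  return prev; // after n steps, prev holds fib(n)
}

console.log(fibIterative(10)); // 55
```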

Even with such a small input you can verify that the non-recursive option is faster, which means the bigger the data set, the greater the difference in total runtime will be. You can use a site like jsPerf or JSBEN.CH to test it out, and doing so is good practice when comparing approaches.