### Big O notation, simply explained

Big **O** notation is a way of describing how much of a given resource an algorithm could potentially consume during execution.

So, Big **O** is really just an estimate of the maximum resource consumption an algorithm may incur while processing a dataset with a variable number **n** of items.

*For instance, a resource could be (but is not limited to):*

- **Time** — How long it takes to process a dataset.
- **Memory** — How much memory could be consumed on a machine during processing of a dataset.
- **HDD Space** — How much space could be consumed on a machine during processing of a dataset.

### The formula O(n)

The **O** represents a function and stands for *“Order”* of complexity, a.k.a. the rate of growth of a function. The **O** function describes how quickly the function grows or declines in its usage of the measured resource.

The **n** represents the number of items that must be processed. However, **n** may also appear inside an expression that describes the processing complexity.

```
orderOfComplexity = O(1)       # Constant complexity.
orderOfComplexity = O(log n)   # Divide and conquer / bracketing.
orderOfComplexity = O(n)       # Linear complexity.
orderOfComplexity = O(n log n) # A hybrid.
orderOfComplexity = O(n^2)     # Brute force.
```

### O(1) — Constant complexity

That is to say, the resource consumption is unchanged no matter the value of **n**. An example of constant complexity is a **first()** function.

```
function first( array_values ) {
  return array_values[ 0 ]
}
```

No matter how many elements are in the array, the function’s processing time remains the same, as there is no variance in how many computations are required to execute the code.

### O(n) — Linear complexity

That is to say, the resource consumption is directly proportional to the value of **n**. This could be demonstrated in an implementation of the following function: **each(array_values, action) => void**.

```
function each( array_values, action ) {
  for( let i = 0; i < array_values.length; i++ )
    action( array_values[ i ] )
}
```

The time needed to execute this function is directly related to the number of items in the passed array.

For example, if our algorithm takes 10ms to process an array with 10 elements, we can estimate that an array with 100 elements would take 100ms to process. As we can see, the time resource grows linearly with the number of items to process.

### O(log n) — Divide and conquer

An algorithm that fits into this category will be able to split its work into divisions and then perform an action on a subset at each step. Speed improvements over an **O(n)** linear approach are associated with this sort of algorithm (when applicable).

First we must understand what the **log(n)** function actually does.

The **log** function gives us the exponent of the **base** that is required to get to **n**.

When the base value is not provided (as is the case here), base 2 is assumed in CS (mathematics assumes base 10).

Let’s assume we have an array with 8 elements … **O(log2(8))**

Division 1: **8 / 2 = 4**

Division 2: **4 / 2 = 2**

Division 3: **2 / 2 = 1**

**log2(8) = 3** … so **2³ = 8**

These types of functions will split the data, then perform a check against one of the two halves, and recurse if needed using the remaining half of the data.
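Binary search on a sorted array is a classic example of this pattern. The sketch below is illustrative (the function name and arguments are assumptions, not from the original text):

```
// A minimal binary search sketch: each comparison discards half of the
// remaining range, so at most ~log2(n) comparisons are needed.
function binarySearch( sorted_values, target ) {
  let low = 0
  let high = sorted_values.length - 1
  while( low <= high ) {
    const mid = Math.floor( ( low + high ) / 2 )
    if( sorted_values[ mid ] === target ) return mid
    if( sorted_values[ mid ] < target )
      low = mid + 1   // discard the lower half
    else
      high = mid - 1  // discard the upper half
  }
  return -1 // target is not present
}
```

For an 8-element array this loop runs at most 3 times, matching the division steps above.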

### O(n log n) — A hybrid

As in the **O(log2(n))** example, we divide and conquer, splitting the array with each step of the recursion.

As in the **O(n)** example, we loop over each item passed in the array.

The difference is that the algorithm only re-processes each element a relatively small number of times, ~**log2(n)**, instead of re-processing every element with each step of the recursion, as we see with **O(n²)**.
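Merge sort is a common example of this hybrid. A minimal sketch, assuming a plain array of numbers (the function name is illustrative, not from the original text):

```
// A minimal merge sort sketch: the array is split ~log2(n) times, and each
// level of recursion touches all n elements while merging — O(n log n).
function mergeSort( array_values ) {
  if( array_values.length <= 1 ) return array_values

  const mid = Math.floor( array_values.length / 2 )
  const left = mergeSort( array_values.slice( 0, mid ) )
  const right = mergeSort( array_values.slice( mid ) )

  // Merge the two sorted halves back into one sorted array.
  const merged = []
  let i = 0, j = 0
  while( i < left.length && j < right.length )
    merged.push( left[ i ] <= right[ j ] ? left[ i++ ] : right[ j++ ] )
  return merged.concat( left.slice( i ), right.slice( j ) )
}
```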

### O(n²) — Brute force

This type of algorithm is most commonly found in processing that contains deeply nested data. It is akin to a brute-force processing of the dataset and is generally regarded as slow.

Like an **O(n)** algorithm, the number of items passed into the function has a direct correlation to the time it will take to process. However, unlike **O(n)**, each iteration of the function is itself an **O(n)** operation, and that operation must be run **n** times, giving **n × n = n²** operations in total.
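A minimal sketch of the nested-loop pattern, using a hypothetical duplicate check (the function is an illustration, not from the original text):

```
// A minimal O(n²) sketch: for every element, the inner loop scans the
// whole array again, so n items require up to n × n comparisons.
function containsDuplicate( array_values ) {
  for( let i = 0; i < array_values.length; i++ )
    for( let j = 0; j < array_values.length; j++ )
      if( i !== j && array_values[ i ] === array_values[ j ] )
        return true
  return false
}
```

Doubling the input size roughly quadruples the work, which is why this shape of code becomes slow quickly as **n** grows.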
