If you want to perform a binary search on a set of numbers, you will first need to sort them. There are many different ways that you can sort a set of numbers (which I will cover in future posts), but today I will teach you a method for describing the efficiency of a sorting algorithm: Big O notation. At its most basic level, Big O notation is written as O(some function of n), where that function of n describes how the time it takes to run the algorithm grows with the amount of input. In this article, I will briefly explain each of the most commonly used Big O groups.
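To show why the sorted list matters, here is a minimal sketch of binary search in Python. The function name binary_search and the sample numbers are my own illustration, not something from the original post; the point is that each comparison halves the remaining search range, which only works if the list is already in order.

```python
def binary_search(sorted_nums, target):
    """Return the index of target in sorted_nums, or -1 if it is absent.

    Each comparison halves the search range, so the search takes
    roughly log n steps, but only because the list is sorted.
    """
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        elif sorted_nums[mid] < target:
            lo = mid + 1   # target must be in the upper half
        else:
            hi = mid - 1   # target must be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```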
The first Big O group that I will teach you is O(n^2). In this version, the time it takes for the sort to run grows with the square of the number of items that you wish to sort. These types of algorithms usually use a nested for loop that compares each element to every other element and uses those comparisons to sort the list, as in the sketch below.
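The post does not name a specific O(n^2) algorithm, so here is one common example of the nested-loop pattern, selection sort, as a rough sketch:

```python
def selection_sort(nums):
    """Sort nums in place in O(n^2) time.

    The outer loop runs n times, and the inner loop compares the current
    position against every remaining element, giving roughly n * n steps.
    """
    n = len(nums)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):       # compare against every later element
            if nums[j] < nums[smallest]:
                smallest = j
        nums[i], nums[smallest] = nums[smallest], nums[i]
    return nums

print(selection_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```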
The next Big O group that I will teach you is O(n). In this version, the time it takes for the sort to run grows in direct proportion to the number of items that you wish to sort. These types of algorithms usually use a single for loop that passes over each element and sorts based on what happens in that loop.
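One sort that fits this single-pass description is counting sort, shown in a minimal sketch below. Note the assumption, which the post does not state, that the values are small non-negative integers; the running time is really O(n + k), where k is the size of the value range, so it is only roughly linear when k is small.

```python
def counting_sort(nums, max_value):
    """Sort small non-negative integers in roughly O(n) time.

    One pass tallies how many times each value appears, and a second pass
    writes the values back out in order, so the work grows linearly with
    the number of items (plus the size of the value range).
    """
    counts = [0] * (max_value + 1)
    for value in nums:                  # single pass over the input
        counts[value] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # emit each value as many times as it appeared
    return result

print(counting_sort([3, 1, 4, 1, 5, 0, 2], max_value=5))  # prints [0, 1, 1, 2, 3, 4, 5]
```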
The last Big O group that I will teach you is O(n log n). In this version, the time the sort takes grows only slightly faster than the number of items, because each element adds only about log n extra work on top of a linear pass. These types of algorithms are the most efficient of the three and usually call another copy of themselves on subsections of the list through a process called recursion.
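The post does not single out one algorithm here either, so as an illustration of the recursive pattern, here is a sketch of merge sort, a classic O(n log n) sort:

```python
def merge_sort(nums):
    """Sort nums in O(n log n) time by recursively sorting halves.

    The list is split in half about log n times, and each level of
    splitting does O(n) work merging the sorted halves back together.
    """
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])    # recursive call on the first half
    right = merge_sort(nums[mid:])   # recursive call on the second half

    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([8, 3, 5, 1, 9, 2]))  # prints [1, 2, 3, 5, 8, 9]
```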
I have now shown you the three most common Big O groups; there are many more, but I will not discuss them all here. If you have any questions, just post them in the comments, and as always, I hope you have learned a thing or two about Big O notation.