Interview Kickstart has enabled over 21,000 engineers to uplevel.
Sorting algorithm questions are a big part of coding interviews at tech companies. In this article, we’ve covered all the key information you need to know for your interview preparation, including:
Sorting algorithms are algorithms that rearrange the elements of a list in a specific order.
The order of the sorted list is decided by how elements are compared. The two most common orders of sorting a list are lexicographical order (or dictionary order) and numerical order (non-increasing and non-decreasing order). Have a look at some examples:
Input list: 7 5 2 8 10 2
Order: Non-Decreasing Order
Sorted list: 2 2 5 7 8 10
Input list: e v e r y t h i n g
Order: Lexicographic Order
Sorted list: e e g h i n r t v y
Input list: 1 4 5 2 3 5
Order: Non-Increasing Order
Sorted list: 5 5 4 3 2 1
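For instance, in Python these orders can be produced with the built-in sorted function (a quick illustration, not part of the original examples):

# Non-decreasing order
print(sorted([7, 5, 2, 8, 10, 2]))               # [2, 2, 5, 7, 8, 10]

# Non-increasing order
print(sorted([1, 4, 5, 2, 3, 5], reverse=True))  # [5, 5, 4, 3, 2, 1]

# Lexicographic order
print(sorted(list("everything")))                # ['e', 'e', 'g', 'h', 'i', 'n', 'r', 't', 'v', 'y']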
Sorting algorithms are the backbone of many programming solutions. They can be used to simplify searching (binary search needs sorted data), detect duplicates, and organize data for further processing.
The efficiency of an algorithm is mainly determined by two factors — space and time complexity. We always want to reduce the time complexity as well as space complexity to make the algorithm more efficient.
However, it is not always possible to achieve this. Generally, there is a trade-off between time and space complexity: when we try to reduce one, the other tends to increase. For example, let’s analyze what happens if we try to make merge sort an in-place sort:
The merge sort algorithm takes O(n log n) time and O(n) space to sort a list of size n. If we force the merge step to work in place with only O(1) auxiliary space, merging becomes far more expensive, and the straightforward in-place variant degrades towards O(n^2) time.
This shows the trade-off between time and space complexity.
Following are the common types of sorting algorithms:
Simple sorting algorithms are called so because they are easy to understand and implement. These algorithms are seldom used to sort large lists because of their quadratic time complexity. However, when sorting a small list, the runtime doesn’t matter much, and we often use them because they are quick to write.
Bubble sort is one of the simplest sorting algorithms. It repeatedly swaps the adjacent elements if they are in the wrong order until the list is sorted. In every pass, we traverse the whole list and swap adjacent elements in the wrong order.
Bubble sort is named so because, in every iteration, elements larger than their neighbors “bubble” towards the end of the list.
To understand how bubble sort works, check out our article “Bubble Sort Algorithm.”
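As a quick illustration, here is a minimal bubble sort sketch in Python (the linked article’s implementation may differ in its details):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After each pass, the largest unsorted element has "bubbled" to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # no swaps means the list is already sorted
            break
    return arr

print(bubble_sort([7, 5, 2, 8, 10, 2]))  # [2, 2, 5, 7, 8, 10]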
Selection sort is a simple sorting algorithm. The algorithm divides the list into two sublists: sorted and unsorted. Initially, the sorted list is empty, and the unsorted list is the whole input list.
At each step, we find the minimum element in the unsorted list and swap it with the leftmost element of the unsorted list. This process increases the sorted sublist’s size by 1 and decreases the unsorted list’s size by 1, moving the lists’ border by 1.
Read our article on Selection Sort to get an in-depth understanding of the algorithm.
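A minimal selection sort sketch in Python, for illustration:

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # arr[:i] is the sorted sublist; find the minimum of the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it with the leftmost element of the unsorted sublist.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr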
Insertion sort is a simple sorting algorithm. The algorithm divides the list into two sublists: sorted and unsorted. Initially, the sorted list is empty, and the unsorted list is the whole input list. At each step, we pick the leftmost element in the unsorted list and place it at the correct position in the sorted list.
For examples, pseudocode, and code, check out “Insertion Sort Algorithm.”
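A minimal insertion sort sketch in Python, for illustration:

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]          # leftmost element of the unsorted sublist
        j = i - 1
        # Shift larger elements of the sorted sublist one position to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # insert the element at its correct position
    return arr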
Efficient sorting algorithms are called so because they achieve O(n log n) time, the best asymptotic time complexity any comparison-based sorting algorithm can have. Considering only the time complexity, they can be used to sort lists of all sizes, but in practice, they are mainly used to sort large lists.
Merge sort is one of the most efficient and popular sorting algorithms. Merge sort uses the concept of divide and conquer. The algorithm divides the unsorted list into n sublists, each of size one, which can be considered sorted trivially.
The algorithm then repeatedly merges the sorted sublists to produce new sorted sublists until there is only one sorted sublist of size n remaining. This final sorted sublist of size n is the sorted list.
To learn more, read our article on the “Merge Sort Algorithm.”
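Here is a minimal top-down merge sort sketch in Python (the linked article’s implementation may differ):

def merge_sort(arr):
    if len(arr) <= 1:                 # a list of size one is trivially sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # divide and sort the halves
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged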
Heap sort is an efficient comparison-based sorting algorithm. It is very similar to the selection sort algorithm. Like selection sort, heap sort divides the list into a sorted and unsorted region. At each step, the algorithm extracts the maximum element from the unsorted region and places it at the beginning of the sorted region.
Unlike selection sort, heap sort does not find the maximum element by traversing the whole unsorted region; instead, it maintains the unsorted region in a heap data structure and extracts the maximum element in logarithmic time.
The heap data structure satisfies the heap-order property, i.e., the value at each node is greater than or equal to the values of its children. After extracting the maximum element, the remaining elements may no longer satisfy this property, so we need to rearrange them to restore it. This rearranging process is called heapify.
Read our article on Heap Sort to understand the algorithm in detail.
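A minimal heap sort sketch in Python, using a max-heap stored in the array itself (for illustration only):

def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max-heap out of the unsorted list.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Repeatedly move the maximum to the front of the sorted region at the end of the array.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)   # restore the heap-order property in the unsorted region
    return arr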
Quicksort is an efficient comparison-based in-place sorting algorithm. It is also called partition-exchange sort. Like merge sort, it is a divide-and-conquer algorithm.
The algorithm chooses a pivot element and partitions the list into two halves. The left half contains all the elements smaller than the pivot element, and the right half has all the elements greater than the pivot element. The algorithm then sorts both halves recursively.
There are various versions of the quicksort algorithm, depending upon what we choose as the pivot element.
For a detailed review of quicksort, check out the “Quicksort Algorithm” article.
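A minimal in-place quicksort sketch in Python, using the last element as the pivot (Lomuto partition scheme); other pivot choices work the same way:

def quicksort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        pivot = arr[high]                   # choose the last element as the pivot
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:             # move smaller elements to the left half
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1                           # the pivot is now at its final position
        quicksort(arr, low, p - 1)          # recursively sort the left half
        quicksort(arr, p + 1, high)         # recursively sort the right half
    return arr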
Non-comparison sorting algorithms are sorting algorithms that don’t compare elements to sort the list. Instead, they use other techniques, such as counting the number of elements with each distinct key value or exploiting the internal structure of the elements.
Counting sort is a non-comparison-based sorting algorithm. It counts the number of occurrences of each unique element of the list in an auxiliary array, computes the prefix sums of that array, and then traverses the input array, placing each element at its correct position in the output array.
Read our article on Counting Sort — it covers everything from examples to code.
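A minimal counting sort sketch in Python, assuming non-negative integer keys:

def counting_sort(arr):
    if not arr:
        return arr
    k = max(arr)
    count = [0] * (k + 1)
    for x in arr:                  # count occurrences of each value
        count[x] += 1
    for i in range(1, k + 1):      # prefix sums give each value's final position
        count[i] += count[i - 1]
    output = [0] * len(arr)
    for x in reversed(arr):        # traverse in reverse so the sort stays stable
        count[x] -= 1
        output[count[x]] = x
    return output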
Radix sort is a non-comparison-based sorting algorithm. It sorts the list digit by digit, processing digits either from the least significant to the most significant (LSD to MSD) or from the most significant to the least significant (MSD to LSD). It uses counting sort to sort the list by the digit at a given place value.
We have a detailed article on Radix Sort if you wish to deep dive into it.
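A minimal LSD radix sort sketch in Python for non-negative integers, using a stable counting sort on each digit:

def radix_sort(arr):
    if not arr:
        return arr
    exp = 1
    max_val = max(arr)
    while max_val // exp > 0:
        # Stable counting sort by the digit at place value exp.
        count = [0] * 10
        for x in arr:
            count[(x // exp) % 10] += 1
        for i in range(1, 10):
            count[i] += count[i - 1]
        output = [0] * len(arr)
        for x in reversed(arr):
            d = (x // exp) % 10
            count[d] -= 1
            output[count[d]] = x
        arr = output
        exp *= 10
    return arr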
Bucket sort is a non-comparison-based sorting algorithm. It is also known as bin sort. It divides the list into several groups called buckets by appropriately mapping each element to a bucket.
After that, it sorts each bucket using any appropriate sorting algorithm (or by calling itself recursively) and then concatenates the buckets to get the sorted list. The mapping function depends on the characteristics of the input. One common mapping function is f(x) = floor(x * n / M), where M is the maximum element of the input list and n is the total number of buckets.
Have a look at our Bucket Sort article for examples, pseudocode, and code.
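A minimal bucket sort sketch in Python for non-negative integers, using the mapping described above (with a small guard added so the maximum element doesn’t fall outside the last bucket):

def bucket_sort(arr, n_buckets=10):
    if not arr:
        return arr
    m = max(arr)
    buckets = [[] for _ in range(n_buckets)]
    for x in arr:
        # f(x) = floor(x * n / M); min() keeps the index in range when x == M.
        idx = min(n_buckets - 1, x * n_buckets // m) if m else 0
        buckets[idx].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))   # sort each bucket with any suitable algorithm
    return result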
As mentioned before, the time complexity of an algorithm describes the amount of computer time it takes to run. It is estimated by counting the number of elementary operations the algorithm performs, expressed in terms of the characteristics of the input, assuming each elementary operation takes a fixed amount of time.
Time complexities of the discussed sorting algorithms:
Bubble sort: O(n^2)
Selection sort: O(n^2)
Insertion sort: O(n^2)
Merge sort: O(n log n)
Heap sort: O(n log n)
Quicksort: O(n log n) on average, O(n^2) in the worst case
Counting sort: O(n + k), where k is the range of the input values
Radix sort: O(d * (n + k)), where d is the number of digits
Bucket sort: O(n + k) on average, where k is the number of buckets
The space complexity of an algorithm is the amount of space required by the algorithm to complete its task. It is described in terms of the characteristics of the input: the size of the input, the range of the input, etc.
An in-place algorithm processes the input and produces the output in the same memory location that contains the input without using any auxiliary space.
Space complexities of the discussed sorting algorithms:
Bubble sort: O(1)
Selection sort: O(1)
Insertion sort: O(1)
Merge sort: O(n)
Heap sort: O(1)
Quicksort: O(log n) on average for the recursion stack
Counting sort: O(n + k)
Radix sort: O(n + k)
Bucket sort: O(n + k)
For more details on time and space complexity of each of these algorithms, read:
“Time and Space Complexities of Sorting Algorithms Explained”
A sorting algorithm is stable if the relative order of any two equal elements in the original list and the sorted list stays the same.
Input:
son sugar dog duck
If we sort the input by just the first letter:
A stable sort would always produce:
dog duck son sugar
An unstable sort might produce:
duck dog sugar son
The first letter of “dog” and “duck” is the same. However, dog appears before duck in the input. In the output produced by a stable sort, dog appears before duck. The same goes with son and sugar.
Here are the different algorithms and their stability:
While solving a problem, it’s crucial to know what is important in the problem.
To choose the best sorting algorithm for a particular problem, we should consider factors such as the size of the input, the range and distribution of the values, how much extra memory is available, whether stability is required, and whether the input is already partially sorted.
Here's a cheat sheet for quick reference:
Question 1: Can any unstable sorting algorithm be made stable?
Answer: Yes, any sorting algorithm can be made stable. There may be specific ways to make a particular sorting algorithm stable, but here is a general strategy that can be applied to any of them: treat every element as a pair of its value and its position in the input. We first compare elements by their value, and if the values are equal, we compare them by their position. Since every element’s position is unique, no two pairs can be identical.
Example:
Input: son sugar dog duck
Modified input: (son, 1) (sugar, 2) (dog, 3) (duck, 4)
Now sort the words by their first letter.
Any sorting algorithm (stable or unstable) would produce:
(dog, 3) (duck, 4) (son, 1) (sugar, 2)
Now remove the position from the pair
Output: dog duck son sugar
(dog, 3) and (duck, 4): The first character of both words is the same, but 3 is less than 4, so (dog, 3) will appear before (duck, 4), maintaining their relative position, thus making the algorithm stable. The same goes for (son, 1) and (sugar, 2).
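A minimal Python sketch of this idea, sorting the words by their first letter (Python’s built-in sort happens to be stable already, but the pairing trick works with any sorting algorithm):

words = ["son", "sugar", "dog", "duck"]
# Pair each word with its original position.
pairs = [(w, i) for i, w in enumerate(words, start=1)]
# Compare by first letter, then by position; no two pairs ever compare as equal,
# so even an unstable sort would produce the same, stable result.
pairs.sort(key=lambda p: (p[0][0], p[1]))
print([w for w, _ in pairs])   # ['dog', 'duck', 'son', 'sugar']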
Question 2: What is the difference between internal and external sorting?
Answer: If all the data to be sorted is stored in memory throughout the sorting process, the sorting is called internal sorting. In external sorting, data is stored outside the memory (for example, on disk) and is loaded into memory in small chunks. External sorting is generally used when the dataset is very large and can’t fit into memory.
Commonly, the merge sort algorithm is used for external sorting because it splits the work into sorting many smaller chunks that fit into memory and doesn’t require random access to the dataset.
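A rough sketch of external merge sorting in Python, assuming a hypothetical input file with one integer per line (the chunk size and file handling are illustrative only):

import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    # Phase 1: sort chunks that fit in memory and write each to a temporary file.
    chunk_paths = []
    with open(input_path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".chunk")
            tmp.write("\n".join(map(str, chunk)) + "\n")
            tmp.close()
            chunk_paths.append(tmp.name)
    # Phase 2: stream a k-way merge of the sorted chunk files to the output.
    files = [open(p) for p in chunk_paths]
    with open(output_path, "w") as out:
        for value in heapq.merge(*[(int(line) for line in fh) for fh in files]):
            out.write(f"{value}\n")
    for fh in files:
        fh.close()
    for p in chunk_paths:
        os.remove(p)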
If you’re looking for guidance and help with getting your prep started, sign up for our free webinar. As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google, Facebook, Apple, Netflix, Amazon, and more!
----------
Article contributed by Taara Sinh Aatrey