Radix Sort

  • Reviewing what I studied; how this works will be explained as well.

Radix Sort is similar to Counting Sort, but it handles a wider range of inputs. Consider the array vector<int> arr = { 170, 45, 75, 90, 802, 24, 2, 66 };. If we were to sort this array using comparison-based sorts like Merge Sort or Quick Sort, we would achieve a time complexity of O(N log N). However, as we know from Counting Sort, we can achieve linear time complexity under certain conditions.

How it works

Radix Sort leverages a similar approach to achieve efficient sorting. The key idea is to sort numbers digit by digit, starting from the least significant digit (ones place) to the most significant digit. To do this, we need a placeholder for each digit (0 through 9). We iterate through each digit position (ones, tens, hundreds, etc.) and sort the numbers based on that digit.

Let’s implement Radix Sort similarly to Counting Sort. The crucial part is indexing each digit. The expression count[a / exp % 10] returns the index for each digit. We then increment this count. After counting, we accumulate these counts to determine the positions where each number should be placed, just like in Counting Sort.

Finally, we iterate through the array from right to left (last index to first). For each number, count[temp[i] / exp % 10] - 1 gives us the index where the number should be placed in the sorted array. We subtract 1 because array indices start at 0.

void CountingSort(vector<int>& arr, int k, int exp)
{
	vector<int> temp = arr; // copy
	
	vector<int> count(k + 1, 0);
	for (auto a : arr)
		count[a / exp % 10] += 1;

	for (int i = 1; i < count.size(); i++)
		count[i] += count[i - 1];

	Print(count);

	for (int i = arr.size() - 1; i >= 0; i--)
	{
		arr[count[temp[i] / exp % 10] - 1] = temp[i];
		count[temp[i] / exp % 10]--;
	}
}

void RadixSort(vector<int>& arr)
{
	int k = 9; // from 0 to 9
	int m = *max_element(arr.begin(), arr.end());

	for (int exp = 1; m / exp > 0; exp *= 10)
	{
		CountingSort(arr, k, exp);
		Print(arr);
	}
}

vector<int> arr = { 170, 45, 75, 90, 802, 24, 2, 66 };
RadixSort(arr);

LeetCode - Roman to Integer [Easy] - 13

How to Solve

Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M.

Symbol       Value
I             1
V             5
X             10
L             50
C             100
D             500
M             1000
For example, 2 is written as II in Roman numeral, just two ones added together. 12 is written as XII, which is simply X + II. The number 27 is written as XXVII, which is XX + V + II.

Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five, we subtract it, making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used:

I can be placed before V (5) and X (10) to make 4 and 9. 
X can be placed before L (50) and C (100) to make 40 and 90. 
C can be placed before D (500) and M (1000) to make 400 and 900.
Given a roman numeral, convert it to an integer.

Now what we want is basically to convert a Roman numeral to an integer. How can we solve it? If we store all those unique values in a dictionary, we can look up each symbol's integer and sum them up. The special case is the two-letter combinations, for example IV, which is 4 (V - I). With this logic, we can solve it in two ways.

If we store all those unique values in a dictionary, we handle the two-letter combinations first, then the single letters. The time complexity is O(N), where N is the length of the string.

class Solution {
public:
    int romanToInt(string s) {
        std::map<string, int> roman = {
            {"I", 1}, {"V", 5}, {"X", 10}, {"L", 50}, {"C", 100}, {"D", 500}, {"M", 1000}, 
            {"IV", 4}, {"IX", 9}, {"XL", 40}, {"XC", 90}, {"CD", 400}, {"CM", 900}};
        
        if (s.empty() || s.length() > 16)
            return 0;

        int i = 0;
        int sum = 0;
        while (i < s.length())
        {
            // Two String Handle
            if (i < s.length() - 1){
                string doubleStr = s.substr(i, 2);
                
                if (roman.count(doubleStr)){
                    sum += roman[doubleStr];
                    i+=2;
                    continue;
                }
            }

            // OneString
            string singleStr = s.substr(i, 1);
            sum += roman[singleStr];
            i+=1;
        }

        return sum;
    }
};

Okay, let’s think further. For example, if we have DXCI, then we can calculate D + (C - X) + I, or CMXCIV = (M - C) + (C - X) + (V - I), which is very useful for the logic. This also lets us drop all those special two-letter cases.

class Solution {
public:
    int romanToInt(string s) {
        std::map<char, int> roman = {
            {'I', 1}, {'V', 5}, {'X', 10}, {'L', 50}, {'C', 100}, {'D', 500}, {'M', 1000}};
        
        if (s.empty() || s.length() > 16)
            return 0;

        int sum = 0;
        for (int i = 0; i < s.length() - 1; i++) {
            if(roman[s[i]] < roman[s[i+1]])
                sum -= roman[s[i]];
            else
                sum += roman[s[i]];
        }

        return sum + roman[s[s.length()-1]];
    }
};

If we have MCMXCIV, we can see that it equals M + (M - C) + (C - X) + (V - I). The rule here is: if the value at [i+1] is greater than the value at [i], we subtract [i]; otherwise, we add it. From that rule, we get the code above.

Bucket Sort


Bucket Sort is a sorting algorithm similar in concept to Radix Sort and Counting Sort, but it is typically used for floating-point numbers. Like those algorithms, Bucket Sort uses placeholders to sort elements efficiently. However, unlike Radix Sort, which sorts integers digit by digit, Bucket Sort distributes floating-point numbers into buckets and then sorts each bucket individually.

Time Complexity

The time complexity of Bucket Sort can vary depending on the distribution of the input data. In the best case, where the numbers are uniformly distributed, the time complexity can be O(N + K), where N is the number of elements and K is the number of buckets. However, if the numbers are not well-distributed and most elements end up in a single bucket, the time complexity can degrade to O(N^2) due to the sorting algorithm used within each bucket.

Steps to Implement Bucket Sort

  1. Divide Data into Buckets: If there are N data points, divide them into K buckets. Ideally, K should be close to N for optimal performance.
  2. Distribute Elements into Buckets: Place each element into its corresponding bucket based on its value.
  3. Sort Each Bucket: Use a sorting algorithm like Insertion Sort to sort the elements within each bucket.
  4. Merge Sorted Buckets: Combine the sorted elements from all buckets to produce the final sorted array.

Implementation

Here’s an example implementation of Bucket Sort for floating-point numbers. We’ll consider a simple case with 10 buckets and the input array { 0.78f, 0.17f, 0.39f, 0.26f, 0.72f, 0.94f, 0.21f, 0.12f, 0.23f, 0.67f }.

// Insertion Sort
void InsertionSort(vector<float>& bucket)
{
	for (int i = 1; i < bucket.size(); ++i) {
		float key = bucket[i];
		int j = i - 1;
		while (j >= 0 && bucket[j] > key) {
			bucket[j + 1] = bucket[j];
			j--;
		}
		bucket[j + 1] = key;
	}
}

void BucketSort(vector<float>& arr, int num_buckets)
{
	vector<vector<float>> buckets(num_buckets);

	// distribute each element into its bucket (values assumed to lie in [0, 1))
	for(auto& a : arr)
	{
		int index = int(a * num_buckets);
		buckets[index].push_back(a);
	}

	for (int i = 0; i < buckets.size(); i++) {
		InsertionSort(buckets[i]);
    }

    // update arr from sorted bucket
	int index = 0;
	for (int i = 0; i < buckets.size(); i++)
	{
		for (int j = 0; j < buckets[i].size(); j++)
		{
			arr[index++] = buckets[i][j];
		}
	}
}

LeetCode - Two Sum [Easy] - 1

The brute-force way is to traverse the array with two indices, i and j, and if nums[i] + nums[j] == target, return the two indices.

class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target) {
        int n = nums.size();
        for(int i = 0; i < n-1; i++){
            for(int j = i+1; j < n; j++){
                if(nums[i] + nums[j] == target){
                    return {i, j};
                }
            }
        }

        return {};
    }
};

The time complexity here is O(N²) because of the two nested loops over the vector of size N; the space complexity is O(1) because no extra storage grows with the input. We can improve this to O(N) time, at the cost of O(N) space, by using a hash map.

class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target) {
        unordered_map<int, int> hash;
        int n = nums.size();
        for(int i = 0; i < n; i++){
            int x = target - nums[i];
            if(hash.find(x) != hash.end()){
                return {hash[x], i}; // order doesn't matter
            }
            hash[nums[i]] = i;
        }
        return {};
    }
};

The key line is int x = target - nums[i];: for each number we compute its complement and check whether that complement has already been seen. If it is in the dictionary, we return both indices. If not, we add the current number and its index to the dictionary and move on.

Counting Sort


Counting Sort is a sorting algorithm that works by counting the number of objects that have each distinct key value, and using arithmetic to determine the positions of each key value in the output sequence. It’s like creating a histogram of the data by counting the number of occurrences of each key value.

How it works

  1. Create a count array to store the count of each unique number (its size is the highest value + 1).
  2. Count the occurrences of each element in the input array and store them in the count array.
  3. Add the previous count to the current count to get each element’s position in the output array.
  4. Create the output array to store the sorted elements.
  5. Iterating over the input array in reverse order, place each element at index output[count[input[i]] - 1], then update that element’s count in the count array (by subtracting 1).

void Print(vector<int>& arr)
{
	for (auto a : arr)
		if (a == -1)
			cout << "X ";
		else
			cout << a << " ";
	cout << endl;
}

vector<int> CountingSort(const vector<int>& arr, int k)
{
	vector<int> count(k + 1, 0); 

	for (int i = 0; i < arr.size(); i++) {
		count[arr[i]] += 1;
	}

	// update the count 
	for (int i = 1; i < k + 1; i++) {
		count[i] += count[i - 1];
	}

	cout << "Count: ";
	Print(count);

	vector<int> output(arr.size(), -1);

    // reverse order
	for (int i = output.size() - 1; i >= 0; i--)
	{
	
		output[count[arr[i]] - 1] = arr[i];
		count[arr[i]] --;
		cout << "Count: ";
		Print(count);
	
		cout << "Output: ";
		Print(output);
	}

	return output;
}

Input is { 2, 5, 3, 0, 2, 3, 0, 3 } and the output would be { 0, 0, 2, 2, 3, 3, 3, 5 }.

Time Complexity

The time complexity of Counting Sort is O(n + k) where n is the number of elements in the input array and k is the range of the input.

Space Complexity

The space complexity of Counting Sort is O(n + k) where n is the number of elements in the input array and k is the range of the input.

Stability

Counting Sort is a stable sorting algorithm, unlike Quick Sort.

Partition & Quick Sort


Quick Sort is an efficient, comparison-based sorting algorithm that employs the divide and conquer strategy. Unlike Merge Sort which divides arrays into equal halves, Quick Sort creates unbalanced partitions based on a pivot element. This characteristic gives Quick Sort its name - it’s particularly “quick” for many real-world scenarios.

The Divide and Conquer Strategy in Quick Sort

  1. Divide: Select a pivot element and partition the array into two sub-arrays - elements less than the pivot and elements greater than the pivot.
  2. Conquer: Recursively sort the sub-arrays.
  3. Combine: Since the partitions are already sorted in place, no explicit combination step is needed. Let’s actually see how this works.

Partitioning

Partitioning is a crucial step in the Quick Sort algorithm where elements are rearranged around a pivot value. There are two main partitioning schemes: Lomuto and Hoare.

Lomuto Partition Scheme

The Lomuto partition scheme typically selects the last element as the pivot. It uses a one-directional scanning technique that maintains three distinct regions: elements less than or equal to the pivot, elements greater than the pivot, and the unknown (not yet examined) section.

pair<int, int> lomutoPartition(int data[], int lo, int hi) {
    int pivot = data[hi]; // to use a middle pivot instead, swap it to data[hi] first

    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (data[j] <= pivot)
            swap(data[++i], data[j]);
    }

    swap(data[++i], data[hi]); // place the pivot at its final index i
    return make_pair(i - 1, i + 1);
}

Hoare Partition Scheme

The Hoare partition scheme, developed by Sir Charles Antony Richard Hoare (the creator of Quick Sort), typically selects the first element as the pivot (the version below uses the middle element). It uses a two-directional scanning technique with two pointers moving in opposite directions.

pair<int, int> hoarePartition(int data[], int lo, int hi) {
    int pivot = data[(lo + hi) / 2];
    int i = lo - 1;
    int j = hi + 1;

    while(true){
        do i++; while(data[i] < pivot);
        do j--; while(data[j] > pivot);

        if (i < j)
            swap(data[i], data[j]);
        else 
            break;
    }

    return make_pair(j, j+1);
}

version 2:

pair<int, int> hoarePartition2(int data[], int lo, int hi) {
    int pivot = data[(lo + hi) / 2];

    int i = lo;
    int j = hi;
    while( i <= j ){
        while(data[i] < pivot)
            i++;
        while(data[j] > pivot)
            j--;

        if (i < j)
            swap(data[i++], data[j--]);
        else
            i++;
    }
    return make_pair(j, j+1);
}

version 3: when the pivot is the first element

pair<int, int> hoarePartition3(int data[], int lo, int hi) {
    int pivot = data[lo];

    int i = lo;
    int j = hi + 1;
    while(true) {
        do i++; while( i <= hi && data[i] < pivot);
        do j--; while( j > lo && data[j] > pivot);

        if(i < j)
            swap(data[i], data[j]);
        else
            break;
    }
    swap(data[lo], data[j]); // place the pivot at index j
    return make_pair(j - 1, j + 1);
}

3-way Partitioning

This covers a specific case: arrays that contain many duplicate (repeated) values.

For that case we use 3-way partitioning. There are two common 3-way partitioning schemes: a Hoare-style partition and Bentley-McIlroy partitioning.

pair<int, int> partition3wayHoare(int data[], int lo, int hi) {
    int pivot = data[(lo + hi)/2];

    int lt = lo;
    int gt = hi;
    for (int i = lo; i <= gt;) {
        if(data[i] < pivot)
            swap(data[lt++], data[i++]);
        else if (data[i] > pivot)
            swap(data[i], data[gt--]);
        else
            i++;
    }
    return make_pair(lt - 1, gt + 1);
}

pair<int, int> partitionBently3way(int data[], int lo, int hi) {
    int pivot = data[(lo + hi) /2];
    int i = lo - 1, j = hi + 1;
    int p = lo - 1, q = hi + 1;

    while(true) {
        while(data[++i] < pivot);
        while(data[--j] > pivot);
    

        if (i < j) {
            swap(data[i], data[j]);
            if (data[i] == pivot)
                swap(data[++p], data[i]);
            if (data[j] == pivot)
                swap(data[--q], data[j]);
        } else {
            if (i == j)
                swap(data[++p], data[i]);
            break;
        }
    }

    i = j + 1;
    for (int k = lo; k <= p; k++)
        swap(data[k], data[j--]);
    for (int k = hi; k >= q; k--)
        swap(data[k], data[i++]);
    return make_pair(j, i);
}

Any one of these partition mechanisms can be plugged into Quick Sort.

How to select the right pivot value?

  1. first or last element
  2. random: int pivot = data[rand() % (hi - lo + 1) + lo];
  3. middle element
  4. MED3 (median-of-3; for large arrays, the “ninther”, a median of medians)

Standard C++ Library: Select Pivot

int choosePivot(int data[], int lo, int hi)
{
    int n = hi - lo + 1;
    int mid = (lo + hi) / 2;
    int d = n / 8; // sampling step (n/8, as in the classic Bentley-McIlroy qsort)
    if (n > 7) {
        if (n > 40) { // large arrays: ninther (median of three medians)
            lo = MED3(data, lo, lo + d, lo + 2*d);
            mid = MED3(data, mid - d, mid, mid + d);
            hi = MED3(data, hi - 2*d, hi - d, hi);
        }
        mid = MED3(data, lo, mid, hi);
    }
    return mid;
}

int pivot = data[choosePivot(data, lo, hi)];

Algorithm Walkthrough

Let’s trace through the Quick Sort algorithm with a concrete example:

Initial Array: {5, 3, 8, 4, 9, 1, 6, 2, 7}

First Partition:

  1. Select pivot: Let’s choose the first element, 5
  2. Initialize pointers:
    • Left pointer starts at index 1 (value 3)
    • Right pointer starts at the last index (value 7)
  3. Partitioning process:
    • Move left pointer rightward until finding an element ≥ pivot
    • Move right pointer leftward until finding an element ≤ pivot
    • Swap these elements if pointers haven’t crossed
    • Continue until pointers cross
  4. Final swap: Place pivot at its correct position by swapping with the right pointer
  5. Result: {2, 3, 1, 4, 5, 9, 6, 8, 7}

Now we have two sub-arrays: {2, 3, 1, 4} (elements < 5) and {9, 6, 8, 7} (elements > 5).

Second Partition (Left Sub-array):

  1. Select pivot: 2
  2. Partitioning process: Similar to above
  3. Result: {1, 2, 3, 4} with sub-arrays {1} and {3, 4}

Third Partition (Right Sub-array):

  1. Select pivot: 9
  2. Partitioning process: Similar to above
  3. Result: {7, 6, 8, 9} with sub-arrays {7, 6, 8} and {}

The process continues recursively until all sub-arrays are sorted.

This can be done recursively. Let’s look at the code.


void quickSortRec(int data[], int lo, int hi) {
    if (lo >= hi) return;

    auto [leftLast, rightFirst] = partition(data, lo, hi);

    quickSortRec(data, lo, leftLast);
    quickSortRec(data, rightFirst, hi);
}

void quickSort(int data[], int n) {
    quickSortRec(data, 0, n - 1);
}

Time Complexity

The time complexity depends on the pivot value, because a bad pivot makes the partitions unbalanced. The worst case is O(n²), but the best (and average) case is O(n log n).

Quick Selection Sort


Quick Selection, also known as Quickselect or Hoare’s Selection Algorithm, is an efficient algorithm designed to find the kth smallest element in an unordered list. Developed by Tony Hoare (the same computer scientist who created Quicksort), this algorithm shares the same partitioning strategy as Quicksort but with a crucial difference in its recursive approach.

Algorithm Overview

Quickselect uses a divide-and-conquer approach similar to Quicksort:

  1. Choose a Pivot: Select an element from the array as the pivot
  2. Partition: Rearrange the array so elements less than the pivot are on the left, and elements greater are on the right
  3. Selective Recursion: Unlike Quicksort which recurses on both partitions, Quickselect only recurses on the partition that contains the kth element we’re looking for

This selective recursion is what gives Quickselect its efficiency advantage over a full sorting algorithm when we only need to find a specific element.

Implementation

int partition(int data[], int lo, int hi) {
    int pivot = data[hi];

    int i = lo - 1;
    for(int j = lo; j < hi; j++) {
        if (data[j] <= pivot)
            swap(data[++i], data[j]);
    }
    swap(data[++i], data[hi]); // place the pivot; i is now its final index
    return i;
}

void quickSelectRec(int data[], int lo, int hi, int k) 
{
    if (lo == hi) return;

    int pivotIndex = partition(data, lo, hi);

    if (k == pivotIndex)
        return;
    else if (k < pivotIndex) 
        quickSelectRec(data, lo, pivotIndex - 1, k);
    else
        quickSelectRec(data, pivotIndex + 1, hi, k);
}


void quickSelect(int data[], int n, int k) {
    quickSelectRec(data, 0, n - 1, k);
}

Time Complexity

The time complexity of Quickselect varies depending on the scenario:

  1. Best Case: O(n) - This occurs when each partition divides the array into roughly equal halves.
  2. Average Case: O(n) - Even with random pivot selection, the algorithm performs linearly on average.
  3. Worst Case: O(n²) - This happens when the partitioning is maximally unbalanced (e.g., when the array is already sorted and we choose the first or last element as pivot).

The linear average-case time complexity makes Quickselect significantly more efficient than sorting algorithms (which require at least O(n log n) time) when we only need to find a specific element.

Selection Sort


Selection Sort is a simple and intuitive comparison-based sorting algorithm. The core idea behind this algorithm is to divide the array into a sorted portion and an unsorted portion, then repeatedly find the smallest (or largest) element from the unsorted portion and place it at the end of the sorted portion.

Algorithm Steps

Selection Sort operates through the following steps:

  1. Initialization: Conceptually divide the array into a sorted portion (initially empty) and an unsorted portion (initially the entire array).
  2. Find Minimum: Find the smallest element in the unsorted portion.
  3. Swap: Exchange the found minimum with the first element of the unsorted portion.
  4. Boundary Shift: Move the boundary between sorted and unsorted portions one position to the right.
  5. Repeat: Continue this process until the entire array is sorted.

You may find this algorithm to be too easy!

Implementation

void SelectionSort(vector<int>& arr) {
    int n = arr.size();

    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;

        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIdx])
                minIdx = j;
        }

        swap(arr[i], arr[minIdx]);
    }
}

void printArray(vector<int>& arr) {
    for (int& val : arr) {
        cout << val << " ";
    }
    cout << endl;
}


int main()
{
    vector<int> arr = { 64, 25, 12, 22, 11 };
    cout << "Original array: ";
    printArray(arr);

    SelectionSort(arr);

    cout << "Sorted array: ";
    printArray(arr);
}

Because of the two nested for-loops (outer and inner), the time complexity is O(n²) in all cases.

Kth Selection Sort

Problem Statement: The selection problem is defined as: Given an array of n elements and an integer k (1 ≤ k ≤ n), find the kth smallest (or largest) element in the array.

Implementation


void SelectionSort(vector<int>& vec, int lo, int hi)
{
    int minIdx = lo;
    for (int i = lo + 1; i <= hi; i++)
    {
        if(vec[i] < vec[minIdx])
            minIdx = i;
    }
    swap(vec[lo], vec[minIdx]); // move the minimum of vec[lo..hi] to the front
}

void PartialSelectionSort(vector<int>& vec, int k) 
{
    int hi = (int)vec.size() - 1;
    for (int i = 0; i < k; i++) 
    {
        SelectionSort(vec, i, hi); // after k passes, vec[k - 1] is the kth smallest
    }
}

int main() {
    vector<int> vec = { 7, 10, 4, 3, 7, 20, 15 };
    int k = 4;
    PartialSelectionSort(vec, k);
}

The time complexity for this is O(kn). If k is small, it can be interpreted as O(n); but if k is large enough (close to n), it is considered O(n²).

Merge Sort


This sorting algorithm is based on the divide and conquer strategy. For a simple array, the steps are as follows:

  1. Divide the input array into two halves. (Divide)
  2. Sort each half. (Conquer)
  3. Merge the two halves. (Combine)

Basically, we break the problem of sorting the array into two smaller sub-problems, then merge the two sorted halves.

The problem statement: given some array, sort it in increasing order. Two solutions exist: bottom-up (with a for loop) and top-down (recursive). Let’s explore what the code looks like and the logic behind it. If the size of the array or vector is 0 or 1, it is already sorted. The sorting doesn’t happen when the array is divided; it happens when the halves are combined (merged).

Let’s look at the recursive way to solve this first (top-down merge sort)

vector<int> aux; // scratch buffer shared by Merge

void Merge(vector<int>& a, int lo, int mid, int hi)
{
    aux.resize(a.size());
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++)
    {
        aux[k] = a[k];
    }

    // you can do this with while loops in a similar way, which can be even simpler
    for (int k = lo; k <= hi; k++)
    {
        if (i > mid) a[k] = aux[j++];                // left half exhausted, take from the right
        else if (j > hi) a[k] = aux[i++];            // right half exhausted, take from the left
        else if (aux[j] < aux[i]) a[k] = aux[j++];   // compare the values, take the smaller one
        else a[k] = aux[i++];                        // ties take from the left (stable)
    }
}

void SortHelper(vector<int>& a, int lo, int hi)
{
    if(hi <= lo) return;
    int mid = lo + (hi - lo) / 2;
    SortHelper(a, lo, mid);
    SortHelper(a, mid+1, hi);
    Merge(a, lo, mid, hi);
}

int main() 
{
    std::vector<int> vec = {3, 5, 8, 9, 6, 2};
    SortHelper(vec, 0, vec.size() - 1);
}

Let’s look at the bottom-up approach


vector<int> aux; // scratch buffer shared by Merge
void Merge(vector<int>& a, int lo, int mid, int hi); // defined below

void Sort(vector<int>& a)
{
	aux.resize(a.size());

	int N = a.size();

	for (int sz = 1; sz < N; sz = sz + sz) {
		for (int lo = 0; lo < N - sz; lo += sz + sz)
			Merge(a, lo, lo + sz - 1, std::min(lo + sz + sz - 1, N - 1));
	}
}

void Merge(vector<int>& a, int lo, int mid, int hi)
{
	int i = lo, j = mid + 1;
	if (a[mid] <= a[j]) return;
	for (int k = lo; k <= hi; k++)
		aux[k] = a[k];
	for (int k = lo; k <= hi; k++)
	{
		if (i > mid) a[k] = aux[j++];
		else if (j > hi) a[k] = aux[i++];
		else if (aux[j] < aux[i]) a[k] = aux[j++];
		else a[k] = aux[i++];
	}
}


int main() 
{
    std::vector<int> vec = {3, 5, 8, 9, 6, 2};
    Sort(vec);
}

Time Complexity & Accuracy

  • This is a dependable way to sort an array or vector. A problem of size n is divided into halves of size n/2 and so on, creating log(n) levels; each level does O(n) merging work, so the total is O(n log n).

Insertion Sort


Given an array / vector {2, 8, 5, 3, 9, 4}, we want to sort it in increasing order.

  1. Work left to right
  2. Examine each item and compare it to items on its left.
  3. Insert the item in the correct position in the array.

We know the first element is sorted by itself, which is why we loop from 1 to the vector size. Having partitioned off the first group (element), we compare the next value with the far right of the partitioned group. (For example: if i = 2, then j = 2; compare 8 and 5; 5 is smaller than 8, so we swap them.) Another case: if i = 3, we compare 8 and 3 and swap, then compare 5 and 3 and swap, then compare 2 and 3 and stop. Essentially, each new value is inserted into the sorted prefix by comparing it against the last elements of that prefix, and that insertion is what gives the algorithm its name.

#include <iostream>
#include <vector>

void Print(const std::vector<int>& vec)
{
	for (auto& v : vec) {
		std::cout << v << " ";
	}
	std::cout << std::endl;
}

int main()
{
	std::vector<int> vec = { 2, 8, 5, 3, 9, 4 };
	Print(vec);

	// Logic:
	// first element is always sorted.
	for (int i = 1; i < vec.size(); i++)
		for (int j = i; j > 0 && vec[j-1] > vec[j]; j--)
			std::swap(vec[j - 1], vec[j]);
	
	Print(vec);
}

But we can also write a simplified, slightly optimized version by not using std::swap:

for (int i = 1; i < vec.size(); i++)
{
	int key = vec[i];
	int j = i;
	for (; j > 0 && vec[j - 1] > key; j--)
		vec[j] = vec[j - 1]; // shift larger elements one slot to the right
	vec[j] = key;            // drop the key into its final slot
}

Time Complexity & Worst Case

With the two nested loops, each new value is compared against the sorted prefix, so the time complexity is O(n²). The best case is O(n), when the array is already sorted; the worst case is O(n²), when the array is sorted in decreasing order.
