Merge Sort

From NovaOrdis Knowledge Base

Revision as of 18:23, 21 September 2021


Overview

Merge sort is a divide-and-conquer algorithm. The algorithm was invented by John von Neumann in 1945, and it is still used today as the standard sorting algorithm in a number of programming libraries.

Worst-case time: Θ(n log n)
Average-case time: Θ(n log n)
Best-case time: Θ(n log n)

Because the combine phase of the algorithm makes temporary copies of the sub-arrays being merged, merge sort requires Θ(n) auxiliary space and is not considered an in-place sorting algorithm, while QuickSort is.

Algorithm

Merge sort works as follows:

  1. Divide: it divides the input array into two roughly equal sub-arrays of about n/2 elements each. If n is even, the array is split into two equal halves; if n is odd, the right sub-array is one element larger than the left. The base case, which ends the recursion, is a sub-array with at most one element: such an array is already sorted.
  2. Conquer: it calls itself recursively to sort the sub-arrays produced as result of the previous step.
  3. Combine: it merges the two sorted sub-arrays into the sorted answer by invoking an auxiliary Θ(n) merge procedure. The merge procedure works the way we would combine two sorted piles of cards: compare the cards at the top of the two input piles, pick the smaller one and place it face down on the output pile; removing a card uncovers the next card in that input pile. We repeat this until one input pile is exhausted, then place the entire remaining pile face down on top of the output pile.
    /**
     * Sorts the range a[from, to) in place, though the merge step makes
     * temporary copies of the sub-arrays, so the algorithm is not in-place
     * in the formal sense.
     *
     * @param a    the array to sort.
     * @param from the index of the first element of the range to sort.
     * @param to   the index of the first element outside the range to sort;
     *             it may be equal to the array length, to sort the entire array.
     */
    public static void mergeSort(int[] a, int from, int to) {
        //
        // divide
        //
        int middle = from + (to - from) / 2;
        if (from == middle) {
            //
            // base case: the range [from, to) contains at most one element,
            // which is already sorted, so we exit the recursion
            //
            return;
        }
        //
        // there is a middle that separates non-empty arrays, conquer
        //
        mergeSort(a, from, middle);
        mergeSort(a, middle, to);
        //
        // combine sorted sub-arrays
        //
        merge(a, from, middle, to);
    }

    /**
     * Assumes the sub-arrays a[i, j) and a[j, k) are each sorted and merges
     * them in place, using temporary copies of both sub-arrays.
     */
    public static void merge(int[] a, int i, int j, int k) {
        //
        // make copies
        //
        int[] left = new int[j - i];
        System.arraycopy(a, i, left, 0, left.length);
        int[] right = new int[k - j];
        System.arraycopy(a, j, right, 0, right.length);

        int l = 0;
        int r = 0;
        int dest = i;

        while(l < left.length && r < right.length) {
            if (left[l] <= right[r]) {
                a[dest ++] = left[l];
                l ++;
            }
            else {
                a[dest ++] = right[r];
                r ++;
            }
        }

        //
        // drain leftovers
        //
        while(l < left.length) {
            a[dest ++] = left[l ++];
        }
        while(r < right.length) {
            a[dest ++] = right[r ++];
        }
    }
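The two methods above can be exercised with a small driver. The sketch below repeats them in compact form so the example compiles on its own; the class name MergeSortDemo is made up for this illustration:

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative driver for the mergeSort()/merge() methods shown above.
// The methods are repeated here, in compact form, so the example compiles
// on its own; the class name is made up for this sketch.
public class MergeSortDemo {

    public static void mergeSort(int[] a, int from, int to) {
        int middle = from + (to - from) / 2;
        if (from == middle) {
            return; // zero- or one-element range, already sorted
        }
        mergeSort(a, from, middle);
        mergeSort(a, middle, to);
        merge(a, from, middle, to);
    }

    public static void merge(int[] a, int i, int j, int k) {
        int[] left = Arrays.copyOfRange(a, i, j);
        int[] right = Arrays.copyOfRange(a, j, k);
        int l = 0, r = 0, dest = i;
        while (l < left.length && r < right.length) {
            a[dest++] = left[l] <= right[r] ? left[l++] : right[r++];
        }
        while (l < left.length) a[dest++] = left[l++];
        while (r < right.length) a[dest++] = right[r++];
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        mergeSort(a, 0, a.length);              // sort the whole array
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 5, 6, 9]

        // randomized sanity check against the library sort
        Random rnd = new Random(42);
        for (int trial = 0; trial < 100; trial++) {
            int[] x = rnd.ints(rnd.nextInt(50), -100, 100).toArray();
            int[] y = x.clone();
            mergeSort(x, 0, x.length);
            Arrays.sort(y);
            if (!Arrays.equals(x, y)) throw new AssertionError("mismatch");
        }
        System.out.println("100 random arrays match Arrays.sort");
    }
}
```

Note that the half-open [from, to) convention lets the whole array be sorted with mergeSort(a, 0, a.length), matching the javadoc above.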

Time Complexity Analysis

One way of estimating the time complexity is to use a recursion tree:

Recursion Tree Example for Merge Sort

Another way is to use the master theorem. The running time of merge sort can be expressed by the following recurrence:

       │ c                if n == 1
T(n) = │
       │ 2T(n/2) + cn     if n > 1

In the context of the master theorem, the number of subproblems a is 2, the subproblem size is n/2, so b is 2, and the combine phase of the algorithm performs Θ(n^1) work, so d is 1. Since a/b^d = 2/2^1 = 1, the corresponding case of the master theorem applies and the complexity is Θ(n^d log n) = Θ(n log n).
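The recurrence can also be checked numerically. The sketch below unrolls it with c = 1 (the constant is arbitrary); for n a power of two the recurrence evaluates to exactly n*log2(n) + n, consistent with the Θ(n log n) bound:

```java
// Numerically unrolls T(n) = 2T(n/2) + c*n, T(1) = c, with c = 1.
// For n a power of two this evaluates to exactly n*log2(n) + n,
// illustrating the Θ(n log n) bound obtained from the master theorem.
public class MergeSortRecurrence {

    static long T(long n) {
        if (n == 1) return 1;      // base case: c = 1
        return 2 * T(n / 2) + n;   // two subproblems + Θ(n) combine work
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1 << 20; n *= 2) {
            long log2 = 63 - Long.numberOfLeadingZeros(n);
            if (T(n) != n * log2 + n) throw new AssertionError("n=" + n);
        }
        System.out.println("T(n) == n*log2(n) + n for all powers of two up to 2^20");
    }
}
```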

More details in CLRS page 35.

Correctness

Correctness can be shown by analyzing loop invariants. For the merge procedure, the invariant is that at the start of each iteration of the main loop, a[i .. dest) contains the dest - i smallest elements of the two copies, in sorted order. See CLRS, page 31.
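As an illustration of the invariant analysis, the sketch below repeats the merge() method above, instrumented with an explicit runtime check: at the start of each iteration of the main loop, a[i .. dest) is sorted and none of its elements exceeds the smallest unconsumed element of either copy. The class name is made up for this sketch:

```java
import java.util.Arrays;

// A copy of the merge() method above, instrumented with an explicit check of
// its loop invariant. The class name is made up for this sketch.
public class MergeInvariantCheck {

    static void merge(int[] a, int i, int j, int k) {
        int[] left = Arrays.copyOfRange(a, i, j);
        int[] right = Arrays.copyOfRange(a, j, k);
        int l = 0, r = 0, dest = i;
        while (l < left.length && r < right.length) {
            checkInvariant(a, i, dest, left, l, right, r);
            a[dest++] = left[l] <= right[r] ? left[l++] : right[r++];
        }
        while (l < left.length) a[dest++] = left[l++];
        while (r < right.length) a[dest++] = right[r++];
    }

    // a[i .. dest) must be sorted, and its last element must not exceed the
    // smallest unconsumed element of either input copy; together these imply
    // a[i .. dest) holds the dest - i smallest elements, in sorted order.
    static void checkInvariant(int[] a, int i, int dest,
                               int[] left, int l, int[] right, int r) {
        for (int x = i + 1; x < dest; x++) {
            if (a[x - 1] > a[x]) throw new AssertionError("output not sorted");
        }
        if (dest > i && (a[dest - 1] > left[l] || a[dest - 1] > right[r])) {
            throw new AssertionError("output element exceeds an input head");
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 4, 7, 2, 3, 9};   // two sorted halves: [1,4,7] and [2,3,9]
        merge(a, 0, 3, 6);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 7, 9]
    }
}
```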