Recursive Algorithms
Internal
- Algorithms
Overview
A recursive algorithm solves a problem by calling itself one or more times to deal with closely related sub-problems, usually of smaller size. "Divide and conquer" algorithms form a special class of recursive algorithms, but not all recursive algorithms belong to this class.
The running time of recursive algorithms can be estimated using recurrences, also called recurrence equations. A recursive algorithm requires a base case, where the input is sufficiently small that the output can be computed immediately, rather than recursing further. A typical class of recursive algorithms is the "divide and conquer" algorithms, explained below; a minimal example of a base case follows.
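As a minimal illustration of a recursive algorithm and its base case, here is a factorial sketch in Python (the function name factorial is illustrative, not taken from a specific library):

 def factorial(n):
     # Base case: the input is small enough that the output is
     # immediately known, so no further recursion is needed.
     if n <= 1:
         return 1
     # Recursive case: solve a smaller sub-problem of size n - 1.
     return n * factorial(n - 1)

Its running time is described by the recurrence T(n) = T(n-1) + O(1), which solves to O(n).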
Note that not all recursive algorithms are "divide and conquer" algorithms. The Huffman algorithm, for example, is a recursive greedy algorithm that does not follow the "divide and conquer" paradigm.
The fundamental intuition behind why a recursive algorithm may be better than a brute-force iterative algorithm is that while the number of sub-problems to be solved grows at each level of the recursion, the size of each sub-problem, and the work required to solve it, shrinks. In some cases the rate of sub-problem proliferation (RSP) is smaller than the rate of work shrinkage (RWS), and the recursive algorithm comes out ahead.
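To make the RSP/RWS trade-off concrete, here is a minimal sketch of the per-level accounting, under the assumption that each call spawns a sub-problems of size n/b and does c·n^d work outside the recursive calls (a, b, c, d are generic parameters, not tied to a specific algorithm):

 \text{work at level } j \;=\; \underbrace{a^j}_{\text{sub-problem count}} \cdot\; c\left(\frac{n}{b^j}\right)^{d} \;=\; c\,n^d \left(\frac{a}{b^d}\right)^{j}

Here a plays the role of the RSP and b^d the role of the RWS: when a < b^d, the per-level work forms a geometrically decreasing series and the total is dominated by the work done at the root.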
Divide and Conquer
A common recursive technique is "divide and conquer". "Divide and conquer" is an algorithm design paradigm that has proven useful for a specific class of computational problems; it involves breaking down a large problem into smaller sub-problems and recursing. The technique consists of three steps, applied at each level of the recursion:
- Divide the problem into several sub-problems that are similar to the original problem but smaller in size. The sub-problems may or may not be equal in size.
- Conquer the sub-problems by solving them recursively, applying the same procedure we are currently in. If a sub-problem is small enough, it may be solved with iterative techniques, or it may be a trivial case that needs no work at all. When we reach this stage, we say the recursion has "bottomed out" and we have gotten down to the base case.
- Combine the solutions of the sub-problems into a solution for the original problem. This is usually the step that requires the most ingenuity, so that the work is done as efficiently as possible. The merge sort sketch below illustrates all three steps.
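As an illustration of the three steps, here is a minimal merge sort sketch in Python (the function names merge_sort and merge are illustrative, not taken from a specific library):

 def merge_sort(a):
     # Base case: a list of 0 or 1 elements is already sorted;
     # the recursion "bottoms out" here.
     if len(a) <= 1:
         return a
     # Divide: split the input into two halves of (nearly) equal size.
     mid = len(a) // 2
     # Conquer: recursively sort each half.
     left = merge_sort(a[:mid])
     right = merge_sort(a[mid:])
     # Combine: merge the two sorted halves in linear time.
     return merge(left, right)

 def merge(left, right):
     result = []
     i = j = 0
     while i < len(left) and j < len(right):
         if left[i] <= right[j]:
             result.append(left[i])
             i += 1
         else:
             result.append(right[j])
             j += 1
     # One of the two lists is exhausted; append the rest of the other.
     result.extend(left[i:])
     result.extend(right[j:])
     return result

For example, merge_sort([3, 1, 2]) returns [1, 2, 3]. The linear-time merge is the "combine" step, and it is what keeps the per-level work bounded.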
Examples of algorithms that use the divide-and-conquer method:
- Binary search of a sorted array (binary search requires sorted input; a sketch follows this list)
- Merge sort, the canonical "divide and conquer" example (sketched above)
- QuickSort
- Karatsuba multiplication, which multiplies two n-digit integers in time better than n²
- Strassen's algorithm for matrix multiplication
- Finding the number of inversions in an array
- Maximum subarray problem
- Closest pair problem
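As a minimal sketch of the first example above, here is a recursive binary search in Python over a sorted list (the function name binary_search is illustrative):

 def binary_search(a, target, lo=0, hi=None):
     # Search the sorted list 'a' for 'target'; return its index, or -1.
     if hi is None:
         hi = len(a) - 1
     # Base case: empty range, the target is not present.
     if lo > hi:
         return -1
     mid = (lo + hi) // 2
     if a[mid] == target:
         return mid
     # Divide: recurse into exactly one half, discarding the other,
     # so each call shrinks the problem size by half.
     if a[mid] < target:
         return binary_search(a, target, mid + 1, hi)
     return binary_search(a, target, lo, mid - 1)

For example, binary_search([1, 3, 5, 7], 5) returns 2. Since each call produces only one sub-problem of half the size (a = 1, b = 2), the running time is O(log n).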
Recursive Algorithms Complexity
As long as the sub-problems are of equal size, the complexity of a "divide and conquer" algorithm can be precisely described using the Master Method, sketched below. For other cases, the complexity can be determined using recursion trees and other methods. More details about recursive and "divide and conquer" algorithm complexity in:
- Recursive Algorithms Complexity
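As a sketch of that method: for recurrences of the standard form T(n) = a·T(n/b) + O(n^d), where a ≥ 1 equal-size sub-problems of size n/b are produced and O(n^d) work is done outside the recursive calls, the Master Method gives:

 T(n) = \begin{cases} O(n^d \log n) & \text{if } a = b^d \\ O(n^d) & \text{if } a < b^d \\ O(n^{\log_b a}) & \text{if } a > b^d \end{cases}

For merge sort, a = 2, b = 2 and d = 1, so a = b^d and the method yields the familiar O(n log n) bound.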