Recursive Algorithms

From NovaOrdis Knowledge Base

Revision as of 19:00, 9 November 2021

Internal

  * Algorithms

Overview

A recursive algorithm solves a problem by calling itself recursively one or more times to deal with closely related sub-problems, usually of smaller size. The running time of recursive algorithms can be estimated using recurrences, or recurrence equations. A recursive algorithm requires a base case, where the input is sufficiently small that the output can be immediately computed, rather than recursing further.
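As a minimal sketch of a recursive algorithm with a base case, consider the factorial function (an illustrative example chosen here, not one drawn from this article):

```python
def factorial(n: int) -> int:
    # Base case: the input is small enough that the output can be
    # computed immediately, without recursing further.
    if n <= 1:
        return 1
    # Recursive case: call ourselves on a closely related, smaller sub-problem.
    return n * factorial(n - 1)

print(factorial(5))  # 120
```

Its running time can be estimated with the recurrence T(n) = T(n-1) + O(1), which resolves to O(n).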

A typical class of recursive algorithms is the class of "divide and conquer" algorithms, explained below. Note that not all recursive algorithms are "divide and conquer" algorithms: the Huffman algorithm, for example, is a recursive greedy algorithm that does not follow the "divide and conquer" paradigm.

The fundamental intuition behind why a recursive algorithm may be better than a brute-force iterative algorithm is that while the number of sub-problems to be solved grows with each level of recursion, the size of each sub-problem, and the work required to solve it, shrinks. In some cases the rate of sub-problem proliferation (RSP) is smaller than the rate of work shrinkage (RWS), so the total work decreases at deeper levels of the recursion.
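A numeric sketch of this trade-off, assuming a balanced recursion that splits each problem into two halves and does work linear in the sub-problem size (the parameters are illustrative assumptions, not taken from this article):

```python
# Hypothetical illustration: a recursion that splits a problem of size n
# into 2 halves doubles the number of sub-problems at each level (RSP = 2)
# while halving their size (RWS = 2). With linear work per sub-problem the
# two rates cancel, so every level performs the same total amount of work.
n = 1024
for level in range(4):
    count = 2 ** level       # number of sub-problems at this level
    size = n // 2 ** level   # size of each sub-problem
    print(f"level {level}: {count} sub-problems of size {size}, "
          f"total work {count * size}")
```

If the work per sub-problem shrank faster than sub-problems proliferated (RSP < RWS), the per-level totals would decrease instead of staying constant.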

Divide and Conquer

A common recursive technique is "divide and conquer", an algorithm design paradigm that has proven useful for a broad class of computational problems and that works by breaking a large problem down into smaller sub-problems and recursing. The technique consists of three steps, applied at each level of the recursion:

  1. Divide the problem into several sub-problems that are similar to the original problem but smaller in size. The sub-problems may or may not be equal in size.
  2. Conquer the sub-problems by solving them recursively, applying the same procedure we are currently in. If a sub-problem is small enough, it may be solved iteratively, or it may be a trivial case that needs no work at all. When we reach this stage, we say that the recursion has "bottomed out" and we have gotten down to the base case.
  3. Combine the solutions of the sub-problems into a solution to the original problem. This is usually the step that requires the most ingenuity, so that the work is done as efficiently as possible.
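The three steps above can be sketched with merge sort, a classic "divide and conquer" algorithm (a concrete example supplied here for illustration, not one named in this section):

```python
def merge_sort(a: list) -> list:
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # 1. Divide: split the input into two sub-problems of roughly equal size.
    # 2. Conquer: sort each half recursively with the same procedure.
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # 3. Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Here the "combine" step (the merge) is where the work of each level happens, and it runs in time linear in the size of the two halves.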

Examples of algorithms that use the divide-and-conquer method are:

Recursive Algorithms Complexity

As long as the sub-problems are of equal size, the complexity of a "divide and conquer" algorithm can be precisely described using the Master Method. For other cases, the complexity can be determined using recursion trees and other methods. More details about the complexity of recursive and "divide and conquer" algorithms are available in:
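As a sketch of how the Master Method is applied, here is its standard simplified form, worked on the merge-sort recurrence (an assumed example, not a formula taken from this section):

```latex
% Simplified Master Method, for recurrences of the form
% T(n) = a T(n/b) + O(n^d), with a >= 1, b > 1, d >= 0:
T(n) =
\begin{cases}
O(n^d \log n)   & \text{if } a = b^d \\
O(n^d)          & \text{if } a < b^d \\
O(n^{\log_b a}) & \text{if } a > b^d
\end{cases}
% Merge sort: a = 2 sub-problems, b = 2 (halving), d = 1 (linear merge),
% so a = b^d and T(n) = O(n \log n).
```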

Recursive Algorithms Complexity