Master Method

Internal

Overview

The master method or master theorem is a general mathematical tool for analyzing the running time of divide and conquer algorithms that divide a problem into equal-size subproblems and call themselves recursively. The fact that the subproblems are equal in size is important: the master method does not apply to divide and conquer algorithms that divide the problem into unequal parts. Also, the master theorem offers running time predictions as Θ() bounds (bounded from above and below), not only O().

Master Method and Recurrences

The master method can be applied to predict the running time of recursive algorithms whose running time can be expressed as a recurrence (or recurrence equation). The first step in applying the master method is to identify the recurrence that characterizes the algorithm.

We denote by T(n) the maximum number of operations the algorithm needs to solve the computational problem for an input of size n. This is the quantity we want an upper bound for. The recurrence expresses T(n) in terms of T() of smaller inputs, specifically the sizes of the subproblems on which the algorithm calls itself recursively.

Every recurrence has two ingredients:

  • a base case, which describes the running time when there is no further recursion
  • the general case, which describes the running time as the work done by the recursive calls invoked at that level plus the work done at the current level

In the most generic case, a recurrence looks like this:

       │ Θ(1)                # base case
T(n) = │
       │ aT(n/b) + n^d       # general case
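
For example (an illustration added here, not from the referenced material), merge sort fits this pattern with a = 2 recursive calls on subproblems of size n/b = n/2 and n^d = n^1 merge work, so its recurrence is T(n) = 2T(n/2) + Θ(n). A minimal Python sketch, with the recurrence parameters called out in comments:

# Merge sort as a concrete instance of the generic recurrence (illustrative
# example): a = 2 recursive calls, each on a subproblem of size n/b = n/2,
# plus linear (n^d with d = 1) merge work, so T(n) = 2T(n/2) + Θ(n).
def merge_sort(items):
    if len(items) <= 1:               # base case: Θ(1) work, no recursion
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # first of a = 2 recursive calls
    right = merge_sort(items[mid:])   # second recursive call
    return merge(left, right)         # combine step: Θ(n) work

def merge(left, right):
    # Merge two sorted lists in linear time.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]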

Intuition

The master theorem applies to algorithms whose running times can be expressed as a recurrence equation. A generic form of a recurrence equation is:

T(n) = a⋅T(n/b) + f(n)

where a ≥ 1, b > 1 and f(n) is a function that approximates the non-recursive cost. Note that the above expression does not include a boundary condition for n = 1. The reason is that T(1) does not significantly change the solution to the recurrence; it may change it by a constant factor, but the order of growth stays unchanged.
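
As a sketch of how the conclusion of the master theorem is typically applied (my illustration, restricted to the special case where the non-recursive work is Θ(n^d); the formal statement is in the lecture linked below and in CLRS), the bound follows from comparing a with b^d:

import math

def master_theorem_bound(a, b, d):
    """Asymptotic bound for T(n) = a*T(n/b) + Theta(n^d), per the master
    theorem. Illustrative sketch; assumes a >= 1, b > 1, d >= 0 and that
    the non-recursive work is exactly Theta(n^d)."""
    if a == b ** d:
        return f"Theta(n^{d} * log n)"           # work is balanced across recursion levels
    elif a < b ** d:
        return f"Theta(n^{d})"                   # work dominated by the root level
    else:
        return f"Theta(n^{math.log(a, b):.3f})"  # work dominated by the leaves; exponent is log_b(a)

print(master_theorem_bound(2, 2, 1))  # merge sort: Theta(n^1 * log n)
print(master_theorem_bound(1, 2, 0))  # binary search: Theta(n^0 * log n) = Theta(log n)
print(master_theorem_bound(4, 2, 1))  # four subproblems, linear combine: Theta(n^2.000)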

TODO CLRS page 93.

TODO integrate after class:


Integrate class https://www.coursera.org/learn/algorithms-divide-conquer/lecture/HkcdO/formal-statement


Analyzing Divide-and-Conquer Algorithms

TODO integrate after class:

For divide-and-conquer recursive problems, where solving the problem involves dividing it into a subproblems of size n/b of the original, applying the algorithm recursively on each of the a subproblems and then combining the results, the running time can be expressed by the generic recurrence presented below. If the problem size is small enough (n ≤ c), the straightforward solution takes constant time, which is expressed as the constant function Θ(1).

       | Θ(1)                     if n ≤ c
T(n) = |
       | aT(n/b) + D(n) + C(n)    otherwise

where D(n) is the cost of dividing the n-size problem into a subproblems and C(n) is the cost of combining the results.
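
A small concrete instance (my illustration, not from CLRS) with the cost terms mapped to code: recursively finding the maximum of a list, where D(n) = Θ(1), C(n) = Θ(1), a = 2 and b = 2, giving T(n) = 2T(n/2) + Θ(1) = Θ(n):

def recursive_max(items, lo=0, hi=None):
    # Divide-and-conquer maximum (illustrative example):
    # a = 2 subproblems of size n/b = n/2, D(n) = Θ(1) (computing the midpoint),
    # C(n) = Θ(1) (one comparison), so T(n) = 2T(n/2) + Θ(1).
    if hi is None:
        hi = len(items)
    if hi - lo == 1:                              # base case: n ≤ c = 1, Θ(1)
        return items[lo]
    mid = (lo + hi) // 2                          # D(n): divide step, Θ(1)
    left_max = recursive_max(items, lo, mid)      # first of a = 2 recursive calls
    right_max = recursive_max(items, mid, hi)     # second recursive call
    return max(left_max, right_max)               # C(n): combine step, Θ(1)

print(recursive_max([7, 3, 9, 1, 4]))             # 9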

TODO

  • intuition
  • definition
  • proof
  • Emphasize that the result is Θ(), not only O()




[Next in Algorithm Complexity]