The Knapsack Problem

=External=
* https://www.coursera.org/learn/algorithms-greedy/lecture/LIgLJ/the-knapsack-problem
* https://www.coursera.org/learn/algorithms-greedy/lecture/0n68L/a-dynamic-programming-algorithm
* https://www.coursera.org/learn/algorithms-greedy/lecture/LADQc/example-review-optional
=Internal=
* [[Algorithms#Dynamic_Programming_Algorithms|Dynamic Programming Algorithms]]
* [[Dynamic_Programming#Canonical_Use|Dynamic Programming]]
 
=Overview=
The knapsack problem shows up every time there is a budget and we want to use it in the best way possible, maximizing a second dimension (the total value) subject to the budget constraint. It is a fundamental problem and a canonical use case for the [[Algorithms#Dynamic_Programming_Algorithms|dynamic programming paradigm]].
=Problem Definition=
The input consists of n items {1, 2, ... n}. Each item i comes with a non-negative value v<sub>i</sub> and a non-negative and '''integral''' size w<sub>i</sub>. The integrality of the size is essential, as it allows us to use discrete evaluation steps in the dynamic programming algorithm, as will be shown below. A non-negative integral capacity W is also given as input: the items selected in the solution must have a total size of W or less.

The output should be a subset S ⊆ {1, 2 ... n} that maximizes the total value of all the items in the subset:
<font size=-1>
   ∑v<sub>i</sub>
  i∈S
</font>
in such a way that they all "fit" within the capacity W:
<font size=-1>
   ∑w<sub>i</sub> ≤ W
  i∈S
</font>
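For example (an illustrative instance, not part of the problem statement above), consider n = 4 items with (value, size) pairs (3, 4), (2, 3), (4, 2) and (4, 3), and capacity W = 6. The optimal solution is S = {3, 4}: its total value is 4 + 4 = 8 and its total size is 2 + 3 = 5 ≤ 6, and no other feasible subset has a larger total value.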
=Dynamic Programming Solution Discussion=
A dynamic programming solution usually starts with an analysis of the optimal solution and an attempt to express it as a function (recurrence) of solutions to smaller subproblems. Once the recurrence is established, the algorithm computes the smaller solutions by "brute force", obtaining the optimal solution at step i by comparing the results computed for previous steps and selecting the maximum. This "brute force" approach is feasible because the number of subproblems is small enough that all of them can be computed within an efficient overall running time. A third step is executed after the final optimal value is computed, and consists of tracing back through the (possibly multidimensional) array of subproblem solutions to establish which input elements contributed to the optimal solution.
The process is applied to the knapsack problem as shown below.
Even though the items have no intrinsic order, it helps to think of the n input items as labeled from 1 to n and processed in that order. The final optimal solution S ⊆ {1, 2, ... n} must then fall into exactly one of the following two cases:
'''Case 1''': item n does not belong to the optimal solution. In this case the optimal solution includes only items among the first n - 1, their total size is less than or equal to W, and their total value V is the maximum possible.
'''Case 2''': item n belongs to the optimal solution. In this case the optimal solution consists of item n together with an optimal solution over the remaining n - 1 items {1, 2, ... n-1} for a residual capacity of W - w<sub>n</sub>: w<sub>n</sub> is already "allocated" to item n, which in this case is by assumption part of the solution.
<font size=-1>
              ┌
              │ ∑v<sub>i</sub>, where n ∉ S and ∑w<sub>i</sub> ≤ W,
              │i∈S                  i∈S
  V = max of │
              │v<sub>n</sub>+∑v<sub>i</sub>, where n ∈ S and ∑ w<sub>i</sub> ≤ W-w<sub>n</sub>
              │  i∈S-{n}              i∈S-{n}
              └
</font>
The same reasoning can be applied to an intermediate sub-problem that only involves the first i items {1, 2, ... i}, and it yields a recurrence formula we can then use algorithmically. To write down the recurrence, we introduce the following notation: let V<sub>i,x</sub> be the maximum value that can be obtained using only the first i items {1, 2, ... i} while keeping the total size of the selected items at most x. Expressing the value as a function of two discrete variables of the problem is key to the dynamic programming solution: it allows us to iterate over the discrete values in two nested loops and compute the subproblem solutions via brute force.
The recurrence for step ''i'' can be expressed with the following formula:
<font size=-1>
              ┌
              │ V<sub>(i-1),x</sub>
V<sub>i,x</sub> = max of │ 
              │ v<sub>i</sub>+V<sub>(i-1),(x-w<sub>i</sub>)</sub>
              └
</font>
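Continuing the illustrative instance above, the top-level entry is V<sub>4,6</sub> = max(V<sub>3,6</sub>, v<sub>4</sub> + V<sub>3,3</sub>) = max(7, 4 + 4) = 8: over the first three items the best value within capacity 6 is 7 (items 1 and 3), while within the residual capacity 6 - w<sub>4</sub> = 3 it is 4 (item 3 alone), so the second case wins and item 4 belongs to the optimal solution.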
The algorithm consists of a double loop over i and x that computes the components of the recurrence:
<font size=-1>
Let W = capacity
Let v<sub>i</sub>, w<sub>i</sub> be the values and the sizes for all items from 1 to n.
Let V be a two-dimensional (n+1) x (W+1) array
Initialize V[0,x] = 0 for x = 0,1,... W
for i = 1 to n: <font color=teal># for each item, starting with first</font>
  for x = 0 to W: <font color=teal># for each possible weight from 0 to the capacity W</font>
    V[i,x] = max(V[i-1,x], V[i-1,x-w<sub>i</sub>] + v<sub>i</sub>) <font color=teal># when x < w<sub>i</sub>, which would yield a negative index, the second operand V[i-1,x-w<sub>i</sub>] + v<sub>i</sub> is ignored.</font>
</font>
After the algorithm completes, the optimal value is found in V[n, W], as the solution of the largest subproblem.
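As a concrete illustration, here is a minimal Java sketch of the table-filling loop (the class and method names are made up for this example and are not taken from the playground implementation linked below):
<syntaxhighlight lang='java'>
public class KnapsackSketch {

    // Fills the (n+1) x (W+1) table of subproblem solutions V[i][x] described above.
    // v[i] and w[i] hold the value and the integral size of item i, for i = 1..n
    // (index 0 is unused, so that the indices match the formulas in the text).
    static int[][] fillTable(int[] v, int[] w, int W) {
        int n = v.length - 1;
        int[][] V = new int[n + 1][W + 1]; // row 0 is already all zeroes in Java

        for (int i = 1; i <= n; i++) {
            for (int x = 0; x <= W; x++) {
                V[i][x] = V[i - 1][x];          // case 1: item i is not used
                if (x >= w[i]) {                // case 2 only applies if item i fits
                    V[i][x] = Math.max(V[i][x], v[i] + V[i - 1][x - w[i]]);
                }
            }
        }
        return V;
    }

    public static void main(String[] args) {
        // the illustrative instance from the problem definition section
        int[] v = {0, 3, 2, 4, 4};
        int[] w = {0, 4, 3, 2, 3};
        int W = 6;
        int[][] V = fillTable(v, w, W);
        System.out.println(V[v.length - 1][W]); // prints 8, the optimal value
    }
}
</syntaxhighlight>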
However, this only gives the maximum value. If we need to produce the actual items, we need an extra step that traces back through the subproblem array and identifies which items contributed to the optimal solution.
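Sketched below is one way to do that, assuming the table V returned by fillTable() above is kept around (again, the names are illustrative and the actual playground implementation may differ):
<syntaxhighlight lang='java'>
// additional imports needed at the top of the file
import java.util.ArrayList;
import java.util.List;

// Method to be added to the KnapsackSketch class above. It walks the filled table
// backwards and returns the indices of the items that make up one optimal solution.
static List<Integer> reconstruct(int[][] V, int[] v, int[] w, int W) {
    List<Integer> items = new ArrayList<>();
    int x = W;
    for (int i = v.length - 1; i >= 1; i--) {
        // item i was taken if "case 2" is what produced the value of this cell
        if (x >= w[i] && V[i][x] == v[i] + V[i - 1][x - w[i]]) {
            items.add(i);
            x -= w[i];
        }
    }
    return items; // e.g. [4, 3] for the illustrative instance above
}
</syntaxhighlight>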
==Playground Implementation==
{{External|https://github.com/ovidiuf/playground/blob/master/learning/stanford-algorithms-specialization/14-dynamic-programming-the-knapsack-problem/src/main/java/playground/stanford/knapsack/Knapsack.java}}
