Dynamic Programming

Dynamic Programming is mainly used to solve optimization problems. Like the divide-and-conquer method, Dynamic Programming solves problems by combining the solutions of sub-problems.

Moreover, a Dynamic Programming algorithm solves each sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time the sub-problem recurs.

Two main properties of a problem suggest that the given problem can be solved using Dynamic Programming. These properties are:

  • Overlapping Sub-problems
  • Optimal Substructure

Overlapping Sub-Problems

Similar to the Divide-and-Conquer approach, Dynamic Programming combines solutions to sub-problems. It is mainly used where the solution of one sub-problem is needed repeatedly.

The computed solutions are stored in a table, so that they do not have to be re-computed. Hence, this technique is needed where overlapping sub-problems exist.

For example, Binary Search does not have overlapping sub-problems, whereas a recursive program for Fibonacci numbers has many overlapping sub-problems.
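To make the overlap concrete, here is a minimal Python sketch (the counter named calls is purely illustrative) that tallies how often each sub-problem is evaluated by a naive recursive Fibonacci; running it shows that small sub-problems such as F(2) are recomputed many times.

from collections import Counter

calls = Counter()   # tallies how many times each sub-problem is evaluated

def naive_fib(n):
    calls[n] += 1
    if n == 1 or n == 2:
        return 1
    return naive_fib(n - 1) + naive_fib(n - 2)

print(naive_fib(10))   # 55
print(calls)           # small sub-problems such as F(2) are computed many times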

Optimal Sub-Structure

A given problem has the Optimal Substructure property if an optimal solution of the given problem can be obtained using optimal solutions of its sub-problems.

For example, the Shortest Path problem has the following optimal substructure property:

If a node x lies on the shortest path from a source node u to a destination node v, then the shortest path from u to v is the combination of the shortest path from u to x and the shortest path from x to v.
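As a hedged illustration, the sketch below uses exactly this property in the style of the Floyd-Warshall all-pairs shortest path algorithm: the distance from u to v is improved whenever passing through an intermediate node x gives a shorter path. The 4-node weighted graph is invented for the example.

INF = float('inf')

# Adjacency matrix of an example weighted graph (INF = no direct edge).
graph = [
    [0,   3,   INF, 7  ],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1  ],
    [2,   INF, INF, 0  ],
]

def floyd_warshall(w):
    n = len(w)
    dist = [row[:] for row in w]
    # Optimal substructure: the shortest u-v path that passes through x
    # is the shortest u-x path followed by the shortest x-v path.
    for x in range(n):
        for u in range(n):
            for v in range(n):
                if dist[u][x] + dist[x][v] < dist[u][v]:
                    dist[u][v] = dist[u][x] + dist[x][v]
    return dist

print(floyd_warshall(graph))   # matrix of all-pairs shortest distances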


Dynamic Programming Example

Let's look at an example. The Italian mathematician Leonardo Pisano Bigollo, whom we commonly know as Fibonacci, described a number series by considering the idealized growth of a rabbit population. The series is:

1, 1, 2, 3, 5, 8, 13, 21, .....

We can notice that every number after the first two is the sum of the two preceding numbers. Now, let's formulate a function F(n) that returns the nth Fibonacci number; that is,

F(n) = nth Fibonacci Number

So far, we know that:

F(1) = 1
F(2) = 1
F(3) = F(2) + F(1) = 2
F(4) = F(3) + F(2) = 3

We can generalize this as:

F(1) = 1
F(2) = 1
.......
F(n) = F(n-1) + F(n-2)

Now, if we want to write it as a recursive function, we have F(1) and F(2) as our base cases. So our Fibonacci function would be:

Procedure F(n): // A function to return the nth Fibonacci number
   if n is equal to 1
      Return 1
   else if n is equal to 2
      Return 1
   end if
   Return F(n-1) + F(n-2)
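The recursive procedure above re-solves the same sub-problems many times. A memoized (top-down Dynamic Programming) version, sketched here in Python with an assumed dictionary named memo acting as the table, stores each answer the first time it is computed:

memo = {1: 1, 2: 1}   # table of already-solved sub-problems (base cases pre-filled)

def fib(n):
    # Solve each sub-problem only once and save the answer in the table.
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(50))   # 12586269025

With the table in place, F(50) needs only a linear number of additions instead of an exponential number of recursive calls.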

Steps of Dynamic Programming Approach

A Dynamic Programming algorithm is designed using the following four steps; a sketch applying them to a concrete problem follows the list.

  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution, typically in a bottom-up fashion.
  4. Construct an optimal solution from the computed information.
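To illustrate, the sketch below applies these four steps to the 0/1 Knapsack problem (one of the applications listed later in this section); the item weights, values, and capacity are invented for the example.

def knapsack(weights, values, capacity):
    # Step 1: an optimal solution for the first i items and capacity c is
    #         built from optimal solutions for the first i-1 items.
    # Step 2: recurrence:
    #         best[i][c] = max(best[i-1][c],                  # skip item i
    #                          best[i-1][c - w_i] + v_i)      # take item i
    n = len(weights)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]

    # Step 3: compute the value of an optimal solution bottom-up.
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])

    # Step 4: construct an optimal solution from the computed table.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:   # item i was taken
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][capacity], chosen

# Example data, invented for illustration.
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # (7, [1, 0]) -> items 0 and 1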

Elements of Dynamic Programming

There are basically three elements that characterize a dynamic programming algorithm.

Substructure

Decompose the given problem into smaller sub-problems. Express the solution of the original problem in terms of the solutions of these smaller sub-problems.

Table Structure

After solving the sub-problems, store their results in a table.

This is done because sub-problem solutions are reused many times, and we do not want to solve the same sub-problem over and over again.

Bottom-up Computation

Using the table, combine the solutions of smaller sub-problems to solve larger sub-problems, eventually arriving at a solution to the complete problem.
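For instance, a bottom-up computation of the Fibonacci numbers from the earlier example fills the table from the smallest sub-problems upward; the sketch below is one possible Python rendering.

def fib_bottom_up(n):
    table = [0] * (n + 1)      # table[i] will hold F(i)
    table[1] = 1
    if n >= 2:
        table[2] = 1
    # Solve smaller sub-problems first, then combine them.
    for i in range(3, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))   # 55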


Components of Dynamic Programming

  1. Stages: The problem can be divided into several sub-problems, which are called stages. A stage is a small portion of the given problem. For example, in the shortest path problem, the stages were defined by the structure of the graph (see the sketch after this list).
  2. States: Each stage has several states associated with it. The states for the shortest path problem were the nodes reached.
  3. Decision: At each stage, there can be multiple choices, out of which the best decision should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
  4. Optimal Policy: A rule which determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as Bellman's principle of optimality.
  5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it was not necessary to know how we reached a node, only that we did.
  6. There exists a recursive relationship that identifies the optimal decisions for stage j, given that stage j+1 has already been solved.
  7. The final stage must be solved by itself.
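The sketch below, built on an invented four-stage graph, labels these components explicitly: each stage is a column of nodes, the states are the nodes within a stage, and the decision at each state is which outgoing edge to follow. The final stage is solved by itself, and each earlier stage uses the already-solved later stage.

# Each stage is a list of states; edges[state] maps a state to its
# successors in the next stage with edge costs. All names and costs
# here are invented for illustration.
stages = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
edges = {
    's': {'a': 2, 'b': 5},
    'a': {'c': 4, 'd': 1},
    'b': {'c': 2, 'd': 3},
    'c': {'t': 3},
    'd': {'t': 6},
}

# Work backwards: the final stage is solved by itself (cost 0 at 't'),
# and the optimal decision at stage j uses the solved stage j+1.
cost = {'t': 0}
decision = {}
for stage in reversed(stages[:-1]):
    for state in stage:
        nxt = min(edges[state], key=lambda v: edges[state][v] + cost[v])
        decision[state] = nxt                      # optimal stage decision
        cost[state] = edges[state][nxt] + cost[nxt]

# Follow the optimal policy from the source to the destination.
path, node = ['s'], 's'
while node != 't':
    node = decision[node]
    path.append(node)
print(cost['s'], path)   # shortest cost from s to t and the path taken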

Applications of Dynamic Programming

  • Matrix Chain Multiplication
  • Optimal Binary Search Trees
  • 0/1 Knapsack Problem
  • Multistage Graph
  • Travelling Salesman Problem
  • Reliability Design
  • All-Pairs Shortest Path Problem
  • Longest Common Subsequence (LCS)
  • Flight Control and Robotics Control
  • Time-Sharing: scheduling jobs to maximize CPU usage