What is Dynamic Programming?

Do you like repeating work you have already done? Hardly anyone does, and programmers are no exception. Dynamic programming exists precisely to remember the answers to sub-problems that have already been solved.

It works by breaking a problem into a series of overlapping subproblems whose solutions are combined to solve the larger problem. Any given problem can therefore be broken down into smaller sub-problems.

Furthermore, these subproblems can themselves be divided into still smaller problems, and whenever the same subproblem appears more than once, its stored answer can be reused. In short, dynamic programming gives programmers a way to avoid repeated work by remembering partial results.

What is Dynamic Programming?

Dynamic programming is an optimization over plain recursion. Any recursive solution that makes repeated calls for the same inputs can be optimized with it. The basic idea is to store the results of subproblems so that they never have to be re-computed when they are needed again. This simple approach often reduces the running time from exponential to polynomial, as in the Fibonacci example below.

In simple words, DP is an algorithmic technique for solving an optimization problem by breaking it into smaller subproblems. This works because the optimal solution to the overall problem depends on the optimal solutions to its subproblems, which are then combined to obtain the final answer. In particular:

  • A problem suited to dynamic programming can be divided into smaller subproblems that overlap each other.
  • The final solution is obtained by combining the solutions of those smaller subproblems.
  • Every dynamic programming algorithm reuses stored subproblem results, whether through memoization or through a table.

An example: Fibonacci numbers

For those who don't know, the Fibonacci numbers are a series in which every number is the sum of the two numbers before it: 0, 1, 1, 2, 3, 5, 8, and so on without end. The nth number of this series is given by the following recurrence:

Fib(n) = Fib(n-1) + Fib(n-2), for n > 1, with Fib(0) = 0 and Fib(1) = 1
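
Translating this recurrence directly into plain recursion (a minimal Python sketch, before any dynamic programming is applied) makes the repeated work easy to see:

    def fib(n):
        # Plain recursion: Fib(0) = 0, Fib(1) = 1, Fib(n) = Fib(n-1) + Fib(n-2).
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55, but fib(8), fib(7), ... are recomputed many times over

Because the same calls keep reappearing, this plain version takes exponential time; the two DP methods described later repair exactly that.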

As the recurrence shows, the problem breaks down into the smaller subproblems Fib(n-1) and Fib(n-2), and these subproblems overlap, which is exactly what Dynamic Programming exploits. The following well-known problems are commonly solved with DP:

  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All-pairs shortest paths (Floyd-Warshall)
  • Shortest path (Dijkstra)
  • Project scheduling

Every dynamic programming problem can be solved with one of two methods: top-down or bottom-up. Both are discussed in detail in the next section.

Methods for Solving Dynamic Programming Problems

1. Top-down using memoization

As the name suggests, we work from the top (the bigger problem) down to the subproblems to reach the final solution. While solving a subproblem in this approach, programmers cache its result so that they never have to solve the same subproblem several times over. Storing the results of solved subproblems in this way is known as memoization.
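
A minimal top-down sketch in Python (the cache name memo is illustrative) only needs a small dictionary added to the plain recursive Fibonacci shown earlier:

    def fib_memo(n, memo=None):
        # Top-down: the same recursion, plus a cache of solved subproblems.
        if memo is None:
            memo = {}
        if n < 2:
            return n
        if n not in memo:  # solve each subproblem only once
            memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
        return memo[n]

    print(fib_memo(50))  # 12586269025, in linear rather than exponential time

In practice, Python's functools.lru_cache decorator can add the same caching automatically.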

2. Bottom-up using tabulation

Contrary to the top-down strategy, the bottom-up approach solves all the subproblems first. Programmers use tabulation, filling an n-dimensional table from which the solution to the main problem is then read off. Tabulation differs from memoization in that the table is filled in a fixed order for every subproblem, whereas memoization only saves the results of the subproblems that actually get solved.
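
A minimal bottom-up sketch in Python (the list name table is illustrative) fills the table from the base cases upward, so every entry is ready before it is needed:

    def fib_table(n):
        # Bottom-up: fill a table from the base cases up to n.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fib_table(50))  # 12586269025, the same answer with no recursion at all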

Tabulation v/s Memoization

Memoization is usually much easier to code than tabulation. A generic 'memoizer' wrapper function can perform all the caching automatically, whereas tabulation requires the programmer to work out and code the order in which the table must be filled.

Memoization (top-down) does have drawbacks: each call pays recursion overhead, every cached result stays in memory, and the cache can grow quickly, so its space cost accompanies its time cost. Tabulation avoids the recursion overhead and, because the fill order is known, often lets the programmer keep only the parts of the table that are still needed, saving both time and space.

Note: Time complexity describes how the number of steps an algorithm takes grows with the size of its input, while space complexity describes how its memory consumption grows with the size of its input.

Grounds | Tabulation | Memoization
Code | Quite complex to code, because the order in which the table is filled must be known. | Much easier to code, because a generic caching wrapper does most of the work.
Speed | Faster, because the programmer controls the complete order of computation. | Slower than tabulation, because entries are created on the fly by the recursive calls.
Table completeness | The table must be computed completely. | The table does not have to be filled completely.

Recursion and Dynamic Programming

People often confuse recursion with dynamic programming, but there are differences worth knowing. Recursion is a method of finding a solution by expressing the value of a function in terms of other values of that same function, directly or indirectly; functions defined this way are called recursive functions.

Dynamic programming is essentially recursion with memoization added: values are calculated, stored, and then reused whenever the same subproblem appears again. This reduces the time complexity and leads to faster code. The basic idea behind dynamic programming is to save time while using the available space efficiently.

In short, plain recursion uses less space but more time, while dynamic programming uses more space but saves time.

Characteristics of DP Problems

Dynamic programming can only be applied to problems that have the following features:

  1. Optimal Substructure: The optimal solution to the overall problem can be built from optimal solutions to its subproblems. If a problem lacks optimal substructure, no correct recursive formulation can be defined for it.
  2. Overlapping Subproblems: The recursive formulation keeps revisiting the same subproblems, each of which has a single answer that can be computed once and reused. If the subproblems do not overlap, storing their results gains nothing and dynamic programming offers no benefit.

Fundamentals of Dynamic Programming

  1. Substructure: Break the main problem into smaller subproblems and build the optimal solution from the solutions of those subproblems.
  2. Table Structure: Store the solution of every subproblem in a table. These solutions are reused many times, which saves solving the same subproblem again and again.
  3. Bottom-up Computation: Combine the solutions stored in the table to solve the main problem, which eventually yields the optimal solution (see the knapsack sketch after this list).
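
To make these three fundamentals concrete, here is a minimal bottom-up Python sketch of the 0/1 knapsack problem from the earlier list (the names values, weights, and capacity are illustrative):

    def knapsack(values, weights, capacity):
        # Substructure: the best value using the first i items within capacity c.
        # Table structure: table[i][c] holds the answer to that subproblem.
        n = len(values)
        table = [[0] * (capacity + 1) for _ in range(n + 1)]
        # Bottom-up computation: fill the table row by row.
        for i in range(1, n + 1):
            for c in range(capacity + 1):
                table[i][c] = table[i - 1][c]  # skip item i
                if weights[i - 1] <= c:        # or take item i if it fits
                    table[i][c] = max(table[i][c],
                                      values[i - 1] + table[i - 1][c - weights[i - 1]])
        return table[n][capacity]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220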

Components of Dynamic Programming

  1. Stages: The main problem is divided into a sequence of subproblems known as stages; these are the smallest pieces into which the problem is broken.
  2. States: Every stage has a number of states associated with it; in the shortest-path problem, for example, a state is the node reached at that stage.
  3. Decision: At every stage a decision is made from among several possible choices, and only the best possible choice should be taken.
  4. Optimal Policy: The rule, or set of rules, by which decisions are made at every stage. A policy that is optimal for the problem as a whole is called the optimal policy, and the requirement that it remain optimal from every intermediate state onward is known as the Bellman principle of optimality.

Steps to Solve Dynamic Programming Problems

1. Recognizing the problem

First of all, keep in mind that Dynamic Programming is an optimization technique in which we solve bigger problems by breaking them down into many simpler subproblems. Each subproblem is solved once and its solution is stored in a table for future reference.

Recognizing that a problem calls for Dynamic Programming is one of the most difficult and most important steps. Here, programmers have to check whether the problem can be broken down into many similar smaller problems whose solutions can be combined. The following are a few ways to recognize problems that can be solved with dynamic programming:

  • Most problems that ask for maximizing or minimizing a certain quantity are good candidates for Dynamic Programming, as are counting problems that ask for the number of arrangements under certain conditions.
  • Check whether the problem satisfies the overlapping-subproblems property and the optimal-substructure property. If it does, the given problem can almost certainly be solved with DP.

2. Decide the state

Solving a DP problem requires defining its states, i.e. its subproblems. This is where the solution really begins, and it should be done carefully, because everything that follows depends on the choice of state definition. So, right after step 1, express the problem in terms of function parameters and identify which of those parameters change between subproblems.
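
As a hedged example based on the knapsack sketch above (the names are illustrative), the state can be written down directly in terms of the function's changing parameters:

    # Knapsack state: the pair (i, c), where
    #   i is the number of items considered so far, and
    #   c is the capacity still available.
    # These are the only parameters that change between subproblems, so each
    # distinct (i, c) pair identifies one state, giving (n + 1) * (capacity + 1)
    # states in total, exactly the table filled in the earlier sketch.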

3. Recurrence relation

This is the hardest part of solving the problem, because it takes intuition, observation, and practice. Expressing the recurrence relation clearly, as a simple breakdown of the problem into its subproblems, makes everything that follows significantly easier.
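
For instance, a hedged recurrence for the knapsack sketch shown earlier (items numbered from 1, names illustrative) can be written in the same style as the Fibonacci formula:

Knapsack(i, c) = max( Knapsack(i-1, c), value[i] + Knapsack(i-1, c - weight[i]) ), when weight[i] <= c
Knapsack(i, c) = Knapsack(i-1, c), otherwise

In words: the best value using the first i items either skips item i or takes it and gives up the capacity its weight requires.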

Once the recurrence relation is written down and the problem is fully specified in terms of its subproblems, you can proceed to the next step.

4. Identifying the base case

A base case is a subproblem that does not depend on any other subproblem. To find one, try out concrete examples and observe how the problem keeps simplifying into smaller subproblems; the goal is to find the point at which it cannot be simplified any further.
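
As a hedged example drawn from the formulas above: the Fibonacci recurrence stops simplifying at Fib(0) = 0 and Fib(1) = 1, and the knapsack recurrence stops at Knapsack(0, c) = 0 for every capacity c, since with no items left there is nothing to choose.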

5. Decide whether to implement it iteratively or recursively

All the steps above are agnostic to whether the implementation is recursive (top-down) or iterative (bottom-up); both approaches still require determining the recurrence relation and finding the base case.

When making the decision, think carefully about the trade-offs discussed earlier (tabulation v/s memoization), and be ready to implement whichever approach suits the scenario better.

6. Adding memoization

Memoization is a technique that stores the results of subproblem (function) calls and returns the cached result when the same inputs occur again. In a recursive solution, adding memoization is a simple, mechanical change to the function (see the sketch after this list). The process includes:

  • Storing the function's result in memory just before every return statement.
  • Checking for a stored result before starting any other computation.
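
A minimal sketch of adding memoization to the knapsack recurrence from step 3 (names are illustrative), following exactly these two points:

    def knapsack_memo(values, weights, capacity):
        memo = {}  # cache of solved (i, c) states

        def best(i, c):
            if (i, c) in memo:        # check before any other computation
                return memo[(i, c)]
            if i == 0:                # base case: no items left
                result = 0
            elif weights[i - 1] > c:  # item i does not fit
                result = best(i - 1, c)
            else:                     # skip item i, or take it
                result = max(best(i - 1, c),
                             values[i - 1] + best(i - 1, c - weights[i - 1]))
            memo[(i, c)] = result     # store before returning
            return result

        return best(len(values), capacity)

    print(knapsack_memo([60, 100, 120], [10, 20, 30], 50))  # 220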

7. Time complexity

In Dynamic Programming, the time complexity is the total time required to solve all the unique subproblems (states). A couple of simple rules make it easy to compute (a worked example follows the list):

  1. Count the total number of states; this depends entirely on the number of changing parameters in the problem.
  2. Estimate the amount of work done for every subproblem (state).
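
As a hedged worked example using the earlier sketches: the memoized Fibonacci has about n + 1 states (one per value of n) with constant work per state, so its time complexity is O(n) instead of the exponential time of plain recursion; the bottom-up knapsack has (n + 1) * (capacity + 1) states with constant work per entry, giving O(n * capacity) time.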

The steps above will help you solve any type of dynamic programming problem, but it takes plenty of practice with this approach to master the skill and solve problems quickly.

Conclusion

Dynamic programming is an optimization of plain recursion: it saves you from solving the same subproblems repeatedly and therefore saves a lot of time. It divides a problem into many similar subproblems, solves each of them once, and then combines their answers to find the optimal solution to the main problem.

There are two methods of implementing a dynamic programming solution: top-down and bottom-up. In the top-down (memoization) method, you start from the bigger problem and recurse into its subproblems; the bottom-up (tabulation) method solves the subproblems first.

The blog also covers the fundamentals and components of dynamic programming that help in understanding the concept, and the 7-step guide at the end lists all the steps used for solving dynamic programming problems.
