
In this article, we will cover a famous dynamic programming question, the "Minimum Path Sum" algorithm.
So let's start.

### What is the Minimum Path Sum Problem?

We are given a 2D grid, and what we need to do is straightforward.

We have been placed at the top left corner and we need to reach the bottom right corner.

There is a cost for coming to any point in the grid, so we need to find such a path that minimizes this cost!

Note that we are allowed to go only in a downward direction or the right side.

Example 1:

Input:

```
[
  [1,3,1],
  [1,5,1],
  [4,2,1]
]
```

Output: 7

Explanation: Because the path 1->3->1->1->1 minimizes the sum.
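A quick way to verify the stated answer (the coordinate list below is ours, not from the original statement) is to walk the claimed path and sum the cells it visits:

```python
grid = [[1, 3, 1], [1, 5, 1], [4, 2, 1]]
# The path 1 -> 3 -> 1 -> 1 -> 1: right, right, down, down
path = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
print(sum(grid[i][j] for i, j in path))  # 7
```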

#### Top-down (memoized) implementation in Python:

```python
def minPath1(cost):
    rows = len(cost)
    cols = len(cost[0])
    # dp[i][j] caches the minimum cost to reach cell (i, j);
    # 0 means "not computed yet"
    dp = [[0 for _ in range(cols)] for _ in range(rows)]
    return toReach(cost, rows - 1, cols - 1, dp)

def toReach(cost, row, col, dp):
    # Base case: the starting cell costs only its own value
    if row == 0 and col == 0:
        return cost[row][col]
    # Edge cases: cells in the top row and leftmost column
    # can be reached only from the single cell just before them
    if row == 0:
        return cost[row][col] + toReach(cost, row, col - 1, dp)
    if col == 0:
        return cost[row][col] + toReach(cost, row - 1, col, dp)
    if dp[row][col] != 0:
        return dp[row][col]
    dp[row][col] = cost[row][col] + min(toReach(cost, row - 1, col, dp),
                                        toReach(cost, row, col - 1, dp))
    return dp[row][col]
```
#### Bottom-up (tabulation) implementation in Python:

```python
def minPath2(cost):
    rows = len(cost)
    cols = len(cost[0])
    dp = [[0 for _ in range(cols)] for _ in range(rows)]
    dp[0][0] = cost[0][0]
    # The first column and the first row can each be reached in only one way
    for i in range(1, rows):
        dp[i][0] = dp[i - 1][0] + cost[i][0]
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + cost[0][j]
    # Every other cell is entered from above or from the left,
    # whichever is cheaper
    for i in range(1, rows):
        for j in range(1, cols):
            dp[i][j] = cost[i][j] + min(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]
```
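As an aside not covered in the article: since each row of the dp table depends only on the row above it, the table can be collapsed into a single 1-D row, reducing the extra space from O(rows × cols) to O(cols). A minimal sketch (the function name is ours):

```python
def min_path_sum_1d(cost):
    # dp[j] holds the minimum cost to reach column j
    # in the row processed so far
    cols = len(cost[0])
    dp = [float('inf')] * cols
    dp[0] = 0
    for row in cost:
        dp[0] += row[0]                        # first column: one way in
        for j in range(1, cols):
            dp[j] = row[j] + min(dp[j], dp[j - 1])
    return dp[-1]

grid = [[1, 3, 1], [1, 5, 1], [4, 2, 1]]
print(min_path_sum_1d(grid))  # 7
```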


The example above explains a lot: notice how we took a path that minimizes the final cost we are going to spend.

Now it might look tempting to choose the cheapest next point for every step we take (down or right)!

But will it provide us the best result?

Or do we need to try out all the possibilities and find the best result?

Well let's find out!

So let's think about the initial intuitions we can get from this question:

- We need to reach the last cell (bottom right), we are starting from the first cell (top left)
- We have only two choices at each cell: either go right or go down! Which one should we take? We will decide that later.
- We need to minimize the final cost of reaching the endpoint. The cheaper the better! Think of the numbers in the grid as money.

Now, what if my choice depends on "greedy thinking", meaning I only think about my current step and try to minimize the cost of the step that I am about to take?

A visualization will help you understand it better.

Notice how we took a greedy approach:

Red circles denote choices we did not take, and green ones represent the choices that we took.

- At every cell we look both right and down and choose the one with lower value
- Out of 1 and 3, 1 is smaller, so we go down.
- Out of 4 and 5, 4 is smaller, so we again go down.
- Now we can only go right, so we keep going until we reach the end.
- The cost: 1 + 1 + 4 + 2 + 1 = 9
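The greedy walk above can be sketched in code (the function name is ours, not from the original):

```python
def greedy_path_sum(cost):
    """Always step onto the cheaper of the two neighbouring cells."""
    rows, cols = len(cost), len(cost[0])
    i = j = 0
    total = cost[0][0]
    while (i, j) != (rows - 1, cols - 1):
        if i == rows - 1:                      # last row: can only go right
            j += 1
        elif j == cols - 1:                    # last column: can only go down
            i += 1
        elif cost[i + 1][j] <= cost[i][j + 1]:
            i += 1                             # the cell below is cheaper
        else:
            j += 1                             # the cell to the right is cheaper
        total += cost[i][j]
    return total

grid = [[1, 3, 1], [1, 5, 1], [4, 2, 1]]
print(greedy_path_sum(grid))  # 9 -- the greedy route, not the optimum
```

On this grid the greedy route costs 9 while the true optimum is 7, which is exactly the failure discussed next.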

Now let's look at another path.

Here, too, the total cost of the path = 1 + 3 + 1 + 1 + 1 = 7, which is less than what we computed earlier!

This is why it is necessary to explore all possibilities! But checking every path isn't feasible: there will be far too many paths in a larger grid.
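To put a number on "too many paths": an n × n grid has C(2(n-1), n-1) monotone (right/down) paths, since each path makes 2(n-1) moves and we choose which n-1 of them go down. This counting argument is ours, not from the article; a quick check:

```python
from math import comb

# Number of down/right paths in an n x n grid: out of 2(n-1)
# total moves, choose which n-1 are "down" moves
for n in (3, 10, 20):
    print(n, comb(2 * (n - 1), n - 1))
```

The count grows exponentially, so brute-force enumeration quickly becomes hopeless.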

Well, this is where Dynamic Programming steps in!

We try to break our problem down into subproblems:

- For reaching a point [i,j], it is obvious that we have to pay that location's cost.
- Other than this, for reaching this point we can either come from the top i.e [i-1,j] or from the left i.e. [i, j-1].
- These are the only two choices that we have! All we need to do is choose the minimum of these two choices.
- So the relation can be thought of as:

### toReach(i, j) = cost(i, j) + min(toReach(i-1, j), toReach(i, j-1))

- We also need to take care of the edge cases, i.e. the out-of-boundary checks.
- Any cell in the first row or the first column can be reached in only one way, so we can fill those in first.
- And further work is done with the help of Dynamic Programming!
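Putting the recurrence and its edge cases together, a direct (unmemoized) translation looks like this; it explores every possibility, which is exactly the recomputation the dp table above avoids:

```python
def to_reach(cost, i, j):
    # toReach(i, j) = cost(i, j) + min(toReach(i-1, j), toReach(i, j-1))
    if i == 0 and j == 0:
        return cost[0][0]          # base case: the starting cell
    if i == 0:                     # first row: only reachable from the left
        return cost[i][j] + to_reach(cost, i, j - 1)
    if j == 0:                     # first column: only reachable from above
        return cost[i][j] + to_reach(cost, i - 1, j)
    return cost[i][j] + min(to_reach(cost, i - 1, j),
                            to_reach(cost, i, j - 1))

grid = [[1, 3, 1], [1, 5, 1], [4, 2, 1]]
print(to_reach(grid, 2, 2))  # 7
```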