Yello,

Today, I will introduce you to a very efficient algorithm called *dynamic programming*.

### “If it’s just an algorithm, why does it sound like it’s a totally different aspect of programming?”

### “Why didn’t they give it a boring name like Kruskal’s Algorithm, for example? (no offense)”

The term dynamic programming was originally used by Richard Bellman in the 1940s. He said he chose the word *dynamic* because the output characteristics of the solution depend upon time… and simply because it sounds cool. The word *programming* refers to the use of the method to find an optimal program.

Let’s look at the problem first before actually talking about the algorithm (one of the easiest problems):

Suppose you are climbing up a staircase with a friend. Your friend is talking about how much he likes his pet goldfish for the fifth time that day. You do not want to hear that story again, so you want to keep your mind busy by counting how many possible ways you can jump up the stairs if you can only hop 1 or 2 step(s) at a time (you know how many steps the staircase has).

Here are some examples:

- n = 1: 1 way (1)
- n = 2: 2 ways (1+1, 2)
- n = 3: 3 ways (1+1+1, 1+2, 2+1)
- n = 4: 5 ways (1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2)

Do you see a pattern here? (Hint: It’s a very famous sequence).

The answer is…

(♫♫ drum roll ♫♫)

FIBONACCI SEQUENCE!

#### So, how does it work?

For n = 0, there is only one way to climb the staircase (which is no way). For n = 1, you could only do one hop of one. For n = 2, you can either directly hop two steps, or you can jump from the first step to the second with a single hop of one. For n = 3, you either first have to reach step 1 *somehow* and add a hop of 2 steps, *or* you first reach step 2 *somehow* and add a hop of 1 step. For n = 4, either you reach step 3 and add a hop of one, or you reach step 2 and add a hop of two. Note that I did not include reaching step 2 and adding two hops of one, since it is already included in reaching step 3 and adding one more step (think about it).

(F(n) is the number of ways to reach the nth step).
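To make the case analysis above concrete, here is a small sketch (my own, not from the original post; the class name `StaircaseEnumerator` is made up) that recursively lists every 1-or-2-hop sequence for n = 4:

```java
import java.util.ArrayList;
import java.util.List;

public class StaircaseEnumerator {

    // Recursively build every sequence of 1- and 2-hops that sums to n.
    static void enumerate(int n, List<Integer> current, List<List<Integer>> all) {
        if (n == 0) {
            all.add(new ArrayList<>(current)); // reached the top: record this way
            return;
        }
        for (int hop = 1; hop <= 2; hop++) {
            if (hop <= n) {
                current.add(hop);
                enumerate(n - hop, current, all);
                current.remove(current.size() - 1); // backtrack
            }
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> all = new ArrayList<>();
        enumerate(4, new ArrayList<>(), all);
        for (List<Integer> way : all)
            System.out.println(way);
        System.out.println(all.size() + " ways"); // 5 ways, matching F(4)
    }
}
```

Every sequence either starts with a hop of 1 (leaving n − 1 steps) or a hop of 2 (leaving n − 2 steps), which is exactly the recursion we are about to write down.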

We can define our solution recursively from now on:

F(n) = F(n−1) + F(n−2), with F(0) = 1 and F(1) = 1.

The sequence is exactly the same as Fibonacci, except that the base cases here are F(0) = 1 and F(1) = 1, whereas in Fibonacci the base cases are F(0) = 0 and F(1) = 1.

Speaking of implementation, let’s implement this in Java (because I miss coding in Java 😦 )

```java
public class StaircaseProblem {

    public long numOfWays(int numOfSteps) {
        if (numOfSteps == 1 || numOfSteps == 0)
            return 1;
        return numOfWays(numOfSteps - 2) + numOfWays(numOfSteps - 1);
    }

    public static void main(String[] args) {
        StaircaseProblem sp = new StaircaseProblem();
        int steps = 5;
        System.out.println(sp.numOfWays(steps));
    }
}
```

This simple, *naive* recursive program gives an output of 8, as it was supposed to do. However, suppose you are climbing a staircase of 100 steps.

If you try this program with numOfSteps = 100, your program will take a strangely long time to run.

**Why?** I said we need to calculate F(4) and F(3) in order to calculate F(5). In order to calculate F(4), we need to calculate F(3) and F(2), and to calculate F(3), we need to calculate F(2) and F(1). The calculations of F(4) and F(3) have nothing to do with each other, so we calculate F(3) once for F(4) and once more for F(3) itself.

Imagine how many duplicate calculations occur for n steps — the number of recursive calls grows exponentially with n:
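You don’t have to imagine, actually — here is a quick sketch (my own, not from the original post) that instruments the naive recursion with a call counter:

```java
public class NaiveCallCounter {
    static long calls = 0;

    // Same naive recursion as before, but counting every invocation.
    static long numOfWays(int n) {
        calls++;
        if (n == 0 || n == 1)
            return 1;
        return numOfWays(n - 2) + numOfWays(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            calls = 0;
            numOfWays(n);
            // The call count itself grows like F(n), i.e. exponentially.
            System.out.println("n = " + n + " -> " + calls + " calls");
        }
    }
}
```

Already at n = 30 the counter is in the millions, and every one of those calls beyond the first few is recomputing a value we had already computed.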

If only there were a way to calculate each *subproblem* only once…

But wait! There is one!

This is where dynamic programming comes in. In dynamic programming, we compute a subproblem the first time we encounter it. The key is to *store* the solutions. The next time the subproblem occurs, instead of recomputing its solution, we simply look up the previously computed one. In other words, we trade space for time. The technique of storing solutions instead of recomputing them is called *memoization* (no, it’s not a typo, WordPress, stop underlining it).

```java
import java.util.Arrays;

public class StaircaseProblem {
    static long[] memoization; // new line of code

    public long numOfWays(int numOfSteps) {
        if (numOfSteps == 1 || numOfSteps == 0)
            return 1;
        // start of new lines of code
        if (memoization[numOfSteps] != -1)
            return memoization[numOfSteps];
        long currentResult = numOfWays(numOfSteps - 2) + numOfWays(numOfSteps - 1);
        memoization[numOfSteps] = currentResult;
        return currentResult;
        // end of new lines of code
    }

    public static void main(String[] args) {
        StaircaseProblem sp = new StaircaseProblem();
        int steps = 5;
        memoization = new long[steps + 1]; // new line of code
        Arrays.fill(memoization, -1);      // new line of code
        System.out.println(sp.numOfWays(steps));
    }
}
```

The two programs I wrote are basically the same lines of code except for a few modifications for storage. However, it had a huge effect on the run time. For n = 100, the dynamic version took between 200,000 ns and 300,000 ns each run (about 0.25 milliseconds on average), whereas I waited for more than 5 minutes before terminating the naive version!
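Memoization is the *top-down* flavor of dynamic programming. The same idea can also be written *bottom-up*: fill the table from the base cases upward instead of recursing downward. As a bonus, you notice you only ever need the last two values, so the table shrinks to two variables. A sketch (the class name is mine, not from the original post):

```java
public class StaircaseBottomUp {

    // Iterative DP: build F(2), F(3), ... up from the base cases.
    public static long numOfWays(int numOfSteps) {
        long prev = 1, curr = 1; // F(0) = 1, F(1) = 1
        for (int i = 2; i <= numOfSteps; i++) {
            long next = prev + curr; // F(i) = F(i-2) + F(i-1)
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(numOfWays(5)); // 8, same as the recursive versions
    }
}
```

This runs in O(n) time and O(1) space, with no recursion depth to worry about. (One caveat either way: the counts grow so fast that they overflow a `long` for n larger than about 91.)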

**What if you could hop 1, 2 or 3 steps at a time?** You would simply modify the formula to F(n) = F(n−1) + F(n−2) + F(n−3), since the last hop can now also be a hop of 3 steps.
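Here is a sketch of that variant with memoization (my own class name, not from the original post). Besides the extra F(n−3) term, we need one more base case, F(2) = 2, so the recursion never asks for a negative index:

```java
import java.util.Arrays;

public class TribonacciStaircase {
    static long[] memo;

    // F(n) = F(n-1) + F(n-2) + F(n-3): the last hop is 1, 2 or 3 steps.
    public static long numOfWays(int n) {
        if (n == 0 || n == 1)
            return 1;
        if (n == 2)
            return 2; // 1+1 or 2
        if (memo[n] != -1)
            return memo[n];
        long result = numOfWays(n - 1) + numOfWays(n - 2) + numOfWays(n - 3);
        memo[n] = result;
        return result;
    }

    public static void main(String[] args) {
        int steps = 5;
        memo = new long[steps + 1];
        Arrays.fill(memo, -1);
        System.out.println(numOfWays(steps)); // 13 ways with hops of 1, 2 or 3
    }
}
```

This sequence (1, 1, 2, 4, 7, 13, …) is sometimes called the “tribonacci” sequence, for the obvious reason.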

I guess I will be writing about dynamic programming from time to time, since I think it is a very cool technique. It makes you take a mathematical approach to a real-life problem, and once you find the pattern and the formula, it is quite easy to implement.

Hope you enjoyed my first blog post with (serious) source code!
