# How do I solve HackerEarth problems

## Consistently get the right solution with the FAST method

Dynamic programming. The last resort of any interviewer set on seeing you fail. Your interview has gone great up until this point, but now it grinds to a standstill. Somewhere in the back of your mind you remember something about arrays and memoization, but the memory is hazy at best. You fumble, you trip, you drop the ball. Game over.

Of all the possible interview topics out there, dynamic programming seems to strike the most fear into people’s hearts.

But it doesn’t have to be that way.

Working with as many people as I do over at Byte by Byte, I started to see a pattern. People aren’t really afraid of dynamic programming itself. They are scared because they don’t know how to approach the problems.

With many interview questions out there, the solutions are fairly intuitive. You can pretty much figure them out just by thinking hard about them. For dynamic programming, and especially bottom-up solutions, however, this is not the case.

## The FAST Method

Dynamic programming solutions are generally unintuitive. When solving the Knapsack problem, why are you creating an array and filling in random values? What do those values mean?

This is why I developed the FAST method for solving dynamic programming problems.

The FAST method is a repeatable process that you can follow every time to find an optimal solution to any dynamic programming problem.

Rather than relying on your intuition, you can simply follow the steps to take your brute force recursive solution and make it dynamic.

The FAST method comprises four steps: Find the First solution, Analyze the solution, identify the Subproblems, and Turn around the solution. I will explain how to use these steps using the example of computing the nth Fibonacci number.

## Find the first solution

The FAST method is built around the idea of taking a brute force solution and making it dynamic. Therefore the first step is to find that brute force solution. In the case of finding the nth Fibonacci number, we can just write a simple recursive function:

```java
public int fib(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
```

This solution is really inefficient, but it doesn’t matter at this point, since we’re still going to optimize it.

## Analyze the solution

Next we need to analyze our solution. If we look at the time complexity of our function, we find that our solution will take O(2^n) time. Each recursive call subtracts 1 or 2 from n and results in two child calls. Therefore the depth of our recursion is n and each level has twice as many calls as the one before it:

1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1 = O(2^n)

To optimize a problem using dynamic programming, it must have optimal substructure and overlapping subproblems. Does our problem have those? It definitely has an optimal substructure because we can get the right answer just by combining the results of the subproblems.

It also has overlapping subproblems. If you call fib(n), that will recursively call fib(n-1) and fib(n-2). fib(n-1) then recursively calls fib(n-2) and fib(n-3). It’s clear that fib(n-2) is being called multiple times during the execution of fib(n), and therefore we have at least one overlapping subproblem.

With these characteristics we know we can use dynamic programming.
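One way to see the overlap concretely is to instrument the brute force solution with a call counter (a small illustrative sketch, not part of the original post; the class and counter names are my own):

```java
// Counts how many recursive calls the naive solution makes,
// demonstrating the overlapping-subproblem blowup.
public class FibCallCounter {
    static int calls = 0;

    static int fib(int n) {
        calls++; // every invocation, including repeats, is counted
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int result = fib(10);
        // Only 11 distinct subproblems exist for fib(10),
        // yet the naive recursion makes far more calls than that.
        System.out.println("fib(10) = " + result + " took " + calls + " calls");
    }
}
```

Even though fib(10) has only eleven distinct subproblems (fib(0) through fib(10)), the naive recursion makes well over a hundred calls, because the same subproblems are recomputed again and again.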

## Identify the subproblems

Unlike some problems, it’s pretty easy to identify and understand the subproblems for our Fibonacci numbers. The subproblems are just the recursive calls of fib(n-1) and fib(n-2). We know that the result of fib(c) is just the cth Fibonacci number for any value of c, so we have our subproblems.

With our subproblems defined, let’s memoize the results. All this means is that we’ll save the result of each subproblem as we compute it and then check before computing any value whether or not it’s already computed. Doing this only requires minimal changes to our original solution.

```java
public int fib(int n) {
    if (n < 2) return n;

    // Create cache and initialize to -1
    int[] cache = new int[n + 1];
    for (int i = 0; i < cache.length; i++) cache[i] = -1;

    // Fill initial values in cache
    cache[0] = 0;
    cache[1] = 1;

    return fib(n, cache);
}

private int fib(int n, int[] cache) {
    // If value is set in cache, return it
    if (cache[n] >= 0) return cache[n];

    // Otherwise compute result and add it to the cache before returning
    cache[n] = fib(n - 1, cache) + fib(n - 2, cache);
    return cache[n];
}
```

Our new solution only has to compute each value once, so it runs in O(n) time. Because of the cache, though, it also uses O(n) space.

## Turn around the solution

The final step is to make our solution iterative (or bottom-up). All we have to do is flip it around. With our previous (top-down) solution, we started with n and repeatedly broke it down into smaller and smaller values until we reached fib(0) and fib(1). Now, instead, we’ll start with the base cases and work our way up: we can compute the Fibonacci number for each successive value of n until we get to our result.

```java
public int fib(int n) {
    if (n == 0) return 0;

    // Initialize cache
    int[] cache = new int[n + 1];
    cache[1] = 1;

    // Fill cache iteratively
    for (int i = 2; i <= n; i++) {
        cache[i] = cache[i - 1] + cache[i - 2];
    }

    return cache[n];
}
```

With this final solution, we again use O(n) time and O(n) space.
(Note: You can actually do this in O(1) space, but that’s beyond the scope of this post.)
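For the curious, here is a minimal sketch of what that constant-space version could look like (my own illustration, not from the post): since each Fibonacci number depends only on the previous two, we can keep two variables instead of the whole cache.

```java
// O(1)-space variant: only the last two Fibonacci values are retained.
public class FibConstantSpace {
    public static int fib(int n) {
        if (n < 2) return n;

        int prev = 0; // fib(i - 2)
        int curr = 1; // fib(i - 1)

        for (int i = 2; i <= n; i++) {
            int next = prev + curr; // fib(i)
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```

The time complexity stays O(n); only the space drops to O(1), because the full cache array is replaced by two rolling variables.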

Dynamic programming doesn’t have to be hard or scary. By following the FAST method, you can consistently get the optimal solution to any dynamic programming problem as long as you can get a brute force solution.

Knowing the theory isn’t sufficient, however. It is critical to practice applying this methodology to actual problems. If you’re looking for more detailed examples of how to apply the FAST method, check out my free ebook, Dynamic Programming for Interviews. Once you’ve gone through the examples in the book, understood them, and applied them in your practice, you’ll be able to go into any interview with confidence, knowing that not even dynamic programming will trip you up.

Ready to apply this method? Start practicing coding interview questions on Pramp (it’s also free).