Algorithms are an inherent part of software engineering. They are the foundation of modern computing. Without them, solving complex problems or writing efficient code would be extremely difficult, and companies like Google, Meta, and Amazon wouldn’t be able to operate as they do today.
What Is an Algorithm?
Algorithms provide a precise, step-by-step method for solving problems, often making tasks more efficient or manageable.
Suppose algorithms didn’t exist: for every problem, you would have to devise a solution from scratch. Algorithms give you standardized, reusable methods that save that time and effort.
Some examples include the backtracking algorithm, the sliding window technique, and dynamic programming.
Example of an Algorithm
Here’s a small example of what an algorithm looks like: the steps below describe the linear search algorithm.
1. Take an array arr of n elements.
2. Take the target element to search for.
3. Set a loop index i = 0.
4. Repeat step 5 while i < n:
5. If arr[i] == target, print “Element found at index i” and stop the algorithm. Otherwise, increment i by 1.
6. If the loop ends without finding the element, print “Element not found”.
7. Exit.
Running these steps on a sample input:
Array: [10, 20, 30, 40, 50]
Target: 30
- Compare 10 → no
- Compare 20 → no
- Compare 30 → yes → Element found at index 2 → stop.
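Here is the same procedure as a minimal Python sketch (linear_search is just an illustrative name):

def linear_search(arr, target):
    # Scan each element from left to right.
    for i, value in enumerate(arr):
        if value == target:
            return i   # found: return the index
    return -1          # not found

print(linear_search([10, 20, 30, 40, 50], 30))  # prints 2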
Important Algorithms for Coding Interviews
Now that we have defined algorithms, let’s examine the algorithms that are essential in the software engineering ecosystem.
Backtracking Algorithm
The Backtracking Algorithm is a technique for solving problems incrementally. It builds candidate solutions step by step and abandons a candidate (“backtracks”) as soon as it determines that the candidate cannot possibly lead to a valid solution. Backtracking is commonly used for constraint satisfaction problems like puzzles, permutations, and combinations.
Core Idea: Try all possibilities, but backtrack when a path leads to an invalid state.
Strengths: Solves constraint-based problems (e.g., N-Queens, Sudoku, Permutations).
Drawbacks: Can be slow (exponential time) if not optimized with pruning.
Optimizations: Use memoization or constraint propagation to reduce the search space.
Common Use Cases: Puzzle solving, combinatorics, decision trees.
Example: In the N-Queens problem, the backtracking algorithm places queens one by one in different columns, and if placing a queen leads to a conflict, it backtracks and tries the next possible position.
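As a rough illustration of that place-check-backtrack loop, here is a minimal Python sketch that counts N-Queens solutions; solve_n_queens, is_safe, and place are hypothetical names chosen for this example:

def solve_n_queens(n):
    cols = []  # cols[r] holds the column of the queen placed in row r

    def is_safe(row, col):
        # Conflict if an earlier queen shares the column or a diagonal.
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:            # all queens placed: one valid solution
            return 1
        count = 0
        for col in range(n):
            if is_safe(row, col):
                cols.append(col)          # choose a position
                count += place(row + 1)   # recurse into the next row
                cols.pop()                # backtrack: undo the choice
        return count

    return place(0)

print(solve_n_queens(4))  # prints 2 (the 4-Queens puzzle has 2 solutions)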
Sliding Window Algorithm
The Sliding Window Algorithm is used to efficiently solve problems involving subarrays or substrings by maintaining a window that moves across the data. It avoids redundant computations by reusing results from previous steps. This technique is ideal for finding maximum/minimum sums, averages, or specific patterns.
Core Idea: Maintain a subset (window) of elements and slide it across the data to compute results efficiently.
Strengths: Linear time complexity for many subarray/substring problems.
Drawbacks: Mostly useful for contiguous subarray problems.
Optimizations: Use two pointers or deques for dynamic window sizes.
Common Use Cases: Max/min subarray sums, longest substring without repeating characters.
Example: To find the max sum of any subarray of size 3 in the array [2, 1, 5, 1, 3, 2], slide a window of size 3 and update the sum dynamically.
This results in an optimal solution in linear time, O(n).
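A minimal Python sketch of that fixed-size window (max_window_sum is an illustrative name): each slide adds the entering element and subtracts the leaving one, so every element is touched only once.

def max_window_sum(arr, k):
    # Sum of the first window of size k.
    window = sum(arr[:k])
    best = window
    # Slide the window one step at a time.
    for i in range(k, len(arr)):
        window += arr[i] - arr[i - k]
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # prints 9 (5 + 1 + 3)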
Recursion Algorithm
Recursion is a programming technique where a function calls itself to solve smaller instances of a problem. Each recursive call should move toward a base case, which stops further calls.
Core Idea: Solve problems by solving smaller versions of themselves.
Strengths: Clean and intuitive for problems with hierarchical or repetitive structure.
Drawbacks: Risk of stack overflow, can be inefficient without memoization.
Optimizations: Tail recursion, memoization, converting to iteration.
Common Use Cases: Tree/graph traversal, factorials, Fibonacci, DFS, backtracking problems.
Example: To calculate a factorial, define fact(n) = n * fact(n-1) with base case fact(0) = 1.
Recursion is useful for problems like tree traversal, permutations, and divide & conquer.
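A minimal Python sketch of the factorial example:

def fact(n):
    if n == 0:                  # base case stops the recursion
        return 1
    return n * fact(n - 1)      # recursive case: a smaller instance

print(fact(5))  # prints 120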
Dynamic Programming
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler subproblems and storing their results to avoid redundant computations. It is useful for optimization problems with overlapping subproblems and optimal substructure.
Core Idea: Solve overlapping subproblems once and reuse their solutions.
Strengths: Drastically improves performance by avoiding recomputation.
Drawbacks: Requires careful identification of subproblems and dependencies.
Types: Top-down (memoization) or bottom-up (tabulation).
Common Use Cases: Knapsack problem, Fibonacci, longest common subsequence, matrix chain multiplication.
Example: In the Fibonacci sequence, instead of recalculating values, store them in a table:
fib(n) = fib(n-1) + fib(n-2) with fib(0)=0, fib(1)=1.
This reduces time complexity from exponential to linear, O(n).
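Here is one way that might look as a bottom-up (tabulation) sketch in Python; fib and table are illustrative names:

def fib(n):
    if n < 2:
        return n
    # Table stores each Fibonacci value so it is computed exactly once.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # prints 55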
Greedy Algorithm
A greedy algorithm builds up a solution piece by piece, always choosing the option that looks best at the moment (the locally optimal choice). It works well when a problem has the greedy-choice property and optimal substructure.
Core Idea: Make the best choice at each step without reconsidering previous choices.
Strengths: Simple and fast (often O(n log n) or better).
Drawbacks: Doesn’t always produce the global optimum.
Validation: Must prove that local optimum leads to global optimum.
Common Use Cases: Activity selection, coin change (with canonical denominations), Huffman coding, Kruskal’s/Prim’s algorithms.
Example: In the coin change problem, to make ₹27 with coins of ₹1, ₹5, and ₹10, a greedy approach picks the largest coin possible at each step: 10 + 10 + 5 + 1 + 1.
This gives an efficient solution in many cases, though not always optimal.
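A minimal Python sketch of that greedy choice (greedy_coin_change is an illustrative name); note that it assumes canonical denominations like these, where always taking the largest coin is safe:

def greedy_coin_change(amount, coins):
    # Try denominations from largest to smallest.
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)   # take the largest coin that still fits
            amount -= coin
    return result

print(greedy_coin_change(27, [1, 5, 10]))  # prints [10, 10, 5, 1, 1]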
Divide & Conquer Algorithm
Divide and Conquer is an algorithmic technique that solves a problem by dividing it into smaller subproblems, solving each recursively, and then combining their solutions. It’s ideal for problems that can be broken down repeatedly into similar tasks.
Core Idea: Break the problem into smaller parts, solve them independently, and combine.
Strengths: Reduces complexity for large problems.
Drawbacks: Can have overhead from recursive calls and merging.
Examples: Merge Sort, Quick Sort, Binary Search, Fast Fourier Transform (FFT).
Common Use Cases: Sorting, searching, matrix operations, computational geometry.
Example: Merge Sort splits an array into halves, recursively sorts each half, and then merges them.
This approach typically reduces time complexity to O(n log n).
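Here is a minimal Python sketch of Merge Sort following that divide-recurse-merge outline (merge_sort is an illustrative name):

def merge_sort(arr):
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # divide: sort each half recursively
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]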
Binary Search Algorithm
Binary Search is an efficient algorithm to find an element in a sorted array by repeatedly dividing the search range in half. It compares the target with the middle element and discards the half where the target can’t lie.
Core Idea: Divide the sorted array in half, and eliminate half the search space each time.
Strengths: Fast search in O(log n) time.
Drawbacks: Requires sorted input and careful index management.
Extensions: Lower bound, upper bound, binary search on answer (e.g., optimization problems).
Common Use Cases: Search in sorted arrays, search space problems (e.g., capacity of ship, minimum time, etc.).
Example: To find 23 in [10, 15, 20, 23, 30, 35], compare with the middle; narrow the range until found.
This algorithm runs in O(log n) time.
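A minimal Python sketch of that halving loop (binary_search is an illustrative name):

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid        # found the target
        if arr[mid] < target:
            lo = mid + 1      # target can only lie in the right half
        else:
            hi = mid - 1      # target can only lie in the left half
    return -1                 # not found

print(binary_search([10, 15, 20, 23, 30, 35], 23))  # prints 3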
Conclusion
Now that you have an understanding of what algorithms are and which ones matter most for coding interviews, try practicing a few problems based on each of them. Thank you!