👋 Hi, this is Gergely with a subscriber-only issue of the Pragmatic Engineer Newsletter. In every issue, I cover challenges at Big Tech and startups through the lens of engineering managers and senior engineers. If you’ve been forwarded this email, you can subscribe here.

How experienced engineers get unstuck in coding interviews

In-person algorithmic interviews are not disappearing from Big Tech and many top scaleups and startups, so it’s useful to know how to succeed at them. Advice and tactics from Mike Mroczka.
In-person coding interviews are returning to Big Tech and some startups in 2025, now that LLMs are too good at solving coding challenges and a plethora of tools offer invisible AI “overlays” that help candidates cheat during remote interviews. As a result, remote coding interviews today serve up more noise and less signal than ever. In response to AI “helper” tools for remote interviews, many employers are returning to the in-person interview format – while others, like Shopify, accept AI tools and adapt their processes.

Today, the most popular interview type remains the algorithmic coding interview, in which candidates get an algorithmic challenge, a whiteboard, and 30-60 minutes to come up with a solution. So, how do you succeed in this new wave of old-style tech job interviews? For some professionals, the approach may not be crystal clear if remote interviews have been the norm.

To find out, I turned to one of the best people to tackle this question: Mike Mroczka, lead author of the book Beyond Cracking the Coding Interview. It is the official sequel to the classic coding interview preparation “bible,” Cracking the Coding Interview. Reading the original title first is not a prerequisite for Beyond Cracking the Coding Interview, Mike says. If you are interested in this updated book, you can read nine chapters for free (the book has 43 chapters in total).
Mike worked at Google for nearly 4 years, and has been a dedicated interview coach for a decade. In this article, he shares his approach for acing in-person interviews with a framework aptly named “MIKE,” and advice and tactics for getting unstuck in tough coding interviews. We cover:
1. The reality of interviews at Big Tech

Several companies have started to embrace AI coding tools for interviews, including Meta, which is currently trialing them. Elsewhere, Canva and Shopify (as mentioned above) encourage candidates to use AI tools during interviews. But adoption like this is far from a death knell for tried-and-true interview formats, according to research.

Data from leading mock interview site interviewing.io shows AI has overwhelmingly NOT made algorithmic interviews redundant at Big Tech. It surveyed 53 Big Tech interviewers, mostly at FAANG companies – Facebook, Amazon, Apple, Netflix, and Google – and also at a few large scaleups like Stripe. Aline Lerner, founder of interviewing.io, summarizes the data collected:
Search giant Google is doubling down on in-person interviews because of AI. In June, on the Lex Fridman podcast, CEO Sundar Pichai confirmed that Google is bringing back in-person interviews (emphasis mine):
Alternatives to in-person algorithmic coding interviews have downsides. The most popular options are:
Trial days or trial weeks. A valid criticism of 30-60 minute coding interviews in any format is that they don’t mirror the actual work of engineers. So, why not create an interview that does? Trial days and weeks are the result, in both remote and in-person formats: candidates join a team and ship a feature to production. Linear uses remote trial days in its interview process, and Automattic holds an async coding challenge as the final “coding” interview, which can take days to complete. The downside is that trial days can add up to a week of time invested. Linear pays candidates for this, but the demand on time can make it hard for some people to commit.

The biggest downside of in-person interviews is obviously the cost: candidates must travel to an office, and overnight accommodation may be required. Some employers, like Big Tech companies, usually foot such bills. For candidates interviewing on Fridays, the process might even turn into a free city break, as happened when Uber flew me out from London to interview at its office in Amsterdam, across the North Sea.

With that, it’s over to Mike…

Interview / reality disconnect

Market forces and AI-enabled cheating are changing technical interviews by forcing a move away from verbatim LeetCode questions, back to in-person interviews – or to something else entirely. In this new world, knowing how to think outweighs memorizing lists of problems. That’s why we decided to write a sequel to “Cracking the Coding Interview.” Our goal with this new book is to teach the “why” rather than the “what”, and to go deep into problem-solving approaches.

Let’s acknowledge the elephant in the room: data structure and algorithm (DS&A) questions have very little to do with software engineers’ daily lives. When was the last time you implemented a binary search tree from scratch, or used backtracking at work? For many people, the answer is “never, except in an interview.” These interviews favor those with time to grind LeetCode. They focus on niche topics most engineers never use, which can make the process feel disconnected from reality and, frankly, a bit demoralizing.

Yet despite this, DS&A interviews remain the industry standard at most major tech companies. Why’s that? It’s because hiring managers know something many engineers overlook: there simply aren’t better alternatives. Take-homes are easy to cheat on, and system design and behavioral interviews are harder to scale, given their limited question pools and non-technical answers. All the while, DS&A questions remain the most scalable, cheat-resistant way to test problem-solving skills under pressure.

Amid AI-powered cheating and headlines declaring LeetCode dead, Big Tech follows the path of least resistance: rather than invent something unproven, it shifts back to in-person interviews to make cheating harder. This format may evolve, but time-pressured coding isn’t going anywhere. So, rather than fighting the system, it’s more pragmatic to embrace the “suck” and learn to become great at these interviews.

2. The MIKE framework: getting unstuck

Coding interviews are meant to challenge you. Even seasoned engineers hit roadblocks, and that’s not failure; it’s the point. Interviewers want to see how you respond when the path forward isn’t obvious. Do you freeze, flail, or push forward with a plan? That’s where the MIKE framework comes in. Based on years of real interview experience and refined by teaching thousands of engineers, MIKE is a structured, reliable approach for navigating moments of uncertainty.
Success doesn’t come from avoiding confusion; it comes from knowing exactly what to do when it hits.

The four components of MIKE

The MIKE framework consists of four main problem-solving strategies that experienced engineers employ to solve problems when they get stuck:
This framework isn’t revolutionary; it simply spells out what good problem solvers do instinctively. Let’s explore each component in detail.

3. M: Minimally sketch the naive solution

When you’re stuck, the first step is to stop worrying about optimization and just find a working approach. Don’t jump into code; just sketch the simplest brute force solution that solves the problem.

Why Start Naively?
How to Sketch a Naive Solution

You might ask what “sketch” means in this context. You shouldn’t code a solution, but minimally sketching it is a good use of time. For this, I recommend “indented English”, a middle ground between code and vaguely describing it out loud. It skips syntax and focuses on the code structure using indentation. Avoid full-blown pseudocode: it’s too easy to slip into writing real (and often buggy) code. Consider the “indented English” on the left, which focuses on the high-level idea, vs. the “pseudocode” on the right, which has a bug.

Pitfalls to avoid:
Starting with a naive solution gives clarity, confidence, and a launchpad for optimization. Once that’s locked in, you can move on to performance.

The solution to the bug in the code: we are mixing up values and indices in the Python code – a common problem in Python. The line “for row in grid” gives us each row itself (a list), not an index. Because of this, the following line (“for col in len(grid[row])”) would not work: “grid[row]” tries to index the grid with a list, and “len(...)” returns a number that can’t be looped over directly.
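To make that concrete, here is roughly what the buggy pattern and a corrected version look like in Python (an illustrative sketch; the grid contents are hypothetical):

```python
grid = [[1, 2, 3], [4, 5, 6]]  # hypothetical 2D input

# Buggy: `row` is the row itself (a list), not an index, so `grid[row]`
# fails, and `len(...)` returns an int, which can't be looped over directly.
# for row in grid:
#     for col in len(grid[row]):
#         ...

# Corrected: loop over indices explicitly.
for row in range(len(grid)):
    for col in range(len(grid[row])):
        value = grid[row][col]  # indexing now works as intended
```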
Applying step “M” to a real interview problem

Problem: Increasing Triplet Subsequence

Given an integer array arr, return true if there exists a triple of indices (i, j, k) such that i < j < k and arr[i] < arr[j] < arr[k]. If no such indices exist, return false.
If your spidey senses are tingling, it may be because you’ve heard of a similar problem called 3Sum. The same brute force solution with nested loops would work here.
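One possible version of that brute force, written out in Python for concreteness (the function name and sample inputs are illustrative):

```python
def increasing_triplet(arr):
    """Brute force: try every index triple (i, j, k) with i < j < k.

    Time: O(n^3) from the three nested loops. Space: O(1).
    """
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if arr[i] < arr[j] < arr[k]:
                    return True
    return False

# Hypothetical examples:
# increasing_triplet([2, 1, 5, 0, 4, 6])  -> True  (1 < 4 < 6)
# increasing_triplet([5, 4, 3, 2, 1])     -> False
```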
The skeleton of a solution can be sketched in just a minute or two. While our approach is not optimal, it helps convey to an interviewer that you understand the question and the naive way to tackle the problem. And it’s a starting point for optimizations. We will continue to revisit this problem throughout as we work through the framework.

4. I: Identify upper and lower bounds

After minimally sketching a naive solution, we suggest identifying the upper and lower bounds for your algorithm’s time complexity (the next steps can be done in any order). This process of identifying boundaries, which we call boundary thinking, helps narrow down the range of possible solutions and guides you toward the optimal approach.

What is boundary thinking?

A lower bound establishes a threshold below which no solutions can exist. If a problem has a lower bound of O(n), it means we can’t solve it in O(log n) time, or anything else faster than O(n). This helps us avoid wasting time trying to optimize beyond what’s theoretically possible.

An upper bound establishes a threshold above which solutions are not worth considering. While we can always create slower algorithms, the upper bound helps us discard approaches that are too inefficient to pass the interview.

Establishing lower bounds

There are two main types of lower bounds to consider:

Output-size lower bound: This one is straightforward: the output size is a lower bound for the runtime. With a sequential algorithm, we can’t output an array of size n in less than O(n) time; that’s how long it takes just to write it. For example, in the Increasing Triplet Subsequence problem, if we were asked to return all increasing triplets, the output size would be cubic (think of an already-sorted array), meaning that our brute force algorithm would already be optimal. If the output is just a number or Boolean, the O(1) lower bound isn’t very constraining, but for problems with potentially large outputs – like finding all combinations or permutations – this bound can be quite useful.

Task-based lower bound: Sometimes, regardless of the approach, there’s a fundamental “task” that any solution must perform. The runtime of this task establishes a lower bound. For instance, if we need to find a specific value in an unsorted array, we must check every element in the worst case, giving us a task-based lower bound of O(n). This immediately rules out any approaches that would work in O(log n) or O(1) time. Common task-based lower bounds include:
When we have both output-size and task-based lower bounds, we take the greater of the two as our final lower bound.

Establishing upper bounds

Similarly, there are two main approaches to establishing upper bounds:

Naive upper bound: The time complexity of the naive solution serves as an initial upper bound. If your brute force approach runs in O(n³) time, then you know you don’t need to consider anything slower. This bound is particularly useful because you’ve already done the work of sketching the naive solution in the first step of the MIKE framework.

Time Limit Exceeded (TLE) upper bound: This one is a bit strange. In online coding assessments, when a submitted solution is too slow, we get a “Time Limit Exceeded” message. We can anticipate whether a solution will be too slow in an online assessment or interview by understanding the problem’s constraints. For example:
These aren’t hard rules, but they are useful guidance. If you aren’t given problem constraints during an in-person interview, you can always ask the interviewer for them, but be mindful that not everyone will provide useful responses to such a nuanced question. When we have both the naive and TLE upper bounds, we take the lesser of the two as our final upper bound.

How does identifying boundaries help?

We know what you’re thinking: “That’s a lot of effort. How does it help?” To find out, let’s take a practical example with a new problem.

Problem: maximum tunnel depth

You're given an n x n binary matrix, tunnel, representing a tunnel network that archeologists have excavated. The ground level is above the first row. The first row is at depth 0, and each subsequent row is deeper underground.
The tunnel starts at the top-left and top-right corners (the cells (0, 0) and (0, n - 1) are always 1’s) and consists of a single connected network. Since the tunnel represents a single connected component in the grid, an input like this would not be valid because there is a 1 that is unreachable:
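For instance, in this hypothetical 4×4 grid, the lone 1 in the bottom-left corner can’t be reached from the connected tunnel above it:

```python
invalid_tunnel = [
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 0, 0, 0],  # this 1 is cut off from the rest of the tunnel
]
```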
Write a function that returns the maximum tunnel depth. Pause for a minute; really stop and think through the following questions before proceeding:
Max tunnel depth solution

Hopefully, you tried to answer the questions above before reading this! How would you go about solving this problem?

If you’re like most engineers with a reasonable amount of data structures and algorithms experience, your first thought might be to use a graph traversal, like a depth-first search or breadth-first search. Given that the matrix is square, our time complexity would be O(n²), and the space complexity would be similar, since we have to track visited cells to avoid cycles (plus the recursive call stack in a DFS).

What does the “naive” solution look like? The first step in our MIKE framework helps us identify a second (simpler!) approach. If we step back and ask whether there are other ways to find the maximum depth, most people notice a significantly more straightforward solution: traverse the tunnel matrix with a pair of nested loops (we sketch this in code below). Looping through the tunnel directly lets us record the deepest point without the need to write a whole graph traversal. Because the tunnel input is guaranteed to always be connected, we don’t have to worry about stray 1s in the tunnel that would throw off our depth count. The time complexity would be identical to the graph traversal in the worst case, O(n²), but the space complexity is actually better, since it is constant (O(1)).

What happens when we apply boundary thinking to the problem? The naive approach alone is a noteworthy result, since it is uncommon for the straightforward approach to be better than the textbook algorithmic one. However, if we apply boundary thinking, we can squeeze more out of this problem. We’ve already identified a constant-space solution with quadratic time complexity, so if there are improvements to be made, they will have to come from the time complexity. Is this possible? Let’s identify the upper and lower bounds to find out.

Naive upper bound: O(n²) to iterate through the entire matrix.

Lower bound: The output is just a number, so the output-size lower bound is O(1). The task-based lower bound isn’t immediately clear: do we need to check every cell? Your gut might say “yes”, but we should default to the output-size lower bound if we’re uncertain.

With bounds of O(1) to O(n²), we should consider approaches from constant time upwards, carefully considering each option within the range:
Mapping time complexities to possible algorithmic approaches helps narrow the choices. These mappings give clues for the options to try: linear scans, binary search, sorting, and trees. After just a minute or two of experimentation, it is fairly obvious that sorting breaks our input and doesn’t seem to help. We can similarly eliminate tree-based solutions like heaps, as there is no clear way to use them. The binary search idea is intriguing, but how can we apply it? Consider the following input. Can we repeatedly halve something to help narrow the search range?
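For illustration, take this hypothetical 6×6 tunnel, whose deepest 1 sits in row 3 (rows 4 and 5 are empty):

```python
tunnel = [
    [1, 0, 0, 0, 0, 1],
    [1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
```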
If we check the middle row in the tunnel matrix and scan through it, we will always find one of two scenarios: either the row contains at least one 1, which means the tunnel reaches at least that depth, so the deepest row is at that depth or below it; or the row contains no 1s, which means (because the tunnel is a single connected network starting at row 0) that the deepest row must lie above it.
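Putting the two approaches side by side, here is a sketch in Python: the naive full scan from step “M”, and the row-wise binary search that boundary thinking points us toward (assuming the input is a list of lists of 0s and 1s):

```python
def max_tunnel_depth_naive(tunnel):
    """O(n^2) baseline: scan every cell and record the deepest row containing a 1."""
    deepest = 0
    for depth in range(len(tunnel)):
        for cell in tunnel[depth]:
            if cell == 1:
                deepest = depth
    return deepest


def max_tunnel_depth(tunnel):
    """O(n log n): binary search for the last row that contains a 1.

    Because the tunnel is one connected network starting in row 0, every row
    from 0 down to the deepest row contains at least one 1, so the non-empty
    rows form a prefix we can binary search over. Each probe scans one row (O(n)).
    """
    lo, hi = 0, len(tunnel) - 1
    deepest = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if 1 in tunnel[mid]:   # the tunnel reaches at least this depth
            deepest = mid
            lo = mid + 1       # look for something deeper
        else:                  # empty row: the deepest 1 is above it
            hi = mid - 1
    return deepest

# With the hypothetical grid above, both return 3.
# max_tunnel_depth_naive(tunnel) == max_tunnel_depth(tunnel) == 3
```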
Repeating this binary search approach leads to an O(n log n) solution that’s better than our naive O(n²) approach. Using boundary thinking helps us identify a non-obvious optimization that significantly improves the solution’s efficiency. Boundary thinking is a powerful tool that avoids wasting time on impossible optimizations or unnecessarily complex solutions. By establishing clear bounds, you focus your problem-solving efforts where they’re most likely to bear fruit.

Revisiting the Increasing Triplet Subsequence with boundary thinking

Let’s see how boundary thinking applies to our Increasing Triplet Subsequence problem:
Given our upper bound of O(n³) and our lower bound of O(n), we should focus on algorithms that take O(n) or O(n log n) time, with O(n²) as a possibility (O(n²) is the optimal runtime for 3Sum).

5. K: Keywords in the problem

The third step in the MIKE framework is to look for keywords that act as triggers to try a specific approach with a particular data structure or algorithm. This technique, which we call “trigger thinking”, is a powerful shortcut that experienced engineers use intuitively. Triggers might show up in the input type, output type, constraints, problem description, or even during failed attempts to solve the problem...