The Pragmatic Engineer · September 30
Remote coding interviews: the AI challenge, and how to respond

As large language models get ever better at solving coding challenges, and invisible AI helper tools proliferate, remote coding interviews face new challenges. Many companies are returning to the in-person interview format, while others have adapted their processes to accommodate AI tools. This article explores the trend and explains how to succeed at coding interviews in this new environment.

🔍 In remote coding interviews, AI helper tools make cheating possible, which reduces the validity of interview results. Many companies, such as Meta and Shopify, have begun trialing or allowing AI tools in interviews, but this does not mean traditional algorithmic interviews will be replaced.

📈 Large tech companies such as Google are putting more emphasis on in-person coding interviews because of AI, to ensure candidates have solid programming fundamentals. This trend suggests that, at least for the foreseeable future, algorithmic interviews will remain an important way to assess technical ability.

🧠 Although data structures and algorithms (DS&A) questions are rarely used in day-to-day work, they remain the interview standard at most large tech companies, because DS&A questions are the most scalable, cheat-resistant way to test problem-solving ability under pressure.

🚀 For hard problems in coding interviews, the MIKE framework offers a structured approach. It consists of four main strategies: minimally sketch the naive solution, identify upper and lower bounds, look for keywords/triggers, and employ boosters.

🔄 The first step of the MIKE framework is to sketch the naive solution, even if it is inefficient. This helps you understand the problem and provides a baseline for optimization.

📊 Identifying upper and lower bounds narrows the range of possible solutions and guides you toward the best approach. For example, the output size is a lower bound on the runtime, while the naive solution's runtime serves as an initial upper bound.

🔑 Keywords/triggers are clues in the problem statement that hint at specific techniques. For example, a sorted input array may trigger the idea of binary search, while problems involving parentheses usually suggest using a stack.

🆘 If the first three parts fail to crack the problem, a set of techniques can provide a “boost”, such as optimizing the brute-force solution, looking for useful properties, and lowering the difficulty.

In-person coding interviews are returning to Big Tech and some startups in 2025, now that LLMs are too good at solving coding challenges and a plethora of tools offer invisible AI “overlays” that help candidates cheat during remote interviews. As a result, remote coding interviews today serve up more noise and less signal than ever. In response, many employers are bringing back the in-person interview format – while others, like Shopify, accept AI tools and have adapted their processes accordingly.

Today, the most popular interview type remains the algorithmic coding interview, in which candidates have an algorithmic challenge, a whiteboard, and 30-60 minutes to come up with a solution. So, how do you succeed in this new wave of old-style tech job interviews? For some professionals, the approach may not be crystal clear if remote interviews have been the norm.

To find out, I turned to one of the best people to tackle this question: Mike Mroczka, lead author of the book Beyond Cracking the Coding Interview. It is the official sequel to the classic coding interview preparation “bible,” Cracking the Coding Interview. Reading the original title first is not a prerequisite for Beyond Cracking the Coding Interview, Mike says. If you are interested in this updated book, you can read nine chapters for free (the book has 43 chapters in total).

Mike worked at Google for nearly 4 years, and has been a dedicated interview coach for a decade. In this article, he shares his approach for acing in-person interviews with a framework aptly named “MIKE,” and advice and tactics for getting unstuck in tough coding interviews. We cover:

    The reality of interviews at Big Tech. Data suggests in-person coding interviews are here to stay: the process has worked for decades and is well understood, so it’s unlikely to be discarded and forgotten. Also: the interview / reality disconnect

    MIKE framework: a framework for getting unstuck during coding interviews

    Minimally sketch the naive solution

    Identify upper and lower bounds

    Keywords in the problem

    Employ boosters when stuck

    Importance of preparation and practice. Practice makes all the difference: consider recording yourself in mock interviews, then analyse your performance

1. The reality of interviews at Big Tech

Several companies have started to embrace AI coding tools for interviews, including Meta, which is currently trialing them. Elsewhere, Canva and Shopify (as mentioned above), encourage candidates to use AI tools during interviews. But adoption like this is far from being a death knell for tried and true interview formats, according to research.

Data from leading mock interview site interviewing.io shows AI has overwhelmingly NOT made algorithmic interviews redundant at Big Tech. It surveyed 52 Big Tech interviewers, mostly at FAANG companies – Facebook, Amazon, Apple, Netflix, and Google – and also at a few large scaleups like Stripe. Aline Lerner, founder at interviewing.io, summarizes the data collected:

“Answering the question of whether AI has changed interview processes:

    93% of respondents have confirmed that AI has not driven any changes to their interview process.

    Of the 7% who reported changes, these consisted exclusively of adding cheating detection tools to their interviews, rather than modifications to the interview format itself.

None of the interviewers reported that their companies are moving away from algorithmic questions. Some interviewers did note they're no longer asking questions taken verbatim from LeetCode. Others are making modifications to pre-existing algorithmic questions that make those questions harder for LLMs to answer, asking more follow-up questions of candidates, and/or are incorporating additional discussion into interviews.

It looks like algorithmic questions are here to stay – at least for the foreseeable future.”

Search giant Google is doubling down on in-person interviews because of AI. In June, on the Lex Fridman podcast, CEO Sundar Pichai confirmed that Google is bringing back in-person interviews (emphasis mine):

Lex Fridman: “Google has legendary coding interviews, like rigorous interviews for the engineers. Can you comment on how that has changed in the era of AI? The whiteboard interview, I assume, is not allowed to have some prompts.”

Sundar Pichai: “We are making sure we’ll introduce at least one round of in-person interviews for people just to make sure the fundamentals are there. I think they’ll end up being important, but it’s an equally important skill. Look, if you can use these tools to generate better code, I think that’s an asset. So overall, I think it’s a massive positive”.

Alternatives to in-person algorithmic coding interviews have downsides. The most popular other options are:

Trial days or trial weeks. A valid criticism of 30-60 minute coding interviews in any format is that they don’t mirror the actual work of engineers. So, why not create an interview that does? Trial days and weeks are the result in both the remote and in-person format, in which candidates join a team and ship a feature to production. Linear uses remote trial days in their interview process, and Automattic holds an async coding challenge as the final “coding” interview, which can take days to complete. The downside is that trial days can add up to a week of time invested. Linear pays candidates for this, but the demand on time can make it hard for some people to commit to.

The biggest downside of in-person interviews is obviously the cost: candidates must travel to an office, and overnight accommodation may be required. Some employers, like Big Tech companies, usually foot such bills. For candidates interviewing on Fridays, this process might turn into a free city break, as happened when Uber flew me out from London to interview at its office in Amsterdam, across the North Sea.


With that, it’s over to Mike (and with thanks to the book’s co-author Nil Mamano for their help in editing the below post)…

Interview / reality disconnect

Market forces and AI-enabled cheating are changing technical interviews, by forcing a move away from verbatim LeetCode questions back to in-person interviews – or something else entirely. In this new world, knowing how to think outweighs memorizing lists of problems. That’s why we decided to write a sequel to “Cracking the Coding Interview.” Our goal with this new book is to teach the “why” rather than the “what”, and to go deep into problem-solving approaches.

Let’s acknowledge the elephant in the room: data structure and algorithm (DS&A) questions have very little to do with software engineers’ daily lives. When was the last time you implemented a binary search tree from scratch, or used backtracking at work? For many people, the answer is “never, except in an interview.”

These interviews favor those with time to grind LeetCode. They focus on niche topics most engineers never use, which can make the process feel disconnected from reality, and frankly, a bit demoralizing.

Yet despite this, DS&A interviews remain the industry standard at most major tech companies. Why’s that?

It’s because hiring managers know something many engineers overlook: there simply aren’t better alternatives. Take-homes are easy to cheat on, and system design and behavioral interviews are harder to scale with limited questions and non-technical answers. All the while, DS&A questions remain the most scalable, cheat-resistant way to test problem-solving skills under pressure.

Amid AI-powered cheating and headlines declaring LeetCode dead, Big Tech follows the path of least resistance: rather than invent something unproven, they shift back to in-person interviews to make cheating harder. This format may evolve, but time-pressured coding isn’t going anywhere.

So, rather than fighting the system, it’s more pragmatic to embrace the “suck”, and learn to become great at these interviews.

2. The MIKE framework: getting unstuck

Coding interviews are meant to challenge you. Even seasoned engineers hit roadblocks, and that’s not failure; it’s the point. Interviewers want to see how you respond when the path forward isn’t obvious. Do you freeze, flail, or push forward with a plan?

That’s where the MIKE framework comes in. Based on years of real interview experience and refined by teaching thousands of engineers, MIKE is a structured, reliable approach for navigating moments of uncertainty. Success doesn’t come from avoiding confusion, it comes from knowing exactly what to do when it hits.

The four components of MIKE

The MIKE framework consists of four main problem-solving strategies that experienced engineers employ to solve problems when they get stuck:

    Minimally sketch the naive solution

    Identify upper and lower bounds

    Keywords in the problem

    Employ boosters when stuck

This framework isn’t revolutionary; it simply spells out what good problem solvers do instinctively.

Let’s explore each component in detail.

3. M: Minimally sketch the naive solution

When you’re stuck, the first step is to stop worrying about optimization and just find a working approach. Don’t jump into code, just sketch the simplest brute force solution that solves the problem.

Why Start Naively?

How to Sketch a Naive Solution

You might ask what “sketch” means in this context. You shouldn’t code a solution, but minimally sketching it is a good use of time. For this, I recommend “indented English”, a middle ground between code and vaguely describing it out loud. It skips syntax and focuses on the code structure using indentation. Avoid full-blown pseudocode, with which it’s too easy to slip into writing real (often buggy) code. Consider the “indented English” on the left which focuses on the high-level idea, vs. the “pseudocode” on the right that has a bug.

The implementation on the right has a bug. Do you see it? The solution is in italics at the end of this section

Pitfalls to avoid:

Starting with a naive solution gives clarity, confidence, and a launchpad for optimization. Once that’s locked in, then you can move on to performance.

The solution to the bug in the code: we are mixing up values and indices – a common mistake in Python. The line “for row in grid” binds row to each row itself (a list), not to an index. Because of this, the following line (“for col in len(grid[row])”) fails: grid[row] needs an integer index, and len(...) on its own is not iterable without range().

The correct code would be this:

for row in range(len(grid)):
    for col in range(len(grid[row])):
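To make the fix concrete, here is a minimal runnable sketch of the buggy and correct loops; the grid values below are hypothetical, since the article’s own example is shown as an image:

grid = [[1, 2, 3],
        [4, 5, 6]]

# Buggy version: row is bound to each row (a list), so it cannot index grid,
# and len(...) is not iterable without range().
# for row in grid:
#     for col in len(grid[row]):   # raises TypeError
#         ...

# Correct version: iterate over indices.
for row in range(len(grid)):
    for col in range(len(grid[row])):
        print(row, col, grid[row][col])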

Applying step “M” to a real interview problem:

Problem: Increasing Triplet Subsequence

Given an integer array arr, return true if there exists a triple of indices (i, j, k) such that i < j < k and arr[i] < arr[j] < arr[k]. If no such indices exist, return false.

Example 1: arr = [8, 5, 4, 9, 7, 6, 5, 6, 4]
Output: True. There are three values in increasing order (4 < 5 < 6) and their indices are also in increasing order (2 < 6 < 7).

Example 2: arr = [9, 8, 7, 6]
Output: False. No such triplet exists.

Example 3: arr = [3, 2, 6, 1, 5, 8]
Output: True. There are five valid triplets:
• 3, 6, 8 (indices 0, 2, 5)
• 3, 5, 8 (indices 0, 4, 5)
• 2, 6, 8 (indices 1, 2, 5)
• 2, 5, 8 (indices 1, 4, 5)
• 1, 5, 8 (indices 3, 4, 5)

If your spidey senses are tingling, it may be because you’ve heard of a similar problem called 3Sum. The same brute force solution with nested loops would work here and look like this:

Algo 1: Nested loops
n = len(arr)
Time: O(n³)
Space: O(1)

valid_triplet(arr):
  for each index i in arr
    for each index j in arr
      for each index k in arr
        // if indices and values are in increasing order
        if i < j < k and arr[i] < arr[j] < arr[k]
          return True
  return False

The skeleton of a solution can be sketched in just a minute or two. While our approach is not optimal, it helps convey to an interviewer that you understand the question and the naive way to tackle the problem. And it’s a starting point for optimizations.
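For reference, the same brute force written as runnable Python might look like the sketch below; looping directly over index ranges with i < j < k keeps the inner check simple.

def valid_triplet(arr):
    # Brute force: try every combination of indices i < j < k.
    # Time: O(n^3), Space: O(1)
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if arr[i] < arr[j] < arr[k]:
                    return True
    return False

print(valid_triplet([8, 5, 4, 9, 7, 6, 5, 6, 4]))  # True
print(valid_triplet([9, 8, 7, 6]))                  # False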

We will continue to revisit this problem throughout as we work through the framework.

4. I: Identify upper and lower bounds

After minimally sketching a naive solution, we suggest identifying the upper and lower bounds for your algorithm’s time complexity (the next steps can be in any order). This process of identifying boundaries, which we call boundary thinking, helps narrow down the range of possible solutions and guides you toward the optimal approach.

What is boundary thinking?

A lower bound establishes a threshold below which no solutions can exist. If a problem has a lower bound of O(n), it means we can’t solve it in O(log n) time, or anything faster than O(n). This helps us avoid wasting time on trying to optimize beyond what’s theoretically possible.

An upper bound establishes a threshold above which solutions are not worth considering. While we can always create slower algorithms, the upper bound helps us discard approaches that are too inefficient to pass the interview.

Establishing lower bounds

There are two main types of lower bounds to consider:

Output-size lower bound: This one is straightforward: the output size is a lower bound for the runtime. With a sequential algorithm, we can’t output an array of size O(n) in less than O(n) time; that’s how long it takes just to write it.

For example, in the Increasing Triplet Subsequence problem, if we were asked to return all increasing triplets, the output size would be cubic (think of an already-sorted array), meaning that our brute force algorithm would already be optimal.
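To see why the output is cubic: in a strictly increasing array, every choice of three positions i < j < k is a valid triplet, so the count is C(n, 3) = n(n-1)(n-2)/6 = Θ(n³), and no algorithm can return all of them faster than it takes to write them out.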

If the output is just a number or Boolean, the O(1) lower bound isn’t very constraining, but for problems with potentially large outputs – like finding all combinations or permutations – this bound can be quite useful.

Task-based lower bound: Sometimes, regardless of the approach, there’s a fundamental “task” that any solution must perform. The runtime of this task establishes a lower bound.

For instance, if we need to find a specific value in an unsorted array, we must check every element in the worst case, giving us a task-based lower bound of O(n). This immediately rules out any approaches that would work in O(log n) or O(1) time.

Common task-based lower bounds include:

When we have both output-size and task-based lower bounds, we take the greater of the two as our final lower bound.

Establishing upper bounds

Similarly, there are two main approaches to establishing upper bounds:

Naive upper bound: The time complexity of the naive solution serves as an initial upper bound. If your brute force approach runs in O(n³) time, then you know you don’t need to consider anything slower.

This bound is particularly useful because you’ve already done the work of sketching the naive solution in the first step of the MIKE framework.

Time Limit Exceeded (TLE) upper bound: This one is a bit strange. In online coding assessments, when a submitted solution is too slow, we get a “Time Limit Exceeded” message. We can anticipate whether a solution will be too slow in an online assessment or interview by understanding the problem’s constraints.

For example:
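As a rough illustration (these thresholds are common rules of thumb, not figures from the book), input sizes often map to the slowest acceptable complexity roughly like this:

# Rough heuristics only; real limits depend on the judge, language, and constant factors.
ROUGH_TLE_GUIDE = [
    ("n up to ~20",        "O(2^n) or O(n!) can be acceptable"),
    ("n up to ~500",       "O(n^3) is usually fine"),
    ("n up to ~10,000",    "aim for O(n^2) or better"),
    ("n up to ~1,000,000", "aim for O(n log n) or O(n)"),
    ("n of 10^8 or more",  "aim for O(n), O(log n), or O(1)"),
]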

These aren’t hard rules, but they are guidance. If you aren’t given problem constraints during an in-person interview, you can always ask the interviewer for them, but be mindful that not everyone will provide useful responses to such a nuanced question.

When we have both the naive and TLE upper bounds, we take the lesser of the two as our final upper bound.

How does identifying boundaries help?

We know what you’re thinking: “that’s a lot of effort. How does it help?” To find out, let’s take a practical example with a new problem.

Problem: maximum tunnel depth

You're given an n×n binary matrix, tunnel, representing a tunnel network that archeologists have excavated. The ground level is above the first row. The first row is at depth 0, and each subsequent row is deeper underground.

The tunnel starts at the top-left and top-right corners (the cells (0, 0) and (0, n - 1) are always 1’s) and consists of a single connected network. Since the tunnel represents a single connected component in the grid, an input like this would not be valid because there is a 1 that is unreachable:

tunnel = [
  [1,0,0,1],
  [1,1,1,1],
  [0,0,0,0],
  [0,0,0,1],  <- Disconnected 1, so the input is not valid!
]

Write a function that returns the maximum tunnel depth.

Pause for a minute; really stop and think through the following questions before proceeding:

Max tunnel depth solution

Hopefully, you tried to answer the questions above before reading this!

How would you go about solving this problem?

If you’re like most engineers with a reasonable amount of data structures and algorithms experience, your first thought might be to use a graph traversal, like a depth-first search or breadth-first search. Given that the matrix is square, our time complexity would be O(n²), and the space complexity would be similar since we have to track visited cells to avoid cycles (plus the recursive call stack in a DFS).

What does the “naive” solution look like?

The first step in our MIKE framework helps us identify this second (simpler!) approach. If we step back and ask if there are other ways to find the maximum depth, most people notice a significantly more straightforward solution: traverse the tunnel matrix with a pair of nested loops.

Looping through the tunnel directly lets us record the deepest point without the need to write a whole graph traversal. Because the tunnel input is guaranteed to always be connected, we don’t have to worry about finding stray 1s in the tunnel that would throw off our depth count.

The time complexity would be identical to the graph traversal in the worst case, O(n²), but the space complexity is actually better since it is constant (O(1)).
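As a sketch (assuming tunnel is a list of lists of 0s and 1s), the naive scan could look like this:

def max_tunnel_depth_naive(tunnel):
    # Scan every cell and record the deepest row that contains part of the tunnel.
    # Time: O(n^2), Space: O(1)
    deepest = 0
    for row in range(len(tunnel)):
        for col in range(len(tunnel[row])):
            if tunnel[row][col] == 1:
                deepest = row
    return deepest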

What happens when we apply boundary thinking to the problem?

The naive approach alone is a noteworthy result, since it is uncommon for the straightforward approach to beat a standard algorithmic one. However, if we apply boundary thinking, we can squeeze more out of this problem.

We’ve already identified a constant space solution with a quadratic time complexity, so if there are improvements to be made, they will need to be with the time complexity. Is this possible? Let’s identify the upper and lower bounds to find out.

Naive upper bound: O(n²) to iterate through the entire matrix.

Lower bound: The output is just a number, so the output-size lower bound is O(1). The task-based lower bound isn’t immediately clear: do we need to check every cell? Your gut might say “yes”, but we should default to the output-size lower bound if we’re uncertain.

With bounds of O(1) to O(n²), we should consider approaches from constant time upwards, carefully considering each option within the range:

Mapping time complexities to possible algorithmic approaches helps narrow the choices. These mappings give clues for the options to try: linear scans, binary search, sorting, and trees.

After just a minute or two of experimentation, it is fairly obvious that sorting breaks our input and doesn’t seem to help. We can similarly eliminate tree-based solutions like heaps, as there is no clear way to use them. The binary search idea is intriguing, but how can we apply it? Consider the following input. Can we repeatedly halve something to help narrow the search range?

tunnel = [
  [1,0,0,1],
  [1,0,1,1],
  [1,1,1,0],
  [0,0,0,0],
  [0,0,0,0],
  [0,0,0,0],
  [0,0,0,0],
]

If we check the middle row in the tunnel matrix and scan through it, we will always find one of two scenarios:

    We encounter a 1 in the scanned row. Since we know this must be part of the tunnel, we eliminate the top half of the matrix from our search. We know the maximum depth is at least that middle row or lower in the matrix.

    We do not encounter a 1 in the scanned row. Since the tunnel is a single connected component that starts at the top, any row without a 1 means no further rows can contain the connected tunnel.

Repeating this binary search approach will lead to an O(n log n) solution that’s better than our naive O(n²) approach. Using boundary thinking helps us identify a non-obvious optimization that significantly improves the solution’s efficiency.
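A minimal Python sketch of this idea, assuming (as the problem statement guarantees) that the tunnel is a single connected component anchored at the top row:

def max_tunnel_depth(tunnel):
    # Rows containing part of the tunnel form a prefix of the matrix, because the
    # tunnel is one connected component starting at the top. Binary-search for the
    # deepest row that still contains a 1.
    # Time: O(n log n), Space: O(1)
    lo, hi = 0, len(tunnel) - 1
    deepest = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if 1 in tunnel[mid]:      # O(n) scan of the middle row
            deepest = mid         # the tunnel reaches at least this depth
            lo = mid + 1          # keep searching deeper rows
        else:
            hi = mid - 1          # nothing here or below; search shallower rows
    return deepest

Each probe is a linear row scan and there are O(log n) probes, which is where the O(n log n) comes from.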

Boundary thinking is a powerful tool that avoids wasting time on impossible optimizations or unnecessarily complex solutions. By establishing clear bounds, you focus your problem-solving efforts where they’re most likely to bear fruit.

Revisiting the increasing triplet subsequence with boundary thinking

Let’s see how boundary thinking applies to our Increasing Triplet Subsequence problem:

    Upper bound: As we sketched in step ‘M’, the naive solution is three nested loops with a time complexity of O(n³), which serves as the naive upper bound. No constraints are given, so we can’t deduce a TLE upper bound.

    Lower bound: The problem asks for a Boolean output (true or false). Thus, the output-size lower bound is O(1), which isn’t very restrictive. To determine if an increasing triplet exists, we must, at a minimum, inspect the elements of the input array. Therefore, the task-based lower bound is O(n). Unlike the tunnel problem, there is nothing obvious to repeatedly halve, so it’s hard to imagine finding a valid triplet without looking at most – if not all – elements in the array in some cases.

Given our upper bound of O(n³) and our lower bound of O(n), we should focus on algorithms that take O(n) or O(n log n) time, with O(n²) as a possibility (O(n²) is the optimal runtime for 3Sum).

5. K: Keywords in the problem

The third step in the MIKE framework is to look for keywords that act as triggers to try a specific approach with a particular data structure or algorithm. This technique, which we call “trigger thinking”, is a powerful shortcut that experienced engineers use intuitively. Triggers might show up in the input type, output type, constraints, problem description, or even during failed attempts to solve the problem.

