Master DSA Patterns,
Land Any Tech Job.
Every pattern explained from scratch — visual walkthroughs, code breakdowns, interview strategy.
⏱Big O Notation
Time & Space Complexity
🤔 What even IS Big O?
Okay so imagine you have a function that searches through a list. If the list has 10 items it takes 10 steps. If it has 1,000 items it takes 1,000 steps. If it has a million items... you get it. We say this is O(n) — it scales linearly with input size.
Big O is a way to describe how much slower your code gets as the data gets bigger. We usually measure the worst case, drop constants (O(3n) = O(n)), and drop smaller terms (O(n² + n) = O(n²)).
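As a quick illustration of these rules (a sketch, not from the original examples): a function that makes two sequential passes does roughly 2n steps, but we still call it O(n) because constants are dropped.

```python
def min_and_max(arr):
    # Pass 1: n steps to find the minimum
    lo = arr[0]
    for x in arr:
        if x < lo: lo = x
    # Pass 2: another n steps — total ~2n, but O(2n) = O(n)
    hi = arr[0]
    for x in arr:
        if x > hi: hi = x
    return lo, hi
```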
📊 The Big-O Ladder (memorize this)
O(1) → O(log n) → O(n) → O(n log n) → O(n²) → O(2ⁿ) → O(n!)
🎤 What interviewers actually care about
Every single coding question ends with "what's the time and space complexity?" If you cannot answer this, you will lose points even if your code works. The interviewer wants to know:
- Can you analyze nested loops? (outer O(n) × inner O(n) = O(n²))
- Do you know when O(n) space is unavoidable vs when you can optimize?
- Can you explain your trade-offs? (faster time = more space is often the deal)
🔢 Space complexity — the forgotten one
Space complexity = how much extra memory your algorithm uses. A recursive function that calls itself n times has O(n) call stack space even if it does not create any data structures. Creating a hashmap with n entries = O(n) space. These matter in real systems!
Always state both time AND space. HashMap lookup = O(1). Sorting = O(n log n). Recursion depth = space complexity.
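To make the call-stack point concrete, here is a small sketch (hypothetical helper names): both functions compute the same sum in O(n) time, but the recursive one uses O(n) space purely in stack frames.

```python
def total_recursive(arr, i=0):
    # O(n) call-stack space: one frame per element,
    # even though no data structure is created
    if i == len(arr):
        return 0
    return arr[i] + total_recursive(arr, i + 1)

def total_iterative(arr):
    # Same O(n) time, but O(1) extra space: a single accumulator
    s = 0
    for x in arr:
        s += x
    return s
```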
7-Step Framework
- Read problem twice
- Ask clarifying questions
- Brute force first
- Draw pseudocode
- Code best approach
- Test edge cases
- Study other solutions
# O(1) — constant
def get_first(arr): return arr[0]

# O(n) — single loop
def find(arr, t):
    for x in arr:
        if x == t: return True
    return False

# O(n²) — nested loops
def pairs(arr):
    for i in arr:
        for j in arr: print(i, j)

# O(log n) — halves each step
def bsearch(arr, t):
    l, r = 0, len(arr)-1
    while l <= r:
        m = (l+r)//2
        if arr[m]==t: return m
        elif arr[m]<t: l = m+1
        else: r = m-1
    return -1
▦Arrays
HashMap · Two Pointers · Sliding Window · Prefix Sum · Matrix
📦 Arrays — the most fundamental data structure
An array is a contiguous block of memory where every element sits right next to each other. This means random access is O(1) — you can jump to any index instantly. But insertion/deletion in the middle? That's O(n) because you have to shift everything.
In Python, a list IS your array. In Java it is either int[] (fixed size) or ArrayList (dynamic). In interviews, when they say "array problem," they usually mean a Python list or Java ArrayList.
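A quick sketch of those costs on a Python list (the values here are just illustration):

```python
arr = [10, 20, 30, 40]

x = arr[2]        # O(1): jump straight to index 2
arr.append(50)    # amortized O(1): add at the end
arr.insert(0, 5)  # O(n): every element shifts right one slot
arr.pop(0)        # O(n): every element shifts left one slot
arr.pop()         # O(1): remove from the end
```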
🗺️ The 5 Array Patterns — your complete toolkit
1. HashMap: When you need to check "have I seen this before?" in O(1). The trick: trade O(n) space to save O(n) time. Classic: Two Sum — instead of O(n²) nested loops, one pass with a map.
2. Two Pointers: Left pointer starts at index 0, right at the end. They move toward each other based on some condition. Only works on sorted arrays. Turns O(n²) into O(n). Think: "is there a pair that sums to target?"
3. Sliding Window: A window of size k (or variable size) slides through the array. You maintain some property (sum, max, unique chars) as you add one element on the right and remove one on the left. O(n) instead of O(n·k).
4. Prefix Sum: Build a running-total array once, then answer any range sum query in O(1). prefix[r+1] - prefix[l] gives the sum of any subarray. Build once, use forever.
5. Matrix: 2D array problems. DFS/BFS to explore connected regions. Think of the grid as a graph where each cell connects to its 4 neighbors.
🎤 Interview mindset for arrays
When you see an array problem, run through this mental checklist:
- Is it sorted? → consider two pointers or binary search
- Does it involve pairs/triplets? → hashmap or two pointers
- Does it involve a contiguous subarray? → sliding window or prefix sum
- Do you need to check for duplicates or existence? → hashset
- Is it a 2D grid? → DFS/BFS with direction array
[(1,0),(-1,0),(0,1),(0,-1)]
💼 What interviewers test on arrays
- Can you avoid brute force? Two nested loops screams "I do not know the pattern." Show you know the hashmap trick.
- Do you handle edge cases? Empty array, single element, all duplicates, negative numbers.
- Can you optimize space? "Can we do it in O(1) extra space?" is a common follow-up.
HashMap = key→value O(1) lookup. Replace O(n) scans with O(1) membership checks.
seen = {}
for i, n in enumerate(nums):
    comp = target - n
    if comp in seen: return [seen[comp], i]
    seen[n] = i

return len(nums) != len(set(nums)) is a 1-liner but reads all elements twice. The explicit loop exits early on the first duplicate.

def containsDuplicate(nums):
    seen = set()
    for n in nums:
        if n in seen: return True
        seen.add(n)
    return False
# Alt: return len(nums) != len(set(nums))

def missingNumber(nums):
    n = len(nums)
    return n*(n+1)//2 - sum(nums)

def findDisappearedNumbers(nums):
    seen = set(nums)
    return [i for i in range(1, len(nums)+1)
            if i not in seen]

def twoSum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        comp = target - n
        if comp in seen: return [seen[comp], i]
        seen[n] = i

def smallerNumbersThanCurrent(nums):
    s = sorted(nums); rank = {}
    for i, v in enumerate(s):
        if v not in rank: rank[v] = i
    return [rank[n] for n in nums]

When: Sorted array + find a pair/triplet. Start L=0, R=end. Move based on comparison.
| Condition | Action | Why |
|---|---|---|
| sum < target | L++ | Need bigger value |
| sum > target | R-- | Need smaller value |
| sum == target | Record + both move | Found a pair |
def two_sum_sorted(nums, target):
    l, r = 0, len(nums) - 1
    while l < r:
        s = nums[l] + nums[r]
        if s == target: return [l, r]
        elif s < target: l += 1
        else: r -= 1
    return []  # no pair found
⚡ Interview insight: Two pointers turns O(n²) brute force into O(n). Works because array is sorted — moving a pointer always increases or decreases the sum predictably.
def minTimeToVisitAllPoints(points):
    total = 0
    for i in range(len(points)-1):
        dx = abs(points[i+1][0]-points[i][0])
        dy = abs(points[i+1][1]-points[i][1])
        total += max(dx, dy)
    return total

def sortedSquares(nums):
    n = len(nums)
    res, l, r, pos = [0]*n, 0, n-1, n-1
    while l <= r:
        ls, rs = nums[l]**2, nums[r]**2
        if ls > rs: res[pos]=ls; l+=1
        else: res[pos]=rs; r-=1
        pos -= 1
    return res

def threeSum(nums):
    nums.sort(); res = []
    for i in range(len(nums)-2):
        if i>0 and nums[i]==nums[i-1]: continue
        l, r = i+1, len(nums)-1
        while l < r:
            s = nums[i]+nums[l]+nums[r]
            if s==0:
                res.append([nums[i],nums[l],nums[r]])
                while l<r and nums[l]==nums[l+1]: l+=1
                while l<r and nums[r]==nums[r-1]: r-=1
                l+=1; r-=1
            elif s<0: l+=1
            else: r-=1
    return res

def minimumAbsDifference(arr):
    arr.sort()
    min_d = min(arr[i+1]-arr[i] for i in range(len(arr)-1))
    return [[arr[i],arr[i+1]] for i in range(len(arr)-1)
            if arr[i+1]-arr[i]==min_d]

Expand right always. Shrink left only when the constraint is violated. Each element enters/exits once → O(n).
# Variable window:
l = total = 0
for r in range(len(nums)):
    total += nums[r]
    while total >= target:
        min_len = min(min_len, r-l+1)
        total -= nums[l]; l += 1

# Fixed window (size k):
for i, n in enumerate(nums):
    window.add(n)
    if len(window) > k: window.remove(nums[i-k])

def longestMountain(arr):
    n, res = len(arr), 0
    for i in range(1, n-1):
        if arr[i-1]<arr[i]>arr[i+1]:
            l, r = i-1, i+1
            while l>0 and arr[l-1]<arr[l]: l-=1
            while r<n-1 and arr[r]>arr[r+1]: r+=1
            res = max(res, r-l+1)
    return res

def containsNearbyDuplicate(nums, k):
    window = set()
    for i, n in enumerate(nums):
        if n in window: return True
        window.add(n)
        if len(window) > k: window.remove(nums[i-k])
    return False

def minSubArrayLen(target, nums):
    l = total = 0; min_len = float('inf')
    for r in range(len(nums)):
        total += nums[r]
        while total >= target:
            min_len = min(min_len, r-l+1)
            total -= nums[l]; l += 1
    return min_len if min_len != float('inf') else 0

Build once O(n), query any range in O(1).
prefix = [0]
for n in nums: prefix.append(prefix[-1] + n)
# sum(l..r): prefix[r+1] - prefix[l]
def maxProfit(prices):
    min_p = float('inf'); max_p = 0
    for p in prices:
        min_p = min(min_p, p)
        max_p = max(max_p, p - min_p)
    return max_p

class NumArray:
    def __init__(self, nums):
        self.p = [0]
        for n in nums: self.p.append(self.p[-1]+n)
    def sumRange(self, l, r): return self.p[r+1]-self.p[l]

Grid DFS: find cell, recurse 4 directions, mark visited. Spiral: 4 boundary vars, shrink after each side.
def spiralOrder(matrix):
    res = []
    T, B = 0, len(matrix)-1
    L, R = 0, len(matrix[0])-1
    while T <= B and L <= R:
        for c in range(L, R+1): res.append(matrix[T][c])
        T += 1  # shrink AFTER finishing the side, not per element
        for r in range(T, B+1): res.append(matrix[r][R])
        R -= 1
        if T <= B:
            for c in range(R, L-1, -1): res.append(matrix[B][c])
            B -= 1
        if L <= R:
            for r in range(B, T-1, -1): res.append(matrix[r][L])
            L += 1
    return res

def numIslands(grid):
    def dfs(r, c):
        if (r<0 or c<0 or r>=len(grid) or
            c>=len(grid[0]) or grid[r][c]=='0'): return
        grid[r][c]='0'
        dfs(r+1,c); dfs(r-1,c); dfs(r,c+1); dfs(r,c-1)
    count = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c]=='1': count+=1; dfs(r,c)
    return count
⚡Bit Manipulation
XOR · AND/OR · Shift Operators
🔢 Bits — the stuff computers actually think in
Every number in your computer is ones and zeros. The number 5 is 101 in binary. The number 6 is 110. Bit manipulation lets you do operations directly on these bits — and it is insanely fast because CPUs handle this in a single clock cycle.
You will not need this in most real jobs, but interviewers LOVE it because it separates people who really understand computers from those who do not.
🛠️ The 6 bitwise operators
✨ XOR is the trick
XOR has these properties: n ^ n = 0 (same number cancels), n ^ 0 = n (zero does nothing). So if you XOR all numbers in [4,1,2,1,2], all the duplicates cancel out and you are left with the lone 4. No extra space needed!
4 ^ 1 ^ 2 ^ 1 ^ 2 = 4 ^ (1^1) ^ (2^2) = 4 ^ 0 ^ 0 = 4
💼 Interview angle
Bit questions are rare but when they appear, they test if you know XOR. The classic: "Find the number that appears once, all others appear twice" → XOR everything. "Count number of 1 bits" → n & (n-1) removes lowest set bit.
Use XOR for O(1) space "find unique" problems. Other tricks:
| Op | Code | Use |
|---|---|---|
| Check odd | n&1 | last bit |
| Power of 2 | n&(n-1)==0 | single bit |
| ×2 / ÷2 | n<<1 / n>>1 | shifts |
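The n & (n-1) trick mentioned above, sketched as a bit counter: each iteration clears the lowest set bit, so the loop runs once per 1-bit instead of once per bit position.

```python
def count_bits(n):
    count = 0
    while n:
        n &= n - 1  # clears the lowest set bit
        count += 1
    return count

# 13 = 0b1101 → three 1-bits
```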
def singleNumber(nums):
    result = 0
    for n in nums: result ^= n
    return result
# [4,1,2,1,2]: 0^4=4, ^1=5, ^2=7, ^1=6, ^2=4 ✓
🧠Dynamic Programming
Memoization · Tabulation · Kadane's
🧠 Dynamic Programming — the boss level
DP is the topic that trips up the most people. But here's the secret: DP is recursion + caching. If you've ever written a recursive function that solves a problem by breaking it into smaller problems, you've basically done DP.
The key insight: if a recursive solution solves the same subproblems multiple times, cache those results. That's DP. Without caching, Fibonacci is O(2ⁿ). With caching (memoization), it is O(n). That's the power.
🔑 The two conditions for DP
1. Overlapping subproblems: The same smaller problem appears multiple times in the recursion. Without this, caching does not help. Example: fib(5) needs fib(4) and fib(3). fib(4) also needs fib(3). Same subproblem!
2. Optimal substructure: The optimal solution to the big problem is built from optimal solutions to subproblems. Example: the minimum coins for amount 7 can be built from minimum coins for 7-coin_value.
🏗️ Top-Down vs Bottom-Up
Top-Down (Memoization): Write the natural recursive solution, add a cache (@lru_cache or a dict). Start from the answer and recurse down. Easier to write.
Bottom-Up (Tabulation): Build a table starting from the simplest case up to the answer. A for loop fills dp[0], dp[1]... dp[n]. No recursion. Usually slightly faster due to no function call overhead.
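Both styles side by side on Fibonacci (a sketch: the function names are mine). Top-down is the natural recursion plus @lru_cache; bottom-up fills values from the base cases with a plain loop.

```python
from functools import lru_cache

# Top-down (memoization): natural recursion + a cache
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1: return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): build from base cases, no recursion
def fib_tab(n):
    if n <= 1: return n
    a, b = 0, 1  # dp[0], dp[1]
    for _ in range(2, n + 1):
        a, b = b, a + b  # rolling variables: O(1) space
    return b
```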
⚡ The mental model for DP
"DP = 'I could have gotten here from several previous states. Which one gives me the best result?' Define what each dp[i] means, figure out which prior states led here, write it as a formula."
Three steps: (1) Define dp[i] in ONE plain English sentence. (2) Write the recurrence from smaller subproblems. (3) Fill base cases. If you can't define dp[i] in one sentence — your state is wrong. Restart.
🔑 Spot the pattern — DP keywords
If the problem says any of these, think DP: "minimum/maximum number of ways," "how many ways," "is it possible," "longest/shortest subsequence," "can you reach." If you can define dp[i] as "best answer for the first i elements," you are golden.
💼 What interviewers look for in DP
- Define dp[i] clearly first — say it out loud: "dp[i] = minimum coins to make amount i"
- State the recurrence — "dp[i] = min(dp[i-c] + 1 for c in coins)"
- Base cases — dp[0] = 0 (zero coins for amount zero)
- Space optimization — if dp[i] only needs dp[i-1] and dp[i-2], you can use rolling variables instead of the full array (O(1) space!)
Bottom-up DP: build the answer table from base cases up. No recursion, no stack overflow.
| Approach | Space | Speed | When to use |
|---|---|---|---|
| Memoization (top-down) | O(n) | Slightly slower | When recursion is more natural |
| Tabulation (bottom-up) | O(n) → O(1) | Faster | Always preferred in interviews |
def climbStairs(n):
    if n <= 2: return n
    a, b = 1, 2  # dp[1], dp[2]
    for _ in range(3, n+1):
        a, b = b, a + b  # rolling variables
    return b
⚡ Key insight: dp[i] = dp[i-1] + dp[i-2] — same as Fibonacci! Once you spot this recurrence, reduce O(n) space to O(1) by keeping only the last 2 values.
def climbStairs(n):
    if n <= 2: return n
    a, b = 1, 2
    for _ in range(3, n+1): a, b = b, a+b
    return b

def coinChange(coins, amount):
    dp = [float('inf')]*(amount+1); dp[0] = 0
    for i in range(1, amount+1):
        for coin in coins:
            if coin<=i: dp[i]=min(dp[i],dp[i-coin]+1)
    return dp[amount] if dp[amount]!=float('inf') else -1

def countBits(n):
    dp = [0]*(n+1)
    for i in range(1, n+1): dp[i] = dp[i>>1]+(i&1)
    return dp

def rob(nums):
    if not nums: return 0
    if len(nums)==1: return nums[0]
    prev2, prev1 = 0, 0
    for n in nums:
        curr = max(prev1, prev2+n)
        prev2, prev1 = prev1, curr
    return prev1
# dp[i] = max(skip house i, rob house i + best 2 ago)
# Rolling vars → O(1) space

At each element: extend (curr+n) OR restart (just n). A negative running sum hurts future subarrays → restart.

curr = max(n, curr + n)
res = max(res, curr)

def maxSubArray(nums):
    curr = res = nums[0]
    for n in nums[1:]:
        curr = max(n, curr+n)
        res = max(res, curr)
    return res
# answer=6, subarray=[4,-1,2,1]
↩Backtracking
Choose · Explore · Undo
🌳 Backtracking — exploring every possibility cleverly
Backtracking is for problems where you need to build all possible combinations. Think: all subsets, all permutations, solving a Sudoku. The key idea: try an option → recurse → undo the choice → try the next option. It's like a maze — you go down a path, and if it is a dead end, you back up and try another path.
📐 The universal backtracking template
def backtrack(start, path):
    result.append(path[:])        # 👈 record at every node (or only at leaves)
    for i in range(start, len(nums)):
        path.append(nums[i])      # ✅ make choice
        backtrack(i+1, path)      # 🔁 explore
        path.pop()                # ↩️ undo choice (BACKTRACK)
The path[:] is crucial — you need to copy the list. If you append path directly, every entry in result will point to the same list and they will all end up identical!
🌲 What the decision tree looks like
Every recursive call is a node in a tree. The start parameter controls which branches you can take (prevents duplicates by only going forward). For [1,2,3]: from root, you can pick 1, 2, or 3. From node [1], you can pick 2 or 3. From [1,2], you can only pick 3. It's a tree of choices!
✂️ Pruning — the optimization
If you know a path can never lead to a valid solution, cut it off early (prune). Example in Combinations: if the remaining numbers are not enough to fill the needed spots, skip. This turns theoretical O(2ⁿ) into something much faster in practice.
💼 Interview tips for backtracking
- Recognize the problem: "all combinations/permutations/subsets" = backtracking
- Always start with the template above and adapt it
- Mention pruning — shows you know optimization
- Time complexity is usually O(2ⁿ) or O(n!) — explain this upfront, it is expected
Draw decision tree FIRST. ALWAYS copy: res.append(path[:]).
def backtrack(path, start):
    result.append(path[:])        # copy!
    for i in range(start, n):
        path.append(nums[i])      # choose
        backtrack(path, i+1)      # explore
        path.pop()                # undo

| Type | Record when | Count |
|---|---|---|
| Subsets | Every node | 2ⁿ |
| Combos | len==k | C(n,k) |
| Perms | len==n | n! |

def letterCasePermutation(s):
    res = []
    def bt(i, curr):
        if i==len(s): res.append(curr); return
        if s[i].isdigit(): bt(i+1, curr+s[i])
        else:
            bt(i+1, curr+s[i].lower())
            bt(i+1, curr+s[i].upper())
    bt(0, ''); return res

def subsets(nums):
    res = []
    def bt(start, path):
        res.append(path[:])
        for i in range(start, len(nums)):
            path.append(nums[i]); bt(i+1,path); path.pop()
    bt(0, []); return res

def combine(n, k):
    res = []
    def bt(start, path):
        if len(path)==k: res.append(path[:]); return
        need = k-len(path)
        for i in range(start, n+1):
            if n-i+1<need: break
            path.append(i); bt(i+1,path); path.pop()
    bt(1,[]); return res

def permute(nums):
    res = []
    def bt(path, used):
        if len(path)==len(nums): res.append(path[:]); return
        for n in nums:
            if n in used: continue
            path.append(n); used.add(n); bt(path,used)
            path.pop(); used.remove(n)
    bt([],set()); return res
⛓Linked Lists
Fast/Slow · Reversal · Dummy Head
🔗 Linked Lists — chains of nodes
A linked list is a chain of nodes, where each node has a val and a next pointer to the next node. Unlike arrays, there is NO random access — to get to node 5, you must walk through nodes 1→2→3→4→5. Insertion/deletion is O(1) if you have the pointer, but finding a node is O(n).
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next  # pointer to next node, or None
🐢🐇 Fast & Slow Pointers (Floyd's Algorithm)
Start both pointers at head. Each step, slow moves 1 node, fast moves 2 nodes. By the time fast reaches the end, slow is exactly at the middle. It's like a slow runner and a fast runner on a circular track — if there is a cycle, the fast runner will lap the slow one. No cycle? Fast reaches null first.
↩️ Three-Pointer Reversal
To reverse a linked list, you need 3 pointers: prev, curr, nxt. Each step: save next, flip the arrow, advance. In one pass you've reversed the whole list.
prev, curr = None, head
while curr:
    nxt = curr.next   # 1. save next
    curr.next = prev  # 2. flip arrow ←
    prev = curr       # 3. advance prev
    curr = nxt        # 4. advance curr
return prev           # new head
🪆 Dummy Head — the edge case killer
When building or modifying a linked list, create a dummy node at the start: dummy = ListNode(0); dummy.next = head. Then return dummy.next. Why? Because if the head itself needs to be removed or modified, you would need special-case code. The dummy node means every node (including the original head) has a previous node, so your logic stays uniform.
💼 Interview tips for linked lists
- Always draw the list with arrows. Pointer manipulation is the #1 place bugs hide.
- Check for null first: if not head or not head.next: return head
- Use dummy head for merge/delete problems — saves you 10 lines of edge case handling
- Fast/slow is the answer to any "cycle" or "middle" question
If you flip curr.next = prev BEFORE saving nxt = curr.next, you lose the rest of the list. Always save nxt FIRST.
The tortoise and hare algorithm. Slow moves 1 step, fast moves 2. Works because if there is a cycle, fast will eventually lap slow inside it.
| Problem type | What to detect | Technique |
|---|---|---|
| Linked list cycle | fast laps slow | fast & slow meet |
| Find middle | fast reaches end | slow = middle |
| Cycle start | after meeting | reset one to head |
def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast: return True
    return False
⚡ Interview insight: "Why does resetting to head find the cycle start?" — Math: if meeting point is k steps into cycle, cycle start is k steps from head. Same distance → they meet at the start.
while fast and fast.next gives the left-middle. That's usually what the problem wants.

def middleNode(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next; fast = fast.next.next
    return slow

def hasCycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next; fast = fast.next.next
        if slow == fast: return True
    return False

def isPalindrome(head):
    slow = fast = head
    while fast and fast.next: slow = slow.next; fast = fast.next.next
    prev, curr = None, slow
    while curr:
        nxt = curr.next; curr.next = prev; prev = curr; curr = nxt
    l, r = head, prev
    while r:
        if l.val != r.val: return False
        l = l.next; r = r.next
    return True

Save nxt BEFORE flipping or you lose the rest of the list.
prev, curr = None, head
while curr:
    nxt = curr.next   # ① SAVE
    curr.next = prev  # ② FLIP
    prev = curr       # ③ advance
    curr = nxt        # ④ advance
return prev

def reverseList(head):
    prev, curr = None, head
    while curr:
        nxt = curr.next   # SAVE!
        curr.next = prev  # flip
        prev = curr; curr = nxt
    return prev

def reverseBetween(head, left, right):
    dummy = ListNode(0, head); prev = dummy
    for _ in range(left-1): prev = prev.next
    curr = prev.next
    for _ in range(right-left):
        nxt = curr.next; curr.next = nxt.next
        nxt.next = prev.next; prev.next = nxt
    return dummy.next

Dummy node before real head. All deletions handled uniformly. Return dummy.next.

dummy = ListNode(0); dummy.next = head
curr = dummy
while curr.next:
    if curr.next.val == val: curr.next = curr.next.next
    else: curr = curr.next
return dummy.next

def removeElements(head, val):
    dummy = ListNode(0); dummy.next = head; curr = dummy
    while curr.next:
        if curr.next.val == val: curr.next = curr.next.next
        else: curr = curr.next
    return dummy.next

def mergeTwoLists(l1, l2):
    dummy = curr = ListNode(0)
    while l1 and l2:
        if l1.val <= l2.val: curr.next = l1; l1 = l1.next
        else: curr.next = l2; l2 = l2.next
        curr = curr.next
    curr.next = l1 or l2
    return dummy.next
▤Stacks
LIFO · Min Tracking · Bracket Matching · RPN
📚 Stacks — Last In, First Out
A stack is like a stack of plates — you can only add or remove from the top. LIFO: Last In, First Out. In Python, a regular list works perfectly as a stack: append() to push, pop() to pop, [-1] to peek — all O(1).
stack = []
stack.append(5)  # push: [5]
stack.append(3)  # push: [5, 3]
stack.pop()      # pop → returns 3, stack = [5]
stack[-1]        # peek → 5 (no remove)
🧩 When to use a stack?
The classic stack use case: anything involving matching or "most recent unsettled". If you see an open bracket, push it. When you see a close bracket, pop from the stack and check if they match. The stack naturally tracks the most recently opened (unsettled) item.
Another pattern: when processing item X, you need quick access to "the most recent item that satisfies some condition." That's a monotonic stack.
📉 Monotonic Stack — the advanced pattern
A monotonic stack stays either always increasing or always decreasing. Used for "next greater element" type problems. When you encounter an element greater than the stack top, that element IS the "next greater" for everything it is larger than. Pop those elements, record their answers, then push the current element.
stack = []  # stores indices, values decrease (top = smallest)
for i, temp in enumerate(temperatures):
    while stack and temperatures[stack[-1]] < temp:
        idx = stack.pop()
        ans[idx] = i - idx  # days waited for warmer day
    stack.append(i)
💼 Interview angle on stacks
Interviewers test: balanced parentheses (easy), min stack (medium — track min with a parallel stack of (val, current_min) tuples), and daily temperatures (monotonic stack). The min stack approach: instead of storing values, store (value, current_min_so_far) — then peeking the min is always O(1).
A stack's key property: instant access to the most recent unresolved item. Anything involving "matching", "undo", or "most recent" → think stack.
| Problem type | What goes on stack | Pop when |
|---|---|---|
| Matching brackets | Opening brackets | See closing bracket |
| Min stack | (value, min_so_far) tuple | pop() |
| Expression eval | Operands | See operator |
| Next greater | Indices waiting for answer | Find greater element |
stack = []       # Python list as stack
stack.append(x)  # push: O(1)
stack.pop()      # pop: O(1)
stack[-1]        # peek: O(1)
bool(stack)      # is-empty: O(1)
# Min stack trick:
stack.append((val, min(val, stack[-1][1] if stack else val)))
⚡ Interview insight: Stack problems often look hard until you ask "what's the most recent unsettled thing I need?" For balanced brackets: most recently opened bracket that has not been closed. That's exactly what a stack tracks.
class MinStack:
    def __init__(self): self.s = []
    def push(self, v):
        m = min(v, self.s[-1][1] if self.s else v)
        self.s.append((v, m))
    def pop(self): self.s.pop()
    def top(self): return self.s[-1][0]
    def getMin(self): return self.s[-1][1]

def isValid(s):
    stack = []; match = {')':'(', ']':'[', '}':'{'}
    for c in s:
        if c in match:
            if not stack or stack[-1] != match[c]: return False
            stack.pop()
        else: stack.append(c)
    return not stack

def evalRPN(tokens):
    stack = []
    for t in tokens:
        if t in ('+', '-', '*', '/'):
            b, a = stack.pop(), stack.pop()
            if t=='+': stack.append(a+b)
            elif t=='-': stack.append(a-b)
            elif t=='*': stack.append(a*b)
            else: stack.append(int(a/b))
        else: stack.append(int(t))
    return stack[0]

def sortStack(stack):
    temp = []
    while stack:
        val = stack.pop()
        while temp and temp[-1] > val: stack.append(temp.pop())
        temp.append(val)
    return temp

Monotonic decreasing stack: pop when current > top. Gives O(n) "next greater element" instead of O(n²) nested loops.

stack = []  # indices
for i, t in enumerate(temps):
    while stack and temps[stack[-1]] < t:
        idx = stack.pop()
        ans[idx] = i - idx  # days to wait
    stack.append(i)

def dailyTemperatures(temps):
    n = len(temps)
    ans = [0] * n
    stack = []  # stores indices
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            idx = stack.pop()
            ans[idx] = i - idx
        stack.append(i)
    return ans
# [73,74,75,71,69,72,76,73] → [1,1,4,2,1,1,0,0]
⫸Queues
FIFO · BFS · deque · Level Order
🚶 Queues — First In, First Out
A queue is like a line at a store — first person in line gets served first. FIFO: First In, First Out. In Python, always use collections.deque (double-ended queue), never a list. Why? list.pop(0) is O(n) because it shifts all elements. deque.popleft() is O(1). This makes a huge difference in BFS performance.
from collections import deque
q = deque()
q.append(5)      # enqueue right: O(1)
q.popleft()      # dequeue left: O(1) ← use this, not pop(0)!
q.appendleft(3)  # add to front: O(1) ← deque advantage
q.pop()          # remove from back: O(1)
🌊 BFS — the queue's main job
Queues power Breadth-First Search (BFS). BFS explores all neighbors at the current distance before going further. Think of it as exploring in rings: all nodes 1 hop away, then all nodes 2 hops away, etc. This guarantees you find the shortest path in an unweighted graph.
from collections import deque
q = deque([start])
visited = {start}
while q:
    node = q.popleft()        # process oldest node
    for neighbor in graph[node]:
        if neighbor not in visited:
            visited.add(neighbor)
            q.append(neighbor)  # explore later
🪟 Monotonic Deque — sliding window maximum
A deque works from both ends, which makes it a natural fit for sliding window maximum. Keep the deque in decreasing order of values (pop from back when new element is larger). The front always holds the maximum. Pop from front when that index goes out of the window.
💼 Interview angle
Queue questions are usually BFS questions in disguise. Key insight: BFS naturally finds the minimum number of steps/hops. Any "minimum distance" or "minimum transformations" problem → think BFS. Always track visited nodes to avoid infinite loops!
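A minimal sketch of "BFS counts minimum hops" on an unweighted graph. The adjacency list g and the helper name min_hops are my illustration, not from the original; note the visited set guarding against revisits.

```python
from collections import deque

def min_hops(graph, start, goal):
    # BFS: the first time we dequeue goal, we reached it
    # via the fewest possible edges
    q = deque([(start, 0)])
    visited = {start}
    while q:
        node, d = q.popleft()
        if node == goal: return d
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                q.append((nb, d + 1))
    return -1  # unreachable

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
```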
BFS with deque. Snapshot len(q) → process exactly that many nodes per level.
from collections import deque
q = deque([root])
while q:
    for _ in range(len(q)):  # snapshot!
        node = q.popleft()
        if node.left: q.append(node.left)
        if node.right: q.append(node.right)

from collections import deque
class MyStack:
    def __init__(self): self.q = deque()
    def push(self, x):
        self.q.append(x)
        for _ in range(len(self.q)-1): self.q.append(self.q.popleft())
    def pop(self): return self.q.popleft()
    def top(self): return self.q[0]
    def empty(self): return not self.q

def timeRequiredToBuy(tickets, k):
    total = 0
    for i, t in enumerate(tickets):
        if i <= k: total += min(t, tickets[k])
        else: total += min(t, tickets[k]-1)
    return total

from collections import deque
def reverseFirstK(q, k):
    stack = []
    for _ in range(k): stack.append(q.popleft())
    while stack: q.append(stack.pop())
    for _ in range(len(q)-k): q.append(q.popleft())
    return q

Deque stores indices, kept in decreasing value order. Front = max. Pop front when out of window. Pop back when new ≥ back.
dq = deque()
for i, n in enumerate(nums):
    while dq and nums[dq[-1]] <= n: dq.pop()
    dq.append(i)
    if dq[0] < i-k+1: dq.popleft()  # out of window
    if i >= k-1: res.append(nums[dq[0]])

from collections import deque
def maxSlidingWindow(nums, k):
    dq = deque()  # stores indices, decreasing values
    res = []
    for i, n in enumerate(nums):
        # remove elements outside window
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # remove smaller elements from back
        while dq and nums[dq[-1]] <= n:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            res.append(nums[dq[0]])
    return res
# [1,3,-1,-3,5,3,6,7] k=3 → [3,3,5,5,6,7]
🌳Binary Trees
BFS Level Order · DFS Depth & Path
🌲 Binary Trees — the recursive data structure
A binary tree is a hierarchical structure where each node has at most 2 children: left and right. The root is the top node. Leaf nodes have no children. The height of a tree is the longest path from root to leaf.
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left    # left child (or None)
        self.right = right  # right child (or None)
Trees are naturally recursive — every subtree is itself a tree. This is why DFS on trees works so cleanly: solve the left subtree, solve the right subtree, then combine the results.
🔄 The 3 DFS traversal orders (MEMORIZE THESE)
Preorder (root → left → right): Good for copying/serializing a tree. You see the parent before children.
Inorder (left → root → right): For BSTs, this gives values in sorted order. Extremely useful!
Postorder (left → right → root): Good for deletion or when you need children's info before the parent's. Height calculation uses this.
📏 DFS vs BFS on trees
DFS (recursion or stack): Goes as deep as possible first. Great for: height, diameter, path sums, checking symmetry, anything that needs info from both children.
BFS (queue): Explores level by level. Great for: level order output, minimum depth (stops at first leaf!), right side view, connecting nodes at same level.
⚡ The mental model that unlocks all tree problems
"What does my function do? I trust it completely — it correctly handles any subtree I hand it. Given that, what does the current node do with the results from its children?"
You never need to simulate the whole tree in your head. Just ask: what do I return to my parent? That one question solves 90% of tree problems.
🔑 The recursive pattern for trees
def solve(node):
    if not node:  # base case: null → return 0 or False
        return 0
    left = solve(node.left)    # trust recursion for left subtree
    right = solve(node.right)  # trust recursion for right subtree
    # combine results + do something with node.val
    return 1 + max(left, right)  # example: height
Trust the recursion! You do not need to think about the whole tree — ask: "given the answers from my left and right children, what do I return to my parent?"
💼 Interview tips for trees
- Always handle the null case first: if not node: return 0/None/False
- State what your function returns out loud — height? bool? pair (height, is_balanced)?
- Diameter trick: At each node, diameter through this node = left_height + right_height. Track max globally.
- Path sum: Pass remaining sum down. At leaf, check if remaining == node.val.
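The path-sum bullet above, sketched as code (TreeNode is re-stated from earlier so the snippet runs standalone): subtract the current value and pass the remainder down; at a leaf, check whether the remainder equals the leaf's value.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def hasPathSum(root, target):
    if not root: return False
    if not root.left and not root.right:  # leaf: check remainder
        return target == root.val
    rest = target - root.val              # pass remaining sum down
    return hasPathSum(root.left, rest) or hasPathSum(root.right, rest)
```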
BFS explores layer by layer using a queue. For trees: dequeue a node, process it, enqueue its children. Level is tracked by queue snapshot size.
| Use BFS when | Use DFS when |
|---|---|
| Shortest path needed | Explore all possibilities |
| Level-by-level output | Path exists check |
| Minimum depth | Height / diameter |
| Right-side view | Subtree problems |
from collections import deque
def level_order(root):
    if not root: return []
    q, result = deque([root]), []
    while q:
        level_size = len(q)  # snapshot: nodes in this level
        level = []
        for _ in range(level_size):
            node = q.popleft()
            level.append(node.val)
            if node.left: q.append(node.left)
            if node.right: q.append(node.right)
        result.append(level)
    return result
⚡ Why snapshot size matters: Without tracking level_size, you cannot tell which level a node belongs to because new nodes join the queue before you finish the current level.
from collections import deque
def averageOfLevels(root):
res,q=[],deque([root])
while q:
total,sz=0,len(q)
for _ in range(sz):
n=q.popleft(); total+=n.val
if n.left: q.append(n.left)
if n.right: q.append(n.right)
res.append(total/sz)
    return res

from collections import deque
def minDepth(root):
if not root: return 0
q=deque([(root,1)])
while q:
node,d=q.popleft()
if not node.left and not node.right: return d
if node.left: q.append((node.left,d+1))
        if node.right: q.append((node.right,d+1))

from collections import deque
def levelOrder(root):
if not root: return []
res,q=[],deque([root])
while q:
level=[]
for _ in range(len(q)):
n=q.popleft(); level.append(n.val)
if n.left: q.append(n.left)
if n.right: q.append(n.right)
res.append(level)
    return res

DFS on trees = recursion. The call stack IS your stack. At each node: process, then recurse left and right. The key skill is figuring out what to return and what to pass down.
| Pattern | Technique | Example |
|---|---|---|
| Height/depth | Return int up | 1 + max(left, right) |
| Path sum | Pass target down | if leaf: target==val |
| Diameter | left_h + right_h at each node | Track global max |
| Balance check | Return -1 sentinel | if imbalanced: return -1 |
def height(node):
if not node: return 0 # base: null = height 0
left = height(node.left)
right = height(node.right)
return 1 + max(left, right)
# Diameter uses the same DFS, different aggregation:
def diameter_dfs(node):
if not node: return 0
left = diameter_dfs(node.left)
right = diameter_dfs(node.right)
ans[0] = max(ans[0], left + right) # diameter through here
return 1 + max(left, right) # height returned to parent
⚡ The mental model: "I trust my recursion to correctly solve left and right subtrees. Given those results, what do I return to MY parent?" You never need to think about the whole tree.
def maxDepth(root):
if not root: return 0
    return 1+max(maxDepth(root.left),maxDepth(root.right))

def isSameTree(p,q):
if not p and not q: return True
if not p or not q or p.val!=q.val: return False
    return isSameTree(p.left,q.left) and isSameTree(p.right,q.right)

def hasPathSum(root, targetSum):
if not root: return False
if not root.left and not root.right: return root.val==targetSum
rem=targetSum-root.val
    return hasPathSum(root.left,rem) or hasPathSum(root.right,rem)

def diameterOfBinaryTree(root):
mx=[0]
def depth(node):
if not node: return 0
l,r=depth(node.left),depth(node.right)
mx[0]=max(mx[0],l+r); return 1+max(l,r)
    depth(root); return mx[0]

def invertTree(root):
if not root: return None
root.left,root.right=invertTree(root.right),invertTree(root.left)
    return root

def lowestCommonAncestor(root, p, q):
if not root or root==p or root==q: return root
left=lowestCommonAncestor(root.left,p,q)
right=lowestCommonAncestor(root.right,p,q)
if left and right: return root
    return left or right

🔍Binary Search Trees
BST Property · In-Order = Sorted · O(log n)
🔍 Binary Search Trees — sorted order in a tree
A BST is a binary tree with one key rule: for every node, all values in the left subtree are smaller, all values in the right subtree are larger. This rule applies recursively to every single node in the tree.
This gives you O(log n) search, insert, and delete on a balanced BST. On a skewed/unbalanced BST these degrade to O(n), which is why interviewers sometimes ask about AVL trees or Red-Black trees (but you rarely need to implement those).
✨ The key property: In-order = Sorted
If you do an in-order traversal (left → root → right) of any BST, you get all values in sorted ascending order. This is the most important BST property for interviews. Many BST problems become straightforward once you realize: "I can do in-order and work with a sorted sequence."
def inorder(node, result):  # pass in a fresh list — a mutable [] default would persist across calls
if not node: return
inorder(node.left, result) # go left first (smaller values)
result.append(node.val) # visit root (in sorted order!)
inorder(node.right, result) # go right last (larger values)
🔎 BST Search — no need to check both sides
Unlike a regular binary tree where you might need to check both subtrees, in a BST you always know which side to go. This eliminates half the tree at each step: O(log n) average case.
def search(node, target):
if not node: return None
if target == node.val: return node
if target < node.val:
return search(node.left, target) # must be in left subtree
return search(node.right, target) # must be in right subtree
⚡ The mental model for BST problems
"Every BST problem exploits ordering. When I'm at a node, I know exactly which half I need — I never search both sides blindly."
Ask: target < current? → go left. target > current? → go right. Equal? → found it (or it's your answer). If you're checking both subtrees, you've probably missed the BST property.
📍 LCA shortcut for BSTs
Lowest Common Ancestor (LCA) of two nodes p and q in a BST: if both are less than root → LCA is in left subtree. If both are greater → right subtree. Otherwise → root IS the LCA (they're on different sides, or one IS the root). This is O(log n) vs O(n) for regular trees!
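The shortcut above can be sketched recursively in a few lines. This is a minimal sketch: a bare-bones TreeNode class with val/left/right is assumed for self-containment, and distinct node values are assumed as in the standard problem.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bst_lca(root, p, q):
    # Both targets smaller → LCA must live in the left subtree
    if p.val < root.val and q.val < root.val:
        return bst_lca(root.left, p, q)
    # Both targets larger → LCA must live in the right subtree
    if p.val > root.val and q.val > root.val:
        return bst_lca(root.right, p, q)
    # Split point (or one of p/q IS the root) → root is the LCA
    return root
```

Each step discards half the tree, so this runs in O(h) — O(log n) on a balanced BST.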
📊 Kth Smallest — in-order is your friend
In-order traversal visits nodes in sorted order. So "kth smallest" = stop at the kth node during in-order traversal. Use a counter, decrement each visit, stop when counter hits zero.
💼 BST interview checklist
- When you see "BST" → immediately think "in-order = sorted"
- Validate a BST: pass down min/max bounds — don't just compare with the parent
- Insert: recurse left/right, at null create new node
- Delete: the trickiest operation — find inorder successor for nodes with 2 children
left.val < root.val is NOT enough. You need all left subtree values < root. Pass down bounds: validate(node, min_val, max_val).

BST property: left subtree < node < right subtree, at EVERY node. This lets you eliminate half the tree at each step.
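A minimal sketch of that bounds-passing validator (a bare-bones TreeNode class is assumed here so the snippet is self-contained):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(node, lo=float('-inf'), hi=float('inf')):
    # Each node must fit inside the (lo, hi) window inherited from
    # ALL of its ancestors — not just its direct parent.
    if not node:
        return True
    if not (lo < node.val < hi):
        return False
    return (is_valid_bst(node.left, lo, node.val) and
            is_valid_bst(node.right, node.val, hi))
```

The classic trap this catches: a grandchild that is smaller than its grandparent even though it is on the grandparent's right side.
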
| Operation | Avg | Worst (skewed) | Note |
|---|---|---|---|
| Search | O(log n) | O(n) | Goes left or right only |
| Insert | O(log n) | O(n) | Find null position, create node |
| Delete | O(log n) | O(n) | Hardest: replace with inorder successor |
| In-order | O(n) | O(n) | Gives SORTED output! |
def search_bst(root, val):
while root:
if val == root.val: return root
root = root.left if val < root.val else root.right
return None
⚡ Interview insight: Always use iterative search (not recursive) to avoid O(h) call stack space. "Worst case O(n)" happens when BST degrades to a linked list (inserting sorted data without rebalancing).
def searchBST(root, val):
if not root or root.val==val: return root
if val<root.val: return searchBST(root.left,val)
    return searchBST(root.right,val)

def insertIntoBST(root, val):
if not root: return TreeNode(val)
if val<root.val: root.left=insertIntoBST(root.left,val)
else: root.right=insertIntoBST(root.right,val)
    return root

def sortedArrayToBST(nums):
if not nums: return None
mid=len(nums)//2; root=TreeNode(nums[mid])
root.left=sortedArrayToBST(nums[:mid])
root.right=sortedArrayToBST(nums[mid+1:])
    return root

def deleteNode(root, key):
if not root: return None
if key<root.val: root.left=deleteNode(root.left,key)
elif key>root.val: root.right=deleteNode(root.right,key)
else:
if not root.left: return root.right
if not root.right: return root.left
succ=root.right
while succ.left: succ=succ.left
root.val=succ.val
root.right=deleteNode(root.right,succ.val)
    return root

In-order traversal (left → root → right) visits BST nodes in ascending sorted order. This is the single most useful BST property in interviews.
| Traversal | Order | Best for |
|---|---|---|
| Pre-order | root → L → R | Serialize/copy tree |
| In-order | L → root → R | BST sorted output, k-th smallest |
| Post-order | L → R → root | Delete tree, compute height |
def inorder(root, result):  # pass in a fresh list — a mutable [] default would persist across calls
if not root: return
inorder(root.left, result)
result.append(root.val) # ← visits in sorted order!
inorder(root.right, result)
⚡ Interview insight: If a BST problem seems hard, ask "what does in-order give me?" Often the answer becomes straightforward: k-th smallest = stop at k-th in-order visit. Validate BST = check in-order is strictly increasing.
def findTarget(root, k):
seen=set()
def dfs(node):
if not node: return False
if k-node.val in seen: return True
seen.add(node.val)
return dfs(node.left) or dfs(node.right)
    return dfs(root)

def lowestCommonAncestor(root, p, q):
while root:
if p.val<root.val and q.val<root.val: root=root.left
elif p.val>root.val and q.val>root.val: root=root.right
        else: return root

def getMinimumDifference(root):
res,prev=[float('inf')],[None]
def inorder(node):
if not node: return
inorder(node.left)
if prev[0] is not None: res[0]=min(res[0],node.val-prev[0])
prev[0]=node.val; inorder(node.right)
    inorder(root); return res[0]

def balanceBST(root):
nums=[]
def inorder(n):
if not n: return
inorder(n.left); nums.append(n.val); inorder(n.right)
def build(l,r):
if l>r: return None
mid=(l+r)//2; node=TreeNode(nums[mid])
node.left=build(l,mid-1); node.right=build(mid+1,r); return node
    inorder(root); return build(0,len(nums)-1)

def kthSmallest(root, k):
cnt,res=[0],[0]
def inorder(node):
if not node or cnt[0]>=k: return
inorder(node.left)
cnt[0]+=1
if cnt[0]==k: res[0]=node.val
inorder(node.right)
    inorder(root); return res[0]

△Heaps / Priority Queues
Min-Heap · Max-Heap · Top-K
⛏️ Heaps — always know the min (or max) instantly
A heap is a special tree that always keeps the smallest element at the top (min-heap) or largest (max-heap). You can push and pop in O(log n), and peek at the min/max in O(1). It's like a priority queue — you always serve the highest-priority item first.
Heaps are usually implemented as arrays (not actual tree nodes), but you interact with them as if they're trees. In Python, heapq is always a min-heap. Negate values for max-heap behavior.
🐍 Python heapq — the complete reference
import heapq
heap = []
heapq.heappush(heap, 3)   # push: O(log n)
heapq.heappush(heap, 1)
heapq.heappush(heap, 2)
heapq.heappop(heap)       # pop min → 1: O(log n)
heap[0]                   # peek min → 2: O(1)
# Build from an existing list in O(n) — heapify works in place:
nums = [4, 1, 7, 2]
heapq.heapify(nums)
# Max-heap: negate values
heapq.heappush(heap, -val)
max_val = -heapq.heappop(heap)
# Heap of tuples: sorts by first element
heapq.heappush(heap, (priority, item))
🏆 The Top-K Pattern — heaps' main use case
Finding K largest numbers in an array of n numbers. Naive: sort → O(n log n). Better: use a min-heap of size k → O(n log k). Keep the k largest elements in the heap. If a new element beats the heap minimum (the weakest in your top-k), evict the minimum and add the new element.
Why min-heap for top-k? Because you want to quickly remove the SMALLEST of your top-k candidates when a better one comes along. The heap minimum = the weakest in your current top-k team.
🔀 Merge K sorted lists — heap at its best
Push the first element from each list into the heap as (val, list_index, element_index). Always pop the minimum, add it to result, then push the next element from that same list. The heap handles the sorting across all k lists.
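The steps above can be sketched directly. For self-containment this sketch merges plain Python lists rather than linked lists; the (val, list_index, element_index) tuple trick is exactly as described — the indices break ties so the heap never tries to compare list contents.

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted Python lists using a heap of at most k entries."""
    heap, result = [], []
    # Seed the heap with the first element of each non-empty list.
    for i, lst in enumerate(lists):
        if lst:
            heapq.heappush(heap, (lst[0], i, 0))
    while heap:
        val, i, j = heapq.heappop(heap)   # smallest across all k fronts
        result.append(val)
        if j + 1 < len(lists[i]):          # push successor from the SAME list
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return result
```

Total work: n elements, each pushed and popped once from a size-k heap → O(n log k).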
💼 Heap interview triggers
If the problem says: "kth largest/smallest," "top k," "median of a stream," "merge k sorted," "task scheduling" → it is probably a heap problem. The heap solves all of these in O(n log k) instead of O(n log n).
Key insight: A min-heap of size k tracks the k largest elements seen so far. The top of the heap = k-th largest (the weakest member of your "top k team").
| Goal | Heap type | Size | Complexity |
|---|---|---|---|
| K largest | Min-heap | Keep k | O(n log k) |
| K smallest | Max-heap (negate) | Keep k | O(n log k) |
| All sorted | Any | All n | O(n log n) |
import heapq
def kth_largest(nums, k):
heap = []
for num in nums:
heapq.heappush(heap, num) # min-heap
if len(heap) > k:
heapq.heappop(heap) # evict smallest
return heap[0] # k-th largest
⚡ Why min-heap for k LARGEST? You want to quickly evict the smallest of your current top-k when a bigger challenger arrives. The heap minimum = the weakest in your top-k. When you find something bigger → evict the weakest.
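One trigger mentioned earlier, "median of a stream", has no snippet in this section. A common approach uses two heaps: a max-heap (negated values, since heapq is min-only) for the smaller half and a min-heap for the larger half. This is a sketch of that idea, not a particular library's API:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.lo = []  # max-heap (negated): smaller half, len(lo) >= len(hi)
        self.hi = []  # min-heap: larger half

    def add(self, num):
        # Route through lo so every element of lo stays <= every element of hi.
        heapq.heappush(self.lo, -num)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance: lo may hold at most one extra element.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]               # odd count: top of larger half
        return (-self.lo[0] + self.hi[0]) / 2  # even count: average of tops
```

Each add is O(log n); median is O(1) — you only ever peek at the two heap tops.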
import heapq
def findKthLargest(nums, k):
heap=[]
for n in nums:
heapq.heappush(heap,n)
if len(heap)>k: heapq.heappop(heap)
    return heap[0]

import heapq
def kClosest(points, k):
heap=[]
for x,y in points:
heapq.heappush(heap,(-x*x-y*y,[x,y]))
if len(heap)>k: heapq.heappop(heap)
    return [p for _,p in heap]

from collections import Counter
import heapq
def topKFrequent(nums, k):
freq=Counter(nums); heap=[]
for val,cnt in freq.items():
heapq.heappush(heap,(cnt,val))
if len(heap)>k: heapq.heappop(heap)
    return [v for _,v in heap]

import heapq
from collections import Counter,deque
def leastInterval(tasks, n):
freq=Counter(tasks)
heap=[-c for c in freq.values()]; heapq.heapify(heap)
cooldown=deque(); time=0
while heap or cooldown:
time+=1
if heap:
cnt=1+heapq.heappop(heap)
if cnt: cooldown.append((cnt,time+n))
if cooldown and cooldown[0][1]==time:
heapq.heappush(heap,cooldown.popleft()[0])
    return time

◎Graphs
BFS · DFS · 3-Color Cycle · Bellman-Ford
◎ Graphs — the most flexible data structure
A graph is a collection of nodes (vertices) connected by edges. Unlike trees, graphs can have cycles, multiple paths between nodes, disconnected components, and directed or undirected edges. Trees are actually a special case of graphs (connected, acyclic).
In interviews, graphs usually appear as: grid problems (each cell is a node, edges go to 4 neighbors), course prerequisite problems (directed graph), or social network problems (undirected graph).
🗺️ Graph representations
# Adjacency List (most common in interviews)
from collections import defaultdict
graph = defaultdict(list)
graph[0].append(1) # edge 0→1
graph[0].append(2) # edge 0→2
graph[1].append(3) # edge 1→3
# For undirected graphs, add both directions:
for u, v in edges:
graph[u].append(v)
graph[v].append(u) # ← do not forget this!
# Grid as graph: neighbors = 4 directions
dirs = [(1,0),(-1,0),(0,1),(0,-1)]
for dr, dc in dirs:
nr, nc = r+dr, c+dc
        if 0 <= nr < rows and 0 <= nc < cols:  # in bounds → (nr, nc) is a valid neighbor
🔵🟡⚫ 3-Color DFS — the actual colors explained
For cycle detection in directed graphs, we use 3 colors to represent node states. Think of it like a traffic light for your DFS:
⚡ The mental model for graph traversal
"A graph problem is really a 'don't visit the same node twice' problem. Everything else — finding paths, counting components, detecting cycles — is controlled exploration."
Two questions first: (1) Need shortest path? → BFS. (2) Need deep exploration or back-edge detection? → DFS. Pick your weapon, add a visited set, go.
📐 BFS vs DFS for graphs
BFS: Use a queue. Explores level by level. Finds shortest path in unweighted graphs. Best for: minimum hops, word ladder, island count with distance.
DFS: Use recursion or a stack. Explores as deep as possible. Best for: cycle detection, topological sort, connected components, checking if path exists.
💼 Graph interview playbook
- Grid = graph: every grid problem with regions, islands, paths is a graph problem
- Count connected components: outer loop over all nodes, BFS/DFS from unvisited ones, count calls
- Cycle in directed graph: 3-color DFS. Gray node revisited = cycle.
- Topological sort: DFS, add to result on POST-visit (after children). Reverse for topo order. Or BFS with in-degree counting (Kahn's algorithm).
- Shortest path: BFS for unweighted. Dijkstra (heap + BFS) for weighted.
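The in-degree-counting variant of topological sort from the playbook can be sketched like this (edge list input assumed, nodes numbered 0..n-1):

```python
from collections import deque, defaultdict

def topo_sort(n, edges):
    """Kahn's algorithm: repeatedly peel off nodes with in-degree 0."""
    graph = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:            # directed edge u -> v
        graph[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        node = q.popleft()
        order.append(node)
        for nb in graph[node]:
            indeg[nb] -= 1        # "remove" the edge node -> nb
            if indeg[nb] == 0:    # nb has no remaining prerequisites
                q.append(nb)
    # If some nodes never reached in-degree 0, a cycle blocked them.
    return order if len(order) == n else []
```

The empty-list return doubles as cycle detection: a valid ordering exists if and only if every node gets processed, so this also answers course-schedule-style questions.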
BFS guarantees shortest path in unweighted graphs (explores by distance). Clone uses BFS to ensure every node is visited exactly once.
| Algorithm | Data structure | Guarantees |
|---|---|---|
| BFS | Queue (deque) | Shortest path, level order |
| DFS | Stack/recursion | Finds A path (not shortest) |
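The shortest-path guarantee in the table can be sketched directly (an adjacency-list dict is assumed as input, with every node present as a key):

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """Minimum number of edges from src to dst; -1 if unreachable."""
    if src == dst:
        return 0
    visited = {src}
    q = deque([(src, 0)])
    while q:
        node, dist = q.popleft()   # nodes come off the queue in distance order
        for nb in graph[node]:
            if nb == dst:
                return dist + 1    # first time we reach dst = shortest
            if nb not in visited:
                visited.add(nb)
                q.append((nb, dist + 1))
    return -1
```

Because the queue processes all distance-d nodes before any distance-(d+1) node, the first time dst appears is guaranteed to be along a shortest path — the property DFS cannot give you.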
from collections import deque
def clone_graph(node):
if not node: return None
clones = {node: Node(node.val)} # original → clone map
q = deque([node])
while q:
curr = q.popleft()
for neighbor in curr.neighbors:
if neighbor not in clones:
clones[neighbor] = Node(neighbor.val)
q.append(neighbor)
clones[curr].neighbors.append(clones[neighbor])
return clones[node]
⚡ The map serves two purposes: (1) visited check (do not revisit), (2) O(1) lookup of existing clones when wiring neighbors. Without the map, you would create duplicate clones.
from collections import deque
def cloneGraph(node):
if not node: return None
clones={node:Node(node.val)}; q=deque([node])
while q:
curr=q.popleft()
for nb in curr.neighbors:
if nb not in clones:
clones[nb]=Node(nb.val); q.append(nb)
clones[curr].neighbors.append(clones[nb])
    return clones[node]

def findCheapestPrice(n, flights, src, dst, k):
prices=[float('inf')]*n; prices[src]=0
for _ in range(k+1):
tmp=prices[:] # MUST COPY!
for u,v,w in flights:
if prices[u]!=float('inf'):
tmp[v]=min(tmp[v],prices[u]+w)
prices=tmp
    return prices[dst] if prices[dst]!=float('inf') else -1

For directed graphs, you need 3 colors to detect cycles. 2 colors (visited/unvisited) is not enough — you would miss back edges.
| Color | Meaning | Cycle signal |
|---|---|---|
| ⬤ White (0) | Not yet visited | — |
| ⬤ Gray (1) | Currently in DFS stack | Finding gray = CYCLE! |
| ⬤ Black (2) | Fully explored | Safe, no cycle through here |
def has_cycle(graph, n):
color = [0] * n # all white
def dfs(node):
color[node] = 1 # gray: in progress
for neighbor in graph[node]:
if color[neighbor] == 1: return True # back edge = cycle!
if color[neighbor] == 0: # unvisited
if dfs(neighbor): return True
color[node] = 2 # black: done
return False
return any(dfs(i) for i in range(n) if color[i] == 0)
⚡ Why not 2 colors? In a directed graph, revisiting a black (done) node is fine — it means multiple paths lead there. Only revisiting a GRAY node (still in our current DFS path) signals a cycle.
def canFinish(numCourses, prerequisites):
graph={i:[] for i in range(numCourses)}
for c,p in prerequisites: graph[c].append(p)
state=[0]*numCourses
def dfs(c):
if state[c]==1: return False # cycle!
if state[c]==2: return True
state[c]=1
for p in graph[c]:
if not dfs(p): return False
state[c]=2; return True
    return all(dfs(c) for c in range(numCourses))

BFS/DFS from each unvisited node. Each fresh call = new component. Multi-source BFS: start from ALL border cells simultaneously.
for node in range(n):
if node not in visited:
bfs(node, visited)
    count += 1 # one full component

from collections import deque, defaultdict
def countComponents(n, edges):
graph = defaultdict(list)
for u, v in edges:
graph[u].append(v); graph[v].append(u)
visited = set()
def bfs(node):
q = deque([node]); visited.add(node)
while q:
cur = q.popleft()
for nb in graph[cur]:
if nb not in visited:
visited.add(nb); q.append(nb)
count = 0
for node in range(n):
if node not in visited:
bfs(node); count += 1
    return count

from collections import deque
def pacificAtlantic(heights):
rows, cols = len(heights), len(heights[0])
def bfs(starts):
visited = set(starts); q = deque(starts)
while q:
r, c = q.popleft()
for dr,dc in [(1,0),(-1,0),(0,1),(0,-1)]:
nr,nc = r+dr, c+dc
if (0<=nr<rows and 0<=nc<cols
and (nr,nc) not in visited
and heights[nr][nc]>=heights[r][c]):
visited.add((nr,nc)); q.append((nr,nc))
return visited
pac = bfs([(r,0) for r in range(rows)]+[(0,c) for c in range(cols)])
atl = bfs([(r,cols-1) for r in range(rows)]+[(rows-1,c) for c in range(cols)])
return list(pac & atl)