Neuron7.ai R1 Coding | Reject
Summary
I had a coding round with Neuron7.ai, which unexpectedly turned into a Low-Level Design (LLD) and Data Structures & Algorithms (DSA) round, focusing on implementing a distributed rate limiter using the Token Bucket algorithm. Unfortunately, I ran out of time and received a rejection.
Full Experience
Round 1 was supposed to be a coding round, but the interviewer took it in a different direction.
The interviewer clarified there would be two questions: the first one LLD + DSA, the second pure DSA.
He wanted me to implement a rate limiter that was distributed in nature and handled all edge cases:
You’re building a backend service used by millions of users. To prevent abuse and ensure fair usage, you want to rate limit users — specifically:
Each user can make up to 100 requests per minute.
Requests exceeding this rate should be rejected.
Design and implement a rate limiter using the Token Bucket algorithm, ensuring:
Efficient refill of tokens over time.
Accurate rate limiting, even in a distributed system (multiple API servers).
The system is resilient to concurrency issues and ensures atomicity of updates.
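Roughly the shape I'd expect for the single-node part of the question, as a minimal sketch. It assumes lazy refill (tokens are credited on each request based on elapsed time, rather than by a background timer); the class and function names here are mine, not from the interview:

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    capacity: float             # maximum tokens, i.e. burst size (100 here)
    refill_rate: float          # tokens added per second (100/60 here)
    tokens: float = field(init=False)
    last_refill: float = field(init=False)

    def __post_init__(self) -> None:
        self.tokens = self.capacity
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Lazy refill: credit tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user: up to 100 requests per minute.
buckets: dict[str, TokenBucket] = {}

def allow_request(user_id: str) -> bool:
    bucket = buckets.setdefault(user_id, TokenBucket(100, 100 / 60))
    return bucket.allow()
```

This works for one process, but with multiple API servers each holding its own in-memory buckets, a user could get 100 requests per minute per server, which is exactly where the distributed and atomicity requirements come in.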
I was allowed to Google syntax. There were follow-up questions on how to handle concurrency in the code; it was largely a test of my knowledge of Redis commands. I had to look through all the available methods, but in the end we ran out of time.
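Since the follow-ups centered on Redis, here is a sketch of the approach the interviewer seemed to be probing for: keep each user's bucket in a Redis hash and do the refill-plus-consume in a single Lua script, which Redis executes atomically, so concurrent API servers cannot race on the same bucket. The key scheme, the TTL, and the redis-py usage are my assumptions, not something stated in the interview:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed shared Redis instance

TOKEN_BUCKET_LUA = """
local data = redis.call('HMGET', KEYS[1], 'tokens', 'ts')
local capacity = tonumber(ARGV[1])
local rate = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local tokens = tonumber(data[1]) or capacity   -- new user starts with a full bucket
local ts = tonumber(data[2]) or now
-- Lazy refill based on elapsed time, capped at capacity.
tokens = math.min(capacity, tokens + math.max(0, now - ts) * rate)
local allowed = 0
if tokens >= 1 then
    tokens = tokens - 1
    allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tokens, 'ts', now)
redis.call('EXPIRE', KEYS[1], 120)  -- drop idle buckets
return allowed
"""

token_bucket = r.register_script(TOKEN_BUCKET_LUA)

def allow_request(user_id: str, capacity: int = 100, rate: float = 100 / 60) -> bool:
    # Key scheme is illustrative; any per-user key works.
    return token_bucket(keys=[f"ratelimit:{user_id}"],
                        args=[capacity, rate, time.time()]) == 1
```

Passing the timestamp in from the caller keeps the script deterministic; the trade-off is that clock skew across API servers can slightly distort refill, and on newer Redis versions calling TIME inside the script is an alternative.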
I got a rejection email a day later.