Salesforce USA SMTS Offer
Summary
I successfully navigated a multi-round interview process for a Senior MTS role at Salesforce USA, which included coding, system design, and behavioral questions, ultimately leading to an offer.
Full Experience
My Salesforce USA SMTS Interview Experience
I recently interviewed for a Senior MTS role at Salesforce USA, and I'm thrilled to share that I received an offer. The interview process was quite extensive, covering multiple rounds with varied experiences. Here's a breakdown of how it went:
Hiring Manager Screen (45 mins)
This round primarily focused on my work experience, my role, and the impact I had in previous positions. It was almost entirely dedicated to behavioral questions, and we actually ran over the scheduled 45 minutes, extending to over an hour.
Online Assessment (60 mins)
The online assessment consisted of four standard LeetCode problems. The final problem boiled down to finding the longest increasing subsequence. Unfortunately, I couldn't code the optimal O(n log n) solution within the time limit, which resulted in 7-8 test cases failing.
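For anyone preparing for the same OA, here is a rough sketch of the O(n log n) approach the problem was looking for: keep the smallest possible tail value for each subsequence length and binary search for where each element goes. The original problem's exact wording may have differed; this is just the standard version.

```java
import java.util.ArrayList;
import java.util.List;

public class LongestIncreasingSubsequence {

    // Returns the length of the longest strictly increasing subsequence in O(n log n).
    // tails.get(i) holds the smallest possible tail of an increasing subsequence of length i + 1.
    static int lengthOfLIS(int[] nums) {
        List<Integer> tails = new ArrayList<>();
        for (int num : nums) {
            int lo = 0, hi = tails.size();
            // Binary search for the first tail >= num (lower bound).
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (tails.get(mid) < num) lo = mid + 1;
                else hi = mid;
            }
            if (lo == tails.size()) tails.add(num);   // num extends the longest subsequence so far
            else tails.set(lo, num);                  // num becomes a smaller tail for length lo + 1
        }
        return tails.size();
    }

    public static void main(String[] args) {
        System.out.println(lengthOfLIS(new int[]{10, 9, 2, 5, 3, 7, 101, 18})); // 4
    }
}
```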
Onsite Interviews
Coding (60 mins)
This round started with 1-2 behavioral questions. The main coding challenge was a standard linked list problem: reversing a sublist from a given 'fromIndex' to 'toIndex'. I discussed some simple follow-ups and optimization strategies. Towards the end, we also delved into system design aspects, specifically around the cloud service providers I've worked with, predominantly AWS. Discussions included DynamoDB, Lambda, and SQS.
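For reference, a sketch of the sublist reversal, assuming a singly linked list and 1-based, inclusive fromIndex/toIndex (the exact interface in the round may have been slightly different):

```java
public class ReverseSublist {

    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Reverses the nodes from position fromIndex to toIndex (1-based, inclusive)
    // in a single pass, using a dummy head so reversal at the front needs no special case.
    static ListNode reverseBetween(ListNode head, int fromIndex, int toIndex) {
        if (head == null || fromIndex >= toIndex) return head;

        ListNode dummy = new ListNode(0);
        dummy.next = head;

        // Walk to the node just before the sublist.
        ListNode prev = dummy;
        for (int i = 1; i < fromIndex; i++) prev = prev.next;

        // Repeatedly move the node after 'start' to the front of the sublist.
        ListNode start = prev.next;
        for (int i = 0; i < toIndex - fromIndex; i++) {
            ListNode moved = start.next;
            start.next = moved.next;
            moved.next = prev.next;
            prev.next = moved;
        }
        return dummy.next;
    }
}
```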
System Design (60 mins)
Similar to the previous round, this also began with 1-2 behavioral questions. The initial part of the session was a rapid-fire Q&A, covering fundamental concepts like 'What is a REST API?', 'What is HTTPS?', and 'What is JWT?'. The interviewer aimed to test the depth of my foundational knowledge. With only 20 minutes left, we finally moved to the core system design question: designing a service similar to TicketMaster. This was an intense discussion where the interviewer meticulously grilled me on every statement. We had very in-depth conversations, touching upon ensuring atomicity in ACID-compliant databases (literally down to the thread level), why Elasticsearch speeds up search queries, what an inverted index is, and potential problems with Elasticsearch. This round also ran significantly over time, by about 25 minutes, which felt unprofessional and contributed to a rather poor candidate experience.
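To make the inverted-index part concrete: an inverted index maps each term to the set of documents containing it, so a keyword search becomes a dictionary lookup plus a set intersection instead of a scan over every record, which is the core reason Elasticsearch-style search is fast. A toy illustration of the idea (not Elasticsearch's actual implementation):

```java
import java.util.*;

public class TinyInvertedIndex {

    // term -> set of document ids containing that term
    private final Map<String, Set<Integer>> index = new HashMap<>();

    void addDocument(int docId, String text) {
        for (String term : text.toLowerCase().split("\\W+")) {
            index.computeIfAbsent(term, t -> new HashSet<>()).add(docId);
        }
    }

    // AND query: documents containing every term, found by intersecting posting sets.
    Set<Integer> search(String... terms) {
        Set<Integer> result = null;
        for (String term : terms) {
            Set<Integer> postings = index.getOrDefault(term.toLowerCase(), Set.of());
            if (result == null) result = new HashSet<>(postings);
            else result.retainAll(postings);
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        TinyInvertedIndex idx = new TinyInvertedIndex();
        idx.addDocument(1, "Taylor Swift concert in Dallas");
        idx.addDocument(2, "Dallas basketball game");
        System.out.println(idx.search("dallas", "concert")); // [1]
    }
}
```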
Coding / Design (60 mins)
This round also included 2 behavioral questions. It felt more like a knowledge-testing session than a typical coding or design round. We started with basic questions like 'What is the difference between Arrays and Lists?', 'What is caching?', and 'How to implement a hashmap?'. We also discussed cache eviction strategies when memory is full, and the implementation details of LRU and LFU caches. Following this, we moved to design discussions related to a Stock Broker app. The core problem was 'What is the best way to show historical prices of a stock to the client?', to which I suggested using a time-series database. We then discussed its pros and cons and why it speeds up time-related queries. We also covered 'How to handle stock ticker changes on the app?', where I proposed a pub/sub model instead of regular polling, leading to questions about 'What is a Pub/Sub?' and 'How does Redis pub/sub work?'. The conversation continued with discussions on Kafka, its internals, and message brokers in general. It was quite peculiar that I didn't write a single line of code in this round. For a senior/staff candidate, some of these foundational questions felt borderline insulting, suggesting a lack of a clear script for the interview.
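For the LRU part, a compact sketch of the classic approach: a hash map for O(1) lookup combined with access-order tracking for O(1) eviction. In Java, LinkedHashMap in access order gives this almost for free; a from-scratch version would pair a HashMap with a doubly linked list.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache: O(1) get/put, evicting the least recently used entry once capacity is exceeded.
// LinkedHashMap with accessOrder=true maintains the doubly linked usage list internally.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        super(capacity, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put; evict the least recently used entry when over capacity.
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);          // touch key 1 so it becomes most recently used
        cache.put(3, "c");     // evicts key 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```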
Behavioral (45 mins)
This round was entirely behavioral. I was asked to pick a project of my choice, explain its design choices, detail my role's impact, and discuss challenges I faced. Standard behavioral questions included scenarios like taking calculated risks, experiences with project failures, examples of organization-wide or team-wide impact, and any processes I changed within my team.
Coding (60 mins)
The final round again started with 1-2 behavioral questions. The coding challenge was a problem from HackerRank. The prompt was quite lengthy, but the problem itself wasn't overly difficult. It took me some time to fully understand the problem statement, sample inputs, and expected outputs. Once coded, it passed all the provided test cases.
Overall, I found some of the interview rounds to be quite unstructured, with some questions feeling inappropriate for a senior/staff-level candidate. Despite these observations, the recruiter reached out the very next day to extend an offer.
Interview Questions (9)
In the online assessment, one of the problems was to find the longest increasing subsequence in an array. I struggled to implement the O(n log n) solution, which resulted in failing some test cases.
During one of the coding rounds, I was given a standard linked list problem: reverse a sublist within a given linked list from a specified 'fromIndex' to 'toIndex'.
We had discussions on various AWS cloud services, specifically focusing on DynamoDB, Lambda, and SQS, related to my experience with cloud service providers.
I was asked fundamental questions covering 'What is a REST API?', 'What is HTTPS?', and 'What is JWT?'. The interviewer aimed to gauge the depth of my knowledge in these areas.
I was challenged to design a service similar to TicketMaster. This involved a deep dive into architectural choices and addressing various system design considerations, including ensuring atomicity in ACID-compliant databases (even at the thread level), the role of Elasticsearch in speeding up search queries, inverted indexes, and potential problems with Elasticsearch.
I was asked a series of foundational questions: 'What is the difference between Arrays and Lists?', 'What is caching?', 'How to implement a hashmap?', 'What happens if you cannot fit the cache in memory (cache eviction)?', 'How is LRU implemented?', and 'How is LFU implemented?'
The discussion revolved around designing aspects of a Stock Broker application. Specifically, 'What is the best way to show historical prices of a stock to the client?' This led to a discussion on time-series databases, their pros/cons, and why they speed up time-related queries. We also covered 'How to handle stock ticker changes on the app?', where I proposed using pub/sub instead of regular polling, leading to questions like 'What is a Pub/Sub?' and 'How does Redis pub/sub work?' (a short Redis pub/sub sketch follows after this list).
There was a knowledge-testing discussion focused on Kafka, its internal mechanisms, and message brokers in general.
I was asked to discuss a project of my choice, explaining design choices, my role's impact, and challenges faced. Standard behavioral questions included: 'Tell me about a calculated risk you took', 'Have you ever failed in your projects?', 'Provide an example of organization-wide or team-wide impact', and 'Describe any processes you changed in your team'.
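On the Redis pub/sub question above: subscribers register on a channel and Redis pushes every published message to all currently connected subscribers, fire-and-forget with no persistence, so offline subscribers miss messages. Below is a minimal sketch using the Jedis client against a local Redis instance; the channel name and payload are made up for illustration.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class TickerPubSubDemo {

    public static void main(String[] args) throws InterruptedException {
        // Subscriber: blocks on the channel and receives every message published while connected.
        Thread subscriber = new Thread(() -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println("Price update on " + channel + ": " + message);
                        unsubscribe(); // stop after the first message so the demo exits
                    }
                }, "ticker.CRM"); // hypothetical channel per stock symbol
            }
        });
        subscriber.start();
        Thread.sleep(500); // crude wait for the subscription to be established

        // Publisher: pushes an update; Redis fans it out to all current subscribers.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.publish("ticker.CRM", "251.30");
        }
    }
}
```

Compared with regular polling, the client keeps a single connection open and receives price updates as they happen instead of repeatedly asking for the latest value.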