Summary
I recently interviewed with Tower Research Capital, where I was presented with a challenging system design and optimization problem: reducing the latency of a word-count query over a large library of books from seconds to microseconds.
Full Experience
I recently had the opportunity to interview with Tower Research Capital, a prominent High Frequency Trading (HFT) firm. The technical challenge was complex and focused heavily on latency optimization, which is crucial in HFT environments. I was asked to design a system that drastically reduces the time taken for a specific operation.
Interview Questions (1)
Suppose there is a book API that returns the names of the books in a library via bookapi.getBooks(). This call has a latency of 500 ms (milliseconds), and there are 1,000 books in the library.
Once we have a book name, a second call, bookapi.getText(bookname), returns the content of that book. This call can return up to 1 billion words per book and has a latency of 1000 ms (milliseconds).
Now, how would we find the count of a specific word in a book (for example, "Ram" in "Ramayana") with a target latency of 400 microseconds? The interviewer was interested in caching mechanisms or custom data structures that could meet this target.
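As a rough illustration of that caching idea, here is a minimal sketch (not the interviewer's reference solution): pay the slow API latency once during a warm-up phase, precompute per-book word counts into an in-memory hash map, and answer each query with two hash lookups that never touch the API. The BookApi interface below is a hypothetical stand-in for the two calls in the problem statement. Warming up serially costs roughly 500 ms + 1,000 x 1000 ms (about 17 minutes), which can be shrunk by fetching books in parallel; after that, each query is comfortably within the 400-microsecond budget.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical interface matching the two calls from the problem statement.
interface BookApi {
    List<String> getBooks();          // ~500 ms per call
    String getText(String bookName);  // ~1000 ms per call, up to 1B words
}

public class WordCountCache {
    // bookName -> (word -> count); built once at startup, queried many times.
    private final Map<String, Map<String, Long>> counts = new HashMap<>();

    // Warm-up phase: all API latency is paid here, before any query arrives.
    public WordCountCache(BookApi api) {
        for (String book : api.getBooks()) {
            Map<String, Long> wordCounts = new HashMap<>();
            for (String word : api.getText(book).split("\\s+")) {
                wordCounts.merge(word, 1L, Long::sum);
            }
            counts.put(book, wordCounts);
        }
    }

    // Pure in-memory lookup: two hash probes, no API calls on the query path.
    public long count(String bookName, String word) {
        Map<String, Long> wordCounts = counts.get(bookName);
        return wordCounts == null ? 0L : wordCounts.getOrDefault(word, 0L);
    }
}
```

In a real system the 1-billion-word text would have to be streamed rather than materialized as a single string, and the per-book fetches would be parallelized, but the core point I discussed stands: move all the expensive API calls off the query path so the hot path only touches local memory.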