EightFold AI | AI Assisted Coding Round | Senior Engineer | Question And Experience Attached

eightfold ai
· Senior Engineer
April 10, 2026

Summary

I completed the AI Assisted Coding Round for a Senior Engineer position at EightFold AI, solved the provided system‑design problem, and passed to the next round.

Full Experience

Recently went through this round.

Generic FYI for future candidates: the interviewers there are mostly junior (3-4 years of experience). They don't really judge you on code quality, thought process, or anything else you might expect to be acknowledged.

Their focus is simply to make it run; they won't cross-question you on anything (no "why" about any decision). Just finish it however you can. Keep the intro talk at the start of the interview to a minimum, otherwise you lose time on the main problem.

Result: passed, moved to the next round.

Expectation: have a local IDE ready with an AI coding assistant. I used Claude in VS Code. I also pre-created an instructions file for Claude (again, the interviewer didn't care; they just wanted the solution).

Problem Statement

Build a system for a cloud service provider that processes incoming API traffic from millions of clients. The traffic consists of API request logs streamed in real time from various sources (e.g., microservices, edge servers) in different formats. Your goal is to design and implement a software solution that analyzes these logs, computes usage metrics, enforces dynamic rate limits, and outputs actionable insights to a designated directory, all while adapting to evolving log formats and scaling to handle high throughput.

Problem Details

  • Input Structure: API request logs are written as files (e.g., JSON, CSV) into an input directory. Logs arrive from multiple sources, each with its own format, and are organized into subfolders by source ID (e.g., input/sourceA/log1.json). New log files are continuously appended or updated.
  • Dynamic Nature:
    • New log files are added in real time (e.g., every few seconds).
    • Existing log files may be appended with new entries.
    • Sources can disappear, and new sources with entirely new log formats can appear without notice.
  • Requirements:
    • Design a generic output format for API usage insights (e.g., per‑client request counts, latency stats, rate limit status).
    • Continuously monitor the input directory for new or updated log files and process them within a 5‑second window.
    • Compute metrics such as:
      • Total requests per client (identified by an API key or client ID).
      • Average latency per client.
      • Top 5 most frequent endpoints per source.
    • Enforce dynamic rate limits:
      • Each client has a configurable limit (e.g., 1000 requests/hour), stored in a config file (limits.json).
      • Flag clients exceeding their limits in the output.
    • Handle multiple input formats (e.g., JSON, CSV) and allow new formats to be processed without code changes.
    • Optimize for high throughput (millions of requests/hour) and low processing lag.
    • Output insights as JSON files in an output directory (e.g., output/sourceA/insights.json), updated in real time.
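The "monitor for new or updated files" requirement can be met with a simple polling scan that tracks per-file byte offsets, so appended entries are read incrementally instead of reprocessing whole files. This is a minimal sketch of the approach I'd reach for; the function name and offset-tracking scheme are my own, not part of the prompt.

```python
import os


def scan_for_changes(input_dir, offsets):
    """Return (path, new_bytes) pairs for files that are new or have grown.

    `offsets` maps file path -> bytes already processed, so a file that
    is appended to yields only the freshly added bytes on the next scan.
    """
    changes = []
    for root, _dirs, files in os.walk(input_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            seen = offsets.get(path, 0)
            if size > seen:
                with open(path, "rb") as f:
                    f.seek(seen)
                    changes.append((path, f.read(size - seen)))
                offsets[path] = size
    return changes
```

Calling this every couple of seconds keeps you well inside the 5-second processing window for the input sizes a single machine can handle; for real high-throughput deployments you'd swap polling for filesystem events or a message queue.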

Sample input (note that the CSV and JSON sources use different key names, so we need a configuration that maps them to the right internal field names):

Format 1 (JSON - Source: "EdgeServer"):

{ 
  "timestamp": "2025-04-04T10:00:01Z",
  "requests": [
    {
      "api_key": "abc123",
      "endpoint": "/v1/users",
      "method": "GET",
      "latency_ms": 120,
      "status": 200
    },
    {
      "api_key": "xyz789",
      "endpoint": "/v1/orders",
      "method": "POST",
      "latency_ms": 250,
      "status": 201
     }
  ]
}

Format 2 (CSV - Source: "Microservice"):

 timestamp,client_id,endpoint,method,response_time_ms,status_code
 2025-04-04T10:00:02Z,def456,/v1/auth,POST,180,200
 2025-04-04T10:00:03Z,ghi789,/v1/users,GET,90,404

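The key-name mismatch between the two formats (`api_key` vs. `client_id`, `latency_ms` vs. `response_time_ms`) is what the mapping configuration is for. Here is a minimal sketch of how I'd wire it up; the `MAPPINGS` structure and internal field names are my own illustration (in practice this would live in a config file so new formats need no code changes):

```python
import csv
import io
import json

# Hypothetical per-source mapping config (would be loaded from a file,
# e.g. mappings.json): raw field name -> internal field name.
MAPPINGS = {
    "EdgeServer": {
        "api_key": "client_id", "endpoint": "endpoint",
        "latency_ms": "latency_ms", "status": "status",
    },
    "Microservice": {
        "client_id": "client_id", "endpoint": "endpoint",
        "response_time_ms": "latency_ms", "status_code": "status",
    },
}


def normalize(source, raw_record):
    """Rename a raw record's keys to the internal schema for its source."""
    mapping = MAPPINGS[source]
    return {internal: raw_record[raw]
            for raw, internal in mapping.items() if raw in raw_record}


def parse_payload(source, text):
    """Yield normalized records from a JSON or CSV log payload."""
    text = text.strip()
    if text.startswith("{"):
        for req in json.loads(text).get("requests", []):
            yield normalize(source, req)
    else:
        for row in csv.DictReader(io.StringIO(text)):
            yield normalize(source, row)
```

With this shape, a brand-new source format is onboarded by adding one entry to the mapping config, which is exactly the "no code changes" requirement.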
Output Requirements:

  • ... (rest of description omitted for brevity)

Interview Questions (1)

1.

Real‑time API Log Processing System Design

System Design

Design and implement a scalable system for a cloud service provider that continuously ingests API request logs from millions of clients. The logs arrive in multiple formats (JSON, CSV) placed in an input directory with subfolders per source. The system must monitor the directory for new or updated files, process logs within a 5‑second window, compute per‑client and per‑source metrics (total requests, average latency, top 5 endpoints), enforce dynamic rate limits defined in a configuration file, handle evolving log formats without code changes, and output insights as JSON files to an output directory in real time.
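Once records are normalized, the metrics side of the question (total requests, average latency, top 5 endpoints, rate-limit flags) reduces to a small aggregation. This is a sketch under my own assumptions about the output shape, since the prompt only asks for a "generic" insights format; `limits` mirrors the limits.json mentioned in the requirements:

```python
from collections import Counter, defaultdict


def compute_insights(records, limits, default_limit=1000):
    """Aggregate normalized records into a per-source insights dict.

    `limits` maps client_id -> allowed requests per hour (limits.json);
    clients without an entry fall back to `default_limit`.
    """
    counts = Counter()
    latency_sum = defaultdict(int)
    endpoints = Counter()
    for r in records:
        cid = r["client_id"]
        counts[cid] += 1
        latency_sum[cid] += int(r["latency_ms"])
        endpoints[r["endpoint"]] += 1
    return {
        "clients": {
            cid: {
                "total_requests": n,
                "avg_latency_ms": latency_sum[cid] / n,
                "rate_limited": n > limits.get(cid, default_limit),
            }
            for cid, n in counts.items()
        },
        "top_endpoints": [e for e, _ in endpoints.most_common(5)],
    }
```

Dumping this dict as JSON into `output/<source>/insights.json` after each scan cycle satisfies the real-time output requirement at interview scale; a production system would keep these aggregates in a streaming store rather than recomputing per batch.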

Preparation Tips

I prepared by having a local IDE ready with an AI coding assistant (Claude in VS Code) and pre‑created an instructions file for the assistant, so I could quickly generate a solution during the interview.

