Resolved -
At 09:52 UTC, we began seeing platform slowness and increased request timeouts. After initiating incident response, we traced the degradation to MongoDB: a specific query path (index operation) began taking significantly longer than usual. MongoDB continued to report as “healthy,” but services that depend on it experienced severe latency and, in some cases, stopped responding.
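For illustration, a minimal sketch of how long-running operations of this kind can be surfaced with MongoDB’s $currentOp aggregation stage, assuming a pymongo client with admin access; the connection string, threshold, and field choices below are placeholders, not our actual tooling:

```python
# Illustrative only: list in-progress MongoDB operations running longer
# than a threshold, similar to the slow query path described above.
# Assumes credentials with permission to run $currentOp.
from pymongo import MongoClient

SLOW_SECONDS = 5  # hypothetical threshold

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

# $currentOp is an admin-database aggregation stage listing active operations.
slow_ops = client.admin.aggregate([
    {"$currentOp": {"allUsers": True, "idleConnections": False}},
    {"$match": {"secs_running": {"$gte": SLOW_SECONDS}}},
])

for op in slow_ops:
    print(op.get("ns"), op.get("secs_running"), op.get("planSummary"))
```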
Mitigation and resolution
To stabilize the system and reduce load, we temporarily stopped application connections to MongoDB. We then moved MongoDB to a higher-capacity instance. After confirming that MongoDB performance had returned to normal and that all required functions were operating correctly, we restored the connections. Platform performance recovered, and the system is now back to a healthy state.
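For illustration, a simple readiness probe of the kind that can be run before re-enabling application traffic, assuming pymongo; the database, collection, and latency threshold are placeholders, not our actual acceptance criteria:

```python
# Illustrative only: check connectivity and the latency of a representative
# query before restoring application connections.
import time
from pymongo import MongoClient

MAX_LATENCY_MS = 200  # hypothetical acceptance threshold

client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=5000)

# Basic connectivity check.
client.admin.command("ping")

# Time a representative read on the affected query path (placeholder namespace).
start = time.monotonic()
client["app_db"]["orders"].find_one({})
elapsed_ms = (time.monotonic() - start) * 1000

if elapsed_ms > MAX_LATENCY_MS:
    raise SystemExit(f"Query latency {elapsed_ms:.0f} ms exceeds threshold; keep traffic paused")
print(f"Query latency {elapsed_ms:.0f} ms; safe to restore connections")
```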
Jan 8, 11:49 UTC
Identified -
The issue has been identified and a fix is being implemented.
Jan 8, 09:52 UTC
Investigating -
We are currently investigating this issue.
Jan 8, 09:50 UTC