Resolved
At 09:52 UTC, we began seeing platform slowness and increased request timeouts. After initiating incident response, we traced the degradation to MongoDB: a specific index operation on a hot query path began taking significantly longer than usual. MongoDB continued to report as "healthy," but services that depend on it experienced severe latency and, in some cases, stopped responding.
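For readers who want to reproduce this kind of triage, the sketch below shows one way to surface long-running MongoDB operations from an application host. It is a minimal illustration, not our production tooling; the connection URI and the 5-second threshold are assumptions for the example.

```python
# Triage sketch: list in-progress MongoDB operations that have been running
# for a long time, using the `currentOp` admin command via PyMongo.
# The URI and threshold below are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection URI

# Snapshot of operations currently in progress on the server.
status = client.admin.command("currentOp")

for op in status.get("inprog", []):
    # Flag anything running longer than 5 seconds (threshold is an assumption).
    if op.get("secs_running", 0) > 5:
        print(op.get("opid"), op.get("ns"), f"{op.get('secs_running')}s")
        print("  command:", op.get("command"))
```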
Mitigation and resolution
To stabilize the system and reduce load, we temporarily stopped application connections to MongoDB. We then upgraded MongoDB to a higher-capacity instance. After confirming that MongoDB performance had returned to normal and all required functions were operating correctly, we restored the connections. Platform performance recovered, and the system is now back to a healthy state.
Posted Jan 08, 2026 - 11:49 UTC
Identified
The issue has been identified and a fix is being implemented.
Posted Jan 08, 2026 - 09:52 UTC
Investigating
We are currently investigating this issue.
Posted Jan 08, 2026 - 09:50 UTC
This incident affected: API, Customer Hub (Portal), Partners API, and Evidence Submission.