As of 12:47 UTC all services are operating normally. Between 12:03 UTC and 12:29 UTC, alert processing was delayed, which may have caused some alerts to fire incorrectly. Between 12:03 UTC and 12:13 UTC, up to 10% of requests to Grafana returned 503s, resulting in partial graphs. Contrary to our initial update, ingestion across all protocols was unaffected; however, connectivity issues in our aggregation layer delayed the processing of datapoints. No data was lost, and this incident is now resolved.
Posted May 06, 2019 - 12:47 UTC
As of 12:03 UTC we are experiencing intermittent network connectivity issues that are disrupting multiple services, resulting in partial graph renders, delayed alert processing, and delayed ingestion of datapoints over HTTP. No data has been lost.
Posted May 06, 2019 - 12:21 UTC
This incident affected: Graph rendering, Ingestion, and Alerting.