
Google Cloud Status Dashboard Updates

A feed by Google Cloud Platform
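
For readers who would rather consume this feed programmatically than in the viewer, a minimal sketch follows, assuming the Python `requests` library and a hypothetical feed URL (substitute the real JSON Feed URL for this dashboard):

```python
# Minimal sketch: fetch a JSON Feed and print its items.
# FEED_URL is a placeholder; use the actual feed URL shown in the viewer.
import requests

FEED_URL = "https://example.com/cloud-status/feed.json"  # hypothetical

resp = requests.get(FEED_URL, timeout=10)
resp.raise_for_status()
feed = resp.json()

print(feed.get("title"))
for item in feed.get("items", []):
    # JSON Feed items carry an id, url, title, date_published, and content_text/content_html.
    print(item.get("date_published"), "-", item.get("title"))
```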



RESOLVED: Incident 20002 - We've received a report of an issue with Stackdriver Monitoring

Permalink - Posted on 2020-01-18 10:00

The issue with Stackdriver Monitoring has been resolved for all affected projects as of Saturday, 2020-01-18 01:53 US/Pacific. We thank you for your patience while we've worked on resolving the issue.


RESOLVED: Incident 20001 - We've received a report of an issue with Google Cloud Storage.

Permalink - Posted on 2020-01-18 09:54

The issue with Google Cloud Storage us-west2 has been resolved for all affected users as of Saturday, 2020-01-18 01:53 US/Pacific. We thank you for your patience while we've worked on resolving the issue.


UPDATE: Incident 20001 - We've received a report of an issue with Google Cloud Storage.

Permalink - Posted on 2020-01-18 08:59

Description: We believe the issue with Google Cloud Storage us-west2 is partially resolved. We are making sure the current situation is stable. We do not have an ETA for full resolution at this point. We will provide an update by Saturday, 2020-01-18 02:00 US/Pacific with current details.


UPDATE: Incident 20001 - We've received a report of an issue with Google Cloud Storage.

Permalink - Posted on 2020-01-18 07:57

Description: We believe the issue with Google Cloud Storage us-west2 is partially resolved. We do not have an ETA for full resolution at this point. We will provide an update by Saturday, 2020-01-18 01:00 US/Pacific with current details.


UPDATE: Incident 20002 - We've received a report of an issue with Stackdriver Monitoring

Permalink - Posted on 2020-01-18 07:53

Description: We are investigating a potential issue with Stackdriver Monitoring affecting the us-west2 region. We will provide more information by Saturday, 2020-01-18 02:00 US/Pacific.


UPDATE: Incident 20002 - We've received a report of an issue with Stackdriver Monitoring

Permalink - Posted on 2020-01-18 07:11

Description: We are investigating a potential issue with Stackdriver Monitoring affecting the us-west2 region. We will provide more information by Friday, 2020-01-17 23:49 US/Pacific.


UPDATE: Incident 20001 - We've received a report of an issue with Google Cloud Storage.

Permalink - Posted on 2020-01-18 06:59

Description: We are experiencing an issue with Google Cloud Storage in us-west2 beginning at Friday, 2020-01-17 21:05 US/Pacific. Symptoms: errors on requests. Our engineering team continues to investigate the issue. We will provide an update by Saturday, 2020-01-18 00:00 US/Pacific with current details. We apologize to all who are affected by the disruption.
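
While a regional Cloud Storage issue like this is ongoing, clients can often ride out transient request errors by retrying with backoff. A minimal sketch, assuming the google-cloud-storage Python client and placeholder bucket and object names:

```python
# Minimal sketch: read an object with explicit retry/backoff during transient errors.
# Bucket and object names are placeholders.
from google.api_core.retry import Retry
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-uswest2-bucket").blob("path/to/object")

# Retry transient failures for up to two minutes with exponential backoff.
retry = Retry(initial=1.0, maximum=30.0, deadline=120.0)
data = blob.download_as_bytes(retry=retry)
print(len(data), "bytes downloaded")
```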


RESOLVED: Incident 20001 - Elevated deployment errors

Permalink - Posted on 2020-01-18 00:55

The issue with Google Cloud Composer environment operations has been resolved for all affected projects as of Friday, 2020-01-17 16:47 US/Pacific. We thank you for your patience while we’ve worked on resolving the issue.


RESOLVED: Incident 20001 - Elevated deployment errors

Permalink - Posted on 2020-01-18 00:55

The issue with Google App Engine deployments has been resolved for all affected projects as of Friday, 2020-01-17 16:47 US/Pacific. We thank you for your patience while we’ve worked on resolving the issue.


RESOLVED: Incident 20001 - Deployment Manager error rates are high, also affecting App Engine Flexible and Cloud Composer.

Permalink - Posted on 2020-01-18 00:54

The issue with Deployment Manager operations, including App Engine Flexible deployments and Cloud Composer environment operations, has been resolved for all affected projects as of Friday, 2020-01-17 16:47 US/Pacific. We thank you for your patience while we've worked on resolving the issue.


UPDATE: Incident 20001 - Elevated deployment errors

Permalink - Posted on 2020-01-18 00:13

Our engineers have determined this issue to be linked to a single Google incident. For regular status updates, please visit [https://status.cloud.google.com/incident/zall/20001](https://status.cloud.google.com/incident/zall/20001). No further updates will be made through this incident.


UPDATE: Incident 20001 - Elevated deployment errors

Permalink - Posted on 2020-01-18 00:12

Our engineers have determined this issue to be linked to a single Google incident. For regular status updates, please visit [https://status.cloud.google.com/incident/zall/20001](https://status.cloud.google.com/incident/zall/20001). No further updates will be made through this incident.


UPDATE: Incident 20001 - Deployment Manager error rates are high, also affecting App Engine Flexible and Cloud Composer.

Permalink - Posted on 2020-01-18 00:05

Description: We are experiencing an issue with Cloud Deployment Manager, beginning at Friday, 2020-01-17 15:04 US/Pacific. Symptoms: high error rate on Deployment Manager operations, including App Engine Flexible deployments and Cloud Composer environment operations. We've taken additional steps to add resources to resolve the issue. Our engineering team continues to investigate at high priority. We will provide an update by Friday, 2020-01-17 16:30 US/Pacific with current details.
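
To check whether your own project is hitting the elevated error rate, you can list recent Deployment Manager operations and look for errors. A minimal sketch, assuming the google-api-python-client library and a hypothetical project ID:

```python
# Minimal sketch: list Deployment Manager operations and flag any that failed.
# PROJECT_ID is a placeholder.
from googleapiclient import discovery

PROJECT_ID = "my-project"
dm = discovery.build("deploymentmanager", "v2")

ops = dm.operations().list(project=PROJECT_ID).execute()
for op in ops.get("operations", []):
    marker = "ERROR" if op.get("error") else ""
    print(op.get("name"), op.get("operationType"), op.get("status"), marker)
```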


RESOLVED: Incident 20001 - BigQuery exports for Stackdriver Logs delayed in europe-west1

Permalink - Posted on 2020-01-14 02:50

The issue with Stackdriver Logging has been resolved for all affected projects as of Monday, 2020-01-13 18:50 US/Pacific. We thank you for your patience while we've worked on resolving the issue.


UPDATE: Incident 20001 - BigQuery exports for Stackdriver Logs delayed in europe-west1

Permalink - Posted on 2020-01-14 02:00

Description: Mitigation work is still underway by our engineering team. The backlog has stopped growing, and a fix was rolled out to finish processing the remaining logs. We will provide more information by Monday, 2020-01-13 19:00 US/Pacific. Diagnosis: All customers in europe-west1 exporting from Stackdriver Logs to BigQuery will experience delays. Workaround: None at this time.


UPDATE: Incident 20001 - BigQuery exports for Stackdriver Logs delayed in europe-west1

Permalink - Posted on 2020-01-13 22:54

Description: Mitigation work is still underway by our engineering team. The backlog has stopped growing, and a fix is rolling out to finish processing the remaining logs. We will provide more information by Monday, 2020-01-13 18:00 US/Pacific. Diagnosis: All customers in europe-west1 exporting from Stackdriver Logs to BigQuery will experience delays. Workaround: None at this time.


UPDATE: Incident 20001 - BigQuery exports for Stackdriver Logs delayed in europe-west1

Permalink - Posted on 2020-01-13 22:04

Description: Mitigation work is still underway by our engineering team. No significant improvements to backlog yet. We will provide more information by Monday, 2020-01-13 17:30 US/Pacific. Diagnosis: All customers in europe-west1 exporting from Stackdriver Logs to BigQuery will experience delays. Workaround: None at this time.


UPDATE: Incident 20001 - BigQuery exports for Stackdriver Logs delayed in europe-west1

Permalink - Posted on 2020-01-13 20:34

Description: Delays began on Monday, 2020-01-13 08:15 US/Pacific and the backlog is currently ~5.5 hours. Mitigation work is currently underway by our engineering team. We do not have an ETA for mitigation at this point. We will provide more information by Monday, 2020-01-13 14:00 US/Pacific. Diagnosis: All customers in europe-west1 exporting from Stackdriver Logs to BigQuery will experience delays. Workaround: None at this time.
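
Affected projects can identify which of their log exports feed BigQuery, and would therefore see delayed data, by listing their Stackdriver Logging sinks. A minimal sketch, assuming the google-cloud-logging Python client:

```python
# Minimal sketch: list log sinks and highlight the ones exporting to BigQuery.
from google.cloud import logging

client = logging.Client()
for sink in client.list_sinks():
    # BigQuery export destinations look like
    # "bigquery.googleapis.com/projects/<project>/datasets/<dataset>".
    if sink.destination.startswith("bigquery.googleapis.com"):
        print(sink.name, "->", sink.destination)
```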


RESOLVED: Incident 20001 - Google’s production network experienced a temporary reduction in capacity, due to multiple fiber cuts in optical links interconnecting Sofia, Bulgaria

Permalink - Posted on 2020-01-09 22:14

### ISSUE SUMMARY

On Wednesday, 18 December, 2019, a part of Google’s production network experienced a temporary reduction in capacity, due to multiple fiber cuts in optical links interconnecting Sofia, Bulgaria with other points-of-presence. This resulted in severe congestion on the remaining links to Sofia for a duration of 1 hour and 1 minute. Access to Google Cloud products and services through Internet Service Providers (ISPs) in Bulgaria, Turkey, Northern Macedonia, Azerbaijan, Greece, Cyprus, Kosovo, Serbia and Iraq, which rely heavily on the Google point-of-presence in Sofia, Bulgaria, was degraded. Users outside the affected countries were not impacted by this issue.

### DETAILED DESCRIPTION OF IMPACT

On Wednesday, 18 December, 2019, from 23:43 to Thursday, 19 December, 2019 at 00:44 US/Pacific, access to Google products and services (including Google Cloud Platform) through ISPs in Bulgaria, Turkey, Northern Macedonia, Azerbaijan, Greece, Cyprus, Bosnia, Kosovo, Serbia and Iraq, which rely heavily on the Google point-of-presence in Sofia, Bulgaria, experienced severe congestion for a duration of 1 hour and 1 minute. End users whose ISPs rely heavily on the Google peering links in Sofia to access Google Cloud services were affected by the severe congestion between the Sofia point-of-presence and Cloud Regions across the globe. Cloud traffic to/from the region dropped by 60% during the one-hour window of degraded connectivity. End users in Turkey, who generated the bulk of the Cloud traffic to/from the region, experienced up to a 77% drop in traffic during the incident window.

### ROOT CAUSE

Google maintains a network point-of-presence (PoP) with caching and peering infrastructure in Sofia, Bulgaria. The Sofia PoP provides network peering to many providers in Eastern Europe. These network providers in turn enable access to Google services for users in Bulgaria, Turkey, Northern Macedonia, Azerbaijan, Greece, Cyprus, Bosnia, Kosovo, Serbia and Iraq. Sofia is connected to the rest of Google’s production network through multiple independent optical pathways located throughout Europe. This incident was triggered by dual, unrelated (yet overlapping) faults on high-capacity optical network links in Bucharest, Romania and Munich, Germany that significantly reduced the network capacity of the interconnect between Sofia and the Google production network. Prior to the outage there was a fiber cut in Bucharest, Romania severing the connectivity between Frankfurt, Germany and Sofia, Bulgaria. A second fiber cut in Munich, Germany impacted two separate optical paths:

- Circuits between Frankfurt, Germany and Sofia, Bulgaria were rendered inoperable.
- Circuits between Munich, Germany and Sofia, Bulgaria were left with less than 10% of their normal capacity.

Once these links were disrupted, the small amount of remaining capacity between the Sofia and Munich metros continued to attract traffic while unable to fully support it. This brief period of reduced capacity resulted in severe congestion for customers of ISPs heavily reliant on the peering links in Sofia, Bulgaria for accessing Google products and services. Once all traffic that was being sent through peering links in Sofia was redirected through alternative, operational points of presence, the incident was fully mitigated.

### REMEDIATION AND PREVENTION

Google Engineers were automatically alerted to packet loss between the Munich and Sofia metros on 2019-12-18 at 23:47 US/Pacific and immediately began investigating. On 2019-12-19 at 00:24 Google Engineers identified the root cause of the packet loss and took decisive mitigation action to redirect traffic away from the peering links in Sofia, Bulgaria. By 00:44 all impacted traffic was successfully redirected to adjacent functional network links, fully mitigating the impact to Google Cloud customers. In addition to addressing the root cause of the network link disruption, we will be improving the processes for detecting network PoPs with severely constrained connectivity, and implementing a new feature in our existing networking administration tooling to effectively redirect traffic away from these PoPs without delay. This will reduce the total time to resolution for similar classes of issues in the future. To ensure this feature is properly utilized during emergency situations, training will be delivered to Google Engineers. Google is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact on your organization. We thank you for your business.


RESOLVED: Incident 20001 - We've received a report of an issue with Cloud Spanner.

Permalink - Posted on 2020-01-09 03:10

The issue with Cloud Spanner, where customers were unable to change the node count for instances in the multi-region configuration "nam3", has been resolved for all affected users as of Wednesday, 2020-01-08 19:08 US/Pacific. We thank you for your patience while we've worked on resolving the issue.
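
For reference, with the incident resolved, resizing an instance again works through the normal APIs. A minimal sketch, assuming the google-cloud-spanner Python client and a hypothetical instance ID and node count:

```python
# Minimal sketch: update the node count of a Cloud Spanner instance.
# The instance ID and node count are placeholders.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-nam3-instance")
instance.reload()               # fetch the current configuration

instance.node_count = 3         # desired node count
operation = instance.update()   # returns a long-running operation
operation.result(timeout=300)   # wait for the resize to complete
print("Instance now has", instance.node_count, "nodes")
```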