[2026-02-02] Incident Thread #186184
6 comments · 3 replies
-
Update: GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.
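
Since self-hosted runners are unaffected, repositories that already have one registered can temporarily repoint jobs at it. A minimal sketch of the change in a workflow file, assuming a Linux self-hosted runner with the default `self-hosted` label; the job name and build step are illustrative, not from this thread:

```yaml
# .github/workflows/ci.yml (illustrative excerpt)
jobs:
  build:
    # Normally: runs-on: ubuntu-latest (hosted, affected by this incident).
    # Temporary workaround: target a registered self-hosted runner instead.
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - run: make build   # placeholder for the repository's real build step
```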
-
Update: Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.
-
Update: Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.
-
Incident Resolved: This incident has been resolved.
-
Hello, your status page shows the incident as resolved, but Actions runs remain queued and a Pages build is blocked for my repo. Repository: mariush444/Osmand-tools
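
For anyone in the same situation, one way to confirm that runs are genuinely stuck, rather than just slow, is the GitHub CLI. A minimal sketch, assuming `gh` is authenticated for the repository; the run ID below is a placeholder taken from the list output:

```bash
# List runs still sitting in the queue for the affected repository.
gh run list --repo mariush444/Osmand-tools --status queued --limit 20

# Cancel a stuck run (replace 1234567890 with a real run ID from the
# list above), then re-run it once runner capacity has recovered.
gh run cancel 1234567890 --repo mariush444/Osmand-tools
gh run rerun 1234567890 --repo mariush444/Osmand-tools
```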
-
Incident Summary

On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at 00:30 UTC on February 3, 2026 for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted; self-hosted runners on other providers were not.

The outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. The incident was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that had not timed out.

We are working with our compute provider to improve incident response and engagement time, to detect issues like this before they impact customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub for their workloads, and we apologize for the impact it had.
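
Runs that failed or timed out during the incident window are not replayed automatically, so they may need a manual re-run. A minimal sketch using the GitHub CLI, assuming a recent `gh` release that supports `--created` and `--failed`; `OWNER/REPO` and the run ID are placeholders:

```bash
# Find runs that failed on the incident date (adjust the repo and date).
gh run list --repo OWNER/REPO --created 2026-02-02 --status failure \
  --json databaseId,displayTitle \
  --jq '.[] | "\(.databaseId)\t\(.displayTitle)"'

# Re-run only the failed jobs of a given run.
gh run rerun <run-id> --failed --repo OWNER/REPO
```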

-
❗ An incident has been declared:
Incident with Actions
Subscribe to this Discussion for updates on this incident. Please upvote or react with an emoji instead of commenting "+1" to avoid overwhelming the thread. Any account guidance specific to this incident will be shared in this thread and on the Incident Status Page.