GitHub Status – Incident History (as of 2026-03-03 21:20 UTC)

Claude Opus 4.6 Fast not appearing for some Copilot users (resolved Mar 3, 2026)
Mar 3, 21:11 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 3, 21:05 UTC – Update – We believe that all expected users still have access to Claude Opus 4.6. We confirm that no users have lost access.
Mar 3, 20:31 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Incident with Copilot and Actions (resolved Mar 3, 2026)
Mar 3, 20:09 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 3, 20:06 UTC – Update – We're seeing recovery across all services. We're continuing to monitor for full recovery.
Mar 3, 19:55 UTC – Update – Actions is operating normally.
Mar 3, 19:54 UTC – Update – Git Operations is operating normally.
Mar 3, 19:36 UTC – Update – Git Operations is experiencing degraded availability. We are continuing to investigate.
Mar 3, 19:33 UTC – Update – We are seeing recovery across multiple services. Impact is mostly isolated to Git operations at this point; we continue to investigate.
Mar 3, 19:31 UTC – Update – Copilot is operating normally.
Mar 3, 19:31 UTC – Update – Pull Requests is operating normally.
Mar 3, 19:28 UTC – Update – Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:27 UTC – Update – Issues is operating normally.
Mar 3, 19:25 UTC – Update – Webhooks is operating normally.
Mar 3, 19:25 UTC – Update – Codespaces is operating normally.
Mar 3, 19:24 UTC – Update – Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:23 UTC – Update – Issues is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:17 UTC – Update – We've identified the issue and have applied a mitigation. We're seeing recovery of services. We continue to monitor for full recovery.
Mar 3, 19:15 UTC – Update – API Requests is operating normally.
Mar 3, 19:14 UTC – Update – API Requests is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:11 UTC – Update – Codespaces is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:05 UTC – Update – Pull Requests is experiencing degraded availability. We are continuing to investigate.
Mar 3, 19:04 UTC – Update – Webhooks is experiencing degraded availability. We are continuing to investigate.
Mar 3, 19:03 UTC – Update – We're seeing some service degradation across GitHub services. We're currently investigating impact.
Mar 3, 19:02 UTC – Update – Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:00 UTC – Update – Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 3, 19:00 UTC – Update – API Requests is experiencing degraded availability. We are continuing to investigate.
Mar 3, 18:59 UTC – Investigating – We are investigating reports of degraded availability for Actions, Copilot and Issues.

Delayed visibility of newly added issues on project boards (resolved Mar 3, 2026)
Mar 3, 05:54 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 3, 05:53 UTC – Update – This incident has been resolved. Project board updates are now processing in near-real-time.
Mar 3, 04:36 UTC – Update – The backlog of delayed updates is expected to fully clear within approximately 1 hour, after which project board updates will return to near-real-time.
Mar 3, 04:17 UTC – Update – The fix has been deployed and processing speeds have returned to normal. There is a backlog of delayed updates that will continue to be worked through; we're estimating how long that will take and will provide an update in the next 60 minutes.
Mar 3, 03:22 UTC – Update – The fix is still building and is expected to deploy within 60 minutes. The current delay for GitHub Projects updates has increased to up to 5 hours.
Mar 3, 02:27 UTC – Update – We're deploying a fix targeting the increased delay in GitHub Projects updates. The rollout should complete within 60 minutes. If successful, the current delay of up to 4 hours should begin to decrease.
Mar 3, 01:40 UTC – Update – The delay for project board updates has increased to up to 3 hours. We've identified a potential cause and are working on remediation.
Mar 3, 00:52 UTC – Update – Project board updates, including adding issues, pull requests, and changing fields such as "Status", are currently delayed by 1–2 hours. Normal behavior is near-real-time. We're actively investigating the root cause.
Mar 3, 00:05 UTC – Update – The impact extends beyond adding issues to project boards. Adding pull requests and updating fields such as "Status" may also be affected. We're continuing to investigate the root cause.
Mar 2, 23:46 UTC – Update – Newly added issues are taking 30–60 minutes to appear on project boards, compared to the normal near-real-time behavior. We're investigating the root cause and possible mitigations.
Mar 2, 23:12 UTC – Update – Newly added issues can take up to 30 minutes to appear on project boards. We're investigating the cause of this delay.
Mar 2, 23:11 UTC – Update – Issues is experiencing degraded performance. We are continuing to investigate.
Mar 2, 23:10 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Incident with Pull Requests /pulls (resolved Mar 2, 2026)
Mar 2, 22:04 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 2, 22:04 UTC – Update – The issue on https://github.com/pulls is now fully resolved. All tabs are working again.
Mar 2, 21:04 UTC – Update – We're deploying a fix for pull request filtering. Full rollout across all regions is expected within 60 minutes.
Mar 2, 20:02 UTC – Update – We are experiencing issues with the Pull Requests dashboard that prevent users from filtering their pull requests. We have identified a mitigation and are deploying a fix. We'll post another update by 21:00 UTC.
Mar 2, 19:23 UTC – Update – We are seeing a degraded experience when attempting to filter the /pulls dashboard. We are working on a mitigation.
Mar 2, 19:11 UTC – Investigating – We are investigating reports of degraded performance for Pull Requests.

Incident with Copilot agent sessions (resolved Feb 27, 2026)
Feb 27, 23:49 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 27, 23:45 UTC – Update – We have identified the cause of the elevated errors and are rolling out a fix to production. We are observing initial recovery in Copilot agent sessions.
Feb 27, 23:35 UTC – Update – We are investigating networking issues with some requests to our models.
Feb 27, 23:18 UTC – Update – We are investigating a spike in errors in Copilot agent sessions.
Feb 27, 23:18 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Code view fails to load when content contains some non-ASCII characters (resolved Feb 27, 2026)
Feb 27, 06:04 UTC – Resolved – From February 26, 2026 at 22:10 UTC through February 27 at 05:50 UTC, the repository browsing UI was degraded and users were unable to load pages for files and directories with non-ASCII characters (including Japanese, Chinese, and other non-Latin scripts). On average, the error rate was 0.014% and peaked at 0.06% of requests to the service. Affected users saw 404 errors when navigating to repository directories and files with non-ASCII names. This was due to a code change that altered how file and directory names were processed, which caused incorrectly formatted data to be stored in an application cache. We mitigated the incident by deploying a fix that invalidated the affected cache entries and progressively rolling it out across all production environments. We are working to improve our pre-production testing to cover non-ASCII character handling, establish better cache invalidation mechanisms, and enhance our monitoring to detect this type of failure mode earlier, to reduce our time to detection and mitigation of issues like this one in the future.
Feb 27, 06:03 UTC – Update – We have cleared all caches and everything is operating normally.
Feb 27, 05:21 UTC – Update – We have mitigated the issue but are working on invalidating caches in order to fix the issue for all impacted repos.
Feb 27, 04:17 UTC – Update – We have performed a mitigation but some repositories may still see issues. We are working on a full mitigation.
Feb 27, 03:28 UTC – Update – We are looking into recent code changes to mitigate the error loading some code view pages.
Feb 27, 03:08 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.
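The root cause described above (a processing change that wrote incorrectly formatted entries into an application cache, mitigated by invalidating the affected entries) is a common hazard when non-ASCII path strings are encoded or normalized inconsistently before being used as cache keys. The Python sketch below illustrates the general pattern only; the function names, the versioned key scheme, and the sample paths are hypothetical and are not GitHub's implementation.

```python
import unicodedata

# Hypothetical cache schema version: bumping it effectively invalidates every
# previously written entry, one way to drop bad entries without enumerating them.
CACHE_SCHEMA_VERSION = 2

def normalize_path(path: str) -> str:
    """Normalize a repository path so equivalent Unicode spellings
    (precomposed vs. decomposed characters) map to the same cache key."""
    return unicodedata.normalize("NFC", path)

def cache_key(repo_id: int, path: str) -> str:
    # Hex-encoding the normalized UTF-8 bytes keeps the key ASCII-safe, and the
    # embedded version lets a bad deploy be rolled forward by bumping the version.
    safe = normalize_path(path).encode("utf-8").hex()
    return f"tree-entry:v{CACHE_SCHEMA_VERSION}:{repo_id}:{safe}"

if __name__ == "__main__":
    # The same directory name in composed and decomposed form yields one key,
    # so a lookup cannot 404 simply because of the Unicode spelling.
    composed = "ドキュメント/設計.md"
    decomposed = unicodedata.normalize("NFD", composed)
    assert cache_key(42, composed) == cache_key(42, decomposed)
    print(cache_key(42, composed))
```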
High latency on webhook API requests (resolved Feb 27, 2026)
Feb 27, 00:04 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 27, 00:02 UTC – Update – Webhooks is experiencing degraded performance. We are continuing to investigate.
Feb 27, 00:01 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Incident with Copilot (resolved Feb 26, 2026)
Feb 26, 11:06 UTC – Resolved – On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. During this time, 5–15% of affected requests to the service returned errors. The incident was resolved by infrastructure rebalancing. We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.
Feb 26, 11:06 UTC – Update – Copilot is operating normally.
Feb 26, 10:22 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Incident with Copilot Agent Sessions impacting CCA/CCR (resolved Feb 25, 2026)
Feb 25, 16:44 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 25, 16:38 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Code search experiencing degraded performance (resolved Feb 24, 2026)
Feb 24, 00:46 UTC – Resolved – Between 2026-02-23 19:10 UTC and 2026-02-24 00:46 UTC, all lexical code search queries on GitHub.com and the code search API were significantly slowed, and during this incident between 5 and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts, which searched with a uniquely expensive query. This query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal. To avoid this situation occurring again in the future, we are making a number of improvements to our systems, including improved rate limiting that accounts for highly skewed load on hot shards, improved system resilience for when a small number of shards time out, improved tooling to recognize abusive actors, and capabilities that will allow us to shed load on a single shard in emergencies.
Feb 24, 00:38 UTC – Update – We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.
Feb 23, 23:10 UTC – Update – Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.
Feb 23, 22:22 UTC – Update – Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.
Feb 23, 21:18 UTC – Update – Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are continuing to investigate the cause and steps to mitigate.
Feb 23, 20:33 UTC – Update – We are continuing to investigate elevated latency and timeouts for code search.
Feb 23, 19:59 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.
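The follow-up items above mention rate limiting that accounts for skewed load on hot shards and the ability to shed load on a single shard in an emergency. A minimal Python sketch of that idea follows; the shard names, CPU readings, and thresholds are invented for illustration and do not reflect GitHub's search infrastructure.

```python
import random

# Hypothetical per-shard CPU utilization, e.g. sampled from a metrics pipeline.
shard_cpu = {"shard-00": 0.42, "shard-01": 0.97, "shard-02": 0.55}

SHED_THRESHOLD = 0.90  # begin shedding normal-priority queries
HARD_LIMIT = 0.98      # reject everything except critical traffic

def admit_query(shard: str, priority: str = "normal") -> bool:
    """Decide whether a search query may run on a shard right now.

    The hotter the shard, the larger the fraction of normal-priority queries
    that are shed; critical traffic is only refused past the hard limit.
    """
    cpu = shard_cpu.get(shard, 0.0)
    if cpu >= HARD_LIMIT:
        return priority == "critical"
    if cpu >= SHED_THRESHOLD and priority == "normal":
        # Shed probabilistically in proportion to how far past the threshold
        # the shard is, so load falls off gradually instead of all at once.
        overload = (cpu - SHED_THRESHOLD) / (HARD_LIMIT - SHED_THRESHOLD)
        return random.random() > overload
    return True

if __name__ == "__main__":
    admitted = sum(admit_query("shard-01") for _ in range(100))
    print(f"admitted {admitted}/100 normal-priority queries on the hot shard")
```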
Incident with Issues and Pull Requests Search (resolved Feb 23, 2026)
Feb 23, 21:30 UTC – Resolved – On February 23, 2026, between 21:01 UTC and 21:30 UTC, the Search service experienced degraded performance, resulting in an average of 3.5% of search requests for Issues and Pull Requests being rejected. During this period, updates to Issues and Pull Requests may not have been immediately reflected in search results. During a routine migration, we observed a spike in internal traffic due to a configuration change in our search index. We were alerted to the increase in traffic as well as the increase in error rates and rolled back to the previous stable index. We are working to enable more controlled traffic shifting when promoting a new index, allowing us to detect potential limitations earlier and ensure these operations succeed in a more controlled manner.
Feb 23, 21:24 UTC – Update – Some customers are seeing timeout errors when searching for issues or pull requests. The team is currently investigating a fix.
Feb 23, 21:16 UTC – Investigating – We are investigating reports of degraded performance for Issues and Pull Requests.

Incident with Actions (resolved Feb 23, 2026)
Feb 23, 17:03 UTC – Resolved – On February 23, 2026, between 15:00 UTC and 17:00 UTC, GitHub Actions experienced degraded performance. During that time, 1.8% of Actions workflow runs experienced delayed starts, with an average delay of 15 minutes. The issue was caused by a connection rebalancing event in our internal load balancing layer, which temporarily created uneven traffic distribution across sites and led to request throttling. To prevent recurrence, we are tuning connection rebalancing behavior to spread client reconnections more gradually during load balancer reloads. We are also evaluating improvements to site-level traffic affinity to eliminate the uneven distribution at its source. We have overprovisioned critical paths to prevent any impact if a similar event occurs before those workstreams finish. Finally, we are enhancing our monitoring to detect capacity imbalances proactively.
Feb 23, 16:17 UTC – Investigating – We are investigating reports of degraded performance for Actions.
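Spreading client reconnections more gradually during a load balancer reload, as described above, is commonly achieved with randomized (jittered) reconnect delays so that a reload does not trigger a synchronized reconnection wave. The sketch below shows the standard full-jitter backoff pattern; the delay bounds are illustrative assumptions, not GitHub's actual configuration.

```python
import random

def reconnect_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter exponential backoff: each client picks a uniformly random
    delay up to an exponentially growing cap, so clients disconnected by the
    same reload reconnect at different times instead of all at once."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

if __name__ == "__main__":
    # Ten clients dropped at the same instant still spread their reconnects.
    print([round(reconnect_delay(attempt=3), 2) for _ in range(10)])
```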
Incident with Copilot (resolved Feb 23, 2026)
Feb 23, 16:19 UTC – Resolved – On February 23, 2026, between 14:45 UTC and 16:19 UTC, the Copilot service was degraded for the Claude Haiku 4.5 model. On average, 6% of requests to this model failed due to an issue with an upstream provider. During this period, automated model degradation notifications directed affected users to alternative models. No other models were impacted. The upstream provider identified and resolved the issue on their end. We are working to improve automatic model failover mechanisms to reduce our time to mitigation of issues like this one in the future.
Feb 23, 15:59 UTC – Update – Copilot is operating normally.
Feb 23, 15:59 UTC – Update – The issues with our upstream model provider have been resolved, and Haiku 4.5 is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
Feb 23, 15:13 UTC – Update – Our provider has recovered and we are not seeing errors, but we are awaiting a signal from them that the issue will not regress before we go green.
Feb 23, 14:56 UTC – Update – We are experiencing degraded availability for the Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Feb 23, 14:56 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Extended job start delays for larger hosted runners (resolved Feb 20, 2026)
Feb 20, 20:41 UTC – Resolved – On February 20, 2026, between 17:45 UTC and 20:41 UTC, 4.2% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 18 minutes. Standard, Mac, and self-hosted runners were not impacted. The delays were caused by communication failures between backend services for one deployment of larger runners. Those failures prevented expected automated scaling and provisioning of larger hosted runner capacity within that deployment. This was mitigated when the affected infrastructure was recycled, larger runner pools in the affected deployment successfully scaled up, and queued jobs processed. We are working to improve the time to detect and diagnose this class of failures and to improve the performance of recovery mechanisms for this degraded network state. In addition, we have architectural changes underway that will enable other deployments to pick up work in similar situations, so that deployment-specific infrastructure issues like this one no longer cause customer impact.
Feb 20, 20:36 UTC – Update – The team continues to investigate issues with some larger runner jobs being queued for a long time. We are, however, seeing improvement in queue times. We will continue providing updates on progress towards mitigation.
Feb 20, 20:01 UTC – Update – We are investigating reports of degraded performance for Larger Hosted Runners.
Feb 20, 20:00 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Incident with Copilot GPT-5.1-Codex (resolved Feb 20, 2026)
Feb 20, 11:41 UTC – Resolved – On February 20, 2026, between 07:30 UTC and 11:21 UTC, the Copilot service experienced a degradation of the GPT-5.1-Codex model. During this time period, users encountered a 4.5% error rate when using this model. No other models were impacted. The issue was resolved by a mitigation put in place by the external model provider. GitHub is working with the external model provider to further improve the resiliency of the service to prevent similar incidents in the future.
Feb 20, 11:19 UTC – Update – The issues with our upstream model provider have been resolved, and GPT-5.1-Codex is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains). We will continue monitoring to ensure stability, but mitigation is complete.
Feb 20, 10:36 UTC – Update – We are still experiencing degraded availability for the GPT-5.1-Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Feb 20, 10:02 UTC – Update – We are experiencing degraded availability for the GPT-5.1-Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Feb 20, 10:02 UTC – Investigating – We are investigating reports of degraded performance for Copilot.

Degraded performance in merge queue (resolved Feb 18, 2026)
Feb 18, 19:20 UTC – Resolved – This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 18, 19:18 UTC – Update – We have seen significant recovery in the merge queue and are continuing to monitor for any other degraded services.
Feb 18, 18:27 UTC – Update – We are investigating reports of issues with the merge queue. We will continue to keep users updated on progress towards mitigation.
Feb 18, 18:26 UTC – Update – Pull Requests is experiencing degraded performance. We are continuing to investigate.
Feb 18, 18:25 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Intermittent authentication failures on GitHub (resolved Feb 17, 2026)
Feb 17, 19:06 UTC – Resolved – On February 17, 2026, between 17:07 UTC and 19:06 UTC, some customers experienced intermittent authentication failures affecting GitHub Actions, parts of Git operations, and other authentication-dependent requests. On average, the Actions error rate was approximately 0.6% of affected API requests. The Git operations SSH read error rate was approximately 0.29%, while SSH write and HTTP operations were not impacted. During the incident, a subset of requests failed because token verification lookups intermittently failed, leading to 401 errors and degraded reliability for impacted workflows. The issue was caused by elevated replication lag in the token verification database cluster. In the days leading up to the incident, the token store's write volume grew enough to exceed the cluster's available capacity. Under peak load, older replica hosts were unable to keep up, replica lag increased, and some token lookups became inconsistent, resulting in intermittent authentication failures. We mitigated the incident by adjusting the database replica topology to route reads away from lagging replicas and by bringing additional replica capacity online. Service health improved progressively after the change, with GitHub Actions recovering by ~19:00 UTC and the incident resolved at 19:06 UTC. We are working to prevent recurrence by improving the resilience and scalability of our underlying token verification data stores to better handle continued growth.
Feb 17, 18:55 UTC – Update – We are continuing to monitor the mitigation and continuing to see signs of recovery.
Feb 17, 18:18 UTC – Update – We have rolled out a mitigation, are seeing signs of recovery, and are continuing to monitor.
Feb 17, 17:46 UTC – Update – We have identified a low rate of authentication failures affecting GitHub App server-to-server tokens, GitHub Actions authentication tokens, and Git operations. Some customers may experience intermittent API request failures when using these tokens. We believe we've identified the cause and are working to mitigate impact.
Feb 17, 17:46 UTC – Investigating – We are investigating reports of degraded performance for Actions and Git Operations.
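Routing reads away from lagging replicas, the mitigation described above, corresponds to a standard replica-selection pattern: exclude any replica whose replication lag exceeds what the token workload can tolerate, and keep a fallback so reads never fail entirely. The Python sketch below is illustrative only; the replica names, lag values, and threshold are assumptions rather than GitHub's topology.

```python
# Hypothetical snapshot of replication lag in seconds, as reported by the
# cluster's monitoring (names and values are invented for the example).
replica_lag = {"replica-a": 0.4, "replica-b": 9.8, "replica-c": 0.7}

MAX_TOLERATED_LAG = 2.0  # beyond this, token lookups may see stale data

def readable_replicas(lag_by_replica: dict) -> list:
    """Return the replicas healthy enough to serve token-verification reads.

    If every replica is lagging, fall back to the least-lagged one (a real
    system might instead send those reads to the primary) so reads degrade
    rather than fail outright.
    """
    healthy = [name for name, lag in lag_by_replica.items()
               if lag <= MAX_TOLERATED_LAG]
    if healthy:
        return healthy
    return [min(lag_by_replica, key=lag_by_replica.get)]

if __name__ == "__main__":
    print(readable_replicas(replica_lag))  # ['replica-a', 'replica-c']
```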
Disruption with some GitHub services regarding file upload (resolved Feb 13, 2026)
Feb 13, 22:58 UTC – Resolved – On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service. We mitigated the incident by reverting the code change that introduced the issue. We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.
Feb 13, 22:30 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.
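A browser-side change that violates CORS requirements, as in the root cause above, typically means the modified request no longer qualifies as a "simple" cross-origin request: the browser issues an OPTIONS preflight that the upload origin's CORS policy does not allow, and the request is blocked before it ever reaches the service. The toy check below mirrors that rule in simplified form; the custom header in the example is hypothetical, the lists are abbreviated relative to the Fetch specification, and none of this reflects GitHub's actual upload code.

```python
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFELISTED_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, headers: dict) -> bool:
    """Roughly mirror the browser's rule for when a cross-origin request
    requires an OPTIONS preflight (simplified from the Fetch spec)."""
    if method.upper() not in {"GET", "HEAD", "POST"}:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True
        if name.lower() == "content-type":
            media_type = value.split(";")[0].strip().lower()
            if media_type not in SAFELISTED_CONTENT_TYPES:
                return True
    return False

if __name__ == "__main__":
    # A plain multipart upload is a "simple" request and needs no preflight...
    print(needs_preflight("POST", {"Content-Type": "multipart/form-data"}))
    # ...but adding a custom header (hypothetical) forces a preflight, which
    # fails unless the upload origin explicitly allows that header.
    print(needs_preflight("POST", {"Content-Type": "multipart/form-data",
                                   "X-Upload-Trace": "1"}))
```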
Disruption with some GitHub services (resolved Feb 12, 2026)
Feb 12, 20:34 UTC – Resolved – Between February 11 at 21:30 UTC and February 12 at 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500 ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency. The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.
Feb 12, 19:59 UTC – Update – Next Edit Suggestions availability is recovering. We are continuing to monitor until fully restored.
Feb 12, 19:18 UTC – Update – We are experiencing degraded availability in Australia and Brazil for Copilot completions and suggestions. We are working to resolve the issue.
Feb 12, 18:46 UTC – Update – We are experiencing degraded availability in Australia for Copilot completions and suggestions. We are working to resolve the issue.
Feb 12, 18:36 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Intermittent disruption with Copilot completions and inline suggestions (resolved Feb 12, 2026)
Feb 12, 16:50 UTC – Resolved – Between February 11 at 21:30 UTC and February 12 at 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500 ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency. The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.
Feb 12, 15:33 UTC – Update – We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.
Feb 12, 14:08 UTC – Update – We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.
Feb 12, 14:06 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services (resolved Feb 12, 2026)
Feb 12, 11:12 UTC – Resolved – From February 12, 2026 at 09:16 UTC to 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by deploying a corrupt configuration bundle, resulting in missing data used for network interface connections by the service. We mitigated the incident by applying the correct configuration to each site. We have added checks for corruption in this deployment, and we will add auto-rollback detection for this service to prevent issues like this in the future.
Feb 12, 11:01 UTC – Update – We have resolved the issue and are seeing full recovery.
Feb 12, 10:39 UTC – Update – We are investigating an issue with downloading repository archives that include Git LFS objects.
Feb 12, 10:38 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.

Incident with Codespaces (resolved Feb 12, 2026)
Feb 12, 09:56 UTC – Resolved – On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia and Australia, peaking at a 90% failure rate. The failures were triggered by a bad configuration rollout in a core networking dependency, which led to internal resource provisioning failures. We are working to improve our alerting thresholds to catch issues before they impact customers and to strengthen rollout safeguards to prevent similar incidents.
Feb 12, 09:56 UTC – Update – Recovery looks consistent, with Codespaces creating and resuming successfully across all regions. Thank you for your patience.
Feb 12, 09:42 UTC – Update – Codespaces is experiencing degraded performance. We are continuing to investigate.
Feb 12, 09:39 UTC – Update – We are seeing widespread recovery across all our regions. We will continue to monitor progress and will resolve the incident when we are confident in durable recovery.
Feb 12, 09:04 UTC – Update – We have identified the issue causing Codespace create/resume actions to fail and are applying a fix. This is estimated to take ~2 hours to complete, but impact will begin to reduce sooner than that. We will continue to monitor recovery progress and will report back when more information is available.
Feb 12, 08:32 UTC – Update – We now understand the source of the VM create/resume failures and are working with our partners to mitigate the impact.
Feb 12, 08:02 UTC – Update – We are seeing an increase in Codespaces creation and resume failures across multiple regions, primarily in EMEA. Our team is analyzing the situation and working to mitigate the impact. In the meantime, customers are advised to create Codespaces in the US East and US West regions via the "New with options…" button when creating a Codespace. More updates as we have them.
Feb 12, 07:53 UTC – Investigating – We are investigating reports of degraded availability for Codespaces.

Disruption with some GitHub services (resolved Feb 12, 2026)
Feb 12, 00:59 UTC – Resolved – On February 11, between 16:37 UTC and 00:59 UTC the following day, 4.7% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 37 minutes. Standard hosted and self-hosted runners were not impacted. This incident was caused by capacity degradation in Central US for Larger Hosted Runners. Workloads not pinned to that region were picked up by other regions, but were delayed as those regions became saturated. Workloads configured with private networking in that region were delayed until compute capacity in that region recovered. The issue was mitigated by rebalancing capacity across internal and external workloads and by general increases in capacity in affected regions to speed recovery. In addition to working with our compute partners on the core capacity degradation, we are working to ensure other regions are better able to absorb load with less delay to customer workloads. For pinned workflows using private networking, we are shipping support soon for customers to fail over if private networking is configured in a paired region.
Feb 11, 21:33 UTC – Update – Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted. The issue is mitigated and we are monitoring recovery.
Feb 11, 19:37 UTC – Update – We're continuing to work toward mitigation with our capacity provider, and adding capacity.
Feb 11, 19:00 UTC – Update – Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted. We're working with the capacity provider to mitigate the impact.
Feb 11, 18:58 UTC – Investigating – We are investigating reports of impacted performance for some GitHub services.
Incident with API Requests (resolved Feb 11, 2026)
Feb 11, 17:15 UTC – Resolved – On February 11, 2026, between 13:51 UTC and 17:03 UTC, the GraphQL API experienced degraded performance due to elevated resource utilization. This resulted in incoming client requests waiting longer than normal and, in certain cases, timing out. During the impact window, approximately 0.65% of GraphQL requests experienced these issues, peaking at 1.06%. The increased load was due to an increase in query patterns that drove higher than expected resource utilization of the GraphQL API. We mitigated the incident by scaling out resource capacity and limiting the capacity available to these query patterns. We're improving our telemetry to identify slow usage growth and changes in GraphQL workloads. We've also added capacity safeguards to prevent similar incidents in the future.
Feb 11, 17:13 UTC – Update – We've observed recovery in GraphQL service latency.
Feb 11, 16:54 UTC – Update – We're continuing to remediate the service degradation and scaling out to further mitigate the potential for latency impact.
Feb 11, 15:54 UTC – Update – We've identified a dependency of GraphQL that is in a degraded state and are working on remediating the issue.
Feb 11, 15:27 UTC – Update – We're investigating increased latency for GraphQL traffic.
Feb 11, 15:26 UTC – Investigating – We are investigating reports of degraded performance for API Requests.
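Limiting the capacity available to specific query patterns, the mitigation above, is often implemented as a per-pattern concurrency or cost budget so that one expensive workload cannot monopolize the API's resources. The sketch below shows a per-pattern concurrency cap; the pattern names and limits are hypothetical and this is not GitHub's GraphQL implementation.

```python
import threading
from collections import defaultdict

# Hypothetical caps on concurrent executions per query pattern; the default
# applies to patterns without a specific safeguard configured.
PATTERN_LIMITS = defaultdict(lambda: 100, {"expensive-search-connection": 10})

_lock = threading.Lock()
_in_flight = defaultdict(int)

def try_acquire(pattern: str) -> bool:
    """Admit a request only if its query pattern is under its concurrency
    budget; refused callers can be told to retry after a delay."""
    with _lock:
        if _in_flight[pattern] >= PATTERN_LIMITS[pattern]:
            return False
        _in_flight[pattern] += 1
        return True

def release(pattern: str) -> None:
    """Return the slot once the query finishes (success or failure)."""
    with _lock:
        _in_flight[pattern] -= 1

if __name__ == "__main__":
    admitted = sum(try_acquire("expensive-search-connection") for _ in range(15))
    print(f"admitted {admitted} of 15 concurrent expensive queries")  # 10
```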
Incident with Copilot (resolved Feb 11, 2026)
Feb 11, 15:46 UTC – Resolved – On February 11, 2026, between 14:30 UTC and 15:30 UTC, the Copilot service experienced degraded availability for requests to Claude Haiku 4.5. During this time, on average 10% of requests failed, with 23% of sessions impacted. The issue was caused by an upstream problem affecting multiple external model providers, which impaired our ability to serve requests. The incident was mitigated once one of the providers resolved the issue and we rerouted capacity fully to that provider. We have improved our telemetry for better incident observability and implemented an automated retry mechanism for requests to this model to mitigate similar upstream incidents in the future.
Feb 11, 15:46 UTC – Update – Copilot is operating normally.
Feb 11, 15:46 UTC – Update – The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
Feb 11, 15:27 UTC – Update – We are experiencing degraded availability for the Claude Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
Feb 11, 15:26 UTC – Investigating – We are investigating reports of degraded performance for Copilot.
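The automated retry mechanism mentioned above, combined with rerouting capacity between providers, usually takes the shape of trying an ordered list of providers and surfacing an error only when all of them are exhausted. The following sketch illustrates that shape; the provider names, the UpstreamError type, and the call interface are stand-ins and do not represent the Copilot API.

```python
from typing import Callable, List, Optional, Tuple

class UpstreamError(Exception):
    """Raised by a provider call that fails for a retriable reason."""

def complete_with_failover(prompt: str,
                           providers: List[Tuple[str, Callable[[str], str]]],
                           attempts_per_provider: int = 2) -> str:
    """Try each provider in order, retrying a few times before moving on;
    raise only once every provider has been exhausted."""
    last_error: Optional[Exception] = None
    for name, call in providers:
        for _ in range(attempts_per_provider):
            try:
                return call(prompt)
            except UpstreamError as exc:
                last_error = exc
    raise UpstreamError(f"all providers failed; last error: {last_error}")

if __name__ == "__main__":
    def degraded(prompt: str) -> str:   # stand-in for the degraded provider
        raise UpstreamError("provider A unavailable")

    def healthy(prompt: str) -> str:    # stand-in for the recovered provider
        return f"completion for: {prompt}"

    print(complete_with_failover("hello", [("provider-a", degraded),
                                           ("provider-b", healthy)]))
```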