GitHub Status – Incident History (feed last updated 2025-11-10 02:55 UTC)

Incident with Copilot
- Nov 6, 00:06 UTC – Resolved: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
- Nov 6, 00:06 UTC – Update: We have recovered from our earlier performance issues. Copilot code completions should be functioning normally at this time.
- Nov 5, 23:41 UTC – Update: Copilot Code Completions are partially unavailable. Our engineering team is engaged and investigating.
- Nov 5, 23:41 UTC – Investigating: We are investigating reports of degraded performance for Copilot.

Copilot Code Completions partially unavailable
- Nov 5, 23:26 UTC – Resolved: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
- Nov 5, 23:26 UTC – Update: We have identified and resolved the underlying issues with Code Completions. Customers should see full recovery.
- Nov 5, 22:57 UTC – Update: We are investigating increased error rates affecting Copilot Code Completions. Some users may experience delays or partial unavailability. Our engineering team is monitoring the situation and working to identify the cause.
- Nov 5, 22:56 UTC – Investigating: We are investigating reports of degraded performance for Copilot.

Incident with Packages
- Nov 3, 19:20 UTC – Resolved: On November 3, 2025, between 14:10 UTC and 19:20 UTC, GitHub Packages experienced degraded performance, resulting in failures for 0.5% of NuGet package download requests. The incident resulted from an unexpected change in usage patterns affecting rate limiting infrastructure in the Packages service. We mitigated the issue by scaling up services and refining our rate limiting implementation to ensure more consistent and reliable service for all users. To prevent similar problems, we are enhancing our resilience to shifts in usage patterns, improving capacity planning, and implementing better monitoring to accelerate detection and mitigation in the future.
- Nov 3, 17:27 UTC – Update: We have applied the mitigation and are starting to see signs of recovery. We will continue to monitor the health of the system.
- Nov 3, 17:10 UTC – Update: We are continuing to work on mitigation.
- Nov 3, 15:58 UTC – Update: Mitigation work continues, but we have not yet seen error rates recover. We will continue to provide updates as we have them.
- Nov 3, 15:33 UTC – Update: We are continuing to see high error rates for package downloads. Our team is working urgently on mitigation. Next update in 20 minutes.
- Nov 3, 15:18 UTC – Update: Our investigations are continuing and we are working to mitigate impact. Thank you for your patience as we work on this.
- Nov 3, 14:35 UTC – Update: We are seeing increased failure rates of up to 15% for GitHub Packages downloads, with users experiencing 5xx errors. We are investigating and working towards mitigation, and will continue to provide updates as they are available.
- Nov 3, 14:33 UTC – Investigating: We are investigating reports of degraded performance for Packages.
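Because the Packages failures above were transient 5xx responses affecting roughly 0.5% of download requests, a client-side retry with exponential backoff is usually enough to ride out this kind of incident. The sketch below is illustrative only; the URL is a placeholder, not a specific GitHub Packages endpoint.

```python
import time
import urllib.error
import urllib.request

def download_with_retry(url: str, dest: str, attempts: int = 5) -> None:
    """Download `url` to `dest`, retrying transient 5xx errors with backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                out.write(resp.read())
            return
        except urllib.error.HTTPError as err:
            # Retry only server-side errors; 4xx responses will not improve on retry.
            if err.code < 500 or attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

# Placeholder URL for illustration only.
download_with_retry("https://example.com/path/to/package.1.0.0.nupkg", "package.nupkg")
```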
Incident with using workflow_dispatch for Actions
- Nov 1, 06:14 UTC – Resolved: On November 1, 2025, between 2:30 UTC and 6:14 UTC, Actions workflows could not be triggered manually from the UI. This impacted all customers queueing workflows from the UI for most of the impact window. The issue was caused by a faulty code change in the UI, which was promptly reverted once the impact was identified. Detection was delayed due to an alerting gap for UI breaks in this area when all underlying APIs are still healthy. We are implementing enhanced alerting and additional automated tests to prevent similar regressions and reduce detection time in the future.
- Nov 1, 06:14 UTC – Update: Actions is operating normally.
- Nov 1, 06:14 UTC – Update: We have mitigated the issue for manually dispatching workflows via the UI.
- Nov 1, 05:35 UTC – Update: We have identified the cause of the issue and are working towards a mitigation.
- Nov 1, 05:05 UTC – Update: We are investigating issues with manually dispatching workflows via the GitHub UI for Actions. The Workflow Dispatch API is unaffected.
- Nov 1, 04:43 UTC – Investigating: We are investigating reports of degraded performance for Actions.
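As the 05:05 UTC update notes, the Workflow Dispatch REST API kept working while the UI button was broken, so queueing a run through the API was a viable workaround. A minimal sketch using Python's standard library is below; the owner, repository, workflow file name, and token are placeholders you would substitute for your own.

```python
import json
import os
import urllib.request

# Placeholders: adjust for your repository and workflow file.
OWNER, REPO, WORKFLOW = "my-org", "my-repo", "ci.yml"
TOKEN = os.environ["GITHUB_TOKEN"]  # token with permission to run workflows

url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches"
payload = json.dumps({"ref": "main", "inputs": {}}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    method="POST",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
)

with urllib.request.urlopen(req) as resp:
    # The API responds with 204 No Content when the run is queued successfully.
    print("dispatch accepted:", resp.status)
```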
Disruption with some GitHub services
- Oct 30, 23:00 UTC – Resolved: On October 30th we shipped a change that broke 3 links in the "Solutions" dropdown of the marketing navigation seen on https://github.com/home. We noticed the broken links internally and declared an incident so our users would know no other functionality was impacted. We reverted the change and are evaluating our testing and rollout processes to prevent future incidents like these.
- Oct 30, 22:54 UTC – Update: Links on GitHub's landing page https://github.com/home are not working. Primary user workflows (PRs, Issues, Repos) are not impacted.
- Oct 30, 22:47 UTC – Update: Broken links in the dotcom main navigation.
- Oct 30, 22:47 UTC – Investigating: We are currently investigating this issue.

Experiencing connection issues across Actions, Codespaces, and possibly other services
- Oct 29, 23:15 UTC – Resolved: On October 29th, 2025, between 14:07 UTC and 23:15 UTC, multiple GitHub services were degraded due to a broad outage in one of our service providers:
    - Users of Codespaces experienced failures connecting to new and existing Codespaces through VS Code Desktop or Web. On average, the Codespace connection error rate was 90%, and it peaked at 100% across all regions throughout the incident period.
    - GitHub Actions larger hosted runners experienced degraded performance, with 0.5% of overall workflow runs and 9.8% of larger hosted runner jobs failing or not starting within 5 minutes. These recovered by 20:40 UTC.
    - The GitHub Enterprise Importer service was degraded, with some users experiencing migration failures during git push operations and most users experiencing delayed migration processing.
    - Initiation of new trials for GitHub Enterprise Cloud with Data Residency was also delayed during this time.
    - Copilot Metrics API consumers could not access the downloadable report link during this time; approximately 100 requests during the incident would have failed the download. Recovery began around 20:25 UTC.
  We were able to apply a number of mitigations to reduce impact over the course of the incident, but we did not achieve 100% recovery until our service provider's incident was resolved. We are working to reduce critical path dependencies on the service provider and to gracefully degrade experiences where possible, so that we are more resilient to future dependency outages.
- Oct 29, 23:15 UTC – Update: Codespaces is operating normally.
- Oct 29, 22:06 UTC – Update: Codespaces continues to recover.
- Oct 29, 21:02 UTC – Update: Actions is operating normally.
- Oct 29, 21:01 UTC – Update: Actions has fully recovered. Codespaces continues to recover. Regions across Europe and Asia are healthy, so US users may want to choose one of those regions following these instructions: https://docs.github.com/en/codespaces/setting-your-user-preferences/setting-your-default-region-for-github-codespaces. We expect full recovery across the board before long.
- Oct 29, 20:56 UTC – Update: Codespaces is experiencing degraded performance. We are continuing to investigate.
- Oct 29, 20:34 UTC – Update: We are beginning to see small signs of recovery, but connections are still inconsistent across services and regions. We expect to see gradual recovery from here.
- Oct 29, 19:21 UTC – Update: We continue to see improvement in Actions larger runner jobs. Larger runners customers may still experience longer than normal queue times, but we expect this to improve rapidly across most runners. ARM64 runners, GPU runners, and some runners with private networking may still be impacted. Usage of Codespaces via VS Code (but not via SSH) is still degraded. GitHub and Azure teams continue to collaborate towards full resolution.
- Oct 29, 19:05 UTC – Update: Codespaces is experiencing degraded availability. We are continuing to investigate.
- Oct 29, 18:58 UTC – Update: Codespaces is experiencing degraded performance. We are continuing to investigate.
- Oct 29, 18:12 UTC – Update: Impact to most larger runner jobs should now be mitigated. ARM64 runners are still impacted. GitHub and Azure teams continue to collaborate towards full resolution.
- Oct 29, 17:40 UTC – Update: Codespaces is experiencing degraded availability. We are continuing to investigate.
- Oct 29, 17:26 UTC – Update: Additional impact from this incident: we are currently investigating an issue causing the Copilot Metrics API report URLs to fail for 28-day and 1-day enterprise and user reports. We are collaborating with Azure teams to restore access as soon as possible.
- Oct 29, 17:11 UTC – Update: We are seeing ongoing connection failures across Codespaces and Actions, including Enterprise Migrations. Linux ARM64 standard hosted runners are failing to start, but Ubuntu latest and Windows latest are not affected at this time. SSH connections to Codespaces may be successful, but connections via VS Code are consistently failing. GitHub and Azure teams are coordinating to mitigate impact and resolve the root issues.
- Oct 29, 16:31 UTC – Update: Actions impact is primarily limited to larger runner jobs at this time. This also impacts enterprise migrations.
- Oct 29, 16:19 UTC – Update: Codespaces is experiencing degraded performance. We are continuing to investigate.
- Oct 29, 16:17 UTC – Investigating: We are investigating reports of degraded performance for Actions.

Disruption with Copilot Bing search tool
- Oct 29, 21:49 UTC – Resolved: A cloud resource used by the Copilot bing-search tool was deleted as part of a resource cleanup operation. Once this was discovered, the resource was recreated. Going forward, more effective monitoring will be put in place to catch this issue earlier.
- Oct 29, 21:34 UTC – Investigating: We are currently investigating this issue.
Inconsistent results when using the Haiku 4.5 model
- Oct 28, 17:11 UTC – Resolved: From October 28th at 16:03 UTC until 17:11 UTC, the Copilot service experienced degradation due to an infrastructure issue which impacted the Claude Haiku 4.5 model, leading to a spike in errors affecting 1% of users. No other models were impacted. The incident was caused by an outage with an upstream provider. We are working to improve redundancy for future occurrences.
- Oct 28, 17:11 UTC – Update: The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
- Oct 28, 16:42 UTC – Update: Usage of the Haiku 4.5 model with Copilot experiences is currently degraded. We are investigating and working to remediate. Other models should be unaffected.
- Oct 28, 16:39 UTC – Investigating: We are currently investigating this issue.

Disruption with viewing some repository pages from large organizations
- Oct 27, 17:51 UTC – Resolved: Between October 23, 2025 19:27:29 UTC and October 27, 2025 17:42:42 UTC, users experienced timeouts when viewing repository landing pages. We observed the timeouts for approximately 5,000 users across fewer than 1,000 repositories, including forked repositories. The impact was limited to logged-in users accessing repositories in organizations with more than 200,000 members; forks of repositories from affected large organizations were also impacted. Git operations were functional throughout this period. This was caused by feature-flagged changes impacting organization membership. The changes caused unintended timeouts for organization membership count evaluations, which led to repository landing pages not loading. The flag was turned off and a fix addressing the timeouts was deployed, including additional optimizations to better support organizations of this size. We are reviewing related areas and will continue to monitor for similar performance regressions.
- Oct 27, 17:51 UTC – Update: We have deployed the fix and resolved the issue.
- Oct 27, 17:16 UTC – Update: The fix for this issue has been validated and is being deployed. This fix will also resolve related timeouts on the Access settings page of the impacted repositories and forks.
- Oct 27, 16:35 UTC – Update: Code views in repositories in or forked from very large organizations (200k+ members) are not loading in the desktop web UI, showing a unicorn error page instead. A fix has been identified and is being tested. Access via git and access to specific pages within the repository, such as pull requests, are not impacted, nor is access via the mobile web UI.
- Oct 27, 16:25 UTC – Investigating: We are currently investigating this issue.
githubstatus.com unavailable on Oct 24, 2025, 02:55 to 03:13 UTC
- Oct 24, 14:17 UTC – Resolved: From 02:55 to 03:15 UTC on October 24, githubstatus.com was unreachable due to a service interruption with our status page provider. During this time, GitHub systems were not experiencing any outages or disruptions. We are working with our vendor to understand how to improve the availability of githubstatus.com.

git operations over SSH seeing increased latency on github.com
- Oct 24, 10:10 UTC – Resolved: From Oct 22, 2025 15:00 UTC to Oct 24, 2025 14:30 UTC, git operations via SSH saw periods of increased latency and failed requests, with failure rates ranging from 1.5% to a single spike of 15%. Git operations over HTTP were not affected. This was due to resource exhaustion on our backend SSH servers. We mitigated the incident by increasing the available resources for SSH connections. We are improving the observability and dynamic scalability of our backend to prevent issues like this in the future.
- Oct 24, 10:07 UTC – Update: We have found the source of the slowness and mitigated it. We are watching recovery before statusing green, but no user impact is currently observed.
- Oct 24, 09:31 UTC – Investigating: We are currently investigating this issue.
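Since the SSH slowdown above did not affect git over HTTP, temporarily pointing a clone at the HTTPS remote was a reasonable workaround during the impact window. A minimal sketch, run from inside a working copy, with placeholder owner and repository names:

```python
import subprocess

# Placeholders for illustration; substitute your own repository.
HTTPS_URL = "https://github.com/OWNER/REPO.git"
SSH_URL = "git@github.com:OWNER/REPO.git"

# Temporarily fetch and push over HTTPS while SSH is degraded ...
subprocess.run(["git", "remote", "set-url", "origin", HTTPS_URL], check=True)
subprocess.run(["git", "pull"], check=True)

# ... and switch back once SSH has recovered.
subprocess.run(["git", "remote", "set-url", "origin", SSH_URL], check=True)
```

Note that HTTPS access to private repositories authenticates with a personal access token rather than an SSH key.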
Incident with Actions – Larger hosted runners
- Oct 23, 20:25 UTC – Resolved: On October 23, 2025, between 15:54 UTC and 19:20 UTC, GitHub Actions larger hosted runners experienced degraded performance, with 1.4% of overall workflow runs and 29% of larger hosted runner jobs failing to start or timing out within 5 minutes. The full set of contributing factors is still under investigation, but the customer impact was due to database performance degradation: routine database changes produced a load profile that exposed a bug in the underlying database platform used for larger runners. Impact was mitigated through a combination of scaling up the database and reducing load. We are working with partners to resolve the underlying bug and have paused similar database changes until it is resolved.
- Oct 23, 20:25 UTC – Update: Actions is operating normally.
- Oct 23, 19:33 UTC – Update: Actions larger runner job start delays and failure rates are recovering. Many jobs should be starting as normal. We're continuing to monitor and confirm full recovery.
- Oct 23, 18:17 UTC – Update: We continue to investigate problems with Actions larger runners. We're continuing to see signs of improvement, but customers are still experiencing jobs queueing or failing due to timeout.
- Oct 23, 17:36 UTC – Update: We continue to investigate problems with Actions larger runners. We're seeing limited signs of recovery, but customers are still experiencing jobs queueing or failing due to timeout.
- Oct 23, 16:59 UTC – Update: We continue to investigate problems with Actions larger runners. Some customers are experiencing jobs queueing or failing due to timeout.
- Oct 23, 16:36 UTC – Update: We're investigating problems with larger hosted runners in Actions. Our team is working to identify the cause. We'll post another update by 17:03 UTC.
- Oct 23, 16:33 UTC – Investigating: We are investigating reports of degraded performance for Actions.

Incident with API Requests
- Oct 22, 15:53 UTC – Resolved: On October 22, 2025, between 14:06 UTC and 15:17 UTC, less than 0.5% of web users experienced intermittent slow page loads on GitHub.com. During this time, API requests showed increased latency, with up to 2% timing out. The issue was caused by elevated load on one of our databases from a poorly performing query, which impacted performance for a subset of requests. We identified the source of the load and optimized the query to restore normal performance. We have added monitors for early detection of query performance problems, and we continue to monitor the system closely to ensure ongoing stability.
- Oct 22, 15:53 UTC – Update: API Requests is operating normally.
- Oct 22, 15:17 UTC – Update: We have identified a possible source of the issue. There is currently no user impact, but we are continuing to investigate and will not resolve this incident until we have more confidence in our mitigations and investigation results.
- Oct 22, 14:37 UTC – Update: Some users may see slow or timed-out requests, or "not found" errors, when browsing repos. We have identified slowness in our platform and are investigating.
- Oct 22, 14:29 UTC – Investigating: We are investigating reports of degraded performance for API Requests.
Disruption with some GitHub services
- Oct 21, 17:39 UTC – Resolved: On October 21, 2025, between 13:30 and 17:30 UTC, GitHub Enterprise Cloud Organization SAML Single Sign-On experienced degraded performance. Customers may have been unable to successfully authenticate into their GitHub organizations during this period. Organization SAML recorded a maximum of 0.4% of SSO requests failing during this timeframe. This incident stemmed from a failure in a read replica database partition responsible for storing license usage information for GitHub Enterprise Cloud organizations. The partition failure meant that users from affected organizations, whose license usage information was stored on this partition, were unable to access SSO during the aforementioned window; a successful SSO requires an available license for the user accessing a GitHub Enterprise Cloud organization backed by SSO. The failing partition was subsequently taken out of service, thereby mitigating the issue. Remedial actions are currently underway to ensure that a read replica failure does not compromise overall service availability.
- Oct 21, 17:18 UTC – Update: Mitigation continues; the impact is limited to Enterprise Cloud customers who have configured SAML at the organization level.
- Oct 21, 17:11 UTC – Update: We are continuing to work on mitigation of this issue.
- Oct 21, 16:33 UTC – Update: We've identified the issue affecting some users with SAML/OIDC authentication and are actively working on mitigation. Some users may not be able to authenticate during this time.
- Oct 21, 16:03 UTC – Update: We're seeing issues for a small number of customers with SAML/OIDC authentication for GitHub.com users. We are investigating.
- Oct 21, 16:00 UTC – Investigating: We are currently investigating this issue.
Incident with Actions
- Oct 21, 12:28 UTC – Resolved: On October 21, 2025, between 07:55 UTC and 12:20 UTC, GitHub Actions experienced degraded performance. During this time, 2.11% of workflow runs failed to start within 5 minutes, with an average delay of 8.2 minutes. The root cause was increased latency on a node in one of our Redis clusters, triggered by resource contention after a patching event became stuck. Recovery began once the patching process was unstuck and normal connectivity to the Redis cluster was restored at 11:45 UTC, but it took until 12:20 UTC to clear the backlog of queued work. We are implementing safeguards to prevent this failure mode and enhancing our monitoring to detect and address problems like this more quickly in the future.
- Oct 21, 11:59 UTC – Update: We were able to apply a mitigation and we are now seeing recovery.
- Oct 21, 11:37 UTC – Update: We are seeing about 10% of Actions runs taking longer than 5 minutes to start. We're still investigating and will provide an update by 12:00 UTC.
- Oct 21, 09:59 UTC – Update: We are still seeing delays in starting some Actions runs and are currently investigating. We will provide updates as we have more information.
- Oct 21, 09:25 UTC – Update: We are seeing delays in starting some Actions runs and are currently investigating.
- Oct 21, 09:12 UTC – Investigating: We are investigating reports of degraded performance for Actions.

Disruption with Grok Code Fast 1 in Copilot
- Oct 20, 16:40 UTC – Resolved: From October 20th at 14:10 UTC until 16:40 UTC, the Copilot service experienced degradation due to an infrastructure issue which impacted the Grok Code Fast 1 model, leading to a spike in errors affecting 30% of users. No other models were impacted. The incident was caused by an outage with an upstream provider.
- Oct 20, 16:39 UTC – Update: The issues with our upstream model provider continue to improve, and Grok Code Fast 1 is once again stable in Copilot Chat, VS Code, and other Copilot products.
- Oct 20, 16:07 UTC – Update: We are continuing to work with our provider on resolving the incident with Grok Code Fast 1, which is impacting 6% of users. We've been informed they are implementing fixes, but users can expect some requests to intermittently fail until all issues are resolved.
- Oct 20, 14:47 UTC – Update: We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
- Oct 20, 14:46 UTC – Investigating: We are investigating reports of degraded performance for Copilot.
Codespaces creation failing
- Oct 20, 11:01 UTC – Resolved: On October 20, 2025, between 08:05 UTC and 10:50 UTC, the Codespaces service was degraded, with users experiencing failures creating new codespaces and resuming existing ones. On average, the error rate for codespace creation was 39.5%, peaking at 71% of requests to the service during the incident window. Resume operations averaged a 23.4% error rate with a peak of 46%. This was due to a cascading failure triggered by an outage in a third-party dependency required to build devcontainer images. The impact was mitigated when the third-party dependency recovered. We are investigating opportunities to take this dependency off the critical path of our container build process, and working to improve our monitoring and alerting systems to reduce our time to detection of issues like this one in the future.
- Oct 20, 10:56 UTC – Update: We are now seeing sustained recovery. As we continue to make our final checks, we hope to resolve this incident in the next 10 minutes.
- Oct 20, 10:15 UTC – Update: We are seeing early signs of recovery for Codespaces. The team will continue to monitor and keep this incident active as a line of communication until we are confident of full recovery.
- Oct 20, 09:34 UTC – Update: We are continuing to monitor Codespaces error rates and will report further as we have more information.
- Oct 20, 09:01 UTC – Update: We are seeing increased error rates with Codespaces generally. This is due to a third-party provider experiencing problems. This impacts both creation of new codespaces and resumption of existing ones. We continue to monitor and will report with more details as we have them.
- Oct 20, 08:56 UTC – Investigating: We are investigating reports of degraded availability for Codespaces.

Disruption with push notifications
- Oct 17, 14:12 UTC – Resolved: On October 17th, 2025, between 12:51 UTC and 14:01 UTC, mobile push notifications failed to be delivered, for a total duration of 70 minutes. This affected github.com and GitHub Enterprise Cloud in all regions. The disruption was related to an erroneous configuration change to cloud resources used for mobile push notification delivery. We are reviewing our procedures and management of these cloud resources to prevent such an incident in the future.
- Oct 17, 14:01 UTC – Update: We're investigating an issue with mobile push notifications. All notification types are affected, but notifications remain accessible in the app's inbox. For 2FA authentication, please open the GitHub mobile app directly to complete login.
- Oct 17, 13:11 UTC – Investigating: We are currently investigating this issue.
Disruption with some GitHub services
- Oct 14, 18:57 UTC – Resolved: On October 14th, 2025, between 18:26 UTC and 18:57 UTC, a subset of unauthenticated requests to the commit endpoint for certain repositories received 503 errors. During the event, the average error rate was 3%, peaking at 3.5% of total requests. This event was triggered by a recent configuration change and some traffic pattern shifts on the service. We were alerted to the issue immediately and changed the configuration to mitigate the problem. We are working on automatic mitigation solutions and better traffic handling to prevent issues like this in the future.
- Oct 14, 18:26 UTC – Investigating: We are currently investigating this issue.

Disruption with GPT-5-mini in Copilot
- Oct 14, 16:00 UTC – Resolved: On Oct 14th, 2025, between 13:34 UTC and 16:00 UTC, the Copilot service was degraded for the GPT-5 mini model. On average, 18% of requests to GPT-5 mini failed due to an issue with our upstream provider. We notified the upstream provider of the problem as soon as it was detected and mitigated the issue by failing over to other providers. The upstream provider has since resolved the issue. We are working to improve our failover logic to mitigate similar upstream failures more quickly in the future.
- Oct 14, 16:00 UTC – Update: GPT-5-mini is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
- Oct 14, 15:42 UTC – Update: We are continuing to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue. Other models continue to be available and working as expected.
- Oct 14, 14:50 UTC – Update: We continue to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue. Other models continue to be available and working as expected.
- Oct 14, 14:07 UTC – Update: We are experiencing degraded availability for the GPT-5-mini model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
- Oct 14, 14:05 UTC – Investigating: We are investigating reports of degraded performance for Copilot.
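The GPT-5 mini write-up above says impact was reduced by failing over to other providers. GitHub's routing layer is not public, but the general pattern is small: try the primary upstream and, on failure, fall back to the next one. A minimal, generic sketch with hypothetical provider callables, not GitHub's actual implementation:

```python
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    """Raised when every configured upstream provider has failed."""

def complete_with_failover(prompt: str,
                           providers: Sequence[Callable[[str], str]]) -> str:
    """Try each upstream provider in order and return the first success."""
    errors: list[Exception] = []
    for call in providers:
        try:
            return call(prompt)
        except Exception as err:  # in practice: timeouts, 5xx responses, rate limits
            errors.append(err)
    raise AllProvidersFailed(errors)

# Hypothetical provider adapters for illustration only.
def primary(prompt: str) -> str:
    raise TimeoutError("upstream degraded")

def secondary(prompt: str) -> str:
    return f"completion for: {prompt}"

print(complete_with_failover("hello", [primary, secondary]))
```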
Incident with Webhooks
- Oct 9, 16:40 UTC – Resolved: On October 9th, 2025, between 14:35 UTC and 15:21 UTC, a network device in maintenance mode that was undergoing repairs was brought back into production before repairs were completed. Network traffic traversing this device experienced significant packet loss. Authenticated users of the github.com UI experienced increased latency during the first 5 minutes of the incident. API users experienced error rates of up to 7.3%, which then stabilized at about 0.05% until mitigated. The Actions service saw 24% of runs delayed by an average of 13 minutes. Large File Storage (LFS) requests experienced a minimally increased error rate, with 0.038% of requests erroring. To prevent similar issues, we are enhancing the validation process for device repairs of this category.
- Oct 9, 16:39 UTC – Update: All services have fully recovered.
- Oct 9, 16:27 UTC – Update: Actions has fully recovered, but Notifications is still experiencing delays. We will continue to update as the system is fully restored to normal operation.
- Oct 9, 16:24 UTC – Update: Actions is operating normally.
- Oct 9, 16:08 UTC – Update: Pages is operating normally.
- Oct 9, 16:04 UTC – Update: Git Operations is operating normally.
- Oct 9, 16:02 UTC – Update: Actions and Notifications are still experiencing delays as we process the backlog. We will continue to update as the system is fully restored to normal operation.
- Oct 9, 15:51 UTC – Update: Pull Requests is operating normally.
- Oct 9, 15:48 UTC – Update: Actions is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 15:44 UTC – Update: We are seeing full recovery in many of our systems, but delays are still expected for Actions. We will continue to update as the system is fully restored to normal operation.
- Oct 9, 15:43 UTC – Update: Webhooks is operating normally.
- Oct 9, 15:40 UTC – Update: Webhooks is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 15:39 UTC – Update: Issues is operating normally.
- Oct 9, 15:38 UTC – Update: Pull Requests is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 15:26 UTC – Update: API Requests is operating normally.
- Oct 9, 15:25 UTC – Update: We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
- Oct 9, 15:20 UTC – Update: Pull Requests is experiencing degraded availability. We are continuing to investigate.
- Oct 9, 15:20 UTC – Update: Git Operations is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 15:17 UTC – Update: Actions is experiencing degraded availability. We are continuing to investigate.
- Oct 9, 15:11 UTC – Update: We are investigating widespread reports of delays and increased latency in various services. We will continue to keep users updated on progress toward mitigation.
- Oct 9, 15:09 UTC – Update: Issues is experiencing degraded availability. We are continuing to investigate.
- Oct 9, 15:09 UTC – Update: API Requests is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 15:09 UTC – Update: Pages is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 14:50 UTC – Update: Actions is experiencing degraded performance. We are continuing to investigate.
- Oct 9, 14:45 UTC – Investigating: We are investigating reports of degraded availability for Webhooks.

Multiple GitHub API endpoints are experiencing errors
- Oct 9, 13:56 UTC – Resolved: Between 13:39 UTC and 13:42 UTC on Oct 9, 2025, around 2.3% of REST API calls and 0.4% of web traffic were impacted due to the partial rollout of a new feature that had more impact on one of our primary databases than anticipated. When the feature was partially rolled out, it performed an excessive number of writes per request, which caused excessive latency for writes from other API and web endpoints and resulted in 5xx errors to customers. The issue was identified by our automatic alerting and reverted by turning down the percentage of traffic to the new feature, which led to recovery of the data cluster and services. We are working to improve the way we roll out new features like this and to move the specific writes from this incident to a storage solution better suited to this type of activity. We have also optimized this particular feature so that its rollout will not impact other areas of the site in the future, and we are investigating how we can identify issues like this even more quickly.
- Oct 9, 13:54 UTC – Update: A feature was partially rolled out that had high impact on one of our primary databases, but we were able to roll it back. All services have recovered, but we will monitor before statusing green.
- Oct 9, 13:52 UTC – Investigating: We are currently investigating this issue.
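The mitigation above, turning down the percentage of traffic sent to the new feature, is a standard percentage-based rollout. GitHub's internal feature-flag system is not public; the sketch below only illustrates the usual mechanism of hashing a stable identifier into a bucket and comparing it against the rollout percentage, with hypothetical flag and actor names.

```python
import hashlib

def feature_enabled(flag: str, actor_id: str, rollout_percent: float) -> bool:
    """Deterministically bucket an actor into [0, 100) and gate on the rollout %."""
    digest = hashlib.sha256(f"{flag}:{actor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # stable value in 0.00 .. 99.99
    return bucket < rollout_percent

# Turning the percentage down to 0 disables the feature for everyone without a deploy.
print(feature_enabled("new_write_path", "user-42", rollout_percent=25.0))
print(feature_enabled("new_write_path", "user-42", rollout_percent=0.0))
```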
Disruption with some GitHub services
- Oct 8, 00:05 UTC – Resolved: On October 7, 2025, between 7:48 PM UTC and October 8, 12:05 AM UTC (approximately 4 hours and 17 minutes), the audit log service was degraded, creating a backlog and delaying availability of new audit log events. The issue originated in a third-party dependency. We mitigated the incident by working with the vendor to identify and resolve the issue. Write operations recovered first, followed by the processing of the accumulated backlog of audit log events. We are working to improve our monitoring and alerting for audit log ingestion delays and to strengthen our incident response procedures, reducing our time to detection and mitigation of issues like this one in the future.
- Oct 7, 22:45 UTC – Update: We are seeing recovery of audit log ingestion and continue to monitor recovery.
- Oct 7, 21:51 UTC – Update: We are seeing recovery of audit log ingestion and continue to monitor recovery.
- Oct 7, 21:17 UTC – Update: We continue to apply mitigations and monitor for recovery.
- Oct 7, 20:33 UTC – Update: We have identified an issue causing delayed audit log event ingestion and are working on a mitigation.
- Oct 7, 19:48 UTC – Update: Ingestion of new audit log events is delayed.
- Oct 7, 19:48 UTC – Investigating: We are currently investigating this issue.

Incident with Copilot
- Oct 3, 03:47 UTC – Resolved: On October 3rd, between approximately 10:00 PM and 11:30 PM Eastern, the Copilot service experienced degradation due to an issue with our upstream provider. Users encountered elevated error rates when using the following Claude models: Claude Sonnet 3.7, Claude Opus 4, Claude Opus 4.1, Claude Sonnet 4, and Claude Sonnet 4.5. No other models were impacted. The issue was mitigated by temporarily disabling affected endpoints while our provider resolved the upstream issue. GitHub is working with our provider to further improve the resiliency of the service and prevent similar incidents in the future.
- Oct 3, 03:47 UTC – Update: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
- Oct 3, 03:04 UTC – Update: The upstream provider is implementing a fix. Services are recovering. We are monitoring the situation.
- Oct 3, 02:42 UTC – Update: We're seeing a degraded experience across Anthropic models. We're working with our partners to restore service.
- Oct 3, 02:41 UTC – Investigating: We are investigating reports of degraded performance for Copilot.
Degraded Gemini 2.5 Pro experience in Copilot
- Oct 2, 22:33 UTC – Resolved: Between October 1st, 2025 at 1 AM UTC and October 2nd, 2025 at 10:33 PM UTC, the Copilot service experienced a degradation of the Gemini 2.5 Pro model due to an issue with our upstream provider. Before 15:53 UTC on October 1st, users experienced higher error rates with large context requests while using Gemini 2.5 Pro. After 15:53 UTC and until 10:33 PM UTC on October 2nd, requests were restricted to smaller context windows when using Gemini 2.5 Pro. No other models were impacted. The issue was resolved by a mitigation put in place by our provider. GitHub is collaborating with our provider to enhance communication and improve the ability to reproduce issues, with the aim of reducing resolution time.
- Oct 2, 22:26 UTC – Update: We have confirmed that the fix for the lower token input limit for Gemini 2.5 Pro is in place and are currently testing our previous higher limit to verify that customers will experience no further impact.
- Oct 2, 17:13 UTC – Update: The underlying issue for the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.
- Oct 2, 02:52 UTC – Update: We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.
- Oct 1, 18:16 UTC – Update: We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.
- Oct 1, 17:37 UTC – Update: We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors behind this sudden decrease in input limits.
- Oct 1, 16:49 UTC – Update: We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. The maximum input length of Gemini 2.5 Pro prompts has been decreased. Long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
- Oct 1, 16:43 UTC – Investigating: We are investigating reports of degraded performance for Copilot.
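While the reduced Gemini 2.5 Pro input limit was in place, the client-side symptom was a bad-request error once a prompt exceeded the allowed size. A crude but practical guard is to estimate the prompt's token count and drop the oldest context until it fits. The 4-characters-per-token heuristic and the limit below are assumptions for illustration, not the provider's actual tokenizer or quota.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English-like text.
    return max(1, len(text) // 4)

def trim_context(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the estimated total fits the input limit."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest context first
    return kept

history = ["old note " * 400, "recent question about the build failure"]
print(trim_context(history, max_tokens=200))
```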