So much of what you interact with on GitHub depends on search—obviously the search bars and filtering experiences like the GitHub Issues page, but it’s also at the core of the releases page, the projects page, the counts for issues and pull requests, and more. Given that search is such a core part of the GitHub platform, we’ve spent the last year making it even more durable. That means less time spent managing GitHub Enterprise Server, and more time working on what your customers care most about.
In recent years, GitHub Enterprise Server administrators had to be especially careful with search indexes, the special database tables optimized for searching. If they didn’t follow maintenance or upgrade steps in exactly the right order, search indexes could become damaged and need repair, or they might get locked and cause problems during upgrades. Quick context if you’re not familiar with High Availability (HA) setups: they’re designed to keep GitHub Enterprise Server running smoothly even if part of the system fails. You have a primary node that handles all the writes and traffic, and replica nodes that stay in sync and can take over if needed.

Much of this difficulty comes from how previous versions of Elasticsearch, our search database of choice, were integrated. HA GitHub Enterprise Server installations use a leader/follower pattern. The leader (primary server) receives all the writes, updates, and traffic. Followers (replicas) are designed to be read-only. This pattern is deeply ingrained into all of the operations of GitHub Enterprise Server.
This is where Elasticsearch started running into issues. Since it couldn’t mirror GitHub Enterprise Server’s primary/replica split on its own, GitHub engineering had to create a single Elasticsearch cluster spanning the primary and replica nodes. This made replicating data straightforward and gave some performance benefits, since each node could handle search requests locally.

Unfortunately, the problems of clustering across servers eventually began to outweigh the benefits. For example, at any point Elasticsearch could move a primary shard (responsible for receiving/validating writes) to a replica. If that replica was then taken down for maintenance, GitHub Enterprise Server could end up in a locked state. The replica would wait for Elasticsearch to be healthy before starting up, but Elasticsearch couldn’t become healthy until the replica rejoined.
For a number of GitHub Enterprise Server releases, engineers at GitHub tried to make this mode more stable. We implemented checks to ensure Elasticsearch was in a healthy state, as well as other processes to try to correct drifting states. We went as far as attempting to build a “search mirroring” system that would allow us to move away from the clustered mode. But database replication is incredibly challenging, and these efforts struggled to deliver the consistency we needed.
What changed?
After years of work, we’re now able to use Elasticsearch’s Cross Cluster Replication (CCR) feature to support HA GitHub Enterprise Server.
“But David,” you say, “That’s replication between clusters. How does that help here?”
I’m so glad you asked. With this mode, we’re moving to several single-node Elasticsearch clusters: each GitHub Enterprise Server instance now operates as its own independent, single-node Elasticsearch cluster.

CCR lets us share the index data between nodes in a way that is carefully controlled and natively supported by Elasticsearch. It copies data once it’s been persisted to the Lucene segments (Elasticsearch’s underlying data store). This ensures we’re replicating data that has been durably persisted within the Elasticsearch cluster.
In other words, now that Elasticsearch supports a leader/follower pattern, GitHub Enterprise Server administrators will no longer be left in a state where critical data winds up on read-only nodes.
Under the hood
Elasticsearch has an auto-follow API, but it only applies to indexes created after the policy exists. GitHub Enterprise Server HA installations already have a long-lived set of indexes, so we need a bootstrap step that attaches followers to existing indexes, then enables auto-follow for anything created in the future.
Here’s a sample of what that workflow looks like:
```
function bootstrap_ccr(primary, replica):
    # Fetch the current indexes on each
    primary_indexes = list_indexes(primary)
    replica_indexes = list_indexes(replica)

    # Filter out the system indexes
    managed = filter(primary_indexes, is_managed_ghe_index)

    # For indexes without follower patterns we need to
    # initialize that contract
    for index in managed:
        if index not in replica_indexes:
            ensure_follower_index(replica, leader=primary, index=index)
        else:
            ensure_following(replica, leader=primary, index=index)

    # Finally we will set up auto-follow patterns
    # so new indexes are automatically followed
    ensure_auto_follow_policy(
        replica,
        leader=primary,
        patterns=[managed_index_patterns],
        exclude=[system_index_patterns],
    )
```
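For reference, helpers like `ensure_follower_index` and `ensure_auto_follow_policy` map onto Elasticsearch’s CCR follow and auto-follow APIs (`PUT /<index>/_ccr/follow` and `PUT /_ccr/auto_follow/<policy>`). Here’s a minimal sketch of the request bodies those calls take; the cluster names and index patterns are illustrative, not GitHub’s actual values:

```python
def follow_request(leader_cluster, index):
    """Body for PUT /<index>/_ccr/follow on the follower cluster."""
    return {
        "remote_cluster": leader_cluster,  # name of the remote (leader) cluster connection
        "leader_index": index,             # index on the leader to follow
    }

def auto_follow_request(leader_cluster, patterns, exclude):
    """Body for PUT /_ccr/auto_follow/<policy> on the follower cluster."""
    return {
        "remote_cluster": leader_cluster,
        "leader_index_patterns": patterns,           # e.g. ["issues-*"]
        "leader_index_exclusion_patterns": exclude,  # e.g. [".system-*"]
        "follow_index_pattern": "{{leader_index}}",  # keep the same name on the follower
    }

body = auto_follow_request("primary", ["issues-*"], [".system-*"])
```

Once an auto-follow policy like this exists, any new leader index matching the patterns gets a follower automatically, which is exactly what the bootstrap step above sets up for future indexes.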
This is just one of the new workflows we’ve created to enable CCR in GitHub Enterprise Server. We’ve needed to engineer custom workflows for failover, index deletion, and upgrades. Elasticsearch only handles the document replication, and we’re responsible for the rest of the index’s lifecycle.
How to get started with CCR mode
To get started using the new CCR mode, reach out to support@github.com and let them know you’d like to use the new HA mode for GitHub Enterprise Server. They’ll set up your organization so that you can download the required license.
Once you’ve downloaded your new license, you’ll need to set `ghe-config app.elasticsearch.ccr true`. With that finished, administrators can run a `config-apply` or upgrade the cluster to 3.19.1, which is the first release to support this new architecture.
When your GitHub Enterprise Server restarts, Elasticsearch will migrate your installation to use the new replication method. This will consolidate all the data onto the primary nodes, break clustering across nodes, and restart replication using CCR. This update may take some time depending on the size of your GitHub Enterprise Server instance.
While the new HA method is optional for now, we’ll be making it our default over the next two years. We want to ensure there’s ample time for GitHub Enterprise administrators to get their feedback in, so now is the time to try it out.
We’re excited for you to start using the new HA mode for a more seamless experience managing GitHub Enterprise Server.
Want to get the most out of search on your High Availability GitHub Enterprise Server deployment? Reach out to support to get set up with our new search architecture!
The post How we rebuilt the search architecture for high availability in GitHub Enterprise Server appeared first on The GitHub Blog.
The way we build software is changing fast. AI is no longer a “someday” tool. It’s reshaping how we plan, write, review, and ship code right now. As products evolve faster than ever, developers are expected to keep up just as quickly. That’s why GitHub Copilot Dev Days exists: for developers to level up together on how they can use AI-assisted coding today.
GitHub Copilot Dev Days is a global series of hands-on, in-person, community-led events designed to help developers explore real-world AI-assisted coding with GitHub Copilot. Come for the knowledge, stay for the great food, good vibes, and plenty of fun along the way. Find an event near you and register today.
Who is GitHub Copilot Dev Days for?
Anyone and everyone who is looking to improve their development workflow and learn something new! Events are run by and for everyone from professional developers to students, and sessions cover a range of levels and programming backgrounds.
If it’s your first time trying out AI-assisted development, this event will introduce you to the tools and best practices to succeed from day one. If you’re more advanced, we’re excited to show you the latest tips and tricks to ensure you’re fully up to date.
What to expect from a GitHub Copilot Dev Day
Each event will feature live demos, practical sessions, and interactive workshops with high-quality training content. We will focus on real workflows you can use right away, whether you’re already using Copilot daily or just getting started. Your hosts are development experts: GitHub Stars, Microsoft MVPs, GitHub Campus Experts, Microsoft Student Ambassadors, GitHub and Microsoft employees, to name a few.
We will have training materials covering the GitHub Copilot CLI, Cloud Agent, GitHub Copilot in VS Code, Visual Studio, and other editors, and more! Different events will focus on different topics, so be sure to review the registration page beforehand.
The specific event details will vary, as each community event organizer might tweak the event to fit the interests of their local developer community. Here is a sample agenda:
- Introductory session: 30-45 minutes on GitHub Copilot.
- Local community session: 30-45 minutes by a local developer or community leader on relevant topics.
- Hands-on workshop: 1 hour of coding and practical exercises.
All events are an opportunity to connect with your local developer community, learn something new, and enjoy some snacks and swag!
Events begin in March
Events are now live in cities around the world starting in March. Spots are limited and dates are approaching—now’s the time to grab a seat.
Want to bring GitHub Copilot Dev Days to your user group? Fill out our form.
The post Join or host a GitHub Copilot Dev Days event near you appeared first on The GitHub Blog.
Welcome back to GitHub for Beginners, our series designed to help you navigate GitHub like a pro and get the most out of it! We’re bringing you a double dose by sharing these episodes in video format and adding them to our blog, so you can consume the material in whichever form works better for you!
We’re now entering our third season! In the previous seasons, we focused on a general introduction to all things Git and then spent an entire season talking about Copilot. This season, we’re going back to the basics and exploring the core features of GitHub. Our goal is to help you level up—or dust off—your GitHub skills.
In this episode, we’re going to start with two of GitHub’s most powerful collaboration tools: GitHub Issues and Projects. By the end of this post, you’ll know how to create an issue, how to sync your issues to a GitHub Project board, and how to use GitHub Projects to track your work.
Let’s get into it! Or, watch the full video above!
Why issues and projects matter
GitHub Issues and Projects are essential tools for any team that wants to stay organized and collaborate efficiently. Issues help you track tasks, bugs, and ideas, all in a clear shared space—ensuring nothing slips through the cracks. Projects look at the bigger picture, letting you visualize, prioritize, and plan your work, turning individual issues into an actionable workflow that everyone can follow.
By combining issues and projects, you can coordinate work, communicate progress, and ensure that everyone’s aligned toward the same goal.
Creating your first GitHub Issue
GitHub Issues are a simple way to capture anything that needs attention in your project, whether that’s a new idea, a bug, or a task. Think of them as the building blocks of what you want to add to or update in your project. They help teams work collaboratively and stay organized, while making sure that every item is addressed.
So how do you create an issue? Here’s a sample repository where you can try this out for yourself by following these steps.
- Navigate to gh.io/gfb-issues in your browser.
- At the top of the page, click the Issues tab.
- In the top-right of the window, click the green New issue button.
- A window will appear with a couple of boxes. First, give your issue a clear title by entering it in the box under “Add a title.”
- Provide a description for your issue in the “Add a description” box. Your description should include all of the relevant information necessary to set expectations. When someone attempts to resolve this issue, what is the behavior they should add or fix?
- On the right-hand side of the window, you have several options that you can configure.
- Assignee: Assign someone to work on this issue directly. You can even assign yourself to show what you’re working on.
- Labels: Classify your issue into one of several predefined categories or potentially add a new one.
- Type: Indicate whether your issue is about something like a bug or a task.
- Projects: Add your issue to projects, which we’ll be covering later in this post.
- Milestone: Assign your issue to a specific milestone.
- Click the green Create button below the issue description.
Congratulations! You have just created your first issue! Once an issue is created, anyone on the team can jump in and comment. All they need to do is enter some text in the comment field and then click the Comment button. You can even link issues together by typing # and then the number of the issue in the comment field. GitHub will automatically provide a clickable link to the numbered issue. This is super handy for tracking related work.
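If you’re curious what the web flow above looks like programmatically, the same issue can be created through GitHub’s REST API via `POST /repos/{owner}/{repo}/issues`. This is a rough sketch that only builds the request body without sending it; the repository and field values are illustrative:

```python
def new_issue_payload(title, body, labels=(), assignees=()):
    """Build the JSON body for POST /repos/{owner}/{repo}/issues."""
    payload = {"title": title, "body": body}
    if labels:
        payload["labels"] = list(labels)        # e.g. ["bug"]
    if assignees:
        payload["assignees"] = list(assignees)  # GitHub usernames
    return payload

payload = new_issue_payload(
    title="Fix broken link on the releases page",
    body="The 'latest release' link 404s. Expected: it resolves to the newest tag.",
    labels=["bug"],
    assignees=["octocat"],
)
```

Sending this payload with an authenticated request creates the issue just as the New issue form does.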
And when the work is finished, you can close the issue, letting the entire team know that it’s been addressed.
Creating your first GitHub Project
Projects give you a way to group GitHub Issues together in a visual dashboard that’s perfect for planning, organizing, and tracking work. It’s a tool for breaking big goals down into manageable tasks that you can track all in one place.
Now let’s walk through creating a project board. Here’s a sample repository where you can try this out, or you can do it in your own repository. If you still have the repository open from the walkthrough above, you can skip the first step.
- Navigate to gh.io/gfb-issues in your browser.
- At the top of the page, click the Projects tab.
- In the top-right of the window, click the green New project button.
- A window will pop up providing several templates for you to start from. Select the Kanban template.
- Enter a title in the “Project name” field.
- Unselect the checkbox for Bulk import items.
- At the bottom of the window, select Create project.
GitHub will now create the project for you and take you to the project view. You’ll see the name of your project at the top of the window. In addition, GitHub automatically created columns for you, but you can customize them however you want.
Notice that underneath your project’s name, there are several tabs. These correspond to different views of the project board. Feel free to click on them to see what the different views look like in your project.
To manage your project board, click the three dots in the top-right corner and select Settings. This opens a new page where you can:
- Manage who has access to your project board.
- Create custom fields.
- Update any current fields.
- Change your project name.
- Provide or change the project description.
- Include a README.
- Create a copy of your project board.
- Change the visibility of your project.
To customize charts
At the top of the window, select the Insights button. On the following page, you’ll be able to view, create, and customize charts. You can change the layout of any of your charts by clicking the Configure button and choosing options from the dropdown menus.
To check the status of workflows
Going back to the main window, you can see a Workflows button next to the Insights button at the top. Select the Workflows button. On the left side of the window, you’ll see several built-in workflows that you can use to update the status of items based on certain events. For example, you can automatically set the status to Todo when an item is added to your project, close issues when their status in your project is changed, or set the status to Done when an issue is closed.
To add a status update
Going back to the main project view once again, select the Add status update button at the very top. This opens a window where you can add a status report of your project’s health and progress.
Connecting issues and projects
Combining issues and projects together is how you’re able to get a complete view of your work. Think of issues as individual tasks, and projects as the dashboard that organizes those tasks. When you link issues to a project board, you can visualize where everything stands at a glance.
Go ahead and open your project board if it isn’t open already. At the bottom of the window, select Add item. You can either create a new issue right here, or search for existing ones. Click the plus icon and select Add item from repository. Select the issue or issues that you want to add to this project board. You can use the checkbox at the top of the list to select all items at once if you want to add them all.
Once you’ve selected the issues you want to add, press the Add selected items button at the bottom of the window. Notice how this adds all of the selected issues to your project board as cards. You can select the cards and move them between status columns as work progresses. You can also click on an issue to see its full details.
And here’s something cool. The issue and your project board are now synchronized. When you change the status of the issue in one, it will automatically be reflected in the other. You can move multiple issues at once, organizing them into columns that match your workflow, and get that satisfying feeling when you finally move them into the “Done” column.
End-to-end flow
Let’s now simulate a real workflow that you might use in your team project by going through the following steps.
- Navigate to your project board if you are not already there.
- Click Add item at the bottom of your first column, indicating that you are adding a new issue to your workflow.
- Click the plus icon and select Create new issue from the context menu.
- Enter a title and description for the issue, describing a new feature you want to add.
- Select Create to create the issue and automatically add it to the project board.
- Underneath the description, select Labels and select enhancement from the list of options.
- Click the issue in the project board to open it.
- In the right-hand column, click the gear next to “Assignees” to assign this issue to someone on your team.
- Add comments to the issue to signify making progress or providing updates.
- Now let’s say that some work was done to address this issue. Open a pull request, and in the description, mention that it closes the relevant issue by entering `Closes #` followed by the number connected with the issue. This will automatically connect the pull request to the issue and provide a link to it for ease of reference.
- Merge the pull request, and it will automatically close the issue.
This workflow keeps everyone aligned without the need for constant status meetings. Everything you need to know is there, both in the issue and on the project board!
What’s next?
Now you know how to use GitHub Issues and Projects to organize your work with GitHub. Start simple with a few issues, build out your project board as you go, and you’ll have a system that scales with your team.
If you want to learn even more about GitHub Issues and GitHub Projects, make sure to check out our documentation.
Happy coding!
The post GitHub for Beginners: Getting started with GitHub Issues and Projects appeared first on The GitHub Blog.
Most developers already do real work in the terminal.
We initialize projects there, run tests there, debug CI failures there, and make fast, mechanical changes there before anything is ready for review. GitHub Copilot CLI fits into that reality by helping you move from intent to reviewable diffs directly in your terminal—and then carry that work into your editor or pull request.
This blog walks through a practical workflow for using Copilot CLI to create and evolve an application, based on a new GitHub Skills exercise. The Skills exercise provides a guided, hands-on walkthrough; this post focuses on why each step works and when to use it in real projects.
What Copilot CLI is (and is not)
Copilot CLI is a GitHub-aware coding agent in your terminal. You can describe what you want in natural language, use /plan to outline the work before touching code, and then review concrete commands or diffs before anything runs. Copilot may reason internally, but it only executes commands or applies changes after you explicitly approve them.
In practice, Copilot CLI helps you:
- Explore a problem based on your intent
- Propose structured plans using `/plan` (or you can hit Shift + Tab to enter planning mode), or suggest concrete commands and diffs you can review
- Generate or modify files
- Explain failures where they occur
What it does not do:
- Silently run commands or apply changes without your approval
- Replace careful design work
- Eliminate the need for review
You stay in control of what runs, what changes, and what ships.
Step 1: Start with intent, not scaffolding
Instead of starting by choosing a framework or copying a template, start by stating what you want to build.
From an empty directory, run:
```
copilot
> Create a small web service with a single JSON endpoint and basic tests
```
If you want to generate a proposal in a single prompt instead of entering interactive mode, you can also run:
```
copilot -p "Create a small web service with a single JSON endpoint and basic tests"
```
In the Skills exercise, this pattern is used repeatedly: describe intent first, then decide which suggested commands you actually want to run.
At this stage, Copilot CLI is exploring the problem space. It may:
- Suggest a stack
- Outline files
- Propose setup commands
Nothing runs automatically. You inspect everything before deciding what to execute. This makes the CLI a good place to experiment before committing to a design.
Step 2: Scaffold only what you’re ready to own
Once you see a direction you’re comfortable with, ask Copilot CLI to help scaffold:
```
> Scaffold this as a minimal Node.js project with a test runner and README
```
This is where Copilot CLI is most immediately useful. It can:
- Create directories and config,
- Wire basic project structure,
- Generate boilerplate you would otherwise type or copy by hand.
Copilot CLI does not “own” the project structure. It suggests scaffolding based on common conventions, which you should treat as a starting point, not a prescription.
The important constraint is that you’re always responsible for the result. Treat the output like code from a teammate: review it, edit it, or discard it.
Step 3: Iterate at the point of failure
Run your tests directly inside Copilot CLI:
```
Run all my tests and make sure they pass
```
When something fails, ask Copilot about that exact failure in the same session:
```
> Why are these tests failing?
```
If you want a concrete proposal instead of an explanation, try:
```
> Fix this test failure and show the diff
```
This pattern—run (`!command`), inspect, ask, review the diff—keeps the agent grounded in real output instead of abstract prompts.
💡Pro tip: In practice, explain is useful when you want understanding, while suggest is better when you want a concrete proposal you can review. Learn more about slash commands in Copilot CLI in our guide.
Step 4: Make mechanical or repo-wide changes
Copilot CLI is also well suited to changes that are easy to describe but tedious to execute:
```
> Rename all instances of X to Y across the repository and update tests
```
Because these changes are mechanical and scoped, they’re easy to review and easy to roll back. The CLI gives you a concrete diff instead of a wall of generated text.
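To make “mechanical and scoped” concrete: a repo-wide rename is essentially a word-boundary-aware find-and-replace, which is exactly why the resulting diff is easy to review and roll back. Here’s a minimal sketch of the idea (not what Copilot CLI runs internally; the symbol names are illustrative):

```python
import re

def rename_symbol(text, old, new):
    """Replace whole-word occurrences of `old`, leaving longer identifiers alone."""
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

line = "fetch_user(user_id) warms fetch_user_cache first"
renamed = rename_symbol(line, "fetch_user", "load_user")
# fetch_user_cache is untouched because of the word boundary
```

The word boundary is what keeps the change scoped: related identifiers like `fetch_user_cache` survive, so the diff contains only the rename you asked for.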
Step 5: Move into your editor when you need to start shaping your code
Eventually, speed matters less than precision.
This is the natural handoff point to your editor or IDE, so it can:
- Reason about edge cases
- Refine APIs
- Make design decisions
Copilot works there too, but the key point is why you switch environments. The CLI helps you quickly get to something real. The IDE is where you can shape your code into exactly what you want.
A good rule of thumb:
- CLI: use `/plan`, generate a `/diff`, and move quickly with low ceremony
- IDE: use `/IDE` when you need to refine logic and make decisions you’ll defend in review
- GitHub: commit, open a pull request with the command `/delegate`, and collaborate asynchronously
Step 6: Ship on GitHub
Once the changes look good, commit and open a pull request which you can do through the Copilot CLI in natural language:
```
Add and commit all files with applicable descriptive messages, then push the changes.
Create a pull request and add Copilot as a reviewer.
```
Now the work becomes durable:
- Reviewable by teammates
- Testable in CI
- Ready for async iteration
This is where Copilot’s value compounds: it’s part of a flow that ends with shipping, not just a single surface. The Skills exercise intentionally ends here, because this is where that value becomes durable: in commits, pull requests, and review, not just suggestions.
One workflow, three moments
A helpful mental model for Copilot looks like this:
- CLI: prove value quickly with low ceremony
- IDE: shape and refine your code
- GitHub: review, collaborate, and ship
Copilot CLI is powerful precisely because it fits into this system instead of trying to replace it.
Take this with you
Copilot CLI is most useful when you treat it like a tool for momentum, not a replacement for judgment.
Used well, it helps you move from intent to concrete changes faster: exploring ideas, scaffolding projects, diagnosing failures, and handling mechanical work directly in the terminal. When precision matters, you move into your editor. When the work is ready to share, it lands on GitHub as a pull request—reviewable, testable, and shippable.
That flow matters more than any single command.
If you take one thing away from this guide, it’s this: Copilot works best when it fits naturally into how developers already build software. Start in the CLI to get unstuck or move quickly, slow down in the IDE to make decisions you can stand behind, and rely on GitHub to make the work durable.
Get started with GitHub Copilot CLI or take the Skills course >
The post From idea to pull request: A practical guide to building with GitHub Copilot CLI appeared first on The GitHub Blog.
You open an issue before lunch. By the time you’re back, there’s a pull request waiting.
That’s what GitHub Copilot coding agent is built for. It works in the background, fixing bugs, adding tests, cleaning up debt, and comes back with a pull request when it’s done. While you’re writing code in your editor with Copilot in real time, the coding agent is handling the work you’ve delegated.
A few recent updates make that handoff more useful. Here’s what shipped and how to start using it.
Visual learner? Watch the video above! ☝️
Choose the right model for each task
The Agents panel now includes a model picker.
Before, every background task ran on a single default model. You couldn’t pick a more robust model for harder work or prioritize speed on routine tasks.
Now you can. Use a faster model for straightforward work like adding unit tests. Upgrade your model for a gnarly refactor or integration tests with real edge cases. If you’d rather not think about it, leave it on auto.
To get started:
- Open the Agents panel (top-right in GitHub), select your repo, and pick a model.
- Write a clear prompt and kick off the task.
- Leave the model on auto if you’d rather let GitHub choose.
Model selection is available for Copilot Pro and Pro+ users now, with support for Business and Enterprise coming soon.
Learn more about model selection with Copilot coding agent. 👉
Pull requests that arrive in better shape
The painful part of reviewing agent output has always been the cleanup. You open the diff and there it is: logic that technically works, but nobody would write it that way.
Copilot coding agent now reviews its own changes using Copilot code review before it opens the pull request. It gets feedback, iterates, and improves the patch. By the time you’re tagged for review, someone already went through it.
In one session, the agent caught that its own string concatenation was overly complex and fixed it before the pull request landed. That kind of thing used to be your problem.
To get started:
- Assign an issue to Copilot or create a task from the Agents panel.
- Click into the task to view the logs.
- See the moments where the agent ran Copilot code review and applied feedback.
- Review the pull request when prompted. Copilot requests your review only after it has iterated.
Learn more about Copilot code review + Copilot coding agent. 👉
Security checks that run while the agent works
Just like with human-generated code, AI-generated code can introduce real risks: vulnerable patterns, secrets accidentally committed, dependencies with known CVEs. The difference is it does it faster. And you really don’t want to find that in review.
Copilot coding agent now runs code scanning, secret scanning, and dependency vulnerability checks directly inside its workflow. If a dependency has a known issue, or something looks like a committed API key, it gets flagged before the pull request opens.
Code scanning is normally part of GitHub Advanced Security. With Copilot coding agent, you get it for free.
To get started:
- Run any task through the Agents panel.
- Check the session logs as it runs. You’ll see scanning entries as the agent works.
- Review the pull request. It’s already been through the security filter.
Learn more about security scanning in Copilot coding agent. 👉
Custom agents that follow your team’s process
A short prompt leaves a lot to judgment. And that judgment isn’t always consistent with how your team actually works.
Custom agents let you codify it. Create a file under `.github/agents/` and define a specific approach. A performance optimizer agent, for example, can be wired to benchmark first, make the change, then measure the difference before opening a pull request.
In a recent GitHub Checkout demo, that’s exactly what happened. The agent benchmarked a lookup, made a targeted fix, and came back with a 99% improvement on that one function. Small scope, real data, no guessing.
You can share custom agents across your org or enterprise too, so the same process applies everywhere teams are using the coding agent.
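The exact file format is covered in GitHub’s custom agents documentation; as a rough, hypothetical sketch, a performance-optimizer agent definition might look something like this (the file name, frontmatter fields, and instructions are all illustrative, not a prescribed schema):

```markdown
---
name: perf-optimizer
description: Makes scoped performance fixes backed by before/after benchmarks.
---

Benchmark the affected code path before changing anything.
Make the smallest change that improves the measured hot spot.
Re-run the benchmark and include before/after numbers in the pull request.
```

The point is that the process lives in the repo, so every task routed through this agent follows the same benchmark-first discipline.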
To get started:
- Create an agent file under `.github/agents/` in your repo.
- Select your custom agent from the options.
- Write a prompt scoped to what that agent does.
Learn more about creating custom agents. 👉
Move between cloud and local without losing context
Sometimes you start something in the cloud and want to finish it locally. Sometimes you’re deep in your terminal and want to hand something off without losing your flow. Either way, switching contexts used to mean starting the conversation over.
Now it doesn’t. Pull a cloud session into your terminal and you get the branch, the logs, and the full context. Or press & in the CLI to push work back to the cloud and keep going on your end.
To get started:
- Start a task with Copilot coding agent and wait for the session to appear.
- Click “Continue in Copilot CLI” and copy the command.
- Paste it in your terminal to load the session locally with branch, logs, and context intact.
- Press the ampersand symbol (`&`) in the CLI to delegate work back to the cloud and keep going locally.
Learn more about Copilot coding agent + CLI handoff. 👉
What this adds up to
Copilot coding agent has come a long way. Model selection, self-review, security scanning, custom agents, CLI handoff—and that’s just what shipped recently. The team is actively working on private mode, planning before coding, and using the agent for things that don’t even need a pull request, like summarizing issues or generating reports. There’s a lot more coming. Stay tuned.
Share feedback on what ships next in GitHub Community discussions.
Get started with GitHub Copilot coding agent >
The post What’s new with GitHub Copilot coding agent appeared first on The GitHub Blog.
]]>The post Multi-agent workflows often fail. Here’s how to engineer ones that don’t. appeared first on The GitHub Blog.
]]>If you’ve built a multi-agent workflow, you’ve probably seen it fail in a way that’s hard to explain.
The system completes, and agents take actions. But somewhere along the way, something subtle goes wrong. You might see an agent close an issue that another agent just opened, or ship a change that fails a downstream check it didn’t know existed.
That’s because the moment agents begin handling related tasks—triaging issues, proposing changes, running checks, and opening pull requests—they start making implicit assumptions about state, ordering, and validation. Without providing explicit instructions, data formats, and interfaces, things won’t go the way you planned.
Through our work on agentic experiences at GitHub across GitHub Copilot, internal automations, and emerging multi-agent orchestration patterns, we’ve seen multi-agent systems behave much less like chat interfaces and much more like distributed systems.
This post is for engineers building multi-agent systems. We’ll walk through the most common reasons they fail and the engineering patterns that make them more reliable.
1. Natural language is messy. Typed schemas make it reliable.
Multi-agent workflows often fail early because agents exchange messy language or inconsistent JSON. Field names change, data types don’t match, formatting shifts, and nothing enforces consistency.
Just like establishing contracts early in development helps teams collaborate without stepping on each other, typed interfaces and strict schemas add structure at every boundary. Agents pass machine-checkable data, invalid messages fail fast, and downstream steps don’t have to guess what a payload means.
Most teams start by defining the data shape they expect agents to return:
type UserProfile = {
id: number;
email: string;
plan: "free" | "pro" | "enterprise";
};
This changes debugging from “inspect logs and guess” to “this payload violated schema X.” Treat schema violations like contract failures: retry, repair, or escalate before bad state propagates.
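As a minimal sketch of enforcing that shape at an agent boundary (hand-rolled here for illustration; in practice a schema library such as Zod gives you this plus richer error reporting):

```typescript
type UserProfile = {
  id: number;
  email: string;
  plan: "free" | "pro" | "enterprise";
};

// Validate an unknown payload before it crosses an agent boundary.
// Invalid messages fail fast instead of propagating bad state downstream.
function parseUserProfile(payload: unknown): UserProfile {
  const p = payload as { [key: string]: unknown } | null;
  const plans = ["free", "pro", "enterprise"];
  if (
    p === null ||
    typeof p !== "object" ||
    typeof p.id !== "number" ||
    typeof p.email !== "string" ||
    !plans.includes(p.plan as string)
  ) {
    throw new Error("payload violated UserProfile schema: " + JSON.stringify(payload));
  }
  return p as unknown as UserProfile;
}

// A drifted field name ("mail" instead of "email") is rejected at the boundary:
try {
  parseUserProfile({ id: 1, mail: "a@b.com", plan: "pro" });
} catch (e) {
  console.log((e as Error).message);
}
```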
The bottom line: Typed schemas are table stakes in multi-agent workflows. Without them, nothing else works. See how GitHub Models enable structured, repeatable AI workflows in real projects. 👉
2. Vague intent breaks agents. Action schemas make it clear.
Even with typed data, multi-agent workflows still fail because LLMs don’t follow implied intent, only explicit instructions.
“Analyze this issue and help the team take action” sounds clear. But different agents may close, assign, escalate, or do nothing—each reasonable, none automatable.
Action schemas fix this by defining the exact set of allowed actions and their structure. Not every step needs structure, but the outcome must always resolve to a small, explicit set of actions.
Here’s what an action schema might look like:
const ActionSchema = z.discriminatedUnion("type", [
  z.object({ type: z.literal("request-more-info"), missing: z.array(z.string()) }),
  z.object({ type: z.literal("assign"), assignee: z.string() }),
  z.object({ type: z.literal("close-as-duplicate"), duplicateOf: z.number() }),
  z.object({ type: z.literal("no-action") })
]);
With this in place, agents must return exactly one valid action. Anything else fails validation and is retried or escalated.
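A sketch of what a consumer of such validated actions might look like (the handler strings are illustrative). Because the action set is a closed discriminated union, the exhaustive switch means a new action type won't compile until every consumer handles it:

```typescript
// The same closed set of actions, as a plain TypeScript union.
type Action =
  | { type: "request-more-info"; missing: string[] }
  | { type: "assign"; assignee: string }
  | { type: "close-as-duplicate"; duplicateOf: number }
  | { type: "no-action" };

// Dispatch on a validated action. Every branch is known and automatable;
// there is no "reasonable but unexpected" behavior left to the model.
function applyAction(action: Action): string {
  switch (action.type) {
    case "request-more-info":
      return "asked reporter for: " + action.missing.join(", ");
    case "assign":
      return "assigned to " + action.assignee;
    case "close-as-duplicate":
      return "closed as duplicate of #" + action.duplicateOf;
    case "no-action":
      return "no action taken";
  }
}

console.log(applyAction({ type: "assign", assignee: "octocat" })); // assigned to octocat
```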
The bottom line: Most agent failures are action failures. For reducing ambiguity even earlier in the workflow—at the instruction level—this guide to writing effective custom instructions is helpful. 👉
3. Loose interfaces create errors. MCP adds the structure agents need.
Typed schemas, constrained actions, and structured reasoning only work if they’re consistently enforced. Without enforcement, they’re conventions, not guarantees.
Model Context Protocol (MCP) is the enforcement layer that turns these patterns into contracts.
MCP defines explicit input and output schemas for every tool and resource, validating calls before execution.
{
"name": "create_issue",
"input_schema": { ... },
"output_schema": { ... }
}
With MCP, agents can’t invent fields, omit required inputs, or drift across interfaces. Validation happens before execution, which prevents bad state from ever reaching production systems.
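To make the idea concrete, here is an illustrative sketch of that pre-execution check (this is not the MCP SDK API; the `ToolDef` shape and function names are assumptions for illustration):

```typescript
// A simplified tool definition: just the required input names.
type ToolDef = { name: string; requiredInputs: string[] };

const createIssueTool: ToolDef = { name: "create_issue", requiredInputs: ["title", "body"] };

// Validate a proposed tool call before execution: reject omitted required
// inputs and invented extra fields, so bad state never reaches the tool.
function validateCall(tool: ToolDef, input: Record<string, unknown>): string[] {
  const missing = tool.requiredInputs.filter((k) => !(k in input));
  const extra = Object.keys(input).filter((k) => !tool.requiredInputs.includes(k));
  return [
    ...missing.map((k) => "missing required input: " + k),
    ...extra.map((k) => "unknown input: " + k),
  ];
}

console.log(validateCall(createIssueTool, { title: "Bug report" }));
// → ["missing required input: body"]
```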
The bottom line: Typed schemas define structure; action schemas define intent. MCP enforces both. Learn more about how MCP works and why it matters. 👉
Moving forward together
Multi-agent systems work when structure is explicit. When you add typed schemas, constrained actions, and structured interfaces enforced by MCP, agents start behaving like reliable system components.
The shift is simple but powerful: treat agents like code, not chat interfaces.
Learn how MCP enables structured, deterministic agent-tool interactions. 👉
The post Multi-agent workflows often fail. Here’s how to engineer ones that don’t. appeared first on The GitHub Blog.
]]>The post How AI is reshaping developer choice (and Octoverse data proves it) appeared first on The GitHub Blog.
]]>You know that feeling when a sensory trigger instantly pulls you back to a moment in your life? For me, it’s Icy Hot. One whiff and I’m back to 5 a.m. formation time in the army. My shoulders tense. My body remembers. It’s not logical. It’s just how memory works. We build strong associations between experiences and cues around them. Those patterns get encoded and guide our behavior long after the moment passes.
That same pattern is happening across the software ecosystem as AI becomes a default part of how we build. For example, we form associations between convenience and specific technologies. Those loops influence what developers reach for, what they choose to learn, and ultimately, which technologies gain momentum.
Octoverse 2025 data illustrates this in real time. And it’s not subtle.
In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever. That’s the headline. But the deeper story is what it signals: AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.

The convenience loop is how memory becomes behavior
When a task or process goes smoothly, your brain remembers. Convenience captures attention. Reduced friction becomes a preference—and preferences at scale can shift ecosystems.
Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means.
When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead. The language adoption data shows this behavioral shift:
- TypeScript grew 66% year-over-year
- JavaScript grew 24%
- Shell scripting usage in AI-generated projects jumped 206%
That last one matters. We didn’t suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.
This is what Octoverse is really showing us: developer choice is shifting toward technologies that work best with the tools we’re already using.
The technical reason behind the shift
There are concrete, technical reasons AI performs better with strongly typed languages.
Strongly typed languages give AI much clearer constraints. In JavaScript, a variable could be anything. In TypeScript, declaring x: string immediately eliminates all non-string operations. That constraint matters. Constraints help AI generate more reliable, contextually correct code. And developers respond to that reliability.
That effect compounds when you look at AI model integration across GitHub. Over 1.1 million public repositories now use LLM SDKs. This is mainstream adoption, not fringe experimentation. And it’s concentrating around the languages and frameworks that work best with AI.

Moving fast without breaking your architecture
AI tools are amplifying developer productivity in ways we haven’t seen before. The question is how to use them strategically. The teams getting the best results aren’t fighting the convenience loop. They’re designing their workflows to harness it while maintaining the architectural standards that matter.
For developers and teams
Establish patterns before you generate. AI is fantastic at following established patterns, but struggles to invent them cleanly. If you define your first few endpoints or components with strong structure, Copilot will follow those patterns. Good foundations scale. Weak ones get amplified.
Use type systems as guardrails, not crutches. TypeScript reduces errors, but passing type checks isn’t the same as expressing correct business logic. Use types to bound the space of valid code, not as your primary correctness signal.
Test AI-generated code harder, not less. There’s a temptation to trust AI output because it “looks right” and passes initial checks. Resist that. Don’t skip testing.
For engineering leaders
Recognize the velocity jump and prepare for its costs. AI-assisted development often produces a 20–30 percent increase in throughput. That’s a win. But higher throughput means architectural drift can accumulate faster without the right guardrails.
Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see.
Track what AI is generating, not just how much. The Copilot usage metrics dashboard (now in public preview for Enterprise) lets you see beyond acceptance rates. You can track daily and weekly active users, agent adoption percentages, lines of code added and deleted, and language and model usage patterns across your organization. The dashboard answers a critical question: how well are teams using AI?
Use these metrics to identify patterns. If you’re seeing high agent adoption but code quality issues in certain teams, that’s a signal those teams need better prompt engineering training or stricter review standards. If specific languages or models correlate with higher defect rates, that’s data you can act on. The API provides user-level granularity for deeper analysis, so you can build custom dashboards that track the metrics that matter most to your organization.
Invest in architectural review capacity. As developers become more productive, senior engineering time becomes more valuable, not less. Someone must ensure the system remains coherent as more code lands faster.
Make architectural decisions explicit and accessible. AI learns from context. ADRs, READMEs, comments, and well-structured repos all help AI generate code aligned with your design principles.
What the Octoverse 2025 findings mean for you
The technology choices you make today are shaped by forces you may not notice: convenience, habit, AI-assisted flow, and how much friction each stack introduces.
💡 Pro tip: Look at the last three technology decisions you made. Language for a new project, framework for a feature, tool for your workflow. How much did AI tooling support factor into those choices? If the answer is “not much,” I’d bet it factored in more than you realized.
AI isn’t just changing how fast we code. It’s reshaping the ecosystem around which tools work best with which languages. Once those patterns set in, reversing them becomes difficult.
If you’re choosing technologies without considering AI compatibility, you’re setting yourself up for future friction. If you’re building languages or frameworks, AI support can’t be an afterthought.
Here’s a challenge
Next time you start a project, notice which technologies feel “natural” to reach for. Notice when AI suggestions feel effortless and when they don’t. Those moments of friction and flow are encoding your future preferences right now.
Are you choosing your tools consciously, or are your tools choosing themselves through the path of least resistance?
We’re all forming our digital “Icy Hot” moments. The trick is being aware of them.
Looking to stay one step ahead? Read the latest Octoverse report and try the Copilot usage metrics dashboard.
The post How AI is reshaping developer choice (and Octoverse data proves it) appeared first on The GitHub Blog.
]]>The post What to expect for open source in 2026 appeared first on The GitHub Blog.
]]>Over the years (decades), open source has grown and changed along with software development, evolving as the open source community becomes more global.
But with any growth comes pain points. In order for open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.
To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.
Growth that’s global in scope
In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany.
What does this mean? It’s clear that open source is becoming more global than it was before. It also means that oftentimes, the majority of developers live outside the regions where the projects they’re working on originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now starting to become a reality for a greater number of projects.
Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.
One of the best ways to do this is through explicit communication maintained in areas like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support this community. Projects that don’t include these guidelines will have trouble scaling as the number of contributors increases across the globe. Those that do provide them will be more resilient, sustainable, and will provide an easier path to onboard new contributors.
The double-edged sword of AI
AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.
However, it has also created a lot of noise, or what is called “AI slop”: a large quantity of low-quality—and oftentimes inaccurate—contributions that don’t add value to a project, or that would require so much work to incorporate that it would be faster to implement the solution yourself.
This makes it harder than ever to maintain projects and make sure they continue moving forward in the intended direction. Auto-generated issues and pull requests increase volume without always increasing the quality of the project. As a result, maintainers need to spend more time reviewing contributions from developers with vastly variable levels of skill. In a lot of cases, the amount of time it takes to review the additional suggestions has risen faster than the number of maintainers.
Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.
This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help. There are also a number of open source AI projects specifically trying to address the AI slop issue. In addition, maintainers have been using AI defensively, using it to triage issues, detect duplicate issues, and handle simple maintenance like the labeling of issues. By helping to offload some of the grunt work, it gives maintainers more time to focus on the issues that require human intervention and decision making.
Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of the community infrastructure. To deal with this volume of information, AI cannot be just a coding assistant. It needs to ease the pressure on maintainers and make their work more scalable.
Record growth is healthy, if it’s planned for
On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover basics, such as contributing your first pull request to GitHub, shows that a lot of these new developers are very much in their infancy in terms of comfort with open source. There’s uncertainty about how to move forward and how to interact with the community. Not to mention challenges with repetitive onboarding questions and duplicate issues.
This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.
The way to address this is going to be less about having individuals serving as mentors—although that will still be important. It will be more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:
- Having a clear, defined path to move from contributor to reviewer to maintainer. Be aware that this can be difficult without a mentor to help guide along this path.
- Shared governance models that don’t rely on a single timezone or small group of people.
- Documentation that provides guidance on how to contribute and the goals of the project.
By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.
But what are people building?
It can’t be denied that AI was a major focus—about 60% of the top growing projects were AI focused. However, there were several that had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

Just as the developer space is growing on a global scale, the same can be said about the projects that garner the most interest. These types of projects that support a global community and address their needs are going to continue to be popular and have the most support.
This just continues to reinforce how open source is really embracing being a global phenomenon as opposed to a local one.
What this year will likely hold
Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.
For developers, this means that it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow—it’s how you can make that growth sustainable.
The post What to expect for open source in 2026 appeared first on The GitHub Blog.
]]>The post Securing the AI software supply chain: Security results across 67 open source projects appeared first on The GitHub Blog.
]]>Modern software is built on open source projects. In fact, you can trace almost any production system today, including AI, mobile, cloud, and embedded workloads, back to open source components. These components are the invisible infrastructure of software: the download that always works, the library you never question, the build step you haven’t thought about in years, if ever.
A few examples:
- curl moves data for billions of systems, from package managers to CI pipelines.
- Python, pandas, and SciPy sit underneath everything from LLM research to ETL workflows and model evaluation.
- Node.js, LLVM, and Jenkins shape how software is compiled, tested, and shipped across industries.
When these projects are secure, teams can adopt automation, AI‑enhanced tooling, and faster release cycles without adding risk or slowing down development. When they aren’t, the blast radius crosses project boundaries, propagating through registries, clouds, transitive dependencies, and production systems, including AI systems, that react far faster than traditional workflows.
Securing this layer is not only about preventing incidents; it’s about giving developers confidence that the systems they depend on—whether for model training, CI/CD, or core runtime behavior—are operating on hardened, trustworthy foundations. Open source is shared industrial infrastructure that deserves real investment and measurable outcomes.
That is the mission of the GitHub Secure Open Source Fund: to secure open source projects that underpin the digital supply chain, catalyze innovation, and are critical to the modern AI stack.
We do this by directly linking funding to verified security outcomes and by giving maintainers resources, hands‑on security training, and a security community where they can raise their highest‑risk concerns and get expert feedback.
Why securing critical open source projects matters
A single production service can depend on hundreds or even thousands of transitive dependencies. As Log4Shell demonstrated, when one widely used project is compromised, the impact is rarely confined to a single application or company.
Investing in the security of widely used open source projects does three things at once:
- It reinforces that security is a baseline requirement for modern software, not optional labor.
- It gives maintainers time, resources, and support to perform proactive security work.
- It reduces systemic risk across the global software supply chain.
This security work benefits everyone who writes, ships, or operates code, even if they never interact directly with the projects involved. That gap is exactly what the GitHub Secure Open Source Fund was built to close. In Sessions 1 and 2, 71 projects made significant security improvements. In Session 3, 67 open source projects delivered concrete security improvements to reduce systemic risk across the software supply chain.
Session 3, by the numbers
- 67 projects
- 98 maintainers
- $670,000 in non-dilutive funding powered by GitHub Sponsors
- 99% of projects completed the program with core GitHub security features enabled
Real security results across all sessions:
- 138 projects
- 219 maintainers
- 38 countries represented by participating projects
- $1.38M in non-dilutive funding powered by GitHub Sponsors
- 191 new CVEs issued
- 250+ new secrets prevented from being leaked
- 600+ leaked secrets were detected and resolved
- Billions of monthly downloads powered by alumni projects
Plus, in just the last 6 months:
- 500+ CodeQL alerts fixed
- 66 secrets blocked
Where security work happened in Session 3
Session 3 focused on improving security across the systems developers rely on every day. The projects below are grouped by the role they play in the software ecosystem.
Core programming languages and runtimes 🤖
CPython • Himmelblau • LLVM • Node.js • Rustls
These projects define how software is written and executed. Improvements here flow downstream to entire ecosystems.
This group includes CPython, Node.js, LLVM, Rustls, and related tooling that shapes compilation, execution, and cryptography at scale.

For example, improvements to CPython directly benefit millions of developers who rely on Python for application development, automation, and AI workloads. LLVM maintainers identified security improvements that complement existing investments and reduce risk across toolchains used throughout the industry.
When language runtimes improve their security posture, everything built on top of them inherits that resilience.

Web, networking, and core infrastructure libraries 📚
Apache APISIX • curl • evcc • kgateway • Netty • quic-go • urllib3 • Vapor
These projects form the connective tissue of the internet. They handle HTTP, TLS, APIs, and network communication that nearly every application depends on.
This group includes curl, urllib3, Netty, Apache APISIX, quic-go, and related libraries that sit on the hot path of modern software.

Build systems, CI/CD, and release tooling 🧰
Apache Airflow • Babel • Foundry • Gitoxide • GoReleaser • Jenkins • Jupyter Docker Stacks • node-lru-cache • oapi-codegen • PyPI / Warehouse • rimraf • webpack
Compromising build tooling compromises the entire supply chain. These projects influence how software is built, tested, packaged, and shipped.
Session 3 included projects such as Jenkins, Apache Airflow, GoReleaser, PyPI Warehouse, webpack, and related automation and release infrastructure.
Maintainers in this category focused on securing workflows that often run with elevated privileges and broad access. Improvements here help prevent tampering before software ever reaches users.

Data science, scientific computing, and AI foundations 📊
ACI.dev • ArviZ • CocoIndex • OpenBB Platform • OpenMetadata • OpenSearch • pandas • PyMC • SciPy • TraceRoot
These projects sit at the core of modern data analysis, research, and AI development. They are increasingly embedded in production systems as well as research pipelines.
Projects such as pandas, SciPy, PyMC, ArviZ, and OpenSearch participated in Session 3. Maintainers expanded security coverage across large and complex codebases, often moving from limited scanning to continuous checks on every commit and release.
Many of these projects also engaged deeply with AI-related security topics, reflecting their growing role in AI workflows.

Developer tools and productivity utilities ⚒️
AssertJ • ArduPilot • AsyncAPI Initiative • Bevy • calibre • DIGIT • fabric.js • ImageMagick • jQuery • jsoup • Mastodon • Mermaid • Mockoon • p5.js • python-benedict • React Starter Kit • Selenium • Sphinx • Spyder • ssh_config • Thunderbird for Android • Two.js • xyflow • Yii framework
These projects shape the day-to-day experience of writing, testing, and maintaining software.
The group includes tools such as Selenium, Sphinx, ImageMagick, calibre, Spyder, and other widely used utilities that appear throughout development and testing environments.
Improving security here reduces the risk that developer tooling becomes an unexpected attack vector, especially in automated or shared environments.

Identity, secrets, and security frameworks 🔒
external-secrets • Helmet.js • Keycloak • Keyshade • Oauth2 (Ruby) • varlock • WebAuthn (Go)
These projects form the backbone of authentication, authorization, secrets management, and secure configuration.
Session 3 participants included projects such as Keycloak, external-secrets, oauth2 libraries, WebAuthn tooling, and related security frameworks.
Maintainers in this group often reported shifting from reactive fixes to systematic threat modeling and long-term security planning, improving trust for every system that depends on them.


Security as shared infrastructure
One of the most durable outcomes of the program was a shift in mindset.
Maintainers moved security from a stretch goal to a core requirement. They shifted from reactive patching to proactive design, and from isolated work to shared practice. Many are now publishing playbooks, sharing incident response exercises, and passing lessons on to their contributor communities.
That is how security scales: one-to-many.
What’s next: Help us make open source more secure
Securing open source is basic maintenance for the internet. By giving 67 heavily used projects real funding, three focused weeks, and direct help, we watched maintainers ship fixes that now protect millions of builds a day. This training, taught by the GitHub Security Lab and top cybersecurity experts, allows us to go beyond one-on-one education and enable one-to-many impact.
For example, many maintainers are working to make their playbooks public. The incident-response plans they rehearsed are forkable. The signed releases they now ship flow downstream to every package manager and CI pipeline that depends on them.
Join us in this mission to secure the software supply chain at scale.
- Projects and maintainers: Apply now to the GitHub Secure Open Source Fund and help make open source safer for everyone. Session 4 begins April 2026. If you write code, rely on open source, or want the systems you depend on to remain trustworthy, we encourage you to apply.
- Funding and Ecosystem Partners: Become a Funding or Ecosystem Partner and support a more secure open source future.
Thank you to all of our partners
We couldn’t do this without our incredible network of partners. Together, we are helping secure the open source ecosystem for everyone!
Funding Partners: Alfred P. Sloan Foundation, American Express, Chainguard, Datadog, Herodevs, Kraken, Mayfield, Microsoft, Shopify, Stripe, Superbloom, Vercel, Zerodha, 1Password

Ecosystem Partners: Atlantic Council, Ecosyste.ms, CURIOSS, Digital Data Design Institute Lab for Innovation Science, Digital Infrastructure Insights Fund, Microsoft for Startups, Mozilla, OpenForum Europe, Open Source Collective, OpenUK, Open Technology Fund, OpenSSF, Open Source Initiative, OpenJS Foundation, University of California, OWASP, Santa Cruz OSPO, Sovereign Tech Agency, SustainOSS

The post Securing the AI software supply chain: Security results across 67 open source projects appeared first on The GitHub Blog.
]]>The post Automate repository tasks with GitHub Agentic Workflows appeared first on The GitHub Blog.
]]>Imagine visiting your repository in the morning and feeling calm because you see:
- Issues triaged and labelled
- CI failures investigated with proposed fixes
- Documentation updated to reflect recent code changes
- Two new pull requests that improve testing, awaiting your review
All of it visible, inspectable, and operating within the boundaries you’ve defined.
That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.
At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.
GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.
AI repository automation: A revolution through simplicity
The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.
This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.
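As a sketch, the engine can be selected in a workflow's frontmatter (the `engine:` field and its values here reflect the technical preview and may change; check the gh-aw documentation for the current schema):

```yaml
# In the agentic workflow's YAML frontmatter (sketch):
engine: copilot   # or claude, codex (depends on your configuration)
```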
GitHub Agentic Workflows make entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish with traditional YAML workflows alone:
- Continuous triage: automatically summarize, label, and route new issues.
- Continuous documentation: keep READMEs and documentation aligned with code changes.
- Continuous code simplification: repeatedly identify code improvements and open pull requests for them.
- Continuous test improvement: assess test coverage and add high-value tests.
- Continuous quality hygiene: proactively investigate CI failures and propose targeted fixes.
- Continuous reporting: create regular reports on repository health, activity, and trends.
These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.
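As an illustrative sketch, a continuous-triage workflow might look like the following. The safe-outputs keys shown (`add-comment`, `add-labels`) are assumptions based on the safe-outputs pattern; consult the gh-aw documentation for the exact schema:

```markdown
---
on:
  issues:
    types: [opened]
permissions:
  contents: read
  issues: read
safe-outputs:
  add-comment:
  add-labels:
tools:
  github:
---

# Issue Triage

When a new issue is opened, summarize it in one short paragraph,
suggest labels from the repository's existing label set, and post
the summary as a comment on the issue.
```

The agent reads the issue with read-only permissions; the only writes it can perform are the comment and labels declared under safe-outputs.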
GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.
In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.
Let’s talk guardrails and control
Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.
Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.
Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.
One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.
A simple example: A daily repo report
Let’s look at an agentic workflow which creates a daily status report for repository maintainers.
In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:
Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md
The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.
This will create two files in .github/workflows:
- `daily-repo-status.md` (the agentic workflow)
- `daily-repo-status.lock.yml` (the corresponding agentic workflow lock file, which is executed by GitHub Actions)
The file daily-repo-status.md will look like this:
```markdown
---
on:
  schedule: daily
permissions:
  contents: read
  issues: read
  pull-requests: read
safe-outputs:
  create-issue:
    title-prefix: "[repo status] "
    labels: [report]
tools:
  github:
---

# Daily Repo Status Report

Create a daily status report for maintainers.

Include:

- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders and highlights
- Project status and recommendations
- Actionable next steps for maintainers

Keep it concise and link to the relevant issues/PRs.
```
This file has two parts:
- Frontmatter (YAML between `---` markers) for configuration
- Markdown instructions that describe the job in natural language
The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.
If you prefer, you can add the workflow to your repository manually:
- Create the workflow: add `daily-repo-status.md` with the frontmatter and instructions.
- Create the lock file:

```shell
gh extension install github/gh-aw
gh aw compile
```

- Commit and push: commit and push both files to your repository.
- Add any required secrets: for example, add a token or API key for your coding agent.
Once you add this workflow to your repository, it will run automatically or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:
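One way to iterate on a workflow is to combine the gh-aw extension with standard GitHub CLI commands, along these lines (a sketch; `gh workflow run` requires the generated workflow to declare a manual trigger):

```shell
# Recompile the lock file after editing the Markdown workflow
gh aw compile

# Trigger the generated Actions workflow by hand
gh workflow run daily-repo-status.lock.yml

# Follow the run as it executes
gh run watch
```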

What you can build with GitHub Agentic Workflows
If you’re looking for further inspiration, Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.
A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.
If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.
Uses for agent-assisted repository automation often depend on the particular repos and development priorities involved. Your team’s approach to software development will differ from other teams’. It pays to be imaginative about how agentic automation can augment your team, in your repositories, in service of your goals.
Practical guidance for teams
Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.
You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.
GitHub Agentic Workflows use coding agents at runtime, which incurs billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details.

Here are a few more tips to help teams get value quickly:
- Start with low-risk outputs such as comments, drafts, or reports before enabling pull request creation.
- For coding, start with goal-oriented improvements such as routine refactoring, test coverage, or code simplification rather than feature work.
- For reports, use instructions that are specific about what “good” looks like, including format, tone, links, and when to stop.
- Agentic workflows create an agent-only sub-loop that can operate autonomously because the agents act under defined terms. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.
- Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.
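As a sketch of the first tip, a workflow's frontmatter can restrict write operations to commenting only, so no pull requests or pushes are possible. The `max` field here is an assumption about the safe-outputs schema; consult the gh-aw documentation for the exact options:

```yaml
safe-outputs:
  add-comment:
    max: 1   # at most one comment per run; no other writes are permitted
```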
Continuous AI works best in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD; instead, use them to extend continuous automation to the more subjective, repetitive tasks that traditional CI/CD struggles to express.
Build the future of automation with us
GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.
We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!
Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next).
The post Automate repository tasks with GitHub Agentic Workflows appeared first on The GitHub Blog.