티스토리 수익 글 보기
The post Automate repository tasks with GitHub Agentic Workflows appeared first on The GitHub Blog.
]]>Imagine visiting your repository in the morning and feeling calm because you see:
- Issues triaged and labelled
- CI failures investigated with proposed fixes
- Documentation has been updated to reflect recent code changes.
- Two new pull requests that improve testing await your review.
All of it visible, inspectable, and operating within the boundaries you’ve defined.
That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.
At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.
GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.
AI repository automation: A revolution through simplicity
The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.
This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.
The use of GitHub Agentic Workflows makes entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish traditional YAML workflows alone:
- Continuous triage: automatically summarize, label, and route new issues.
- Continuous documentation: keep READMEs and documentation aligned with code changes.
- Continuous code simplification: repeatedly identify code improvements and open pull requests for them.
- Continuous test improvement: assess test coverage and add high-value tests.
- Continuous quality hygiene: proactively investigate CI failures and propose targeted fixes.
- Continuous reporting: create regular reports on repository health, activity, and trends.
These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.
GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.
In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.
Let’s talk guardrails and control
Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.
Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.
Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.
One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.
A simple example: A daily repo report
Let’s look at an agentic workflow which creates a daily status report for repository maintainers.
In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:
Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md
The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.
This will create two files in .github/workflows:
daily-repo-status.md(the agentic workflow)daily-repo-status.lock.yml(the corresponding agentic workflow lock file, which is executed by GitHub Actions)
The file daily-repo-status.md will look like this:
---
on:
schedule: daily
permissions:
contents: read
issues: read
pull-requests: read
safe-outputs:
create-issue:
title-prefix: "[repo status] "
labels: [report]
tools:
github:
---
# Daily Repo Status Report
Create a daily status report for maintainers.
Include
- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders and highlights
- Project status and recommendations
- Actionable next steps for maintainers
Keep it concise and link to the relevant issues/PRs.
This file has two parts:
- Frontmatter (YAML between
---markers) for configuration - Markdown instructions that describe the job in natural language in natural language
The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.
If you prefer, you can add the workflow to your repository manually:
- Create the workflow: Add
daily-repo-status.mdwith the frontmatter and instructions. - Create the lock file:
gh extension install github/gh-awgh aw compile
- Commit and push: Commit and push files to your repository.
- Add any required secrets: For example, add a token or API key for your coding agent.
Once you add this workflow to your repository, it will run automatically or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:

What you can build with GitHub Agentic Workflows
If you’re looking for further inspiration Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.
A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.
If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.
Uses for agent-assisted repository automation often depend on particular repos and development priorities. Your team’s approach to software development will differ from those of other teams. It pays to be imaginative about how you can use agentic automation to augment your team for your repositories for your goals.
Practical guidance for teams
Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.
You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.
GitHub Agentic Workflows use coding agents at runtime, which incur billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details. Here are a few more tips to help teams get value quickly:
- Start with low-risk outputs such as comments, drafts, or reports before enabling pull request creation.
- For coding, start with goal-oriented improvements such as routine refactoring, test coverage, or code simplification rather than feature work.
- For reports, use instructions that are specific about what “good” looks like, including format, tone, links, and when to stop.
- Agentic workflows create an agent-only, sub-loop that’s able to be autonomous because agents are acting under defined terms. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.
- Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.
Continuous AI works best if you use it in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. This approach extends continuous automation to more subjective, repetitive tasks that traditional CI/CD struggle to express.
Build the future of automation with us
GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.
We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!
Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next).
The post Automate repository tasks with GitHub Agentic Workflows appeared first on The GitHub Blog.
]]>The post How to streamline GitHub API calls in Azure Pipelines appeared first on The GitHub Blog.
]]>Azure Pipelines is a cloud-based continuous integration and continuous delivery (CI/CD) service that automatically builds, tests, and deploys code similarly to GitHub Actions. While it is part of Azure DevOps, Azure Pipelines has built-in support to build and deploy code stored in GitHub repositories.
Because Azure Pipelines is fully integrated into GitHub development flows, pipelines can be triggered by pushes or pull requests, and it reports the results of the job execution back to GitHub via GitHub status checks. This way, developers can easily see if a given commit is healthy or block pull request merges if the pipeline is not compliant with GitHub rulesets.
When you need additional functionality, you can use either extensions available in the marketplace or GitHub APIs to deepen the integration with GitHub. Below, we’ll show how you can streamline the process of calling the GitHub API from Azure Pipelines by abstracting authentication with GitHub Apps and introducing a custom Azure DevOps extension, this will allow pipeline authors to easily authenticate against GitHub and call GitHub APIs without implementing authentication logic themselves. This approach provides enhanced security through centralized credential management, improved maintainability by standardizing GitHub integrations, time savings through cross-project reusability, and simplified operations with centrally managed updates for bug fixes.
Common use cases and scenarios
The GitHub API is very rich, so the possibilities for customization are almost endless. Some of the most common scenarios for GitHub calls in Azure Pipelines include:
- Setting status checks on commits or pull requests: Report the success or failure of pipeline steps (like tests, builds, or security scans) back to GitHub, enabling rulesets utilization to enforce policies, and providing clear feedback to developers about the health of their code changes.
- Adding comments to pull requests: Automatically post pipeline results, test coverage reports, performance metrics, or deployment information directly to pull request discussions, keeping all relevant information in one place for code reviewers.
- Updating files in repositories: Automatically update documentation, configuration files, or version numbers as part of your CI/CD process, such as updating a
CHANGELOG.mdfile or bumping version numbers in package files. - Managing GitHub Issues: Automatically create, update, or close issues based on pipeline results, such as creating bug reports when tests fail or closing issues when related features are successfully deployed.
- Integrating with GitHub Advanced Security: Send code scanning results to GitHub’s code scanning, enabling centralized vulnerability management, security insights, and supporting DevSecOps practices across your development workflow.
- Managing releases and assets: Automatically create GitHub releases and upload build artifacts, binaries, or documentation as release assets when deployments are successful, streamlining your release management process.
- Tracking deployments with GitHub deployments: Integrate with GitHub’s deployment API to provide visibility into deployment history and status directly in the GitHub interface.
- Triggering GitHub Actions workflows: Orchestrate hybrid CI/CD scenarios where Azure Pipelines handles certain build or deployment tasks and then triggers GitHub Actions workflows for additional processing or notifications.
Understanding GitHub API: REST vs. GraphQL
The GitHub API provides programmatic access to most of GitHub’s features and data, offering two distinct interfaces: REST and GraphQL. The REST API follows RESTful principles and provides straightforward HTTP endpoints for common operations like managing repositories, issues, pull requests, and workflows. It’s well documented, easy to get started with, and supports authentication via personal access tokens, GitHub Apps, or OAuth tokens.
GitHub’s GraphQL API offers a more flexible and efficient approach to data retrieval. Unlike REST, where you might need multiple requests to gather related data, GraphQL allows you to specify exactly what data you need in a single request, reducing over-fetching and under-fetching of data. This is particularly valuable when you need to retrieve complex, nested data structures or when you want to optimize network requests in your applications. You can see some examples in Exploring GitHub CLI: How to interact with GitHub’s GraphQL API endpoint.
Both APIs serve as the foundation for integrating GitHub’s functionality into external tools, automating workflows, and building custom solutions that extend GitHub’s capabilities.
How to choose the right authentication method
GitHub offers three primary authentication methods for accessing its APIs. Personal Access Tokens (PATs) are the simplest method, providing a token tied to a user account with specific permissions. OAuth tokens are designed for third-party applications that need to act on behalf of different users, implementing a standard authorization flow where users grant specific permissions to the application.
GitHub Apps provide the most robust and scalable solution, operating as their own entities with fine-grained permissions, installation-based access, and higher rate limits — making them ideal for organizations and production applications that need to interact with multiple repositories or organizations while maintaining tight security controls.
| Authentication Type | Pros | Cons |
|---|---|---|
| Personal Access Tokens (PATs) | – Simple to create and use – Quick to get started – Good for personal automation – Can be scoped to multiple organizations – Configurable permissions per token – Admins can revoke organization access – Configurable expiration dates – Work with most GitHub API libraries – No additional infrastructure needed | – Tied to user account lifecycle – Limited to user’s permissions – Classic PATs have coarse-grained permissions – Require manual rotation – Browser-based management only – If compromised, expose all accessible organization(s)/repositories |
| OAuth Tokens | – Standard OAuth 2.0 flow – Organization admins control app access – Can act on behalf of multiple users – Excellent for web applications – User-approved permissions – Refresh token mechanism – Widely supported by frameworks – Good for user-facing applications | – Require storing refresh tokens securely – Need server infrastructure – More complex than PATs for simple automation – Still tied to user accounts – Require initial browser authorization – Token management complexity – Potential for scope creep – User revocation affects functionality |
| GitHub Apps | – Act as independent identity – Fine-grained, repository-level permissions – Installation-based access control – Tokens can be scoped down at runtime – Short-lived tokens (1 hour max) – Higher rate limits – Best security model available – No user account dependency – Audit trail for all actions – Can be installed across multiple orgs | – More complex initial setup – Require JWT implementation – May be overkill for simple scenarios – Require understanding of installation concept – Private key management responsibility – More moving parts to maintain – Not all APIs support Apps |
PATs have two flavors: classic and fine-grained. Classic PATs provide repository-wide access with coarse permissions. Fine-grained PATs offer more granular control, since they are scoped to a single organization, allow specified permissions at the repository level, and limit access to specific repositories. Administrators can also require approval of fine-grained tokens before they can be used, making them a more secure choice for repository access management. However, they currently do not support all API calls and still have some limitations compared to classic PATs.
Because of their fine-grained permissions, security features, and higher rate limits, GitHub Apps are the ideal choice for machine-to-machine integration with Azure Pipelines. What’s more, the short-lived tokens and installation-based access model provide better security controls compared to PATs and OAuth tokens, making them particularly well-suited for automation in CI/CD scenarios.
Registering and installing a GitHub App
In order to use an application for authentication, register it as a GitHub App, and then install it on the accounts, organizations, or enterprises the application will interact with.
These are the steps to follow:
- Register the GitHub App in GitHub enterprise, organization, or account.
- Make sure to select the appropriate permissions for the application. The permissions will determine what the application can do in the enterprise, organization, and repositories to which it has access.
- Permissions may be modified at any time. Note that if the application is already installed, changes will require a new authorization from the owner administrators before they take effect.
- Take care to understand the consequences of making the app public or private. It is very likely that you will want to make the app private, as it is only intended to be used by you or your organization. The semantics of public and private also vary depending on the GitHub Enterprise Cloud type (Enterprise with personal accounts, with managed users, or with data residency).
- If a private key was generated, save it in a safe place. Private keys are used to authenticate against GitHub to generate an installation token. Note that a key can be revoked or up to 20 more may be generated if desired.
- Install the GitHub App on the accounts or organizations the application will interact with.
- When an app is installed, select which repositories the app will have access to. Options include all repositories (current and future) or you can select individual repositories.
Note: An unlimited number of GitHub Apps may be installed on each account, but only 100 GitHub Apps may be registered per enterprise, organization, or account.
GitHub App authentication flow
GitHub Apps use a two-step authentication process to access the GitHub API. First, the app authenticates itself using a JSON Web Token (JWT) signed with its private key. This JWT proves the app’s identity but doesn’t provide access to any GitHub resource. To call GitHub APIs, the app needs to obtain an installation token. Installation tokens are scoped (enterprise, organization, or account) access tokens that are generated using the app’s JWT authentication. These tokens are short-lived (valid for one hour) and can only access the resources on the scope they are installed on (enterprise, organization, or repository) and use at max the permissions granted during the app’s installation.
To obtain an installation token, there are two approaches: either use a known installation ID, or retrieve the ID by calling the installations API. Once the app has the installation ID, it requests a new token using that ID. The resulting installation token inherits the app’s permissions and repository access for that installation. It can optionally request the token with reduced permissions or limited to specific repositories — a useful security feature when you don’t need the app’s full access scope.
The resulting installation token can then be used to make GitHub API calls with the returned permissions.
Note: The application can also authenticate on a user’s behalf, but it’s not an ideal scenario for CI/CD pipelines where we want to use a service account and not a user account.

From a pipeline perspective, generating an installation token is all that’s needed to call GitHub APIs.
Pipeline authors have three main options to generate installation tokens in Azure Pipelines:
- Use a command-line tool: Several tools are available that can generate installation tokens directly from a pipeline step. For example, gh-token is a popular open source tool that handles the entire token generation process.
- Write custom scripts: Implement the token generation process using bash/curl or PowerShell scripts following the authentication steps described above. This grants full control over the process but requires more implementation effort.
- Use Azure Pipeline tasks: While Azure Pipelines doesn’t provide built-in GitHub App authentication, you can either:
- Find a suitable task in the Azure DevOps marketplace.
- Create a custom task that implements the GitHub App authentication flow.
Next, we’ll explore creating a custom task using an Azure DevOps extension to provide an integration with GitHub App authentication and dynamically generated installation tokens.
Azure DevOps extension for GitHub App authentication
When creating an integration between Azure Pipelines and GitHub, security of the app private key should be top of mind. Possession of this key grants permissions to generate installation tokens and make API calls on behalf of the app, so it must be stored securely. Within Azure Pipelines, we have several options for storing sensitive data:
- Azure Pipeline secrets store, which can be accessed via secret variables
- Azure Pipelines secure files
- Azure Pipelines service connections, which are project-level resources used to store authentication details for external services
Service connections in Azure Pipelines provide several key benefits for managing external service authentication, including:
- Centralized access control where administrators can specify which pipelines can use the connection
- Support for multiple authentication schemes
- Ability to share connections across multiple pipelines within a project
- Built-in security controls for managing who can view or modify connection details
- Keep sensitive credentials hidden from pipeline authors while still allowing usage
- Shared connections across multiple projects, reducing duplication and management overhead
For GitHub App authentication, service connections are particularly valuable because they:
- Securely store the app’s private key
- Allow administrators to configure and enforce connection behaviors
- Provide better security compared to storing secrets directly in pipelines or variable groups
For those eager to explore the sample code, check out the repository. The key components and configuration are detailed below.
Creating a custom Azure DevOps extension
Azure DevOps extensions are packages that add new capabilities to Azure DevOps services. In our case, we need to create an extension that provides two key components:
- Custom service connection type for securely storing GitHub App credentials (and other settings)
- Custom task that uses those credentials to generate installation tokens
An extension consists of a manifest file that describes what the extension provides, along with the actual implementation code.
The development process involves creating the extension structure, defining the service connection schema, implementing the custom task logic in PowerShell (Windows only) or JavaScript/TypeScript for cross-platform compatibility, and packaging everything into a distributable format. Once created, the extension can be published privately for your organization or shared publicly through the Azure DevOps Marketplace, making it available for others who have similar GitHub integration needs.
We are not going to do a full walkthrough of the extension creation process, but we will demonstrate the most important steps. You can find all the information here:
- Extensions overview
- How to build an extension
- How to create an extension
- How to add a custom pipelines task extension.
Adding a custom service connection
To enable GitHub App authentication in Azure Pipelines, we need to create a custom service connection type since there isn’t a built-in one. This can be done by adding a custom endpoint contribution to our extension, which will define how the service connection stores and validates the GitHub App credentials, and provides a user-friendly UI for configuring the connection settings like App ID, private key, and other properties.
We need to add a contribution of type ms.vss-endpoint.service-endpoint-type to the extension contributions manifest. This contribution will define the service connection type and its properties, like the authentication scheme, the endpoint schema, and the input fields that will be displayed in the service connection configuration dialogue.
Something like this (see a snippet below, or explore the full contribution definition in reference implementation):
"contributions": [
{
"id": "github-app-service-endpoint-type",
"description": "GitHub App Service Connection",
"type": "ms.vss-endpoint.service-endpoint-type",
"targets": [ "ms.vss-endpoint.endpoint-types" ],
"properties": {
"name": "githubappauthentication",
"isVerifiable": false,
"displayName": "GitHub App",
"url": {
"value": "https://api.github.com/",
"displayName": "GitHub API URL",
"isVisible": "true"
},
...
},
Once you install the extension, you can add/manage the service connection of type “GitHub App” and configure the app’s ID, private key, and other settings. The service connection will securely store the private key and can be used by custom tasks to generate installation tokens in a pipeline.

In addition to storing the private key, the custom service connection can also store other settings, such as the GitHub API URL and the app client ID. It can also be used to limit token permissions or scope the token to specific repositories. By optionally enforcing these settings at the service connection level, administrators can ensure consistency and security, rather than leaving configuration decisions to pipeline authors.

Adding a custom task
Now that we have a secure way to store the GitHub App credentials, we can create a custom task that will use the service connection to generate an installation token. The task will be a TypeScript application (cross platform) and use the Azure DevOps Extension SDK.
While I already shared the full walkthrough of creating a custom task, here is an abbreviated list to follow:
- Create the custom task skeleton
- Declare the inputs and outputs on the task manifest (
task.json) - Implement the code
- Declare the task and its assets on the extension manifest (
vss-extension.json)
I have created an extension sample that contains both the service connection as well as a custom task that generates a GitHub installation token for API calls. Since the extension is not published to the marketplace, you have to (privately) publish under your account, share it with your Azure DevOps enterprise or organization, and then install it on all organizations where you want to use the custom task.
Jump to the next section If you choose this path, as you are now ready to use the custom task in your pipeline.
Note: The sample includes both a GitHub Actions workflow and an Azure Pipelines YAML pipeline that builds and packages the extension as an Azure DevOps extension that can be published in the Azure DevOps marketplace.
Using the custom task in Azure Pipelines
The task supports receiving the private key, as a string, a file (to be combined with secure files), or preferably a service connection (see input parameters).
Assuming you have a service connection named my-github-app-service-connection, let’s see how can use task to create a comment in a pull request in the GitHub repository that triggers the pipeline using the GitHub CLI to call the GitHub API:
steps:
- task: create-github-app-token@1
displayName: create installation token
name: getToken
inputs:
githubAppConnection: my-github-app-service-connection
- bash: |
pr_number=$(System.PullRequest.PullRequestNumber)
repo=$(Build.Repository.Name)
echo "Creating comment in pull request #${pr_number} in repository ${repo}"
gh api -X POST "/repos/${repo}/issues/${pr_number}/comments" -f body="Posting a comment from Azure Pipelines"
displayName: Create comment in pull request
condition: eq(variables['Build.Reason'], 'PullRequest')
env:
GH_TOKEN: $(getToken.installationToken)
Running this pipeline will result in a comment being posted in the pull request:

Pretty simple, right? The task will create an installation token using the service connection and export it as a variable, which can be accessed as getToken.installationToken (with getToken being the identifier of the step). It can then be used to authenticate against GitHub, in this case using the GitHub CLI command, which will take care of the API call and authentication for us (we could have also used curl or any other HTTP client).
The task also exports other variables:
- tokenExpiration: the expiration date of the generated token, in ISO 8601 format
- installationId: the ID of the installation for which the token was generated
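A sketch of how a later step might consume these variables, assuming the same getToken step identifier used in the example above:

```yaml
# Sketch: read the extra variables exported by the token step.
# Assumes the token step is named getToken, as in the example above.
- bash: |
    echo "Token was created for installation $(getToken.installationId)"
    echo "Token expires at $(getToken.tokenExpiration)"
  displayName: Show token metadata
```

The expiration date can be useful, for example, to fail fast in long-running jobs where the token might expire before the pipeline finishes.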
Unlocking powerful automation capabilities beyond basic CI/CD
By leveraging GitHub Apps for authentication, organizations can establish secure, scalable Azure Pipelines integrations that provide fine-grained permissions, short-lived tokens, and better security controls compared to traditional PATs.
The custom Azure DevOps extension approach provides a seamless integration experience that abstracts away the complexities of GitHub App authentication. Through service connections and custom tasks, pipeline authors can easily generate installation tokens without worrying about JWT generation, installation ID management, or token lifecycle concerns.
The streamlined approach also enables development teams to implement rich GitHub integrations, including automated status checks, pull request comments, issue management, security scanning integration, and deployment tracking. The result? A more cohesive development workflow where Azure Pipelines and GitHub work together seamlessly to provide comprehensive visibility and automation throughout the software development lifecycle.
Whether you’re looking to enhance your existing CI/CD processes or build entirely new automated workflows, the combination of Azure Pipelines and GitHub API through GitHub Apps provides a robust foundation for modern DevOps practices. This will allow you to enrich your existing pipelines with GitHub capabilities as you move your code from Azure Repos to GitHub.
The post How to streamline GitHub API calls in Azure Pipelines appeared first on The GitHub Blog.
Switching from Bitbucket Server and Bamboo to GitHub just got easier
Switching from these tools to GitHub Enterprise Cloud and GitHub Actions has become easier, safer, and even more seamless, with today’s launch of new migration tools.
GitHub Enterprise Importer (GEI) now supports migrations from Bitbucket Server and Bitbucket Data Center to GitHub Enterprise Cloud, and GitHub Actions Importer now allows you to move from any of Atlassian’s CI/CD products (Bitbucket, Bamboo Server, and Bamboo Data Center) to GitHub Actions.
A complex landscape
Companies across the globe rely on DevOps to fuel growth and drive innovation. However, the developer tech stack has become more complicated with countless tools that don’t always integrate easily or work together, resulting in a disjointed experience and operational overhead. And despite industry-wide investments in DevOps, developers report that the most time-consuming thing they’re doing at work besides writing code is waiting on builds and tests. Concerns regarding data security and the risks associated with coordinating workflows across numerous tools are often raised.
Navigating this intricate landscape has become increasingly challenging. Investing in the developer experience can not only result in four to five times more revenue growth, but having a single, integrated platform allows for developers to spend their time doing what they do best—building great software and making an impact.
GitHub is that single, integrated platform, used by over 100 million developers. With built-in integrations and APIs, GitHub Enterprise Cloud features enterprise-ready, scalable CI/CD deployment automation with GitHub Actions, collaboration tools, natively embedded application security testing, and the industry’s first AI pair programmer, GitHub Copilot.
Migrating your data has never been easier
If you’re making the move to GitHub, we know you’ll have data you want to bring with you so your team can hit the ground running. We also know that fear of migration can be a big barrier to switching, which is why we’ve worked hard to make moving quick, low cost, and painless.
GitHub Enterprise Importer is our tried and tested migration tool, used by thousands of GitHub customers to migrate more than 700,000 repositories to the GitHub platform.
Today, we’re launching support for migrations from Bitbucket Server and Bitbucket Data Center, allowing you to seamlessly bring your code, pull requests, reviews, and comments to GitHub Enterprise Cloud.
With GitHub Enterprise Importer, we were able to migrate hundreds of repos from Bitbucket Server to GitHub Enterprise up to 70% faster than we could have otherwise.
To migrate a repository, all you need to do is install our extension for the GitHub CLI and then run the gh bbs2gh migrate-repo command. For more details, and to learn about our tools for planning your migration and moving large numbers of repositories, see “Migrating repositories from Bitbucket Server to GitHub Enterprise Cloud” in the GitHub Docs.
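In outline, the flow looks like this. This is a sketch: every value below is a placeholder, and the authoritative list of options and required credentials is in the GitHub Docs page linked above.

```shell
# Install the Bitbucket Server migration extension for the GitHub CLI.
gh extension install github/gh-bbs2gh

# Migrate a single repository; all values are placeholders.
gh bbs2gh migrate-repo \
  --bbs-server-url https://bitbucket.example.com \
  --bbs-project MY-PROJECT \
  --bbs-repo my-repo \
  --github-org my-github-org \
  --github-repo my-repo
```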
Bring your CI/CD pipelines with your data
In addition to moving your repositories to GitHub with GitHub Enterprise Importer, you can also move your CI/CD pipelines to GitHub Actions with GitHub Actions Importer—a migration tool that helps you plan, forecast, and automate your CI migrations. In addition to Bitbucket and Bamboo, GitHub Actions Importer already supports migrations from Azure DevOps, CircleCI, GitLab, Jenkins, and Travis CI.
GitHub Actions Importer is especially designed to help organizations when manual migration is not feasible. For organizations with large and sophisticated infrastructure, CI migrations are often a manual and time-intensive endeavor. GitHub Actions Importer speeds up this process while minimizing cost and the potential for error. In fact, since its inception, GitHub Actions Importer has helped thousands of users evaluate and test the migration of nearly a quarter million pipelines.
GitHub Actions Importer uses a phased approach to simplify the migration process:
- Planning. In this phase, you analyze your existing CI/CD usage to build a roadmap for your migration.
- Testing. Here you conduct dry-run migrations to validate that the converted workflows function the same as existing pipelines. GitHub Actions Importer supports unlimited iterations in this step to ensure any custom behavior is accurately encapsulated in your new GitHub Actions workflows.
- Migration. In this last phase, GitHub Actions Importer generates validated workflows and opens pull requests to add them to your GitHub repository. To finalize your migration, you should plan to review these workflows and migrate those constructs that could not be migrated automatically.
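The three phases above map onto subcommands of the Actions Importer CLI extension. A sketch for a Bamboo migration follows; provider-specific flags vary, so treat these invocations as illustrative and check the documentation for your source system:

```shell
# Install the Actions Importer extension for the GitHub CLI.
gh extension install github/gh-actions-importer

# Planning: audit the existing CI/CD footprint.
gh actions-importer audit bamboo --output-dir tmp/audit

# Testing: convert pipelines without opening pull requests.
gh actions-importer dry-run bamboo build --output-dir tmp/dry-run

# Migration: convert pipelines and open pull requests with the results.
gh actions-importer migrate bamboo build --output-dir tmp/migrate \
  --target-url https://github.com/my-org/my-repo
```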
To get started, head to our documentation or explore Importer Labs, our learning environment with tutorials for each supported migration path.
Get started today
With GitHub Enterprise Importer and GitHub Actions Importer’s new support for migrations from Atlassian’s DevOps tools, it’s easier than ever to switch to the GitHub platform.
Want to learn more about GitHub Enterprise? Get in touch with our sales team—we’ll be happy to help.
The post Switching from Bitbucket Server and Bamboo to GitHub just got easier appeared first on The GitHub Blog.
Best practices for organizations and teams using GitHub Enterprise Cloud
When a new customer starts using GitHub Enterprise, one of the first questions they usually ask is: How do we structure the organizations within our enterprise?
Even experienced GitHub administrators frequently reevaluate and seek guidance on how they should group organizations and teams within their enterprise.
Why? An effective structure is crucial in maximizing the value of GitHub and enhancing the experience for you and your developers. Enterprise structural decisions can determine whether you accelerate with DevSecOps and Innersourcing principles that enable efficient workflows, or remain stuck in silos and forfeit potential productivity gains.
The purpose of this post is to provide you with a set of guidelines to help you design the most suitable organization and team structure for your enterprise and to effectively align GitHub with your company’s culture.
DevSecOps, Innersource, and GitHub
First, let’s take a step back and talk about why you should focus on culture to begin with.
The term DevSecOps has been around for quite some time now. While exact definitions for it differ, DevSecOps generally stands for taking a holistic approach to the Software Development Life Cycle (SDLC) by building a culture of shared responsibility within a project team to deliver value in a fast, secure manner.
By using GitHub, you can expand the scope of collaboration beyond just one or a few teams. GitHub’s goal is to enable collaboration throughout your entire enterprise. To achieve this, you can leverage a strategy known as “innersource,” which takes all the values and lessons learned from the open source community and puts them behind the safe walls of your enterprise.
The success of innersourcing shows. According to a recent Forrester report, GitHub Enterprise can produce up to a 433% return on investment over three years. The ROI is mostly achieved by increasing your developers’ productivity up to 22%. Philips shared that 70% of their developers reported an innersource approach that improved their development experience. For a better understanding of how innersourcing works, check out this brief on how to accelerate innovation with innersource.
Enterprise challenges
The answer to setting up an innersource-friendly developer platform could be simple: let anyone see and propose changes to any content, anywhere! In GitHub terms, this would mean using a single organization with the default repository base permissions set to read or write. Done!
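For reference, that base permission can be changed per organization through the settings UI or the REST API. A hedged sketch using the GitHub CLI, where ORG is a placeholder and the field name follows the “Update an organization” REST endpoint:

```shell
# Set the base permission all organization members get on repositories.
# Accepted values include read, write, admin, and none.
gh api -X PATCH /orgs/ORG -f default_repository_permission=write
```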

But enterprises usually don’t work that way. There are many requirements that make access control and user permissions a challenge, such as the need to protect valuable intellectual property, stay compliant with audit or regulatory requirements, guard top-secret code and data, and follow security protocols. Additionally, it can be a challenge to manage the opinions and desires of the numerous individuals who make up your enterprise.
Finding the right balance between these requirements and fully realizing the innersourcing values described above can be hard, but the organizational structure of your GitHub enterprise and the flexibility allowed by the platform can help in reaching this sweet spot between compliance and efficiency. So, let’s see what you can do to find it.
What is a GitHub organization?
Let’s define a GitHub organization by referencing our “Book on GitHub Enterprise Cloud Adoption”:
Organizations are the “owners” of shared repositories, discussions, and projects. They let administrators set more granular policies on repositories belonging to the organization and user behavior within the organization. […]
Organizations also serve as a roll-up mechanism for reporting. […]
Consumption-based services, such as GitHub Actions and Codespaces, are reported at both the repository and organization level. Spending limits on these services can be set on a per-organization basis.
In other words, a GitHub organization is a grouping mechanism for repositories, teams, projects, settings and controls that, critically, keeps these things completely separate from each other.
- From the perspective of your end users, organizations are the highest level of abstraction of your enterprise, defining boundaries of what they can see and do in your enterprise. Users can be grouped in teams within organizations, but organizations are the only way to strictly group various resources such as apps, teams and repositories within an enterprise at GitHub.
- From the perspective of an enterprise administrator, organizations are a grouping mechanism to set up a group of policies, controls and reporting mechanisms among a set of repositories and teams.
To summarize, every organization is a sort of intentional “silo,” with its own policy and content boundaries, and therefore administrative overhead. However, it is also the primary method of grouping and enforcing controls on resources where these divisions are necessary for your business.
What is a team?
Again, referencing the “Book on GitHub Enterprise Cloud Adoption”:
GitHub Teams group users of common projects or specialized skills, they are often the mechanism for providing role based access to collections of repositories. […]
A team belongs to an organization, and an organization can have many teams. There are several ways teams help establish culture within your enterprise, for example, through access controls, communication, such as discussions, work and knowledge sharing in code reviews, and roles.
There is a lot of flexibility when it comes to creating teams: whether you create only a few broad teams or also many finer-grained ones is entirely up to you. You can centralize team creation to just your administrators, or allow any member of your organizations to create their own teams for collaboration and communication. For those in need of additional structure in their team permission inheritance, you can even nest teams. Nested teams create a hierarchy in which child teams inherit access from their parent team, although they can also add complexity in tracking permission models, so use them wisely!

The general recommendation is to establish teams based on areas of responsibility, communities, and common projects or product families. Also important to note: team membership may be managed through groups if using supported identity providers for easy management (with team synchronization or group SCIM, depending on your access model.)
General guidelines
So, how should you go about finding that sweet spot between maximizing innersourcing and conforming with all the desired controls? We can start by establishing some universal truths no matter your situation. This will help establish guard-rails before we have a deeper look at the impacts on individual GitHub features.
1. Make maximizing collaboration and visibility your main goal
This shouldn’t come as a surprise since innersourcing is one of the main topics of this post. Innersourcing thrives on wide visibility and collaborative capabilities.
So, when you think about how to structure your organizations, rather than coming from the perspective of “How can we best protect our resources?”, take the stance of “How can we maximize collaboration, given our required controls?” This mindset will make it easier for you to find creative solutions, or even to question existing restrictions rather than accepting them as hard truths.
Often enough, strict controls will not apply to all of your developers or projects. Thus, it’s important to target the right resources for required controls and give more freedom to the rest. Adopting this mindset will not only help you prioritize enterprise-scale efficiency over individual desires, but add more weight to the importance of streamlining processes and tools to avoid wild growth and shadow-IT approaches.
For instance, imagine an individual development team had its own organization. The ability to freely adjust settings to meet their needs and install applications seems to provide a lot of flexibility. However, the efficiency gains that result from standardization between policies and comprehensive collaboration across repositories (for example, repository rulesets, reusable workflows, and code security to name a few) are even greater.
2. Fewer organizations mean less siloing and overhead
“Try to have as few organizations as possible to meet your requirements”—this is the mantra you might have heard from GitHub and the first conclusion we drew in the section on organizations. Of course, there are ways to overcome the complexities of many organizations, such as having a mature collaboration culture, using internal repositories, or simply adding users to multiple organizations. Nonetheless, more organizations will induce additional work and overhead for administrators.
Automation around GitHub structure for example, through the GitHub Terraform Provider, can help to reduce that administrative burden somewhat. Of course, it requires someone to create and maintain those automations. And even the most sophisticated automation probably won’t scale indefinitely, so a good strategy around when and why you will create new organizations or work with existing ones is still wise from the outset. You may also want to take advantage of GitHub Apps, which provide an increased rate limit for GitHub Enterprise customers to help support scalability and can be installed at the organization level to help support even multi-org setups.
In addition to the technical considerations, don’t underestimate the impact that bridging multiple organizations can have on your developers’ feeling of belonging. Being a member of one large group with frequent interactions is distinct from being a member of several smaller groups.
In summary, the impact of fewer organizations can vary significantly, depending on the technical capabilities, cultural readiness for collaboration, and size of your enterprise. However, the generalization that it leads to less siloing is always true to some extent.
3. It’s easier to scale out than to scale in
Adding new organizations is generally easier than removing or merging existing ones.
Merging organizations requires finding a common ground in terms of policies, settings, and ownerships; this may result in developers losing some freedom they were accustomed to. Conversely, creating a new organization provides developers with all the freedom to structure it according to their requirements.
Therefore, having fewer organizations from the start increases your flexibility in reacting to changing requirements in the future.
4. Organize based on product ownership and shared responsibilities
Building organizations around existing micro-structures like business units or even teams is rarely a recipe for success. It can codify the silos that you are trying to overcome and make it difficult to accommodate structural business changes (and we know those can happen frequently with little prior notice.) Instead, focus on what needs to be managed within products (for example, source code, release cycles, communication, and access control) and determine where there is overlap. Grouping organizations based on shared responsibilities can help streamline operations, address any gaps or similarities in workflows, and provide a holistic view on the structure of your development process.
As an example, consider two teams in a microservice application: one is the producer of a service and the other is the consumer. Although two separate teams and potentially in different business units, both teams share ownership and the responsibility to find the right architecture for the service communication. As a result, the teams must collaborate if the entire project is to be successful. So, in terms of GitHub, if you put the two teams into different organizations to accommodate for the existing structure, there would be no easy way to encourage planning and communication between the two teams (we will talk more about this in the Impact section).

5. Establish clear rules for organization creation
Define clear rules for qualifying the creation of an additional organization. This approach will not only help identify what’s important to your enterprise, it will also make it easy for anyone in your enterprise to understand the boundaries and purpose of each organization.
Possible rules could be:
- An organization can only be created for strictly separated legal entities or subsidiaries where developers are not allowed to discover each other or each other’s work
- Organizations are used as a grouping of related projects and services of decoupled business sections (for example, in a bank this could be retail, wholesale and wealth management) or fundamental, reusable core services (landing zone, corporate identity UI framework, internal billing system)
- New organizations can only be created for specific use-cases that do not directly drive business value and will never overlap in function. For example, an organization for a recruiting-hackathon or an organization to fork internal open source repositories so they can be vetted and scanned by the security team
Feature Impact
Moving beyond high level overviews and guidelines, let’s discuss how you can leverage organization-related features to impact the level of control and innersourcing.
Permissions and visibility
Each organization has its own pool of users and/or owners. Aside from internal repositories and internal packages, only users from within the organization are able to collaborate on non-public resources in an organization.
We recommend making strategic use of internal repositories and packages whenever possible.
Internal repositories and packages on GitHub are visible to every full member of the enterprise by default (outside collaborators are not full members), even if they do not belong to the organization the repository or package belongs to. Members can interact with an internal repository in a read-only manner by opening and commenting on issues and, depending on enterprise policy, fork it into their own organization or private space, allowing for open source-style contributions through pull requests. In this way, internal repositories help enable an innersource, “open by default” approach without having to apply piecemeal policies for every repository in every organization. This can also alleviate some of the administrative overhead of managing repository visibility and discoverability across multiple organizations. To take it a step further, you can make internal the default visibility for new repositories in your organizations, though you may still want to allow other visibilities where the need arises.
A few helpful tips and insights when it comes to permissions and visibility:
- Standardize team structure. Users can only tag teams and other users from within the same organization. Therefore, use consistent namings and duplication where it makes sense to enable quick communications and collaboration across team-boundaries and multiple organizations
- Be mindful of non-admin user visibility. Any member of the organization will be able to see all members of an organization. If you have to shield users from each other, multiple organizations is the only way. Additionally, enterprise members will always be able to see the names and existence of all organizations within the enterprise
- Be mindful of administrator permissions. Resources can be hidden within an organization, but access to an organization can be granted both by enterprise and organization administrators. If you truly need to shield repositories, projects, or packages from each other, having separate organizations is the safer way as only enterprise owners can control organization creation.
- Use automation. You will have to manage permission settings for every organization individually, and the maintenance burden grows with the number of organizations.
Search and discoverability
Having the ability to view and interact with resources is one thing. How easy it is to locate them is another matter altogether. The discoverability of resources is a critical aspect of innersourcing.
There are several ways to search and discover resources in GitHub, the most common being:
- From the organization view, search on any type of resource (repositories, teams, packages, users, projects) by its name.
- Use the newly improved general search. The search can be scoped to a single organization, using org:<org-name>, but not to an enterprise.
- Customize your organization profile page with pinned repositories and organization profile READMEs.
Increasing the number of organizations can make it easier to find resources if you are aware of their location, but it can become considerably more difficult if you are not. As the number of organizations grows, the “where to look” aspect may become more and more ambiguous. Establishing efficient innersourcing practices is often dependent on the ability to easily locate resources across team boundaries.
Consider the case of user-tagging as an example. If your developers can easily @-tag a team from anywhere or search for its existence within an organization, they are much more likely to reach out to the team. In contrast, if your developers had to leave their current context, search for the team, and open an issue in one of their repositories to contact them, it would be more cumbersome.
Apps, Webhooks and APIs
GitHub offers a variety of integration options through GitHub Apps, webhooks, and APIs, which can be leveraged to enforce good development practices, such as branch protection rule sets, workflows, and more. It is important to note that most of these integrations are scoped to organizations, so they cannot be installed at the enterprise level.
Thus, installing and configuring GitHub Apps and webhooks for each organization individually can increase the administrative burden. App installations using the GitHub REST API are limited to the repository level, and many GitHub Apps available in the marketplace only allow connection to a single external system (for example, the Azure Boards integration), so you may run into limitations when configuring a single-tenant tool across a multi-organization GitHub setup.
As the number of organizations in your enterprise increases, the complexity of implementing enterprise-wide automation using the GitHub API also grows; this is due to authorization scope. For example, listing repositories can only be done at the organization level. Therefore, if you want to list all repositories in your enterprise, you must first list all organizations and then loop through each one, which requires a broader range of permissions.
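The repository-listing example above ends up looking something like this with the GitHub CLI. This is a sketch; it assumes a token that is authorized for every organization in the enterprise:

```shell
# Enumerate every repository across all organizations the
# authenticated user belongs to; --paginate follows API pagination.
for org in $(gh api /user/orgs --paginate --jq '.[].login'); do
  gh api "/orgs/${org}/repos" --paginate --jq '.[].full_name'
done
```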
More organizations can provide greater flexibility, allowing for individualized integrations and fine-tuned permissions. However, this should be balanced against the increased administrative overhead for enterprise-wide automation.
Impact on other features
There are several other features that can be impacted by your organization structure to take into consideration:
- Teams roll up to the organization level only, not higher.
- Secrets and action variables can be configured at the organization level, but not higher.
- Every organization has its own unique URL (for example, https://github.com/octoinc-org) under which all other resources are grouped (for example, https://github.com/octoinc-org/my-repository). There is no such grouping for an enterprise, meaning the enterprise URL (for example, https://github.com/enterprises/octoinc-enterprise) is mostly used for administrative purposes.
- An organization will inherit policies and settings enforced on the enterprise. Enterprise-level policies do not allow exceptions; they apply globally across the enterprise. Whatever is not enforced as an enterprise-level policy can be set individually per organization.
Long story short
That was a lot to digest, so let’s try to sum things up.
In general, it’s recommended to minimize the number of organizations in your enterprise. Each additional organization can increase administrative overhead and potentially make collaboration more challenging. On the other hand, more organizations provide greater flexibility on an individual level and may be necessary to protect resources in order to comply with regulatory or other restrictions. Ultimately, the key is to find a balance between the two using DevSecOps and innersourcing principles.
DevSecOps and innersourcing are proven methods for improving developer productivity by fostering collaboration. However, in an enterprise, restrictions and limitations exist that make it a challenge to foster this culture. In GitHub, organizations are the biggest lever for achieving both: improving collaboration and culture while staying compliant. The organization structure affects many GitHub features, how they can be used (or how easy it is to use them), and thus how easy it is for developers to work together.
Where to go from here?
A good starting point is to ask yourself the following questions:
- What is your target culture? How do you envision the future of development within your enterprise?
- What limitations exist in your enterprise? Which ones can you get around? Which ones are unavoidable?
- Which GitHub features and their impacts are most important to you in your enterprise?
- Who will be the people most impacted by those decisions? Who do you need to convince?
We also highly recommend reading “The Book on GitHub Enterprise Cloud Adoption”—specifically, the chapter about organization structure as it will dive deeper into these archetypes.

For example, the ‘Red-Green’ Architecture above is a very elegant way of limiting the impact of restrictions to specific organizations, while having a main-innersourcing-organization where collaboration can thrive.
To learn more
Read more about best practices for structuring organizations in your enterprise in our documentation.
The post Best practices for organizations and teams using GitHub Enterprise Cloud appeared first on The GitHub Blog.
Moving from a product to a service mindset
But this change goes beyond technology implementation and touches how we work as software engineering teams. In some cases, it has caused us to adopt new internal processes, work more iteratively with our customers and, ultimately, think in a different mindset. As an industry, we’re transitioning away from shipping products to shipping services.
In this blog, we explore how this impacts software engineering teams and end users and how you can set yourself up for success on your service-led journey.
The role of a developer has evolved
The role of a developer has drastically changed, whether you compare it over the last 10, 20, or 30 years or beyond: from mainframes, graphical user interfaces, and the internet to cloud computing and, most recently, the disruption of AI and large language models (LLMs). Developers have been evolving and refining their development practices throughout these shifts to keep up with the latest technologies and paradigms.
Diving into some of these evolutions further:
- Working with multiple programming languages and frameworks. With cloud computing and microservices, individual engineering teams are empowered to choose the most appropriate programming language, framework, and platform for their needs.
- Infrastructure as code (IaC). The cloud provides API access to manage and configure the underlying infrastructure that our teams build upon. Scaling infrastructure to deal with unexpected spikes or spinning up a deployment stamp in another region has never been easier.
- Increased consumer expectations. Imagine if your favorite shopping app was unavailable. What would you do? You’d probably complete your purchase elsewhere, and may even switch loyalty. Those expectations translate into enhanced engineering requirements, such as Service Level Agreements (SLAs), Recovery Time/Point Objectives (RTOs/RPOs), fast page loads and more, which add additional engineering complexity and pressure for our teams.
Throughout these evolutions, there is a subtle underlying shift that I haven’t explicitly called out: the lines between development, testing, operations, and similar roles begin to blur. This isn’t surprising, as removing silos and focusing on delivering value for your customers is one of the critical premises of a DevOps cultural transformation.
This means that many of our engineers are likely cross-skilling, learning outside their traditional areas of expertise. Developers are learning more about infrastructure as code and networking concepts. Operations teams are learning more about the applications and design patterns being adopted. This is before we even consider platforms like Kubernetes, where the responsibilities are further intertwined.
This is a far cry from several years ago, when software engineering teams would ship approved application versions to users on discs (take your pick of CD or floppy disk). Product teams would prioritize the most critical features, while customer feature requests might have to wait for a later product release, months or even years away. The cadence of releases was also far slower than it is now.
One example is Microsoft’s transition from Office as a boxed product to Office 365. This shift also brought subscription-based payments and customer-focused models, where updates ship frequently and quality is essential to continually delighting customers and delivering value in the platform.
But this change doesn’t happen overnight and is a journey. The DevOps transformation you’ve likely already embarked on is a vital part of that. Let’s identify some of the common challenges and strategies to overcome them.
Navigating the change: challenges and strategies
We’ve covered some of those strategies and challenges in our introduction, so let’s examine them more explicitly.
Shipping more frequently, safely
One of the promises of DevOps is to break down barriers across teams. This leads to a mindset where the engineering team thinks about the application holistically rather than application versus infrastructure. This subtle change already shifts the perspective away from product development. The team now takes end-to-end accountability for the build, testing and release to customers, shifting the focus to the customer’s experience rather than the internal friction and product development viewpoint.
Automation is commonly adopted to enable this change, particularly in Continuous Integration (CI) and Continuous Delivery (CD).
CI allows you to regularly check that the code compiles successfully and that the project passes its tests.
Have you started writing code, only to find that it does not compile? Or, perhaps you’ve written a new method, but found other parts of the codebase are not passing their tests?
Building a robust CI process helps you gain confidence in the quality of the software that you’re creating. CD then allows you to automate the release of that application to your target environments, and progress those towards production.
Have you ever accidentally executed a script against a production environment intended for dev? How about manually publishing a new release, to find that you have incorrectly configured the environment? Or, attempting to diagnose why the application will not deploy to your environment after following your internally documented steps?
CD allows you to remove the overhead that can slow down a release of the software to your users. Most importantly, the potential for error-prone human intervention is reduced, and a consistent release process enables more releases.
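As a concrete sketch, a minimal GitHub Actions workflow implementing this CI step might look like the following (the `make` commands are placeholders for whatever build and test tooling your project actually uses):

```yaml
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Compile the project; swap in your build tool of choice
      - name: Build
        run: make build
      # Run the test suite so regressions are caught on every change
      - name: Test
        run: make test
```

Because this runs on both pushes and pull requests, a broken build surfaces immediately, before anyone spends time on a manual release.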
As organizations gain confidence in their processes, they may begin shipping directly to production. For example, an engineering team might ship versions of their product to the engineering organization and use that in their day-to-day work. Critically, if the application has flaws, it will impact the engineering team’s productivity. Therefore, standards need to be followed, and a level of quality needs to be baked into the process.
Bring quality into everything that you do
Quality can mean many different things, and entirely depends on where you are in your transformation:
- Does the code compile?
- Does the code successfully pass your tests?
- Does the code pass a certain level of code coverage?
But we can consider a wide variety of checks:
- Is the user interface accessible for all users (for example, screen readers, high contrast, and similar)?
- Does the application scale to our expectations?
- Does the application respond in an expected timeframe (for example, less than a certain number of milliseconds)?
- Is the application as reliable as we expect it to be? What happens if a core service fails?
- Are there vulnerable code paths in the code that we have written?
- Are we relying upon vulnerable dependencies in our project?
- Have we accidentally leaked secrets, which risks continuity of our ongoing services?
This is where several of the puzzle pieces start coming together. Cloud and IaC allow us to create new environments on demand as part of our automated processes. With that, we can begin bringing scalability tests, performance tests, chaos tests, accessibility tests and more to a running instance of our service, configured in a like-for-like environment to production.
That way, we’re no longer hypothesizing whether the application can scale. Instead, we’re able to deploy a version of the application, and simulate the expected scenarios.
However, these checks should not first occur on builds/releases from our production codebase. Ideally, we should be bringing many of these checks earlier into our development flow, checking for quality in a pull request before we merge to the main branch.
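To make those checks run before merge rather than after, hang them off the `pull_request` trigger. A hedged sketch follows; the job names and commands are illustrative, not prescriptive:

```yaml
name: pr-quality

on:
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Style and static analysis checks
      - run: make lint
  test-with-coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fail the job if coverage drops below your chosen threshold
      - run: make test-coverage
```

Marking these jobs as required status checks in a branch protection rule means a pull request cannot merge until both pass.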
Customer-focused development, not product-focused
So far, we’ve been on a journey. Building automation and quality into everything we do so we can frequently deliver updated product versions to our customers.
But as we ship more frequently and move towards a cloud deployment model, our customers’ expectations shift. Depending on the service, downtime may be unacceptable. Similarly, customers may expect transactions to be completed within a given time or that the application meets specific accessibility standards.
The most important part here is the customer, and understanding what is most important to them. From a business perspective, this is important for several reasons:
- You must be perceived as bringing value to a prospective customer to attract new users. Therefore, if your application is hard to use, slow, and does not meet their needs, they will be unlikely to choose you over a competitor.
- In a service model, organizations typically pay for access through subscriptions. These subscriptions are recurring payments (potentially monthly or annual), meaning it’s vital to continually demonstrate value. If you don’t, you risk customer churn to competitors.
Rather than assuming we know what customers want, we must actually find out: by building feedback loops into our platforms, building communities with our top users, and, ultimately, using their feedback to prioritize our backlog.
Continuous learning and improvement
As I’ve outlined, running these services can come with high expectations. Depending on your size, scale, or even the types of customers, you may have a high set of availability targets to reach.
When things go wrong, what matters is demonstrating continuous learning: identifying areas for improvement and remediating them for the future. When something goes wrong, conduct a blameless retrospective. The idea is not to find someone to blame, but to discover the root cause of the problem and determine how automation, refined processes, additional tests, and more can be adopted to prevent similar issues from happening in the future. Treat each failure as an opportunity to learn and improve.
Much like the functional requirements I’ve alluded to throughout this post (specific platform features), non-functional requirements should also be considered on our backlog. Communicating transparently and with your customers in mind is vital. What have you identified? What are you going to do about it? And how are you going to improve in the future?
From strategies to action
We’ve talked through how the industry has transformed, and the strategies that can overcome the potential challenges which may arise. But how do we put this into practice?
Implement CI/CD
Consider your current development practices:
- Have you prevented error-prone manual tasks from taking place? Manual tasks, whether typing an incorrect configuration value or dragging and dropping deployment files, can quickly go wrong.
- Are there enough quality checks in place today? You are aiming to provide an exceptional experience to your customers. Are you including checks and balances within the development lifecycle to deliver on that? As challenges and live site issues arise, are you adding those to the backlog to prevent them from recurring?
- Is your production codebase always in a shippable state? Are you preventing your engineering teams from directly committing to the production codebase? In other words, have you implemented branch protection rules or repository rules to ensure changes are made through pull requests? This will allow you to run some of these checks earlier in your development process, preventing issues from ever reaching the production codebase.
If you’ve answered ‘yes’ to all three questions, then you’re progressing along the right lines! Investing in automation will help bring consistency and rigor to your deployment lifecycle.
‘Everything’ as code
‘Code’ has historically referred to application code. But now, we can create our infrastructure and CI/CD workflows as code. This evolution has a subtle set of benefits which are worth remembering.
Our code is stored in version control, which means we’re able to reuse the same practices that we’re used to with our application code:
- Changes can be viewed over time, allowing us to understand who made a change and why.
- You can use pull requests to ensure that each change is reviewed by at least one other human. After all, two heads are better than one, as they say!
- Changes to workflow code appear next to app-code changes in the same commit or pull request, allowing us to understand and verify their relationship.
- Automation can be triggered based on changes to a branch of code, and can surface results in a commonly-visible area (the pull request). Therefore, whether from an operations background, a traditional developer, or a data scientist, we can benefit from automating quality checks by storing our code in version control.
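Taking the idea further, even the branch protection rules discussed earlier can themselves be expressed as code. Here is a sketch using the community Terraform provider for GitHub; the resource and attribute names reflect that provider, but verify them against its documentation before relying on this:

```hcl
resource "github_branch_protection" "main" {
  repository_id = github_repository.app.node_id
  pattern       = "main"

  # Every change arrives via a reviewed pull request
  required_pull_request_reviews {
    required_approving_review_count = 1
  }

  # These CI jobs must pass before a merge is allowed
  required_status_checks {
    strict   = true
    contexts = ["build-and-test"]
  }
}
```

Because the rules live in version control, a change to your governance posture is itself reviewed, audited, and reversible.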
Check out this blog if you’re interested in learning more about GitOps, and how you can operate your infrastructure using similar techniques to application code.
Promote a culture of innovation, collaboration, and learning
Many enterprises operate in silos, but building a culture of innovation, collaboration, and learning is possible. The open source community builds openly, asynchronously, globally, and at scale, offering many lessons that we can adopt for our internal development.
Check out how you can reuse these practices for your internal software development.
This is the first step of a more extensive cultural transformation. Once you’ve removed those silos, consider how you can begin collaborating across your organization. What are the most critical customer priorities, and how are you collectively working towards solving those? Are you capturing and sharing feedback so that it can be delivered to the right teams across the organization?
And finally, challenges happen. While no one enjoys live site incidents, they’re a natural part of the work we do. The vital part is being able to learn from those situations:
- What went wrong?
- How can you prevent it in the future?
- Have you communicated to your customers in a transparent and customer-focused way?
To put this into context, here are some of the ways that GitHub communicates openly:
- We publish a public roadmap, so you have an idea of our upcoming priorities.
- The community discussions area provides an opportunity for you to give us feedback, and to discuss your proven practices, challenges, and opportunities with other community members.
- The GitHub Blog provides updates on significant product announcements, thought leadership, and policy updates relating to the industry.
- The GitHub Changelog is a more granular set of updates on our recent releases, including incremental feature updates.
- Our GitHub Status page provides an up-to-date view of the platform’s operational health. If there are any ongoing issues, you’ll be able to find out about them here.
- GitHub publishes monthly availability reports, including a summary of each incident, the cause, and steps taken to improve for the future.
Conclusion
Through this blog, we’ve explored the journey from a product to a service mindset. Much of this has been driven by new industry opportunities and trends (such as DevOps transformation, cloud computing and IaC). Even so, there is much for us to consider along this path:
- Implement CI/CD to help you ship more frequently and safely by bringing quality into everything you do.
- Adopt an ‘everything as code’ mentality to benefit from the above automation practices.
- Promote a culture of innovation, collaboration and learning by pivoting your focus to the customer, accepting that things may go wrong, and adopting a culture of continuous improvement.
The post Applying GitOps principles to your operations appeared first on The GitHub Blog.
Fundamentally, these processes depend on some source of truth. In GitHub, that source is a Git repository hosted on GitHub. Git repositories are great for this, as they allow us to store our code in a way that versions can be changed and tracked over time. As a result, we are able to view a timeline of edits to files and their contents.
Tip: If you’re looking for an overview on Git repositories and how they work, check out our Git guides and explore some of our interactive courses on GitHub Skills.
Building and releasing software is a common use case of these workflows, but it is not the only one. There are operational considerations behind your code; you rely heavily on the reliability and consistency of the environments that host and underpin your software. Ensuring that these systems are online, managed, and monitored is vital. This includes the day-to-day tasks of managing desired state configuration while avoiding configuration drift, managing access permissions and deploying and managing new environments. These operational tasks have traditionally been manual and error-prone, potentially spanning several different tools and platforms to achieve an outcome.
To ensure smooth service management throughout these changes, teams have typically relied upon frameworks such as Information Technology Infrastructure Library (ITIL) and other change management practices (such as change advisory boards) to minimize risk and disruption to the business.
Is there another way? Could we use our Git repository as the source of truth for these operational tasks, and somehow reconcile changes with our real-world view?
What is GitOps?
Let’s first establish a common definition of GitOps. The Linux Foundation’s OpenGitOps landing page describes four key principles:
- Declarative. A system managed by GitOps must have its desired state expressed declaratively.
- Versioned and Immutable. Desired state is stored in a way that enforces immutability, versioning, and retains a complete version history.
- Pulled Automatically. Software agents automatically pull the desired state declarations from the source.
- Continuously Reconciled. Software agents continuously observe actual system state and attempt to apply the desired state.
In other words, GitOps builds upon the cloud native principles and ideas that we have been introduced to through platforms like Kubernetes. The ‘source of truth’ is stored and versioned in our Git repositories. That source is typically declarative, meaning that the state is described in a desired and immutable manner: we describe what we want, rather than how it should happen (otherwise known as imperative).
The idea of storing a desired state in Git is powerful. As a result, we’re able to get a full history of changes over time on the desired state of our environment, system or configuration. This gives us auditability into the changes that were made, who made them, and why.
Building on this concept of auditability, we can adopt features like GitHub branch protection rules or repository rules to put guardrails in place, ensuring that changes to the desired state pass a set of quality checks (such as linting, smoke tests) and perhaps manual reviews using the four eyes principle. That way, if the desired state needs updating (for example, deploying additional machines, reconfiguring an application, adjusting permissions, etc.), we can use the same continuous integration and continuous delivery (CI/CD) principles that we are familiar with when building and deploying software, and bring those changes in as a pull request from a feature branch.
But what if the current state (the live configuration of the environment) deviates from the desired state (in our Git repository)? Perhaps someone has manually tweaked a configuration setting, deployed a new machine or adjusted permissions manually.
An automated process (typically an agent) analyzes the current state and the desired state on an ongoing basis. It then makes the necessary changes to bring the current world to an accurate representation of the desired state. In other words, the agent is there to address any configuration drift that occurs. You could consider that as a ‘pull’ model, where the state of the existing environment is continuously reconciled against the desired state. As updates are made to the desired state, they are automatically detected by the agent, and it makes the needed changes to the environment.
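This compare-and-correct cycle can be illustrated with a deliberately simplified sketch. Real agents such as Flux or Argo CD are far more involved; the function below just shows the core idea of overwriting drift with the declared state:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of a GitOps-style reconciliation loop.

    Compares the desired state (from the Git repository) with the
    actual state (observed from the environment) and returns the
    corrected state. Any manual drift in `actual` is overwritten.
    """
    corrected = dict(actual)
    # Create or update anything declared in the desired state
    for key, value in desired.items():
        if corrected.get(key) != value:
            corrected[key] = value
    # Remove anything not declared in the desired state
    for key in list(corrected):
        if key not in desired:
            del corrected[key]
    return corrected

# Desired state: two replicas. Drift: someone manually scaled to five
# and added an undeclared debug flag.
desired = {"replicas": 2, "image": "app:1.4"}
actual = {"replicas": 5, "image": "app:1.4", "debug": True}
print(reconcile(desired, actual))  # {'replicas': 2, 'image': 'app:1.4'}
```

An agent simply runs this kind of pass continuously, so the environment converges back to the declared state shortly after any drift occurs.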

Consider a platform like Kubernetes or technologies like Flux or Argo CD. They rely upon an agent continuously evaluating the current state of the environment against the intended desired state.
However, there are many platforms which do not have an agent capability to enable continuous reconciliation. In these cases, I have seen teams opt to use CI/CD workflows to push the desired state to their environment. Compared to an agent continuously reconciling the changes, updates to the current state would be slower (that is, gated on the triggers of our CI/CD workflow). Worse still, discrepancies between the current state and the desired state could emerge, as drift detection would not be available out of the box. This would also technically break the third principle of the OpenGitOps definition. In short, if an agent is not available for your scenario, then using CI/CD tools such as GitHub Actions to deliver changes to your environment may be an option, but bear in mind the tradeoffs you would be making.
GitOps principles are not just limited to applications, and could be considered from an operational perspective. Tools like Ansible or Terraform could be used to describe the configuration of your infrastructure. Then, platforms, such as Ansible Automation Platform or Terraform Cloud, could help reconcile updated configuration changes to the estate.
GitHub’s entitlements project
Let’s look at another example. Last year, GitHub open sourced its identity and access management solution, Entitlements. It’s something that we use day-in and day-out to manage access to applications, distribution lists, organizational structure, and more, here at GitHub.
Storing these permission mappings in source control provides a clear source of truth which is reliable and auditable. We can easily navigate through the historical changes in our repository to determine why changes were made and by whom. Assuming that pull requests have been used to update the desired state, additional context on the relevant discussions, approvals and checks may be traceable as well.
This once again drives the importance of good governance practices, not just in our software projects, but also in repositories where we’re adopting GitOps principles to drive infrastructure configuration, or our identity and access management permissions. In particular, branch protection rules or repository rules could:
- Ensure that the minimum number of required reviewers have approved the changes.
- Use the CODEOWNERS file to automatically assign required reviewers to the pull request based on the files which have been changed, so relevant approvers are kept in the loop.
- Lint contributions in pull requests to ensure they still meet the expected standards.
- Execute a dry-run of the configuration permissions to identify the impact of a change, before accepting a pull request.
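For instance, a CODEOWNERS file in such a repository might route changes to the relevant approvers automatically. The paths and team names below are purely illustrative:

```
# Changes to access mappings for production systems
# require a review from the security team
/config/production/  @example-org/security-team

# Team membership files are reviewed by that team's leads
/teams/platform/     @example-org/platform-leads
```

Combined with a branch protection rule requiring code owner review, no permission change lands without sign-off from the team that owns it.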
This also takes away the risk of error-prone manual changes, allowing automation to scale and accelerate the operational needs of the business. Our operations teams are still fully involved in the process. Their role may evolve into managing the workflows to enable these quality checks, reviewing proposed changes to permissions, and approving or rejecting based on the business need and justification.
Recommended practices
As with many projects and patterns, there are several recommended practices that we can consider adopting:
- As practiced by open source and innersource communities, a clear and simple README in your repository is important to showcase what the project is, how folks can contribute, and your expectations around contributions.
- Quality is something I’ve kept highlighting throughout the post. It is an important part of any workflow to ensure you’re shipping valuable experiences to your customers. Consider this as an evolution to your change management practices, and how you can integrate quality directly into your workflows:
- Use branch protection rules or repository rules to bring quality into your contribution process, before code ever reaches your production branch:
- Consider requiring at least one reviewer, so direct merges to your production branch are disallowed.
- Consider turning off bypassing, if you prefer that all changes must go through the process and adhere to all checks.
- Consider requiring status checks (specific GitHub Actions workflows) to ensure the needed automated checks are core criteria for allowing a merge.
- Use GitHub Actions workflows on a pull request trigger to automate quality checks before they reach your production code:
- Consider linting the changes, to ensure the code follows a set of standard practices.
- Consider a dry-run of your configuration changes, so that you know what is changing before merging to your production environment. This could be a terraform plan, a bicep deployment what-if, or similar with the tools in your own workflow.
- If you are particularly mature in your processes, you could consider a smoke test and deploy the changes in some temporary environment (not production), to assess that the changes are as you expect.
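The dry-run idea above can be wired into a pull request workflow. Here is a hedged sketch for a Terraform-based project; the directory layout is an assumption, so adapt the paths and commands to your own toolchain:

```yaml
name: plan-on-pr

on:
  pull_request:
    paths:
      - 'infra/**'

jobs:
  plan:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infra
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      # Dry run: show what would change, without applying anything
      - run: terraform plan -input=false
```

Making this job a required status check means reviewers see the proposed infrastructure change alongside the code change, before anything touches production.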
Wrapping up
This has been a whistle-stop tour of GitOps, with a special focus on our entitlements project, which was open sourced last year! It has also highlighted the importance of quality checks outside of your software build and release pipelines, and how they can integrate into GitOps workflows as well.
The post 3 benefits of migrating and consolidating your source code appeared first on The GitHub Blog.
But there are other benefits of consolidating and simplifying your toolkit that may be surprising–especially when migrating your source code and collaboration history to GitHub.
Today, we’ll explore three benefits that will support enterprises in a business climate where everyone is being asked to do more with less, as well as some resources to help get started on the migration journey.
1. Enable developer self-service culture
Some of the benefits enterprises can achieve with DevOps are improved productivity, security, and collaboration. Silos should be broken down and traditionally separated teams should be working in a cohesive and cloud native way.
Another benefit that DevOps enables, and a key part of the platform engineering approach, is the ability for development teams to self-serve workflows, processes, and controls which have traditionally been either manual or tightly coupled with other teams. A great example of this was covered in a previous blog where we described how to build consistent and shared IaC workflows. IaC workflows can be created by operations teams, if your enterprise separation-of-duties governance policies require this, but self-serviced when needed by development teams.
But this type of consistent, managed, and governable self-service culture would not be possible if you have multiple source code management tools in your enterprise. If development teams have to spend time figuring out which tool has the source of truth for the workflow they need to execute, the benefits of DevOps and platform engineering quickly deteriorate.
There is no better place to migrate the core of your self-service culture to than GitHub–which is the home to 100 million developers and counting. Your source code management tool should be an enabler for developer productivity and happiness or else they will be reluctant to use it. And if they don’t use it, you won’t have a self-service culture within your enterprise.
2. Save time and money during audits
The latest Forrester report on the economic impact of GitHub Enterprise Cloud and GitHub Advanced Security determined a 75% improvement in time spent managing tools and code infrastructure. But one of the potentially surprising benefits relates to implementing DevOps and cloud native processes that help both developers and auditors save time and money.
If your tech stack includes multiple source code tools, and other development tools which may not be integrated or have overlapping capabilities, then each time your security, compliance, and audit teams need to review the source of truth for your delivery artifacts, you will need to gather artifacts and set up walkthroughs for each of the tools. This can lead to days or even weeks of lost time and money simply preparing and executing audits–taking your delivery teams away from creating business value.
Working with GitHub customers, Forrester identified and quantified key benefits of investing in GitHub Enterprise Cloud and GitHub Advanced Security. The corresponding GitHub Ent ROI Estimate Calculator includes factors for time saving on IT Audit preparations related to the number of non-development security or audit staff involved in software development. This itself can lead to hundreds of thousands if not millions of dollars of time savings.
What is not factored into the calculator is the potential time savings for development teams who have a single source of truth for their code and collaboration history. A simplified and centrally auditable tech stack with a single developer-friendly core source code management platform will enable consistent productivity even during traditionally time-consuming audit and compliance reviews–for both developers and non-developers.
3. Keep up with innovation
If you are using another source code platform besides GitHub, or if GitHub is one of several tools that are providing the overlapping functionality, some of your teams may be missing out on the amazing innovations that have been happening lately.
Generative AI is enabling some amazing capabilities and GitHub is at the forefront with our AI pair-programmer, GitHub Copilot. The improvements to developer productivity are truly amazing and continue to improve.

GitHub continues to innovate with the news about GitHub Copilot X, which is not only adopting OpenAI’s new GPT-4 model, but introducing chat and voice for GitHub Copilot, and bringing GitHub Copilot to pull requests, the command line, and docs to answer questions on your projects.
Innovations like this need to be rolled out in a controlled and governable manner within many enterprises. But if your tech stack is overly complex and you have several source code management tools, the rollout may take a long time or may stall while security and compliance reviews take place.
However, if your development core is GitHub, security and compliance reviews can happen once, on a centrally managed platform that is well understood and secure. And you’ll have a front-row seat for all of the amazing new innovations that GitHub will be releasing down the road.
Get started today
If you are planning on migrating your source code and collaboration history to GitHub and have questions, thankfully, many other enterprises have done this already with great success and there are resources to help you through the process.
Visit our GitHub Enterprise Importer documentation for details on supported migration paths, guides for different migration sources, and more.
If you want to learn more about how GitHub can benefit your business, while increasing developer velocity and collaboration, see how GitHub Enterprise can help.
The post Building organization-wide governance and re-use for CI/CD and automation with GitHub Actions appeared first on The GitHub Blog.
A strong focus on automation can reap benefits, particularly in your development workflow and DevOps lifecycle. We know that continuous integration (CI) can help accelerate development and enhance overall quality, while continuous delivery and continuous deployment (CD) can help reduce time-to-production and test for quality in live environments (for example, API fuzzing, infrastructure-as-code [IaC] validation, performance and load testing, and much more).
While many companies are continuing along their DevOps journey, there isn’t one clear path to adopting these principles. In fact, there are usually multiple application teams creating similar build and deployment pipelines in parallel. These teams operate at different maturity levels (that is, how rigorous their quality gates are: from simple builds and linting through to high levels of test coverage and automated tests in a live environment). In some cases, a team might be managing a separate tool, such as Jira, TeamCity, or similar. In others, teams may be standardized on an underlying platform but choose alternate approaches, such as using a command line interface (CLI) in Bash or PowerShell, or some in-platform ‘helper’ approach (such as GitHub Actions, Azure DevOps Tasks, or similar).
This potential disparity brings several challenges in a business setting:
- The multitude of platforms adopted typically causes significant operational overhead and challenges in consolidating your toolkit.
- Teams are likely reinventing the wheel. By analyzing the languages, frameworks, target platforms and deployment approaches used, you’ll see patterns emerge.
- Quality is likely not well-governed across an organization. This can introduce risk when pushing to production. How can you be certain whether a component has been rigorously performance tested, or pushed to production after only checking for a successful build?
- In highly regulated environments, you may need to demonstrate compliance against certain checks. With a decentralized CI/CD model, attesting compliance is challenging. As a result, it requires a significant amount of work to ensure compliance across application teams.
So, with the groundwork laid out, what are potential solutions? Fortunately, GitHub Actions has a few features that may be able to help.
How can GitHub help?
GitHub Actions is GitHub’s answer to automation and CI/CD, with the ability to trigger based on several GitHub events. Alongside a rich ecosystem of community and third-party actions, the platform provides a number of primitives to assist you in governing your workflows.
Let’s explore the platform features available to help govern CI/CD at scale across your company.
Branch protection rules and required status checks
With GitHub Actions, your workflows are stored as source in your repository alongside your code. That means they can benefit from the same governance practices as your other code.
When writing code, you typically want to follow a consistent process to bring changes to your codebase. Ideally, there would be a set of quality gates that must be passed before the changes can be brought into production. This is where branch protection rules come in. They allow you to enforce standards, so that quality can be maintained.
Protection rules can be combined with status checks to ensure that your code is meeting a set of conditions.

These checks could include tasks like test execution, build verification or validating that no new security vulnerabilities have been brought into the project. Required status checks can be mandated on a repository by using branch protection rules, encouraging practices that lead to higher quality code being pushed to production.
Reusable workflows
Instead of repeating the same set of steps across multiple workflows, you can define them once in a reusable workflow. And, well, reuse them!
Think of them like a function in software, which is generic and reusable. Or, if you’re familiar with IaC, think of reusable workflows like a template, acting as a “cookie cutter” for different patterns of your workflow.
With that in mind, it’s typical to see a reusable workflow take several parameters and use those to determine the action (pun intended!) within the workflow. As an example, below is a reusable workflow, which:
- Takes a config-path as an input string (think of this as a parameter to a function).
- Passes envPat as a named secret value. GitHub Actions recognizes that this value should be obfuscated.
- Executes the workflow on a GitHub-hosted Ubuntu runner. This workflow runs the actions/labeler GitHub Action using the provided inputs and secrets.
on:
workflow_call:
inputs:
config-path:
required: true
type: string
secrets:
envPAT:
required: true
jobs:
reusable_workflow_job:
runs-on: ubuntu-latest
environment: production
steps:
- uses: actions/labeler@v4
with:
repo-token: ${{ secrets.envPAT }}
configuration-path: ${{ inputs.config-path }}
The above reusable workflow shows a workflow that could be scaled across a company for repeated use. An application team would pass in the relevant secrets and define their own configuration file (in a path of their choosing, as opposed to a predefined path), while executing the needed checks and balances.
Application teams can consume the reusable workflow in their own workflows, using a snippet similar to the example below:
jobs:
call-workflow-passing-data:
uses: octo-org/example-repo/.github/workflows/reusable-workflow.yml@main
with:
config-path: .github/labeler.yml
secrets:
envPAT: ${{ secrets.envPAT }}
This raises a question. When might you want to share workflows? First, you’ll want to consider how broadly to share them. Are some business-specific (for example, relating to the processes of a business unit), or are some company policies and should be reused throughout?
Division-wide
These workflows could be shaped by a business unit’s processes, a given business unit choosing a specific cloud provider, division-wide governance policies, or numerous other scenarios.
Consider creating a repository for the purpose of sharing workflows across the division. That way, application teams in the division can consume from the centrally maintained repository. If any workflows are sensitive, then you could consider creating a private repository, and only sharing as needed.
Company-wide
Some workflows may warrant sharing across the entire company, for example, companywide policies or practices. This makes sense for scenarios where a business wants to provide paved paths that include built-in guard rails.
Consider a business that has adopted common languages or similar target deployment platforms across teams. They may want to provide templatized workflows that include unit testing and linting as standard. Or, performance testing to some standard endpoints for a given cloud provider’s hosting platform.
In essence, the scope of commonality for these workflows exists at the company level. You could once again consider a repository for the purposes of sharing these reusable assets across the company.
Required workflows
Required workflows were recently released in public beta. They ensure that a specified workflow is executed in a pull request (appearing as a status check), and are configured at the GitHub organization scope. This is useful when you want to take a step further than empowerment, and mandate that specific steps are completed.
As an example, consider the rollout of a security scanning tool. To ensure that all teams are scanning for security issues in their projects before merging code to production, you could consider making this a required workflow.

However, it’s worth considering the tradeoff when adopting this approach. How opinionated and imposing do you want to be on application teams that are using GitHub Actions? Alternatively, how much do you want to empower those teams to choose the appropriate reusable workflows for their scenario?
This decision will depend on your team’s risk appetite and whether cultural norms would allow for standardization of practices across the company.
GitHub Actions Importer
We know that adopting CI/CD is not as simple as creating a new workflow. In many cases, you already have incumbent tooling (perhaps multiple) to complete your DevOps automation needs. Fortunately, GitHub has released the GitHub Actions Importer.
This tooling helps you to migrate pipelines from Azure DevOps, CircleCI, GitLab, Jenkins, and Travis CI. While not completely foolproof, the team have found that on average over 90% of tasks and constructs used in a workflow have been successfully converted. In case you missed it, the tooling became generally available last month (March)!
In real terms, this can help accelerate your adoption of GitHub Actions. In turn, you could then convert those workflows into reusable workflows, and share your commonly-used recommended patterns across the organization, benefitting your wider engineering community.
After all, that’s exactly what innersource is about: contributing to the success of others, and building on the work of others!
What guardrails can you put in place?
For some organizations and engineering leaders, the prospect of reusing CI/CD and automation practices across the company may seem daunting. However, there are several considerations to help mitigate risk while ensuring your teams can continue innovating and being successful.
Permissions and secrets
To deploy your application to an environment, such as Azure, AWS, GCP, or on-premises, you must grant some level of access to your CI/CD platform. This typically means providing passwords, or certificates and having robust operational procedures to manage those.
But what if you didn’t have to worry about secrets at all? What if you could deploy to a target deployment environment without using a secret? Fortunately, that is possible using GitHub Actions and OpenID Connect (OIDC). Check out my extensive blog post on the topic.
| Tip: with OIDC, you’re still logging on with a service principal on the target platform. This means, you need to consider the permissions which have been granted to that service principal.
Make sure to consider the principle of least privilege. Does it make sense for all teams to reuse the same service principal? (Probably not!) Or does it make more sense to monitor activity and access from service principals per application, per environment? This approach may now seem more appealing when used with OIDC, as you don’t have to worry about password rotations and the associated operational overhead. |
Take a moment and think about how you can combine this with the concepts we’ve explored so far. Your application teams can depend on a number of reusable workflows shared internally. Employees can submit a pull request to enhance those workflows, which would be reviewed by (and benefit) the wider community. Those reusable workflows may contain actions that leverage GitHub Action’s OIDC capabilities and remove the need for passwords.
In other words, you are starting to pull together a series of templates that bring together recommended practices from across the organization and reduce the toil by removing credentials and passwords as a requirement (and therefore the need for password rotations and similar). This is a win for application and operations teams.
Manage the permissions of your workflows
Each GitHub Action workflow run is executed with a set of permissions so that it can interact with GitHub Services. For example, pushing a new package to GitHub Packages. We recently updated the default permissions so that it is read-only by default. However, you can explicitly set these permissions in your workflow’s YAML definition.
| Tip: once again, make sure to use the principle of least privilege. Only grant the permissions that are truly required for your job, or overall workflow. |
Governance: allow/deny specific actions and reusable workflows
Organizations typically have rigorous component governance processes in place, helping them understand the dependencies they have adopted in the software they build. But how do you govern the use of GitHub Actions and reusable workflows?
At the enterprise level, you can use policies to restrict the use of GitHub Actions, specifying whether all actions and reusable workflows are allowed, only those within the enterprise, or a more controlled list.

When selecting the option “Allow enterprise, and select non-enterprise, actions and reusable workflows,” you can access further granularity. This includes only allowing actions created by GitHub, marketplace actions by verified creators, and allowing specified actions and reusable workflows. You can find more information about these governance options in GitHub Docs.
These same policies can be set at the organization level, or you can set GitHub Actions Permissions at the repository level. This allows you to empower decision-making at the level which makes most sense in your company.
Dependencies: keeping GitHub Actions up to date
Just like open source dependencies, it’s important to keep your GitHub Actions up to date. When using a GitHub Action, you can specify a version number, which ties to a Git commit ID, branch name, or version number associated with a Git tag.
If there is a newer version, Dependabot can generate a pull request to update your GitHub Action workflow and point to the most recent version.
Find out more about keeping your actions up to date with Dependabot.
Wrap-up
Automation in the developer lifecycle is critical to accelerating delivery, maintaining quality, and delivering value to your users. However, silos across businesses can prevent teams from collaborating effectively.
GitHub Enterprise and GitHub Actions can help bring teams together to share internal CI/CD best practices. We have also explored several opportunities to establish policies and governance at scale by building paved paths or guardrails using reusable workflows. This allows you to set teams up for success and empower them to do their best work, while adopting recommended procedures across the organization.
Want to learn more? Come and join us for a webinar on 7 strategies for end-to-end CI/CD governance on May 18th!
The post Building organization-wide governance and re-use for CI/CD and automation with GitHub Actions appeared first on The GitHub Blog.
]]>The post How to build a consistent workflow for development and operations teams appeared first on The GitHub Blog.
]]>HCL’s growth shows the importance of bringing together the worlds of infrastructure, operations, and developers. This was always the goal of DevOps. But in reality, these worlds remain siloed for many enterprises.
In this post we’ll look at the business and cultural influences that bring development and operations together, as well as security, governance, and networking teams. Then, we’ll explore how GitHub and HashiCorp can enable consistent workflows and guardrails throughout the entire CI/CD pipeline.
The traditional world of operations (Ops)
Armon Dadgar, co-founder of HashiCorp, uses the analogy of a tree to explain the traditional world of Ops. The trunk includes all of the shared and consistent services you need in an enterprise to get stuff done. Think of things like security requirements, Active Directory, and networking configurations. A branch represents the different lines of business within an enterprise, providing services and products internally or externally. The leaves represent the different environments and technologies where your software or services are deployed: cloud, on-premises, and container environment, among others.
In many enterprises, the communication channels and processes between these different business areas can be cumbersome and expensive. If there is a significant change to the infrastructure or architecture, multiple tickets are typically submitted to multiple teams for reviews and approvals across different parts of the enterprise. Change Advisory Boards are commonly used to protect the organization. The change is usually unable to proceed unless the documentation is complete. Commonly, there’s a set of governance logs and auditable artifacts which are required for future audits.
Wouldn’t it be more beneficial for companies if teams had an optimized, automated workflow that could be used to speed up delivery and empower teams to get the work done in a set of secure guardrails? This could result in significant time and cost savings, leading to added business value.
After all, a recent Forrester report found that over three years, using GitHub drove 433% ROI for a composite organization simply with the combined power of all GitHub’s enterprise products. Not to mention the potential for time savings and efficiency increase, along with other qualitative benefits that come with consistency and streamlining work.
Your products and services would be deployed through an optimized path with security and governance built-in, rather than a sluggish, manual and error-prone process. After all, isn’t that the dream of DevOps, GitOps, and Cloud Native?
Introducing IaC
Let’s use a different analogy. Think of IaC as the blueprint for resources (such as servers, databases, networking components, or PaaS services) that host our software and services.
If you were architecting a hospital or a school, you wouldn’t use the same overall blueprint for both scenarios as they serve entirely different purposes with significantly different requirements. But there are likely building blocks or foundations that can be reused across the two designs.
IaC solutions, such as HCL, allow us to define and reuse these building blocks, similarly to how we reuse methods, modules, and package libraries in software development. With it being IaC, we can start adopting the same recommended practices for infrastructure that we use when collaborating and deploying on applications.
After all, we know that teams that adopt DevOps methodologies will see improved productivity, cloud-enabled scalability, collaboration, and security.
A better way to deliver
With that context, let’s explore the tangible benefits that we gain in codifying our infrastructure and how they can help us transform our traditional Ops culture.
Storing code in repositories
Let’s start with the lowest-hanging fruit. With it being IaC, we can start storing infrastructure and architectural patterns in source code repositories such as GitHub. This gives us a single source of truth with a complete version history. This allows us to easily rollback changes if needed, or deploy a specific version of the truth from history.
Teams across the enterprise can collaborate in separate branches in a Git repository. Branches allow teams and individuals to be productive in “their own space” and not have to worry about negatively impacting the in-progress work of other teams, away from the “production” source of truth (typically, the main branch).
Terraform modules, the reusable building blocks mentioned in the last section, are also stored and versioned in Git repositories. From there, modules can be imported to the private registry in Terraform Cloud to make them easily discoverable by all teams. When a new release version is tagged in GitHub, it is automatically updated in the registry.
Collaborate early and often
As we discussed above, teams can make changes in separate branches to not impact the current state. But what happens when you want to bring those changes to the production codebase? If you’re unfamiliar with Git, then you may not have heard of a pull request before. As the name implies, we can “pull” changes from one branch into another.
Pull requests in GitHub are a great way to collaborate with other users in the team, being able to get peer reviews so feedback can be incorporated into your work. The pull request process is deliberately very social, to foster collaboration across the team.
In GitHub, you could consider setting branch protection rules so that direct changes to your main branch are not allowed. That way, all users must go through a pull request to get their code into production. You can even specify the minimum number of reviewers needed in branch protection rules.
Tip: you could use a special type of file, the CODEOWNERS file in GitHub, to automatically add reviewers to a pull request based on the files being edited. For example, all HCL files may need a review by the core infrastructure team. Or IaC configurations for line of business core banking systems might require review by a compliance team.
Unlike Change Advisory Boards, which typically take place on a specified cadence, pull requests become a natural part of the process to bring code into production. The quality of the decisions and discussions also evolves. Rather than being a “yes/no” decision with recommendations in an external system, the context and recommendations can be viewed directly in the pull request.
Collaboration is also critical in the provisioning process, and GitHub’s integrations with Terraform Cloud will help you scale these processes across multiple teams. Terraform Cloud offers workflow features like secure storage for your Terraform state and direct integration with your GitHub repositories for a turnkey experience around the pull request and merge lifecycle.
Bringing automated quality reviews into the process
Building on from the previous section, pull requests also allow us to automatically check the quality of the changes that are being proposed. It is common in software to check that the application still compiles correctly, that unit tests pass, that no security vulnerabilities are introduced, and more.
From an IaC perspective, we can bring similar automated checks into our process. This is achieved by using GitHub status checks and gives us a clear understanding of whether certain criteria has been met or not.
GitHub Actions are commonly used to execute some of these automated checks in pull requests on GitHub. To determine the quality of IaC, you could include checks such as:
- Validating that the code is syntactically correct (for example, Terraform validate).
- Linting the code to ensure a certain set of standards are being followed (for example, TFLint or Terraform format).
- Static code analysis to identify any misconfigurations in your infrastructure at “design time” (for example, tfsec or terrascan).
- Relevant unit or integration tests (using tools such as Terratest).
- Deploying the infrastructure into a “smoke test”environment to verify that the infrastructure configuration (along with a known set of parameters) results deploy into a desired state.
Getting started with Terraform on GitHub is easy. Versions of Terraform are installed on our Linux-based GitHub-hosted runners, and HashiCorp has an official GitHub Action to set up Terraform on a runner using a Terraform version that you specify.
Compliance as an automated check
We recently blogged about building compliance, security, and audit into your delivery pipelines and the benefits of this approach. When you add IaC to your existing development pipelines and workflows, you’ll have the ability to describe previously manual compliance testing and artifacts as code directly into your HCL configurations files.
A natural extension to IaC, policy as code allows your security and compliance teams to centralize the definitions of your organization’s requirements. Terraform Cloud’s built-in support for the HashiCorp Sentinel and Open Policy Agent (OPA) frameworks allows policy sets to be automatically ingested from GitHub repositories and applied consistently across all provisioning runs. This ensures policies are applied before misconfigurations have a chance to make it to production.
An added bonus mentioned in another recent blog is the ability to leverage AI-powered compliance solutions to optimize your delivery even more. Imagine a future where generative AI could create compliance-focused unit-tests across your entire development and infrastructure delivery pipeline with no manual effort.
Security in the background
You may have heard of Dependabot, our handy tool to help you keep your dependencies up to date. But did you know that Dependabot supports Terraform? That means you could rely on Dependabot to help keep your Terraform provider and module versions up to date.
Checks complete, time to deploy
With the checks complete, it’s now time for us to deploy our new infrastructure configuration! Branching and deployment strategies is beyond the scope of this post, so we’ll leave that for another discussion.
However, GitHub Actions can help us with the deployment aspect as well! As we explained earlier, getting started with Terraform on GitHub is easy. Versions of Terraform are installed on our Linux-based GitHub-hosted runners, and HashiCorp has an official GitHub Action to set up Terraform on a runner using a Terraform version that you specify.
But you can take this even further! In Terraform, it is very common to use the command terraform plan to understand the impact of changes before you push them to production. terraform apply is then used to execute the changes.
Reviewing environment changes in a pull request
HashiCorp provides an example of automating Terraform with GitHub Actions. This example orchestrates a release through Terraform Cloud by using GitHub Actions. The example takes the output of the terraform plan command and copies the output into your pull request for approval (again, this depends on the development flow that you’ve chosen).
Reviewing environment changes using GitHub Actions environments
Let’s consider another example, based on the example from HashiCorp. GitHub Actions has a built-in concept of environments. Think of these environments as a logical mapping to a target deployment location. You can associate a protection rule with an environment so that an approval is given before deploying.
So, with that context, let’s create a GitHub Action workflow that has two environments—one which is used for planning purposes, and another which is used for deployment:
name: 'Review and Deploy to EnvironmentA'
on: [push]
jobs:
review:
name: 'Terraform Plan'
environment: environment_a_plan
runs-on: ubuntu-latest
steps:
- name: 'Checkout'
uses: actions/checkout@v2
- name: 'Terraform Setup'
uses: hashicorp/setup-terraform@v2
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
- name: 'Terraform Init'
run: terraform init
- name: 'Terraform Format'
run: terraform fmt -check
- name: 'Terraform Plan'
run: terraform plan -input=false
deploy:
name: 'Terraform'
environment: environment_a_deploy
runs-on: ubuntu-latest
needs: [review]
steps:
- name: 'Checkout'
uses: actions/checkout@v2
- name: 'Terraform Setup'
uses: hashicorp/setup-terraform@v2
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
- name: 'Terraform Init'
run: terraform init
- name: 'Terraform Plan'
run: terraform apply -auto-approve -input=false
Before executing the workflow, we can create an environment in the GitHub repository and associate protection rules with the environment_a_deploy. This means that a review is required before a production deployment.
Learn more
Check out HashiCorp’s Practitioner’s Guide to Using HashiCorp Terraform Cloud with GitHub for some common recommendations on getting started. And find out how we at GitHub are using Terraform to deliver mission-critical functionality faster and at lower cost.
The post How to build a consistent workflow for development and operations teams appeared first on The GitHub Blog.
]]>The post 3 common DevOps antipatterns and cloud native strategies that can help appeared first on The GitHub Blog.
]]>Antipatterns are when teams focus on short term goals and concentrate on tooling without factoring in people and process. According to Martin Fowler (co-author of the Agile Manifesto), a software pattern is something that “should have recurrence, which means the solution must be applicable in lots of different situations. If you are talking about something that’s a one-off, then it’s not worth adding the name to the profession’s vocabulary.” An antipattern can be thought of as taking something that should be a one-off and wrongly applying it to other situations.
Here, at GitHub, we want to help you avoid DevOps antipatterns, as they could be holding back your team’s success, productivity, and happiness. Let’s examine a few DevOps antipatterns and see how GitHub and cloud native strategies can be used to create patterns, which are scalable and reusable for many situations.
GitHub is a platform for developers, and not just from a productivity perspective—for happiness and our holistic experience. When you are happy and can get in a state of flow…and stay there because you don’t have to context switch between cumbersome tooling and manual actions, you tap into your true potential and productivity naturally ensues.
Most common DevOps antipatterns
- Unscalable team structures: This antipattern can start out positively with an exemplary team that is helping transform an enterprise. But when the team tries to apply their model to other situations across the organization, they can turn into a bottleneck and hold back innovation.
In the book, Team Topologies, the authors, Manuel Pais, and Matthew Skelton, describe this antipattern with two examples:
- DevOps team that may be mature and highly functioning, but start consolidating “compartmentalized knowledge, such as configuration management, monitoring, and deployment strategies, within an organization.”
- Infrastructure-as-code in a cloud team that is run with the same processes and communication channels as a traditional infrastructure team. “If the cloud team simply mimics the current behaviors and infrastructure processes, the organization will incur the same delays and bottlenecks for software delivery as before.”
Microsoft describes this antipattern in the context of a cloud operating model, which also applies to DevOps. Although this small team may be DevOps and cloud experts, they also might be responsible for domains that aren’t their area of expertise. This can quickly lead to bottlenecks, since the team only approves measures that are fully understood.
One strategy to address this DevOps antipattern is to leverage the declarative patterns of cloud native systems. If your DevOps CI/CD pipelines can be described “as code,”your pipelines will be able to take advantage of core GitHub capabilities, such as pull requests and version history. You’ll also benefit from the use of innersource, oversight, teams feeling empowered to propose changes, and collaboration through pull requests.
By leveraging this strategy, the exemplar DevOps team can create scalable open source communities within their organizations for DevOps empowerment without being considered a bottleneck to innovation.
-
Not shifting left enough: “Shifting left” has become a buzzword in the industry, but is also a key pattern for DevOps and cloud native enablement. It’s related to the DevOps pattern of frequency reduces difficulty. If security testing and governance is difficult, move as far left as possible and automate as part of your continuous integration processes as early as you can.
DevOps antipatterns can also occur if security, testing, and governance are infrequent and happen at the end of your delivery pipeline; this makes them more difficult to execute and will slow down productivity.
To help address security issues earlier, common OWASP vulnerabilities mitigation can happen directly in your GitHub workflow without sacrificing developer productivity or happiness. With some strategies, you can replace manual security processes with automated processes, which will easily scale and keep your developers in the flow.
There is little value in creating an automated end-to-end CI/CD pipeline if testing is highly manual and occurs late in the value stream. By not implementing unit tests correctly or completely, future changes will eventually introduce regression bugs that never get uncovered prior to a release and can be insidious to find.
Microsoft provides the following strategy to help teams move their testing further left: “Unit tests allow testing to ‘shift-left’ and introduce quality control as soon as the first line of code for a new feature is written.” But to make testing truly ‘shift-left’ your testing tools will need to be integrated with your code in GitHub. You’ll want to ensure that your testing tools and GitHub are integrated based on loose coupling. (Here are three strategies to help you get started.)
Shifting governance left is not always well understood. But if governance, auditability, separation of duties, and shared responsibilities are treated as concepts that happen outside of your DevOps pipeline, your team’s flow of value may come to a screeching halt once it starts to interact with other parts of your organization.
You can start with security guardrails like the OWASP vulnerabilities strategy mentioned above. Then, refer to common frameworks like NIST Cybersecurity, and start mapping controls to existing GitHub workflows, such as code reviews via the pull request. You’d be surprised at how many controls are built into the work developers do every day in GitHub.
-
Tools but not people: This may be the hardest DevOps antipattern to diagnosis and address but also the most important to get right. It may also be the most common antipattern since many DevOps transformations focus on tooling without addressing the importance of having their teams be in the flow and happy.
You may have the best tooling in your organizations, but if your team structure has not changed since the old waterfall days, you may end up with unclear responsibilities. This can lead to poor decision-making and unnecessary interference in other people’s positions.
The first strategy to address this antipattern is to realize the importance of communication channels in your teams. This concept was famously captured by Melvin Conway and is called Conway’s Law. Conway’s Law is essentially the observation that the architectures of software systems look remarkably similar to the organization of the development team that built it. If your teams are organized and communicate in a siloed, complex, and multi-layered way, any system you create will also be siloed and multi-layered. This can also lead to DevOps pipelines that are siloed, overly complex, and non-optimized.
Martin Fowler’s guidance is to “know not to fight it.” But if you take a people-first approach, you can try and build the required communication patterns into your DevOps pipelines in a more optimized way. For example, if your security and development teams need to communicate regarding open source dependencies, you can optimize the communication channel with the use of Dependabot and pull requests. Your security and development teams will not only be happier while staying in the flow, but they’ll also find security vulnerabilities much earlier in the pipeline.
The next strategy is finding a way to measure the productivity and happiness of your teams and enabling them to act if they can’t stay in the flow. At GitHub, we use the SPACE productivity framework to gauge developers’ perceived productivity and to ensure their well-being and satisfaction. Applying this framework can help measure several areas important to your DevOps pipelines, such as efficiency and flow. By leveraging this framework, you’ll be able to highlight the number of handoffs in a specific process and see how many teams were involved. You’ll also be able to reduce toil, improve your team’s happiness, and optimize your pipelines.
The bottom line: improving the developer experience
By addressing common DevOps antipatterns with these practical strategies, you can create an environment that will enable your team’s success and happiness while delivering on business value–and then start to scale this success to other teams.
Ready to increase developer velocity and collaboration while remaining secure? See how GitHub Enterprise can help.
The post 3 common DevOps antipatterns and cloud native strategies that can help appeared first on The GitHub Blog.
]]>