Sami Hiltunen authored
The Remove function removes a repository. It does so by first moving
the repository into a temporary directory and only then removing it.
This avoids leaving partially deleted repositories behind if the
deletion is interrupted. The temporary directory is named after the
last element of the repository's path. This can cause conflicts: if a
deletion hasn't run to completion before the next deletion of the same
relative path begins, the temporary directory name of the later
deletion collides with that of the earlier one. Fix this by adding a
random element to the temporary directory name so that multiple
deletions of the same relative path can proceed without conflicts.

As we're using MkdirTemp to create a unique directory, we remove the
repository in a deferred function to ensure its removal on all error
branches. This makes no real difference in practice, given the removal
effectively happens when the repository is renamed into the temporary
directory.
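
A minimal Go sketch of the approach described above; the function signature, the directory layout, and the "removed" target name are illustrative, not Gitaly's actual code:

```go
package storage

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
)

// Remove deletes the repository at repoPath by first renaming it into a
// uniquely named temporary directory, so an interrupted deletion never
// leaves a partially deleted repository at the original path.
func Remove(ctx context.Context, tmpRoot, repoPath string) error {
	// MkdirTemp appends a random suffix to the pattern, so concurrent
	// deletions of the same relative path no longer conflict.
	tmpDir, err := os.MkdirTemp(tmpRoot, filepath.Base(repoPath)+"-")
	if err != nil {
		return fmt.Errorf("create temporary directory: %w", err)
	}

	// Deferred so the on-disk data is cleaned up on every error branch,
	// not only on the happy path.
	defer func() { _ = os.RemoveAll(tmpDir) }()

	// After this rename the repository is effectively removed: it is no
	// longer reachable at its original path.
	if err := os.Rename(repoPath, filepath.Join(tmpDir, "removed")); err != nil {
		return fmt.Errorf("rename repository: %w", err)
	}

	return nil
}
```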

Changelog: fixed
b82c6393

Gitaly

Quick Links: Roadmap | Want to Contribute? | GitLab Gitaly Issues | GitLab Gitaly Merge Requests

Gitaly is a Git RPC service for handling all the Git calls made by GitLab.

To see where it fits in please look at GitLab's architecture.

Project Goals

Fault-tolerant horizontal scaling of Git storage in GitLab, and particularly, on GitLab.com.

This will be achieved by focusing on two areas (in this order):

  1. Migrate from repository access via NFS to gitaly-proto, GitLab's new Git RPC protocol
  2. Evolve from large Gitaly servers managed as "pets" to smaller Gitaly servers that are "cattle"

Current Status

As of GitLab 11.5, almost all application code accesses Git repositories through Gitaly instead of direct disk access. GitLab.com production no longer uses direct disk access to touch Git repositories; the NFS mounts have been removed.

For performance reasons, some RPCs can still be performed through NFS. An effort is underway to mitigate these performance issues by removing Gitaly N+1 call patterns. Once that is no longer necessary, we can conclude the migration project by removing the Git repository storage paths from GitLab Rails configuration.

In the meantime we are building features according to our roadmap.

If you're interested in seeing how well Gitaly is performing on GitLab.com, read about our observability story!

Overall


Dashboard (The link can be accessed by GitLab team members.)

By Feature


Dashboard (The link can be accessed by GitLab team members.)

Installation

Most users won't install Gitaly on its own. It is already included in your GitLab installation.

Gitaly requires Go 1.19 or Go 1.20. Run make to compile the executables required by Gitaly.

Gitaly uses git. Versions 2.41.0 and newer are supported.

Configuration

The administration and reference guide is documented in the GitLab project.

Contributing

See CONTRIBUTING.md.

Name

Gitaly is a tribute to Git and the town of Aly. Whereas the town of Aly has zero inhabitants most of the year, we would like to reduce the number of disk operations to zero for most actions. It doesn't hurt that it sounds like Italy, the capital of which is the destination of all roads. All Git actions in GitLab end up in Gitaly.

Design

High-level architecture overview:

Gitaly Architecture

Edit this diagram directly in Google Drawings

Gitaly clients

As of Q4 2018, the following GitLab components act as Gitaly clients:

  • gitlab: the main GitLab Rails application.
  • gitlab-shell: for git clone, git push etc. via SSH.
  • gitlab-workhorse: for git clone via HTTPS and for slow requests that serve raw Git data. (example)
  • gitaly-ssh: for internal Git data transfers between Gitaly servers.

The clients written in Go (gitlab-shell, gitlab-workhorse, gitaly-ssh) use library code from the gitlab.com/gitlab-org/gitaly/client package.
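
For illustration, a hedged sketch of a Go client calling Gitaly's RepositoryService over gRPC; the address, storage name, and relative path are placeholders, and the gitalypb import path varies by Gitaly release:

```go
package main

import (
	"context"
	"log"
	"time"

	"gitlab.com/gitlab-org/gitaly/v15/proto/go/gitalypb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder address; real deployments use the tcp:// or unix://
	// listen addresses from the Gitaly configuration.
	conn, err := grpc.Dial("localhost:8075",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial gitaly: %v", err)
	}
	defer conn.Close()

	// RepositoryService is one of the services defined in gitaly-proto.
	client := gitalypb.NewRepositoryServiceClient(conn)
	resp, err := client.RepositoryExists(ctx, &gitalypb.RepositoryExistsRequest{
		Repository: &gitalypb.Repository{
			StorageName:  "default",           // placeholder storage
			RelativePath: "group/project.git", // placeholder path
		},
	})
	if err != nil {
		log.Fatalf("RepositoryExists: %v", err)
	}
	log.Printf("repository exists: %v", resp.GetExists())
}
```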

High Availability

We are working on a high-availability (HA) solution for Gitaly based on asynchronous replication. A Gitaly server would be made highly available by assigning one or more standby servers ("secondaries") to it, each of which contains a full copy of all the repository data on the primary Gitaly server.

To implement this we are building a new GitLab component called Praefect, which is hosted alongside the rest of Gitaly in this repository. As we currently envision it, Praefect will have four responsibilities:

  • route RPC traffic to the primary Gitaly server
  • inspect RPC traffic and mark repositories as dirty if the RPC is a "mutator"
  • ensure dirty repositories have their changes replicated to the secondary Gitaly servers
  • in the event of a failure on the primary, demote it to secondary and elect a new primary

Praefect has internal state: it needs to be able to "remember" which repositories are in need of replication, and which Gitaly server is the primary. We will use Postgres to store Praefect's internal state.
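
As a rough sketch of the "mark repositories as dirty" responsibility, here is a hypothetical gRPC unary interceptor. The hard-coded mutator set and the markDirtyFunc callback are invented for illustration; they stand in for RPC classification and for persisting the replication flag in Praefect's Postgres-backed state:

```go
package praefect

import (
	"context"

	"google.golang.org/grpc"
)

// mutators lists RPCs that modify a repository. Hard-coded here purely
// for illustration.
var mutators = map[string]bool{
	"/gitaly.OperationService/UserCommitFiles": true,
}

// markDirtyFunc is a placeholder for recording in Postgres that a
// repository needs replication to the secondaries.
type markDirtyFunc func(ctx context.Context, fullMethod string) error

// DirtyTrackingInterceptor flags the targeted repository as dirty
// whenever a mutator RPC passes through the proxy.
func DirtyTrackingInterceptor(mark markDirtyFunc) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		if mutators[info.FullMethod] {
			if err := mark(ctx, info.FullMethod); err != nil {
				return nil, err
			}
		}
		// Forward the call to the primary Gitaly server as usual.
		return handler(ctx, req)
	}
}
```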

As of December 2019 we are busy rolling out the Postgres integration in Praefect. The minimum supported Postgres version is 9.6, just like the rest of GitLab.

Further reading

More about the project and its processes is detailed in the docs.

Distributed Tracing

Gitaly supports distributed tracing through LabKit using OpenTracing APIs.

By default, no tracing implementation is linked into the binary, but different OpenTracing providers can be linked in using build tags/build constraints. This can be done by setting the BUILD_TAGS make variable.

For more details on the supported providers, see LabKit. As an example, for Jaeger tracing support, include the tags tracer_static and tracer_static_jaeger:

make BUILD_TAGS="tracer_static tracer_static_jaeger"

Once Gitaly is compiled with an OpenTracing provider, tracing is configured via the GITLAB_TRACING environment variable.

For example, to configure Jaeger, you could use the following command:

GITLAB_TRACING=opentracing://jaeger ./gitaly config.toml

Continuous Profiling

Gitaly supports Continuous Profiling through LabKit using Stackdriver Profiler.

For more information on how to set it up, see the LabKit monitoring docs.

Presentations