Jul 31, 2023
    • Merge pull request #12620 from marctc/hetzner_role_exported · 33a67f66
      Julien Pivotto authored
      sd: change hetzner role type and constants to be exportable
    • Use a linked list for memSeries.headChunk (#11818) · 3c80963e
      Łukasz Mierzwa authored
      
      Currently memSeries holds a single head chunk in memory and a slice of mmapped chunks.
      When append() is called on memSeries it might decide that a new headChunk is needed for the given append() call.
      If that happens it will first mmap the existing head chunk and only then create a new empty headChunk and continue
      appending the sample to it.
      
      Since appending samples takes a write lock on memSeries, no other read or write can happen until the append is completed.
      When an append() must create a new head chunk, the whole memSeries is blocked until mmapping of the existing head chunk finishes.
      Mmapping itself uses a lock since it needs to be serialised, which means that the more chunks there are to mmap, the longer
      each chunk might wait to be mmapped.
      If there are enough chunks that require mmapping, some memSeries will be locked long enough to start affecting
      queries and scrapes.
      Queries might time out, since by default they have a 2-minute timeout set.
      Scrapes will be blocked inside the append() call, which means there will be a gap between samples. This will first affect range queries
      and calls using rate() and such, since the time range requested in the query might have too few samples to calculate anything.
      
      To avoid this we need to remove mmapping from the append path, since mmapping is blocking.
      But this means that when we cut a new head chunk we need to keep the old one around, so we can mmap it later.
      This change makes memSeries.headChunk a linked list: memSeries.headChunk still points to the 'open' head chunk that receives new samples,
      while older, yet-to-be-mmapped chunks are linked to it.
      Mmapping is done on a schedule by iterating over all memSeries one by one. Thanks to this we control when mmapping happens, since we trigger
      it manually, which reduces the risk that it will have to compete for mmap locks with other chunks.
      
      Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
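
      The structure described in this commit can be sketched in Go. This is only a minimal illustration of the idea under
      simplified assumptions (the names chunk, headChunkElement, cutNewHeadChunk, mmapHeadChunks and the 120-sample cut-off
      are made up for the sketch), not the actual Prometheus TSDB code:

      package headchunks

      import "sync"

      // chunk stands in for an in-memory head chunk; the real type in the
      // Prometheus TSDB is more involved.
      type chunk struct {
          samples []float64
      }

      // headChunkElement is one node of the linked list described above: the
      // newest ('open') chunk sits at the head of the list, and older chunks
      // that still need to be mmapped are linked behind it via prev.
      type headChunkElement struct {
          chunk *chunk
          prev  *headChunkElement // older, not-yet-mmapped chunk, or nil
      }

      type memSeries struct {
          mu        sync.Mutex
          headChunk *headChunkElement // open chunk receiving new samples
      }

      // cutNewHeadChunk replaces the open head chunk without mmapping the old
      // one; the old chunk simply stays linked behind the new head, so append()
      // never waits for mmapping.
      func (s *memSeries) cutNewHeadChunk() {
          s.headChunk = &headChunkElement{
              chunk: &chunk{},
              prev:  s.headChunk,
          }
      }

      // append adds one sample, cutting a new head chunk when the open one is
      // full. Only pointer manipulation happens under the lock here; there is
      // no blocking mmap on the append path.
      func (s *memSeries) append(v float64) {
          s.mu.Lock()
          defer s.mu.Unlock()
          if s.headChunk == nil || len(s.headChunk.chunk.samples) >= 120 {
              s.cutNewHeadChunk()
          }
          s.headChunk.chunk.samples = append(s.headChunk.chunk.samples, v)
      }

      // mmapHeadChunks is what a scheduled pass over a series might look like:
      // it detaches everything behind the open head chunk under the lock and
      // then mmaps those chunks outside of it, so mmapping happens when we
      // decide, not inside append().
      func (s *memSeries) mmapHeadChunks(mmapChunk func(*chunk)) {
          s.mu.Lock()
          if s.headChunk == nil {
              s.mu.Unlock()
              return
          }
          old := s.headChunk.prev
          s.headChunk.prev = nil // keep only the open chunk in the list
          s.mu.Unlock()

          for e := old; e != nil; e = e.prev {
              mmapChunk(e.chunk)
          }
      }

      The point of the sketch is that append() only relinks pointers while holding the series lock; the expensive mmapping is
      deferred to a separate, scheduled pass, as the commit message describes.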