Slog sampling policy

A middleware that samples incoming records, capping the CPU and I/O load of logging while attempting to preserve a representative subset of your logs.

Sampling limits throughput by dropping repetitive log entries.

🚀 Install

go get github.com/samber/slog-sampling

Compatibility: go >= 1.21

No breaking changes will be made to exported APIs before v2.0.0.

💡 Usage

GoDoc: https://pkg.go.dev/github.com/samber/slog-sampling

Middlewares

Four sampling strategies are available: uniform, threshold, absolute, and custom (see below).

The sampling middleware can be used standalone or with the slog-multi helpers.
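
For instance, a minimal standalone sketch (no slog-multi pipeline), assuming the middleware returned by NewMiddleware() is a func(slog.Handler) slog.Handler that can wrap a handler directly:

import (
    "log/slog"
    "os"

    slogsampling "github.com/samber/slog-sampling"
)

// Keep ~33% of records; the sampling handler wraps the JSON handler directly.
option := slogsampling.UniformSamplingOption{Rate: 0.33}

logger := slog.New(
    option.NewMiddleware()(slog.NewJSONHandler(os.Stdout, nil)),
)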

Multiple sampling strategies can be chained (see the sketch after this list), e.g.:

  • drop when a single log message is produced more than 100 times per second
  • drop above 1000 log records per second (globally)
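
A sketch of that combination, assuming the per-message cap is expressed with ThresholdSamplingOption (Rate 0 drops everything beyond the threshold) and the global cap with AbsoluteSamplingOption over MatchAll(); the one-second interval is illustrative:

import (
    "log/slog"
    "os"
    "time"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

// Per-message cap: at most 100 records per second for a given level+message.
perMessage := slogsampling.ThresholdSamplingOption{
    Tick:      time.Second,
    Threshold: 100,
    Rate:      0, // drop everything above the threshold
}

// Global cap: at most 1000 records per second, whatever the message.
global := slogsampling.AbsoluteSamplingOption{
    Tick:    time.Second,
    Max:     1000,
    Matcher: slogsampling.MatchAll(),
}

logger := slog.New(
    slogmulti.
        Pipe(perMessage.NewMiddleware()).
        Pipe(global.NewMiddleware()).
        Handler(slog.NewJSONHandler(os.Stdout, nil)),
)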

Matchers

Similar log records can be deduplicated and rate-limited using the Matcher API.

Available matchers:

  • slogsampling.MatchByLevelAndMessage (default)
  • slogsampling.MatchAll
  • slogsampling.MatchByLevel
  • slogsampling.MatchByMessage
  • slogsampling.MatchBySource
  • slogsampling.MatchByAttribute
  • slogsampling.MatchByContextValue
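
A matcher maps a record to a grouping key: records that produce the same key share the same counters. A minimal sketch of a custom matcher that groups records by message only, used with the threshold strategy (values are illustrative):

import (
    "context"
    "log/slog"
    "time"

    slogsampling "github.com/samber/slog-sampling"
)

// Group records by message only, ignoring level and attributes.
option := slogsampling.ThresholdSamplingOption{
    Tick:      time.Second,
    Threshold: 10,
    Rate:      0.1,
    Matcher: func(ctx context.Context, record *slog.Record) string {
        return record.Message
    },
}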

Uniform sampling

type UniformSamplingOption struct {
    // The sample rate for sampling log records, in the range [0.0, 1.0].
    Rate float64

    // Optional hooks
    OnAccepted func(context.Context, slog.Record)
    OnDropped  func(context.Context, slog.Record)
}

Example using slog-multi:

import (
    "log/slog"
    "os"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

// Will print ~33% of entries.
option := slogsampling.UniformSamplingOption{
    // The sample rate in the range [0.0, 1.0].
    Rate: 0.33,
}

logger := slog.New(
    slogmulti.
        Pipe(option.NewMiddleware()).
        Handler(slog.NewJSONHandler(os.Stdout, nil)),
)

Threshold sampling

type ThresholdSamplingOption struct {
    // This will log the first `Threshold` log entries with the same hash,
    // in a `Tick` interval as-is. Following that, it will allow `Rate` in the range [0.0, 1.0].
    Tick       time.Duration
    Threshold  uint64
    Rate       float64

    // Group similar logs (default: by level and message)
    Matcher func(ctx context.Context, record *slog.Record) string

    // Optional hooks
    OnAccepted func(context.Context, slog.Record)
    OnDropped  func(context.Context, slog.Record)
}

If Rate is zero, the middleware will drop all log entries after the first Threshold records in that interval.
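
For instance, a sketch of such a hard cap, keeping only the first 20 matching entries per minute (values are illustrative):

import (
    "time"

    slogsampling "github.com/samber/slog-sampling"
)

// Keep the first 20 entries sharing the same level+message each minute, drop the rest.
option := slogsampling.ThresholdSamplingOption{
    Tick:      time.Minute,
    Threshold: 20,
    Rate:      0,
}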

Example using slog-multi:

import (
    "log/slog"
    "os"
    "time"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

// Will print the first 10 entries sharing the same level+message in each 5s interval,
// then ~10% of the following ones until the next interval.
option := slogsampling.ThresholdSamplingOption{
    Tick:      5 * time.Second,
    Threshold: 10,
    Rate:      0.1,
}

logger := slog.New(
    slogmulti.
        Pipe(option.NewMiddleware()).
        Handler(slog.NewJSONHandler(os.Stdout, nil)),
)

Absolute sampling

type AbsoluteSamplingOption struct {
    // This will log all entries with the same hash until max is reached,
    // in a `Tick` interval as-is. Following that, it will reduce log throughput
    // depending on previous interval.
    Tick time.Duration
    Max  uint64

    // Group similar logs (default: by level and message)
    Matcher Matcher

    // Optional hooks
    OnAccepted func(context.Context, slog.Record)
    OnDropped  func(context.Context, slog.Record)
}

Example using slog-multi:

import (
    "log/slog"
    "os"
    "time"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

// Will print the first 10 entries during the first 5s, then a fraction of the
// messages during the following intervals, depending on previous throughput.
option := slogsampling.AbsoluteSamplingOption{
    Tick: 5 * time.Second,
    Max:  10,

    Matcher: slogsampling.MatchAll(),
}

logger := slog.New(
    slogmulti.
        Pipe(option.NewMiddleware()).
        Handler(slog.NewJSONHandler(os.Stdout, nil)),
)

Custom sampler

type CustomSamplingOption struct {
    // Returns the sample rate for each log record, in the range [0.0, 1.0].
    Sampler func(context.Context, slog.Record) float64

    // Optional hooks
    OnAccepted func(context.Context, slog.Record)
    OnDropped  func(context.Context, slog.Record)
}

Example using slog-multi:

import (
    "context"
    "log/slog"
    "os"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

// Will print 100% of log entries during the night; otherwise 50% of errors, 20% of warnings and 1% of lower levels.
option := slogsampling.CustomSamplingOption{
    Sampler: func(ctx context.Context, record slog.Record) float64 {
        if record.Time.Hour() < 6 || record.Time.Hour() > 22 {
            return 1
        }

        switch record.Level {
        case slog.LevelError:
            return 0.5
        case slog.LevelWarn:
            return 0.2
        default:
            return 0.01
        }
    },
}

logger := slog.New(
    slogmulti.
        Pipe(option.NewMiddleware()).
        Handler(slog.NewJSONHandler(os.Stdout, nil)),
)

🤝 Contributing

Don't hesitate ;)

# Install some dev dependencies
make tools

# Run tests
make test
# or
make watch-test

👤 Contributors

💫 Show your support

Give a ⭐️ if this project helped you!

GitHub Sponsors

πŸ“ License

Copyright Β© 2023 Samuel Berthe.

This project is MIT licensed.