BigCache is logging allocations it should not #363

Open
Matias-Barrios opened this issue Jun 5, 2023 · 4 comments
@Matias-Barrios

What is the issue you are having?
I have deployed version 3.0.2, and I thought issue #117 meant this was solved.
I have a constant flow of the messages below in my logs.

Can you help me understand how to fix this?

What is BigCache doing that it shouldn't?
It should not log the following every few milliseconds:

```
2023/06/05 02:37:51 Allocated new queue in 92.911µs; Capacity: 614400
2023/06/05 02:37:50 Allocated new queue in 81.841µs; Capacity: 614400
...
```
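An editorial aside: bigcache emits these "Allocated new queue" messages only when `Config.Verbose` is true. As a stopgap, assuming you only want to quiet the noise (this hides the symptom rather than fixing the repeated allocations), the flag can be turned off; a minimal sketch of the relevant config fields:

```go
// Sketch only: Verbose: false suppresses the "Allocated new queue ..."
// messages, but does not address why the allocations keep happening.
bcconfig := bigcache.Config{
	Shards:           1024,
	LifeWindow:       300 * time.Second,
	HardMaxCacheSize: 600, // in MB
	Verbose:          false,
}
```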


Environment:

  • Version (git sha or release): NA
  • OS (e.g. from /etc/os-release or winver.exe): alpine3.15
  • go version: golang:1.17.9
janisz (Collaborator) commented Jun 5, 2023

Could you share your config and example payload to help reproduce it?

Matias-Barrios (Author):

Sure. Here is my constructor:

```go
func NewBigCache(config *config.Config, datadog dd.ClientInterface) IBigCache {
	bcconfig := bigcache.Config{
		Shards:             1024,
		LifeWindow:         300 * time.Second,
		CleanWindow:        5 * time.Minute,
		MaxEntriesInWindow: 1000 * 10 * 60,
		MaxEntrySize:       500,
		Verbose:            true,
		HardMaxCacheSize:   config.BigCache.MaxMB, // 600MB
		OnRemove:           nil,
		OnRemoveWithReason: nil,
	}

	cache, initErr := bigcache.NewBigCache(bcconfig)
	if initErr != nil {
		log.Fatal().Msgf("error bigcache init: %s", initErr.Error())
	}
	go func() {
		for {
			time.Sleep(time.Duration(5) * time.Second)
			datadog.Histogram("bigcache_keys", float64(cache.Len()), []string{}, 1)
			datadog.Histogram("bigcache_hits", float64(cache.Stats().Hits), []string{}, 1)
			datadog.Histogram("bigcache_misses", float64(cache.Stats().Misses), []string{}, 1)
		}
	}()
	return BigCache{
		cache:   cache,
		enabled: config.BigCache.Enabled, // true
		datadog: datadog,
	}
}
```

And here is how I get a key:

```go
func (b BigCache) GetVendors(ctx context.Context, vendors []string) ([]models.Vendor, []string) {
	span, newCtx := tracer.StartSpanFromContext(ctx, "bigcache.GetVendors", tracer.ResourceName("bigcache"))
	ctx = newCtx
	defer span.Finish()
	if !b.enabled {
		return []models.Vendor{}, vendors
	}

	leftOver := make([]string, 0, len(vendors))
	out := make([]models.Vendor, 0, len(vendors))
	for _, v := range vendors {
		key := b.GetKeyName(v)
		value, err := b.cache.Get(key)
		if err != nil {
			leftOver = append(leftOver, v)
			continue
		}
		tmp := models.Vendor{}
		err = json.Unmarshal(value, &tmp)
		if err != nil {
			log.Ctx(ctx).Error().Msgf("error while unmarshaling key %s from BigCache: %s", key, err.Error())
			leftOver = append(leftOver, v)
			continue
		}
		out = append(out, tmp)
	}
	return out, leftOver
}
```

janisz (Collaborator) commented Jun 5, 2023

OK, this looks like a bug. If I understand correctly, bigcache allocates an array every ~1 ms, but instead of growing the array (which would be expected) it reallocates at the same size. What is not clear to me is HardMaxCacheSize: what exact value is passed there? Maybe it's a typo, and instead of 600 MB we have 614400, which would explain this behavior. If so, we should check whether the array is actually growing and, if it is not, skip the allocation and not print the confusing message:

```go
if q.capacity == q.maxCapacity {
	return
}
```

```go
func (q *BytesQueue) allocateAdditionalMemory(minimum int) {
	start := time.Now()
	if q.capacity < minimum {
		q.capacity += minimum
	}
	q.capacity = q.capacity * 2
	if q.capacity > q.maxCapacity && q.maxCapacity > 0 {
		q.capacity = q.maxCapacity
	}
	// ...
}
```
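Worth noting: if HardMaxCacheSize is 600 (interpreted as MB) and the config uses 1024 shards, the per-shard byte limit works out to exactly the capacity in the logged messages, which supports the theory that each shard's queue is stuck at its ceiling. A quick check of the arithmetic (the per-shard division is an assumption based on reading the config, not an official API):

```go
package main

import "fmt"

func main() {
	hardMaxCacheSizeMB := 600
	shards := 1024

	// Convert the MB limit to bytes, then split it evenly across shards.
	perShardBytes := hardMaxCacheSizeMB * 1024 * 1024 / shards
	fmt.Println(perShardBytes) // 614400, matching "Capacity: 614400" in the logs
}
```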

Matias-Barrios (Author):

In that value I am passing the number 600. It was meant to make the cache use 600 MB of RAM. Isn't it a plain int of megabytes that I have to provide there?
