Add VisitAllEntries method to iterate all entries in cache. #3

Open · wants to merge 9 commits into master

Conversation

@andreiavrammsd commented Dec 3, 2018

Iterates all entries in the cache with a given callback with the signature f func(k, v []byte) error. Iteration stops if an error is returned from the callback.
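A minimal usage sketch (illustrative only; it assumes the final method propagates the callback's error):

cache := fastcache.New(32 * 1024 * 1024)
cache.Set([]byte("user:1"), []byte(`{"id":1}`))
cache.Set([]byte("user:2"), []byte(`{"id":2}`))

// Visit every cached entry; returning a non-nil error stops the iteration.
err := cache.VisitAllEntries(func(k, v []byte) error {
	fmt.Printf("%s = %s\n", k, v)
	return nil
})
if err != nil {
	panic(err)
}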

Outdated:

Retrieves cached keys matching a regex pattern.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/VictoriaMetrics/fastcache"
)

type User struct {
	ID       int    `json:"id"`
	Username string `json:"username"`
}

const userCacheKey = "user:%d"

func main() {
	cache := fastcache.New(1000)

	// Insert data into cache
	u := &User{ID: 1, Username: "John"}
	b, _ := json.Marshal(u)
	cache.Set([]byte(fmt.Sprintf(userCacheKey, u.ID)), b)

	u = &User{ID: 2, Username: "Doe"}
	b, _ = json.Marshal(u)
	cache.Set([]byte(fmt.Sprintf(userCacheKey, u.ID)), b)

	// Retrieve all keys matching the pattern
	users, err := cache.Keys(`user:\d+`)
	if err != nil {
		panic(err)
	}

	fmt.Printf("Found %d user(s)\n", len(users))

	for _, key := range users {
		user := &User{}
		if err := json.Unmarshal(cache.Get(nil, key), user); err != nil {
			fmt.Printf("Error retrieving key %q\n", key)
			continue
		}
		fmt.Printf("%#v\n", user)
	}
}

@codecov-io commented Dec 3, 2018

Codecov Report

Merging #3 into master will increase coverage by 1.14%.
The diff coverage is 100%.


@@            Coverage Diff            @@
##           master      #3      +/-   ##
=========================================
+ Coverage   76.05%   77.2%   +1.14%     
=========================================
  Files           4       4              
  Lines         497     522      +25     
=========================================
+ Hits          378     403      +25     
  Misses         65      65              
  Partials       54      54
Impacted Files | Coverage Δ
fastcache.go | 89.79% <100%> (+1.49%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 78aabb6...39cf0ef.

@cristaloleg
Would it be better to filter the keys outside fastcache?
Just get all the keys, filter them, then fetch the matched key-value pairs.

@valyala (Collaborator) commented Dec 4, 2018

> Would it be better to filter the keys outside fastcache?
> Just get all the keys, filter them, then fetch the matched key-value pairs.

This won't be better, since returning the keys for a huge number of cached items (a typical workload for fastcache) would require a lot of additional memory and pointers.

fastcache.go (outdated diff):
idx += keyLen
value := chunk[idx : idx+valLen]

if err := f(key, value); err != nil {
Collaborator:

Currently it is called under the per-bucket read lock. This means the bucket contents cannot be modified until all its entries are visited. This may take a lot of time for slow callbacks. I'm unsure whether this is OK.

Collaborator:

This also would lead to deadlock on an attempt to call Cache.Set(key, ...) inside f.
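For illustration only (hypothetical usage, not code from this PR): with the per-bucket read lock held during f, a callback like the following would deadlock, because Set on the same key needs the write lock of the very bucket the visitor is still reading.

cache.VisitAllEntries(func(k, v []byte) error {
	// Re-setting the same key targets the same bucket, whose write lock
	// cannot be acquired while the visitor still holds its read lock.
	cache.Set(k, v)
	return nil
})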

Author:

If we mention these possible situations in the docs, do you think users will still ignore them and end up in undesired states?

Collaborator:

> If we mention these possible situations in the docs, do you think users will still ignore them and end up in undesired states?

I'm absolutely sure :) There are the following choices:

  • to document all the restrictions and caveats, in the hope that users read and adhere to the docs
  • to harden the implementation against users who don't read the docs, i.e. do not hold the lock during the f call and copy the key and value args passed to f. This would slow down the implementation (a sketch of this variant follows below)
  • to fork fastcache and implement this in the fork

I'm leaning towards the last solution. Let's leave the PR open for a while.
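A minimal sketch of the second option, assuming a simplified bucket backed by a plain map (the type and field names here are illustrative, not fastcache's actual internals): entries are copied while the read lock is held, and f runs only after the lock is released, so it may safely call Set/Get/Del.

package sketch

import "sync"

// entry holds a copied key/value pair so the callback can run
// after the bucket lock has been released.
type entry struct{ k, v []byte }

// bucket is a simplified stand-in for a cache bucket.
type bucket struct {
	mu    sync.RWMutex
	items map[string][]byte
}

// visitCopied snapshots all entries under the read lock, releases the
// lock, and only then invokes f, so f may safely modify the cache.
func (b *bucket) visitCopied(f func(k, v []byte) error) error {
	b.mu.RLock()
	entries := make([]entry, 0, len(b.items))
	for k, v := range b.items {
		entries = append(entries, entry{
			k: append([]byte(nil), k...),
			v: append([]byte(nil), v...),
		})
	}
	b.mu.RUnlock()

	for _, e := range entries {
		if err := f(e.k, e.v); err != nil { // no lock held here
			return err
		}
	}
	return nil
}

The extra copies are exactly the slowdown mentioned above: one allocation per key and per value for every visited entry.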

Author:

Could we push on a queue all operations (get, set, del...) while VisitAllEntries is working? And when it finishes, we run all the operations (in the same order, of course).
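A rough sketch of that idea as a wrapper around the cache (hypothetical, not part of this PR): mutating calls made while a visit is in progress are buffered and replayed in order afterwards.

package sketch

import (
	"sync"

	"github.com/VictoriaMetrics/fastcache"
)

// queuedCache buffers Set calls made while a visit is in progress and
// replays them, in order, once the visit finishes. Del/Reset would be
// buffered the same way.
type queuedCache struct {
	mu       sync.Mutex
	visiting bool
	pending  [][2][]byte // buffered (key, value) pairs
	c        *fastcache.Cache
}

func (q *queuedCache) Set(k, v []byte) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.visiting {
		// Copy k and v: the caller may reuse the slices.
		q.pending = append(q.pending, [2][]byte{
			append([]byte(nil), k...),
			append([]byte(nil), v...),
		})
		return
	}
	q.c.Set(k, v)
}

// endVisit is called when the iteration completes; it replays the queue.
func (q *queuedCache) endVisit() {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.visiting = false
	for _, kv := range q.pending {
		q.c.Set(kv[0], kv[1])
	}
	q.pending = nil
}

One drawback: readers would observe stale values until the queue is flushed, and the queue needs memory proportional to the write rate during the visit.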

@andreiavrammsd changed the title from "Add Keys method." to "Add VisitAllEntries method to iterate all entries in cache." on Dec 6, 2018
@hiqinternational commented Jun 5, 2019

Please don't put everything in the docs only. You could write the limitations, caveats, and a few basic, complete examples in the README.md.

Things to look out for should be the most visible.

There are a lot of nice features, and TTL is definitely needed here.

Please make TTL available as soon as possible.

I think the method's signature can be decided and implemented now;
the function / method / strategy / optimization can be improved later.

e.g. SetWithTTL(k, v, t) or just SetT(k, v, t) could be implemented already.
How the method actually works internally can be improved later.
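Until something like that exists, a caller can approximate TTL on top of the current API by prefixing each value with an expiry timestamp. A rough sketch (setWithTTL/getWithTTL are hypothetical helpers, not fastcache methods):

package sketch

import (
	"encoding/binary"
	"time"

	"github.com/VictoriaMetrics/fastcache"
)

// setWithTTL stores v under k with an 8-byte expiry-timestamp prefix.
func setWithTTL(c *fastcache.Cache, k, v []byte, ttl time.Duration) {
	buf := make([]byte, 8, 8+len(v))
	binary.LittleEndian.PutUint64(buf, uint64(time.Now().Add(ttl).UnixNano()))
	c.Set(k, append(buf, v...))
}

// getWithTTL returns the value and true if the entry exists and has not expired.
func getWithTTL(c *fastcache.Cache, k []byte) ([]byte, bool) {
	b := c.Get(nil, k)
	if len(b) < 8 {
		return nil, false
	}
	if time.Now().UnixNano() > int64(binary.LittleEndian.Uint64(b)) {
		c.Del(k) // expired: drop it lazily
		return nil, false
	}
	return b[8:], true
}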

No one will read the docs completely, but they will definitely read the README.md. A wiki section with an FAQ would also be nice.

@tsingson
In some cases TTL is a must and is nice to have, and some other caches have it.
But TTL isn't needed for a high-performance cache; there is another (easy) way to update or remove a cache item.

fastcache was perfect in my trial project and is now running in production.

@diegobernardes
@andreiavrammsd do you need any help to finish this PR? fastcache is perfect for my use case, it's missing just the feature to iterate over all the entries.

@andreiavrammsd (Author)
@diegobernardes, the PR is finished but not accepted because of performance concerns. Please see the discussions.

@hiqsociety
Is the VisitAllEntries() feature available? How can I code this / add it to a personal repo?

@miaoyc666
Why not develop a simple Keys function without a regex pattern? I think there would be fewer performance problems 😄
