
fix(mlcache) do not error out on 'no memory' #41

Merged
thibaultcha merged 2 commits into master from fix/return-on-no-memory on Mar 17, 2018

Conversation

thibaultcha
Owner

When the shm is full and eviction must happen through the LRU mechanism,
it can happen that the removed values do not free enough memory in the
shm, causing `set()` to fail and return a "no memory" error.

This can happen when values being stored have sizes of different orders of
magnitude.

We now ignore such errors, so that the user can still retrieve the data
and benefit from it in the L1 cache, which does not suffer from such a
limitation.

We print a warning when such errors occur, but a new `opts.quiet`
option can disable this behavior.
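
For illustration only, here is a minimal sketch of the behavior described
above, using a plain `lua_shared_dict` and `lua-resty-lrucache` rather than
mlcache's actual internals; the dict name, the helper name and the `quiet`
flag (standing in for `opts.quiet`) are assumptions:

```lua
-- Sketch only: mimics the "ignore 'no memory', keep the value in L1" behavior.
local lrucache = require "resty.lrucache"

local lru  = lrucache.new(1000)      -- L1: per-worker Lua-land cache
local dict = ngx.shared.my_cache     -- L2: shm zone (assumed: lua_shared_dict my_cache 10m;)

local function set_with_fallback(key, value, ttl, quiet)
    lru:set(key, value, ttl)         -- the L1 cache has its own LRU and cannot fail this way

    local ok, err = dict:set(key, value, ttl or 0)
    if not ok then
        if err ~= "no memory" then
            return nil, err          -- other shm errors are still fatal
        end

        if not quiet then
            ngx.log(ngx.WARN, "could not write value to shm: ", err,
                              " (the value is still cached in L1)")
        end
    end

    return true
end
```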

thibaultcha added a commit to Kong/kong that referenced this pull request Mar 17, 2018
thibaultcha force-pushed the fix/return-on-no-memory branch 3 times, most recently from f6013d0 to 77951e9 on March 17, 2018 at 23:18
thibaultcha merged commit b625629 into master on Mar 17, 2018
thibaultcha added a commit that referenced this pull request Mar 17, 2018
thibaultcha deleted the fix/return-on-no-memory branch on March 17, 2018 at 23:20
end

return true
end
Contributor

Is it intentional to return `true` upon a "no memory" error? The original code had it as well, but it seems odd (there is also no mention of this behavior in the docs).

Additionally, you might want to return the number of tries executed; that would still be a truthy return value (non-breaking), but would also allow for some analytics reporting.

Owner Author

> Is it intentional to return `true` upon a "no memory" error? The original code had it as well, but it seems odd (there is also no mention of this behavior in the docs).

Yes, we don't want to unjustifiably hide the fetched value from the user's application.

> Additionally, you might want to return the number of tries executed; that would still be a truthy return value (non-breaking), but would also allow for some analytics reporting.

Yeah, we could, although tracking `forcible` would be more useful for this purpose. See #36.
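
For context, `shm:set()` already returns a third value, `forcible`, which is
true when valid items had to be LRU-evicted to make room; a hedged sketch of
the kind of tracking mentioned above (not part of this PR, names are
illustrative):

```lua
-- Sketch only: surface shm eviction pressure for analytics/reporting.
local dict = ngx.shared.my_cache
local forcible_evictions = 0

local function tracked_set(key, value, ttl)
    local ok, err, forcible = dict:set(key, value, ttl or 0)
    if forcible then
        -- valid (non-expired) items were evicted to make room for this value
        forcible_evictions = forcible_evictions + 1
    end
    return ok, err
end
```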

thibaultcha added a commit to Kong/kong that referenced this pull request Mar 20, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 26, 2018
This is part of a series of fixes:
- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Context
-------
In the `local` mode of the rate-limiting plugins, storing the
rate-limiting counters in the same shm used by Kong's database cache is
too invasive for the underlying shm, especially when the rate-limiting
plugins are used with a `seconds` precision.

On top of exhausting the database cache slots, this approach also
generates some form of fragmentation in the shm. This is due to the
side-by-side storage of values with sizes of different orders of
magnitude (JSON strings vs. an incremented double) and the LRU eviction
mechanism. When the shm is full and LRU kicks in, it is highly probable
that several rate-limiting counters will be evicted (due to their
proliferation), thus not freeing enough space to store the retrieved
data, causing a `no memory` error to be reported by the shm.
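
As an illustration of this failure mode (not actual Kong code; the zone name
and key names below are made up), the following sketch fills an shm with tiny
counters and then tries to insert a much larger value:

```lua
-- Illustrative only; assumes something like: lua_shared_dict demo_cache 1m;
local dict = ngx.shared.demo_cache

-- 1. Proliferation of tiny counter-like values (one number per key).
--    safe_set() never evicts, so this fills the zone exactly.
local i = 0
repeat
    i = i + 1
until not dict:safe_set("counter:" .. i, 1)

-- 2. A value several orders of magnitude larger (e.g. a cached JSON row).
local big = string.rep("x", 64 * 1024)
local ok, err, forcible = dict:set("db_cache:some_row", big)

-- set() only performs a limited number of LRU evictions; removing a handful
-- of tiny counters may not free 64KB, so err can be "no memory" even though
-- forcible eviction took place.
ngx.log(ngx.WARN, "ok=", tostring(ok), ", err=", tostring(err),
        ", forcible=", tostring(forcible))
```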

Solution
--------
Declaring shms that are only used by some plugins is not very elegant:
all users (even those not using the rate-limiting plugins) now have to
pay a (small) memory cost.
Unfortunately, in the absence of a more dynamic approach to shm
configuration, such as a templating engine or a `configure_by_lua`
phase, this is the safest solution.
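
As a sketch of the dedicated-zone approach (the zone names, sizes and key
layout below are illustrative, not necessarily Kong's exact configuration):

```lua
-- Assumed nginx.conf declarations:
--     lua_shared_dict rate_limiting_counters 12m;   # counters only
--     lua_shared_dict db_cache               128m;  # mlcache L2 for DB entities
--
-- With the counters isolated in their own zone, their proliferation can no
-- longer evict or fragment the database cache.
local counters = ngx.shared.rate_limiting_counters

local function increment(identifier, period, period_date, value)
    -- hypothetical key layout: one key per consumer/IP and per period
    local key = identifier .. ":" .. period_date .. ":" .. period
    return counters:incr(key, value, 0)  -- initialize to 0 when the key is absent
end
```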

Size rationale
--------------
Running a script that generates similar keys and stores similar values
(doubles) indicates that a 12MB shm should be able to store about
48,000 of those values at once. It is important to keep in mind that
one Consumer/IP address might use more than one key (in fact, one per
period configured on the plugin), might use both the rate-limiting and
response-ratelimiting plugins at once, and that both plugins use the
same shms.
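
A rough version of such a measurement script could look like the following
(the zone name and key shape are assumptions; run e.g. in a
content_by_lua_block):

```lua
-- Capacity probe: fill a zone with keys/values shaped like rate-limiting
-- counters and count how many fit. Assumes: lua_shared_dict counters_probe 12m;
local dict = ngx.shared.counters_probe

local n = 0
while true do
    -- keys shaped like "<identifier>:<period_date>:<period>", values are doubles
    local key = string.format("127.0.0.%d:%d:second", n % 255, n)
    local ok, err = dict:safe_set(key, 1.5)
    if not ok then
        -- safe_set() never evicts, so "no memory" means the zone is full
        ngx.say("stored ~", n, " counters before: ", err)
        break
    end
    n = n + 1
end
```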

Even considering the above, ~48,000 keys per node seems reasonable,
since keys of `second` precision will most likely fill up the shm and
become candidates for LRU eviction. Our concern lies instead with
long-lived limits (and thus keys) set by the user.

Additionally, a future improvement will be to set the `init_ttl`
argument for the rate-limiting keys, which should help **quite
considerably** in reducing the plugins' footprint on the shm. As of
today, this feature has been contributed to ngx_lua but not yet
released:

    openresty/lua-nginx-module#1226
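
Once available, that `init_ttl` argument to `incr()` lets each counter expire
on its own instead of lingering until evicted; a hedged sketch (zone and key
names are illustrative):

```lua
-- Requires an ngx_lua version shipping shdict:incr(key, value, init, init_ttl),
-- i.e. the feature contributed in openresty/lua-nginx-module#1226.
local counters = ngx.shared.rate_limiting_counters

local period_in_seconds = 60

-- On first sight of the key, initialize it to 0 with a TTL equal to its
-- period; subsequent increments keep the existing TTL.
local new_count, err = counters:incr("consumer123:minute", 1, 0, period_in_seconds)
if not new_count then
    ngx.log(ngx.ERR, "could not increment counter: ", err)
end
```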

Again, this limit only applies when using the **local** strategy,
which also likely means that a load balancer is distributing traffic
to a pool of Kong nodes with some sort of consistent load-balancing
technique, thus considerably reducing the number of concurrent
Consumers a given node needs to handle at once.

See also
--------
Another piece of the fixes for the `no memory` errors resides in the
behavior of the database caching module upon a full shm. See:

    thibaultcha/lua-resty-mlcache#41

This patch greatly reduces the likelihood of a full shm, but does not
eliminate it. The above patch ensures a still-sane behavior should the
shm happen to be full again.

Fix #3124
Fix #3241
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 26, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 28, 2018
This is part of a series of fixes:
- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Changes:

* remove vendor mlcache.lua file which had received patches for Kong
  support
* add lua-resty-mlcache 2.0.1 as a dependency
* implement custom IPC options for the DB mlcache instance

Changelog of mlcache 2.0.0:

  https://github.com/thibaultcha/lua-resty-mlcache/blob/master/CHANGELOG.md#200

Changelog of mlcache 2.0.1:

  https://github.com/thibaultcha/lua-resty-mlcache/blob/master/CHANGELOG.md#201
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 28, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 28, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 23, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 23, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 24, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 24, 2018