Bump Kibana memory to 1500Mi #7819
Conversation
buildkite test this -f p=gke,s=8.1.3 -m t=TestFleetMode,t=TestAPMKibanaAssociation,t=TestAgentVersionUpgradeToLatest8x,t=TestFleetAgentWithoutTLS
LGTM
This increases the default memory too much.
Maybe a temporary stopgap would be to start by increasing the memory only in the Kibana e2e tests:

```diff
diff --git a/test/e2e/test/kibana/builder.go b/test/e2e/test/kibana/builder.go
index f7439def7..dce565004 100644
--- a/test/e2e/test/kibana/builder.go
+++ b/test/e2e/test/kibana/builder.go
@@ -8,6 +8,7 @@ import (
 	"fmt"

 	corev1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/api/resource"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/util/rand"
@@ -69,7 +70,7 @@ func newBuilder(name, randSuffix string) Builder {
 		Name:      name,
 		Namespace: test.Ctx().ManagedNamespace(0),
 	}
-	return Builder{
+	b := Builder{
 		Kibana: kbv1.Kibana{
 			ObjectMeta: meta,
 			Spec: kbv1.KibanaSpec{
@@ -85,6 +86,20 @@ func newBuilder(name, randSuffix string) Builder {
 		WithSuffix(randSuffix).
 		WithLabel(run.TestNameLabel, name).
 		WithPodLabel(run.TestNameLabel, name)
+
+	// bump Kibana memory in 8.1.x as we see abnormal memory usage, probably due to the move to cgroups v2
+	// (https://github.com/kubernetes/kubernetes/issues/118916)
+	ver := version.MustParse(test.Ctx().ElasticStackVersion)
+	if ver.GTE(version.MinFor(8, 1, 0)) && ver.LT(version.MinFor(8, 2, 0)) {
+		b = b.WithResources(corev1.ResourceRequirements{
+			Requests: map[corev1.ResourceName]resource.Quantity{
+				corev1.ResourceMemory: resource.MustParse("1500Mi"),
+			},
+			Limits: map[corev1.ResourceName]resource.Quantity{
+				corev1.ResourceMemory: resource.MustParse("1500Mi"),
+			}})
+	}
+	return b
 }
```
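The version gate in the diff above selects exactly the 8.1.x range of the Elastic Stack: `GTE(MinFor(8, 1, 0)) && LT(MinFor(8, 2, 0))` is a half-open interval [8.1.0, 8.2.0). A minimal, self-contained sketch of that semantics follows; the `version` type and `inRange` helper here are hypothetical simplifications, not ECK's real `version` package:

```go
package main

import "fmt"

// version is a minimal semantic-version triple, a simplified stand-in
// for the version type used in the diff above (illustrative only).
type version struct{ Major, Minor, Patch int }

// less compares two versions component by component.
func (v version) less(o version) bool {
	if v.Major != o.Major {
		return v.Major < o.Major
	}
	if v.Minor != o.Minor {
		return v.Minor < o.Minor
	}
	return v.Patch < o.Patch
}

// inRange reports whether v lies in the half-open interval [lo, hi),
// mirroring ver.GTE(lo) && ver.LT(hi) in the diff.
func inRange(v, lo, hi version) bool {
	return !v.less(lo) && v.less(hi)
}

func main() {
	lo, hi := version{8, 1, 0}, version{8, 2, 0}
	fmt.Println(inRange(version{8, 1, 3}, lo, hi)) // 8.1.3 is gated: true
	fmt.Println(inRange(version{8, 2, 1}, lo, hi)) // 8.2.1 is not: false
}
```

Only stacks in the 8.1.x series (such as the 8.1.3 version used in the CI runs above) get the bumped memory; 8.2.0 and later are unaffected.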
I've done exactly what @thbkrkr suggested, which seems to make sense.
Superseded by #7836.
We are seeing nightly e2e test failures with Kibana being killed by the OOM killer. Kibana memory consumption seems to have increased, and we suspect the increase comes from the move to cgroups v2; there is a related Kubernetes issue: kubernetes/kubernetes#118916. This bumps the default Kibana memory to 1500Mi.
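For context, the `Mi` suffix in `1500Mi` is a binary (power-of-two) unit: 1500Mi = 1500 × 2^20 bytes. The sketch below is a hedged, minimal stand-in for `resource.MustParse` from `k8s.io/apimachinery` that handles only this one suffix, just to make the arithmetic concrete:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMi converts a quantity like "1500Mi" to bytes. It is an
// illustrative simplification, not the real k8s.io/apimachinery
// resource.MustParse, and supports only the "Mi" suffix.
func parseMi(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Mi"), 10, 64)
	if err != nil {
		return 0, err
	}
	// 1 Mi = 2^20 bytes
	return n * 1024 * 1024, nil
}

func main() {
	b, _ := parseMi("1500Mi")
	fmt.Println(b) // 1572864000 bytes, i.e. ~1.5 GiB
}
```

Setting the request and the limit to the same value, as the diff does, gives the Kibana pod the Guaranteed-style memory behavior typically wanted when chasing OOM kills.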