This repository has been archived by the owner on Dec 14, 2021. It is now read-only.

Sync with upstream #5

Open · wants to merge 10,000 commits into master
Conversation

skipcloud

Syncing with the upstream remote repo.

dustmop and others added 30 commits May 20, 2024 17:06
This version simply reverts the head of our fork, making it functionally identical to 1.13.0. Using it makes it impossible to accidentally bump viper to an incompatible version such as 1.13.2.
Co-authored-by: hmahmood <6599778+hmahmood@users.noreply.github.com>
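
A rough sketch of how such a pin is typically expressed as a `replace` directive in `go.mod` (the fork path below is an assumption for illustration, not taken from the commit):

```
// go.mod (sketch): pin viper to the fork so the version cannot drift
// to an incompatible upstream release such as 1.13.2.
replace github.com/spf13/viper => github.com/DataDog/viper v1.13.0
```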
…5671)

* pkg/config/model: Update the Viper config to copy proxy settings.

The Viper config previously did not copy the proxies in the CopyConfig
function, and in addition, the sync.Once could not be reset and would
prevent subsequent correct usage of the proxy field.

This PR copies the proxies field and also removes the Once, instead
utilizing the config mutex to ensure serial access.
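
A minimal sketch of the described change, with hypothetical type and field names (not the actual pkg/config/model code): the proxies field is guarded by the config mutex instead of a sync.Once, so it can be recomputed and copied safely.

```go
package config

import "sync"

// Proxy and buildProxiesFromEnv are hypothetical stand-ins.
type Proxy struct{ HTTP, HTTPS string }

func buildProxiesFromEnv() *Proxy { return &Proxy{} }

type safeConfig struct {
	sync.Mutex
	proxies *Proxy
}

// GetProxies uses the config mutex for serial access instead of a
// sync.Once, which could not be reset after CopyConfig.
func (c *safeConfig) GetProxies() *Proxy {
	c.Lock()
	defer c.Unlock()
	if c.proxies == nil {
		c.proxies = buildProxiesFromEnv()
	}
	return c.proxies
}

// CopyConfig now copies the proxies field as well.
func (c *safeConfig) CopyConfig(src *safeConfig) {
	proxies := src.GetProxies()
	c.Lock()
	defer c.Unlock()
	c.proxies = proxies
}
```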

* cleanup

* fix indentation
…25725)

* usm: tests: classification: Move kafka classification test to USM linux file

* usm: tests: classification: Move mysql classification test to USM linux file

* usm: tests: classification: Move postgres classification test to USM linux file

* usm: tests: classification: Move mongo classification test to USM linux file

* usm: tests: classification: Move redis classification test to USM linux file

* usm: tests: classification: Move AMQP classification test to USM linux file

* usm: tests: classification: Move HTTP2 classification test to USM linux file

* usm: tests: Remove skipIfNotLinux as it is not needed anymore

* usm: tests: Split classification tests to linux only and cross OS

Most of the tests are Linux-only and there is no matching classification utility
in the Windows version. Today the tests are shared and run on both OSes, but skipped
on Windows. That complicates writing tests and prevents easily sharing
utilities implemented in a Linux-only file, as doing so would break the Windows CI.

The PR moves the Linux-only classification tests into a Linux file and splits
the ancestor test into CrossOS and Linux composers.
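
The gating mechanism is a standard Go build constraint; a sketch of how a Linux-only test file is typically marked (file and package names here are illustrative):

```go
//go:build linux

// Tests and helpers in this file compile only on Linux, so the Windows
// CI never sees them and no runtime skip (skipIfNotLinux) is needed.
package classification

import "testing"

func TestKafkaClassification(t *testing.T) {
	// Linux-only classification test body.
}
```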

* usm: tests: Move utilities to linux only file

* usm: tests: Move consts to linux only file

* Added a comment
* [OASIS-12] Fix OTel dependencies

* Fix license

* New line

* Update OTel deps in sub modules too

* inv tidy-all

* Fix pgproto3 Query

* Fix test variable

* tidy
* [windows][cws] Remove unnecessary lock

This was missed in the previous conversion from a standard Go map to an
LRU. Since the LRU has its own locking, the explicit lock is superfluous.

* remove spaces
* Bump the buildimage to use Python 3.11.8

* Nudge CI

* Nudge CI

* test py2
Sometimes, errors of the type "Expected 1260 produce requests but got
1245" are seen in this test. The root cause is not yet known, so reduce
the request count in an attempt to reduce the flakiness.
* Events reported as errors

* release note
* [kitchen/e2e] update amazon linux 2023 x86_64

* [e2e] bump amazon 2023 amis

* [e2e] bump amazon linux 2023 arm64
* Revert "Revert "[CWS] fix windows selftests (#25242)" (#25593)"

This reverts commit 9fd8b79.

* windows: use long path in selftest rule

* linter: add missing comment
* [CSM] Fix Syscalls anomaly event

* [CSM] Remove Syscall Anomaly Detection kernel event
* [comp/core/tagger/tagstore] Add DCA implementation that reduces mem usage

* Review comments
* Update some naming details in documentation

* Update docs/public/components/creating-components.md

Co-authored-by: Austin Lai <76412946+alai97@users.noreply.github.com>

---------

Co-authored-by: Austin Lai <76412946+alai97@users.noreply.github.com>
Co-authored-by: amarziali <andrea.marziali@datadoghq.com>
val06 and others added 30 commits May 29, 2024 09:42
Run tests with the intended tags store settings rather than always
using a no-op store.
HTTP and Kafka were mixed up in the message.
* Add log source

* Add integration name

* Include container environment

* Address review

* Address review

* Add comment
* replace gui security mechanism with authToken->accessToken JWT mechanism

* typo sessionID

* fix lint issue

* rename authToken to loginToken and create dedicated endpoint

* use single-use intentToken and homemade HMAC-signed accessToken

* update accessToken payload by adding expirationTime

* replace bad external package

* add token prefix, update gorilla http router

* update systray launch-gui function

* add GUI_session_expiration param

* add release note

* use nil ApiEndpointProvider and update some comments/docs
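
A minimal sketch of a homemade HMAC-signed access token with an expiration time, along the lines the commits above describe; the payload fields, prefix, and encoding are assumptions, not the agent's actual format:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"time"
)

type accessTokenPayload struct {
	SessionID      string `json:"sessionID"`
	ExpirationTime int64  `json:"expirationTime"` // unix seconds
}

// signAccessToken returns "<prefix>.<payload>.<signature>".
func signAccessToken(key []byte, sessionID string, ttl time.Duration) (string, error) {
	payload, err := json.Marshal(accessTokenPayload{
		SessionID:      sessionID,
		ExpirationTime: time.Now().Add(ttl).Unix(),
	})
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return fmt.Sprintf("gui.%s.%s",
		base64.RawURLEncoding.EncodeToString(payload),
		base64.RawURLEncoding.EncodeToString(mac.Sum(nil))), nil
}

func main() {
	tok, err := signAccessToken([]byte("secret"), "session-1", time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}
```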
* Try with symlinks on Windows

* First pass

* Revert to using UNIX sockets on Windows

* Allow passing parameters to install command

* fix bootstrapper compilation issue

* Fix windows installer error handling

* Use HTTP instead of UNIX sockets

* Revert "Use HTTP instead of UNIX sockets"

This reverts commit 1ee4e46.

* Refactor installer paths

* Generate the OCI package

* Only package the Agent 7 in OCI

* Generate package in pipeline

* Fix linux compilation

* progress

* Fix merge conflicts

* Fix powershell script to generate OCI packages

* Fix source directory

* Fix deploy

* Fix deploy

* remove helper

* Invert calls and add return error to function

* Linter

* Update comments

* Add missing argument

* Only include windows registry on windows

* Add missing error

* Add missing arg

* Fix linux compilation errors

* Fix last error

* Fix linux unit tests

* Address review comments

* Update pkg/fleet/installer/installer.go

Co-authored-by: Arthur Bellal <arthur.bellal@datadoghq.com>

---------

Co-authored-by: Arthur Bellal <arthur.bellal@datadoghq.com>
* Add zstd compression level option.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Set level on stream compression.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Log on zstd

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Tidy

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Tidy

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Tidy

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Remove printf

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Remove spurious import.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Add unit test for compression level.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Remove zstd dependency from config and rename option to include zstd

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Linter errors

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

* Revert version change for otel.

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>

---------

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>
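
As a sketch of what a configurable zstd level looks like in Go, using klauspost/compress (an assumption; the agent may use a different zstd binding):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	var buf bytes.Buffer

	// Map a numeric config value (like the new compression-level option)
	// onto an encoder level.
	level := zstd.EncoderLevelFromZstd(5)

	w, err := zstd.NewWriter(&buf, zstd.WithEncoderLevel(level))
	if err != nil {
		panic(err)
	}
	if _, err := w.Write([]byte("payload to compress")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println("compressed bytes:", buf.Len())
}
```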
* use a dedicated variable to force a lower compression level

Using the maximal compression level on armhf jobs attempts to allocate
more memory than 32-bit binaries can address.

* use a more flexible approach for forcing compression level

* rename variable

* fix method name
…etrics (#25515)

* Collect cgroup path in workloadmeta

* Implement the container filter

* Update TestDump

* release note

* fix dogstatsd test using a mock workloadmeta with no implemented methods

* support the case of an empty cgroup path

* Update releasenotes/notes/support-arbitrary-container-id-cc0efdf7c156b7ad.yaml

Co-authored-by: Bryce Eadie <bryce.eadie@datadoghq.com>

* mention that the path is relative

* Remove no-lint comments

* remove the unsubscribe

* Move trie to its own package

* Add code comments to explain suffix matching and make sure we only populate the Trie if the regex does not match

* add test cases for cgroupfs / systemd

* update protobuf

---------

Co-authored-by: Bryce Eadie <bryce.eadie@datadoghq.com>
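
A toy sketch of the suffix-matching idea from the commits above: keying a trie on reversed cgroup path segments makes lookups match path suffixes (names and layout are illustrative, not the agent's trie package):

```go
package main

import (
	"fmt"
	"strings"
)

type node struct {
	children    map[string]*node
	containerID string
}

func newNode() *node { return &node{children: map[string]*node{}} }

// insert stores the container ID under the path segments in reverse
// order, so the deepest node corresponds to the longest suffix.
func (n *node) insert(path, containerID string) {
	segs := strings.Split(strings.Trim(path, "/"), "/")
	cur := n
	for i := len(segs) - 1; i >= 0; i-- {
		next, ok := cur.children[segs[i]]
		if !ok {
			next = newNode()
			cur.children[segs[i]] = next
		}
		cur = next
	}
	cur.containerID = containerID
}

// lookup walks backwards from the last segment and returns the deepest
// container ID seen, i.e. the longest suffix match.
func (n *node) lookup(path string) (string, bool) {
	segs := strings.Split(strings.Trim(path, "/"), "/")
	cur, id := n, ""
	for i := len(segs) - 1; i >= 0; i-- {
		next, ok := cur.children[segs[i]]
		if !ok {
			break
		}
		cur = next
		if cur.containerID != "" {
			id = cur.containerID
		}
	}
	return id, id != ""
}

func main() {
	t := newNode()
	t.insert("kubepods/burstable/pod123/abcdef", "abcdef")
	fmt.Println(t.lookup("/sys/fs/cgroup/kubepods/burstable/pod123/abcdef"))
}
```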
… spans to the input to APM stats concentrator. (#23503)
…5886)

* Get mounts from overlayfs snapshotter, similarly to what is done in MountImage

* prefer opts.UseMount, remove unused NoCache opts, use cache only when scanning images

* move scan from snapshotter out of the default function
* usm: kafka: Use unsigned loop counter

This allows some compiler optimizations and avoids a "looks like the BPF
stack limit of 512 bytes is exceeded" error when more code is added to
the classification functions.

* usm: kafka: test: Make unused fields non-zero

Set some unparsed fields in the packets to non-zero values for easier
debugging when packet parsing goes wrong and reads from wrong offsets.

* usm: kafka: test: Pass FetchRequest to buildMessages

To allow the request to be customized per test.

* usm: kafka: test: Pass in API version

Pass in the API version as a parameter to the raw tests, so that
different API versions can be tested.

* usm: kafka: test: Test different versions in raw test

Test some versions which have specific conditions.

* usm: kafka: test: Allow topic to be customized

* usm: kafka: test: Add test for large topic name

* usm: kafka: test: Add test for many topics

* usm: kafka: test: Add raw test for many partitions

* usm: kafka: test: Handle flexible header

* usm: kafka: Pass in version number to parsing tests

* usm: kafka: Support Fetch v12

Add support for v12 of the Fetch API.  The main change is the use of
varints and the addition of tagged values.  While that change itself is
fairly straightforward, the need to parse varints caused the programs to
exceed the instruction count supported by older kernels (4096
instructions), so the code had to be split into separate programs for
pre-v12 and v12, and the parsing had to be done in two steps in separate
programs: a first step that locates the record batch arrays, and a
second step that parses the list of found locations.

The split between pre-v12 and v12 is simple and handled by informing the
compiler about the bounds of the versions, and then depending on it to
eliminate the unused branches.  The split of the parsing into two steps
requires code to handle the transition between the two steps correctly.

Also mainly due to code size restrictions on older kernels (even after
the above optimizations), the following limitations are currently
present:

 - A maximum of four bytes of varints are supported
 - The tagged values fields are always assumed to be empty
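
A userspace Go sketch of the bounded varint parsing described above (at most four bytes, matching the stated limitation); it also reflects the rule, fixed in a later commit, that a byte's MSB only acts as a continuation flag when the previous byte had its MSB set:

```go
package main

import (
	"errors"
	"fmt"
)

// decodeUnsignedVarint reads at most maxBytes bytes of an unsigned
// varint: each byte contributes its low 7 bits, and a set MSB means
// another byte follows.
func decodeUnsignedVarint(buf []byte, maxBytes int) (value uint64, n int, err error) {
	var shift uint
	for n < maxBytes && n < len(buf) {
		b := buf[n]
		n++
		value |= uint64(b&0x7f) << shift
		if b&0x80 == 0 {
			return value, n, nil // MSB clear: last byte of the varint
		}
		shift += 7
	}
	return 0, n, errors.New("varint truncated or longer than the supported maximum")
}

func main() {
	// 0x96 0x01 encodes 150.
	v, n, err := decodeUnsignedVarint([]byte{0x96, 0x01}, 4)
	fmt.Println(v, n, err) // 150 2 <nil>
}
```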

* usm: kafka: test: Check for errors from LookupMaxKeyVersion

* usm: kafka: Remove leftover comment

* usm: kafka: remove unused cgo type

* usm: kafka: Remove passing of level

* usm: kafka: Add comment to parse_varint_u16()

* usm: kafka: Fix u16 varint check

The second byte's MSB should only be checked if the first byte has the
MSB set, since otherwise the second byte is unrelated to the varint.

* usm: kafka: Rename topic_name_size_tmp to topic_name_size_raw

* usm: kafka: Handle splits in header

We were previously not handling this mainly due to code size
restrictions, but with the program split this can be supported.

* usm: kafka: Make read_with_remainder generic

Remove the copy/pasted code and make read_with_remainder() generic with
the use of a function pointer.  The function pointer will be folded at
compile time by the compiler.  The instruction count is unchanged before
and after this patch.

* usm: kafka: Remove unused nonrestartable functions

* usm: kafka: Remove _restartable suffix

Now that we only have restartable readers, remove the redundant suffix.

* usm: kafka: Validate zero tagged fields when possible

* usm: kafka: Remove __read_varint indirection

* usm: kafka: Allow varint bytes to be customized per location

To reduce code size by limiting iterations wherever it makes sense.

* usm: kafka: Customize varint bytes per location

* usm: kafka: Clarify N-1 operation in varints

* usm: kafka: Rename tmp2 variable

* usm: kafka: Add comment about flexible

* usm: kafka: Skip varint number of topics

* usm: kafka: Check zero tagged fields in request

* usm: kafka: Use 64 bits for varint results

* usm: kafka: Add comment about uninitialized variable

* usm: kafka: test: Fix teardown

The telemetry test assigns to the client in the context multiple times,
so some of the clients are never torn down, leading to issues with tests
run later (for example, still-running clients attempting to connect to
the raw server while the raw tests execute afterward).
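
A common remedy for this pattern in Go tests (a sketch, not the actual fix) is to register every created client with t.Cleanup instead of keeping only the most recent one:

```go
package kafka_test

import "testing"

type client struct{}

func newClient() *client { return &client{} }
func (c *client) Close() {}

// newTestClient registers each client it creates with t.Cleanup, so all
// of them are torn down even when a test creates several.
func newTestClient(t *testing.T) *client {
	c := newClient()
	t.Cleanup(c.Close)
	return c
}

func TestTelemetry(t *testing.T) {
	c1 := newTestClient(t)
	c2 := newTestClient(t) // earlier clients are no longer leaked
	_, _ = c1, c2
}
```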
Co-authored-by: kacper-murzyn <89013263+kacper-murzyn@users.noreply.github.com>
Co-authored-by: arbll <arthur.bellal@datadoghq.com>
…nitoring (#25846)

* Add new parameter djm_config.enabled to enable Data Jobs Monitoring

* rst uses double backticks

Co-authored-by: Vickenty Fesunov <vickenty@users.noreply.github.com>

---------

Co-authored-by: Vickenty Fesunov <vickenty@users.noreply.github.com>
* last_stable updated to 7.54.0

* Update release.json
Fixes section id for junit file uploads
…yout (#25836)

Co-authored-by: GustavoCaso <gustavo.caso@datadoghq.com>
* ensure old approvers are not kept after policy reloading

* add new approver capabilities for other FRIM event types

* ensure we take into account all event basenames for approvers

* ensure approvers are able to skip event types with no active rule

* add test for approver mechanism

* disable approvers in `TestETWFileNotifications`
* Initial version

* Rename tagenrichmentprocessor to infraattributesprocessor

* Fix merge

* Rename config_logs_strict

* Add tagger component

* Fix tests and lint

* tidy

* Refactor to tag multiple entities

* Rename

* Temp update mapping go attributes

* tidy

* Update attributes mapping

* Fix resource metrics loop, add more tests

* Update attributes to v0.16.1

* Update attributes to v0.16.1

* Add container_image_metadata

* generate licenses

* Rename

* Fix merge conflicts

* generate deps
Co-authored-by: jonbodner <jon.bodner@datadoghq.com>
* Update the documentation to mention deva

* Fix anchors

* Update docs/public/setup.md

Co-authored-by: Ofek Lev <ofekmeister@gmail.com>

* Update docs/dev/agent_dev_env.md

* remove empty file

* Update docs/dev/agent_dev_env.md

Co-authored-by: Nicolas Schweitzer <nicolas.schweitzer@datadoghq.com>

* comments

---------

Co-authored-by: Ofek Lev <ofekmeister@gmail.com>
Co-authored-by: Nicolas Schweitzer <nicolas.schweitzer@datadoghq.com>