Individual log is not retrievable via the name returned in the log list #514
Comments
The API pod logs, if this helps:
@iainsproat Which log type did you set in the config? File or S3?
Also, logs are not enabled by default. Although the API can be enabled from the config, separate configuration is required. For
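For reference, a minimal sketch of the log-related settings in the Results API config. The key names shown (`LOGS_API`, `LOGS_TYPE`, `LOGS_BUFFER_SIZE`) are assumptions recalled from the Tekton Results docs, not taken from this thread; verify them against your installed version:

```yaml
# Hedged sketch — key names assumed, check against your Tekton Results release.
LOGS_API: true           # enable the logs API in addition to the results API
LOGS_TYPE: S3            # storage backend for logs: "File" or "S3"
LOGS_BUFFER_SIZE: 92160  # chunk size used when streaming log data
```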
Thanks for responding, @sayan-biswas. Your suggestion about looking at the logs for the UpdateLog method pointed me in the right location.

I can see that there appears to be a race condition in the Tekton Results Watcher between the deletion of the TaskRun by the Watcher after it has completed and the uploading of logs by the Watcher.

I am running the Tekton Results Watcher with the following flag:

In the Watcher logs I can see garbage collection occurring successfully:
However, some time later the Watcher logs the following failure:
As it is an intermittent issue and not consistent, I am confident that S3 is configured correctly. I can query the blob storage provider directly and see that the data for other Tekton Task and Pipeline Runs (which presumably won the race condition!) has been stored, and can be queried via the API.

My expectation is that the garbage collection of completed TaskRuns/PipelineRuns would wait until all the logs are confirmed as having been successfully stored before proceeding to delete the completed TaskRun or PipelineRun. In the meantime, as a workaround, I'll increase the value of

I've attached the entire log output for both the API and the Watcher. During this logged period I started Tekton Results, applied a Task, and then applied a TaskRun.
@iainsproat Yes, this happens because the log streaming from the watcher to the API server only starts after the PipelineRun/TaskRun moves to
/assign
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
/assign
Expected Behavior
This results in the following as expected:
I expect that using the `.records[0].name` would return the individual log record that can be seen in the list, e.g.:

I may be doing something incorrect here, but it seems to match the API documentation.
What I expect to have returned is the following:
Actual Behavior
Retrieving the `.records[0].name` and attempting to access the resource results in the log record not being found.

Steps to Reproduce the Problem
Steps detailed above
Additional Info
Kubernetes version:
Output of `kubectl version`:

Tekton Pipeline version: