The `enable_workflow_job_events_queue` var allows setting up an extra SQS queue that receives a copy of each webhook payload. The docs list this as experimental, with potential use cases such as gathering metrics, responding to matrix builds, and whatever other integrations the user may need to build.
In my case I have been using a logs+metrics exporter that consumes the queue; this exporter runs in an EKS cluster in a separate AWS account. I had been using an `aws_sqs_queue_policy` to allow the external principal to consume the queue.
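For context, the cross-account access I describe was granted with a standalone queue policy along these lines (a sketch; the account ID, role name, and resource names are placeholders, not the actual values from my stacks):

```hcl
# Hypothetical example: allow a consumer role in another AWS account
# to read from the workflow job events queue.
data "aws_iam_policy_document" "consumer" {
  statement {
    sid    = "AllowCrossAccountConsume"
    effect = "Allow"
    actions = [
      "sqs:ReceiveMessage",
      "sqs:DeleteMessage",
      "sqs:GetQueueAttributes",
    ]
    resources = [aws_sqs_queue.workflow_job_events.arn]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:role/metrics-exporter"]
    }
  }
}

resource "aws_sqs_queue_policy" "consumer" {
  queue_url = aws_sqs_queue.workflow_job_events.id
  policy    = data.aws_iam_policy_document.consumer.json
}
```

Because `aws_sqs_queue_policy` replaces the queue's entire policy document, two stacks each managing one for the same queue will fight over it, which is the conflict described below.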
This recent change - a8cba4e - broke my configuration, however, because the queue policy it adds overwrites the one I lay down via another Terraform stack.
I think I can work around this by creating an IAM role in the account containing the SQS queue and assuming that role from my consumer in the other AWS account. However, it would be useful for this stack to provide an extra var so that users could add their own queue-policy statements. Since this queue is not used internally by the runner stack and is meant for users to build their own additional functionality, this seems like a reasonable request.
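The kind of var I have in mind could be merged into the module's own policy with `source_policy_documents` on `aws_iam_policy_document`. A minimal sketch; the variable name and the module's internal statement shown here are hypothetical, not existing module inputs:

```hcl
# Hypothetical module input for user-supplied queue-policy documents.
variable "workflow_job_queue_policy_json" {
  description = "Extra IAM policy documents (JSON) merged into the workflow job events queue policy."
  type        = list(string)
  default     = []
}

# Inside the module: merge user statements with the module's own.
data "aws_iam_policy_document" "workflow_job_events" {
  # User-supplied statements are folded in here.
  source_policy_documents = var.workflow_job_queue_policy_json

  # Placeholder standing in for the module's existing statement(s).
  statement {
    sid       = "ModuleManagedAccess"
    effect    = "Allow"
    actions   = ["sqs:SendMessage"]
    resources = [aws_sqs_queue.workflow_job_events.arn]

    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com"]
    }
  }
}

resource "aws_sqs_queue_policy" "workflow_job_events" {
  queue_url = aws_sqs_queue.workflow_job_events.id
  policy    = data.aws_iam_policy_document.workflow_job_events.json
}
```

With something like this, a user's cross-account consumer statements and the module's own policy would land in a single `aws_sqs_queue_policy`, avoiding the overwrite.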