Dynamic triggering of triggerGroups #1474

Open

ronnberg opened this issue Nov 2, 2022 · 15 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.

Comments

@ronnberg commented Nov 2, 2022

Feature request

I would like the ability to use interceptor variables when matching triggers with matchLabels. This would make it possible to define a single EventListener for multiple repositories and activate only the specific triggers whose labels match something in the payload. Since an EventListener is backed by a long-running pod, it makes sense to me not to run one EventListener per repository. I know it is possible to let the EventListener fire multiple triggers and have each trigger (via an interceptor) decide whether it should continue or not, but with many repositories wired to one EventListener this would create a spike of work for every event. Being able to use variables in matchLabels would save resources in the cluster by allowing fewer EventListeners while activating only the triggers that should actually run.

Use case

For our company this would be a great feature. We have a microservice architecture with many repositories. Every opening to the outside world comes with a lot of extra work (approvals, firewalls, etc.), so having a single EventListener for multiple repositories is a must for us.

Example:

spec:
  triggerGroups:
  - interceptors:
    - name: get-repository-name
      params:
      - name: overlays
        value:
        - expression: body.repository.name
          key: repo_name
      ref:
        kind: ClusterInterceptor
        name: cel
    triggerSelector:
      labelSelector:
        matchLabels:
          reponame: $(extensions.repo_name)
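
For illustration, a Trigger that such a group could select might look like the following. All names here are hypothetical; only the reponame label matters for the match:

apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: billing-service-push       # hypothetical name
  labels:
    # matched against the resolved $(extensions.repo_name)
    reponame: billing-service
spec:
  bindings:
  - ref: billing-service-binding   # hypothetical TriggerBinding
  template:
    ref: billing-service-template  # hypothetical TriggerTemplate

An event whose body.repository.name resolves to billing-service would then activate only this Trigger, while triggers labeled for other repositories stay idle.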

@ronnberg added the kind/feature label Nov 2, 2022
@tekton-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/stale label Jan 31, 2023
@tekton-robot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 2, 2023
@tekton-robot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@khrm (Contributor) commented Apr 3, 2023

/remove-lifecycle rotten

It can be part of the future roadmap.

@tekton-robot removed the lifecycle/rotten label Apr 3, 2023
@khrm reopened this Apr 3, 2023
@tekton-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/stale label Jul 2, 2023
@tekton-robot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 1, 2023
@tekton-robot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@khrm (Contributor) commented Aug 31, 2023

/remove-lifecycle rotten

@tekton-robot removed the lifecycle/rotten label Aug 31, 2023
@khrm (Contributor) commented Aug 31, 2023

/reopen
This is a good feature to have. Someone can take it up.

@tekton-robot

@khrm: Reopened this issue.

In response to this:

/reopen
This is a good feature to have. Someone can take it up.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot reopened this Aug 31, 2023
@khrm added the help wanted label Aug 31, 2023
@luoyucumt commented Apr 10, 2024

This feature sounds really cool. I tried to implement it, but I am not sure whether it meets the requirements.

In the /pkg/template/event.go file, add the functions ResolveTriggerSelector and applyEventValuesToTriggerSelector:

// ResolveTriggerSelector resolves a templated trigger selector by replacing params with their values.
func ResolveTriggerSelector(selector *triggersv1.EventListenerTriggerSelector, body []byte, header http.Header, extensions map[string]interface{}, triggerContext TriggerContext) (*triggersv1.EventListenerTriggerSelector, error) {
	if selector == nil {
		return nil, nil
	}
	triggerSelector := selector.DeepCopy()

	err := applyEventValuesToTriggerSelector(triggerSelector, body, header, extensions, triggerContext)
	if err != nil {
		return nil, fmt.Errorf("failed to ApplyEventValuesToTriggerSelector: %w", err)
	}

	return triggerSelector, nil
}

// applyEventValuesToTriggerSelector returns triggerSelector with the JSONPath variables replaced
// with values from the event body, headers, and extensions.
func applyEventValuesToTriggerSelector(selector *triggersv1.EventListenerTriggerSelector, body []byte, header http.Header, extensions map[string]interface{}, triggerContext TriggerContext) error {
	event, err := newEvent(body, header, extensions, triggerContext)
	if err != nil {
		return fmt.Errorf("failed to marshal event: %w", err)
	}

	// Apply event values to the namespace selector. The resolved value must be
	// written back into the slice; the range variable is only a copy.
	for i, matchName := range selector.NamespaceSelector.MatchNames {
		if isTektonExpr(matchName) {
			expressions, originals := findTektonExpressions(matchName)
			for j, expr := range expressions {
				val, err := parseJSONPath(event, expr)
				if err != nil {
					return fmt.Errorf("failed to replace JSONPath value for match name %s: %w", matchName, err)
				}
				matchName = strings.ReplaceAll(matchName, originals[j], val)
			}
			selector.NamespaceSelector.MatchNames[i] = matchName
		}
	}

	// Apply event values to label selector
	if selector.LabelSelector != nil {
		// Apply event values to match labels
		for name, value := range selector.LabelSelector.MatchLabels {
			if isTektonExpr(value) {
				expressions, originals := findTektonExpressions(value)
				for i, expr := range expressions {
					val, err := parseJSONPath(event, expr)
					if err != nil {
						return fmt.Errorf("failed to replace JSONPath value for match labels %s: %s: %w", name, value, err)
					}
					value = strings.ReplaceAll(value, originals[i], val)
				}
				selector.LabelSelector.MatchLabels[name] = value
			}
		}

		// Apply event values to match expressions
		for _, matchExpr := range selector.LabelSelector.MatchExpressions {
			for i, value := range matchExpr.Values {
				if isTektonExpr(value) {
					expressions, originals := findTektonExpressions(value)
					for j, expr := range expressions {
						val, err := parseJSONPath(event, expr)
						if err != nil {
							return fmt.Errorf("failed to replace JSONPath value for match expressions %s: %s: %w", matchExpr.Key, value, err)
						}
						value = strings.ReplaceAll(value, originals[j], val)
					}
					matchExpr.Values[i] = value
				}
			}
		}
	}

	return nil
}
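
To make the intended substitution concrete, here is a minimal, self-contained Go sketch of the behavior applyEventValuesToTriggerSelector aims for. The regex and the resolve function below are illustrations of my own, not the real findTektonExpressions/parseJSONPath helpers from pkg/template:

package main

import (
	"fmt"
	"regexp"
)

// exprRe matches $(...) variables such as $(extensions.repo_name).
// Illustrative only; the real code reuses the existing pkg/template helpers.
var exprRe = regexp.MustCompile(`\$\(([^)]+)\)`)

// resolve replaces each $(key) in value with the matching entry from vals,
// leaving unknown expressions untouched.
func resolve(value string, vals map[string]string) string {
	return exprRe.ReplaceAllStringFunc(value, func(m string) string {
		key := exprRe.FindStringSubmatch(m)[1]
		if v, ok := vals[key]; ok {
			return v
		}
		return m
	})
}

func main() {
	vals := map[string]string{"extensions.repo_name": "billing-service"}
	// The matchLabels value from the issue's example:
	fmt.Println(resolve("$(extensions.repo_name)", vals)) // prints "billing-service"
}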

In the /pkg/sink/sink.go file, replace the function processTriggerGroups:

func (r Sink) processTriggerGroups(g triggersv1.EventListenerTriggerGroup, el *triggersv1.EventListener, request *http.Request, event []byte, eventID string, eventLog *zap.SugaredLogger, wg *sync.WaitGroup) {
	log := eventLog.With(zap.String(triggers.TriggerGroupLabelKey, g.Name))

	extensions := map[string]interface{}{}
	payload, header, resp, err := r.ExecuteInterceptors(g.Interceptors, request, event, log, eventID, fmt.Sprintf("namespaces/%s/triggerGroups/%s", r.EventListenerNamespace, g.Name), r.EventListenerNamespace, extensions)
	if err != nil {
		log.Error(err)
		return
	}
	if resp != nil {
		if resp.Extensions != nil {
			for k, v := range resp.Extensions {
				extensions[k] = v
			}
		}
		if !resp.Continue {
			eventLog.Infof("interceptor stopped trigger processing: %v", resp.Status.Err())
			return
		}
	}

	triggerSelector, err := template.ResolveTriggerSelector(&g.TriggerSelector, payload, header, extensions, template.NewTriggerContext(eventID))
	if err != nil {
		log.Error(err)
		return
	}

	trItems, err := r.selectTriggers(triggerSelector.NamespaceSelector, triggerSelector.LabelSelector)
	if err != nil {
		log.Error(err)
		return
	}

	// Create a new HTTP request that contains the body and header from any interceptors in the TriggerGroup
	// This request will be passed on to the triggers in this group
	triggerReq := request.Clone(request.Context())
	triggerReq.Header = header
	triggerReq.Body = io.NopCloser(bytes.NewBuffer(payload))

	wg.Add(len(trItems))
	for _, t := range trItems {
		go func(t triggersv1.Trigger) {
			defer wg.Done()
			// TODO(dibyom): We might be able to get away with only cloning if necessary
			// i.e. if there are interceptors and iff those interceptors will modify the body/header (i.e. webhook)
			localRequest := triggerReq.Clone(triggerReq.Context())
			r.processTrigger(t, el, localRequest, event, eventID, log, extensions)
		}(*t)
	}
}
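
A design note and a caveat: resolving the selector only after ExecuteInterceptors is deliberate, because extensions such as repo_name exist only once the group's interceptors have run. Also, as an assumption on my part rather than something the code above enforces, Kubernetes label values are limited to 63 characters drawn from alphanumerics, '-', '_' and '.', so a resolved value such as a repository name containing '/' would never match (or would fail selector validation); the resolution step may need to sanitize or reject such values.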

Would modifications like these be enough to complete this feature?

@luoyucumt


Would this code work?

@Hechamon

I am in the same situation as OP and would find this feature really helpful.
