As a Cal-ITP Customer Success representative, I need to monitor changes to Littlepay Group and Product configuration so that I can diagnose issues with transactions and with the discount configuration used by the Cal-ITP Benefits app.
Acceptance Criteria
A table of Littlepay Groups exists containing at least the columns id, label, and participant_id, populated from the Littlepay Back Office Groups endpoint
A table of Littlepay Products exists containing at least the columns id, code, status, type, description, and participant_id, populated from the Littlepay Back Office Products endpoint
A table of Littlepay Group<>Product linkages exists containing at least the columns group_id, product_id, and participant_id, populated from the Littlepay Back Office Product Groups endpoint
The data is acquired nightly and timestamped to allow viewing changes over time
This data is available to Metabase users
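The tables and the nightly-timestamp requirement above could be sketched as SQLite DDL run from Python. This is a minimal sketch: only the listed columns come from the acceptance criteria, while the table names, the extracted_at column, and the sample row are assumptions for illustration (the real warehouse target may differ).

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")

# One table per acceptance criterion, plus an assumed extracted_at
# timestamp column so nightly snapshots can be compared over time.
conn.executescript("""
CREATE TABLE littlepay_groups (
    id TEXT,
    label TEXT,
    participant_id TEXT,
    extracted_at TEXT  -- assumed: timestamp of the nightly extract
);
CREATE TABLE littlepay_products (
    id TEXT,
    code TEXT,
    status TEXT,
    type TEXT,
    description TEXT,
    participant_id TEXT,
    extracted_at TEXT
);
CREATE TABLE littlepay_product_groups (
    group_id TEXT,
    product_id TEXT,
    participant_id TEXT,
    extracted_at TEXT
);
""")

# Illustrative row, not real Littlepay data.
conn.execute(
    "INSERT INTO littlepay_groups VALUES (?, ?, ?, ?)",
    ("g-1", "Older Adults", "example-agency", "2024-01-02T00:00:00Z"),
)

# "Viewing changes over time" amounts to querying across snapshots,
# e.g. the most recent extract seen for each group.
rows = conn.execute(
    "SELECT id, label, MAX(extracted_at) FROM littlepay_groups GROUP BY id"
).fetchall()
```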
Notes
The cal-itp/littlepay Python package implements the necessary APIs to fetch this data as Python objects and in CSV format.
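The extract-and-timestamp step could be sketched as below. This does not use the actual cal-itp/littlepay interface (see that package for the real client); the payload shape, field names beyond the columns listed above, and the sample record are all assumptions for illustration.

```python
import csv
import io
from datetime import datetime, timezone

def groups_to_csv(groups: list[dict], extracted_at: str) -> str:
    """Render Groups endpoint records (assumed shape) as a timestamped CSV."""
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["id", "label", "participant_id", "extracted_at"]
    )
    writer.writeheader()
    for group in groups:
        writer.writerow(
            {
                "id": group["id"],
                "label": group["label"],
                "participant_id": group["participant_id"],
                # Stamp every row with the extract time so nightly
                # snapshots can be diffed against each other.
                "extracted_at": extracted_at,
            }
        )
    return out.getvalue()

# Illustrative payload only, not real Littlepay data.
sample = [{"id": "g-1", "label": "Veterans", "participant_id": "example-agency"}]
ts = datetime.now(timezone.utc).isoformat()
csv_text = groups_to_csv(sample, ts)
```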
A GitHub Action is set up to scrape this data into CSVs stored in the repository and can be used as a model for a new DAG.
See this Slack thread for more background / context.