
Airbridge: Lightweight Airbyte Data Flows

We wanted a clean, no-frills, open source solution that focused solely on the core Airbyte source and destination connectors. That is it.

Not finding an existing solution that accomplished this goal, we decided to build something that fits—introducing Airbridge.

Overview

Airbridge uses base Airbyte Docker images, so you can concentrate on simple, well-bounded data extraction and delivery while using the minimum resources to get the job done. Pick your Airbyte source and destination; Airbridge handles the rest.

🐳 Docker-Driven: Utilizes prebuilt source and destination Docker images via Docker Hub.

🐍 Python-Powered: Built on standards-based Python, Airbridge ensures a clean, quick, and modular data flow, allowing easy integration and modification.

🔗 Airbyte Sources and Destinations: Orchestrating the resources needed to bridge sources and destinations.

🔄 Automated State Management: Includes simple but effective automated state tracking for each run.

🔓 Open-Source: No special license; everything in Airbridge is MIT-licensed.

📦 No Bloat: No proprietary packages. No unnecessary wrappers.

Prerequisites

The Airbridge project requires Docker and Python:

  1. Docker: The project uses Airbyte Docker images, which containerize source and destination connectors, ensuring a consistent and isolated environment for them. See Docker for Linux, Docker Desktop for Windows, Docker Desktop for Mac
  2. Python: The project is written in Python and requires various Python packages to function correctly. Download and install the required version from Python's official website.

Quick Start

Once Python and Docker are installed, Docker is running, and you have downloaded Airbridge, you are ready to go!

The fastest way to get started is via Poetry.

To install Poetry, you can use python or python3, depending on your environment:

curl -sSL https://install.python-poetry.org | python -
or
curl -sSL https://install.python-poetry.org | python3 -

Once installed, go into the Airbridge project folder, then run the install:

poetry install

Make sure you are in the src/airbridge directory; then you can run Airbridge with a simple command like this:

poetry run main  -i airbyte/source-stripe -w airbyte/destination-s3 -s /airbridge/env/stripe-source-config.json -d /airbridge/env/s3-destination-config.json -c /airbridge/env/stripe-catalog.json  -o /airbridge/tmp/mydataoutput/

The above command is an example. It connects to Stripe, collects all the data defined in the catalog, and then sends the data to Amazon S3. That's it.

Note: The paths above are absolute, not relative. Make sure you have them set correctly for your environment!
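For example, if you keep your config files under /airbridge/env/ and write output to /airbridge/tmp/mydataoutput/ as in the command above, you can create those directories up front (paths shown are illustrative; adjust them to your environment):

mkdir -p /airbridge/env /airbridge/tmp/mydataoutput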

After running Airbridge, in your local output path (-o), you will see:

- airbridge
  - tmp
    - mydataoutput
      - airbyte
        - source-stripe
          - 1629876543
            - data.json
            - state.json

How this data is represented in your destination will vary according to the configs you supplied.

Overview of Configs

For Airbridge to work, it needs Airbyte-defined configs. These configs supply the credentials and catalog that Airbyte requires.

In our example run command we passed a collection of arguments.

First, we defined the Airbyte docker source image name. We used -i airbyte/source-stripe in our command because we want to use Stripe as a source.

Next, we set the destination. This is where you want airbyte/source-stripe data to land. In our command, we used -w airbyte/destination-s3 because we want data from Stripe to be sent to our Amazon S3 data lake.

We passed -s /airbridge/env/stripe-source-config.json and -d /airbridge/env/s3-destination-config.json to supply the source and destination credentials. We also passed -c /airbridge/env/stripe-catalog.json because this reflects the catalog of the airbyte/source-stripe source. The catalog defines the schemas and other elements that describe the outputs of airbyte/source-stripe.

Lastly, we set a location to store the data from the source prior to sending it to your destination. We passed -o /airbridge/tmp/mydataoutput/ to store the output of airbyte/source-stripe prior to posting it to airbyte/destination-s3.

Example: Swapping Sources

You could quickly switch things up from Stripe to using airbyte/source-klaviyo while keeping your destination the same (airbyte/destination-s3).

All you need to do is swap in the Klaviyo source config (klaviyo-source-config.json) and catalog (klaviyo-catalog.json), leaving the S3 destination config (s3-destination-config.json) and the local output path (/airbridge/tmp/mydataoutput/) unchanged, as shown below.
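For illustration, the swapped command might look like this, assuming the Klaviyo config and catalog files live in /airbridge/env/ alongside the Stripe files from the earlier example:

poetry run main -i airbyte/source-klaviyo -w airbyte/destination-s3 -s /airbridge/env/klaviyo-source-config.json -d /airbridge/env/s3-destination-config.json -c /airbridge/env/klaviyo-catalog.json -o /airbridge/tmp/mydataoutput/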

Passing Your Config Arguments

The following arguments can be provided to Airbridge:

  • -i: The Airbyte source image from Docker Hub (required). Select a pre-built image from the Docker Hub Airbyte source connectors.
  • -w: The Airbyte destination image from Docker Hub (required). Select a pre-built image from the Docker Hub Airbyte destination connectors. This is where you want your data landed.
  • -s: The configuration (<source>-config.json) for the source.
  • -d: The configuration (<destination>-config.json) for the destination.
  • -c: The catalog configuration for both source and destination.
  • -o: The desired path for local data output. This is where the raw data from the connector is temporarily stored.
  • -j: Job ID associated with the process.
  • -t: Path to the state file. If provided, the application will use the state file as an input to your run.
  • -r: Path to an external configuration file. For example, rather than passing individual arguments, you can use a config file via -r like this: poetry run main -r ./config.json.

Example Airbridge Config

Here is an example of the config we pass when running poetry run main -r ./config.json:

{
    "airbyte-src-image": "airbyte/source-stripe",
    "airbyte-dst-image": "airbyte/destination-s3",
    "src-config-loc": "/path/to/airbridge/env/stripe-config.json",
    "dst-config-loc": "/path/to/airbridge/env/amazons3-config.json",
    "catalog-loc": "/path/to/airbridge/env/catalog.json",
    "output-path": "/path/to/airbridge/tmp/mydata",
    "job": "1234RDS34"
}

Understanding And Defining Your Configs

The principal effort in running Airbridge is setting up the required Airbyte config files. As a result, the following documentation focuses primarily on getting the Airbyte configs set up correctly for your sources and destinations.

Deep Dive Into Configuration Files

As we have shown in our example, three configs are needed to run the Airbyte service:

  1. Source Credentials: This reflects your authorization to the source. Its contents are defined by the Airbyte connector's spec.json. Typically, a sample_files/sample_config.json in a connector directory can be used as a reference config file.
  2. Source Data Catalog: The catalog, often named something like configured_catalog.json, reflects the datasets and schemas defined by the connector.
  3. Destination Credentials: Like the Source connector, this reflects your authorization to the destination.

Each of these configs is defined by the Airbyte source or destination. As such, you need to follow the specifications precisely as they define them, including both required and optional elements. To help with that process, we have created a config generation utility script, config.py.

Auto Generate Airbyte Config Templates With config.py

Not all Airbyte connectors and destinations contain reference config files. This can make determining what should be included in the source (or destination) credential file challenging.

To simplify creating the source and destination credentials, you can run config.py. This script will generate a configuration file based on the source or destination specification (spec.json or spec.yaml) file. It can also create a local copy of the catalog.json.

Locating The spec.json or spec.yaml files

To find the spec.json or spec.yaml, navigate to the respective source on GitHub. For example, if you were interested in Stripe, go to connectors/source-stripe/. In that folder, you would find the spec.yaml at connectors/source-stripe/source_stripe/spec.yaml.

For LinkedIn, you go to connectors/source-linkedin-ads and then navigate to connectors/source-linkedin-ads/source_linkedin_ads/spec.json.

Locating The catalog.json files

To find the catalog.json, navigate to the respective source on GitHub. For example, if you were interested in Chargebee, go to source-chargebee/integration_tests/. In that folder, you would find the configured_catalog.json.

NOTE: Always make sure you are passing the RAW output of the YAML or JSON file. For example, the GitHub link to the raw file will look like https://raw.githubusercontent.com/airbytehq/airbyte/master/airbyte-integrations/connectors/source-linkedin-ads/source_linkedin_ads/spec.json.

Running The Config Generation Script

The script accepts command-line arguments to specify the input spec file URL and the output path for the generated configuration file.

To run config.py, make sure to run pip install requests jsonschema if you do not have them installed. Note: If you're using a Python environment where pip refers to Python 2, you should use pip3 instead of pip.

The script takes an input and generates the config as an output via the following arguments:

  • The -i or --input argument specifies the URL of the spec file (either YAML or JSON format).
  • The -o or --output argument specifies the path where the generated configuration file should be saved.
  • The -c or --catalog argument specifies the URL of the catalog file (either YAML or JSON format).

Example usage:

python3 config.py -i https://example.com/spec.yaml -o ./config/my-config.json

This example uses the source_klaviyo/spec.json from Github:

python3 config.py -i https://raw.githubusercontent.com/airbytehq/airbyte/master/airbyte-integrations/connectors/source-klaviyo/source_klaviyo/spec.json -o ./my-klaviyo-config.json

In this example, you do not specify the name when passing -o. You simply pass the folder:

python3 config.py -i https://raw.githubusercontent.com/airbytehq/airbyte/master/airbyte-integrations/connectors/destination-s3/src/main/resources/spec.json -o ./config

If a filename is not passed, the script will auto-generate the file name based on the value of title in spec.json. In this example, the config name would be s3-destination-spec-config.json.
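As a rough illustration, such a name could be derived from the spec's title like this (a hypothetical sketch; config.py's actual logic may differ):

# Hypothetical sketch of deriving a config file name from a spec title.
# This is an illustration only; config.py's implementation may differ.
def config_filename_from_title(title: str) -> str:
    return title.strip().lower().replace(" ", "-") + "-config.json"

print(config_filename_from_title("S3 Destination Spec"))
# -> s3-destination-spec-config.json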

This will connect, collect, and then store the configured_catalog.json locally to ./config/catalog/sourcechargebee-catalog.json:

poetry run ./config.py -c https://raw.githubusercontent.com/airbytehq/airbyte/master/airbyte-integrations/connectors/source-chargebee/integration_tests/configured_catalog.json -o ./config/catalog/sourcechargebee-catalog.json

Example Configs

The generated configuration file contains placeholder values that can be replaced with actual values before use. The following is a config.json generated from the LinkedIn Ads spec.json. Note that the template config highlights required and optional fields as defined in the spec. You will need to supply these according to your specific use case.

{
    "credentials": "optional_value",
    "client_id": "required_value",
    "client_secret": "required_value",
    "refresh_token": "required_value",
    "access_token": "required_value",
    "start_date": "required_value",
    "account_ids": "optional_value",
    "ad_analytics_reports": "optional_value"
}

The config below was generated from the S3 destination destination-s3/src/main/resources/spec.json. Note that it is a little more involved than the LinkedIn config, which highlights some of the variation between different resources.

{
    "access_key_id": "optional_value",
    "secret_access_key": "optional_value",
    "s3_bucket_name": "required_value",
    "s3_bucket_path": "required_value",
    "s3_bucket_region": "required_value",
    "format": {
        "format_type": "optional_value",
        "compression_codec": "optional_value",
        "flattening": "optional_value",
        "compression": {
            "compression_type": "optional_value"
        },
        "block_size_mb": "optional_value",
        "max_padding_size_mb": "optional_value",
        "page_size_kb": "optional_value",
        "dictionary_page_size_kb": "optional_value",
        "dictionary_encoding": "optional_value"
    },
    "s3_endpoint": "optional_value",
    "s3_path_format": "optional_value",
    "file_name_pattern": "optional_value"
}

Here is an example of an S3 config.json that removes the optional placeholders that are not needed:

{
    "access_key_id": "1234565TYD45YU111X",
    "secret_access_key": "DSERFKGKFDUR459-123SAXVFKD",
    "s3_bucket_name": "my-unique-bucket-name",
    "s3_bucket_path": "airbyte/raw",
    "s3_bucket_region": "us-east-1",
    "format": {
        "format_type": "CSV",
        "flattening": "Root level flattening",
        "compression": {
            "compression_type": "GZIP"
        }
    },
    "s3_path_format": "airbyte-stripe/${STREAM_NAME}/dt=${YEAR}${MONTH}${DAY}/${SOURCE_NAMESPACE}_${STREAM_NAME}_${EPOCH}_${UUID}"
}

Tracking Your Runs

In data processing, especially when dealing with data synchronization or orchestration tasks, it's crucial to have a way to uniquely identify each execution, or "job." This is especially true given that a state file informs Airbyte where you want to start or where you left off. This is where the concept of a "Job ID" comes into play.

What is a Job ID?

A Job ID is a unique identifier assigned to a specific execution or "run" of a process or script. In the Airbyte Docker Runner context, passing a Job ID via -j serves as a unique tag that can differentiate one script run from another. This becomes especially important when multiple instances of the script might be running concurrently or when you need to trace back the results, logs, or errors of a specific execution.

Passing a Job ID

In the context of the Airbyte Docker Runner, the Job ID can be provided via the -j or --job argument, allowing users or systems to determine the method of uniqueness that best fits their needs. If a Job ID is not provided, the script or system could default to generating one using a method like those described below.

What should the Job ID be?

While the method for generating a Job ID can vary based on the system or requirements, common practices include:

  • Timestamps: Using a precise timestamp (down to milliseconds or microseconds) ensures uniqueness, as no two moments in time are the same.
  • UUIDs: Universally Unique Identifiers are designed to be unique across time and space and can be generated without requiring a centralized authority.
  • Sequential IDs: In systems with a central authority or database, IDs can be generated sequentially.

Ultimately, whatever you use should map to a meaningful key that is relevant to your use case.
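If you need a quick way to generate one, here is a minimal sketch (not part of Airbridge itself) showing the two most common approaches:

# Hypothetical sketch: two common ways to generate a Job ID to pass via -j.
import time
import uuid

timestamp_job_id = str(int(time.time() * 1000))  # millisecond timestamp
uuid_job_id = uuid.uuid4().hex                   # random UUID
print(timestamp_job_id, uuid_job_id)

You would then pass whichever value you choose on the command line, for example -j <your job id>.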

Tracking State

The state.json file is crucial in incremental data extraction processes. In the context of Airbyte or similar data integration tools, the state file keeps track of the last data extraction point. This ensures that in subsequent runs, the script only processes records that have been updated or created after the previous extraction rather than reprocessing the entire dataset.

The contents of the state.json file are derived from the data.json output, which the source worker generates during the data extraction process. The data.json file defines the current data state at the source during a particular run. As a result, data.json acts as the blueprint for the state.json file, ensuring the state always accurately reflects the most recent extraction point.
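As a rough sketch of this relationship, the following hypothetical helper (not part of Airbridge) pulls the last STATE message out of a data file and writes it to state.json, assuming the source's output is newline-delimited Airbyte protocol messages:

# Hypothetical sketch: derive state.json from the messages in data.json.
# Assumes newline-delimited JSON where STATE messages carry the checkpoint.
import json

def extract_state(data_path, state_path):
    last_state = None
    with open(data_path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                message = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip any non-JSON log lines
            if message.get("type") == "STATE":
                last_state = message.get("state")
    if last_state is not None:
        with open(state_path, "w", encoding="utf-8") as fh:
            json.dump(last_state, fh, indent=2)

extract_state("data.json", "state.json")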

State Structure:

The file typically contains key-value pairs where the key represents a particular stream or table, and the value represents the point to which data has been extracted. For example:

{
    "users": {
        "last_updated": "2023-08-24T10:37:49"
    },
    "orders": {
        "last_order_id": 31245
    }
}

In this example, the state.json file indicates that the users table was last extracted up to records updated at 2023-08-24T10:37:49, and the orders table was last extracted up to the order with ID 31245.

Tracking Runs With The Manifest

The manifest file serves as a reference point for understanding the specifics of each job run. By inspecting the manifest, users can glean insights about the source and destination images, the current state of data extraction, and other vital metadata about the job. This can be particularly useful for optimizing workflows, debugging, or understanding the flow and outcome of specific job runs.

A manifest will contain the following data:

  • Key: A unique hash of the source config.
  • Job: Represents the unique identifier for a specific run of the script. In the provided example, if you pass -j boo123, the manifest will record that job ID alongside the path to the state file for your run.
  • Source: Name of the source, e.g., airbyte-source-stripe.
  • Path To Data File: Full path to the data file associated with the job.
  • Path To State File: The path to the state.json file, which captures the current state of data extraction for that particular run. For example, you may want to use the most recent state file as an input to your next run via -t /path/to/state.json. This allows you to optimize your workflows.
  • Run Timestamp: Numeric value representing a timestamp emitted by the Airbyte source.
  • Modified At: Numeric value representing the time the entry was modified (or created).

The manifest JSON will look like this:

{
  "key": [
    {
      "jobid": "<Job Identifier>",
      "source": "<Source Name>",
      "data_file": "<Path to Data File>",
      "state_file_path": "<Path to State File>",
      "timestamp": <Timestamp Value>,
      "modified_at": <Modified Timestamp Value>
    }
  ]
}

Here is an example of the contents of a manifest file:

{
  "7887ff97bdcc4cb8360cd0128705ea9b": [
    {
      "jobid": "RRRR1345",
      "source": "airbyte-source-stripe",
      "data_file": "/tmp/mydata/airbyte-source-stripe/1693311748/data_1693311747.json",
      "state_file_path": "/tmp/mydata/airbyte-source-stripe/1693311748/state.json",
      "timestamp": 1693311748,
      "modified_at": 1693311795
    },
    {
      "jobid": "RRRR1346",
      "source": "airbyte-source-stripe",
      "data_file": "/tmp/mydata/airbyte-source-stripe/1693313177/data_1693313176.json",
      "state_file_path": "/tmp/mydata/airbyte-source-stripe/1693313177/state.json",
      "timestamp": 1693313177,
      "modified_at": 1693313222
    }
  ]
}
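As an illustration of how you might consume the manifest, the hypothetical helper below (assuming a manifest file shaped like the example above; the file name and location are assumptions) finds the newest state file recorded for a given source-config key so it can be passed to the next run via -t:

# Hypothetical helper for reading a manifest shaped like the example above.
import json

def latest_state_file(manifest_path, key):
    with open(manifest_path, "r", encoding="utf-8") as fh:
        manifest = json.load(fh)
    runs = manifest.get(key, [])
    if not runs:
        return None
    # Pick the entry with the most recent run timestamp.
    newest = max(runs, key=lambda run: run["timestamp"])
    return newest["state_file_path"]

print(latest_state_file("manifest.json", "7887ff97bdcc4cb8360cd0128705ea9b"))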

Support

As an open-source initiative, we are proud to be part of a community that collaborates, shares knowledge, and aids one another.

Airbyte Connectors

If you encounter any issues or have queries related to Airbyte source or destination connectors, you should:

  • Review Existing Issues: Before reaching out to the Airbyte community, please take a moment to review the Airbyte GitHub issues. That dedicated community has already addressed many common questions and issues.

  • Engage with Connector Maintainers And Contributors: If your question or issue pertains to a specific connector, directly engage the respective connector maintainer or contributor on GitHub. They possess specialized knowledge about their connector and can provide detailed insights.

Install Python

Here's a step-by-step guide on how to check if Python is installed and its version, along with instructions for installing Python on macOS, Windows 10+, and Ubuntu Linux:

Checking Python Installation and Version

  1. Check if Python is Installed: Open a terminal or command prompt and enter the following command:

    python --version

    If Python is installed, the version number will be displayed. If it's not installed, you'll see an error message.

  2. Check Python Version: To check the exact version of Python installed, you can run:

    python -V

Installing Python on macOS

  1. Check Installed Version (if any): Open the terminal and run the following command to check if Python is already installed:

    python --version
  2. Install Homebrew (if not installed): If Homebrew is not installed, you can install it using the following command:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  3. Install Python using Homebrew: After installing Homebrew, you can install Python with the following command:

    brew install python

Installing Python on Windows 10+

  1. Download Python Installer: Visit the official Python website (https://www.python.org/downloads/) and download the latest version of Python for Windows.

  2. Run Installer: Double-click the downloaded installer (.exe) file and follow the installation wizard. Make sure to check the option "Add Python to PATH" during installation.

  3. Verify Installation: Open Command Prompt (cmd) and run the following command to verify the installation:

    python --version

Installing Python on Ubuntu Linux

  1. Check Installed Version (if any): Open the terminal and run the following command to check if Python is already installed:

    python3 --version
  2. Update Package List: Run the following command to update the package list:

    sudo apt update
  3. Install Python: Install Python 3 using the following command:

    sudo apt install python3

Additional Notes

  • On Linux, you might need to use python3 (or a versioned command such as python3.x), as python often refers to Python 2.
  • On Windows, you can use "Command Prompt" or "PowerShell" to run the commands.
  • Python's official website provides detailed installation guides for various platforms: https://www.python.org/downloads/installing/

Make sure to adjust the commands as needed based on your system configuration and requirements.

Set Up the Python Environment with Poetry

Poetry is a dependency management and packaging tool that simplifies working with Python. If you want to use Poetry, see the steps below:

  • Install Poetry: You can install it via curl -sSL https://install.python-poetry.org | python3 -.

  • Install Project Dependencies: To get your environment ready, run poetry install. This command will install both the runtime and development dependencies for the project in an isolated virtual environment managed by Poetry.

Frequently Asked Questions

I'm getting permissions issues when running run.py or config.py

You may need to set execute permissions (chmod +x run.py config.py) if you get a permission-denied error.

How is source data organized?

Data outputs from the source container will have the following structure locally. The contents of a data file such as airbyte-source-stripe/1693336785/data_1693336784.json are a bit of a black box, as they are defined by the source.

├── mydataoutput
│   └── airbyte-source-stripe
│       ├── 1693336785
│       │   ├── data_1693336784.json
│       │   ├── hsperfdata_root
│       │   └── state.json
│       ├── 1693352821
│       │   ├── data_1693352820.json
│       │   ├── hsperfdata_root
│       │   └── state.json
│       ├── 1693353402
│       │   ├── data_1693353401.json
│       │   ├── hsperfdata_root
│       │   └── state.json
│       ├── 1693370559
│       │   ├── data_1693370558.json
│       │   ├── hsperfdata_root
│       │   └── state.json

How can I control where the outputs are sent for different accounts in the same source?

If you want to vary outputs, say for different Stripe accounts, set -o uniquely for each account. For example, let's say you have two Stripe accounts, Stripe A and Stripe B.

You can vary the output path for /stripe-a/ and /stripe-b/ to ensure separation of the data, as shown in the tree and example commands below:

├── /stripe-a/
│   └── airbyte-source-stripe
│       ├── 1693336785
│       │   ├── data_1693336665.json
│       │   ├── hsperfdata_root
│       │   └── state.json
├── /stripe-b/
│   └── airbyte-source-stripe
│       ├── 1693336232
│       │   ├── data_1693336912.json
│       │   ├── hsperfdata_root
│       │   └── state.json
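For example, two runs that differ only in the source config and output path might look like this (file names are illustrative):

poetry run main -i airbyte/source-stripe -w airbyte/destination-s3 -s /airbridge/env/stripe-a-config.json -d /airbridge/env/s3-destination-config.json -c /airbridge/env/stripe-catalog.json -o /stripe-a/

poetry run main -i airbyte/source-stripe -w airbyte/destination-s3 -s /airbridge/env/stripe-b-config.json -d /airbridge/env/s3-destination-config.json -c /airbridge/env/stripe-catalog.json -o /stripe-b/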

Which Python version do these scripts support?

The scripts use a shebang line (#!/usr/bin/env python3), which means they are designed for Python 3. It's recommended to use Python 3.x to ensure compatibility.

I'm facing issues related to encoding while reading or writing files. How can I resolve them?

Ensure that the files you're working with are UTF-8 encoded. The scripts expect UTF-8 encoding when reading or writing data.
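If you post-process the output yourself, here is a minimal sketch of reading and writing with explicit UTF-8 encoding, assuming newline-delimited JSON like the data files shown elsewhere in this README:

# Minimal sketch: always pass encoding="utf-8" when handling these files.
import json

with open("data.json", "r", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh if line.strip()]

with open("data-copy.json", "w", encoding="utf-8") as fh:
    for record in records:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")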

Can I modify these scripts to fit my custom workflow or use-case?

Yes, these scripts can be modified to fit specific requirements. However, make sure you understand the logic and flow before making any changes to avoid unexpected behaviors.

The scripts seem to be taking longer than expected. Is there any way to improve performance?

Ensure that any external dependencies, like Docker, are running optimally. Also, consider the size and complexity of the data you're processing. Large datasets might naturally take longer to process.

Where can I find more information or documentation related to these scripts?

The scripts contain docstrings and comments that provide insights into their operation. If they are part of a larger project, check the project's documentation or README for more details.

Are there any known limitations or constraints when using these scripts?

Airbridge and Airbyte rely on external tools like Docker, which may have its own set of requirements. Ensure your system meets all prerequisites before executing the scripts.

I'm getting errors related to missing or undefined variables. What might be the cause?

Ensure you're providing all required arguments when executing Airbridge. Check the Airbyte documentation to see what contents are expected in a config.

How can I contribute or report issues related to these scripts?

You can typically contribute or report issues via the project's GitHub repository.

Are there any security considerations I should be aware of when using these scripts?

Always be cautious with your credentials for your source and destination! Treat them as you would other credentials.

What configuration options are supported?

Currently, we support the following (an example using the long-form flags follows this list):

  • --airbyte-src-image (or -i): Specifies the Airbyte source image.
  • --src-config-loc (or -s): Indicates the location of the source configuration.
  • --airbyte-dst-image (or -w): Specifies the Airbyte destination image.
  • --dst-config-loc (or -d): Indicates the location of the destination configuration.
  • --catalog-loc (or -c): Points to the location of the catalog.
  • --output-path (or -o): Designates the path where the output should be stored.
  • --state-file-path (or -t): Specifies the path to the state file.
  • --runtime-configs (or -r): Points to an external configuration file for additional settings.
  • --job (or -j): Represents a job identifier.
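For reference, the Stripe-to-S3 example from the Quick Start could also be written with the long-form flags (paths are illustrative):

poetry run main --airbyte-src-image airbyte/source-stripe --airbyte-dst-image airbyte/destination-s3 --src-config-loc /airbridge/env/stripe-source-config.json --dst-config-loc /airbridge/env/s3-destination-config.json --catalog-loc /airbridge/env/stripe-catalog.json --output-path /airbridge/tmp/mydataoutput/ --job 1234RDS34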

What is the purpose of the --airbyte-src-image and --airbyte-dst-image arguments?

These arguments specify the Docker images for the Airbyte source and destination. Provide the names of the images you wish to use as the source and destination in your data workflow.

How do I use the --src-config-loc and --dst-config-loc arguments?

These arguments point to the locations of the source and destination configurations, respectively. Provide the path to your configuration files when using these arguments.

What does the --catalog-loc argument do?

This argument specifies the location of the catalog file, which typically contains metadata or schema information for the data being processed.

Where will the output data be stored?

The output data's location is determined by the --output-path argument. Provide the directory or path where you want the processed data to be saved.

I want to continue from where I left off in my data workflow. How do I do that?

Use the --state-file-path argument to specify the location of your state file. This file usually contains information about the last processed data point, allowing the script to resume from that point.
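For example, to resume from the state recorded by a previous run (paths are illustrative):

poetry run main -i airbyte/source-stripe -w airbyte/destination-s3 -s /airbridge/env/stripe-source-config.json -d /airbridge/env/s3-destination-config.json -c /airbridge/env/stripe-catalog.json -o /airbridge/tmp/mydataoutput/ -t /path/to/state.json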

Can I use an external configuration file with the script?

Yes, use the --runtime-configs argument to specify the path to an external configuration file. This allows you to provide additional settings or override default ones.

How do I specify a job ID when running the script?

Use the --job argument followed by your desired job identifier. This can be useful for logging, tracking, or differentiating between multiple runs of the script.

Reference Source Documentation Pages

The following is a reference collection of data source documentation. This is not meant to be a comprehensive list, merely a waypoint to help get people pointed in the right direction.

Each connector name below corresponds to a documentation page in the Airbyte docs:

Postgres, ActiveCampaign, Adjust, Aha API, Aircall, Airtable, AlloyDB for PostgreSQL, Alpha Vantage, Amazon Ads, Amazon Seller Partner, Amazon SQS, Amplitude, Apify Dataset, Appfollow, Apple Search Ads, AppsFlyer, Appstore, Asana, Ashby, Auth0, AWS CloudTrail, Azure Blob Storage, Azure Table Storage, Babelforce, Bamboo HR, Baton, BigCommerce, BigQuery, Bing Ads, Braintree, Braze, Breezometer, CallRail, Captain Data, Cart.com, Chargebee, Chargify, Chartmogul, ClickHouse, ClickUp API, Clockify, Close.com, CockroachDB, Coda, CoinAPI, CoinGecko Coins, Coinmarketcap API, Commcare, Commercetools, Configcat API, Confluence, ConvertKit, Convex, Copper, Courier, Customer.io, Datadog, DataScope, Db2, Delighted, Dixa, Dockerhub, Dremio, Drift, Drupal, Display & Video 360, Dynamodb, End-to-End Testing Source for Cloud, End-to-End Testing Source, Elasticsearch, EmailOctopus, Everhour, Exchange Rates API, Facebook Marketing, Facebook Pages, Faker, Fastbill, Fauna, Files (CSV, JSON, Excel, Feather, Parquet), Firebase Realtime Database, Firebolt, Flexport, Freshcaller, Freshdesk, Freshsales, Freshservice, FullStory, Gainsight-API, GCS, Genesys, getLago API, GitHub, GitLab, Glassfrog, GNews, GoCardless, Gong, Google Ads, Google Analytics 4 (GA4), Google Analytics (Universal Analytics), Google Directory, Google PageSpeed Insights, Google Search Console, Google Sheets, Google-webfonts, Google Workspace Admin Reports, Greenhouse, Gridly, Gutendex, Harness, Harvest, HTTP Request, Hubplanner, HubSpot, Insightly, Instagram, Instatus, Intercom, Intruder.io API, Ip2whois API, Iterable, Jenkins, Jira, K6 Cloud API, Kafka, Klarna, Klaviyo, Kustomer, Kyriba, Kyve Source, Launchdarkly API, Lemlist, Lever Hiring, LinkedIn Ads, LinkedIn Pages, Linnworks, Lokalise, Looker, Magento, Mailchimp, MailerLite, Mailersend, MailGun, Mailjet - Mail API, Mailjet - SMS API, Marketo, Merge, Metabase, Microsoft Dataverse, Microsoft Dynamics AX, Microsoft Dynamics Customer Engagement, Microsoft Dynamics GP, Microsoft Dynamics NAV, Microsoft Teams, Mixpanel, Monday, Mongo DB, Microsoft SQL Server (MSSQL), My Hours, MySQL, N8n, NASA, Oracle Netsuite, News API, Newsdata API, Notion, New York Times, Okta, Omnisend, OneSignal, Open Exchange Rates, OpenWeather, Opsgenie, Oracle Peoplesoft, Oracle Siebel CRM, Oracle DB, Orb, Orbit, Oura, Outreach, PagerDuty, Pardot, Partnerstack, Paypal Transaction, Paystack, Pendo, PersistIq, Pexels-API, Pinterest, Pipedrive, Pivotal Tracker, Plaid, Plausible, Pocket, PokéAPI, Polygon Stock API, Postgres, PostHog, Postmarkapp, PrestaShop, Primetric, Public APIs, Punk-API, PyPI, Qonto, Qualaroo, QuickBooks, Railz, RD Station Marketing, Recharge, Recreation.gov API, Recruitee, Recurly, Redshift, Reply.io, Retently, RingCentral, Robert Koch-Institut Covid, Rocket.chat API, RSS, S3, Salesforce, Salesloft, SAP Business One, sap-fieldglass, SearchMetrics, Secoda API, Sendgrid, Sendinblue API, Senseforce, Sentry, SFTP Bulk, SFTP, Shopify, Shortio, Slack, Smaily, SmartEngage, Smartsheets, Snapchat Marketing, Snowflake, Sonar Cloud API, SpaceX-API, Spree Commerce, Square, Statuspage.io API, Strava, Stripe, Sugar CRM, SurveySparrow, SurveyCTO, SurveyMonkey, Talkdesk Explore, Tempo, Teradata, The Guardian API, TiDB, TikTok Marketing, Timely, TMDb, Todoist, Toggl API, TPL/3PL Central, Trello, TrustPilot, TVMaze Schedule, Twilio Taskrouter, Twilio, Twitter, Tyntec SMS, Typeform, Unleash, US Census API, Vantage API, VictorOps, Visma e-conomic, Vittally, Waiteraid, Weatherstack, Webflow, Whisky Hunter, Wikipedia Pageviews, WooCommerce, Wordpress, Workable, Workramp, Wrike, Xero, XKCD, Yahoo Finance Price, Yandex Metrica, Yotpo, Younium, YouTube Analytics, Zapier Supported Storage, Zencart, Zendesk Chat, Zendesk Sell, Zendesk Sunshine, Zendesk Support, Zendesk Talk, Zenefits, Zenloop, Zoho CRM, Zoom, Zuora


Legal

Airbridge is authored under the MIT License.

License Disclaimer for Contributors

By contributing to this project, you agree to license your contributions under the terms of the project's LICENSE, which is an MIT License. This means that your contributions will be available to the public under the same license.

If your contributions include code snippets, files, or other content that is subject to a different license, please provide clear indication of the license terms associated with that content. By making a contribution, you confirm that you have the necessary rights to grant this license.

We appreciate your collaboration and commitment to open source principles. If you have any questions or concerns about licensing, please feel free to contact us.

License Liability Disclaimer

The contributors to this project, including the project maintainers and collaborators, make every effort to ensure that the project's licensing is accurate and compliant with open-source principles. However, it's important to note that this project may include third-party content, references to products, brands, or software that are subject to different licenses or rights.

While we strive to respect the rights of all content, products, brands, or software, it's possible that certain materials in this project may inadvertently infringe upon the rights of others. If you believe that any content, references, or materials in this project violate your rights or are not appropriately attributed, please let us know immediately so we can address the concern.

Additionally, if you, as a contributor, provide content or contributions that are subject to a different license, or if you reference specific products, brands, or software, you acknowledge and confirm that you have the necessary rights to grant the license under which you are contributing, and that any references are made in accordance with the respective rights holders' terms.

We want to emphasize that any references made to products, brands, or software do not imply any endorsement or affiliation. All products, brands, and software referenced here retain their respective rights. This includes the responsibility to clearly indicate the license terms for any content you contribute.

We value the principles of open collaboration, transparency, and mutual respect within the open-source community. If you have questions or concerns about licensing, attribution, references, or the content of this project, please reach out to us.

Your feedback and vigilance help us maintain the integrity of this project's licensing, references, and its commitment to respecting the rights of all parties involved.