
Introduction to Pinot

Pinot is a distributed online analytics datastore that allows you to slice and dice datasets with billions of rows at sub-second latency. It is used at LinkedIn to deliver scalable real-time analytics with low latency. It supports near-real-time ingestion of events through Kafka as well as batch ingestion through Hadoop, and is designed to scale horizontally.

What is it for (and not)?

Pinot is well suited for analytical use cases on immutable, append-only data that require low latency between an event being ingested and it becoming available for queries. The design choices made to achieve these goals impose certain limitations, described under "What is not supported" below.

Key Features

  • A column-oriented database with various compression schemes such as Run-Length Encoding and Fixed-Bit encoding
  • Pluggable indexing technologies: Sorted Index, Bitmap Index, Posting List
  • Ability to optimize the query/execution plan based on query and segment metadata
  • Near-real-time ingestion from Kafka and batch ingestion from Hadoop
  • A SQL-like language that supports selection, aggregation, filtering, group by, order by, and distinct queries on fact data (see the example queries after this list)
  • Support for multivalued fields
  • Horizontally scalable and fault tolerant
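
For example, queries look roughly like the following (a sketch only: the table and column names come from the baseball quick-start data later in this page, and exact query syntax may vary between Pinot versions):

select playerName, runs from baseballStats where yearID >= 2000 limit 10
select sum(runs), count(*) from baseballStats where teamID = 'BOS' group by yearID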

What is not supported

  • Pinot is not a replacement for a database, i.e., it cannot be used as a source-of-truth store and cannot mutate data
  • Pinot is not a replacement for a search engine, i.e., full-text search and relevance ranking are not supported
  • User-defined functions are not supported, although they may be added in the future

Pinot works very well for querying time-series data with many dimensions and metrics. For example, it can answer analytical questions over profile views or ad campaign performance, such as "who viewed this profile in the last few weeks?" or "how many ads were clicked per campaign?" A sketch of such a query follows.
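
The "ads clicked per campaign" question could be phrased like the query below, against a hypothetical adClicks table (the table and column names here are purely illustrative and not part of the quick-start data):

select count(*) from adClicks where daysSinceEpoch > 16400 group by campaignId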

Terminology

Before we get to the quick start, let's go over the terminology.

  • Table: A table is a logical abstraction that refers to a collection of related data. It consists of columns and rows (documents). The table schema defines the column names and their metadata (see the schema sketch after this list).
  • Segment: A logical table is divided into multiple physical units referred to as segments.
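
As a rough sketch, a table schema might look like the following (the column names are taken from the baseball quick-start data below; the exact JSON schema format depends on the Pinot version):

{
  "schemaName": "baseballStats",
  "dimensionFieldSpecs": [
    { "name": "playerName", "dataType": "STRING" },
    { "name": "teamID", "dataType": "STRING" }
  ],
  "metricFieldSpecs": [
    { "name": "runs", "dataType": "INT" }
  ],
  "timeFieldSpec": {
    "incomingGranularitySpec": { "name": "yearID", "dataType": "INT", "timeType": "YEARS" }
  }
}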

Pinot has the following Roles/Components:

  • Pinot Controller: Manages the nodes in the cluster. Responsibilities:
    • Handles all create, update, and delete operations on tables and segments.
    • Computes the assignment of tables and their segments to Pinot Servers.
  • Pinot Server: Hosts one or more physical segments. Responsibilities:
    • When assigned a pre-created segment, downloads and loads it. When assigned a Kafka topic, starts consuming from a subset of the topic's partitions.
    • Processes queries forwarded by the Pinot Broker and returns the responses to the Pinot Broker.
  • Pinot Broker: Accepts queries from clients, routes them to one or more servers (based on the routing strategy), and merges the responses from those servers before sending the result back to the client.

Pinot leverages Apache Helix for cluster management.

[Insert Architecture Image here]

More information on Pinot design and architecture can be found here: [insert link]


Quick Start

In this quick start, we will load baseball stats from 1878 to 2013 into Pinot and run queries against them. The baseball dataset contains 100,000 records and 15 columns.

Step 1: Install Pinot

Option A: Build from code

git clone https://github.com/linkedin/pinot.git
cd pinot
mvn install package -DskipTests

Option B: Download tarball

wget [insert link here]
tar -xzf pinot-*-pkg.tar.gz

Step 2: Run quick start

Execute the quickstart.sh script in the bin folder, which performs the following:

  • Converts the baseball data from CSV into the Pinot data format.
  • Starts the Pinot components: Zookeeper, Controller, Broker, and Server.
  • Uploads the segment to Pinot.
cd bin
./quickstart.sh

You should see the following output:

[Insert sample output here]


Step 3: Pinot Data Explorer

Pinot comes with a simple web interface for querying data. Open your browser and go to http://localhost:9000/query. Queries can also be sent from the command line, as sketched below.
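
A minimal sketch of querying from the command line, assuming the broker listens on its default port 8099 (the endpoint and payload format may vary between Pinot versions):

curl -X POST -d '{"pql":"select count(*) from baseballStats"}' http://localhost:8099/query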


Pinot usage

At LinkedIn, Pinot powers dozens of internal and customer-facing analytical applications, such as Who Viewed My Profile and Who Viewed My Jobs, with interactive-level response times.
