
Reading composite types from Kafka is broken #25

Open
zilder opened this issue Jul 4, 2018 · 0 comments

zilder commented Jul 4, 2018

CREATE TYPE abc AS (a INT, b TEXT, c TIMESTAMP);

CREATE FOREIGN TABLE kafka_json (
	part int OPTIONS (partition 'true'),
	offs bigint OPTIONS (offset 'true'),
	x abc)
SERVER kafka_server
OPTIONS (format 'json', topic 'json', batch_size '30', buffer_delay '100');

INSERT INTO kafka_json (x) VALUES ((1, 'test', current_timestamp));

SELECT * FROM kafka_json;
ERROR:  malformed record literal: "{"a":1,"b":"test","c":"2018-07-04T11:16:55.671986"}"
DETAIL:  Missing left parenthesis.

Postgres expects the input string for a composite type to be formatted like:

(1,"test","2018-07-04T11:16:55.671986")

An easy solution would be to simply strip the keys from the input string and replace {} with (). But this won't work if the JSON document has a different key order, or extra or missing keys. So it makes sense to write a simple JSON parser that collects the individual key-value pairs, reorders them to match the composite's column order, and fills the gaps with NULLs. It is also possible to use Postgres's built-in JSONB facilities to parse and manage JSON documents, but I anticipate that would be less efficient than a custom-designed solution.
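As a sketch of the reorder-and-fill-NULLs idea (in Python for illustration only; the actual fix would live in the FDW's C code, and the function name here is hypothetical), the transformation from a JSON object to a Postgres record literal could look like this:

```python
import json

def json_to_record_literal(json_str, columns):
    """Convert a JSON object into a Postgres composite-type literal.

    Values are emitted in the composite's declared column order,
    regardless of key order in the JSON; missing keys become empty
    fields, which Postgres reads as NULL in record syntax.
    """
    obj = json.loads(json_str)
    fields = []
    for col in columns:
        if col not in obj or obj[col] is None:
            fields.append("")  # empty field = NULL
        else:
            # quote every value and escape backslashes and double quotes
            val = str(obj[col]).replace("\\", "\\\\").replace('"', '\\"')
            fields.append('"%s"' % val)
    return "(" + ",".join(fields) + ")"

# Keys arrive in a different order than the composite declares them:
print(json_to_record_literal(
    '{"b":"test","a":1,"c":"2018-07-04T11:16:55.671986"}',
    ["a", "b", "c"]))
# -> ("1","test","2018-07-04T11:16:55.671986")
```

Quoting every field is always valid in record syntax (Postgres will coerce "1" to an int), which sidesteps per-type formatting; extra JSON keys are simply ignored, and missing ones collapse to NULL.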
