
Insert, Update and delete for geospatial data in hive #107

Open
SrinivasRIL opened this issue Aug 17, 2016 · 4 comments
Comments


SrinivasRIL commented Aug 17, 2016

Hi,
I wanted to update and delete some records in a Hive table that contains shape attributes, and I was reading this article:
http://unmeshasreeveni.blogspot.in/2014/11/updatedeleteinsert-in-hive-0140.html

So basically I need to create a Hive table with ACID support. However, we are not using the ORC file format; we are using the unenclosed JSON format. Does that mean we can't update or delete any records with the unenclosed JSON format? Or, if there is a way, can you suggest how to go about it?

Thanks

@randallwhitman
Contributor

Have you tried it out, and what was the result?

Author

SrinivasRIL commented Aug 18, 2016

I have set all the properties described in the article and created a bucketed table, but in the usual way, as unenclosed JSON, and loaded the data. Unsurprisingly, the update did not give the expected results, because I did not store the table as an ORC file:

update bucketenode set businessranking ='2' where state = 'Goa'

I got the following error

Attempt to do update or delete using transaction manager that does not support these operations.

I set the following properties

hive> set hive.support.concurrency=true;
hive> set hive.enforce.bucketing=true;
hive> set hive.exec.dynamic.partition.mode=nonstrict;
hive> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
hive> set hive.compactor.initiator.on=true;
hive> set hive.compactor.worker.threads=1;

I am not sure how to store the data as an ORC file when we have to specify STORED AS INPUTFORMAT 'com.esri.json.hadoop.UnenclosedJsonInputFormat' while creating a table over a JSON file.

If you can provide any insight into this problem, it would be great, as we need to start updating records in the near future.
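For context, a table like the one described would typically be declared along these lines. This is a sketch only: the table name, column names, and bucket count are illustrative guesses, and the SerDe/input-format class names follow the Esri spatial-framework-for-hadoop conventions of that era, so verify them against your installed version.

```sql
-- Hypothetical unenclosed-JSON table (names and bucket count illustrative).
CREATE TABLE bucketenode (
  state           STRING,
  businessranking STRING,
  shape           BINARY
)
CLUSTERED BY (state) INTO 4 BUCKETS
ROW FORMAT SERDE 'com.esri.hadoop.hive.serde.JsonSerde'
STORED AS INPUTFORMAT 'com.esri.json.hadoop.UnenclosedJsonInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
```

A text-backed table like this cannot be made transactional, which is consistent with the "transaction manager that does not support these operations" error above.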

@randallwhitman
Contributor

I am not sure how to store it as an orc file when we have to specify STORED AS INPUTFORMAT 'com.esri.json.hadoop.UnenclosedJsonInputFormat' while we create a table with a json file appended to it.

JSON and ORC are completely different formats. You can choose to work with data in its original format, or convert one format to another.
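One way to act on that answer, sketched with illustrative names (the schema and bucket count are assumptions, not the actual table definition): copy the JSON-backed table into a bucketed ORC table with the transactional property enabled, then run the update against the ORC copy. This assumes Hive 0.14+ ACID semantics and the session properties set earlier in the thread.

```sql
-- Hypothetical ACID-capable ORC copy of the JSON-backed table.
CREATE TABLE bucketenode_orc (
  state           STRING,
  businessranking STRING,
  shape           BINARY
)
CLUSTERED BY (state) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Convert: read via the JSON input format, write as ORC.
INSERT INTO TABLE bucketenode_orc
SELECT state, businessranking, shape FROM bucketenode;

-- Updates are now supported on the ORC table.
UPDATE bucketenode_orc SET businessranking = '2' WHERE state = 'Goa';
```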

@randallwhitman
Contributor

This is Open-Source and accepts contributions.
