How to use the example (counter) #239

Open
Thegaram opened this issue Apr 23, 2018 · 8 comments

@Thegaram

Hi, thanks for open-sourcing this great lib!

I am having problems running the example counter application. I've successfully built everything, but when I run the example I get the following error:

root@f693fedf78d4:/rocksplicator/build# ./examples/counter_service/counter 
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0423 12:47:09.583624  4238 availability_zone.cpp:69] Got invalid az: 
I0423 12:47:09.591229  4241 thrift_client_pool.h:151] Started counter::CounterAsyncClient thrift client IO thread
I0423 12:47:09.591681  4242 thrift_client_pool.h:151] Started counter::CounterAsyncClient thrift client IO thread
E0423 12:47:09.592037  4243 file_watcher.cpp:196] Failed to inotify_add_watch() with errno 2 : No such file or directory
F0423 12:47:09.592182  4238 thrift_router.h:118] Check failed: common::FileWatcher::Instance()->AddFile( config_path_, [this, local_group] (std::string content) { std::shared_ptr<const ClusterLayout> new_layout( parser_(std::move(content), local_group)); { folly::RWSpinLock::WriteHolder write_guard(layout_rwlock_); cluster_layout_.swap(new_layout); } }) Failed to watch 
*** Check failure stack trace: ***
Aborted

I guess I need to pass --shard_config_path. What is the syntax of this file?

What happens when I run the example application? I understand it starts a server. I guess the python cluster management scripts communicate with this server? So is this a single Master node? How do I set up Slaves and how do I connect these components?

What is the advantage of using these admin tools / cluster management modules, as opposed to instantiating RocksDBReplicator directly from my code?

Thanks!

@newpoo
Contributor

newpoo commented Apr 25, 2018

Hi, thanks for trying it out.

Yes, you'll need a config file containing the shard mapping for your cluster.
A config example can be found here.
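For reference, the shard map is a JSON file; a sketch of the expected shape (segment names, hosts, and ports here are illustrative, matching the example config quoted later in this thread):

```json
{
    "user_pins": {
        "num_leaf_segments": 3,
        "127.0.0.1:8090": ["00000", "00001", "00002"],
        "127.0.0.1:8091": ["00002"],
        "127.0.0.1:8092": ["00002"]
    },
    "interest_pins": {
        "num_leaf_segments": 2,
        "127.0.0.1:8090": ["00000"],
        "127.0.0.1:8091": ["00001"]
    }
}
```

Each top-level key appears to name a segment (a DB namespace), split into num_leaf_segments shards distributed across the listed host:port entries.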

We are migrating from an admin-tool-based approach to a Helix-based solution, implemented in the cluster_management directory. The Helix-based solution has greatly reduced our operational burden.

Would you mind telling us more about your use case, and how urgent it is? We are going to open-source a production system based on rocksplicator soon, integrated with both the admin tool and the Helix-based solution.

@Thegaram
Author

Great news @newpoo, looking forward to your release!

We are looking for a reasonably fast, horizontally scalable, replicated key-value store, and RocksDB + rocksplicator looks like a good solution. The fact that you use this setup in production at Pinterest suggests it would work for us too. So for now, I would just like to try rocksplicator and see how it works.

Now I am trying to set up a demo project using rocksdb_replicator directly. I have a master node:

// master node: open the DB and register it with the replicator as MASTER
using namespace std::chrono_literals; // for the 100s duration literal

rocksdb::DB* db;
rocksdb::Options options;
options.create_if_missing = true;
rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/masterdb", &db);
assert(s.ok());

RocksDBReplicator* my_replicator = RocksDBReplicator::instance();
my_replicator->addDB("my_testdb", std::shared_ptr<rocksdb::DB>(db), DBRole::MASTER);

std::this_thread::sleep_for(100s); // keep the process alive

And two slaves, each a separate process: one writing to the DB, the other reading from it:

// slave 1: register as SLAVE pointing at the master's address, then write
using namespace std::chrono_literals;

rocksdb::DB* db;
rocksdb::Options options;
options.create_if_missing = true;
rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/slavedb1", &db);
assert(s.ok());

RocksDBReplicator* my_replicator = RocksDBReplicator::instance();
my_replicator->addDB("my_testdb", std::shared_ptr<rocksdb::DB>(db), DBRole::SLAVE, folly::SocketAddress("127.0.0.1", 9091));

while (true)
{
    // write something ...
    std::this_thread::sleep_for(10s);
}
// slave 2: a separate process that reads the same DB
rocksdb::DB* db;
rocksdb::Options options;
options.create_if_missing = true;
rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/slavedb2", &db);
assert(s.ok());

RocksDBReplicator* my_replicator = RocksDBReplicator::instance();
my_replicator->addDB("my_testdb", std::shared_ptr<rocksdb::DB>(db), DBRole::SLAVE, folly::SocketAddress("127.0.0.1", 9091));

while (true)
{
    // read the same key...
    std::this_thread::sleep_for(2s);
}

The idea is that the new values written by slave1 would be visible on slave2 via replication through the master. However, when I run the slaves, they are unable to connect to the master node.

Could you please give me some pointers about this? Is this the right way to use rocksplicator or am I badly misunderstanding things? Thanks!

@newpoo
Contributor

newpoo commented Apr 27, 2018

For a Master-Slave setup, writes are only allowed to go to the Master, and the Master replicates these updates to all Slaves.
So you probably want to write to the Master and read from the Slaves.
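In other words, relative to the snippets above, the roles flip: the writing process registers its DB as MASTER, and readers register as SLAVE pointing at the master's replication address. A rough sketch using the same calls as above (not a complete program; it assumes the rocksplicator and rocksdb headers are available, and that writes applied to the master's DB handle are picked up from its WAL by the replicator):

```cpp
// writer process: register as MASTER; all writes go here
rocksdb::DB* db;
rocksdb::Options options;
options.create_if_missing = true;
assert(rocksdb::DB::Open(options, "/tmp/masterdb", &db).ok());

auto shared_db = std::shared_ptr<rocksdb::DB>(db);
RocksDBReplicator::instance()->addDB("my_testdb", shared_db, DBRole::MASTER);
shared_db->Put(rocksdb::WriteOptions(), "counter", "42"); // write on the master

// reader process (separate binary): register as SLAVE with the master's address
// open /tmp/slavedb as above, then:
//   RocksDBReplicator::instance()->addDB("my_testdb", shared_db, DBRole::SLAVE,
//                                        folly::SocketAddress("127.0.0.1", 9091));
//   std::string value;
//   shared_db->Get(rocksdb::ReadOptions(), "counter", &value); // observes the master's writes
```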

In a production setting, you may use the interface defined in the rocksdb_admin directory directly. The counter service has some example code.

@Thegaram
Author

Thegaram commented May 2, 2018

Thanks for the timely reply @newpoo!

Alright, I'll go back to the counter example for now. I started a counter_service instance with the config you mentioned. How do I set up the actual masters and slaves after this? What is the meaning of user_pins and interest_pins?

I tried to send a request to http://127.0.0.1:9090/getCounter but it did not work. Which endpoints can I use? (Sorry, I am new to the frameworks you use.)

@E-Saei

E-Saei commented Nov 13, 2019

Hi, I wanted to thank you for open-sourcing this great library.
I would like to try the example, but I get a similar error to @Thegaram's. I stored this config
"{"
" "user_pins": {"
" "num_leaf_segments": 3,"
" "127.0.0.1:8090": ["00000", "00001", "00002"],"
" "127.0.0.1:8091": ["00002"],"
" "127.0.0.1:8092": ["00002"]"
" },"
" "interest_pins": {"
" "num_leaf_segments": 2,"
" "127.0.0.1:8090": ["00000"],"
" "127.0.0.1:8091": ["00001"]"
" }"
"}";
in a file and passed it to the counter executable as shard_config_path. However, I get an error saying the file cannot be parsed.
My question is: do I need to create the cluster myself, and if so, how, and what is the correct syntax for the config file?
I would like to have a master and one slave.
I appreciate any help.
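For what it's worth, the config above looks like it was copied as a C++ string literal; stripping the per-line quotes and the trailing semicolon leaves plain JSON, which is presumably what the parser expects:

```json
{
    "user_pins": {
        "num_leaf_segments": 3,
        "127.0.0.1:8090": ["00000", "00001", "00002"],
        "127.0.0.1:8091": ["00002"],
        "127.0.0.1:8092": ["00002"]
    },
    "interest_pins": {
        "num_leaf_segments": 2,
        "127.0.0.1:8090": ["00000"],
        "127.0.0.1:8091": ["00001"]
    }
}
```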

@linuxpham

Is there any step-by-step documentation for starting the "COUNTER" service?

@xunliu

xunliu commented Sep 27, 2020

@newpoo Hi, thanks for open-sourcing this great lib!

  1. I followed the steps below to test. I wrote a config file at /root/sync/rocksplicator.conf:
{
    "user_pins": {
        "num_leaf_segments": 3,
        "127.0.0.1:8090": ["00000", "00001", "00002"],
        "127.0.0.1:8091": ["00002"],
        "127.0.0.1:8092": ["00002"]
    },
    "interest_pins": {
        "num_leaf_segments": 2,
        "127.0.0.1:8090": ["00000"],
        "127.0.0.1:8091": ["00001"]
    }
}
  2. Started up /example/counter_service/counter:
./counter --shard_config_path="/root/sync/rocksplicator.conf" --port=9090
  3. Started up /example/counter_service/stress:
./stress --server_ip="127.0.0.1"

But counter prints the following errors:

root@b968758e9b49:~/sync/rocksplicator/cmake-build-docker/examples/counter_service# ./counter --shard_config_path="/root/sync/rocksplicator.conf" --port=9090
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0927 09:44:18.673982   935 availability_zone.cpp:71] Got invalid az: 
I0927 09:44:18.675344   938 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.676810   940 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.676935   939 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.677378   941 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.678081   942 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.678627   943 thrift_client_pool.h:179] Started counter::CounterAsyncClient thrift client IO thread with name io-CounterAsyncClient
I0927 09:44:18.678966   935 thrift_router.h:125] Local Group used by ThriftRouter: 
I0927 09:44:18.689565   935 status_server.cpp:181] Starting status server at 9999
I0927 09:44:18.689827   935 counter.cpp:101] Starting server at port 9090
E0927 09:45:25.636703  1403 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.638242  1321 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.639525  1397 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.641865  1416 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.643050  1452 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.644629  1433 thrift_router.h:209] Invalid segment found: default
E0927 09:45:25.647233  1372 thrift_router.h:209] Invalid segment found: default
....

What am I missing? Maybe a few rocksdb instances should be started?

@blueChild

> Thanks for the timely reply @newpoo!
>
> Alright, so I'll go back to the counter example for now. So I started a counter_service instance with the config you mentioned. How to set up the actual masters and slaves after this? What is the meaning of user_pins and interest_pins?
>
> I tried to send a request to http://127.0.0.1:9090/getCounter but it did not work, what endpoints can I use? (Sorry, I am new to the frameworks you use.)

Hi, I have the same questions. Did you find the answers already?

6 participants