This repository has been archived by the owner on Oct 22, 2021. It is now read-only.

{"error":"error","reason":"internal_server_error"} happened when Clustering #120

Open
John1Q84 opened this issue Dec 7, 2012 · 6 comments

Comments


John1Q84 commented Dec 7, 2012

Hi, I'm very new to NoSQL databases and very interested in BigCouch clustering. However, after setting up 5 BigCouch servers on AWS EC2, I get an error like this:

root@due1cdb004:/opt/bigcouch/etc# curl -X PUT http://localhost:5984/sharddb
{"error":"error","reason":"internal_server_error"}

I have no idea what I did wrong; I just followed the steps in the Cloudant guide on the website.

First, I joined two instances into one cluster with a curl command like "$ curl -X PUT http://localhost:5986/nodes/due1cdb003@cdb003p -d{}". That looked fine. Then I tried to create a database 'albums' with "$ curl -X PUT http://localhost:5984/albums", and I saw the error message above.

What am I doing wrong?
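For reference, the two steps above look roughly like this. The node and database names are taken from the commands in this thread; in BigCouch, 5986 is the node-local ("backdoor") admin port and 5984 is the clustered interface:

```shell
# Sketch of the cluster-join and database-creation steps from this thread.
# Requires a running BigCouch node, so it cannot be run standalone.

# 1. From one node, register the other node in the cluster
#    (due1cdb003@cdb003p is the node name used above; the host part
#    should be resolvable from every other node):
curl -X PUT http://localhost:5986/nodes/due1cdb003@cdb003p -d {}

# 2. Create a database through the clustered interface on 5984:
curl -X PUT http://localhost:5984/albums
```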


John1Q84 commented Dec 7, 2012

And the log looks like this:

[Fri, 07 Dec 2012 10:50:37 GMT] [error] [emulator] [--------] Error in process <0.6323.2> on node 'due1cdb001@cdb001p' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}


wohali commented Dec 7, 2012

"$ curl -X PUT http://localhost:5986/nodes/due1cdb003@cdb003p -d{}"

This is your problem. After the at-sign (@), use a fully qualified domain name that resolves to the other machine's instance. The {rexi_DOWN,noconnect} error means the node due1cdb001@cdb001p cannot reach the other machine.
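As a quick sanity check, the host part after the @ can be inspected in the shell; a bare short name like cdb003p will not resolve from the other machine unless it is in /etc/hosts. The node name below is a hypothetical example:

```shell
# Hypothetical node name; substitute the one you PUT to /nodes/.
node="due1cdb003@ec2-host.compute-1.amazonaws.com"

host="${node#*@}"   # strip everything up to and including the "@"
case "$host" in
  *.*) echo "ok: host part '$host' looks fully qualified" ;;
  *)   echo "warning: '$host' is a short hostname; other nodes may not resolve it" ;;
esac
```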

@John1Q84

Thanks wohali for your answer.

But I still get the same error in the log after I put the EC2 public DNS name in place of 'cdb001p'.
The log is almost the same... hmm...

the error is "'cdb1@ec2-someurl.compute-1.amazonaws.com' with exit value: {{rexi_DOWN,noconnect},[{mem3_rep,rexi_call,2},{mem3_rep,replicate_batch,1},{mem3_rep,go,3},{mem3_rep,go,2}]}"

What's wrong with me... Orz (an emoticon meaning 'down on my knees')...


wohali commented Dec 10, 2012

Check the firewall rules between your nodes. Most likely the required ports are not open between them. See http://stackoverflow.com/questions/8459949/bigcouch-cluster-connection-issue for details.
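A minimal check, assuming bash, for the ports BigCouch needs between nodes: 4369 (epmd, the Erlang port mapper) and 5984/5986 (HTTP). The Erlang distribution protocol itself also uses a dynamically assigned port unless it is pinned with inet_dist_listen_min/max in vm.args, so that range must be open as well. The peer FQDN below is a placeholder:

```shell
# Placeholder: replace with the other node's FQDN.
peer="ec2-host.compute-1.amazonaws.com"

# 4369 is epmd (Erlang port mapper); 5984/5986 are BigCouch's HTTP ports.
# Needs a reachable peer, so this is a sketch rather than a standalone test.
for port in 4369 5984 5986; do
  if timeout 2 bash -c "echo > /dev/tcp/$peer/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed or filtered"
  fi
done
```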

@John1Q84

Fortunately, I found one more mistake I made: I had joined the two machines from both sides (according to the post, the join should be done from one side only).
But the symptom is the same: they are still not connected.
Of course, I checked that port 4369 lets one machine reach the other. (Actually they are in the same AWS security group, which means all ports are open between them, and I verified the port with telnet.)
Hmm... I think I'm missing some tiny point... I'm still looking.
Thanks anyway, wohali :)

@vinaysaini

Try telnetting to the other BigCouch server from the main server to see if the ports are open, using the command

telnet fqdn 5986

where fqdn is the fully qualified domain name of the other BigCouch server. It should report that it connected.
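If the port is open but the nodes still do not see each other, it may also be worth checking that the Erlang port mapper on each machine actually knows about its BigCouch node. epmd ships with Erlang; "bigcouch" is the default node short name, which may differ on your setup:

```shell
# List the Erlang nodes registered with the local epmd (port 4369).
# Run this on each machine; requires Erlang/epmd to be installed.
epmd -names
# A running BigCouch node should appear as something like:
#   name bigcouch at port <nnnn>
```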
