
Add "cookie" or other identification mechanism to SimpleLocalnet #17

Open
mboes opened this issue Jun 22, 2015 · 8 comments
Comments

@mboes
Contributor

mboes commented Jun 22, 2015

From @edsko on October 23, 2012 13:59

so that we have multiple independent Cloud Haskell applications running on the same network.

Copied from original issue: haskell-distributed/distributed-process#56

@mboes
Contributor Author

mboes commented Jun 22, 2015

From @edsko on October 23, 2012 15:11

(Possibly it might suffice to make the multicast port configurable.)
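A sketch of what that could look like (hypothetical: SimpleLocalnet currently hardcodes both the multicast address and the port, and the `MulticastSettings` record and field names below are illustrative, not the real API). Two independent clusters would then simply diverge on the port:

```haskell
-- Hypothetical sketch only: a settings record that a backend could
-- thread through initializeBackend, letting independent clusters
-- pick distinct multicast ports. Names are illustrative.
module Main where

data MulticastSettings = MulticastSettings
  { mcAddress :: String  -- e.g. "224.0.0.99"
  , mcPort    :: Int     -- distinct per cluster
  } deriving (Show, Eq)

defaultMulticastSettings :: MulticastSettings
defaultMulticastSettings = MulticastSettings "224.0.0.99" 8080

-- Two clusters on the same network diverge only on the port:
clusterA, clusterB :: MulticastSettings
clusterA = defaultMulticastSettings
clusterB = defaultMulticastSettings { mcPort = 8081 }

main :: IO ()
main = do
  print clusterA
  print clusterB
```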

@ciez

ciez commented Dec 16, 2016

The service name may already be sent with whereisRemoteAsync;

it is received with WhereIsReply service mpid.

See Control.Distributed.Process.

@qnikst
Contributor

qnikst commented Dec 16, 2016

@ciez, sorry, can you elaborate on how the registry API is relevant to this ticket? If you mean that we have a similar problem in that API, please open a relevant ticket in the distributed-process package.

@ciez

ciez commented Dec 16, 2016

so that we have multiple independent Cloud Haskell applications running on the same network.

A WhereIsReply handler may choose to handle or ignore a specific service, for example.

This is not secure; however, the ticket headline mentions identification, not authorisation.
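A sketch of that idea, with the types mocked locally since nothing here touches a real network (in distributed-process, `WhereIsReply String (Maybe ProcessId)` is delivered in response to `whereisRemoteAsync`, and `ProcessId` is opaque rather than an `Int`):

```haskell
-- Sketch: a WhereIsReply handler that accepts only replies for its
-- own service name and ignores everything else. Types are mocked.
module Main where

import Data.Maybe (mapMaybe)

type ProcessId = Int  -- stand-in for the real opaque ProcessId

data WhereIsReply = WhereIsReply String (Maybe ProcessId)

-- Keep only pids registered under *our* service name.
acceptReply :: String -> WhereIsReply -> Maybe ProcessId
acceptReply ourService (WhereIsReply service mpid)
  | service == ourService = mpid
  | otherwise             = Nothing

main :: IO ()
main = do
  let replies = [ WhereIsReply "raketka" (Just 1)
                , WhereIsReply "other"   (Just 2)
                , WhereIsReply "raketka" Nothing
                ]
  print (mapMaybe (acceptReply "raketka") replies)  -- prints [1]
```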

@qnikst
Contributor

qnikst commented Dec 16, 2016

This task is about d-p-simplelocalnet. The problem is that it does a network-wide send on a multicast address, so all nodes, even ones belonging to a different cluster, will see each other.
The idea behind this ticket is that we should include a cookie inside each notification, so that a node can decide whether it wants to add the sender to its list of known nodes or not.
So in my understanding this layer should be completely invisible to the higher d-p framework. Possibly I'm missing your idea about how to solve this.
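That proposal can be sketched in a few lines of pure Haskell (all names below are illustrative, not the SimpleLocalnet API): tag every multicast announcement with a cluster cookie, and drop announcements whose cookie differs from ours before the sender ever reaches the known-peers list.

```haskell
-- Sketch of the cookie proposal. Announcements carry a cookie;
-- a node filters them against its own cookie before accepting
-- the sender as a peer. Names are illustrative.
module Main where

type NodeId = String
type Cookie = String

data Announcement = Announcement
  { annCookie :: Cookie
  , annNode   :: NodeId
  } deriving Show

-- Decide whether to add the announcing node to our peer list.
acceptPeer :: Cookie -> Announcement -> Bool
acceptPeer ours ann = annCookie ann == ours

main :: IO ()
main = do
  let ours = "cluster-a-secret"
      seen = [ Announcement "cluster-a-secret" "nid://10.0.0.1:9001"
             , Announcement "cluster-b-secret" "nid://10.0.0.2:9001"
             ]
  print (map annNode (filter (acceptPeer ours) seen))
  -- prints ["nid://10.0.0.1:9001"]
```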

@ciez

ciez commented Dec 16, 2016

Interesting. I tried to make raketka nodes discover peers via multicast but could not, so I went for peers specified in a config.

At least on my PC it behaves like this: nodes with the same service name see each other and send messages to each other. Broadcast looks up peer pids from the node's own state.

Just checked: the service name certainly makes a difference: same service -> connected; different service -> messages are not sent/delivered.

Basically, WhereIsReply String works as a cookie.

I can send complete working code on request. Raketka is only a base project.

@qnikst
Contributor

qnikst commented Dec 16, 2016

This task is about the multicast functionality, which will not work properly if there are 2 independent d-p clusters on the same network. Basically, it is about this code:

https://github.com/haskell-distributed/distributed-process-simplelocalnet/blob/master/src/Control/Distributed/Process/Backend/SimpleLocalnet.hs#L178

https://github.com/haskell-distributed/distributed-process-simplelocalnet/blob/master/src/Control/Distributed/Process/Backend/SimpleLocalnet.hs#L232-L245

distributed-process-simplelocalnet always uses the same multicast address and the same port, so if you start 2 independent clusters and send a discovery request, clients from both clusters will reply. So the findPeers function will return both your nodes and someone else's! I really don't see how whereIs is relevant here; the d-p layer is not involved.

If you have trouble using multicast, you may want to ask on IRC, or if you hit a particular problem or bug, create an issue.
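The failure mode described above can be simulated in pure code (this is a toy model of the discovery round-trip, not the real backend): because every backend listens on the same hardcoded multicast group, a discovery request is answered by the members of every cluster, so the peer list mixes them together.

```haskell
-- Toy simulation of the cross-cluster discovery bug: every node
-- listens on the same hardcoded multicast group, so a findPeers-like
-- query collects nodes from *all* clusters. Names are illustrative.
module Main where

type Addr = (String, Int)   -- multicast (address, port)
type Node = (String, Addr)  -- (node id, group it listens on)

sharedGroup :: Addr
sharedGroup = ("224.0.0.99", 8080)  -- hardcoded for every backend

-- Everyone listening on the requested group replies.
findPeersSim :: Addr -> [Node] -> [String]
findPeersSim group nodes = [ nid | (nid, g) <- nodes, g == group ]

main :: IO ()
main = do
  let clusterA = [ ("a1", sharedGroup), ("a2", sharedGroup) ]
      clusterB = [ ("b1", sharedGroup) ]
  -- A node in cluster A asks for peers and gets cluster B's node too:
  print (findPeersSim sharedGroup (clusterA ++ clusterB))
  -- prints ["a1","a2","b1"]
```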

@ciez

ciez commented Dec 16, 2016

Yes, it is likely we are discussing different things. I am just saying that, although unintended, my code works exactly as requested by @mboes: multiple clusters run in parallel (with CPU peaking) without messages crossing, as far as I can tell.

This behaviour suits me fine the way it is.

findPeers: this, I think, did not work for me. Or maybe I just used it wrongly.

This is a problem, though. Any chance someone might take a look?
