This repository has been archived by the owner on Sep 26, 2021. It is now read-only.

Proposal: machine share #179

Open
nathanleclaire opened this issue Dec 30, 2014 · 69 comments

Comments

@nathanleclaire
Contributor

machine share

Abstract

machine is shaping up to be an excellent tool for quickly and painlessly creating and managing Docker hosts across a variety of virtualization platforms and hosting providers. For a variety of reasons, syncing files between the computer where machine commands are executed and the machines themselves is desirable.

Motivation

Containers are a great tool and unit of deployment, and machine makes creating a place to run your containers easier than ever. There are, however, many reasons why one would want to share files that are not pre-baked into the Docker image from the machine client computer to a machine host. Some examples are:

  1. The machine host is a VM on the user's laptop, and the user is developing a web application inside of a container. They want to develop with the source code bind-mounted inside of a container, but still edit on their computer using Sublime etc.
  2. The user wants to spin up 10 hosts in the cloud and do some scientific computing on them using Docker. They have a container with, say, the Python libraries they need, but they also need to push their .csv files up to the hosts from their laptop. This user does not know about the implementation details of machine or how to find the SSH keys for the hosts etc.
  3. There is some artifact of a build run on a machine (e.g. one or many compiled binaries) and the user wants to retrieve that artifact from the remote.

As with the rest of machine, it would be preferable to have the 80% of use cases where this sort of thing happens integrated seamlessly into the machine workflow, while still providing enough flexibility to the users who fall into the 20% not covered by the most common use cases.

Interface

After thinking about the mechanics and UX of this, I think we should favor explicitness over implicitness and err on the side of not creating accidental or implicit shares.

There are a few aspects to the story that should be considered.

Command Line

The syntax would be something like this:

$ pwd
/Users/nathanleclaire/website

$ machine ls
NAME           ACTIVE   DRIVER         STATE     URL
shareexample   *        digitalocean   Running   tcp://104.236.115.220:2376

$ machine ssh -c pwd
/root

$ machine share --driver rsync . /root/code
Sharing /Users/nathanleclaire/website from this computer to /root/code on host "shareexample"....

$ machine share ls
MACHINE      DRIVER SRC                           DEST
shareexample rsync  /Users/nathanleclaire/website /root/code

$ ls

$ echo foo >foo.txt

$ ls 
foo.txt

$ machine ssh -c "cat foo.txt"
cat: foo.txt: No such file or directory
FATA[0001] exit status 1

$ machine share push
[INFO] Pushing to remote...

$ machine ssh -c "cat foo.txt"
foo

$ machine share --driver scp / /root/client_home_dir 
ERR[0001] Sharing the home directory or folders outside of it is not allowed.  To override use --i-know-what-i-am-doing-i-swear

IMO we should forbid users from creating shares to or from outside of the home directory of the client or the remote host. There's a strong argument that the home directory itself should be banned from sharing as well, to prevent accidental sharing of files which should be moved around carefully such as ~/.ssh and, of course, ~/.docker. Also, clients could share directories to multiple locations, but any shares which point to the same destination on the remote would be disallowed.
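To make the proposed restriction concrete, here is a minimal sketch in Go (machine's implementation language) of the home-directory check; the function name `shareAllowed` and the exact rules are my assumptions for illustration, not part of the proposal:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// shareAllowed is a hypothetical sketch of the proposed rule: a share path
// must live strictly inside the user's home directory, and the home
// directory itself is also rejected.
func shareAllowed(path, home string) bool {
	abs, err := filepath.Abs(path)
	if err != nil {
		return false
	}
	home = filepath.Clean(home)
	if abs == home {
		// Sharing the home directory itself is banned (it holds things
		// like ~/.ssh and ~/.docker).
		return false
	}
	// Allowed only if the path is a descendant of the home directory.
	return strings.HasPrefix(abs, home+string(filepath.Separator))
}

func main() {
	home := "/Users/nathanleclaire"
	fmt.Println(shareAllowed("/Users/nathanleclaire/website", home)) // true
	fmt.Println(shareAllowed("/", home))                             // false
	fmt.Println(shareAllowed(home, home))                            // false
}
```

The same check would presumably run against the destination path on the remote host as well.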

Totally open to feedback and changes on the UI, this is just what I've come up with so far.

Drivers

There is a huge variety of ways to get files from point A to point B and back, so I'd propose a standard interface that drivers must implement to be recognized as an option for sharing (just as we have done with virtualization / cloud platforms). The default would be something like scp (since it is simple and pretty ubiquitous), and users would be able to specify one manually as well. Users could pick the driver that suits their needs. Additionally, it would allow the machine team to start with a simpler core and move forward later: e.g., just rsync, scp, and vboxsf could be the options in v1.0, and other drivers could be added later.

Some possible drivers: scp, vboxsf, fusionsf, rsync, sshfs, nfs, samba

This would be useful because different use cases call for different ways of moving files around. NFS might work well for development in a VM, but you might want rsync if you are pushing large files to the server frequently, and so on.

Part of the interface for a share driver would be some sort of IsContractFulfilled() method which returns a boolean indicating whether the "contract" necessary for the share to work is fulfilled by both the client and the remote host. This would allow us to, for instance, check that rsync is installed on both the client machine and the remote host, and refuse to create a share if it is not. Likewise, it would prevent users from trying to do something silly like using the vboxsf driver on a non-VirtualBox host.
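As a rough illustration of the client-side half of such a contract check, this Go sketch verifies that a driver's required tools are on PATH. The names `contractFulfilled` and `lookPath` are hypothetical, and a real driver would also need to run the corresponding check on the remote host over SSH:

```go
package main

import (
	"fmt"
	"os/exec"
)

// lookPath is swappable so the check can be exercised without depending on
// what happens to be installed; it defaults to a real PATH lookup.
var lookPath = func(tool string) bool {
	_, err := exec.LookPath(tool)
	return err == nil
}

// contractFulfilled reports whether every required tool is available on the
// client, returning the first missing tool's name if not.
func contractFulfilled(required []string) (bool, string) {
	for _, tool := range required {
		if !lookPath(tool) {
			return false, tool
		}
	}
	return true, ""
}

func main() {
	// e.g. the rsync driver's client-side contract
	ok, missing := contractFulfilled([]string{"rsync", "ssh"})
	if ok {
		fmt.Println("client side of the rsync contract is fulfilled")
	} else {
		fmt.Printf("missing required tool: %s\n", missing)
	}
}
```

Refusing to create the share at this point, rather than failing mid-transfer, matches the "explicit over implicit" stance above.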

Possible Issues

  • If machine moves to a client-server model, which seems favored at the moment, it introduces additional complications to the implementation of machine share. Namely, machine share would be all client-side logic, and would not be able to make the same assumptions it may make today e.g. that the SSH keys for the hosts are all lying around ready to be used on the client computer.
  • machine share push and machine share pull make a ton of sense for some drivers, such as rsync, but not so much sense for vboxsf or nfs which update automatically. What happens when a user does machine share push on such a driver? "ERR: This driver is bidirectional"?
  • This is likely to have a pretty big scope, so we might want to consider making it a separate tool or introducing some sort of model for extensions that could keep the core really lean.
  • How are symlinks handled? Permissions? What happens if you move, rename, or delete the shared directory on the client, or on the remote?

Other Notes

I am not attached to the proposal, I am just looking for feedback and to see if anyone else thinks this would be useful. I have put together some code which defines a tentative ShareDriver interface but it is not really to my liking (and no real sync functionality is implemented) so I haven't shared it.

@ehazlett
Contributor

@nathanleclaire thanks for the proposal -- well written :)

A couple of initial thoughts: I think the idea of sharing makes sense for vbox (similar to what is recommended for fig), and we already do that for the vbox driver. Outside of that, however, IMO the files should be baked into Docker images. I think it would be handy to be able to copy files (perhaps a machine scp or something), but the idea of pushing/syncing starts to enter into the system-management area, and I'm not sure we want to go there.

Are there other use cases you could think of besides app code?

To be clear, I'm not arguing against it, just trying to better understand the usage :)

@sthulb
Contributor

sthulb commented Jan 7, 2015

+1 for machine scp

@SvenDowideit
Contributor

The basic reasoning that many users have given is that when they are developing, they want the source code to be on their local machine; that way, when they blow away the remote instance, they don't have to worry about the files going away.

I reckon this concern will exist for those of us who will use ephemeral cloud Docker daemons just the same as for b2d-style ones.

So I'm very +1 to this, as it's similar to what I started exploring in boot2docker/boot2docker-cli#247.

@sthulb
Contributor

sthulb commented Jan 8, 2015

Perhaps local drivers could implement an arg for this. But cloud providers shouldn't have this.

@jeffmendoza
Contributor

scp and/or rsync would be good, same as current machine ssh.

@SvenDowideit
Contributor

@sthulb Can you elaborate on why not? This will be a common question, and so it will need to be in the docs. (It's not that hard to envision an opportunistic rsync that could allow the server to happily run while the clients are gone.)

@sthulb
Contributor

sthulb commented Jan 8, 2015

Local machines are geared for dev environments. I feel like remote (cloud) machines are for production purposes and thus, should have their data provisioned with data containers, or via chef/puppet/ansible/etc.

I can understand the requirement to upload a single file to a remote host though (various secrets).

I'm willing to sway if others feel like this is a thing we should have.

@ehazlett
Contributor

ehazlett commented Jan 8, 2015

I have to agree with @sthulb. I think we should stay away from syncing/sharing in remote environments. I can see it being extremely powerful for local.

@SvenDowideit
Contributor

I use digital ocean as a dev environment. I also have 2 non-vm physical boot2docker servers that I use for development and testing, where my sshfs share driver for b2d-cli is very useful.

@sthulb
Contributor

sthulb commented Jan 9, 2015

Perhaps this is a case for plugins.

@ehazlett
Contributor

ehazlett commented Jan 9, 2015

@sthulb I think that's a brilliant idea

@SvenDowideit
Contributor

+1 - if you look at the PR I made in the b2d-cli repo, I copied the driver model, but starting machine with plugins would be nicer.

@Tebro

Tebro commented Jan 16, 2015

+1 for scp, and I agree with comments above, this proposal looks useful for development machines (Vbox, fusion), but on the other providers I think this could make quite a mess real fast.

@waitingkuo

+1 for scp, (perhaps sftp?)

@leth

leth commented Feb 1, 2015

+1 for local VM shares, NFS is a portable option if driver-specific shares are awkward. Vagrant does this kind of thing well :) I'd love to see it in this project.

@jlrigau

jlrigau commented Feb 3, 2015

+1 for scp

@nathanleclaire
Contributor Author

To give interested parties an update on where my head is at with this one: I'd like to make a machine scp command which is simply for moving files from point A to point B one way via scp (I think I want a --delta flag for this too to optionally shell out to rsync instead of scp).

Then, for more complicated use cases, e.g. NFS/Samba, there would be a separate (still docker-machine-centric) tool entirely, as the scope of managing this could get quite large.

@sthulb
Contributor

sthulb commented Feb 3, 2015

I'm starting to think we should have a git-style interface, with executables named docker-machine-foo; in this case foo would be share.
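A git-style dispatch is straightforward to sketch in Go. This assumes plugins are standalone executables named docker-machine-<subcommand> found on PATH; the `runPlugin` helper is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPlugin dispatches a subcommand to an external binary named
// docker-machine-<subcommand>, the way git dispatches to git-<subcommand>.
func runPlugin(subcommand string, args []string) error {
	bin, err := exec.LookPath("docker-machine-" + subcommand)
	if err != nil {
		return fmt.Errorf("unknown subcommand %q: %w", subcommand, err)
	}
	// Wire the plugin to the user's terminal so it behaves like a
	// built-in command.
	cmd := exec.Command(bin, args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// e.g. `docker-machine share push` would dispatch to
	// `docker-machine-share push`
	if err := runPlugin("share", []string{"push"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

This keeps the core lean: unknown subcommands cost the core nothing beyond a PATH lookup.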

@sthulb
Contributor

sthulb commented Feb 3, 2015

This would be great for extending machine.

@jokeyrhyme

Just tried docker-machine v0.1.0 for the first time today. I have a workflow involving live updates to a host directory being propagated to a mounted volume within a container. I'm using the standard docker CLI arguments to achieve this.

  • this works fine if I use docker-machine create -d virtualbox dev (unsurprising since this builds upon the boot2docker work)
  • docker-machine create -d vmwarefusion dev results in an empty volume mount

I'm very impressed with docker-machine so far. My project's unit tests (ones unrelated to the mounted volume) passed just as they used to with boot2docker. Besides this issue, this was an extremely seamless transition from boot2docker to docker-machine.

@ehazlett
Contributor

@jokeyrhyme the issue with fusion is known and is being worked on. Thanks for the great feedback!

@jcheroske

+1 for scp

@nathanleclaire
Contributor Author

@jcheroske It's in master now!! #1140

@jcheroske

That's incredible. Noob question: is there an OS X binary of the latest and greatest? My binary doesn't have it yet. I did figure out, digging around in the .docker dir, how the certs are stored. Then I configured my .ssh/config so that I could use regular ssh and scp. But having it built-in is much better.
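For anyone wanting the same workaround before upgrading, an entry along these lines in ~/.ssh/config points plain ssh and scp at the key machine generates. The host alias, IP, and user below are taken from the example transcript earlier in this thread, and the key path assumes machine's default storage layout:

```
# Hypothetical ~/.ssh/config entry; adjust name, IP, user, and key path
# to match your own machine.
Host shareexample
    HostName 104.236.115.220
    User root
    IdentityFile ~/.docker/machine/machines/shareexample/id_rsa
```

With that in place, `scp foo.txt shareexample:` works without any machine-specific tooling.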


@nathanleclaire
Contributor Author

@jcheroske You can always get the latest (master) binaries from here: https://docker-machine-builds.evanhazlett.com/latest/ :)

@vincepri

vincepri commented Jul 4, 2015

Are there any updates on this?

@blaggacao
Contributor

Windows, rsync, and the fswatch problems: in order to reduce hopping through GitHub issues, I reference it explicitly:
boot2docker/boot2docker-cli#66 (comment)

In a nutshell, for serious development you need the inotify (or corresponding) functionality to work. fswatch implements an interesting library, but it does not support Windows' FileSystemWatcher API. A good starting point on this issue is https://github.com/thekid/inotify-win/, but: thekid/inotify-win#7

@ghost

ghost commented Dec 1, 2015

Docker is such a mess. I think it is not as good as it seems.

@ghost

ghost commented Dec 1, 2015

https://blog.abevoelker.com/why-i-dont-use-docker-much-anymore/

@iBobik

iBobik commented Dec 1, 2015

We are not in a Docker Compose support forum, so please don't discuss it here. People are subscribed to this issue because of another topic.

Read the tutorial about Docker Compose on docker.com. It is a great utility; you are just using it wrong.


@krasi-georgiev

+1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1

The current VirtualBox sharing is performance-limited and has permissions issues with some containers (PostgreSQL, for example).

@dskrzypczynski

+1

@dustinblackman
Contributor

+1

@fyddaben

So what is the solution? What should I do on my Mac? ヽ(゜Q。)ノ

@iBobik

iBobik commented Jan 11, 2016

Please write only constructive comments here. No „+1“ comments, because they are sent to all subscribers, and they will unsubscribe if there are too many.

If you want to support this issue, just subscribe to it.


@erkie

erkie commented Feb 10, 2016

Running this command works fine for copying between development machines on the same network:

rsync -chavzP --stats other-computer.local:~/.docker/machine/ ~/.docker/machine/ --exclude machines/default --exclude cache

@oleksdovz

+1

@blaggacao
Contributor

Finally I found the right issue. @nathanleclaire, I would bet that CIFS/NFS Docker volumes, as implemented by https://github.com/gondor/docker-volume-netshare, would solve the problem immediately. However, those would need to be integrated seamlessly into boot2docker. Would that be possible?

@blaggacao
Contributor

I think moby/moby#20262 can be a step toward the final answer to this. With this PR, in 1.11, sharing would become a question of exposing the right nfs/samba shares on the hosts. Or doesn't github.com/docker/docker/pkg/mount support nfs/samba yet?

@MBuffenoir

MBuffenoir commented Jul 5, 2016

I use something along this line to keep in sync:

fswatch -o -e '.git' . | while read num ; \
do \
     rsync -avzh -e "ssh -i /path/to/home/.docker/machine/machines/machine-name/id_rsa" --progress --exclude '.git' . ubuntu@$(docker-machine ip machine-name):/home/ubuntu; \
done

edit: typo

@gaieges

gaieges commented Feb 27, 2017

Very interested in this feature as well, signing up for notifications.

IMO, based on the thread, it sounds like the easiest and most useful implementation would be to use standard drivers in the image to create local-only volume mounts, and not implement the feature remotely (if you are, you're sort of doing it wrong, right?).

For me, the value would come from local, on-the-fly development against the contents of the container without any sftp/scp/rsync steps involved (or scripts like the one MBuffenoir has set up).

@in10se

in10se commented Jun 28, 2018

Same here. This is a must have.

@erkie

erkie commented Jun 29, 2018

@in10se Sorry to say, but Docker Machine seems to be a dead product. Development and innovation have stagnated, and I wouldn't count on it for production environments. I think HashiCorp Terraform or similar tools are the way to go.

@in10se

in10se commented Jul 2, 2018

@erkie, it does look like activity has slowed down in the past few months. Hopefully it will resume. Thanks for the suggestion.

@ezmiller

ezmiller commented Aug 9, 2019

What is the state of this proposal?

@luckydonald

Not much going on here.

@thalysonalexr

7 years????
