
Virtualbox Image Store Design


Virtualbox Refinements & Image Store Design

Network options

In order to attach the correct NIC to a node, we need a way to know the network environment around the host machine. There are multiple approaches:

  • jclouds network auto-discovery: if a network is available to the host machine, the VMs will try to use it
  • user-defined network: similar to what a user can do from the VirtualBox GUI

There are many ways to discover the network available to the host:

  • run ipconfig on the host using ssh
  • vboxmanage list bridgedifs
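For example, the second option could be wrapped in a small helper that shells out to VBoxManage and collects the interface names. This is only a sketch of the idea; the parsing of the "Name:" lines is an assumption about the command's output format.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class BridgedInterfaceDiscovery {
   // Runs `VBoxManage list bridgedifs` and returns the names of the bridged interfaces.
   public static List<String> discoverBridgedInterfaceNames() throws Exception {
      Process process = new ProcessBuilder("VBoxManage", "list", "bridgedifs").start();
      List<String> names = new ArrayList<String>();
      BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
      String line;
      while ((line = reader.readLine()) != null) {
         // assumption: each interface block starts with a line like "Name: en0: Ethernet"
         if (line.startsWith("Name:"))
            names.add(line.substring("Name:".length()).trim());
      }
      process.waitFor();
      return names;
   }
}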

But the major problem is where to inject this information into the node!

We have a number of master machines managed by the cache, and whenever we need to createNode using a template, the clone process creates a node machine starting from the master. There is a domain object called CloneSpec that we need to create in order to generate the desired clone, and this CloneSpec needs a NetworkSpec object. NB: copying the NetworkSpec of the master to the CloneSpec may not be enough, because at the moment all the masters have a NAT NIC attached, and sometimes you want the ability to customize the node differently.

Possible solution

Introduce a TemplateOptions to create master machines with a different NetworkSpec. The cloned machines (the nodes) will be created using this extra information to apply the right NetworkSpec to the node.
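A minimal sketch of what this could look like (the class name VirtualBoxTemplateOptions and its methods are hypothetical, not existing API):

// Hypothetical TemplateOptions subclass carrying the NetworkSpec to apply to cloned nodes.
public class VirtualBoxTemplateOptions extends TemplateOptions {
   private NetworkSpec networkSpec;

   // Fluent setter, following the usual TemplateOptions style.
   public VirtualBoxTemplateOptions networkSpec(NetworkSpec networkSpec) {
      this.networkSpec = networkSpec;
      return this;
   }

   public NetworkSpec getNetworkSpec() {
      return networkSpec;
   }
}

At clone time the adapter would then prefer the user-supplied NetworkSpec over the master's NAT-only one when building the CloneSpec.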

Next implementation

Here's a proposal to work around the bridged issue: use 2 NICs

  • 1 HostOnly with DHCP enabled: to allow the host to access the guest
  • 1 NAT: to give the guest access to the internet

No port-forwarding needed
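A rough sketch of what such a spec could look like (the builder methods below are illustrative assumptions, not the current NetworkSpec API):

// Illustrative only: a two-NIC spec, host-only (slot 0) + NAT (slot 1), no port forwarding.
NetworkSpec twoNicSpec = NetworkSpec.builder()
      .addNIC(NetworkInterfaceCard.builder()
            .slot(0L)
            .addHostOnlyInterface("vboxnet0") // host-only network with DHCP enabled
            .build())
      .addNIC(NetworkInterfaceCard.builder()
            .slot(1L)
            .addNatAdapter()                  // NAT adapter so the guest can reach the internet
            .build())
      .build();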

Creation pipeline

Consider ExecutionList or Functions.chain, etc.

At the moment the CreateAndInstallVm function has too many responsibilities:

  • create the vm
  • install the guestAdditions
  • post-installation scripts

Maybe it would be better to have separate steps. Let’s identify those steps:

  • createAndInstallVm -> VM (NAT + PortForwarding)
  • GuestAdditionsInstaller -> VM + GA (NAT + PortForwarding)
  • PostInstallations -> MasterVM (VM + GA + cleanings and no NIC)
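As a sketch of the shape this could take with Guava's Functions.compose (the IsoSpec/IMachine parameter types are assumptions):

import com.google.common.base.Function;
import com.google.common.base.Functions;

// Sketch: the three pipeline steps composed into a single master-creation function.
// Functions.compose(g, f) applies f first and then g, so the chain below runs
// createAndInstallVm -> guestAdditionsInstaller -> postInstallations.
public class MasterCreationPipeline {
   private final Function<IsoSpec, IMachine> createAndInstallVm;
   private final Function<IMachine, IMachine> guestAdditionsInstaller;
   private final Function<IMachine, IMachine> postInstallations;

   public MasterCreationPipeline(Function<IsoSpec, IMachine> createAndInstallVm,
         Function<IMachine, IMachine> guestAdditionsInstaller,
         Function<IMachine, IMachine> postInstallations) {
      this.createAndInstallVm = createAndInstallVm;
      this.guestAdditionsInstaller = guestAdditionsInstaller;
      this.postInstallations = postInstallations;
   }

   public IMachine createMaster(IsoSpec isoSpec) {
      return Functions.compose(postInstallations,
            Functions.compose(guestAdditionsInstaller, createAndInstallVm)).apply(isoSpec);
   }
}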

In PostInstallations there are a number of steps needed. At the moment I have implemented:

  • cleanUpUdevIfNeeded: this is needed only for Ubuntu machines

but we definitely need:

  • detachAllNICs
  • detachAllISOs
  • cleanUpHostname

At this stage CloneMachineFromMaster will only take care of cloning and of ensuring the bridged NIC.

Proposals

I think we could move all the ensure* and detachAll* methods to machineUtils.
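As a sketch only (the MachineUtils method names simply mirror the steps listed above), PostInstallations would then reduce to:

import com.google.common.base.Function;

// Sketch: PostInstallations delegating the clean-up steps to the proposed machineUtils helpers.
public class PostInstallations implements Function<IMachine, IMachine> {
   private final MachineUtils machineUtils;

   public PostInstallations(MachineUtils machineUtils) {
      this.machineUtils = machineUtils;
   }

   @Override
   public IMachine apply(IMachine vm) {
      machineUtils.cleanUpUdevIfNeeded(vm); // Ubuntu-only step
      machineUtils.detachAllNICs(vm);
      machineUtils.detachAllISOs(vm);
      machineUtils.cleanUpHostname(vm);
      return vm;
   }
}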

ensureNATadapterIsAttached should be renamed ensureNATadapterIsAttachedToEth0.

NetworkSpec design

We need to rethink the NatAdapter and NetworkSpec.

We need to generalize a bit, because NAT is not the only adapter type we are interested in. Moreover, NatAdapter only contains information about redirect rules (I think these could become a PortForwarding object).
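One way this generalization could look (a sketch; the names and fields are illustrative, not existing classes):

import java.util.List;

// Illustrative sketch: a generic adapter description instead of a NAT-only NatAdapter.
public class NetworkAdapter {
   public enum AdapterType { NAT, BRIDGED, HOST_ONLY }

   private final AdapterType type;
   private final List<PortForwarding> portForwardingRules; // typically empty for Bridged/HostOnly

   public NetworkAdapter(AdapterType type, List<PortForwarding> portForwardingRules) {
      this.type = type;
      this.portForwardingRules = portForwardingRules;
   }
}

// The redirect rules currently buried in NatAdapter, promoted to their own object.
class PortForwarding {
   private final String name;
   private final String protocol;
   private final int hostPort;
   private final int guestPort;

   PortForwarding(String name, String protocol, int hostPort, int guestPort) {
      this.name = name;
      this.protocol = protocol;
      this.hostPort = hostPort;
      this.guestPort = guestPort;
   }
}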

Proposals

In VmSpec we could specify the number of NICs that we want to have. NetworkSpec will specify the kind of adapter (NAT or Bridged) and any PortForwarding rules. << ok, so we know we will hit this, and then allow users to ask for it.

ComputeAdapter options

VirtualBoxComputeServiceAdapterLiveTest is not passing, basically because the default template doesn't match any image.

Proposal

  • listImages() returns an empty collection. We need to ensure that at least the default master image is created before running CreateNodeWithGroupEncodedIntoNameThenStoreCredentials.
  • This should take care of looking up/creating the MasterVM and cloning the machine.
  • We could use a cache where we store (key, yaml description) pairs, where the key simplifies template matching, e.g. ‘ubuntu 11.04 server amd64’.
  • CreateNodeWithGroupEncodedIntoNameThenStoreCredentials should:
        createMasterIfNotAlreadyCreated() << this part is handled by the LoadingCache
        cloneMaster() -> machine
        machine.start()
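As a sketch, that flow inside the adapter could look like this (cloneMaster and startMachine are illustrative helpers, and masterCache is the LoadingCache built below):

// Sketch of the createNode flow: the cache lazily creates the master, then we clone and start it.
public IMachine createNode(IsoSpec isoSpec, String nodeName) {
   IMachine master = masterCache.getUnchecked(isoSpec); // createMasterIfNotAlreadyCreated()
   IMachine machine = cloneMaster(master, nodeName);    // cloneMaster() -> machine
   startMachine(machine);                               // machine.start()
   return machine;
}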

The loader:

public class LookupMasterFromBlobStoreContainerOrLazyCreate extends CacheLoader<IsoSpec, IMachine> {
   @Override
   public IMachine load(IsoSpec isoSpec) throws Exception {
      IMachine master = createMasterIfNotAlreadyCreated(isoSpec);
      persistToImagesContainer(master); // persist to the blobstore images container, with the blob named after the vm id
      return master;
   }
}

list implementation

Using the BlobStore API directly:

@Provides
LoadingCache<IsoSpec, IMachine> provideImageCache(LookupMasterFromBlobStoreContainerOrLazyCreate loader,
      BlobStore blobstore, @Named("something") String containerForKnownImages) {
   LoadingCache<IsoSpec, IMachine> cache = CacheBuilder.newBuilder().build(loader);
   for (StorageMetadata metadata : blobstore.list(containerForKnownImages)) {
      // try, and if it fails: log and remove, or ignore/mark bad, etc.
      Blob blob = blobstore.getBlob(containerForKnownImages, metadata.getName());
      cache.getUnchecked(fromJson(blob.getPayload().getInput(), VBoxImage.class).getId());
   }
   return cache;
}

Then, in the ComputeServiceAdapter, listImages uses BlobStore.list(_) (this looks very much like loading Nodes from yaml in BYON).
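A sketch of that listImages implementation (using the same fromJson/VBoxImage helpers and injected fields as the cache-loading code above; the return type is an assumption):

// Sketch: listImages reads the image descriptions stored in the blobstore container.
public Iterable<VBoxImage> listImages() {
   ImmutableSet.Builder<VBoxImage> images = ImmutableSet.builder();
   for (StorageMetadata metadata : blobstore.list(containerForKnownImages)) {
      // same caveat as above: on failure, log and remove, or ignore/mark bad, etc.
      Blob blob = blobstore.getBlob(containerForKnownImages, metadata.getName());
      images.add(fromJson(blob.getPayload().getInput(), VBoxImage.class));
   }
   return images.build();
}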

Otherwise we can persist from a Map<String, InputStream>, removing the need to load BlobStore classes:

@Provides
LoadingCache<IsoSpec, IMachine> provideImageCache(LookupMasterFromBlobStoreContainerOrLazyCreate loader,
      @Named("vbox.image.container") Map<String, InputStream> metaMap) {
   LoadingCache<IsoSpec, IMachine> cache = CacheBuilder.newBuilder().build(loader);
   for (InputStream in : metaMap.values()) {
      // try, and if it fails: log and remove, or ignore/mark bad, etc.
      cache.getUnchecked(fromJson(in, VBoxImage.class).getId());
   }
   return cache;
}