SplitAmong
Definition
Signature
splitAmong(vs : set<set<VM>>, ns : set<set<server>>)
vs: a non-empty set of sets of VMs, for a meaningful constraint. VMs not in the Running state are ignored. Sets inside vs must be disjoint.
ns: a set of sets of servers that must contain at least as many sets as vs, otherwise the constraint cannot be satisfied. Sets composing ns must be disjoint. Servers not in the Online state are ignored.
The splitAmong constraint forces the sets of VMs inside vs to be hosted on distinct sets of servers in ns. VMs inside the same set may still be collocated.
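These semantics can be checked directly on a configuration. Below is a minimal Python sketch, not BtrPlace code: the function split_among_satisfied and its inputs (a VM-to-server placement map plus the sets of Running VMs and Online servers) are hypothetical names introduced for illustration.

def split_among_satisfied(vs, ns, placement, running, online):
    # Ignore servers that are not Online, as stated in the definition.
    groups = [set(g) & online for g in ns]
    claimed = {}  # server-group index -> index of the VM set using it
    for i, vms in enumerate(vs):
        # Ignore VMs that are not Running.
        hosts = {placement[v] for v in vms if v in running}
        if not hosts:
            continue  # no running VM in this set: nothing to check
        # Every host must fall inside a single server group. As the
        # groups are disjoint, at most one group can contain them all.
        match = [j for j, g in enumerate(groups) if hosts <= g]
        if not match:
            return False  # VMs spread over several groups, or outside ns
        if claimed.setdefault(match[0], i) != i:
            return False  # that group is already used by another VM set
    return True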
Classification
- Primary users: application administrator
- Manipulated elements: VM placement
- Concerns: VM-to-VM placement, Partitioning, Fault tolerance
Usage
The splitAmong constraint addresses isolation requirements. One solution to ensure disaster recovery for an application is to replicate it. When the master application fails, the replica is activated transparently to mask the failure. In practice, the replication is a mechanism provided at the hypervisor level [14], [42]. The replicas are then placed on distant servers to make the application survive a datacenter failure. An application administrator may obtain this fault tolerance using one splitAmong constraint: the sets of VMs given as parameters are the master and the slave VMs, while the sets of servers are the servers composing each datacenter.
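As an illustration of this disaster-recovery scenario, here is a hypothetical setup (all VM and server names are invented) reusing the split_among_satisfied sketch from the Definition section:

masters  = {"VM1", "VM2"}    # VMs of the master application
replicas = {"VM1b", "VM2b"}  # their hypervisor-level replicas
dc1 = {"N1", "N2"}           # servers of the first datacenter
dc2 = {"N3", "N4"}           # servers of the second datacenter

# Masters and replicas must live in different datacenters.
placement = {"VM1": "N1", "VM2": "N2", "VM1b": "N3", "VM2b": "N4"}
ok = split_among_satisfied([masters, replicas], [dc1, dc2],
                           placement, masters | replicas, dc1 | dc2)
print(ok)  # True: the replicas survive a failure of dc1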
Example
Figure 8 depicts a sample reconfiguration between a source and a destination configuration. In this example, the following splitAmong constraints were considered:
Source configuration:
- N1: VM1, VM2
- N2: VM3
- N3: VM4, VM5
- N4: VM6
- N5: VM7, VM8

Destination configuration:
- N1: VM1
- N2: VM3
- N3: VM2, VM4, VM5
- N4: VM6, VM7
- N5: VM8 (suspended)

Figure 8: A reconfiguration motivated by splitAmong constraints.
- splitAmong({{VM1,VM3},{VM2,VM4}},{{N1,N2},{N3,N4}}): This constraint was not satisfied in the source configuration as VM1 and VM2 were both running inside the set of servers {N1,N2} despite belonging to different sets of VMs. In addition, the set of VMs {VM2,VM4} was spread among the two sets of servers while it should be running on only one. These violations were fixed by relocating VM2 to N3, letting the first set of VMs run on the first set of servers and the second set of VMs run on the second set of servers.
- splitAmong({{VM1,VM3},{VM5,VM6,VM7,VM8}},{{N1,N2},{N3,N4}}): This constraint was not satisfied in the source configuration as VM7 and VM8 were running on N5, which does not belong to any of the allowed sets. This violation was fixed by relocating VM7 to N4 and by suspending VM8, which is now ignored by the constraint.
- splitAmong({{VM1,VM2,VM3},{VM7,VM8}},{{N1,N2,N3},{N4,N5}}): This constraint was satisfied in the source configuration as the sets of VMs do not share a group of servers. It is still satisfied in the destination configuration despite the relocation of VM2 and VM7 to N3 and N4 respectively, which keeps them running inside their dedicated group of servers.
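For completeness, the three checks above can be replayed with the split_among_satisfied sketch from the Definition section; the two placement maps below simply transcribe Figure 8:

source = {"VM1": "N1", "VM2": "N1", "VM3": "N2", "VM4": "N3",
          "VM5": "N3", "VM6": "N4", "VM7": "N5", "VM8": "N5"}
destination = {"VM1": "N1", "VM2": "N3", "VM3": "N2", "VM4": "N3",
               "VM5": "N3", "VM6": "N4", "VM7": "N4", "VM8": "N5"}
online = {"N1", "N2", "N3", "N4", "N5"}
running_src = set(source)                 # every VM runs in the source
running_dst = set(destination) - {"VM8"}  # VM8 is suspended afterwards

constraints = [
    ([{"VM1", "VM3"}, {"VM2", "VM4"}],
     [{"N1", "N2"}, {"N3", "N4"}]),
    ([{"VM1", "VM3"}, {"VM5", "VM6", "VM7", "VM8"}],
     [{"N1", "N2"}, {"N3", "N4"}]),
    ([{"VM1", "VM2", "VM3"}, {"VM7", "VM8"}],
     [{"N1", "N2", "N3"}, {"N4", "N5"}]),
]
for vs, ns in constraints:
    print(split_among_satisfied(vs, ns, source, running_src, online),
          split_among_satisfied(vs, ns, destination, running_dst, online))
# Prints: False True / False True / True True, matching the discussion.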
See also
Related Constraints
- split: This constraint disallows two sets of VMs to share servers.
- spread, lazySpread: These constraints disallow the colocation between individual VMs rather than between groups of VMs.
- fence: splitAmong is equivalent to a fence constraint when only one set of VMs and one set of servers are given as arguments.
Specialization(s)
- To lazySpread: splitAmong(s/|s|, N/|N|) ↔ lazySpread(s), where x/|x| denotes the partition of x into |x| singletons: when every VM and every server forms its own set, no two VMs of s may be hosted on the same server.
- To among: splitAmong({vs1}, ns1) ↔ among(vs1, ns1): with a single set of VMs, the constraint simply forces vs1 to be hosted within one of the server sets of ns1.
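The among specialization can be observed with the same split_among_satisfied sketch: with a single set of VMs, the check only requires that set to fit within one server group (the names below are illustrative):

vs1 = {"VM1", "VM2"}
ns1 = [{"N1", "N2"}, {"N3", "N4"}]
online = {"N1", "N2", "N3", "N4"}
# Both VMs inside one group: satisfied, as among(vs1, ns1) would be.
print(split_among_satisfied([vs1], ns1,
                            {"VM1": "N1", "VM2": "N2"}, vs1, online))  # True
# VMs spread over two groups: violated.
print(split_among_satisfied([vs1], ns1,
                            {"VM1": "N1", "VM2": "N3"}, vs1, online))  # False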