Autocluster is a set of scripts for building virtual clusters to test
clustered Samba. It uses Linux's libvirt and the KVM virtualisation
engine.

Autocluster is a collection of scripts, templates and configuration
files that allow you to create a cluster of virtual nodes very
quickly. You can create a cluster from scratch in less than 30
minutes. Once you have a base image you can then recreate a cluster
or create new virtual clusters in minutes.

The current implementation creates virtual clusters of RHEL5/6 nodes.


INSTALLING AUTOCLUSTER
======================

Before you start, make sure you have the latest version of
autocluster. To download autocluster do this:

  git clone git://git.samba.org/autocluster.git

Or to update it, run "git pull" in the autocluster directory.

You probably want to add the directory where autocluster is installed
to your PATH, otherwise things may quickly become tedious.
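
For example, assuming you cloned autocluster into your home directory
(adjust the path to suit your installation):

  export PATH=$PATH:$HOME/autocluster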


HOST MACHINE SETUP
==================

This section explains how to set up a host machine to run virtual
clusters generated by autocluster.

1) Install and configure required software.

   a) Install kvm, libvirt and expect.

      Autocluster creates virtual machines that use libvirt to run
      under KVM. This means that you will need to install both KVM
      and libvirt on your host machine. Expect is used by the
      "waitfor" script and should be available for installation from
      your distribution.

      * RHEL:

        Autocluster should work with the standard RHEL6 qemu-kvm and
        libvirt packages. However, you'll need to tell autocluster
        where the KVM executable is:

          KVM=/usr/libexec/qemu-kvm

        For RHEL5/CentOS5, useful packages for both kvm and libvirt
        used to be available at:

          http://www.lfarkas.org/linux/packages/centos/5/x86_64/

        However, since recent versions of RHEL5 ship with KVM, 3rd
        party KVM RPMs for RHEL5 are now scarce.

        RHEL5.4's KVM also has problems when autocluster uses virtio
        shared disks, since multipath doesn't notice virtio disks.
        This is fixed in RHEL5.6 and in a recent RHEL5.5 update - you
        should be able to use the settings recommended above for
        RHEL6.

        If you're still running RHEL5.4, have lots of time, have lots
        of disk space and like complexity, then see the sections
        below on "iSCSI shared disks" and "Raw IDE system disks".

      * Fedora:

        Useful packages ship with Fedora Core 10 (Cambridge) and
        later. Some of the above notes on RHEL might apply to
        Fedora's KVM.

      * Ubuntu:

        Useful packages ship with Ubuntu 8.10 (Intrepid Ibex) and
        later. In recent Ubuntu versions (e.g. 10.10 Maverick
        Meerkat) the KVM package is called "qemu-kvm". Older
        versions have a package called "kvm".

      For other distributions you'll have to backport distro sources
      or compile from upstream source as described below.

      * For KVM see the "Downloads" and "Code" sections at:

          http://www.linux-kvm.org/

      * For libvirt see:

          http://libvirt.org/

   b) Install guestfish or qemu-nbd and nbd-client.

      Recent Linux distributions, including RHEL since 6.0, contain
      guestfish. Guestfish (see http://libguestfs.org/ - binary
      packages for several distros are available there) is a CLI for
      manipulating KVM/QEMU disk images. Autocluster supports
      guestfish, so if guestfish is available then you should use it.
      It should be more reliable than NBD.
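
      For example, you can use guestfish to inspect a node's disk
      image by hand (the image path here is hypothetical):

        # -a adds the disk image, -i detects and mounts its filesystems
        guestfish -a /virtual/c1n1.qcow2 -i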

      Autocluster attempts to use the best available method
      (guestmount -> guestfish -> loopback) for accessing disk
      images. If it chooses a suboptimal method, you can force the
      method:

        SYSTEM_DISK_ACCESS_METHOD=guestfish

      If you can't use guestfish then you'll have to use NBD. For
      this you will need the qemu-nbd and nbd-client programs, which
      autocluster uses to loopback-nbd-mount the disk images when
      configuring each node.

      NBD for various distros:

      * RHEL:

        qemu-nbd is only available in the old packages from
        lfarkas.org. Recompiling the RHEL5 kvm package to support
        NBD is quite straightforward. RHEL6 doesn't have an NBD
        kernel module, so it is harder to retrofit for NBD support -
        use guestfish instead.

        Unless you can find an RPM for nbd-client, you will need to
        download the source from:

          http://sourceforge.net/projects/nbd/

      * Fedora:

        qemu-nbd is in the qemu-kvm or kvm package.

        nbd-client is in the nbd package.

      * Ubuntu:

        qemu-nbd is in the qemu-kvm or kvm package. In older
        releases it is called kvm-nbd, so you need to set the
        QEMU_NBD configuration variable.

        nbd-client is in the nbd-client package.

      * As mentioned above, nbd can be found at:

          http://sourceforge.net/projects/nbd/

   c) Environment and libvirt virtual networks

      You will need to add the autocluster directory to your PATH.

      You will need to configure the right libvirt networking setup.
      To do this, run:

        host_setup/setup_networks.sh [ <myconfig> ]

      If you're using a network setup different to the default then
      pass your autocluster configuration filename, which should set
      the relevant network configuration variables.

      You might also need to set:

        VIRSH_DEFAULT_CONNECT_URI=qemu:///system

      in your environment so that virsh does KVM/QEMU things by
      default.
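
      For example, to set this and check which networks libvirt
      knows about (the network names depend on your configuration):

        export VIRSH_DEFAULT_CONNECT_URI=qemu:///system
        virsh net-list --all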

2) If your install server is far away then you may need a caching web
   proxy on your local network.

   If you don't have one, then you can install a squid proxy on your
   host and set something like this in your configuration:

     WEBPROXY="http://10.0.0.1:3128/"

   See host_setup/etc/squid/squid.conf for a sample config suitable
   for a virtual cluster. Make sure it caches large objects and has
   plenty of space. This will be needed to make downloading all the
   RPMs to each client sane.

   To test your squid setup, run a command like this:

     http_proxy=http://10.0.0.1:3128/ wget <some-url>

   Check your firewall setup. If you have problems accessing the
   proxy from your nodes (including from kickstart postinstall) then
   check it again! Some distributions install nice "convenient"
   firewalls by default that might block access to the squid port
   from the nodes. On a current version of Fedora Core you may be
   able to run system-config-firewall-tui to reconfigure the
   firewall.
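
   For example, on a host where you manage iptables directly, a rule
   like this would open the standard squid port to the nodes:

     iptables -I INPUT -p tcp --dport 3128 -j ACCEPT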

3) Set up a DNS server on your host. See host_setup/etc/bind/ for a
   sample config that is suitable. It needs to redirect DNS queries
   for your virtual domain to your Windows domain controller.
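
   You can check that the redirection works with a query like this
   (the domain name here is hypothetical):

     host -t SRV _ldap._tcp.ad.example.com 127.0.0.1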

4) Download a RHEL install ISO.


CREATING A CLUSTER
==================

A cluster comprises a single base disk image, a copy-on-write disk
image for each node and some XML files that tell libvirt about each
node's virtual hardware configuration. The copy-on-write disk images
save a lot of disk space on the host machine because they each share
the base disk image - without them the disk image for each cluster
node would need to contain the entire RHEL install.

The cluster creation process can be broken down into 2 main steps:

1) Creating the base disk image.

2) Creating the per-node disk images and corresponding XML files.

However, before you do this you will need to create a configuration
file. See the "CONFIGURATION" section below for more details.

Here are more details on the "create cluster" process. Note that
unless you have done something extra special then you'll need to run
all of this as root.

1) Create the base disk image using:

     ./autocluster create base

   The first thing this step does is to check that it can connect to
   the YUM server. If this fails make sure that there are no
   firewalls blocking your access to the server.

   The install will take about 10 to 15 minutes and you will see the
   packages installing in your terminal.

   The installation process uses kickstart. The choice of
   postinstall script is set using the POSTINSTALL_TEMPLATE variable.
   An example is provided in
   base/all/root/scripts/gpfs-nas-postinstall.sh.
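
   As a hedged sketch, selecting that script might look like this in
   your configuration (check your autocluster version for the exact
   value it expects):

     POSTINSTALL_TEMPLATE=base/all/root/scripts/gpfs-nas-postinstall.sh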

   It makes sense to install packages that will be common to all
   nodes into the base image. This saves time later when you're
   setting up the cluster nodes. However, you don't have to do this
   - you can set POSTINSTALL_TEMPLATE to "" instead - but then you
   will lose the quick cluster creation/setup that is a major feature
   of autocluster.

   When that has finished you should mark the base image immutable,
   like this:

     chattr +i /virtual/ac-base.img

   That will ensure it won't change. This is a precaution as the
   image will be used as a backing file for the per-node images, and
   if it changes your cluster will become corrupted.
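
   You can confirm that the immutable flag is set with lsattr:

     lsattr /virtual/ac-base.img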

2) Now run "autocluster create cluster", specifying a cluster name.
   For example:

     autocluster create cluster c1

   This will create and install the XML node descriptions and the
   disk images for your cluster nodes, and any other nodes you have
   configured. Each disk image is initially created as an "empty"
   copy-on-write image, which is linked to the base image. Those
   images are then accessed using guestfish or loopback-nbd-mounted,
   and populated with system configuration files and other
   potentially useful things (such as scripts).


BOOTING A CLUSTER
=================

At this point the cluster has been created but isn't yet running.
Autocluster provides a command called "vircmd", which is a thin
wrapper around libvirt's virsh command. vircmd takes a cluster name
instead of a node/domain name and runs the requested command on all
nodes in the cluster.

1) Now boot your cluster nodes like this:

     vircmd start c1

   The most useful vircmd commands are:

     start    : boot a node
     shutdown : graceful shutdown of a node
     destroy  : power off a node immediately

2) You can watch boot progress like this:

     tail -f /var/log/kvm/serial.c1*

   All the nodes have serial consoles, making it easier to capture
   kernel panic messages and watch the nodes via ssh.


POST-CREATION SETUP
===================

You now have a cluster of nodes, which might have a variety of
packages installed and configured in a common way. Now that the
cluster is up and running you might need to configure specialised
subsystems like GPFS or Samba. You can do this by hand or use the
sample scripts/configurations that are provided.

You can now ssh into your nodes. You may like to look at the small
set of scripts in /root/scripts on the nodes. In particular:

  mknsd.sh           : sets up the local shared disks as GPFS NSDs
  setup_gpfs.sh      : sets up GPFS, creates a filesystem etc
  setup_cluster.sh   : sets up clustered Samba and other NAS services
  setup_tsm_server.sh: run this on the TSM node to set up the TSM server
  setup_tsm_client.sh: run this on the GPFS nodes to set up HSM
  setup_ad_server.sh : run this on a node to set up a Samba4 AD

To set up a clustered NAS system you will normally need to run
setup_gpfs.sh and setup_cluster.sh on one of the nodes.
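
For example (the node name here is hypothetical - use whatever
hostnames or addresses your configuration produces):

  ssh root@c1n1 /root/scripts/setup_gpfs.sh
  ssh root@c1n1 /root/scripts/setup_cluster.sh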


AUTOMATED CLUSTER CREATION
==========================

The last 2 steps can be automated. An example script for doing this
can be found in examples/create_cluster.sh.


CONFIGURATION
=============

Autocluster uses configuration files containing Unix shell style
variables. For example,

  FIRSTIP=30

indicates that the last octet of the first IP address in the cluster
will be 30. If an option contains multiple words then they will be
separated by underscores ('_'), as in:

  ISO_DIR=/data/ISOs

All options have an equivalent command-line option, such as:

  --firstip=30

Command-line options are lowercase. Words are separated by dashes
('-'), as in:

  --iso-dir=/data/ISOs

Normally you would use a configuration file with variables so that
you can repeat steps easily. The command-line equivalents are useful
for trying things out without resorting to an editor. You can
specify a configuration file to use on the autocluster command-line
using the -c option. For example:

  autocluster -c config-foo create base

If you don't provide a configuration file then autocluster will look
for a file called "config" in the current directory.

You can also use environment variables to override the default
values of configuration variables. However, both command-line
options and configuration file entries will override environment
variables.
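
For example, assuming your configuration file sets FIRSTIP=30, the
following should report 30 rather than 40, because config file
entries beat environment variables:

  FIRSTIP=40 autocluster -c config --dump | grep FIRSTIP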

Potentially useful information:

* Use "autocluster --help" to list all available command-line options
  - all the items listed under "configuration options:" are the
  equivalents of the settings for config files. This output also
  shows descriptions of the options.

* You can use the --dump option to check the current value of
  configuration variables. This is most useful when used in
  combination with grep:

    autocluster --dump | grep ISO_DIR

  In the past we recommended using --dump to create an initial
  configuration file. Don't do this - it is a bad idea! There are a
  lot of options and you'll create a huge file that you don't
  understand and can't debug!

* Configuration options are defined in config.d/*.defconf. You
  shouldn't need to look in these files... but sometimes they contain
  comments about options that are too long to fit into help strings.

* I recommend that you aim for the smallest possible configuration
  file. Start with only the options you know you need to change
  and move on from there.
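
  A hypothetical minimal configuration, using only variables that
  appear elsewhere in this document, might look like:

    FIRSTIP=30
    ISO_DIR=/data/ISOs
    WEBPROXY="http://10.0.0.1:3128/"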

* The NODES configuration variable controls the types of nodes that
  are created. At the time of writing, the default value is:

    NODES="sofs_front:0-3 rhel_base:4"

  This means that you get 4 clustered NAS nodes, at IP offsets 0, 1,
  2, & 3 from FIRSTIP, all part of the CTDB cluster. You also get an
  additional utility node at IP offset 4 that can be used, for
  example, as a test client. Since sofs_* nodes are present, the
  base node will not be part of the CTDB cluster - it is just extra.

  For many standard use cases the nodes specified by NODES can be
  modified by setting NUMNODES, WITH_SOFS_GUI and WITH_TSM_NODE.
  However, these options can't be used to create nodes without
  specifying IP offsets - except WITH_TSM_NODE, which checks to see
  if IP offset 0 is vacant. Therefore, for many uses you can ignore
  the NODES variable.

  However, NODES is the recommended mechanism for specifying the
  nodes that you want in your cluster. It is powerful, easy to read
  and centralises the information in a single line of your
  configuration file.
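
  For example, a hypothetical smaller cluster with two NAS nodes and
  one test client could be specified with:

    NODES="sofs_front:0-1 rhel_base:2"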


ISCSI SHARED DISKS
==================

The RHEL5 version of KVM does not support SCSI block device
emulation. Therefore, you can use either virtio or iSCSI shared
disks. Unfortunately, in RHEL5.4 and early versions of RHEL5.5,
virtio block devices are not supported by the version of multipath
in RHEL5. So this leaves iSCSI as the only choice.

The main configuration options you need for iSCSI disks are:

  SHARED_DISK_TYPE=iscsi
  NICMODEL=virtio        # Recommended for performance
  add_extra_package iscsi-initiator-utils

Note that SHARED_DISK_PREFIX and SHARED_DISK_CACHE are ignored for
iSCSI shared disks because KVM doesn't (need to) know about them.

You will need to install the scsi-target-utils package on the host
system. After creating a cluster, autocluster will print a message
that points you to a file tmp/iscsi.$CLUSTER - you need to run the
commands in this file (probably via: sh tmp/iscsi.$CLUSTER) before
booting your cluster. This will remove any old target with the same
ID, and create the new target, LUNs and ACLs.

You can use the following command to list information about the
iSCSI target:

  tgtadm --lld iscsi --mode target --op show

If you need multiple clusters using iSCSI on the same host then each
cluster will need to have a different setting for ISCSI_TID.
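
For example, the second cluster's configuration file might
(hypothetically) contain:

  ISCSI_TID=2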


RAW IDE SYSTEM DISKS
====================

RHEL versions of KVM do not support SCSI block device emulation, so
autocluster now defaults to using an IDE system disk instead of a
SCSI one. Therefore, you can use virtio or IDE system disks.
However, writeback caching, qcow2 and virtio are incompatible and
result in I/O corruption. So, you can use either virtio system disks
without any caching, accepting reduced performance, or you can use
IDE system disks with writeback caching, with nice performance.

For IDE disks, here are the required settings:

  SYSTEM_DISK_TYPE=ide
  SYSTEM_DISK_PREFIX=hd
  SYSTEM_DISK_CACHE=writeback

The next problem is that RHEL5's KVM does not include qemu-nbd. The
best solution is to build your own qemu-nbd and stop reading this
section.

If, for whatever reason, you're unable to build your own qemu-nbd,
then you can use raw, rather than qcow2, system disks. If you do
this then you need significantly more disk space (since the system
disks will be *copies* of the base image) and cluster creation time
will no longer be pleasantly snappy (due to the copying time - the
images are large and a single copy can take several minutes). So,
having tried to warn you off this option, if you really want to do
this then you'll need this setting:

  SYSTEM_DISK_FORMAT=raw

Note that if you're testing cluster creation with iSCSI shared disks
then you should find a way of switching off raw disks. This avoids
every iSCSI glitch costing you a lot of time while raw disks are
copied.


DEVELOPMENT HINTS
=================

The -e option provides support for executing arbitrary bash code.
This is useful for testing and debugging.

One good use of this option is to test template substitution using
the function substitute_vars(). For example:

  ./autocluster -c example.autocluster -e 'CLUSTER=foo; DISK=foo.qcow2; UUID=abcdef; NAME=foon1; set_macaddrs; substitute_vars templates/node.xml'

This prints templates/node.xml with all appropriate substitutions
done. Some internal variables (e.g. CLUSTER, DISK, UUID, NAME) are
given fairly arbitrary values but the various MAC address strings
are set using the function set_macaddrs().

The -e option is also useful when writing scripts that use
autocluster. Given the complexities of the configuration system you
probably don't want to parse configuration files yourself to
determine the current settings. Instead, you can ask autocluster to
tell you useful pieces of information. For example, say you want to
script creating a base disk image and you want to ensure the image
is marked immutable afterwards:

  base_image=$(autocluster -c $CONFIG -e 'echo $VIRTBASE/$BASENAME.img')
  chattr -V -i "$base_image"

  if autocluster -c $CONFIG create base ; then
      chattr -V +i "$base_image"
  fi

Note that the command that autocluster should run is enclosed in
single quotes. This means that $VIRTBASE and $BASENAME will be
expanded within autocluster after the configuration file has been
loaded.
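
By contrast, with double quotes the variables would (incorrectly) be
expanded by the calling shell, where they are probably unset:

  # Wrong: $VIRTBASE and $BASENAME expand in *your* shell, not autocluster's
  autocluster -c $CONFIG -e "echo $VIRTBASE/$BASENAME.img"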