1 <?xml version="1.0" encoding="iso-8859-1"?>
3 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
8 <refentrytitle>ctdb</refentrytitle>
9 <manvolnum>7</manvolnum>
10 <refmiscinfo class="source">ctdb</refmiscinfo>
11 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb</refname>
17 <refpurpose>Clustered TDB</refpurpose>
21 <title>DESCRIPTION</title>
24 CTDB is a clustered database component in clustered Samba that
25 provides a high-availability load-sharing CIFS server cluster.
29 The main functions of CTDB are:
35 Provide a clustered version of the TDB database with automatic
36 rebuild/recovery of the databases upon node failures.
42 Monitor nodes in the cluster and services running on each node.
Manage a pool of public IP addresses that are used to provide
services to clients. Alternatively, CTDB can be used with LVS.
Combined with a cluster filesystem, CTDB provides a full
high-availability (HA) environment for services such as clustered
Samba, NFS and other services.
63 <title>ANATOMY OF A CTDB CLUSTER</title>
A CTDB cluster is a collection of nodes with 2 or more network
interfaces. All nodes provide network (usually file/NAS) services
to clients. Data served by file services is stored on shared
storage (usually a cluster filesystem) that is accessible by all
nodes.
73 CTDB provides an "all active" cluster, where services are load
74 balanced across all nodes.
79 <title>Recovery Lock</title>
82 CTDB uses a <emphasis>recovery lock</emphasis> to avoid a
83 <emphasis>split brain</emphasis>, where a cluster becomes
84 partitioned and each partition attempts to operate
85 independently. Issues that can result from a split brain
86 include file data corruption, because file locking metadata may
87 not be tracked correctly.
91 CTDB uses a <emphasis>cluster leader and follower</emphasis>
92 model of cluster management. All nodes in a cluster elect one
93 node to be the leader. The leader node coordinates privileged
94 operations such as database recovery and IP address failover.
95 CTDB refers to the leader node as the <emphasis>recovery
96 master</emphasis>. This node takes and holds the recovery lock
97 to assert its privileged role in the cluster.
By default, the recovery lock is implemented using a file
(specified by <parameter>CTDB_RECOVERY_LOCK</parameter>)
residing in shared storage, usually on a cluster filesystem.
To support a recovery lock the cluster filesystem must support
coherent byte-range (fcntl) locking across all nodes. See
<citerefentry><refentrytitle>ping_pong</refentrytitle>
<manvolnum>1</manvolnum></citerefentry> for more details.
111 The recovery lock can also be implemented using an arbitrary
112 cluster mutex call-out by using an exclamation point ('!') as
113 the first character of
114 <parameter>CTDB_RECOVERY_LOCK</parameter>. For example, a value
115 of <command>!/usr/local/bin/myhelper recovery</command> would
run the given helper with the specified arguments. See the
source code relating to cluster mutexes for clues about writing
call-outs.
122 If a cluster becomes partitioned (for example, due to a
123 communication failure) and a different recovery master is
124 elected by the nodes in each partition, then only one of these
125 recovery masters will be able to take the recovery lock. The
126 recovery master in the "losing" partition will not be able to
127 take the recovery lock and will be excluded from the cluster.
128 The nodes in the "losing" partition will elect each node in turn
129 as their recovery master so eventually all the nodes in that
130 partition will be excluded.
CTDB does sanity checks to ensure that the recovery lock is held
as expected.
139 CTDB can run without a recovery lock but this is not recommended
140 as there will be no protection from split brains.
145 <title>Private vs Public addresses</title>
Each node in a CTDB cluster has multiple IP addresses assigned
to it:
A single private IP address that is used for communication
with other nodes in the cluster.
160 One or more public IP addresses that are used to provide
161 NAS or other services.
168 <title>Private address</title>
171 Each node is configured with a unique, permanently assigned
172 private address. This address is configured by the operating
173 system. This address uniquely identifies a physical node in
174 the cluster and is the address that CTDB daemons will use to
175 communicate with the CTDB daemons on other nodes.
Private addresses are listed in the file
<filename>/usr/local/etc/ctdb/nodes</filename>. This file
contains the list of private addresses for all nodes in the
cluster, one per line. This file must be the same on all nodes
in the cluster.
Some users like to put this configuration file in their
cluster filesystem. A symbolic link should be used in this
case.
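<para>
  For example, assuming the cluster filesystem is mounted at
  <filename>/clusterfs</filename> (an illustrative path), the
  shared file could be linked into place on each node like this:
</para>
<screen format="linespecific">
ln -s /clusterfs/ctdb/nodes /usr/local/etc/ctdb/nodes
</screen>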
193 Private addresses should not be used by clients to connect to
194 services provided by the cluster.
197 It is strongly recommended that the private addresses are
198 configured on a private network that is separate from client
199 networks. This is because the CTDB protocol is both
200 unauthenticated and unencrypted. If clients share the private
201 network then steps need to be taken to stop injection of
202 packets to relevant ports on the private addresses. It is
203 also likely that CTDB protocol traffic between nodes could
204 leak sensitive information if it can be intercepted.
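<para>
  As an illustrative sketch only (the interface name and the exact
  rule are assumptions; 4379 is the usual CTDB TCP port), packet
  injection from a shared client network could be blocked with a
  firewall rule on each node such as:
</para>
<screen format="linespecific">
# Drop CTDB protocol traffic arriving on the client-facing
# interface (eth0 here is an example name)
iptables -A INPUT -i eth0 -p tcp --dport 4379 -j DROP
</screen>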
Example <filename>/usr/local/etc/ctdb/nodes</filename> for a four node
cluster:
<screen format="linespecific">
192.168.2.1
192.168.2.2
192.168.2.3
192.168.2.4
</screen>
220 <title>Public addresses</title>
223 Public addresses are used to provide services to clients.
224 Public addresses are not configured at the operating system
225 level and are not permanently associated with a particular
226 node. Instead, they are managed by CTDB and are assigned to
227 interfaces on physical nodes at runtime.
230 The CTDB cluster will assign/reassign these public addresses
231 across the available healthy nodes in the cluster. When one
232 node fails, its public addresses will be taken over by one or
more other nodes in the cluster. This ensures that services
provided by all public addresses are always available to
clients, as long as there are nodes available that are capable
of hosting these addresses.
240 The public address configuration is stored in
241 <filename>/usr/local/etc/ctdb/public_addresses</filename> on
242 each node. This file contains a list of the public addresses
243 that the node is capable of hosting, one per line. Each entry
244 also contains the netmask and the interface to which the
245 address should be assigned. If this file is missing then no
246 public addresses are configured.
250 Some users who have the same public addresses on all nodes
251 like to put this configuration file in their cluster
252 filesystem. A symbolic link should be used in this case.
Example <filename>/usr/local/etc/ctdb/public_addresses</filename> for a
node that can host 4 public addresses, on 2 different
interfaces:
<screen format="linespecific">
10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2
</screen>
268 In many cases the public addresses file will be the same on
269 all nodes. However, it is possible to use different public
270 address configurations on different nodes.
274 Example: 4 nodes partitioned into two subgroups:
276 <screen format="linespecific">
Node 0:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 1:/usr/local/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 2:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2

Node 3:/usr/local/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2
</screen>
294 In this example nodes 0 and 1 host two public addresses on the
295 10.1.1.x network while nodes 2 and 3 host two public addresses
296 for the 10.1.2.x network.
Public address 10.1.1.1 can be hosted by either of nodes 0 or
1 and will be available to clients as long as at least one of
these two nodes is available.
If both nodes 0 and 1 become unavailable then public address
10.1.1.1 also becomes unavailable. 10.1.1.1 cannot be failed
over to nodes 2 or 3 since these nodes do not have this public
address configured.
310 The <command>ctdb ip</command> command can be used to view the
311 current assignment of public addresses to physical nodes.
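<para>
  For example (the output format varies between CTDB versions, so
  this only sketches the invocations):
</para>
<screen format="linespecific">
# Show public address assignments known to the local node
ctdb ip

# Show public address assignments across all nodes
ctdb ip all
</screen>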
318 <title>Node status</title>
321 The current status of each node in the cluster can be viewed by the
322 <command>ctdb status</command> command.
326 A node can be in one of the following states:
334 This node is healthy and fully functional. It hosts public
335 addresses to provide services.
341 <term>DISCONNECTED</term>
344 This node is not reachable by other nodes via the private
345 network. It is not currently participating in the cluster.
346 It <emphasis>does not</emphasis> host public addresses to
347 provide services. It might be shut down.
353 <term>DISABLED</term>
356 This node has been administratively disabled. This node is
357 partially functional and participates in the cluster.
358 However, it <emphasis>does not</emphasis> host public
359 addresses to provide services.
365 <term>UNHEALTHY</term>
368 A service provided by this node has failed a health check
369 and should be investigated. This node is partially
370 functional and participates in the cluster. However, it
371 <emphasis>does not</emphasis> host public addresses to
372 provide services. Unhealthy nodes should be investigated
373 and may require an administrative action to rectify.
382 CTDB is not behaving as designed on this node. For example,
383 it may have failed too many recovery attempts. Such nodes
384 are banned from participating in the cluster for a
385 configurable time period before they attempt to rejoin the
386 cluster. A banned node <emphasis>does not</emphasis> host
387 public addresses to provide services. All banned nodes
should be investigated and may require an administrative
action to rectify.
This node has been administratively excluded from the
cluster. A stopped node does not participate in the cluster
400 and <emphasis>does not</emphasis> host public addresses to
401 provide services. This state can be used while performing
402 maintenance on a node.
408 <term>PARTIALLYONLINE</term>
411 A node that is partially online participates in a cluster
412 like a healthy (OK) node. Some interfaces to serve public
413 addresses are down, but at least one interface is up. See
414 also <command>ctdb ifaces</command>.
423 <title>CAPABILITIES</title>
426 Cluster nodes can have several different capabilities enabled.
427 These are listed below.
433 <term>RECMASTER</term>
436 Indicates that a node can become the CTDB cluster recovery
437 master. The current recovery master is decided via an
438 election held by all active nodes with this capability.
450 Indicates that a node can be the location master (LMASTER)
451 for database records. The LMASTER always knows which node
452 has the latest copy of a record in a volatile database.
The RECMASTER and LMASTER capabilities can be disabled when CTDB
is used to create a cluster spanning WAN links. In this
case CTDB acts as a WAN accelerator.
LVS is a mode where CTDB presents one single IP address for the
entire cluster. This is an alternative to using public IP
addresses and round-robin DNS to load balance clients across the
cluster.
This is similar to using a layer-4 load-balancing switch but with
some restrictions.
One extra LVS public address is assigned on the public network
to each LVS group. Each LVS group is a set of nodes in the
cluster that presents the same LVS public address to the
outside world. Normally there would only be one LVS group
spanning an entire cluster, but in situations where one CTDB
cluster spans multiple physical sites it might be useful to have
one LVS group for each site. There can be multiple LVS groups
in a cluster but each node can only be a member of one LVS group.
Client access to the cluster is load-balanced across the HEALTHY
nodes in an LVS group. If no HEALTHY nodes exist then all
nodes in the group are used, regardless of health status. CTDB
will, however, never load-balance LVS traffic to nodes that are
BANNED, STOPPED, DISABLED or DISCONNECTED. The <command>ctdb
lvs</command> command is used to show which nodes are currently
load-balanced across.
507 In each LVS group, one of the nodes is selected by CTDB to be
508 the LVS master. This node receives all traffic from clients
509 coming in to the LVS public address and multiplexes it across
510 the internal network to one of the nodes that LVS is using.
511 When responding to the client, that node will send the data back
512 directly to the client, bypassing the LVS master node. The
513 command <command>ctdb lvs master</command> will show which node
514 is the current LVS master.
518 The path used for a client I/O is:
522 Client sends request packet to LVSMASTER.
LVSMASTER passes the request on to one node across the
internal network.
533 Selected node processes the request.
538 Node responds back to client.
This means that all incoming traffic to the cluster will pass
through one physical node, which limits scalability. You cannot
send more data to the LVS address than one physical node can
multiplex. This means that you should not use LVS if your I/O
pattern is write-intensive since you will be limited in the
available network bandwidth that node can handle. LVS does work
very well for read-intensive workloads where only smallish READ
requests are going through the LVSMASTER bottleneck and the
majority of the traffic volume (the data in the read replies)
goes straight from the processing node back to the clients. For
read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
560 Note: you can use LVS and public addresses at the same time.
If you use LVS, you must have a permanent address configured for
the public interface on each node. This address must be routable
and the cluster nodes must be configured so that all traffic
back to client hosts is routed through this interface. This is
also required in order to allow samba/winbind on the node to
talk to the domain controller. This LVS IP address cannot be
used to initiate outgoing traffic.
573 Make sure that the domain controller and the clients are
574 reachable from a node <emphasis>before</emphasis> you enable
575 LVS. Also ensure that outgoing traffic to these hosts is routed
576 out through the configured public interface.
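<para>
  As a quick illustrative check (the client and domain controller
  addresses below are placeholders), you can verify which interface
  and gateway outgoing traffic to such hosts would use:
</para>
<screen format="linespecific">
# Show the route that would be used to reach a client host
ip route get 10.1.1.100

# Show the route that would be used to reach the domain controller
ip route get 10.0.0.10
</screen>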
580 <title>Configuration</title>
583 To activate LVS on a CTDB node you must specify the
584 <varname>CTDB_LVS_PUBLIC_IFACE</varname>,
585 <varname>CTDB_LVS_PUBLIC_IP</varname> and
586 <varname>CTDB_LVS_NODES</varname> configuration variables.
587 <varname>CTDB_LVS_NODES</varname> specifies a file containing
588 the private address of all nodes in the current node's LVS
594 <screen format="linespecific">
595 CTDB_LVS_PUBLIC_IFACE=eth1
596 CTDB_LVS_PUBLIC_IP=10.1.1.237
597 CTDB_LVS_NODES=/usr/local/etc/ctdb/lvs_nodes
Example <filename>/usr/local/etc/ctdb/lvs_nodes</filename>:
<screen format="linespecific">
192.168.1.2
192.168.1.3
192.168.1.4
</screen>
Normally any node in an LVS group can act as the LVS master.
Nodes that are highly loaded due to other demands may be
flagged with the "slave-only" option in the
<varname>CTDB_LVS_NODES</varname> file to limit the LVS
functionality of those nodes.
LVS nodes file that excludes 192.168.1.4 from being
the LVS master:
<screen format="linespecific">
192.168.1.2
192.168.1.3
192.168.1.4 slave-only
</screen>
632 <title>TRACKING AND RESETTING TCP CONNECTIONS</title>
635 CTDB tracks TCP connections from clients to public IP addresses,
636 on known ports. When an IP address moves from one node to
637 another, all existing TCP connections to that IP address are
638 reset. The node taking over this IP address will also send
gratuitous ARPs (for IPv4) or neighbour advertisements (for
IPv6). This allows clients to reconnect quickly, rather than
641 waiting for TCP timeouts, which can be very long.
645 It is important that established TCP connections do not survive
646 a release and take of a public IP address on the same node.
647 Such connections can get out of sync with sequence and ACK
648 numbers, potentially causing a disruptive ACK storm.
654 <title>NAT GATEWAY</title>
657 NAT gateway (NATGW) is an optional feature that is used to
658 configure fallback routing for nodes. This allows cluster nodes
659 to connect to external services (e.g. DNS, AD, NIS and LDAP)
when they do not host any public addresses (e.g. when they are
unhealthy).
664 This also applies to node startup because CTDB marks nodes as
665 UNHEALTHY until they have passed a "monitor" event. In this
666 context, NAT gateway helps to avoid a "chicken and egg"
situation where a node needs to access an external service to
become healthy.
671 Another way of solving this type of problem is to assign an
672 extra static IP address to a public interface on every node.
673 This is simpler but it uses an extra IP address per node, while
674 NAT gateway generally uses only one extra IP address.
678 <title>Operation</title>
681 One extra NATGW public address is assigned on the public
682 network to each NATGW group. Each NATGW group is a set of
683 nodes in the cluster that shares the same NATGW address to
684 talk to the outside world. Normally there would only be one
685 NATGW group spanning an entire cluster, but in situations
686 where one CTDB cluster spans multiple physical sites it might
687 be useful to have one NATGW group for each site.
There can be multiple NATGW groups in a cluster but each node
can only be a member of one NATGW group.
694 In each NATGW group, one of the nodes is selected by CTDB to
be the NATGW master and the other nodes are considered to be
696 NATGW slaves. NATGW slaves establish a fallback default route
697 to the NATGW master via the private network. When a NATGW
698 slave hosts no public IP addresses then it will use this route
699 for outbound connections. The NATGW master hosts the NATGW
700 public IP address and routes outgoing connections from
701 slave nodes via this IP address. It also establishes a
702 fallback default route.
707 <title>Configuration</title>
NATGW is usually configured similarly to the following example:
712 <screen format="linespecific">
713 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
714 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
715 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
716 CTDB_NATGW_PUBLIC_IFACE=eth0
717 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
721 Normally any node in a NATGW group can act as the NATGW
722 master. Some configurations may have special nodes that lack
723 connectivity to a public network. In such cases, those nodes
724 can be flagged with the "slave-only" option in the
725 <varname>CTDB_NATGW_NODES</varname> file to limit the NATGW
726 functionality of those nodes.
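<para>
  The <varname>CTDB_NATGW_NODES</varname> file lists the private
  addresses of the nodes in the NATGW group, one per line. As an
  illustration only (the addresses are examples matching the
  configuration above), with the third node unable to become the
  NATGW master:
</para>
<screen format="linespecific">
192.168.1.2
192.168.1.3
192.168.1.4 slave-only
</screen>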
See the <citetitle>NAT GATEWAY</citetitle> section in
<citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for more details of
NATGW configuration.
739 <title>Implementation details</title>
742 When the NATGW functionality is used, one of the nodes is
743 selected to act as a NAT gateway for all the other nodes in
744 the group when they need to communicate with the external
745 services. The NATGW master is selected to be a node that is
746 most likely to have usable networks.
The NATGW master hosts the NATGW public IP address
<varname>CTDB_NATGW_PUBLIC_IP</varname> on the configured public
interface <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and acts as
a router, masquerading outgoing connections from slave nodes
via this IP address. If
<varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is set then it
also establishes a fallback default route to the configured
gateway with a metric of 10. A metric 10 route is used
so it can co-exist with other default routes that may be
available.
A NATGW slave establishes its fallback default route to the
NATGW master via the private network
<varname>CTDB_NATGW_PRIVATE_NETWORK</varname> with a metric of 10.
This route is used for outbound connections when no other
default route is available because the node hosts no public
addresses. A metric 10 route is used so that it can co-exist
with other default routes that may be available when the node
is hosting public addresses.
774 <varname>CTDB_NATGW_STATIC_ROUTES</varname> can be used to
775 have NATGW create more specific routes instead of just default
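<para>
  As an illustrative sketch only (the networks and gateway below are
  placeholders, and the exact syntax is described in the
  <citetitle>NAT GATEWAY</citetitle> section of
  <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
  <manvolnum>5</manvolnum></citerefentry>), a value might look like:
</para>
<screen format="linespecific">
CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24 10.1.1.0/24@10.0.0.1
</screen>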
780 This is implemented in the <filename>11.natgw</filename>
781 eventscript. Please see the eventscript file and the
782 <citetitle>NAT GATEWAY</citetitle> section in
783 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
784 <manvolnum>5</manvolnum></citerefentry> for more details.
791 <title>POLICY ROUTING</title>
794 Policy routing is an optional CTDB feature to support complex
795 network topologies. Public addresses may be spread across
796 several different networks (or VLANs) and it may not be possible
797 to route packets from these public addresses via the system's
798 default route. Therefore, CTDB has support for policy routing
799 via the <filename>13.per_ip_routing</filename> eventscript.
800 This allows routing to be specified for packets sourced from
801 each public address. The routes are added and removed as CTDB
802 moves public addresses between nodes.
806 <title>Configuration variables</title>
809 There are 4 configuration variables related to policy routing:
810 <varname>CTDB_PER_IP_ROUTING_CONF</varname>,
811 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname>,
812 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_LOW</varname>,
813 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_HIGH</varname>. See the
814 <citetitle>POLICY ROUTING</citetitle> section in
815 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
816 <manvolnum>5</manvolnum></citerefentry> for more details.
821 <title>Configuration</title>
824 The format of each line of
825 <varname>CTDB_PER_IP_ROUTING_CONF</varname> is:
829 <public_address> <network> [ <gateway> ]
833 Leading whitespace is ignored and arbitrary whitespace may be
834 used as a separator. Lines that have a "public address" item
835 that doesn't match an actual public address are ignored. This
836 means that comment lines can be added using a leading
character such as '#', since this will never match an IP
address.
842 A line without a gateway indicates a link local route.
846 For example, consider the configuration line:
850 192.168.1.99 192.168.1.1/24
854 If the corresponding public_addresses line is:
858 192.168.1.99/24 eth2,eth3
862 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname> is 100, and
863 CTDB adds the address to eth2 then the following routing
864 information is added:
868 ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
869 ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via
eth2.
878 The <command>ip rule</command> command will show (something
879 like - depending on other public addresses and other routes on
884 0: from all lookup local
885 100: from 192.168.1.99 lookup ctdb.192.168.1.99
886 32766: from all lookup main
887 32767: from all lookup default
891 <command>ip route show table ctdb.192.168.1.99</command> will show:
895 192.168.1.0/24 dev eth2 scope link
899 The usual use for a line containing a gateway is to add a
900 default route corresponding to a particular source address.
901 Consider this line of configuration:
905 192.168.1.99 0.0.0.0/0 192.168.1.1
909 In the situation described above this will cause an extra
910 routing command to be executed:
914 ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
918 With both configuration lines, <command>ip route show table
919 ctdb.192.168.1.99</command> will show:
923 192.168.1.0/24 dev eth2 scope link
924 default via 192.168.1.1 dev eth2
929 <title>Sample configuration</title>
932 Here is a more complete example configuration.
936 /usr/local/etc/ctdb/public_addresses:
192.168.1.98/24	eth2,eth3
192.168.1.99/24	eth2,eth3
941 /usr/local/etc/ctdb/policy_routing:
943 192.168.1.98 192.168.1.0/24
944 192.168.1.98 192.168.200.0/24 192.168.1.254
945 192.168.1.98 0.0.0.0/0 192.168.1.1
946 192.168.1.99 192.168.1.0/24
947 192.168.1.99 192.168.200.0/24 192.168.1.254
948 192.168.1.99 0.0.0.0/0 192.168.1.1
This routes local packets as expected, the default route is as
previously discussed, and packets to 192.168.200.0/24 are
routed via the alternate gateway 192.168.1.254.
961 <title>NOTIFICATIONS</title>
964 When certain state changes occur in CTDB, it can be configured
965 to perform arbitrary actions via notifications. For example,
sending SNMP traps or emails when a node becomes unhealthy or
similar.
971 The default notification script is
972 <filename>/usr/local/etc/ctdb/notify.sh</filename>. It executes
973 files in <filename>/usr/local/etc/ctdb/notify.d/</filename>,
974 which must be executable.
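<para>
  As an illustrative sketch only (the script name, the assumption
  that each script receives the event name as its first argument,
  and the mail command are examples, not part of CTDB itself), a
  simple notification hook might look like:
</para>
<screen format="linespecific">
#!/bin/sh
# Hypothetical /usr/local/etc/ctdb/notify.d/50-mail-admin
event="$1"
case "$event" in
unhealthy|healthy)
    echo "CTDB on $(hostname) is now ${event}" | \
        mail -s "CTDB state change" admin@example.com
    ;;
esac
</screen>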
978 This notification script can be changed via the
979 <varname>CTDB_NOTIFY_SCRIPT</varname> configuration variable.
980 The specified script must be executable.
CTDB currently generates notifications after CTDB changes to
the following states:
989 <member>init</member>
990 <member>setup</member>
991 <member>startup</member>
992 <member>healthy</member>
993 <member>unhealthy</member>
999 <title>DEBUG LEVELS</title>
1002 Valid values for DEBUGLEVEL are:
1006 <member>ERR</member>
1007 <member>WARNING</member>
1008 <member>NOTICE</member>
1009 <member>INFO</member>
1010 <member>DEBUG</member>
1016 <title>REMOTE CLUSTER NODES</title>
It is possible to have a CTDB cluster that spans a WAN link.
For example, where you have a CTDB cluster in your datacentre but you also
want to have one additional CTDB node located at a remote branch site.
This is similar to how a WAN accelerator works, but with the difference
that while a WAN accelerator often acts as a proxy or a MitM, in
the CTDB remote cluster node configuration the Samba instance at the remote site
IS the genuine server, not a proxy and not a MitM, and thus provides 100%
correct CIFS semantics to clients.
Think of the cluster as one single multihomed Samba server where one of
the NICs (the remote node) is very far away.
1034 NOTE: This does require that the cluster filesystem you use can cope
1035 with WAN-link latencies. Not all cluster filesystems can handle
WAN-link latencies! Whether this will provide very good WAN-accelerator
performance or perform very poorly depends entirely
on how well your cluster filesystem handles high latency
for data and metadata operations.
To activate a node as a remote cluster node you need to set
the following two parameters in /etc/sysconfig/ctdb for the remote node:
1045 <screen format="linespecific">
1046 CTDB_CAPABILITY_LMASTER=no
1047 CTDB_CAPABILITY_RECMASTER=no
Verify with the command <command>ctdb getcapabilities</command> that the node no longer
has the recmaster or the lmaster capabilities.
1060 <title>SEE ALSO</title>
1063 <citerefentry><refentrytitle>ctdb</refentrytitle>
1064 <manvolnum>1</manvolnum></citerefentry>,
1066 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1067 <manvolnum>1</manvolnum></citerefentry>,
1069 <citerefentry><refentrytitle>ctdbd_wrapper</refentrytitle>
1070 <manvolnum>1</manvolnum></citerefentry>,
1072 <citerefentry><refentrytitle>ctdb_diagnostics</refentrytitle>
1073 <manvolnum>1</manvolnum></citerefentry>,
1075 <citerefentry><refentrytitle>ltdbtool</refentrytitle>
1076 <manvolnum>1</manvolnum></citerefentry>,
1078 <citerefentry><refentrytitle>onnode</refentrytitle>
1079 <manvolnum>1</manvolnum></citerefentry>,
1081 <citerefentry><refentrytitle>ping_pong</refentrytitle>
1082 <manvolnum>1</manvolnum></citerefentry>,
1084 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
1085 <manvolnum>5</manvolnum></citerefentry>,
1087 <citerefentry><refentrytitle>ctdb-statistics</refentrytitle>
1088 <manvolnum>7</manvolnum></citerefentry>,
1090 <citerefentry><refentrytitle>ctdb-tunables</refentrytitle>
1091 <manvolnum>7</manvolnum></citerefentry>,
1093 <ulink url="http://ctdb.samba.org/"/>
1100 This documentation was written by
1109 <holder>Andrew Tridgell</holder>
1110 <holder>Ronnie Sahlberg</holder>
1114 This program is free software; you can redistribute it and/or
1115 modify it under the terms of the GNU General Public License as
1116 published by the Free Software Foundation; either version 3 of
1117 the License, or (at your option) any later version.
1120 This program is distributed in the hope that it will be
1121 useful, but WITHOUT ANY WARRANTY; without even the implied
1122 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1123 PURPOSE. See the GNU General Public License for more details.
1126 You should have received a copy of the GNU General Public
1127 License along with this program; if not, see
1128 <ulink url="http://www.gnu.org/licenses"/>.