<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb-tunables</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="ctdb-tunables.7"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb-tunables — CTDB tunable configuration variables</p></div><div class="refsect1"><a name="idm140250114160032"></a><h2>DESCRIPTION</h2><p>
CTDB's behaviour can be configured by setting run-time tunable
variables. This manual page lists and describes all tunables. See the
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>
<span class="command"><strong>listvars</strong></span>, <span class="command"><strong>setvar</strong></span> and
<span class="command"><strong>getvar</strong></span> commands for more details.
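</p><p>
For example, tunable values can be inspected and changed at run time
with the ctdb tool. The invocations below are illustrative; the exact
output format depends on the CTDB version.
</p><pre class="screen">
# ctdb listvars
MaxRedirectCount        = 3
SeqnumInterval          = 1000
...
# ctdb getvar MonitorInterval
MonitorInterval         = 15
# ctdb setvar MonitorInterval 20
</pre>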
<div class="refsect2"><a name="idm140250113977776"></a><h3>MaxRedirectCount</h3><p>Default: 3</p><p>
If we are not the DMASTER and need to fetch a record across the network
we first send the request to the LMASTER after which the record
is passed onto the current DMASTER. If the DMASTER changes before
the request has reached that node, the request will be passed onto the
"next" DMASTER. For very hot records that migrate rapidly across the
cluster this can cause a request to "chase" the record for many hops
before it catches up with the record.
</p><p>
This tunable is how many hops we allow a request to chase the DMASTER
before we switch back to the LMASTER again to ask for new directions.
</p></div><div class="refsect2"><a name="idm140250112552272"></a><h3>SeqnumInterval</h3><p>Default: 1000</p><p>
Some databases have seqnum tracking enabled, so that Samba is able
to detect asynchronously when there have been updates to the database.
Every time a database is updated its sequence number is increased.
</p><p>
This tunable is used to specify in 'ms' how frequently ctdb will
send out updates to remote nodes to inform them that the sequence
number has increased.
</p></div><div class="refsect2"><a name="idm140250112801136"></a><h3>ControlTimeout</h3><p>Default: 60</p><p>
This is the default setting, in seconds, for the timeout used when
sending a control message to either the local or a remote ctdb daemon.
</p></div><div class="refsect2"><a name="idm140250112625904"></a><h3>TraverseTimeout</h3><p>Default: 20</p><p>
This setting controls how long we allow a traverse process to run.
After this timeout triggers, the main ctdb daemon will abort the
traverse if it has not yet finished.
</p></div><div class="refsect2"><a name="idm140250112994768"></a><h3>KeepaliveInterval</h3><p>Default: 5</p><p>
How often in seconds should the nodes send keepalives to each other.
</p></div><div class="refsect2"><a name="idm140250113999488"></a><h3>KeepaliveLimit</h3><p>Default: 5</p><p>
After how many keepalive intervals without any traffic should a node
wait until marking the peer as DISCONNECTED.
</p><p>
If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1)
seconds before we determine that the node is DISCONNECTED and that we
require a recovery. This limit should not be set too high since we want
a hung node to be detected, and expunged from the cluster, well before
common CIFS timeouts (45-90 seconds) kick in.
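</p><p>
With the default values, for example, a hung node is detected as
DISCONNECTED after at most 5 * (5 + 1) = 30 seconds, comfortably inside
the 45-90 second CIFS window mentioned above.
</p>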
</div><div class="refsect2"><a name="idm140250113832160"></a><h3>RecoverTimeout</h3><p>Default: 20</p><p>
This is the default setting for timeouts for controls when sent from the
recovery daemon. We allow longer control timeouts from the recovery daemon
than from normal use since the recovery daemon often uses controls that
can take a lot longer than normal controls.
</p></div><div class="refsect2"><a name="idm140250113438992"></a><h3>RecoverInterval</h3><p>Default: 1</p><p>
How frequently in seconds should the recovery daemon perform the
consistency checks that determine if we need to perform a recovery or not.
</p></div><div class="refsect2"><a name="idm140250113236784"></a><h3>ElectionTimeout</h3><p>Default: 3</p><p>
When electing a new recovery master, this is how many seconds we allow
the election to take before we either deem the election finished
or we fail the election and start a new one.
</p></div><div class="refsect2"><a name="idm140250113440720"></a><h3>TakeoverTimeout</h3><p>Default: 9</p><p>
This is how many seconds we allow controls to take for IP failover events.
</p></div><div class="refsect2"><a name="idm140250111963200"></a><h3>MonitorInterval</h3><p>Default: 15</p><p>
How often should ctdb run the event scripts to check a node's health.
</p></div><div class="refsect2"><a name="idm140250112163600"></a><h3>TickleUpdateInterval</h3><p>Default: 20</p><p>
How often will ctdb record and store the "tickle" information used to
kickstart stalled TCP connections after a recovery.
</p></div><div class="refsect2"><a name="idm140250115209376"></a><h3>EventScriptTimeout</h3><p>Default: 30</p><p>
Maximum time in seconds to allow an event to run before timing
out. This is the total time for all enabled scripts that are
run for an event, not just a single event script.
</p><p>
Note that timeouts are ignored for some events ("takeip",
"releaseip", "startrecovery", "recovered") and converted to
success. The logic here is that the callers of these events
implement their own additional timeout.
</p></div><div class="refsect2"><a name="idm140250113300784"></a><h3>MonitorTimeoutCount</h3><p>Default: 20</p><p>
How many monitor events in a row need to timeout before a node
is flagged as UNHEALTHY. This setting is useful if scripts
cannot be written so that they do not hang for benign
reasons.
</p></div><div class="refsect2"><a name="idm140250113467376"></a><h3>RecoveryGracePeriod</h3><p>Default: 120</p><p>
During recoveries, if a node has not caused recovery failures during the
last grace period, any record of the recovery failures that the node has
previously caused will be forgiven. This resets the ban counter back to
zero.
</p></div><div class="refsect2"><a name="idm140250112493824"></a><h3>RecoveryBanPeriod</h3><p>Default: 300</p><p>
If a node keeps causing recovery failures, it will eventually become
banned from the cluster.
This controls how long the culprit node will be banned from the cluster
before it is allowed to try to join the cluster again.
Don't set this too small. A node gets banned for a reason and it is usually due
to real problems with the node.
</p></div><div class="refsect2"><a name="idm140250111569040"></a><h3>DatabaseHashSize</h3><p>Default: 100001</p><p>
Size of the hash chains for the local store of the tdbs that ctdb manages.
</p></div><div class="refsect2"><a name="idm140250112323888"></a><h3>DatabaseMaxDead</h3><p>Default: 5</p><p>
How many dead records per hashchain in the TDB database do we allow before
the freelist needs to be processed.
</p></div><div class="refsect2"><a name="idm140250114315872"></a><h3>RerecoveryTimeout</h3><p>Default: 10</p><p>
Once a recovery has completed, no additional recoveries are permitted
until this timeout has expired.
</p></div><div class="refsect2"><a name="idm140250113887312"></a><h3>EnableBans</h3><p>Default: 1</p><p>
When set to 0, this disables BANNING completely in the cluster and thus
nodes can not get banned, even if they break. Don't set to 0 unless you
know what you are doing. You should set this to the same value on
all nodes to avoid unexpected behaviour.
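</p><p>
For example, if banning really must be disabled, one way to apply the
same value consistently on every node is to run the setvar command
through the onnode utility (an illustrative invocation; see
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>
for details):
</p><pre class="screen">
# onnode all ctdb setvar EnableBans 0
</pre>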
</div><div class="refsect2"><a name="idm140250111531200"></a><h3>DeterministicIPs</h3><p>Default: 0</p><p>
When enabled, this tunable makes ctdb try to keep public IP addresses
locked to specific nodes as far as possible. This makes it easier for
debugging since you can know that as long as all nodes are healthy
public IP X will always be hosted by node Y.
</p><p>
The cost of using deterministic IP address assignment is that it
disables part of the logic where ctdb tries to reduce the number of
public IP assignment changes in the cluster. This tunable may increase
the number of IP failover/failbacks that are performed on the cluster.
</p></div><div class="refsect2"><a name="idm140250112209184"></a><h3>LCP2PublicIPs</h3><p>Default: 1</p><p>
When enabled this switches ctdb to use the LCP2 ip allocation
algorithm.
</p></div><div class="refsect2"><a name="idm140250114324944"></a><h3>ReclockPingPeriod</h3><p>Default: x</p><p>
</p></div><div class="refsect2"><a name="idm140250113649088"></a><h3>NoIPFailback</h3><p>Default: 0</p><p>
When set to 1, ctdb will not perform failback of IP addresses when a node
becomes healthy. Ctdb WILL perform failover of public IP addresses when a
node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb
will not fail the addresses back.
</p><p>
Use with caution! Normally when a node becomes available to the cluster
ctdb will try to reassign public IP addresses onto the new node as a way
to distribute the workload evenly across the cluster nodes. Ctdb tries to
make sure that all running nodes host approximately the same number of
public addresses.
</p><p>
When you enable this tunable, CTDB will no longer attempt to rebalance
the cluster by failing IP addresses back to the new nodes. An unbalanced
cluster will therefore remain unbalanced until there is manual
intervention from the administrator. When this parameter is set, you can
manually fail public IP addresses over to the new node(s) using the
'ctdb moveip' command.
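</p><p>
For example, an administrator could inspect the current address layout
and then move one address by hand (a hypothetical address and node
number; the exact output of 'ctdb ip' depends on the CTDB version):
</p><pre class="screen">
# ctdb ip
...
# ctdb moveip 10.1.1.1 0
</pre>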
</div><div class="refsect2"><a name="idm140250112149232"></a><h3>DisableIPFailover</h3><p>Default: 0</p><p>
When enabled, ctdb will not perform failover or failback. Even if a
node fails while holding public IPs, ctdb will not recover the IPs or
assign them to another node.
</p><p>
When you enable this tunable, CTDB will no longer attempt to recover
the cluster by failing IP addresses over to other nodes. This leads to
a service outage until the administrator has manually performed failover
to replacement nodes using the 'ctdb moveip' command.
</p></div><div class="refsect2"><a name="idm140250112776944"></a><h3>NoIPTakeover</h3><p>Default: 0</p><p>
When set to 1, ctdb will not allow IP addresses to be failed over
onto this node. Any IP addresses that the node currently hosts
will remain on the node but no new IP addresses can be failed over
onto the node.
</p></div><div class="refsect2"><a name="idm140250114993088"></a><h3>NoIPHostOnAllDisabled</h3><p>Default: 0</p><p>
If no nodes are healthy then by default ctdb will happily host
public IPs on disabled (unhealthy or administratively disabled)
nodes. This can cause problems, for example if the underlying
cluster filesystem is not mounted. When set to 1 on a node and
that node is disabled, any IPs hosted by this node will be
released and the node will not takeover any IPs until it is no
longer disabled.
</p></div><div class="refsect2"><a name="idm140250113809520"></a><h3>DBRecordCountWarn</h3><p>Default: 100000</p><p>
When set to non-zero, ctdb will log a warning when we try to recover a
database with more than this many records. This will produce a warning
if a database grows uncontrollably with orphaned records.
</p></div><div class="refsect2"><a name="idm140250113368688"></a><h3>DBRecordSizeWarn</h3><p>Default: 10000000</p><p>
When set to non-zero, ctdb will log a warning when we try to recover a
database where a single record is bigger than this. This will produce
a warning if a database record grows uncontrollably with orphaned
sub-records.
</p></div><div class="refsect2"><a name="idm140250112645408"></a><h3>DBSizeWarn</h3><p>Default: 1000000000</p><p>
When set to non-zero, ctdb will log a warning when we try to recover a
database bigger than this. This will produce
a warning if a database grows uncontrollably.
</p></div><div class="refsect2"><a name="idm140250114616832"></a><h3>VerboseMemoryNames</h3><p>Default: 0</p><p>
This feature consumes additional memory. When used, the talloc library
will create more verbose names for all talloc allocated objects.
</p></div><div class="refsect2"><a name="idm140250113342448"></a><h3>RecdPingTimeout</h3><p>Default: 60</p><p>
If the main daemon has not heard a "ping" from the recovery daemon for
this many seconds, the main daemon will log a message that the recovery
daemon is potentially hung.
</p></div><div class="refsect2"><a name="idm140250113524352"></a><h3>RecdFailCount</h3><p>Default: 10</p><p>
If the recovery daemon has failed to ping the main daemon for this many
consecutive intervals, the main daemon will consider the recovery daemon
as hung and will try to restart it to recover.
</p></div><div class="refsect2"><a name="idm140250114731216"></a><h3>LogLatencyMs</h3><p>Default: 0</p><p>
When set to non-zero, this will make the main daemon log any operation that
took longer than this value, in 'ms', to complete.
These include "how long a lockwait child process needed",
"how long it took to write to a persistent database" and
"how long it took to get a response to a CALL from a remote node".
</div><div class="refsect2"><a name="idm140250111628960"></a><h3>RecLockLatencyMs</h3><p>Default: 1000</p><p>
When using a reclock file for split brain prevention, if set to non-zero
this tunable will make the recovery daemon log a message if the fcntl()
call to lock/testlock the recovery file takes longer than this number of
milliseconds.
</p></div><div class="refsect2"><a name="idm140250112069552"></a><h3>RecoveryDropAllIPs</h3><p>Default: 120</p><p>
If we have been stuck in recovery, or in stopped or banned mode, for
this many seconds we will force-drop all held public addresses.
</p></div><div class="refsect2"><a name="idm140250112936832"></a><h3>VacuumInterval</h3><p>Default: 10</p><p>
Periodic interval in seconds when vacuuming is triggered for
volatile databases.
</p></div><div class="refsect2"><a name="idm140250112117984"></a><h3>VacuumMaxRunTime</h3><p>Default: 120</p><p>
The maximum time in seconds for which the vacuuming process is
allowed to run. If the vacuuming process takes longer than this
value, then the vacuuming process is terminated.
</p></div><div class="refsect2"><a name="idm140250111996192"></a><h3>RepackLimit</h3><p>Default: 10000</p><p>
During vacuuming, if the number of freelist records is more
than <code class="varname">RepackLimit</code>, then databases are
repacked to get rid of the freelist records and to avoid
fragmentation.
</p><p>
Databases are repacked only if both
<code class="varname">RepackLimit</code> and
<code class="varname">VacuumLimit</code> are exceeded.
</p></div><div class="refsect2"><a name="idm140250114794416"></a><h3>VacuumLimit</h3><p>Default: 5000</p><p>
During vacuuming, if the number of deleted records is more
than <code class="varname">VacuumLimit</code>, then databases are
repacked to avoid fragmentation.
</p><p>
Databases are repacked only if both
<code class="varname">RepackLimit</code> and
<code class="varname">VacuumLimit</code> are exceeded.
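</p><p>
For example, with the default values a database whose freelist has grown
to 20000 records is repacked during vacuuming only if more than 5000
records have also been deleted; if only 1000 records were deleted, the
database is not repacked even though
<code class="varname">RepackLimit</code> has been exceeded.
</p>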
</div><div class="refsect2"><a name="idm140250113734000"></a><h3>VacuumFastPathCount</h3><p>Default: 60</p><p>
When a record is deleted, it is marked for deletion during
vacuuming. The vacuuming process usually processes this list to purge
the records from the database. If the number of records marked
for deletion is more than VacuumFastPathCount, then the vacuuming
process will scan the complete database for empty records instead
of using the list of records marked for deletion.
</p></div><div class="refsect2"><a name="idm140250114795824"></a><h3>DeferredAttachTO</h3><p>Default: 120</p><p>
When databases are frozen we do not allow clients to attach to the
databases. Instead of returning an error immediately to the application,
the attach request from the client is deferred until the database
becomes available again, at which stage we respond to the client.
</p><p>
This timeout controls how long we will defer the request from the client
before timing it out and returning an error to the client.
</p></div><div class="refsect2"><a name="idm140250112807120"></a><h3>HopcountMakeSticky</h3><p>Default: 50</p><p>
If the database is set to 'STICKY' mode, using the 'ctdb setdbsticky'
command, any record that is seen as very hot and migrating so fast that
its hopcount surpasses this value is set to become a STICKY record for
StickyDuration seconds. This means that after each migration the record
will be kept on the node and prevented from being migrated off the node.
</p><p>
This setting allows one to try to identify such records and stop them from
migrating across the cluster so fast. This will improve performance for
certain workloads, such as locking.tdb if many clients are opening/closing
the same file concurrently.
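</p><p>
For example, sticky handling could be enabled for locking.tdb and the
hotness threshold lowered (illustrative values; 'ctdb setdbsticky' is
described in
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>):
</p><pre class="screen">
# ctdb setdbsticky locking.tdb
# ctdb setvar HopcountMakeSticky 20
# ctdb setvar StickyDuration 600
</pre>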
</div><div class="refsect2"><a name="idm140250113465456"></a><h3>StickyDuration</h3><p>Default: 600</p><p>
Once a record has been found to be fetch-lock hot and has been flagged to
become STICKY, this is for how long, in seconds, the record will be
flagged as a STICKY record.
</p></div><div class="refsect2"><a name="idm140250111515504"></a><h3>StickyPindown</h3><p>Default: 200</p><p>
Once a STICKY record has been migrated onto a node, it will be pinned down
on that node for this number of ms. Any request from other nodes to migrate
the record off the node will be deferred until the pindown timer expires.
</p></div><div class="refsect2"><a name="idm140250111919552"></a><h3>StatHistoryInterval</h3><p>Default: 1</p><p>
Granularity of the statistics collected in the statistics history.
</p></div><div class="refsect2"><a name="idm140250113801408"></a><h3>AllowClientDBAttach</h3><p>Default: 1</p><p>
When set to 0, clients are not allowed to attach to any databases.
This can be used to temporarily block any new processes from attaching
to and accessing the databases.
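</p><p>
For example, to temporarily block new clients from attaching while
maintenance is performed, and then allow attaching again:
</p><pre class="screen">
# ctdb setvar AllowClientDBAttach 0
...
# ctdb setvar AllowClientDBAttach 1
</pre>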
</div><div class="refsect2"><a name="idm140250111550352"></a><h3>RecoverPDBBySeqNum</h3><p>Default: 1</p><p>
When set to zero, database recovery for persistent databases
is record-by-record and the recovery process simply collects the
most recent version of every individual record.
</p><p>
When set to non-zero, persistent databases will instead be
recovered as a whole db and not by individual records. The
node that contains the highest value stored in the record
"__db_sequence_number__" is selected and the copy of that
node's database is used as the recovered database.
</p><p>
By default, recovery of persistent databases is done using the
__db_sequence_number__ record.
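</p><p>
For example, to fall back to record-by-record recovery of persistent
databases:
</p><pre class="screen">
# ctdb setvar RecoverPDBBySeqNum 0
</pre>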
</div><div class="refsect2"><a name="idm140250112196048"></a><h3>FetchCollapse</h3><p>Default: 1</p><p>
When many clients across many nodes try to access the same record at the
same time this can lead to a fetch storm where the record becomes very
active and bounces between nodes very fast. This leads to high CPU
utilization of the ctdbd daemon, trying to bounce that record around
very fast, and poor performance.
</p><p>
This parameter is used to activate a fetch-collapse. A fetch-collapse
is when we track which records have requests in flight so that we only
keep one request in flight from a certain node, even if multiple smbd
processes are attempting to fetch the record at the same time. This
can improve performance and reduce CPU utilization for certain
workloads.
</p><p>
This parameter controls whether we should collapse multiple fetch operations
of the same record into a single request and defer all duplicates or not.
</p></div><div class="refsect2"><a name="idm140250113381232"></a><h3>Samba3AvoidDeadlocks</h3><p>Default: 0</p><p>
Enable code that prevents deadlocks with Samba (only for Samba 3.x).
</p><p>
This should be set to 1 when using Samba version 3.x to enable special
code in CTDB to avoid deadlock with Samba version 3.x. This code
is not required for Samba version 4.x and must not be enabled for it.
</p></div></div><div class="refsect1"><a name="idm140250113309648"></a><h2>SEE ALSO</h2><p>
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdbd</span>(1)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>,
<span class="citerefentry"><span class="refentrytitle">ctdb</span>(7)</span>,
<a class="ulink" href="http://ctdb.samba.org/" target="_top">http://ctdb.samba.org/</a>
</p></div></div></body></html>