Introduction
------------

For a developer, the simplest way of running most tests on a local
machine from within the git repository is:

  make test

This runs all UNIT and INTEGRATION tests.

tests/run_tests.sh
------------------

This script can be used to manually run all tests or selected tests,
with a variety of options. For usage, run:

  tests/run_tests.sh -h

If no tests are specified this runs all of the UNIT and INTEGRATION
tests.

By default:

* INTEGRATION tests are run against 3 local daemons

* When testing is complete, a summary is printed showing the tests
  run and their results

Tests can be selected in various ways:

* tests/run_tests.sh UNIT INTEGRATION

  runs all UNIT and INTEGRATION tests, and is like specifying no tests

* tests/run_tests.sh UNIT/tool

  runs all of the "tool" UNIT tests

* tests/run_tests.sh tests/UNIT/eventscripts/00.ctdb.setup.001.sh
  tests/run_tests.sh tests/INTEGRATION/simple/basics.001.listnodes.sh

  each runs a single specified test case

* tests/run_tests.sh UNIT/eventscripts UNIT/tool tests/UNIT/onnode/0001.sh

  runs a combination of UNIT test suites and a single UNIT test

Testing on a cluster
--------------------

INTEGRATION and CLUSTER tests can be run on a real or virtual cluster
using tests/run_cluster_tests.sh (or "tests/run_tests.sh -c"). The
test code needs to be available on all cluster nodes, as well as the
test client node. The test client node needs to have a nodes file
where the onnode(1) command will find it.

If all of the cluster nodes have the CTDB git tree in the same
location as on the test client then no special action is necessary.
The simplest way of doing this is to share the tree to cluster nodes
and test clients via NFS.

Alternatively, the tests can be installed on all nodes. One technique
is to build a package containing the tests and install it on all
nodes. CTDB developers do a lot of testing this way using the
provided sample packaging, which produces a ctdb-tests RPM package.

Finally, if the test code is installed in a different place on the
cluster nodes, then CTDB_TEST_REMOTE_DIR can be set on the test client
node to point to a directory that contains the test_wrap script on the
cluster nodes.
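For example, if a ctdb-tests package installs the test code under some
other prefix on the cluster nodes, the variable might be set like this
on the test client node (the path below is purely illustrative):

```shell
# Illustrative path only: a directory on the cluster nodes that
# contains the test_wrap script
export CTDB_TEST_REMOTE_DIR=/opt/ctdb-tests
```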

Running tests under valgrind
----------------------------

The easiest way of doing this is something like:

  VALGRIND="valgrind -q" tests/run_tests.sh ...

This can be used to cause all invocations of the ctdb tool, test
programs and, with local daemons, the ctdbd daemons themselves to run
under valgrind.

How is the ctdb tool invoked?
+-----------------------------

$CTDB determines how to invoke the ctdb client. If not already set
and $VALGRIND is set, this is set to "$VALGRIND ctdb". If not
already set and $VALGRIND is not set, it is simply set to "ctdb".
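That fallback can be sketched in shell as follows (an illustration of
the behaviour described above, not the harness's actual code):

```shell
# Sketch of the $CTDB fallback: prefer an existing setting, then
# valgrind-wrapped ctdb, then plain ctdb
CTDB=""                  # not already set in this example
VALGRIND="valgrind -q"   # example: run everything under valgrind

if [ -z "$CTDB" ] ; then
    if [ -n "$VALGRIND" ] ; then
        CTDB="$VALGRIND ctdb"
    else
        CTDB="ctdb"
    fi
fi

echo "$CTDB"    # valgrind -q ctdb
```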

Test and debugging variable options
+-----------------------------------

CTDB_TEST_MODE

  Set this environment variable to enable test mode.

  This enables daemons and tools to locate their socket and
  PID file relative to CTDB_BASE.

  When testing with multiple local daemons on a single
  machine this does 3 extra things:

  * Disables checks related to public IP addresses

  * Speeds up the initial recovery during startup at the
    expense of some consistency checking

  * Disables real-time scheduling
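As a sketch, a local-daemon test session might set something like the
following, where CTDB_BASE is the per-daemon base directory mentioned
above (the directory path is purely illustrative):

```shell
# Illustrative only: enable test mode and point a tool at one local
# daemon's base directory, under which its socket and PID file live
export CTDB_TEST_MODE=yes
export CTDB_BASE=/tmp/ctdb-tests/node.0
```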

CTDB_DEBUG_HUNG_SCRIPT_LOGFILE=FILENAME

  FILENAME specifies where log messages should go when
  debugging hung eventscripts. This is a testing option. See
  also CTDB_DEBUG_HUNG_SCRIPT.

  No default. Messages go to stdout/stderr and are logged to
  the same place as other CTDB log messages.

CTDB_SYS_ETCDIR=DIRECTORY

  DIRECTORY containing system configuration files. This is
  used to provide alternate configuration when testing and
  should not need to be changed from the default.

  Default is /etc.

CTDB_RUN_TIMEOUT_MONITOR=yes|no

  Whether CTDB should simulate timing out monitor
  events in local daemon tests.

  Default is no.

CTDB_TEST_SAMBA_VERSION=VERSION

  VERSION is a 32-bit number containing the Samba major
  version in the most significant 16 bits and the minor
  version in the least significant 16 bits. This can be
  used to test CTDB's checking of incompatible versions
  without installing an incompatible version. This is
  probably best set like this:

    export CTDB_TEST_SAMBA_VERSION=$(( (4 << 16) | 12 ))
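  The packing can be sanity-checked in the shell: unpacking the
  example value above recovers major version 4 and minor version 12.

```shell
# Unpack the example value to confirm the 16-bit packing
version=$(( (4 << 16) | 12 ))
major=$(( version >> 16 ))
minor=$(( version & 0xffff ))
echo "version=$version major=$major minor=$minor"
# prints: version=262156 major=4 minor=12
```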

CTDB_VARDIR=DIRECTORY

  DIRECTORY containing CTDB files that are modified at runtime.

  Defaults to /usr/local/var/lib/ctdb.