dbench is a tool for measuring filesystem and network performance.

The rest of this README came from a very early version of dbench,
which I haven't updated. I'll get around to that some day ...

------------------------------

Netbench is a terrible benchmark, but it's an "industry standard" and
it's what is used in the press to rate Windows fileservers like Samba.

The big problem with netbench for the open source community is that
very few people who work on open source software have the facilities
to run it properly. You need a lab with between 60 and 150 Windows
PCs, all on switched fast ethernet, and a really grunty server (say a
quad Xeon with 1GB of RAM and hardware RAID). Then you need some way
to nurse all those machines along so they will run a very fussy
benchmark suite without crashing. Not easy, and very expensive. Only
one person in the open source community that I know of has access to
such a lab (Jeremy).

In order for the development methodologies of the open source
community to work we need to be able to run this benchmark in an
environment that a bunch of us have access to. We need the source to
the benchmark so we can see what it does. We need to be able to split
it into pieces to look for individual bottlenecks. In short, we need
to open up netbench to the masses.

To do this I have written three tools, dbench, tbench and
smbtorture. All three read a load description file called client.txt
that was derived from a network sniffer dump of a real netbench
run. client.txt is about 4MB and describes the 90 thousand operations
that a netbench client does in a typical netbench run. They parse
client.txt and use it to produce the same load without having to buy a
huge lab. They can simulate any number of simultaneous clients.

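To give a feel for how the tools use it, here is a sketch of the
parsing half in Python. The line format shown (an operation name
followed by its arguments) is an invented stand-in for illustration;
the real client.txt layout differs:

```python
# Minimal sketch of a client.txt-style load parser. The line format
# used here (OPERATION arg1 arg2 ...) is an invented stand-in for
# illustration; the real client.txt layout differs.

def parse_load(text):
    """Turn a load description into a list of (operation, args) pairs."""
    ops = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        ops.append((fields[0].upper(), fields[1:]))
    return ops

# A simulated client would walk this list and issue the matching
# filesystem call (dbench) or socket request (tbench) for each entry.
sample = """
OPEN \\clients\\client1\\file0.txt
WRITE \\clients\\client1\\file0.txt 4096
CLOSE \\clients\\client1\\file0.txt
"""
ops = parse_load(sample)
```

Each simulated client just replays its list as fast as it can; running
N of them in parallel is what produces the load.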
client.txt must either be in the working directory, or specified on
the command line with the -c option.

dbench produces only the filesystem load. It does all the same IO
calls that the smbd server in Samba would produce when confronted with
a netbench run. It does no networking calls.

You can get dbench from ftp://samba.org/pub/tridge/dbench/
You run it as "dbench N", where N is the number of clients to
simulate. It gives three numbers like this (this is from a 144
client run on a quad Xeon box):

Throughput 40.6701 MB/sec (NB=50.8376 MB/sec 406.701 MBit/sec)

The first is the true throughput as seen by dbench. The second and
third numbers are "netbench scaled" numbers that give the throughput
that would be seen by Win9X boxes after taking into account the client
file caching performed by oplocks. They are given in both MB/sec and
MBit/sec, as different netbench reports use different scales.

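You can check how the three figures relate from that sample output.
The x8 step is the usual MB/sec to MBit/sec conversion; the 1.25
factor below is inferred from this single sample run (it models the
client-side caching win from oplocks), not a documented constant:

```python
# How the three figures in the sample dbench output relate. The x8
# conversion is just MB/sec -> MBit/sec; the 1.25 factor is inferred
# from this one sample (it models the client-side caching win from
# oplocks), not a documented constant.
raw = 40.6701            # true throughput seen by dbench, MB/sec
nb = raw * 1.25          # "netbench scaled" MB/sec
mbit = nb * 8            # the same scaled figure in MBit/sec

print(f"NB={nb:.4f} MB/sec {mbit:.3f} MBit/sec")
# prints: NB=50.8376 MB/sec 406.701 MBit/sec
```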
tbench produces only the TCP and process load. It does the same socket
calls that smbd would do under a netbench load. It does no filesystem
calls. The idea behind tbench is to eliminate smbd from the netbench
test, as though the smbd code could be made infinitely fast. The
throughput results of tbench tell us how fast a netbench run could go
if we eliminated all filesystem IO and SMB packet processing. tbench
is built as part of the dbench package.

To run tbench first run tbench_srv on the server. Then run "tbench N SERVER"
on the client. N is the number of clients to simulate and SERVER is
the hostname of the server. The results are given in the same format
as dbench.

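If you want a feel for what tbench measures, here is a much simplified
sketch in Python: a server thread that answers fixed-size requests and
a client that times the round trips. The sizes and iteration count are
arbitrary stand-ins for the recorded trace:

```python
# Very simplified tbench-style load generator: a server thread that
# answers fixed-size requests, and one client measuring throughput.
# The sizes and iteration count are arbitrary stand-ins; real tbench
# replays the request/response sizes recorded in client.txt.
import socket
import threading
import time

REQ, RESP, ITERS = 512, 4096, 200

def recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed early")
        data += chunk
    return data

def server(listener):
    conn, _ = listener.accept()
    with conn:
        for _ in range(ITERS):
            recv_exact(conn, REQ)
            conn.sendall(b"\0" * RESP)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.socket()
client.connect(listener.getsockname())
start = time.perf_counter()
got = 0
for _ in range(ITERS):
    client.sendall(b"\0" * REQ)
    got += len(recv_exact(client, RESP))
elapsed = time.perf_counter() - start
client.close()

mb_moved = ITERS * (REQ + RESP) / 1e6
print(f"{mb_moved / elapsed:.1f} MB/sec over loopback")
```

Because no filesystem calls happen at all, the number this prints is
an upper bound of the same kind tbench gives: how fast things could go
if smbd were free.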
smbtorture is the stress tester from the Samba suite. I've recently
added a stress test that allows you to simulate the full netbench
benchmark, including all network traffic and filesystem IO.

To run smbtorture you first need to install Samba version 2.0.X. The
binary distributions at ftp://ftp.samba.org/pub/samba/bin-pkgs/redhat/
would do fine if you don't want to compile it yourself. Then set up a
netbench share on the fastest disk you have, making sure you have at
least 25MB free per simulated client. The simplest smb.conf would look
something like this:

[netbench]
   path = /data/netbench
   read only = no

Then you need smbtorture. You can either grab a precompiled i386
Red Hat 5.2 binary from ftp://samba.org/pub/tridge/dbench/smbtorture.gz
or you can follow the instructions at http://samba.org/cvs.html to
download the SAMBA_2_0 branch of the Samba CVS tree and use
"make smbtorture" to build it.

Finally, you'll need client.txt from
ftp://samba.org/pub/tridge/dbench/dbench.tgz in the same directory
that you run smbtorture from.

To run it you do this:

smbtorture //localhost/netbench -U% -N 32 NBW95

That will run a 32 client load. You can, of course, also run
smbtorture against a different SMB server (such as an NT server) to
give comparative results for any client load that the server can
handle.

Even better is to run smbtorture on one machine and smbd on another,
connected by a very fast network (such as gigabit ethernet). That will
stop the smbtorture code itself from loading the server and will also
test the network driver instead of the loopback driver.

To give you an idea of what to expect, here are some results on a quad
Xeon machine running Linux 2.2.9 with 1GB of memory (it has 4GB but
Linux only uses 1GB by default). The machine also has a 45GB hardware
RAID system and an Alteon AceNIC gigabit network card.

The results below are in netbench MB/sec (the NB= number in the
result). Multiply by 8 to get the MBit/sec numbers that Mindcraft used
in their graphs. The first column is the number of simulated clients,
the second is the result.

With tbench on loopback I get:

With tbench running across the gigabit network (using a dual
processor Origin200 as the client) I get:

With smbtorture running over loopback I get:

With smbtorture running across the gigabit network (using a dual
processor Origin200 as the client) I get:

With smbtorture running across the gigabit network but with smbd
modified so that write_file() and read_file() are null operations
(which eliminates the file IO) I get:

The above results show that, at least for this hardware configuration,
the problem isn't the filesystem code or the RAID drivers. More tests
will be needed to find out exactly what the problem is, but it looks
like a TCP scaling problem.

Hopefully Jeremy will be able to run smbtorture against NT on the same
hardware sometime in the next week so we have direct numbers for
comparison, but from looking at the Mindcraft numbers under netbench
we would expect NT to get about the following:

So we do well by comparison with small client loads but fall behind
quite a lot with large loads. Note that the numbers in the Mindcraft
report for Linux/Samba are quite a long way behind what I give above
because Mindcraft did a hopeless job of tuning Linux/Samba.

comparison with netbench
------------------------

An initial comparison with real netbench results shows that smbtorture
does produce very similar throughput numbers. They aren't exactly the
same, but they are similar enough for us to target our tuning efforts
and expect to see improvements reflected in real netbench runs. When
we find something that looks promising we can get Jeremy to run a real
netbench test to confirm it.

The tbench results really pointed at the problem being the Linux TCP
stack. I made a quick (and very unsafe!) hack to Samba and the Linux
kernel to see if I could remove the lock_kernel() in sys_sendto() and
sys_recvfrom() for smbd processes by passing a MSG_NOLOCK flag in
send() and recv(). That gave an enormous improvement in the loopback
tbench results, and in the loopback smbtorture results I also saw a
big improvement.

That's a 50% improvement. I suspect the numbers will be higher with a
real netbench run, as it won't have the overhead of running 80
smbtorture clients on the same box as the server.

One question some people may ask is whether the above represents a
realistic load on a fileserver. It doesn't. Nearly 90% of the
read/write IO operations in netbench are writes, whereas in a "normal"
office load reads dominate. Also, the load is *much* higher than a
normal office PC would put on a server. There aren't many office PCs
that write over a hundred megabytes of data to a server in a few
minutes, unless maybe they are copying a CD.

That doesn't mean the benchmark is useless; it just means you
shouldn't use it for purchasing decisions unless you really
understand the results and how they relate to your environment.

smbtorture and dbench are released under the terms of the GNU General
Public License version 3 or later.