How to generate NBENCH loadfiles from a network capture trace:
0, start with an empty share

We must start with an empty share to make sure that the responses from the server do not
depend on any additional state that is not described in the network trace.
1, start a capture on the server (or client), using a pcap filter to reduce the workload
[root@h01n002mz RPM]# tcpdump -n -i eth0 -s 0 -w smb.cap host 10.0.0.11 and tcp port 445
2, log in to the share and start doing operations

The operations I do are:
2, open the share in explorer
3, drag a file onto the share
4, read the file 5 times (by dragging it off the share)
4, convert the capture file to a NBENCH loadfile:

genloadfile.sh smb.cap >smb.loadfile
Beware if there are any lines like:

Unknown command:21 1.723006 10.0.0.12 -> 10.0.0.11 SMB Query Information Disk Response
frame.time_relative == 1.723006000 smb.cmd == 0x80 smb.nt_status == 0x00000000

This means there was an SMB command that the generator did not yet know how to convert.
Either delete these lines and repair the loadfile, or enhance the generator to handle that opcode.
The particular command listed above can simply be ignored, since it will not affect our i/o.
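As a sketch of that cleanup, assuming the diagnostics appear in the loadfile as lines starting with the literal text "Unknown command" (adjust the pattern if they span more lines, like the frame.time_relative line above), they can be filtered out before running the benchmark:

```shell
# Create a tiny sample loadfile with one diagnostic line, purely for illustration.
printf '%s\n' \
  'Unknown command:21 1.723006 10.0.0.12 -> 10.0.0.11 SMB Query Information Disk Response' \
  'Flush 0x4d62 NT_STATUS_OK' > smb.loadfile

# Drop the generator's "Unknown command" diagnostic lines from the loadfile.
grep -v '^Unknown command' smb.loadfile > smb.loadfile.clean
```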
5, now you can hopefully run this loadfile like this:
smbtorture //10.0.0.12/data -UAdministrator%test01 BENCH-NBENCH --num-progs=10 --timelimit=120 --loadfile=/shared/smb.loadfile
Which will run 10 threads for 120 seconds, each thread running the same loadfile.
The threads will try to keep the same "speed" of i/o as the original trace, using the timestamps in the loadfile.
This will produce output something like this:
10 96 0.00 MB/sec execute 29 sec latency 22.72 msec
which tells us that 10 threads are running, that we have reached line 96 in the loadfile, and that we have executed for 29 seconds.
The latency of 22.72 means that we are within 22.72 ms of the timestamps in the original trace.
This number will usually never be 0, no matter how fast your server is, since it is only an approximation.
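For reference, a small shell sketch of how the fields of that sample status line line up (threads, current loadfile line, throughput, elapsed seconds, latency):

```shell
# Split the sample status line into its fields using positional parameters.
line='10 96 0.00 MB/sec execute 29 sec latency 22.72 msec'
set -- $line
echo "threads=$1 line=$2 throughput=$3 $4 elapsed=${6}s latency=${9}ms"
# prints: threads=10 line=96 throughput=0.00 MB/sec elapsed=29s latency=22.72ms
```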
However, it can be used to test the scalability of your server:
how high can you make --num-progs before the latency goes above 5000 ms and stays there?
Since each thread runs the same i/o pattern and keeps approximately the same rate as the original client,
this gives an approximation of how many such clients your server can handle in parallel.
If your server is clustered, like ctdb/samba, you can spread the threads out and do i/o to multiple nodes
in the cluster in parallel using an unclist.
This is a file that lists the ip addresses and shares that the threads should round-robin across.

By specifying --unclist=unclist to smbtorture, the threads will be spread out across the listed nodes.
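For illustration, a hypothetical unclist for a three-node cluster (the addresses and share name here are invented, one //server/share entry per line) could look like:

```
//10.0.0.21/data
//10.0.0.22/data
//10.0.0.23/data
```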