+the send function. That's how the backend gets to do an async reply: it
+calls this function when it is ready. Also notice that reply_getatr()
+only parses the request, and does not generate the reply. That is done
+by the _send() function.
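To make that split concrete, here is a minimal, self-contained sketch. The struct layout and names below are stand-ins invented for illustration, not the real smbd definitions: the parse function only interprets the request and records which send function to use, and the backend invokes that function once the result is ready.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, cut-down stand-in for smbd's request context. */
struct request_context {
	const char *inbuf;                    /* the wire request (stand-in) */
	char outbuf[64];                      /* the generated reply (stand-in) */
	void (*send_fn)(struct request_context *req);
};

/* Reply generation only: called by the backend when the result is ready.
 * In real smbd this marshals the 'out' section into wire format. */
static void reply_getatr_send(struct request_context *req)
{
	strcpy(req->outbuf, "ATTR-REPLY");
}

/* Request parsing only: no reply generation happens here. */
static void reply_getatr(struct request_context *req)
{
	/* ... parse req->inbuf, then hand the send function to the backend */
	req->send_fn = reply_getatr_send;
}

/* A backend that can answer immediately still just calls send_fn;
 * an async backend would call it later, from an event handler. */
static void backend_getatr(struct request_context *req)
{
	req->send_fn(req);
}
```

The point of the indirection is that the backend never needs to know whether it is completing synchronously or asynchronously; both paths end in the same send function.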
+
+The only missing piece in Samba4 right now that prevents it being
+fully async is that it currently does the low-level socket calls (read
+and write on sockets) in a blocking fashion. It does use select() to
+make it somewhat async, but if a client were to send a partial packet
+and then delay before sending the rest, smbd would be stuck waiting
+for the second half of the packet.
+
+To fix this I plan on making the socket calls async as well, which
+luckily will not involve any API changes in the core of smbd or the
+library. It just involves a little bit of extra code in clitransport.c
+and smbd/request.c. As a side effect I hope that this will also reduce
+the average number of system calls required to answer a request, so we
+may see a performance improvement.
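The partial-packet problem comes down to blocking reads. Here is a small sketch (plain POSIX, not actual Samba code) of the non-blocking read behaviour such a fix relies on: with O_NONBLOCK set, a read on a socket that holds only part of a packet returns the bytes it has, or EAGAIN, instead of blocking, so the server can return to its event loop and service other clients.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read whatever is available without ever blocking.
 * Returns the byte count, or 0 if nothing is ready yet. */
static int read_nonblocking(int fd, char *buf, size_t len)
{
	/* ensure the descriptor is in non-blocking mode */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

	ssize_t n = read(fd, buf, len);
	if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
		return 0;	/* nothing yet: wait for the next event */
	return (int)n;
}
```

A blocking read in the same situation would simply not return until the rest of the packet arrived, which is exactly the stall described above.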
+
+
+NTVFS
+-----
+
+One of the most noticeable changes in Samba4 is the introduction of
+the NTVFS layer. This provided the initial motivation for the design
+of Samba4 and in many ways lies at the heart of the design.
+
+In Samba3 the main file serving process (smbd) combined the handling
+of the SMB protocol with the mapping to POSIX semantics in the same
+code. If you look in smbd/reply.c in Samba3 you see numerous places
+where POSIX assumptions are mixed tightly with SMB parsing code. We
+did have a VFS layer in Samba3, but it was a POSIX-like VFS layer, so
+no matter how you wrote a plugin you could not bypass the POSIX
+mapping decisions that had already been made before the VFS layer was
+called.
+
+In Samba4 things are quite different. All SMB parsing is performed in
+the smbd front end, then fully parsed requests are passed to the NTVFS
+backend. That backend makes any semantic mapping decisions and fills
+in the 'out' portion of the request. The front end is then responsible
+for putting those results into wire format and sending them to the
+client.
+
+Let's have a look at one of those request structures. Go and read the
+definition of "union smb_write" and "enum write_level" in
+include/smb_interfaces.h. (no, don't just skip reading it, really go
+and read it. Yes, that means you!).
+
+Notice the union? That's how Samba4 allows a single NTVFS backend
+interface to handle the several different ways of doing a write
+operation in the SMB protocol. Now let's look at one section of that
+union:
+
+ /* SMBwriteX interface */
+ struct {
+ enum write_level level;
+
+ struct {
+ uint16 fnum;
+ SMB_BIG_UINT offset;
+ uint16 wmode;
+ uint16 remaining;
+ uint32 count;
+ const char *data;
+ } in;
+ struct {
+ uint32 nwritten;
+ uint16 remaining;
+ } out;
+ } writex;
+
+See the "in" and "out" sections? The "in" section is for parameters
+that the SMB client sends on the wire as part of the request. The smbd
+front end parses the wire request and fills in all of those
+parameters. It then calls the NTVFS interface, which looks like this:
+
+ NTSTATUS (*write)(struct request_context *req, union smb_write *io);
+
+and the NTVFS backend does the write request. The backend then fills
+in the "out" section of the writex structure and gives the union back
+to the front end, either by returning or, if done in an async fashion,
+by calling the async send function (see the async discussion elsewhere
+in this document).
+
+The NTVFS backend knows which particular function is being requested
+by looking at io->generic.level. Notice that this enum is also
+repeated inside each of the sub-structures in the union, so the
+backend could just as easily look at io->writex.level and would see
+the same value.
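Putting those pieces together, a backend write handler might look roughly like this. The union below is a cut-down, hypothetical version of "union smb_write" (the real one lives in include/smb_interfaces.h, and the level names here are invented for the sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, trimmed-down stand-in for union smb_write. */
enum write_level { RAW_WRITE_WRITEX, RAW_WRITE_SPLWRITE };

union smb_write {
	/* every member starts with the level, so generic.level is
	 * always valid no matter which call this really is */
	struct { enum write_level level; } generic;

	struct {
		enum write_level level;
		struct {
			uint16_t fnum;
			uint64_t offset;
			uint32_t count;
			const char *data;
		} in;
		struct {
			uint32_t nwritten;
			uint16_t remaining;
		} out;
	} writex;
};

/* Sketch of a backend write: dispatch on the shared level field,
 * do the work, then fill in the "out" section for the front end. */
static int backend_write(union smb_write *io)
{
	switch (io->generic.level) {	/* same value as io->writex.level */
	case RAW_WRITE_WRITEX:
		/* a real backend would write io->writex.in.data here */
		io->writex.out.nwritten = io->writex.in.count;
		io->writex.out.remaining = 0;
		return 0;
	default:
		return -1;		/* level not handled in this sketch */
	}
}
```

Because every union member begins with the level field, dispatching on io->generic.level is safe regardless of which member the front end actually filled in.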
+
+Notice also that some levels (such as splwrite) don't have an "out"
+section. That is because those SMB calls return nothing apart from a
+status code.
+
+So what about status codes? The status code is returned directly by
+the backend NTVFS interface when the call is performed
+synchronously. When performed asynchronously then the status code is
+put into req->async.status before the req->async.send_fn() callback is
+called.
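A sketch of that async completion path, again with hypothetical stand-in types (the real request_context and NTSTATUS definitions live in the smbd tree):

```c
#include <assert.h>

typedef int NTSTATUS;			/* stand-in for the real type */
#define NT_STATUS_OK 0

struct request_context {
	struct {
		NTSTATUS status;
		void (*send_fn)(struct request_context *req);
	} async;
	int reply_sent;			/* stand-in for "reply hit the wire" */
};

/* The front end's send callback reads the status the backend stored. */
static void reply_send(struct request_context *req)
{
	if (req->async.status == NT_STATUS_OK)
		req->reply_sent = 1;
}

/* What an async backend does when the operation finally completes:
 * the status goes into the request first, then the callback fires. */
static void backend_complete(struct request_context *req, NTSTATUS status)
{
	req->async.status = status;
	req->async.send_fn(req);
}
```

Storing the status before invoking the callback is what lets the one send function serve both the sync return path and the async one.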
+
+Currently the most complete NTVFS backend is the CIFS backend. I don't
+expect this backend will be used much in production, but it does
+provide the ideal test case for our NTVFS design. As it offers the
+full capabilities that are possible with a CIFS server we can be sure
+that we don't have any gaping holes in our APIs, and that the front
+end code is flexible enough to handle any advances in the NT-style
+feature sets of Unix filesystems that may come along.
+
+
+Process Models
+--------------
+
+In Samba3 we supported just one process model. It just so happens that
+the process model that Samba3 supported is the "right" one for most
+users, but there are situations where this model is not ideal.
+
+In Samba4 you can choose the smbd process model on the smbd command
+line.
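For example, assuming the -M (process model) option and the model names in the current tree (run smbd --help to see what your build actually supports):

```shell
# everything in one process: handy when running under a debugger
smbd -M single

# the Samba3-like model: one forked process per client connection
smbd -M standard
```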
+
+
+DCERPC binding strings
+----------------------
+
+When connecting to a dcerpc service you need to specify a binding
+string.
+
+The format is:
+
+ TRANSPORT:host:[flags]
+
+where TRANSPORT is either ncacn_np for SMB or ncacn_ip_tcp for RPC/TCP
+
+"host" is an IP address, hostname or NetBIOS name.
+
+"flags" can include an SMB pipe name if using the ncacn_np transport or
+a TCP port number if using the ncacn_ip_tcp transport; otherwise they
+will be auto-determined.
+
+other recognised flags are:
+
+ sign:      enable ntlmssp signing
+ seal:      enable ntlmssp sealing
+ validate:  enable the NDR validator
+ print:     enable debugging of the packets
+ bigendian: use bigendian RPC
+
+
+For example, these all connect to the samr pipe:
+
+ ncacn_np:myserver
+ ncacn_np:myserver:samr
+ ncacn_np:myserver:samr,seal
+ ncacn_np:myserver:\pipe\samr
+ ncacn_np:myserver:/pipe/samr
+ ncacn_np:myserver[samr]
+ ncacn_np:myserver[\pipe\samr]
+ ncacn_np:myserver[/pipe/samr]
+ ncacn_np:myserver:[samr,sign,print]
+ ncacn_np:myserver:[\pipe\samr,sign,seal,bigendian]
+ ncacn_np:myserver:[/pipe/samr,seal,validate]
+
+ ncacn_ip_tcp:myserver
+ ncacn_ip_tcp:myserver:1024
+ ncacn_ip_tcp:myserver[1024]
+ ncacn_ip_tcp:myserver:[1024,sign,seal]
+
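As a toy illustration of the simple colon-separated form, here is a small parser sketch. This is not the real Samba4 binding-string parser (which also handles the bracketed forms above); it just splits TRANSPORT, host, and optional flags at the colons:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parsed pieces of a TRANSPORT:host[:flags] binding string. */
struct binding {
	char transport[32];
	char host[64];
	char flags[64];		/* empty string when no flags are given */
};

/* Returns 0 on success, -1 if the string has no TRANSPORT:host part. */
static int parse_binding(const char *str, struct binding *b)
{
	const char *colon1 = strchr(str, ':');
	if (!colon1)
		return -1;

	/* everything before the first colon is the transport */
	snprintf(b->transport, sizeof(b->transport), "%.*s",
		 (int)(colon1 - str), str);

	const char *colon2 = strchr(colon1 + 1, ':');
	if (colon2) {
		/* host, then the rest is the flags list */
		snprintf(b->host, sizeof(b->host), "%.*s",
			 (int)(colon2 - colon1 - 1), colon1 + 1);
		snprintf(b->flags, sizeof(b->flags), "%s", colon2 + 1);
	} else {
		/* no flags: the remainder is just the host */
		snprintf(b->host, sizeof(b->host), "%s", colon1 + 1);
		b->flags[0] = '\0';
	}
	return 0;
}
```

So "ncacn_np:myserver:samr,seal" splits into transport "ncacn_np", host "myserver", and flags "samr,seal", with the flags defaulting to empty when omitted.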