For HP-UX you apparently need to add the -Ae option to CFLAGS. Edit the Makefile and change CFLAGS to: CFLAGS=-Ae -O
If you get "Read-only file system" as an error when sending to an rsync server, then you probably forgot to set "read only = no" for that module.
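A minimal sketch of the relevant rsyncd.conf stanza (the module name and path here are placeholders, not from the text):

```
# hypothetical module in /etc/rsyncd.conf
[mymodule]
        path = /srv/rsync/mymodule
        read only = no
```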
Some people occasionally report that rsync copies every file when they expect it to copy only a small subset. In most cases the explanation is that rsync is not, in fact, copying every file; it is just trying to update file permissions or ownership, and this is failing for some reason. With the -v option, rsync lists a file if it makes any change to it, including minor changes such as a group change. If you think rsync is erroneously copying every file, look at the stats produced with -v and see whether rsync is really sending all the data.
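One way to check is a dry run, which lists what rsync would update without transferring anything (a sketch; the host and paths are placeholders):

```shell
# -n (--dry-run) lists the files rsync would update without sending data;
# compare this list against what -v reported during a real run
rsync -avn remotehost:/src/ /dest/
```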
The "is your shell clean" message and the "protocol mismatch" message are usually caused by having some sort of program in your .cshrc, .profile, .bashrc or equivalent file that writes a message every time you connect. Data written in this way corrupts the rsync data stream. rsync detects this at startup and produces those error messages. A good way to test this is something like: rsh remotemachine /bin/true > test.dat — you should get a file called test.dat of zero length. If test.dat is not of zero length, then your shell is not clean. Look at the contents of test.dat to see what was sent, and look at all the startup files on remotemachine to try to find the problem.
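The test above can be scripted (a sketch; remotemachine is a placeholder, and you may substitute ssh for rsh):

```shell
# A clean shell produces no output for a trivial remote command
rsh remotemachine /bin/true > test.dat
if [ -s test.dat ]; then
    echo "shell is NOT clean; offending output follows:"
    cat test.dat
else
    echo "shell is clean"
fi
```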
Yes, rsync uses a lot of memory. The majority of the memory is used to hold the list of files being transferred. This takes about 100 bytes per file, so if you are transferring 800,000 files then rsync will consume about 80M of memory. It will be higher if you use -H or --delete. To fix this requires a major rewrite of rsync. I do plan on doing that, but I don't know when I'll get to it.
The usual reason for "out of memory" when running rsync is that you are transferring a _very_ large number of files. The size of the files doesn't matter, only the total number of files. As a rule of thumb you should expect rsync to consume about 100 bytes per file in the file list. This happens because rsync builds an internal file list structure containing all the vital details of each file. rsync needs to hold this structure in memory because it is being constantly traversed. I do have a plan for how to rewrite rsync so that it consumes a fixed (small) amount of memory no matter how many files are transferred, but I haven't yet found a spare week of coding time to implement it!
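The rule of thumb above can be checked with simple shell arithmetic (100 bytes per file is the estimate from the text; the ~80M figure is decimal megabytes, which is about 76 MiB):

```shell
# 800,000 files at ~100 bytes per file-list entry
echo $((800000 * 100))            # total bytes: 80000000
echo $((800000 * 100 / 1048576))  # in MiB (integer division): 76
```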
rsync 2.4.3 has a problem with some versions of rsh. The versions of rsh (such as the one on Solaris) that don't handle non-blocking IO will cause all sorts of errors, including "unexpected tag", "multiplexing overflow", etc. The fix is to use an earlier version of rsync, use ssh instead of rsh, or wait for rsync 2.4.4.
On some systems (notably SunOS4) cron supplies what looks like a socket to rsync, so rsync thinks that stdin is a socket. This means that if you start rsync with the --daemon switch from a cron job, rsync ends up thinking it has been started from inetd. The fix is simple - just redirect stdin from /dev/null in your cron job.
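A crontab entry with the suggested redirection might look like this (the time fields and rsync path are illustrative):

```
# Redirecting stdin from /dev/null stops rsync from treating it as a socket
0 2 * * * /usr/local/bin/rsync --daemon < /dev/null
```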
> rsync: Command not found This error is produced when the remote shell is unable to locate the rsync binary in your path. There are 3 possible solutions: 1) install rsync in a "standard" location that is in your remote path; 2) modify your .cshrc, .bashrc etc. on the remote machine to include the path that rsync is in; 3) use the --rsync-path option to explicitly specify the path on the remote machine where rsync is installed. You may find the command: rsh samba 'echo $PATH' useful for determining what your remote path is.
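Option 3 above might look like this (the remote path /opt/local/bin/rsync and the host name are placeholders):

```shell
# Tell the remote shell exactly where the rsync binary lives
rsync -av --rsync-path=/opt/local/bin/rsync remotehost:/src/ /dest/
```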
Jim wrote:
> This seems to imply rsync can't copy files with names containing
> spaces. A couple quick greps through the man page suggests that
> this limitation isn't mentioned.

Short answer: rsync can handle filenames with spaces.

Long answer: rsync handles spaces just like any other unix command-line application. Within the code, spaces are treated just like any other character, so a filename with a space is no different from a filename with any other character in it. The problem with spaces is in the argv processing done to interpret the command line. As with any other unix application, you have to escape spaces in some way on the command line or they will be used to separate arguments.

It is slightly trickier in rsync because rsync sends a command line to the remote system to launch the peer copy of rsync. That command line is interpreted by the remote shell, and thus the spaces need to arrive on the remote system escaped so that the shell doesn't split such filenames into multiple arguments. For example:

  rsync -av fjall:'a long filename' /tmp/

won't work because the remote shell gets an unquoted filename. Instead you have to use:

  rsync -av fjall:'"a long filename"' /tmp/

or a similar construct (there are lots of variants that work). As long as you know that the remote filenames on the command line are interpreted by the remote shell, then it all works fine. I should probably provide the above examples in the docs :-)

Cheers, Andrew
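One of the other working variants mentioned above, sketched (fjall is the example host from the text; whether backslash escaping is honored depends on your remote shell, so this is an assumption):

```shell
# Backslash-escape each space so the remote shell sees one argument
rsync -av fjall:'a\ long\ filename' /tmp/
```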
All messages which originate from the remote computer are sent to stderr. All informational messages from the local computer are sent to stdout. All error messages from the local computer are sent to stderr.
There is a reason for this design, and it would be quite difficult to change. The reason is that rsync uses a remote shell for execution, and the remote shell provides stderr/stdout. The stdout stream is used for the rsync protocol. Mixing error messages into this stdout stream would involve lots of extra overhead and complexity in the protocol, because each message would need to be escaped, which means non-messages would need to be encoded in some way. Instead rsync always sends remote messages to stderr, which means they appear on stderr at the local computer. rsync can't intercept them.
If you have a problem with scripts or cron jobs that produce stderr then I suggest you use your shell to redirect stderr and stdout. For example you could do a cron line like this:
0 0 * * * /usr/local/bin/rsync -avz /foobar /foo > logfile 2>&1
this would send both stderr and stdout to "logfile". The magic bit is the "2>&1", which says to redirect stderr to the same descriptor to which stdout is currently directed.