Discussion: [Duplicity-talk] OSError: [Errno 35] Resource temporarily unavailable
Chris Poole
2011-06-24 14:41:49 UTC
Hi,

I am trying to restore a backup. It gets so far then falls over, ending with
these messages:

Traceback (most recent call last):
File "/usr/local/bin/duplicity", line 1311, in <module>
with_tempdir(main)
File "/usr/local/bin/duplicity", line 1304, in with_tempdir
fn()
File "/usr/local/bin/duplicity", line 1238, in main
restore(col_stats)
File "/usr/local/bin/duplicity", line 569, in restore
restore_get_patched_rop_iter(col_stats)):
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 521, in Write_ROPaths
for ropath in rop_iter:
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 493, in integrate_patch_iters
for patch_seq in collated:
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 378, in yield_tuples
setrorps( overflow, elems )
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 367, in setrorps
elems[i] = iter_list[i].next()
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 112, in difftar2path_iter
tarinfo_list = [tar_iter.next()]
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 328, in next
self.set_tarfile()
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/patchdir.py",
line 322, in set_tarfile
self.current_fp = self.fileobj_iter.next()
File "/usr/local/bin/duplicity", line 606, in get_fileobj_iter
manifest.volume_info_dict[vol_num])
File "/usr/local/bin/duplicity", line 630, in restore_get_enc_fileobj
fileobj = tdp.filtered_open_with_delete("rb")
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/dup_temp.py",
line 114, in filtered_open_with_delete
fh = FileobjHooked(path.DupPath.filtered_open(self, mode))
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/path.py",
line 741, in filtered_open
return gpg.GPGFile(False, self, gpg_profile)
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/gpg.py",
line 152, in __init__
'logger': self.logger_fp})
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/GnuPGInterface.py",
line 357, in run
create_fhs, attach_fhs)
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/GnuPGInterface.py",
line 399, in _attach_fork_exec
process.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

(Please find the complete verbosity 9 log attached.)

Both the backup location and the restore directory are on the same local
(USB) drive, which worked fine when I checked another backup with it just yesterday.

I was using Duplicity 0.6.13, but upgraded to 0.6.14, and the same
error is produced.

If anyone can offer any thoughts as to what the issue is, I'd be very grateful.


Thanks,

Chris Poole
Martin Pool
2011-06-24 14:54:27 UTC
     File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/GnuPGInterface.py",
line 399, in _attach_fork_exec
       process.pid = os.fork()
   OSError: [Errno 35] Resource temporarily unavailable
This normally means the machine is running out of memory or some other
resource and it can't spawn another process.
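For reference, errno 35 on OS X is EAGAIN, and a quick way to see which limits a process is up against is to dump them with Python's resource module (a minimal diagnostic sketch, not part of duplicity):

import errno
import resource

# "Resource temporarily unavailable" is EAGAIN (errno 35 on OS X, 11 on Linux).
print("EAGAIN is errno %d on this platform" % errno.EAGAIN)

# fork() usually fails with EAGAIN when the per-user process limit or memory
# is exhausted; file-descriptor exhaustion normally shows up as EMFILE on
# open() instead, but in this thread both limits turn out to matter.
for name in ("RLIMIT_NPROC", "RLIMIT_NOFILE"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print("%s: soft=%s hard=%s" % (name, soft, hard))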

Martin
Kenneth Loafman
2011-06-24 15:38:31 UTC
Post by Martin Pool
Post by Chris Poole
File "/usr/local/Cellar/duplicity/0.6.14/libexec/duplicity/GnuPGInterface.py",
line 399, in _attach_fork_exec
process.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable
This normally means the machine is running out of memory or some other
resource and it can't spawn another process.
A couple of versions ago I removed threaded_waitpid() from
GnuPGInterface.py. That function kept resource utilization down, especially file
handles, when doing long strings of incremental backups.
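For context, the idea behind a threaded waitpid is simply a background reaper: each gpg child gets collected the moment it exits instead of lingering until the main loop gets around to waiting on it. Very roughly, and only as an illustrative sketch rather than the actual GnuPGInterface.py code:

import os
import threading

def threaded_waitpid(process):
    """Collect a child's exit status in a background thread.

    'process' is any object with a .pid attribute (a stand-in for the
    GnuPGInterface process handle); the thread blocks in waitpid() so
    the finished child never sits around as a zombie."""
    def reap():
        try:
            process.wait_status = os.waitpid(process.pid, 0)[1]
        except OSError:
            process.wait_status = None  # already reaped elsewhere
    t = threading.Thread(target=reap)
    t.daemon = True
    t.start()
    return t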

You can get the previous version at:
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/download/656/duplicitygnupginterf-20090308130055-a8r7crbk76m7m40b-1/GnuPGInterface.py

I'm going to restore this version in the next release. It's really needed.

...Ken
Chris Poole
2011-06-24 16:12:38 UTC
Post by Kenneth Loafman
A couple of versions ago I removed threaded_waitpid() from
GnuPGInterface.py. That function kept resource utilization down, especially file
handles, when doing long strings of incremental backups.
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/download/656/duplicitygnupginterf-20090308130055-a8r7crbk76m7m40b-1/GnuPGInterface.py
Thanks. If raising the "ulimit -n" number won't help, I'll try using this file.
e***@web.de
2011-06-24 16:44:52 UTC
As far as I understand, Ken removed the fix I mentioned earlier. Replacing GnuPGInterface.py should bring it back and help reliably.

Tell us your findings, Ede/duply.net
Post by Chris Poole
Post by Kenneth Loafman
A couple of versions ago I removed threaded_waitpid() from
GnuPGInterface.py. That function kept resource utilization down, especially file
handles, when doing long strings of incremental backups.
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/download/656/duplicitygnupginterf-20090308130055-a8r7crbk76m7m40b-1/GnuPGInterface.py
Thanks. If raising the "ulimit -n" number won't help, I'll try using this file.
_______________________________________________
Duplicity-talk mailing list
https://lists.nongnu.org/mailman/listinfo/duplicity-talk
Chris Poole
2011-06-25 14:03:35 UTC
Post by Kenneth Loafman
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/download/656/duplicitygnupginterf-20090308130055-a8r7crbk76m7m40b-1/GnuPGInterface.py
OK, so I replaced the file with this older version. Sadly, it hasn't
really helped.

The RAM usage just goes up and up, with 50 or more GPG processes
showing up in htop. It's a shame that virtual memory can't be used; I
have 10GB of free HDD space on my internal drive.

I also ran ulimit -n 8192.

Fortunately I don't really _need_ this backup, I have others. I'm only
restoring it to check that it's correct.

Is there nothing else that can be done? Either use a system with more
RAM or perform a full backup to start a new chain?
e***@web.de
2011-06-25 14:32:34 UTC
Post by Chris Poole
Post by Kenneth Loafman
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/download/656/duplicitygnupginterf-20090308130055-a8r7crbk76m7m40b-1/GnuPGInterface.py
OK, so I replaced the file with this older version. Sadly, it hasn't
really helped.
Could you post the -v9 output again, please? There would be a log entry if the fix is not possible.
Post by Chris Poole
The RAM usage just goes up and up, with 50 or more GPG processes
showing up in htop. It's a shame that virtual memory can't be used; I
have 10GB of free HDD space on my internal drive.
I also ran ulimit -n 8192.
Your former output states
Post by Chris Poole
Total number of contained volumes: 1897
so you need at least 4 x 1897 descriptors, plus all the other file descriptors open on the system .. that's much too close to the 8192 you set.

Use ulimit -n 100000 and see if this helps. Raise it until the error disappears; you probably have to be root to do so.
After every change, check with ulimit -n that the setting is actually in effect.
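The arithmetic here is easy to sanity-check from Python before starting a restore (a sketch; the four-descriptors-per-volume figure is the one quoted in this thread):

import resource

volumes = 1897                   # "Total number of contained volumes" from the -v9 log
fds_needed = 4 * volumes         # rough per-volume cost mentioned in this thread
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print("estimated descriptors needed: %d" % fds_needed)
print("current soft/hard limit:      %d / %d" % (soft, hard))
if soft < fds_needed + 512:      # leave headroom for everything else the process opens
    print("soft limit looks too low; raise it with ulimit -n before restoring")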
Post by Chris Poole
Fortunately I don't really _need_ this backup, I have others. I'm only
restoring it to check that it's correct.
Is there nothing else that can be done? Either use a system with more
RAM or perform a full backup to start a new chain?
Try raising the limit until the error is solved, and post the output as requested above; we'll see if the lack of threading in your Python is the issue. Then we can think of other options.

ede/duply.net
Kenneth Loafman
2011-06-25 14:38:59 UTC
Post by e***@web.de
Use ulimit -n 100000 and see if this helps. Raise it until the error disappears; you probably have to be root to do so.
8192 is not a big number for file descriptors. Try doubling or quadrupling
that.

File descriptors are cheap resources. It does not hurt to have too many.
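A wrapper script can also raise the soft limit for its own process tree directly, rather than relying on the shell's ulimit; this is just a sketch of that approach, not something duplicity itself does:

import resource

wanted = 32768
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# The soft limit can be raised up to the hard limit without privileges;
# pushing the hard limit higher needs root (and, on OS X, is further capped
# by kern.maxfilesperproc / launchd's maxfiles setting).
if hard == resource.RLIM_INFINITY:
    new_soft = wanted
else:
    new_soft = min(wanted, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("RLIMIT_NOFILE soft limit is now %d" % new_soft)

Running a plain "ulimit -n" at the top of a wrapper script, as Chris does later in the thread, achieves the same effect from the shell.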

...Ken
Chris Poole
2011-06-26 16:24:38 UTC
Post by e***@web.de
Could you post the -v9 output again, please? There would be a log entry if the fix is not possible.
Sure, see the attached file. (After I replaced GnuPGInterface.py with the
version Ken pointed to.)
Post by Kenneth Loafman
8192 is not a big number for file descriptors.  Try doubling or quadrupling
that.
File descriptors are cheap resources.  It does not hurt to have too many.
I tried changing ulimit to a higher number:

% ulimit -n 100000
limit: setrlimit failed: invalid argument

% sudo ulimit -n 100000
/usr/bin/ulimit: line 4: ulimit: open files: cannot modify limit:
Invalid argument

I also tried adding

kern.maxfiles=100000
kern.maxfilesperproc=100000
kern.maxproc=100000
kern.maxprocperuid=100000

to /etc/sysctl.conf. Still no dice, sadly. All values except maxproc were
correctly changed after a restart, but it didn't improve the situation.

20000 seems to be about as high as ulimit will go (on Mac Snow Leopard, 10.6.8),
which seems to be nowhere near enough for this restore operation.
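For what it's worth, the kernel-level ceilings can be read back after a reboot to confirm which of those sysctl.conf values actually took effect (a small sketch for OS X/BSD; on Linux the equivalents live under /proc/sys):

import subprocess

# kern.maxfilesperproc caps how high ulimit -n can go for a single process;
# kern.maxfiles is the system-wide ceiling.
for key in ("kern.maxfiles", "kern.maxfilesperproc",
            "kern.maxproc", "kern.maxprocperuid"):
    value = subprocess.check_output(["sysctl", "-n", key])
    if isinstance(value, bytes):
        value = value.decode()
    print("%s = %s" % (key, value.strip()))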

Any more ideas?


Thanks

Chris


PS. As a side note, if I were to exclude my photos directory and make an
incremental backup, would the same number of files still need to be processed?
(The photos directory is roughly a quarter of the total backup size, maybe
more.)
Chris Poole
2011-06-28 11:20:39 UTC
A small update:

I couldn't find any way to increase the maximum number of open files, so
I view this as a failed backup (since a restore isn't possible) and a
lesson learned: make a new full backup at least once every few months,
not once every 7!

Fortunately, I take a Voldemort approach to backups, so I have the
same files backed up in a few other places. I've not lost anything.

It's a good reminder to periodically check the backups too, even if
the backup itself is working fine!


Thanks for all the help

Chris
Kenneth Loafman
2011-06-28 11:56:59 UTC
Post by Chris Poole
I couldn't find any way to increase the maximum number of open files, so
I view this as a failed backup (since a restore isn't possible) and a
lesson learned: make a new full backup at least once every few months,
not once every 7!
http://krypted.com/mac-os-x/maximum-files-in-mac-os-x/ <-- see this?
Post by Chris Poole
Fortunately, I take a Voldemort approach to backups, so I have the
same files backed up in a few other places. I've not lost anything.
Good plan. Also good name for a backup system: "Horcrux Backup".
Post by Chris Poole
It's a good reminder to periodically check the backups too, even if
the backup itself is working fine!
Good idea with any system.
Post by Chris Poole
Thanks for all the help
Wish we could have helped.

...Ken
Chris Poole
2011-06-28 12:07:50 UTC
Post by Kenneth Loafman
http://krypted.com/mac-os-x/maximum-files-in-mac-os-x/ <-- see this?
Yes, or at least a similar article. It didn't work though; Mac OS X
seems to handle this differently from Linux and friends (including the
*BSDs).

I have, however, just found a likely solution:

http://superuser.com/questions/302754/increase-the-maximum-number-of-open-file-descriptors-in-snow-leopard

I've just tried this, then set ulimit, and it worked. I have removed
the old backup I was having problems with, so it's too late for me to
try it, but it may prove useful for others in the future. TL;DR, `sudo
launchctl limit maxfiles 1000000` will let you set `ulimit -n 100000`.
(This is both a hard and soft limit.)
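Before kicking off another multi-hour restore it's worth confirming that both layers actually took effect; a small pre-flight check along these lines (illustrative only) could sit at the top of a wrapper script:

import resource
import subprocess

# What launchd will allow ulimit -n to be raised to ("maxfiles soft hard"):
print(subprocess.check_output(["launchctl", "limit", "maxfiles"]).decode().strip())

# What the current process actually got:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("process RLIMIT_NOFILE: soft=%d hard=%d" % (soft, hard))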
Post by Kenneth Loafman
Good plan.  Also good name for a backup system: "Horcrux Backup".
Actually I've written a little wrapper for Duplicity called 'horcrux'.
I'm just writing a little documentation for it. I've been testing it
for months now, using it to run all my Duplicity backups, and it seems
reliable. (I tried others like Duply but wanted to roll my own and
learn, and they didn't fit the exact use model I was after.)
Martin Pool
2011-06-28 12:40:38 UTC
So why is duplicity using so many fds? Because it wants to read from
all the volume files in parallel? Maybe it should (at least
optionally) have a mode where they're each just entirely decrypted to
a temporary file?
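Sketched out, the mode Martin describes might look something like this: decrypt one volume at a time to a temporary file and hand that plain tar file on, so only one gpg child and a couple of descriptors are ever live at once. This is a hypothetical illustration, not duplicity's actual restore path; volume_paths and passphrase are stand-ins:

import os
import subprocess
import tempfile

def decrypted_volumes(volume_paths, passphrase):
    """Yield each encrypted volume as an open, fully decrypted temp file.

    One short-lived gpg process per volume, and each temp file is erased
    as soon as the caller has consumed it, so descriptors and children
    never pile up."""
    for enc_path in volume_paths:
        fd, tmp_path = tempfile.mkstemp(suffix=".difftar")
        os.close(fd)
        # gpg 2.1+ may additionally need --pinentry-mode loopback here.
        proc = subprocess.Popen(
            ["gpg", "--batch", "--quiet", "--yes", "--passphrase-fd", "0",
             "--output", tmp_path, "--decrypt", enc_path],
            stdin=subprocess.PIPE)
        proc.communicate(passphrase.encode())
        if proc.returncode != 0:
            raise RuntimeError("gpg failed on %s" % enc_path)
        try:
            with open(tmp_path, "rb") as fp:
                yield fp
        finally:
            os.remove(tmp_path)

The obvious trade-off, as Chris notes in his reply, is temporary disk space for whichever volume is in flight.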

m
Chris Poole
2011-06-28 13:33:16 UTC
Post by Martin Pool
Maybe it should (at least optionally) have a mode where they're each just
entirely decrypted to a temporary file?
I think this would be a good idea, if there's enough local storage
available. I'm not familiar with Duplicity's internals, but I assume only
a subset of the encrypted volumes would need to be decrypted at any one
time, and each could be erased once processed, to keep the total required
HDD space down.
e***@web.de
2011-06-28 13:31:38 UTC
Post by Chris Poole
http://superuser.com/questions/302754/increase-the-maximum-number-of-open-file-descriptors-in-snow-leopard
I've just tried this, then set ulimit, and it worked. I have removed
the old backup I was having problems with, so it's too late for me to
try it, but it may prove useful for others in the future. TL;DR, `sudo
launchctl limit maxfiles 1000000` will let you set `ulimit -n 100000`.
(This is both a hard and soft limit.)
Good to know, though you should have kept the 7-month incremental chain to try it .. still a nice find for the next OS X victim ;) ...ede/duply.net
Chris Poole
2011-06-28 14:50:06 UTC
Post by e***@web.de
Good to know, though you should have kept the 7-month incremental chain to try it .. still a nice find for the next OS X victim ;) ...ede/duply.net
Actually I did keep it, though since I've restarted the backup it's not
important to me anymore. The fileset is on another external drive, so
I'll give it a try in a day or two.

Thanks for the help.


Chris
e***@web.de
2011-06-28 15:07:56 UTC
Post by Chris Poole
Post by e***@web.de
Good to know, though you should have kept the 7-month incremental chain to try it .. still a nice find for the next OS X victim ;) ...ede/duply.net
Actually I did keep it, though since I've restarted the backup it's not
important to me anymore. The fileset is on another external drive, so
I'll give it a try in a day or two.
Please do, and tell the list if it really 'solves' the issue.
Post by Chris Poole
Thanks for the help.
No wörries ;) ... when your script is released you might want to tell me, and I'll add it on duply.net as an alternative if you drop a line or two about what makes it 'better' than the rest.

regards ede
Chris Poole
2011-06-29 12:59:11 UTC
OK so I tried the launchctl solution. I started after a clean restart, with
everything except system software disabled (i.e., as little RAM
utilised as possible). I then ran `sudo launchctl limit
maxfiles 100000 200000` (soft and hard limits respectively), followed by `ulimit
-n 100000`. Given that I start duplicity from a bash script I wrote, I run the
same ulimit command at the top of the script.

The backup files are on a FW800 HDD. This is the final outcome, essentially the
same as before (and stopping at the same point):

Traceback (most recent call last):
File "/usr/local/bin/duplicity", line 1250, in <module>
with_tempdir(main)
File "/usr/local/bin/duplicity", line 1243, in with_tempdir
fn()
File "/usr/local/bin/duplicity", line 1197, in main
restore(col_stats)
File "/usr/local/bin/duplicity", line 539, in restore
restore_get_patched_rop_iter(col_stats)):
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 521, in Write_ROPaths
for ropath in rop_iter:
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 493, in integrate_patch_iters
for patch_seq in collated:
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 378, in yield_tuples
setrorps( overflow, elems )
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 367, in setrorps
elems[i] = iter_list[i].next()
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 112, in difftar2path_iter
tarinfo_list = [tar_iter.next()]
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 328, in next
self.set_tarfile()
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/patchdir.py",
line 322, in set_tarfile
self.current_fp = self.fileobj_iter.next()
File "/usr/local/bin/duplicity", line 576, in get_fileobj_iter
manifest.volume_info_dict[vol_num])
File "/usr/local/bin/duplicity", line 600, in restore_get_enc_fileobj
fileobj = tdp.filtered_open_with_delete("rb")
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/dup_temp.py",
line 114, in filtered_open_with_delete
fh = FileobjHooked(path.DupPath.filtered_open(self, mode))
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/path.py",
line 740, in filtered_open
return gpg.GPGFile(False, self, gpg_profile)
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/gpg.py",
line 135, in __init__
'logger': self.logger_fp})
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/GnuPGInterface.py",
line 365, in run
create_fhs, attach_fhs)
File "/usr/local/Cellar/duplicity/0.6.13/libexec/duplicity/GnuPGInterface.py",
line 407, in _attach_fork_exec
process.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

This is with the older GnuPGInterface.py that Ken linked to.

I'm not sure if this is still the issue or not, or whether it's something else,
but really I don't care as I've created a new backup chain, splitting the backup
into a few different directories too.

Maybe it's a deeper Mac OS X issue? At the moment I don't have a Linux machine
to test it on that I trust.


Thanks for all the help

Chris

e***@web.de
2011-06-24 15:10:52 UTC
This really smells like all the gpg processes keep hanging around and Python cannot get/deliver any more file handles.

See
http://article.gmane.org/gmane.comp.sysutils.backup.duplicity.general/3630/match=resource+temporarily+unavailable

We have a solution for Linux, but that seems not to work under Mac OS X, as in your case.

Btw, your backup chain is really, really long. You should consider doing fulls from time to time, which circumvents the error above and has the advantage that the whole chain doesn't get corrupted just because one volume might be.

Could you try restoring on a machine with more RAM, OR raise the setting for open file descriptors on OS X? (ulimit -n reports 256 on an OS X box here.)
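One way to check that diagnosis while a restore is running is to count the descriptors held by the process and the gpg children still alive (a rough sketch that works on OS X and Linux):

import os
import subprocess

def open_fd_count():
    # /dev/fd lists the calling process's open descriptors on OS X and Linux.
    return len(os.listdir("/dev/fd"))

def gpg_process_count():
    # Crude system-wide count, roughly what htop shows.
    out = subprocess.check_output(["ps", "-axo", "comm"])
    if isinstance(out, bytes):
        out = out.decode()
    return sum(1 for line in out.splitlines() if "gpg" in line)

print("open fds in this process: %d" % open_fd_count())
print("gpg processes alive:      %d" % gpg_process_count())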


ede/duply.net
Post by Chris Poole
I am trying to restore a backup. It gets so far then falls over, ending with
[...]
process.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable
(Please find the complete verbosity 9 log attached.)
[...]
If anyone can offer any thoughts as to what the issue is, I'd be very grateful.
Martin Pool
2011-06-24 15:56:17 UTC
Post by e***@web.de
This really smells like all the gpg processes keep hanging around and Python cannot get/deliver any more file handles.
[...]
Could you try restoring on a machine with more RAM, OR raise the setting for open file descriptors on OS X? (ulimit -n reports 256 on an OS X box here.)
If fork is failing, as it is here, what you probably actually need to
change is the limit on the number of processes, though it could also
be some other object required by starting a process.

Martin
Chris Poole
2011-06-24 16:11:08 UTC
Post by Martin Pool
If fork is failing, as it is here, what you probably actually need to
change is the limit on the number of processes, though it could also
be some other object required by starting a process.
At the moment I run ulimit -n 1024 before running duplicity, so I'll
try increasing that.
Kenneth Loafman
2011-06-24 16:41:33 UTC
Post by Chris Poole
Post by Martin Pool
If fork is failing, as it is here, what you probably actually need to
change is the limit on the number of processes, though it could also
be some other object required by starting a process.
At the moment I run ulimit -n 1024 before running duplicity, so I'll
try increasing that.
File descriptors are cheap resources. Make it a big number.

...Ken
Chris Poole
2011-06-24 16:09:46 UTC
Post by e***@web.de
Btw, your backup chain is really, really long. You should consider doing fulls from time to time, which circumvents the error above and has the advantage that the whole chain doesn't get corrupted just because one volume might be.
Yes I had planned to do this, but my remote host only has enough room
for one full backup set plus some incrementals, so I'd have to remove
the old set and start again. I will do it, it's just a little
unnerving.
Post by e***@web.de
Could you try restoring on a machine with more RAM, OR raise the setting for open file descriptors on OS X? (ulimit -n reports 256 on an OS X box here.)
This machine has 2GB, and I don't have one with larger memory. (Not
one that I trust, anyway.)

I run ulimit -n 1024 in my script before calling duplicity; would
raising this number help?
Kenneth Loafman
2011-06-24 16:38:23 UTC
Post by Chris Poole
Post by e***@web.de
Btw, your backup chain is really, really long. You should consider doing
fulls from time to time, which circumvents the error above and has the
advantage that the whole chain doesn't get corrupted just because one volume might be.
Yes I had planned to do this, but my remote host only has enough room
for one full backup set plus some incrementals, so I'd have to remove
the old set and start again. I will do it, it's just a little
unnerving.
Post by e***@web.de
Could you try restoring on a machine with more RAM, OR raise the setting
for open file descriptors on OS X? (ulimit -n reports 256 on an OS X box here.)
This machine has 2GB, and I don't have one with larger memory. (Not
one that I trust, anyway.)
I run ulimit -n 1024 in my script before calling duplicity; would
raising this number help?
You need 4 file descriptors for each incremental, plus whatever your system
needs.

...Ken