Discussion:
[Duplicity-talk] webdavs and large sync volume failing
w***@gmx.de
2016-02-02 20:18:32 UTC
Permalink
Hi,

I am trying to use Duplicity and duply to back up approx. 60 GB of data to webdavs storage.

The script is running on a Raspberry Pi with Raspbian Jessie.

Small backups are working.
I tried to run this backup in one go, but it failed; I then split it into 15 GB chunks and it failed again with exit code 137.

Any ideas what to do or where to look?
Is the webdavs access not stable enough for such a backup?
Or is it maybe a performance issue with the Raspberry Pi?

Btw, I started the backup script via a cron job; it then ran for several hours before it failed.
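(Aside: exit code 137 decodes to 128 + 9, i.e. the process was killed with SIGKILL, and on Linux that usually points at the kernel OOM killer. A quick, generic check, nothing duply-specific:)

```shell
# Exit code 137 = 128 + 9, i.e. the process died from signal 9 (SIGKILL);
# on Linux that is very often the kernel OOM killer.
code=137
sig=$((code - 128))
echo "killed by signal $sig ($(kill -l $sig))"

# If it was the OOM killer, the kernel log will say so:
# dmesg | grep -i -e oom -e 'killed process'
```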

Thanks for help.

copy from the output:


Start duply v1.9.1, time is 2016-02-01 21:06:02.
Using profile '/etc/duply/photo'.
Using installed duplicity version 0.6.24, python 2.7.9, gpg 1.4.18 (Home: ~/.gnupg), awk 'mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan', bash '4.3.30(1)-release (arm-unknown-linux-gnueabihf)'.
Autoset found secret key of first GPG_KEY entry 'xxx' for signing.
Checking TEMP_DIR '/home/backup/tmp' is a folder (OK)
Checking TEMP_DIR '/home/backup/tmp' is writable (OK)
TODO: reimplent tmp space check
Test - Encrypt to 'xxx' & Sign with 'xxx' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/home/backup/tmp/duply.xxx_*'(OK)
Export PUB key 'xxx' (OK)
Write file 'gpgkey.xxx.pub.asc' (OK)
Export SEC key 'xxx' (OK)
Write file 'gpgkey.xxx.sec.asc' (OK)

INFO:

duply exported new keys to your profile.
You should backup your changed profile folder now and store it in a safe place.


--- Start running command PRE at 21:06:07.643 ---
Skipping n/a script '/etc/duply/photo/pre'.
--- Finished state OK at 21:06:07.881 - Runtime 00:00:00.237 ---

--- Start running command BKP at 21:06:08.091 ---
Reading globbing filelist /etc/duply/photo/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
No signatures found, switching to full backup.
Attempt 1 failed. error: [Errno 32] Broken pipe
Attempt 1 failed. error: [Errno 32] Broken pipe
Attempt 1 failed. error: [Errno 32] Broken pipe
.
.
.
/usr/bin/duply: line 682: 20288 Killed TMPDIR='/home/backup/tmp' PASSPHRASE=xxx FTP_PASSWORD=xxx duplicity --archive-dir '/home/backup/duply/.duply-cache' --name duply_photo --encrypt-key xxxxx --sign-key xxxxx --verbosity '4' --exclude-globbing-filelist '/etc/duply/photo/exclude' '/mnt/photo' 'webdavs://***@webdav.xxx.xxx'
09:32:39.304 Task 'BKP' failed with exit code '137'.
--- Finished state FAILED 'code 137' at 09:32:39.304 - Runtime 12:26:31.212 ---
e***@web.de
2016-02-02 20:27:40 UTC
Permalink
Post by w***@gmx.de
Hi,
I am trying to use Duplicity and duply to backup appr 60GB data to webdavs storage.
The script is running on a raspberrypi with raspbian jessie
Small backups are working.
I have tried to run this backup in one go, it failed; I then split into 15GB chunks it failed again with exit code 137
Any ideas what to do / where to look for?
can you run duplicity in max. verbosity '-v9' and send me the _complete_ output privately?
Post by w***@gmx.de
Is the webdavs access not stable enough for such backup?
should make no difference. the backup is split into volumes of the same size anyway.
Post by w***@gmx.de
Is it maybe a performance issue with the raspberry
btw I started the backup script via a cron job, it was then running several hours before it failed.
what's your duplicity version? with the latest duplicity you can install lftp and use that as alternative webdav backend via lftp+webdav://
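(For reference, switching a duply profile to the lftp-based backend is a one-line change in the profile's conf; host, path and credentials below are placeholders, not taken from this thread:)

```shell
# /etc/duply/<profile>/conf (excerpt; hypothetical host/credentials)
# plain webdav backend:
#   TARGET='webdavs://user:password@webdav.example.com/backup/photo'
# lftp-based webdav backend (needs the lftp binary installed):
TARGET='lftp+webdavs://user:password@webdav.example.com/backup/photo'
```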

..ede/duply.net
w***@gmx.de
2016-02-06 08:09:12 UTC
Permalink
Hi,

I have reproduced the error with verbosity 9 and sent the logfiles to ede/duply.net.
As stated in that mail, it looks like this is a memory issue (/var/log/messages shows that the oom-killer kicks in).

In parallel I have now upgraded my system to
duplicity 0.7.06
duply 1.11.1

and re-ran the backup with the lftp+webdavs backend as proposed below. This works fine, without errors.

but:

If I check the status of this backup via the same lftp+webdavs backend, it shows no backup present.
If I switch back to the webdavs backend for the status command, I can see the one successful full backup I created with the lftp+webdavs backend.
I tested the same with a small backup repository; there the lftp+webdavs backend works fine both for the backup and for the status command.

A quick investigation shows that the status command for the big backup (roughly 14 GB of photos) has an issue when retrieving the file info from the webdav server. No data is returned - in the log below there should be a long file list after STDOUT:, but it is empty.

——— snip from log ——
CMD: lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls'
Reading results of 'lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls''
STDERR:
---- Resolving host address...
---- 1 address found: xxxxxxx
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND / HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:09 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1857
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:19 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1368
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> GET /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 200 OK
<--- Date: Sat, 06 Feb 2016 06:55:20 GMT
<--- Server: Apache
<--- Last-Modified: Mon, 01 Feb 2016 19:44:00 GMT
<--- ETag: "xxxxxxx ="
<--- Accept-Ranges: bytes
<--- Content-Length: 0
<--- Content-Type: text/html; charset="UTF-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Received all
---- Closing HTTP connection

STDOUT:

Local and Remote metadata are synchronized, no sync needed.

——— snip end from log ——
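(The trace above ends with a GET on the collection URL whose body is empty (Content-Length: 0); a WebDAV listing normally comes from a PROPFIND with "Depth: 1", which returns a 207 Multi-Status XML body. A sketch of what such a body looks like and how the file names come out of it; the XML below is invented for illustration, not from this server:)

```shell
# Hypothetical 207 Multi-Status body, roughly as PROPFIND with "Depth: 1"
# on the collection would return it:
cat > /tmp/multistatus.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response><D:href>/backup/photo/1971-2005/</D:href></D:response>
  <D:response><D:href>/backup/photo/1971-2005/duplicity-full.vol1.difftar.gpg</D:href></D:response>
</D:multistatus>
EOF

# Pull the hrefs out of the body; everything that is not the collection
# itself is a remote file:
grep -o '<D:href>[^<]*</D:href>' /tmp/multistatus.xml | sed 's/<[^>]*>//g'
```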
w***@gmx.de
2016-02-21 17:30:48 UTC
Permalink
Hi Edgar,

no reason to apologize, thanks for taking this up again.
A. where did you get your duplicity? did you build it yourself, which repository.
First I used the original Raspbian duplicity and duply packages (I guess these come from the Debian Jessie ARM branch).
This produced the memory problems with the webdavs backend.
I then upgraded to the latest duplicity/duply by downloading directly from the duply and duplicity project pages and compiling/installing manually.
B. how much ram are we talking about in your raspberrypi? (never played around w/ it)
The Raspberry Pi has 512 MB RAM (of which 434 MB are really available) and a 100 MB swap file.

The tmp folder is on a hard disk.
1. webdav & oom-killing
I will now re-run a full backup with the latest, self-compiled/installed duplicity version and test whether the memory problem re-appears (this will take a while, as the job runs approx. 24 hours).
I will come back on this.
2. lftp not listing
I tried to create a new log using your modified lftpbackend.py, but it fails with an ssl_cacert_path error; output below.
The file /etc/duplicity/cacert.pem exists.
Surprisingly, the error remains even when reverting back to the original lftpbackend.py.
The webdavs backend still works fine for the list command.

What I did:
- replaced the lftpbackend.py in the extracted duplicity archive
- installed it via 'python setup.py install'
- ran the command: duply photo_2010 list

Did I do something wrong when replacing the file?
I am pretty sure this error was not there when I ran the same duply command with the lftp backend some weeks back.


——— output when running the duply command ———
duply photo_2010 list > /var/log/duplydebug
Traceback (most recent call last):
File "/usr/local/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/local/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/local/bin/duplicity", line 1364, in main
action = commandline.ProcessCommandLine(sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py", line 1108, in ProcessCommandLine
globals.backend = backend.get_backend(args[0])
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 223, in get_backend
obj = get_backend_object(url_string)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 209, in get_backend_object
return factory(pu)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backends/lftpbackend.py", line 111, in __init__
if globals.ssl_cacert_path:
AttributeError: 'module' object has no attribute 'ssl_cacert_path'

18:04:55.610 Task 'LIST' failed with exit code '30'.

——— output end ——


Gruss,
Wolle
Wolle,
sorry, lost you in my email tsunami!
there are obviously several issues in the mix here. let's try to tackle them in order below.
first, some more general questions:
A. where did you get your duplicity? did you build it yourself, which repository?
B. how much ram are we talking about in your raspberrypi? (never played around w/ it)
usually memory issues in the past were caused by maintainers fiddling w/ our gpg interface; that's why i am asking where you've got your duplicity from, to check that.
What you can try:
1. webdav & oom-killing
run duplicity w/ a command you know caused the issue in the past, and in a second terminal observe memory usage and processes. try to find out which (sub)processes stuff your memory.
also make sure that /tmp, or the folder you gave for temp files, is not mounted on an in-memory file system.
2. lftp not listing
the lftp backend probably has an issue in the listing code. please run the listing again in max. verbosity, but back up/replace duplicity/backends/lftpbackend.py w/ the copy attached beforehand.
the resulting -v9 log output might tell us why the returned list is empty.
..ede/duply.net
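(The memory check in point 1 above can be as simple as a snapshot in a second terminal; these are generic Linux commands, nothing duply-specific, and the temp-dir path is an example:)

```shell
# Snapshot of the biggest memory users (RSS is in kB); run it repeatedly,
# or wrap it in `watch -n 10 ...`, while the backup is running:
ps -eo pid,rss,comm --sort=-rss | head -n 6

# And make sure the temp directory is not on an in-memory filesystem
# (look for "tmpfs" in the Type column):
df -T /tmp
```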
Hi,
no one has an idea what this problem could be, or where I could continue troubleshooting?
Thanks for help.
BR,
Wolle
<lftpbackend.py>
_______________________________________________
Duplicity-talk mailing list
https://lists.nongnu.org/mailman/listinfo/duplicity-talk
w***@gmx.de
2016-02-22 20:02:27 UTC
Permalink
Hi,

short update on the 'module' object has no attribute 'ssl_cacert_path' error:

I got duplicity back to work by downloading a fresh duplicity tar from http://duplicity.nongnu.org/
and manually removing all files in /usr/local/bin and in /usr/local/lib/python2.7.

After a fresh 'python setup.py install', duplicity worked again with the lftp backend on smaller test repositories.

For completeness, I retested the procedure as above, but this time replacing the lftpbackend.py before the install
=> this resulted again in the ssl_cacert_path error.

Gruss,
wolle
e***@web.de
2016-02-23 14:43:44 UTC
Permalink
yo Wolle,

forget about 'AttributeError: 'module' object has no attribute 'ssl_cacert_path'' - that's my bad, as i forgot that it depends on other changes in my devel branch.

how do your self-installed versions work out so far?

..ede
Local and Remote metadata are synchronized, no sync needed.
——— snip end from log ——
Post by e***@web.de
Post by w***@gmx.de
Hi,
I am trying to use duplicity and duply to back up approx. 60 GB of data to webdavs storage.
The script is running on a Raspberry Pi with Raspbian Jessie.
Small backups are working.
I tried to run this backup in one go and it failed; I then split it into 15 GB chunks and it failed again with exit code 137.
Any ideas what to do / where to look for?
can you run duplicity in max. verbosity '-v9' and send me the _complete_ output privately?
Post by w***@gmx.de
Is the webdavs access not stable enough for such backup?
should make no difference. the backup is split into volumes of the same size anyway.
Post by w***@gmx.de
Is it maybe a performance issue with the raspberry
btw I started the backup script via a cron job, it was then running several hours before it failed.
what's your duplicity version? with the latest duplicity you can install lftp and use that as an alternative webdav backend via lftp+webdav://
..ede/duply.net
_______________________________________________
Duplicity-talk mailing list
https://lists.nongnu.org/mailman/listinfo/duplicity-talk
<lftpbackend.py>
w***@gmx.de
2016-02-23 19:28:04 UTC
Permalink
Hi,
I tested the original lftp+webdavs backend from the latest available duplicity version.
The list command is still not working: it does not find any existing backup.
Edgar, I have sent you a log file to your email address.
Gruss,
Wolle
Post by e***@web.de
yo Wolle,
forget about 'AttributeError: 'module' object has no attribute 'ssl_cacert_path'' that's my bad as i forgot that it depends on other changes in my devel branch.
how do your self-installed versions work out so far?
..ede
Hi,
short update on the 'module' object has no attribute 'ssl_cacert_path' error:
I got duplicity back to work by downloading a fresh duplicity tar from http://duplicity.nongnu.org/
and manually removing all files in /usr/local/bin and in /usr/local/lib/python2.7;
after a fresh python setup.py install, duplicity worked again with the lftp backend on smaller test repositories.
For completeness, I retested the procedure as above, but this time replacing the lftpbackend.py before the install
=> this resulted again in the ssl_cacert_path error
Gruss,
wolle
Post by w***@gmx.de
Hi Edgar,
no reason to apologize, thanks for taking this up again.
A. where did you get your duplicity? did you build it yourself, which repository.
first I used the original raspbian duplicity and duply packages (I guess this is coming from Debian jessie arm branch)
This created the memory problems with the webdavs backend
I then upgraded to latest duplicity/duply directly downloading from the duply and duplicity project pages and then manually compiling/installing.
B. how much ram are we talking about in your raspberrypi? (never played around w/ it)
The Raspberry Pi has 512 MB RAM (of which 434 MB are really available) and a 100 MB swap file
The tmp folder is on a harddisk
1. webdav & oom-killing
I will re-run a full backup now with the latest duplicity version, self-compiled/installed, and test if the memory problem re-appears (this will take a while, as the job runs approx. 24 hours)
I will come back on this
2. lftp not listing
I tried to create a new log using your modified lftpbackend.py, but it fails with the ssl_cacert_path error; output below.
The file /etc/duplicity/cacert.pem exists.
Surprisingly, the error remains even when reverting back to the original lftpbackend.py.
The webdavs backend is still working ok for the list command.
What I did:
- replaced the lftpbackend.py in the extracted duplicity archive
- installed it via "python setup.py install"
- ran the command: duply photo_2010 list
Did I do something wrong when replacing the file?
I am pretty sure this error was not there when I ran the same duply command with the lftp backend some weeks back.
——— output when running the duply command ———
duply photo_2010 list > /var/log/duplydebug
File "/usr/local/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/local/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/local/bin/duplicity", line 1364, in main
action = commandline.ProcessCommandLine(sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/duplicity/commandline.py", line 1108, in ProcessCommandLine
globals.backend = backend.get_backend(args[0])
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 223, in get_backend
obj = get_backend_object(url_string)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backend.py", line 209, in get_backend_object
return factory(pu)
File "/usr/local/lib/python2.7/dist-packages/duplicity/backends/lftpbackend.py", line 111, in __init__
AttributeError: 'module' object has no attribute 'ssl_cacert_path'
18:04:55.610 Task 'LIST' failed with exit code '30'.
——— output end ——
Gruss,
Wolle
Wolle,
sorry, lost you in my email tsunami!
there are obviously several issues in the mix here. let's try to tackle them in order below
first, some more general questions:
A. where did you get your duplicity? did you build it yourself, which repository.
B. how much ram are we talking about in your raspberrypi? (never played around w/ it)
usually memory issues in the past were caused by maintainers fiddling w/ our gpginterface, that's why i am asking where you've got your duplicity from to check that.
e***@web.de
2016-02-23 21:03:58 UTC
Permalink
Post by w***@gmx.de
Hi,
I tested the original lftp+webdavs backend from the latest available duplicity version.
the list command is still not working: it does not find any existing backup
how is the default webdavs backend working out for you now?
Post by w***@gmx.de
Edgar, I have sent you a log file to your email adress.
ok ..ede
w***@gmx.de
2016-02-24 19:30:11 UTC
Permalink
Hi,
Post by w***@gmx.de
the list command is still not working: it does not find any existing backup
how is the default webdavs backend working out for you now?
With the latest duplicity version the webdavs backend seems to be working. Great!
I will stick to this one now.
Thanks for the help.
Gruss,
Wolle
PS: if I can still help find the issues in the lftp backend, I am willing to support that.
e***@web.de
2016-02-23 21:10:13 UTC
Permalink
hey Wolle,
this is wrt. sd2dav.1und1.de.. can you set up a test account for me to debug against?
..ede/duply.net
Post by w***@gmx.de
Hi Edgar,
here the verbose 9 log of a list command failing
This is using the original lftp+webdavs backend from duplicity
I hope this gives some additional info.
If not I can as well install a full development version of duplicity if you send me a tar archive or a link
Gruss,
Wolle
e***@web.de
2016-03-04 15:58:15 UTC
Permalink
ok,
just tried w/ the credentials you delivered, and listing works fine for me w/ latest duplicity 0.7.06 and lftp 4.6.0. the server is terribly slow though!
what's your lftp version? maybe you should consider upgrading it? ..ede
w***@gmx.de
2016-03-04 16:17:30 UTC
Permalink
Hi,
these are exactly the same versions I use, both duplicity and lftp.
The backups where I have problems are big ones: > 10 GB and several thousand files (photos).
Gruss,
Wolle
Post by e***@web.de
ok,
just tried w/ the credentials you delivered and listing works fine for me w/ latest duplicity 0.7.06 and lftp 4.6.0 . the server is terribly slow though!
what's your lftp version? maybe you should consider upgrading it? ..ede
e***@web.de
2016-04-17 14:58:21 UTC
Permalink
hey Wolle,
any news on this issue? i'd assume a timeout on the webserver wrt. the number of files it has to list. if so, there is nothing duplicity can do. you could try raising the volume size to minimize the number of volumes, but that wouldn't guarantee anything, as the number of files will keep increasing on the backend.
..ede/duply.net
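Raising the volume size maps to duplicity's --volsize option (in MB; the default in these versions is 25 MB). In duply this can be sketched as a conf fragment; the profile name and the value 200 are just examples:

```shell
# in /etc/duply/photo/conf: raise the volume size so far fewer
# files accumulate on the webdav server (200 MB volumes instead
# of the 25 MB default):
VOLSIZE=200
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "
```

Larger volumes mean fewer directory entries for the server to list, at the cost of re-uploading more data when a transfer is interrupted.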
w***@gmx.de
2016-04-20 09:02:01 UTC
Permalink
Hi Edgar,
no news on this; I have not tested it further with the lftp+webdavs backend, as I am now using webdav directly.
Though I have to admit I have not run these huge backups any more, as the increments are now much smaller.
Anything I could still test to support the lftp+webdavs backend troubleshooting?
Gruss,
Wolle
Post by e***@web.de
hey Wolle,
any news on this issue? i'd assume a timeout on the webserver wrt. the number of files it has to list. if so, there is nothing duplicity can do. you could try raising the volume size to minimize the number of volumes, but that wouldn't guarantee anything, as the number of files will keep increasing on the backend.
..ede/duply.net
Post by w***@gmx.de
Hi,
I have reproduced the error with verbosity 9 and sent the logfiles to ede/duply.net.
As stated in that mail it looks like this is a memory issue (/var/log/messages shows that oom-killer kicks in)
In parallel I have now upgraded my system to
duplicity 0.7.06
duply 1.11.1
and re-run the backup with backend lftp+webdavs as proposed below. This works fine without errors
but
If I check the status of this backup via the same lftp+webdavs backend, it shows no backup present.
If I switch back to the webdavs backend for the status command, I can see the one successful full backup I created with the lftp+webdavs backend.
I tested the same with a small backup repository; there the lftp+webdavs backend works fine for both the backup and the status command.
A quick investigation shows that the status command for the big backup (roughly 14GB of photos) has an issue when retrieving the file info from the WebDAV server. No data is returned: in the log below there should be a long file list after STDOUT:, but it is empty.
--- snip from log ---
CMD: lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls'
Reading results of 'lftp -c 'source '/home/backup/tmp/duplicity-uzDqze-tempdir/mkstemp-yXoZe9-1'; cd 'backup/photo/1971-2005/' || exit 0; ls''
---- Resolving host address...
---- 1 address found: xxxxxxx
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND / HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:09 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1857
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> PROPFIND /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Depth: 0
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 207 Multi-Status
<--- Date: Sat, 06 Feb 2016 06:55:19 GMT
<--- Server: Apache
<--- ETag: "xxxxxxx ="
<--- Content-Length: 1368
<--- Content-Type: text/xml; charset="utf-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Hit EOF
---- Closing HTTP connection
---- Connecting to sd2dav. xxxxxxx port 443
---- Sending request...
---> GET /backup/photo/1971-2005/ HTTP/1.1
---> Host: sd2dav. xxxxxxx
---> User-Agent: lftp/4.6.0
---> Accept: */*
---> Authorization: Basic xxxxxxx
---> Connection: keep-alive
--->
Certificate: xxxxxxx
Trusted
<--- HTTP/1.1 200 OK
<--- Date: Sat, 06 Feb 2016 06:55:20 GMT
<--- Server: Apache
<--- Last-Modified: Mon, 01 Feb 2016 19:44:00 GMT
<--- ETag: "xxxxxxx ="
<--- Accept-Ranges: bytes
<--- Content-Length: 0
<--- Content-Type: text/html; charset="UTF-8"
<--- Vary: Accept-Encoding
<--- Keep-Alive: timeout=3, max=100
<--- Connection: Keep-Alive
<---
---- Receiving body...
---- Received all
---- Closing HTTP connection
Local and Remote metadata are synchronized, no sync needed.
--- snip end from log ---
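One detail visible in the log: both PROPFIND requests are sent with "Depth: 0", which per the WebDAV spec (RFC 4918) describes only the collection itself, not its members; enumerating a directory's contents requires "Depth: 1". Whether the server answers a depth-1 listing correctly can be checked by hand with curl (hypothetical host, path, and credentials):

```shell
# Ask the WebDAV server to enumerate the collection's children.
# "Depth: 1" lists the directory; "Depth: 0" (as in the log above)
# describes only the directory itself.
curl -s -u user:password -X PROPFIND -H "Depth: 1" \
     https://webdav.example.com/backup/photo/1971-2005/
```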
Post by e***@web.de
Post by w***@gmx.de
Hi,
I am trying to use Duplicity and duply to back up approx. 60GB of data to webdavs storage.
The script is running on a Raspberry Pi with Raspbian Jessie.
Small backups are working.
I tried to run this backup in one go and it failed; I then split it into 15GB chunks and it failed again with exit code 137.
Any ideas what to do / where to look for?