Discussion:
[Duplicity-talk] Running out of disk space -- new full backup?
Grant
2016-01-30 19:50:08 UTC
Permalink
One of the systems I send my backups to is running out of space. Is
the solution to delete all of the backups and run a new full backup?

- Grant
Jacob Godserv
2016-01-30 19:55:26 UTC
Permalink
If you run full backups regularly and keep at least two around, you can
opt to prune the older ones. See remove-all-but-n-full and
remove-all-inc-of-but-n-full for the relevant options.
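For example (a sketch only; the sftp URL below is a placeholder, not a real destination):

```shell
# Keep only the 2 most recent full backups (and their incrementals),
# deleting anything older; the target URL is a placeholder.
duplicity remove-all-but-n-full 2 --force sftp://user@backup-host//backups

# Alternatively, keep all fulls but delete the incrementals belonging
# to every full except the newest one:
duplicity remove-all-inc-of-but-n-full 1 --force sftp://user@backup-host//backups
```

Note that without --force, both commands only list what they would delete.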
Post by Grant
One of the systems I send my backups to is running out of space. Is
the solution to delete all of the backups and run a new full backup?
- Grant
_______________________________________________
Duplicity-talk mailing list
https://lists.nongnu.org/mailman/listinfo/duplicity-talk
Grant
2016-02-03 16:46:33 UTC
Permalink
So far I've only done 1 full backup and all incrementals after that.
Should I re-think this strategy? Is the point of running periodic
full backups to save disk space as per above?

- Grant
Grant
2016-02-12 01:27:27 UTC
Permalink
Can anyone help me figure this out? I've been using duplicity-0.6.26
happily for quite a while but I'm finally up against disk space. One
of my systems has 14GB in /root/.cache/duplicity/ compared to 38GB in
the backup target. That seems crazy so I deleted the cache but the
next duplicity run brought it right back in full 14GB glory.

Will running another full backup and using remove-all-but-n-full and
remove-all-inc-of-but-n-full reduce disk space usage at the backup
target and in the cache?

Is there another way to reduce the disk space used as cache?

- Grant
Scott Hannahs
2016-02-12 14:07:11 UTC
Permalink
A couple of things. A very long chain of incrementals increases the cache size considerably. It is also less reliable: if any one incremental in the chain is corrupted, every subsequent incremental becomes unusable. That is why an infinite chain of incrementals is not a good idea.

Periodic full backups are the answer. That is why I have a script that decides, every 30 days plus or minus a few, whether to run a full backup or an incremental for each of several directories. The random offset keeps the machines on the network out of sync, so they don't all demand a full backup on the same day.
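A minimal sketch of that idea (the state file, interval, jitter computation, and the commented-out duplicity call are all placeholder assumptions, not the actual script):

```shell
#!/bin/sh
# Sketch only: pick full vs. incremental roughly every 30 days, with a
# small per-host offset so machines on the same network do not all run
# their full backup on the same day.

STATE=/tmp/duplicity-last-full          # hypothetical timestamp file
BASE=30                                 # nominal interval in days
# Stable per-host jitter in the range -3..+3 days:
JITTER=$(( $(hostname | cksum | cut -d' ' -f1) % 7 - 3 ))
INTERVAL=$(( BASE + JITTER ))

NOW=$(date +%s)
LAST=$(cat "$STATE" 2>/dev/null || echo 0)
AGE_DAYS=$(( (NOW - LAST) / 86400 ))

if [ "$AGE_DAYS" -ge "$INTERVAL" ]; then
    MODE=full
    echo "$NOW" > "$STATE"             # remember when this full ran
else
    MODE=incremental
fi
echo "selected mode: $MODE (age ${AGE_DAYS}d, interval ${INTERVAL}d)"
# duplicity "$MODE" /source scp://backup-host//backups   # placeholder target
```

The jitter is derived from the hostname rather than $RANDOM so the same host always lands on the same offset.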

-Scott
Grant
2016-02-13 17:50:48 UTC
Permalink
Thanks Scott. How best to transition from a single full backup and
infinite incrementals to running a full backup every 30 days?

Should I just use full-if-older-than, remove-all-but-n-full, and
remove-all-inc-of-but-n-full from now on? Any other cleanup necessary
(cache or otherwise)?

- Grant
Scott Hannahs
2016-02-13 18:00:37 UTC
Permalink
My shortened commands are:

nice -n19 /sw/bin/duplicity --full-if-older-than 40D --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
nice -n19 /sw/bin/duplicity remove-all-but-n-full 1 --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
nice -n19 /sw/bin/duplicity cleanup --force --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload

Run in this order, the new full backup has to complete before the older one is removed.
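The same ordering can be sketched as a single script (SRC and TARGET are placeholders; with set -e, the pruning steps never run if the backup step fails):

```shell
#!/bin/sh
# Sketch: back up first, prune second, so the old full backup is only
# removed after the new one has completed successfully.
# DUPLICITY defaults to a dry run that just prints each command; set
# DUPLICITY=duplicity to run for real. SRC/TARGET are placeholders.
set -e
DUPLICITY="${DUPLICITY:-echo duplicity}"
SRC="/home"
TARGET="scp://backup-host//backups"   # placeholder target URL

$DUPLICITY --full-if-older-than 40D "$SRC" "$TARGET"
$DUPLICITY remove-all-but-n-full 1 --force "$TARGET"
$DUPLICITY cleanup --force "$TARGET"
```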

-Scott
Grant
2016-02-16 02:09:28 UTC
Permalink
Post by Scott Hannahs
nice -n19 /sw/bin/duplicity --full-if-older-than 40D --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
nice -n19 /sw/bin/duplicity remove-all-but-n-full 1 --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
nice -n19 /sw/bin/duplicity cleanup --force --num-retries 5 --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
This means that the full has to complete before removing the older one.
I ran the above (with remove-all-but-n-full 3) and .cache/duplicity
grew in size significantly. As a reminder, that was my second full
backup, every other daily run has been incremental. Do I need to run
remove-all-inc-of-but-n-full in order to make .cache/duplicity
smaller?

- Grant
Scott Hannahs
2016-02-16 18:15:46 UTC
Permalink
You now have the following structure at the target, oldest first:

Full set
Incremental
... lots ...
Incremental
Full set
Incremental

The newest full set is what significantly increased your cache. The size will stabilize once you have three full backups, and when that very long chain of incrementals is finally removed it will settle into a steady state.

Removing the intermediate incrementals is up to you. I haven't tried the remove-all-inc-of-but-n-full option myself, but it removes the incrementals of the older fulls; if losing those intermediate restore points is acceptable, it will of course reduce the cache size as well.
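If you want to watch the chain structure while things stabilize, duplicity can list it (the target URL is a placeholder):

```shell
# Print the full/incremental chain structure stored at the target:
duplicity collection-status scp://backup-host//backups
```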