
This seems very similar to Borg Backup [1], so I'm interested to hear from others who have used both on how it compares.

More generally, I've been looking for a solution that helps distribute backups in a peer-to-peer way. I have a few friends with their own home servers, and we want to replicate backups across each other's servers for geographical redundancy. Currently, I have a script that uses rsync to copy some tar archives over daily, but this doesn't scale well as more peers join our backup-sharing group, since it requires each of them to grant me SSH access.

What I need is a decentralized network to share and retrieve backups from peers. I tried using dat [2] with a Borg Backup repository inside it, but ran into some nasty issues with dat that caused it to crash regularly and, on one occasion, even corrupt the data.

Does anyone have any suggestions for such a situation?

[1] https://www.borgbackup.org/
[2] https://github.com/datproject/dat



This might not be quite what you're looking for, but Syncthing[1] is a popular P2P file sharing solution. You could use Restic to make backups to a shared Syncthing folder.

[1]: https://syncthing.net/


Thanks for the suggestion. I've tried Syncthing, but it seems to still require that users are explicitly added (i.e., no public access [1]). I'd prefer a solution where anyone could decide to start helping replicate backups, without me having to add them in some way.

[1] https://github.com/syncthing/syncthing/issues/1942


Then you should look into IPFS :)

https://github.com/ipfs/ipfs


As far as I know, IPFS objects are immutable. Is your suggestion that I publish some sort of index containing all of the IPFS links to my backups, and then my friends can automate pinning those links? I think that could work, but it would be pretty bandwidth intensive since there would be no deduplication (I'd have to also encrypt since I wouldn't want everyone to have my files).


I'm not suggesting anything specific. Personally, I think allowing permanent public access to your server backups sounds like a terrible idea, but it's your data and you choose your threat model.


CrashPlan Home used to support this, but alas I've found no alternative.


I've been looking for something similar. I've coded up a few pieces in Go and have been trying to fit all the pieces into a coherent model before starting any serious coding.

My plan was to use gRPC + Go for communications, a DHT to find peers, and klauspost/reedsolomon (on GitHub) for adding redundancy.

I wanted to support a simple client in Go, tracking all filesystem state locally in something like SQLite and of course encrypting before upload. The local state would, of course, be backed up as well.
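A rough sketch of what I mean by local state, assuming SQLite via mattn/go-sqlite3 (the table layout here is just illustrative, not something I've actually settled on):

    package main

    import (
        "crypto/sha256"
        "database/sql"
        "encoding/hex"
        "log"
        "os"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "state.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // One row per backed-up file: path, content hash, modification time.
        _, err = db.Exec(`CREATE TABLE IF NOT EXISTS files (
            path  TEXT PRIMARY KEY,
            hash  TEXT NOT NULL,
            mtime INTEGER NOT NULL
        )`)
        if err != nil {
            log.Fatal(err)
        }

        path := "example.txt" // placeholder path for the sketch
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        info, err := os.Stat(path)
        if err != nil {
            log.Fatal(err)
        }

        // Hash the content (in the real thing this would be the encrypted blob).
        sum := sha256.Sum256(data)
        _, err = db.Exec(`INSERT OR REPLACE INTO files(path, hash, mtime) VALUES(?, ?, ?)`,
            path, hex.EncodeToString(sum[:]), info.ModTime().Unix())
        if err != nil {
            log.Fatal(err)
        }
    }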

Encrypted blobs would be offered for upload to a peer-to-peer server (in small setups it could be the same machine) and accepted if they were unique. If a blob wasn't unique, the client would simply be subscribed to it.
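Roughly the offer/subscribe handshake I have in mind, with the network layer left out and all names made up:

    package main

    import "fmt"

    // BlobStore tracks which encrypted blobs are stored and which clients
    // reference ("subscribe to") each blob.
    type BlobStore struct {
        blobs       map[string][]byte   // blob hash -> encrypted bytes
        subscribers map[string][]string // blob hash -> client IDs
    }

    func NewBlobStore() *BlobStore {
        return &BlobStore{
            blobs:       make(map[string][]byte),
            subscribers: make(map[string][]string),
        }
    }

    // Offer is the client's "I have blob X" message. The server asks for the
    // upload only if the blob is new; otherwise the client is just subscribed.
    func (s *BlobStore) Offer(clientID, hash string) (needUpload bool) {
        s.subscribers[hash] = append(s.subscribers[hash], clientID)
        _, exists := s.blobs[hash]
        return !exists
    }

    // Upload stores the encrypted blob after a positive Offer response.
    func (s *BlobStore) Upload(hash string, encrypted []byte) {
        s.blobs[hash] = encrypted
    }

    func main() {
        s := NewBlobStore()
        if s.Offer("client-a", "abc123") {
            s.Upload("abc123", []byte("ciphertext"))
        }
        // A second client offering the same hash is deduplicated: no upload needed.
        fmt.Println("needs upload:", s.Offer("client-b", "abc123")) // false
    }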

The server would then chunk up 1 GB or so of blobs, run Reed-Solomon to add the desired level of redundancy, and start trading those chunks with peers. No peer would know whether you trusted them; you might well set your server to only "trust" peers once they have shown 95% uptime and 95% reliability when challenged over a month. Reputation would be tracked for the peers you trade with, but only directly, much like BitTorrent's tit-for-tat strategy.
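The encoding step itself is straightforward with klauspost/reedsolomon; something like the following, where the 10 data / 3 parity split is just an example, not a decided default:

    package main

    import (
        "crypto/rand"
        "log"

        "github.com/klauspost/reedsolomon"
    )

    func main() {
        const dataShards, parityShards = 10, 3

        enc, err := reedsolomon.New(dataShards, parityShards)
        if err != nil {
            log.Fatal(err)
        }

        // Stand-in for ~1 GB of batched encrypted blobs (much smaller here).
        batch := make([]byte, 10<<20)
        if _, err := rand.Read(batch); err != nil {
            log.Fatal(err)
        }

        // Split the batch into data shards, then compute the parity shards.
        shards, err := enc.Split(batch)
        if err != nil {
            log.Fatal(err)
        }
        if err := enc.Encode(shards); err != nil {
            log.Fatal(err)
        }

        // Each of the 13 shards can now be handed to a different peer; any 10
        // of them are enough to reconstruct the original batch.
        ok, err := enc.Verify(shards)
        if err != nil || !ok {
            log.Fatal("parity verification failed")
        }
        log.Printf("produced %d shards of %d bytes each", len(shards), len(shards[0]))
    }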

The P2P server would accept uploads from any trusted client and work to ensure the configured replication level across any peers it could find.

The peer challenges would be something like asking for the SHA-256 of a range of bytes of a blob the peer stored for you, maybe 100 random challenges every few hours.
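The responder's side of a challenge would be little more than hashing a slice of the blob it holds, something like this (names made up):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
    )

    // respondToChallenge returns the SHA-256 of bytes [offset, offset+length)
    // of the stored blob, which the challenger compares against its own copy.
    func respondToChallenge(blobPath string, offset, length int64) ([32]byte, error) {
        f, err := os.Open(blobPath)
        if err != nil {
            return [32]byte{}, err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, io.NewSectionReader(f, offset, length)); err != nil {
            return [32]byte{}, err
        }

        var sum [32]byte
        copy(sum[:], h.Sum(nil))
        return sum, nil
    }

    func main() {
        // The challenger picks random (offset, length) pairs it can verify locally.
        digest, err := respondToChallenge("blob.bin", 4096, 1024)
        if err != nil {
            fmt.Println("challenge failed:", err)
            return
        }
        fmt.Printf("challenge response: %x\n", digest)
    }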

The general goal is something that would "just work": it creates keys, nags you to print them out, and has sane defaults for everything.


Borg is great as well. I use both. The main difference to me is that Restic supports S3 natively but doesn't have compression (last time I checked).



