Logs at retry level might be enough, and are likely postable. If you prefer, verbose level is more informative without a completely overwhelming size (like profiling would produce), but you'd have to sanitize it before posting.

I think I will just redo the entire thing and hope it goes well this time. I know, but I don't think I have another option.

Also, before I start it, is there a way to know which file was being processed at the moment an error occurred? I've been wondering whether it has been happening on one specific file or not.

Trying to debug something that takes so long to fail is certainly awkward, and I don't know where it'll land… typical home network) with no boxes in the middle to time out? Trying to read the current upload back into a DB might be interesting, to see if it even starts, fails soon, or takes another two weeks.

Is there anything else I need to do before restarting the task from scratch? Any flags I need to add to the task? I have the following flags on it at the moment:

You mean I should change the blocksize on the task to 5 MB so there will be fewer database entries? That sounds like a good plan, actually.

> Going that route should probably bump blocksize up from its default 100KB. Especially given video, which doesn't deduplicate well, something like 5 MB blocksize might be reasonable. Choosing a large value will cause a larger overhead on file changes.

Thank you for the writeup, but reading this and considering how valuable the source data is, how important it will be to have a proper backup solution that I can rely on for years to come, and how I have absolutely no idea where to start with the described process, I think I will just redo the entire thing and hope it goes well this time.
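For reference, the suggestions above (larger blocksize, retry- or verbose-level logging) can be combined when recreating the job. This is only a sketch of a Duplicati command-line invocation; the destination URL and source path are placeholders, and note that blocksize can only be set when the backup is first created, which is another reason to decide on it before restarting from scratch:

```shell
# Hypothetical example, assuming the duplicati-cli launcher; substitute
# your real destination URL and source folder.
duplicati-cli backup "ssh://backup.example.com/videos" "/home/user/videos" \
  --blocksize=5MB \               # default is 100KB; larger = fewer DB entries
  --log-file=/var/log/duplicati-videos.log \
  --log-file-log-level=Retry      # use Verbose for more detail (sanitize before posting)
```

The log file should then record each file as it is processed, which would also answer the question of whether the failures keep hitting one specific file.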