I'm backing up a folder we use for archival purposes. It goes to S3, and so far there are about 400k files in S3. We want to keep the past 7 days locally in case we need to inspect any of these files; the rest are moved to S3 (deleted locally). The profile is set up and doing that just fine, however it must rescan the entire S3 bucket every time. File and folder decisions are both set to Source overwrites S3 (with "move the file" enabled) and, if it exists on Source but not on S3, to Move the file. Everything else is set to Do nothing. From the docs, since I'm using a move from Source to Destination, Fast Backup cannot be used.

Basically I'm asking for an "only scan Source against Destination" type option for users with massive numbers of Destination files. With this quantity of files, scanning only the objects that are in Source against the Destination would be significantly faster (11k files to scan in S3 versus all 400k).

Would changing "What to do if the same file has been changed on Source and Amazon S3" to Do nothing fix it? The local files will always be new and never changed. The only alternative I can think of is to make this purely a backup operation so Fast Backup can be enabled, with a PowerShell script in the Post-Run to delete the files. That doesn't seem ideal, since it may delete a file that failed to copy.

Sorry, the option "What to do if the same file has been changed on Source and Amazon S3" does not improve the scan time of a profile run. Please increase the "Number of scanning threads to use (too many will degrade performance)" setting on the Modify profile > Expert > Cloud > Advanced page and see if that improves performance. As stated in the help file, this option can significantly improve performance, but if too many threads are used it can also significantly reduce performance (by overloading the network and CPU, and increasing memory usage).

The "Number of scanning threads to use" setting is only editable if I disable "Retrieve a list of all the files and folders then filter (faster in most situations)". The docs say: "If your cloud storage system has tens of thousands of files on it then you may need to disable this option as SyncBackPro may use a large amount of CPU time retrieving the list", but they don't say whether disabling it would actually improve speed.