We are mostly ready to start the review process for this change, but there are some last quirks we would like to discuss with you first.

When running test_all -backends hidrive there are currently two tests that are failing. Both fail for the same reason, which is highlighted by the comment above TestFsPutError: "TestFsPutError tests uploading a file where there is an error. It makes sure that aborting a file half way through does not create a file on the remote."

This is not a condition that the API can satisfy for files of arbitrary size. Specifically, files up to 2 GiB can be uploaded using a single request; files larger than this need to be uploaded in multiple parts, and there is no transactional mechanism for uploading a file in multiple parts. Currently the hidrive-backend always creates an empty file as a first step, even if the file could be uploaded using a single request. This is done to simplify the code and program flow, but it can easily be changed, which would be enough for the tests to pass. However, the condition TestFsPutError is supposed to check would still not hold for files of every size, just for smaller ones. If it suffices that the tests pass for small files, that change can be made without too much hassle. If the condition should hold for files of any size, a system resembling the Chunker-backend would be a possible solution.

I will also add a few notes to the code in its current state, highlighting some design choices that you may disagree with and that may therefore need to change.

Note: The following concerns symbolic links that can be natively stored with HiDrive, not the .rclonelink-files that rclone uses to encode and store symbolic links. HiDrive can natively store symbolic links (symlinks) and has a special type for them, but they can not be up- or downloaded directly using the API. In general, the operations the API can execute directly on symlinks are limited to three. This means that symlinks can not be handled by rclone as regular files. At the moment the hidrive-backend simply ignores any native symlinks on List(), meaning that users can not interact with existing native symlinks in HiDrive accounts using rclone. As a consequence, rclone delete --rmdirs will not be able to remove directories containing symlinks; they can only be deleted via rclone purge. As noted, the hidrive-backend is not able to create native symlinks either way, but it may still be confusing for users if they can not interact with existing native symlinks. The preferred solution would be for backends to be able to support native symlinks directly. Otherwise, a solution that comes to mind is to implement a special fs.Object-type in the hidrive-backend that only supports the three API-operations mentioned above.

I have some time to review and merge this now - sorry for the delay. As a first step can you rebase it on master and fix the conflict - it shouldn't be hard.

Back to the discussion: this has been resolved.
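To illustrate the upload behaviour discussed above, here is a minimal, self-contained sketch of the decision between the single-request path and the multi-part path. Everything in it (the hidriveClient type, the helper methods, and how the 2 GiB threshold is applied) is an assumption for illustration only, not the actual hidrive-backend code; it just shows how skipping the empty placeholder file for small uploads would make an aborted small upload leave nothing on the remote.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
)

// Hypothetical threshold: files up to 2 GiB fit into a single request.
const singleRequestLimit = 2 << 30

// hidriveClient is a stand-in for the backend; all methods are stubs.
type hidriveClient struct{}

// uploadSingleRequest stands in for sending the whole payload in one request.
// If it fails, nothing has been created on the remote.
func (c *hidriveClient) uploadSingleRequest(ctx context.Context, in io.Reader, remote string, size int64) error {
	fmt.Printf("uploading %q (%d bytes) in one request\n", remote, size)
	return nil
}

// createEmptyFile stands in for the "create placeholder first" step that the
// backend currently always performs.
func (c *hidriveClient) createEmptyFile(ctx context.Context, remote string) error {
	fmt.Printf("creating empty placeholder file %q\n", remote)
	return nil
}

// appendChunks stands in for uploading the payload in multiple parts; since
// the API has no transaction, a failure here can leave a partial file behind.
func (c *hidriveClient) appendChunks(ctx context.Context, in io.Reader, remote string) error {
	fmt.Printf("appending chunks to %q\n", remote)
	return nil
}

// put only creates a placeholder when the multi-part path is unavoidable, so
// small uploads satisfy the condition TestFsPutError checks, while large
// uploads keep the current create-then-append behaviour.
func (c *hidriveClient) put(ctx context.Context, in io.Reader, remote string, size int64) error {
	if size >= 0 && size <= singleRequestLimit {
		return c.uploadSingleRequest(ctx, in, remote, size)
	}
	if err := c.createEmptyFile(ctx, remote); err != nil {
		return err
	}
	return c.appendChunks(ctx, in, remote)
}

func main() {
	c := &hidriveClient{}
	data := []byte("example payload")
	_ = c.put(context.Background(), bytes.NewReader(data), "small.txt", int64(len(data)))
}
```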
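The idea of a restricted object type for native symlinks could look roughly like the following sketch. A real implementation would have to satisfy rclone's fs.Object interface; the reduced linkObject type, the errLinkUnsupported error, and the chosen set of supported operations below are assumptions used only to show which calls would be rejected (up-/download) and which could be wired to the API calls that do work (for example removal), so that purging directories containing symlinks becomes possible.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"time"
)

// errLinkUnsupported is returned for operations the HiDrive API cannot
// perform directly on a native symlink, such as downloading its content.
var errLinkUnsupported = errors.New("operation not supported on a native HiDrive symlink")

// linkObject represents a native symlink found during listing.
type linkObject struct {
	remote  string
	modTime time.Time
}

func (o *linkObject) Remote() string                        { return o.remote }
func (o *linkObject) ModTime(ctx context.Context) time.Time { return o.modTime }
func (o *linkObject) Size() int64                           { return 0 } // no downloadable content

// Open and Update are rejected: the API can not up- or download symlinks.
func (o *linkObject) Open(ctx context.Context) (io.ReadCloser, error) {
	return nil, errLinkUnsupported
}

func (o *linkObject) Update(ctx context.Context, in io.Reader) error {
	return errLinkUnsupported
}

// Remove could be wired to the deletion call the API does offer, so commands
// like purge can clean up directories that contain native symlinks.
func (o *linkObject) Remove(ctx context.Context) error {
	fmt.Printf("deleting symlink %q via the API\n", o.remote)
	return nil
}

func main() {
	link := &linkObject{remote: "dir/example-link", modTime: time.Now()}
	if _, err := link.Open(context.Background()); err != nil {
		fmt.Println("open:", err)
	}
	_ = link.Remove(context.Background())
}
```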