Hi Matt,
Thanks for such a great product. I've used it for the last couple of years and find it excellent.
I have a suggestion about the way the software allocates workers. At the moment, I can choose to use multi-part downloads for files that exceed a set threshold. I've worked out how many threads I need to maximise my bandwidth, and that works well.
However, smaller files don't saturate my bandwidth, so I use multiple workers to achieve the same result via concurrent file downloads.
The challenge comes with a directory containing a mix of large and small files. I can end up with multiple workers each running their own multi-part downloads, which opens too many connections to be efficient.
I'd like to suggest treating "workers" as a single pool shared across whole-file and multi-part downloads. When a large file is reached, each worker would finish the smaller file it is currently downloading and then join the multi-part effort; when that finishes, they move on to other files. The net result is a consistent total number of connections regardless of purpose.
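To illustrate what I mean, here's a rough Go sketch of the pooled approach. The names (downloadWhole, downloadPart, the threshold) are just placeholders, not your actual internals; the point is that one fixed pool serves both kinds of task:

```go
package main

import (
	"fmt"
	"sync"
)

// task is either a whole small file or one part of a large file.
type task struct {
	file string
	part int // -1 means "download the whole file"
}

// Placeholder transfer functions, standing in for the tool's real ones.
func downloadWhole(f string)       { fmt.Println("whole:", f) }
func downloadPart(f string, p int) { fmt.Printf("part %d of %s\n", p, f) }

func main() {
	const workers = 4       // one connection per worker; the total never exceeds this
	const partThreshold = 3 // files larger than this (in arbitrary units) get split

	files := map[string]int{"small-a": 1, "small-b": 2, "big": 9}

	tasks := make(chan task)
	var wg sync.WaitGroup

	// A single pool serves both whole-file and multi-part tasks, so the
	// connection count stays fixed regardless of the large/small mix.
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				if t.part < 0 {
					downloadWhole(t.file)
				} else {
					downloadPart(t.file, t.part)
				}
			}
		}()
	}

	for name, size := range files {
		if size > partThreshold {
			// A large file becomes many part tasks that free workers pick up.
			for p := 0; p < size; p++ {
				tasks <- task{name, p}
			}
		} else {
			tasks <- task{name, -1} // a small file is a single task
		}
	}
	close(tasks)
	wg.Wait()
}
```

With four workers, the big file's parts simply queue up behind the small files, and workers flow into the multi-part effort as they free up.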
Could this change be implemented, please?