Downloading a large file causes the process to be killed when the server machine does not have much free memory.
Steps to reproduce:
- Upload a big file (~100 MB)
- Try downloading it after you have it uploaded
- Monitor RAM usage at the same time
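The failure mode can be reproduced outside CherryPy with a handler body that reads the whole file at once. A minimal sketch (the function name is hypothetical, not from the project's code):

```python
def naive_download(path: str) -> bytes:
    """Read the entire file into memory before returning it.

    For a ~100 MB upload this needs ~100 MB of RAM per concurrent
    download, which is what gets the process OOM-killed on a small box.
    """
    with open(path, "rb") as f:
        return f.read()  # whole file buffered in one bytes object
```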
I think we need to do something in JavaScript to split the data into sections and then load them piece by piece. Loading smaller chunks uses less RAM on both the server and the client side. I did the same thing in one of my previous projects and it worked.
I think you have misunderstood. The client doesn't have any problem downloading a file, even a big one. The problem is on the server side: the server reads the file into a buffer and returns it to the client, so I think the issue is with the buffer size or something like that.
I have checked the CherryPy source code, and it seems the buffer on the CherryPy side is only 64 KiB (see https://github.com/cherrypy/cherrypy/blob/master/cherrypy/lib/__init__.py#L47).
I found that [server.py#L262](/src/master/server.py#L262) causes the entire file to be loaded into memory part by part, instead of streaming it to the client in chunks!
I put a breakpoint in the [file_generator function of cherrypy.lib](https://github.com/cherrypy/cherrypy/blob/master/cherrypy/lib/__init__.py#L64) and saw that `input.read(...)` was called several times before the download even started.
I also tried using `cherrypy.lib.static.serve_file(...)` and it had the same problem.
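For reference, the streaming pattern that a chunked file generator like CherryPy's implements looks roughly like this (a sketch, assuming the 64 KiB default from the linked source; the function name here is my own):

```python
def chunked_reader(fileobj, chunk_size=64 * 1024):
    """Yield the file in fixed-size chunks.

    Only one chunk is held in memory at a time, so peak RAM stays
    near chunk_size regardless of the file's total size.
    """
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:  # b"" signals end of file
            break
        yield chunk
```

Returning a generator like this as the response body is what allows streaming; many `input.read(...)` calls before any bytes reach the client would suggest something is draining the generator eagerly, rather than the chunk size being the problem.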
Maybe it's better to use Nginx or another HTTP server such as Apache to handle downloads, as the docs recommend [here](https://docs.cherrypy.org/en/3.2.6/progguide/files/downloading.html).
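With that setup the Python handler only emits a redirect header instead of the file body. A hedged sketch (the helper name and the `/protected/...` internal URI are assumptions; `X-Accel-Redirect` is Nginx's internal-redirect mechanism, and Apache's equivalent is `X-Sendfile` via mod_xsendfile):

```python
def offload_headers(internal_uri: str, filename: str) -> dict:
    """Build response headers that tell an Nginx front-end to serve
    the file itself, so the Python process never touches the bytes."""
    return {
        "X-Accel-Redirect": internal_uri,  # e.g. "/protected/uploads/big.bin"
        "Content-Disposition": f'attachment; filename="{filename}"',
    }
```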
Read [this](https://stackoverflow.com/questions/26227727/large-file-downloads-in-cherrypy) too.
https://github.com/cherrypy/cherrypy/issues/1147