When transferring a large file from a server to a user, there is always the potential for a connection snafu. If the user has been connected for 20 minutes with 3 minutes left, it can be super frustrating, as we all know!
I know some download tools can just resume where they left off. I assume they do this by checking what is already on the user's hard drive and then requesting that the server pick up at some specified byte?
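Here's my mental model of that as a minimal sketch, assuming an HTTP server that honors `Range` headers (the function name and chunk size are my own placeholders, and it doesn't handle edge cases like a 416 response):

```python
import os
import urllib.request

def resume_download(url, dest_path, chunk_size=8192):
    # How many bytes do we already have on disk?
    offset = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0
    req = urllib.request.Request(url)
    if offset:
        # Ask the server to pick up at the byte where we stopped.
        req.add_header("Range", f"bytes={offset}-")
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the Range header;
        # a plain 200 means it is resending the whole file, so start over.
        mode = "ab" if resp.status == 206 else "wb"
        with open(dest_path, mode) as out:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                out.write(chunk)
```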
I would like to implement something like this, but using blocks of, say, 2 KB. I suppose it would be possible to use an ACK for each block, but it seems like that might slow things down. Speed is important here.
I suspect it would be fairly easy to count "blocks" and send the server a request for the remaining blocks.
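To make that concrete, here is a rough sketch of the client side under my own made-up convention (`BLOCK_SIZE`, `completed_blocks`, and `resume_point` are all placeholders, not an existing protocol):

```python
import os

BLOCK_SIZE = 2048  # the 2 KB block size mentioned above

def completed_blocks(path):
    # Count whole blocks already on disk; a trailing partial block
    # doesn't count and will simply be re-fetched.
    size = os.path.getsize(path) if os.path.exists(path) else 0
    return size // BLOCK_SIZE

def resume_point(path):
    # Drop any partial trailing block, then report the first block
    # the server should send.
    n = completed_blocks(path)
    with open(path, "ab") as f:
        f.truncate(n * BLOCK_SIZE)
    return n
```

The idea being that a single "resume from block N" request replaces per-block ACKs, so there is no extra round trip per block.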
Has anyone confronted this before and used a better scheme?