Suboptimal link retrieval
Status: Pre-Alpha
Brought to you by:
esselo
Link retrieval is currently handled by a naive back-quoted wget call. While the
server is fetching a large document, the database stays locked; the client times
out, sends a new request, and that request fails as well. To work around this,
the server kills every running wget process whenever a new request arrives, which
makes the script very hostile towards other applications using wget on the same
host. This should be changed so that the server times out at the same time as the
client, or at least recognises and kills only its own wget processes.
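Both suggested fixes can be combined by spawning wget as a tracked child process
and giving it a deadline matching the client's timeout. The sketch below is a
hypothetical illustration, not the project's actual code: the timeout value and
the fetch wrapper are assumptions, but the key point is that the server remembers
the PID of its own wget and kills only that process, never unrelated ones.

```python
import subprocess

def run_with_timeout(cmd, timeout):
    """Run cmd; if it outlives the timeout, kill only this child process
    (instead of e.g. `killall wget`, which hits unrelated processes).
    Returns the exit code, or None if the command was killed."""
    proc = subprocess.Popen(cmd)
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()   # terminates our own child only, identified by its PID
        proc.wait()
        return None
    return proc.returncode

def fetch(url, dest, timeout=30):
    """Fetch url with wget into dest. The timeout (hypothetical value here)
    should match the client's request timeout, so the server gives up before
    the client retries and the database lock is released in time."""
    code = run_with_timeout(
        ["wget", "-q", "-O", dest, "--tries=1", url],
        timeout,
    )
    return code == 0
```

With this approach a new request never needs to kill anything: the previous
retrieval either finishes or is reaped by its own deadline before the client's
retry arrives.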