From: Adam Carter
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] {OT} LWP::UserAgent slows website
Date: Sat, 9 Feb 2013 21:36:49 +1100
In-Reply-To: <5115BB67.9010307@gmail.com>
References: <5113DA25.7060408@gmail.com> <5115BB67.9010307@gmail.com>
> There are several things you can do to improve the state of things.
> The first and foremost is to add caching in front of the server, using
> an accelerator proxy (i.e. squid running in accelerator mode). In
> this way, you have a program which receives the user's request, checks
> to see if it's a request that it already has a response for, checks
> whether that response is still valid, and then checks to see whether
> or not it's permitted to respond on the server's behalf...almost
> entirely without bothering the main web server. This process is far,
> far, far faster than having the request hit the serving application's
> main code.

I was under the impression that Apache is coded sensibly enough to handle
incoming requests at least as well as Squid would. Agree with everything
else tho.

The OP should look into what's required on the back end to process those 6
requests, as it superficially appears that a very small number of requests
is generating a huge amount of work, and that means the site would be easy
to DoS.
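For anyone wanting to try the accelerator setup described above, a minimal squid.conf sketch might look roughly like this — assuming (hypothetically) the origin Apache has been moved to 127.0.0.1:8080 and the site is www.example.com; adjust names and ports for the real deployment:

```
# Listen on port 80 in accelerator (reverse proxy) mode
http_port 80 accel defaultsite=www.example.com

# Forward cache misses to the origin web server
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=origin

# Only accelerate requests for our own site
acl our_site dstdomain www.example.com
http_access allow our_site
http_access deny all
cache_peer_access origin allow our_site
```

How much this helps depends on the backend sending sane Cache-Control/Expires headers; if every response is marked uncacheable, the proxy still has to hit the application for each request.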