From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5115BB67.9010307@gmail.com>
Date: Fri, 08 Feb 2013 21:58:47 -0500
From: Michael Mol
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130205 Thunderbird/17.0.2
List-Id: Gentoo Linux mail
Reply-to: gentoo-user@lists.gentoo.org
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] {OT} LWP::UserAgent slows website
References: <5113DA25.7060408@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 02/08/2013 09:39 PM, Grant wrote:
>>>> A little more information would help, like what webserver,
>>>> what kind of requests, etc
>>>>
>>>> -Kevin
>>>
>>> It's apache and the requests/responses are XML. I know this is
>>> pathetically little information with which to diagnose the
>>> problem. I'm just wondering if there is a tool or method
>>> that's good to diagnose things of this nature.
>>
>> The problems are server-side, not necessarily client-side. Your
>> optimizations are going to need to be performed there.
>
> Are you saying the problem may lie with the server to which I was
> making the request?

Yes.

> The responses all come back successfully within a few seconds.
> Can you give me a really general description of the sort of problem
> that could behave like this?

Your server is just a single computer running multiple processes. Each request from a user (be it you or someone else) requires a certain amount of resources while it's executing.
If there aren't enough resources, some requests have to wait until enough others have finished and freed their resources up.

To really simplify things, let's say your server has a single CPU core, the queries made against it only consume CPU, not disk, and each query you make requires 3s of CPU time. If you make a query, the server will spend 3s thinking before it spits a result back to you. During this time, it can't think about anything else... if it does, the server will take as much longer to respond to you as it spends thinking about other things.

Let's say you make two queries at the same time. Each requires 3s of CPU time, so you'll need a grand total of 6s to get all your results back. That's fine; you're expecting this.

Now let's say you make a query, and someone else makes a query. Each query takes 3s of CPU time. Since the server has 6s worth of work to do, all the users will have their responses by the end of that 6s. Depending on how a variety of factors come into play, user A might see his query come back at the end of 3s, and user B might see his come back at the end of 6s. Or it might be reversed. Or both users might not see their results until the end of that 6s. It's really not very predictable.

The more queries you make, the more work you give the server. If the server has to spend a few seconds' worth of resources on you, that's a few seconds' worth of resources unavailable to other users. A few seconds for a query against a web server is actually a huge amount of time... a well-tuned application on a well-tuned webserver backed by a well-tuned database should probably respond to a query in under 50ms! That matters because there are often many, many users making queries, and each user tends to make many queries at the same time.

There are several things you can do to improve the state of things.

The first and foremost is to add caching in front of the server, using an accelerator proxy (e.g. squid running in accelerator mode). This way, you have a program which receives the user's request, checks whether it already has a response for it, checks whether that response is still valid, and checks whether it's permitted to respond on the server's behalf... almost entirely without bothering the main web server. That is far, far faster than having the request hit the serving application's main code.

The second thing is to check the web server configuration itself. Does it have enough spare request handlers available? Does it have too many? If there's enough CPU and RAM left over to launch a few more request handlers when the server is under heavy load, it might be a good idea to allow it to do just that.

The third thing is to tune the database itself. MySQL in particular ships with horrible default settings that typically limit its performance to far below what the hardware you'd normally find it on can deliver. Tuning a database requires knowledge of how the database engine works; there's an entire profession dedicated to doing that right...

The fourth thing is to add caching to the application, using things like memcachedb. This may require modifying the application... though if the application has support already, then, well, great.

If that's still not enough, there are more things you can do, but you should probably start considering throwing more hardware at the problem...
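On the "spare request handlers" point: for Apache 2.2-era prefork, the relevant knobs look roughly like the fragment below. The numbers are placeholders, not recommendations; the right values depend entirely on how much RAM each httpd process eats on your box.

```
<IfModule mpm_prefork_module>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          150
    MaxRequestsPerChild 4000
</IfModule>
```

Raising MaxClients without the RAM to back it just trades queuing for swapping, which is worse.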
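The single-core arithmetic above can be sketched as a toy model. This is Python purely for illustration; the 3-second per-query cost and strict back-to-back (FIFO) execution are the simplifying assumptions from the example, not a model of a real scheduler:

```python
# Toy model of a single-core server handling CPU-bound queries.
# With one core and back-to-back execution, completion times just accumulate.

def completion_times(cpu_costs):
    """Return the wall-clock time at which each query finishes."""
    finished = []
    clock = 0.0
    for cost in cpu_costs:  # one core: each query runs after the previous one
        clock += cost
        finished.append(clock)
    return finished

# Two of your own 3s queries: everything is back by the 6s mark.
print(completion_times([3, 3]))     # [3.0, 6.0]

# Your query plus two other users' queries: somebody is waiting until 9s,
# and which user that is depends on arrival order.
print(completion_times([3, 3, 3]))  # [3.0, 6.0, 9.0]
```

The point of the toy: adding a query never makes anyone's result come back sooner; it only decides who absorbs the wait.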
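The fourth suggestion, caching inside the application, boils down to: look the answer up by key before doing the expensive work, and only compute (and store) on a miss. A minimal in-process sketch in Python, with a plain dict standing in for the cache daemon and a made-up `expensive_query` standing in for the slow work:

```python
import time

cache = {}   # stands in for memcached/memcachedb: key -> (expires_at, value)
TTL = 60     # seconds a cached response stays valid

def expensive_query(key):
    """Placeholder for the slow, server-hammering work."""
    return "result-for-" + key

def cached_query(key):
    hit = cache.get(key)
    if hit is not None and hit[0] > time.time():
        return hit[1]                        # cache hit: no server work at all
    value = expensive_query(key)             # cache miss: do the work once...
    cache[key] = (time.time() + TTL, value)  # ...and remember it for TTL seconds
    return value

print(cached_query("user=42"))  # computed on the first call
print(cached_query("user=42"))  # served from cache on the second
```

The TTL plays the same role as the accelerator proxy's validity check: a stale answer must not be served forever, so every cached entry carries an expiry.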
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.19 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJRFbtnAAoJED5TcEBdxYwQNiAH/18rSripzwl6DjK/lePRl9GI
LjOqarZ5XmW7lhfWwLajQfbfYXCcA6iEmrlxRZwIm039zIuvcuAIC1dLW64IYeyR
OMWppXTDo4dqpOYusPIcOVFvBECJGdU59ONOf2iHR5qUTwi2+Dip1DY5nFZLQjvD
zuDE418npqzm2ENaFpGM5SWAs7r/CvE4TiRWaZ2wZrHZrf36cXeT2miK/SFm33ZI
9rCqo8MKj8tw36i3M0lu9JvTTWPgbAJ43AKDxyYsEa3DZzbiBS9GK5pHl0XClVQK
by6uhmlxcdldcddu8vqPoLv45gfS2EYO3Oc0rZ9pAVOq5kJUlsmzSEq3NWcymEA=
=vSDC
-----END PGP SIGNATURE-----