From: Rich Freeman
Date: Mon, 27 Mar 2023 07:33:13 -0400
Subject: Re: [gentoo-user] PCIe x1 or PCIe x4 SATA controller card
To: gentoo-user@lists.gentoo.org

On Mon, Mar 27, 2023 at 5:30 AM Wols Lists wrote:
>
> On 27/03/2023 01:18, Dale wrote:
> > Thanks for any light you can shed on this.  Googling just leads to a ton
> > of confusion.  What's true 6 months ago is wrong today.  :/  It's hard
> > to tell what still applies.
>
> Well, back in the days of the megahurtz wars, a higher clock speed
> allegedly meant a faster CPU. Now they all run about 5GHz, and anything
> faster would break the speed of light ... so how they do it nowadays I
> really don't know ...
Effective instructions per clock (IPC) and increasing core counts are
some of the big ones. I say "effective" because IPC is a somewhat
synthetic, idealized measure, and there are MANY bottlenecks in a CPU.
Efficiency improvements that let a CPU boost for longer or sustain a
higher clock speed also help.

Getting more done in a clock cycle can happen in many ways:

1. Actually reducing the number of cycles needed to complete an
instruction at the most elementary level.
2. Using speculative execution to increase the number of instructions
run in parallel.
3. Improving branch prediction to maximize your speculative execution
budget (there's a small sketch of this at the end of the mail).
4. Reducing the cost of prediction errors by shortening pipelines, etc.
5. Better cache to reduce wait time.
6. Better internal IO to reduce wait time.
7. Better external IO (especially RAM) to reduce wait time.

Those are just the ones I've thought of offhand. I'm sure there's tons
of info out there on which ones matter most in practice, and things I
haven't thought of.

-- 
Rich
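
P.S. A rough C sketch of item 3, just to make it concrete. This isn't a
proper benchmark and the array size/threshold are made up; it's the
classic "sum the elements above a threshold" loop. In random order the
branch is basically a coin flip, so the predictor misses a lot; after
sorting, the same loop over the same data becomes highly predictable
and usually runs noticeably faster (assuming the compiler doesn't turn
the branch into a conditional move, so build without heavy
optimization, e.g. -O1).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)   /* ~1M ints, illustrative size */

/* Sum only the elements above a threshold; the if() is the branch
 * the predictor has to guess on every iteration. */
static long sum_above(const int *a, int n, int threshold)
{
    long sum = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] > threshold)
            sum += a[i];
    }
    return sum;
}

static int cmp_int(const void *x, const void *y)
{
    return *(const int *)x - *(const int *)y;
}

int main(void)
{
    int *a = malloc(N * sizeof *a);
    if (!a)
        return 1;

    srand(12345);
    for (int i = 0; i < N; i++)
        a[i] = rand() % 256;

    /* Pass 1: random order, branch outcome is hard to predict. */
    clock_t t0 = clock();
    long s1 = sum_above(a, N, 128);
    clock_t t1 = clock();

    /* Pass 2: same data sorted, so the branch becomes predictable
     * (all "false" first, then all "true"). */
    qsort(a, N, sizeof *a, cmp_int);
    clock_t t2 = clock();
    long s2 = sum_above(a, N, 128);
    clock_t t3 = clock();

    printf("random: sum=%ld in %.3fs, sorted: sum=%ld in %.3fs\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);

    free(a);
    return 0;
}

Obviously a real measurement would use a proper benchmark harness, but
it gets the idea across.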