public inbox for gentoo-project@lists.gentoo.org
From: John Helmert III <ajak@gentoo.org>
To: gentoo-project@lists.gentoo.org
Subject: Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
Date: Mon, 1 Jan 2024 19:16:32 -0800
Message-ID: <ZZOAEPo4Tl8Z1ilL@gentoo.org>
In-Reply-To: <robbat2-20240101T230455-356961672Z@orbis-terrarum.net>

On Tue, Jan 02, 2024 at 01:02:17AM +0000, Robin H. Johnson wrote:
> On Mon, Jan 01, 2024 at 07:14:47PM +0100, Ulrich Mueller wrote:
> > Two weeks from now, the Council will meet again. This is the time
> > to raise and prepare items that the Council should put on the agenda
> > to discuss or vote on.
> > 
> > Please respond to this message with agenda items. Do not hesitate to
> > repeat your agenda item here with a pointer if you previously
> > suggested one (since the last meeting).
> > 
> > The agenda for the meeting will be sent out on Sunday 2024-01-07.
> 
> Agenda items from Infra:
> 
> --------------------------
> 
> # Refresh Hetzner servers
> dilfridge pointed out that demeter.amd64.dev.gentoo.org is being used to
> build the new binary packages, and is almost at capacity. It's Hetzner's
> AX51-NVME model, with 64GB RAM & 2x1TB NVME, presently costing ~EUR58/mo.
> 
> Checking the other Hetzner servers, the oldest node is oystercatcher,
> which is a PX90-SSD, w/ 64GB RAM & 2x240G SATA SSD: costing ~EUR100/mo.
> 
> I'd like to move demeter.amd64.dev.gentoo.org's workload to a
> higher-spec server (new CPU generation, double core count, double RAM,
> double storage):
> 
> Hetzner AX102, or an equivalent specification from Hetzner's Server
> Auction models, at a cost of EUR104/mo plus EUR39 setup.
> 
> At that point, replace oystercatcher with either the old demeter
> instance (re-used) or a better deal from Hetzner's Server Auction.
> 
> (The other two servers at Hetzner are calonectris & wagtail, which
> handle logging & failover for Git; they are newer than oystercatcher,
> and one of them is already sponsored by Hetzner.)
> 
> ## Net financial impact:
> - demeter new AX102: +EUR104/mo, +EUR39 setup.
> - oystercatcher->demeter content swap: EUR0/mo change.
> - decom old oystercatcher hardware: -EUR100/mo.
> = Net: +EUR4/mo, plus a EUR39 one-time charge.
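
A quick sanity check of that net figure, as a minimal Python sketch
(the EUR amounts are copied straight from the list above; the one-time
setup fee is kept separate from the monthly delta):

    # Monthly delta from the line items above (EUR/mo).
    new_demeter_ax102 = +104    # new AX102 for demeter
    oystercatcher_swap = 0      # content swap, no price change
    decom_oystercatcher = -100  # old PX90-SSD decommissioned

    monthly_delta = (new_demeter_ax102 + oystercatcher_swap
                     + decom_oystercatcher)
    one_time_setup = 39         # AX102 setup fee, charged once
    print(f"net: EUR{monthly_delta}/mo, EUR{one_time_setup} one-time")
    # -> net: EUR4/mo, EUR39 one-time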
> 
> --------------------------
> 
> # Add hardware for ONLINE historical distfile archive 
> Bug #834712 is a draft proposal to add an ONLINE historical distfile
> archive.
> 
> A number of community members have collected distfiles over Gentoo's
> history, but no single online copy exists (e.g. my own archives live
> on offline LTO tapes, as part of my personal backups).
> 
> This needs somewhere in the realm of 4-8TB of online storage, based on
> the bug research so far (wide range due to the need to verify duplicate
> files without a coherent set of checksums over a 20-year span, as well
> as excluding mirror-restricted files).
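
As an aside, a minimal sketch of what that duplicate-verification pass
could look like, assuming a plain hash-and-group approach (illustrative
only, not Gentoo's actual tooling): walk each contributed tree, hash
every file, and keep one copy per unique digest.

    # Hypothetical dedupe pass over contributed distfile trees: group
    # files by SHA-256 digest so only one copy per unique distfile
    # needs online storage. A sketch of one possible approach, not
    # actual Gentoo infra tooling.
    import hashlib
    import os
    from collections import defaultdict

    def sha256_of(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def group_by_digest(roots):
        by_digest = defaultdict(list)
        for root in roots:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    by_digest[sha256_of(path)].append(path)
        # Entries with more than one path are duplicates across trees.
        return by_digest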
> 
> The existing servers at Hetzner do not have enough storage, so a
> different hosting location is required.
> 
> The service would sit behind CDN77's CDN, in case some files
> unexpectedly turn out to be hot, to avoid bandwidth issues or slamming
> slow disks.
> 
> There would also be a periodic backup into AWS S3 Glacier, from which
> the server could be re-created if needed;

Are we already using Glacier? Glacier itself presumably isn't libre,
so I'm not sure how we should feel about it from the perspective of
social contract dependency requirements.

> however serving data from S3 directly is cost-prohibitive (a full
> restore for 8TB of data would cost USD20).

Typo? "USD20" doesn't seem prohibitive.
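
Back-of-envelope, in case it helps frame the question: under assumed
AWS list rates (which may well not be the ones used for that figure),
the per-GB bulk retrieval fee alone comes out tiny, while internet
egress dominates.

    # Rough restore-cost sketch under ASSUMED AWS list rates (both
    # per-GB figures below are assumptions, not confirmed pricing).
    size_gb = 8 * 1000                  # ~8TB archive
    bulk_retrieval_per_gb = 0.0025      # assumed Glacier bulk retrieval
    egress_per_gb = 0.09                # assumed internet egress rate

    retrieval = size_gb * bulk_retrieval_per_gb
    egress = size_gb * egress_per_gb
    print(f"retrieval ~USD{retrieval:.0f}, egress ~USD{egress:.0f}")
    # -> retrieval ~USD20, egress ~USD720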

> A stretch goal could also be hosting historical snapshots and then
> historical release media, but those are less critical than the distfiles
> themselves.
> 
> ## Option 1: Use drive slots in {killdeer,kingbird}.gentoo.org (CapEx)
> Add NVME QLC drives to one or both of the existing VM-hosting
> servers, killdeer & kingbird, at OSL.
> 
> The servers presently have 3 U.2 drives, and 8-10 drive slots [need to
> review backplane model].
> 
> Intel/Solidigm D5-P5316 QLC U.2 drives:
> E.g. 15.36TB drive $1,125USD/ea
> https://www.wiredzone.com/shop/product/10021877-intel-ssdpf2nv153tz-hard-drive-15-36tb-nvme-pcie-4-0-u-2-15mm-d5-p5316-series-8590 
> E.g. 30.72TB drive, $2,086USD/ea
> https://www.wiredzone.com/shop/product/10022217-intel-ssdpf2nv307tz-hard-drive-30-72tb-ssd-nvme-pcie-x4-gen4-u-2-15mm-d5-p5316-series-8832
> Other drive vendors exist as well, but the pricing on Intel QLCs is
> extremely good (we might also be able to find even better pricing via
> Intel employees).
> 
> Min qty would be 2x15.36T drives, 1 per server.
> Max qty would be 4x30.72T drives, 2 per server (running RAID1)
> 
> Cost range: $2,250-$8,400, plus applicable US sales taxes.
> 
> This would be a capital purchase; the depreciation would provide a tax
> deduction over a 5-year period under the US IRS MACRS rules, with
> applicable front-loaded bonus depreciation for the year placed in
> service (60% for 2024).
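
As a rough illustration of how that deduction would land (a sketch
only, not tax advice: the 60% bonus figure and the purchase-price range
are from the proposal above, while the 5-year MACRS half-year
percentages are the standard IRS table values):

    # Sketch of the deduction schedule: 60% bonus depreciation in the
    # year placed in service, remainder over the standard 5-year MACRS
    # half-year table. Illustrative only, not tax advice.
    MACRS_5YR = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]

    def schedule(cost, bonus=0.60):
        bonus_part = cost * bonus
        remaining = cost - bonus_part
        years = [remaining * pct for pct in MACRS_5YR]
        years[0] += bonus_part      # bonus is taken up front
        return years

    for cost in (2250, 8400):       # min/max drive purchase from above
        print(cost, [round(y) for y in schedule(cost)])
    # -> 2250 [1530, 288, 173, 104, 104, 52]
    # -> 8400 [5712, 1075, 645, 387, 387, 194]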
> 
> ## Option 2: Hetzner server (OpEx)
> Add a high-storage server at Hetzner, e.g. SX64 model.
> - SX64: EUR81/mo + EUR39 setup => 48TB RAID5 usable
> - Other deals may exist in the Server Auction at the time.
> 
> This would be an ongoing expense, providing a direct offset to annual
> income.
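
One rough way to compare the two options (a minimal sketch; the
EUR->USD rate is an assumption, and the comparison ignores any other
cost differences between the two hosting locations):

    # CapEx-vs-OpEx break-even: how many months of SX64 rental roughly
    # equal the one-time Option 1 drive purchase. The EUR->USD rate is
    # an assumed figure.
    eur_to_usd = 1.10
    sx64_monthly = 81 * eur_to_usd      # Option 2 ongoing cost, USD/mo
    sx64_setup = 39 * eur_to_usd        # Option 2 one-time setup, USD

    for capex in (2250, 8400):          # Option 1 min/max from above
        months = (capex - sx64_setup) / sx64_monthly
        print(f"capex USD{capex}: ~{months:.0f} months of SX64 rental")
    # -> ~25 months and ~94 months respectively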
> 
> ## Other financial impact:
> - USD10/mo estimated AWS S3 storage costs for historical distfiles.
> 
> ## Infra opinion:
> I (robbat2) have a soft preference for Option 1, with the larger
> drives, stretching the extra capacity to other services located at
> OSL: e.g. project hosting, or a dipper.gentoo.org replacement (8TB
> storage usage).
> 
> The downside is that our network segment at OSUOSL is short on IPv4
> addresses.
> 
> -- 
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
> E-Mail   : robbat2@gentoo.org
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136



