* [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
@ 2024-01-01 18:14 Ulrich Mueller
2024-01-02 1:02 ` Robin H. Johnson
2024-01-08 21:02 ` [gentoo-project] Council meeting 2024-01-14 - agenda Ulrich Mueller
0 siblings, 2 replies; 9+ messages in thread
From: Ulrich Mueller @ 2024-01-01 18:14 UTC
To: gentoo-dev-announce, gentoo-project
[-- Attachment #1: Type: text/plain, Size: 450 bytes --]
In two weeks from now, the Council will meet again. This is the time
to raise and prepare items that the Council should put on the agenda
to discuss or vote on.
Please respond to this message with agenda items. Do not hesitate to
repeat your agenda item here with a pointer if you previously
suggested one (since the last meeting).
The agenda for the meeting will be sent out on Sunday 2024-01-07.
Please reply to the gentoo-project list.
Ulrich
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 507 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-01 18:14 [gentoo-project] Council meeting 2024-01-14 - Call for agenda items Ulrich Mueller
@ 2024-01-02 1:02 ` Robin H. Johnson
2024-01-02 3:16 ` John Helmert III
2024-01-08 21:02 ` [gentoo-project] Council meeting 2024-01-14 - agenda Ulrich Mueller
1 sibling, 1 reply; 9+ messages in thread
From: Robin H. Johnson @ 2024-01-02 1:02 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 5194 bytes --]
On Mon, Jan 01, 2024 at 07:14:47PM +0100, Ulrich Mueller wrote:
> In two weeks from now, the Council will meet again. This is the time
> to raise and prepare items that the Council should put on the agenda
> to discuss or vote on.
>
> Please respond to this message with agenda items. Do not hesitate to
> repeat your agenda item here with a pointer if you previously
> suggested one (since the last meeting).
>
> The agenda for the meeting will be sent out on Sunday 2024-01-07.
Agenda items from Infra:
--------------------------
# Refresh Hetzner servers
dilfridge pointed out that demeter.amd64.dev.gentoo.org is being used to
build the new binary packages, and is almost at capacity. It's Hetzner's
AX51-NVME model, with 64GB RAM & 2x1TB NVME, presently costing ~EUR58/mo
Checking the other Hetzner servers, the oldest node is oystercatcher,
which is a PX90-SSD, w/ 64GB RAM & 2x240G SATA SSD: costing ~EUR100/mo.
I'd like to move demeter.amd64.dev.gentoo.org's workload to a
higher-spec server (new CPU generation, double core count, double RAM,
double storage):
Hetzner AX102 or equivalent specification from Hetzner's Server Auction
models, cost of EUR104/mo plus EUR39 setup.
At that point, replace oystercatcher with either the re-used old demeter
instance or a better deal from Hetzner's Server Auction.
(The other two servers at Hetzner are calonectris & wagtail, that handle
logging & failover for Git; they are newer than oystercatcher, one of
them is sponsored by Hetzner already).
## Net financial impact:
- Demeter new AX102: +104 EUR/mo, +EUR39 setup.
- oystercatcher->demeter content swap: 0 EUR/mo change
- decom old oystercatcher hardware: -EUR100/mo
= Net: EUR4/mo, EUR39 one-time charge.
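(As a back-of-envelope check of that net figure: today we pay ~EUR58/mo
for demeter plus ~EUR100/mo for oystercatcher, i.e. ~EUR158/mo; afterwards
we pay EUR104/mo for the new AX102 plus ~EUR58/mo for the old demeter box
serving as oystercatcher, i.e. ~EUR162/mo, hence roughly +EUR4/mo plus the
EUR39 one-time setup fee.)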
--------------------------
# Add hardware for ONLINE historical distfile archive
Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
A number of community members have collected distfiles over Gentoo's
history, but it doesn't exist online in a single place (e.g. my own
archives exist on offline LTO tapes, as part of my personal backups).
This needs somewhere in the realm of 4-8TB of online storage, based on
the bug research so far (wide range due to the need to verify duplicate
files without a coherent set of checksums over a 20-year span, as well
as excluding mirror-restricted files).
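As a rough illustration of the dedup step (hypothetical staging path, GNU
coreutils, not Infra's actual tooling), checksumming every candidate file
and grouping identical content would look something like:

  # stage the collected archives somewhere, e.g. /srv/distfile-dumps (made up)
  find /srv/distfile-dumps -type f -print0 \
    | xargs -0 sha256sum \
    | sort \
    | uniq -w64 --all-repeated=separate > duplicate-groups.txt

Only one copy per checksum group then needs to go into the online archive;
mirror-restricted files would still have to be filtered out separately.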
The existing servers at Hetzner do not have enough storage, so a
different hosting location is required.
The service would be fronted by CDN77's CDN, to avoid bandwidth issues
or slamming slow disks in case some files accidentally turn out to be hot.
There would also be a periodic backup into AWS S3 Glacier, able to
re-create the server if needed; however serving data from S3 directly is
cost prohibitive (a full restore for 8TB of data would cost USD20).
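For reference, pushing such a backup into the Deep Archive tier (the
cheapest class) is a single AWS CLI call; bucket and key below are
hypothetical examples only:

  aws s3 cp distfile-archive.tar.zst \
    s3://gentoo-archive-backups/distfiles/distfile-archive.tar.zst \
    --storage-class DEEP_ARCHIVE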
A stretch goal could be also hosting historical snapshots and then
historical release media, but those are less critical than the distfiles
themselves.
## Option 1: Use drive slots in {killdeer,kingbird}.gentoo.org (CapEx)
Add NVMe QLC drives to one or both of the existing VM-hosting servers,
killdeer & kingbird, at OSL.
The servers presently have 3 U.2 drives, and 8-10 drive slots [need to
review backplane model].
Intel/Solidigm D5-P5316 QLC U.2 drives
E.g. 15.36TB drive $1,125USD/ea
https://www.wiredzone.com/shop/product/10021877-intel-ssdpf2nv153tz-hard-drive-15-36tb-nvme-pcie-4-0-u-2-15mm-d5-p5316-series-8590
E.g. 30.72TB drive, $2,086USD/ea
https://www.wiredzone.com/shop/product/10022217-intel-ssdpf2nv307tz-hard-drive-30-72tb-ssd-nvme-pcie-x4-gen4-u-2-15mm-d5-p5316-series-8832
Other drive vendors exist as well, but the pricing on Intel QLCs is
extremely good (we might also be able to find even better pricing via
Intel employees).
Min qty would be 2x15.36T drives, 1 per server.
Max qty would be 4x30.72T drives, 2 per server (running RAID1)
Cost range $2250-$8400, plus applicable US sales taxes
This would be a capital purchase, and the depreciation would provide a
tax deduction over a 5-year period under the US IRS MACRS rules, with
applicable front-loaded bonus depreciation for the year placed in
service (60% for 2024).
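(Illustrative only, assuming the maximum ~USD8,400 purchase: the 60% bonus
would make roughly USD5,040 deductible in 2024, with the remaining ~USD3,360
recovered over the 5-year MACRS schedule of 20%, 32%, 19.2%, 11.52%, 11.52%
and 5.76% under the half-year convention.)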
## Option 2: Hetzner server (OpEx)
Add a high-storage server at Hetzner, e.g. SX64 model.
- SX64: EUR81/mo + EUR39 setup => 48TB RAID5 usable
- Other deals may exist in the Server Auction at the time.
This would be an ongoing expense, providing a direct offset to annual
income.
## Other financial impact:
- USD10/mo estimated AWS S3 storage costs for historical distfiles.
## Infra opinion:
I (robbat2) have a soft preference to use Option 1, with the larger
drives, and stretch the extra capacity to other services located at OSL:
e.g. project hosting, dipper.gentoo.org replacement (8TB storage usage).
Downside is that our network segment at OSUOSL is short on IPv4 addresses.
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail : robbat2@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 1113 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-02 1:02 ` Robin H. Johnson
@ 2024-01-02 3:16 ` John Helmert III
2024-01-02 22:37 ` Robin H. Johnson
2024-01-13 13:00 ` Andreas K. Huettel
0 siblings, 2 replies; 9+ messages in thread
From: John Helmert III @ 2024-01-02 3:16 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 5739 bytes --]
On Tue, Jan 02, 2024 at 01:02:17AM +0000, Robin H. Johnson wrote:
> On Mon, Jan 01, 2024 at 07:14:47PM +0100, Ulrich Mueller wrote:
> > In two weeks from now, the Council will meet again. This is the time
> > to raise and prepare items that the Council should put on the agenda
> > to discuss or vote on.
> >
> > Please respond to this message with agenda items. Do not hesitate to
> > repeat your agenda item here with a pointer if you previously
> > suggested one (since the last meeting).
> >
> > The agenda for the meeting will be sent out on Sunday 2024-01-07.
>
> Agenda items from Infra:
>
> --------------------------
>
> # Refresh Hetzner servers
> dilfridge pointed out that demeter.amd64.dev.gentoo.org is being used to
> build the new binary packages, and is almost at capacity. It's Hetzner's
> AX51-NVME model, with 64GB RAM & 2x1TB NVME, presently costing ~EUR58/mo
>
> Checking the other Hetzner servers, the oldest node is oystercatcher,
> which is a PX90-SSD, w/ 64GB RAM & 2x240G SATA SSD: costing ~EUR100/mo.
>
> I'd like to move demeter.amd64.dev.gentoo.org's workload to a
> higher-spec server (new CPU generation, double core count, double RAM,
> double storage):
>
> Hetzner AX102 or equivalent specification from Hetzner's Server Auction
> models, cost of EUR104/mo plus EUR39 setup.
>
> At that point, replace oystercatcher with either the re-used old demeter
> instance or a better deal from Hetzner's Server Auction.
>
> (The other two servers at Hetzner are calonectris & wagtail, that handle
> logging & failover for Git; they are newer than oystercatcher, one of
> them is sponsored by Hetzner already).
>
> ## Net financial impact:
> - Demeter new AX102: +104 EUR/mo, +EUR39 setup.
> - oystercatcher->demeter content swap: 0 EUR/mo change
> - decom old oystercatcher hardware: -EUR100/mo
> = Net: EUR4/mo, EUR39 one-time charge.
>
> --------------------------
>
> # Add hardware for ONLINE historical distfile archive
> Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
>
> A number of community members have collected distfiles over Gentoo's
> history, but it doesn't exist online in a single place (e.g. my own
> archives exist on offline LTO tapes, as part of my personal backups).
>
> This needs somewhere in the realm of 4-8TB of online storage, based on
> the bug research so far (wide range due to the need to verify duplicate
> files without a coherent set of checksums over a 20-year span, as well
> as excluding mirror-restricted files).
>
> The existing servers at Hetzner do not have enough storage, so a
> different hosting location is required.
>
> The service would be fronted by CDN77's CDN, to avoid bandwidth issues
> or slamming slow disks in case some files accidentally turn out to be hot.
>
> There would also be a periodic backup into AWS S3 Glacier, able to
> re-create the server if needed;
Are we already using Glacier? Glacier itself presumably isn't libre,
so I'm not sure how we should feel about it from the perspective of
social contract dependency requirements.
> however serving data from S3 directly is cost prohibitive (a full
> restore for 8TB of data would cost USD20).
Typo? "USD20" doesn't seem prohibitive.
> A stretch goal could be also hosting historical snapshots and then
> historical release media, but those are less critical than the distfiles
> themselves.
>
> ## Option 1: Use drive slots in {killdeer,kingbird}.gentoo.org (CapEx)
> Add NVMe QLC drives to one or both of the existing VM-hosting servers,
> killdeer & kingbird, at OSL.
>
> The servers presently have 3 U.2 drives, and 8-10 drive slots [need to
> review backplane model].
>
> Intel/Solidigm D5-P5316 QLC U.2 drives
> E.g. 15.36TB drive $1,125USD/ea
> https://www.wiredzone.com/shop/product/10021877-intel-ssdpf2nv153tz-hard-drive-15-36tb-nvme-pcie-4-0-u-2-15mm-d5-p5316-series-8590
> E.g. 30.72TB drive, $2,086USD/ea
> https://www.wiredzone.com/shop/product/10022217-intel-ssdpf2nv307tz-hard-drive-30-72tb-ssd-nvme-pcie-x4-gen4-u-2-15mm-d5-p5316-series-8832
> Other drive vendors exist as well, but the pricing on Intel QLCs is
> extremely good (we might also be able to find even better pricing via
> Intel employees).
>
> Min qty would be 2x15.36T drives, 1 per server.
> Max qty would be 4x30.72T drives, 2 per server (running RAID1)
>
> Cost range $2250-$8400, plus applicable US sales taxes
>
> This would be a capital purchase, and the depreciation would provide a
> tax deduction over a 5-year period under the US IRS MACRS rules, with
> applicable front-loaded bonus depreciation for the year placed in
> service (60% for 2024).
>
> ## Option 2: Hetzner server (OpEx)
> Add a high-storage server at Hetzner, e.g. SX64 model.
> - SX64: EUR81/mo + EUR39 setup => 48TB RAID5 usable
> - Other deals may exist in the Server Auction at the time.
>
> This would be an ongoing expense, providing a direct offset to annual
> income.
>
> ## Other financial impact:
> - USD10/mo estimated AWS S3 storage costs for historical distfiles.
>
> ## Infra opinion:
> I (robbat2) have a soft preference to use Option 1, with the larger
> drives, and stretch the extra capacity to other services located at OSL:
> e.g. project hosting, dipper.gentoo.org replacement (8TB storage usage).
>
> Downside is that our network segment at OSUOSL is short on IPv4 addresses.
>
> --
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
> E-Mail : robbat2@gentoo.org
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-02 3:16 ` John Helmert III
@ 2024-01-02 22:37 ` Robin H. Johnson
2024-01-03 6:24 ` Michał Górny
2024-01-13 13:00 ` Andreas K. Huettel
1 sibling, 1 reply; 9+ messages in thread
From: Robin H. Johnson @ 2024-01-02 22:37 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 1520 bytes --]
On Mon, Jan 01, 2024 at 07:16:32PM -0800, John Helmert III wrote:
> > Agenda items from Infra:
...
> > # Add hardware for ONLINE historical distfile archive
> > Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
...
> > There would also be a periodic backup into AWS S3 Glacier, able to
> > re-create the server if needed;
> Are we already using Glacier? Glacier itself presumably isn't libre,
> so I'm not sure how we should feel about it from the perspective of
> social contract dependency requirements.
Yes, Infra already uses S3 and Glacier for backups specifically. It's
*NOT* in any hot path whatsoever, backups only for disaster recovery.
> > however serving data from S3 directly is cost prohibitive (a full
> > restore for 8TB of data would cost USD20).
> Typo? "USD20" doesn't seem prohibitive.
Yes, a typo.
The restore from Glacier->regular S3 is a one time USD20-100 cost
(depending on object count). PLUS USD90/TB to send it out back out to
the other hardware.
If we wanted to serve the entire archive from S3, alternate the cost
calculations:
- $25/TB/mo in S3 Standard storage => $100/mo for 4TB, $200/mo for 8TB
- plus egress fees to send data from AWS to CDN77: $0.09/GB worst case;
traffic unknown.
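(So for the 8TB case discussed earlier, a full restore plus re-export works
out to roughly USD20-100 for the Glacier retrieval plus about 8 x USD90 =
USD720 in egress, i.e. on the order of USD740-820 one-time, rather than the
mistyped USD20.)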
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail : robbat2@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 1113 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-02 22:37 ` Robin H. Johnson
@ 2024-01-03 6:24 ` Michał Górny
2024-01-03 8:33 ` Robin H. Johnson
2024-01-03 12:01 ` Rich Freeman
0 siblings, 2 replies; 9+ messages in thread
From: Michał Górny @ 2024-01-03 6:24 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 1031 bytes --]
On Tue, 2024-01-02 at 22:37 +0000, Robin H. Johnson wrote:
> On Mon, Jan 01, 2024 at 07:16:32PM -0800, John Helmert III wrote:
> > > Agenda items from Infra:
> ...
> > > # Add hardware for ONLINE historical distfile archive
> > > Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
> ...
> > > There would also be a periodic backup into AWS S3 Glacier, able to
> > > re-create the server if needed;
> > Are we already using Glacier? Glacier itself presumably isn't libre,
> > so I'm not sure how we should feel about it from the perspective of
> > social contract dependency requirements.
> Yes, Infra already uses S3 and Glacier for backups specifically. It's
> *NOT* in any hot path whatsoever, backups only for disaster recovery.
So we're basically talking about using services of an extremely
unethical company that can additionally randomly change prices to store
backups that we never test because it would be too expensive to test
them.
--
Best regards,
Michał Górny
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 512 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-03 6:24 ` Michał Górny
@ 2024-01-03 8:33 ` Robin H. Johnson
2024-01-03 12:01 ` Rich Freeman
1 sibling, 0 replies; 9+ messages in thread
From: Robin H. Johnson @ 2024-01-03 8:33 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 2787 bytes --]
On Wed, Jan 03, 2024 at 07:24:57AM +0100, Michał Górny wrote:
> On Tue, 2024-01-02 at 22:37 +0000, Robin H. Johnson wrote:
> > On Mon, Jan 01, 2024 at 07:16:32PM -0800, John Helmert III wrote:
> > > > Agenda items from Infra:
> > ...
> > > > # Add hardware for ONLINE historical distfile archive
> > > > Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
> > ...
> > > > There would also be a periodic backup into AWS S3 Glacier, able to
> > > > re-create the server if needed;
> > > Are we already using Glacier? Glacier itself presumably isn't libre,
> > > so I'm not sure how we should feel about it from the perspective of
> > > social contract dependency requirements.
> > Yes, Infra already uses S3 and Glacier for backups specifically. It's
> > *NOT* in any hot path whatsoever, backups only for disaster recovery.
>
> So we're basically talking about using services of an extremely
> unethical company that can additionally randomly change prices to store
> backups that we never test because it would be too expensive to test
> them.
Copying it *OUT* of AWS's cloud is very expensive; that's their business
model. Additionally, Glacier Deep Archive is optimized for NOT being
accessed.
Cheap verification is possible by doing the verification within the
cloud, and by picking which content to verify rather than verifying
everything.
I did a verification test of the main git.g.o repos a few years ago - a
retest would be good (esp. with somebody else trying to follow the
restore instructions instead of me, to ensure I'm not in the critical
path to restore).
Say you want to use a libre provider: rsync.net is the closest to a true
libre offering that I'm aware of.
AWS Glacier is USD0.00099/GB/mo (USD0.99/TB/mo);
rsync.net is USD0.01/GB/mo (USD10.00/TB/mo).
I'm aware of discounts on both services, but I'm using published prices
to compare.
That makes rsync.net 10x more expensive as a baseline, before comparing
the services on any other merits.
Infra presently has 30TB+ of backups in AWS, split by file size, since
Glacier has a minimum object size and small files are significantly
cheaper to store in hot storage.
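Taking those published rates at face value for the existing 30TB+ (and
ignoring the small-file split and any discounts), Glacier works out to
roughly 30 x USD0.99 = ~USD30/mo, versus roughly 30 x USD10 = ~USD300/mo
on rsync.net.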
If you'd like those backups to also be present on rsync.net or some
other libre service, please put that forward as a proposal to council
for funding.
As treasurer, I strove to find the cheapest long-term option that fit
the requirements, including the previous social contract opinion that
backups were reasonable to host on AWS or a similar provider.
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail : robbat2@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 1113 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-03 6:24 ` Michał Górny
2024-01-03 8:33 ` Robin H. Johnson
@ 2024-01-03 12:01 ` Rich Freeman
1 sibling, 0 replies; 9+ messages in thread
From: Rich Freeman @ 2024-01-03 12:01 UTC
To: gentoo-project
On Wed, Jan 3, 2024 at 1:24 AM Michał Górny <mgorny@gentoo.org> wrote:
>
> On Tue, 2024-01-02 at 22:37 +0000, Robin H. Johnson wrote:
> > On Mon, Jan 01, 2024 at 07:16:32PM -0800, John Helmert III wrote:
> > > > Agenda items from Infra:
> > ...
> > > > # Add hardware for ONLINE historical distfile archive
> > > > Bug #834712 is a draft proposal to add an ONLINE historical distfile archive.
> > ...
> > > > There would also be a periodic backup into AWS S3 Glacier, able to
> > > > re-create the server if needed;
> > > Are we already using Glacier? Glacier itself presumably isn't libre,
> > > so I'm not sure how we should feel about it from the perspective of
> > > social contract dependency requirements.
> > Yes, Infra already uses S3 and Glacier for backups specifically. It's
> > *NOT* in any hot path whatsoever, backups only for disaster recovery.
>
> So we're basically talking about using services of an extremely
> unethical company that can additionally randomly change prices to store
> backups that we never test because it would be too expensive to test
> them.
Any company can change their prices unless you have some sort of
contract with them, and I've yet to see anything that is nearly as
cheap as deep glacier for the use case of backup storage. (Setting
aside the original point of this thread which was accessible archives,
which IMO is NOT a good use case for this service.) I'd love to have
a cheaper option, but this particular AWS offering seems to be the one
that isn't premium priced. $0.99/TB/month with offsite replicas is
hard to beat. Also, their price changes have historically tended to
be in the downwards direction.
Testing glacier is not expensive. Just pick a reasonable sample of
data (perhaps one created for this purpose), and restore it to a
server on AWS. You would pay the activation fees, but that's it. The
larger cost is the data transfer, and you don't need to do that to
test. Also, for very large amounts of data, physical shipping may be
cheaper (it is an option with them).
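For the record, a per-object test restore of that kind is a single AWS CLI
call (bucket and key below are hypothetical examples; Bulk is the cheapest
retrieval tier):

  aws s3api restore-object \
    --bucket gentoo-archive-backups \
    --key distfiles/distfile-archive.tar.zst \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'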
The other simple approach is to just create a test set of data (such
as your next planned full backup), and suspend any lifecycle policies
that would send it to glacier. Then perform your full backup, and
then do a full restore to AWS infrastructure. You wouldn't pay
anything for this beyond a few days of storage at the S3 costs and the
time any servers are running. I do this sort of thing to test my own
backups. Then reinstate your lifecycle policy and your backups will
immediately move to glacier, and you're no longer paying the full S3
costs.
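A sketch of the lifecycle-suspension step (again with a hypothetical bucket
name): save the current rules, drop them for the duration of the test, then
re-apply them afterwards:

  aws s3api get-bucket-lifecycle-configuration \
    --bucket gentoo-archive-backups > lifecycle.json
  aws s3api delete-bucket-lifecycle --bucket gentoo-archive-backups
  # ... run the full backup and the test restore, then:
  aws s3api put-bucket-lifecycle-configuration \
    --bucket gentoo-archive-backups \
    --lifecycle-configuration file://lifecycle.json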
I don't really see services the same way as software. First, NOBODY
offers a 100% FOSS service that I'm aware of - if they're storing it on
tapes, odds are the tape drive has proprietary firmware. Ditto with
hard drives. And if you're paying somebody to physically store your
stuff, then of course they can pull the rug on you and you can't get
your stuff back. IMO if this is a concern the best approach is redundancy.
Have your datacenter at one company, and your backups at another, and
then only a simultaneous failure at both will make the data
inaccessible.
The APIs are also basically a standard at this point. If S3 goes away
suddenly then you can just use any of a bunch of object store
implementations to roll your own, or use one of the gazillion
providers who would no doubt instantly spring up to cover this need,
likely using their own FOSS software. The interface APIs are
basically the most important aspect of openness when you're talking
about a service, because they are what make the service portable. In
the case of S3 there are competitors with the same APIs, and FOSS
solutions with the same APIs. We could self-host our own if we
wanted, and the reason we're not considering that is that it would
probably be 1000x more expensive.
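As a concrete illustration of that portability (the endpoint URL is a
made-up example), the same tooling keeps working against any S3-compatible
store, e.g. a self-hosted MinIO or Ceph RGW instance, just by pointing it
at a different endpoint:

  aws s3 ls s3://gentoo-archive-backups/ \
    --endpoint-url https://s3.example.gentoo.org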
--
Rich
* [gentoo-project] Council meeting 2024-01-14 - agenda
2024-01-01 18:14 [gentoo-project] Council meeting 2024-01-14 - Call for agenda items Ulrich Mueller
2024-01-02 1:02 ` Robin H. Johnson
@ 2024-01-08 21:02 ` Ulrich Mueller
1 sibling, 0 replies; 9+ messages in thread
From: Ulrich Mueller @ 2024-01-08 21:02 UTC
To: gentoo-dev-announce, gentoo-project
[-- Attachment #1: Type: text/plain, Size: 505 bytes --]
The next Council meeting will be on Sunday 2024-01-14, 19:00 UTC in the
#gentoo-council channel on Libera Chat.
Agenda:
1. Roll call
2. Foundation dissolution status update
3. Refresh Hetzner servers [1]
4. Hardware for online historical distfile archive [1,2]
5. Open bugs with Council participation [3]
6. Open floor
[1] https://marc.info/?l=gentoo-project&m=170415728917799&w=2
[2] https://bugs.gentoo.org/834712
[3] https://wiki.gentoo.org/wiki/Project:Council#Open_bugs_with_Council_participation
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 507 bytes --]
* Re: [gentoo-project] Council meeting 2024-01-14 - Call for agenda items
2024-01-02 3:16 ` John Helmert III
2024-01-02 22:37 ` Robin H. Johnson
@ 2024-01-13 13:00 ` Andreas K. Huettel
1 sibling, 0 replies; 9+ messages in thread
From: Andreas K. Huettel @ 2024-01-13 13:00 UTC
To: gentoo-project
[-- Attachment #1: Type: text/plain, Size: 2233 bytes --]
> > Agenda items from Infra:
> >
> > --------------------------
> >
> > # Refresh Hetzner servers
> > dilfridge pointed out that demeter.amd64.dev.gentoo.org is being used to
> > build the new binary packages, and is almost at capacity. It's Hetzner's
> > AX51-NVME model, with 64GB RAM & 2x1TB NVME, presently costing ~EUR58/mo
[...]
> > I'd like to move demeter.amd64.dev.gentoo.org's workload to a
> > higher-spec server (new CPU generation, double core count, double RAM,
> > double storage):
> >
> > Hetzner AX102 or equivalent specification from Hetzner's Server Auction
> > models, cost of EUR104/mo plus EUR39 setup.
An example of the load on demeter is plotted here:
https://dev.gentoo.org/~dilfridge/load-demeter.pdf
Demeter is not only building the binary packages, but also stages and ISOs
for amd64 and x86 (natively) and for alpha, m68k, loong, riscv, and
aarch64_be (via qemu).
Settings to keep it maximally busy are:
MAKEOPTS="-j17 -l32"
EMERGE_DEFAULT_OPTS="--jobs 5 --load 32"
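(For context, these presumably live in /etc/portage/make.conf and mean
roughly: MAKEOPTS allows up to 17 parallel make jobs but stops spawning new
ones above a load average of 32, while EMERGE_DEFAULT_OPTS builds up to 5
packages in parallel under the same load cap.)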
I checked the server auction a few days ago and didn't really see anything
obviously better than the AX102 (of course this may have changed in the
meantime).
https://www.hetzner.com/dedicated-rootserver/matrix-ax
A few remarks about the AX102 config:
* the main requirement for the machine is many cores and threads (now 8 cores,
AX102 16 cores)
* NVMe storage: 2TB (now) is OK, 3.8TB (AX102) is better
(mostly to have a convenience buffer for bug fixing in qemu nspawns)
* no redundancy required; we can go a few days without package/stage updates,
a reinstall is easy, and the data is kept on other infra machines /
mastermirror anyway
* AX102 is Zen4, which is nice since it's future-proof (x86-64-v4, anyone?);
right now we have Zen2
> > ## Net financial impact:
> > - Demeter new AX102: +104 EUR/mo, +EUR39 setup.
> > - oystercatcher->demeter content swap: 0 EUR/mo change
> > - decom old oystercatcher hardware: -EUR100/mo
> > = Net: EUR4/mo, EUR39 one-time charge.
> >
> > --------------------------
--
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer
(council, comrel, toolchain, base-system, perl, libreoffice)
https://wiki.gentoo.org/wiki/User:Dilfridge
[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
End of thread (newest message: 2024-01-13 13:00 UTC)
Thread overview (9+ messages):
2024-01-01 18:14 [gentoo-project] Council meeting 2024-01-14 - Call for agenda items Ulrich Mueller
2024-01-02 1:02 ` Robin H. Johnson
2024-01-02 3:16 ` John Helmert III
2024-01-02 22:37 ` Robin H. Johnson
2024-01-03 6:24 ` Michał Górny
2024-01-03 8:33 ` Robin H. Johnson
2024-01-03 12:01 ` Rich Freeman
2024-01-13 13:00 ` Andreas K. Huettel
2024-01-08 21:02 ` [gentoo-project] Council meeting 2024-01-14 - agenda Ulrich Mueller