Subject: Re: [gentoo-user] Hard drive storage questions
From: Rich Freeman
Date: Thu, 8 Nov 2018 21:29:52 -0500
To: gentoo-user@lists.gentoo.org

On Thu, Nov 8, 2018 at 8:16 PM Dale wrote:
>
> I'm trying to come up with a
> plan that allows me to grow easier and without having to worry about
> running out of motherboard based ports.
>

So, this is an issue I've been changing my mind on over the years.
There are a few common approaches:

* Find ways to cram a lot of drives on one host
* Use a patchwork of NAS devices or improvised hosts sharing over
  samba/nfs/etc and end up with a mess of mount points.
* Use a distributed FS

Right now I'm mainly using the first approach, and I'm trying to move
to the last.  The middle option has never appealed to me.

So, to do more of what you're doing in the most efficient way
possible, I recommend finding used LSI HBA cards.  These have mini-SAS
ports on them, and one of these can be attached to a breakout cable
that gets you 4 SATA ports.  I just picked up two of these for $20
each on ebay (used) and they have 4 mini-SAS ports each, which is
capacity for 16 SATA drives per card.  Typically these have 4x or
larger PCIe interfaces, so you'll need a large slot, or one with a
cutout.

You'd have to do the math but I suspect that if the card+MB supports
PCIe 3.0 you're not losing much if you cram it into a smaller slot.
If most of the drives are idle most of the time then that also demands
less bandwidth.  16 fully busy hard drives obviously can put out a lot
of data if reading sequentially.
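For a rough sense of the numbers, here's a back-of-envelope check in
Python.  The figures are assumptions, not measurements: roughly
0.985 GB/s usable per PCIe 3.0 lane after 128b/130b encoding, 0.5 GB/s
per 2.0 lane, and ~200 MB/s sequential per spinning disk.

#!/usr/bin/env python3
# Back-of-envelope bandwidth check - rough assumed figures, not a benchmark.
# ~0.985 GB/s usable per PCIe 3.0 lane, ~0.5 GB/s per PCIe 2.0 lane,
# ~0.2 GB/s sequential per 7200rpm drive.

PCIE_GBPS_PER_LANE = {"2.0": 0.5, "3.0": 0.985}
DRIVES = 16
HDD_SEQ_GBPS = 0.2

demand = DRIVES * HDD_SEQ_GBPS  # worst case: all 16 drives streaming at once
for gen, per_lane in sorted(PCIE_GBPS_PER_LANE.items()):
    for lanes in (1, 2, 4, 8):
        supply = per_lane * lanes
        verdict = "ok" if supply >= demand else "bottleneck"
        print(f"PCIe {gen} x{lanes}: {supply:4.1f} GB/s vs {demand:.1f} GB/s"
              f" needed -> {verdict}")

On those numbers a 3.0 x4 link just about covers the worst case of all
16 drives streaming at once; drop to x2 or x1 and you'd only notice
when several drives are busy at the same time.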
You can of course get more consumer-oriented SATA cards, but you're
lucky to get 2-4 SATA ports on a card that runs you $30.  The mini-SAS
HBAs get you a LOT more drives per PCIe slot, and your PCIe slots are
your main limiting factor, assuming you have power and case space.

Oh, and those HBA cards need to be flashed into "IT" mode - they're
often sold this way, but if they support RAID you want to flash the IT
firmware that just turns them into a bunch of standalone SATA ports.
This is usually a PITA that involves DOS or whatever, but I have
noticed some of the software needed in the Gentoo repo.

If you go that route it is just like having a ton of SATA ports in
your system - they just show up as sda...sdz and so on (no idea where
it goes after that).  Software-wise you just keep doing what you're
already doing (though you should be seriously considering
mdadm/zfs/btrfs/whatever at that point).  That is the more traditional
route.

Now let me talk about distributed filesystems, which is the more
scalable approach.  I'm getting tired of being limited by SATA ports,
and cases, and such.  I'm also frustrated with some of zfs's
inflexibility around removing drives.  These are constraints that make
upgrading painful, and often inefficient.  Distributed filesystems
offer a different solution.

A distributed filesystem spreads its storage across many hosts, with
an arbitrary number of drives per host (more or less).  So, you can
add more hosts, add more drives to a host, and so on.  That means
you're never forced to find a way to cram a few more drives into one
host.  The resulting filesystem appears as one gigantic filesystem
(unless you want to split it up), which means no mess of nfs mount
points and none of the other nfs headaches.

Just as with RAID these support redundancy, except now you can lose
entire hosts without issue.  With many of them you can even tell it
which PDU/rack/whatever each host is plugged into, and it will place
copies so that you can lose all the hosts in one rack and still have
your data.  You can also mount the filesystem on as many hosts as you
want at the same time.  They do tend to be a bit more complex.

The big players can scale VERY large - thousands of drives easily.
Everything seems to be moving towards Ceph/CephFS.  If you were
hosting a datacenter full of VMs/containers/etc I'd be telling you to
host it on Ceph.  However, for small scale (which you definitely are
right now), I'm not thrilled with it.  Due to the way it allocates
data (hash-based), anytime anything changes you end up having to move
all the data around in the cluster, and all the reports I've read
suggest it doesn't perform all that great if you only have a few
nodes.  Ceph storage nodes are also RAM-hungry, and I want to run
these on ARM to save power; few ARM boards have that kind of RAM, and
the ones that do are very expensive.
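To make the rebalancing point concrete, here's a toy model.  This is
NOT Ceph's actual CRUSH algorithm (CRUSH is much smarter about
limiting movement); it just shows why any placement scheme that is a
pure function of the cluster map has to shuffle data whenever the map
changes:

#!/usr/bin/env python3
# Toy illustration of hash-based placement - not Ceph's CRUSH, just the
# simplest possible stand-in.  Each object's location is a pure function
# of the node count, so changing the node count changes locations.

import hashlib

def node_for(obj, n_nodes):
    """Map an object name to a node using a stable hash."""
    h = int(hashlib.sha256(obj.encode()).hexdigest(), 16)
    return h % n_nodes

objects = [f"obj-{i}" for i in range(100_000)]

before = {o: node_for(o, 3) for o in objects}  # 3-node cluster
after = {o: node_for(o, 4) for o in objects}   # add a 4th node

moved = sum(1 for o in objects if before[o] != after[o])
print(f"{moved / len(objects):.0%} of objects move going from 3 to 4 nodes")
# Naive modulo placement moves roughly 75% of the data here.  CRUSH keeps
# it closer to the ~25% that genuinely has to move to fill the new node,
# but the point stands: any change to the cluster kicks off background
# data movement, and on a tiny cluster that churn is very noticeable.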
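And the PDU/rack awareness I mentioned above is less magic than it
sounds - it's just a placement constraint.  A minimal sketch of the
idea (hypothetical host and rack names, nobody's actual code):

#!/usr/bin/env python3
# Minimal sketch of rack-aware replica placement: never put two copies
# of a chunk in the same rack.  Hypothetical topology, for illustration.

HOSTS = {
    "node1": "rack-a", "node2": "rack-a",
    "node3": "rack-b", "node4": "rack-b",
    "node5": "rack-c",
}

def place_replicas(chunk_id, copies=2):
    """Pick hosts for a chunk so that each copy lands in a different rack."""
    hosts = list(HOSTS)
    start = chunk_id % len(hosts)
    order = hosts[start:] + hosts[:start]  # rotate per chunk to spread load
    chosen, used_racks = [], set()
    for host in order:
        if HOSTS[host] not in used_racks:
            chosen.append(host)
            used_racks.add(HOSTS[host])
            if len(chosen) == copies:
                return chosen
    raise RuntimeError("not enough racks for the requested copy count")

for chunk in range(4):
    print(chunk, place_replicas(chunk))

Lose every host in rack-a and each chunk still has a copy somewhere
else.  That's all the rack awareness buys you, but it matters once you
have more than a handful of hosts.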
Personally I'm working on deploying a cluster of a few nodes running
LizardFS, which is basically a fork/derivative of MooseFS.  While it
won't scale nearly as well, below 100 nodes it should be fine, and in
particular it sounds like it works fairly well with only a few nodes.
It has its pros and cons, but for my needs it should be sufficient.
It also isn't RAM-hungry.

I'm going to be testing it on some RockPro64s, with the LSI HBAs.  I
did note that Gentoo lacks a LizardFS client.  I suspect I'll be
looking to fix that - I'm sure the moosefs ebuild would be a good
starting point.  I'm probably going to be a wimp and run the storage
nodes on Ubuntu or whatever upstream targets - they're basically
appliances as far as I'm concerned.

So, those are the two routes I'd recommend.  Just get yourself an HBA
if you only want a few more drives.  If you see your needs expanding
then consider a distributed filesystem.  The advantage of the latter
is that you can keep expanding it however you want with additional
drives/nodes/whatever.  If you're going over 20 nodes I'd use Ceph for
sure - IMO that seems to be the future of this space.

--
Rich