From: Pandu Poluan
To: gentoo-user@lists.gentoo.org
Date: Sun, 21 Jul 2013 01:43:32 +0700
Subject: Re: SSDs, VM SANs & RAID - WAS Re: [gentoo-user] SSD partitioning and migration
In-Reply-To: <51EA9E29.10008@libertytrek.org>
References: <20130718182232.5c1301ce@acme7.acmenet> <20130719114234.332ff09e@acme7.acmenet> <51E96CBB.4080300@gmail.com> <201307191945.46099.michaelkintzios@gmail.com> <51EA9E29.10008@libertytrek.org>


On Jul 20, 2013 9:27 PM, "Tanstaafl" <tanstaafl@libertytrek.org> wrote:
>
> On 2013-07-19 3:02 PM, Paul Hartman <paul.hartman+gentoo@gmail.com> wrote:
>>
>> I think you are. Unless you are moving massive terabytes of data
>> across your drive on a constant basis I would not worry about regular
>> everyday write activity being a problem.
>
>
> I have a question regarding the use of SSDs in a VM SAN...
>
> We are considering buying a lower-end SAN (two actually, one for each of our locations), with lots of 2.5" bays, and using SSDs.
>
> The two questions that come to mind are:
>
> Is this a good use of SSDs? I honestly don't know if the running VMs would benefit from the faster IO or not (I *think* the answer is a resounding yes)?
>

Yes, the I/O would be faster, although how significant the gain is depends entirely on your workload pattern.

The bottleneck would be the LAN, though. SATA III tops out at 6 Gb/s per link (roughly 600 MB/s of payload after 8b/10b encoding overhead), so a shelf full of SSDs can push tens of Gbps in aggregate. You'll need active/active multipathing and/or bonded interfaces to cater for that firehose; a rough calculation follows.
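To put rough numbers on it (purely back-of-the-envelope; the bay count, per-SSD payload rate, and uplink speed are illustrative assumptions, not measurements):

    bays=12          # hypothetical: a 12-bay shelf full of SATA III SSDs
    per_ssd_gbps=5   # ~6 Gb/s line rate minus 8b/10b encoding overhead
    uplink_gbps=10   # hypothetical 10 GbE uplink

    aggregate=$(( bays * per_ssd_gbps ))
    echo "aggregate SSD payload bandwidth: ${aggregate} Gb/s"
    echo "10 GbE links needed to keep up:  $(( (aggregate + uplink_gbps - 1) / uplink_gbps ))"

Sixty-odd Gb/s against a single 10 GbE uplink makes the point: the array outruns the network by a wide margin.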

> Next is RAID...
>
> I've avoided RAID5 (and RAID6) like the plague ever since I almost got bit really badly by a multiple drive failure... luckily, the RAID5 had just finished rebuilding successfully after the first drive failed, before the second drive failed. I can't tell you how many years I aged that day while it was rebuilding after replacing the second failed drive.
>
> Ever since, I've always used RAID10.
>

Ahh, the Cadillac of RAID arrays :-)

> So... with SSDs, I think another advantage would be much faster rebuilds after a failed drive? So I could maybe start using RAID6 (would survive two simultaneous disk failures), and not lose so much available storage (50% with RAID10)?
>

If you're using ZFS with spinning disks as its vdev 'elements', resilvering (rebuilding the RAID array) would be somewhat faster, because ZFS knows what needs to be resilvered (i.e., used blocks) and skips over what doesn't (i.e., unused blocks). As for capacity: RAID6 keeps (n-2)/n of the raw space, so a 12-disk RAID6 yields about 83% usable versus 50% for RAID10.
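You can watch a resilver from userspace (assuming a pool named 'tank', for illustration):

    zpool status tank   # during a rebuild, reports "resilver in progress"
                        # along with how much has been resilvered so far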

> Last... while researching this, I ran across a very interesting article that I'd appreciate hearing opinions on.
>
> "The Benefits of a Flash Only, SAN-less Virtual Architecture"= ;:
>
> http://www.storage-switzerland.com/Articles/Entries/2012/9/20_The_Benefits_of_a_Flash_Only,_SAN-less_Virtual_Architecture.html
>
> or
>
> http://tinyurl.com/khwuspo
>
> Anyway, I look forward to hearing thoughts on this...
>

Interesting...

Another alternative for performance is to buy a bunch of spinning disks (say, 12 'enterprise'-grade ones), join them into a ZFS pool of 5 mirrored vdevs (that is, RAID10 a la ZFS) plus 2 spares, then use 4 SSDs to hold the ZFS cache (L2ARC) and intent log (ZIL); a sketch of the invocation is below.
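Something like this minimal sketch (device names are placeholders, not real paths; substitute your /dev/disk/by-id entries):

    # 5 two-way mirrors striped together (RAID10 a la ZFS), 2 hot spares,
    # a mirrored SLOG on two SSDs, and 2 cache SSDs (L2ARC devices are
    # always striped, never mirrored).
    zpool create tank \
        mirror disk1 disk2   mirror disk3 disk4   mirror disk5 disk6 \
        mirror disk7 disk8   mirror disk9 disk10 \
        spare disk11 disk12 \
        log mirror ssd1 ssd2 \
        cache ssd3 ssd4

Mirroring the SLOG is the cautious choice: on older pool versions, losing a standalone log device could take the whole pool with it.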

The capital expenditure per unit of usable capacity should be lower, while performance remains very acceptable.

Rgds,
--
