From: "Stefan G. Weichinger" <lists@xunil.at>
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Intel(R) C600 SAS Controller
Date: Wed, 11 Jun 2014 12:34:16 +0200
Message-ID: <539830A8.4090500@xunil.at>
In-Reply-To: <53982C02.6040808@thegeezer.net>
On 11.06.2014 12:14, thegeezer wrote:
>> Basically 3 RAID-6 hw-raids over 6 SAS hdds.
>
> OK, so I'm confused again. RAID6 requires a minimum of 4 drives.
> If you have 3 RAID6s then you would need 12 drives (coffee hasn't quite
> activated in me yet, so my maths may not be right).
> Or do you have essentially the first part of each of the six drives as
> virtual disk 1, the second part of each of the six drives as virtual
> disk 2, and the third part as virtual disk 3? If that is the case, bear
> in mind that the slowest part of the disk is the end of the disk, so you
> are essentially hobbling your virtual disk 3, but only a little: instead
> of being around 150 MB/sec it might run at 80.
I'd be happy to see 80!
I ran atop now while dd-ing stuff to an external disk and got ~1 MB/s for
2.5 GB of data.
(This is even too slow for USB ...)
I am unsure what to post here from atop ...?
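For a number that doesn't depend on atop's sampling, a repeatable write
probe looks something like this (a sketch, assuming GNU dd and stat; the
file lands in the current directory here, but in practice you'd point it
at the slow target mount):

```shell
# Sketch: write 64 MiB and force it to disk before reporting, so the page
# cache cannot flatter the number. Drop status=none to see dd's own MB/s
# figure; the file name is arbitrary.
f=ddtest.$$
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync status=none
stat -c %s "$f"   # 67108864 bytes actually on disk
rm -f "$f"
```

Run against the external disk and against one of the LVs, the two rates
should make it obvious whether the bottleneck is the array or the copy path.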
To the initial question:
Yes, imagine the six disks "split" or partitioned at the level of the
hardware RAID controller (as you described above).
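To make the arithmetic of that layout explicit (slice size below is a
made-up example, not the real geometry): each virtual disk is a RAID-6 set
spanning one slice from each of the six drives, and RAID-6 spends two
members on parity, so each set yields (n - 2) slices of usable space:

```shell
# Six disks, each sliced three ways at the controller; one slice per disk
# per virtual disk. With a hypothetical 100 GB slice size:
n=6
slice_gb=100
echo $(( (n - 2) * slice_gb ))   # 400 -> 400 GB usable per virtual disk
```

So six drives comfortably carry three RAID-6 virtual disks, since each set
still has its minimum of 4 (here 6) members.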
> you might also like to try a simple test of the following (yes, LVs
> count as block devices):
> # hdparm -t /dev/sda
> # hdparm -t /dev/sdb
> # hdparm -t /dev/sdc
> # hdparm -t /dev/vg01/winserver_disk0
> # hdparm -t /dev/vg01/amhold
everything around 380 MB/s ... only ~350 MB/s for
/dev/vg01/winserver_disk0 (which is still nice)
> I notice the Core i7 only now. Have you disabled Turbo Boost in the
> BIOS? This is great for a desktop but awful for a server, as it disables
> all those extra cores for a single busy thread.
I checked the BIOS settings yesterday and don't remember a Turbo Boost
option. I will check once more.
> cgroups are a great way of limiting or guaranteeing performance. By
> default I believe systemd will aim for user interactivity, but you want
> to change that to be more balanced.
> Maybe someone else can suggest how best to configure systemd cgroups.
> Meanwhile, can you run
> # tree /sys/fs/cgroup/
# !tr
tree /sys/fs/cgroup/
/sys/fs/cgroup/
├── cpu -> cpu,cpuacct
├── cpuacct -> cpu,cpuacct
├── cpu,cpuacct
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── cgroup.sane_behavior
│ ├── cpuacct.stat
│ ├── cpuacct.usage
│ ├── cpuacct.usage_percpu
│ ├── cpu.shares
│ ├── notify_on_release
│ ├── release_agent
│ └── tasks
├── cpuset
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── cgroup.sane_behavior
│ ├── cpuset.cpu_exclusive
│ ├── cpuset.cpus
│ ├── cpuset.mem_exclusive
│ ├── cpuset.mem_hardwall
│ ├── cpuset.memory_migrate
│ ├── cpuset.memory_pressure
│ ├── cpuset.memory_pressure_enabled
│ ├── cpuset.memory_spread_page
│ ├── cpuset.memory_spread_slab
│ ├── cpuset.mems
│ ├── cpuset.sched_load_balance
│ ├── cpuset.sched_relax_domain_level
│ ├── machine.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── cpuset.cpu_exclusive
│ │ ├── cpuset.cpus
│ │ ├── cpuset.mem_exclusive
│ │ ├── cpuset.mem_hardwall
│ │ ├── cpuset.memory_migrate
│ │ ├── cpuset.memory_pressure
│ │ ├── cpuset.memory_spread_page
│ │ ├── cpuset.memory_spread_slab
│ │ ├── cpuset.mems
│ │ ├── cpuset.sched_load_balance
│ │ ├── cpuset.sched_relax_domain_level
│ │ ├── machine-qemu\x2dotrs.scope
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── cpuset.cpu_exclusive
│ │ │ ├── cpuset.cpus
│ │ │ ├── cpuset.mem_exclusive
│ │ │ ├── cpuset.mem_hardwall
│ │ │ ├── cpuset.memory_migrate
│ │ │ ├── cpuset.memory_pressure
│ │ │ ├── cpuset.memory_spread_page
│ │ │ ├── cpuset.memory_spread_slab
│ │ │ ├── cpuset.mems
│ │ │ ├── cpuset.sched_load_balance
│ │ │ ├── cpuset.sched_relax_domain_level
│ │ │ ├── emulator
│ │ │ │ ├── cgroup.clone_children
│ │ │ │ ├── cgroup.event_control
│ │ │ │ ├── cgroup.procs
│ │ │ │ ├── cpuset.cpu_exclusive
│ │ │ │ ├── cpuset.cpus
│ │ │ │ ├── cpuset.mem_exclusive
│ │ │ │ ├── cpuset.mem_hardwall
│ │ │ │ ├── cpuset.memory_migrate
│ │ │ │ ├── cpuset.memory_pressure
│ │ │ │ ├── cpuset.memory_spread_page
│ │ │ │ ├── cpuset.memory_spread_slab
│ │ │ │ ├── cpuset.mems
│ │ │ │ ├── cpuset.sched_load_balance
│ │ │ │ ├── cpuset.sched_relax_domain_level
│ │ │ │ ├── notify_on_release
│ │ │ │ └── tasks
│ │ │ ├── notify_on_release
│ │ │ ├── tasks
│ │ │ ├── vcpu0
│ │ │ │ ├── cgroup.clone_children
│ │ │ │ ├── cgroup.event_control
│ │ │ │ ├── cgroup.procs
│ │ │ │ ├── cpuset.cpu_exclusive
│ │ │ │ ├── cpuset.cpus
│ │ │ │ ├── cpuset.mem_exclusive
│ │ │ │ ├── cpuset.mem_hardwall
│ │ │ │ ├── cpuset.memory_migrate
│ │ │ │ ├── cpuset.memory_pressure
│ │ │ │ ├── cpuset.memory_spread_page
│ │ │ │ ├── cpuset.memory_spread_slab
│ │ │ │ ├── cpuset.mems
│ │ │ │ ├── cpuset.sched_load_balance
│ │ │ │ ├── cpuset.sched_relax_domain_level
│ │ │ │ ├── notify_on_release
│ │ │ │ └── tasks
│ │ │ └── vcpu1
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── cpuset.cpu_exclusive
│ │ │ ├── cpuset.cpus
│ │ │ ├── cpuset.mem_exclusive
│ │ │ ├── cpuset.mem_hardwall
│ │ │ ├── cpuset.memory_migrate
│ │ │ ├── cpuset.memory_pressure
│ │ │ ├── cpuset.memory_spread_page
│ │ │ ├── cpuset.memory_spread_slab
│ │ │ ├── cpuset.mems
│ │ │ ├── cpuset.sched_load_balance
│ │ │ ├── cpuset.sched_relax_domain_level
│ │ │ ├── notify_on_release
│ │ │ └── tasks
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── notify_on_release
│ ├── release_agent
│ └── tasks
├── devices
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── cgroup.sane_behavior
│ ├── devices.allow
│ ├── devices.deny
│ ├── devices.list
│ ├── notify_on_release
│ ├── release_agent
│ ├── system.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── devices.allow
│ │ ├── devices.deny
│ │ ├── devices.list
│ │ ├── notify_on_release
│ │ ├── systemd-machined.service
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── devices.allow
│ │ │ ├── devices.deny
│ │ │ ├── devices.list
│ │ │ ├── notify_on_release
│ │ │ └── tasks
│ │ └── tasks
│ └── tasks
└── systemd
├── cgroup.clone_children
├── cgroup.event_control
├── cgroup.procs
├── cgroup.sane_behavior
├── machine.slice
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── machine-qemu\x2dotrs.scope
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── notify_on_release
│ └── tasks
├── notify_on_release
├── release_agent
├── system.slice
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── chronyd.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dbus.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-disk-by\x2did-scsi\x2d36003005701b46bf01b0d2db32bcaee78.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-disk-by\x2did-wwn\x2d0x6003005701b46bf01b0d2db32bcaee78.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-disk-by\x2dlabel-SWAP.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-disk-by\x2dpath-pci\x2d0000:02:00.0\x2dscsi\x2d0:2:1:0.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-disk-by\x2duuid-102d41a8\x2d848d\x2d4525\x2db39e\x2dd9b543355b71.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-mqueue.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── dev-sdb.swap
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── kmod-static-nodes.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── libvirtd.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── libvirt-guests.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── mnt-amhold.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── mnt-btrfs_windows.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── -.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── notify_on_release
│ ├── run-user-0.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── system-amanda.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-journald.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-logind.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-machined.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-random-seed.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-remount-fs.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-sysctl.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-tmpfiles-setup-dev.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-tmpfiles-setup.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-udevd.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-udev-settle.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-udev-trigger.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-update-utmp.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-user-sessions.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── systemd-vconsole-setup.service
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── system-getty.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── getty@tty1.service
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── notify_on_release
│ │ │ └── tasks
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── system-network.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── system-sshd.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ ├── sshd@3-172.32.99.234:22-172.32.99.12:43817.service
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── notify_on_release
│ │ │ └── tasks
│ │ ├── sshd@7-172.32.99.234:22-172.32.99.12:44251.service
│ │ │ ├── cgroup.clone_children
│ │ │ ├── cgroup.event_control
│ │ │ ├── cgroup.procs
│ │ │ ├── notify_on_release
│ │ │ └── tasks
│ │ └── tasks
│ ├── system-systemd\x2dfsck.slice
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── tasks
│ ├── tmp.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ ├── var-tmp-portage.mount
│ │ ├── cgroup.clone_children
│ │ ├── cgroup.event_control
│ │ ├── cgroup.procs
│ │ ├── notify_on_release
│ │ └── tasks
│ └── vixie-cron.service
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── notify_on_release
│ └── tasks
├── tasks
└── user.slice
├── cgroup.clone_children
├── cgroup.event_control
├── cgroup.procs
├── notify_on_release
├── tasks
└── user-0.slice
├── cgroup.clone_children
├── cgroup.event_control
├── cgroup.procs
├── notify_on_release
├── session-2.scope
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── notify_on_release
│ └── tasks
├── session-4.scope
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── notify_on_release
│ └── tasks
├── session-8.scope
│ ├── cgroup.clone_children
│ ├── cgroup.event_control
│ ├── cgroup.procs
│ ├── notify_on_release
│ └── tasks
├── tasks
└── user@0.service
├── cgroup.clone_children
├── cgroup.event_control
├── cgroup.procs
├── notify_on_release
└── tasks
63 directories, 393 files
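On balancing those cgroups: as a sketch only (directive names from the
systemd of that era; the path and values below are assumptions, not
tested here), a drop-in for the machine slice could lower the weight the
VMs get relative to the rest of the system:

```ini
# /etc/systemd/system/machine.slice.d/limits.conf  (hypothetical path)
# Note: the tree above shows cpu, cpuacct, cpuset and devices hierarchies
# but no blkio, so BlockIOWeight would first need the blkio controller
# (CONFIG_BLK_CGROUP) enabled in the kernel.
[Slice]
CPUShares=512
BlockIOWeight=250
```

After creating such a drop-in, a `systemctl daemon-reload` would be needed
for it to take effect.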
>> /dev/sda on /mnt/btrfs_windows type btrfs (rw,noatime,space_cache)
>
> a little odd that you have no partitions but otherwise minor.
That is btrfs-specific ... the whole /dev/sda is the btrfs pool.
>> the last line is meant as a target directory for dumping the content of
>> the LV winserver_disk0 ... it is a btrfs subvolume mounted with
>> compression turned OFF.
>>
>
> if you consider that your drives are on the same raidset, they are all
> essentially "one disk".
> Copying from one disk to the same disk, you are halving the speed it
> will work at.
... sure ... but not THAT slow ... with 6 disks I would still assume
decent performance.
I want both performance and redundancy, so I went for this setup.
KVM images on btrfs: bad idea (or: not the best idea).
On LVM: I want to run virt-backup for backing up the raw disk image; I
do that on ~3 other customer servers and it works fine ... so ...
Maybe I have to rethink the RAID setup, yes.
What is best practice here?
One big hw array with RAID6 -> /dev/sda, and then partition it like:
sda1 -> /
sda2 -> swap
sda3 -> LVM -> KVM LVs ...
?