* [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-03-10 19:47 UTC
To: gentoo-user
Recently I have been seeing bad performance with my vmware-server.
Loads of hard-disk I/O ... bad even on the RAID1; the disks are working all
the time (I can hear them, and iostat confirms it).
Might this have to do with kernel 2.6.33 and non-matching vmware-modules?
I masked some modules back then because they didn't work; maybe they
would now.
Could someone tell me what combo works with gentoo-sources-2.6.33?
I currently have:
# eix vmware-mod
[I] app-emulation/vmware-modules
Available versions: 1.0.0.15-r1 1.0.0.15-r2 (~)1.0.0.24-r1{tbz2}
[m]1.0.0.25-r1 [m](~)1.0.0.26 {kernel_linux}
Installed versions: 1.0.0.24-r1{tbz2}(20:34:53 01.03.2010)(kernel_linux)
# eix vmware-ser
[I] app-emulation/vmware-server
Available versions: 1.0.8.126538!s 1.0.9.156507!s
(~)1.0.10.203137!s (~)2.0.1.156745-r3!s{tbz2} (~)2.0.2.203138!f!s{tbz2}
Installed versions: 2.0.2.203138!f!s{tbz2}(20:19:33 10.03.2010)
Thanks in advance, Stefan
* Re: [gentoo-user] vmware-server performance
From: Kyle Bader @ 2010-03-11 15:54 UTC
To: gentoo-user
If you use the CFQ scheduler (the Linux default), you might try turning off
low-latency mode (introduced in 2.6.32):
echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
http://kernelnewbies.org/Linux_2_6_32
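For example, assuming the disk is sda (the low_latency knob only exists
while cfq is the active scheduler for that device):

cat /sys/class/block/sda/queue/scheduler                 # active scheduler is shown in [brackets]
echo 0 > /sys/class/block/sda/queue/iosched/low_latency  # needs root
cat /sys/class/block/sda/queue/iosched/low_latency       # should now print 0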
On 3/10/10, Stefan G. Weichinger <lists@xunil.at> wrote:
>
> Recently I have been seeing bad performance with my vmware-server.
>
> Loads of hard-disk I/O ... bad even on the RAID1; the disks are working all
> the time (I can hear them, and iostat confirms it).
>
> Might this have to do with kernel 2.6.33 and non-matching vmware-modules?
>
> I masked some modules back then because they didn't work; maybe they
> would now.
>
> Could someone tell me what combo works with gentoo-sources-2.6.33?
>
> I currently have:
>
> # eix vmware-mod
> [I] app-emulation/vmware-modules
> Available versions: 1.0.0.15-r1 1.0.0.15-r2 (~)1.0.0.24-r1{tbz2}
> [m]1.0.0.25-r1 [m](~)1.0.0.26 {kernel_linux}
> Installed versions: 1.0.0.24-r1{tbz2}(20:34:53 01.03.2010)(kernel_linux)
>
> # eix vmware-ser
> [I] app-emulation/vmware-server
> Available versions: 1.0.8.126538!s 1.0.9.156507!s
> (~)1.0.10.203137!s (~)2.0.1.156745-r3!s{tbz2} (~)2.0.2.203138!f!s{tbz2}
> Installed versions: 2.0.2.203138!f!s{tbz2}(20:19:33 10.03.2010)
>
>
> Thanks in advance, Stefan
>
>
--
Sent from my mobile device
Kyle
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-03-12 7:22 UTC
To: gentoo-user
On 11.03.2010 16:54, Kyle Bader wrote:
> If you use the CFQ scheduler (the Linux default), you might try turning off
> low-latency mode (introduced in 2.6.32):
>
> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>
> http://kernelnewbies.org/Linux_2_6_32
That sounded promising, but unfortunately it is not really doing the trick.
The VM still takes minutes to boot ... and that after I copied it back
to the RAID1 array, which should in theory be faster than the
non-RAID partition it was on before.
Thanks anyway; I will keep testing with that setting ...
Stefan
* Re: [gentoo-user] vmware-server performance
From: Kyle Bader @ 2010-03-12 22:37 UTC
To: gentoo-user
If the elevated iowait from iostat is on the host, you might be able to
find whatever is hogging your I/O bandwidth with iotop. Also look for
D-state processes with ps aux (check the STAT column). Are you on a
software RAID?
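For example (iotop needs root; the exact option flags are from current
iotop, so treat them as an assumption):

iotop -obn3                                      # batch mode, 3 samples, only procs doing I/O
ps axo pid,stat,wchan:30,comm | awk '$2 ~ /^D/'  # D = uninterruptible sleep (blocked on I/O)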
If you are on Linux soft RAID, you might check your disks for errors
with smartmontools. Other than that, the only thing I can think of is
something like a performance regression in the IDE/SCSI/SATA controller
driver (on the host or virtual) or in mdadm on the host. If the host
system is bogged down before starting VMware instances, I would suspect
the former (host controller or mdadm).
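Something along these lines, assuming your two RAID members are sda and sdb:

smartctl -H /dev/sda            # quick overall health verdict (repeat for sdb)
smartctl -t long /dev/sda       # start a long self-test in the background
smartctl -l selftest /dev/sda   # read the result once it has finished
cat /proc/mdstat                # and make sure the array isn't degraded or resyncing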
On 3/11/10, Stefan G. Weichinger <lists@xunil.at> wrote:
> On 11.03.2010 16:54, Kyle Bader wrote:
>> If you use the CFQ scheduler (the Linux default), you might try turning off
>> low-latency mode (introduced in 2.6.32):
>>
>> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>>
>> http://kernelnewbies.org/Linux_2_6_32
>
> That sounded promising, but unfortunately it is not really doing the trick.
> The VM still takes minutes to boot ... and that after I copied it back
> to the RAID1 array, which should in theory be faster than the
> non-RAID partition it was on before.
>
> Thanks anyway; I will keep testing with that setting ...
>
> Stefan
>
>
>
--
Sent from my mobile device
Kyle
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-03-13 18:25 UTC
To: gentoo-user
On 12.03.2010 23:37, Kyle Bader wrote:
> If the elevated iowait from iostat is on the host, you might be able to
> find whatever is hogging your I/O bandwidth with iotop. Also look for
> D-state processes with ps aux (check the STAT column). Are you on a
> software RAID?
Yes, software RAID level 1, two SATA disks.
iotop points to kdmflush, whatever that is ...
equery doesn't know it, so I assume it's some kind of kernel process?
device-mapper-related? dm ...
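One way to check that guess (an assumption on my side: kernel threads are
children of kthreadd, PID 2, so a PPID of 2 gives them away):

ps axo pid,ppid,comm | grep kdmflush    # PPID 2 => kernel thread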
> If you are on Linux soft RAID, you might check your disks for errors
> with smartmontools. Other than that, the only thing I can think of is
> something like a performance regression in the IDE/SCSI/SATA controller
> driver (on the host or virtual) or in mdadm on the host. If the host
> system is bogged down before starting VMware instances, I would suspect
> the former (host controller or mdadm).
The disks look good so far ...
thanks, S
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-03-18 21:16 UTC
To: gentoo-user
On 13.03.2010 19:25, Stefan G. Weichinger wrote:
>> If you are on Linux soft RAID, you might check your disks for errors
>> with smartmontools. Other than that, the only thing I can think of is
>> something like a performance regression in the IDE/SCSI/SATA controller
>> driver (on the host or virtual) or in mdadm on the host. If the host
>> system is bogged down before starting VMware instances, I would suspect
>> the former (host controller or mdadm).
>
> The disks look good so far ...
Just to bump this thread again ...
The hard disks are OK; I ran the long SMART self-tests, and they came
back completely clean.
Still seeing that high I/O load from kdmflush.
Stefan
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-04-29 18:22 UTC
To: gentoo-user
On 18.03.2010 22:16, Stefan G. Weichinger wrote:
> On 13.03.2010 19:25, Stefan G. Weichinger wrote:
>
>>> If you are on Linux soft RAID, you might check your disks for errors
>>> with smartmontools. Other than that, the only thing I can think of is
>>> something like a performance regression in the IDE/SCSI/SATA controller
>>> driver (on the host or virtual) or in mdadm on the host. If the host
>>> system is bogged down before starting VMware instances, I would suspect
>>> the former (host controller or mdadm).
>>
>> The disks look good so far ...
>
> Just to bump this thread again ...
>
> The hard disks are OK; I ran the long SMART self-tests, and they came
> back completely clean.
>
> Still seeing that high I/O load from kdmflush.
No change since then.
What do you guys use? RAID1? RAID0? LVM? Specific filesystems?
I could also move the VM to another box over NFSv4 ... but that didn't
make much of a difference back then.
I would like to hear your thoughts. Thanks, Stefan
* Re: [gentoo-user] vmware-server performance
From: Florian Philipp @ 2010-04-30 14:41 UTC
To: gentoo-user
On 29.04.2010 20:22, Stefan G. Weichinger wrote:
> On 18.03.2010 22:16, Stefan G. Weichinger wrote:
>> On 13.03.2010 19:25, Stefan G. Weichinger wrote:
>>
>>>> If you are on Linux soft RAID, you might check your disks for errors
>>>> with smartmontools. Other than that, the only thing I can think of is
>>>> something like a performance regression in the IDE/SCSI/SATA controller
>>>> driver (on the host or virtual) or in mdadm on the host. If the host
>>>> system is bogged down before starting VMware instances, I would suspect
>>>> the former (host controller or mdadm).
>>>
>>> The disks look good so far ...
>>
>> Just to bump this thread again ...
>>
>> The hard disks are OK; I ran the long SMART self-tests, and they came
>> back completely clean.
>>
>> Still seeing that high I/O load from kdmflush.
>
> No change since then.
>
> What do you guys use? RAID1? RAID0? LVM? Specific filesystems?
> I could also move the VM to another box over NFSv4 ... but that didn't
> make much of a difference back then.
>
> I would like to hear your thoughts. Thanks, Stefan
>
Hi!
I just want to let you know that I am experiencing similar problems with
vmware-player. I'm currently on kernel 2.6.32. The guest system is an
Ubuntu with an Oracle Express database (used for a database lecture I'm
taking).
The system feels like it swaps out the complete host system when I
switch to the guest system and vice versa, although there is plenty of
free memory. It is so bad that the system becomes completely unusable
for more than 15 minutes. I haven't investigated it yet because I don't
really need that guest OS.
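If I ever do investigate, I'd probably start by checking whether it is
real swap traffic or just page-cache churn (vmstat and swappiness are
standard kernel knobs, nothing vmware-specific):

vmstat 1                        # watch the si/so (swap in/out) columns while switching
cat /proc/sys/vm/swappiness     # default 60; lower values keep app memory resident longer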
Regards,
Florian Philipp
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-04-30 16:55 UTC
To: gentoo-user; +Cc: Florian Philipp
On 30.04.2010 16:41, Florian Philipp wrote:
> I just want to let you know that I am experiencing similar problems with
> vmware-player.
Good to hear that ... in a way.
> I'm currently on kernel 2.6.32. The guest system is an
> Ubuntu with an Oracle Express database (used for a database lecture
> I'm taking).
I had those problems with 2.6.32 as well.
I should try going back even further for a check ...
> The system feels like it swaps out the complete host system when I
> switch to the guest system and vice versa, although there is plenty
> of free memory. It is so bad that the system becomes completely
> unusable for more than 15 minutes. I haven't investigated it yet
> because I don't really need that guest OS.
Good for you ;-)
It's not THAT bad here, but the XP guest takes a while to boot, yes.
Right now I simply don't shut down the guest; I just suspend the whole
Linux box to RAM.
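In case anyone wants to script that, the bare kernel interface is
(assuming no pm-utils wrapper in between):

echo mem > /sys/power/state     # kernel suspend-to-RAM, needs root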
Thanks, Stefan
* Re: [gentoo-user] vmware-server performance
From: Stefan G. Weichinger @ 2010-05-03 9:25 UTC
To: gentoo-user; +Cc: Florian Philipp
On 30.04.2010 18:55, Stefan G. Weichinger wrote:
> It's not THAT bad here, but the XP guest takes a while to boot, yes.
> Right now I simply don't shut down the guest; I just suspend the whole
> Linux box to RAM.
I moved the VM from an LV formatted with XFS to another LV formatted with
ext4 (both mounted with noatime).
It seems to help a bit: the VM boots faster and also runs more smoothly.
iotop shows less load for kdmflush as well.
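Roughly the steps, for anyone wanting to do the same (volume-group, LV
and mount-point names here are made up):

lvcreate -L 30G -n vm_ext4 vg0                  # new LV next to the old XFS one
mkfs.ext4 /dev/vg0/vm_ext4
mount -o noatime /dev/vg0/vm_ext4 /mnt/vm-new
rsync -a /mnt/vm-old/ /mnt/vm-new/              # copy the .vmx/.vmdk files over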
Just for the record ...
Stefan