From mboxrd@z Thu Jan 1 00:00:00 1970
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-user@lists.gentoo.org
Reply-to: gentoo-user@lists.gentoo.org
MIME-Version: 1.0
Date: Thu, 30 Sep 2010 20:58:36 +1000
Message-ID: 
Subject: [gentoo-user] Normal disk speed?
From: Adam Carter
To: gentoo-user@lists.gentoo.org
Content-Type: multipart/alternative; boundary=0016363ba3e2b3d85a049177f539
X-Archives-Salt: 9d721f5c-84f9-43d6-adea-0a2bca0ff209
X-Archives-Hash: e11b56744bc155012a9add79042b6b33

--0016363ba3e2b3d85a049177f539
Content-Type: text/plain; charset=ISO-8859-1

Tarring my mp3 collection from a 2.5in 500GB internal SATA drive (sda) to an eSATA 3.5in 500GB drive (sdb), and it seems slow. In vmstat I can see that the external drive writes faster than the internal can read (the external has periods of inactivity).

# time tar cf /mnt/usbdrive/mp3back.tar mp3/

real    10m9.679s
user    0m1.577s
sys     2m1.769s

# du -ks mp3/
21221661    mp3/

So 21221MB in 610 seconds = 35 MB/s.

# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  220 MB in  3.01 seconds = 73.14 MB/sec (77 with --direct)

FWIW:

# hdparm /dev/sda

/dev/sda:
 multcount     = 16 (on)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 60801/255/63, sectors = 976773168, start = 0

So, should I expect filesystem (reiser3) and other overhead to cut the read performance to less than half of what hdparm reports? Is there anything else I can look at to speed it up? I'm using the CFQ I/O scheduler.
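As a sanity check on the arithmetic (du -ks reports 1 KiB blocks, so the exact rate works out to roughly 34 MB/s either way), a minimal shell sketch using the figures above:

```shell
# Recompute the transfer rate from the measurements above.
# du -ks printed 21221661 (1 KiB blocks); tar took ~610 s of wall-clock time.
kb=21221661
secs=610
echo "$(( kb / 1024 / secs )) MB/s"   # → 33 MB/s (integer division; exact value ≈ 34)
```

Either way, the sustained rate is well under half of hdparm's 73 MB/s sequential figure.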
--0016363ba3e2b3d85a049177f539--