I'm doing some tests at the moment after copying the modified zpool binaries from http://www.solarismen.de/archives/12-Modified-zpool-program-for-newer-Solaris-versions.html - both zpool-s10u8 and zpool-s10u9 work after symlinking the libzfs library:
1344 -rwxr-xr-x   1 root  bin   673388 Apr  4  2010 libzfs.so.1
   2 lrwxrwxrwx   1 root  root      11 Mar 25 12:27 libzfs.so.2 -> libzfs.so.1
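For anyone who wants to reproduce this: the libzfs.so.2 symlink above was created along these lines (the /lib path is an assumption here; adjust to wherever your install keeps libzfs.so.1):

#ln -s /lib/libzfs.so.1 /lib/libzfs.so.2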
I created three zpools, one on each hard disk, which now show the following:
#zpool create zpool-t1p0 /dev/dsk/c1t1d0
#./zpool-s10u8 create zpool-t2p0 /dev/dsk/c1t2d0
#./zpool-s10u9 create zpool-t3p0 /dev/dsk/c1t3d0
#zdb
zpool-t1p0:
    version: 22
    name: 'zpool-t1p0'
    state: 0
    txg: 15
    pool_guid: 73485483957774418
    hostid: 13571568
    hostname: 'eon1'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 73485483957774418
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 928407176351616095
            path: '/dev/dsk/c1t1d0s0'
            devid: 'id1,***@SATA_____WDC_WD20EARS-00M_____WD-WCAZA270/a'
            phys_path: '/***@0,0/pci8086,***@1f,2/***@1,0:a'
            whole_disk: 1
            metaslab_array: 23
            metaslab_shift: 34
            ashift: 9
            asize: 2000385474560
            is_log: 0
            create_txg: 4
zpool-t2p0:
    version: 22
    name: 'zpool-t2p0'
    state: 0
    txg: 15
    pool_guid: 2601048085308766544
    hostid: 13571568
    hostname: 'eon1'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 2601048085308766544
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 12187966011736420873
            path: '/dev/dsk/c1t2d0s0'
            devid: 'id1,***@SATA_____WDC_WD20EARS-00M_____WD-WCAZA271/a'
            phys_path: '/***@0,0/pci8086,***@1f,2/***@2,0:a'
            whole_disk: 1
            metaslab_array: 23
            metaslab_shift: 34
            ashift: 12
            asize: 2000385474560
            is_log: 0
            create_txg: 4
zpool-t3p0:
    version: 22
    name: 'zpool-t3p0'
    state: 0
    txg: 4
    pool_guid: 15215334829979844812
    hostid: 13571568
    hostname: 'eon1'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 15215334829979844812
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 8884729500419644159
            path: '/dev/dsk/c1t3d0s0'
            devid: 'id1,***@SATA_____WDC_WD20EARS-00M_____WD-WCAZA465/a'
            phys_path: '/***@0,0/pci8086,***@1f,2/***@3,0:a'
            whole_disk: 1
            metaslab_array: 23
            metaslab_shift: 34
            ashift: 12
            asize: 2000385474560
            is_log: 0
            create_txg: 4
Please notice the "ashift: 12" for zpool-t2p0 and zpool-t3p0, versus "ashift: 9" for the stock zpool-t1p0.
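If you just want the ashift values without wading through the whole dump, grepping the zdb output works; given the dump above this should print:

#zdb | grep ashift
            ashift: 9
            ashift: 12
            ashift: 12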
eon1:153:~#df -k
Filesystem             size   used  avail capacity  Mounted on
...
/dev/dsk/c0t0d0s0      7.4G   264M   7.0G     4%    /mnt/eon0
swap                   1.3G    37M   1.3G     3%    /tmp
swap                   1.3G    60K   1.3G     1%    /var/run
zpool-t1p0             1.8T    21K   1.8T     1%    /zpool-t1p0
zpool-t2p0             1.8T   112K   1.8T     1%    /zpool-t2p0
zpool-t3p0             1.8T   112K   1.8T     1%    /zpool-t3p0
Notice that the used space for zpool-t2p0 and zpool-t3p0 (112K each) is higher than for zpool-t1p0 (21K).
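The same numbers can be cross-checked with zfs itself instead of df (standard zfs list options, nothing pool-specific assumed):

#zfs list -o name,used,avail,refer zpool-t1p0 zpool-t2p0 zpool-t3p0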
#zpool iostat -v
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zpool-t1p0   127M  1.81T      0      1    592   112K
  c1t1d0     127M  1.81T      0      1    592   112K
----------  -----  -----  -----  -----  -----  -----
zpool-t2p0   132M  1.81T      0      1    631   125K
  c1t2d0     132M  1.81T      0      1    631   125K
----------  -----  -----  -----  -----  -----  -----
zpool-t3p0   132M  1.81T      0      1    637   126K
  c1t3d0     132M  1.81T      0      1    637   126K
----------  -----  -----  -----  -----  -----  -----
While running the DTrace-based iosnoop script during a "touch test" in every pool, I can see that the "ashift: 12" pools (zpool-t2p0 and zpool-t3p0) issue all their writes in multiples of 4096 bytes, while the regular zpool-t1p0 issues writes in multiples of 512 bytes:
UID  PID   D       BLOCK  SIZE  COMM              PATHNAME
0    1107  W      268196  3072  zpool-zpool-t1p0  <none>
0    1107  W      268204   512  zpool-zpool-t1p0  <none>
0    1107  W   738206868  3072  zpool-zpool-t1p0  <none>
0    1107  W   738206875  1024  zpool-zpool-t1p0  <none>
0    1111  W   738210624  4096  zpool-zpool-t2p0  <none>
0    1111  W      274560  4096  zpool-zpool-t2p0  <none>
0    1111  W   738210632  4096  zpool-zpool-t2p0  <none>
0    1111  W      274568  4096  zpool-zpool-t2p0  <none>
0    1113  W   738210616  4096  zpool-zpool-t3p0  <none>
0    1113  W      274568  4096  zpool-zpool-t3p0  <none>
0    1113  W   738210624  4096  zpool-zpool-t3p0  <none>
0    1113  W  1476403760  8192  zpool-zpool-t3p0  <none>
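If you don't have the iosnoop script at hand, a rough one-liner over DTrace's io provider shows the same per-write sizes (a minimal sketch, not iosnoop's full output format; it just prints the pid, direction, thread name and size of every I/O as it is issued):

#dtrace -qn 'io:::start { printf("%d %s %s %d\n", pid, args[0]->b_flags & B_READ ? "R" : "W", execname, args[0]->b_bcount); }'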
Any thoughts on how this will affect performance and available space?