In summary, I'm using ESXi 4.1.
Areca ARC-1260 Controller with 10 x 500GB SATA drives in RAID 6.
Presented a 100GB LUN to the ESX server and followed the steps outlined here
http://www.vm-help.com/esx40i/SATA_RDMs.php
My mistake was that in step 3 I wasn't in the equivalent of the /vmfs/volumes/4a7cf921-017eb919-bb32-000423e540c6/RDMs directory. I just found the directory for my main datastore, which is called out on the vSphere configuration page under Storage, created the RDMs directory there, and then used the vmkfstools -r command exactly as the link describes. Worked great, and my performance is fast, fast, fast.
I also noticed that if you try to move the RDMs in the vSphere client you will get an out-of-space error, because vSphere thinks the volumes are actually the full size of the disk you mapped. But you didn't really create a VMDK of that size; you only created a near-zero-byte pointer to the physical device. So you need to move things using SSH, under the covers of vSphere.
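The SSH move boils down to relocating two tiny pointer files. A minimal sketch (the datastore paths are placeholders standing in for /vmfs/volumes/&lt;datastore&gt;/RDMs so it runs anywhere; on a real host, power the VM off first):

```shell
#!/bin/sh
# Sketch of moving an RDM "under the covers" over SSH. Only the descriptor
# and its raw-device pointer move -- the data stays on the physical disk,
# so no 500 GB copy happens and no out-of-space check can fire.
SRC="$PWD/old-datastore/RDMs"
DST="$PWD/new-datastore/RDMs"
mkdir -p "$SRC" "$DST"
: > "$SRC/RDM1.vmdk"        # stand-in for the descriptor (a few hundred bytes)
: > "$SRC/RDM1-rdm.vmdk"    # stand-in for the raw-device pointer
mv "$SRC/RDM1.vmdk" "$SRC/RDM1-rdm.vmdk" "$DST/"
```

Remember to point the VM's configuration at the new location afterwards.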
A note on performance: my setup shows 800+ MB/s write and 250+ MB/s read using the Ubuntu disk utility benchmark. It's fast but very jagged. As a test, I tried the same thing with multiple 100 GB volumes in RAID 0: I put 5 x 100 GB RDM volumes on my guest and striped them with mdadm. Reads were really slow (80+ MB/s) and writes were extremely jagged, spiking to 800 MB/s and dropping to 100 MB/s.
I think I will stick to large volumes.
Here is what I got from the link above:
ESXi provides the ability to use a raw device mapping (RDM) to give a VM direct access to a LUN on a Fibre Channel or iSCSI storage device. RDMs are useful if you need to share a LUN with a physical server, or if SAN utilities running in the VM must be able to access the LUN directly.
You can also use an RDM to give a VM direct access to a local SATA drive connected to a SATA controller. This method was first posted by Mário Simões in the forums here as a way to run RAID 5 in a VM on a SATA controller that did not support RAID. It is also useful for importing data from existing servers, but note that it is not officially supported. In the example below, the host is an Asus P5E-VM DO with a Seagate drive connected to an Intel ICH9 controller.
1) The first step is to determine the disk you want to use for the RDM. Run fdisk -l to see all the disks that ESXi has access to. As shown below, the system has a WD drive upon which ESXi is installed and which holds the main datastore. An existing VMFS datastore is required, because a pointer VMDK file will be created for the RDM and must be stored on a VMFS datastore. The system also has a Seagate drive, identified as t10.ATA_____ST3500630AS_________________________________________9QG3CC60, which is unused.
~ # fdisk -l
Disk /dev/disks/t10.ATA_____ST3500630AS_________________________________________9QG3CC60: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992: 500.1 GB, 500107862016 bytes
64 heads, 32 sectors/track, 476940 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p1 5 900 917504 5 Extended
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p2 901 4995 4193280 6 FAT16
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p3 4996 476941 483271704 fb VMFS
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p4 * 1 4 4080 4 FAT16 <32M
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p5 5 254 255984 6 FAT16
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p6 255 504 255984 6 FAT16
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p7 505 614 112624 fc VMKcore
/dev/disks/t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992p8 615 900 292848 6 FAT16
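A quick sanity check on the listing: fdisk reports partition sizes in 1 KiB blocks, so multiplying the Blocks column by 1024 reproduces the byte counts that ls -l shows for the corresponding device nodes. The VMFS partition (p3) as the example:

```shell
#!/bin/sh
# fdisk's Blocks column is in 1 KiB units; bytes = blocks * 1024.
# Partition 3 (the VMFS partition) from the listing above:
blocks=483271704
bytes=$((blocks * 1024))
echo "$bytes"    # 494870224896 -- matches the ls -l size of ...WCAS82934992:3
```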
2) The next step is to run the command ls /dev/disks/ -l. This is used to determine the VML identifier for the disk, which will be used in the command that creates the RDM.
ls /dev/disks/ -l
-rw------- 1 root root 500107862016 Oct 22 07:15 t10.ATA_____ST3500630AS_________________________________________9QG3CC60
-rw------- 1 root root 500107862016 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992
-rw------- 1 root root 939524096 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:1
-rw------- 1 root root 4293918720 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:2
-rw------- 1 root root 494870224896 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:3
-rw------- 1 root root 4177920 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:4
-rw------- 1 root root 262127616 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:5
-rw------- 1 root root 262127616 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:6
-rw------- 1 root root 115326976 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:7
-rw------- 1 root root 299876352 Oct 22 07:15 t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:8
l--------- 0 root root 1984 Jan 1 1970 vml.01000000002020202020202020202020203951473358423630535433353030 -> t10.ATA_____ST3500630AS_________________________________________9QG3CC60
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:1 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:1
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:2 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:2
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:3 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:3
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:4 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:4
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:5 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:5
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:6 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:6
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:7 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:7
l--------- 0 root root 1984 Jan 1 1970 vml.0100000000202020202057442d574341533832393031393932574443205744:8 -> t10.ATA_____WDC_WD5000AAKS2D00YGA0________________________WD2DWCAS82934992:8
3) To create the RDM you use the vmkfstools command. In the example below I've created a folder called RDMs on a datastore to hold the VMDK mapping files that will be created for the RDM. vmkfstools is run with the -r switch, and I've also specified that the adapter type should be LSI Logic (instead of the default BusLogic). The command creates the mapping files RDM1.vmdk and RDM1-rdm.vmdk. The contents of RDM1.vmdk are shown below; while RDM1-rdm.vmdk appears to be 500 GB in size, it actually takes up next to no disk space.
/vmfs/volumes/4a7cf921-017eb919-bb32-000423e540c6/RDMs # vmkfstools -r /vmfs/devices/disks/vml.01000000002020202020202020202020203951473358423630535433353030 RDM1.vmdk -a lsilogic
/vmfs/volumes/4a7cf921-017eb919-bb32-000423e540c6/RDMs # ls -l
-rw------- 1 root root 500107862016 Oct 22 07:28 RDM1-rdm.vmdk
-rw------- 1 root root 459 Oct 22 07:28 RDM1.vmdk
/vmfs/volumes/4a7cf921-017eb919-bb32-000423e540c6/RDMs # cat RDM1.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=ef5ed87c
parentCID=ffffffff
createType="vmfsRawDeviceMap"
# Extent description
RW 976773168 VMFSRDM "RDM1-rdm.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "7"
ddb.longContentID = "99893594518fe8d6d9454db3ef5ed87c"
ddb.uuid = "60 00 C2 9f 7d 63 e6 e1-21 a4 da 24 ef f6 af 0b"
ddb.geometry.cylinders = "60801"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
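The RW value in the extent line is a count of 512-byte sectors, which is why the descriptor can describe the whole disk without occupying space. Multiplying it out recovers the drive's capacity from the earlier fdisk output:

```shell
#!/bin/sh
# The descriptor's extent line "RW 976773168 VMFSRDM ..." counts 512-byte
# sectors; sectors * 512 should equal the raw disk's size in bytes.
sectors=976773168
echo $((sectors * 512))    # 500107862016 -- the Seagate's full 500.1 GB
```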
4) Once the RDM has been created, you can add it to a VM. As shown in the image below, a VM was edited to add an existing disk file and the RDM1.vmdk file was chosen. The VM was then powered on and Windows was installed to the RDM. After the install, the VM displayed the drive as a VMware Virtual Disk. At this point it would have been possible to remove the drive from the server, plug it into a physical server, and access the data the OS install wrote to the disk, since there is no VMFS layer present as there is when a VMDK is stored on a datastore.
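If you'd rather attach the RDM without the GUI, the change amounts to two lines in the VM's .vmx file. This is a sketch from memory, not the article: scsi0:1 is a hypothetical free slot on the existing SCSI controller, and the fileName path is relative to the VM's directory (an absolute /vmfs/volumes path also works). Editing the VM in the vSphere client remains the supported way to generate these entries.

```
scsi0:1.present = "TRUE"
scsi0:1.fileName = "RDM1.vmdk"
```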