Create NFS Datastore for ESX in Windows Server 2008 R2

By , September 17, 2012 2:22 PM


Here, I am going to explain the step-by-step procedure to configure an NFS share in Windows 2008 R2 for use as an ESX datastore.

Adding the NFS Share Role in Windows

Choose Start -> Administrative Tools -> Server Manager
I already have File Services installed on my Windows server with the default options. So, go to the File Services role, click Add Role Services, and select “Services for Network File System”.

Click on Install

Create a folder called “nfstest”. Right-click the folder and click Properties.

Click the “NFS Sharing” tab and click “Manage NFS Sharing”.

Check “Share this folder” and uncheck “Kerberos v5 integrity and authentication” and “Kerberos v5 authentication”.

Select “Allow anonymous access” and click Permissions.

Under Type of Access, choose “Read-Write”, check “Allow root access”, and click Apply and OK.
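The same share can also be created from an elevated command prompt with the nfsshare tool that ships with Services for NFS. Treat the line below as a rough sketch only: the option names (anon, rw, root) are my assumption about nfsshare’s syntax, so check nfsshare /? on your server before relying on them.

nfsshare nfstest=C:\nfstest -o anon=yes rw root

Running nfsshare with no arguments should then list the NFS shares the server is exporting.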

Adding the NFS datastore to the ESX/ESXi host

Make sure a VMkernel port is configured on your ESX/ESXi host.
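To double-check from the console, the following should list the configured VMkernel interfaces (in the vSphere Client it is under Configuration – Networking):

esxcfg-vmknic -l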

Go to the Configuration tab and select Storage. Click Add Storage.

Select Network File System.

Enter the fully qualified domain name or IP address of the server, the share name, and the datastore name. Click Next.
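If you are on ESXi 5.x and prefer the command line, the same mount can be added with esxcli instead of the Add Storage wizard. The IP address below is just a placeholder for your Windows server; the share and datastore names are the ones from this walkthrough:

esxcli storage nfs add --host=192.168.1.50 --share=/nfstest --volume-name="NFS DATASTORE"
esxcli storage nfs list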

An NFS datastore named “NFS DATASTORE” is created.

The NFS datastore is created and we are ready to go. Thanks for reading!

VMware ESX NFS datastore on Windows Server 2008

By , September 17, 2012 2:20 PM


So if all you need is somewhere for slow file transfers, you can do it this way:

1. Install Services for Network File System (NFS)
Server Manager – Add Roles – File Services – Services for Network File System

2. Edit the local policy (or GPO) so that Everyone permissions apply to anonymous users
Administrative tools – Local Security Policy – Local Policies – Security Options – Network Access: Let Everyone permissions apply to anonymous users – Enabled
GPO – Computer Configuration – Policies – Windows Settings – Security Settings – Local Policies – Security Options – Network Access – Network Access: Let Everyone permissions apply to anonymous users – Enabled
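If you would rather script this setting than click through the policy editor, it maps to the EveryoneIncludesAnonymous value under the Lsa key (that mapping is my assumption, so verify the result in secpol.msc afterwards):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f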

3. Set NFS to TCP only
Administrative tools – Services for Network File System (NFS) – Server for NFS Properties – Transport protocol to TCP only (default is TCP+UDP)
Reboot server!

4. Create Share and set IP access
Open Folder Properties – NFS Sharing – Manage NFS Sharing – select Share this folder – select Allow anonymous access – set Anonymous UID and Anonymous GID to 0 – Permissions – add the VMkernel IP under Add Names, leave Read-Write, select Allow root access – OK – OK – OK – OK

5. Set up the datastore on the ESX server
Go to service console and type: esxcfg-nas -a -o (Windows 2008 IP) -s /(sharename) (datastore_name)
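As a concrete example (the IP and names below are placeholders), that would look like this; esxcfg-nas -l afterwards lists the NAS mounts so you can confirm it worked:

esxcfg-nas -a -o 192.168.1.50 -s /nfstest nfs_datastore
esxcfg-nas -l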

… and now you can copy ISO images and backup files between ESX and Windows. For running virtual machines, you are better off buying a dedicated NAS or SAN system.

ESX and PERC6 Monitoring

By , September 13, 2012 3:16 PM

http://blog.rebelit.net/?p=283

 

After upgrading my ESXi whitebox server using the official ESXi 5.0 install DVD I noticed that the health status monitoring for my PERC 6i RAID card was not showing up anymore. Everything else went smoothly during the upgrade and the test VMs all powered on from the datastore on the PERC 6i without issues. When checking health status only the processors and software components were listed. As it turns out VMware has removed the vendor specific VIBs for health monitoring in ESXi 5.0.

In order to restore health monitoring for the PERC 6i to the health status screen you will need the latest LSI offline bundle VIB for ESXi 5.0. I tried using the Dell OpenManage offline bundle but it stopped displaying all monitoring after the reboot and the system would not reset the sensors. After removing the installed OpenManage VIB and after a few hours of scouring the internet I managed to find the solution. The Dell PERC 6i cards use the LSI MegaRAID chipset for their controller.

LSI’s latest offline bundle package supports a variety of cards. After finding the proper version (500.04.V0.24) I was able to locate the download on one of the other controller card pages. Doing a search for “LSI 500.04.V0.24 site:lsi.com” on Google brought up several results. I selected the first result for the MegaRAID SAS 9260CV-4i – LSI and scrolled down to the Management Tools section. Here you will find VIB downloads for 4.x and 5.x. Download the file for ESXi 5.x from any of the listed card pages. You will need to extract the offline bundle from the archive otherwise it will not install and you will get errors about being unable to load the index.xml file.

You will need VMware vSphere CLI installed on a machine. The update requires maintenance mode and a host reboot so if you are using a vMA make sure it’s on another physical host. Using CLI on my Windows desktop machine I first copied the extracted offline bundle zip to the root of the ESXi host datastore via the vSphere Client. Then on the machine with CLI installed I opened command prompt and browsed to the folder C:\Program Files (x86)\VMware\VMware vSphere CLI\bin.

I put the ESXi host in maintenance mode using the following command,

vicfg-hostops.pl -server x.x.x.x -operation enter

Note: Several times CLI returned connection errors or said that operation is a mandatory argument. I found that pasting the command was the culprit and manually typing in each command solved the issue. Also note that the VMs must be powered off to enter maintenance mode.

After the server was in maintenance mode I verified the status by running the following command,

vicfg-hostops.pl -server x.x.x.x -operation info

Once the host was in maintenance mode I ran the following command to install the vib offline bundle,

esxcli.exe -s x.x.x.x software vib install -d [datastore]VMW-ESX-5.0.0-LSIProvider-500.04.V0.24-261033-offline_bundle-456178.zip

When running the command and supplying credentials CLI sat at a flashing cursor for a few minutes. If it’s going to throw an error it will do it right away, otherwise it’s installing and you should leave it alone. There are no status updates until the install has completed.

Once the install was complete, esxcli returned a message confirming the installation.
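If you want to confirm the provider is actually installed before rebooting, listing the VIBs from CLI works too (look for the LSIProvider entry in the output):

esxcli.exe -s x.x.x.x software vib list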

Now you need to restart the ESXi host in order for the changes to take effect. You can also do this with CLI by running the following command,

vicfg-hostops.pl -server x.x.x.x -operation reboot

After the host was done rebooting I logged in with the vSphere Client and checked the Health Status. It now shows the Storage category and displays all of the information related to my Dell PERC 6i including battery status.

I removed the host from maintenance mode and powered all of the test VMs on without any issues. I hope this helps any users out there upgrading with a PERC 6i RAID controller that want to retain the ability to monitor their storage array.

PERC 6i and ESXi 5

By , September 13, 2012 1:09 PM

Hardware: Any system with a free 8x or 16x PCIe slot.
The Dell PERC 6i is a card made by LSI for Dell. It is a volume solution used in many servers; so many servers, in fact, that it is stupidly cheap on the used market. We will exploit those savings fully.

 

Remember to order two SAS-to-SATA cables off eBay, and make sure you get the 32-pin connector and not the mini-SAS cable.

 

Note about motherboards: Some boards will drop the second 16x slot to 8x or 2x if there is a PCIe card in an adjacent slot. Example: the second 16x slot has three 1x/2x slots around it. Anything in those slots will drop the 16x slot down to 8x or worse, so double check.

To overcome this, I used the “graphics slot”, which is just a 16x PCIe slot. Because of this, I used a basic low-power PCI video card (an ATI Rage, approximately 3-5 watts).

 

http://www.overclock.net/t/954061/dell-perc-6i-the-120-raid-6-solution-up-to-32-devices-howto

RAID 5 / 6 and ESXi

By , September 13, 2012 12:29 PM

RAID 5 (and 6) has TERRIBLE write speeds; however, this can be significantly improved by enabling write caching – both the write cache on the hard disks and the cache on the RAID controller (this mode is called Write-Back: “write data to cache, and get back to it later when we have time to put it to disk”). This significantly improves write throughput, as the OS writes data to the array (RAID controller), the controller caches the data and immediately notifies the OS that it’s “done”, and the OS goes on about its business.

The danger of this is that the filesystem can be written to by the OS while those changes are kept only in the cache – and NOT written to disk yet – so a power-fail event would (in 90% of cases) result in very severe filesystem corruption, depending on how much data was queued to be written (and where).

This is where the BBU comes in; it will keep the cache active (with the cache’s data kept intact) until the PC fires up again, the disks spin up, and the controller will push all the cache out to the disks, thus drastically reducing the potential for corruption.

Other RAID levels (0, 1, 10) do not have the bad write performance that RAID 5/6 does – and can survive without *any* write caching enabled (the mode known as Write-Through: “write through directly to the device, without caching”). In this case the OS writes to the controller, the controller tells the OS to “hang on” while it writes the data to disk, the disk reports done, and the controller tells the OS “all good, continue”.
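For reference, on a PERC 6i (LSI MegaRAID) these cache policies can also be flipped from the command line with LSI’s MegaCli tool. This is only a sketch, assuming MegaCli is installed and the card is adapter 0:

MegaCli -LDGetProp -Cache -LAll -a0 (show the current cache policy of each logical drive)
MegaCli -LDSetProp WB -LAll -a0 (enable Write-Back)
MegaCli -LDSetProp WT -LAll -a0 (force Write-Through)
MegaCli -LDSetProp NoCachedBadBBU -LAll -a0 (drop back to Write-Through automatically if the BBU fails)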

My real world example (My recent experience):

Thomas Challenger