OpenFiler

Overview

Terminology

  • Physical Volume - Assigned space on a physical disk, to be used in a Volume Group
  • Volume Group - Contains the Physical Volumes from which a Logical Volume can be created
  • Logical Volume - What's presented to the outside world as storage (LUN)
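
For reference, the OpenFiler web interface drives standard Linux LVM2 underneath, so the three terms map directly onto the usual LVM commands. A minimal sketch of the equivalent command-line steps, assuming a spare partition /dev/sdb1 (hypothetical device name):

pvcreate /dev/sdb1                    # mark the partition as a Physical Volume
vgcreate volgrp1 /dev/sdb1            # collect one or more PVs into a Volume Group
lvcreate -l 100%FREE -n nfs volgrp1   # carve a Logical Volume (what gets presented as the LUN/share) from the VG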

Set-up

Installation

  1. Download the appropriate appliance from OpenFiler
  2. Extract it, upload it to your ESX host, Add to Inventory, and start the VM
  3. The machine will attempt to use DHCP to obtain an IP address
  4. Browse to the machine, and log in using openfiler/password
  5. If the change IP option doesn't appear on the System page, try the following fix (an example follows this list)
    1. Log in to the console of the VM (root, no password)
    2. Change line 94 of /opt/openfiler/var/www/includes/network.inc to
    3. exec("sudo ifconfig -a | grep 'eth'",$output);

Create NFS Disk

  1. Log in to OpenFiler (user/pass openfiler/password)
  2. First create a partition...
    1. Go to Volumes, and to the Create a new volume group section
    2. Click on the create new physical volumes link
    3. On the resulting Block Device Management page, edit the disk that we want to create the NFS storage on.
    4. Find the Create a partition in /dev/sdx section and hit Create (default options)
  3. Then create a volume group for NFS use...
    1. Go to Volume Groups (right-hand menu) and find the Create a new volume group section
    2. Give the volume group a name (eg volgrp1), select the partition/physical volume and hit Add volume group
    3. Click on the create new physical volumes link
    4. Go to Add Volume, and find the Create a volume in volgrp section
    5. Give the volume a name (eg nfs), select all space, change Filesystem to ext3, and hit Create
  4. Create network share
    1. On System tab, find Network Access Configuration
    2. Add in the local network address, or specify an individual NFS client IP address, etc., and hit Update
  5. Enable NFS share
    1. On Services tab, Enable NFSv3 server
    2. Go to the Shares tab, and create a new share (eg share) under the NFS mount
    3. Once created, click on the share name to edit it
    4. Enable RO or RW NFS access (as required) for the specified networks/hosts
    5. Click Update, then return using the Back to shares list option
  6. Use the entire mount path (eg /mnt/volgrp1/nfs/share) when adding the share to an ESX host (a worked example follows this list)
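
As a worked example of the last step, the share could be added to an ESX host from the service console with esxcfg-nas. This is a sketch only; 192.168.1.50 is a hypothetical OpenFiler address and openfiler_nfs a hypothetical datastore label:

esxcfg-nas -a -o 192.168.1.50 -s /mnt/volgrp1/nfs/share openfiler_nfs   # add the NFS export as a datastore
esxcfg-nas -l                                                           # list NFS datastores to confirm the mount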

Create iSCSI Disk

  1. Log in to OpenFiler (user/pass openfiler/password)
  2. First create a partition...
    1. Go to Volumes, and to the Create a new volume group section
    2. Click on the create new physical volumes link
    3. On the resulting Block Device Management page, edit the disk that we want to create the iSCSI storage on (it should have no existing partitions).
    4. Find the Create a partition in /dev/sdx section and hit Create (default options)
  3. Then create a volume group for iSCSI use...
    1. Go to Volume Groups (right-hand menu) and find the Create a new volume group section
    2. Give the volume group a name (eg volgrp1), select the partition/physical volume and hit Add volume group
    3. Click on the create new physical volumes link
    4. Go to Add Volume, and find the Create a volume in volgrp section
    5. Give the volume a name (eg iSCSIvolgrp), select all space, change Filesystem to iSCSI, and hit Create
  4. Create the iSCSI Target...
    1. Go to iSCSI Targets and click on Add to create an IQN with the default name
    2. Go to the LUN Mapping tab, and click Map to map the volume to the IQN
    3. Go to the Network ACL tab, and ensure that connections are allowed
  5. Create network share
    1. On System tab, find Network Access Configuration
    2. Add in the local network address, or specify an individual iSCSI client IP address, etc., and hit Update
  6. Enable the iSCSI target service
    1. On Services tab, Enable iSCSI target server
    2. Go back to Volumes and iSCSI Targets to confirm that the IQN created in step 4 is listed (a verification example follows this list)
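
Before adding the target to ESX, it can be worth verifying it from any Linux machine with the open-iscsi tools installed. A sketch, assuming a hypothetical OpenFiler address of 192.168.1.50; substitute the IQN reported by the discovery step:

iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260                                 # should list the IQN created above
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.example -p 192.168.1.50:3260 --login    # log in to the target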

Troubleshooting

High Memory Usage

OpenFiler caches all throughput, so as the NAS is used it will fill available memory. This is normal and to be expected; physical memory will be freed as required by other processes. If you suspect a problem, log in to the console and run the top command. As long as the amount of swap in use is low, there is no problem.

See https://forums.openfiler.com/viewtopic.php?id=1017 for (a little) more info.

If running OpenFiler on a VM, you may want to minimise the amount of physical memory assigned to the NAS. In that case, keep an eye on swap usage as you decrease the assigned physical memory; once it begins to rise, you've gone a little too far.
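
A quick way to read the situation from the console (a sketch; the key figure is the swap "used" value, not the headline free memory):

free -m          # the -/+ buffers/cache line shows how much memory is really free once cache is discounted
                 # the Swap: line's "used" value should stay close to 0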

Failed to get disk partition information

To fix "Failed to get disk partition information" error when mounting iSCSI partition, see this fix.

[root@labesx-1 /]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         140     1124518+  83  Linux
/dev/sda2             141         154      112455   fc  VMware VMKCORE
/dev/sda3             155        2610    19727820    5  Extended
/dev/sda5             155        2610    19727788+  fb  VMware VMFS

Disk /dev/sdb: 7973 MB, 7973371904 bytes
255 heads, 63 sectors/track, 969 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          76      610438+  82  Linux swap / Solaris
/dev/sdb2              77         331     2048287+  83  Linux
/dev/sdb3             332         969     5124735    5  Extended
/dev/sdb5             332         969     5124703+  83  Linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26397   212860927+  ee  EFI GPT
[root@labesx-1 /]# fdisk /dev/sdc

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.


The number of cylinders for this disk is set to 26396.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26397   212860927+  ee  EFI GPT

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-26396, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-26396, default 26396):
Using default value 26396

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26396   212857312+  83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fb
Changed system type of partition 1 to fb (VMware VMFS)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
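
The warning in the output suggests GNU Parted for GPT disks. Where parted is available, a possible alternative first step (a sketch, not part of the original fix) is to wipe the GPT label and write a fresh msdos one, then create and type the partition with fdisk as above:

parted /dev/sdc mklabel msdos   # destroys the GPT label (and any partitions) on /dev/sdc

Note also the final warning above: the kernel keeps using the old partition table until it is re-read or the host is rebooted, so the new VMFS partition may not be visible until after a reboot.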