OpenFiler
Overview
Terminology
- Physical Volume - Assigned space on a physical disk, to be used in a Volume Group
- Volume Group - Contains the Physical Volumes from which a Logical Volume can be created
- Logical Volume - What's presented to the outside world as storage (LUN)
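Under the hood these are standard Linux LVM objects. A minimal sketch of the same hierarchy from the command line (the device name and size are illustrative, not from this set-up):

pvcreate /dev/sdb1                 # Physical Volume: mark the partition for LVM use
vgcreate volgrp1 /dev/sdb1         # Volume Group: pool one or more Physical Volumes
lvcreate -L 10G -n nfs volgrp1     # Logical Volume: the storage actually presented outside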
Set-up
Installation
- Download the appropriate appliance from OpenFiler
- Extract it, upload to your ESX and Add to Inventory, and start the VM
- The machine will attempt to use DHCP to obtain an IP address
- Browse to the machine, and log in using openfiler/password
- If the option to change the IP doesn't appear on the System page, attempt the following fix
- Log in to the console of the VM (root, no password)
- Change line 94 of
/opt/openfiler/var/www/includes/network.inc
to exec("sudo ifconfig -a | grep 'eth'",$output);
Create NFS Disk
- Log in to OpenFiler (user/pass openfiler/password)
- First create a partition...
- Go to the Volumes tab, and find the Create a new volume group section
- Click on the create new physical volumes link
- In the resulting Block Device Management page, edit the disk you want to create the NFS storage on
- Find the Create a partition in /dev/sdx section and hit Create (default options)
- Then create a volume group for NFS use...
- Go to Volume Groups (right-hand menu) and find the Create a new volume group section
- Give the volume group a name (eg volgrp1), select the partition/physical volume and hit Add volume group
- Go to Add Volume, and find the Create a volume in volgrp section
- Give the volume a name (eg nfs), select all space, change Filesystem to ext3, and hit Create
- Create network share
- On the System tab, find the Network Access Configuration section
- Add in the local network address, or specify individual NFS client IP addresses, etc, and hit Update
- Enable NFS share
- On the Services tab, enable the NFSv3 server
- Go to the Shares tab, and create a new share (eg share) under the NFS mount
- Once created, click on the share name to edit it
- Enable RO or RW NFS access (as required) for the specified networks/hosts
- Click Update, then return using the Back to shares list option
- Use the entire mount path (eg /mnt/volgrp1/nfs/share) when adding the share to an ESX
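Alternatively, the datastore can be added from the ESX service console; a minimal sketch, assuming the OpenFiler box answers as openfiler.example.com and the datastore is to be labelled openfiler-nfs:

esxcfg-nas -a -o openfiler.example.com -s /mnt/volgrp1/nfs/share openfiler-nfs
esxcfg-nas -l    # list NAS datastores to confirm the mount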
Create iSCSI Disk
Either a step is missing from this procedure, or there is a bug in the current version of the code: the iSCSI share never gets mounted, so it isn't returned by a SendTargets query
- Log in to OpenFiler (user/pass openfiler/password)
- First create a partition...
- Go to the Volumes tab, and find the Create a new volume group section
- Click on the create new physical volumes link
- In the resulting Block Device Management page, edit the disk you want to create the iSCSI storage on
- Find the Create a partition in /dev/sdx section and hit Create (default options)
- Then create a volume group for iSCSI use...
- Go to Volume Groups (right-hand menu) and find the Create a new volume group section
- Give the volume group a name (eg volgrp1), select the partition/physical volume and hit Add volume group
- Go to Add Volume, and find the Create a volume in volgrp section
- Give the volume a name (eg iSCSIvolgrp), select all space, change Filesystem to iSCSI, and hit Create
- Create network share
- On the System tab, find the Network Access Configuration section
- Add in the local network address, or specify individual iSCSI client IP addresses, etc, and hit Update
- Enable iSCSI target
- On the Services tab, enable the iSCSI target server
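To test the SendTargets behaviour mentioned above from a Linux initiator (assuming open-iscsi is installed and the OpenFiler box is at 192.168.1.10; substitute your own IP), run a discovery. A correctly mapped LUN shows up as a target IQN:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10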
Troubleshooting
High Memory Usage
OpenFiler caches all throughput, so as the NAS is used it will fill the available memory. This is normal and to be expected; physical memory will be freed as required by other processes. If you suspect a problem, log in to the console and run the top command. As long as the amount of swap memory in use is low, there is no problem.
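A quick check from the console; the Swap: row shows how much swap is in use (near-zero is healthy):

free -m    # -m reports in megabytes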
See https://forums.openfiler.com/viewtopic.php?id=1017 for (a little) more info.
If running OpenFiler on a VM, you may want to minimise the amount of physical memory assigned to the NAS. In that case, keep an eye on the swap usage as you decrease the assigned physical memory; once it begins to rise, you've gone a little too far.
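One way to watch for that while shrinking the VM's memory is vmstat; non-zero si/so columns mean pages are being swapped in and out:

vmstat 5    # sample every 5 seconds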
Failed to get disk partition information
To fix "Failed to get disk partition information" error when mounting iSCSI partition, see this fix.
[root@labesx-1 /]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         140     1124518+  83  Linux
/dev/sda2             141         154      112455   fc  VMware VMKCORE
/dev/sda3             155        2610    19727820    5  Extended
/dev/sda5             155        2610    19727788+  fb  VMware VMFS

Disk /dev/sdb: 7973 MB, 7973371904 bytes
255 heads, 63 sectors/track, 969 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          76      610438+  82  Linux swap / Solaris
/dev/sdb2              77         331     2048287+  83  Linux
/dev/sdb3             332         969     5124735    5  Extended
/dev/sdb5             332         969     5124703+  83  Linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26397   212860927+  ee  EFI GPT

[root@labesx-1 /]# fdisk /dev/sdc

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

The number of cylinders for this disk is set to 26396.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26397   212860927+  ee  EFI GPT

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-26396, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-26396, default 26396):
Using default value 26396

Command (m for help): p

Disk /dev/sdc: 217.9 GB, 217968541696 bytes
256 heads, 63 sectors/track, 26396 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       26396   212857312+  83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fb
Changed system type of partition 1 to fb (VMware VMFS)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.