Ubuntu
{{Depreciated|category=Ubuntu}}
== Initial Setup ==
Much of this section is ''borrowed'' from http://www.howtoforge.com/perfect-server-ubuntu8.04-lts and http://www.howtoforge.com/how-to-install-ubuntu8.04-with-software-raid1; both are well worth a read!


This section will create a Ubuntu VM installed on one partition, software RAID'ed across two VMDK's.  To explain, my ESX's storage originally wasn't resilient, hence the software RAID across VMDK's on separate physical disks; if you've got resilient storage you probably shouldn't use software RAID.
 
''However, once I'd bought a nice (SOHO) NAS, I moved one disk and the VM config across to the NAS, thinking I'd eventually ditch the software RAID.  Luckily I didn't get round to it, so when I managed to destroy my NAS (partly my fault), I could easily recover my VM's from where they left off by creating new ones and re-using the surviving VMDK file.  Therefore, unless you're running a truly enterprise class NAS, that's cost you £1k's to buy and £1k's in yearly support, I'd still recommend you software RAID your critical VM's (eg mail server) across two separate devices. The whole reason you have a home set-up is to ''play'', which inevitably means ''break''!''


=== Prepare Virtual Machine ===
# Create a virtual machine with the following options (use Custom)
#* Guest OS: Linux > Ubuntu 32bit
#* CPU: 1
#* Memory: 756 MB
#* Disk: 36GB
# Then add a second 36GB disk on a separate physical datastore (if you intend to use software RAID)
# Attach the Ubuntu install ISO to the CD-ROM

=== OS Installation ===
Follow the default or sensible choices for your locale, however, use the following notes as well...
* '''Configure the network'''
** Enter the server's hostname (not a FQDN, just the hostname)
* '''Partition Disks'''
** If setting up software RAID follow the steps below, otherwise just select ''Guided - use entire disk and set up LVM''
**# Select "Manual"
**# Then create a partition...
**## Select the first disk (sda) and on the next screen, ''Yes'', to ''Create new empty partition table on this device?''
**## Select the ''FREE SPACE'', then ''Create a new Partition'', and use all but the last 2GB of space
**## Then select type of ''Primary'', and create at ''Beginning''
**## Change ''Use as'' to ''physical volume for RAID'', change the ''Bootable flag'' to ''Yes'', then select ''Done setting up this partition''
**# Repeat the above on the remaining ''FREE SPACE'' on sda, to create another primary physical volume for RAID, but '''not''' bootable
**# Select the second disk, sdb, and repeat the steps taken for sda to create two identical partitions
**# On the same screen, select the ''Configure Software RAID'' option (at the top), and then confirm through the next screen
**# Create a RAID pack/multidisk...
**## Select ''Create MD device'', then select ''RAID1'' (ie a mirror), then confirm ''2'' Active devices and ''0'' Spare devices
**## Select both /dev/sda1 and /dev/sdb1 partitions, and then select ''Finish''
**# Repeat the above to create a RAID volume using the /dev/sda2 and /dev/sdb2 partitions
**# Now select the RAID device #0 partition (select the #1 just under the RAID1 device line), change ''Use as'' and select ''Ext3''
**# Change the ''Mount point'' to /, then select ''Done configuring this partition''
**# Now select the RAID device #1 partition (select the #1 just under the RAID1 device line), change ''Use as'' and select ''Swap area''
**# Then select ''Done configuring this partition'', then finally ''Finish partitioning and write changes to disk'', and confirm to ''Write the changes to disks''
**# Accept the "The kernel was unable to re-read...system will need to restart" complaints for each RAID multidisk, after which the install will continue (note there's a little more to do post install to ensure you can boot using the second disk should the first fail)
* '''Software Selection'''
** DNS Server - Only required in order to configure split DNS, which is required for an exchange server install
** OpenSSH Server - Required (allows you to Putty/SSH to the server)

=== Post OS Install Config ===
* '''Enable Root'''
*# Use the command <code> sudo passwd root </code>
*# Enter your user password, and then a strong password for the root account
* '''Finish Software RAID config''' - only if configured during install
*# Start up grub (by entering <code>grub</code>) and enter the following commands (seems to work better via SSH than direct console)...
*#* <code> device (hd1) /dev/sdb </code>
*#* <code> root (hd1,0) </code>
*#* <code> setup (hd1) </code>
*#* <code> quit </code>
*# Then edit the <code>/boot/grub/menu.lst</code> config file.  Go to the end of the file where the boot options are, create a copy of the first option and edit the following lines
*#* '''title''' - Add "Primary disk fail" or something similar to the end
*#* '''root''' - Change hd0 to hd1
*# To check the RAID setup of your drives use
*#* <code> mdadm --misc -D /dev/md0 </code>
*#* <code> mdadm --misc -D /dev/md1 </code>

=== Change IP Address ===
* Edit the <code>/etc/network/interfaces</code> file in the following fashion
 # The primary network interface
 auto eth0
 iface eth0 inet static
         address 192.168.1.150
         netmask 255.255.255.0
         network 192.168.1.0
         broadcast 192.168.1.255
         gateway 192.168.1.1
* Then check the local hosts file <code>/etc/hosts</code>, so that the IP v4 part looks like...
 127.0.0.1       localhost
 192.168.1.150   mail.home.int   mail
* Check that DNS resolution is set up correctly (add DNS nameservers as required to <code>/etc/resolv.conf</code>, in order of preference)...
 nameserver 127.0.0.1
* Then restart networking
** <code> sudo /etc/init.d/networking restart </code>


=== Install VM Tools ===
The pre-built modules that come with the VMTools installer aren't compatible, therefore the script needs to be able to compile them; however the required library files aren't available by default, so the procedure is a little laboured.

==== Ubuntu 8.04.4 LTS ====
# Install the build library files...
#* <code> apt-get install build-essential </code>
#* <code> apt-get install linux-headers-2.6.24-26-server </code>
#** Use <code> uname -r </code> to get the right headers version number
# Select "Install VM Tools" from the VI Client
# Mount the VM Tools CD-ROM
#* <code> mount /media/cdrom0/ </code>
# Copy to home directory
#* <code> cp /media/cdrom/VMwareTools-4.0.0-219382.tar.gz /home/user/ </code>
# Uncompress and then move into the <code> vmware-tools-distrib </code> directory
#* <code> tar xf VMwareTools-4.0.0-219382.tar.gz </code>
#* <code> cd vmware-tools-distrib </code>
# Run the install script (which might complain enough to make you think it's failed, but check it's worked via the VI Client)
#* <code> ./vmware-install.pl  </code>
# Restart
#* <code> shutdown -r now </code>
 
==== Ubuntu 10.04.1 LTS ====
VM Tools can be installed via two methods, neither of which is ideal...
* Using the normal VM Tools ''CD'' - requires additional library install and sometimes mounting the CDROM doesn't work too well.
* Using APT package manager - doesn't work quite as well as it could (upgrading VM Tools isn't supported), and support for this method is rumoured to be dropped in future releases
 
'''VM Tools ''CD'''''
# Install the build library files (not required for ESX v4.0 update 2 and later)...
#* <code> apt-get install build-essential </code>
# Select "Install VM Tools" from the VI Client
# Mount the VM Tools CD-ROM
#* <code> mount /dev/cdrom /media/cdrom/ </code>
#** If <code>/media/cdrom/</code> doesn't exist, create with <code>mkdir /media/cdrom</code>
# Copy to tmp directory (version number below will vary)
#* <code> cp /media/cdrom/VMwareTools-4.0.0-236512.tar.gz /tmp/ </code>
# Unmount the CD-ROM, and move into tmp directory
#* <code> umount /media/cdrom/ </code>
#* <code> cd /tmp/ </code>
# Uncompress and then move into the <code> vmware-tools-distrib </code> directory
#* <code> tar xzvf VMware*.gz </code>
#* <code> cd vmware-tools-distrib </code>
# Run the install script, and accept defaults
#* <code> ./vmware-install.pl  </code>
# Restart
#* <code> shutdown -r now </code>
'''APT Package Manager'''
Install VM Tools using the apt package manager as follows...
# Open the VMware Packaging Public GPG Key at http://packages.vmware.com/tools/VMWARE-PACKAGING-GPG-KEY.pub
# On the server, open a new file called <code>VMWARE-PACKAGING-GPG-KEY.pub</code> within the <code>/tmp</code> directory
# Copy and paste the contents of the webpage into the file and save
# Import the key using the following command
#* <code>apt-key add /tmp/VMWARE-PACKAGING-GPG-KEY.pub</code>
#* You should get <code>OK</code> returned
# If you need to add a proxy see http://communities.vmware.com/servlet/JiveServlet/download/1554533-39836/Vmware%20Tools%20Guide%20Linux%20osp_install_guide.pdf
# Open a new file in vi called <code>/etc/apt/sources.list.d/vmware-tools.list</code>
# Add the following line
#* <code> deb http://packages.vmware.com/tools/esx/<esx-version>/ubuntu lucid main restricted </code> where <code><esx-version></code> is the appropriate ESX version found at http://packages.vmware.com/tools/esx/index.html
# Update the repository cache
#* <code> apt-get update </code>
# Install VM Tools
#* <code> apt-get install vmware-tools </code>
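For example (assuming your hosts are on ESX 4.0 update 2 - check the index page above for the exact path to use for your version), the completed <code>vmware-tools.list</code> might look like...
<pre>
# /etc/apt/sources.list.d/vmware-tools.list
# VMware Tools packages matching the ESX version the VM runs on
deb http://packages.vmware.com/tools/esx/4.0u2/ubuntu lucid main restricted
</pre>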
=== NTP ===
''Not required if your server doesn't really need bang-on accurate time''

Out of the box your server will sync every time it's restarted and drift a bit in-between.  There is an additional resource demand in running the NTP daemon, so unless you need accurate time there's no need to install the full-blown NTP daemon.
I tend to have one or two servers updating from remote (public) servers, and then all others updating from those.
# Install the service
#* <code> apt-get install ntp </code>
# Update the NTP config file, <code> /etc/ntp.conf </code> (Example below is for a server updating from public European servers - see http://www.pool.ntp.org/)
#* <code> server 0.europe.pool.ntp.org </code>
#* <code> server 1.europe.pool.ntp.org </code>
#* <code> server 2.europe.pool.ntp.org </code>
#* <code> server 3.europe.pool.ntp.org </code>
# Restart the NTP service
#* <code> service ntp restart </code>
# Verify using the following commands
#* <code> ntpq -np </code>
#* <code> date </code>
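The other servers then just need their <code> /etc/ntp.conf </code> pointing at those one or two local NTP servers rather than the public pool.  A minimal sketch, assuming <code>192.168.1.150</code> and <code>192.168.1.151</code> are the two servers syncing from the public pool (adjust the IP's to suit your own setup)...
<pre>
# /etc/ntp.conf - internal servers sync from the local NTP servers only
server 192.168.1.150
server 192.168.1.151
</pre>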


=== Update the OS ===
* Run the following command to update the apt package database
** <code> apt-get update </code>
* To install any updates
** <code> apt-get upgrade </code>


== Random Settings ==
=== Locale ===
To change the local '''time-zone''' use...
* <code> dpkg-reconfigure tzdata </code>
To change the keyboard layout in use...
* <code> dpkg-reconfigure console-data </code>
...if <code> console-data </code> isn't installed, use...
* <code> apt-get install console-data </code>
...and reboot to apply
=== <code>/tmp</code> Boot Time Clean-up ===
The files in <code>/tmp</code> get deleted if their last modification time is more than <code>TMPTIME</code> days ago.


# Edit <code> /etc/default/rcS </code>
# Change the <code>TMPTIME</code> value to specify the number of days
#* Use <code> 0 </code> so that files are removed regardless of age.
#* Use <code> -1 </code> so that no files are removed.
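For example, to have anything in <code>/tmp</code> older than a week cleared at boot, the line in <code> /etc/default/rcS </code> would read...
 TMPTIME=7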


=== Proxy Server ===
Proxy settings need to be added as environment variables, which can be added to your profile file so as to always be applied

# Edit <code> /etc/profile </code>
# Append to the bottom (edit as required)
#* <code> export http_proxy=http://username:pass@proxyserver:port/ </code>
#* <code> export ftp_proxy=http://username:pass@proxyserver:port/ </code>

Note that some applications will ignore the environment variables, and the proxy will need to be set specifically for those apps.
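A common example is apt, which won't necessarily see the profile variables when run via sudo or cron, so it's worth giving it its own proxy config.  A minimal sketch, re-using the same proxy details as above (the file name under <code>/etc/apt/apt.conf.d/</code> is an arbitrary choice)...
<pre>
# /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://username:pass@proxyserver:port/";
Acquire::ftp::Proxy "ftp://username:pass@proxyserver:port/";
</pre>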
 
=== Hostname Change ===
Procedure below guides you through the files etc that need updating in order to change a machine's hostname.  Note that if you get probs SSH'ing to the server afterwards see [[#Server_Hostname_Change|Server Hostname Change]]
 
# Update the following files
#* <code> /etc/hosts </code>
#* <code> /etc/hostname  </code>
# Set the hostname (not FQDN)
#* <code> hostname <servername> </code>
# Reboot
 
=== Allow Remote SSH Login Without Password Prompt ===
In order to be able to access a remote server via an SSH session without needing to supply a password, the remote server needs to trust the user on the local server.  In order to do this, the public key for the user needs to be imported to the remote server.  This is particularly useful when trying to script using ssh, scp, rsync, etc, where you need to interact with a remote server.

You need to be clear on which user will access the remote server; if your script is run as root, then it's the root user that needs to have its public key exported.

Similarly, on the remote server you need to ensure that the user that has the public key imported into it has the rights to perform whatever it is that you want to achieve. This ''shouldn't'' be the root user (to do so you'd need to allow <code>PermitRootLogin</code> in the remote server's SSH config, which is a security no-no).
 
# On the local server, create a public/private rsa key pair while logged in as the user that will access the remote server
#* <code> ssh-keygen -t rsa </code> (leave passphrase blank)
#** This creates a public key in <code> ~/.ssh/id_rsa.pub </code>
# Copy the public key to the user on the remote server
#* <code> ssh-copy-id -i user@remote-svr </code>
#** The <code> user </code> is the user account on the remote server that the local server will be trusted by and run as.
# Test the login as suggested by <code> ssh-copy-id </code>
#* <code> ssh user@remote-svr </code>
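Once the key is trusted, scripted transfers run without any prompt.  A minimal sketch (the paths and <code>user@remote-svr</code> are just placeholders)...
<pre>
# copy last night's backups to the remote server without being prompted for a password
scp /backup/mysql/*.gz user@remote-svr:/backup/db/
# or keep a directory in sync over SSH
rsync -av -e ssh /backup/mysql/ user@remote-svr:/backup/db/
</pre>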
 
== Packages ==
=== Commands ===
{|class="vwikitable"
|-
! Command                              !! Purpose
|-
| <code> dpkg --get-selections </code> || Show installed packages
|-
| <code> dpkg -L php5-gd </code>      || Show file locations of <code> php5-gd </code> package
|-
| <code> apt-get update </code>        || Update the package database
|-
| <code> apt-get install <package> </code> || Install the <code> <package> </code> package
|-
| <code> apt-get upgrade </code>      || Upgrade installed system and packages with latest levels in package database
|-
| <code> tasksel install <task> </code> || Installs a collection of packages as a single task, eg lamp-server
|-
| <code> tasksel --list-tasks </code>   || Show list of available tasks
|}
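For example, to pull in a full LAMP stack as a single task and then confirm the MySQL packages arrived...
<pre>
tasksel install lamp-server
dpkg --get-selections | grep mysql
</pre>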
 
=== Troubleshooting ===
* '''Error 400 Bad Request'''
** Somewhat misleadingly, the problem is normally caused by being unable to contact the update server.  Consider adding proxy server config to your machine
* '''The following packages have been kept back'''
** Package manager can hold back updates because they will cause conflicts, or sometimes because they're major kernel updates.  Running <code>aptitude upgrade</code> normally seems to force kernel updates through.
 
== Firewall ==
Ubuntu comes with UFW (Uncomplicated Firewall), which is a config tool used to modify the standard inbuilt Netfilter.  If preferred, iptables can still be used.
 
Changes are applied immediately. Once you've added your first rule there's an implied deny all.
 
{|class="vwikitable"
|-
! Command                              !! Purpose
|-
| <code> ufw enable </code>            || Enables the firewall
|-
| <code> ufw status </code>            || Shows the firewall status and existing filters
|-
| <code> ufw status numbered </code>    || Shows the firewall status and numbered existing filters (easier to delete)
|-
| <code> ufw allow from 192.168.1.10 </code> || Allow all traffic from 192.168.1.10
|-
| <code> ufw allow http </code>        || Allow http from any IP
|-
| <code> ufw allow proto tcp from 192.168.1.10 to any port 22 </code> || Allow TCP 22 (SSH) from 192.168.1.10
|-
| <code> ufw delete 2 </code>          || Delete rule 2
|}
 
So, for example, to create a couple of rules and enable...
 ufw allow proto tcp from 192.168.10.0/24 to any port 22
 ufw allow proto tcp to any port 443
 ufw enable
 
== SNMP ==
=== Setup (Pre v10) ===
# Run the following command to update the package database
#* <code> apt-get update </code>
# Run the following command to install SNMP
#* <code> apt-get install snmpd </code>
# Create config file with contents as shown below
#* <code> vi /etc/snmp/snmpd.conf </code>
# Edit SNMPD config to allow remote polls
#* <code> vi /etc/default/snmpd </code>
# Remove <code> 127.0.0.1 </code> from line below
#* <code> <nowiki>#</nowiki>snmpd options (use syslog, close stdin/out/err). </code>
#* <code> SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1' </code>
# Restart SNMP
#* <code> /etc/init.d/snmpd restart </code>
# Test with the following, replacing <hostname> with server's hostname
#* <code> snmpwalk -v 1 -c public -O e <hostname> </code>
 
 rocommunity public
 syslocation "CR DC"
 syscontact info@sandfordit.com
 
=== Setup (v10) ===
# Run the following command to update the package database
#* <code> apt-get update </code>
# Run the following command to install SNMP
#* <code> apt-get install snmpd </code>
# Create config file with contents as shown below the procedure
#* <code> vi /etc/snmp/snmpd.conf </code>
# Edit SNMPD config to allow remote polls
#* <code> vi /etc/default/snmpd </code>
# Remove <code> 127.0.0.1 </code> from line below
#* <code> <nowiki>#</nowiki>snmpd options (use syslog, close stdin/out/err). </code>
#* <code> SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1' </code>
# Restart SNMP
#* <code> /etc/init.d/snmpd restart </code>
# Test with the following, replacing <hostname> with server's hostname (must be run from a machine with snmp installed, not just snmpd)
#* <code> snmpwalk -v 1 -c public <hostname> system </code>
 
####
# First, map the community name (COMMUNITY) into a security name
# (local and mynetwork, depending on where the request is coming
# from):
#      sec.name  source          community
#com2sec paranoid  default        public '''<- Comment'''
com2sec readonly  default        public '''<- Uncomment'''
 
  '''... then later ...'''
 
syslocation "CR DC"
syscontact info@sandfordit.com
 
== MySQL ==
=== Install ===
# Run the following command to update the package database
#* <code> apt-get update </code>
# Run the following command to install MySQL
#* <code> apt-get install mysql-server </code>
 
To allow access from remote hosts...
# Open MySQL service TCP/IP port by editing the <code> /etc/mysql/my.cnf </code> config file and restarting
#* Change bind IP to server's IP, EG <code> bind-address = 192.168.1.123 </code>
#* Restart service <code> /etc/init.d/mysql restart </code>
# Allow remote access to a user account
#* EG <code> GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'pass' WITH GRANT OPTION; </code>
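To confirm remote access works, connect from another machine that has the MySQL client installed (IP and user below as per the examples above)...
<pre>
mysql -h 192.168.1.123 -u user -p
</pre>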
 


=== Backup ===
Based on http://www.cyberciti.biz/faq/ubuntu-linux-mysql-nas-ftp-backup-script/

# Create the required folders using...
#* <code> mkdir backup </code>
#* <code> mkdir backup/mysql </code>
# Create the file below (editing as required) as <code> /backup/mysql.sh </code>
# Make the file executable
#* <code> chmod +x /backup/mysql.sh </code>
# Perform a test run of the backup
# Schedule the script to run with crontab
#* <code> crontab -e </code>
#* <code> 30 1 * * *      /bin/bash      /backup/mysql.sh </code>

<source lang="bash">
#!/bin/bash

### MySQL Server Login and local backup info ###
MUSER="root"
MPASS="password"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
LOG="/backup/mysql.log"
GZIP="$(which gzip)"
NOW=$(date -u +%Y%m%d)

## FTP info
FTPDIR="/Backup/db"
FTPUSER="backup"
FTPPASS="backup"
FTPSERVER="ftphost"

## Functions
Logger()
{
        echo `date "+%a %d/%m/%y %H:%M:%S"`: $1 >> $LOG
}

## Main Script
Logger "Started backup script..."

[ ! -d $BAK ] && mkdir -p $BAK
[ ! -d $BAK/tmp ] && mkdir -p $BAK/tmp
mv $BAK/* $BAK/tmp

DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
  FILE=$BAK/$db.$NOW.gz
  Logger "Backing up $db to $FILE"
  $MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done

Logger "Completed local backup"

## FTP to remote server
ftp -in <<EOF
open $FTPSERVER
user $FTPUSER $FTPPASS
bin
cd $FTPDIR
lcd $BAK
mput *
close
bye
EOF

if [ "$?" == "0" ]; then
  Logger "FTP upload completed successfully"
  # remove the previous backup set now the new one has been uploaded
  /bin/rm -f $BAK/tmp/*
  Logger "Previous local backup files removed"
else
  Logger "FTP upload failed !!!"
fi
</source>

In some versions of MySQL you will receive an error similar to...
<pre>
mysqldump: Got error: 1044: Access denied for user 'root'@'localhost' to database 'information_schema' when using LOCK TABLES
</pre>
It appears to be a [http://bugs.mysql.com/bug.php?id=21527 MySQL bug], which seems to keep cropping up.  As a workaround change the <code> $MYSQLDUMP </code> line to
<source lang="bash">
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS --skip-lock-tables $db | $GZIP -9 > $FILE
</source>
Note that you won't backup the <code> information_schema </code> table if you need to implement this workaround.
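To restore one of the resulting dumps, just feed it back into <code>mysql</code> (the database needs to exist first; the file name below is only an example of the naming scheme above)...
<pre>
gunzip < /backup/mysql/mydb.20160823.gz | mysql -u root -p mydb
</pre>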
== Exchange Server ==
=== DNS Records ===
Firstly, you need to own a public domain name, then get your ISP to create two DNS records...

# '''MX record''' - Mail Exchanger (MX) record
#* EG <code> sandfordit.com [MX] -> mail.sandfordit.com </code>
#* <code> sandfordit.com </code> is the domain you own, and <code> mail </code> is the hostname of your email server (it can be anything you like)
# '''A record''' - Standard DNS record
#* EG <code> mail.sandfordit.com [A] -> 158.25.34.124 </code>
#* <code> 158.25.34.124 </code> is the static IP address assigned by your ISP

You'll need to set up a NAT on your router (often oddly called a virtual server on domestic routers) to map incoming mail on TCP 25 to your email server's actual address (EG <code> 158.25.34.124:25 -> 192.168.1.150:25 </code>).

Note, instead of an A record you can use a CNAME record if you prefer, though obviously the CNAME record will still need to point to a valid A record. Using a CNAME might be preferable if, for example, you've multiple services running from a single public IP that you might want to split out in the future to run on separate IP's, at which point you can replace the CNAME records with A records.

=== OS DNS Setup ===
In order to get round the fact that your exchange server won't have the same IP (or name even) on the public internet as it will on your internal network, a DNS server is installed on the exchange server to provide MX record resolution.  The procedure assumes DNS (Bind) is already installed.

Terminology...
* '''Private''' = Home or internal network IP address and network name (eg <code>192.168.1.150</code> and <code>mail.home.int</code>)
* '''Public''' = Global internet, ISP assigned IP address and registered domain name (eg <code>158.25.34.124</code> and <code>mail.sandfordit.com</code>)

Firstly, add the IP('s) of the DNS servers you use for resolution on your other machines to your local DNS server's list of forwarders (so that your exchange server forwards DNS resolution requests for unknown names to your normal DNS servers), edit <code>/etc/bind/named.conf.options</code>
<pre>
options {
        directory "/var/cache/bind";
        query-source address * port 53;

        forwarders {
                192.168.1.1; 158.25.30.10;
        };

        auth-nxdomain no;    # conform to RFC1035
};
</pre>

Edit <code>/etc/resolv.conf</code> to force the server to use its local DNS server for resolution
 nameserver 127.0.0.1

Restart bind using <code> /etc/init.d/bind9 restart </code> and check you can resolve external addresses properly.

Now create the internal zone that will eventually contain the local MX record for your exchange server, append the following to <code> /etc/bind/named.conf.local </code>, using your publicly registered domain name
<pre>
zone "sandfordit.com" {
    type master;
    file "/etc/bind/db.sandfordit.com";
};
</pre>

Lastly create the database file for your DNS domain <code>/etc/bind/db.sandfordit.com</code>, using your publicly registered domain name and the private (internal) IP address of your exchange server...
<pre>
;
; BIND data file for sandfordit.com
;
$TTL    604800
@       IN      SOA     mail.sandfordit.com. admin.sandfordit.com. (
                         070725         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      mail
        IN      MX      10 mail
        IN      A       192.168.1.150
mail    IN      A       192.168.1.150
</pre>

=== Zimbra Install ===
Reference http://wiki.zimbra.com/index.php?title=Ubuntu_8.04_LTS_Server_%28Hardy_Heron%29_Install_Guide

# Copy the install to the server
#* EG <code> pscp zcs-6.0.5_GA_2213.UBUNTU8.20100202225756.tgz simons@mail:zcs-6.0.5_GA_2213.UBUNTU8.20100202225756.tgz </code>
# Uncompress the package
#* <code> tar -xzf zcs-6.0.5_GA_2213.UBUNTU8.20100202225756.tgz </code>
# Start the install
#* <code> ./install.sh </code>
#* The install ''will'' fail due to missing packages!
# Install the missing prerequisite packages
#* EG <code> apt-get install libpcre3 libgmp3c2 libstdc++5 sysstat </code>
# Restart the install
# Part-way through, the install will complain about your domain not having a DNS record; change the domain to your publicly registered domain (without the server hostname, so <code>sandfordit.com</code> rather than <code>mail.sandfordit.com</code>)
# At the end of the install, address the unconfigured item (ie an admin password)

Once the install is completed, login to administer the exchange server using https://mail:7071

To enforce https for Zimbra Desktop clients use the following commands (requires a restart to take effect)...
<pre>
su - zimbra
zmtlsctl https
</pre>

==== High CPU Workaround ====
Zimbra seems to have some real issues with constant high CPU spikes every minute; to limit these, reduce the logging retention and failed process checking.
<pre>
su - zimbra
zmlocalconfig -e zmmtaconfig_interval=6000
zmprov mcf zimbraLogRawLifetime 7d
zmprov mcf zimbraLogSummaryLifetime 30d
/opt/zimbra/libexec/zmlogprocess

crontab -e
*/60 * * * * /opt/zimbra/libexec/zmstatuslog
</pre>

==== Backup ====
'''Basic manual backup'''
# SU to Zimbra admin
#* <code> su - zimbra </code>
# Stop Zimbra services
#* <code> zmcontrol stop </code>
# Exit Zimbra user and create a copy of the directory
#* EG <code> cp -rp /opt/zimbra /home/simons/zimbra_backup_100301 </code>

<br>'''More elaborate scripted version'''<br>
* For more info see - http://www.zimbra.com/forums/administrators/15275-solved-yet-another-backup-script-community-version.html
* Script is downloadable from - http://www.osoffice.de/downloads/viewcategory-7.html

# Check the size of the <code> /opt/zimbra </code> dir (this will be replicated to a sync directory, from which the actual backup is taken) and check available free space
#* <code> du -hs /opt/zimbra </code>
#* <code> df -h </code>
# Un-gzip and upload the config file to somewhere convenient
# Edit the required config params at the start of the script
# Run the script to install (as root), allow creation of required folders and install of required utils
#* <code> ./zmbak_v.0.8.sh --INSTALL </code>
# Perform a first full run to check everything works alright and to create the first full backup
#* <code> ./zmbak_v.0.8.sh -f </code>

To restore, see http://www.zimbra.com/forums/administrators/15275-solved-yet-another-backup-script-community-version-24.html

=== Upgrade ===
Use the same package to upgrade the software as used for a brand new install (there is no separate upgrade package).  The important part of any upgrade ''IS NOT'' how to get your system upgraded, it ''IS'' how you're going to recover if it all goes horribly wrong.

# Isolate the server from the internet (so new mails can't be received following the pre-upgrade backup)
# Stop the mail server running
#* <code> su - zimbra </code>
#* <code> zmcontrol stop </code>
# Backup the server 1st
#* If hosted on an ESX, probably most easily achieved by taking a snapshot (remember to delete the snapshot after a few days if no probs are encountered)
#* Also copy off any existing local backup (so that a new full backup can be started following the upgrade)
# Copy the install to the server
#* EG <code> pscp zcs-6.0.6_GA_2324.UBUNTU8.20100406144520.tgz simons@mail:zcs-6.0.6_GA_2324.UBUNTU8.20100406144520.tgz </code>
# Uncompress the package
#* <code> tar -xzf zcs-6.0.6_GA_2324.UBUNTU8.20100406144520.tgz </code>
# Start the upgrade using the install script
#* <code> ./install.sh </code>
# The script should detect an existing installation and upgrade it, do not install additional components, but do confirm the upgrade.
# Once completed, test thoroughly
# Perform a full local backup
# Reconnect to the network

==== Patch ====
Sometimes patch packages are supplied for minor upgrades between specific versions.  Take the same backup precautions as for a normal upgrade.  The actual application of the patch varies slightly from an upgrade...

# Copy the patch package to the server
#* EG <code> pscp zcs-patch-6.0.6_GA_2332.tgz simons@mail:zcs-patch-6.0.6_GA_2332.tgz </code>
# Uncompress the package
#* <code> tar -xzf zcs-patch-6.0.6_GA_2332.tgz </code>
# Start the patch upgrade using the install script
#* <code> ./installPatch.sh </code>
# Restart the software to apply changes
#* <code> su - zimbra </code>
#* <code> zmcontrol stop </code>
#* <code> zmcontrol start </code>

=== Install Zimlet ===
Zimlets ''only'' work when accessing via the web client, they are not usable from the full-fat Zimbra client.

# Copy the Zimlet to the server
#* EG <code> pscp com_zimbra_tasksreminder.zip simons@mail:com_zimbra_tasksreminder.zip </code>
# Move the file to the <code> /opt/zimbra/zimlets </code> directory
# Deploy the Zimlet
#* EG <code> zmzimletctl deploy com_zimbra_tasksreminder.zip </code>

=== Signature Length Increase ===
The maximum length of an email signature is limited to 10240 by default, to increase...

# '''Update appropriate CoS/user pref...'''
## In the server admin console
## Either update the
### ''User''
###* Addresses > Accounts > <user>
### Or ''CoS''
###* Configuration > Class of Service > <CoS>
## Go to Preferences > Mail Options > Composing mail
## Change the ''Maximum length of mail signature'' value (eg 20480)
# '''Update Zimbra Desktop'''
## Delete, then re-add the account and allow it to resync fully

=== LDAP Config Item Check/Modify ===
* To check config
** EG <code> zmprov gcf zimbraMailPurgeSleepInterval </code>
* To modify config
** EG <code> zmprov mcf zimbraMailPurgeSleepInterval 1m </code>

=== Message Filters ===
* To verify email account filters setup
** EG <code> zmmailbox -z -m simon gfrl </code>

=== Message Purging ===
Check per-user settings
 zmprov ga simon@sandfordit.com | grep Lifetime

To check for purging activity in the mailbox log
 more /opt/zimbra/log/mailbox.log | grep MailboxPurge

=== Documentation Links ===
* '''[http://wiki.zimbra.com/index.php?title=Working_with_Zimlets Zimlets]'''
* '''[http://wiki.zimbra.com/wiki/Zimbra_Desktop_FAQ#How_to_install_spell_checker_dictionaries.3F Zimbra Client Dictionary Install]'''
== Perl ==
=== Install Module ===
Installing a perl module isn't tricky, but there is a certain knack to it, see below...

# Get the module's package name (eg for Net::XWhois)
#* <code> sudo apt-cache search perl net::xwhois </code>
# Then install the package
#* <code> sudo apt-get install libnet-xwhois-perl </code>

=== Check Module(s) Installed ===
To check for a specific module use (checking for <code>Net::XWhois</code>)
 perl -MNet::XWhois -e "print \"Module installed.\\n\";"

To list all installed modules
 perl -MFile::Find=find -MFile::Spec::Functions -Tlwe \
 'find { wanted => sub { print canonpath $_ if /\.pm\z/ }, no_chdir => 1 }, @INC'

Source: http://www.linuxquestions.org/questions/linux-general-1/how-to-list-all-installed-perl-modules-216603/

== Python ==
Python v2 comes pre-installed, however if you want to run newer Python 3 scripts, Python 3 will need to be installed alongside it.

# Install the package
#* <code> apt-get install python3 </code>
#** Note that more than one version of Python 3 may be available; if so, cancel the install and retry with a specific version if required, eg <code> apt-get install python3.1 </code>

To enter the Python 3 interpreter, run <code> python3 </code>.  To make sure a script gets the right environment, use the following shebang
<source lang="python">
#! /usr/bin/env python3
</source>

See [[:Category:Python|Python]] for further info
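A quick sketch of using the shebang, assuming a trivial <code>hello.py</code> created in the current directory...
<pre>
cat > hello.py << 'EOF'
#! /usr/bin/env python3
print("Hello from Python 3")
EOF
chmod +x hello.py
./hello.py
</pre>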


== AWStats ==
=== Initial Setup ===
# Install the package
#* <code> apt-get install awstats </code>
# Edit the generic template config file if required
#* <code> /etc/awstats/awstats.conf </code>
# Create an apache config file for the site with the contents shown below
#* eg <code> /etc/apache2/sites-enabled/awstats </code>
# Restart apache
#* <code> service apache2 restart </code>
# The site should now be available via a URL similar to
#* http://yourserver/awstats/awstats.pl

<pre>
Alias /awstatsclasses "/usr/share/awstats/lib/"
Alias /awstats-icon/ "/usr/share/awstats/icon/"
Alias /awstatscss "/usr/share/doc/awstats/examples/css"
ScriptAlias /awstats/ /usr/lib/cgi-bin/

<Directory /usr/lib/cgi-bin/>
        Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
</Directory>

<Directory /usr/share/awstats/>
        Order allow,deny
        Allow from all
</Directory>
</pre>

=== Add a Site ===
# Create a specific config file for the site to monitor
#* <code> cp /etc/awstats/awstats.conf /etc/awstats/awstats.mysite.com.conf </code>
# Edit the config file for the site, specifically (see below for further options)
#* <code> LogFile="/path/to/your/domain/access.log" </code>
#* <code> LogFormat=1  </code> (this will give you more detailed stats)
#* <code> SiteDomain="mysite.com" </code>
#* <code> HostAliases="www.mysite.com localhost 127.0.0.1" </code> (example for a local site)
# Perform an initial stats gather for the site
#* <code> /usr/lib/cgi-bin/awstats.pl -config=mysite.com -update </code>
# Test that you can see some stats, using a URL similar to
#* http://yourserver/awstats/awstats.pl?config=mysite.com
# Add a scheduled job to crontab to update automatically
#* <code> crontab -e </code>
#* EG every 30 mins <code> */30 * * * *    /bin/perl      /usr/lib/cgi-bin/awstats.pl -config=mysite.com -update >/dev/null </code>

Further options...
* Wiki sites (and other sites where a URL parameter can specify a specific page)
** <code> URLWithQuery=1 </code> - useful for Wiki's etc where a query param indicates a different page
** <code> URLWithQueryWithOnlyFollowingParameters="title" </code> - only treats variances in the param title as distinct pages
** <code> URLReferrerWithQuery=1 </code> - follows on from the two above

=== Other ===
To perform a one-off update from a specific log file...
* <code> /usr/lib/cgi-bin/awstats.pl -config=server -LogFile=access.log </code>
** Updates can only be added in chronological order, therefore you may need to delete the data file for a particular month, and rebuild it entirely.
Scheduled updates are configured in <code> /etc/cron.d/awstats </code>


== Syslog to MySQL Database ==
This procedure achieves three things...
# Allows remote hosts to use the local server as a syslog destination
# Directs syslogs to a MySQL database on the server
# Allows viewing of syslogged events through the [http://loganalyzer.adiscon.com/ LogAnalyzer] web front end
...it is assumed that you already have a local MySQL and Apache server running!

# '''Set-up your server to send syslog messages to a MySQL database'''
#* <code> apt-get install rsyslog-mysql </code>
#* Enter the root password to your MySQL instance when prompted
# '''Update the <code> rsyslog </code> config (<code>/etc/rsyslog.conf</code>) to receive syslog data, and to route messages through a queue'''
## Uncomment the following...
##* <code>$ModLoad ommysql  # load the output driver (use ompgsql for PostgreSQL)</code>
##* <code>$ModLoad imudp    # network reception</code>
##* <code>$UDPServerRun 514 # start a udp server at port 514</code>
## Add the following...
##* <code>$WorkDirectory /rsyslog/work # default location for work (spool) files</code>
##* <code>$ActionQueueType LinkedList # use asynchronous processing</code>
##* <code>$ActionQueueFileName dbq    # set file name, also enables disk mode</code>
##* <code>$ActionResumeRetryCount -1  # infinite retries on insert failure</code>
## Restart the service
##* <code> service rsyslog restart </code>
# '''Install LogAnalyzer'''
## Download the latest build from http://loganalyzer.adiscon.com/downloads
##* EG <code>wget http://download.adiscon.com/loganalyzer/loganalyzer-3.5.0.tar.gz</code>
## Uncompress
##* EG <code>tar xf loganalyzer-3.5.0.tar.gz</code>
## Move the contents of <code>/src</code> to the webserver
##* EG <code> mkdir /var/www/syslog </code>
##* EG <code> mv /src/* /var/www/syslog/ </code>
## Move the utility scripts to the same folder
##* EG <code> mv /contrib/* /var/www/syslog/ </code>
## Make them both executable
##* EG <code> chmod +x /var/www/syslog/*.sh </code>
## Run the config script in the directory
##* EG <code> /var/www/syslog# ./configure.sh </code>
## Browse to the webpage
##* EG http://your-www-svr/syslog/index.php
## Ignore the error, and follow the link to install (configure)
## Accept defaults until step 7, where you change the following
##* Name of the Source - ''your name for the local syslog db''
##* Source Type - MySQL Native
##* Database Name - Syslog
##* Database Tablename - SystemEvents
##* Database User - rsyslog
##* Database Password - rsyslog
## Config completed!
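To check that events are actually landing in the database, generate a test message locally and query the table configured above (database, table and credentials as per step 7)...
<pre>
logger "rsyslog to MySQL test message"
mysql -u rsyslog -prsyslog Syslog -e "SELECT ID, FromHost, Message FROM SystemEvents ORDER BY ID DESC LIMIT 5;"
</pre>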


== Troubleshooting ==
=== Network ===
==== No NIC ====
Especially after hardware changes, it's possible the networking config no longer refers to the right interface.

# Use <code> ifconfig </code> to confirm the current network config
# Use <code> dmesg | grep -i eth </code> to ascertain what's been detected at boot time
# Assuming it states that, say, <code>eth0</code> has been changed to <code>eth1</code>, then just update the <code>/etc/network/interfaces</code> file

=== Software RAID ===
==== Replacing a RAID 1 Disk ====
This procedure was written from the following starting point...
* A machine originally with two disks in RAID1 has failed, one disk has been replaced, and the machine started again
...and adapted from this post http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

# Backup whatever you can before proceeding, one mistake or system error could destroy your machine
# Confirm which disk is new, and which is old (if the new disk is blank this is easy as there will be no partition info!)
#* <code> fdisk -l </code>
# Partition the new disk the same as the original
#* <code> sfdisk -d /dev/sda | sfdisk /dev/sdb </code>
# Confirm that the layout of both disks is now the same
#* <code> fdisk -l </code>
# Add the newly created partitions to the RAID disks
#* <code> mdadm --manage /dev/md0 --add /dev/sdb1 </code>
#* ''You may have more <code>sd</code> partitions than <code>md</code> partitions; the array size returned through <code>mdadm -D /dev/md*</code> should roughly match the number of blocks found from <code>fdisk -l</code>''
# The arrays should now be being sync'ed, check progress by monitoring <code>/proc/mdstat</code>
#* <code> more /proc/mdstat </code>

=== SSH ===
==== Server Hostname Change ====
If the hostname (or IP) of the server you are SSH'ing to changes, the old entry needs to be removed from your SSH known hosts file
* <code> ssh-keygen -R <name or IP> </code>

=== Reboot Required? ===
If a package update/installation requires a reboot to complete, the following file will exist...
 /var/run/reboot-required

To see which packages caused this to be set, inspect the contents of...
 /var/run/reboot-required.pkgs

[[Category:VMware]]
[[Category:Zimbra]]
[[Category:MySQL]]
[[Category:Python]]

Latest revision as of 15:21, 23 August 2016

This page is now depreciated, and is no longer being updated.
The page was becoming too large - all content from this page, and newer updates, can be found via the Category page link below.

This page and its contents will not be deleted.

See Ubuntu

Initial Setup

Much of this section is borrowed from http://www.howtoforge.com/perfect-server-ubuntu8.04-lts and http://www.howtoforge.com/how-to-install-ubuntu8.04-with-software-raid1, they are well worth a read!

This section will create a Ubuntu VM installed on one partition, software RAID'ed across two VMDK's. To explain, my ESX's storage originally wasn't resilient, hence the software RAID across VMDK's on separate physical disks, if you've got resilient storage you should probably wouldn't use software RAID.

However, once I'd bought a nice (SOHO) NAS, I moved one disk and VM config across to NAS, thinking I'd eventually ditch the software RAID. Luckily I didn't get round to it, so when I managed to destroy my NAS (partly my fault), I could easily recover my VM's from where they left off by creating new ones and re-using the surviving VMDK file. Therefore, unless you're running a truly enterprise class NAS, that's cost you £1k's to buy, and £1k's in yearly support I'd still recommend you software RAID your critical VM's (eg mail server) across two separate devices. The whole reason you have a home set-up is to play, which inevitably means break!

Prepare Virtual Machine

  1. Create a virtual machine with the following options (use Custom)
    • Guest OS: Linux > Ubuntu 32bit
    • CPU: 1
    • Memory: 756 MB
    • Disk: 36GB
  2. Then add a second 36GB disk on a separate physical datastore (if you intend to use software RAID)
  3. Attach Ubuntu install ISO to the CD-ROM

OS Installation

Follow the default or sensible choices for your locale, however, use the following notes as well...

  • Configure the network
    • Enter the server's hostname (not a FQDN, just the hostname)
  • Partition Disks
    • If setting up software RAID follow the steps below, otherwise just select Guided - use entire disk and set up LVM
      1. Select "Manual
      2. Then create a partition...
        1. Select the first disk (sda) and on the next screen, Yes, to Create new empty partition table on this device?
        2. Select the FREE SPACE, then Create a new Partition, and use all but the last 2GB of space,
        3. And then select type of Primary, and create at Beginning
        4. Change Use as to physical volume for RAID, and change the Bootable flag to Yes, the select Done setting up this partition
      3. Repeat the above on the remaining FREE SPACE on sda, to create another primary physical volume for RAID, but 'not bootable
      4. Select the second disk, sdb, and repeat the steps taken for sda to create two identical partitions
      5. On the same screen, select the Configure Software RAID option (at the top), and then confirm through the next screen
      6. Create a RAID pack/multidisk...
        1. Select Create MD device, then select RAID1 (ie a mirror), then confirm 2 Active devices, and 0 Spare devices
        2. Select both /dev/sda1 and /dev/sdb1 partitions, and then select Finish
      7. Repeat the above to create a RAID volume using /dev/sda2 and /dev/sdb2 partitions
      8. Now select the RAID device #0 partition (select the #1 just under RAID1 device line), and change the Use as and select Ext3...
      9. Change the Mount point to /, then select Done configuring this partition
      10. Now select the RAID device #1 partition (select the #1 just under RAID1 device line), and change the Use as and select Swap area
      11. Then select Done configuring this partition then finally Finish partitioning and write changes to disk, and confirm to Write the changes to disks
      12. Accept the "The kernel was unable to re-read...system will need to restart" complaints for each RAID multidisk, after which the install will continue (note there's a little more to do post install to ensure you can boot using the second disk should the first fail).
  • Software Selection
    • DNS Server - Only required in order to configure split DNS, which is required for an exchange server install
    • OpenSSH Server - Required (allows you to Putty/SSH to the server)

Post OS Install Config

  • Enable Root
    1. Use the command sudo passwd root
    2. Enter user password, and then a strong password for the root account
  • Finish Software RAID config - only if configured during install
    1. Start-up grub (by entering grub and enter the following commands (seems to work better via SSH than direct console)...
      • device (hd1) /dev/sdb
      • root (hd1,0)
      • setup (hd1)
      • quit
    2. Then edit the /boot/grub/menu.lst config file. Go to the end of the file where the boot options are, and create a copy of the first option and edit the following lines
      • title Add "Primary disk fail" or something similar to end
      • root Change hd0 to hd1
    3. To check the RAID setup of your drives use
      • mdadm --misc -D /dev/md0
      • mdadm --misc -D /dev/md1

Change IP Address

  • Edit the /etc/network/interfaces file in the following fashion
# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.1.150
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
  • Then check the local hosts file /etc/hosts , so that the IP v4 part looks like...
127.0.0.1       localhost
192.168.1.150   mail.home.int   mail
  • Check that DNS resolution is setup correctly (add DNS nameservers as required, as found in /etc/resolv.conf in order of pref...
nameserver 127.0.0.1
  • Then restart networking
    • sudo /etc/init.d/networking restart

Install VM Tools

The pre-built modules that come with the VMTools installer aren't compatible, therefore the script needs to be able to compile them, however the required library files aren't available by default, so the procedure is a little laboured.

Ubuntu 8.04.4 LTS

  1. Install the build library files...
    • apt-get install build-essential
    • apt-get install linux-headers-2.6.24-26-server
      • Use uname -r to get the right headers version number
  2. Select "Install VM Tools" from the VI Client
  3. Mount the VM Tools CD-ROM
    • mount /media/cdrom0/
  4. Copy to home directory
    • cp /media/cdrom/VMwareTools-4.0.0-219382.tar.gz /home/user/
  5. Uncompress and then move into the vmware-tools-distrib directory
    • tar xf VMwareTools-4.0.0-219382.tar.gz
    • cd vmware-tools-distrib
  6. Run the install script
    • ./vmware-install.pl
  7. Restart
    • shutdown -r now

Ubuntu 10.04.1 LTS

VM Tools can be installed via two methods, neither of which is ideal...

  • Using the normal VM Tools CD - requires additional library install and sometimes mounting the CDROM doesn't work too well.
  • Using APT package manager - doesn't work quite as well as it could (upgrading VM Tools isn't supported), and support for this method is rumoured to be dropped in future releases

VM Tools CD

  1. Install the build library files (not required for ESX v4.0 update 2 and later)...
    • apt-get install build-essential
  2. Select "Install VM Tools" from the VI Client
  3. Mount the VM Tools CD-ROM
    • mount /dev/cdrom /media/cdrom/
      • If /media/cdrom/ doesn't exist, create with mkdir /media/cdrom
  4. Copy to tmp directory (version number below will vary)
    • cp /media/cdrom/VMwareTools-4.0.0-236512.tar.gz /tmp/
  5. Unmount the CD-ROM, and move into tmp directory
    • umount /media/cdrom/
    • cd /tmp/
  6. Uncompress and then move into the vmware-tools-distrib directory
    • tar xzvf VMware*.gz
    • cd vmware-tools-distrib /
  7. Run the install script, and accept defaults
    • ./vmware-install.pl
  8. Restart
    • shutdown -r now

APT Package Manager

  1. Install VM Tools using apt package manager
  2. Open VMware Packaging Public GPG Key at http://packages.vmware.com/tools/VMWARE-PACKAGING-GPG-KEY.pub
  3. On the server open a new file called VMWARE-PACKAGING-GPG-KEY.pub with the /tmp directory
  4. Copy and paste the contents of the webpage into the file and save
  5. Import the key using the following command
    • apt-key add /tmp/VMWARE-PACKAGING-GPG-KEY.pub
    • You should get OK returned
  6. If you need to add a proxy see http://communities.vmware.com/servlet/JiveServlet/download/1554533-39836/Vmware%20Tools%20Guide%20Linux%20osp_install_guide.pdf
  7. Open a new vi in VI called /etc/apt/sources.list.d/vmware-tools.list
  8. Add the following line
  9. Update the repository cacahe
    • apt-get update
  10. Install VM Tools
    • apt-get install vmware-tools

NTP

Not required if your server doesn't really need bang on accurate time

Out of the box your server will sync every time its restarted and drift a bit in-between. There is an additional resource demand in running the NTP daemon so unless you need to, there's no need to install the full blown NTP daemon.

I tend to have one or two servers updating from remote (public) servers, and then all others updating from those.

  1. Install the service
    • apt-get install ntp
  2. Update the NTP config file, /etc/ntp.conf (Example below is for a server updating from public European servers - see http://www.pool.ntp.org/)
    • server 0.europe.pool.ntp.org
    • server 1.europe.pool.ntp.org
    • server 2.europe.pool.ntp.org
    • server 3.europe.pool.ntp.org
  3. Restart the NTP service
    • service ntp restart
  4. Verify using the following commands
    • ntpq -np
    • date

Update the OS

  • Run the following command to update the apt package database
    • apt-get update
  • To install any updates
    • apt-get upgrade

Random Settings

Locale

To change the local time-zone use...

  • dpkg-reconfigure tzdata

To change the keyboard layout in use...

  • dpkg-reconfigure console-data

...if console-data isn't installed, use...

  • apt-get install console-data

...and reboot to apply

\tmp Boot Time Clean-up

The files in /tmp get deleted if their last modification time is more than TMPTIME days ago.

  1. Edit /etc/default/rcS
  2. Change TMPTI80aM80E value to specify no of days
    • Use 0 so that files are removed regardless of age.
    • Use -1 so that no files are removed.

Proxy Server

Proxy settings need to be added as environment variables, which can be added to to your profile file so as to be always be applied

  1. Edit /etc/profile
  2. Append to the bottom (edit as required)

Note that some applications will ignore the environment variables, and will need to be set specifically for those apps.

Hostname Change

Procedure below guides you through the files etc that need updating in order to change a machine's hostname. Note that if you get probs SSH'ing to the server afterwards see Server Hostname Change

  1. Update the following files
    • /etc/hosts
    • /etc/hostname
  2. Set the hostname (not FQDN)
    • hostname <servername>
  3. Reboot

Allow Remote SSH Login Without Password Prompt

In order to be able to access a remote server via an SSH session without needing to suppy a password, the remote server needs to trust the user on the local server. In order to do this, the public key for the user needs to be imported to the remote server. This is particularly useful when trying to script using ssh, scp, rsync, etc where you need to interract with a remote server.

You need to be clear on which user will access the remote the server, if your script is run as root, then its the root user that needs to have its public key exported.

Similarly, on the remote server you need to ensure that that the user that has the public key key imported into, has the rights to perform whatever it is that you want to achieve. This shouldn't be the root user (to do so you'd need to allow PermitRootLogin in the remote server's SSH config, which is a security no-no).

  1. On the local server, create a public/private rsa key pair while logged in as the user that will access the remote server
    • ssh-keygen -t rsa (leave passphrase blank)
      • This creates a public key in ~/.ssh/id_rsa.pub
  2. Copy the public key to the user on the remote server
    • ssh-copy-id -i user@remote-svr
      • The user is the user account on the remote server that the local server will be trusted by and run as.
  3. Test the login as suggested by ssh-copy-id
    • ssh user@remote-svr

Packages

Commands

Command Purpose
dpkg --get-selections Show installed packages
dpkg -L php5-gd Show file locations of php5-gd package
apt-get update Update the package database
apt-get install <package> Install the <package> package
apt-get upgrade Upgrade installed system and packages with latest levels in package database
tasksel install <task> Installs a collection of packages as a single task, eg lamp-server
tasksel --list-task Show list of available tasks

Troubleshooting

  • Error 400 Bad Request
    • Somewhat misleadingly, the problem is normal caused by being unable to contact the update server. Consider adding proxy server config to your machine
  • The following packages have been kept back
    • Package manager can hold back updates because they will cause conflicts, or sometimes because they're major kernel updates. Running aptitude upgrade normally seems to force kernel updates through.

Firewall

Ubuntu comes with UFW (Uncomplicated Firewall), which is a config tool used to modify the standard inbuilt Netfilter. If preferred, iptables can still be used.

Changes are applied immediately. Once you've added your first rule there's an implied deny all.

Command Purpose
ufw enable Enables the firewall
ufw status Shows the firewall status and existing filters
ufw status numbered Shows the firewall status and numbered existing filters (easier to delete)
ufw allow from 192.168.1.10 Allow all traffic from 192.168.1.10
ufw allow http Allow http from any IP
ufw allow proto tcp from 192.168.1.10 to any port 22 Allow TCP 22 (SSH) from 192.168.1.10
ufw delete 2 Delete rule 2

So, for example, to create a couple of rules and enable...

ufw allow proto tcp from 192.168.10.0/24 to any port 22
ufw allow proto tcp to any port 443
ufw enable

SNMP

Setup (Pre v10)

  1. Run the following command to update the package database
    • apt-get update
  2. Run the following command to install SNMP
    • apt-get install snmpd
  3. Create config file with contents as shown below
    • vi /etc/snmp/snmpd.conf
  4. Edit SNMPD config to allow remote polls
    • vi /etc/default/snmpd
  5. Remove 127.0.0.1 from line below
    • #snmpd options (use syslog, close stdin/out/err).
    • SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
  6. Restart SNMP
    • /etc/init.d/snmpd restart
  7. Test with the following, replacing <hostname> with server's hostname
    • snmpwalk -v 1 -c public -O e <hostname>
rocommunity public
syslocation "CR DC"
syscontact info@sandfordit.com

Setup (v10)

  1. Run the following command to update the package database
    • apt-get update
  2. Run the following command to install SNMP
    • apt-get install snmpd
  3. Create config file with contents as shown below the procedure
    • vi /etc/snmp/snmpd.conf
  4. Edit SNMPD config to allow remote polls
    • vi /etc/default/snmpd
  5. Remove 127.0.0.1 from line below
    • #snmpd options (use syslog, close stdin/out/err).
    • SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
  6. Restart SNMP
    • /etc/init.d/snmpd restart
  7. Test with the following, replacing <hostname> with server's hostname (must be run from a machine with snmp installed, not just snmpd)
    • snmpwalk -v 1 -c public <hostname> system <hostname>
####
# First, map the community name (COMMUNITY) into a security name
# (local and mynetwork, depending on where the request is coming
# from):

#       sec.name  source          community
#com2sec paranoid  default         public	<- Comment
com2sec readonly  default         public	<- Uncomment
... then later ...
syslocation "CR DC"
syscontact info@sandfordit.com

MySQL

Install

  1. Run the following command to update the package database
    • apt-get update
  2. Run the following command to install MySQL
    • apt-get install mysql-server

To allow access from remote hosts...

  1. Open MySQL service TCP/IP port by editing the /etc/mysql/my.cnf config file and restarting
    • Change bind IP to server's IP, EG bind-address = 192.168.1.123
    • Restart service /etc/init.d/mysql restart
  2. Allow remote access to a user account
    • EG GRANT ALL PRIVILEGES ON *.* TO 'user'@'%' IDENTIFIED BY 'pass' WITH GRANT OPTION;


Backup

Based on http://www.cyberciti.biz/faq/ubuntu-linux-mysql-nas-ftp-backup-script/

  1. Create the required folders using...
    • mkdir backup
    • mkdir backup/mysql
  2. Create the file below (editing as required) as /backup/mysql.sh
  3. Make the file executable
    • chmod +x /backup/mysql.sh
  4. Perform a test run of the backup
  5. Schedule the script to run with crontab
    • crontab -e
    • 30 1 * * * /bin/bash /backup/mysql.sh
#!/bin/bash

### MySQL Server Login and local backup info ###
MUSER="root"
MPASS="password"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
LOG="/backup/mysql.log"
GZIP="$(which gzip)"
NOW=$(date -u +%Y%m%d)

## FTP info
FTPDIR="/Backup/db"
FTPUSER="backup"
FTPPASS="backup"
FTPSERVER="ftphost"

## Functions
Logger()
{
        echo `date "+%a %d/%m/%y %H:%M:%S"`: $1 >> $LOG
}

## Main Script
Logger "Started backup script..."

[ ! -d $BAK ] && mkdir -p $BAK
[ ! -d $BAK/tmp ] && mkdir -p $BAK/tmp
mv $BAK/* $BAK/tmp

DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
 FILE=$BAK/$db.$NOW.gz
 Logger "Backing up $db to $FILE"
 $MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done

Logger "Completed local backup"

## FTP to remote server
ftp -in <<EOF
open $FTPSERVER
user $FTPUSER $FTPPASS
bin
cd $FTPDIR
lcd $BAK
mput *
close
bye
EOF

if [ "$?" == "0" ]; then
 Logger "FTP upload completed successfully"
 /bin/rm -f $BAK/tmp*
 Logger "Previous local backup files removed"
else
 Logger "FTP upload failed !!!"
fi


In some versions of MySQL you will receive an error similar to...

mysqldump: Got error: 1044: Access denied for user 'root'@'localhost' to database 'information_schema' when using LOCK TABLES

It appears to be a bug, which seems to keep cropping up. As a workaround change the $MYSQLDUMP line to

 $MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS --skip-lock-tables $db | $GZIP -9 > $FILE

Note that you won't backup the information_schema table if you need to implement this workaround

Perl

Install Module

Installing a perl module isn't tricky, but there is a certain nack to it, see below...

  1. Get the module's package name (eg for Net::XWhois)
    • sudo apt-cache search perl net::xwhois
  2. Then install the package
    • sudo apt-get install libnet-xwhois-perl

Check Module(s) Installed

To check for a specific module use (checking for Net::XWhois)

perl -MNet::XWhois -e "print \"Module installed.\\n\";"
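
Most CPAN modules also expose a $VERSION variable, so (assuming Net::XWhois follows that convention) the installed version can be printed with:

perl -MNet::XWhois -e 'print "$Net::XWhois::VERSION\n"'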

To list all installed modules

perl -MFile::Find=find -MFile::Spec::Functions -Tlwe \
'find { wanted => sub { print canonpath $_ if /\.pm\z/ }, no_chdir => 1 }, @INC'

Source: http://www.linuxquestions.org/questions/linux-general-1/how-to-list-all-installed-perl-modules-216603/

Python

Python 2 comes pre-installed; however, if you want to run newer Python 3 scripts, Python 3 will need to be installed alongside it.

  1. Install the package
    • apt-get install python3
      • Note that more than one version of Python 3 may be available; cancel the install and retry with a specific version if required, eg apt-get install python3.1

To enter the Python 3 interpreter, run python3. To make sure a script gets the right environment, use the following shebang

#! /usr/bin/env python3
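
For example, a quick throwaway test script (hello3.py is just an arbitrary example name) to confirm the shebang picks up the Python 3 interpreter:

cat > hello3.py <<'EOF'
#! /usr/bin/env python3
import sys
print("Running under Python", sys.version.split()[0])
EOF
chmod +x hello3.py
./hello3.py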

See Python for further info

AWStats

Initial Setup

  1. Install the package
    • apt-get install awstats
  2. Edit the generic template config file if required
    • /etc/awstats/awstats.conf
  3. Create an Apache config file for the site with the contents shown below
    • eg /etc/apache2/sites-enabled/awstats
  4. Restart apache
    • service apache2 restart
  5. The site should now be available via a URL similar to http://<server>/awstats/awstats.pl
Alias /awstatsclasses "/usr/share/awstats/lib/"
Alias /awstats-icon/ "/usr/share/awstats/icon/"
Alias /awstatscss "/usr/share/doc/awstats/examples/css"
ScriptAlias /awstats/ /usr/lib/cgi-bin/

<Directory /usr/lib/cgi-bin/>
        Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
</Directory>

<Directory /usr/share/awstats/>
        Order allow,deny
        Allow from all
</Directory>

Add a Site

  1. Create a specific config file for the site to monitor
    • cp /etc/awstats/awstats.conf /etc/awstats/awstats.mysite.com.conf
  2. Edit the config file for the site, specifically (see below for further options)
    • LogFile="/path/to/your/domain/access.log"
    • LogFormat=1 (this will give you more detailed stats)
    • SiteDomain="mysite.com"
    • HostAliases="www.mysite.com localhost 127.0.0.1" (example for a local site)
  3. Perform an initial stats gather for the site
    • /usr/lib/cgi-bin/awstats.pl -config=mysite.com -update
  4. Test that you can see some stats, using a URL similar to http://<server>/awstats/awstats.pl?config=mysite.com
  5. Add a scheduled job to crontab to update automatically
    • crontab -e
    • EG every 30 mins: */30 * * * * /usr/bin/perl /usr/lib/cgi-bin/awstats.pl -config=mysite.com -update >/dev/null

Further options

  • Wiki sites (and other sites where a URL parameter can specify a specific page)
    • URLWithQuery=1 - useful for wikis etc where a query parameter indicates a different page
    • URLWithQueryWithOnlyFollowingParameters="title" - only treats variances in the title parameter as distinct pages
    • URLReferrerWithQuery=1 - follows on from the two options above

Other

To perform a one-off update from a specific log file...

  • /usr/lib/cgi-bin/awstats.pl -config=server -LogFile=access.log
    • Updates can only be added in chronological order, therefore you may need to delete the data file for a particular month, and rebuild it entirely.
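
For example, to rebuild one month from scratch: the sketch below assumes the Debian/Ubuntu default data directory of /var/lib/awstats, the awstatsMMYYYY.<config>.txt file naming, and an example archived log path; adjust for your DirData setting and log rotation.

# Remove the partial/damaged month (March 2012 for config "server" in this example)...
rm /var/lib/awstats/awstats032012.server.txt
# ...then replay that month's archived logs in chronological order
/usr/lib/cgi-bin/awstats.pl -config=server -LogFile=/var/log/apache2/access.log.1 -update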

Scheduled updates are configured in /etc/cron.d/awstats

Syslog to MySQL Database

This procedure achieves three things...

  1. Allows remote hosts to use the local server as a syslog destination
  2. Directs syslogs to MySQL database on the server
  3. Allows viewing of syslogged events through the LogAnalyzer web front end

...it is assumed that you already have a local MySQL and Apache server running!

  1. Set-up your server to send syslog messages to a MySQL database
    • apt-get install rsyslog-mysql
    • Enter the root password to your MySQL instance when prompted
  2. Update the rsyslog config (/etc/rsyslog.conf) to receive syslog data, and to route messages through a queue
    1. Uncomment the following...
      • $ModLoad ommysql # load the output driver (use ompgsql for PostgreSQL)
      • $ModLoad imudp # network reception
      • $UDPServerRun 514 # start a udp server at port 514
    2. Add the following...
      • $WorkDirectory /rsyslog/work # default location for work (spool) files
      • $ActionQueueType LinkedList # use asynchronous processing
      • $ActionQueueFileName dbq # set file name, also enables disk mode
      • $ActionResumeRetryCount -1 # infinite retries on insert failure
    3. Restart the service
      • service rsyslog restart
  3. Install LogAnalyzer
    1. Download latest build from http://loganalyzer.adiscon.com/downloads
    2. Uncompress
      • EG tar xf loganalyzer-3.5.0.tar.gz
    3. Move the contents of /src to the web server
      • EG mkdir /var/www/syslog
      • EG mv /src/* /var/www/syslog/
    4. Move utility scripts to same folder
      • EG mv /contrib/* /var/www/syslog/
    5. Make them both executable
      • EG chmod +x /var/www/syslog/*.sh
    6. Run the config script in the directory
      • EG /var/www/syslog# ./configure.sh
    7. Browse to the web page (e.g. http://<server>/syslog/index.php)
    8. Ignore the error, and follow the link to install (configure)
    9. Accept defaults until step 7, where you change the following
      • Name of the Source - your name for the local syslog db
      • Source Type - MySQL Native
      • Database Name - Syslog
      • Database Tablename - SystemEvents
      • Database User - rsyslog
      • Database Password - rsyslog
    10. Config completed!
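
To confirm messages really are landing in the database, a quick test is to generate a local syslog message and then query the SystemEvents table (the column names below are those used by the stock rsyslog-mysql schema, with the rsyslog/rsyslog credentials set up above):

logger "rsyslog-mysql test message"
mysql -u rsyslog -prsyslog Syslog -e "SELECT ReceivedAt, FromHost, Message FROM SystemEvents ORDER BY ID DESC LIMIT 5;"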

Troubleshooting

Network

No NIC

Especially after hardware changes, it's possible the networking config no longer refers to the right interface.

  1. Use ifconfig to confirm the current network config
  2. Use dmesg | grep -i eth to ascertain what's been detected at boot time
  3. Assuming it states that, say, eth0 has been renamed to eth1, update the /etc/network/interfaces file accordingly (see the sketch below)
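
A minimal sketch of the process, assuming the simple case where eth0 has just become eth1:

ifconfig -a                                       # list all interfaces, including un-configured ones
dmesg | grep -i eth                               # see what was detected/renamed at boot
sed -i 's/eth0/eth1/g' /etc/network/interfaces    # swap the interface name in the config
/etc/init.d/networking restart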

Software RAID

Replacing a RAID 1 Disk

This procedure was written from the following starting point...

  • A machine originally with two disks in RAID1 has failed; one disk has been replaced, and the machine started again

...and adapted from this post http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

  1. Back up whatever you can before proceeding; one mistake or system error could destroy your machine
  2. Confirm which disk is new, and which is old (if the new disk is blank this is easy as there will be no partition info!)
    • fdisk -l
  3. Partition the new disk the same as the original
    • sfdisk -d /dev/sda | sfdisk /dev/sdb
  4. Confirm that the layout of both disks is now the same
    • fdisk -l
  5. Add the newly created partitions to the RAID disks
    • mdadm --manage /dev/md0 --add /dev/sdb1
    • You may have more sd partitions than md partitions; the array size returned by mdadm -D /dev/md* should roughly match the number of blocks reported by fdisk -l
  6. The arrays should now be re-syncing; check progress by monitoring /proc/mdstat
    • more /proc/mdstat
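
For example, to watch the rebuild and then confirm the array is healthy afterwards (md0 being the example array from above):

watch cat /proc/mdstat    # live view of the resync progress
mdadm -D /dev/md0         # should report "State : clean" with both devices "active sync" once finished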

SSH

Server Hostname Change

If the hostname (or IP) of the server you are SSH'ing to changes, the old entry needs to be removed from your SSH known_hosts file

  • ssh-keygen -R <name or IP>

Reboot Required?

If a package update/installation requires a reboot to complete, the following file will exist...

/var/run/reboot-required 

To see which packages caused this to be set, inspect the contents of...

/var/run/reboot-required.pkgs
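
A small sketch tying the two together, handy at the end of an update session:

if [ -f /var/run/reboot-required ]; then
        echo "Reboot required by the following packages:"
        cat /var/run/reboot-required.pkgs
else
        echo "No reboot required"
fi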