== High System Load ==
'''For performance problems related to load, see [[High_System_Load_(Ubuntu)|High System Load]]'''

The system load is normally represented by the load average over the last 1, 5 and 15 minutes.

For example, the <code>uptime</code> command gives a single-line summary of system uptime and recent load:
<pre>
user@server:~$ uptime
 14:28:49 up 9 days, 22:41,  1 user,  load average: 0.34, 0.36, 0.32
</pre>

So in the above, as of 14:28:49 hrs the server has been up for 9 days 22 hours odd, has 1 user logged in, and the system load averages for the past 1, 5, and 15 minutes are shown.
The load average for a given period indicates how many processes were running or in an uninterruptible (waiting for IO) state. What counts as bad depends on your system: for a single-CPU system a load average greater than 1 could be considered bad, as there are more processes running than CPUs to service them. Though if you expect peaks in load, then a high load over the last minute might not concern you, whereas over 15 minutes it would. A quick check of the current load against the CPU count is sketched below.

The problem with investigating performance issues is that you need to know what is normal, so you can determine what's wrong once application/service performance deteriorates. But it's unlikely that you would have paid much attention to underlying system metrics until things are already bad.
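A minimal sketch of such a check, assuming a Linux system where <code>/proc/loadavg</code>, <code>nproc</code> and <code>bc</code> are available (the same tools used by the logging script further down):

<source lang="bash">#!/bin/bash
# Flag when the 1-minute load average exceeds the number of CPUs
CPUS=`nproc`
LOAD1=`cut -d ' ' -f 1 /proc/loadavg`

# bc prints 1 if the comparison holds, 0 otherwise
if [ `echo "$LOAD1 > $CPUS" | bc -l` -eq 1 ]; then
	echo "1-min load $LOAD1 exceeds CPU count $CPUS"
fi</source>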
=== <code>top</code> ===
The <code>top</code> command allows some basic insight into the system's performance, and is akin to the Task Manager in Windows. It probably won't provide the answer as to what the problem is, but it should allow you to focus in on the process(es) that are causing grief.
<pre>
user@server:~$ top
top - 14:32:09 up 9 days, 22:44,  1 user,  load average: 0.70, 0.44, 0.34
Tasks: 137 total,   1 running, 136 sleeping,   0 stopped,   0 zombie
Cpu(s): 93.8%us,  6.2%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1023360k total,   950520k used,    72840k free,    10836k buffers
Swap:  1757176k total,  1110228k used,   646948k free,   135524k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6608 zimbra    20   0  556m  69m  12m S 69.1  6.9   0:03.26 java
17284 zimbra    20   0  649m 101m 3604 S  4.6 10.1  31:34.74 java
 2610 zimbra    20   0  976m 181m 3700 S  0.7 18.1 133:06.68 java
    1 root      20   0 23580 1088  732 S  0.0  0.1   0:04.70 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.01 kthreadd
    3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
....
</pre>
Note that CPU metrics are with respect to 1 CPU, so on a multiple CPU system, seeing values > 100% is valid.
{|class="vwikitable-equal"
|+ Overview of CPU Metrics, % of CPU time spent on
! Code
! <code> us </code>
! <code> sy </code>
! <code> ni </code>
! <code> id </code>
! <code> wa </code>
! <code> hi </code>
! <code> si </code>
! <code> st </code>
|-
! Name
| User CPU
| System CPU
| Nice CPU
| Idle CPU
| IO Wait
| Hardware Interrupts
| Software Interrupts
| Steal
|-
! Description
| user processes (excluding nice)
| kernel processes
| user nice processes (nice reduces the priority of a process)
| idling (doing nothing)
| waiting for IO (high indicates disk/network bottleneck)
| hardware interrupts
| software interrupts
| time stolen from this (virtual) machine while the hypervisor services other virtual CPUs
|}

{|class="vwikitable"
|+ Task column heading descriptions (to change which columns are shown press <code>f</code>; a worked example follows the table)
! Key !! Display !! Name !! Description
|-
| <code>a</code> || <code>PID</code> || Process ID || Task/process identifier
|-
| <code>b</code> || <code>PPID</code> || Parent PID || Task/process identifier of the process's parent (ie the process that launched this process)
|-
| <code>c</code> || <code>RUSER</code> || Real User Name || Real username of task's owner
|-
| <code>d</code> || <code>UID</code> || User ID || User ID of task's owner
|-
| <code>e</code> || <code>USER</code> || User Name || Effective username of task's owner
|-
| <code>f</code> || <code>GROUP</code> || Group Name || Group name of task's owner
|-
| <code>g</code> || <code>TTY</code> || Controlling TTY || Device that started the process
|-
| <code>h</code> || <code>PR</code> || Priority || The task's priority
|-
| <code>i</code> || <code>NI</code> || Nice value || Adjusted task priority. From -20 meaning high priority, through 0 meaning unadjusted, to 19 meaning low priority
|-
| <code>j</code> || <code>P</code> || Last Used CPU || ID of the CPU last used by the task
|-
| <code>k</code> || <code>%CPU</code> || CPU Usage || Task's usage of CPU
|-
| <code>l</code> || <code>TIME</code> || CPU Time || Total CPU time used by the task
|-
| <code>m</code> || <code>TIME+</code> || CPU Time, hundredths || Total CPU time used by the task in sub-second accuracy
|-
| <code>n</code> || <code>%MEM</code> || Memory usage (RES) || Task's usage of available physical memory
|-
| <code>o</code> || <code>VIRT</code> || Virtual Image (kb) || Task's allocation of virtual memory
|-
| <code>p</code> || <code>SWAP</code> || Swapped size (kb) || Task's swapped memory (resident in swap-file)
|-
| <code>q</code> || <code>RES</code> || Resident size (kb) || Task's unswapped memory (resident in physical memory)
|-
| <code>r</code> || <code>CODE</code> || Code size (kb) || Task's virtual memory used for executable code
|-
| <code>s</code> || <code>DATA</code> || Data+Stack size (kb) || Task's virtual memory not used for executable code
|-
| <code>t</code> || <code>SHR</code> || Shared Mem size (kb) || Task's shared memory
|-
| <code>u</code> || <code>nFLT</code> || Page Fault count || Major/hard page faults that have occurred for the task
|-
| <code>v</code> || <code>nDRT</code> || Dirty Pages count || Task's memory pages that have been modified since the last write to disk, and so must be written out before the physical memory can be reused
|-
| <code>w</code> || <code>S</code> || Process Status ||
* D - Uninterruptible sleep
* R - Running
* S - Sleeping
* T - Traced or Stopped
* Z - Zombie
|-
| <code>x</code> || <code>Command</code> || Command Line || Command used to start the task
|-
| <code>y</code> || <code>WCHAN</code> || Sleeping in Function || Name (or address) of the function that the task is sleeping in
|-
| <code>z</code> || <code>Flags</code> || Task Flags || Task's scheduling flags
|}
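As a worked example of the interactive keys (these bindings apply to the procps <code>top</code> of this vintage; check the built-in help with <code>h</code> if yours differs):

<pre>top     # then, interactively:
M       # sort tasks by memory usage (%MEM)
P       # sort tasks by CPU usage (the default)
f       # choose which columns are displayed
q       # quit</pre>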
==== Identify Process Causing Occasional High System Load ====
If the high load is constant, just fire up <code>top</code> and see if there is a specific process to blame, or if you're stuck waiting for disk or network IO.

If the high load is transient but repetitive, then you'll need to capture the output of <code>top</code> at the right time. The following script will create a log of <code>top</code> output during periods of high load.
<source lang="bash">#!/bin/bash
#
# During high load, write output from top to file.
#
# Simon Strutt - July 2012

LOGFILE="/home/user/load_log.txt"   # Update to a valid folder path
MAXLOAD=100                         # Threshold x100 (ie 100 = load of 1.00), as 'if' can only compare integers

LOAD=`cut -d ' ' -f 1 /proc/loadavg`
LOAD=`echo $LOAD '*100' | bc -l | awk -F '.' '{ print $1; exit; }'`   # Convert load to x100 integer

if [ $LOAD -gt $MAXLOAD ]; then
	echo `date '+%Y-%m-%d %H:%M:%S'` >> ${LOGFILE}
	top -b -n 1 >> ${LOGFILE}
fi</source>
Schedule it with something like the following (update with the correct path to <code>load_log</code>; this example runs at one minute past each hour - to catch shorter spikes, run it every minute with <code>* * * * *</code>)...
<pre>crontab -e
1 * * * * /bin/bash /home/user/load_log</pre>
=== <code> vmstat </code> ===
[http://www.linuxcommand.org/man_pages/vmstat8.html <code>vmstat</code>] is principally used for reporting on virtual memory statistics. For example, <code> vmstat 5 3 </code> creates an output every 5 seconds for 3 iterations:
<pre>user@server:~$ vmstat 5 3
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0  42676 479556  34192 106944    5    5    31  3678   84   89  9  9 75  7
 0  0  42676 479548  34208 106948    0    0     0    10   50  105  6  0 88  5
 0  0  42676 479548  34216 106948    0    0     0    18   37   61  0  0 96  4</pre>

Note that the first line of output contains average/total counts since system start, with subsequent output being for the period since the last line of output.
{|class="vwikitable-equal"
|+ Overview of VMSTAT Metrics
! Section
! colspan="2"| Procs
! colspan="4"| Memory
! colspan="2"| Swap
! colspan="2"| IO
! colspan="2"| System
! colspan="4"| CPU
|-
! Code
! <code> r </code>
! <code> b </code>
! <code> swpd </code>
! <code> free </code>
! <code> buff </code>
! <code> cache </code>
! <code> si </code>
! <code> so </code>
! <code> bi </code>
! <code> bo </code>
! <code> in </code>
! <code> cs </code>
! <code> us </code>
! <code> sy </code>
! <code> id </code>
! <code> wa </code>
|-
! Name
| style="text-align: center;" | Run
| style="text-align: center;" | Block
| style="text-align: center;" | Swap<br>(kB)
| style="text-align: center;" | Free<br>(kB)
| style="text-align: center;" | Buffer<br>(kB)
| style="text-align: center;" | Cache<br>(kB)
| style="text-align: center;" | Swap In<br>(kB/s)
| style="text-align: center;" | Swap Out<br>(kB/s)
| style="text-align: center;" | Blocks In<br>(blocks/s)
| style="text-align: center;" | Blocks Out<br>(blocks/s)
| style="text-align: center;" | Interrupts<br>(/s)
| style="text-align: center;" | Context Switch<br>(/s)
| style="text-align: center;" | User<br>(% time)
| style="text-align: center;" | System<br>(% time)
| style="text-align: center;" | Idle<br>(% time)
| style="text-align: center;" | Wait<br>(% time)
|-
! Description
| Processes waiting for run time
| Processes in uninterruptible sleep (eg waiting for IO)
| Virtual memory used
| Unused memory
| Memory used as buffers
| Memory used as cache
| Memory swapped in from disk
| Memory swapped out to disk
| Blocks in from a storage device
| Blocks out to a storage device
| Interrupts
| Context switches
| CPU running user processes
| CPU running kernel processes
| CPU idle
| CPU waiting for IO
|}
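On systems with more memory the kB columns become hard to read at a glance; if your <code>vmstat</code> build supports it (the procps version does), the <code>-S</code> switch changes the display unit:

<pre>vmstat -S M 5 3     # memory columns reported in megabytes</pre>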
=== <code> mpstat </code> ===
[http://www.linuxcommand.org/man_pages/mpstat1.html <code>mpstat</code>] reports on basic processor stats. It creates a timestamped output, which is useful to leave running on a console (or logged to a file) for when you hear about service performance problems after the fact. A number of the metrics are also provided by <code>[[#vmstat|vmstat]]</code>, but are reported to a greater accuracy by <code>mpstat</code>.

It's not available by default, and comes as part of the [http://sebastien.godard.pagesperso-orange.fr/ <code>sysstat</code>] package (to install, use <code>apt-get install sysstat</code>).

For example, <code> mpstat 5 3 </code> creates an output every 5 seconds for 3 iterations:
<pre>user@server:~# mpstat 5 3
Linux 2.6.32-41-server (server)   25/07/12   _x86_64_   (1 CPU)

11:50:59     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
11:51:04     all    1.00    0.00    0.80    1.60    0.00    0.00    0.00    0.00   96.60
11:51:09     all    4.60    0.00    0.40    2.60    0.00    0.00    0.00    0.00   92.40
11:51:14     all   43.20    0.00    6.00    3.00    0.00    0.00    0.00    0.00   47.80
Average:     all   16.27    0.00    2.40    2.40    0.00    0.00    0.00    0.00   78.93</pre>
{|class="vwikitable-equal"
|+ Overview of MPSTAT Metrics
! Code
! <code> CPU </code>
! <code> %usr </code>
! <code> %nice </code>
! <code> %sys </code>
! <code> %iowait </code>
! <code> %irq </code>
! <code> %soft </code>
! <code> %steal </code>
! <code> %guest </code>
! <code> %idle </code>
|-
! Name
| style="text-align: center;" | CPU No.
| style="text-align: center;" | User<br>(% util)
| style="text-align: center;" | Nice<br>(% util)
| style="text-align: center;" | System<br>(% util)
| style="text-align: center;" | IO Wait<br>(% time)
| style="text-align: center;" | Hard IRQ<br>(% time)
| style="text-align: center;" | Soft IRQ<br>(% time)
| style="text-align: center;" | Steal<br>(% time)
| style="text-align: center;" | Guest<br>(% time)
| style="text-align: center;" | Idle<br>(% time)
|-
! Description
| CPU number (or ''ALL'')<br>Set with the <code>-P <n></code> option switch (see the example below the table)
| CPU running user processes
| CPU running nice (adjusted priority) user processes
| CPU running kernel processes (excludes [[Acronyms#I|IRQ]]s)
| CPU waiting for (disk) IO
| CPU servicing hardware interrupts
| CPU servicing software interrupts
| Virtual CPU wait due to CPU busy with other [[Acronyms#V|vCPU]]
| CPU servicing vCPU(s)
| CPU idle
|}
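For example, to break the output out per processor rather than the ''all'' summary:

<pre>mpstat -P ALL 5 3</pre>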
=== <code> iostat </code> ===
[http://www.linuxcommand.org/man_pages/iostat1.html <code>iostat</code>] reports on IO (and CPU) stats.

It's not available by default, and comes as part of the [http://sebastien.godard.pagesperso-orange.fr/ <code>sysstat</code>] package (to install, use <code>apt-get install sysstat</code>).

IO stats can be displayed either by device (the default; extra metrics with the <code>-x</code> switch) or by partition (<code>-p</code> switch). Note that the first line of output contains average/total counts since system start, with subsequent output being for the period since the last line of output.
Device stats output...
<pre>root@servername:~# iostat -x 5 3
Linux 2.6.32-41-server (servername)   25/07/12   _x86_64_   (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          11.56    0.54    2.17    6.67    0.00   79.06

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda              18.89     9.63    6.36    8.83   367.22   146.02    33.78     0.34   22.18   5.73   8.70
dm-0              0.00     0.00    3.16   10.68   190.37    86.53    20.01     0.62   44.79   2.18   3.02
dm-1              0.00     0.00   22.11    7.44   176.85    59.48     8.00     0.71   23.92   2.21   6.52

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          45.18    0.00    5.02    0.40    0.00   49.40

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     4.82    0.20    3.82     1.61    65.86    16.80     0.02    4.50   4.00   1.61
dm-0              0.00     0.00    0.20    8.23     1.61    65.86     8.00     0.07    7.86   1.90   1.61
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          95.80    0.00    4.20    0.00    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    11.80    0.00    8.20     0.00   156.80    19.12     0.06    7.07   0.24   0.20
dm-0              0.00     0.00    0.00   19.60     0.00   156.80     8.00     0.18    8.98   0.10   0.20
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00</pre>
Partition stats output...
<pre>root@servername:~# iostat -t -p 5 3
Linux 2.6.32-41-server (servername)   30/07/12   _x86_64_   (1 CPU)

30/07/12 12:05:15
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.33    0.32    0.12    0.27    0.00   98.96

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               1.12        13.57        14.71     721218     782038
sda1              0.00         0.02         0.00        804         14
sda2              0.00         0.00         0.00          4          0
sda5              0.91        13.54        14.71     719994     782024
dm-0              2.23        13.49        14.45     716850     768240
dm-1              0.04         0.05         0.26       2632      13784

30/07/12 12:05:20
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               1.20         0.00        14.40          0         72
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
sda5              0.80         0.00        14.40          0         72
dm-0              1.80         0.00        14.40          0         72
dm-1              0.00         0.00         0.00          0          0

30/07/12 12:05:25
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
sda5              0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0</pre>
{|class="vwikitable-equal"
|+ Overview of IOSTAT Device Metrics
! Stats
! colspan="11"| Device IO Stats
! colspan="5"| Partition IO Stats
|-
! Code
! <code> rrqm/s </code>
! <code> wrqm/s </code>
! <code> r/s </code>
! <code> w/s </code>
! <code> rsec/s </code>
! <code> wsec/s </code>
! <code> avgrq-sz </code>
! <code> avgqu-sz </code>
! <code> await </code>
! <code> svctm </code>
! <code> %util </code>
! <code> tps </code>
! <code> Blk_read/s </code>
! <code> Blk_wrtn/s </code>
! <code> Blk_read </code>
! <code> Blk_wrtn </code>
|-
! Name
| style="text-align: center;" | Read Merge<br>(/s)
| style="text-align: center;" | Write Merge<br>(/s)
| style="text-align: center;" | Read<br>(/s)
| style="text-align: center;" | Write<br>(/s)
| style="text-align: center;" | Read<br>(sectors/s)
| style="text-align: center;" | Write<br>(sectors/s)
| style="text-align: center;" | Av. Req. Size<br>(sectors)
| style="text-align: center;" | Av. Queue Len.<br>(requests)
| style="text-align: center;" | Av. Wait Time<br>(msec)
| style="text-align: center;" | Av. Service Time<br>(msec)
| style="text-align: center;" | Utilisation<br>(% CPU Time)
| style="text-align: center;" | Transfers<br>(/s)
| style="text-align: center;" | Read<br>(blocks/s)
| style="text-align: center;" | Write<br>(blocks/s)
| style="text-align: center;" | Read<br>(blocks)
| style="text-align: center;" | Write<br>(blocks)
|-
! Description
| Read requests merged
| Write requests merged
| Read requests
| Write requests
| Sector reads
| Sector writes
| Average read/write request size
| Average request queue length
| Average time for requests to be served, including time spent queued
| Average time to service requests
| Bandwidth utilisation / device saturation
| IO transfer rate (TPS - Transfers Per Second)
| Data read
| Data written
| Data read
| Data written
|}
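To watch a single device without the CPU summary, the <code>-d</code> switch limits <code>iostat</code> to the device report, and a device name can be given (<code>sda</code> here is just an example):

<pre>iostat -x -d sda 5</pre>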
When using the above tools you may be presented with disk/device names of <code>dm-0</code>, <code>dm-1</code>, etc., which won't mean much. These are LVM logical devices; to understand what they map to, use
<pre>lvdisplay|awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'</pre>
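Alternatively, assuming the <code>dmsetup</code> utility (part of the device-mapper tooling that LVM relies on) is installed, the mapping can be listed directly; each line shows the logical volume name followed by its (major, minor) device numbers, where the minor number is the ''N'' in <code>dm-N</code>:

<pre>dmsetup ls</pre>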
== Network ==
# Use <code> dmesg | grep -i eth </code> to ascertain what's been detected at boot time
# Assuming it states that, say, <code>eth0</code> has been changed to <code>eth1</code>, then just update the <code>/etc/network/interfaces</code> file
# Alternatively, force the ''new'' NIC to be <code>eth0</code> by editing the <code>/etc/udev/rules.d/70-persistent-net.rules</code> file (see the example below)
#* You'll need to reboot the server for changes to take effect
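A minimal sketch of what an entry in <code>70-persistent-net.rules</code> looks like (the MAC address below is a placeholder - substitute the address of the NIC you want pinned to <code>eth0</code>):

<pre># PCI device (hypothetical example)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"</pre>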
== File System ==
# The arrays should now be being sync'ed; check progress by monitoring <code>/proc/mdstat</code> (or watch it continuously - see below)
#* <code> more /proc/mdstat </code>
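To follow the resync continuously rather than re-running <code>more</code>, the standard <code>watch</code> utility works well:

<pre>watch -n 5 cat /proc/mdstat</pre>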
=== Recover Deleted Files ===
Ideally you should recover files to a separate disk partition from the one you are attempting to recover from. This procedure should help to recover lost or corrupted files from a filesystem using [http://manpages.ubuntu.com/manpages/lucid/man1/scalpel.1.html Scalpel], a data recovery utility built on the foundation of [http://foremost.sourceforge.net/ Foremost].

# Install Scalpel
#* <code> apt-get install scalpel </code>
# Update the config file to search for the lost files (uncomment/add as necessary)
#* <code> /etc/scalpel/scalpel.conf </code>
#* For PHP files (not embedded in HTML) use <code> php n 50000 <?php ?> </code>
# Create a folder for the recovered files to go to
#* <code> mkdir /tmp/recov </code>
# Launch Scalpel to trawl the disk image (will take ages, and the source disk will be under high load)
#* <code> scalpel /dev/mapper/svr-root -o /tmp/recov/ </code>
# Search through the recovered files to find the data of interest
#* <code> grep -R "string you want to find" /tmp/recov/* </code>
== SSH ==
* '''The following packages have been kept back'''
** The package manager can hold back updates because they will cause conflicts, or sometimes because they're major kernel updates. Running <code>aptitude safe-upgrade</code> normally seems to force kernel updates through.
=== Add EOL Repository ===
Once a version of Ubuntu has gone End Of Life (EOL), you can't install software packages from the normal repository. On trying, you'll get an error similar to
* <code>Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/main/s/<package> 404 Not Found</code>

The repository is still available, but via a different URL - http://old-releases.ubuntu.com

Edit <code>/etc/apt/sources.list</code> and add the following (replace hardy with your flavour of Ubuntu). Remove the existing ubuntu repositories (they'll just cause errors as they're inaccessible). Then refresh the package index as shown below the listing.

<pre>
# Hardy EOL
# Required
deb http://old-releases.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ hardy-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
</pre>
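After saving the file, refresh the package index so <code>apt</code> picks up the old-releases entries:

<pre>apt-get update</pre>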
== Reboot Required? ==
Ubuntu signals that a reboot is needed (typically after a kernel update) by creating the file <code>/var/run/reboot-required</code>.

To see which packages caused this to be set, inspect the contents of...
 /var/run/reboot-required.pkgs
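A quick one-liner combining the check and the listing (assuming the standard flag-file locations mentioned above):

<pre>[ -f /var/run/reboot-required ] && cat /var/run/reboot-required.pkgs</pre>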
== Firewall ==
=== ERROR: problem running ufw-init ===
If on starting or reloading <code>ufw</code> you receive this error, it's likely that you have a configuration problem. This is especially likely if you've needed to edit <code>ufw</code>'s config files directly.

# Ensure that <code>ufw</code> is running
#* <code> ufw enable </code>
# Force the config to be reloaded
#* <code> /lib/ufw/ufw-init force-reload </code>
# Or, if <code>ufw</code> failed to start, use
#* <code> /lib/ufw/ufw-init start </code>

Doing the above should trigger the error, and present a better description of what the problem is.

See http://ubuntuforums.org/showthread.php?t=1660916 for further info.
[[Category:Ubuntu]]
[[Category:Troubleshooting]]
[[Category:Bash]]