
Wednesday, September 23, 2009

introducing maatkit - parallel dump


Maatkit is a set of command-line tools for MySQL, open-source software developed by Percona.
The same company also develops the XtraDB storage engine, the XtraBackup hot-backup tool, and an enhanced, patched build of MySQL (Percona Server).

The toolkit includes (a sample invocation follows the list):
  • mk-archiver  purge or archive rows from a table into another table or a file
  • mk-audit  analyze MySQL configuration, schema, and operations and generate a report
  • mk-checksum-filter  a filter for mk-table-checksum output
  • mk-deadlock-logger  record InnoDB deadlock information
  • mk-duplicate-key-checker  find duplicate or redundant foreign keys and indexes
  • mk-fifo-split  split a file into pieces and feed them out through a FIFO pipe (is this useful?)
  • mk-find  find tables matching given rules, then execute actions on them
  • mk-heartbeat  monitor replication lag between databases
  • mk-log-player  split and replay slow query logs
  • mk-parallel-dump  multi-threaded dump
  • mk-parallel-restore  multi-threaded restore
  • mk-profile-compact  compact the output of mk-query-profiler
  • mk-query-digest  analyze query logs
  • mk-query-profiler  a query performance profiler
  • mk-show-grants  display user privileges
  • mk-slave-delay  keep a slave a set amount of time behind its master
  • mk-slave-find  find and display the tree hierarchy of slaves
  • mk-slave-move  move a slave within the replication hierarchy (what is this thing?)
  • mk-slave-prefetch  run SELECT queries on a slave so the data is pre-fetched into memory
  • mk-slave-restart  watch a slave for errors and restart replication
  • mk-table-checksum  quickly check whether two tables hold identical data; useful for verifying consistency between master and slave
  • mk-table-sync  find and fix data differences between two tables on different servers
  • mk-upgrade  compare the results of running the same statements against two databases
  • mk-visual-explain  display execution plans as a tree
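To get a feel for the shared command-line style, here is a quick sketch of two of these tools (assuming they take the same standard connection options, such as --socket, used in the tests below; the EXPLAIN query refers to the test table built later in this post):

# print user privileges as executable GRANT statements
mk-show-grants --socket=/var/lib/mysql/data_3306/mysql.sock

# render an EXPLAIN plan as a tree
mysql --socket=/var/lib/mysql/data_3306/mysql.sock -e "explain select * from dbtest.t1 where name='x'" | mk-visual-explain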



1. Installation
Download the RPM package from http://maatkit.googlecode.com/ and install it:
yum -y install perl-TermReadKey.x86_64
rpm -Uvh http://maatkit.googlecode.com/files/maatkit-4623-1.noarch.rpm

The package depends on the following:
# rpm -q --requires maatkit
/usr/bin/env
perl(DBD::mysql) >= 1.0
perl(DBI)
perl(DBI) >= 1.13
perl(Data::Dumper)
perl(Digest::MD5)
perl(English)
perl(Exporter)
perl(File::Basename)
perl(File::Find)
perl(File::Spec)
perl(File::Temp)
perl(Getopt::Long)
perl(IO::File)
perl(List::Util)
perl(POSIX)
perl(Socket)
perl(Term::ReadKey) >= 2.10
perl(Time::HiRes)
perl(Time::Local)
perl(constant)
perl(sigtrap)
perl(strict)
perl(warnings)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1


2. Parallel dump with mk-parallel-dump and restore with mk-parallel-restore
For usage, see:
mk-parallel-dump - Dump sets of MySQL tables in parallel.
mk-parallel-restore - Load files into MySQL in parallel.

1) Create a test table
Create a test table with 3 million rows:
mysql --socket=/var/lib/mysql/data_3306/mysql.sock

set autocommit=0;
drop database if exists dbtest;
create database dbtest;
use dbtest;
drop table if exists t1;
create table t1 (
  id int(9) not null auto_increment,
  name varchar(20) not null,
  age int(3) not null,
  notes varchar(100),
  primary key (id),
  index ind_t1_name (name)
);
truncate table t1;
insert into t1 (name, age, notes)
select conv(floor(rand() * 99999999999999), 10, 36), floor(1+rand()*(100-1)), md5(rand())
  from information_schema.COLUMNS a
       , information_schema.COLUMNS b
       , information_schema.COLUMNS c
 limit 3000000;
commit;
The table comes out at a bit over 300 MB:
mysql> insert into t1 (name, age, notes)
    -> select conv(floor(rand() * 99999999999999), 10, 36), floor(1+rand()*(100-1)), md5(rand())
    ->   from information_schema.COLUMNS a
    ->        , information_schema.COLUMNS b
    ->        , information_schema.COLUMNS c
    ->  limit 3000000;
Query OK, 3000000 rows affected (3 min 30.61 sec)
Records: 3000000  Duplicates: 0  Warnings: 0

mysql> commit;
Query OK, 0 rows affected (0.07 sec)
mysql> show table status like 't1';
+------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+----------------------+
| Name | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time | Check_time | Collation       | Checksum | Create_options | Comment              |
+------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+----------------------+
| t1   | InnoDB |      10 | Compact    | 3000249 |             76 |   228294656 |               0 |     86654976 |         0 |        6000000 | 2009-09-23 05:26:34 | NULL        | NULL       | utf8_general_ci |     NULL |                | InnoDB free: 4096 kB |
+------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+----------------------+
1 row in set (0.08 sec)

mysql> system ls -l /var/lib/mysql/data_3306/dbtest
total 319820
-rw-rw---- 1 mysql mysql        61 Sep 23 04:17 db.opt
-rw-rw---- 1 mysql mysql      8646 Sep 23 05:26 t1.frm
-rw-rw---- 1 mysql mysql 327155712 Sep 23 05:30 t1.ibd
mysql>

2) Dump the table with mk-parallel-dump and with mysqldump and compare
Dump with mysqldump:
# mkdir -p $HOME/backup
# cd $HOME/backup && rm -rf *
# time mysqldump --socket=/var/lib/mysql/data_3306/mysql.sock --opt dbtest t1 >dbback-dbtest-t1.sql

real    0m11.316s
user    0m2.348s
sys     0m0.472s
# ls -l dbback-dbtest-t1.sql
-rw-r--r-- 1 root root 179090589 Sep 23 05:31 dbback-dbtest-t1.sql

Dump the table with mk-parallel-dump, writing the files under $HOME/backup/pdump1, with no compression and without recording the binlog position:
# time mk-parallel-dump --socket=/var/lib/mysql/data_3306/mysql.sock --base-dir=$HOME/backup/pdump1 --no-gzip --nobin-log-position --tables="dbtest.t1"
     default:              1 tables,     1 chunks,     1 successes,  0 failures,  11.40 wall-clock time,  11.31 dump time

real    0m11.608s
user    0m2.176s
sys     0m0.556s
# find pdump1 -ls
7162198    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:33 pdump1
7162199    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:33 pdump1/default
7162200    4 drwxr-xr-x   2 root     root         4096 Sep 23 05:33 pdump1/default/dbtest
7162201 175076 -rw-r--r--   1 root     root     179096039 Sep 23 05:33 pdump1/default/dbtest/t1.000000.sql

Dump again, this time splitting the output into one file per (approximately) one million rows:
# time mk-parallel-dump --socket=/var/lib/mysql/data_3306/mysql.sock --base-dir=$HOME/backup/pdump2 --no-gzip --nobin-log-position --tables="dbtest.t1" --chunk-size=1000000
     default:              1 tables,     4 chunks,     4 successes,  0 failures,  10.36 wall-clock time,  15.92 dump time

real    0m10.509s
user    0m2.580s
sys     0m0.560s
# find pdump2 -ls
7162202    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:33 pdump2
7162203    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:33 pdump2/default
7162204    4 drwxr-xr-x   2 root     root         4096 Sep 23 05:33 pdump2/default/dbtest
7162205    4 -rw-r--r--   1 root     root          101 Sep 23 05:33 pdump2/default/dbtest/t1.chunks
7162208 58544 -rw-r--r--   1 root     root     59879876 Sep 23 05:33 pdump2/default/dbtest/t1.000001.sql
7162206   16 -rw-r--r--   1 root     root        16365 Sep 23 05:33 pdump2/default/dbtest/t1.000003.sql
7162209 58000 -rw-r--r--   1 root     root     59324125 Sep 23 05:33 pdump2/default/dbtest/t1.000000.sql
7162207 58544 -rw-r--r--   1 root     root     59880064 Sep 23 05:33 pdump2/default/dbtest/t1.000002.sql
# cat pdump2/default/dbtest/t1.chunks
`id` < 1999835
`id` >= 1999835 AND `id` < 3999669
`id` >= 3999669 AND `id` < 5999503
`id` >= 5999503
The .chunks file records the chunking boundaries. They appear to be derived from the range of the chunk column (here the primary key id), assuming a roughly even distribution, which is why the per-chunk row counts, and hence file sizes, are only approximate.

Dump again, this time with 4 threads dumping concurrently (the default is 2 threads when not specified):
# time mk-parallel-dump --socket=/var/lib/mysql/data_3306/mysql.sock --base-dir=$HOME/backup/pdump3 --no-gzip --nobin-log-position --tables="dbtest.t1" --chunk-size=1000000 --threads=4
     default:              1 tables,     4 chunks,     4 successes,  0 failures,   9.37 wall-clock time,  25.29 dump time

real    0m9.529s
user    0m2.572s
sys     0m0.516s
# find pdump3 -ls
7359077    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:34 pdump3
7359078    4 drwxr-xr-x   3 root     root         4096 Sep 23 05:34 pdump3/default
7359079    4 drwxr-xr-x   2 root     root         4096 Sep 23 05:34 pdump3/default/dbtest
7359080    4 -rw-r--r--   1 root     root          101 Sep 23 05:34 pdump3/default/dbtest/t1.chunks
7359084 58544 -rw-r--r--   1 root     root     59879876 Sep 23 05:34 pdump3/default/dbtest/t1.000001.sql
7359081   16 -rw-r--r--   1 root     root        16365 Sep 23 05:34 pdump3/default/dbtest/t1.000003.sql
7359083 58000 -rw-r--r--   1 root     root     59324125 Sep 23 05:34 pdump3/default/dbtest/t1.000000.sql
7359082 58544 -rw-r--r--   1 root     root     59880064 Sep 23 05:34 pdump3/default/dbtest/t1.000002.sql

Dump speeds are all about the same, with no significant difference: this is a single-CPU system and only one table is being dumped, and the chunking and extra threads may even add some overhead.
On a multi-core, multi-CPU system dumping several tables, mk-parallel-dump should be faster.

3) Compare loading with mk-parallel-restore and the mysql client
Load with mysql:
# time mysql --socket=/var/lib/mysql/data_3306/mysql.sock dbtest <dbback-dbtest-t1.sql

real    3m16.760s
user    0m1.672s
sys     0m0.156s

Load with mk-parallel-restore:
# time mk-parallel-restore --socket=/var/lib/mysql/data_3306/mysql.sock $HOME/backup/pdump1
    1 tables,     1 files,     1 successes,  0 failures, 199.75 wall-clock time, 199.75 load time

real    3m19.910s
user    0m0.232s
sys     0m0.136s

Load multiple files with mk-parallel-restore:
# mysql --socket=/var/lib/mysql/data_3306/mysql.sock -e "drop table dbtest.t1;"
# time mk-parallel-restore --socket=/var/lib/mysql/data_3306/mysql.sock $HOME/backup/pdump2
    1 tables,     4 files,     1 successes,  0 failures, 196.55 wall-clock time, 196.54 load time

real    3m16.653s
user    0m0.268s
sys     0m0.148s

Load multiple files with mk-parallel-restore, using 4 threads:
# mysql --socket=/var/lib/mysql/data_3306/mysql.sock -e "drop table dbtest.t1;"
# time mk-parallel-restore --socket=/var/lib/mysql/data_3306/mysql.sock $HOME/backup/pdump3 --threads=4
    1 tables,     4 files,     1 successes,  0 failures, 194.19 wall-clock time, 194.19 load time

real    3m14.606s
user    0m0.204s
sys     0m0.164s

Load speeds are also all about the same.


Overall it feels pretty unremarkable; I'll try the other tools some other time. The replication and log/statement analysis tools are probably more useful.



External links:
Tools for MySQL - Maatkit makes MySQL - easier to manage.
maatkit - A toolkit that provides advanced functionality for MySQL
mysql-parallel-dump test


-fin-

Monday, September 21, 2009

DML Error Logging


Question: In a stored procedure, an UPDATE modifies 200,000 rows. Partway through, the update fails because a value is too long for some column. In the exception handler, how can you identify which row caused the failure?

The error doesn't show which row overflowed. Can you pinpoint the offending row?
SQL> CREATE OR REPLACE PROCEDURE test111 AS
  2 
  3    v_sqlcode number;
  4    v_sqlerrm varchar2(100);
  5  BEGIN
  6 
  7 
  8  update test set aa=aa||'string' ;
  9      commit;
 10  EXCEPTION
 11      when others then
 12      v_sqlcode:=Sqlcode;
 13      v_sqlerrm:=Sqlerrm;
 14       rollback;
 15  DBMS_OUTPUT.PUT_LINE('AAA' || SQLCODE || SQLERRM || 'START');
 16 
 17  END;
 18  /
 
Procedure created.
 
SQL> set serveroutput on
SQL> exec test111;
AAA-12899ORA-12899: value too large for column "HR"."TEST"."AA" (actual: 11,
maximum: 10)START

Answer: DML Error Logging can do what you need.

Normally, when a SQL statement hits an error on even a single row, the whole statement stops immediately and its changes are rolled back, so every modification is lost; this is especially painful when a single statement batch-processes a large number of rows.
10gR2 added DML Error Logging: when certain errors occur while a SQL statement runs, instead of aborting, Oracle automatically records the error information in a designated table and carries on processing the remaining rows.

1. Example
Create a test table:
set serveroutput on size unlimited
set pages 50000 line 130
drop table t purge;
drop table err$_t purge;
create table t(a number(1) primary key, b char);

Next, a table is needed to record the errors; create it automatically with Oracle's DBMS_ERRLOG package, or by hand:
exec dbms_errlog.create_error_log('t');
The default name is ERR$_ followed by the first 25 characters of the base table's name.
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
T                              TABLE
ERR$_T                         TABLE

SQL> desc ERR$_T
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- ------------------------------------------------
 ORA_ERR_NUMBER$                                                                  NUMBER
 ORA_ERR_MESG$                                                                    VARCHAR2(2000)
 ORA_ERR_ROWID$                                                                   ROWID
 ORA_ERR_OPTYP$                                                                   VARCHAR2(2)
 ORA_ERR_TAG$                                                                     VARCHAR2(2000)
 A                                                                                VARCHAR2(4000)
 B                                                                                VARCHAR2(4000)

SQL>
ORA_ERR_NUMBER$  the error number
ORA_ERR_MESG$  the error message
ORA_ERR_ROWID$  the rowid of the failing row (only for update and delete)
ORA_ERR_OPTYP$  the operation type: I insert, U update, D delete
ORA_ERR_TAG$  a user-defined tag
(A manually created error table must include all of the columns above.)
The last two columns mirror the base table and are typed varchar2(4000); they hold the failing row's values (see Table 15-2, Error Logging Table Column Data Types, for the type mapping).
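If you go the manual route, a minimal sketch matching the generated table shown above for t would be:

create table err$_t (
  ora_err_number$ number,
  ora_err_mesg$   varchar2(2000),
  ora_err_rowid$  rowid,
  ora_err_optyp$  varchar2(2),
  ora_err_tag$    varchar2(2000),
  a               varchar2(4000),
  b               varchar2(4000)
);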

Insert test data the plain way first:
SQL> insert into t (a) select level from dual connect by level <= 12;
insert into t (a) select level from dual connect by level <= 12
                               *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column


Column a holds only a single digit, so it cannot exceed 9; inserting 10 raised the error.

With the LOG ERRORS clause added:
SQL> insert into t (a) select level from dual connect by level <= 12 log errors reject limit unlimited;

9 rows created.

SQL> select * from t;

         A B
---------- -
         1
         2
         3
         4
         5
         6
         7
         8
         9

9 rows selected.

SQL> col ORA_ERR_MESG$ for a50
SQL> col ORA_ERR_TAG$ for a10
SQL> col ORA_ERR_ROWID$ for a10
SQL> col A for a10
SQL> col b for a10
SQL> select * from err$_t;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                      ORA_ERR_RO OR ORA_ERR_TA A          B
--------------- -------------------------------------------------- ---------- -- ---------- ---------- ----------
           1438 ORA-01438: value larger than specified precision a            I             10
                llowed for this column

           1438 ORA-01438: value larger than specified precision a            I             11
                llowed for this column

           1438 ORA-01438: value larger than specified precision a            I             12
                llowed for this column


The statement succeeded, and the errors were logged in err$_t.



2. Syntax
The error logging syntax is:
LOG ERRORS [INTO [schema.]table]
[ (simple_expression) ]
[ REJECT LIMIT {integer|UNLIMITED} ]

The INTO clause is optional; the default table name is err$_ followed by the first 25 characters of the base table's name.
simple_expression is a string-valued expression that is inserted as a tag into the ORA_ERR_TAG$ column.
REJECT LIMIT is the maximum number of errors that may be logged before the statement aborts with an exception; with 0, no errors are supposed to be logged (but see below).
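Putting the pieces together, here is a sketch of the full form against the same tables (the tag string is arbitrary and hypothetical):

insert into t (a)
select level from dual connect by level <= 12
log errors into err$_t ('demo run')
reject limit unlimited;

The tag 'demo run' would appear in ORA_ERR_TAG$ on every logged row, which helps tell runs apart when several batches share one error table.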


3. The REJECT LIMIT clause
Results on 10.2.0.4 differ somewhat from what the 10gR2 documentation says.

Even in the default case an error is logged, but only one:
SQL> truncate table t;

Table truncated.

SQL> truncate table err$_t;

Table truncated.

SQL> insert into t (a) select level from dual connect by level <= 13 log errors;
insert into t (a) select level from dual connect by level <= 13 log errors
                               *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column


SQL> select * from err$_t;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                                          ORA_ERR_RO OR ORA_ERR_TA A
--------------- ---------------------------------------------------------------------- ---------- -- ---------- ----------
B
----------
           1438 ORA-01438: value larger than specified precision allowed for this colu            I             10
                mn



SQL>

If a limit of n is specified, n+1 errors are recorded:
SQL> truncate table t;

Table truncated.

SQL> truncate table err$_t;

Table truncated.

SQL> insert into t (a) select level from dual connect by level <= 13 log errors reject limit 2;
insert into t (a) select level from dual connect by level <= 13 log errors reject limit 2
                               *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column


SQL> select * from err$_t;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                                          ORA_ERR_RO OR ORA_ERR_TA A
--------------- ---------------------------------------------------------------------- ---------- -- ---------- ----------
B
----------
           1438 ORA-01438: value larger than specified precision allowed for this colu            I             10
                mn


           1438 ORA-01438: value larger than specified precision allowed for this colu            I             11
                mn


           1438 ORA-01438: value larger than specified precision allowed for this colu            I             12
                mn



SQL>

The 11gR1 documentation appears to have corrected this:
This subclause indicates the maximum number of errors that can be encountered before the INSERT statement terminates and rolls back. You can also specify UNLIMITED. The default reject limit is zero, which means that upon encountering the first error, the error is logged and the statement rolls back. For parallel DML operations, the reject limit is applied to each parallel server.

The 11gR2 documentation:
If REJECT LIMIT X had been specified, the statement would have failed with the error message of error X=1. The error message can be different for different reject limits. In the case of a failing statement, only the DML statement is rolled back, not the insertion into the DML error logging table. The error logging table will contain X+1 rows.


4. The error logging table
The error logging table is never cleared automatically, and it is written in an autonomous transaction: if REJECT LIMIT is exceeded, the DML statement rolls back but the rows in the error logging table do not.
The error logging table may belong to a different user than the one running the DML statement; the executing user only needs INSERT privilege on it.
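A minimal sketch of the cross-schema case (logger and app are hypothetical users, not part of the example above):

-- as LOGGER, the owner of the error logging table
grant insert on logger.err$_t to app;

-- as APP, direct the errors into LOGGER's table
insert into t (a)
select level from dual connect by level <= 12
log errors into logger.err$_t reject limit unlimited;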


5. Restrictions
Errors are logged for the following:
  • a column value that is too large
  • violation of a NOT NULL, unique, referential, or check constraint
  • an exception raised by a trigger
  • a data type conversion error
  • a partition mapping error
  • certain MERGE operation errors (e.g. ORA-30926: Unable to get a stable set of rows for MERGE operation)

Errors are not logged for the following:
  • violation of a deferred constraint
  • out-of-space errors
  • unique constraint or unique index violations raised by direct-path insert operations (INSERT or MERGE)
  • unique constraint or unique index violations raised by update operations (UPDATE or MERGE)



External links:
DML Error Logging
Error Logging and Handling Mechanisms
Inserting Data with DML Error Logging
38 DBMS_ERRLOG

Faster Batch Processing By Mark Rittman
10gR2 New Feature: DML Error Logging
DML Error Logging in Oracle 10g Database Release 2
Oracle DBMS_ERRLOG
Oracle DML Error Logging
dml error logging in oracle 10g release 2



-fin-

Thursday, September 3, 2009

no login prompt in vm console


Logging in to a Xen virtual machine with the xm console command shows only the boot-time messages; no login prompt appears, and pressing Enter repeatedly gets no response:
Mounting other filesystems:  [  OK  ]
Starting sshd: [  OK  ]
Starting crond: [  OK  ]
Starting anacron: [  OK  ]
Starting atd: [  OK  ]
Starting Avahi daemon... [  OK  ]
Starting HAL daemon: [  OK  ]
Starting puppet: [  OK  ]
Starting smartd: [  OK  ]
INIT: version 2.86 reloading

As the Console handling document instructs,
add an xvc0 line to /etc/inittab and comment out the other tty entries:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
#1:2345:respawn:/sbin/mingetty tty1
#2:2345:respawn:/sbin/mingetty tty2
#3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6

Add one line to /etc/securetty:
xvc0

After editing /etc/inittab, make the change take effect with:
init q

Connect to the VM console again, and the login prompt appears:
# xm console 14
INIT: Sending processes the TERM signal

CentOS release 5.3 (Final)
Kernel 2.6.18-128.7.1.el5xen on an x86_64

vm02.zone101.bj4.company.com login:
CentOS release 5.3 (Final)
Kernel 2.6.18-128.7.1.el5xen on an x86_64

vm02.zone101.bj4.company.com login:

Why these entries were not added automatically is unclear; it may be a bug:
Bug 463855 - No login prompt after headless serial console installation with virtualization
Bug 448429 - xm console requires DomU getty modification to point to xvc0


-fin-

Wednesday, September 2, 2009

multitail


multitail is a small utility. Compared with tail and watch, its advantage is that it can follow several files or command outputs at the same time, merge multiple files or commands into a single window, colorize output with configurable color schemes, and filter or highlight lines matching patterns.
The latest version is v5.2.2, released 2008-05-19.
Install it by building from source, or directly from the rpmforge RPM package.
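For instance, assuming the rpmforge repository is already configured (an assumption; otherwise build from the source tarball):

# yum -y install multitail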

Usage:
multitail file1 file2 ...

For example:
multitail /var/log/messages /var/log/httpd/access_log
[screenshot: multitail-simple-090902]

Something more complex: buffer 10000 lines, split the window into 2 columns, reverse-highlight text matching ERR|dhcpd|twitter, follow 2 log files, run an iostat command, and rerun an ls command every 5 seconds:
multitail -M 10000 -s 2 -EC "ERR|dhcpd|twitter" \
/var/log/messages \
/u01/blurlog/mysql/bqa2/cloud1/db01/logs/general.log \
-l "iostat 3" \
-R 5 -l "ls -l /var/log/messages"
[screenshot: multitail-advanced-090902]

Watching in the background reveals that it really just invokes tail commands:
# ps -ef|grep 3454
root      3454 32310  0 10:45 pts/6    00:00:00 multitail -M 10000 -s 2 -EC ERR|dhcpd|twitter /var/log/messages /u01/blurlog/mysql/bqa2/cloud1/db01/logs/general.log -l iostat 3 -R 5 -l ls -l /var/log/messages
root      3455  3454  0 10:45 pts/6    00:00:00 tail --follow=name -n 62 /var/log/messages
root      3456  3454  0 10:45 pts/6    00:00:00 tail --follow=name -n 62 /u01/blurlog/mysql/bqa2/cloud1/db01/logs/general.log
root      3457  3454  0 10:45 pts/6    00:00:00 iostat 3
root      7440  3454  0 10:50 pts/6    00:00:00 multitail -M 10000 -s 2 -EC ERR|dhcpd|twitter /var/log/messages /u01/blurlog/mysql/bqa2/cloud1/db01/logs/general.log -l iostat 3 -R 5 -l ls -l /var/log/messages
root      7442  2641  0 10:50 pts/65   00:00:00 grep 3454
The program is not very stable and exits abnormally quite often. After a crash, the background tail commands must be killed by hand.
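A minimal cleanup sketch (it assumes nothing else on the box is running tail --follow=name; adjust the pattern if that's not true):

# kill the tail processes left behind by a crashed multitail
pkill -f 'tail --follow=name'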


External links:
MultiTail
Viewing many files with multitail - Speaking UNIX: 10 great tools for any UNIX system
Multitail
Monitoring multiple files at once with MultiTail


-fin-