Installing from source:
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
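As a quick sanity check (not part of the original steps), confirm the new server answers queries:
/usr/local/pgsql/bin/psql test -c 'SELECT version();'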
Edit /usr/local/pgsql/data/postgresql.conf with vi:
## added by maguirun.
listen_addresses = '*' # what IP address(es) to listen on;
## Added by maguirun.
max_wal_senders = 1
wal_level = hot_standby
archive_mode = on
archive_command = 'cd ./'    # dummy command that always succeeds; no real archiving
wal_keep_segments = 64
________________________________________________________________________________________
PostgreSQL hot standby (streaming replication)
After the 9.x releases, PG's hot backup gained a new feature: read/write splitting via Stream Replication. It is a classic high-availability application of PG, what we traditionally call Hot-Standby, like Oracle's DG, MSSQL's mirroring, or MySQL's read/write splitting. Compared with those other databases it has both similarities and differences, discussed later. Below are the installation steps and a test of PG streaming replication.
Environment:
VMware Workstation 8.0
OS: CentOS 6.2
Database: PostgreSQL 9.1.3
Two virtual hosts:
MASTER: 192.168.2.130
SLAVE: 192.168.2.129
Environment variables:
[postgres@localhost ~]$ echo $PGHOME
/home/postgres
[postgres@localhost ~]$ echo $PGDATA
/database/pgdata
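These are presumably set in the postgres user's ~/.bash_profile, along these lines (a sketch, not the author's actual file):
export PGHOME=/home/postgres
export PGDATA=/database/pgdata
export PATH=$PGHOME/bin:$PATH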
Step 1: Install the PG database
Omitted; on the slave it is enough to install the binaries without initializing a database cluster.
Step 2: Create the streaming-replication user
Run on the master: CREATE USER repuser replication LOGIN CONNECTION LIMIT 2 ENCRYPTED PASSWORD 'repuser';
Step 3: Configure the master's access file pg_hba.conf
Add one line:
host replication repuser 192.168.2.129/16 md5
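pg_hba.conf is re-read only on a reload, so after editing it (a step implied but not shown in the original notes):
pg_ctl -D $PGDATA reload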
Step 4: Configure the master's postgresql.conf
max_wal_senders = 1
wal_level = hot_standby
archive_mode = on
archive_command = 'cd ./'
wal_keep_segments = 64
Notes: max_wal_senders is the number of slave nodes; set it to however many slaves you have.
wal_level sets the write-ahead log detail level; for streaming replication it must be set to hot_standby.
archive_command = 'cd ./' is a dummy command that always succeeds; it satisfies archive_mode = on without archiving anything.
wal_keep_segments (default 0) is the minimum number of WAL segment files kept under pg_xlog for standbys to catch up from.
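Changing wal_level, archive_mode, or max_wal_senders takes effect only after a full server restart (my note; the log file name is arbitrary):
pg_ctl -D $PGDATA restart -l pg.log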
Step 5: Back up the primary (on the master)
5.1: Start a file-level backup; the prerequisite is that wal_level is archive or hot_standby, which was done above.
select pg_start_backup('Replication work');
5.2: Copy the $PGDATA directory over to the slave server, excluding pg_xlog (its contents will be cleared on the slave anyway). Also mind the permissions when creating the tar archive; this time tar reported an error:
tar (child): Cannot open: Permission denied
tar: Error is not recoverable: exiting now
This was resolved by chown'ing /database to the postgres user, roughly as in the sketch below.
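A sketch of the fix (the exact invocation isn't recorded in the notes):
chown -R postgres:postgres /database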
Package it up:
tar czvf pgdata.tar.gz pgdata --exclude=pgdata/pg_xlog
Copy it to the slave and unpack it there:
scp pgdata.tar.gz root@192.168.2.129:/database/
tar xzvf pgdata.tar.gz
5.3: Once the above is done, stop the backup on the master:
select pg_stop_backup(), current_timestamp;
Step 6: Modify the slave's configuration
6.1: postgresql.conf
hot_standby = on
6.2: the recovery.conf file
$ cp $PGHOME/share/recovery.conf.sample $PGDATA/
$ cd $PGDATA
$ mv recovery.conf.sample recovery.conf
$ vi recovery.conf    -- add the following three lines
standby_mode = 'on'
trigger_file = '/database/pgdata/postgresql.trigger.1949'
primary_conninfo = 'host=192.168.2.130 port=1949 user=repuser password=repuser keepalives_idle=60'
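For reference (not exercised in these notes): creating the trigger_file later is what promotes this standby to a read/write primary:
touch /database/pgdata/postgresql.trigger.1949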
6.3: Configure the .pgpass file (slave side)
Create a password file so the slave can reach the master without typing a password:
192.168.2.130:1949:postgres:repuser:repuser
(Format: host:port:database:username:password. Note that the walreceiver's replication connection matches the database name replication, or *, rather than postgres; the password here is also given in primary_conninfo, so this file is only a fallback.)
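libpq ignores .pgpass unless its permissions are 0600, so (a step not shown in the original notes):
chmod 0600 ~/.pgpass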
6.4: On the slave, delete the pid file and pg_xlog that were copied over from the master
$ rm -rf $PGDATA/pg_xlog
$ rm -f $PGDATA/postmaster.pid
$ mkdir $PGDATA/pg_xlog
Step 7: Start the slave
Start the standby normally (pg_ctl -D $PGDATA -l pg.log start); check the log if anything goes wrong.
Once replication is up you could verify it through the CSV log, but that isn't enabled here, so we inspect the processes directly.
7.1 Check the master's processes:
[postgres@localhost ~]$ ps -ef|grep postgres
root 2454 2438 0 20:25 pts/0 00:00:00 su - postgres
postgres 2461 2454 0 20:25 pts/0 00:00:00 -bash
postgres 2535 1 0 20:26 pts/1 00:00:00 /home/postgres/bin/postgres -D /database/pgdata
postgres 2537 2535 0 20:26 ? 00:00:00 postgres: writer process
postgres 2538 2535 0 20:26 ? 00:00:00 postgres: wal writer process
postgres 2539 2535 0 20:26 ? 00:00:00 postgres: autovacuum launcher process
postgres 2540 2535 0 20:26 ? 00:00:00 postgres: archiver process
postgres 2541 2535 0 20:26 ? 00:00:00 postgres: stats collector process
postgres 3079 2535 0 21:56 ? 00:00:00 postgres: wal sender process repuser 192.168.2.129(45446) streaming 0/C01EDB8
postgres 3116 2535 0 22:02 ? 00:00:00 postgres: postgres postgres 192.168.2.1(52648) idle
postgres 3118 2535 0 22:02 ? 00:00:00 postgres: postgres test 192.168.2.1(52649) idle
postgres 3120 2535 0 22:02 ? 00:00:00 postgres: postgres test 192.168.2.1(52654) idle
root 3126 2490 0 22:04 pts/1 00:00:00 su - postgres
postgres 3214 3128 0 22:16 pts/1 00:00:00 grep postgres
postgres 3128 3126 0 22:04 pts/1 00:00:00 -bash
postgres 3213 3128 2 22:16 pts/1 00:00:00 ps -ef
7.2 Check the slave's processes:
[postgres@localhost ~]$ ps -ef|grep postgres
postgres 2856 1 0 21:54 pts/2 00:00:00 /home/postgres/bin/postgres -D /database/pgdata
postgres 2858 2856 0 21:54 ? 00:00:00 postgres: startup process recovering 000000010000000000000003
postgres 2859 2856 0 21:54 ? 00:00:00 postgres: writer process
postgres 2860 2856 0 21:54 ? 00:00:00 postgres: stats collector process
postgres 2899 2856 0 21:56 ? 00:00:00 postgres: wal receiver process streaming 0/C01ED28
postgres 3007 2856 0 22:02 ? 00:00:00 postgres: postgres postgres 192.168.2.1(52652) idle
postgres 3013 2856 0 22:03 ? 00:00:00 postgres: postgres test 192.168.2.1(52657) idle
postgres 3014 2856 0 22:03 ? 00:00:00 postgres: postgres test 192.168.2.1(52658) idle
root 3020 2756 0 22:04 pts/2 00:00:00 su - postgres
postgres 3022 3020 0 22:04 pts/2 00:00:00 -bash
postgres 3091 3022 4 22:15 pts/2 00:00:00 ps -ef
postgres 3092 3022 0 22:15 pts/2 00:00:00 grep postgres
At this point WAL files are also appearing under pg_xlog on the slave, and the backup label file created earlier by pg_start_backup has been renamed with an .old suffix.
Check the log:
[postgres@localhost ~]$ more pgsql.log
LOG: database system was shut down in recovery at 2012-04-23 18:33:25 PDT
LOG: entering standby mode
LOG: streaming replication successfully connected to primary
LOG: redo starts at 0/8000020
LOG: consistent recovery state reached at 0/C000000
LOG: database system is ready to accept read only connections
Step 8: Test
On Master:
test=# select * from kenyon;
id | name
----+--------
2 | kenyon
(1 row)
test=# insert into kenyon values (2,'kenyon testing data');
INSERT 0 1
test=#
On Slave:
test=# select * from kenyon;
id | name
----+---------------------
2 | kenyon
2 | kenyon testing data
(2 rows)
test=# delete from kenyon where id = 2;
ERROR: cannot execute DELETE in a read-only transaction
The newly inserted row has been replicated across, and sessions on the slave are read-only.
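Another way to verify replication (not in the original notes) is the pg_stat_replication view on the master:
test=# select client_addr, state, sent_location, replay_location from pg_stat_replication;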
_______________________________________________________________________________________________
Bucardo installation and configuration guide [PostgreSQL]
post by osdba / 2012-4-25 21:59 Wednesday
Bucardo is software for bidirectional replication of PostgreSQL databases; it can implement a dual-master PostgreSQL setup. All Bucardo syncs are asynchronous: it records changes via triggers, and the program itself is written in Perl.
=============== Installing Bucardo ===========================
Bucardo is a Perl script and needs several Perl modules; install these first:
Test-Simple-0.98.tar.gz
ExtUtils-MakeMaker-6.56.tar.gz
DBI-1.616.tar.gz
version-0.91.tar.gz
DBD-Pg-2.18.1.tar.gz
DBIx-Safe-1.2.5.tar.gz
Each Perl module is installed the same way: find it on http://search.cpan.org, download it, and unpack it (conventionally under /usr/src); then:
perl Makefile.PL
make
make install
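Alternatively (my addition, assuming internet access and a configured CPAN client), cpan can resolve the chain automatically:
cpan Test::Simple ExtUtils::MakeMaker DBI version DBD::Pg DBIx::Safe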
Installing the Perl modules
Install DBI-1.616 first:
http://search.cpan.org/CPAN/authors/id/T/TI/TIMB/DBI-1.616.tar.gz
Before DBI, install:
http://search.cpan.org/CPAN/authors/id/M/MS/MSCHWERN/Test-Simple-0.98.tar.gz
http://search.cpan.org/CPAN/authors/id/M/MS/MSCHWERN/ExtUtils-MakeMaker-6.56.tar.gz
Otherwise you will get:
Warning: prerequisite ExtUtils::MakeMaker 6.48 not found. We have 6.30.
Warning: prerequisite Test::Simple 0.90 not found. We have 0.62.
Before DBD-Pg, install version-0.91 first:
http://search.cpan.org/CPAN/authors/id/J/JP/JPEACOCK/version-0.91.tar.gz
Then install DBD-Pg:
http://search.cpan.org/CPAN/authors/id/T/TU/TURNSTEP/DBD-Pg-2.18.1.tar.gz
Then install DBIx-Safe:
http://bucardo.org/downloads/DBIx-Safe-1.2.5.tar.gz
Finally, install Bucardo 4.4.4; see:
http://blog.endpoint.com/2011/06/bucardo-multi-master-for-postgresql.html
=================== Configuring Bucardo 4.4.4 ====================================
Create a .bucardorc file in the user's home directory and set some parameters in it:
default_email_from = myhost@myinc.com
default_email_host = mailhost.myinc.com
default_email_to = osdba@mailhost.com
log_conflict_file = /home/postgres/log/bucardo_conflict.log
piddir = /home/postgres/run
reason_file = /home/postgres/log/bucardo.restart.reason.log
warning_file = /home/postgres/log/bucardo.warning.log
syslog_facility = LOG_LOCAL1
Bucardo creates a database named bucardo on the instances to be synced and stores some configuration information in it; this is done by the bucardo_ctl install command:
bucardo_ctl install
Once that completes, add the bucardo schema to the bucardo database's search_path, so that after connecting to the database you can see the configuration tables with \d:
alter database bucardo set search_path="$user",public,bucardo;
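A quick check (my addition) that the configuration tables are now visible through the new search_path:
psql -d bucardo -c '\dt'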
Environment for the configuration below:
Host 1: 192.168.195.35
Host 2: 192.168.195.36
We set up bidirectional sync between the PostgreSQL databases on these two machines.
PostgreSQL is already installed on both machines.
Steps:
1. On both machines:
bucardo_ctl install
2. On both machines, create the database to be synced:
createdb aligputf8
createlang plpgsql aligputf8
Note that plpgsql must be installed into the database.
Use pgbench to create some test tables:
pgbench -i -s 300 aligputf8
The sync will run on pgbench_accounts, pgbench_branches, and pgbench_tellers.
3. Add the databases to be synced to Bucardo; this mainly configures how Bucardo connects to them.
On the first machine:
bucardo_ctl add db db1 dbname=aligputf8 sourcelimit=100 targetlimit=100
bucardo_ctl add db db2 dbname=aligputf8 host=192.168.195.36 sourcelimit=100 targetlimit=100
Parameters this command accepts (mostly database connection parameters):
name: the internal name used by Bucardo to refer to this database
port: the port this database runs on. Defaults to 5432.
host: the host this database is on. Defaults to no host (Unix socket)
user: the user to connect as. Defaults to 'bucardo'
pass: the password to connect with. Don't use this, use a .pgpass file instead!
conn: Any additional information to add to the connection string, e.g. sslmode=require
sourcelimit: The maximum number of replication events that can run at one time using this database as a source. Defaults to 0 (no limit)
targetlimit: The maximum number of replication events that can run at one time using this database as a target. Defaults to 0 (no limit)
pgpass: Full path and filename of a Bucardo/pgpass file to use for this connection
Since we are doing bidirectional sync, we need to set the default conflict-resolution strategy for syncs:
alter table bucardo.goat alter standard_conflict set default 'latest';
Possible conflict-resolution strategies:
source
target
skip
random
latest
abort
4. Specify which tables to sync:
On the first machine:
bucardo_ctl add table pgbench_accounts pgbench_branches pgbench_tellers db=db1 --herd=herd1 --verbose
This step actually creates triggers on the tables; from then on, every change is captured for sync.
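To see the triggers Bucardo added (a check of mine, not in the original; trigger names vary with the Bucardo version), describe one of the tables:
psql -d aligputf8 -c '\d pgbench_accounts'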
5. Add a herd. A herd is a collection of tables that makes the tables in a sync easier to manage.
On the first machine:
bucardo_ctl add herd herd1 pgbench_accounts pgbench_branches pgbench_tellers
6. Add the sync.
On the first machine:
bucardo_ctl add sync sync1 source=herd1 targetdb=db2 type=swap ping=yes
Adding a sync means replicating a herd (the set of tables it contains) on the source database to the target database.
type=swap means the two databases sync bidirectionally.
Other possible values of type:
pushdelta: master/slave mode; only the master pushes data to the standby
fullcopy: master/slave mode; full copy
7. If the source database starts out with data while the target database is empty, do an initial full copy; Bucardo calls this onetimecopy. To run onetimecopy, the source tables must contain at least one row and the target tables must be empty:
bucardo_ctl update sync sync1 onetimecopy=2
Here we set the sync's onetimecopy attribute to 2, so that when the sync starts it first performs a full copy automatically and then settles into incremental sync.
When the full copy completes, the attribute is automatically reset to 0.
8. Start the sync:
bucardo_ctl start
9. Check the sync status:
bucardo_ctl status
bucardo_ctl status sync1
============================= Routine maintenance ===============================
1. Bucardo's triggers record the primary keys of changed rows in a delta table, so that table grows over time; set up a cron job to purge its contents:
0 2 * * * /usr/local/bin/psql -X -q -d aligputf8 -U postgres -c "SELECT bucardo.bucardo_purge_delta('10 minutes'::interval)"
2. To temporarily stop and restart a sync:
bucardo_ctl activate <syncname>
bucardo_ctl deactivate <syncname>
(The bucardo.sync table also has a track_rates column; set it to true to track sync rates.)
3. onetimecopy:
bucardo_ctl update sync <syncname> onetimecopy=2
Restarting bucardo_ctl makes the copy start; bucardo_ctl reload sync1 also seems to work.
4. Steps to add new tables:
bucardo_ctl add all tables
bucardo_ctl update sync <syncname> add table <tab1> <tab2> ...
bucardo_ctl validate <syncname>
bucardo_ctl update sync <syncname> onetimecopy=2
or, to also rebuild indexes:
bucardo_ctl update sync <syncname> onetimecopy=2 rebuild_index=1
bucardo_ctl reload <syncname>
5. Removing tables:
bucardo_ctl update sync <syncname> remove table foobar
bucardo_ctl update sync <syncname> remove sequence foobar_seq
bucardo_ctl reload <syncname>
__________________________________________________
Postgres-XC node definitions (scratch notes):
CREATE NODE db_1 WITH (TYPE='datanode',HOST = '10.3.2.158', PORT=15431);
CREATE NODE db_2 WITH (TYPE='datanode',HOST = '10.3.2.158', PORT=15432);
CREATE NODE coord2 WITH (TYPE='coordinator',HOST = '192.168.1.35', PORT=1925);
CREATE NODE coord2 WITH (TYPE='coordinator',HOST = 'localhost', PORT=1925);
CREATE NODE coord1 WITH (TYPE='coordinator',HOST = 'localhost', PORT=1921);
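To check the registered nodes afterwards (my addition; pgxc_node is the Postgres-XC catalog these CREATE NODE statements write to):
select * from pgxc_node;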
_______________________________________________________________________________________________