Creating a tablespace on a standalone Postgres server is very straightforward. But what if we need to create a tablespace on a Postgres cluster with streaming replication enabled? Will the tablespace be created automatically on the standby?
Let's walk through the demo below to find out.
1. Check the streaming replication details [RUN ON PRIMARY]
postgres=# \x
Expanded display is on.
postgres=# select * from pg_stat_replication
postgres-# ;
-[ RECORD 1 ]----+---------------------------------
pid | 6075
usesysid | 10
usename | enterprisedb
application_name | walreceiver
client_addr | 10.20.30.41 <---- this is the hot standby server
client_hostname |
client_port | 38324
backend_start | 15-JUN-21 15:05:20.010615 +03:00
backend_xmin |
state | streaming
sent_lsn | 0/70001C0
write_lsn | 0/70001C0
flush_lsn | 0/70001C0
replay_lsn | 0/70001C0
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
reply_time | 15-JUN-21 19:00:28.515848 +03:00
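Note that sent_lsn, write_lsn, flush_lsn, and replay_lsn above are all equal (0/70001C0), meaning the standby is fully caught up. An LSN is a 64-bit WAL position printed as two hex numbers separated by a slash. As a minimal illustrative sketch (not part of the demo), replication lag in bytes can be computed from two such values, mirroring what Postgres's `pg_wal_lsn_diff` function does:

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_lsn string like '0/70001C0' to an absolute byte position.

    The text form is '<high>/<low>' in hex; the 64-bit value is
    high * 2**32 + low.
    """
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)


def lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL sent but not yet replayed (like pg_wal_lsn_diff)."""
    return lsn_to_bytes(sent_lsn) - lsn_to_bytes(replay_lsn)


# From the pg_stat_replication output above: sent == replay, so lag is 0.
print(lag_bytes("0/70001C0", "0/70001C0"))  # 0
```

In practice you would simply run `SELECT pg_wal_lsn_diff(sent_lsn, replay_lsn) FROM pg_stat_replication;` on the primary; the sketch just shows the arithmetic behind it.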
2. Create a tablespace on the primary:
-- Create the directory structure
primary$ mkdir -p /pgdata/alm_ts
postgres=# create tablespace alm_ts location '/pgdata/alm_ts';
CREATE TABLESPACE
postgres=# \db+
List of tablespaces
Name | Owner | Location | Access privileges | Options | Size | Description
------------+--------------+----------------+-------------------+---------+--------+-------------
pg_default | enterprisedb | | | | 51 MB |
pg_global | enterprisedb | | | | 768 kB |
alm_ts | enterprisedb | /pgdata/alm_ts | | | 13 MB |
(3 rows)
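Under the hood, CREATE TABLESPACE records the location as a symlink `$PGDATA/pg_tblspc/<oid>` pointing at the directory, and WAL replay on the standby recreates that same symlink with the same absolute target. The following sketch simulates that on-disk mapping with temporary paths (illustrative only; the real work happens inside the postgres backend):

```python
import os
import tempfile

# Simulate what CREATE TABLESPACE does on disk: the server records the
# tablespace as a symlink pg_tblspc/<oid> -> <location>.
datadir = tempfile.mkdtemp()           # stands in for $PGDATA
location = tempfile.mkdtemp()          # stands in for /pgdata/alm_ts
os.makedirs(os.path.join(datadir, "pg_tblspc"))

oid = 16385                            # the demo tablespace's OID, as seen in the standby log
link = os.path.join(datadir, "pg_tblspc", str(oid))
os.symlink(location, link)

# WAL replay on the standby recreates this symlink with the SAME absolute
# target path, so that directory must also exist on the standby host.
print(os.path.realpath(link) == os.path.realpath(location))  # True
```

Because the symlink target is an absolute path baked into the WAL record, the standby has no way to substitute a different location; it needs the identical directory.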
3. Check the standby (replication) server status:
-- Run on primary server
postgres=# select * from pg_stat_replication
postgres-# ;
(0 rows)
-- Standby server log snippet:
2021-06-15 10:48:23 +03 LOG: started streaming WAL from primary at 0/5000000 on timeline 1
2021-06-15 10:59:51 +03 FATAL: directory "/pgdata/alm_ts" does not exist
2021-06-15 10:59:51 +03 HINT: Create this directory for the tablespace before restarting the server.
2021-06-15 10:59:51 +03 CONTEXT: WAL redo at 0/5000520 for Tablespace/CREATE: 16385 "/pgdata/alm_ts"
2021-06-15 10:59:51 +03 LOG: startup process (PID 18171) exited with exit code 1
2021-06-15 10:59:51 +03 LOG: terminating any other active server processes
2021-06-15 10:59:51 +03 LOG: database system is shut down
We can see that the standby server is down because the directory structure of the new tablespace does not exist there. To fix it, simply create the same directory structure on the standby.
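To avoid this outage entirely, the required directories can be verified on the standby before running CREATE TABLESPACE on the primary. A minimal, hypothetical pre-flight check (the helper name and the path list are made up for this demo):

```python
import os


def missing_tablespace_dirs(paths):
    """Return the tablespace locations that do not yet exist on this host.

    Run this (or an equivalent shell test) on the standby before issuing
    CREATE TABLESPACE on the primary; any path it reports must be created
    first, with the same ownership as the data directory.
    """
    return [p for p in paths if not os.path.isdir(p)]


# Hypothetical usage on the standby for this demo's tablespace:
print(missing_tablespace_dirs(["/pgdata/alm_ts"]))
```

If the list comes back non-empty, create those directories (and set ownership to the postgres OS user) before touching the primary.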
4. Create the directory structure on the standby server
-- on standby
standby$ mkdir -p /pgdata/alm_ts
5. Now start the Postgres cluster on the standby:
-- Start postgres on the standby
root# systemctl start edb-as-12
-- Run on standby
postgres=# SELECT pg_is_in_recovery();
pg_is_in_recovery
───────────────────
t
(1 row)
Here t means the standby is in recovery mode and replaying WAL as expected.
-- check the log
2021-06-15 15:05:19 +03 LOG: database system was shut down in recovery at 2021-06-15 15:05:14 +03
2021-06-15 15:05:19 +03 LOG: entering standby mode
2021-06-15 15:05:19 +03 LOG: redo starts at 0/70000D8
2021-06-15 15:05:19 +03 LOG: consistent recovery state reached at 0/70001C0
2021-06-15 15:05:19 +03 LOG: invalid record length at 0/70001C0: wanted 24, got 0
2021-06-15 15:05:19 +03 LOG: database system is ready to accept read only connections
2021-06-15 15:05:20 +03 LOG: started streaming WAL from primary at 0/7000000 on timeline 1
-- Check the replication status from the primary:
postgres=# \x
postgres=# select * from pg_stat_replication
postgres-# ;
-[ RECORD 1 ]----+---------------------------------
pid | 6075
usesysid | 10
usename | enterprisedb
application_name | walreceiver
client_addr | 10.20.30.41
client_hostname |
client_port | 38324
backend_start | 15-JUN-21 15:08:20.010615 +03:00
backend_xmin |
state | streaming
sent_lsn | 0/70001C0
write_lsn | 0/70001C0
flush_lsn | 0/70001C0
replay_lsn | 0/70001C0
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
reply_time | 15-JUN-21 19:04:28.515848 +03:00
The standby server started successfully.
Summary -> In a Postgres cluster with streaming replication, create the tablespace directory structure on the standby server before creating the tablespace on the primary.