How to Back Up a PostgreSQL Database from Within psql

Copying a PostgreSQL database to another server

You don't need to create an intermediate file. You can do

pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser dbname

or

pg_dump -C -h remotehost -U remoteuser dbname | psql -h localhost -U localuser dbname

using psql or pg_dump to connect to a remote host.
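Since -C embeds the CREATE DATABASE statement and a reconnect, the psql end only needs some existing database to connect to; the maintenance database postgres is a common choice. A variant of the first command, assuming dbname does not yet exist on the remote server:

pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser postgres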

With a big database or a slow connection, dumping to a file and transferring the file compressed may be faster.
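A file-based variant might look like this (host names and paths are placeholders):

pg_dump -C dbname | gzip > dbname.sql.gz
scp dbname.sql.gz remoteuser@remotehost:/tmp/
ssh remoteuser@remotehost "gunzip -c /tmp/dbname.sql.gz | psql postgres"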

As Kornel said, there is no need to dump to an intermediate file; if you want to work compressed, you can use a compressed tunnel:

pg_dump -C dbname | bzip2 | ssh  remoteuser@remotehost "bunzip2 | psql dbname"

or

pg_dump -C dbname | ssh -C remoteuser@remotehost "psql dbname"

but this solution also requires a session on both ends.

Note: pg_dump is for backing up and psql is for restoring, so the first command in this answer copies from local to remote and the second from remote to local. More: https://www.postgresql.org/docs/9.6/app-pgdump.html

How to restore a Postgres database backup on a Windows machine

Don't use redirection on Windows; use the -f parameter to pass the file to be run:

psql -d test -f backup_database.sql
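A fuller invocation might look like this (user, host, and path are placeholders for your setup):

psql -U postgres -h localhost -d test -f "C:\backups\backup_database.sql"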

How to get a working and complete PostgreSQL DB backup and test it

You can dump the whole PostgreSQL cluster with pg_dumpall: that's all the databases and all the globals for a single cluster. From the command line on the server, I'd do something like this. (Mine is listening on port 5433 rather than the default 5432.) You may or may not need the --clean option.

$ pg_dumpall -U postgres -h localhost -p 5433 --clean --file=dump.sql

This includes the globals--information about users and groups, tablespaces, and so on.
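To restore such a cluster-wide dump on another server, feed the file back to psql; connecting to the postgres database is a safe default, since the script creates the other databases itself. A sketch, assuming the same non-default port:

$ psql -U postgres -h localhost -p 5433 -f dump.sql postgres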

If I were going to back up a single database and move it to a scratch server, I'd dump the database with pg_dump, and dump the globals with either

  • pg_dumpall --globals-only, or
  • pg_dumpall --roles-only (if you only need roles)

like this:

$ pg_dump -U postgres -h localhost -p 5433 --clean --file=sandbox.sql sandbox
$ pg_dumpall -U postgres -h localhost -p 5433 --clean --globals-only --file=globals.sql

Outputs are just text files.

After you move these files to a different server, load the globals first, then the database dump.

$ psql -U postgres -h localhost -p 5433 < globals.sql
$ psql -U postgres -h localhost -p 5433 < sandbox.sql
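For a quick test of the restored copy, connect to the scratch server and spot-check the contents (the table name here is a placeholder):

$ psql -U postgres -h localhost -p 5433 -d sandbox -c '\dt'
$ psql -U postgres -h localhost -p 5433 -d sandbox -c 'SELECT count(*) FROM some_table;'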

I thought pg_dumpall would at least back up foreign keys, but even that seems to be an 'option'. According to http://www.postgresql.org/docs/9.1/static/app-pg-dumpall.html, even with pg_dumpall I need to use a -o option to back up foreign keys.

No, that reference says "Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used." (Emphasis added.) I think it's unlikely that your application references the OID columns. You don't need to use this option to "backup foreign keys". (Read the dump file in your editor or file viewer.)
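If you want to verify this yourself, you can search the plain-text dump from the earlier example for the constraints directly:

$ grep -i 'FOREIGN KEY' sandbox.sql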

Restore a postgres backup file using the command line?

There are two tools to look at, depending on how you created the dump file.

Your first source of reference should be the pg_dump man page, as that is what creates the dump itself. It says:

Dumps can be output in script or archive file formats. Script dumps are plain-text files containing the SQL commands required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, feed it to psql(1). Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications even on other SQL database products.

The alternative archive file formats must be used with pg_restore(1) to rebuild the database. They allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive file formats are designed to be portable across architectures.

So it depends on how the dump was made. On Linux/Unix, you can probably figure it out with the excellent file(1) command: if it mentions ASCII text and/or SQL, it should be restored with psql; otherwise you should probably use pg_restore.
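For example (the exact output depends on your file version and the dump format, so treat this as illustrative):

$ file plain.sql
plain.sql: ASCII text
$ file custom.dump
custom.dump: PostgreSQL custom database dump - v1.14-0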

Restoring is pretty easy:

psql -U username -d dbname < filename.sql

# or, using -1 to run the whole script in a single transaction:
psql -U username -d dbname -1 -f filename.sql

or

pg_restore -U username -d dbname -1 filename.dump

Check out their respective manpages; there are quite a few options that affect how the restore works. You may have to clean out your "live" databases or recreate them from template0 before restoring, depending on how the dumps were generated.
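A sketch of the recreate-from-template0 route, with dbname as a placeholder:

dropdb -U username dbname
createdb -U username -T template0 dbname
psql -U username -d dbname -f filename.sql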

Backing up a Postgres table using pg_dump: is there a workaround to include the --jobs option?

The --jobs option, which works only with a directory-format dump, won't help you with a single table, because a single table is dumped by a single process.

You could of course start a couple of parallel COPY statements:

COPY (SELECT * FROM titan WHERE id % 5 = 0) TO '/path/titan0.csv' (FORMAT 'csv');
COPY (SELECT * FROM titan WHERE id % 5 = 1) TO '/path/titan1.csv' (FORMAT 'csv');
COPY (SELECT * FROM titan WHERE id % 5 = 2) TO '/path/titan2.csv' (FORMAT 'csv');
COPY (SELECT * FROM titan WHERE id % 5 = 3) TO '/path/titan3.csv' (FORMAT 'csv');
COPY (SELECT * FROM titan WHERE id % 5 = 4) TO '/path/titan4.csv' (FORMAT 'csv');

If you start these statements at the same time, you have a chance to get synchronized sequential scans, so the table is read only once. Then you can load those files in parallel.
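A sketch of the parallel load on the target side (database name and paths are placeholders; \copy reads the files on the client, so no server-side file access is needed):

for i in 0 1 2 3 4; do
    psql -d targetdb -c "\copy titan FROM '/path/titan$i.csv' (FORMAT 'csv')" &
done
wait    # block until all five loads have finished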

If you need the table structure too, run these:

pg_dump --section=pre-data -t public.titan --file=pre-data.sql dbname
pg_dump --section=post-data -t public.titan --file=post-data.sql dbname

First restore pre-data, then the data, then post-data.
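Assuming the pg_dump output was written to pre-data.sql and post-data.sql as above, that order might look like this on the target (targetdb is a placeholder):

psql -d targetdb -f pre-data.sql     # pre-data: table definitions
# ...load the CSV slices, e.g. in parallel as sketched above...
psql -d targetdb -f post-data.sql    # post-data: indexes, constraints, triggers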


