Tektastic: Why I Like PostgreSQL. Today I gave a short presentation at work about PostgreSQL and why I much prefer it to MySQL.

PostgreSQL vs MySQL: Eternal Battle. I may be misreading this, but there seems to be a recent trend within startups to move away from MySQL, probably thanks to folks like Heroku on one side (who use PostgreSQL to the extreme, and help and contribute to its development), versus folks like Oracle on the other side, tainting the "open source pureness" of MySQL :) At my work we currently use a mid-sized MySQL 5.1 Percona instance, which is holding up quite well, I must admit.
Both PostgreSQL and MySQL have definitely converged to cover most features that people want, but my leaning is still towards PostgreSQL. I work for an e-commerce company, where transactions are very, very important. Anyway, MySQL has plenty of support and fans, and still enjoys widespread usage. Note: parts of this post were inspired by a related post on data and PostgreSQL on the SquareUp technical blog.

PostgreSQL Basics Configuration.

tAp's blog: pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery.

Streaming_Replication_Setup - pgcookbook - a PostgreSQL documentation project.
Suppose we have two instances running on two servers, db1 (192.168.0.1) and db2 (192.168.0.2), each serving port 5432. We need to set up streaming replication from db1 (primary) to db2 (standby). Before starting to prepare the database servers, check the connection and the bandwidth between them; it must be enough to transmit your new WAL files. If the situation is not very good, it is recommended to forward the port from master to replica using SSH tunneling with compression enabled (in future versions of PostgreSQL the compression will be implemented in the DBMS itself). Start the following snippet via the screen utility on db1: while [ ! … Then we need to prepare db1. Edit postgresql.conf and set the wal_keep_segments and max_wal_senders configuration parameters: wal_keep_segments = 256 and max_wal_senders = 3. After the replication is set up, configure your notification system to inform you when the lag is close to the maximal expected value, so that you are warned roughly halfway before the standby falls too far behind.
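The tunnel snippet above is cut off, so here is a minimal sketch of what the two sides might look like. The keep-alive while loop, the pg_hba.conf entry, the replication role name repl, the forwarded port 5433, and the data directory path are my own illustrative assumptions, not the pgcookbook original; the addresses, port 5432, wal_keep_segments = 256 and max_wal_senders = 3 come from the text, and the syntax matches the 9.x era (recovery.conf) that this document refers to.

    # On db1 (primary): keep a compressed SSH tunnel to the standby alive inside screen.
    # Stop it by creating /tmp/stop_tunnel. Illustrative reconstruction only.
    while [ ! -f /tmp/stop_tunnel ]; do
        ssh -C -N -o ExitOnForwardFailure=yes \
            -R 5433:localhost:5432 postgres@192.168.0.2
        sleep 5
    done

    # db1: postgresql.conf (values from the text)
    #   wal_level = hot_standby      # "replica" on newer versions
    #   max_wal_senders = 3
    #   wal_keep_segments = 256
    # db1: pg_hba.conf, allow the standby to connect for replication
    #   host  replication  repl  192.168.0.2/32  md5

    # On db2 (standby): take a base backup and point recovery.conf at db1
    pg_basebackup -h 192.168.0.1 -p 5432 -U repl -D /var/lib/pgsql/data -X stream
    cat > /var/lib/pgsql/data/recovery.conf <<'EOF'
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.0.1 port=5432 user=repl'
    EOF
    # If the SSH tunnel is used, primary_conninfo would instead point at
    # host=localhost port=5433 on db2.
    # db2: postgresql.conf
    #   hot_standby = on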
hot_standby_feedback Postgres Config · pganalyze.

hot_standby_feedback (boolean): Specifies whether or not a hot standby will send feedback to the primary or upstream standby about queries currently executing on the standby. This parameter can be used to eliminate query cancels caused by cleanup records, but can cause database bloat on the primary for some workloads. Feedback messages will not be sent more frequently than once per wal_receiver_status_interval. The default value is off. If cascaded replication is in use, the feedback is passed upstream until it eventually reaches the primary. Source: PostgreSQL 9.4 documentation © PostgreSQL Global Development Group.

pg_dump.

dbname: Specifies the name of the database to be dumped. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.
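As a rough illustration of how this setting relates to the "canceling statement due to conflict with recovery" error mentioned earlier, here is a minimal standby-side configuration sketch; the specific values are examples of mine, not recommendations from the source.

    # postgresql.conf on the standby (illustrative values)
    hot_standby = on
    hot_standby_feedback = on            # tell the primary which rows standby queries still need,
                                         # reducing "conflict with recovery" cancellations at the
                                         # cost of possible bloat on the primary
    wal_receiver_status_interval = 10s   # feedback is sent at most this often (default 10s)

    # Alternative or complementary knobs: let standby queries delay WAL replay instead
    max_standby_streaming_delay = 5min
    max_standby_archive_delay = 5min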
-a --data-only
Dump only the data, not the schema (data definitions). This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

-b --blobs
Include large objects in the dump.

-c --clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them.
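A few example invocations of these flags; the database name mydb and the output file names are placeholders.

    # Data-only dump (no CREATE TABLE etc.), plain-text format
    pg_dump --data-only mydb > mydb_data.sql

    # Include large objects explicitly
    pg_dump --blobs mydb > mydb_with_blobs.sql

    # Emit DROP commands before the CREATE commands, useful when reloading over an existing schema
    pg_dump --clean mydb > mydb_clean.sql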
-C --create
Begin the output with a command to create the database itself and reconnect to the created database.

-E encoding --encoding=encoding
Create the dump in the specified character set encoding.

-f file --file=file
Send output to the specified file.

-F format --format=format
Selects the format of the output. format can be one of the following:
p plain: Output a plain-text SQL script file (the default).
c custom: Output a custom-format archive suitable for input into pg_restore.
d directory: Output a directory-format archive suitable for input into pg_restore.
t tar: Output a tar-format archive suitable for input into pg_restore.

-o --oids
Dump object identifiers (OIDs) as part of the data for every table.

-? --help
Show help about pg_dump command line arguments.

SQL Dump. The idea behind this dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is: pg_dump dbname > outfile. As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. pg_dump is a regular PostgreSQL client application (albeit a particularly clever one).
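To make the format options concrete, here is a sketch comparing a custom-format dump with a plain-text one; mydb, mydb_restored and the file names are placeholders.

    # Custom-format dump; compressed by default and restored with pg_restore,
    # which can select individual objects and run in parallel
    pg_dump -F c -f mydb.dump mydb
    pg_restore -d mydb_restored -j 4 mydb.dump

    # Plain-text dump, by contrast, is simply fed back through psql
    pg_dump mydb > mydb.sql
    psql -d mydb_restored -f mydb.sql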
To specify which database server pg_dump should contact, use the command line options -h host and -p port. Like any other PostgreSQL client application, pg_dump will by default connect with the database user name that is equal to the current operating system user name. Dumps created by pg_dump are internally consistent, meaning that the dump represents a snapshot of the database at the time pg_dump began running. pg_dump does not block other operations on the database while it is working.
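For example, dumping a database over the network might look like the following; the user name app_user and database mydb are placeholders, while the host and port reuse the db1 example from earlier.

    # Dump a database running on another host
    pg_dump -h 192.168.0.1 -p 5432 -U app_user mydb > mydb.sql

    # Credentials can come from a ~/.pgpass file or the PGPASSWORD environment
    # variable, so the command can run unattended (e.g. from cron). Because the
    # dump is a consistent snapshot taken without blocking writers, it is safe
    # to run against a busy database.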