
5 Things You Overlooked with MySQL Dumps. 1. Point In Time Recovery If you’ve never done point-in-time recovery, it’s time to take a second look. A standard mysqldump restores your database only to the moment the backup was taken. If you only do them once a day, you can lose as much as 24 hours of data. Enter point-in-time recovery, and you have the option to restore all those transactions that occurred since your backup last night. Those changes, INSERTs, UPDATEs, DELETEs & even ALTER TABLEs, are all stored in the binary logs. That’s right, all those statements are preserved for you!
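None of this works unless the server is actually writing binary logs. A minimal my.cnf sketch for turning them on (the path and retention period here are illustrative, not from the article; adjust for your setup):

[mysqld]
server-id        = 1
log_bin          = /var/lib/mysql/binlog
expire_logs_days = 7    # keep binlogs at least as long as the gap between full dumps

After a restart, SHOW BINARY LOGS should confirm that log files are being written.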

The tool to fetch those statements is called mysqlbinlog. That’s where the mysqldump option --master-data comes in: it records the binary log file and position at the time of the dump, so you know where to start applying transactions. What I like to do is dump the statements into a text file first, then apply them. Here’s how you fetch all the transactions since your dump:

$ mysqlbinlog --offset=12345 /var/lib/mysql/binlog.000005 > my_point_in_time_data.mysql

Here’s how you would then apply them:

$ mysql < my_point_in_time_data.mysql
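To see where --master-data fits into the nightly dump itself, here is a rough sketch using the same illustrative file name and position as above (the options and coordinates are examples, not taken from the article):

$ mysqldump --single-transaction --master-data=2 --all-databases > nightly_dump.sql
$ grep 'CHANGE MASTER' nightly_dump.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='binlog.000005', MASTER_LOG_POS=12345;

The file name and position recorded in that comment tell you which binary log to read and where to start; mysqlbinlog also accepts --start-position if you prefer a byte position over an --offset.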

Basics of designing a database, by Peter Brawley and Arthur Fuller. It's a 10-step:

1. The Requirement. Write down everything the application has to do, down to the last detail. If multiple apps will use the database, include everything that all the apps have to do. This first list needn't be organised by sequence or topic at this stage.

2. Make a working copy of the Requirement, and in it underline every noun relevant to the requirement.

3. Group the Elements from Step 2 into lists of attributes that belong together, so each list defines exactly one kind of thing that the app will have to deal with (a customer, a book, a widget, &c) and no element is itself a list.

4. Turn the Entities from Step 3 into table specifications. A: Remove as much redundancy as possible, e.g. customers, orders, orderitems (see the sketch after this list). If your app is to manage classes and instructors across multiple colleges in multiple counties, the core data will fall into a tree like this: county > college > school > department > course > class section > class datetime > instructor.
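As an illustration of where Step 4's redundancy removal ends up for the customers/orders/orderitems example, here is a minimal sketch (all column names and types are invented, not from the article):

CREATE TABLE customers (
  customer_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name        VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE orders (
  order_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,    -- each order points at exactly one customer row
  ordered_at  DATETIME NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
) ENGINE=InnoDB;

CREATE TABLE orderitems (
  order_id INT UNSIGNED NOT NULL,       -- line items hang off the order, so customer
  line_no  SMALLINT UNSIGNED NOT NULL,  -- details are never repeated per item
  product  VARCHAR(100) NOT NULL,
  qty      INT UNSIGNED NOT NULL,
  PRIMARY KEY (order_id, line_no),
  FOREIGN KEY (order_id) REFERENCES orders (order_id)
) ENGINE=InnoDB;

Customer data lives in exactly one place; orders and their items only reference it.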

MySQL, 40 Million Rows, MyISAM to InnoDB, 45 Minutes | Justin Carmony. Warning: This blog post is NOT a walk-through or tutorial. If you don’t know what you’re doing, you could seriously screw up your database. This is just talking theory and ideas on how I solved my problem. The other night (really, the other morning) I had the wonderful pleasure of trying to convert a table with 40 million rows from MyISAM to InnoDB. The reason for the conversion was that this table had a high volume of queries and a high volume of writes. With this table growing and updating daily, the table locks were killing us. Because MyISAM needs to perform a lock on the entire table, dozens of queries would back up while locks and writes were taking place.

We decided to convert to InnoDB because it allows row-level locking. I started the conversion at 12:00 AM when the site’s traffic was at a low. Just before I was going to restore my original backup, I had one more idea, this one involving a MEMORY table. I didn’t need the MEMORY table permanently, and the task completed in under 30 minutes.
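For reference, the conversion itself is a one-liner, which is exactly what makes it painful on a table this size: the whole table is rebuilt while locked. A sketch with made-up table names; the copy-and-rename variant is a common pattern, not necessarily the author's exact steps:

-- the direct way: rebuilds and locks the whole table for the duration
ALTER TABLE big_table ENGINE=InnoDB;

-- a common alternative: build an InnoDB copy, backfill it, then swap names
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ENGINE=InnoDB;
INSERT INTO big_table_new SELECT * FROM big_table;
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;

The catch with the copy approach is that writes arriving during the INSERT ... SELECT are not carried over, so it needs either a write pause or extra bookkeeping.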

Large MySQL Imports | Tiger Technologies Support. When you’re adding large amounts of data to MySQL (more than a thousand rows at once) it’s important to do it efficiently. In particular: Disable MySQL indexes. You should turn off index updates until the import is done. If you disable index updates, MySQL can import many rows with a single disk write; if you don’t, MySQL will do many separate disk writes for each row. Disabling indexes makes the import go many, many times faster. (Note that if you restore a database file created with phpMyAdmin or mysqldump, the file will already contain commands to disable the indexes, so you don’t need to worry about it separately.) To disable MySQL indexes, make sure your file contains this command before the import starts (replacing "table_name" with the real name of your table):

ALTER TABLE `table_name` DISABLE KEYS;

Then send this after the import:

ALTER TABLE `table_name` ENABLE KEYS;

That will make your import run much faster.
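Put together, a hand-written import file for a MyISAM table would be arranged along these lines (the values are made up; DISABLE KEYS only affects non-unique indexes on MyISAM tables):

ALTER TABLE `table_name` DISABLE KEYS;
INSERT INTO `table_name` VALUES (1,'2012-01-01',19.99),(2,'2012-01-02',5.00);
INSERT INTO `table_name` VALUES (3,'2012-01-03',12.50);
-- ... thousands more multi-row INSERTs ...
ALTER TABLE `table_name` ENABLE KEYS;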

(The guide goes on with further tips specific to MyISAM tables and to InnoDB tables, plus statements to add at the end of the import.)

How to efficiently dump a huge MySQL innodb database.

Tuning MySQL Performance with MySQLTuner. Version 1.0 Author: Falko Timme <ft [at] falkotimme [dot] com> Last edited 08/28/2008. MySQLTuner is a Perl script that analyzes your MySQL performance and, based on the statistics it gathers, gives recommendations on which variables you should adjust in order to increase performance. That way, you can tune your my.cnf file to tease out the last bit of performance from your MySQL server and make it work more efficiently. This document comes without warranty of any kind! I do not issue any guarantee that this will work for you!

1 Using MySQLTuner

You can download the MySQLTuner script as follows:

wget

In order to run it, we must make it executable:

chmod +x mysqltuner.pl

Afterwards, we can run it:

server1:~# ./mysqltuner.pl

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB +ISAM -NDBCluster
[--] Data in MyISAM tables: 301M (Tables: 2074)
[--] Data in HEAP tables: 379K (Tables: 9)
[!!]
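Acting on the script's recommendations usually means editing the [mysqld] section of my.cnf and restarting MySQL, along these lines (the variables and values below are purely illustrative; use whatever the script actually suggests for your server):

[mysqld]
key_buffer_size         = 256M   # MyISAM index cache
query_cache_size        = 32M
tmp_table_size          = 64M
max_heap_table_size     = 64M
innodb_buffer_pool_size = 512M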

MySQL Big DELETEs. The idea here is to have a sliding window of partitions. Let's say you need to purge news articles after 30 days. The "partition key" would be the datetime (or timestamp) that is to be used for purging, and the PARTITIONs would be BY RANGE. Every night, a cron job would come along and build a new partition for the next day, and drop the oldest partition.

Dropping a partition is essentially instantaneous, much faster than deleting that many rows. However, you must design the table so that the entire partition can be dropped. That is, you cannot have some items in a partition living longer than others. PARTITIONed tables have a lot of restrictions, some of which are rather weird. You can PARTITION InnoDB tables. Since two news articles could have the same timestamp, you cannot assume the partition key is sufficient for uniqueness of the PRIMARY KEY, so you need to find something else to help with that. (This discussion applies to both MyISAM and InnoDB.)
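A minimal sketch of the sliding-window arrangement for the news example (names, types, and partition boundaries are all illustrative):

CREATE TABLE news (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  published DATETIME NOT NULL,
  title     VARCHAR(200) NOT NULL,
  PRIMARY KEY (id, published)      -- id keeps rows unique when timestamps collide;
                                   -- the partition column must be part of every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(published)) (
  PARTITION p20120101 VALUES LESS THAN (TO_DAYS('2012-01-02')),
  PARTITION p20120102 VALUES LESS THAN (TO_DAYS('2012-01-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- nightly cron job: carve out tomorrow's partition, then drop the one that aged out
ALTER TABLE news REORGANIZE PARTITION pmax INTO (
  PARTITION p20120103 VALUES LESS THAN (TO_DAYS('2012-01-04')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);
ALTER TABLE news DROP PARTITION p20120101;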

Windows - Howto: Clean a mysql InnoDB storage engine.

Taking the Pain Out of MySQL Schema Changes by John of 37signals. A common obstacle we face when releasing new features is making production schema changes in MySQL. Many new features require additional columns or indexes. Running an “ALTER TABLE” in MySQL to add the needed columns and indexes locks the table, hanging the application. We need a better solution.

Option 1: Schema Change in Downtime. This is the simplest option.

Option 2: Role Swap. This is the option that we have used in the past to perform schema changes on large tables. Here’s a sample of the process we follow to change the roles of the current master “A” and the current replica “B”: on B, STOP SLAVE; on B, SHOW MASTER STATUS.
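The rest of the procedure follows the usual master/replica swap, roughly as below (a generic sketch of the kind of statements involved, not 37signals' exact runbook; host names and coordinates are made up):

-- on B (the current replica), after running the ALTER there:
STOP SLAVE;
SHOW MASTER STATUS;          -- note B's binary log file and position

-- on A (the current master), start replicating from B at those coordinates:
CHANGE MASTER TO
  MASTER_HOST     = 'b.example.com',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS  = 107;
START SLAVE;

-- then point the application at B, which is now the master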

Option 3: pt-online-schema-change. We have recently started using pt-online-schema-change to perform our schema updates without needing to take downtime.

Create new table with same structure as original.
Update schema on new table.
Copy rows in batches from original table.
Move original table out of the way and replace with new table.
Drop old table.
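An invocation looks roughly like this (the database, table, and column are placeholders; running with --dry-run before --execute is a good habit):

$ pt-online-schema-change --alter "ADD COLUMN featured TINYINT NOT NULL DEFAULT 0" D=myapp_production,t=posts --dry-run
$ pt-online-schema-change --alter "ADD COLUMN featured TINYINT NOT NULL DEFAULT 0" D=myapp_production,t=posts --execute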