Managing MySQL with Percona Toolkit by Frédéric Descamps

Posted on 5/2/2012, 10:58 am, by Colin Charles, under MySQL.

Frédéric Descamps of Percona.

Percona Toolkit is Maatkit & Aspersa combined. It is open source and the tools are very useful for a DBA.

You need Perl, DBI, DBD::mysql, and Term::ReadKey. Most tools are written in Perl, and whatever is in Bash is being rewritten in Perl. It is also distributed as a tarball and as RPM and DEB packages.
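
For example, a hedged install sketch (the package names and the Makefile.PL step are the usual ones for Percona Toolkit, but check your distribution):

    # RPM-based systems (assumes the Percona repository is already configured)
    yum install percona-toolkit
    # Debian/Ubuntu
    apt-get install percona-toolkit
    # or from the tarball, since the tools are plain Perl
    tar xzf percona-toolkit-*.tar.gz && cd percona-toolkit-* && perl Makefile.PL && make install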

Know your environment. The hardware & OS are crucial to know. How much memory/CPU do you use? Do you use swap? Is this a physical or a virtual machine? Do you have free space? What kind of RAID controller? Volumes? Disks? What about the network interfaces? Which I/O schedulers are used? Which filesystem is the data stored on? To answer all that, just use pt-summary.
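
For example, just run it on the box you are inspecting; it reads the local OS only and needs no MySQL credentials:

    # hardware, OS, memory, RAID, disks, network, I/O schedulers, filesystems
    pt-summary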

Know your MySQL environment. Version? Build? How many databases? Where is the data directory? What about replication? What are key InnoDB settings? Storage engine in use? Index type? Foreign keys? Full text indexes? To answer all this and more use pt-mysql-summary.
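
A minimal invocation, assuming the MySQL credentials live in ~/.my.cnf (otherwise pass the usual connection options):

    # version, databases, data directory, replication, key InnoDB settings, engines in use
    pt-mysql-summary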

pt-slave-find shows you the topology and replication hierarchy of your MySQL replication instances. An inventory of replicas!
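
A sketch, with a hypothetical master hostname and user:

    # connect to the master and recurse through its replicas, printing the tree
    pt-slave-find --host=master.example.com --user=repl_check --ask-pass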

Where is my disk I/O going? Use pt-diskstats, which is an improved iostat. There is pt-ioprofile but it can be dangerous in production.
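
A quick sketch (the --interval option here is from memory; plain pt-diskstats with no arguments works too):

    # iostat-like per-device view, sampled every 5 seconds
    pt-diskstats --interval 5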

Now it's time to get more intimate with your database. Let's try to find the answer to these questions: how are the indexes used? Are there duplicate keys? Which queries are eating most of the resources? You can use pt-duplicate-key-checker to check for duplicate/redundant indexes or foreign keys. pt-index-usage can tell you which indexes are unused. If you think you have bad SQL, check out pt-query-advisor.
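
Hedged examples of the first two (the database name and log path are placeholders):

    # report duplicate or redundant indexes and foreign keys
    pt-duplicate-key-checker --databases mydb
    # EXPLAIN the queries from a slow log against the server and report indexes that were never used
    pt-index-usage /var/log/mysql/mysql-slow.log --host localhost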

You can use pt-query-digest to analyze the slow query log and show a profile of the workload. You mostly use this with slow query logs & tcpdump captures. Be careful when you have dropped packets – the results can be misleading!
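
Typical usage, close to the tool's own documentation (file names are illustrative):

    # profile the workload from the slow query log
    pt-query-digest /var/log/mysql/mysql-slow.log
    # or capture traffic with tcpdump and digest that; dropped packets will skew the numbers
    tcpdump -s 65535 -x -nn -q -tttt -i any -c 10000 port 3306 > mysql.tcp.txt
    pt-query-digest --type tcpdump mysql.tcp.txt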

After all this, it's time to maintain your environment.

pt-deadlock-logger checks InnoDB status to log MySQL deadlock information. It needs to run continually to capture things.

pt-fk-error-logger extracts and logs MySQL foreign key errors.

Use pt-online-schema-change to alter tables. It makes a “shadow copy” and swaps them. Extremely useful for large, long-running ALTERs. Facebook uses the same technique.
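
A sketch of an online ALTER with hypothetical database and table names; rehearse with --dry-run before --execute:

    # rehearse: create and alter the shadow table, but don't copy data or swap
    pt-online-schema-change --alter "ADD COLUMN flags INT NOT NULL DEFAULT 0" D=mydb,t=big_table --dry-run
    # do it for real: copy rows into the shadow table, apply the ALTER there, then swap
    pt-online-schema-change --alter "ADD COLUMN flags INT NOT NULL DEFAULT 0" D=mydb,t=big_table --execute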

Validate your upgrades, as upgrades are the leading cause of downtime. Are queries using different indexes? Is the query execution plan different? New errors? See pt-upgrade for this. Best to run it from a third machine, comparing a server on the old version against a server on the new version to see how it goes.
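
A hedged sketch (the exact synopsis has varied across toolkit releases; hostnames are placeholders): replay a slow log against a server running the old version and one running the new version, then compare results and execution plans:

    pt-upgrade /var/log/mysql/mysql-slow.log h=old-version-server h=new-version-server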

Verify replication integrity – pt-table-checksum. Perform an online replication consistency check or checksum MySQL tables efficiently on one or many servers. Use it routinely (mandatory for 95% of MySQL users). Put it in a weekly crontab. Repair differences with pt-table-sync.
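
A minimal weekly crontab sketch; the checksum user, password and schedule are assumptions:

    # Sunday 03:00: checksum all tables on the master; replicas re-execute the
    # checksum queries, so differences show up in percona.checksums on each replica
    0 3 * * 0 /usr/bin/pt-table-checksum --replicate=percona.checksums h=localhost,u=checksum,p=secret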

Repair out-of-sync replicas – pt-table-sync

Measure delay accurately – pt-heartbeat

Deliberately delay replication – pt-slave-delay

Watch & restart MySQL replication after errors – pt-slave-restart
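
Hedged one-liners for the four tools just listed (hostnames, database and options are illustrative):

    # repair a replica that drifted, using the checksum results stored on the master
    pt-table-sync --execute --replicate percona.checksums h=master.example.com
    # write a timestamp row on the master, then measure real delay from a replica
    pt-heartbeat -D percona --create-table --update -h master.example.com
    pt-heartbeat -D percona --monitor -h replica.example.com
    # keep a replica deliberately one hour behind its master
    pt-slave-delay --delay 1h h=delayed-replica.example.com
    # automatically restart replication, but only after duplicate-key errors (1062)
    pt-slave-restart --error-numbers 1062 h=replica.example.com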

When there are problems, get the symptoms when it hurts. Look at pt-stalk (wait for a condition to occur, then begin collecting data – e.g. every time the threads go over 2,000 you have a problem, so it collects diagnostics – it calls pt-collect), pt-collect (collect information from a server for some period of time), and pt-sift (to browse what was collected).
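
A hedged sketch of that trigger-and-collect idea, using the threshold from the talk:

    # watch SHOW GLOBAL STATUS; if Threads_running stays above 2000 for a few
    # consecutive samples, trigger collection of diagnostics for later analysis
    pt-stalk --function status --variable Threads_running --threshold 2000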

pt-mext looks at many samples of MySQL SHOW GLOBAL STATUS side-by-side. By default, SHOW GLOBAL STATUS shows counters accumulated since the MySQL instance started, so it is very helpful to see a delta of recent activity instead.
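
The usage straight from the tool's documentation: feed it a few samples of mysqladmin extended-status:

    # three samples, 10 seconds apart, printed side-by-side; -r shows the deltas
    pt-mext -r -- mysqladmin ext -i10 -c3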

The future: pt-query-digest will do query reviews; pt-stalk will do a “magical fault detection algorithm”. It's all open source and it's all on Launchpad at lp:percona-toolkit.

Tags: FOSDEM, Frédéric Descamps, Percona Toolkit

Replication features of 2011 by Sergey Petrunia

Posted on 5/2/2012, 9:32 am, by Colin Charles, under MariaDB, MySQL.

Sergey Petrunia of the MariaDB project & Monty Program.

MySQL 5.5 went GA at the end of 2010. MariaDB 5.3 hit RC towards the end of 2011 (beta in June 2011).

MySQL 5.5 is merged into Percona Server 5.5. It included semi-sync replication, slave fsync options, automatic relay log recovery, RBR slave type conversions (questionable whether this is useful or not), individual log flushing (very useful, but not many use it), replication heartbeat, and SHOW RELAYLOG EVENTS. About two-thirds of the audience use MySQL 5.5 in production, with only 2 people using semi-sync replication.
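
For reference, a hedged sketch of turning on semi-sync replication in MySQL 5.5 (plugin file names are the stock ones; adjust for your build):

    # on the master
    mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
              SET GLOBAL rpl_semi_sync_master_enabled = 1;"
    # on each slave; restart the I/O thread so it negotiates semi-sync with the master
    mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
              SET GLOBAL rpl_semi_sync_slave_enabled = 1;
              STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;"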

MariaDB 5.3 brings group commit in the binary log, a replication feature that has been merged into Percona Server 5.5. There are also checksums for binlog events, which are merged from MySQL 5.6. Sergey goes in-depth on group commit for the binary log. To find out a little more about MariaDB replication changes, see Replication in the Knowledgebase.

There are several implementations of group commit. Facebook started it, followed by MariaDB & Oracle. Percona Server 5.5 is GA so the feature is there, it's not in MySQL 5.6 (yet?), and MariaDB 5.3 is where it's at. It seems like the MariaDB implementation is the best so far – refer to the Facebook benchmark performed by Mark Callaghan.

Annotated RBR poses a compatibility problem. MariaDB 5.3 has annotate_rows, while MySQL 5.6 has rows_query event. They are different events. So you cannot have a MariaDB 5.3 master and a MySQL 5.6 slave at this moment. So MySQL 5.6 will have a flag to mark “ignorable” binlog events which will be merged into MariaDB and this will make binary logs compatible again.

There is now also optimized RBR for tables with no primary key.

MySQL 5.6 also has a crash-safe slave (replication information stored in tables) and a crash-safe master (binary log recovery if the server starts & sees that the binary log is corrupted). Parallel event execution is new in MySQL 5.6, and for Sergey it is the most important feature.

Pre-heating: There is mk-slave-prefetch (famous quote: “Please don’t use mk-slave-prefetch on #MySQL unless you are Facebook.”). There is replication booster by Yoshinori Matsunobu. There is a Python version of mk-slave-prefetch that Facebook uses.

Tags: FOSDEM, MariaDB, MySQL, replication, Sergey Petrunia

MySQL Creatively in a Sandbox by Giuseppe Maxia

Posted on 5/2/2012, 8:57 am, by Colin Charles, under MySQL.

Giuseppe Maxia of Continuent and long time creator of MySQL Sandbox.

Only works on Unix-like servers. Works with MySQL, Percona Server & MariaDB servers. Each MySQL server has its own data directory, port and socket – you can't share these.

To use it: make_sandbox foo.tar.gz. Then just do ./use.

$SANDBOX_HOME is ~/sandboxes. You can also create ~/opt/mysql/ and if you have the MySQL 5.0.91 binaries in that directory, you can just do "sb 5.0.91".
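
A sketch of both workflows (the tarball name and version numbers are just examples):

    # one-shot: build a sandbox straight from a tarball, then talk to it with its ./use client
    make_sandbox mysql-5.5.20-linux-x86_64.tar.gz
    ~/sandboxes/msb_5_5_20/use
    # shortcut: with the binaries already expanded under ~/opt/mysql/5.0.91
    sb 5.0.91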

Sandbox has features to start replication systems as well. You can have varying master/slave setups with varying versions as well (a good idea to test a MySQL master -> MariaDB slave setup before a migration).
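
For example, a master with two slaves out of a single tarball (a hedged sketch; the option name is as I recall it):

    # creates a master + 2 slaves, typically under ~/sandboxes/rsandbox_5_5_20
    make_replication_sandbox --how_many_slaves=2 mysql-5.5.20-linux-x86_64.tar.gz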

You can now also play with tungsten-sandbox, which is a great way to start playing with Tungsten Replicator (see documentation and tungsten-toolbox). There is apparently also a MySQL Cluster sandbox tool that someone is working on.


Tags: FOSDEM, giuseppe maxia, MySQL Sandbox

Optimizing your InnoDB buffer pool usage by Steve Hardy

Posted on 5/2/2012, 8:29 am, by Colin Charles, under MariaDB, MySQL.

Steve Hardy of Zarafa.

He covers work that has been done to make Zarafa better. Why do you optimise your buffer pool? To decrease your I/O load. How can you do it? Buy more RAM, use page compression, store less (smaller) data, rearrange data.

MariaDB or Percona Server allows you to inspect your buffer pool (unsure if this is now available in MySQL 5.6). Giuseppe in the audience says this is available in MySQL 5.6, but Steve used this on MariaDB 5.2.
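
In MySQL 5.6 this lives in INFORMATION_SCHEMA.INNODB_BUFFER_PAGE (Percona Server/XtraDB and MariaDB expose similar tables under slightly different names); a rough per-index breakdown looks like this – note the query itself is expensive on a large buffer pool:

    # which tables and indexes occupy the most buffer pool pages right now
    mysql -e "
      SELECT table_name, index_name, COUNT(*) AS pages,
             ROUND(SUM(data_size)/1024/1024, 1) AS data_mb
      FROM information_schema.INNODB_BUFFER_PAGE
      GROUP BY table_name, index_name
      ORDER BY pages DESC
      LIMIT 10;"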

Strategies to fix it: Make records smaller. Remove indexes if you can use others almost as efficiently. Make records that are accessed around the same time have a higher chance of being on the same page. Use page compression. Buy more RAM. Try Batched Key Access (BKA) in MariaDB 5.3+.

Best to view the presentation since there are specific examples that speak about how Zarafa solves their problems like a user trying to sort their email, etc.

Tags: FOSDEM, Steve Hardy

Practical MySQL Indexing guidelines by Stéphane Combaudon

Posted on 5/2/2012, 8:01 am, by Colin Charles, under MariaDB, MySQL.

Stéphane Combaudon of Dailymotion.

Index: a separate data structure to speed up SELECTs. Think of the index in a book. In MySQL, key = index. Consider that indexes are trees.

InnoDB’s clustered index – data is stored with the Primary Key (PK) so PK lookups are fast. Secondary keys hold the PK values. Designing InnoDB PK’s with care is critical for performance.

An index can filter and/or sort values. An index can contain all the fields needed for the query so you don't need to go to the table (a covering index).

MySQL only uses 1 index per table per query (not 100% true – OR clauses), so think of a composite index when you can. Can’t index TEXT fields (use a prefix). Same for BLOBs and long VARCHARs.
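
Hedged DDL sketches of both points, on hypothetical tables:

    mysql mydb -e "
      -- one composite index can both filter on customer_id and sort by created_at
      ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);
      -- TEXT/BLOB/long VARCHAR columns can only be indexed through a prefix
      ALTER TABLE articles ADD INDEX idx_body_prefix (body(20));"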

Indexes speed up queries, but they increase the size of your dataset and slow down writes. How big is the write slowdown? In a simple test by Stéphane, for in-memory workloads adding 2 keys makes write performance 2x worse; for on-disk workloads it's 40x worse. Never neglect the slowdown of your writes when you add an index. There is a graph in the slide deck.

What is a bad index? Unused indexes. Redundant indexes. Duplicate indexes.

Indexing is not an exact science, but guessing is probably not the best way to design indexes. Always check your assumptions – EXPLAIN does not tell you everything, time your queries with different index combinations, SHOW PROFILES is often valuable. Slow query log is a good place to start.
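
A small sketch of that checking loop on a hypothetical query: EXPLAIN it, run it, and compare actual timings with SHOW PROFILES:

    mysql mydb -e "
      SET profiling = 1;
      EXPLAIN SELECT customer_id, created_at FROM orders
              WHERE customer_id = 42 ORDER BY created_at\G
      SELECT customer_id, created_at FROM orders
              WHERE customer_id = 42 ORDER BY created_at;
      SHOW PROFILES;"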

Many slides with examples, so I hope Stephane posts the deck soon. If possible, try to sort & filter (an index is not always the best for sorting).

InnoDB’s clustered index is always covering. SELECT by PK is the fastest access with InnoDB.

An index can give you 3 benefits: filtering, sorting, covering.

See Userstats v2 - you need Percona Server or MariaDB 5.2+. See also pt-duplicate-key-checker to find redundant indexes easily. See also pt-index-usage to help answer questions not covered by userstats.
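
A hedged sketch of using userstats to spot unused indexes (the variable and column names below are from memory – check your server's documentation):

    # turn on user/index statistics collection (Percona Server / MariaDB 5.2+)
    mysql -e "SET GLOBAL userstat = 1;"
    # indexes that exist but have never been read since collection started
    mysql -e "
      SELECT s.table_schema, s.table_name, s.index_name
      FROM information_schema.statistics s
      LEFT JOIN information_schema.index_statistics i
        USING (table_schema, table_name, index_name)
      WHERE i.index_name IS NULL
        AND s.table_schema NOT IN ('mysql', 'information_schema')
        AND s.index_name <> 'PRIMARY'
      GROUP BY s.table_schema, s.table_name, s.index_name;"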

Tags: FOSDEM, Stéphane Combaudon

MySQL synchronous replication in practice with Galera by Oli Sennhauser

Posted on 5/2/2012, 7:30 am, by Colin Charles, under MySQL.

Oli Sennhauser of FromDual.

Synchronous multi-master replication with the Galera plugin. Your application connects to the load balancer and it redirects read/write traffic to the various MySQL Galera nodes. He has tested a setup with 17 SQL nodes, and you can have even more. Galera is good for scaling reads, and also a little bit for scaling writes.

If one node fails, the other two nodes still communicate with each other and the load balancer is aware of the failed node.

Why Galera? There is master-slave replication, but it's not multi-master, it's asynchronous and you can get inconsistencies. There is master-master replication, but it's asynchronous and can have inconsistencies and conflicts if you write on both nodes. MHA/MMM/Tungsten do not provide new technology but are based on MySQL replication. MySQL Cluster is another solution, but it's not InnoDB storage & you need new know-how for Cluster; Cluster also has problems with fast JOINs. There is active/passive failover clustering, but too often you have resources idling. Schooner is closed & expensive, so it's hard to know much about what they're doing.

Galera is synchronous & based on InnoDB (others should in theory be possible). Active-active, real multi-master topology. True parallel replication at the row level. Cluster nodes speak with each other. There is no slave lag. It won't lose transactions. Read/write scalability: write throughput can be improved, but it can't scale the way MySQL Cluster does.

Disadvantages? It's not native MySQL binaries/sources but a patch; Codership provides binaries. Higher probability of deadlocks. When you do a full sync (like when a node comes back after downtime), one node is blocked – this is why you need a minimum of a 3-node cluster. Also, if you do a full sync with a database larger than 50GB, the recommended method is to use mysqldump (which can be very slow); you can also use rsync. Percona is working on xtrabackup to do a full sync between nodes.

Setup: 3 nodes are recommended, or just 2 nodes and one for garbd (Galera Arbitrator Daemon). 2 nodes work, but pay attention to a split-brain scenario. Go to the Codership website, download their binaries and wsrep (the Galera plug-in). Create your own user on all nodes (don't use the default root user). You then need to configure my.cnf (there have been discussions about a galera.conf, but Oli just uses my.cnf). Galera works only with InnoDB, so in my.cnf make the default storage engine InnoDB (don't, for example, accidentally have MyISAM tables).
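
A hedged, minimal my.cnf fragment for one node (library path, addresses and SST credentials are placeholders; the very first node of a new cluster is bootstrapped with an empty gcomm:// address):

    cat >> /etc/my.cnf <<'EOF'
    [mysqld]
    default_storage_engine   = InnoDB
    binlog_format            = ROW
    innodb_autoinc_lock_mode = 2
    wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_name       = my_cluster
    wsrep_cluster_address    = gcomm://192.168.0.11   # empty gcomm:// on the first node only
    wsrep_sst_method         = mysqldump
    wsrep_sst_auth           = sst_user:sst_password
    EOF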

The demo binaries have a strange Galera start script, but it has not been easy to make it work. Just start MySQL normally, like you usually would.

SST is Snapshot State Transfer. It's the initial full sync between the first node and the other nodes. SST blocks the donor node (hence why you need 3 nodes). With Galera v2.0 there is also incremental state transfer (IST), which should be GA in February 2012; you get deltas as opposed to a full sync. You can configure which node will be the donor.

Currently there are 27 Galera-related variables in v1.1. You can list them with SHOW GLOBAL VARIABLES LIKE 'wsrep%';. The plugin itself, via wsrep_provider_options, has plenty of options & plenty of room for tuning. SHOW GLOBAL STATUS LIKE 'wsrep%'; currently returns 38 status fields in Galera v1.1.
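
For day-to-day checks, a handful of the status fields are the ones to watch first, for example:

    # is this node ready, part of the primary component, and how many nodes does it see?
    mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN
              ('wsrep_ready', 'wsrep_cluster_status', 'wsrep_cluster_size', 'wsrep_local_state_comment');"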

For load balancing, you can do it in your application (on your own). You can also use Connector/J, which provides load balancing. There is also a PHP mysqlnd plugin that works.

Tags: FOSDEM, Galera, Oli Sennhauser