{"id":9465,"date":"2017-04-26T08:14:21","date_gmt":"2017-04-26T06:14:21","guid":{"rendered":"https:\/\/thecamels.org\/prosty-szybki-backup-bazy-dzieki-percona-xtrabackup\/"},"modified":"2021-01-12T07:56:42","modified_gmt":"2021-01-12T06:56:42","slug":"simple-and-fast-database-backup-thanks-to-percona-xtrabackup","status":"publish","type":"post","link":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/","title":{"rendered":"Simple and fast database backup thanks to Percona XtraBackup"},"content":{"rendered":"\n<p>Backing up a small database is not a problem: a few seconds of downtime, scheduled at night, is easy to accept. The difficulty arises when we really have a lot of records to archive. An example is a MySQL 5.5 database containing about <strong>83 973 092 records<\/strong> and occupying nearly <strong>8.2 GB<\/strong>. How do we back up such a database quickly, so that users do not notice? <strong>Fortunately, there is a solution!<\/strong><\/p>\n\n\n\n<!--more-->\n\n\n\n<p>The simplest method is to use the <a href=\"https:\/\/thecamels.org\/en\/what-is-mysql-replication\/\"><span>MySQL database replication<\/span><\/a> mechanism: configure a second, slave database and back up from that. During the backup we can stop this server, copy its files or perform a dump with the <code>mysqldump<\/code> command, so only the clone of the main machine is loaded.<\/p>\n\n\n\n<p>And what if we cannot afford to buy a second server to serve as a backup base? The answer to this question is the <a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https:\/\/www.percona.com\/software\/mysql-database\/percona-xtrabackup\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><span>Percona XtraBackup<\/span><\/a> application, released under the GPLv2 license. It allows you to back up your database without downtime. 
It supports database servers such as Percona Server, MySQL, MariaDB, and Drizzle. This software allows for operations such as:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>quick backups<\/li><li>no interruption of transactions during the backup process<\/li><li>automatic verification of archived data<\/li><li>faster recovery time<\/li><li>incremental backups<\/li><li>simpler replication setup<\/li><li>backups without extra load on the server<\/li><li>moving tables between servers in real time<\/li><li>backing up only data stored in InnoDB<\/li><\/ul>\n\n\n\n<p><strong>Percona XtraBackup<\/strong> software is used by large social networks such as <strong>Facebook<\/strong>:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>&#8220;Facebook users create a vast amount of data every day. To make sure that data is stored reliably, we back up our databases daily. Facebook was an early adopter of incremental backup in XtraBackup.&#8221;<\/em> &#8211; Vamsi Ponnekanti, Facebook Engineering<\/p><\/blockquote>\n\n\n\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-custom ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of contents<\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#percona-xtrabackup-installation\" >Percona XtraBackup installation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#backing-up\" >Backing up<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" 
href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#backup-preparation\" >Backup preparation<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#rollback\" >Rollback<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#incremental-copies\" >Incremental copies<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#preparing-backup-from-incremental-backup\" >Preparing backup from incremental backup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#partial-backup\" >Partial backup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#limitations-of-xtrabackup\" >Limitations of XtraBackup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#statistics\" >Statistics<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#summary\" >Summary<\/a><\/li><\/ul><\/nav><\/div>\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"percona-xtrabackup-installation\"><\/span>Percona XtraBackup installation<span 
class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Installation of the software is very easy with <a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https:\/\/www.percona.com\/downloads\/XtraBackup\/LATEST\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><span>binary packages<\/span><\/a> available for Red Hat, CentOS, Debian and Ubuntu. There are also <a target=\"_blank\" rel=\"noopener noreferrer\" href=\"http:\/\/www.percona.com\/docs\/wiki\/repositories:start\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><span>repositories<\/span><\/a> for Yum and Apt.<\/p>\n\n\n\n<p>The installation begins with adding the application&#8217;s repository to the system. Add an entry to the <code>\/etc\/yum.repos.d\/percona.repo<\/code> file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;percona]\nname = CentOS $releasever - Percona\nbaseurl=http:\/\/repo.percona.com\/centos\/$releasever\/os\/$basearch\/\nenabled = 1\ngpgkey = file:\/\/\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-percona\ngpgcheck = 1<\/code><\/pre>\n\n\n\n<p>We still need to copy the <a target=\"_blank\" rel=\"noopener noreferrer\" href=\"http:\/\/www.percona.com\/downloads\/RPM-GPG-KEY-percona\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"><span>GPG key<\/span><\/a> and save it to <code>\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-percona<\/code>. From this point on, we can install the software by issuing the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>yum install xtrabackup<\/code><\/pre>\n\n\n\n<p>The software consists of three applications:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>xtrabackup<\/strong> &#8211; software written in C that copies InnoDB and XtraDB data<\/li><li><strong>innobackupex<\/strong> &#8211; a script that covers what the previous program lacks. 
Copies the entire contents of the MySQL server<\/li><li><strong>tar4ibd<\/strong> &#8211; packs InnoDB data into tar format<\/li><\/ul>\n\n\n\n<p>Three similar commands are installed to perform backups for the relevant database server versions:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>xtrabackup<\/strong> &#8211; Percona Server 5.1 &amp; MySQL 5.1 w\/InnoDB plugin<\/li><li><strong>xtrabackup_51<\/strong> &#8211; Percona Server 5.0, MySQL 5.0 &amp; MySQL 5.1<\/li><li><strong>xtrabackup_55<\/strong> &#8211; Percona Server 5.5 &amp; MySQL 5.5<\/li><\/ul>\n\n\n\n<p>Time to perform the first backup with xtrabackup.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"backing-up\"><\/span>Backing up<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The quickest way to back up is to issue the <code>xtrabackup<\/code> command with the <code>--backup<\/code> parameter. In our case it is a 5.5 server, so we will use the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --datadir=\/var\/lib\/mysql\/ --target-dir=\/backup<\/code><\/pre>\n\n\n\n<p>In addition, you need to specify the target directory (<code>--target-dir<\/code>) in which to save the backup, and the directory containing the MySQL server data (<code>--datadir<\/code>). 
These options can be saved in <code>\/etc\/my.cnf<\/code>, so that they do not have to be given as parameters every time:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;mysqld]\ndatadir=\/var\/lib\/mysql\/\n&#91;xtrabackup]\ntarget_dir=\/backup<\/code><\/pre>\n\n\n\n<p>However, if we encounter the error:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55: ambiguous option '--innodb=FORCE' (innodb_adaptive_hash_index, innodb_doublewrite_file)<\/code><\/pre>\n\n\n\n<p>it means that in the <code>\/etc\/my.cnf<\/code> file we have set the option <code>innodb=force<\/code>. A workaround is to replace it with <code>loose-innodb=force<\/code>. This error should be fixed in later versions of the application. If everything went well, we will see the result of the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 version 1.6.2 for Percona Server 5.5.9 Linux (x86_64) (revision id: 274)\nxtrabackup: uses posix_fadvise().\nxtrabackup: cd to \/var\/lib\/mysql\/\nxtrabackup: Target instance is assumed as followings.\nxtrabackup:   innodb_data_home_dir = .\/\nxtrabackup:   innodb_data_file_path = ibdata1:10M:autoextend\nxtrabackup:   innodb_log_group_home_dir = .\/\nxtrabackup:   innodb_log_files_in_group = 2\nxtrabackup:   innodb_log_file_size = 5242880\n110812 11:11:41 InnoDB: Using Linux native AIO\n&gt;&gt; log scanned up to (274992220655)\n&#91;01] Copying .\/ibdata1 \n     to \/backup\/ibdata1\n&gt;&gt; log scanned up to (274992326877)\n&gt;&gt; log scanned up to (274992483657)\n&gt;&gt; log scanned up to (274992585754)\n&gt;&gt; log scanned up to (274992665719)\n&gt;&gt; log scanned up to (274992759458)\n&gt;&gt; log scanned up to (274992852501)\n&gt;&gt; log scanned up to (274992865265)\n&gt;&gt; log scanned up to (274993036879)\n&gt;&gt; log scanned up to (274993127087)\n&gt;&gt; log scanned up to (274993241361)\n&gt;&gt; log scanned up to (274993291339)\n&gt;&gt; log scanned up to (274993427789)\n&gt;&gt; log scanned up to 
(274993587569)\n&gt;&gt; log scanned up to (274993678820)\n&gt;&gt; log scanned up to (274993789123)\n&gt;&gt; log scanned up to (274993835340)\n&gt;&gt; log scanned up to (274994021991)\n&gt;&gt; log scanned up to (274994033880)\n&gt;&gt; log scanned up to (274994268323)\n&gt;&gt; log scanned up to (274994376271)\n&gt;&gt; log scanned up to (274994462281)\n&gt;&gt; log scanned up to (274994529600)\n&gt;&gt; log scanned up to (274994645157)\n&gt;&gt; log scanned up to (274994727307)\n&gt;&gt; log scanned up to (274994796009)\n&gt;&gt; log scanned up to (274994898785)\n&gt;&gt; log scanned up to (274994973583)\n&gt;&gt; log scanned up to (274995041628)\n&gt;&gt; log scanned up to (274995159543)\n&gt;&gt; log scanned up to (274995244693)\n&gt;&gt; log scanned up to (274995341883)\n&gt;&gt; log scanned up to (274995439890)\n&gt;&gt; log scanned up to (274995517633)\n&gt;&gt; log scanned up to (274995608306)\n&gt;&gt; log scanned up to (274995730882)\n&gt;&gt; log scanned up to (274995798412)\n&gt;&gt; log scanned up to (274995877561)\n&gt;&gt; log scanned up to (274995983804)\n&gt;&gt; log scanned up to (274996075140)\n&#91;01]        ...done\n&gt;&gt; log scanned up to (274996152436)\nxtrabackup: The latest check point (for incremental): '274994706269'\n&gt;&gt; log scanned up to (274996152436)\nxtrabackup: Stopping log copying thread.\nxtrabackup: Transaction log of lsn (274991525290) to (274996152436) was copied.<\/code><\/pre>\n\n\n\n<p>It took about 1 minute to back up a database of <strong>83,973,092 records<\/strong> taking up about <strong>8.2 GB<\/strong>. By default, the software tries to back up the database as quickly as possible, which may overload the machine, so you can limit it with the <code>--throttle<\/code> parameter. Throttling too aggressively, however, may prevent the copy from ever completing, because new records arrive at the database server faster than they are archived.<\/p>\n\n\n\n<p>Unfortunately, such a copy is not yet ready for use. 
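To illustrate the `--throttle` parameter mentioned above, a hedged sketch of a throttled run, assuming the same binary and paths as in the example; the value 40 is illustrative, not a recommendation:

```shell
# Cap the backup's I/O rate so the copy does not saturate the server.
# --throttle limits the number of I/O operations per second; the value 40
# and the paths below are illustrative assumptions.
xtrabackup_55 --backup --throttle=40 \
    --datadir=/var/lib/mysql/ --target-dir=/backup
```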
Before restoring data, it must first be specially prepared. Note, however, that only the InnoDB data has been copied. To back up the entire database, use the innobackupex script. It first launches the xtrabackup program; once the data has been copied, it locks the tables with <code>FLUSH TABLES WITH READ LOCK<\/code>, copies the MyISAM data, and then releases the lock.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"backup-preparation\"><\/span>Backup preparation<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>During the backup some data may have changed, so the backup may be inconsistent. Trying to run the database on such files will cause it to crash. That&#8217;s why XtraBackup uses <em>crash-recovery<\/em> to prepare the data.<\/p>\n\n\n\n<p>The InnoDB engine keeps a so-called <em>redo log<\/em> (transaction log), a record of every change made to the data. When the database starts, it reads this log and performs two steps: it applies all committed transactions recorded in the log, and rolls back all changes that were never committed.<\/p>\n\n\n\n<p>Data preparation by xtrabackup works in a similar way, using the built-in InnoDB engine. 
To start this process we issue a command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --prepare --target-dir=\/backup<\/code><\/pre>\n\n\n\n<p>The result should be a similar fragment of the log:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 version 1.6.2 for Percona Server 5.5.9 Linux (x86_64) (revision id: 274)\nxtrabackup: cd to \/backup\nxtrabackup: This target seems to be not prepared yet.\nxtrabackup: xtrabackup_logfile detected: size=5210112, start_lsn=(274991525290)\nxtrabackup: Temporary instance for recovery is set as followings.\nxtrabackup:   innodb_data_home_dir = .\/\nxtrabackup:   innodb_data_file_path = ibdata1:10M:autoextend\nxtrabackup:   innodb_log_group_home_dir = .\/\nxtrabackup:   innodb_log_files_in_group = 1\nxtrabackup:   innodb_log_file_size = 5210112\n110812 11:33:21 InnoDB: Using Linux native AIO\nxtrabackup: Starting InnoDB instance for recovery.\nxtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)\n110812 11:33:21 InnoDB: The InnoDB memory heap is disabled\n110812 11:33:21 InnoDB: Mutexes and rw_locks use GCC atomic builtins\n110812 11:33:21 InnoDB: Compressed tables use zlib 1.2.3\n110812 11:33:21 InnoDB: Using Linux native AIO\n110812 11:33:21 InnoDB: Warning: innodb_file_io_threads is deprecated. 
Please use innodb_read_io_threads and innodb_write_io_threads instead\n110812 11:33:21 InnoDB: Initializing buffer pool, size = 100.0M\n\n110812 11:33:21 InnoDB: Completed initialization of buffer pool\n110812 11:33:21 InnoDB: highest supported file format is Barracuda.\nInnoDB: Log scan progressed past the checkpoint lsn 274991525290\n110812 11:33:21  InnoDB: Database was not shut down normally!\nInnoDB: Starting crash recovery.\nInnoDB: Reading tablespace information from the .ibd files...\nInnoDB: Doing recovery: scanned up to log sequence number 274996152436 (99 %)\n110812 11:33:23  InnoDB: Starting an apply batch of log records to the database...\nInnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 \nInnoDB: Apply batch completed\n110812 11:33:26  InnoDB: Waiting for the background threads to start\n110812 11:33:27 Percona XtraDB (http:\/\/www.percona.com) 1.1.5-20.0 started; log sequence number 274996152436\n\n&#91;notice (again)]\n  If you use binary log and don't use any hack of group commit,\n  the binary log position seems to be:\n\nxtrabackup: starting shutdown with innodb_fast_shutdown = 1\n110812 11:33:27  InnoDB: Starting shutdown...\n110812 11:33:32  InnoDB: Shutdown completed; log sequence number 274996207841<\/code><\/pre>\n\n\n\n<p>The last line indicates that the backup has been correctly prepared for restoration. This process can be run on a separate machine, so as not to load the one on which we perform the backup. Just remember to use the same version of Percona XtraBackup. While preparing, xtrabackup launches a special version of the <strong>InnoDB<\/strong> engine.<\/p>\n\n\n\n<p>The backup is ready for recovery. 
However, it is worth taking one more step to make restoring the database much faster: run the same <code>--prepare<\/code> command a second time. The first run makes the data correct and consistent, but the InnoDB log files are not created. After restoring such a copy, the MySQL server would have to create the log files itself, which for large databases takes some time. Running the command again creates these files.<\/p>\n\n\n\n<p>This time we should see in the logs:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>110812 13:43:49  InnoDB: Log file .\/ib_logfile0 did not exist: new to be created\nInnoDB: Setting log file .\/ib_logfile0 size to 5 MB\nInnoDB: Database physically writes the file full: wait...\n110812 13:43:49  InnoDB: Log file .\/ib_logfile1 did not exist: new to be created\nInnoDB: Setting log file .\/ib_logfile1 size to 5 MB\nInnoDB: Database physically writes the file full: wait..<\/code><\/pre>\n\n\n\n<p>It&#8217;s time to restore data.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"rollback\"><\/span>Rollback<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The software has no built-in restore option, so you must do this yourself with any tool you like. For example, you can use the rsync command to copy the files to the right place, and then make sure they have the appropriate permissions.<\/p>\n\n\n\n<p>Please note that xtrabackup copies only InnoDB data, so MyISAM data, table definitions (.frm files) and everything else needed to run the server must be restored separately. 
To restore the InnoDB data we will use the commands:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd \/backup\nrsync -rvt --exclude 'xtrabackup_checkpoints' --exclude 'xtrabackup_logfile' .\/ \/var\/lib\/mysql\nchown -R mysql:mysql \/var\/lib\/mysql\/<\/code><\/pre>\n\n\n\n<p>After restoring the remaining data, you can start the database.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"incremental-copies\"><\/span>Incremental copies<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>Both applications (xtrabackup and innobackupex) support incremental backups, meaning they can archive only the data that has changed since the last backup. This lets us implement an archiving policy of, for example, a full backup once a week and incremental backups daily. We start with a full backup, using the command we already know:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --datadir=\/var\/lib\/mysql\/ --target-dir=\/backup\/base<\/code><\/pre>\n\n\n\n<p>In addition to copying the data, the application creates a file called xtrabackup_checkpoints, which contains the following data:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>backup_type = full-backuped\nfrom_lsn = 0\nto_lsn = 1291135<\/code><\/pre>\n\n\n\n<p>It records the point up to which the backup was made; when you make an incremental backup, the application starts copying data from there. To start such an archiving run, we issue the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --target-dir=\/backup\/inc1 --incremental-basedir=\/backup\/base --datadir=\/var\/lib\/mysql\/<\/code><\/pre>\n\n\n\n<p>The <code>\/backup\/inc1<\/code> directory will now contain *.delta files, which contain the data changed since the last full backup. 
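The weekly-full/daily-incremental policy mentioned above could be driven by a small wrapper script. This is only a sketch under assumptions: the directory layout, the `xtrabackup_55` binary name and the Sunday-full schedule are illustrative, not from the article.

```shell
# Sketch of a backup scheduler: full copy on Sundays, incremental otherwise.
# BACKUP_ROOT, DATADIR and the schedule are illustrative assumptions.
BACKUP_ROOT=/backup
DATADIR=/var/lib/mysql

backup_mode() {
    # $1 is the day of the week as printed by `date +%w` (0 = Sunday).
    if [ "$1" -eq 0 ]; then
        echo full
    else
        echo incremental
    fi
}

run_backup() {
    if [ "$(backup_mode "$(date +%w)")" = full ]; then
        xtrabackup_55 --backup --datadir="$DATADIR" \
            --target-dir="$BACKUP_ROOT/base"
    else
        # Base the incremental copy on the most recently created backup dir.
        last=$(ls -1dt "$BACKUP_ROOT"/*/ | head -n 1)
        xtrabackup_55 --backup --datadir="$DATADIR" \
            --target-dir="$BACKUP_ROOT/inc-$(date +%F)" \
            --incremental-basedir="$last"
    fi
}
```

A cron entry such as `30 2 * * * /usr/local/bin/db-backup.sh` (a hypothetical path) would then run it nightly.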
Looking at the file xtrabackup_checkpoints we will see the entry:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>backup_type = incremental\nfrom_lsn = 1291135\nto_lsn = 1291340<\/code><\/pre>\n\n\n\n<p>The next incremental copy is based on the previous one, using the same command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --target-dir=\/backup\/inc2 --incremental-basedir=\/backup\/inc1 --datadir=\/var\/lib\/mysql\/<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"preparing-backup-from-incremental-backup\"><\/span>Preparing backup from incremental backup<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The <code>--prepare<\/code> step is not the same as for a complete copy. Previously it was enough to issue the same command twice to get a usable copy of the InnoDB data; here it looks a bit different. In the example above, we made three backups:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><code>\/backup\/base<\/code> &#8211; full copy<\/li><li><code>\/backup\/inc1<\/code> &#8211; first incremental copy<\/li><li><code>\/backup\/inc2<\/code> &#8211; second incremental copy<\/li><\/ul>\n\n\n\n<p>We start by issuing the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --prepare --apply-log-only --target-dir=\/backup\/base<\/code><\/pre>\n\n\n\n<p>Now we apply the incremental copies, one by one, on top of the full backup:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --prepare --apply-log-only --target-dir=\/backup\/base --incremental-dir=\/backup\/inc1\nxtrabackup_55 --prepare --apply-log-only --target-dir=\/backup\/base --incremental-dir=\/backup\/inc2<\/code><\/pre>\n\n\n\n<p>These commands apply the changes from each incremental copy onto the base. 
The finished copy will be located in the \/backup\/base directory, and we should see the result of the last command on the screen:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>incremental backup from 1291135 is enabled.\nxtrabackup: cd to \/backup\/base\/\nxtrabackup: This target seems to be already prepared.\nxtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(1291340)\nApplying \/backup\/inc1\/ibdata1.delta ...\nApplying \/backup\/inc1\/test\/table1.ibd.delta ...<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"partial-backup\"><\/span>Partial backup<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The software supports partial backups when innodb_file_per_table is enabled on the server. There are two ways to approach this. As an example, we will back up a database named <strong>test<\/strong>, which contains two tables: <strong>t1<\/strong> and <strong>t2<\/strong>.<\/p>\n\n\n\n<p>The first way is to use the <code>--tables<\/code> switch. This parameter supports regular expressions, so to back up the selected database we can use the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --datadir=\/var\/lib\/mysql --target-dir=\/backup --tables=\"^test&#91;.].*\"<\/code><\/pre>\n\n\n\n<p>To archive a single table, use the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>xtrabackup_55 --backup --datadir=\/var\/lib\/mysql --target-dir=\/backup --tables=\"^test&#91;.]t1\"<\/code><\/pre>\n\n\n\n<p>Similar results can be obtained with the <code>--tables-file<\/code> switch, which points to a file listing the table names instead of giving them on the command line. The names are case-sensitive. The next step is to prepare the copy for recovery, using the <code>--prepare<\/code> parameter.<\/p>\n\n\n\n<p>The command is issued in the same way as in the previous steps, but the logs will contain errors saying that the other tables do not exist. 
This behavior is normal, because those tables were not copied, so such errors are nothing to worry about. Example result of the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>InnoDB: Reading tablespace information from the .ibd files...\n101107 22:31:30  InnoDB: Error: table 'test1\/t'\nInnoDB: in InnoDB data dictionary has tablespace id 6,\nInnoDB: but tablespace with that id or name does not exist. It will be removed from<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"limitations-of-xtrabackup\"><\/span>Limitations of XtraBackup<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The software has a number of limitations that must be kept in mind when making a backup; knowing them avoids many disappointments later on. The limitations are:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>If the xtrabackup_logfile file is larger than 4 GB, the application crashes during the <code>--prepare<\/code> step and fails to perform the job. The error affects only the 32-bit version of xtrabackup; the limit also appears in older versions of the 64-bit program.<\/li><li>Currently, the software does not create the InnoDB logs (e.g. ib_logfile0) during the first <code>--prepare<\/code> run. To create these files you need to perform this step twice.<\/li><li>The software copies only InnoDB data and logs. For a full backup of the database server, you have to manually copy the table definitions (.frm files), MyISAM data, users, privileges, i.e. everything that is not stored in InnoDB. The innobackupex script was created to make a copy of this data.<\/li><li>xtrabackup does not recognize the very old <code>--set-variable<\/code> syntax in the my.cnf file.<\/li><li>In some cases, the prepared data may be damaged if <code>--target-dir<\/code> points to an NFS share mounted with the async option. 
If you copy the data into such a directory and then prepare the files from another server that also mounts the share asynchronously, you may damage the backup files.<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"statistics\"><\/span>Statistics<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p>The XtraBackup software also allows you to analyze InnoDB files in read-only mode, which lets you collect various statistics about the database. The <code>--stats<\/code> option is used for this. To reduce the load on the server, only selected tables can be analyzed using <code>--tables<\/code>.<\/p>\n\n\n\n<p>The analysis can be run on a running server, but it may not always succeed. It is recommended to run it on a backup after the <code>--prepare<\/code> step. Example result of the command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&lt;INDEX STATISTICS&gt;\n  table: test\/table1, index: PRIMARY, space id: 12, root page 3\n  estimated statistics in dictionary:\n    key vals: 25265338, leaf pages 497839, size pages 498304\n  real statistics:\n     level 2 pages: pages=1, data=5395 bytes, data\/pages=32%\n     level 1 pages: pages=415, data=6471907 bytes, data\/pages=95%\n        leaf pages: recs=25958413, pages=497839, data=7492026403 bytes, data\/pages=91%<\/code><\/pre>\n\n\n\n<p>A helper script has also been prepared, which formats the generated data into a form more readable for the user. 
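For instance, the statistics run could be piped straight into that formatter. A sketch, assuming the same `xtrabackup_55` binary and the table pattern used earlier in the article:

```shell
# Analyze only the tables of the `test` database and tabulate the output.
# Binary name, datadir and table pattern are carried over from the examples
# above; adjust them to your own setup.
xtrabackup_55 --stats --datadir=/var/lib/mysql --tables="^test[.].*" \
    | perl tabulate-xtrabackup-stats.pl
```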
Below is the content of the script tabulate-xtrabackup-stats.pl:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>#!\/usr\/bin\/env perl\n    use strict;\n    use warnings FATAL =&gt; 'all';\n    my $script_version = \"0.1\";\n     \n    my $PG_SIZE = 16_384; # InnoDB defaults to 16k pages, change if needed.\n    my ($cur_idx, $cur_tbl);\n    my (%idx_stats, %tbl_stats);\n    my ($max_tbl_len, $max_idx_len) = (0, 0);\n    while ( my $line = &lt;&gt; ) {\n       if ( my ($t, $i) = $line =~ m\/table: (.*), index: (.*), space id:\/ ) {\n          $t =~ s!\/!.!;\n          $cur_tbl = $t;\n          $cur_idx = $i;\n          if ( length($i) &gt; $max_idx_len ) {\n             $max_idx_len = length($i);\n          }\n          if ( length($t) &gt; $max_tbl_len ) {\n             $max_tbl_len = length($t);\n          }\n       }\n       elsif ( my ($kv, $lp, $sp) = $line =~ m\/key vals: (\\d+), \\D*(\\d+), \\D*(\\d+)\/ ) {\n          @{$idx_stats{$cur_tbl}-&gt;{$cur_idx}}{qw(est_kv est_lp est_sp)} = ($kv, $lp, $sp);\n          $tbl_stats{$cur_tbl}-&gt;{est_kv} += $kv;\n          $tbl_stats{$cur_tbl}-&gt;{est_lp} += $lp;\n          $tbl_stats{$cur_tbl}-&gt;{est_sp} += $sp;\n       }\n       elsif ( my ($l, $pages, $bytes) = $line =~ m\/(?:level (\\d+)|leaf) pages:.*pages=(\\d+), data=(\\d+) bytes\/ ) {\n          $l ||= 0;\n          $idx_stats{$cur_tbl}-&gt;{$cur_idx}-&gt;{real_pages} += $pages;\n          $idx_stats{$cur_tbl}-&gt;{$cur_idx}-&gt;{real_bytes} += $bytes;\n          $tbl_stats{$cur_tbl}-&gt;{real_pages} += $pages;\n          $tbl_stats{$cur_tbl}-&gt;{real_bytes} += $bytes;\n       }\n    }\n     \n    my $hdr_fmt = \"%${max_tbl_len}s %${max_idx_len}s %9s %10s %10s\\n\";\n    my @headers = qw(TABLE INDEX TOT_PAGES FREE_PAGES PCT_FULL);\n    printf $hdr_fmt, @headers;\n     \n    my $row_fmt = \"%${max_tbl_len}s %${max_idx_len}s %9d %10d %9.1f%%\\n\";\n    foreach my $t ( sort keys %tbl_stats ) {\n       my $tbl = $tbl_stats{$t};\n       printf $row_fmt, $t, 
\"\", $tbl-&gt;{est_sp}, $tbl-&gt;{est_sp} - $tbl-&gt;{real_pages},\n          $tbl-&gt;{real_bytes} \/ ($tbl-&gt;{real_pages} * $PG_SIZE) * 100;\n       foreach my $i ( sort keys %{$idx_stats{$t}} ) {\n          my $idx = $idx_stats{$t}-&gt;{$i};\n          printf $row_fmt, $t, $i, $idx-&gt;{est_sp}, $idx-&gt;{est_sp} - $idx-&gt;{real_pages},\n             $idx-&gt;{real_bytes} \/ ($idx-&gt;{real_pages} * $PG_SIZE) * 100;\n       }\n    }<\/code><\/pre>\n\n\n\n<p>The previously generated data will then be formatted like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>          TABLE           INDEX TOT_PAGES FREE_PAGES   PCT_FULL\nart.link_out104                    832383      38561      86.8%\nart.link_out104         PRIMARY    498304         49      91.9%\nart.link_out104       domain_id     49600       6230      76.9%\nart.link_out104     domain_id_2     26495       3339      89.1%\nart.link_out104 from_message_id     28160        142      96.3%\nart.link_out104    from_site_id     38848       4874      79.4%\nart.link_out104   revert_domain    153984      19276      71.4%\nart.link_out104    site_message     36992       4651      83.4%<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\"><span class=\"ez-toc-section\" id=\"summary\"><\/span>Summary<span class=\"ez-toc-section-end\"><\/span><\/h3>\n\n\n\n<p><strong>Percona XtraBackup<\/strong> is a very interesting solution compared to the others available on the Internet. Simply and inexpensively, we can implement fast data archiving and use the same technology as large websites such as Facebook.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Backup of a small database is not a problem. A few seconds of downtime in the application or at night is not a big problem for us. The difficulty arises when we really have a lot of records to archive. 
Such an example can be MySQL 5.5 database, which contains about 83 973 092 records,&hellip;<\/p>\n","protected":false},"author":1,"featured_media":16972,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[150],"tags":[699,707],"class_list":["post-9465","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","tag-server-administration","tag-servers"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Simple and fast database backup thanks to Percona XtraBackup<\/title>\n<meta name=\"description\" content=\"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. Check more on our blog!\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Simple and fast database backup thanks to Percona XtraBackup\" \/>\n<meta property=\"og:description\" content=\"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. 
Check more on our blog!\" \/>\n<meta property=\"og:url\" content=\"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/?utm_source=dark&amp;utm_medium=social&amp;utm_campaign=open-graph\" \/>\n<meta property=\"og:site_name\" content=\"Thecamels.org\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/thecamels.org\/\" \/>\n<meta property=\"article:published_time\" content=\"2017-04-26T06:14:21+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-01-12T06:56:42+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/32.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"627\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Kamil Porembi\u0144ski\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/32.png\" \/>\n<meta name=\"twitter:creator\" content=\"@thecamelsorg\" \/>\n<meta name=\"twitter:site\" content=\"@thecamelsorg\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kamil Porembi\u0144ski\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"17 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/\"},\"author\":{\"name\":\"Kamil Porembi\u0144ski\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#\\\/schema\\\/person\\\/b7bd2aec5f506a68323eb40c86d38a32\"},\"headline\":\"Simple and fast database backup thanks to Percona XtraBackup\",\"datePublished\":\"2017-04-26T06:14:21+00:00\",\"dateModified\":\"2021-01-12T06:56:42+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/\"},\"wordCount\":1932,\"publisher\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2017\\\/04\\\/33.png\",\"keywords\":[\"server administration\",\"servers\"],\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/\",\"url\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/\",\"name\":\"Simple and fast database backup thanks to Percona 
XtraBackup\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2017\\\/04\\\/33.png\",\"datePublished\":\"2017-04-26T06:14:21+00:00\",\"dateModified\":\"2021-01-12T06:56:42+00:00\",\"description\":\"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. Check more on our blog!\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#primaryimage\",\"url\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2017\\\/04\\\/33.png\",\"contentUrl\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2017\\\/04\\\/33.png\",\"width\":1200,\"height\":627,\"caption\":\"Prosty i szybki backup bazy dzi\u0119ki Percona XtraBackup\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"[HOME]\",\"item\":\"https:\\\/\\\/thecamels.org\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog\",\"item\":\"https:\\\/\\\/thecamels.org\\\/en\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Simple and 
fast database backup thanks to Percona XtraBackup\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/thecamels.org\\\/en\\\/\",\"name\":\"Thecamels.org\",\"description\":\"Hosting SSD NVMe z certyfikatem SSL i HTTP\\\/2. Administracja serwerami, skalowanie infrastruktury. Mamy g\u0142ow\u0119 do serwer\u00f3w i zadbamy o Twoj\u0105 stron\u0119 w sieci.\",\"publisher\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/thecamels.org\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#organization\",\"name\":\"Thecamels\",\"url\":\"https:\\\/\\\/thecamels.org\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/TC-logo-nowe.png\",\"contentUrl\":\"https:\\\/\\\/thecamels.org\\\/wp-content\\\/uploads\\\/2018\\\/09\\\/TC-logo-nowe.png\",\"width\":826,\"height\":106,\"caption\":\"Thecamels\"},\"image\":{\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/thecamels.org\\\/\",\"https:\\\/\\\/x.com\\\/thecamelsorg\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/the-camels\",\"https:\\\/\\\/www.youtube.com\\\/channel\\\/UC01xYBZbIAApTuPWuqgGE4Q\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/thecamels.org\\\/en\\\/#\\\/schema\\\/person\\\/b7bd2aec5f506a68323eb40c86d38a32\",\"name\":\"Kamil 
Porembi\u0144ski\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g\",\"caption\":\"Kamil Porembi\u0144ski\"},\"description\":\"Architekt systemowy, administrator Linux, a czasem Windows. Lubi tematyk\u0119 security. Obecnie w\u0142a\u015bciciel firmy thecamels.org, zajmuj\u0105cej si\u0119 projektowaniem system\u00f3w o wysokiej dost\u0119pno\u015bci. Zajmuje si\u0119 skalowaniem du\u017cych aplikacji internetowych, wspieraniem startup\u00f3w w kwestiach serwerowych. Po godzinach zajmuje si\u0119 \u017ceglowaniem po morzach, lataniem, fotografi\u0105 i podr\u00f3\u017cami.\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Simple and fast database backup thanks to Percona XtraBackup","description":"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. Check more on our blog!","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/","og_locale":"en_US","og_type":"article","og_title":"Simple and fast database backup thanks to Percona XtraBackup","og_description":"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. 
Check more on our blog!","og_url":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/?utm_source=dark&utm_medium=social&utm_campaign=open-graph","og_site_name":"Thecamels.org","article_publisher":"https:\/\/www.facebook.com\/thecamels.org\/","article_published_time":"2017-04-26T06:14:21+00:00","article_modified_time":"2021-01-12T06:56:42+00:00","og_image":[{"width":1200,"height":627,"url":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/32.png","type":"image\/png"}],"author":"Kamil Porembi\u0144ski","twitter_card":"summary_large_image","twitter_image":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/32.png","twitter_creator":"@thecamelsorg","twitter_site":"@thecamelsorg","twitter_misc":{"Written by":"Kamil Porembi\u0144ski","Est. reading time":"17 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#article","isPartOf":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/"},"author":{"name":"Kamil Porembi\u0144ski","@id":"https:\/\/thecamels.org\/en\/#\/schema\/person\/b7bd2aec5f506a68323eb40c86d38a32"},"headline":"Simple and fast database backup thanks to Percona XtraBackup","datePublished":"2017-04-26T06:14:21+00:00","dateModified":"2021-01-12T06:56:42+00:00","mainEntityOfPage":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/"},"wordCount":1932,"publisher":{"@id":"https:\/\/thecamels.org\/en\/#organization"},"image":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#primaryimage"},"thumbnailUrl":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/33.png","keywords":["server 
administration","servers"],"articleSection":["Blog"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/","url":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/","name":"Simple and fast database backup thanks to Percona XtraBackup","isPartOf":{"@id":"https:\/\/thecamels.org\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#primaryimage"},"image":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#primaryimage"},"thumbnailUrl":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/33.png","datePublished":"2017-04-26T06:14:21+00:00","dateModified":"2021-01-12T06:56:42+00:00","description":"Get to know the Percona XtraBackup application and start backup of your database quickly, easily and conveniently. Check more on our blog!","breadcrumb":{"@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#primaryimage","url":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/33.png","contentUrl":"https:\/\/thecamels.org\/wp-content\/uploads\/2017\/04\/33.png","width":1200,"height":627,"caption":"Prosty i szybki backup bazy dzi\u0119ki Percona 
XtraBackup"},{"@type":"BreadcrumbList","@id":"https:\/\/thecamels.org\/en\/simple-and-fast-database-backup-thanks-to-percona-xtrabackup\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"[HOME]","item":"https:\/\/thecamels.org\/en\/"},{"@type":"ListItem","position":2,"name":"Blog","item":"https:\/\/thecamels.org\/en\/blog\/"},{"@type":"ListItem","position":3,"name":"Simple and fast database backup thanks to Percona XtraBackup"}]},{"@type":"WebSite","@id":"https:\/\/thecamels.org\/en\/#website","url":"https:\/\/thecamels.org\/en\/","name":"Thecamels.org","description":"Hosting SSD NVMe z certyfikatem SSL i HTTP\/2. Administracja serwerami, skalowanie infrastruktury. Mamy g\u0142ow\u0119 do serwer\u00f3w i zadbamy o Twoj\u0105 stron\u0119 w sieci.","publisher":{"@id":"https:\/\/thecamels.org\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/thecamels.org\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/thecamels.org\/en\/#organization","name":"Thecamels","url":"https:\/\/thecamels.org\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/thecamels.org\/en\/#\/schema\/logo\/image\/","url":"https:\/\/thecamels.org\/wp-content\/uploads\/2018\/09\/TC-logo-nowe.png","contentUrl":"https:\/\/thecamels.org\/wp-content\/uploads\/2018\/09\/TC-logo-nowe.png","width":826,"height":106,"caption":"Thecamels"},"image":{"@id":"https:\/\/thecamels.org\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/thecamels.org\/","https:\/\/x.com\/thecamelsorg","https:\/\/www.linkedin.com\/company\/the-camels","https:\/\/www.youtube.com\/channel\/UC01xYBZbIAApTuPWuqgGE4Q"]},{"@type":"Person","@id":"https:\/\/thecamels.org\/en\/#\/schema\/person\/b7bd2aec5f506a68323eb40c86d38a32","name":"Kamil 
Porembi\u0144ski","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4b2d40949e6453ecdd7663e9a61fac171f31810a28bdc5be0c4d7eca89f41571?s=96&d=identicon&r=g","caption":"Kamil Porembi\u0144ski"},"description":"Architekt systemowy, administrator Linux, a czasem Windows. Lubi tematyk\u0119 security. Obecnie w\u0142a\u015bciciel firmy thecamels.org, zajmuj\u0105cej si\u0119 projektowaniem system\u00f3w o wysokiej dost\u0119pno\u015bci. Zajmuje si\u0119 skalowaniem du\u017cych aplikacji internetowych, wspieraniem startup\u00f3w w kwestiach serwerowych. Po godzinach zajmuje si\u0119 \u017ceglowaniem po morzach, lataniem, fotografi\u0105 i podr\u00f3\u017cami."}]}},"_links":{"self":[{"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/posts\/9465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/comments?post=9465"}],"version-history":[{"count":3,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/posts\/9465\/revisions"}],"predecessor-version":[{"id":16467,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/posts\/9465\/revisions\/16467"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/media\/16972"}],"wp:attachment":[{"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/media?parent=9465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v
2\/categories?post=9465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thecamels.org\/en\/wp-json\/wp\/v2\/tags?post=9465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}