FromDual Ops Center for MariaDB and MySQL 0.9 has been released

Shinguz - Tue, 2019-04-30 09:17

Caution: We have introduced an evil bug which prevents the installation of focmm. Sorry! Somehow it slipped through our QA. To fix this bug, update the file lib/Upgrade.inc at line 1965 as follows:

- $sql = sprintf("REPLACE INTO `focmm_configuration` (`key`, `value`) VALUES ('%s', '%s'), ('%s', '%s'), ('%s', '%s')"
+ $sql = sprintf("REPLACE INTO `focmm_configuration` (`key`, `value`) VALUES ('%s', '%s'), ('%s', '%s')"

In the meantime we are preparing a new release.


FromDual is pleased to announce the release of version 0.9 of its popular FromDual Ops Center for MariaDB and MySQL (focmm).

The FromDual Ops Center for MariaDB and MySQL (focmm) helps DBAs and system administrators to manage MariaDB and MySQL database farms. Ops Center makes the lives of DBAs and admins easier!

The main task of Ops Center is to support you in your daily MySQL and MariaDB operation tasks. More information about the FromDual Ops Center can be found here.

Download

The new FromDual Ops Center for MariaDB and MySQL (focmm) can be downloaded from here. How to install and use focmm is documented in the Ops Center User Guide.

In the unlikely case that you find a bug in the FromDual Ops Center for MariaDB and MySQL, please report it to the FromDual bug tracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Installation of Ops Center 0.9

A complete guide on how to install the FromDual Ops Center can be found in the Ops Center User Guide.

Upgrade from 0.3 to 0.9

The upgrade from 0.3 to 0.9 should happen automatically. Please make a backup of your Ops Center instance before you upgrade! Please also check Upgrading.

Changes in Ops Center 0.9

Everything has changed!

MariaDB Prepared Statements, Transactions and Multi-Row Inserts

Shinguz - Mon, 2019-04-15 18:09

Last week at the MariaDB/MySQL Developer Training we had one participant asking some tricky questions to which I did not know the answers by heart.

The MariaDB documentation was not too verbose either (here and here).

So time to do some experiments:

Prepared Statements and Multi-Row Inserts

SQL> PREPARE stmt1 FROM 'INSERT INTO `test`.`test` (`data`) VALUES (?), (?), (?)';
Statement prepared
SQL> SET @d1 = 'Bli';
SQL> SET @d2 = 'Bla';
SQL> SET @d3 = 'Blub';
SQL> EXECUTE stmt1 USING @d1, @d2, @d3;
Query OK, 3 rows affected (0.010 sec)
Records: 3  Duplicates: 0  Warnings: 0
SQL> DEALLOCATE PREPARE stmt1;
SQL> SELECT * FROM test;
+----+------+---------------------+
| id | data | ts                  |
+----+------+---------------------+
|  1 | Bli  | 2019-04-15 17:26:22 |
|  2 | Bla  | 2019-04-15 17:26:22 |
|  3 | Blub | 2019-04-15 17:26:22 |
+----+------+---------------------+
Prepared Statements and Transactions

SQL> SET SESSION autocommit=Off;
SQL> START TRANSACTION;
SQL> PREPARE stmt2 FROM 'INSERT INTO `test`.`test` (`data`) VALUES (?)';
Statement prepared
SQL> SET @d1 = 'BliTrx';
SQL> EXECUTE stmt2 USING @d1;
Query OK, 1 row affected (0.000 sec)
SQL> SET @d1 = 'BlaTrx';
SQL> EXECUTE stmt2 USING @d1;
Query OK, 1 row affected (0.000 sec)
SQL> COMMIT;   -- Theoretically we should do a START TRANSACTION; here again...
SQL> SET @d1 = 'BlubTrx';
SQL> EXECUTE stmt2 USING @d1;
Query OK, 1 row affected (0.000 sec)
SQL> ROLLBACK;
SQL> DEALLOCATE PREPARE stmt2;
SQL> SELECT * FROM test;
+----+--------+---------------------+
| id | data   | ts                  |
+----+--------+---------------------+
| 10 | BliTrx | 2019-04-15 17:33:30 |
| 11 | BlaTrx | 2019-04-15 17:33:39 |
+----+--------+---------------------+
Prepared Statements and Transactions and Multi-Row Inserts

SQL> SET SESSION autocommit=Off;
SQL> START TRANSACTION;
SQL> PREPARE stmt3 FROM 'INSERT INTO `test`.`test` (`data`) VALUES (?), (?), (?)';
Statement prepared
SQL> SET @d1 = 'Bli1Trx';
SQL> SET @d2 = 'Bla1Trx';
SQL> SET @d3 = 'Blub1Trx';
SQL> EXECUTE stmt3 USING @d1, @d2, @d3;
Query OK, 3 rows affected (0.000 sec)
SQL> COMMIT;   -- Theoretically we should do a START TRANSACTION; here again...
SQL> SET @d1 = 'Bli2Trx';
SQL> SET @d2 = 'Bla2Trx';
SQL> SET @d3 = 'Blub2Trx';
SQL> EXECUTE stmt3 USING @d1, @d2, @d3;
Query OK, 3 rows affected (0.000 sec)
SQL> ROLLBACK;   -- Theoretically we should do a START TRANSACTION; here again...
SQL> SET @d1 = 'Bli3Trx';
SQL> SET @d2 = 'Bla3Trx';
SQL> SET @d3 = 'Blub3Trx';
SQL> EXECUTE stmt3 USING @d1, @d2, @d3;
Query OK, 3 rows affected (0.001 sec)
SQL> COMMIT;
SQL> DEALLOCATE PREPARE stmt3;
SQL> SELECT * FROM test;
+----+----------+---------------------+
| id | data     | ts                  |
+----+----------+---------------------+
|  1 | Bli1Trx  | 2019-04-15 17:37:50 |
|  2 | Bla1Trx  | 2019-04-15 17:37:50 |
|  3 | Blub1Trx | 2019-04-15 17:37:50 |
|  7 | Bli3Trx  | 2019-04-15 17:38:38 |
|  8 | Bla3Trx  | 2019-04-15 17:38:38 |
|  9 | Blub3Trx | 2019-04-15 17:38:38 |
+----+----------+---------------------+
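
The same semantics can also be exercised from application code. A minimal sketch using Python's sqlite3 module as a stand-in for MariaDB (an illustrative assumption, not the setup from the training; with a real MariaDB driver the statements and the commit/rollback behaviour would look the same):

```python
import sqlite3

# In-memory stand-in for the `test`.`test` table used above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, data TEXT)")

# Multi-row insert through one parametrised statement, comparable to
# PREPARE stmt1 ... VALUES (?), (?), (?) / EXECUTE stmt1 USING @d1, @d2, @d3.
con.execute("INSERT INTO test (data) VALUES (?), (?), (?)", ("Bli", "Bla", "Blub"))
con.commit()

# Transactions: two committed single-row inserts, then one rolled back
# (like the BlubTrx example above).
con.execute("INSERT INTO test (data) VALUES (?)", ("BliTrx",))
con.execute("INSERT INTO test (data) VALUES (?)", ("BlaTrx",))
con.commit()
con.execute("INSERT INTO test (data) VALUES (?)", ("BlubTrx",))
con.rollback()

rows = [r[0] for r in con.execute("SELECT data FROM test ORDER BY id")]
print(rows)  # the rolled-back 'BlubTrx' row is gone
```

Note that sqlite3, like the mysql client with autocommit=Off, opens an implicit transaction on the first DML statement, so only commit() makes the rows permanent.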

Everything seems to work as expected. Now we know it for sure!

Uptime of a MariaDB Galera Cluster

Shinguz - Fri, 2019-03-15 17:58

A while ago somebody on Google Groups asked about the Uptime of a Galera Cluster. The answer is easy... Wait, no! Not so easy... The uptime of a Galera Node is easy (or is it?). But the Uptime of the whole Galera Cluster?

My answer then was: "Grep the error log." My answer now is still: "Grep the error log." But slightly different:

$ grep 'view(view_id' *
2019-03-07 16:10:26 [Note] WSREP: view(view_id(PRIM,0e0a2851,1) memb {
2019-03-07 16:14:37 [Note] WSREP: view(view_id(PRIM,0e0a2851,2) memb {
2019-03-07 16:16:23 [Note] WSREP: view(view_id(PRIM,0e0a2851,3) memb {
2019-03-07 16:55:56 [Note] WSREP: view(view_id(NON_PRIM,0e0a2851,3) memb {
2019-03-07 16:56:04 [Note] WSREP: view(view_id(PRIM,6d80bb1a,5) memb {
2019-03-07 17:00:28 [Note] WSREP: view(view_id(NON_PRIM,6d80bb1a,5) memb {
2019-03-07 17:01:11 [Note] WSREP: view(view_id(PRIM,24f67954,7) memb {
2019-03-07 17:18:58 [Note] WSREP: view(view_id(NON_PRIM,24f67954,7) memb {
2019-03-07 17:19:31 [Note] WSREP: view(view_id(PRIM,a380c8cb,9) memb {
2019-03-07 17:20:27 [Note] WSREP: view(view_id(PRIM,a380c8cb,11) memb {
2019-03-08  7:58:38 [Note] WSREP: view(view_id(PRIM,753a350f,15) memb {
2019-03-08 11:31:38 [Note] WSREP: view(view_id(NON_PRIM,753a350f,15) memb {
2019-03-08 11:31:43 [Note] WSREP: view(view_id(PRIM,489e3c67,17) memb {
2019-03-08 11:31:58 [Note] WSREP: view(view_id(PRIM,489e3c67,18) memb {
...
2019-03-22  7:05:53 [Note] WSREP: view(view_id(NON_PRIM,49dc20da,49) memb {
2019-03-22  7:05:53 [Note] WSREP: view(view_id(PRIM,49dc20da,50) memb {
2019-03-26 12:14:05 [Note] WSREP: view(view_id(NON_PRIM,49dc20da,50) memb {
2019-03-27  7:33:25 [Note] WSREP: view(view_id(NON_PRIM,22ae25aa,1) memb {

So this Cluster had an Uptime of about 18 days and 20 hours. How can I see this? Simple: In the brackets there is a number at the very right. This number seems to be the same as wsrep_cluster_conf_id, which is reset by a full Galera Cluster shutdown.
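
This heuristic (the right-most number in the view_id drops back to a low value after a full Galera Cluster shutdown) can be scripted. A minimal sketch, assuming log lines in the format shown above; the function name and the run-splitting rule are my own illustration, not a MariaDB tool:

```python
import re
from datetime import datetime

VIEW_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2}\s+\d{1,2}:\d{2}:\d{2}).*view_id\((PRIM|NON_PRIM),[0-9a-f]+,(\d+)\)"
)

def cluster_runs(lines):
    """Split error log 'view(view_id(...))' lines into cluster runs: a view
    number lower than its predecessor is treated as a full cluster restart."""
    runs, start, end, last_no = [], None, None, None
    for line in lines:
        m = VIEW_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        no = int(m.group(3))
        if last_no is not None and no < last_no:  # conf id was reset
            runs.append((start, end))
            start = None
        if start is None:
            start = ts
        last_no, end = no, ts
    if start is not None:
        runs.append((start, end))
    return runs

# First, last and restart lines from the grep output above:
log = [
    "2019-03-07 16:10:26 [Note] WSREP: view(view_id(PRIM,0e0a2851,1) memb {",
    "2019-03-22  7:05:53 [Note] WSREP: view(view_id(PRIM,49dc20da,50) memb {",
    "2019-03-26 12:14:05 [Note] WSREP: view(view_id(NON_PRIM,49dc20da,50) memb {",
    "2019-03-27  7:33:25 [Note] WSREP: view(view_id(NON_PRIM,22ae25aa,1) memb {",
]
start, end = cluster_runs(log)[0]
print(end - start)  # about 18 days and 20 hours
```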

So far so good. But, wait, what is the definition of Uptime? Hmmm, not so helpful, how should I interpret this for a 3-Node Galera Cluster?

I would say a good definition for the Uptime of a Galera Cluster would be: "At least one Galera Node must be available to the application for reading and writing." That means PRIM in the output above. And we still cannot say from the output above whether there was at least one Galera Node available (reading and writing) at any given time. For this we would have to compare the Error Logs of ALL 3 MariaDB nodes... So it does not help; we need a good Monitoring solution to answer this question...

PS: Who has found the little fake in this blog?

Linux system calls of MySQL process

Shinguz - Fri, 2019-03-15 17:06

We had the problem today that a MySQL Galera Cluster node with the multi-tenancy pattern caused a lot of system time (sy 75%, load average about 30; you really must read this article by Brendan Gregg, it is worth it!), so we wanted to find out which system calls are being used, to see what could cause this issue (and to verify whether it is a TOC or a TDC problem):

$ sudo strace -c -p $(pidof -s mysqld) -f -e trace=all
Process 5171 attached with 41 threads
Process 16697 attached
^C
Process 5171 detached
...
Process 5333 detached
Process 16697 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 66.85    1.349700         746      1810           io_getevents
 25.91    0.523055        1298       403       197 futex
  4.45    0.089773        1069        84        22 read
  2.58    0.052000       13000         4         3 restart_syscall
  0.19    0.003802        1901         2           select
  0.01    0.000235           3        69         1 setsockopt
  0.01    0.000210          18        12           getdents
  0.00    0.000078           2        32           write
  0.00    0.000056           1        49           fcntl
  0.00    0.000026           4         6           openat
  0.00    0.000012           2         6           close
  0.00    0.000000           0         2         2 open
  0.00    0.000000           0        22           stat
  0.00    0.000000           0         2           mmap
  0.00    0.000000           0         7           mprotect
  0.00    0.000000           0        16           pread
  0.00    0.000000           0         1           access
  0.00    0.000000           0         1           sched_yield
  0.00    0.000000           0         5           madvise
  0.00    0.000000           0         1           accept
  0.00    0.000000           0         1           getsockname
  0.00    0.000000           0         1           clone
  0.00    0.000000           0         1           set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00    2.018947                  2537       225 total

$ man io_getevents
...
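
If you want to post-process such a summary programmatically, the whitespace-separated rows are easy to parse. A minimal sketch over the column layout strace -c printed here (the function name and the sample rows are taken from the output above for illustration):

```python
def top_syscalls(summary, n=3):
    """Parse `strace -c` summary rows of the form
    '%time  seconds  usecs/call  calls  [errors]  syscall'
    and return the n syscalls that consumed the most time."""
    rows = []
    for line in summary.splitlines():
        parts = line.split()
        # Data rows start with a percentage; skip headers, rulers and 'total'.
        if len(parts) >= 5 and parts[0].replace(".", "", 1).isdigit() \
                and parts[-1] != "total":
            rows.append((float(parts[0]), parts[-1]))
    rows.sort(reverse=True)
    return [name for _, name in rows[:n]]

# A few data rows from the strace output above:
summary = """\
 66.85    1.349700         746      1810           io_getevents
 25.91    0.523055        1298       403       197 futex
  4.45    0.089773        1069        84        22 read
  2.58    0.052000       13000         4         3 restart_syscall
"""
print(top_syscalls(summary))  # io_getevents dominates, as in our case
```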

See also: Configuration of MySQL for Shared Hosting.

MariaDB and MySQL Database Consolidation

Shinguz - Thu, 2019-03-14 23:05

We see at various customers the request to consolidate their MariaDB and MySQL infrastructure. The advantage of such a measure seems clear at first sight: Saving costs! And this request typically comes from managers. But what we unfortunately rarely see is this request being questioned from the IT engineering perspective. Because it comes, as anything in life, with some "costs". So, saving costs with consolidation on one side comes with "costs" for operational complexity on the other side.

To give you some arguments for arguing with managers we collected some topics to consider before consolidating:

  • Bigger Database Instances are more demanding to handle than smaller ones:
    • Backup and Restore time takes longer. Copying files around takes longer, etc.
    • Possibly your logical backup with mysqldump no longer restores in a reasonable amount of time (the Mean Time to Repair/Recover (MTTR) is not met any more). You have to think about physical backup methods, including MariaDB or MySQL Enterprise Backup solutions.
    • Consolidated database instances typically contain many different schemas of various applications. In case of problems you typically want to restore, and possibly recover, only one single schema and not all of them. And this becomes much more complicated (depending on your backup strategy). MariaDB/MySQL tooling is not yet (fully) prepared for this situation (#17365). Possibly your old backup strategy is not adequate any more?
    • Binary Logs are written globally, not per schema. Have you considered how to do a PiTR for one or several schemas on your consolidated instance? Not an easy game.
    • When you restore a schema you do not want the application interfering with your restore. How can you properly exclude the one application from your database instance while you are restoring? Locking accounts (possible only with MariaDB 10.4 and MySQL 5.7 and newer)? Tricks like --skip-networking, adding firewall rules, --read-only or changing the database port (--port=3307) do not work (as easily) any more!
    • In short the costs are: Restore/Recovery Operations become more demanding!
  • Do NOT mix schemas of different criticalities in the same database instance! The worst cases we have seen were development schemas on the same high-availability Cluster as highly critical transactional systems. The developers did some nasty things on their development systems (which IMHO is OK for them on a development system). What nobody had considered in this case was that the trouble in the development schema brought down the whole production schema located on the same machine... Costs: Risk of failure of your important services caused by some non-important services, AND planning becomes more expensive because you need to know more about all the other instances.
  • This phenomenon is also called the Noisy Neighbor effect. Noisy Neighbors become a bigger issue on consolidated systems. You have to know in much more detail what you and everybody else is doing on the system! Do you...? Costs are: More know-how is required, better education and training of people, more clever people, better planning, better monitoring, etc.
  • When you consolidate different applications into one system, it becomes more critical than each of them was on its own. So you have to think about High-Availability solutions. Costs are: 1 to 4 new instances (for HA), more complexity, more know-how, more technologies... Do you plan to buy an Enterprise Support subscription?
  • Do NOT mix different maintenance windows (Asia vs. Europe vs. America) or daily online-business and nightly job processing. You get shorter maintenance windows. Costs are: Better planning is needed, costly night and weekend maintenance time, etc...

    Europe:  12:00
    China:   19:00 (7 hours ahead of us)
    US east: 07:00 (5 hours behind us)
    US west: 04:00 (8 hours behind us)
  • Resource Fencing becomes more tricky. Within the same instance, resource fencing is not really doable at the moment. MySQL 8.0 shows some first steps with Resource Groups, but this is pretty complicated and by far not complete and usable yet. A better way would be to install several instances on the same machine and fence them with some O/S means like Control Groups. This comes at the cost of know-how, complexity and more complicated set-ups.
  • Naming conflicts can happen: Application a) is called `wiki` and application b) is called `wiki` as well and for some reasons you cannot rename them (any more).
  • Monitoring becomes much more demanding and needs to be done in a more fine-grained way. You want to know exactly what is going on on your system because it can easily have side effects on many different schemas/applications. An example from today: we were running out of kernel file descriptors (file-max) and did not recognize it at the beginning.
  • Consolidated systems are a much higher Bulk Risk (this is also true for SAN or Virtualisation Clusters). When you have an outage, not only one application is down but the whole company is down. We have seen this already for SAN and Virtualisation Clusters and we expect to see it soon on highly consolidated Database Clusters as well. Costs: The damage to the company per incident is bigger.
  • Different applications have different configuration requirements which possibly conflict with requirements of other applications (Jira from Atlassian is a good example of this).
    Server variables cannot be adjusted according to somebody's individual wishes any more...
    • sql_mode: Some old legacy applications still require ONLY_FULL_GROUP_BY :-(
    • The requirements are conflicting: performance/fast vs. safe/durability: innodb_flush_log_at_trx_commit, sync_binlog, crash-safe binary logging, etc.
    • Transaction isolation: transaction_isolation = READ-COMMITTED (old: tx_isolation; Jira again as an example) vs. REPEATABLE-READ (the default). Other applications do not expect the transaction isolation behaviour to change and cannot cope with it. Have you ever asked your developers if their application can cope with a different transaction isolation level? :-) Do they know what you are talking about?
    • Character set (utf8_bin for Jira as an example again), which can be changed globally or on schema level, but it has to be done correctly for all participants.
  • Some applications require MariaDB, some applications require MySQL. Nowadays they are not the same database any more (8.0 vs. 10.3/10.4). So you cannot consolidate them (easily).
  • You possibly get a mixture of persistent connections (typically Java with connection pooling) and non-persistent connections (typically PHP and other languages). This causes different database behaviour, which has an impact on how you configure the database instance. This is more demanding and needs more knowledge of the database AND the application, or you solve it with more RAM.
  • You need to know much more about your application to understand what it does and how it could interfere with others...
  • When you consolidate more and more schemas into your consolidated database server you have to adjust your database settings as well from time to time (innodb_buffer_pool_size, table_open_cache, table_definition_cache, O/S file descriptors, etc.). And possibly add more RAM, CPU and stronger I/O. When is your network saturated? Have you thought about this already?
  • Upgrading MariaDB/MySQL and changing the database configuration become more demanding in communication and coordination. Potentially several development teams are affected. And they possibly have even different requirements/needs regarding O/S, forks and database versions. Or they are not willing or capable to upgrade at all.
  • If you have different schemas on the same instance it is easier to access data from different schemas at the same time in the same query. This can cause (unwanted) dependencies between those schemas. The database becomes the interface between applications. Here you have to be very restrictive with user privileges to avoid these dependencies. From an architecture point of view it would be preferable to use clearly defined interfaces outside of the database, for example APIs. But those APIs require much more development resources than a simple SQL query. The problem comes later: if you want to separate the schemas into different instances again, the effort to split/rewrite the JOIN queries and the underlying data sets increases significantly. Or the depending schemas must all be moved together, which causes longer downtimes for the applications and requires more coordination between teams.

This leads us to the result that consolidation let us save some costs on infrastructure but adds additional costs on complexity, skills etc. Theses costs will grow exponentially and thus at some point it is not worth the effort any more. This will end up in not only one big consolidated instance but possibly in a hand full of them.

Where this point is for you you have to find yourself...

Alternatives to consolidating everything into one instance
  • 1 Machine can contain 1 to many Database Instances can contain 1 to many Schemas. Instead of putting all schemas into one machine, think about installing several instances on one machine. This comes at the cost of more complexity. MyEnv will help you to manage this additional complexity.
  • 1 Machine can contain 1 to many Virtual Machines (VMs, kvm, XEN, VMWare, etc.) can contain 1 to many Instance(s) can contain 1 to many Schemas. This comes at the cost of even more complexity and pretty complex technology (Virtualization).

A big thanks to Antoniya K. for here valuable feedback!

Taxonomy upgrade extras: consolidationcentralizationMyEnv

MariaDB and MySQL Database Consolidation

Shinguz - Thu, 2019-03-14 23:05

At various customers we see requests for consolidating their MariaDB and MySQL infrastructure. The advantage of such a measure seems clear at first: saving costs! And these requests typically come from managers. What we unfortunately rarely see is this request being questioned from the IT engineering perspective. Because, as with everything in life, consolidation comes with its own "costs". So saving infrastructure costs with consolidation on one side comes with "costs" in operational complexity on the other side.

To give you some arguments for the discussion with your managers, we have collected some topics to consider before consolidating:

  • Bigger database instances are more demanding to handle than smaller ones:
    • Backup and restore take longer; copying files around takes longer, etc.
    • Possibly your logical backup made with mysqldump no longer restores in a reasonable amount of time (the Mean Time to Repair/Recover (MTTR) is not met any more). You have to think about physical backup methods, including MariaDB or MySQL Enterprise Backup solutions.
    • Consolidated database instances typically contain many different schemas of various different applications. In case of problems you typically want to restore, and possibly recover, only one single schema and not all schemas. And this becomes much more complicated (depending on your backup strategy). MariaDB/MySQL tooling is not yet (fully) prepared for this situation (#17365). Possibly your old backup strategy is not adequate any more?
    • When you restore a schema you do not want the application to interfere with your restore. How can you properly exclude the one application from your database instance while you are restoring? Locking accounts is possible only with MariaDB 10.4 and MySQL 5.7 and newer. Tricks like --skip-networking, adding firewall rules, --read-only or changing the database port (--port=3307) do not work (as easily) any more!
    • In short, the cost is: restore/recovery operations become more demanding!
  • Do NOT mix schemas of different criticality in the same database instance! The worst cases we have seen were development schemas on the same high-availability cluster as highly critical transactional systems. The developers did some nasty things on their development systems (which IMHO is OK for them on a development system). What nobody had considered in this case was that the trouble from the development schema brought down the whole production schema located on the same machine... Costs: risk of failure of your important services caused by some non-important services, AND planning becomes more expensive because you need to know more about all the other instances.
  • This phenomenon is also called the Noisy Neighbour effect. Noisy neighbours become a bigger issue on consolidated systems. You have to know in much more detail what you, and everybody else, are doing on the system! Do you...? Costs: more know-how is required, better education and training of people, more clever people, better planning, better monitoring, etc.
  • When you consolidate different applications into one system, that system becomes more critical than each of the previous ones on its own. So you have to think about high-availability solutions. Costs: 1 to 4 new instances (for HA), more complexity, more know-how, more technologies... Do you plan to buy an Enterprise Support subscription?
  • Do NOT mix different maintenance windows (Asia vs. Europe vs. America) or daily online business and nightly job processing. You get shorter maintenance windows. Costs: better planning is needed, costly night and weekend maintenance time, etc...

    Europe:  12:00
    China:   19:00 (7 hours ahead of us)
    US east: 07:00 (5 hours behind us)
    US west: 04:00 (8 hours behind us)
  • Resource fencing becomes more tricky. Within the same instance, resource fencing is not really doable at the moment. MySQL 8.0 shows some first steps with Resource Groups, but this is pretty complicated and by far not complete and usable yet. A better way is to install several instances on the same machine and fence them with some O/S means like Control Groups. This comes at the cost of more know-how, more complexity and more complicated set-ups.
  • Naming conflicts can happen: application a) is called `wiki` and application b) is called `wiki` as well, and for some reason you cannot rename them (any more).
  • Monitoring becomes much more demanding and needs to be done in a more fine-grained way. You want to know exactly what is going on in your system because it can easily have side effects on many different schemas/applications. An example from today: we were running out of kernel file descriptors (file-max) and did not recognize it in the beginning.
  • Consolidated systems are a much higher bulk risk (this is also true for SAN or virtualisation clusters). When you have an outage, not only one application is down but the whole company is down. We have seen this already for SAN and virtualisation clusters, and we expect to see it soon on highly consolidated database clusters as well. Costs: the damage to the company from a single incident is bigger.
  • Different applications have different configuration requirements which possibly conflict with the requirements of other applications (Jira from Atlassian is a good example for this).
    Server variables cannot be adjusted any more according to somebody's individual wishes...
    • sql_mode: some old legacy applications still require ONLY_FULL_GROUP_BY :-(
    • The requirements are conflicting: performance/speed vs. safety/durability: innodb_flush_log_at_trx_commit, sync_binlog, crash-safe binary logging, etc.
    • Transaction isolation: transaction_isolation = READ-COMMITTED (formerly tx_isolation; Jira again as an example) vs. REPEATABLE-READ (the default). Other applications do not expect the transaction isolation behaviour to change and cannot cope with it. Have you ever asked your developers whether their application can cope with a different transaction isolation level? :-) Do they even know what you are talking about?
    • Character set (utf8_bin for Jira, as an example again), which can be changed globally or on schema level, but it has to be done correctly for all participants.
  • Some applications require MariaDB, some applications require MySQL. Nowadays they are no longer the same database (MySQL 8.0 vs. MariaDB 10.3/10.4), so you cannot consolidate them (easily).
  • You possibly get a mixture of persistent connections (typically Java with a connection pool) and non-persistent connections (typically PHP and other languages). This causes different database behaviour, which has an impact on how you configure the database instance. This is more demanding and needs more knowledge of the database AND the application, or you solve it with more RAM.
  • You need to know much more about your applications to understand what they do and how they could interfere with others...
  • When you consolidate more and more schemas into your consolidated database server, you have to adjust your database settings from time to time as well (innodb_buffer_pool_size, table_open_cache, table_definition_cache, O/S file descriptors, etc.), and possibly add more RAM, CPU and stronger I/O. And when will your network be saturated? Have you thought about this already?
  • Upgrading MariaDB/MySQL and changing the database configuration become more demanding in communication and coordination, because several development teams are potentially affected. They possibly even have different requirements for the O/S, the fork and the database version, or are not willing or able to upgrade at all.
  • If you have different schemas in the same instance, it is easy to access data from several schemas in the same query. This can cause (unwanted) dependencies between those schemas: the database becomes the interface between applications. Here you have to be very restrictive with user privileges to avoid these dependencies. From an architecture point of view, clearly defined interfaces outside of the database (for example APIs) would be preferable, but such APIs require much more development resources than a simple SQL query. The problem comes later: if you want to separate the schemas into different instances again, the effort to split and rewrite the cross-schema JOIN queries and the underlying data sets increases significantly. Or the dependent schemas must all be moved together, which causes longer downtimes for the applications and requires more coordination between teams.
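The conflicting server-variable requirements mentioned above can even be detected mechanically before a consolidation project starts. A minimal sketch, assuming each application's required settings have been written down as simple "application variable value" lines (the requirement list and file path below are purely hypothetical examples, not real application documentation):

```shell
# Sketch: detect conflicting server-variable requirements before consolidating
# several applications into one instance. The requirement lists below are
# hypothetical examples, not real application documentation.
cat > /tmp/app_requirements.txt << 'EOF'
jira transaction_isolation READ-COMMITTED
shop transaction_isolation REPEATABLE-READ
legacy sql_mode ""
erp sql_mode TRADITIONAL
EOF

# A variable requested with more than one distinct value is a conflict
# and thus a candidate for per-session overrides or a separate instance.
awk '{ if (($2 in seen) && seen[$2] != $3)
         print "CONFLICT on " $2 ": " seen[$2] " vs " $3;
       seen[$2] = $3 }' /tmp/app_requirements.txt
```

The script prints one CONFLICT line per variable that is requested with two different values, which gives you a concrete list to discuss with the development teams before the schemas are merged.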

This leads us to the conclusion that consolidation saves some infrastructure costs but adds additional costs for complexity, skills, etc. These costs grow exponentially, so at some point the effort is no longer worth it. You will probably end up not with one big consolidated instance, but with a handful of them.

Where this point lies for you, you have to find out yourself...

Alternatives to consolidating everything into one instance
  • 1 machine can contain 1 to many database instances, which can contain 1 to many schemas. Instead of putting all schemas into one instance, think about installing several instances on one machine. This comes at the cost of more complexity; MyEnv will help you to manage this additional complexity.
  • 1 machine can contain 1 to many virtual machines (VMs: KVM, Xen, VMware, etc.), which can contain 1 to many instances, which can contain 1 to many schemas. This comes at the cost of even more complexity and a fairly complex technology (virtualization).
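The first alternative, several instances on one machine, can be sketched with a classic mysqld_multi style configuration. The ports, sockets and data directories below are illustrative assumptions only, not recommendations:

```ini
# Illustrative my.cnf fragment for two mysqld instances on one machine,
# managed with mysqld_multi (ports, sockets and datadirs are examples only).
[mysqld1]
port    = 3306
socket  = /var/run/mysqld/mysqld1.sock
datadir = /var/lib/mysql1

[mysqld2]
port    = 3307
socket  = /var/run/mysqld/mysqld2.sock
datadir = /var/lib/mysql2
```

The instances are then started and stopped individually (e.g. mysqld_multi start 1), so a restart or upgrade of one application's instance no longer affects the others.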

A big thanks to Antoniya K. for her valuable feedback!

Taxonomy upgrade extras: consolidation, centralization, MyEnv

FromDual Performance Monitor for MariaDB and MySQL 1.0.2 has been released

Shinguz - Wed, 2019-03-13 20:58

FromDual has the pleasure to announce the release of the new version 1.0.2 of its popular Database Performance Monitor for MariaDB, MySQL, Galera Cluster and Percona Server fpmmm.

The new FromDual Performance Monitor for MariaDB and MySQL (fpmmm) can be downloaded from here. How to install and use fpmmm is documented in the fpmmm Installation Guide.

In the inconceivable case that you find a bug in the FromDual Performance Monitor for MariaDB and MySQL please report it to the FromDual Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Monitoring as a Service (MaaS)

You do not want to set up your database monitoring yourself? No problem: choose our MariaDB and MySQL Monitoring as a Service (MaaS) program to save costs!

Upgrade from 1.0.x to 1.0.2

shell> cd /opt
shell> tar xf /download/fpmmm-1.0.2.tar.gz
shell> rm -f fpmmm
shell> ln -s fpmmm-1.0.2 fpmmm
Changes in FromDual Performance Monitor for MariaDB and MySQL 1.0.2

This release contains various bug fixes.

You can verify your current FromDual Performance Monitor for MariaDB and MySQL version with the following command:

shell> fpmmm --version
fpmmm agent
  • Server entropy probe added.
  • Processlist empty state is covered.
  • Processlist statements made more robust.
  • Error caught properly after query.
  • Branch for Ubuntu is different, fixed.
  • PHP Variable variables_order is included into program.
  • Fixed the documentation URL in file INSTALL.
  • Connection was not set to utf8. This is fixed now.
  • fprint error fixed.
  • Library myEnv.inc updated from MyEnv project.

fpmmm Templates
  • Backup template added.
  • SQL thread and IO thread error more verbose and running again triggers implemented. Typo in slave template fixed.
  • Forks graph fixed, y axis starts from 0.

fpmmm agent installer
  • Error messages made more flexible.

For subscriptions of commercial use of fpmmm please get in contact with us.

Taxonomy upgrade extras: performance, monitor, monitoring, fpmmm, maas, release

MariaDB and MySQL consulting by plane

Shinguz - Fri, 2019-03-01 15:27

Since January 2019, FromDual also tries to contribute actively a little bit against global warming.

The best thing for the climate would be NOT to travel to the customer at all! For these cases we have our FromDual remote-DBA services for MariaDB and MySQL.

But sometimes customers want or need us on-site for our FromDual in-house trainings or our FromDual on-site consulting engagements. In these cases we try to travel by train. After walking and cycling, travelling by train is the most climate-friendly way to travel.

But some customers are located more than 7 to 8 hours away by train. For these customers we have to take the plane, which is not good for the climate at all. But at least we compensate for our CO2 emissions via MyClimate.org.

Taxonomy upgrade extras: consulting, training, remote-dba, climate, mysql, mariadb, services

FromDual Backup and Recovery Manager for MariaDB and MySQL 2.1.0 has been released

Shinguz - Fri, 2019-02-22 17:14

FromDual has the pleasure to announce the release of the new version 2.1.0 of its popular Backup and Recovery Manager for MariaDB and MySQL (brman).

The new FromDual Backup and Recovery Manager can be downloaded from here. How to install and use the Backup and Recovery Manager is described in the FromDual Backup and Recovery Manager (brman) installation guide.

In the inconceivable case that you find a bug in the FromDual Backup and Recovery Manager please report it to the FromDual Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.2.x to 2.1.0

brman 2.1.0 requires a new PHP package for ssh connections.

shell> sudo apt-get install php-ssh2
shell> cd ${HOME}/product
shell> tar xf /download/brman-2.1.0.tar.gz
shell> rm -f brman
shell> ln -s brman-2.1.0 brman
Changes in FromDual Backup and Recovery Manager 2.1.0

This release starts a new major release series. It contains a lot of new features. We have tried to maintain backward compatibility with the 1.2 and 2.0 release series, but you should test the new release thoroughly!

You can verify your current FromDual Backup Manager version with the following command:

shell> fromdual_bman --version
shell> bman --version
FromDual Backup Manager
  • Usage (--help) updated.
  • Some WARN severities downgraded to INFO to keep mail output clean.
  • Error messages made more flexible and fixed PHP library advice.
  • Split some redundant code from bman library into brman library.
  • Security fix: Password from config file is hidden now.
  • Bug on simulation of physical backup fixed (xtrabackup_binlog_info not found).
  • Options --backup-name and --backup-overwrite introduced for restore automation.
  • Minor typo bugs fixed.
  • Option --options removed.
  • Sort order for schema backup changed to ORDER BY ASC.
  • 2 PHP errors fixed for simulation.
  • Maskerade API added.
  • Physical backup sftp archiving with special characters (+foodmarat) in archive directory name fixed.

FromDual Recovery Manager
  • Rman has progress report.
  • Full logical restore is implemented.
  • Schema logical restore is implemented.
  • Physical restore is implemented.
  • Physical restore of compressed backups is implemented.
  • Option --cleanup-first was implemented for physical backup as well.
  • Option: --stop-instance implemented.

FromDual Backup Manager Catalog
  • No changes.

Subscriptions for commercial use of the FromDual Backup and Recovery Manager you can get from us.

Taxonomy upgrade extras: Backup, Restore, Recovery, pitr, brman, release, bman, rman

MySQL Enterprise Backup Support Matrix

Shinguz - Tue, 2019-02-19 19:13

MySQL Enterprise Backup (MEB) is a bit limited regarding the support of older MySQL versions. So you should consider the following release matrix:

MEB / MySQL   Supported   5.5   5.6   5.7   8.0
3.11.x        NO           x     x
3.12.x        YES          x     x
4.0.x         NO                       x
4.1.x         YES                      x
8.0.x         YES                            8.0.x*

* MySQL Enterprise Backup 8.0.15 only supports MySQL 8.0.15. For earlier versions of MySQL 8.0, use the MySQL Enterprise Backup version with the same version number as the server.

MySQL Enterprise Backup is available for download from the My Oracle Support (MOS) website. This release will be available on Oracle eDelivery (OSDC) after the next upload cycle. MySQL Enterprise Backup is a commercial extension to the MySQL family of products.

As an Open Source alternative Percona XtraBackup for MySQL databases is available.

Compatibility with MySQL Versions: 3.11, 3.12, 4.0, 4.1, 8.0.

MySQL Enterprise Backup User's Guide: 3.11, 3.12, 4.0, 4.1, 8.0.

Taxonomy upgrade extras: MySQL Enterprise Backup, Backup, meb, enterprise, support, matrix

MariaDB/MySQL Environment MyEnv 2.0.2 has been released

Shinguz - Thu, 2019-01-17 19:35

FromDual has the pleasure to announce the release of the new version 2.0.2 of its popular MariaDB, Galera Cluster and MySQL multi-instance environment MyEnv.

The new MyEnv can be downloaded here. How to install MyEnv is described in the MyEnv Installation Guide.

In the inconceivable case that you find a bug in the MyEnv please report it to the FromDual bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.x to 2.0

Please look at the MyEnv 2.0.0 Release Notes.

Upgrade from 2.0.x to 2.0.2

shell> cd ${HOME}/product
shell> tar xf /download/myenv-2.0.2.tar.gz
shell> rm -f myenv
shell> ln -s myenv-2.0.2 myenv
Plug-ins

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

shell> cd ${HOME}/product/myenv
shell> ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Upgrade of the instance directory structure

From MyEnv 1.0 to 2.0 the directory structure of instances has fundamentally changed. Nevertheless MyEnv 2.0 works fine with MyEnv 1.0 directory structures.

Changes in MyEnv 2.0.2

MyEnv
  • Error message fixed.
  • bind_address 0.0.0.0 is optimized to *.
  • State up and down are coloured now.
  • Complaint on missing symbolic link to my.cnf added.
  • New start-timeout configuration variable added. Important for Galera SST.
  • Default MariaDB my.cnf hash added to avoid complaints.
  • mysqld is consistently searched in sbin, bin and libexec now for RHEL/CentOS 7 compatibility.
  • Avoid EGPCS error messages during MyEnv start/stop.
  • Unused aReleaseVersion removed; a side effect is that up no longer has performance issues in huge MyEnv set-ups with older MySQL releases.

MyEnv Installer
  • Function answerQuestion on previous error message works now.
  • Try and catch for existing configuration file improved.
  • Default answer is "q" on error and instance name and blacklist name check is fixed.
  • myenv.conf backup file has a correct timestamp now.
  • Create symlink to datadir for my.cnf.
  • Purge of database is done from instancedir and not datadir any more.

MyEnv Utilities
  • galera_monitor.sh output made nicer.
  • Script az_test.php added, initial test found already a bug in MariaDB 10.3.
  • Script slave_monitor.sh added.
  • Option check made more careful for drop_partition.php and merge_partition.php.
  • Timestamp problem fixed for year change in split_partition.php.

For subscriptions of commercial use of MyEnv please get in contact with us.

Taxonomy upgrade extras: MyEnv, multi-instance, virtualization, consolidation, SaaS, Operations, release, upgrade, mysqld_multi

Using tmux for MariaDB database support and surveillance

Shinguz - Tue, 2018-12-25 16:14

See also our older article: Using screen for support and/or surveillance.

First simple steps

The command tmux starts a tmux server and opens a new session with a (pseudo) terminal:

shell> tmux

To leave a tmux session again just type Ctrl+d inside your tmux session or:

tmux> exit

If you want to give a tmux session a specific name you can start tmux as follows to create a named session:

shell> tmux new -s mariadb

or if you are already inside tmux:

tmux> Ctrl+b $

followed by a session name, of which only the first 9 characters are shown in the overview.

List available tmux sessions

To list the available tmux sessions we have the tmux list-sessions command:

shell> tmux list-sessions
1: 1 windows (created Sun Dec 23 13:35:37 2018) [117x33]
mariadb-104: 1 windows (created Sun Dec 23 13:13:46 2018) [130x41] (attached)

If there is no session available we will get the following error:

shell> tmux ls
failed to connect to server
tmux help

To get more information about tmux you can run:

shell> man tmux
shell> tmux --help
shell> tmux ls --help
tmux> Ctrl+b ?
Detach and re-attach to a tmux session

With the command:

tmux> Ctrl+b d

you will detach from a tmux session. With tmux ls you can list the available sessions and to reattach to a tmux session you can type:

shell> tmux attach
shell> tmux attach -t 1
shell> tmux a -t mariadb-104
Split window (session) into different panes

A tmux session uses a window and this window can be split into different panes (pseudo terminals):

  • Ctrl+b % Splits a window into 2 panes vertically.
  • Ctrl+b " Splits a window into 2 panes horizontally.

To switch between the panes you can use:

tmux> Ctrl+b arrow {up|down|left|right}

If you want to make a pane full-screen you can use Ctrl+b z to toggle.

Ctrl+b Ctrl+Cursor {up|down|left|right} resizes the current pane.

Scroll within a pane

To switch to the scroll mode you have to use the following key combination:

tmux> Ctrl+b [

Then you can navigate with Cursor {up|down|left|right} or {PgUp|PgDown}. To leave the navigation scroll mode you just have to type q.

Another possibility to switch to the scroll mode is to press Ctrl+b PgUp.
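If you use tmux regularly for support and surveillance sessions, a few settings in ~/.tmux.conf make scrolling more comfortable. A minimal sketch; the values are just suggestions, and the mouse option requires tmux 2.1 or newer:

```
# Illustrative ~/.tmux.conf snippet: keep more scrollback for long-running
# support sessions and allow scrolling panes with the mouse wheel.
set -g history-limit 50000
set -g mouse on
```

Reload the configuration in a running session with Ctrl+b : followed by source-file ~/.tmux.conf.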

Taxonomy upgrade extras: tmux, screen, terminal, support, ssh
