Channel: MySQL Support Blogs

MySQL 5.6 general query log behavior change


The MySQL general query log (GQL) can be a useful debugging tool, showing commands received from clients.  In versions through MySQL 5.5, you could count on the GQL to log every command the server received – the logging happened before parsing.  That can be helpful – for example, the GQL might contain records of somebody unsuccessfully attempting to exploit SQL injection vulnerabilities with statements that result in syntax errors.
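To follow along, the general query log can be enabled at runtime (the log file path below is just an example; adjust to taste):

```sql
-- Send the general query log to a file and enable it at runtime
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/tmp/general.log';
SET GLOBAL general_log = 'ON';
```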

Here’s a sample, which I’ll run in both 5.5 and 5.6 and show the resulting GQL:

mysql> SELECT 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)

mysql> SELECT NOTHING();
ERROR 1305 (42000): FUNCTION NOTHING does not exist
mysql> SELECT 2;
+---+
| 2 |
+---+
| 2 |
+---+
1 row in set (0.00 sec)

In 5.5, this produces the following in the general query log:

130513 18:26:34        1 Query    SELECT 1
130513 18:26:40        1 Query    SELECT NOTHING()
130513 18:26:44        1 Query    SELECT 2

In 5.6, the same produces the following:

130425 21:53:25        1 Query    SELECT 1
130425 21:53:35        1 Query    SELECT 2

The behavior hasn’t changed between 5.5 and 5.6 with respect to successfully parsed but unauthorized statements.  Using the limited-privilege anonymous user account, I issued the following against both 5.5 and 5.6 servers:

mysql> SHOW GRANTS;
+--------------------------------------+
| Grants for @localhost                |
+--------------------------------------+
| GRANT USAGE ON *.* TO ''@'localhost' |
+--------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT * FROM mysql.user;
ERROR 1142 (42000): SELECT command denied to user ''@'localhost' for table 'user'

The general query log for both 5.5 and 5.6 recorded the attempt to SELECT from the mysql.user system table:

130513 18:31:10        3 Query    SHOW GRANTS
130513 18:31:11        3 Query    SELECT * FROM mysql.user

The documentation doesn’t explicitly note this behavior change (I filed Bug#68937 to have it included in the manual) – it does, though, discuss the password-masking feature which triggered this behavioral change (and this page also documents which statements are rewritten).  In order to mask passwords in log files, the log entries have to be written after the statements are parsed.  When I issue the following statement in 5.6, the password is masked in the general query log:

mysql> SET PASSWORD = PASSWORD('test');
Query OK, 0 rows affected (0.00 sec)

Here’s the corresponding general query log entry:

130513 18:45:59        2 Query    SET PASSWORD FOR `root`=<secret>

That’s much-appreciated behavior – there’s typically no reason to expose passwords in logs.  For those who do need this temporarily for diagnostic purposes, there’s a --log-raw option which logs pre-parser, just like in 5.5.  This means that plain-text passwords, as well as statements with syntax errors, get logged.  Here’s the result in 5.6 with --log-raw enabled:

130513 18:43:10        1 Query    SELECT NOTHING()

Unfortunately, there’s no status variable to tell a DBA whether they are protected by the new 5.6 behavior, or whether the running server has been started with --log-raw to override it and is still logging plain-text passwords.  I filed Bug#68936 to address that.  I would also love to give users (with appropriate permissions) the ability to change this configuration option without restarting MySQL Server, but it’s probably not something that will need – or want – to be changed in a production environment where downtime is critical.

I’m happy to see plain-text passwords removed from logs in 5.6, and hope that this post helps clarify associated behavioral changes related to the general query log in 5.6.

 


How to tell whether MySQL Server uses yaSSL or OpenSSL


Starting with MySQL 5.6, MySQL commercial-license builds use OpenSSL.  yaSSL – previously used as the default SSL library for all builds – remains the implementation for Community (GPL) builds, and users comfortable building from source can choose to build with OpenSSL instead.  Daniel van Eeden recently requested a global variable to indicate which SSL library was used to compile the server (Bug#69226), and it’s a good request.  It’s something I’ve previously requested as well, having been fooled by the use of have_openssl as a synonym for have_ssl (I’m sure it made sense at the time, right?).
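For reference, here is why have_openssl doesn’t help: it is just a synonym for have_ssl, so both report the same value regardless of which library the server was built with:

```sql
-- Both variables show YES on any SSL-enabled build,
-- regardless of whether yaSSL or OpenSSL was used
SHOW GLOBAL VARIABLES LIKE 'have_%ssl';
```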

I found a workaround (at least as of 5.6.6 and later) which gives an indication of whether yaSSL or OpenSSL was used.  The Rsa_public_key status variable is explicitly defined only when the yaSSL libraries are not used:

#ifndef HAVE_YASSL
  {"Rsa_public_key",           (char*) &show_rsa_public_key, SHOW_FUNC},
#endif

As a result, MySQL Enterprise 5.6.10 (with OpenSSL) has Rsa_public_key status variable:

mysql> select version();
+---------------------------------------+
| version()                             |
+---------------------------------------+
| 5.6.10-enterprise-commercial-advanced |
+---------------------------------------+
1 row in set (0.02 sec)

mysql> show status like '%rsa%';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Rsa_public_key |       |
+----------------+-------+
1 row in set (0.00 sec)

while MySQL Community 5.6.10 does not:

mysql> select version();
+-----------+
| version() |
+-----------+
| 5.6.10    |
+-----------+
1 row in set (0.00 sec)

mysql> show status like '%rsa%';
Empty set (0.00 sec)

Hopefully that will help others with a need similar to Daniel’s and mine – and hopefully we’ll eventually get a global status variable that makes this indirect method obsolete.

Easier Overview of Current Performance Schema Setting


While I prepared for my Hands-On Lab about the Performance Schema at MySQL Connect last year, one of the things that occurred to me was how difficult it is to quickly get an overview of which consumers, instruments, actors, etc. are actually enabled. For the consumers, things are complicated further by the fact that the effective setting also depends on the parents in the hierarchy. So my thought was: “How difficult can it be to write a stored procedure that outputs a tree of the hierarchies?” Well, simple enough in principle, but trying to be general turned it into a lengthy project, and as it was a hobby project, it often ended up being put aside for more urgent tasks.

However, here around eight months later, it is starting to take shape. There is definitely still work to be done – e.g. creating the full tree and outputting it in text mode (more on modes later) takes around one minute on my test system – but granted, I am using a standard laptop with MySQL running in a VM, so it is nothing sophisticated.

The current routines can be found in ps_tools.sql.gz – it may later be merged into Mark Leith’s ps_helper to try to keep the Performance Schema tools collected in one place.

Note: So far the routines have only been tested on Linux with MySQL 5.6.11. In particular, line endings may cause problems on Windows and Mac.

Description of the ps_tools Routines

The current status is two views, four stored procedures, and four functions – not including the several helper functions and procedures that do all the hard work:

  • Views:
    • setup_consumers – Displays whether each consumer is enabled and whether the consumer actually will be collected based on the hierarchy rules described in Pre-Filtering by Consumer in the Reference Manual.
    • accounts_enabled – Displays whether each account defined in the mysql.user table has instrumentation enabled based on the rows in performance_schema.setup_actors.
  • Procedures:
    • setup_tree_consumers(format, color) – Create a tree based on setup_consumers displaying whether each consumer is effectively enabled. The arguments are:
      • format is the output format and can be one of the following (see also below):
        • Text: Left-Right
        • Text: Top-Bottom
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the consumer names when using a Text format (ignored for Dot outputs).
    • setup_tree_instruments(format, color, type, only_enabled, regex_filter) – Create a tree based on setup_instruments displaying whether each instrument is enabled. The tree is created by splitting the instrument names at each /. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the instrument names when using a Text format (ignored for Dot outputs).
      • type – whether to base the tree on the ENABLED or TIMED column of setup_instruments.
      • only_enabled – if TRUE only the enabled instruments are included.
      • regex_filter – if set to a non-empty string only instruments that match the regex will be included.
    • setup_tree_actors_by_host(format, color) – Create a tree of each account defined in mysql.user and whether they are enabled; grouped by host. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
    • setup_tree_actors_by_user(format, color) – Create a tree of each account defined in mysql.user and whether they are enabled; grouped by user name. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
  • Functions:
    • is_consumer_enabled(consumer_name) – Returns whether a given consumer is effectively enabled.
    • is_account_enabled(host, user) – Returns whether a given account (host, user) is enabled according to setup_actors.
    • substr_count(haystack, needle, offset, length) – The number of times a given substring occurs in a string. A port of the PHP function of the same name.
    • substr_by_delim(set, delimiter, pos) – Returns the Nth element from a delimited string.

The two functions substr_count() and substr_by_delim() were also described in an earlier blog.
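As a quick illustration of the two helper functions (a sketch based on the descriptions above – assuming the routines are installed in a ps_tools schema, that pos is 1-based, and that NULL can be passed for offset and length to cover the whole string):

```sql
-- Number of times '/' occurs in an instrument name
SELECT ps_tools.substr_count('wait/io/file/sql/FRM', '/', NULL, NULL);

-- The second '/'-delimited element ('io', if positions are 1-based)
SELECT ps_tools.substr_by_delim('wait/io/file/sql/FRM', '/', 2);
```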

The formats for the four stored procedures consist of two parts: whether it is Text or Dot, and the direction. Text is a tree that can be viewed directly either in the mysql command line client (coloured output not supported) or in the shell (colours supported for bash). Dot will output a DOT graph file in the same way as dump_thread_stack() in ps_helper. The direction is as defined in the DOT language, so e.g. Left-Right will have the first level furthest to the left, then add each new level to the right of the parent level.

Examples

As the source code – including comments – is more than 1600 lines, I will not discuss it here, but rather go through some examples.

setup_tree_consumers

Using the coloured output:

[Image: setup_tree_consumers, top-bottom output]

or the same using a non-coloured output:

[Image: setup_tree_consumers, left-right output]

setup_tree_instruments

[Image: setup_tree_instruments, left-right output]

Here a small part of the tree is selected using a regex.

setup_tree_actors_%

With only root@localhost and root@127.0.0.1 enabled, the outputs of setup_tree_actors_by_host and setup_tree_actors_by_user give, respectively:

[Image: setup_tree_actors_by_host, left-right output]

[Image: setup_tree_actors_by_user, left-right output]

DOT Graph of setup_instruments

The full tree of setup_instruments can be created using the following sequence of steps (I am using graphviz to get support for dot files):

MySQL 5.6.11$ echo -e "$(mysql -NBe "CALL ps_tools.setup_tree_instruments('Dot: Left-Right', FALSE, 'Enabled', FALSE, '')")" > /tmp/setup_instruments.dot
MySQL 5.6.11$ dot -Tpng /tmp/setup_instruments.dot -o /tmp/setup_instruments.png

[Image: snippet of the setup_instruments DOT graph]

The full output is rather large (6.7M). If you want to see it, you can get to it at http://mysql.wisborg.dk/wp-content/uploads/2013/05/setup_tree_instruments_dot_lr.png.

Views

mysql> SELECT * FROM ps_tools.setup_consumers;
+--------------------------------+---------+----------+
| NAME                           | ENABLED | COLLECTS |
+--------------------------------+---------+----------+
| events_stages_current          | NO      | NO       |
| events_stages_history          | NO      | NO       |
| events_stages_history_long     | NO      | NO       |
| events_statements_current      | YES     | YES      |
| events_statements_history      | NO      | NO       |
| events_statements_history_long | NO      | NO       |
| events_waits_current           | NO      | NO       |
| events_waits_history           | NO      | NO       |
| events_waits_history_long      | NO      | NO       |
| global_instrumentation         | YES     | YES      |
| thread_instrumentation         | YES     | YES      |
| statements_digest              | YES     | YES      |
+--------------------------------+---------+----------+
12 rows in set (0.00 sec)

mysql> SELECT * FROM ps_tools.accounts_enabled;
+-------------+-----------+---------+
| User        | Host      | Enabled |
+-------------+-----------+---------+
| replication | 127.0.0.1 | NO      |
| root        | 127.0.0.1 | YES     |
| root        | ::1       | NO      |
| meb         | localhost | NO      |
| memagent    | localhost | NO      |
| root        | localhost | YES     |
+-------------+-----------+---------+
6 rows in set (0.00 sec)

Conclusion

There is definitely more work to do on making the Performance Schema easier to access. ps_helper and ps_tools are a great start to what I am sure will be an extensive library of useful diagnostic queries and tools.

The Dangers in Changing Default Character Sets on Tables


The ALTER TABLE statement syntax is explained in the manual at:

http://dev.mysql.com/doc/refman/5.6/en/alter-table.html

To put it simply, there are two ways you can alter the table to use a new character set.

1. ALTER TABLE tablename DEFAULT CHARACTER SET utf8;

This will alter the table to use the new character set as the default, but as a safety mechanism, it only changes the table-level default. That is, existing character columns keep their old, per-column character set. For example:

mysql> create table mybig5 (id int not null auto_increment primary key,      
    -> subject varchar(100) ) engine=innodb default charset big5;
Query OK, 0 rows affected (0.81 sec)

mysql> show create table mybig5;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                     |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 `subject` varchar(100) DEFAULT NULL,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=big5 |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> alter table mybig5 default charset utf8;
Query OK, 0 rows affected (0.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

Inserting a multi-byte utf8 string, such as the following, now results in an error:

mysql> INSERT INTO mybig5 VALUES (NULL, UNHEX('E7BB8FE79086'));

01:08:19  [INSERT - 0 row(s), 0.000 secs]  [Error Code: 1366, SQL State: HY000]  Incorrect string value: '\xE7\xBB\x8F\xE7\x90\x86' for column 'SUBJECT' at row 1 ... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.000/0.000 sec  [0 successful, 0 warnings, 1 errors]

mysql> show create table mybig5;
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                                        |
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `subject` varchar(100) CHARACTER SET big5 DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Notice that the 'subject' column retains the original character set definition; when data is inserted, this can result in the error above if the character sets do not match.

2. ALTER TABLE tablename CONVERT TO CHARACTER SET utf8;

This will change all the columns to the new character set and change the table as well. So you will end up with the required definition of:

mysql> show create table mybig5;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                     |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `subject` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

So if you see the incorrect string error on a table, check that the columns are not using a different character set from the table default. Look at using the CONVERT TO clause to avoid the issue, but also be aware that certain tables may actually require different character sets for different columns.
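When auditing a server for this situation, a query along the following lines can help (a sketch using information_schema; the table’s default character set is derived from its collation):

```sql
-- List character columns whose character set differs from
-- their table's default character set
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME,
       c.CHARACTER_SET_NAME    AS column_charset,
       ccsa.CHARACTER_SET_NAME AS table_default_charset
  FROM information_schema.COLUMNS c
  JOIN information_schema.TABLES t
    ON t.TABLE_SCHEMA = c.TABLE_SCHEMA
   AND t.TABLE_NAME   = c.TABLE_NAME
  JOIN information_schema.COLLATION_CHARACTER_SET_APPLICABILITY ccsa
    ON ccsa.COLLATION_NAME = t.TABLE_COLLATION
 WHERE c.CHARACTER_SET_NAME IS NOT NULL
   AND c.CHARACTER_SET_NAME <> ccsa.CHARACTER_SET_NAME
   AND c.TABLE_SCHEMA NOT IN
       ('mysql', 'information_schema', 'performance_schema');
```

Run against the example above, this would flag mybig5.subject (big5) versus the table default (utf8).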

Finding the source of problematic queries


Many MySQL users are familiar with using slow query logs and tools such as mysqldumpslow to identify poor-performing SQL commands, and MySQL 5.6 introduces powerful new tools in PERFORMANCE_SCHEMA.  Both allow you to identify the date/time and the user account from which the command was issued, which is helpful – but if you’re using MySQL Enterprise Monitor (MEM), you can immediately identify the actual line of code responsible for the SQL command in question.  This happens to be one of my favorite and most powerful features of MEM, but it’s frequently overlooked by new and experienced MEM users alike, so I’m writing this post to highlight it.

MySQL Enterprise Monitor, of course, is a commercial product that’s part of the MySQL Enterprise subscription.  But it’s freely-available under 30-day trial terms for evaluation from Oracle Software Delivery Cloud – if you aren’t a commercial customer, consider downloading MEM to see what it can do for you.  And if you are a MySQL Enterprise subscriber who hasn’t deployed MEM, or haven’t yet explored some of the more advanced features, now’s the time to do so.

MEM includes functionality called Query Analyzer (or QUAN for short).  When this feature was initially introduced in version 2.0, it was entirely dependent upon MySQL Proxy to intercept SQL commands between the client and the server, collect useful metrics such as execution time and result set size, and ship that information to MEM along with other useful information like the query EXPLAIN plan.  This was useful, but there were some limitations.  Proxy doesn’t (yet) scale very well, is a single point of failure, requires deployment reconfiguration (either of the application or of the MySQL server so that it listens on the ports to which applications are talking), and masks client host information from the server (which matters for authentication in particular).

Subsequent improvements to this QUAN functionality improved the situation by enabling data collection directly within several connectors which support it – if you are using Java, .NET or PHP languages, you can use connector plugins that collect data for QUAN and ship it to MEM without deploying Proxy.  You also get additional insight into your application, as the connector plugins can identify the source from which queries originate.  Here’s a sample screenshot, showing the stack trace collected for the example query in QUAN data:

MEM 2.3 QUAN with stack trace

In a “DevOps” world, this is a killer feature, giving developers immediate insight into database-level performance problems that map directly to the specific lines of application code that trigger them.  The MEM development team uses this feature to troubleshoot MEM performance itself – Mark Matthews explains this in a great post here (note that the stack trace QUAN data was still being fleshed out at the time).  When coupled with other MEM features such as advanced QUAN data filtering and graph-based filtering, users can quickly isolate the specific events causing concern.

If you’re a developer for a MySQL-backed application who also has to cover DBA responsibilities, or if you’re a MySQL DBA looking to provide more directive feedback to the developers you work with, this feature of MEM is worth checking out.

 

 

mysql_upgrade is now version-specific by default


You’ve just completed an upgrade from MySQL 5.5 to 5.6.  You followed the upgrade instructions in the manual, and ran mysql_upgrade.  But when you start MySQL 5.6, you still see error messages like the following in the server error log:

2013-03-26 16:45:51 5040 [ERROR] Column count of mysql.events_waits_current is w
rong. Expected 19, found 16. Created with MySQL 50520, now running 50610. Please
use mysql_upgrade to fix this error.
2013-03-26 16:45:51 5040 [ERROR] Column count of mysql.events_waits_history is w
rong. Expected 19, found 16. Created with MySQL 50520, now running 50610. Please
use mysql_upgrade to fix this error.

What went wrong?

Well, because mysql_upgrade is a client that’s built for a specific server version, it’s possible you have two different mysql_upgrade binaries on your system – one for the old 5.5 server, and another for the new 5.6 server.  Unless you are very careful, it’s possible to accidentally run the mysql_upgrade binary from the 5.5 distribution against the 5.6 server, which won’t do anything useful at all.

To help prevent this, mysql_upgrade binaries from version 5.6.12 onward explicitly check the server version they connect to and compare it against the version for which mysql_upgrade was compiled.  If they don’t match, mysql_upgrade generates an error and stops.  If you’re confident that the mysql_upgrade binary in use is the one you mean to use, even if it doesn’t match the server version against which you are using it, you can bypass the version check by including the new --skip-version-check option.

Naturally, the vast majority of the many bug fixes and improvements released to the MySQL community in 5.6.12 were implemented by MySQL Engineering staff, but in this case, the MySQL Support Team got into the act as well.  This new functionality was implemented by Sinisa Milivojevic, a longtime MySQL Support guru who will tell you he was – and is – the very first MySQL employee.  Thanks Sinisa!

New Member of the Cluster API Family: The Native Node.js Connector


MySQL Cluster 7.3 went GA yesterday and with it came a new member of the MySQL Cluster API family: mysql-js – a native Node.js connector. mysql-js uses the NDB API to connect directly to the data nodes which improves performance compared to executing queries through the MySQL nodes.

For an introduction to mysql-js and installation instructions, I recommend taking a look at the official API documentation and Andrew Morgan’s blog; the latter also has an overview of the new features in MySQL Cluster 7.3 in general.

To get a feel for how the new API works, I went ahead and created a small test program that will take one or more files with delimited data (e.g. similar to what you get with SELECT … INTO OUTFILE) and insert the data into a table. I have tried to keep things simple. This means that no external modules other than mysql-js are used, not very much error handling has been included, the reading and parsing of the data files could be done much better, performance has not been considered, etc. – but I would rather focus on the usage of mysql-js.

The complete example can be found in the file nodejs_tabinsert.js. The following will go through the important bits.

Preparation

The first part of the script is not really specific to mysql-js, so I will go lightly over that. A few of the arguments deserve a couple of extra words:

  • --log-level: when set to debug or detail, some output with information about what happens inside mysql-js is logged. This can be useful to learn more about the module or for debugging.
  • --basedir: this is the same as the basedir option for mysqld – it sets where MySQL has been installed. It is used for loading the mysql-js module. Default is /usr/local/mysql.
  • --database and --table: which table to insert the data into. The default database is test, but the table name must always be specified.
  • --connect-string: the script connects directly to the cluster nodes, so it needs the NDB connect-string similar to other NDB programs. The default is localhost:1186.
  • --delimiter: the delimiter used in the data files. The default is a tab (\t).

Setting Up mysql-js

With all the arguments parsed, it is now possible to load the mysql-js module:

// Create the mysql-js instance - look for it in $basedir/share/nodejs
var nosqlPath = path.join(basedir, 'share', 'nodejs');
var nosql = require(nosqlPath);

// unified_debug becomes available once nosql has been loaded. So set up
// the log level now.
if (logLevel != 'default') {
   unified_debug.on();
   switch (logLevel) {
      case 'debug':
         unified_debug.level_debug();
         break;

      case 'detail':
         unified_debug.level_detail();
         break;
   }
}

// Configure the connections - use all defaults except for the connect string and the database
var dbProperties = nosql.ConnectionProperties('ndb');
dbProperties.ndb_connectstring = ndbConnectstring;
dbProperties.database          = databaseName;

The unified_debug class is part of mysql-js and allows you to get debug information from inside mysql-js logged to the console.

The nosql.ConnectionProperties() method will return an object with the default settings for the chosen adapter – in this case ndb. After that, we can change the settings where we do not want the defaults. It is also possible to use an object with the settings as the argument instead of ‘ndb’; that requires setting the name of the adapter using the “implementation” property. Currently the two supported adapters are ‘ndb’ (as in this example) and ‘mysql’, which connects to mysqld instead. The ‘mysql’ adapter requires node-mysql version 2.0 and also supports InnoDB.

As the ‘ndb’ adapter connects directly to the cluster nodes, no authentication is used. This is the same as for the NDB API.

Callbacks and Table Mapping Constructor

var trxCommit = function(err, session) {
   if (err) {
      failOnError(err, 'Failed to commit after inserting ' + session.insertedRows + ' rows from ' + session.file + '.');
   }
   session.close(function(err) {
      if (err) {
         failOnError(err, 'Failed to close session for ' + session.file + '.');
      }
   });
}

We will load each file inside a transaction. The trxCommit() callback will verify that the transaction was committed without error and then closes the session.

var onInsert = function(err, session) {
   session.insertedRows++;
   if (err && err.ndb_error !== null) {
      failOnError(err, 'Error onInsert after ' + session.insertedRows + ' rows.');
   }

   // Check whether this is the last row.
   if (session.insertedRows === session.totalRows) {
      session.currentTransaction().commit(trxCommit, session);
   }
};

The onInsert callback checks whether each insert worked correctly. When all rows for the session (file) have been inserted, it commits the transaction.

var tableRow = function(tableMeta, line) {
   // Skip empty lines and comments
   if (line.length > 0 && line.substr(0, 1) != '#') {
      var dataArray = line.split(delimiter);
      for (var j = 0; j < tableMeta.columns.length; j++) {
         this[tableMeta.columns[j].name] = dataArray[tableMeta.columns[j].columnNumber];
      }
   }
}

The tableRow is the constructor later used for the table mapping. It is used to set up the object with the data to be inserted for that row. tableMeta is a TableMetaData object with information about the table we are inserting into.

The Session

This is where the bulk of the work is done. Each file will have its own session.

var onSession = function(err, session, file) {
   if (err) {
      failOnError(err, 'Error onSession.');
   }

   // Get the metadata for the table we are going to insert into.
   // This is needed to map the lines read from the data files into row objects
   // (the mapping happens in tableRow() ).
   session.getTableMetadata(databaseName, tableName, function(err, tableMeta) {
      if (err) {
         failOnError(err, 'Error getTableMetadata.');
      }

      var trx = session.currentTransaction();
      trx.begin(function(err) {
         if (err) {
            failOnError(err, 'Failed to start transaction for "' + file + '".');
         }
      });

      session.insertedRows = 0;
      session.file         = file;
      console.log('Reading: ' + file);
      fs.readFile(file, { encoding: 'utf8', flag: 'r' }, function(err, data) {
         if (err) {
            failOnError(err, 'Error reading file "' + file + '"');
         }

         // First find the rows to be inserted
         console.log('Analysing: ' + file);
         var rows  = [];
         session.totalRows = 0;
         data.split('\n').forEach(function(line) {
            var row = new tableRow(tableMeta, line);
            if (Object.keys(row).length > 0) {
               rows[session.totalRows++] = row;
            }
         });

         // Insert the rows
         console.log('Inserting: ' + file);
         rows.forEach(function(row) {
            session.persist(row, onInsert, session);
         });
      });
   });
};

The onSession function is a callback that is used when creating (opening) the sessions.

The first step is to get the metadata for the table. As all data is inserted into the same table, in principle we could reuse the same metadata object for all sessions, but getTableMetadata() is a method of the session, so the metadata cannot be fetched until this point.

Next a transaction is started. We get the transaction with the session.currentTransaction() method. This returns an idle transaction, which can then be started using the begin() method. As such there is no need to store the transaction in a variable; as can be seen in the trxCommit() and onInsert() callbacks above, it is also possible to call session.currentTransaction() repeatedly – it will keep returning the same transaction object.

The rest of the onSession function processes the actual data. The insert itself is performed with the session.persist() method.

Edit: using a session this way to insert the rows one by one is obviously not very efficient, as it requires a round trip to the data nodes for each row. For bulk inserts the Batch class is a better choice; however, I chose Session to demonstrate using multiple updates inside a transaction.

Creating the Sessions

var annotations = new nosql.TableMapping(tableName).applyToClass(tableRow);
files.forEach(function(file) {
   nosql.openSession(dbProperties, tableRow, onSession, file);
});

First the table mapping is defined. Then a session is opened for each file. Opening a session means connecting to the cluster, so it can be a relatively expensive step.

Running the Script

To test the script, the table t1 in the test database will be used:

CREATE TABLE `t1` (
  `id` int(10) unsigned NOT NULL PRIMARY KEY,
  `val` varchar(10) NOT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=utf8;

For the data files, I have been using:

t1a.txt:

# id    val
1       a
2       b
3       c
4       d
5       e

t1b.txt:

# id    val
6       f
7       g
8       h
9       i
10      j
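The tableRow() constructor above has to skip the comment header and blank lines when turning these files into row objects. A minimal Python sketch of that idea (the column handling here is my assumption, not the actual tableRow code):

```python
def parse_line(line):
    """Split a whitespace-separated data line into (id, val); skip comments and blanks."""
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    fields = line.split()
    return int(fields[0]), fields[1]

data = "# id    val\n1       a\n2       b\n"
rows = [r for r in (parse_line(l) for l in data.split('\n')) if r is not None]
print(rows)  # [(1, 'a'), (2, 'b')]
```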

Running the script:

shell$ export LD_LIBRARY_PATH=/usr/local/mysql/lib
shell$ node nodejs_tabinsert.js --table=t1 t1a.txt t1b.txt
Connected to cluster as node id: 53
Reading: t1b.txt
Reading: t1a.txt
Analysing: t1b.txt
Inserting: t1b.txt
Analysing: t1a.txt
Inserting: t1a.txt

One important observation is that even though the session for t1a.txt is created before the one for t1b.txt, the rows from t1b.txt end up being inserted first. If the inserts used auto-increment values, you would in general see the assigned values alternate between rows from t1b.txt and t1a.txt. The lesson: in node.js, do not count on knowing the exact order of operations.
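The same effect is easy to reproduce in any asynchronous runtime. Here is a small Python asyncio sketch (file names and delays are made up) showing that completion order follows I/O latency, not submission order:

```python
import asyncio

async def load_file(name, delay, completed):
    await asyncio.sleep(delay)  # stands in for the file I/O and inserts
    completed.append(name)

async def main():
    completed = []
    # t1a.txt is submitted first but takes longer, so t1b.txt finishes first
    await asyncio.gather(load_file('t1a.txt', 0.05, completed),
                         load_file('t1b.txt', 0.01, completed))
    return completed

print(asyncio.run(main()))  # ['t1b.txt', 't1a.txt']
```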

I hope this example will spark your interest in mysql-js. Feedback is most welcome – both bug reports and feature requests can be reported at bugs.mysql.com.

Vote for bugs which impact you!


Matt Lord already announced this change, but I am so happy that I want to repeat it: the MySQL Community Bugs Database Team has introduced a new "Affects Me" button. Each bug report now has a counter which increases by one every time the button is clicked, which means we – MySQL Support and Engineering – can see how many users are affected by the bug.


Why is this important? We have always considered community input as we prioritize bug fixes, and this is one more point of reference for us. Before this change, we only had a counter for support customers, which increased when they opened a support request complaining they were affected by a bug. But our customers are smart and do not always open a support request when they hit a bug: sometimes they simply implement a workaround, or there are other circumstances in which they don't create a ticket, or the affected version is so recently released that large shops are reluctant to run it in production. Therefore, when discussing which bugs to prioritize, we could not rely only on the "affects paying customers" number; we also had to guess whether a bug affected a large group of users. We used the number of bug report subscribers, recent comments, and forum searches, but all of these methods gave only an approximation.


Therefore I want to ask you: if you hit a bug which has already been reported but not yet fixed, please click the "Affects Me" button! It will take just a few seconds, but your voice will be heard.


Practical P_S: Finding which accounts fail to properly close connections


I’ve previously written about several problems which can benefit from additional visibility provided by PERFORMANCE_SCHEMA in MySQL 5.6, and it’s time to add to that list.  A very common problem involves connections which are not properly closed – they simply idle until they reach wait_timeout (or interactive_timeout, depending on the client flags set), and the server terminates the connection.  Who knows what the root cause is – perhaps the client terminated without cleaning up connections, or maybe there was just no load, or maybe the network cable was unplugged.  It’s something application developers – particularly those using persistent connections managed by a pool – run into frequently.

If you are a DBA rather than a developer, though, your only real clue that something is wrong may be a perpetually increasing Aborted_clients status variable counter. The manual has a page dedicated to solving such (and related) connection problems, and it references tools such as the general query log and error log.  The Aborted_clients status variable is useful to answer the question, “how many connections have been closed without an explicit quit request from the client?”  And prior to 5.6, that’s about as much information as you could expect to get:

mysql> show global status like 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 5     |
+-----------------+-------+
1 row in set (0.00 sec)

With PERFORMANCE_SCHEMA in 5.6, we can isolate the problem to specific accounts, and we can calculate the percentage of client connections which were terminated without an explicit quit command from the client.  You can do that with the following query:

SELECT 
    ess.USER,
    ess.HOST,
    (a.TOTAL_CONNECTIONS - a.CURRENT_CONNECTIONS) - ess.COUNT_STAR not_closed,
    ((a.TOTAL_CONNECTIONS - a.CURRENT_CONNECTIONS) - ess.COUNT_STAR) * 100 / 
       (a.TOTAL_CONNECTIONS - a.CURRENT_CONNECTIONS) pct_not_closed
FROM
    performance_schema.events_statements_summary_by_account_by_event_name ess
        JOIN
    performance_schema.accounts a ON (ess.USER = a.USER AND ess.HOST = a.HOST)
WHERE
    ess.EVENT_NAME = 'statement/com/Quit'
        AND (a.TOTAL_CONNECTIONS - a.CURRENT_CONNECTIONS) > ess.COUNT_STAR;
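The derived columns are simple arithmetic over the two PERFORMANCE_SCHEMA tables. A quick Python check of that expression, using made-up counts:

```python
def not_closed(total_connections, current_connections, quit_count):
    """Connections that ended without a Quit command (mirrors the SQL expression)."""
    ended = total_connections - current_connections
    count = ended - quit_count
    return count, count * 100.0 / ended

# Hypothetical account: 10 total connections, 1 still open, 5 explicit Quits
count, pct = not_closed(total_connections=10, current_connections=1, quit_count=5)
print(count, round(pct, 4))  # 4 44.4444
```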

The easiest way to test this is to make a handful of connections, issue SET @@session.wait_timeout = 1 in some of them, and let those connections time out – here’s the result of the above query after doing so:

+------+-----------+------------+----------------+
| USER | HOST      | not_closed | pct_not_closed |
+------+-----------+------------+----------------+
| root | localhost |          4 |        44.4444 |
| ODBC | localhost |          1 |       100.0000 |
+------+-----------+------------+----------------+
2 rows in set (0.00 sec)

Knowing which accounts are failing to properly close connections can help quickly spotlight where further investigation should be focused.  And with MySQL 5.6, DBAs can get that information without resorting to the general query log or via application logs.

Practical P_S: Extending PROCESSLIST


MySQL 5.6 introduced major advances to monitoring via PERFORMANCE_SCHEMA, but also made a change in how the server binds to the network by default.  In MySQL 5.5, the --bind-address configuration option defaulted to "0.0.0.0", meaning IPv4 only.  This changed to "*" in MySQL 5.6, accepting connections on both IPv6 and IPv4 interfaces.  Somehow (I’ve not looked into it yet), my (unsupported) Windows XP installation now refuses to bind to IPv4, which caused surprising problems for certain tools that seem to internally map "localhost" to the IPv4-specific 127.0.0.1, where connections fail.  In working through this problem, I found myself wishing that PROCESSLIST output included information about which mechanism or interface was being used by each connection.  Fortunately, we can leverage PERFORMANCE_SCHEMA to extend PROCESSLIST in meaningful ways – this post aims to demonstrate how, by adding information about the interface as an example.

Here’s output from a basic PROCESSLIST:

mysql> SHOW PROCESSLIST\G
*************************** 1. row ***************************
Id: 6
User: root
Host: localhost:2873
db: performance_schema
Command: Query
Time: 0
State: init
Info: SHOW PROCESSLIST
1 row in set (0.00 sec)

Notice that we get "localhost" here rather than an IP address – there’s no way to tell from this information whether the connection uses IPv4 or IPv6.  There’s another way to get that same information:

mysql> SELECT * FROM information_schema.processlist\G
*************************** 1. row ***************************
ID: 6
USER: root
HOST: localhost:2873
DB: performance_schema
COMMAND: Query
TIME: 0
STATE: executing
INFO: SELECT * FROM information_schema.processlist
1 row in set (0.06 sec)

That’s useful, because now we’re doing a SELECT on a table which we can use with JOINs.  So, what table should we JOIN with?  PERFORMANCE_SCHEMA.THREADS is where you want to look:

mysql> SELECT * FROM performance_schema.threads
-> WHERE processlist_id = 6\G
*************************** 1. row ***************************
THREAD_ID: 27
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 6
PROCESSLIST_USER: root
PROCESSLIST_HOST: localhost
PROCESSLIST_DB: performance_schema
PROCESSLIST_COMMAND: Query
PROCESSLIST_TIME: 0
PROCESSLIST_STATE: Sending data
PROCESSLIST_INFO: SELECT * FROM performance_schema.threads
WHERE processlist_id = 6
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
1 row in set (0.02 sec)

The PROCESSLIST_ID column is obviously what we want to use to do the JOIN:

mysql> SELECT thread_id, p.*
-> FROM performance_schema.threads t
-> JOIN information_schema.processlist p
->  ON (p.id = t.processlist_id)\G
*************************** 1. row ***************************
thread_id: 27
ID: 6
USER: root
HOST: localhost:2873
DB: performance_schema
COMMAND: Query
TIME: 0
STATE: executing
INFO: SELECT thread_id, p.*
FROM performance_schema.threads t
JOIN information_schema.processlist p
ON (p.id = t.processlist_id)
1 row in set (0.06 sec)

The THREAD_ID column is what’s most useful to join to other PERFORMANCE_SCHEMA tables, so we’ll use the THREADS table as a bridge.  For connection information, the table we want to look at is PERFORMANCE_SCHEMA.SOCKET_INSTANCES:

mysql> SELECT * FROM performance_schema.socket_instances\G
Empty set (0.00 sec)

If you get an empty result set like the above, it’s because the SOCKET_INSTANCES table relies on PERFORMANCE_SCHEMA instrumentation which is not on by default.  So, taking a step back, if you want to see this information, you need to enable it in PERFORMANCE_SCHEMA.  The documentation explains two ways to enable this instrumentation:  you can do it at startup with --performance_schema_instrument='wait/io/socket/sql/client_connection=counted' (beware of this bug report), or you can issue the following UPDATE statement at runtime:

UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES'
WHERE NAME = 'wait/io/socket/sql/client_connection';

Once enabled, new (and only new) connections will be counted, and you’ll start to see this type of information in the SOCKET_INSTANCES table:

mysql> SELECT * FROM socket_instances\G
*************************** 1. row ***************************
EVENT_NAME: wait/io/socket/sql/client_connection
OBJECT_INSTANCE_BEGIN: 35914240
THREAD_ID: 27
SOCKET_ID: 105332
IP: ::1
PORT: 2873
STATE: ACTIVE
1 row in set (0.00 sec)

Figuring out which interface is in use takes a little manipulation, but not much.  I use an ugly REGEXP to isolate IPv4 (or IPv6-mapped IPv4) connections: '^(::ffff:)?[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$' (suggestions for improvements welcomed, though it worked for me).  Unix socket connections also show here, with a PORT value of 0 and an empty IP value.  Right now, shared memory and named pipe connections on Windows don’t appear in this table, so anything which doesn’t match the IPv4 REGEXP and has a PORT value greater than 0 should be considered IPv6.  Putting this all together, here’s the final query:

mysql> SELECT
->   p.*,
->   CASE
->     WHEN PORT = 0 AND IP = '' THEN 'Unix Socket'
->     WHEN IP REGEXP '^(::ffff:)?[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$' THEN 'IPv4'
->     WHEN PORT > 0 THEN 'IPv6'
->     ELSE 'Undetermined'
->     END AS interface
-> FROM performance_schema.socket_instances si
->   RIGHT JOIN performance_schema.threads t
->     ON (t.thread_id = si.thread_id)
->   JOIN information_schema.processlist p
->     ON (t.processlist_id = p.id)\G
*************************** 1. row ***************************
ID: 6
USER: root
HOST: localhost:2873
DB: performance_schema
COMMAND: Query
TIME: 0
STATE: executing
INFO: SELECT
p.*,
CASE
WHEN PORT = 0 AND IP = '' THEN 'Unix Socket'
WHEN IP REGEXP '^(::ffff:)?[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'
THEN 'IPv4'
WHEN PORT > 0 THEN 'IPv6'
ELSE 'Undetermined'
END AS interface
FROM performance_schema.socket_instances si
RIGHT JOIN performance_schema.threads t
ON (t.thread_id = si.thread_id)
JOIN information_schema.processlist p
ON (t.processlist_id = p.id)
interface: IPv6
1 row in set (0.13 sec)
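As a sanity check, the CASE logic from that query can be expressed directly in code. A small Python sketch of the same classification (the sample addresses are invented):

```python
import re

# Same pattern used in the SQL CASE expression: IPv4, or IPv6-mapped IPv4
IPV4_RE = re.compile(r'^(::ffff:)?[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$')

def interface(ip, port):
    """Classify a SOCKET_INSTANCES row the same way the CASE expression does."""
    if port == 0 and ip == '':
        return 'Unix Socket'
    if IPV4_RE.match(ip):
        return 'IPv4'
    if port > 0:
        return 'IPv6'
    return 'Undetermined'

print(interface('::1', 2873))            # IPv6
print(interface('::ffff:10.0.0.1', 80))  # IPv4
print(interface('', 0))                  # Unix Socket
```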

Leveraging PERFORMANCE_SCHEMA in MySQL 5.6 allows you to get meaningful information about connections beyond what’s pre-packaged in PROCESSLIST.  Hopefully this post gives you ideas on how you can leverage this capability to meet your diagnostic needs.

 

Where’s my line?

mysql -e "select * from test.t where d < '2013-07-17 17:00:00'"
+---------------------+
| d                   |
+---------------------+
| 2013-07-17 15:34:19 |
+---------------------+

mysqldump -t --compact test t --where="d < '2013-07-17 17:00:00'"
(no output)

Where's my line?


Practical P_S: From which hosts are connections being attempted?


MySQL Server has an Aborted_connects status counter which shows the number of failed attempts to establish a new connection.  The manual describes potential causes for these failures, and goes on to make the following statement:

If these kinds of things happen, it might indicate that someone is trying to break into your server! Messages for these types of problems are logged to the general query log if it is enabled.

While not explicitly stated here in the manual, one can also use the MySQL Enterprise Audit plugin to get additional information, instead of the general query log.  This is a commercial-license feature of MySQL Enterprise subscriptions, but like all MySQL commercial products, it is available to download for evaluation purposes from Oracle Software Delivery Cloud.

Here’s an example of a failed login – I’m trying here to connect to my own external IPv6 address, and all user accounts on this instance are restricted to the loopback interface (localhost, ::1, 127.0.0.1):

D:\mysql-advanced-5.6.11-win32>bin\mysql -hfe80::205:9aff:fe3c:7a00%23 -P3308
ERROR 1130 (HY000): Host 'fe80::205:9aff:fe3c:7a00%23' is not allowed to connect to this MySQL server

We can see that aborted_connects was incremented:

mysql> SHOW GLOBAL STATUS LIKE 'aborted_connects';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Aborted_connects | 1     |
+------------------+-------+
1 row in set (0.00 sec)

Here’s what this looks like in the output for MySQL Enterprise Audit plugin:

<AUDIT_RECORD TIMESTAMP="2013-07-19T16:50:11" NAME="Connect" CONNECTION_ID="2" 
  STATUS="1130" USER="" PRIV_USER="" OS_LOGIN="" PROXY_USER="" 
  HOST="" IP="fe80::205:9aff:fe3c:7a00%23" DB=""/>
<AUDIT_RECORD TIMESTAMP="2013-07-19T16:50:11" NAME="Quit" CONNECTION_ID="2" STATUS="0"/>
<AUDIT_RECORD TIMESTAMP="2013-07-19T17:29:24" NAME="Query" CONNECTION_ID="1" STATUS="0" 
  SQLTEXT="SHOW GLOBAL STATUS LIKE 'aborted_connect'"/>

It’s also easy to search the audit records for such events, using mysqlauditgrep from MySQL Utilities:

mysqluc> mysqlauditgrep --event-type=Connect \ 
  D:\\mysql-advanced-5.6.11-win32\\data\\audit.log --status=1130 --format=VERTICAL
*************************       1. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:43:51
            IP: fe80::21f:3bff:fe82:bb05%4
          NAME: Connect
 CONNECTION_ID: 2
*************************       2. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:43:59
            IP: fe80::21f:3bff:fe82:bb05%4
          NAME: Connect
 CONNECTION_ID: 4
*************************       3. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:47:02
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 5
*************************       4. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:50:11
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 2
*************************       5. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:31:30
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 3
*************************       6. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:31:56
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 4
*************************       7. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:33:50
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 5
7 rows.

You can also easily filter for “any connection attempt that produced an error”, if you want to look beyond error 1130 – say including attempts which use incorrect passwords:

D:\mysql-advanced-5.6.11-win32>bin\mysql -uroot -P3308 -pnotmypassword
Warning: Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

mysqluc> mysqlauditgrep --event-type=Connect \
  D:\\mysql-advanced-5.6.11-win32\\data\\audit.log --status=1-9999 --format=VERTICAL
*************************       1. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:43:51
            IP: fe80::21f:3bff:fe82:bb05%4
          NAME: Connect
 CONNECTION_ID: 2
          HOST: None
          USER: None
*************************       2. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:43:59
            IP: fe80::21f:3bff:fe82:bb05%4
          NAME: Connect
 CONNECTION_ID: 4
          HOST: None
          USER: None
*************************       3. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:47:02
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 5
          HOST: None
          USER: None
*************************       4. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T16:50:11
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 2
          HOST: None
          USER: None
*************************       5. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:31:30
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 3
          HOST: None
          USER: None
*************************       6. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:31:56
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 4
          HOST: None
          USER: None
*************************       7. row *************************
        STATUS: 1130
     TIMESTAMP: 2013-07-19T17:33:50
            IP: fe80::205:9aff:fe3c:7a00%23
          NAME: Connect
 CONNECTION_ID: 5
          HOST: None
          USER: None
*************************       8. row *************************
        STATUS: 1045
     TIMESTAMP: 2013-07-19T18:18:59
            IP: ::1
          NAME: Connect
 CONNECTION_ID: 7
          HOST: localhost
          USER: root
8 rows.

But enough about Enterprise Audit plugin – how else can you get information about which hosts initiate connections which fail to authenticate?  As the manual states, we can find some information about this in the general log:

 

mysql> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.09 sec)

mysql> SELECT 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.02 sec)

mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot -hfe80::205:9aff:fe3c:7a00%23
ERROR 1130 (HY000): Host 'fe80::205:9aff:fe3c:7a00%23' is not allowed to connect
 to this MySQL server

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
...
mysql> SELECT 2;
+---+
| 2 |
+---+
| 2 |
+---+
1 row in set (0.00 sec)

mysql>

And here’s what shows in the general query log:

130719 10:31:46	    1 Query	SELECT 1
130719 10:33:31	    1 Quit	
130719 10:33:55	    6 Connect	root@localhost on 
		    6 Query	select @@version_comment limit 1
130719 10:33:58	    6 Query	SELECT 2

That’s interesting – there’s no record of the connection attempt. There is, however, a record when the wrong password is used:

130719 11:18:59	    7 Connect	root@localhost on 
		    7 Connect	Access denied for user 'root'@'localhost' (using password: YES)
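Records like that can at least be grepped out of the log programmatically. A throwaway Python sketch over the log format shown above (the embedded lines mirror the excerpt):

```python
# Sample lines in MySQL 5.6 general query log format (modeled on the excerpt above)
log = """130719 10:33:55\t    6 Connect\troot@localhost on 
130719 11:18:59\t    7 Connect\troot@localhost on 
\t\t    7 Connect\tAccess denied for user 'root'@'localhost' (using password: YES)
"""

# Failed password attempts leave an "Access denied" Connect record
denied = [line for line in log.splitlines() if 'Access denied' in line]
print(len(denied))  # 1
```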

So the general query log is useful for identifying some – but not all – failed connection attempts (this is true regardless of the setting of --log-warnings). Fortunately, there are some tools in PERFORMANCE_SCHEMA in MySQL 5.6 which provide a bit more.

The PERFORMANCE_SCHEMA.HOSTS table seems promising, based on the following manual description:

The hosts table contains a row for each host from which clients have connected to the MySQL server. For each host name, the table counts the current and total number of connections.

Unfortunately, there’s a couple of things to note about this table:

  1. Internal threads also appear here, not just external connections.  These internal threads all show with a NULL value in the HOST column.
  2. The table has columns for CURRENT_CONNECTIONS and TOTAL_CONNECTIONS, but nothing indicating number of failed connections.
  3. Most importantly, the connection shows with a NULL HOST value until the authentication is complete.  I’m hopeful this will be seen as a bug and fixed in 5.6, but if not, adding the actual (and known) HOST value for unauthenticated users seems a useful feature request.

Taken together, these three aspects really limit the ability of the HOSTS table to help us, here.  We can’t really tell how many of the NULL hosts are internal threads vs. failed connection attempts, and we can’t tell from which host a connection attempt was made.  On the other hand, it does provide a meaningful list of all hosts from which valid connections have been established.  That can come in handy later.  Here’s a quick example of how the row with NULL values for HOST gets incremented for each failed connection:

mysql> SELECT * FROM performance_schema.hosts\G
*************************** 1. row ***************************
               HOST: NULL
CURRENT_CONNECTIONS: 20
  TOTAL_CONNECTIONS: 21
*************************** 2. row ***************************
               HOST: localhost
CURRENT_CONNECTIONS: 1
  TOTAL_CONNECTIONS: 3
2 rows in set (0.00 sec)

mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -hfe80::205:9aff:fe3c:7a00%23 -uroot
ERROR 1045 (28000): Access denied for user 'root'@'fe80::205:9aff:fe3c:7a00%23'
(using password: NO)

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -hfe80::205:9aff:fe3c:7a00%23 -uroot
ERROR 1045 (28000): Access denied for user 'root'@'fe80::205:9aff:fe3c:7a00%23'
(using password: NO)

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> SELECT * FROM performance_schema.hosts\G
*************************** 1. row ***************************
               HOST: NULL
CURRENT_CONNECTIONS: 20
  TOTAL_CONNECTIONS: 23
*************************** 2. row ***************************
               HOST: localhost
CURRENT_CONNECTIONS: 1
  TOTAL_CONNECTIONS: 4
2 rows in set (0.00 sec)

It’s worth noting that the HOST column will show as NULL even if the connection authenticates just fine, but is rejected during the authentication phase for other reasons, such as the authenticated user account doesn’t have access to the requested default database, or because it’s using an account with an expired password and a client which doesn’t support that capability.

The next table that might help us is the PERFORMANCE_SCHEMA.HOST_CACHE table.  This is a useful addition in MySQL 5.6 – it allows DBAs to monitor the host cache contents, especially to see whether hosts are at risk of being blocked because of excessive connection failures (see documentation on --max-connect-errors).  Unlike the HOSTS table, HOST_CACHE has counters for failed connections only – so it will show failed connection attempts – sometimes.  Like the HOSTS table, the HOST_CACHE table also has a couple of caveats:

  1. It will be empty if you are running with --skip-name-resolve, since the purpose of the host cache is to avoid repeated reverse DNS lookups for the same host.
  2. It doesn’t track connections when name resolution is not required – specifically, it does not capture information on localhost and loopback interfaces.

That means that you typically can’t use the HOST_CACHE table to identify potential problems originating on the same host.  But if you are looking for potential signs that unexpected remote hosts have reached your MySQL Server, this is a good place to look.  Here’s an example output from my earlier testing:

mysql> SELECT * FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 7
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 0
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 0
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 09:50:11
                                 LAST_SEEN: 2013-07-19 12:46:08
                          FIRST_ERROR_SEEN: 2013-07-19 09:50:11
                           LAST_ERROR_SEEN: 2013-07-19 12:46:08
1 row in set (0.00 sec)

 

There’s a helpful breakdown of different error classes, so that you can isolate out potential root causes.  In our case, COUNT_HOST_ACL_ERRORS was incremented since no user account is allowed to connect from that host.  Note also that COUNT_NAMEINFO_PERMANENT_ERRORS shows 1 – that’s telling us that MySQL couldn’t get a host name from a reverse DNS lookup.  Let’s create an account that can connect from any host, and see what happens:

mysql> CREATE USER test_hc@'%' IDENTIFIED BY 'T3stP@ss';
Query OK, 0 rows affected (0.03 sec)

mysql> flush hosts;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM performance_schema.host_cache\G
Empty set (0.00 sec)

mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -hfe80::205:9aff:fe3c:7a00%23 -utest_hc -pT3stP@ss
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 21
Server version: 5.6.11-enterprise-commercial-advanced MySQL Enterprise Server -
Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SELECT * FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 0
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 0
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 13:19:16
                                 LAST_SEEN: 2013-07-19 13:19:16
                          FIRST_ERROR_SEEN: 2013-07-19 13:19:16
                           LAST_ERROR_SEEN: 2013-07-19 13:19:16
1 row in set (0.00 sec)

mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -hfe80::205:9aff:fe3c:7a00%23 -utest_hc -pbadpassword
Warning: Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'test_hc'@'fe80::205:9aff:fe3c:7a00%23' (using password: YES)

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 23
Server version: 5.6.11-enterprise-commercial-advanced MySQL Enterprise Server -
Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SELECT * FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 1
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 0
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 13:19:16
                                 LAST_SEEN: 2013-07-19 13:20:01
                          FIRST_ERROR_SEEN: 2013-07-19 13:19:16
                           LAST_ERROR_SEEN: 2013-07-19 13:20:01
1 row in set (0.00 sec)

We can see that COUNT_AUTHENTICATION_ERRORS was incremented, as we failed to authenticate. Also note that COUNT_HOST_BLOCKED_ERRORS remains zero – the documentation again explains why:

The number of connection errors that are deemed “blocking” (assessed against the max_connect_errors system variable). Currently, only protocol handshake errors are counted, and only for hosts that passed validation (HOST_VALIDATED = YES).

So users entering the wrong password don’t count towards max_connect_errors.  Demonstrating another type of connection error – which again doesn’t increment max_connect_errors – here’s what happens when the test_hc user attempts to connect with a default database of “mysql” (for which it has no privileges):

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -hfe80::205:9aff:fe3c:7a00%23 -utest_hc -pT3stP@ss mysql
Warning: Using a password on the command line interface can be insecure.
ERROR 1044 (42000): Access denied for user 'test_hc'@'%' to database 'mysql'

D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 25
...
mysql> SELECT * FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 1
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 1
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 13:19:16
                                 LAST_SEEN: 2013-07-19 13:26:03
                          FIRST_ERROR_SEEN: 2013-07-19 13:19:16
                           LAST_ERROR_SEEN: 2013-07-19 13:26:03
1 row in set (0.00 sec)

Now we see COUNT_DEFAULT_DATABASE_ERRORS incremented. So what would cause SUM_CONNECT_ERRORS to increment? A connection which doesn’t complete the authentication handshake – such as one from a port scanner or a telnet session. Here’s what HOST_CACHE looks like after two consecutive connections using telnet:

D:\mysql-advanced-5.6.11-win32>telnet fe80::205:9aff:fe3c:7a00%23 3308
...
D:\mysql-advanced-5.6.11-win32>telnet fe80::205:9aff:fe3c:7a00%23 3308
...
D:\mysql-advanced-5.6.11-win32>bin\mysql -P3308 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
...

mysql> SELECT * FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 2
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 2
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 1
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 1
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 13:19:16
                                 LAST_SEEN: 2013-07-19 13:32:14
                          FIRST_ERROR_SEEN: 2013-07-19 13:19:16
                           LAST_ERROR_SEEN: 2013-07-19 13:32:25
1 row in set (0.00 sec)

Also note that COUNT_HANDSHAKE_ERRORS was incremented, which is a telltale sign of a “dumb” port scan, client protocol incompatibility, or network issues.  A good place to start looking for potential security problems might be:

  1. Those rows in HOST_CACHE where COUNT_HANDSHAKE_ERRORS > 0, as this might indicate a port scan.
  2. Those rows in HOST_CACHE where COUNT_HOST_ACL_ERRORS > 0, as this might indicate an attempted connection from an unauthorized host (check firewall to make sure appropriate restrictions apply).
  3. Those rows in HOST_CACHE for which corresponding rows in the HOSTS table cannot be found.
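
The first two checks can be expressed directly against the HOST_CACHE table – a sketch, using the columns shown in the output above:

```sql
SELECT ip, host, count_handshake_errors, count_host_acl_errors
FROM performance_schema.host_cache
WHERE count_handshake_errors > 0
   OR count_host_acl_errors > 0;
```

Any rows returned are candidates for closer inspection; the third check needs the join against the HOSTS table described below.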

The goal of the third criterion is to set aside hosts from which valid connections are made – hosts that may have a handful of isolated failures or one-time misconfigurations – and isolate the hosts from which only failed connection attempts have been made.  It’s not quite as easy as it might seem: the HOST_CACHE table has both HOST and IP columns, while the HOSTS table has only a HOST column, which can contain either a host name or an IP address.  The following query works:

mysql> SELECT * FROM performance_schema.hosts\G
*************************** 1. row ***************************
               HOST: NULL
CURRENT_CONNECTIONS: 20
  TOTAL_CONNECTIONS: 21
*************************** 2. row ***************************
               HOST: localhost
CURRENT_CONNECTIONS: 1
  TOTAL_CONNECTIONS: 2
2 rows in set (0.00 sec)

mysql> SELECT hc.*
    -> FROM
    ->  performance_schema.host_cache hc
    -> LEFT JOIN
    ->  performance_schema.hosts h1
    ->    ON (h1.host = hc.host)
    -> LEFT JOIN
    ->  performance_schema.hosts h2
    ->    ON (h2.host = hc.ip)
    -> WHERE h2.host IS NULL
    ->   AND h1.host IS NULL\G
*************************** 1. row ***************************
                                        IP: fe80::205:9aff:fe3c:7a00%23
                                      HOST: NULL
                            HOST_VALIDATED: YES
                        SUM_CONNECT_ERRORS: 0
                 COUNT_HOST_BLOCKED_ERRORS: 0
           COUNT_NAMEINFO_TRANSIENT_ERRORS: 0
           COUNT_NAMEINFO_PERMANENT_ERRORS: 1
                       COUNT_FORMAT_ERRORS: 0
           COUNT_ADDRINFO_TRANSIENT_ERRORS: 0
           COUNT_ADDRINFO_PERMANENT_ERRORS: 0
                       COUNT_FCRDNS_ERRORS: 0
                     COUNT_HOST_ACL_ERRORS: 0
               COUNT_NO_AUTH_PLUGIN_ERRORS: 0
                  COUNT_AUTH_PLUGIN_ERRORS: 0
                    COUNT_HANDSHAKE_ERRORS: 0
                   COUNT_PROXY_USER_ERRORS: 0
               COUNT_PROXY_USER_ACL_ERRORS: 0
               COUNT_AUTHENTICATION_ERRORS: 1
                          COUNT_SSL_ERRORS: 0
         COUNT_MAX_USER_CONNECTIONS_ERRORS: 0
COUNT_MAX_USER_CONNECTIONS_PER_HOUR_ERRORS: 0
             COUNT_DEFAULT_DATABASE_ERRORS: 0
                 COUNT_INIT_CONNECT_ERRORS: 0
                        COUNT_LOCAL_ERRORS: 0
                      COUNT_UNKNOWN_ERRORS: 0
                                FIRST_SEEN: 2013-07-19 13:56:37
                                 LAST_SEEN: 2013-07-19 13:56:37
                          FIRST_ERROR_SEEN: 2013-07-19 13:56:37
                           LAST_ERROR_SEEN: 2013-07-19 13:56:37
1 row in set (0.00 sec)

By using PERFORMANCE_SCHEMA tables in MySQL Server 5.6, you can get visibility into failed connection attempts not recorded in the general query log.

mysqldump privileges required


"mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used. Certain options might require other privileges as noted in the option descriptions."
- http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
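
As a sketch of the common case, a dedicated backup account might be granted the privileges the manual lists (the account name and password here are placeholders):

```sql
-- Hypothetical backup account; name and password are placeholders.
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'ChangeMe!';
-- SELECT for table data, SHOW VIEW for views, TRIGGER for triggers,
-- and LOCK TABLES unless --single-transaction is used:
GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES ON *.*
  TO 'backup'@'localhost';
```

Options such as --routines or --events require additional privileges, as the per-option table shows.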

Option Default Privileges Required
--add-drop-database Off  
--add-drop-table On in --opt  
--add-drop-trigger Off  
--add-locks On in --opt  
--all-databases Off SELECT, SHOW DATABASES ON *.*
--allow-keywords Off  
--apply-slave-statements Off  
--bind-address=ip_address Off  
--comments On  
--compact Off  
--compatible=name[,name,...] Off  
--complete-insert Off  
--create-options On in --opt  
--databases Off  
--debug[=debug_options] Off  
--debug-check Off  
--debug-info Off  
--default-auth=plugin Off  
--default-character-set=charset_name utf8/latin1  
--delayed-insert Off  
--delete-master-logs Off SUPER ON *.*
--disable-keys On in --opt  
--dump-date On in --comments  
--dump-slave[=value] Off SUPER or REPLICATION CLIENT ON *.*
--events Off EVENT
--extended-insert On in --opt  
--fields-enclosed-by=string '' in --tab  
--fields-escaped-by '\\' in --tab  
--fields-optionally-enclosed-by=string Off  
--fields-terminated-by=string '\t' in --tab  
--flush-logs Off RELOAD ON *.*
--flush-privileges Off  
--help Off  
--hex-blob Off  
--host localhost  
--ignore-table=db_name.tbl_name Off  
--include-master-host-port Off  
--insert-ignore Off  
--lines-terminated-by=string '\n' in --tab  
--lock-all-tables Off LOCK TABLES ON *.*
--lock-tables On in --opt LOCK TABLES
--log-error=file_name Off  
--login-path=name Off controlled at OS level
--master-data Off RELOAD ON *.*, SUPER or REPLICATION CLIENT ON *.*
--max_allowed_packet=value 24MB  
--net_buffer_length=value 1022KB  
--no-autocommit Off  
--no-create-db Off  
--no-create-info Off  
--no-data Off  
--no-set-names Off  
--no-tablespaces Off  
--opt On  
--order-by-primary Off  
--password[=password] Off  
--pipe Off  
--plugin-dir=path Off  
--port=port_num 3306  
--quick On in --opt  
--quote-names On  
--replace Off  
--result-file=file Off  
--routines Off SELECT ON mysql.proc
--set-charset On  
--set-gtid-purged=value Auto  
--single-transaction Off  
--skip-add-drop-table Off in --opt  
--skip-add-locks Off in --opt  
--skip-comments Off  
--skip-compact On  
--skip-disable-keys Off in --opt  
--skip-extended-insert Off in --opt  
--skip-opt Off  
--skip-quick Off in --opt  
--skip-quote-names Off  
--skip-set-charset Off  
--skip-triggers Off  
--skip-tz-utc Off  
--ssl-ca=file_name Off  
--ssl-capath=dir_name Off  
--ssl-cert=file_name Off  
--ssl-cipher=cipher_list Off  
--ssl-crl=file_name Off  
--ssl-crlpath=dir_name Off  
--ssl-key=file_name Off  
--ssl-verify-server-cert Off  
--tab=path Off  
--tables Off  
--triggers On SUPER ON *.* or TRIGGER
--tz-utc On  
--user=user_name system user on Linux, 'ODBC' on Windows  
--verbose Off  
--version Off  
--where='where_condition' Off  
--xml Off  

Practical P_S: Finding the KILLer


In a previous post, I described how to leverage PERFORMANCE_SCHEMA in MySQL 5.6 to identify connections which had not been properly closed by the client.  One possible cause of connections being closed without explicit request from the client is when another process issues a KILL CONNECTION command:

mysql> SHOW GLOBAL STATUS LIKE 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 0     |
+-----------------+-------+
1 row in set (0.02 sec)

mysql> KILL CONNECTION 3;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 1     |
+-----------------+-------+
1 row in set (0.00 sec)

You can somewhat determine how many KILL statements have been executed globally using GLOBAL STATUS:

mysql> SHOW GLOBAL STATUS LIKE '%kill%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 3     |
+---------------+-------+
1 row in set (0.00 sec)

I say “somewhat” because KILL commands which are parsed and processed are counted, including those where the KILL command fails because the target connection no longer exists, or the user lacks necessary privileges to KILL the connection:

mysql> SHOW GLOBAL STATUS LIKE '%kill%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 3     |
+---------------+-------+
1 row in set (0.00 sec)

mysql> KILL CONNECTION 1;
ERROR 1094 (HY000): Unknown thread id: 1
mysql> SHOW GLOBAL STATUS LIKE '%kill%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 4     |
+---------------+-------+
1 row in set (0.00 sec)

mysql> exit
Bye

D:\mysql-advanced-5.6.11-win32>bin\mysql -unothing -P3307
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> KILL CONNECTION 4;
ERROR 1095 (HY000): You are not owner of thread 4
mysql> SHOW GLOBAL STATUS LIKE '%kill%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 5     |
+---------------+-------+
1 row in set (0.00 sec)

Can we do better than this with PERFORMANCE_SCHEMA? You bet we can – we’ll start by getting a global count of KILL commands:

mysql> SELECT event_name, count_star
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+--------------------+------------+
| event_name         | count_star |
+--------------------+------------+
| statement/sql/kill |          5 |
| statement/com/Kill |          2 |
+--------------------+------------+
2 rows in set (0.00 sec)

mysql> KILL CONNECTION 9;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT event_name, count_star
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+--------------------+------------+
| event_name         | count_star |
+--------------------+------------+
| statement/sql/kill |          6 |
| statement/com/Kill |          2 |
+--------------------+------------+
2 rows in set (0.00 sec)

So here’s an interesting first observation regarding the PERFORMANCE_SCHEMA instrumentation – it differentiates between handling the KILL SQL syntax and the COM_PROCESS_KILL client/server protocol. Those commands sent as a KILL CONNECTION SQL statement are found in statement/sql/kill, while the protocol COM_PROCESS_KILL commands are found in statement/com/Kill. We’re probably less interested in the different mechanisms than in a total count, so I’ll rewrite the query like so:

mysql> SELECT SUM(count_star)
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+-----------------+
| SUM(count_star) |
+-----------------+
|               8 |
+-----------------+
1 row in set (0.05 sec)

Because PERFORMANCE_SCHEMA also provides a SUM_ERRORS column, we can factor out those KILL statements which failed:

mysql> SELECT SUM(count_star - sum_errors)
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+------------------------------+
| SUM(count_star - sum_errors) |
+------------------------------+
|                            5 |
+------------------------------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE '%kill%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_kill      | 9     |
+---------------+-------+
1 row in set (0.00 sec)

So we now have a more accurate global count of operations which killed a connection, compared to SHOW GLOBAL STATUS output. Being able to distinguish between successful and failed KILL attempts can be useful – you might want to know if an account is regularly attempting to KILL connections for which it lacks permission. PERFORMANCE_SCHEMA can help you there.
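
For example, a sketch of a query flagging accounts whose KILL attempts fail (the sum_errors column counts both nonexistent thread ids and privilege failures):

```sql
SELECT user, host, SUM(sum_errors) AS failed_kills
FROM performance_schema.events_statements_summary_by_account_by_event_name
WHERE event_name LIKE '%kill%'
GROUP BY user, host
HAVING failed_kills > 0;
```

An account showing a steadily growing failed_kills count may be worth investigating.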

Using PERFORMANCE_SCHEMA, you can also identify the user, account or host which issues the KILL operation. To get per-account information, we’ll use the events_statements_summary_by_account_by_event_name table:

mysql> SELECT user, host, event_name, count_star, sum_errors
    -> FROM events_statements_summary_by_account_by_event_name
    -> WHERE event_name LIKE '%kill%';
+---------+-----------+--------------------+------------+------------+
| user    | host      | event_name         | count_star | sum_errors |
+---------+-----------+--------------------+------------+------------+
| nothing | localhost | statement/sql/kill |          3 |          3 |
| nothing | localhost | statement/com/Kill |          0 |          0 |
| root    | localhost | statement/sql/kill |          4 |          1 |
| root    | localhost | statement/com/Kill |          3 |          0 |
| NULL    | NULL      | statement/sql/kill |          0 |          0 |
| NULL    | NULL      | statement/com/Kill |          0 |          0 |
+---------+-----------+--------------------+------------+------------+
6 rows in set (0.01 sec)

If we want to identify who’s doing the killing, we can do the following:

mysql> SELECT user, host, SUM(count_star - sum_errors) kills
    -> FROM events_statements_summary_by_account_by_event_name
    -> WHERE event_name LIKE '%kill%'
    -> GROUP BY user, host
    -> HAVING kills > 0;
+------+-----------+-------+
| user | host      | kills |
+------+-----------+-------+
| root | localhost |     6 |
+------+-----------+-------+
1 row in set (0.05 sec)

One thing to note: the KILL SQL statement can terminate individual queries as well as connections (the client/server protocol has no such option).  This means a KILL QUERY command increments the PERFORMANCE_SCHEMA and GLOBAL STATUS counters without terminating any connection.  Here’s an example:

mysql> SHOW GLOBAL STATUS LIKE 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 0     |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> SELECT event_name, count_star
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+--------------------+------------+
| event_name         | count_star |
+--------------------+------------+
| statement/sql/kill |          0 |
| statement/com/Kill |          0 |
+--------------------+------------+
2 rows in set (0.00 sec)

mysql> KILL QUERY 13;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 0     |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> SELECT event_name, count_star
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE '%kill%';
+--------------------+------------+
| event_name         | count_star |
+--------------------+------------+
| statement/sql/kill |          1 |
| statement/com/Kill |          0 |
+--------------------+------------+
2 rows in set (0.00 sec)

So PERFORMANCE_SCHEMA can provide visibility into who is doing the killing, but not (yet) who – or what – is being killed. In order to obtain that information, you’ll need to enable the MySQL General Query Log or MySQL Enterprise Audit.

Practical P_S: How old are your connections?


I’ve often wished that PROCESSLIST exposed when a connection was first established, and I find myself wishing for this information more now with MySQL 5.6.  Improvements to PERFORMANCE_SCHEMA make it trivial to see how much time is being spent in various operations for a given connection – but knowing when a connection started would make some analysis (“what percentage of connection time is spent doing X?”) easier.

That said, it is possible to approximate connection age with PERFORMANCE_SCHEMA in MySQL 5.6.  I say “approximate” because results will vary based on what instrumentation exists, is enabled, and is collecting timing data.  That’s because we’re just doing a SUM() on the SUM_TIMER_WAIT column for all instrumented waits.  Here’s an example (FYI, I’m using the format_time() function from Mark Leith’s awesome ps_helper scripts to convert from picoseconds to something meaningful to me):

mysql> SELECT ps_helper.format_time(SUM(sum_timer_wait)) total_time
    -> FROM events_waits_summary_by_thread_by_event_name ews
    -> JOIN threads t
    -> ON (t.thread_id = ews.thread_id)
    -> WHERE t.processlist_id = CONNECTION_ID()\G
*************************** 1. row ***************************
total_time: 07:27:38.2321
1 row in set (0.03 sec)
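
If the ps_helper schema isn’t installed, a rough substitute (losing some of format_time’s unit-scaling niceties) is to convert the picosecond sum to seconds and feed it to SEC_TO_TIME() – a sketch:

```sql
SELECT SEC_TO_TIME(SUM(ews.sum_timer_wait) / 1000000000000) AS total_time
FROM performance_schema.events_waits_summary_by_thread_by_event_name ews
JOIN performance_schema.threads t
  ON (t.thread_id = ews.thread_id)
WHERE t.processlist_id = CONNECTION_ID();
```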

For good measure, I pulled the timestamps for the above query and the initial connection from the MySQL Enterprise Audit Log plugin and compared them. They are pretty close, but not exact matches:

mysql> SELECT TIMEDIFF('2013-07-30 23:48:11','2013-07-30 16:20:30');
+-------------------------------------------------------+
| TIMEDIFF('2013-07-30 23:48:11','2013-07-30 16:20:30') |
+-------------------------------------------------------+
| 07:27:41                                              |
+-------------------------------------------------------+
1 row in set (0.00 sec)

So somewhere, the PERFORMANCE_SCHEMA instrumentation isn’t recording about 3s worth of time (over a span of 7 hours) into the events_waits_summary_by_thread_by_event_name table. That’s going to happen – there are some minor stages missing instrumentation, and not all instrumentation is enabled and timed by default. For example, the default configuration doesn’t capture information for the “stage/sql/User sleep” instrument. Once that’s enabled (and timed), that timing information gets recorded:

mysql> SELECT * FROM events_stages_summary_global_by_event_name
    -> WHERE event_name LIKE '%sleep%'\G
*************************** 1. row ***************************
    EVENT_NAME: stage/sql/User sleep
    COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
1 row in set (0.00 sec)

mysql> SELECT SLEEP(2);
+----------+
| SLEEP(2) |
+----------+
|        0 |
+----------+
1 row in set (2.00 sec)

mysql> SELECT * FROM events_stages_summary_global_by_event_name
    -> WHERE event_name LIKE '%sleep%'\G
*************************** 1. row ***************************
    EVENT_NAME: stage/sql/User sleep
    COUNT_STAR: 1
SUM_TIMER_WAIT: 2000057006405
MIN_TIMER_WAIT: 2000057006405
AVG_TIMER_WAIT: 2000057006405
MAX_TIMER_WAIT: 2000057006405
1 row in set (0.00 sec)

So, a first attempt at a query to find the age of all current connections would look like this:

mysql> SELECT
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host,
    ->   ps_helper.format_time(SUM(ews.sum_timer_wait)) total_time
    -> FROM events_waits_summary_by_thread_by_event_name ews
    ->   JOIN threads t
    ->     ON (t.thread_id = ews.thread_id)
    -> WHERE t.processlist_id IS NOT NULL
    -> GROUP BY
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host\G
*************************** 1. row ***************************
  processlist_id: 20
processlist_user: root
processlist_host: localhost
      total_time: 00:43:30.1790
*************************** 2. row ***************************
  processlist_id: 21
processlist_user: nothing
processlist_host: localhost
      total_time: 47.13 s
2 rows in set (0.02 sec)

But there’s a bit more to it than that – watch what happens with nothing@localhost below:

mysql> SELECT
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host,
    ->   ps_helper.format_time(SUM(ews.sum_timer_wait)) total_time
    -> FROM events_waits_summary_by_thread_by_event_name ews
    ->   JOIN threads t
    ->     ON (t.thread_id = ews.thread_id)
    -> WHERE t.processlist_id IS NOT NULL
    -> GROUP BY
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host\G
*************************** 1. row ***************************
  processlist_id: 20
processlist_user: root
processlist_host: localhost
      total_time: 01:02:00.4435
*************************** 2. row ***************************
  processlist_id: 22
processlist_user: nothing
processlist_host: localhost
      total_time: 1.60 ms
2 rows in set (0.03 sec)

mysql> SELECT SLEEP(10);
+-----------+
| SLEEP(10) |
+-----------+
|         0 |
+-----------+
1 row in set (10.00 sec)

mysql> SELECT
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host,
    ->   ps_helper.format_time(SUM(ews.sum_timer_wait)) total_time
    -> FROM events_waits_summary_by_thread_by_event_name ews
    ->   JOIN threads t
    ->     ON (t.thread_id = ews.thread_id)
    -> WHERE t.processlist_id IS NOT NULL
    -> GROUP BY
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host\G
*************************** 1. row ***************************
  processlist_id: 20
processlist_user: root
processlist_host: localhost
      total_time: 01:03:17.0619
*************************** 2. row ***************************
  processlist_id: 22
processlist_user: nothing
processlist_host: localhost
      total_time: 1.60 ms
2 rows in set (0.01 sec)

Notice that the total_time for nothing@localhost hasn’t increased. The summary tables get timing information once the wait event is completed, and that connection is still sitting idle. So how do we account for that? Well, we could theoretically add in values obtained from the events_waits_current table, but there are two problems there:

  1. This table isn’t enabled by default (you need to update setup_consumers to enable this).
  2. Events still waiting show with a TIMER_WAIT value of NULL.

The first is easy enough to enable, but the second is a major hurdle.  There’s a TIMER_START column with non-NULL values, but it’s expressed in picoseconds since server startup, and there’s no mechanism to get the current timer value to do your own calculations.  Until that’s addressed, we can’t leverage the events_waits_current table.  If we assume idle connections account for the big gaps, we can do something like this:

mysql> SELECT
    ->  processlist_id,
    ->  processlist_user,
    ->  processlist_host,
    ->  ps_helper.format_time(SUM(total_time)) total_time
    -> FROM
    ->  ( SELECT
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host,
    ->   SUM(ews.sum_timer_wait) total_time
    -> FROM events_waits_summary_by_thread_by_event_name ews
    ->   JOIN threads t
    ->     ON (t.thread_id = ews.thread_id)
    -> WHERE t.processlist_id IS NOT NULL
    -> GROUP BY
    ->   t.processlist_id,
    ->   t.processlist_user,
    ->   t.processlist_host
    ->  UNION SELECT
    ->   processlist_id,
    ->   processlist_user,
    ->   processlist_host,
    ->   processlist_time * 1000000000000 total_time
    -> FROM threads
    -> WHERE processlist_id IS NOT NULL
    ->   AND processlist_command = 'Sleep'
    -> ) thr
    ->  GROUP BY
    ->   processlist_id,
    ->   processlist_user,
    ->   processlist_host\G
*************************** 1. row ***************************
  processlist_id: 20
processlist_user: root
processlist_host: localhost
      total_time: 01:45:23.4352
*************************** 2. row ***************************
  processlist_id: 22
processlist_user: nothing
processlist_host: localhost
      total_time: 00:43:39.0016
*************************** 3. row ***************************
  processlist_id: 23
processlist_user: root
processlist_host: localhost
      total_time: 00:36:53.3564
3 rows in set (0.00 sec)

It’s not ideal, but it gets me in the neighborhood of what I’m looking for – an approximate age for connections.
 


Practical P_S: How idle are your connections?


Idle connections can cause problems both on the application side, increasing the risk of connection timeouts where persistent connections are used, and on the server side, where resources remain allocated to idle connections.  Any application with persistent connections, such as a JDBC application using a connection pool, will have periods where connections are idle – but it’s good to know how much time is spent idle.  Too much idle time might mean connection pools are configured to allow too many connections to sit idle, or that connection pool maintenance isn’t being done properly.

PERFORMANCE_SCHEMA in MySQL 5.6 makes it trivial to measure absolute time spent waiting.  This will show total, average and maximum idle times by account:

mysql> SELECT
    ->  user,
    ->  host,
    ->  ps_helper.format_time(sum_timer_wait) total_idle,
    ->  ps_helper.format_time(avg_timer_wait) average_idle,
    ->  ps_helper.format_time(max_timer_wait) max_idle
    -> FROM events_waits_summary_by_account_by_event_name
    -> WHERE event_name = 'idle'
    ->  AND host IS NOT NULL;
+---------+-----------+---------------+---------------+---------------+
| user    | host      | total_idle    | average_idle  | max_idle      |
+---------+-----------+---------------+---------------+---------------+
| nothing | localhost | 05:54:56.1089 | 00:25:21.1506 | 05:08:17.4926 |
| root    | localhost | 13:04:08.5643 | 00:03:30.9801 | 02:08:05.6043 |
+---------+-----------+---------------+---------------+---------------+
2 rows in set (0.50 sec)

That’s a good start – total idle time is worth looking at in any context, though larger values are expected for accounts supporting applications that use persistent connections. For accounts where JDBC connection pooling is in use, you would expect to see both average and maximum idle times below the threshold at which your connection pool maintenance thread checks connections. If you see average or maximum times exceeding those values, it could be a sign that your connection pool maintenance threads have too many idle connections to maintain (the pool is oversized).

It would also be a concern if average or max idle times approach or exceed wait_timeout (or interactive_timeout) – that suggests that application connections are not being maintained properly (or possibly leaked, such that persistent connections aren’t being reused).
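
A sketch of a query flagging accounts whose longest idle wait has reached the server timeout (the picosecond multiplier converts wait_timeout, which is in seconds, to the timer’s units):

```sql
SELECT user, host,
       max_timer_wait / 1000000000000 AS max_idle_seconds
FROM performance_schema.events_waits_summary_by_account_by_event_name
WHERE event_name = 'idle'
  AND max_timer_wait >= @@GLOBAL.wait_timeout * 1000000000000;
```

Any hits suggest connections are being left to time out rather than closed or reused.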

All of this is highly dependent upon proper configuration and behavior of application-side code, so it’s possible that you’ll want to look at this data not by account, but rather by host from which the connection originates. That makes it much easier to identify potentially problematic configuration on a specific application host, even if the account is shared across many such hosts. PERFORMANCE_SCHEMA also makes this easy:

mysql> SELECT
    ->  host,
    ->  ps_helper.format_time(sum_timer_wait) total_idle,
    ->  ps_helper.format_time(avg_timer_wait) average_idle,
    ->  ps_helper.format_time(max_timer_wait) max_idle
    -> FROM events_waits_summary_by_host_by_event_name
    -> WHERE event_name = 'idle'
    ->  AND host IS NOT NULL;
+-----------------------------+---------------+--------------+---------------+
| host                        | total_idle    | average_idle | max_idle      |
+-----------------------------+---------------+--------------+---------------+
| TFARMER-MYSQL.wh.oracle.com | 20.61 s       | 6.87 s       | 12.52 s       |
| localhost                   | 00:04:38.1177 | 27.81 s      | 00:02:05.8972 |
+-----------------------------+---------------+--------------+---------------+
2 rows in set (0.22 sec)

It’s also possible to get a very rough estimate of the percentage of time connections are spending idle. The accuracy of this is constrained by not having access to reliable information about the duration of a connection as well as the elapsed time for incomplete events, as noted in my previous blog post. But this gets us pretty close:

mysql> SELECT
    ->  user,
    ->  host,
    ->  100 * (SUM(IF(event_name = 'idle', sum_timer_wait, 0))
    ->    / SUM(sum_timer_wait)) pct_idle,
    ->  ps_helper.format_time(
    ->    SUM(IF(event_name = 'idle', sum_timer_wait, 0))
    ->  ) total_idle
    -> FROM events_waits_summary_by_account_by_event_name
    -> WHERE host IS NOT NULL
    -> GROUP BY user, host;
+---------+-----------------------------+----------+---------------+
| user    | host                        | pct_idle | total_idle    |
+---------+-----------------------------+----------+---------------+
| root    | localhost                   |  99.9919 | 00:18:55.3054 |
| test_hc | TFARMER-MYSQL.wh.oracle.com | 100.0000 | 20.61 s       |
+---------+-----------------------------+----------+---------------+
2 rows in set (0.02 sec)

Note that this only looks at completed events, so a connection which idles for hours immediately after connecting won’t be reflected in the above queries.
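If you export raw rows from events_waits_summary_by_account_by_event_name for offline analysis, the same percentage is straightforward to recompute. This sketch assumes a simple tuple layout for the exported rows – it is not an actual MySQL API:

```python
from collections import defaultdict

# Rows are (user, host, event_name, sum_timer_wait) tuples; the timer
# unit (picoseconds in PERFORMANCE_SCHEMA) cancels out of the ratio.
def pct_idle(rows):
    idle = defaultdict(int)
    total = defaultdict(int)
    for user, host, event_name, sum_timer_wait in rows:
        key = (user, host)
        total[key] += sum_timer_wait
        if event_name == 'idle':
            idle[key] += sum_timer_wait
    return {k: 100.0 * idle[k] / total[k] for k in total if total[k]}

rows = [
    ('root', 'localhost', 'idle', 990),
    ('root', 'localhost', 'wait/io/table/sql/handler', 10),
]
print(pct_idle(rows))  # {('root', 'localhost'): 99.0}
```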

Option prefixes deprecated


MySQL 5.6.13 was released earlier this week, and in that release (as well as 5.5.33) the ability to use unique option prefixes was deprecated.  This is fully removed from MySQL 5.7, and I thought it might be useful to amplify the change log notes on why this was done:

Previously, program options could be specified in full or as any unambiguous prefix. For example, the --compress option could be given to mysqldump as --compr, but not as --comp because the latter is ambiguous. Option prefixes now are deprecated. They can cause problems when new options are implemented for programs. A prefix that is currently unambiguous might become ambiguous in the future. If an unambiguous prefix is given, a warning now occurs to provide feedback. For example:

Warning: Using unique option prefix compr instead of compress is
deprecated and will be removed in a future release. Please use the
full name instead.

Option prefixes are no longer supported in MySQL 5.7; only full options are accepted. (Bug #16996656)

I’m not sure of the rationale for originally supporting unique option prefixes, but it was likely meant to let people use easy-to-remember (and easy-to-type) shorter versions of option names.  The problem is that an option prefix that works with a certain version today may not work tomorrow, as new options are added which conflict with that prefix.  That means that with any new release, programs (including mysqld) might fail to start because an option prefix which used to work suddenly becomes ambiguous with the introduction of a new option.  It’s not exactly unheard of to add new server options in maintenance releases, and plugins can add new options without even upgrading MySQL.  It seemed prudent to deprecate and remove this functionality rather than allow potential conflicts which would randomly affect users who rely on option prefixes.
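To make the ambiguity problem concrete, here’s a sketch of how unique-prefix resolution behaves. The --compression-level option below is invented purely for illustration; the point is that a new option can silently break a prefix that used to resolve:

```python
# Resolve a prefix against the set of known option names, the way
# unique-prefix matching works: one match wins, several are an error.
def resolve(prefix, options):
    matches = [o for o in options if o.startswith(prefix)]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError("unknown option: --" + prefix)
    raise ValueError("ambiguous option: --%s (%s)" % (prefix, ", ".join(matches)))

old_options = ["compress", "complete-insert"]
print(resolve("compr", old_options))  # compress

# A later release adds a (hypothetical) --compression-level option...
new_options = old_options + ["compression-level"]
try:
    resolve("compr", new_options)
except ValueError as err:
    print(err)  # ambiguous option: --compr (compress, compression-level)
```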

The most notable area where this change will affect people is interactive client use, where certain prefixes may be commonly used to save keystrokes.  For example:

[oracle@oraclelinux6 mysql-56]$ bin/mysql -uroot --sock=/path/to/mysql.sock
Warning: Using unique option prefix sock instead of socket is deprecated and
will be removed in a future release. Please use the full name instead.
Welcome to the MySQL monitor.  Commands end with ; or \g.

The decision to add deprecation warnings in 5.5 and 5.6 was made to give interactive users visibility into the upcoming removal of this support. For users who rely on option prefixes in scripts or batch files today, and who will be adversely affected by the addition of the deprecation warning, we’re hoping to catch all affected users at once rather than continue to let users be hit unpredictably as new options are added in future releases, breaking scripts on random upgrades.

Obviously, the best solution for batch files is to reference an options file with the full option name.  Note that the deprecation warning (and future support removal) applies to prefix usage in options files, as well as command-line options:

 

[oracle@oraclelinux6 mysql-56]$ cat test.cnf 
[client]
sock=/path/to/mysql.sock

[oracle@oraclelinux6 mysql-56]$ bin/mysql --defaults-file=test.cnf --user=root
Warning: Using unique option prefix sock instead of socket is deprecated and 
will be removed in a future release. Please use the full name instead.
Welcome to the MySQL monitor.  Commands end with ; or \g.

By the way, if you like to use “sock” as a prefix for the socket as shown in the above examples, you have an additional option: mysql_config_editor can store socket information:

[oracle@oraclelinux6 mysql-56]$ bin/mysql_config_editor set --login-path=sock \
  --socket=/usr/local/mysql-56/data/mysql.sock
[oracle@oraclelinux6 mysql-56]$ bin/mysql -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server 
through socket '/tmp/mysql.sock' (2)

[oracle@oraclelinux6 mysql-56]$ bin/mysql --login-path=sock -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.13-enterprise-commercial-advanced MySQL Enterprise 
Server - Advanced Edition (Commercial)

If you use unique option prefixes in options files or batch scripts, please take the time to fix this now, before support is completely removed in 5.7. Hopefully you can understand how the legacy behavior can lead to deployment-specific problems we can’t anticipate, and the reasoning behind the deprecation warnings. For those (hopefully very few) who are truly inconvenienced by client deprecation warnings, please consider keeping 5.6.12 or earlier clients (where the deprecation warning doesn’t exist) until you can use full option names or options files.

Practical P_S: Fixing gaps in GLOBAL STATUS


Over three years ago, I noticed that there was no STATUS counter for COM_PING commands – something that is useful for ensuring proper configuration of JDBC connection pools.  Mark Leith even provided a patch, but it’s never been incorporated.  With the advances PERFORMANCE_SCHEMA makes in MySQL 5.6, that’s OK – a STATUS counter becomes somewhat redundant:

mysql> SELECT SUM(count_star) as pings
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name = 'statement/com/Ping';
+-------+
| pings |
+-------+
|    12 |
+-------+
1 row in set (0.02 sec)


Not only does PERFORMANCE_SCHEMA provide capabilities which mirror the STATUS counters, it really goes well beyond what’s capable there. A global counter is interesting, but if you have many app servers, how do you know that each of them is properly configured? A global counter won’t help much – you’ll need statistics compiled by host or account. PERFORMANCE_SCHEMA can provide this easily:

mysql> SELECT host, SUM(count_star) as pings
    -> FROM events_statements_summary_by_host_by_event_name
    -> WHERE event_name = 'statement/com/Ping'
    -> GROUP BY host;
+-----------+-------+
| host      | pings |
+-----------+-------+
| NULL      |     0 |
| localhost |    12 |
+-----------+-------+
2 rows in set (0.00 sec)

With that information, it’s easy to track down which app servers might be hosting a Java application that isn’t configured to do lightweight COM_PING operations for connection pool maintenance.  Leveraging Connector/J’s support for connection attributes, you can even check the configuration of each independent connection pool:

mysql> SELECT
    ->   ca.attr_value,
    ->   SUM(count_star) as pings
    -> FROM
    ->   events_statements_summary_by_thread_by_event_name ess
    -> JOIN
    ->   threads t
    ->     ON (t.thread_id = ess.thread_id)
    -> JOIN
    ->   session_connect_attrs ca
    ->     ON (ca.processlist_id = t.processlist_id)
    -> WHERE ca.attr_name = 'pool'
    ->   AND ess.event_name = 'statement/com/Ping'
    -> GROUP BY ca.attr_value;
+------------+-------+
| attr_value | pings |
+------------+-------+
| first      |    68 |
| second     |     0 |
+------------+-------+
2 rows in set (0.09 sec)

Note that connection attributes are not persisted beyond current connections, so the tables used are a bit different than the earlier examples.  Just for completeness, here’s the Java code I used for testing:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public static void testAttributesPing() throws Exception {
	Class.forName("com.mysql.jdbc.Driver");
	Properties props = new Properties();
	props.setProperty("user", "root");
	props.setProperty("connectionAttributes", "pool:first");
	Connection conn1 = DriverManager.getConnection(
			"jdbc:mysql://localhost:3307/test", props);

	props.setProperty("connectionAttributes", "pool:second");
	Connection conn2 = DriverManager.getConnection(
			"jdbc:mysql://localhost:3307/test", props);

	while (true) {
		// Connector/J turns "/* ping */ SELECT 1" into a COM_PING,
		// so only the "first" pool accumulates Ping events.
		conn1.createStatement().execute("/* ping */ SELECT 1");
		conn2.createStatement().execute("SELECT 1");
		Thread.sleep(1000);
	}
}

Connection attributes are a convenient tool for filtering data in PERFORMANCE_SCHEMA – it’s easy to annotate each individual JDBC resource configuration so that performance and behavior can be monitored independent of other configurations, regardless of whether it’s deployed using the same MySQL account, on the same host, the same application server or JVM, or even the same application.
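Connector/J passes connectionAttributes as comma-separated key:value pairs ("pool:first" above). A monitoring script that labels PERFORMANCE_SCHEMA rows might parse them back out like this – a sketch, with the appserver attribute invented for illustration:

```python
# Parse a Connector/J-style connectionAttributes string into a dict.
def parse_attrs(attr_string):
    attrs = {}
    for pair in attr_string.split(","):
        if not pair:
            continue
        name, _, value = pair.partition(":")
        attrs[name] = value
    return attrs

print(parse_attrs("pool:first,appserver:web01"))
# {'pool': 'first', 'appserver': 'web01'}
```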

Spring Cleaning: Useless protocol commands


In an earlier post, I commented on clients and utility programs which seem to no longer be useful, and opened (or referenced existing) public bug reports to deprecate and remove them, where appropriate.  That effort was actually the product of a different initiative:  I was looking for clients which might exercise the full spectrum of MySQL protocol commands, in an effort to understand whether certain protocol commands are in use.  I thought I would share my observations, in the hope that we might also get feedback from others regarding usage of these commands. And even though it’s no longer spring (I started this post in April), I finally found time to finish it.

The first group to mention are those which are refused by the server.  Some are explicitly identified, others are handled as they fall through to the default case in sql_parse.cc – but all are ignored, and I’m assuming that nobody cares about these commands as a result:

COM_SLEEP (0x00)

As noted in the documentation, this is an internal server command only.

COM_TIME (0x0f)

Also an internal command ignored in the protocol handling.

COM_DELAYED_INSERT (0x10)

One more internal command.

COM_END (0x1f)

This one isn’t documented anywhere, but it’s explicitly handled in the code.  It appears to be a guard against receiving command values greater than the allowable range:

  command= (enum enum_server_command) (uchar) packet[0];

  if (command >= COM_END)
    command= COM_END;                // Wrong command

So nothing much to talk about here.

COM_CREATE_DB (0x05)

Now things start to get interesting.  I expected that mysqladmin create db would leverage COM_CREATE_DB, but I was wrong.  I started poking around to see whether I could find any other clients which might issue COM_CREATE_DB, gave up, and eventually wrote my own pseudo-client which would let me send COM_CREATE_DB protocol commands.  I should have looked at the server source code first; there’s no place where COM_CREATE_DB gets handled, so it falls through to the default handling:

default:
 my_message(ER_UNKNOWN_COM_ERROR, ER(ER_UNKNOWN_COM_ERROR), MYF(0));
 break;

It’s not currently marked as deprecated in the protocol docs (bug 69074), but it seems as though support for it was removed way, way back.  mysqladmin uses COM_QUERY instead.  This corresponds to the mysql_create_db() function in the C API, which has been deprecated since 4.0, with the recommendation to use mysql_query() instead.

COM_DROP_DB

Same as COM_CREATE_DB above.

Interestingly, all of the above still have counters exposed via PERFORMANCE_SCHEMA, and even though invoking these protocol commands results in an error, you can still see the relevant counters increase:

mysql> SELECT event_name, count_star, sum_errors
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE 'statement/com/Sleep';
+---------------------+------------+------------+
| event_name          | count_star | sum_errors |
+---------------------+------------+------------+
| statement/com/Sleep |          1 |          1 |
+---------------------+------------+------------+
1 row in set (0.00 sec)

I noticed this when I started wondering about the documentation claims that some of above represent internal server commands – I wondered whether PERFORMANCE_SCHEMA might indicate they are actually used. I couldn’t observe any counters incrementing from internal server usage, but with my custom client, I could trigger execution (and error) counters to increase. Here’s a list of all COM_* equivalent events in PERFORMANCE_SCHEMA:

mysql> SELECT event_name, count_star, sum_errors
    -> FROM events_statements_summary_global_by_event_name
    -> WHERE event_name LIKE 'statement/com/%';
+--------------------------------+------------+------------+
| event_name                     | count_star | sum_errors |
+--------------------------------+------------+------------+
| statement/com/Sleep            |          1 |          1 |
| statement/com/Quit             |          3 |          0 |
| statement/com/Init DB          |          0 |          0 |
| statement/com/Query            |          0 |          0 |
| statement/com/Field List       |          0 |          0 |
| statement/com/Create DB        |          1 |          1 |
| statement/com/Drop DB          |          1 |          1 |
| statement/com/Refresh          |          0 |          0 |
| statement/com/Shutdown         |          0 |          0 |
| statement/com/Statistics       |          0 |          0 |
| statement/com/Processlist      |          3 |          0 |
| statement/com/Connect          |          0 |          0 |
| statement/com/Kill             |          0 |          0 |
| statement/com/Debug            |          0 |          0 |
| statement/com/Ping             |       1609 |          0 |
| statement/com/Time             |          0 |          0 |
| statement/com/Delayed insert   |          0 |          0 |
| statement/com/Change user      |          0 |          0 |
| statement/com/Binlog Dump      |          0 |          0 |
| statement/com/Table Dump       |          0 |          0 |
| statement/com/Connect Out      |          0 |          0 |
| statement/com/Register Slave   |          0 |          0 |
| statement/com/Prepare          |          0 |          0 |
| statement/com/Execute          |          0 |          0 |
| statement/com/Long Data        |          0 |          0 |
| statement/com/Close stmt       |          0 |          0 |
| statement/com/Reset stmt       |          0 |          0 |
| statement/com/Set option       |          0 |          0 |
| statement/com/Fetch            |          0 |          0 |
| statement/com/Daemon           |          0 |          0 |
| statement/com/Binlog Dump GTID |          0 |          0 |
| statement/com/Error            |          0 |          0 |
| statement/com/                 |          0 |          0 |
+--------------------------------+------------+------------+
33 rows in set (0.00 sec)

What’s interesting to me about this list is that there are clearly events which correspond to COM_* protocol commands, but are not yet documented. I submitted Bug#69927 to improve the documentation here. It also isn’t clear to me what the “statement/com” event is; if it’s meant to be a rollup (which could be nice), it’s not performing as expected (Bug#69928).

Another interesting observation is that the documentation for COM_PROCESS_INFO indicates that it’s been deprecated since MySQL 4.1, but it’s clearly still functioning (statement/com/Processlist events increase and no errors).  I submitted Bug#69929 to remove support for COM_PROCESS_INFO in MySQL 5.6 (FYI, mysqladmin uses COM_QUERY to issue SHOW PROCESSLIST).

I’m also interested in usage of COM_DEBUG – does anybody use this?  I know that’s what mysqladmin debug sends, but in my six years as part of the MySQL Support Team, I don’t recall anybody ever having used it.  Would people care if it were deprecated?

MySQL 5.6 users – prevent host blocked errors


The much-improved PERFORMANCE_SCHEMA in MySQL 5.6 provides visibility into MySQL’s host cache, including the ability to monitor for impending blocked hosts.  You can do this with the following query:

mysql> SELECT
    ->  ip,
    ->  host,
    ->  host_validated,
    ->  sum_connect_errors
    -> FROM performance_schema.host_cache\G
*************************** 1. row ***************************
                ip: 192.168.2.4
              host: TFARMER-MYSQL.wh.oracle.com
    host_validated: YES
sum_connect_errors: 3
1 row in set (0.02 sec)

That’s helpful information, and allows DBAs to identify problematic hosts before they are blocked.  Due to Bug#69807, it’s also something MySQL 5.6 users will want to do.  This bug causes the counter maintained in the host cache for failed connections – against which max_connect_errors is compared – to never be reset by a valid connection.  The end result is that over time, hosts may reach the max_connect_errors threshold and be blocked.

This bug is a regression from earlier behavior, and is already fixed for MySQL 5.6.14.  The original developers of this feature (back before MySQL 4.1 days) never provided any meaningful tests around max_connect_errors functionality, so it wasn’t noticed when it changed.  Marc Alff exposed the host cache via PERFORMANCE_SCHEMA.HOST_CACHE, and in doing so, he added myriad test cases that really expose the inner workings of max_connect_errors in ways that were never previously tested – you can find them in source distributions under the mysql-test/suite/perfschema directory.  Unfortunately, the expected behavior that a successful connection would reset the counter was not part of any legacy test, and wasn’t incorporated into the many new tests written with 5.6.

For MySQL 5.6 users worried about whether they will be affected by this defect, I can offer three suggestions:

  1. Consider setting max_connect_errors to its maximum value; it probably doesn’t do what you expect it to do anyway.
  2. Consider whether you can eliminate DNS reverse lookups entirely with --skip-name-resolve – this not only eliminates the possibility of being blocked, but may also result in faster initial connections.
  3. Monitor your host cache and periodically flush it when counters start to approach max_connect_errors.  Here’s a query you might find useful to do this (note I set max_connect_errors to 4 for test purposes):
mysql> SELECT
    ->  ip,
    ->  host,
    ->  host_validated,
    ->  sum_connect_errors,
    ->  @@global.max_connect_errors - sum_connect_errors until_blocked,
    ->  (@@global.max_connect_errors - sum_connect_errors) *
    ->    (gs.variable_value / sum_connect_errors) est_seconds_until_blocked
    -> FROM performance_schema.host_cache hc
    -> JOIN information_schema.global_status gs
    ->  ON (gs.variable_name = 'UPTIME')
    ->  \G
*************************** 1. row ***************************
                       ip: 192.168.2.4
                     host: TFARMER-MYSQL.wh.oracle.com
           host_validated: YES
       sum_connect_errors: 3
            until_blocked: 1
est_seconds_until_blocked: 204085
1 row in set (0.00 sec)
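The projection in that query is plain arithmetic: assume errors keep arriving at their historical rate (sum_connect_errors per second of uptime) and scale the remaining error budget accordingly. As a sketch, with the uptime value chosen to reproduce the sample row:

```python
# Estimate seconds until a host reaches max_connect_errors, assuming
# connection errors continue at their historical rate.
def est_seconds_until_blocked(max_connect_errors, sum_connect_errors, uptime):
    if sum_connect_errors == 0:
        return None  # no errors yet, so no rate to project from
    until_blocked = max_connect_errors - sum_connect_errors
    return until_blocked * (uptime / sum_connect_errors)

# Sample row: max_connect_errors=4, 3 errors so far, ~7 days of uptime.
print(est_seconds_until_blocked(4, 3, 612255))  # 204085.0
```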