Channel: MySQL Support Blogs

Understanding max_connect_errors


To only slightly misquote one of the greatest movies of all time:

You keep using that option.  I do not think it means what you think it means.

 

Perhaps like many users, I had certain assumptions about what max_connect_errors really does – but in looking closely as part of investigating the new PERFORMANCE_SCHEMA.HOST_CACHE table in MySQL 5.6, I learned that some very fundamental elements had escaped my notice.  I’m writing this blog post to help others who hold similar misconceptions of what this option does.

Many, if not most, MySQL DBAs are familiar with “host blocked” errors:

C:\mysql-5.5.27-winx64>bin\mysql -utest_mce -P3307 -h192.168.2.8
ERROR 1129 (HY000): Host 'Crowder' is blocked because of many connection errors;
 unblock with 'mysqladmin flush-hosts'

The solution to this problem is readily apparent from the error message – some DBAs might not even bother to glance at the documentation regarding this.  Even those who do might miss the nuanced explanation of the root cause:

The value of the max_connect_errors system variable determines how many successive interrupted connection requests are permitted.

The use of “interrupted” is surely intentional here, and it’s key to understanding the first point I’ll make:

1. It provides no meaningful protection against brute force access attacks

Truly.  You can set max_connect_errors to any value you please, and it will have exactly zero impact on somebody trying to brute force their way into your system by guessing user names and passwords.  It will lock out a host if somebody does a dumb port scan 100 times successively without trying to log in, but who scans a port 100 times?  The useful information from a port scan is divulged in the initial scan:

  1. MySQL is running on the specified port.
  2. The version of MySQL is included in the handshake.
  3. There are (or aren’t) accounts configured to allow access from the client machine, based on error code.
  4. The default authentication mechanism preferred by the server.

What’s the use of scanning it an additional 99 times when you already have all the information you are going to get?

2. Authentication failures reset the counter

Strange, but true.  Not only do authentication failures not increment the host counter, they actually reset it to zero – as do all errors other than handshake interruptions.  The only thing that matters is whether the handshake was interrupted or not.  If it wasn’t interrupted, it counts as a “success” and resets the host counter – regardless of whether the end result was a successful connection or not.  So, if you want to run a dumb port scanner more than 100 times, just make sure you intersperse an actual connection attempt every 99 cycles or so to reset the counter.  Here’s my testing of MySQL 5.5 behavior:

mysql> select @@global.max_connect_errors;
+-----------------------------+
| @@global.max_connect_errors |
+-----------------------------+
|                           1 |
+-----------------------------+
1 row in set (0.00 sec)

mysql> exit
Bye

D:\mysql-5.5.28-win32>bin\mysql -uhct -P3308 -h10.159.156.50 -ptest
ERROR 1129 (HY000): Host 'TFARMER-MYSQL.wh.oracle.com' is blocked 
because of many connection errors; unblock with 
'mysqladmin flush-hosts'

D:\mysql-5.5.28-win32>bin\mysqladmin -uroot -P3308 flush-hosts

D:\mysql-5.5.28-win32>start telnet 10.159.156.50 3308

D:\mysql-5.5.28-win32>bin\mysql -uhct -P3308 -h10.159.156.50 -ptest-bad
ERROR 1045 (28000): Access denied for user 
'hct'@'TFARMER-MYSQL.wh.oracle.com' (using password: YES)

D:\mysql-5.5.28-win32>start telnet 10.159.156.50 3308

D:\mysql-5.5.28-win32>bin\mysql -uhct -P3308 -h10.159.156.50 -ptest
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> exit 
Bye

D:\mysql-5.5.28-win32>bin\mysqladmin -uroot -P3308 flush-hosts

D:\mysql-5.5.28-win32>start telnet 10.159.156.50 3308

D:\mysql-5.5.28-win32>start telnet 10.159.156.50 3308

D:\mysql-5.5.28-win32>bin\mysql -uhct -P3308 -h10.159.156.50 -ptest
ERROR 1129 (HY000): Host 'TFARMER-MYSQL.wh.oracle.com' is blocked 
because of many connection errors; unblock with 'mysqladmin flush-hosts'

 

3. All bets are off if you use --skip-name-resolve

Because this is all managed in the host cache, if you turn off reverse DNS lookups using --skip-name-resolve – and many people do, to avoid potential DNS overhead when creating new connections – max_connect_errors has zero effect.
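For reference, this is the my.cnf fragment in question; with it in place the host cache is bypassed entirely:

```ini
[mysqld]
# Disable reverse DNS lookups; account host values must then be IP addresses.
# The host cache -- and with it max_connect_errors -- is no longer consulted.
skip-name-resolve
```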

4.  Localhost and IP loopbacks are excluded

For the same reason as #3, you’ll never see host blocked errors when connecting to localhost or via the IP loopback interface.  These connections don’t go through the DNS reverse lookup and thus the host cache, and are therefore not tracked at all.  Whether that’s good (nobody can lock up local access) or not, I’ll let you decide.

5. The host cache is a fixed size

Marc Alff pointed out to me that the fixed size of the host cache – along with the LRU purge algorithm used – makes it quite possible that blocked hosts can fall out of the cache and cease to be blocked.  That has pretty obvious implications for how it can be bypassed by any third party needing to do so.

Conclusion

If you are looking for a mechanism to limit exposure to brute-force attempts to access MySQL, max_connect_errors won’t help you.  If you’re worried about a SYN flood attack, max_connect_errors might help you in very specific situations.  PERFORMANCE_SCHEMA improvements in MySQL 5.6 expose meaningful information about potential brute-force attacks, but again – only in situations where the host cache is involved.  Beyond that, the contents of MySQL Enterprise Audit log or general query log can be mined to identify such attacks.  I filed several feature requests to give even more visibility through PERFORMANCE_SCHEMA and to provide a mechanism to restrict access from hosts based on number of failed authorization attempts.

 


Speaking at MySQL Connect 2013


It is hard to believe it is already closing in on a year since the last MySQL Connect, but it is true: it is time to start preparing again.

This year MySQL Connect takes place over the weekend of September 21-23, with the Monday dedicated to tutorials. As last year, MySQL Connect is part of Oracle OpenWorld and is hosted at the Hilton San Francisco Union Square.

I am fortunate enough this year to be taking part in three sessions:

  • Meet the MySQL Community and Support Teams [BOF2480]
    Join MySQL community and support team members in this BOF to ask all your questions as well as provide feedback, suggestions, and ideas related to the MySQL community, handling of bugs, and overall technical support at Oracle.

    This Birds-Of-a-Feather session will take place Saturday at 4:30 PM.

  • Making the Performance Schema Easier to Use [CON5282]
    The Performance Schema can seem daunting at first, with the vast amount of data available. This session focuses on tools, such as ps_helper views and stored programs, that make it easier to get started with the Performance Schema and perform common tasks. The presentation includes examples of how ps_helper can be used to simplify checking the current configuration, changing the configuration, and investigating the usage of the server.

    This talk will take place Sunday at 11:00 AM.

  • Improving Performance with MySQL Performance Schema [HOL9733]
    The Performance Schema feature of MySQL is MySQL’s gateway for looking into the engine room. It enables you not only to discover what is going on in the internals but also to get detailed information about the current connections and some historical data. MySQL 5.6, which is now GA, introduces significant enhancements to Performance Schema. This hands-on lab gives you the opportunity to use Performance Schema, going through the steps from initial configuration and high-level summaries to detailed wait events.

    This Hands-On-Labs session will take place Sunday at 2:00 PM.

However, those three sessions are just a small part of the agenda for MySQL Connect. There are a total of more than 80 sessions from Oracle developers, engineers, and staff as well as users, third-party developers, and more. See also Bertrand Matthelié’s Top 10 Reasons to Attend MySQL Connect.

See you all there.


MEM in WB


In the new Workbench, click on the "Planet MySQL" shortcut, then search for my feed and find this post.


Speaking at MySQL Connect


The MySQL Connect content catalog is published, and I’ll be leading a hands-on lab on MySQL Enterprise Features in Practice [HOL9787].  If you have wondered how to get the most out of the features of MySQL Enterprise subscriptions – whether you are an existing Enterprise customer or not – this lab is for you.  We’ll help you understand the benefits of the various components of the MySQL Enterprise subscription as you install, configure, demonstrate and use the features.  You’ll learn best practices and helpful tips, and work through sample customization exercises illustrating how tools such as MySQL Enterprise Monitor, MySQL Enterprise Backup and the Security, Audit and Scalability components of MySQL Server can be applied to your MySQL use cases.  I’ll be joined by Engineering staff responsible for several of these key products/features, so it’s a great opportunity to learn more about features that can make your life easier directly from the experts!

It’s also very likely I will be found at the Application Development with MySQL, Java, PHP, and Python BOF [BOF4743] if you want to talk Java with me.

Remove on sight: thread_concurrency, innodb_additional_mem_pool_size, innodb_use_sys_malloc


If you have thread_concurrency, innodb_additional_mem_pool_size or innodb_use_sys_malloc in one of your my.cnf or my.ini files please remove it at your next opportunity unless one of the unlikely exceptions applies.

thread_concurrency is a Solaris-only setting that has no effect in recent Solaris versions such as 11. On Solaris 8 and earlier it gave the thread system a hint about how many threads to use for MySQL. Solaris 8 was last a supported platform in MySQL 5.1. The setting is deprecated as of 5.6.1 and removed in 5.7.

innodb_additional_mem_pool_size specifies, on any operating system, the initial size of a buffer used for InnoDB data dictionary information when the old InnoDB internal memory allocator is in use. InnoDB automatically increases the size on demand. Until late versions of MySQL 4.0, InnoDB would crash when trying to increase the size, so you needed this option. The setting is deprecated as of 5.6.3 and will be removed later.

The InnoDB internal memory allocator was only needed because, when the InnoDB project was started, the allocators in many operating systems did a bad job when many memory allocations happened concurrently. That situation persisted until sometime around the releases of MySQL 4.1 through 5.1, with operating system allocators gradually getting better. The InnoDB internal allocator is no longer needed and is no longer used by default in the InnoDB plugin in 5.1, or in 5.5 or 5.6, where innodb_use_sys_malloc defaults to 1. It is deprecated as of 5.6.3 and will be removed in a future release because there is now no reason to turn it off. If you use 5.1 with the InnoDB plugin, 5.5 or 5.6, it's best to remove this setting now and just use the default.

Why remove them? If you remove them now you won't be inconvenienced when they go away, and it's better to have only settings that matter in your configuration files, not ones that don't.
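As a sketch, these are the sorts of lines to delete from my.cnf or my.ini (the values shown are illustrative):

```ini
[mysqld]
# Remove on sight:
# thread_concurrency = 8                  # Solaris 8 and earlier only
# innodb_additional_mem_pool_size = 8M    # only matters with the old allocator
# innodb_use_sys_malloc = 1               # already the default
```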

Implementing a host blacklist with MySQL privileges


When I saw Shlomi’s recent post which asked (in part) for blacklist support in MySQL, I started thinking about ways in which this could be done using the tools we have today.  Here’s the example requirements Shlomi noted:

Speaking of whitelist, it would be great to have a host blacklist. If I wanted to grant access to ‘gromit’@’192.168.%’ except for ’192.168.10.%’ — well, I would have to whitelist all the possible subnets. I can’t exclude a set of hosts.

I think that’s entirely possible without the overhead of whitelisting all possible subnets – let’s give it a go!

This solution will rely on the fact that the first step of authentication in MySQL is finding the most applicable host for the incoming connection.  That’s caused all sorts of annoyances in the past with the anonymous user, where some unfortunate MySQL user creates a named account with a wildcard host, like 'somebody'@'%', and then proceeds to test locally, getting access denied because they are using a password that doesn’t match the ''@'localhost' account that MySQL chose to use instead.  We can leverage that behavior to implement a blacklist.  First, we create the most generic user account:

CREATE USER 'gromit'@'192.168.%' IDENTIFIED BY 'password';

Now we can create a second user with 192.168.10.% as the host, and we’ll make sure they can’t log in.  You can use something like my system_user_auth plugin, here, if you like, but there are other ways to make logins impossible:

CREATE USER 'gromit'@'192.168.10.%' IDENTIFIED WITH system_user_auth;

Now you can log in from any host on the 192.168.% subnet, except those hosts on 192.168.10.%.  I admit it’s not the prettiest solution in the world, but it works with the MySQL tools we have today.
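The complete sketch, restated (system_user_auth is the authentication plugin from the post; any plugin that always refuses logins works the same way):

```sql
-- Broad whitelist account:
CREATE USER 'gromit'@'192.168.%' IDENTIFIED BY 'password';

-- Blacklisted subnet: a more specific host pattern, so MySQL matches it
-- first, and its auth plugin makes login impossible:
CREATE USER 'gromit'@'192.168.10.%' IDENTIFIED WITH system_user_auth;

-- From an allowed host (say 192.168.20.5), CURRENT_USER() reports
-- 'gromit'@'192.168.%'; from any 192.168.10.x host the login is refused.
```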

protocol speed comparison on windows


Comparison of Protocols


A while ago I wrote a small random function tester to fuzz test native functions such as LineString, Polygon, AsText, etc.  The queries it sends are generally small (100 bytes or less), and the workload is totally CPU-bound, since no data/tables are accessed.

As this was pretty much an open-ended test,  simply pumping random data into the functions, I had planned to let it run for a few days and see if any problems arose.

I benchmarked all the ways to connect on Windows: TCP/IP, named pipes, shared memory, and the embedded server.

A 5.5.29 client and server are used in all tests.  Roughly 345 million queries are sent via 8 threads.  Below is a graph showing the QPS of each protocol for the run:

Averages:



The QPS taken every minute is here.
As we see, libmysqld can do nearly 8x the throughput of TCP/IP in this test.  This matters when you're running hundreds of billions of small, fast queries.

Conclusion 

Embedded speed is clearly superior.  For this reason, I always try writing QA/testing code in C/C++ if I think it might need to run billions of times.  But you lose the ability to monitor the embedded server's status from a mysql client, which is annoying.  However, distributing an embedded server application is far easier, as it's self-contained. :)

Shared memory doesn't enforce wait_timeout, so you may want an idle-connection-killer script looping in the background.  Also, the shared memory connection isn't very stable at high concurrency.  That stability issue is already being dealt with, so that's a plus.

Happy testing!

My eighteen MySQL 5.6 favorite troubleshooting improvements


MySQL 5.6 is at RC status now, which means it is going to be GA sooner or later.

This release contains a lot of improvements. However, since I am a support engineer, I am most impressed by those which make troubleshooting easier.

So here is the list of my favorite troubleshooting improvements.

1. EXPLAIN for UPDATE/INSERT/DELETE.


This is an extremely useful feature.

Prior to version 5.6 we could, in theory, get some kind of explain for these statements too by converting the DML queries to their SELECT equivalents, but the optimizer can optimize the two forms differently.

We could also execute the DELETE or UPDATE and then query the Handler_% status variables, but who wants to execute an update just for testing on a live database? And anyway, by querying the Handler_% variables we could only learn whether some index was used, not which one.
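A sketch of the new syntax (t1 is a hypothetical table with an index on col1):

```sql
EXPLAIN UPDATE t1 SET col2 = col2 + 1 WHERE col1 = 10;
-- The key column of the output names the index the UPDATE will use,
-- without executing the statement against live data.
```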

2. The INFORMATION_SCHEMA.OPTIMIZER_TRACE table.

This table contains traces of the last few queries; how many is specified by the configuration option optimizer_trace_limit.

Actually we could get similar information prior to version 5.6: just use a debug build of the server and start it with the --debug option. In this case the MySQL server creates a trace file where it writes information about the functions and methods executed during the server run. The problem with such files is that they are extremely verbose and large; it is hard to find the necessary information in them if you are not a MySQL developer. As a support engineer I use them when I need to test a bug, but like this: I create a very short script which executes as few queries as possible, start the MySQL server with the debug option, run the test, then stop debugging. Even with such short single-threaded tests the resulting file is thousands of rows. It is almost impossible to use such files in production.

With the optimizer trace we can get similar information just for the optimizer, but in a much more compact form.

I want a similar feature for all parts of the server!
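A minimal session sketch (t1 is a hypothetical table) showing how the trace is switched on and read back:

```sql
SET optimizer_trace='enabled=on';
SELECT * FROM t1 WHERE col1 = 10;
-- The trace for the statement above is now in the OPTIMIZER_TRACE table:
SELECT query, trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';
```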

3. EXPLAIN output in JSON format.

This is actually just syntactic sugar around the normal EXPLAIN output. It prints exactly the same information as the normal tabular EXPLAIN, but can be used for automation.
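For instance, the same hypothetical query in both forms:

```sql
EXPLAIN SELECT * FROM t1 WHERE col1 = 10;
EXPLAIN FORMAT=JSON SELECT * FROM t1 WHERE col1 = 10;
-- The JSON form is easier to consume from scripts and monitoring tools.
```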

4. New InnoDB tables in Information Schema.

The INNODB_METRICS table contains a lot of information about InnoDB performance, from InnoDB buffer usage up to the number of transactions and records.

The INNODB_SYS_* tables contain information about the InnoDB internal data dictionary and its objects: tables, fields, keys, etc.

The INNODB_SYS_TABLESTATS table contains InnoDB performance statistics.

The INNODB_BUFFER_POOL_* tables store information about InnoDB Buffer Pool usage.

5. Option to log all InnoDB deadlocks into error log file: innodb_print_all_deadlocks.

Currently we can use the InnoDB Monitor output for the same purpose, but it prints only the latest deadlock, not every one since server startup. This feature should be very handy for troubleshooting issues that are repeatable only in production.
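A sketch of the relevant my.cnf fragment:

```ini
[mysqld]
# Write every InnoDB deadlock to the error log, not just the latest one
innodb_print_all_deadlocks = ON
```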

6. Persistent statistics for InnoDB tables.

Originally this was OFF by default and controlled by the option innodb_analyze_is_persistent. The option is now named innodb_stats_persistent and is ON by default.

But this is a performance improvement; how does it relate to troubleshooting?

In version 5.5, by default, InnoDB renews table statistics when you run ANALYZE TABLE and after each MySQL server restart; the optimizer uses these statistics to generate better execution plans. Unfortunately such frequent statistics renewal does not suit all data and tables. Sometimes fresh statistics can lead to the choice of a worse plan. Now, if innodb_stats_persistent is ON, statistics are renewed only by ANALYZE TABLE and survive a server restart.

In the past we sometimes got complaints from people who initially had a perfectly fast-running query whose performance decreased after inserting a few million rows. As you can imagine, rebuilding the table does not help in this case. For such cases we offered FORCE/IGNORE INDEX as a workaround, which usually meant customers had to rewrite dozens of queries.

Now we can recommend loading the data for which the optimizer chooses the best plan, running ANALYZE TABLE, and never updating the statistics again.

7. InnoDB read-only transactions.

Every query on an InnoDB table is part of a transaction, whether it SELECTs from or modifies the table. At the REPEATABLE-READ and SERIALIZABLE transaction isolation levels, InnoDB creates a snapshot of the data as it was first read. This is necessary for multiversion concurrency control. At the same time, such snapshots slow down transactions that are used only for reads. For example, many users complained about a catastrophic performance decrease after InnoDB reached 256 connections. While this slowdown is a necessary price for stability in transactions that modify data, it is not fair for read-only ones.

To solve this issue, version 5.6 introduces a new access mode for transactions: READ ONLY or READ WRITE, the default being READ WRITE. We can set it with START TRANSACTION READ ONLY, with SET TRANSACTION READ ONLY, or through variables. InnoDB ran tests which showed that using READ ONLY transactions completely solves the 256-threads issue.

If InnoDB is running in autocommit mode it considers a data snapshot unnecessary for SELECT queries and does not create one, which means SELECT performance in autocommit mode is the same as with READ ONLY transactions.
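A minimal sketch of the syntax (t1 is a hypothetical table):

```sql
START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM t1;
COMMIT;

-- Or set the mode for the following transactions in the session:
SET SESSION TRANSACTION READ ONLY;
```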


The InnoDB team published nice graphs with benchmark results for this new feature in their blog. The blog post is available here.

8. Support for saving and restoring the InnoDB Buffer Pool across restarts and on demand.

This feature can noticeably speed up the first few hours of running for installations that use a large InnoDB Buffer Pool. Previously, when an application with a large buffer pool started, performance could be slower than it should be until queries had filled the buffer. Now we don't have to wait for the application to execute all its queries and fill up the pool: just dump the pool at shutdown, then restore it at startup, or at any time while the MySQL server is running.
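A sketch of the 5.6 options involved:

```ini
[mysqld]
# Dump the buffer pool contents at shutdown and reload them at startup
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
```

On a running server, SET GLOBAL innodb_buffer_pool_dump_now = ON takes a dump immediately.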

9. Multithreaded slave.

Prior to version 5.6 a slave could be catching up with the master very slowly, not because it ran on slow hardware but because of its single-threaded nature. Now it can use up to 1024 parallel threads and catch up with the master much more easily.

10. Possibility to apply events from the binary log with an N-second delay.

This feature is the opposite of the previous one. Why then did I put it in my list of favorite troubleshooting improvements? Because in addition to its main purposes, such as the possibility to roll back DROP, DELETE (and other) operations, we can also use this technique when the master sends updates that slow down parallel reads executed on the slave. For example, we can delay an expensive ALTER TABLE until the peak load on the web site has passed. Because this option is set per slave, we can spread such expensive queries among the slaves, so the application can use another slave for reads while one is busy.
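Configuring the delay on the slave is a single statement; here is a sketch delaying all events by one hour:

```sql
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- seconds
START SLAVE SQL_THREAD;
```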

11. Partial images in RBR.


This feature can increase replication performance dramatically.

Today one of the most popular row-based replication performance issues is the long transfer time of large events. For example, if you have a row with a LONGBLOB column, any modification of that row, even of an integer field, requires sending the whole row. In other words, to modify 4 bytes the master may send the slave up to 4GB. Since version 5.6 it is possible to configure how rows are written to the binary log in row-based replication.

binlog_row_image=full mimics the 5.5 and earlier behavior: the full row is stored in the binary log file.

binlog_row_image=minimal stores only the modified data.

binlog_row_image=noblob stores the full row for all column types except blobs.
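In my.cnf this would look like (a sketch):

```ini
[mysqld]
binlog-format    = ROW
# Log only the changed columns plus what is needed to identify the row
binlog_row_image = minimal
```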

12. GET DIAGNOSTICS queries and access to Diagnostic Area.

The Diagnostics Area is a structure that is filled after each query's execution. It contains two kinds of information: query results, such as the number of affected rows, and information about warnings and notes.

Now we can access the Diagnostics Area through GET DIAGNOSTICS statements.
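A short sketch of the syntax, usable directly from the client in 5.6:

```sql
GET DIAGNOSTICS @rows = ROW_COUNT, @conditions = NUMBER;
GET DIAGNOSTICS CONDITION 1 @msg = MESSAGE_TEXT, @errno = MYSQL_ERRNO;
SELECT @rows, @conditions, @msg, @errno;
```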

13. HANDLER processing now follows the SQL standard more closely.

I wrote about this twice already: here and here.

So I will not repeat my second post; I'll just mention again that I am finally happy with how HANDLERs work now.

PERFORMANCE_SCHEMA improvements


These are huge improvements!

In version 5.5, when the performance schema was introduced, I'd say it was mostly designed for MySQL developers themselves, or at least for people who know the MySQL source code very well. Now the performance schema is for everybody.

14. New instruments to watch IO operations.

We can even watch IO operations for a specific table. There are also new tables aggregating these operations.

15. New events_statements_* tables instrument statements.

Now we can see what happens during query execution: the SQL text of the query, the event name corresponding to a particular piece of code, and statistics of the query's execution.

16. Instruments for operation stages: events_stages_* tables.


The EVENT_NAME field of these tables contains information similar to the content of SHOW PROCESSLIST, but the SOURCE field gives the exact line of source code where the operation executed. This is very useful when you want to know what is going on.


I strongly recommend you study the events_statements_* and events_stages_* tables.

17. New digests: statements aggregated by account, user, host and so on.

These are very useful for working with historical data.

Of course, the history tables we have known since version 5.5 exist for both events_stages and events_statements.

We can also filter the information by user, session and table.

18. HOST_CACHE table.

This table contains information about the client host names and IP addresses stored in the host cache, so that the MySQL server does not have to query the DNS server a second time.

I am not sure why it is in Performance Schema and not in Information Schema, but maybe that is just me.

I expect to make use of it when diagnosing connection failures.
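For example, a quick look at which hosts have accumulated connection errors (column names as in the 5.6 table):

```sql
SELECT ip, host, sum_connect_errors, count_authentication_errors
FROM performance_schema.host_cache
ORDER BY sum_connect_errors DESC;
```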


My way to verify MySQL bug reports


I promised to write this blog post a long time ago, at one of the conferences in Russia. I don't know why I delayed it, but I finally did.

We, the members of the MySQL bugs verification group, have to verify bugs in all currently supported versions. We test not only the version reported, but also the development source tree of each supported major version, to identify recent regressions.

You can imagine that even for a simple bug report about wrong results with a perfect test case, which requires me simply to run a few queries, I would have to start 4 or more MySQL servers: one for each of the currently supported versions 5.0, 5.1 and 5.5, plus one for current development. And an unknown number of additional servers if I could not repeat the bug, or if I wanted to check whether it is a regression.

Even with these basic 4 servers running, I would still have to type all the queries at least 4 times. How much time would it take to verify a single bug report this way?

I know some members of my group prefer this way, because typing queries manually is the same thing our customers do. And indeed, some bugs are repeatable only if you type the queries manually.

But I prefer to reserve manual testing for exceptional cases and not make it my routine job.


So how do I test bug reports?


Every version of the MySQL server comes with a regression test suite: a program called mtr (mysql-test-run.pl), its libraries, the mysqltest program (which you should not call directly, though) and a set of tests. The good thing about the MySQL test suite is that you can create your own test cases. So I do.

I write my tests in MTR format, then run MTR with the --record option and examine the result. Actually this is kind of a hack, because users are expected to create the result file first, then compare the output of the running test with that result file. But my purpose is to repeat the bug report, not to create a proper test case for it, so I can be lazy.

But even running MTR manually takes time, and I found a way to automate this process as well.

I created a Bash script called do_test.sh, which walks through all my MySQL trees, runs the tests for me automatically, then prints the results.

Let me explain it a little bit.


$ cat ~/scripts/do_test.sh
#!/bin/bash
# runs MySQL tests in all source directories

# prints usage information
usage ()
{
    echo "$VERSION"
    echo "
do_test copies MySQL test files from any place
to each source directory, then runs them

Usage: `basename $0` [option]... [testfile ...]
    or `basename $0` [option]... -d dirname [test ...]
    or `basename $0` [option]... [-b build [build option]... ]...

Options:
    -d --testdir    directory, contains test files

I have a directory where I store test files. It has a subdirectory t, where the tests to be run are stored; a subdirectory r, where results, sorted by MySQL server version number, are stored; and a directory named archive, where tests are kept for archival purposes.

    -s --srcdir     directory, contains sources directories

This is the path to the directory where the MySQL packages are located. I called it srcdir, but this is not strict: the program works with binary packages as well.

    -b --build      mysql source directory

The name of a MySQL source directory. You can specify any package name. For example, to run tests in the 5.6.9 package in my current directory I call the program as do_test -s . -b mysql-5.6.9-rc-linux-glibc2.5-x86_64

    -c --clean      remove tests from src directory after execution
    -t --suite      suite where to put test

MTR supports test suites with their own rules for how to run test cases. If you want to run your tests in a specific suite, specify this option. You can also have a directory for your own suite, but in that case you first need to create the directories your_suite, your_suite/t and your_suite/r in the mysql-test/suite directory of your MySQL installation.

As I said, I am lazy, so I mostly run tests in the main test suite. This may not be a good idea if you use your MySQL installation not only for testing its own bugs, but for other tests as well.

The rest of the code speaks for itself, so I will not explain it. To run the program, simply call do_test.sh and pass the paths to your tests, source directory and MySQL installation.

    -v --version    print version number, then exit
    -h --help       print this help, then exit

You can also pass any option on to the mysqltest program.

    "
}

# error exit
error()
{
    printf "$@" >&2
    exit $E_CDERROR
}

# creates defaults values
initialize()
{

This is probably not very obvious: these are my default paths and, most importantly, the default set of servers I test.

    TESTDIR=/home/sveta/src/tests
    SRCDIR=/home/sveta/src
    BUILDS="mysql-5.0 mysql-5.1 mysql-5.5 mysql-trunk"
    CLEAN=0 #false
    MYSQLTEST_OPTIONS="--record --force"
    TESTS_TO_PASS=""
    TESTS=""
    SUITE=""
    SUITEDIR=""
    OLD_PWD=`pwd`
    VERSION="do_test v0.2 (May 28 2010)"
}

# parses arguments/sets values to defaults
parse()
{
    TEMP_BUILDS=""

    while getopts "cvhd:s:b:t:" Option
    do
        case $Option in
            c) CLEAN=1;;
            v) echo "$VERSION";;
            h) usage; exit 0;;
            d) TESTDIR="$OPTARG";;
            s) SRCDIR="$OPTARG";;
            b) TEMP_BUILDS="$TEMP_BUILDS $OPTARG";;
            t) SUITE="$OPTARG"; SUITEDIR="/suite/$SUITE"; MYSQLTEST_OPTIONS="$MYSQLTEST_OPTIONS --suite=$SUITE";;
            *) usage; exit 0; ;;
        esac
    done

    if [[ $TEMP_BUILDS ]]
    then
        BUILDS="$TEMP_BUILDS"
    fi
}

# copies test to source directories

copy()

{

    cd "$TESTDIR/t"

    TESTS_TO_PASS=`ls *.test 2>/dev/null | sed s/.test$//`

    cd $OLD_PWD

    for build in $BUILDS

    do

        #cp -i for reject silent overload

        cp "$TESTDIR"/t/*.{test,opt,init,sql,cnf} "$SRCDIR/$build/mysql-test$SUITEDIR/t" 2>/dev/null

    done

}

# runs tests

run()

{

    for build in $BUILDS

    do

        cd "$SRCDIR/$build/mysql-test"

        ./mysql-test-run.pl $MYSQLTEST_OPTIONS $TESTS_TO_PASS

    done

    cd $OLD_PWD

}

# copies result and log files to the main directory

get_result()

{

    for build in $BUILDS

    do

        ls "$TESTDIR/r/$build" 2>/dev/null

        if [[ 0 -ne $? ]]

        then

            mkdir "$TESTDIR/r/$build"

        fi

        for test in $TESTS_TO_PASS

        do

            cp "$SRCDIR/$build/mysql-test$SUITEDIR/r/$test".{log,result} "$TESTDIR/r/$build" 2>/dev/null

        done

    done

}

# removes tests and results from MySQL source directories
cleanup()
{
    if [[ 1 -eq $CLEAN ]]
    then
        for build in $BUILDS
        do
            for test in $TESTS_TO_PASS
            do
                rm "$SRCDIR/$build/mysql-test$SUITEDIR/r/$test".{log,result} 2>/dev/null
                rm "$SRCDIR/$build/mysql-test$SUITEDIR/t/$test.test"
            done
        done
    fi
}

# shows results
show()
{
    for build in $BUILDS
    do
        echo "=====$build====="
        for test in $TESTS_TO_PASS
        do
            echo "=====$test====="
            cat "$TESTDIR/r/$build/$test".{log,result} 2>/dev/null
            echo
        done
        echo
    done
}

E_CDERROR=65

#usage
initialize
parse "$@"
copy
run
get_result
cleanup
show

exit 0

After I finish with a test I copy it to an archive directory, again with a script, named ar_test.sh:


$ cat ~/scripts/ar_test.sh

#!/bin/bash

# moves MySQL tests from t to an archive directory and cleans up r directories

# prints usage information
usage ()
{
    echo "$VERSION"
    echo "
ar_test copies MySQL test files from t to archive folder

Usage: `basename $0` [-v] [-d dirname] [test ...]

Options:
    -d    directory, contains test files
    -v    print version
    -h    print this help
    "
}

# error exit
error()
{
    printf "$@" >&2
    exit $E_CDERROR
}

# creates default values
initialize()
{
    TESTDIR=/home/sveta/src/tests
    TESTS_TO_MOVE=""
    OLD_PWD=`pwd`
    VERSION="ar_test v0.2 (Dec 01 2011)"
}

# parses arguments/sets values to defaults
parse()
{
    while getopts "vhd:" Option
    do
        case $Option in
            v) echo "$VERSION";;
            h) usage; exit 0;;
            d) TESTDIR="$OPTARG";;
            *) usage; exit 0;;
        esac
    done
    # drop the parsed options so only test names remain;
    # calling shift inside the case branches breaks parsing of "-d dirname"
    shift $((OPTIND-1))

    TESTS_TO_MOVE="$@"
}

# copies tests to the archive directory
copy()
{
    if [[ -z "$TESTS_TO_MOVE" ]]
    then
        cp "$TESTDIR"/t/* "$TESTDIR"/archive 2>/dev/null
    else
        for test in $TESTS_TO_MOVE
        do
            cp "$TESTDIR/t/$test".{test,opt,init,sql} "$TESTDIR"/archive 2>/dev/null
        done
    fi
}

# removes tests and results from the t and r directories
cleanup()
{
    if [[ -z "$TESTS_TO_MOVE" ]]
    then
        rm "$TESTDIR"/t/* 2>/dev/null
        rm "$TESTDIR"/r/*/* 2>/dev/null
    else
        for test in $TESTS_TO_MOVE
        do
            rm "$TESTDIR/t/$test".{test,opt,init,sql} 2>/dev/null
            rm "$TESTDIR/r/"*"/$test".{test,opt,init,sql} 2>/dev/null
        done
    fi
}

E_CDERROR=65

initialize
parse "$@"
copy
cleanup

exit 0

But the most important part: what do I do if I want to test on some specific machine that is not available at home? Fortunately, we have shared machines to run tests on, so I can simply move the tests to my network home directory, choose an appropriate machine, and run them there. Since these are BASH scripts and the test cases are in MTR format, this works on any operating system.


$ cat ~/scripts/scp_test.sh

#!/bin/bash

# copies MySQL tests to a remote box

# prints usage information
usage ()
{
    echo "$VERSION"
    echo "
scp_test copies MySQL test files from the t directory on the local box to MySQL's XXX

Usage: `basename $0` [-v] [-d dirname] [-r user@host:path] [test ...]

Options:
    -d    directory, contains test files
    -r    path to test directory on remote server, default: USERNAME@MACHINE_ADDRESS:~/PATH/src/tests/t
    -v    print version
    -h    print this help
    "
}

# error exit
error()
{
    printf "$@" >&2
    exit $E_CDERROR
}

# creates default values
initialize()
{
    TESTDIR=/home/sveta/src/tests
    MOVETO='USERNAME@MACHINE_ADDRESS:~/PATH/src/tests/t'
    TESTS_TO_MOVE=""
    OLD_PWD=`pwd`
    VERSION="scp_test v0.2 (Dec 1 2011)"
}

# parses arguments/sets values to defaults
parse()
{
    # note: "r:" must be in the getopts string, otherwise -r is never recognized
    while getopts "vhd:r:" Option
    do
        case $Option in
            v) echo "$VERSION";;
            h) usage; exit 0;;
            d) TESTDIR="$OPTARG";;
            r) MOVETO="$OPTARG";;
            *) usage; exit 0;;
        esac
    done
    # drop the parsed options so only test names remain
    shift $((OPTIND-1))

    TESTS_TO_MOVE="$@"
}

# copies tests to the remote directory
copy()
{
    if [[ -z "$TESTS_TO_MOVE" ]]
    then
        scp "$TESTDIR"/t/* "$MOVETO"
    else
        for test in $TESTS_TO_MOVE
        do
            scp "$TESTDIR/t/$test".{test,opt,init,sql} "$MOVETO"
        done
    fi
}

E_CDERROR=65

initialize
parse "$@"
copy

exit 0

I wanted to put them on Launchpad, but I am stuck on a name for this package. Does anybody have ideas?

Troubleshooting Performance Diagrams


Last year, when I was speaking about MySQL performance at Devconf in Moscow, I expected my audience to be very experienced, as is always the case at PHPClub conferences. So I had to choose: either make a full-day seminar and explain every basic of performance, or rely on their knowledge and make a one-and-a-half-hour seminar. I prefer short speeches, so I chose the latter.



But even with such a mature audience you don't always know whether they know one or another basic thing: somebody may be good at analyzing EXPLAIN output while somebody else is better at reading InnoDB Monitor printouts. Also, the native language of the audience is not English, and it is always good to have a short reference to simple things described in their native language, in this case Russian. This is why I created these slides the first time.



I was in doubt whether I needed to translate them to English, because the information there repeats the official MySQL Reference Manual in many aspects, although the diagrams can still be useful for an English-speaking audience. So I did not translate those slides at first.



Time passed, and in a few days I am speaking about MySQL Troubleshooting at FOSDEM. This is a 30-minute talk this time! And I have just created slides and content for an 8-hour Oracle University training. Can I zip this 8-hour training into a 30-minute talk? Of course not. But I also want to give users as much as possible in these 30 minutes. So the idea of the add-ons came back.



You can download them in PDF format either from my own website or from slideshare.

Changes to Options and Variables in MySQL 5.6


With MySQL 5.6 just gone GA, I thought it would be good to take a look at the changes in options and variables that comes with the new release.

First of all, several of the existing options have gotten new default values. As James Day has already written a good post about that on his blog, I will refer to that instead of going through the changes. For a general overview of the new features and improvements, the recent blogs by Rob Young and Peter Zaitsev, together with the What is New in MySQL 5.6 page in the Reference Manual, are good places to start.

Instead I will focus a little on the new options that have been introduced. The first thing to note is that the current 5.5 release (5.5.30) has 323 variables whereas 5.6 GA (5.6.10) returns 440 rows.

MySQL 5.5.29> SELECT COUNT(*) FROM information_schema.GLOBAL_VARIABLES;
+----------+
| COUNT(*) |
+----------+
|      323 |
+----------+
1 row in set (0.04 sec)

MySQL 5.6.10> SELECT COUNT(*) FROM information_schema.GLOBAL_VARIABLES;
+----------+
| COUNT(*) |
+----------+
|      440 |
+----------+
1 row in set (0.02 sec)

Note: this post is written using the Enterprise versions with the semi-synchronous replication plugins enabled in both versions plus the memcached and password validation plugins in 5.6.

Actually the number of new variables is not 117 but 129 as 12 variables have been removed in 5.6.

So what are all of these 129 new variables good for? Actually, there is a good chance you will never need to touch many of them: either the default value is good enough, they have simply been added to expose the value of options that already existed in 5.5 but were not shown through SHOW GLOBAL VARIABLES, or they are for features you are not using. If we try to group the new variables, the distribution comes out as:

+----------------------------------------+---------------+
| Feature                                | New Variables |
+----------------------------------------+---------------+
| Global Transaction IDs                 |             5 |
| Other Replication                      |            19 |
| Memcached Plugin                       |             6 |
| Validate Password Plugin               |             6 |
| Other Security Related                 |             5 |
| InnoDB                                 |            54 |
| Optimizer Traces                       |             5 |
| Performance Schema                     |            15 |
| Exposing Previously Existing Variables |             2 |
| Other                                  |            12 |
+----------------------------------------+---------------+

New Variables in MySQL 5.6
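One simple way to produce such a list yourself is to dump the variable names from each server into sorted files and compare them with comm. The sketch below uses stand-in file names and contents just to show the mechanics; against real servers you would generate the lists with something like `mysql -NBe 'SELECT VARIABLE_NAME FROM information_schema.GLOBAL_VARIABLES' | sort`:

```shell
# Stand-in variable lists; in practice generate these from each server with
# mysql -NBe 'SELECT VARIABLE_NAME FROM information_schema.GLOBAL_VARIABLES' | sort
printf 'binlog_format\nmax_connections\n' > /tmp/vars55.txt
printf 'binlog_format\ngtid_mode\nmax_connections\n' > /tmp/vars56.txt

# comm requires sorted input; -13 prints lines unique to the second (newer) file
comm -13 /tmp/vars55.txt /tmp/vars56.txt
```

With real dumps from a 5.5 and a 5.6 server, the output is exactly the set of variables added in 5.6; `comm -23` gives the ones removed.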

The 54 new InnoDB variables span a number of different changes and additions such as:

  • New adaptive flushing algorithm
  • Buffer Pool dumps to disk and restore
  • Support for additional checksum algorithms
  • Improvements for compression
  • Full text indexes
  • New monitoring options (the information_schema.INNODB_METRICS table)
  • Configurable page size
  • Persistent statistics
  • Undo logs improvements
  • And more …

For reference I have added a list of the new variables with the release they were introduced and the default value (additionally innodb_print_all_deadlocks is also new, but that was also added to 5.5.30):

+--------------------------------------------------------+--------------+------------------------------------------------------------------------+
| Variable_name                                          | From Version | Default (Linux)                                                        |
+--------------------------------------------------------+--------------+------------------------------------------------------------------------+
| bind_address                                           | 5.6.1        | *                                                                      |
| binlog_checksum                                        | 5.6.2        | NONE                                                                   |
| binlog_max_flush_queue_time                            | 5.6.6        | 0                                                                      |
| binlog_order_commits                                   | 5.6.6        | ON                                                                     |
| binlog_row_image                                       | 5.6.2        | FULL                                                                   |
| binlog_rows_query_log_events                           | 5.6.2        | OFF                                                                    |
| core_file                                              | 5.6.2        | OFF                                                                    |
| daemon_memcached_enable_binlog                         | 5.6.6        | OFF                                                                    |
| daemon_memcached_engine_lib_name                       | 5.6.6        | innodb_engine.so                                                       |
| daemon_memcached_engine_lib_path                       | 5.6.6        |                                                                        |
| daemon_memcached_option                                | 5.6.6        |                                                                        |
| daemon_memcached_r_batch_size                          | 5.6.6        | 1                                                                      |
| daemon_memcached_w_batch_size                          | 5.6.6        | 1                                                                      |
| default_authentication_plugin                          | 5.6.6        | MYSQL_NATIVE_PASSWORD                                                  |
| default_tmp_storage_engine                             | 5.6.2        | InnoDB                                                                 |
| end_markers_in_json                                    | 5.6.5        | OFF                                                                    |
| enforce_gtid_consistency                               | 5.6.9        | OFF                                                                    |
| eq_range_index_dive_limit                              | 5.6.5        | 10                                                                     |
| explicit_defaults_for_timestamp                        | 5.6.6        | OFF                                                                    |
| gtid_executed                                          | 5.6.9        |                                                                        |
| gtid_mode                                              | 5.6.5        | OFF                                                                    |
| gtid_next                                              | 5.6.5        | AUTOMATIC                                                              |
| gtid_purged                                            | 5.6.9        |                                                                        |
| host_cache_size                                        | 5.6.5        | 128                                                                    |
| ignore_db_dirs                                         | 5.6.3        |                                                                        |
| innodb_adaptive_flushing_lwm                           | 5.6.6        | 10                                                                     |
| innodb_adaptive_max_sleep_delay                        | 5.6.3        | 0                                                                      |
| innodb_api_bk_commit_interval                          | 5.6.7        | 5                                                                      |
| innodb_api_disable_rowlock                             | 5.6.6        | OFF                                                                    |
| innodb_api_enable_binlog                               | 5.6.6        | OFF                                                                    |
| innodb_api_enable_mdl                                  | 5.6.6        | OFF                                                                    |
| innodb_api_trx_level                                   | 5.6.6        | 0                                                                      |
| innodb_buffer_pool_dump_at_shutdown                    | 5.6.3        | OFF                                                                    |
| innodb_buffer_pool_dump_now                            | 5.6.3        | OFF                                                                    |
| innodb_buffer_pool_filename                            | 5.6.3        | ib_buffer_pool                                                         |
| innodb_buffer_pool_load_abort                          | 5.6.3        | OFF                                                                    |
| innodb_buffer_pool_load_at_startup                     | 5.6.3        | OFF                                                                    |
| innodb_buffer_pool_load_now                            | 5.6.3        | ON                                                                     |
| innodb_change_buffer_max_size                          | 5.6.2        | 25                                                                     |
| innodb_checksum_algorithm                              | 5.6.3        | innodb                                                                 |
| innodb_cmp_per_index_enabled                           | 5.6.7        | OFF                                                                    |
| innodb_compression_failure_threshold_pct               | 5.6.7        | 5                                                                      |
| innodb_compression_level                               | 5.6.7        | 6                                                                      |
| innodb_compression_pad_pct_max                         | 5.6.7        | 50                                                                     |
| innodb_disable_sort_file_cache                         | 5.6.4        | OFF                                                                    |
| innodb_flush_log_at_timeout                            | 5.6.6        | 1                                                                      |
| innodb_flush_neighbors                                 | 5.6.3        | 1                                                                      |
| innodb_flushing_avg_loops                              | 5.6.6        | 30                                                                     |
| innodb_ft_cache_size                                   | 5.6.4        | 32M                                                                    |
| innodb_ft_enable_diag_print                            | 5.6.4        | OFF                                                                    |
| innodb_ft_enable_stopword                              | 5.6.4        | ON                                                                     |
| innodb_ft_max_token_size                               | 5.6.4        | 84                                                                     |
| innodb_ft_min_token_size                               | 5.6.4        | 3                                                                      |
| innodb_ft_num_word_optimize                            | 5.6.4        | 2000                                                                   |
| innodb_ft_server_stopword_table                        | 5.6.4        | NULL                                                                   |
| innodb_ft_sort_pll_degree                              | 5.6.4        | 2                                                                      |
| innodb_ft_user_stopword_table                          | 5.6.4        | NULL                                                                   |
| innodb_io_capacity_max                                 | 5.6.6        | 2000                                                                   |
| innodb_lru_scan_depth                                  | 5.6.3        | 1024                                                                   |
| innodb_max_dirty_pages_pct_lwm                         | 5.6.6        | 0                                                                      |
| innodb_max_purge_lag_delay                             | 5.6.5        | 0                                                                      |
| innodb_monitor_disable                                 | 5.6.2        |                                                                        |
| innodb_monitor_enable                                  | 5.6.2        |                                                                        |
| innodb_monitor_reset                                   | 5.6.2        |                                                                        |
| innodb_monitor_reset_all                               | 5.6.2        |                                                                        |
| innodb_online_alter_log_max_size                       | 5.6.6        | 128M                                                                   |
| innodb_optimize_fulltext_only                          | 5.6.4        | OFF                                                                    |
| innodb_page_size                                       | 5.6.4        | 16k                                                                    |
| innodb_print_all_deadlocks                             | 5.6.2        | OFF                                                                    |
| innodb_read_only                                       | 5.6.7        | OFF                                                                    |
| innodb_sort_buffer_size                                | 5.6.4        | 1M                                                                     |
| innodb_stats_auto_recalc                               | 5.6.6        | ON                                                                     |
| innodb_stats_persistent                                | 5.6.6        | ON                                                                     |
| innodb_stats_persistent_sample_pages                   | 5.6.2        | 20                                                                     |
| innodb_stats_transient_sample_pages                    | 5.6.2        | 8                                                                      |
| innodb_sync_array_size                                 | 5.6.3        | 1                                                                      |
| innodb_undo_directory                                  | 5.6.3        | .                                                                      |
| innodb_undo_logs                                       | 5.6.3        | 128                                                                    |
| innodb_undo_tablespaces                                | 5.6.3        | 0                                                                      |
| log_bin_basename                                       | 5.6.1        |                                                                        |
| log_bin_index                                          | 5.6.1        |                                                                        |
| log_bin_use_v1_row_events                              | 5.6.6        | OFF                                                                    |
| log_throttle_queries_not_using_indexes                 | 5.6.5        | 0                                                                      |
| master_info_repository                                 | 5.6.2        | FILE                                                                   |
| master_verify_checksum                                 | 5.6.2        | OFF                                                                    |
| metadata_locks_hash_instances                          | 5.6.8        | 8                                                                      |
| optimizer_trace                                        | 5.6.3        | enabled=off,one_line=off                                               |
| optimizer_trace_features                               | 5.6.3        | greedy_search=on,range_optimizer=on,dynamic_range=on,repeated_subselec |
| optimizer_trace_limit                                  | 5.6.3        | 1                                                                      |
| optimizer_trace_max_mem_size                           | 5.6.3        | 16k                                                                    |
| optimizer_trace_offset                                 | 5.6.3        | -1                                                                     |
| performance_schema_accounts_size                       | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_digests_size                        | 5.6.5        | -1 (autosized)                                                         |
| performance_schema_events_stages_history_long_size     | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_events_stages_history_size          | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_events_statements_history_long_size | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_events_statements_history_size      | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_hosts_size                          | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_max_socket_classes                  | 5.6.3        | 10                                                                     |
| performance_schema_max_socket_instances                | 5.6.3        | -1 (autosized)                                                         |
| performance_schema_max_stage_classes                   | 5.6.3        | 100                                                                    |
| performance_schema_max_statement_classes               | 5.6.3        | 100                                                                    |
| performance_schema_session_connect_attrs_size          | 5.6.6        | -1 (autosized)                                                         |
| performance_schema_setup_actors_size                   | 5.6.1        | 100                                                                    |
| performance_schema_setup_objects_size                  | 5.6.1        | 100                                                                    |
| performance_schema_users_size                          | 5.6.3        | -1 (autosized)                                                         |
| relay_log_basename                                     | 5.6.2        | %{datadir}%{hostname}-relay-bin                                        |
| relay_log_info_repository                              | 5.6.2        | FILE                                                                   |
| server_id_bits                                         | 5.6.6        | 32                                                                     |
| server_uuid                                            | 5.6.0        |                                                                        |
| sha256_password_private_key_path                       | 5.6.6        | private_key.pem                                                        |
| sha256_password_public_key_path                        | 5.6.6        | public_key.pem                                                         |
| slave_allow_batching                                   | 5.6.6        | OFF                                                                    |
| slave_checkpoint_group                                 | 5.6.3        | 512                                                                    |
| slave_checkpoint_period                                | 5.6.3        | 300                                                                    |
| slave_parallel_workers                                 | 5.6.3        | 0                                                                      |
| slave_pending_jobs_size_max                            | 5.6.3        | 1k                                                                     |
| slave_rows_search_algorithms                           | 5.6.6        | TABLE_SCAN,INDEX_SCAN                                                  |
| slave_sql_verify_checksum                              | 5.6.1        | ON                                                                     |
| ssl_crl                                                | 5.6.3        |                                                                        |
| ssl_crlpath                                            | 5.6.3        |                                                                        |
| table_open_cache_instances                             | 5.6.6        | 1                                                                      |
| tx_read_only                                           | 5.6.5        | OFF                                                                    |
| validate_password_dictionary_file                      | 5.6.6        |                                                                        |
| validate_password_length                               | 5.6.6        | 8                                                                      |
| validate_password_mixed_case_count                     | 5.6.6        | 1                                                                      |
| validate_password_number_count                         | 5.6.6        | 1                                                                      |
| validate_password_policy                               | 5.6.10       | MEDIUM                                                                 |
| validate_password_special_char_count                   | 5.6.6        | 1                                                                      |
+--------------------------------------------------------+--------------+------------------------------------------------------------------------+

Note that while the default values are for an installation on Linux, most will also apply to other platforms. See also the Reference Manual.

For good measure here is a list of the variables that have been removed in 5.6:

  • engine_condition_pushdown – deprecated in 5.5.3, use optimizer_switch instead.
  • have_csv – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_innodb – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_ndbcluster – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_partitioning – use SHOW ENGINES or information_schema.ENGINES instead.
  • log – deprecated in 5.1.29, use general_log instead.
  • log_slow_queries – deprecated in 5.1.29, use slow_query_log instead.
  • max_long_data_size – deprecated in 5.5.11, is now automatically controlled by max_allowed_packet.
  • rpl_recovery_rank – previously unused.
  • sql_big_tables – hasn’t really been needed since 3.23.2.
  • sql_low_priority_updates – Use low_priority_updates instead.
  • sql_max_join_size

Default user


It came up twice in two days: if you do not specify the user name when connecting, what gets picked?

The manual says:
"On Unix, most MySQL clients by default try to log in using the current Unix user name as the MySQL user name, but that is for convenience only."
http://dev.mysql.com/doc/refman/5.6/en/user-names.html

"The default user name is ODBC on Windows or your Unix login name on Unix."
http://dev.mysql.com/doc/refman/5.6/en/connecting.html

Let's be a little more specific. The relevant section of code is in libmysql/libmysql.c

On Linux, we check the following in this order:
- if (geteuid() == 0), user is "root"
- getlogin()
- getpwuid(geteuid())
- environment variables $USER, $LOGNAME, $LOGIN
If none of those return non-NULL results, use "UNKNOWN_USER"
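The Linux fallback chain can be mimicked in shell. This is only an illustrative sketch of the order of precedence, not the actual C code; `logname` and `id` stand in for getlogin() and getpwuid(geteuid()):

```shell
# Approximate libmysql's default-user lookup order on Linux
user=""
[ "$(id -u)" -eq 0 ] && user="root"                   # geteuid() == 0 -> "root"
[ -z "$user" ] && user="$(logname 2>/dev/null)"       # getlogin()
[ -z "$user" ] && user="$(id -un 2>/dev/null)"        # getpwuid(geteuid())
[ -z "$user" ] && user="${USER:-${LOGNAME:-$LOGIN}}"  # environment fallbacks
[ -z "$user" ] && user="UNKNOWN_USER"
echo "$user"
```

On a normal interactive session the chain stops at the second or third step, so the environment variables rarely come into play.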

On Windows:
- environment variable $USER
If that's not set, use "ODBC".

I wondered why on Windows we check $USER but not $USERNAME. I gather that it's an ODBC thing.

Changing the Size of the InnoDB Log Files In MySQL 5.6


In MySQL 5.5 and earlier, the steps to resize the InnoDB log files were a bit involved and, for example, included manually moving the log files out of the way, as InnoDB would only create new files if none existed.

In MySQL 5.6 a not much talked about feature is the support for resizing the log files in a way much more similar to changing other settings in MySQL: now you simply update your MySQL configuration file and restart MySQL.

Let us look at an example. In MySQL 5.5 and earlier the total size of the InnoDB log files has to be less than 4G, so one way of staying within this limit is to have two files, each 2047M large:

innodb $ ls -1s ib_logfile*
2096132 ib_logfile0
2096144 ib_logfile1
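As a quick sanity check on that limit, a little shell arithmetic shows why 2047M per file is the common choice: two such files come to 4094 MB, just under the 4096 MB (4G) combined maximum.

```shell
# Combined redo log size must stay below 4096 MB (4G) in MySQL 5.5 and earlier
files=2
size_mb=2047
total=$((files * size_mb))
echo "total: ${total} MB"                        # prints "total: 4094 MB"
[ "$total" -lt 4096 ] && echo "within 5.5 limit"
```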

Now update the configuration file to take advantage of the fact that MySQL 5.6 allows much larger InnoDB log files; the actual limit is a total size of 512G, but here I will use two files each 4G large:

[mysqld]
innodb_log_files_in_group = 2
innodb_log_file_size      = 4G

Restarting MySQL will then automatically resize the log files, and the error log will show something like:

...
2013-02-24 11:29:15 5997 [Warning] InnoDB: Resizing redo log from 2*131008 to 2*262144 pages, LSN=2918104
2013-02-24 11:29:15 5997 [Warning] InnoDB: Starting to delete and rewrite log files.
2013-02-24 11:29:15 5997 [Note] InnoDB: Setting log file /MySQL/data/ib_logfile101 size to 4096 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000
2013-02-24 11:31:03 5997 [Note] InnoDB: Setting log file /MySQL/data/ib_logfile1 size to 4096 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000
2013-02-24 11:32:11 5997 [Note] InnoDB: Renaming log file /MySQL/data/ib_logfile101 to /MySQL/data/ib_logfile0
2013-02-24 11:32:11 5997 [Warning] InnoDB: New log files created, LSN=2918104

One of the other requirements when changing the log file size in MySQL 5.5 and earlier was that innodb_fast_shutdown must be set to 0 or 1 (the default value is 1). What happens in MySQL 5.6 if you have innodb_fast_shutdown = 2 and try to change the log size? Well now InnoDB handles that as well – InnoDB will do its “crash recovery” and then resize the log files:

mysql> SET GLOBAL innodb_fast_shutdown = 2;
Query OK, 0 rows affected (0.01 sec)

And a look into the error log for the restart (setting the size back to 2 times 2047M):

2013-02-24 11:38:00 5997 [Note] InnoDB: MySQL has requested a very fast shutdown without flushing the InnoDB buffer pool to data files. At the next mysqld startup InnoDB will do a crash recovery!
...
InnoDB: Doing recovery: scanned up to log sequence number 2968389
2013-02-24 11:38:18 7129 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 202094, file name binlog.000003
2013-02-24 11:38:18 7129 [Warning] InnoDB: Resizing redo log from 2*262144 to 2*131008 pages, LSN=2968389
2013-02-24 11:38:18 7129 [Warning] InnoDB: Starting to delete and rewrite log files.
2013-02-24 11:38:19 7129 [Note] InnoDB: Setting log file /MySQL/data/ib_logfile101 size to 2047 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900 2000
2013-02-24 11:39:10 7129 [Note] InnoDB: Setting log file /MySQL/data/ib_logfile1 size to 2047 MB
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900 2000
2013-02-24 11:40:10 7129 [Note] InnoDB: Renaming log file /MySQL/data/ib_logfile101 to /MySQL/data/ib_logfile0
2013-02-24 11:40:10 7129 [Warning] InnoDB: New log files created, LSN=2968389

While it is not something that makes an impact during normal operations, it helps make the life of a DBA (or Support engineer) a little easier.

Yet another UDF tutorial


Some time ago I wrote a blog post describing a way I use to verify MySQL Server bugs. But my job does not consist only of bugs that can be verified simply by passing SQL queries to the server.

UDF bugs are one such example.

The MySQL User Reference Manual is a good source of information for those who want to write UDFs, as is the book "MySQL 5.1 Plugin Development" by Sergei Golubchik and Andrew Hutchings. But while the book describes in detail how to write UDFs, it was created when the current MySQL version was 5.1 and does not cover how to build a UDF nowadays. The User Reference Manual has this information, of course, but it misses details such as how to build a UDF with a custom library. And, last but not least, I need a layout that allows me to test my UDF quickly with any server version I need.

So here is a brief overview of how I do it.

Let's take a MySQL bug report I just created as an example: "Bug #68946 UDF *_init and *_deinit functions called only once for multiple-row select".

All the code necessary to repeat the issue is attached to the bug report, so you can download it and try it yourself, or simply read this description.

After unpacking the archive we can see the following layout:


-rwxrwx--- 1 sveta sveta  1070 Apr 13 11:48 CMakeLists.txt
-rwxrwx--- 1 sveta sveta   180 Apr 13 12:16 initid_bug.cc
-rwxrwx--- 1 sveta sveta   146 Apr 13 12:16 initid_bug.h
drwxrwx--- 1 sveta sveta     0 Apr 13 11:22 initid_bug_udf
-rwxrwx--- 1 sveta sveta   715 Apr 13 12:18 initid_bug_udf.cc
-rwxrwx--- 1 sveta sveta    76 Apr 13 11:48 initid_bug_udf.def
-rwxrwx--- 1 sveta sveta   484 Apr 13 12:08 initid_bug_udf.h
-rwxrwx--- 1 sveta sveta  6281 Apr 13 13:07 Makefile
-rwxrwx--- 1 sveta sveta   299 Apr 13 12:28 Makefile.unix


Let's start with the code.


$ cat initid_bug_udf.def 
LIBRARY         initid_bug_udf
VERSION         0.1
EXPORTS
  initid_bug

This is a common *.def file, which contains the UDF version and the exported function names. I am going to use a single function that shows the issue with the initid->ptr pointer.

initid_bug.h and initid_bug.cc contain the declaration and definition of a helper function used to demonstrate the problem. For this particular bug report I didn't strictly need this function, but I created it with a future blog post in mind, so that I have an example of an external library that has to be linked with the UDF:


$ cat initid_bug.h
/* MySQL */
#pragma once

#ifndef _INITID_BUG
#define _INITID_BUG

#define MAXRES 10

char* multiply_by_ten(int value);

#endif /* _INITID_BUG */

$ cat initid_bug.cc 
/* MySQL */
#include <stdio.h>
#include "initid_bug.h"

char* multiply_by_ten(int value)
{
    char* result= new char[MAXRES];
    sprintf(result, "%d", value * 10);

    return result;
}

Files initid_bug_udf.h and initid_bug_udf.cc contain code for the UDF itself.

initid_bug_udf.h is simple header file:


$ cat initid_bug_udf.h
/* MySQL */
#include <my_global.h>
#include <my_sys.h>
#include <mysql.h>
#include <string.h>
#include "initid_bug.h"

#ifdef __WIN__
#define WINEXPORT __declspec(dllexport)
//#define strcpy strcpy_s
#else
#define WINEXPORT
#endif

extern "C" {
WINEXPORT long long initid_bug(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error);

WINEXPORT my_bool initid_bug_init(UDF_INIT *initid, UDF_ARGS *args, char *message);

WINEXPORT void initid_bug_deinit(UDF_INIT *initid);
}

And initid_bug_udf.cc contains the code demonstrating the issue:


$ cat initid_bug_udf.cc
/* MySQL */
#include "initid_bug_udf.h"

long long initid_bug(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
    int result= atoi(initid->ptr);
    char* bug= multiply_by_ten(result);
    memcpy(initid->ptr, bug, strlen(bug));
    delete[] bug;
    return result;
}

my_bool initid_bug_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
  if (!(initid->ptr= (char*)malloc(MAXRES)))
  {
    strcpy(message,"Couldn't allocate memory for result buffer");
    return 1;
  }
  memset(initid->ptr, '\0', MAXRES);

  memcpy(initid->ptr, "1", strlen("1"));
  initid->maybe_null= 1;
  initid->const_item= 0;

  return 0;
}

void initid_bug_deinit(UDF_INIT *initid)
{
  if (initid->ptr)
    free(initid->ptr);
}

So far so good. However there is nothing interesting yet.
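To see why this code demonstrates the bug: if initid_bug_init() runs only once for a multiple-row SELECT, the state in initid->ptr carries over from row to row, and each call multiplies it by ten. A small Python simulation of that call sequence (illustrative only, not the server's execution model):

```python
def multiply_by_ten(value):
    # mirrors the C++ helper: format value * 10 as a string
    return str(value * 10)

def run_select(num_rows):
    # _init runs once: initid->ptr starts out as "1"
    ptr = "1"
    results = []
    for _ in range(num_rows):
        # each row: read ptr, store ptr * 10, return the value that was read
        result = int(ptr)
        ptr = multiply_by_ten(result)
        results.append(result)
    return results

# 8 rows, as produced by the three INSERT ... SELECT doublings in the test
print(run_select(8))
# → [1, 10, 100, 1000, 10000, 100000, 1000000, 10000000]
```

If *_init were instead called once per row, every row would return 1.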

And now is the part I am writing this tutorial for: how to build and test it.

The user manual contains build instructions for versions 5.5 and up at http://dev.mysql.com/doc/refman/5.6/en/udf-compiling.html

But since I want to test my UDF with several MySQL installations, I need an easy way to pass the value of MySQL's basedir to my build and test scripts. For this purpose I introduced the variable MYSQL_DIR in CMakeLists.txt:


$ cat CMakeLists.txt 
CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
# Avoid warnings in higher versions
if("${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION}" GREATER 2.6)
 CMAKE_POLICY(VERSION 2.8)
endif()

PROJECT(initid_bug_udf)

# The version number.
set (initid_bug_udf_VERSION_MAJOR 0)
set (initid_bug_udf_VERSION_MINOR 1)

# Path for MySQL include directory
SET(MYSQL_DIR_NAME_DOCSTRING "Path to MySQL directory")
IF(DEFINED MYSQL_DIR)
  SET(MYSQL_DIR ${MYSQL_DIR} CACHE STRING ${MYSQL_DIR_NAME_DOCSTRING} FORCE)
ELSE()
  MESSAGE(WARNING "${MYSQL_DIR_NAME_DOCSTRING} was not specified. If something goes wrong re-run with option -DMYSQL_DIR")
ENDIF()

INCLUDE_DIRECTORIES("${MYSQL_DIR}/include")


I also added the library here:




ADD_LIBRARY(initid_bug initid_bug.cc)

ADD_DEFINITIONS("-DMYSQL_DYNAMIC_PLUGIN")
ADD_DEFINITIONS("-fPIC")
ADD_DEFINITIONS("-g")

ADD_LIBRARY(initid_bug_udf MODULE initid_bug_udf.cc initid_bug_udf.def)


And linked it:


IF(${CMAKE_SYSTEM_NAME} MATCHES "Windows")
TARGET_LINK_LIBRARIES(initid_bug_udf initid_bug wsock32)
ELSE()
TARGET_LINK_LIBRARIES(initid_bug_udf initid_bug)
ENDIF()


In other aspects this CMakeLists.txt is the same as described in the user manual.

Now it is easy to build the UDF for any server installed on the same machine as the UDF sources.

On Linux/Solaris/Mac:

cmake . -DMYSQL_DIR=/home/sveta/src/mysql-5.6
make

On some Mac machines creating 64-bit binaries fails. You can build universal binaries instead by providing the option -DCMAKE_OSX_ARCHITECTURES="x86_64;i386;ppc".

On Windows:

You need Visual Studio (I did not test with Express, but I hope it works) and cmake (cmake.org). If you want to run the automatic tests you need Perl.

To create makefiles run:


"C:\Program Files (x86)\CMake 2.8\bin\cmake.exe" -G "Visual Studio 11 Win64" . -DMYSQL_DIR="D:/build/mysql-5.5"


As you can see I have Visual Studio 11. If you have another version, change accordingly, for example: "Visual Studio 10 Win64"


Then compile:


devenv initid_bug_udf.sln /build Release


In all cases, change the MYSQL_DIR value so it points to the basedir of your MySQL installation. It does not matter whether you compiled the MySQL server yourself or have pre-compiled binaries.

After compiling the library, you can install or test it. Since I mostly care about testing, I did not create an install script, but I did create tests and a makefile to run them.

Tests are located in the initid_bug_udf directory:


$ ls -l initid_bug_udf
total 4
drwxrwx--- 1 sveta sveta 4096 Apr 13 13:08 include
drwxrwx--- 1 sveta sveta    0 Apr 13 11:31 plugin
drwxrwx--- 1 sveta sveta    0 Apr 13 11:59 r
drwxrwx--- 1 sveta sveta    0 Apr 13 11:21 t


They are in MTR format. I put all installation-related and commonly used scripts into the include directory and copy the UDF binary into the plugin directory.

The test case itself is simple:


$ cat initid_bug_udf/t/initid_bug.test 
--source suite/initid_bug_udf/include/initid_bug_udf_install.inc
--source suite/initid_bug_udf/include/initid_bug_udf_testdata.inc

select initid_bug() from t1;

--source suite/initid_bug_udf/include/initid_bug_udf_cleanup.inc


As well as initialization and cleanup files:


$ cat initid_bug_udf/include/initid_bug_udf_install.inc 
--disable_query_log
let $win=`select @@version_compile_os like 'win%'`;
if($win==1)
{
--source suite/initid_bug_udf/include/initid_bug_udf_install_win.inc
}
if($win==0)
{
--source suite/initid_bug_udf/include/initid_bug_udf_install_unix.inc
}
--enable_result_log
--enable_query_log

$ cat initid_bug_udf/include/initid_bug_udf_install_unix.inc 
--disable_query_log
--disable_warnings
drop function if exists initid_bug;
--enable_warnings
create function initid_bug returns integer soname 'libinitid_bug_udf.so';
--enable_query_log

$ cat initid_bug_udf/include/initid_bug_udf_install_win.inc 
--disable_query_log
--disable_warnings
drop function if exists initid_bug;
--enable_warnings
create function initid_bug returns integer soname 'initid_bug_udf.dll';
--enable_query_log

However, you can use the *install* scripts as templates for installation automation if you care more about usage than testing.

Note: I did not test this particular UDF on Windows, so the Windows-related code can be buggy.

Also, if you use versions prior to 5.5, you cannot use the initid_bug_udf/include/initid_bug_udf_install.inc script, but have to distinguish between the Windows and UNIX installation scripts in a different way.


$ cat initid_bug_udf/include/initid_bug_udf_testdata.inc 
--disable_warnings
drop table if exists t1;
--enable_warnings

create table t1(f1 int);
insert into t1 values(1);
insert into t1 select f1 from t1;
insert into t1 select f1 from t1;
insert into t1 select f1 from t1;


And, finally, cleanup:


$ cat initid_bug_udf/include/initid_bug_udf_cleanup.inc 
--disable_query_log
--disable_warnings
drop table if exists t1;
drop function if exists initid_bug;
--enable_warnings
--enable_query_log


To run the tests easily I created the file Makefile.unix. I leave creating a similar file for Windows up to you, because it depends on your Windows Perl installation: scripts for Cygwin, ActiveState Perl or whichever Perl you have can vary in slight details.


$ cat Makefile.unix 
#MYSQL_DIR - path to source dir
test_initid_bug:
            cp libinitid_bug_udf.so initid_bug_udf/plugin
            cp -R initid_bug_udf  $(MYSQL_DIR)/mysql-test/suite
            cd $(MYSQL_DIR)/mysql-test; \
                perl mtr  --suite=initid_bug_udf --mysqld=--plugin-dir=$(MYSQL_DIR)/mysql-test/suite/initid_bug_udf/plugin


Finally, to run the tests you simply need to call the following command:


$ MYSQL_DIR=/home/sveta/src/mysql-5.6 make -f Makefile.unix test_initid_bug


It will copy the test directory under the mysql-test/suite directory of the MySQL installation pointed to by the MYSQL_DIR variable, and then run the test.

If you did not create a result file, you will see a test failure together with the query output. You can then create the result file and put it into the r directory. The ZIP archive attached to the bug report contains the result file.

Five Number Summary


From Freenode: how do you generate a five number summary in MySQL? There is no "median" aggregate function built in. You could do some clever things involving self joins or temporary tables, or build an aggregate UDF - see the comments section in the manual for those approaches.

Here's another way using a single query. Be sure to set group_concat_max_len high enough for your data, and since it relies on string manipulation, it's probably not a good choice if your data is millions of rows.

First, a helper function to get the Nth element of a comma-delimited string, just to make the query shorter:

CREATE FUNCTION LIST_ELEM(inString text, pos int) 
RETURNS TEXT DETERMINISTIC 
RETURN SUBSTRING_INDEX(SUBSTRING_INDEX(inString, ',', pos), ',', -1);
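
The nesting works because SUBSTRING_INDEX with a positive count keeps everything up to the pos-th delimiter, and a count of -1 then keeps only the last element of that prefix. A Python sketch of the same two steps (a hedged port, not MySQL's implementation):

```python
def substring_index(s, delim, count):
    """Port of MySQL's SUBSTRING_INDEX(str, delim, count)."""
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])   # everything before the count-th delimiter
    if count < 0:
        return delim.join(parts[count:])   # everything after the count-th-from-last delimiter
    return ""

def list_elem(s, pos):
    # LIST_ELEM: take the first pos elements, then the last of those
    return substring_index(substring_index(s, ",", pos), ",", -1)

print(list_elem("0,0,1,2,13,27,61,63", 3))  # → 1
```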


Now, fetching the min, max, median, and first and third quartiles (computing method 2) for each group:

SELECT 
  groupId,
  GROUP_CONCAT(data ORDER BY data) AS dataSet,
  MIN(data) AS min,
  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data), CEIL(COUNT(*)/4))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data), FLOOR(COUNT(*)/4 + 1))
  ) / 2 AS q1,
  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data), CEIL(COUNT(*)/2))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data), FLOOR(COUNT(*)/2 + 1))
  ) / 2 AS median,
  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data DESC), CEIL(COUNT(*)/4))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data DESC), FLOOR(COUNT(*)/4 + 1))
  ) / 2 AS q3,
  MAX(data) AS max
FROM t 
GROUP BY groupId;
+---------+---------------------+------+------+--------+------+------+
| groupId | dataSet             | min  | q1   | median | q3   | max  |
+---------+---------------------+------+------+--------+------+------+
|       1 | 0,0,1,2,13,27,61,63 |    0 |  0.5 |    7.5 |   44 |   63 |
|       2 | 0,0,1,2,25          |    0 |    0 |      1 |    2 |   25 |
+---------+---------------------+------+------+--------+------+------+
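
The quartile arithmetic in the query can be checked against plain Python: for n sorted values, the query averages the CEIL(n/4)-th and FLOOR(n/4 + 1)-th elements for Q1, and symmetrically from the descending list for Q3. A quick sketch reproducing the result table (illustrative, not a replacement for the SQL):

```python
import math

def summary(values):
    """Five number summary using the same index arithmetic as the query."""
    data = sorted(values)
    n = len(data)

    def avg_pair(seq, k):
        # average of the ceil(k)-th and floor(k + 1)-th 1-based elements
        return (seq[math.ceil(k) - 1] + seq[math.floor(k + 1) - 1]) / 2

    q1 = avg_pair(data, n / 4)
    median = avg_pair(data, n / 2)
    q3 = avg_pair(data[::-1], n / 4)  # third quartile, taken from the descending order
    return min(data), q1, median, q3, max(data)

# groupId 1 from the result table above
print(summary([0, 0, 1, 2, 13, 27, 61, 63]))  # → (0, 0.5, 7.5, 44.0, 63)
```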

Easier Overview of Current Performance Schema Setting


While I prepared for my Hands-On Lab about the Performance Schema at MySQL Connect last year, one of the things that occurred to me was how difficult it was quickly getting an overview of which consumers, instruments, actors, etc. are actually enabled. For the consumers things are made more complicated as the effective setting also depends on parents in the hierarchy. So my thought was: “How difficult can it be to write a stored procedure that outputs a tree of the hierarchies.” Well, simple enough in principle, but trying to be general ended up making it into a lengthy project and as it was a hobby project, it often ended up being put aside for more urgent tasks.

However, here around eight months later, it is starting to shape up, although there definitely still is work to be done: for example, creating the full tree and outputting it in text mode (more on modes later) takes around one minute on my test system – granted, I am using a standard laptop and MySQL is running in a VM, so it is nothing sophisticated.

The current routines can be found in ps_tools.sql.gz – it may later be merged into Mark Leith’s ps_helper to try to keep the Performance Schema tools collected in one place.

Note: So far the routines have only been tested on Linux with MySQL 5.6.11. In particular, line endings may give problems on Windows and Mac.

Description of the ps_tools Routines

The current status is two views, four stored procedures, and four functions – not including several functions and procedures that do all the hard work:

  • Views:
    • setup_consumers – Displays whether each consumer is enabled and whether the consumer actually will be collected based on the hierarchy rules described in Pre-Filtering by Consumer in the Reference Manual.
    • accounts_enabled – Displays whether each account defined in the mysql.user table has instrumentation enabled based on the rows in performance_schema.setup_actors.
  • Procedures:
    • setup_tree_consumers(format, color) – Create a tree based on setup_consumers displaying whether each consumer is effectively enabled. The arguments are:
      • format is the output format and can be either (see also below):
        • Text: Left-Right
        • Text: Top-Bottom
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the consumer names when using a Text format (ignored for Dot outputs).
    • setup_tree_instruments(format, color, type, only_enabled, regex_filter) – Create a tree based on setup_instruments displaying whether each instrument is enabled. The tree is created by splitting the instrument names at each /. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the instrument names when using a Text format (ignored for Dot outputs).
      • type – whether to base the tree on the ENABLED or TIMED column of setup_instruments.
      • only_enabled – if TRUE only the enabled instruments are included.
      • regex_filter – if set to a non-empty string only instruments that match the regex will be included.
    • setup_tree_actors_by_host(format, color) – Create a tree of each account defined in mysql.user and whether they are enabled; grouped by host. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
    • setup_tree_actors_by_user(format, color) – Create a tree of each account defined in mysql.user and whether it is enabled; grouped by username. The arguments are:
      • format is the output format and can be either:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
  • Functions:
    • is_consumer_enabled(consumer_name) – Returns whether a given consumer is effectively enabled.
    • is_account_enabled(host, user) – Returns whether a given account (host, user) is enabled according to setup_actors.
    • substr_count(haystack, needle, offset, length) – The number of times a given substring occurs in a string. A port of the PHP function of the same name.
    • substr_by_delim(set, delimiter, pos) – Returns the Nth element from a delimited string.

The two functions substr_count() and substr_by_delim() were also described in an earlier blog.
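The consumer rule that setup_consumers and is_consumer_enabled() implement can be stated compactly: a consumer only collects if it and every ancestor in the hierarchy is enabled. A Python sketch of that rule (the parent map follows the Pre-Filtering by Consumer hierarchy from the Reference Manual; treat it as an illustration, not ps_tools code):

```python
# parent of each consumer; None marks the root
PARENT = {
    "global_instrumentation": None,
    "thread_instrumentation": "global_instrumentation",
    "statements_digest": "global_instrumentation",
    "events_statements_current": "thread_instrumentation",
    "events_statements_history": "events_statements_current",
    "events_statements_history_long": "events_statements_current",
}

def collects(consumer, enabled):
    """A consumer collects only if it and all of its ancestors are enabled."""
    while consumer is not None:
        if not enabled.get(consumer, False):
            return False
        consumer = PARENT[consumer]
    return True

enabled = {
    "global_instrumentation": True,
    "thread_instrumentation": True,
    "events_statements_current": True,
    "events_statements_history": False,
}
print(collects("events_statements_current", enabled))  # → True
print(collects("events_statements_history", enabled))  # → False

# disabling an ancestor stops collection even for an enabled consumer
enabled["thread_instrumentation"] = False
print(collects("events_statements_current", enabled))  # → False
```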

The formats for the four stored procedures consists of two parts: whether it is Text or Dot and the direction. Text is a tree that can be viewed directly either in the mysql command line client (coloured output not supported) or the shell (colours supported for bash). Dot will output a DOT graph file in the same way as dump_thread_stack() in ps_helper. The direction is as defined in the DOT language, so e.g. Left-Right will have the first level furthest to the left, then add each new level to the right of the parent level.

Examples

As the source code – including comments – is more than 1600 lines, I will not discuss it here, but rather go through some examples.

setup_tree_consumers

Using the coloured output:

[image: setup_tree_consumers_tb]

or the same using a non-coloured output:

[image: setup_tree_consumers_lr]

setup_tree_instruments

[image: setup_tree_instruments_lr]

Here a small part of the tree is selected using a regex.

setup_tree_actors_%

With only root@localhost and root@127.0.0.1 enabled, the outputs of setup_tree_actors_by_host and setup_tree_actors_by_user give respectively:

[images: setup_tree_actors_by_host_lr, setup_tree_actors_by_user_lr]

DOT Graph of setup_instruments

The full tree of setup_instruments can be created using the following sequence of steps (I am using graphviz to get support for dot files):

MySQL 5.6.11$ echo -e "$(mysql -NBe "CALL ps_tools.setup_tree_instruments('Dot: Left-Right', FALSE, 'Enabled', FALSE, '')")" > /tmp/setup_instruments.dot
MySQL 5.6.11$ dot -Tpng /tmp/setup_instruments.dot -o /tmp/setup_instruments.png

[image: setup_tree_instruments_dot_lr_snip]

The full output is rather large (6.7M). If you want to see it, you can get to it at http://mysql.wisborg.dk/wp-content/uploads/2013/05/setup_tree_instruments_dot_lr.png.

Views

mysql> SELECT * FROM ps_tools.setup_consumers;
+--------------------------------+---------+----------+
| NAME                           | ENABLED | COLLECTS |
+--------------------------------+---------+----------+
| events_stages_current          | NO      | NO       |
| events_stages_history          | NO      | NO       |
| events_stages_history_long     | NO      | NO       |
| events_statements_current      | YES     | YES      |
| events_statements_history      | NO      | NO       |
| events_statements_history_long | NO      | NO       |
| events_waits_current           | NO      | NO       |
| events_waits_history           | NO      | NO       |
| events_waits_history_long      | NO      | NO       |
| global_instrumentation         | YES     | YES      |
| thread_instrumentation         | YES     | YES      |
| statements_digest              | YES     | YES      |
+--------------------------------+---------+----------+
12 rows in set (0.00 sec)

mysql> SELECT * FROM ps_tools.accounts_enabled;
+-------------+-----------+---------+
| User        | Host      | Enabled |
+-------------+-----------+---------+
| replication | 127.0.0.1 | NO      |
| root        | 127.0.0.1 | YES     |
| root        | ::1       | NO      |
| meb         | localhost | NO      |
| memagent    | localhost | NO      |
| root        | localhost | YES     |
+-------------+-----------+---------+
6 rows in set (0.00 sec)

Conclusion

There is definitely more work to do on making the Performance Schema easier to access. ps_helper and ps_tools are a great start to what I am sure will be an extensive library of useful diagnostic queries and tools.

The Dangers in Changing Default Character Sets on Tables


The ALTER TABLE statement syntax is explained in the manual at:

http://dev.mysql.com/doc/refman/5.6/en/alter-table.html

To put it simply, there are two ways you can alter the table to use a new character set.

1. ALTER TABLE tablename DEFAULT CHARACTER SET utf8;

This will alter the table to use the new character set as the default, but as a safety mechanism it only changes the default character set in the table definition. That is, existing character columns keep the old character set, now recorded per column. For example:

mysql> create table mybig5 (id int not null auto_increment primary key,      
    -> subject varchar(100) ) engine=innodb default charset big5;
Query OK, 0 rows affected (0.81 sec)

mysql> show create table mybig5;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                     |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 `subject` varchar(100) DEFAULT NULL,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=big5 |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> alter table mybig5 default charset utf8;
Query OK, 0 rows affected (0.17 sec)
Records: 0  Duplicates: 0  Warnings: 0

Inserting a multi-byte string that is valid utf8, such as the following, then fails because the column is still big5:

mysql> INSERT INTO mybig5 VALUES (NULL, UNHEX('E7BB8FE79086'));

01:08:19  [INSERT - 0 row(s), 0.000 secs]  [Error Code: 1366, SQL State: HY000]  Incorrect string value: '\xE7\xBB\x8F\xE7\x90\x86' for column 'SUBJECT' at row 1 ... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.000/0.000 sec  [0 successful, 0 warnings, 1 errors]

mysql> show create table mybig5;
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                                        |
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `subject` varchar(100) CHARACTER SET big5 DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+--------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
Notice that the 'subject' column keeps the original character set definition, so inserting data can result in the error above when the character sets do not match.
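A quick way to see what those hex bytes are: they are the UTF-8 encoding of a two-character CJK string, so they only make sense to a utf8 column. A Python sketch (illustrative; MySQL's own validation happens server-side, as the error above shows):

```python
raw = bytes.fromhex("E7BB8FE79086")

# Valid UTF-8: three bytes per character, two characters in total
text = raw.decode("utf-8")
print(len(raw), len(text))  # → 6 2

# The big5 column rejected these bytes with "Incorrect string value";
# attempting a big5 decode locally shows the mismatch too.
try:
    raw.decode("big5")
    print("decoded as big5")
except UnicodeDecodeError:
    print("not valid big5")
```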

2. ALTER TABLE tablename CONVERT TO CHARACTER SET utf8;

This will change all the columns to the new character set and change the table as well. So you will end up with the required definition of:

mysql> show create table mybig5;
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table  | Create Table                                                                                                                                                     |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| mybig5 | CREATE TABLE `mybig5` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `subject` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

So if you see the incorrect string error on a table, check that the columns are not under a different character set to the default. Look at using the CONVERT clause to avoid the issue, but also be aware that certain tables may actually require different character sets for different columns.

I am speaking at MySQL Connect 2013


I open this blog to announce that I will be speaking at MySQL Connect in 2 weeks.

I will present a conference session:

and a tutorial session:

I am very happy to be part of this great event and to be able to meet the MySQL Community, our customers and my colleagues there. Looking forward to seeing you!

It is not too late to register!


Inserting NULLs into NOT NULL columns in 5.6: refused by default


MySQL 5.6 ships with a default config file that sets the SQL mode to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. Here is what happens if you try to insert NULL values into a table with NOT NULL columns:

mysql> create table safetyfirst(
    -> id int primary key not null auto_increment,
    -> country varchar(60) NOT NULL,
    -> product varchar(60) NOT NULL );
Query OK, 0 rows affected (0.24 sec)

mysql> insert into safetyfirst(country) values('Sweden');
ERROR 1364 (HY000): Field 'product' doesn't have a default value

If someone tells you that MySQL 5.6 by default allows you to do this, ask them to prove it using the default settings we use for new installations, and check their claim by asking them for the output of SHOW VARIABLES LIKE 'sql%';.

We would like to use NO_ZERO_DATE, NO_ZERO_IN_DATE, NO_AUTO_VALUE_ON_ZERO, and ERROR_FOR_DIVISION_BY_ZERO as well, but we know that many web applications rely on some of the behaviors these modes disallow, so we did not enable them for 5.6. If you are doing new application development or provide a development framework, please strongly consider using these as well.

(2013-09-18 rewrote unclear last paragraph)

Thanks For Attending MySQL Connect


MySQL Connect 2013 was held this past Saturday through Monday, and I would like to extend a big thank you to everyone who attended my sessions, talked to me, or otherwise took part in the conference.

I had two sessions and also participated in a Birds of a Feather session with the Community and Support teams. The slides have been uploaded to the Content Catalog but are not available for download from there yet. Until then you can download them from the links below:

The ps_helper views, procedures, and functions used in the above presentations can be downloaded from https://github.com/MarkLeith/dbahelper:

git clone https://github.com/MarkLeith/dbahelper

For ps_tools, I will follow up on this site with more information, although some of the tools can be found in Easier Overview of Current Performance Schema Setting. Note: the presentation uses the naming convention that Performance Schema tools are prefixed with ps_ – that was not the case in the above blog, so e.g. ps_setup_tree_consumers is called setup_tree_consumers in the blog.

And again: thanks for attending MySQL Connect 2013 – hope to see you again next year.
