In my post MySQL 5.5.8 and Percona Server: being adaptive I mentioned that I used innodb-log-block-size=4096 in Percona Server to get better throughput, but later Dimitri, in his article MySQL Performance: Analyzing Percona's TPCC-like Workload on MySQL 5.5, expressed doubt that it really makes sense. Here is a quote from his article:
"Question: what is a potential impact on buffered 7MB/sec writes if we'll use 4K or 512 bytes block size to write to the buffer?.. Image may be NSFW.
Clik here to view. )
There will be near no or no impact at all as all writes are managed by the filesystem, and filesystem will use its own block size.. - Of course the things may change if "innodb_flush_log_at_trx_commit=1" will be used, but it was not a case for the presented tests.."
Well, of course you do not need to believe me; you should demand real numbers. So I have numbers to show you.
I took a Dell PowerEdge R900 server with 32GB of RAM and a FusionIO 320GB MLC card, and ran the tpcc-mysql benchmark with 500 warehouses using Percona Server 5.5.8.
Here is the relevant part of the config I used:
innodb_buffer_pool_size=26G
innodb_data_file_path=ibdata1:10M:autoextend
innodb_file_per_table=1
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=8M
innodb_log_files_in_group=2
innodb_log_file_size=4G
innodb_adaptive_checkpoint=keep_average
innodb_thread_concurrency=0
innodb_flush_method=O_DIRECT
innodb_read_ahead=none
innodb_flush_neighbor_pages=0
innodb_write_io_threads=16
innodb_read_io_threads=16
innodb_io_capacity=2000
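The option under test, innodb_log_block_size, is not in the snippet above. Here is how I would set it in my.cnf for the 4096-byte run (a sketch; the 512-byte run simply omits the line and takes the default). Note that, if I remember correctly, changing this value requires recreating the InnoDB log files.

# my.cnf fragment for the second run only
[mysqld]
innodb_log_block_size = 4096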
I made two runs: one with the default innodb-log-block-size (512 bytes), and another with --innodb-log-block-size=4096. The full benchmark command is: tpcc_start localhost tpcc500 root "" 500 24 10 3600
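For readers unfamiliar with tpcc-mysql, the positional arguments break down as follows (this is my reading of the tool's classic interface; check the README of your version):

# tpcc_start <host> <db> <user> <password> <warehouses> <connections> <rampup_sec> <measure_sec>
tpcc_start localhost tpcc500 root "" 500 24 10 3600
# 500 warehouses, 24 concurrent connections,
# 10 seconds of ramp-up, 3600 seconds (1 hour) of measurement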
[Graph: throughput (NOTPM) over time for both runs]
From the graph you can see that there is quite a significant impact when we use --innodb-log-block-size=4096. The average throughput over the last 15 minutes is 38090.66 NOTPM in the first run and 49130.13 NOTPM in the second, a 1.29x increase (49130.13 / 38090.66 ≈ 1.29), and I can't call that "near no or no impact".
What is the cause of such a difference? I am not really sure; apparently the FusionIO driver is sensitive to IO block size. I know that other SSD/Flash drives also like IO sized in multiples of their internal block size (which is often 4096 bytes), but I do not know whether the effect there is as strong as on FusionIO.
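If you want a rough feel for how sensitive your own device is to IO block size, a crude check is to issue direct writes at the two sizes and compare throughput. This is only an illustration; /mnt/fio is an assumed mount point on the card, and a scratch file is used so no real data is touched:

# ~100MB of direct (unbuffered) sequential writes at each block size
dd if=/dev/zero of=/mnt/fio/ddtest bs=512  count=200000 oflag=direct
dd if=/dev/zero of=/mnt/fio/ddtest bs=4096 count=25000  oflag=direct
# a large gap in the reported MB/s suggests the device or driver
# strongly prefers IO aligned to its internal 4K block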
Here are the CPU usage graphs (user and system) for both cases:
[Graph: USER and SYS CPU utilization, 512-byte vs. 4096-byte log block size]
You can see that with the 4096-byte block size USER and SYS CPU are utilized much more, meaning that IDLE time is much lower.
Is this a contention issue in the FusionIO driver when we issue 512-byte IO? It may be.
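One way to look for this kind of contention while the benchmark runs is to watch the CPU breakdown and per-device request sizes with standard tools (nothing Percona-specific):

# CPU breakdown (us/sy/id/wa) once per second
vmstat 1
# extended per-device stats; avgrq-sz is the average request size
# in 512-byte sectors, %util shows device saturation
iostat -x 1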
I am also not sure what causes the strange hill on the throughput line with 512 bytes, but it is quite repeatable. My blind guess (do not take my word for it, I have no proof) is that, again, something is going on inside the FusionIO driver, but that is a topic for another research.
For the record, the FusionIO card information is:
Found 1 ioDrive in this system
Fusion-io driver version: 2.2.0 build 82
fct0 Attached as 'fioa' (block device)
Fusion-io ioDrive 320GB, Product Number:FS1-002-321-CS SN:10973
ioDIMM3, PN:00119401203, Mfr:004, Date:20091118
Firmware v5.0.5, rev 43674