Reducing MySQL query time to less than 0.1 ms per request
About
For any single query, the MariaDB server takes approximately 1 ms per request. As concurrency increases, the query time for each request also increases, until requests time out. So far, it seems only about 2k connections per second are possible for each MySQL server instance, and no amount of config tweaking seems to have any effect. Is there any way to reduce the query time for each client to less than 0.1 ms?
This is the query:
select ID from table where id=1;
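(The table definition was not posted; for this lookup to be a single index probe, id needs to be indexed. A minimal hypothetical stand-in, with the table name t replacing the placeholder above:)

-- Hypothetical schema: with id as the InnoDB primary key,
-- the query above is a single clustered-index point lookup.
CREATE TABLE t (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;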
If it helps, here is the MySQL configuration file:
[client]
port = 3306
socket = /home/user/mysql.sock
[mysqld]
port = 3306
bind-address=127.0.0.1
datadir=/home/user/database
log-error=/home/user/error.log
pid-file=/home/user/mysqld.pid
innodb_file_per_table=1
back_log = 2000
max_connections = 1000000
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 200
query_cache_size = 64M
query_cache_type = 1 #My settings
innodb_io_capacity = 100000
query_cache_limit = 2M
ft_min_word_len = 4
default-storage-engine = innodb
thread_stack = 240K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
binlog_format=mixed
slow_query_log
long_query_time = 2
server-id = 1
key_buffer_size = 32M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_doublewrite = 0
sync_binlog=0
skip_name_resolve
innodb_write_io_threads = 500
innodb_read_io_threads = 500
innodb_thread_concurrency = 1000
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 81920
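(A quick way to confirm which of these values the running server actually applied, and how close it gets to the connection ceiling -- all standard MySQL/MariaDB statements:)

-- Effective values of a few of the settings above:
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'query_cache_type';
-- Connection pressure on the live server:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';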
HW
2x Intel Xeon 2670, 32 GB RAM, 500 GB Samsung 850 EVO SSD
Detour
While it's true that MySQL can do more than 1 million queries per second, the test there used only 250 connected clients.
PS
I am not yet an expert in programming or MySQL, so please bear with me.
mysql performance database-performance query-performance sqlperformance
asked Nov 23 '18 at 21:10 by Dominic Guana, edited Nov 23 '18 at 21:35
This is limited by your hardware and configuration. It won't be easy to go faster with one instance. If you want to accept more queries per second, you'll have to set up a cluster. Simple queries won't run faster, but you can accept more at a time. mysql.com/fr/why-mysql/benchmarks/mysql-cluster
– DanB
Nov 23 '18 at 21:23
What kind of connections do you use: is it the same process each time, or is there a new connection for each query?
– Alex
Nov 23 '18 at 21:27
@DanB It's too bad if that's as far as we can go with the MySQL server. I have an app that can do 60-100k connections/second, but when connected to MySQL it's difficult to manage connections with only one instance. I agree with you, clustering could be my next step. But it would be interesting if there is a relational DB server that can do much better than MySQL server on a single instance.
– Dominic Guana
Nov 23 '18 at 21:31
@Alex It's a connection pool, but I don't manage the pool. If I am not mistaken, the pool is around 16k.
– Dominic Guana
Nov 23 '18 at 21:32
" it took the MariaDB server to do it in approximately 1ms for every request" - that seems quite good....
– Mitch Wheat
Nov 25 '18 at 5:49
1 Answer
Your machine has 4 cores, correct? So, if you run more than 4 CPU-bound processes simultaneously, the CPU will be saturated. This implies that each thread will be interrupted to let other threads run. That is, latency increases.
Is your goal to shrink the time taken by the average query, that is, latency? Then more connections will not help.
If your goal is queries/second, then, again, you will be stopped once the CPUs are saturated. That will probably happen before you get to 8 connections. After the CPU is saturated, throughput (queries/second) will level off even as you increase the number of connections. But, as I already said, latency for individual queries will increase.
If you want to push the machine, do multiple queries in each connection. Otherwise, you are only timing the connection handling. This is not a useful metric.
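For example, here is a minimal sketch of timing many queries over one connection, using a stored procedure so connection setup is excluded (it assumes the hypothetical table t with a primary key on id; the real table name was not posted):

DELIMITER //
CREATE PROCEDURE bench_point_select(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  DECLARE v INT;
  -- Repeat the same point lookup n times on this one connection,
  -- so per-query cost is measured without connection overhead.
  WHILE i < n DO
    SELECT ID INTO v FROM t WHERE id = 1;
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;

-- Time this call (the client prints elapsed time per statement)
-- and divide by n for the per-query latency on one connection.
CALL bench_point_select(100000);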
If you add more servers (via Replication, Clustering, etc), you can run more queries/second. Ditto for more cores. But nothing will decrease the time taken for an individual query by much.
In the settings, max_connections = 1000000 is ludicrous, and may consume a lot of RAM. As I have said, 8 might be all that your benchmarking can handle.
Another setting... Having the Query Cache turned on is deceptive. It speeds up running the identical SELECT if the table in question has not changed. That is, the first run of a query might take 1.0ms; then all subsequent runs of the same query might take 0.1ms. That is not a very exciting finding. Execute a query twice -- this will give you all you can learn, without firing up any benchmark platform.
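A minimal way to observe this from the client (standard status variables; t is the hypothetical table from above):

SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SELECT ID FROM t WHERE id = 1;  -- first run: parsed and executed
SELECT ID FROM t WHERE id = 1;  -- repeat: can be served from the query cache
SHOW GLOBAL STATUS LIKE 'Qcache_hits';  -- counter rises on a cache hit

-- To time the real execution path, bypass the cache explicitly:
SELECT SQL_NO_CACHE ID FROM t WHERE id = 1;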
But most Production machines find the QC to be useless. This is because the data is changing, so the QC is out of date. In fact, the cost of "purging" the QC may make queries run slower!
If you want lots of connections for reading, Replication Slaves can provide an unlimited number of connections. I used to work with a system with 23 Slaves; that gave 23x the connections. Booking.com has systems with well over 100 Slaves. That's how you can look up hotel availability so fast.
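For illustration, attaching a read replica looks roughly like this (a sketch with placeholder host, credentials, and binlog coordinates; the real values come from SHOW MASTER STATUS on the primary):

-- Run on the replica, with placeholder values:
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.2',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
-- Reads can then be spread across any number of replicas,
-- while writes still go to the single primary.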
Please back up and think about what your real goal is. Then we can discuss things further.
answered Nov 25 '18 at 5:38 by Rick James
Hi @Rick James, the access pattern of my application is a unique query each time; you were right, I should definitely turn the cache off, thanks for that! My intention is to increase query speed by decreasing query time, or more specifically, to decrease query time per second per CPU core. Are there any alternatives you can suggest?
– Dominic Guana
Nov 28 '18 at 3:04
@DominicGuana - 1M queries with 250 connections = 4000 qps each = 0.25 ms per query. 0.1 ms is not realistic for a single query from a single connection. I'm still not clear on what the goal is.
– Rick James
Nov 28 '18 at 3:16