mysql slow on first query, then fast for related queries
Pages of the innodb data files get cached in the innodb buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads which is mostly what databases see.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
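One way to confirm the buffer pool is what you're seeing is to compare logical page reads against reads that actually went to disk. These status variable names are standard MySQL; run this before and after the "first slow query" and watch the disk-read counter jump:

```sql
-- Logical read requests vs. reads that had to go to disk.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_read_requests -> pages satisfied from the buffer pool
-- Innodb_buffer_pool_reads         -> pages that required a disk read
```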
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps), and very small or short-lived tables). If you don't, the engines have to fight each other for RAM.
Assuming all your tables are InnoDB, let the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then around 12G of your database will fit in RAM, so once the cache is "warmed up", the most-used 12G of your database will be in RAM, where accessing it is nice and fast.
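As a sketch of that 75% rule on a dedicated 16GB server (the size implied by the 12G figure above); adjust the value for your own RAM:

```sql
-- 75% of a hypothetical 16GB dedicated MySQL server.
-- Dynamic since MySQL 5.7.5; on older versions, set
-- innodb_buffer_pool_size in my.cnf and restart the server.
SET GLOBAL innodb_buffer_pool_size = 12 * 1024 * 1024 * 1024;
```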
Some MySQL users "warm up" a production server after a restart by sending it queries copied from another machine (typically a replication slave) for a while before adding it back into the production pool. This avoids the extreme slowness seen while the cache is cold. YouTube, for example, does this (or at least it used to; Google bought them and they may now use Google-fu).
mysql query slow at first fast afterwards
Finally I resorted to splitting the table in two, moving the BLOB column to the second table and joining wherever needed. Unfortunately that involved changing many lines of code.
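A minimal sketch of that split, with hypothetical table and column names (a `posts` table keeping the hot columns, and a `posts_body` table holding only the blob):

```sql
-- Hypothetical names: the rarely-needed BLOB moves to its own table,
-- keyed by the same id as the parent row.
CREATE TABLE posts (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL,
    created DATETIME NOT NULL
) ENGINE=InnoDB;

CREATE TABLE posts_body (
    id   INT UNSIGNED NOT NULL PRIMARY KEY,  -- same value as posts.id
    body LONGBLOB NOT NULL,
    FOREIGN KEY (id) REFERENCES posts(id)
) ENGINE=InnoDB;

-- Join only when the blob is actually needed:
SELECT p.title, b.body
    FROM posts AS p
    JOIN posts_body AS b USING(id)
    WHERE p.id = 123;
```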
Full Text query run slow first time and then fast
FULLTEXT is available in InnoDB; consider migrating.
There are two things that can lead to "first is slow; second is fast":
The first time you run a query, it may need to fetch index and/or data blocks from disk. The second time, those blocks are cached in RAM, therefore much faster.
The "Query cache", if enabled, records queries and their result sets. So, if exactly the same SELECT is run a second time, it can simply look up the result that was previously computed.
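You can check whether the Query cache is involved (note that it was removed entirely in MySQL 8.0); `some_table` is a placeholder:

```sql
-- Is the Query cache on, and is it being hit?
SHOW VARIABLES LIKE 'query_cache%';   -- query_cache_type, query_cache_size, ...
SHOW GLOBAL STATUS LIKE 'Qcache%';    -- Qcache_hits, Qcache_inserts, ...

-- Bypass it for one statement when timing "cold" performance:
SELECT SQL_NO_CACHE COUNT(*) FROM some_table;
```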
MySQL slow query when combining two very fast queries
IN ( SELECT ... ) is poorly optimized, at least in older versions of MySQL. What version are you using?
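A common workaround on those older versions, sketched here with hypothetical tables `t` and `u`, is to turn the IN ( SELECT ... ) into a JOIN against a derived table, which materializes the id list once:

```sql
-- Slow form on older MySQL: the subquery may be re-evaluated per row.
SELECT t.*
    FROM t
    WHERE t.id IN ( SELECT u.t_id FROM u WHERE u.flag = 1 );

-- Usually faster rewrite: build the id list once, then join.
SELECT t.*
    FROM t
    JOIN ( SELECT DISTINCT u.t_id
               FROM u
               WHERE u.flag = 1 ) AS ids
        ON t.id = ids.t_id;
```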
When using a FULLTEXT index (MATCH ...), that part is performed first, if possible. This is because the FT lookup is nearly always faster than whatever else is going on.
But when using two fulltext queries, it picks one, then can't use fulltext on the other.
Here's one possible workaround:
- Have an extra table for searches. It includes both Name and Locations.
- Have FULLTEXT(Name, Locations) on it.
- Search with MATCH (Name, Locations) AGAINST ('+austin +elastic' IN BOOLEAN MODE)
If necessary, AND that with something to verify that it is not, for example, finding a person named 'Austin'.
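A sketch of that extra search table, assuming a Company_Id key (Name and Locations come from the question; the table name and column types are made up):

```sql
-- Hypothetical side table used only for searching.
CREATE TABLE company_search (
    Company_Id INT UNSIGNED NOT NULL PRIMARY KEY,
    Name       VARCHAR(255) NOT NULL,
    Locations  TEXT NOT NULL,
    FULLTEXT(Name, Locations)            -- the combined index described above
) ENGINE=InnoDB;

SELECT Company_Id
    FROM company_search
    WHERE MATCH(Name, Locations)
          AGAINST ('+austin +elastic' IN BOOLEAN MODE);
```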
Another possibility:
5.7 (or 5.6?) might be able to optimize this by creating indexes on the subqueries:
SELECT ...
FROM ( SELECT Company_Id FROM ... MATCH(Name) ... ) AS x
JOIN ( SELECT Company_Id FROM ... MATCH(Locations) ... ) AS y
USING(Company_Id);
Provide the EXPLAIN; I am hoping to see <auto-key> in it.
Test that. If it is 'fast', then you may need to add on another JOIN and/or WHERE. (I am unclear what your ultimate query needs to be.)
SQL query is running very slow. It shows the results in about 23 seconds
These look like "filters", so move them to the WHERE clause and leave just "relation" conditions in the ON clause. (This won't change performance, but will make the query easier to read.)

AND U.uStatus IN('1','3')
AND F.fr_status IN('me', 'flwr', 'subscriber')

Get rid of the FORCE INDEX clauses; they may help today, but hurt tomorrow when the distribution of the data changes.

What is $morePost? I ask because it may be critical to optimizing the performance.

Add these composite indexes:

P: INDEX(post_owner_id, post_id)
F: INDEX(fr_status, fr_two)
U: INDEX(uStatus, iuid)

(When adding a composite index, DROP any index with the same leading columns. That is, when you have both INDEX(a) and INDEX(a,b), toss the former.)

Don't use both DISTINCT and GROUP BY; it probably causes an extra sort over the entire dataset (after the JOINs, but before the LIMIT).

LIMIT 5 without an ORDER BY lets the Optimizer pick whichever 5 rows it likes. Add an ORDER BY if you care which 5 you get.

A common performance problem comes from the mixture of JOIN and GROUP BY. I call it "explode-implode": the JOINs explode the data set into many more rows, only to have the GROUP BY implode it back down to the rows that came from one of the tables. The typical cure is to first select the desired rows from the grouped table (P) in a "derived table", then JOIN to the other tables. (However, I got lost in this query, so I cannot tell whether that applies here.)
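That cure looks roughly like this. Since the full query isn't shown, the select list and join column are guesses (post_owner_id, iuid, and uStatus come from the question; post_count and uName are made up), but the shape is: group P by itself first, then join:

```sql
-- Explode-implode cure (sketch): aggregate the grouped table alone
-- in a derived table, so the JOIN never multiplies the rows being grouped.
SELECT p.post_owner_id, p.post_count, u.uName
    FROM (
        SELECT post_owner_id, COUNT(*) AS post_count
            FROM P
            GROUP BY post_owner_id
    ) AS p
    JOIN U AS u  ON u.iuid = p.post_owner_id
    WHERE u.uStatus IN ('1','3')
    ORDER BY p.post_count DESC
    LIMIT 5;
```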