Does Limiting a Query to One Record Improve Performance?

Does limiting a query to one record improve performance?

If the column has:

  • a unique index: no, it's no faster.
  • a non-unique index: maybe, because it prevents sending any additional rows beyond the first match, if more exist.
  • no index: sometimes (see the sketch after this list).
      • If 1 or more rows match the query, yes, because the full table scan is halted after the first row is matched.
      • If no rows match the query, no, because it still needs to complete a full table scan.
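
A minimal sketch of the no-index case (users and email are hypothetical names, not from the original question):

SELECT *
FROM users                       -- assume email has no index
WHERE email = 'a@example.com'
LIMIT 1;
-- If a row matches, the table scan stops at the first hit;
-- if nothing matches, the whole table is scanned despite LIMIT 1.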

Does adding 'LIMIT 1' to MySQL queries make them faster when you know there will only be 1 result?

Depending on the query, adding a LIMIT clause can have a huge effect on performance. If you want only one row (or know for a fact that only one row can satisfy the query) and are not sure how the internal optimizer will execute it (for example, a WHERE clause that doesn't hit an index, and so forth), then you should definitely add a LIMIT clause.

As for optimized queries (using indexes on small tables), it probably won't matter much for performance, but again: if you are only interested in one row, then add a LIMIT clause regardless.
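
If you're not sure whether the optimizer will hit an index for a given predicate, EXPLAIN will show you; a small sketch (orders and status are assumed names):

EXPLAIN
SELECT * FROM orders WHERE status = 'new' LIMIT 1;
-- If the plan shows type: ALL (a full table scan), LIMIT 1 lets the
-- scan stop at the first match instead of reading every row.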

MySQL LIMIT syntax and query performance

LIMIT usually saves part of the cost of sending large result sets from the MySQL server to the requesting client. It's good to use LIMIT if you need only a few rows of the result set, rather than fetching everything and skipping the unneeded rows on the client side.
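
As a minimal illustration (articles is an assumed name), let the server cut the result set short instead of filtering on the client:

SELECT id, title
FROM articles
LIMIT 10;   -- only ten rows cross the wire, however large the table is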

There's a notorious performance antipattern using LIMIT. A query like this

SELECT a, whole, mess, of, columns, ...
FROM big_table
JOIN big_tableb ON something
JOIN big_tablec ON something
ORDER BY whole, mess DESC
LIMIT 5

in MySQL wastes server resources (time and RAM). Why? It generates a big result set, then sorts it, then discards all but a few rows.

Another performance antipattern is LIMIT big_number, small_number (that is, a large offset) applied to a complex result set. The server has to romp through a huge number of rows just to return a small number of them.

You can work around these with a deferred join pattern, something like this:

SELECT a, whole, mess, of, columns, ...
FROM (
    SELECT big_table_id
    FROM big_table
    JOIN big_tableb ON something
    JOIN big_tablec ON something
    ORDER BY whole, mess DESC
    LIMIT 200000, 5
) ids
JOIN big_table ON ids.big_table_id = big_table.big_table_id
JOIN big_tableb ON something
JOIN big_tablec ON something

This pattern orders, and then discards, just a narrow set of id values rather than whole rows with a mess of columns.

Using LIMIT really helps performance in situations where the result set is ordered via an index. For example, if you have an index on datestamp and you do

SELECT datestamp, col1, col2
FROM `table`
ORDER BY datestamp DESC
LIMIT 20

the MySQL query planner can scan backwards through the datestamp index and retrieve just twenty rows.
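
A sketch of the index that enables this, reusing the names from the example above (on MySQL 8.0+, EXPLAIN reports this plan as a Backward index scan):

CREATE INDEX idx_datestamp ON `table` (datestamp);
-- ORDER BY datestamp DESC LIMIT 20 can now read the last twenty
-- index entries directly instead of sorting the whole table.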

How much is performance improved when using LIMIT in a SQL sentence?

Assuming both tables are equivalent in terms of indexes, row size, and other structure, and assuming you are running that simple SELECT statement: if your SQL statement has an ORDER BY clause, then obviously the larger table will be slower, but I assume you're not asking about that.

If X = Y, then obviously they should run at a similar speed, since the query engine will go through the records in exactly the same order -- basically a table scan -- for this simple SELECT statement. There will be no difference in the query plan.

If Y is larger than X by only a little, the speed will also be similar.

However, if Y >> X (meaning Y has many, many more rows than X), then the LIMIT version MAY be slower. Not because of the query plan -- again, it should be the same -- but simply because the internal layout of the data may have several more levels. For example, if the data is stored as leaves of a tree, there may be more tree levels, so it may take slightly more time to access the same number of pages.

In other words, 1,000 rows might be stored in one tree level across 10 pages, say, while 1,000,000 rows might be stored in 3-4 tree levels across 10,000 pages. Even when fetching only 10 of those 10,000 pages, the storage engine still has to descend the extra tree levels, which may take slightly longer.

If, on the other hand, the storage engine stores data pages sequentially or as a linked list, there will be no difference in execution speed.

Why does ORDER BY and LIMIT 1 slow down a MySQL query so much?

I think MySQL is trying to use the mailing_CS index in the last query, and that index is not optimal.

Try this query:

SELECT *
FROM `mydata`.`mytable` USE INDEX (section_CS) IGNORE INDEX (mailing_CS)
WHERE (token = 'XFRA1NMDU9XY')
  AND (section = 210874)
ORDER BY mailing
LIMIT 1

You could also add a composite index on (section, mailing) to this table.
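
A minimal sketch of that composite index, reusing the table and column names from the query above:

ALTER TABLE `mydata`.`mytable` ADD INDEX section_mailing (section, mailing);
-- section covers the equality filter and mailing matches the ORDER BY,
-- so the optimizer can stop at the first qualifying row without sorting.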

Why is MySQL slow when using LIMIT in my query?

Indexes do not necessarily improve performance. To better understand what is happening, it would help if you included the EXPLAIN output for the different queries.

My best guess is that you have an index on id_state, or even on (id_state, id_mp), that can be used to satisfy the WHERE clause. If so, the first query, without the ORDER BY, would use this index and should be pretty fast. Even without an index, this requires only a sequential scan of the pages in the orders table, which can still be pretty fast.

Then, when you add the index on creation_date, MySQL decides to use that index for the ORDER BY instead. This requires reading each entry in the index and then fetching the corresponding data page to check the WHERE conditions and return the columns (if there is a match). This reading is highly inefficient, because it happens not in "page" order but in the order specified by the index, and random reads can be quite slow.

Worse, even though you have a LIMIT, you still have to read the entire table, because the entire result set is needed to find the matching rows. Although you have saved a sort of 38 records, you have created a massively inefficient query.

By the way, this situation gets significantly worse if the orders table does not fit in available memory. Then you have a condition called "thrashing", where each new record tends to generate a new I/O read. So, if a page has 100 records on it, the page might have to be read 100 times.

You can make all these queries run faster by having an index on orders(id_state, id_mp, creation_date). The WHERE clause will use the first two columns, and the ORDER BY will use the last.
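
A minimal sketch of that index (the table and column names come from the answer above):

ALTER TABLE orders ADD INDEX idx_state_mp_created (id_state, id_mp, creation_date);
-- The two equality columns come first, so the matching entries are
-- already stored in creation_date order and the ORDER BY needs no sort.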

Speed up LIMIT query with millions of records

I found a duplicate:
Why does MYSQL higher LIMIT offset slow the query down?

So the best solution is:

1) Keep the last id of the previous set of 30 rows (e.g. lastId = 530).
2) Add the condition WHERE id > lastId LIMIT 0, 30, as sketched below.
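
A minimal keyset-pagination sketch of that idea (mytable and id are assumed names, with id an indexed, monotonically increasing key):

-- First page: take the first 30 rows in id order
SELECT * FROM mytable ORDER BY id LIMIT 30;

-- Next page: seek past the last id seen (530 here) instead of using an offset
SELECT * FROM mytable WHERE id > 530 ORDER BY id LIMIT 30;
-- The index on id jumps straight past id = 530, so no rows are counted off.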

Thank you all for your patience

Why does MYSQL higher LIMIT offset slow the query down?

It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and return only the last LIMIT of them). The higher this value is, the longer the query runs.

The query cannot jump straight to OFFSET because, first, the records can be of different lengths and, second, there can be gaps left by deleted records. It needs to check and count each record on its way.

Assuming that id is the primary key of a MyISAM table, or a unique non-primary key field on an InnoDB table, you can speed it up by using this trick:

SELECT t.*
FROM (
    SELECT id
    FROM mytable
    ORDER BY id
    LIMIT 10000, 30
) q
JOIN mytable t ON t.id = q.id

See this article:

  • MySQL ORDER BY / LIMIT performance: late row lookups

