SQL Query to Select Million Records Quickly

SQL query to select million records quickly

Use indexes on the table columns your queries filter and join on; this lets the database locate rows quickly instead of scanning the whole table.

Reference:

http://www.tutorialspoint.com/sql/sql-indexes.htm
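
For instance, a minimal sketch (the table and column names here are purely illustrative):

-- Hypothetical example: index the column used in WHERE clauses
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Lookups on the indexed column can then seek instead of scanning:
SELECT order_id, order_date
FROM orders
WHERE customer_id = 42;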

Fastest way to process Millions of Rows in SQL Server for a Chart

I can think of two approaches to improve the performance of the charts:

  1. Trying to improve the performance of the queries.
  2. Reducing the amount of data needed to be read.

It's almost impossible for me to improve the performance of the queries without the full DDL and execution plans, so I suggest reducing the amount of data to be read.

The key is summarizing groups at a given granularity level as the data arrives and storing the results in a separate table like the following:

CREATE TABLE SummarizedData
(
    GroupId int PRIMARY KEY,
    FromDate datetime,
    ToDate datetime,
    SumX float,
    SumY float,
    GroupCount int
);

GroupId should be equal to Id/100 or Id/1000, depending on how much granularity you want in the groups (for example, with Id/1000, rows with Ids 123000 through 123999 all fall into GroupId 123). Larger groups give you coarser granularity but more efficient charts.

I'm assuming the LargeTable Id column increases monotonically, so you can store the last Id that has been processed in another table called SummaryProcessExecutions.

You would need a stored procedure ExecuteSummaryProcess (sketched after the list) that:

  1. Reads LastProcessedId from SummaryProcessExecutions.
  2. Reads the last Id in LargeTable and stores it in a @NewLastProcessedId variable.
  3. Summarizes all rows from LargeTable with Id > @LastProcessedId and Id <= @NewLastProcessedId, storing the results in the SummarizedData table.
  4. Stores @NewLastProcessedId in the SummaryProcessExecutions table.
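
A minimal sketch of such a procedure, assuming LargeTable has X and Y value columns and a CreatedAt datetime column (all illustrative names), and that SummaryProcessExecutions holds a single LastProcessedId row:

CREATE PROCEDURE ExecuteSummaryProcess
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @LastProcessedId int, @NewLastProcessedId int;

    -- 1. Read the last processed Id
    SELECT @LastProcessedId = LastProcessedId
    FROM SummaryProcessExecutions;

    -- 2. Read the last Id currently in LargeTable
    SELECT @NewLastProcessedId = MAX(Id)
    FROM LargeTable;

    -- 3. Summarize the new rows into SummarizedData.
    --    This plain INSERT assumes batch boundaries align with group
    --    boundaries; if a group can span two runs, use MERGE instead.
    INSERT INTO SummarizedData (GroupId, FromDate, ToDate, SumX, SumY, GroupCount)
    SELECT Id / 1000,
           MIN(CreatedAt),
           MAX(CreatedAt),
           SUM(X),
           SUM(Y),
           COUNT(*)
    FROM LargeTable
    WHERE Id > @LastProcessedId
      AND Id <= @NewLastProcessedId
    GROUP BY Id / 1000;

    -- 4. Remember where this run stopped
    UPDATE SummaryProcessExecutions
    SET LastProcessedId = @NewLastProcessedId;
END;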

You can execute the ExecuteSummaryProcess stored procedure frequently from a SQL Server Agent job.
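
A job like that can be created with the msdb scheduling procedures; here is a minimal sketch, where the job name, database name, and the 10-minute schedule are all illustrative:

USE msdb;

EXEC dbo.sp_add_job @job_name = N'SummaryProcess';

EXEC dbo.sp_add_jobstep @job_name = N'SummaryProcess',
     @step_name = N'Run summary',
     @subsystem = N'TSQL',
     @database_name = N'YourDatabase',    -- illustrative
     @command = N'EXEC dbo.ExecuteSummaryProcess;';

EXEC dbo.sp_add_jobschedule @job_name = N'SummaryProcess',
     @name = N'Every10Minutes',
     @freq_type = 4,              -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,       -- repeat every N minutes
     @freq_subday_interval = 10;

EXEC dbo.sp_add_jobserver @job_name = N'SummaryProcess';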

I believe that grouping by date would be a better choice than grouping by Id; it would simplify things. The SummarizedData GroupId column would no longer be related to the LargeTable Id, and since a date bucket is only summarized once it has fully elapsed, you would never need to update SummarizedData rows, only insert new ones.
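
A sketch of what the date-based summarization might look like, again assuming illustrative X, Y, and CreatedAt columns and hourly buckets:

INSERT INTO SummarizedData (GroupId, FromDate, ToDate, SumX, SumY, GroupCount)
SELECT DATEDIFF(hour, '2000-01-01', CreatedAt),   -- hours since an epoch as the bucket key
       MIN(CreatedAt),
       MAX(CreatedAt),
       SUM(X),
       SUM(Y),
       COUNT(*)
FROM LargeTable
WHERE CreatedAt >= @From AND CreatedAt < @To      -- a fully elapsed hour window
GROUP BY DATEDIFF(hour, '2000-01-01', CreatedAt);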

How to speed up a SQL query with over 20 million rows

The main problem, I think, is the 20,000,000 records being returned from the server. That is time-costly, especially if you are querying "big" data types (xml, binary, etc.) and your server is remote with a slow connection.

A minor problem is the DISTINCT: you're performing it over all the records returned to the frontend.

NEVER return the entire dataset to the frontend. Use PAGING instead.

Here is a way to do this:

-- In practice these values would come from the application;
-- they are declared here so the batch runs standalone.
declare @DateTo datetime = '2018-08-01';
declare @page_size int = 25;
declare @page int = 1;

;with [data] as (
    select distinct
        [Location Code]
        ,[Bin Code]
        ,[Item No_]
        ,[Quantity]
        ,[Qty_ (Base)]
        ,[Zone Code]
        ,[Bin Type Code]
        ,[Lot No_]
        ,[Registering Date]
    from [Warehouse Entry]
    where [Registering Date] <= @DateTo
)
select
    [Location Code]
    ,[Bin Code]
    ,[Item No_]
    ,[Quantity]
    ,[Qty_ (Base)]
    ,[Zone Code]
    ,[Bin Type Code]
    ,[Lot No_]
    ,[Registering Date]
from [data]
order by [Registering Date] asc
offset @page_size * (@page - 1) rows
fetch next @page_size rows only;

Speeding up SQL SELECT query for table with 6 million rows

This should be faster if you have the appropriate index:

create index idx_codes_code on codes(code);

If you already have the index, then the issue may be the number of rows being returned. In that case, you can limit the result set to a single row. You don't specify the database; the standard SQL syntax is:

SELECT 1
FROM codes
WHERE code = $value
FETCH FIRST 1 ROW ONLY;

In some databases, this would be handled with SELECT TOP (1) or LIMIT 1.
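
For reference, the two common variants look like this ($value stands in for the parameterized search value, as above):

-- SQL Server: TOP in the select list
SELECT TOP (1) 1
FROM codes
WHERE code = $value;

-- MySQL / PostgreSQL / SQLite: LIMIT after the WHERE clause
SELECT 1
FROM codes
WHERE code = $value
LIMIT 1;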


