Deleting a LOT of data in Oracle
The first approach is better because it gives the query optimizer a clear picture of what you are trying to do, instead of hiding it. The database engine might take a different approach internally to deleting 5.5M rows (5.5% of the table) than to deleting 200K rows (0.2%).
There is also an article about massive DELETEs in Oracle which you might want to read.
How to delete large amount of data from Oracle table in batches
If you want this query to run faster, add the following two indexes:
create index idx_persons_city_pid on persons(city, p_id);
create index idx_orders_pid on orders(p_id);
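With those indexes in place, the batching itself is usually done in a loop that deletes a limited number of rows per pass and commits between passes, so undo and redo stay bounded. A minimal sketch, assuming the persons/orders schema above; the city value 'SomeCity' and the 10000-row batch size are illustrative only:

```sql
-- Batched delete sketch. 'SomeCity' and the batch size of 10000
-- are hypothetical; adapt both to your data.
BEGIN
  LOOP
    DELETE FROM orders
     WHERE p_id IN (SELECT p_id FROM persons WHERE city = 'SomeCity')
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop when nothing is left to delete
    COMMIT;                      -- release undo between batches
  END LOOP;
  COMMIT;
END;
/
```

Committing between batches trades a single long transaction for many short ones, which avoids exhausting undo space on very large deletes.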
Best way to delete millions of rows in Oracle
You might try:
1. Create a new non-partitioned table
2. Create indexes on it
3. Use a direct-path (APPEND) NOLOGGING insert to add the rows you want to keep
4. Perform a partition exchange
5. Truncate the non-partitioned table
6. Repeat from step (3) for the other partitions
7. Drop the non-partitioned table
8. Take a backup
Note that during the direct-path insert the indexes are maintained by logging the required data into temporary segments, which are then scanned to build the index entries, rather than by full-scanning the table itself.
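One round of the steps above can be sketched as follows; the table, partition, and column names (big_tab, p2023, keep_flag) are all hypothetical stand-ins for your own schema:

```sql
-- One INSERT/EXCHANGE/TRUNCATE round; repeat steps 3-5 per partition.
CREATE TABLE big_tab_keep NOLOGGING
  AS SELECT * FROM big_tab WHERE 1 = 0;           -- 1. empty copy

CREATE INDEX big_tab_keep_ix ON big_tab_keep(id); -- 2. matching index

INSERT /*+ APPEND */ INTO big_tab_keep            -- 3. direct-path insert
SELECT * FROM big_tab PARTITION (p2023)
 WHERE keep_flag = 'Y';                           --    rows to keep
COMMIT;

ALTER TABLE big_tab                               -- 4. swap the segments
  EXCHANGE PARTITION p2023 WITH TABLE big_tab_keep
  INCLUDING INDEXES WITHOUT VALIDATION;

TRUNCATE TABLE big_tab_keep;                      -- 5. ready for next round
```

The exchange is a data-dictionary operation, so it is fast regardless of segment size, but INCLUDING INDEXES requires the standalone table's indexes to match the partition's local indexes.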
Deleting large records in oracle sql
delete from KPI_LOG where TIMESTAMP < SYSDATE - 2;
(Written this way, rather than SYSDATE - TIMESTAMP > 2, the predicate can use an index on the TIMESTAMP column.)
If you are deleting more rows than you are keeping, you could instead do a CTAS, i.e. CREATE TABLE ... AS SELECT, then drop the old table and rename the new one. Make sure you have an index on the TIMESTAMP column.
For example,
CREATE INDEX tmstmp_indx ON KPI_LOG(TIMESTAMP)
/
CREATE TABLE KPI_LOG_NEW
AS
SELECT * FROM KPI_LOG WHERE TIMESTAMP > SYSDATE -2
/
DROP TABLE KPI_LOG
/
ALTER TABLE KPI_LOG_NEW RENAME TO KPI_LOG
/
Make sure you create all the necessary indexes and constraints on the new table.
Deleting rows doesn't reset the high-water mark; with CTAS you get a fresh new table. Full scans therefore don't have to read all the empty blocks below the old high-water mark, which they would after a plain DELETE.
Deleting data from Big Table in Oracle taking longer execution time
Rather a poor design. Try this instead:
DELETE FROM integration_errors
 WHERE err_rec_id IN (SELECT error_log_id
                        FROM integation_log
                       WHERE insert_date < SYSDATE - 180);
DELETE FROM integation_log WHERE insert_date < SYSDATE - 180;
Don't use trunc(insert_date) unless you have a function-based index on it. Disable the foreign-key constraint before you delete the data and re-enable it afterwards.
Consider partitioning; in newer Oracle releases you can also use reference partitioning, so child rows are partitioned along with their parent via the foreign key.
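Wrapping the two deletes in the disable/enable step might look like this; the constraint name fk_err_log is hypothetical, so look up the real one in USER_CONSTRAINTS first:

```sql
-- fk_err_log is a hypothetical constraint name on integration_errors.
ALTER TABLE integration_errors DISABLE CONSTRAINT fk_err_log;

DELETE FROM integration_errors
 WHERE err_rec_id IN (SELECT error_log_id
                        FROM integation_log
                       WHERE insert_date < SYSDATE - 180);
DELETE FROM integation_log
 WHERE insert_date < SYSDATE - 180;
COMMIT;

ALTER TABLE integration_errors ENABLE CONSTRAINT fk_err_log;
```

Note that re-enabling the constraint revalidates every remaining row; on a very large table you may prefer ENABLE NOVALIDATE if you can tolerate trusting the existing data.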