Duration of Data in a Global Temporary Table

Data in a global temporary table can be SESSION based (it survives a commit but not a disconnect/reconnect) or TRANSACTION based (it disappears after a commit).

This creates a transaction based temp table:

create global temporary table temp_table_transaction on commit delete rows ...

This creates a session based temp table:

create global temporary table temp_table_session on commit preserve rows ...
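
A complete, runnable sketch of both variants, assuming a hypothetical single-column layout (the column is illustrative, not part of the original answers):

create global temporary table temp_table_transaction (
    id number
) on commit delete rows;

create global temporary table temp_table_session (
    id number
) on commit preserve rows;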

Oracle 11 global temp table lifespan

Correct.

A global temporary table exists forever, just like any other database object: it is created when you install your application, not at runtime.

If you specify ON COMMIT PRESERVE ROWS, the data persists for the lifetime of the session (and is only visible to the session that inserted it). If you specify ON COMMIT DELETE ROWS, the data persists only for the lifetime of the transaction (and is only visible to the transaction that inserted it).
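
A quick illustration of the difference, assuming the two tables defined in the previous answer:

insert into temp_table_transaction values (1);
insert into temp_table_session values (1);
commit;

select count(*) from temp_table_transaction;  -- 0: rows were removed at commit
select count(*) from temp_table_session;      -- 1: rows survive until the session ends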

Oracle global temporary tables - maximum number of records?

There is no hard limit to the number of records in a global temporary table. Data will eventually be written to disk, so inserts, updates and queries are subject to read/write speed. You would expect performance (i.e. data modification and data access) to be roughly on the same level as a regular table -- a little faster, since less redo is generated.

You can add indexes to a global temporary table to speed up retrieval of a subset of rows (this will obviously slow down inserts and consume more temp space; it is a trade-off).
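
The syntax is the ordinary one; a minimal sketch, assuming the temp_table_session table defined earlier:

create index temp_table_session_idx on temp_table_session (id);  -- the index segment lives in temp space, like the table's rows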

Global temporary table clears on commit

The transaction completes (commits), and each AJAX call requests a new session, so the table has to be loaded again.

For more details, refer to the link below:
https://oracle-concepts-learning.blogspot.com/2019/07/oracle-gtts.html

Local and global temporary tables in SQL Server

I find this explanation quite clear (it is copied verbatim from Technet):

There are two types of temporary tables: local and global. Local temporary tables are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced. Local temporary tables are deleted after the user disconnects from the instance of SQL Server. Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
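
A small T-SQL sketch of the two scopes (the table names are illustrative):

create table #local_temp (id int);    -- one #: visible only to this connection
create table ##global_temp (id int);  -- two ##: visible to all connections

-- #local_temp is dropped when this connection closes;
-- ##global_temp is dropped when the last connection referencing it disconnects.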

Global Temporary Table

Oracle temp tables are NOT like SQL Server #temp tables. I can't see any reason to continuously drop/create the tables. The data is gone on a per-session basis anyway, once the transaction or session completes (depending on the table creation options). If you have multiple threads sharing the same database session, they will see each other's data. If you have one session per thread, then the data is limited in scope as you mentioned; see the sketch below.
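
A sketch of the per-session scoping, assuming the temp_table_session table defined earlier:

-- Session 1:
insert into temp_table_session values (1);
commit;
select count(*) from temp_table_session;  -- 1

-- Session 2 (a different connection):
select count(*) from temp_table_session;  -- 0: session 1's rows are invisible here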

How many records can contain a GLOBAL TEMPORARY TABLE ON COMMIT PRESERVE ROWS?

Two factors will limit the number of rows you can insert: temporary space and undo space.

You can put as much data in a temporary table as there is space in the temporary tablespace. If the temporary tablespace is allowed to grow (with autoextend tempfiles), you will only be limited by disk space. To size it, estimate the size of your rows and allow some extra room for overhead; this gives a rough estimate of the space the temporary tablespace will need.
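
To check the current headroom, something like this (DBA_TEMP_FREE_SPACE is a standard dictionary view; the tempfile path is illustrative):

select tablespace_name, free_space / 1024 / 1024 as free_mb
from dba_temp_free_space;

alter database tempfile '/u01/oradata/DB/temp01.dbf'  -- path is illustrative
    autoextend on next 100m maxsize unlimited;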

A single transaction needs to fit entirely in the undo tablespace. Undo data for inserts is smaller than for other DML, but 80M rows will still produce a LOT of undo. If you're also deleting these rows from some other table, the undo will take roughly the same space as the original rows. You're probably using automatic undo management; just set the tablespace and its datafiles to autoextend and you're good.
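
For example (again, the datafile path is illustrative):

alter database datafile '/u01/oradata/DB/undotbs01.dbf'  -- path is illustrative
    autoextend on next 100m maxsize unlimited;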

If this is a one-shot you may want to reduce the size of both temporary and undo tablespaces once you're done. If you're going to perform this operation on a regular basis, just let the tablespaces grow and leave them there afterwards.


The only real problem with an 80M-row transaction is the looooooong rollback time you may experience if something goes wrong. Deleted rows in particular will make your rollback take a lot longer than the actual deletion did.

While there is nothing fundamentally wrong with Oracle and a large transaction (Oracle will scale), dividing the total work into smaller work units will allow you to restart the process faster and on a smaller subset of data in case of failure.
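
A minimal PL/SQL sketch of batching, using a hypothetical big_table with a processed flag (both names are assumptions for illustration); each committed batch is work you will not have to redo after a failure:

begin
    loop
        delete from big_table          -- big_table / processed are hypothetical
        where processed = 'Y'
        and rownum <= 100000;          -- one batch at a time
        exit when sql%rowcount = 0;    -- stop when nothing is left
        commit;                        -- keeps the undo for each batch small
    end loop;
    commit;
end;
/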


