Limiting the Number of Records in a SQLite DB

SQLite: how to limit the number of records

You can do it with a trigger.

Say that your table is this:

CREATE TABLE tablename (
    id INTEGER PRIMARY KEY,
    name TEXT,
    inserted_at TEXT DEFAULT (strftime('%Y-%m-%d %H:%M:%f', 'now'))
);

The column inserted_at will hold the insertion timestamp of each row.

This is not necessary if you declared the column id as:

id INTEGER PRIMARY KEY AUTOINCREMENT

because in that case you can identify the first inserted row by the minimum value of id.

Now create this trigger:

CREATE TRIGGER keep_100_rows AFTER INSERT ON tablename
WHEN (SELECT COUNT(*) FROM tablename) > 100
BEGIN
    DELETE FROM tablename
    WHERE id = (SELECT id FROM tablename ORDER BY inserted_at, id LIMIT 1);
    -- or, if you defined id as AUTOINCREMENT:
    -- WHERE id = (SELECT MIN(id) FROM tablename);
END;

Every time you insert a new row, the trigger checks whether the table has more than 100 rows; if it does, it deletes the oldest (first inserted) row.

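In place of the original demo link, here is the same pattern scaled down to a maximum of 3 rows (demo and keep_3_rows are placeholder names):

CREATE TABLE demo (
    id INTEGER PRIMARY KEY,
    name TEXT,
    inserted_at TEXT DEFAULT (strftime('%Y-%m-%d %H:%M:%f', 'now'))
);

CREATE TRIGGER keep_3_rows AFTER INSERT ON demo
WHEN (SELECT COUNT(*) FROM demo) > 3
BEGIN
    DELETE FROM demo
    WHERE id = (SELECT id FROM demo ORDER BY inserted_at, id LIMIT 1);
END;

-- insert five rows; the trigger fires once per row
INSERT INTO demo (name) VALUES ('a'), ('b'), ('c'), ('d'), ('e');

SELECT name FROM demo;  -- c, d, e: only the 3 most recent rows survive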

Limiting the number of records in a Sqlite DB

You can use an implicit "rowid" column for that.

Assuming you don't also delete rows manually in other ways:

DELETE FROM yourtable WHERE rowid < (last_row_id - 1000)

You can obtain the last rowid using the sqlite3_last_insert_rowid() API function, or as MAX(rowid).
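If you'd rather keep everything in SQL, a minimal sketch of the same cleanup (using yourtable from above and MAX(rowid) instead of the API call):

DELETE FROM yourtable
WHERE rowid < (SELECT MAX(rowid) FROM yourtable) - 1000;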

If you don't need to keep exactly 1000 records (e.g. you just want to clean up old records), it is not necessary to do this on each insert. Add a counter in your program and run the cleanup, for instance, once every 100 inserts.

UPDATE:

Either way, you pay a performance cost on each INSERT or on each SELECT, so the choice depends on which you have more of: INSERTs or SELECTs.

If you don't have enough INSERTs to worry about performance, you can use the following trigger to keep no more than 1000 records:

CREATE TRIGGER triggername AFTER INSERT ON tablename
BEGIN
    DELETE FROM tablename
    WHERE timestamp < (SELECT MIN(timestamp)
                       FROM (SELECT timestamp FROM tablename
                             ORDER BY timestamp DESC LIMIT 1000));
END;

Creating a unique index on the timestamp column should be a good idea too (in case it isn't the PK already). Also note that SQLite supports only FOR EACH ROW triggers, so when you bulk-insert many records it is worth temporarily dropping the trigger and recreating it afterwards.
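A sketch of that bulk-load pattern, assuming the tablename/timestamp schema used above (the literal values are placeholders):

-- SQLite has no "disable trigger", so drop it for the bulk load
DROP TRIGGER triggername;

-- bulk insert without per-row trigger overhead
INSERT INTO tablename (timestamp) VALUES
    ('2024-01-01 00:00:00.000'),
    ('2024-01-01 00:00:01.000');

-- run the cleanup once, then recreate the trigger
DELETE FROM tablename
WHERE timestamp < (SELECT MIN(timestamp)
                   FROM (SELECT timestamp FROM tablename
                         ORDER BY timestamp DESC LIMIT 1000));

CREATE TRIGGER triggername AFTER INSERT ON tablename
BEGIN
    DELETE FROM tablename
    WHERE timestamp < (SELECT MIN(timestamp)
                       FROM (SELECT timestamp FROM tablename
                             ORDER BY timestamp DESC LIMIT 1000));
END;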

If there are too many INSERTs, there isn't much you can do on the database side. You can make the trigger fire less often by adding a condition such as AFTER INSERT ... WHEN NEW.rowid % 100 = 0, and for SELECTs just use LIMIT 1000 (or create an appropriate view).
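For instance (cleanup_every_100 is a hypothetical name; the body is the same cleanup as above, run only on every 100th inserted rowid):

CREATE TRIGGER cleanup_every_100 AFTER INSERT ON tablename
WHEN NEW.rowid % 100 = 0
BEGIN
    DELETE FROM tablename
    WHERE timestamp < (SELECT MIN(timestamp)
                       FROM (SELECT timestamp FROM tablename
                             ORDER BY timestamp DESC LIMIT 1000));
END;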

I can't predict how much faster that would be. The best way is simply to measure the performance gain in your particular case.

Limit an sqlite Table's Maximum Number of Rows

Another solution is to pre-create 100 rows and, instead of INSERT, use UPDATE to overwrite the oldest row.

Assuming that the table has a datetime field (called time here, to match the schema below), the query

UPDATE logtable
SET time = DATETIME('now'), msg = 'new log message'
WHERE time = (SELECT MIN(time) FROM logtable);

can do the job: refreshing the timestamp is what makes the overwritten row the newest one.

Edit: display the last 100 entries

SELECT * FROM logtable
ORDER BY time DESC
LIMIT 100;

Update: here is a way to create 130 "dummy" rows using a join operation:

CREATE TABLE logtable (time TIMESTAMP, msg TEXT);
INSERT INTO logtable DEFAULT VALUES;
INSERT INTO logtable DEFAULT VALUES;
-- insert 2^7 = 128 rows
INSERT INTO logtable SELECT NULL, NULL FROM logtable, logtable, logtable,
logtable, logtable, logtable, logtable;
UPDATE logtable SET time = DATETIME('now');
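To verify the result (the two seed rows plus the 2^7 = 128 rows from the join):

SELECT COUNT(*) FROM logtable;  -- 130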

Maximum number of rows in a sqlite table

In SQLite3 the field size isn't fixed. The engine allocates only as much space as each cell needs.

For the file limits see this SO question:

What are the performance characteristics of sqlite with very large database files?

Limit number of records in a table in SQLite

For small tables you generally don't need explicitly specified keys anyway; by default the table is indexed on rowid.

Thus rowid defines the order in which the records were added.

For each row added, find the rowid of the oldest record:

SELECT rowid FROM TheTable LIMIT 1;

and delete it. Simplicity itself:

DELETE FROM TheTable WHERE rowid IN (SELECT rowid FROM TheTable LIMIT 1);

(Without an ORDER BY this relies on SQLite's default scan order; the ordered version is below.)

Thereby, for each record added at the front end, you remove the first record at the back end.

For tables which do have one or more indices, just ignore them and order by rowid:

DELETE FROM TheTable WHERE rowid IN (SELECT rowid FROM TheTable ORDER BY rowid ASC LIMIT 1);
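Note that deleting one row per insert only holds the table at whatever size it already has. A minimal sketch that trims the table back to a hard cap of 100 rows in a single statement (the MAX(...) guard matters because SQLite treats a negative LIMIT as "no limit"):

DELETE FROM TheTable
WHERE rowid IN (
    SELECT rowid FROM TheTable
    ORDER BY rowid ASC
    LIMIT MAX((SELECT COUNT(*) FROM TheTable) - 100, 0)
);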

Answering this question allowed me to use this technique to alter my own project, to limit the number of files in a "recently used" file list.

Is there a limit to the size of a SQLite database?

This is fairly easy to deduce from the implementation limits page:

An SQLite database file is organized as pages. The size of each page is a power of 2 between 512 and SQLITE_MAX_PAGE_SIZE. The default value for SQLITE_MAX_PAGE_SIZE is 32768.

...

The SQLITE_MAX_PAGE_COUNT parameter, which is normally set to 1073741823, is the maximum number of pages allowed in a single database file. An attempt to insert new data that would cause the database file to grow larger than this will return SQLITE_FULL.

So we have 32768 * 1073741823, which is 35,184,372,056,064 (35 trillion bytes)!

You can modify SQLITE_MAX_PAGE_COUNT or SQLITE_MAX_PAGE_SIZE in the source, but that of course requires a custom build of SQLite for your application. You can also lower the page-count limit at runtime with the max_page_count PRAGMA, without recompiling; as far as I can tell, that setting applies per connection and is not persisted in the database file.
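For example:

PRAGMA page_size;                  -- e.g. 4096
PRAGMA max_page_count;             -- query the current limit
PRAGMA max_page_count = 1000000;   -- cap this database at ~4 GB with 4 KB pages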


