Unrelated tables and time intervals
Tables (bases, views & query results) represent relation(ship)s/associations. FK (foreign key) constraints are sometimes called "relation(ship)s" but are not. They are statements of fact. They say that subrows appear elsewhere as a PK (primary key) or UNIQUE; that entities participate elsewhere once. Table meanings are necessary & sufficient to query. Constraints--including PKs, UNIQUE, NOT NULL, CHECK & FKs--are neither necessary nor sufficient to query. They are for integrity to be enforced by the DBMS. (But when constraints hold, additional queries return the same results as queries that don't assume constraints.)
Declare constraints when they hold & are not implied by constraints already declared, and don't declare them when they don't hold or are implied by constraints already declared.
Re querying & constraints.
How do I maintain multi-table integrity across multiple selects in SQLAlchemy in Pyramid?
Using the serializable transaction isolation level should prevent both problems. If one transaction modifies data that can affect the results of previous reads in another transaction, there is a serialization conflict. Only one transaction wins; all others are aborted by the database, to be restarted by the client. SQLite does this by locking the whole database; PostgreSQL employs a much more complex mechanism (see the docs for details). Unfortunately, there is no portable SQLAlchemy way to catch a serialization anomaly and retry. You need to write DB-specific code to reliably distinguish it from other errors.
I've put up a sample program with two threads concurrently modifying the data (a very basic reproduction of your scheme), running into conflicts and retrying:
https://gist.github.com/khayrov/6291557
With the Pyramid transaction middleware and the Zope transaction manager in use, this is even easier. After catching the serialization error, instead of retrying manually, raise TransientError, and the middleware will retry the whole request up to tm.attempts (in the paster config) times.
from transaction.interfaces import TransientError

class SerializationConflictError(TransientError):
    def __init__(self, orig):
        self.orig = orig
You can even write your own middleware sitting below pyramid_tm
in the stack that will catch serialization errors and translate them to transient errors transparently.
import sqlite3

from sqlalchemy.exc import DBAPIError

def retry_serializable_tween_factory(handler, registry):
    def retry_tween(request):
        try:
            return handler(request)
        except DBAPIError as e:
            orig = e.orig
            if getattr(orig, 'pgcode', None) == '40001':
                raise SerializationConflictError(e)
            elif isinstance(orig, sqlite3.DatabaseError) and \
                    orig.args == ('database is locked',):
                raise SerializationConflictError(e)
            else:
                raise
    return retry_tween
Select unrelated columns from two unrelated tables
You could do this:
INSERT INTO Invoice (
    Id,
    CustomerName,
    ProductName
    )
SELECT
    :InvoiceId,
    (
    SELECT Customer.Name
    FROM Customer
    WHERE Customer.CustomerId = :CustomerId
    ),
    (
    SELECT Product.Name
    FROM Product
    WHERE Product.ProductId = :ProductId
    )
FROM RDB$DATABASE
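RDB$DATABASE is Firebird's one-row system table, used only because Firebird requires a FROM clause; most other DBMSs let you write the SELECT without one. A runnable sketch of the same two-scalar-subquery pattern against SQLite (table contents are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Customer (CustomerId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Product  (ProductId  INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Invoice  (Id INTEGER PRIMARY KEY, CustomerName TEXT, ProductName TEXT);
INSERT INTO Customer VALUES (7, 'Acme Corp');
INSERT INTO Product  VALUES (3, 'Widget');
""")

# One INSERT pulling one column each from two unrelated tables,
# via independent scalar subqueries (no join between them).
db.execute("""
INSERT INTO Invoice (Id, CustomerName, ProductName)
SELECT :InvoiceId,
       (SELECT Name FROM Customer WHERE CustomerId = :CustomerId),
       (SELECT Name FROM Product  WHERE ProductId  = :ProductId)
""", {"InvoiceId": 1, "CustomerId": 7, "ProductId": 3})

print(db.execute("SELECT * FROM Invoice").fetchone())   # (1, 'Acme Corp', 'Widget')
```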
How to model this relation in a relational database?
Here is a possible solution. The basic idea is to give Contract and Employee composite primary keys, each formed by the primary key of the client plus a numeric value used to distinguish between the different contracts and employees “inside” a client.
Client(ClientId, ClientData),
primary key ClientId
Contract(ClientId, ContrNum, ContractData)
primary key (ClientId, ContrNum)
ClientId foreign key for Client
Employee(ClientId, EmpNum, EmplData)
primary key (ClientId, EmpNum)
ClientId foreign key for Client
EmployeeContract(ClientId, EmpNum, ContrNum)
primary key (ClientId, EmpNum, ContrNum)
(ClientId, EmpNum) foreign key for Employee
(ClientId, ContrNum) foreign key for Contract
In this way the consistency among the data is maintained at the database level through the different foreign keys: when you insert a new record for EmployeeContract this will surely be an employee working for the same client for which the contract is stipulated.
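A runnable sketch of this schema, using SQLite via Python as a stand-in for whatever DBMS is in use (table and column names follow the model above; the inserted data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs per connection
conn.executescript("""
CREATE TABLE Client (
    ClientId   INTEGER PRIMARY KEY,
    ClientData TEXT
);
CREATE TABLE Contract (
    ClientId     INTEGER NOT NULL REFERENCES Client (ClientId),
    ContrNum     INTEGER NOT NULL,
    ContractData TEXT,
    PRIMARY KEY (ClientId, ContrNum)
);
CREATE TABLE Employee (
    ClientId INTEGER NOT NULL REFERENCES Client (ClientId),
    EmpNum   INTEGER NOT NULL,
    EmplData TEXT,
    PRIMARY KEY (ClientId, EmpNum)
);
CREATE TABLE EmployeeContract (
    ClientId INTEGER NOT NULL,
    EmpNum   INTEGER NOT NULL,
    ContrNum INTEGER NOT NULL,
    PRIMARY KEY (ClientId, EmpNum, ContrNum),
    FOREIGN KEY (ClientId, EmpNum)   REFERENCES Employee (ClientId, EmpNum),
    FOREIGN KEY (ClientId, ContrNum) REFERENCES Contract (ClientId, ContrNum)
);
INSERT INTO Client VALUES (1, 'client 1'), (2, 'client 2');
INSERT INTO Contract VALUES (1, 1, 'contract of client 1'),
                            (2, 1, 'contract of client 2');
INSERT INTO Employee VALUES (1, 1, 'employee of client 1');
INSERT INTO EmployeeContract VALUES (1, 1, 1);   -- same client throughout: ok
""")

try:
    # There is no way to pair client 1's employee with client 2's contract:
    # the row carries a single ClientId, so both FKs point into the same client.
    conn.execute("INSERT INTO EmployeeContract VALUES (2, 1, 1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```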
How to Implement Referential Integrity in Subtypes
None of that is necessary, especially not the doubling up the tables.
Introduction
Since the Standard for Modelling Relational Databases (IDEF1X) has been in common use for over 25 years (at least in the high quality, high performance end of the market), I use that terminology. Date & Darwen, consistent with the great work they have done to suppress the Relational Model, were unaware of IDEF1X until I brought it to their attention in 2009, and thus have new terminology for the Standard terminology that we have been using for decades. Further, the new terminology does not deal with all the cases, as IDEF1X does. Therefore I use the established Standard terminology, and avoid the new terminology.
Even the concept of a "distributed key" fails to recognise the underlying ordinary PK::FK Relations, their implementation in SQL, and their power.
The Relational, and therefore IDEF1X, concept is Identifiers and Migration thereof.
Sure, the vendors are not exactly on the ball, and they have weird things such as "partial Indices", etc, which are completely unnecessary when the basics are understood. But famous “academics” and “theoreticians” coming up with incomplete new concepts when the concept was standardised and given full treatment 25 years ago ... that is unexpected and unacceptable.
Caveat
IEC/ISO/ANSI SQL barely handles Codd’s 3NF (Date & Darwen’s “5NF”) adequately, and it does not support Basetype-Subtype structures at all; there are no Declarative Constraints for this (and there should be).
- Therefore, in order to enforce the full set of Rules expressed in the Data Model, both Basetype::Subtype and Subtype::Basetype, we have to fiddle a little with CHECK CONSTRAINTs, etc (I avoid using Triggers for a number of reasons).
Relief
However, I take all that into account. In order for me to effectively provide a Data Modelling service on Stack Overflow, without having to preface that with a full discourse, I purposely provide models that can be implemented by capable people, using existing SQL and existing Constraints, to whatever extent they require. It is already simplified, and contains the common level of enforcement.
We can use both the example graphic in the linked document and your fully IDEF1X-compliant Sensor Data Model.
Readers who are not familiar with the Relational Modelling Standard may find IDEF1X Notation useful. Readers who think a database can be mapped to objects, classes, and subclasses are advised that reading further may cause injury. This is further than Fowler and Ambler have read.
Implementation of Referential Integrity for Basetype-Subtype
There are two types of Basetype-Subtype structures.
Exclusive Subtype
Exclusive means there must be one and only one Subtype row for each Basetype row. In IDEF1X terms, there should be a Discriminator column in the Basetype, which identifies the Subtype row that exists for it.
For more than two Subtypes, this is demanded, and I implement a Discriminator column.
For two Subtypes, since this is easily derived from existing data (eg. Sensor.IsSwitch is the Discriminator for Reading), I do not model an additional explicit Discriminator column for Reading. However, you are free to follow the Standard to the letter and implement a Discriminator.
I will take each aspect in detail.
The Discriminator column needs a CHECK CONSTRAINT to ensure it is within the range of values, eg: IN ("B", "C", "D"). IsSwitch is a BIT, which is 0 or 1, so that is already constrained.

Since the PK of the Basetype defines its uniqueness, only one Basetype row will be allowed; no second Basetype row (and thus no second Subtype row) can be inserted. Therefore it is overkill, completely redundant, an additional unnecessary Index, to implement an Index such as (PK, Discriminator) in the Basetype, as your link advises. The uniqueness is in the PK, and therefore the PK plus anything will be unique.
IDEF1X does not require the Discriminator in the Subtype tables. In the Subtype, which is again constrained by the uniqueness of its PK, as per the model, if the Discriminator were implemented as a column in that table, every row in it would have the same value for the Discriminator (every Book will be "B"; every ReadingSwitch will be an IsSwitch). Therefore it is absurd to implement the Discriminator as a column in the Subtype. And again, it is completely redundant, an additional unnecessary Index, to implement an Index such as (PK, Discriminator) in the Subtype: the uniqueness is in the PK, and therefore the PK plus anything will be unique.

The method identified in the link is a ham-fisted and bloated (massive data duplication for no purpose) way of implementing Referential Integrity. There is probably a good reason the author has not seen that construct anywhere else. It is a basic failure to understand SQL and to use it effectively. These "solutions" are typical of people who follow a dogma "SQL can't do ..." and thus are blind to what SQL can do. The horrors that result from Fowler and Ambler's blind "methods" are even worse.
- The Subtype PK is also the FK to the Basetype, that is all that is required, to ensure that the Subtype does not exist without a parent Basetype.
- Therefore for any given PK, whichever Basetype-Subtype is inserted first will succeed; and whichever Basetype-Subtype is attempted after that will fail. Therefore there is nothing to worry about in the Subtype table (a second Basetype row or a second Subtype row for the same PK is prevented).
- The SQL CHECK CONSTRAINT is limited to checking the inserted row. We need to check the inserted row against other rows, either in the same table, or in another table. Therefore a 'User Defined' Function is required.

Write a simple UDF that will check for existence of the PK and the Discriminator in the Basetype, and return 1 if EXISTS or 0 if NOT EXISTS. You will need one UDF per Basetype (not per Subtype).

In the Subtype, implement a CHECK CONSTRAINT that calls the UDF, using the PK (which is both the Basetype PK and the Subtype PK) and the Discriminator value.

I have implemented this in scores of large, real world databases, on different SQL platforms. Here is the 'User Defined' Function Code, and the DDL Code for the objects it is based on.
This particular syntax and code is tested on Sybase ASE 15.0.2 (they are very conservative about SQL Standards compliance).
I am aware that the limitations on 'User Defined' Functions are different for every SQL platform. However, this is the simplest of the simple, and AFAIK every platform allows this construct. (No idea what the Non-SQLs do.)
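The 'User Defined' Function Code referred to above is linked rather than inlined, so here is a rough transcription of the idea into SQLite via Python, with all names illustrative. An application-defined function stands in for the stored UDF; SQLite only accepts a function inside a CHECK CONSTRAINT when it is registered as deterministic, a claim the platform cannot verify, and whether a database-reading function may appear in a CHECK at all varies by platform:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
CREATE TABLE Sensor (                       -- Basetype
    SensorId INTEGER PRIMARY KEY,
    IsSwitch INTEGER NOT NULL CHECK (IsSwitch IN (0, 1))   -- Discriminator
);
""")

# One existence-check function per Basetype: does this PK exist in Sensor
# with the given Discriminator value?
def sensor_exists(sensor_id, is_switch):
    row = db.execute(
        "SELECT 1 FROM Sensor WHERE SensorId = ? AND IsSwitch = ?",
        (sensor_id, is_switch)).fetchone()
    return 1 if row else 0

# Registered as deterministic so SQLite permits it inside a CHECK constraint.
db.create_function("SensorExists", 2, sensor_exists, deterministic=True)

db.executescript("""
CREATE TABLE ReadingSwitch (                -- Subtype for IsSwitch = 1
    SensorId INTEGER PRIMARY KEY REFERENCES Sensor (SensorId),
    State    INTEGER NOT NULL,
    CHECK (SensorExists(SensorId, 1) = 1)   -- PK + Discriminator must match
);
""")

db.execute("INSERT INTO Sensor VALUES (1, 1)")           # a switch sensor
db.execute("INSERT INTO ReadingSwitch VALUES (1, 0)")    # accepted

db.execute("INSERT INTO Sensor VALUES (2, 0)")           # not a switch
try:
    db.execute("INSERT INTO ReadingSwitch VALUES (2, 1)")  # wrong Subtype
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The second ReadingSwitch insert fails because PK 2 does not exist in the Basetype with Discriminator 1, which is exactly the Exclusive Rule the text describes.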
Yes, of course this clever little technique can be used to implement any non-trivial data rule that you can draw in a Data Model; in particular, to overcome the limitations of SQL. Note my caution to avoid two-way Constraints (circular references).
- Therefore the CHECK CONSTRAINT in the Subtype ensures that the PK plus the correct Discriminator exists in the Basetype. Which means that only that Subtype exists for the Basetype (the PK).
Any subsequent attempt to insert another Subtype (ie. break the Exclusive Rule) will fail because the PK+Discriminator does not exist in the Basetype.
Any subsequent attempt to insert another row of the same Subtype is prevented by the uniqueness of its PK Constraint.
- The only bit that is missing (not mentioned in the link) is that the Rule "every Basetype must have at least one Subtype" is not enforced. This is easily covered in Transactional code (I do not advise Constraints going in two directions, or Triggers); use the right tool for the job.
Non-exclusive Subtype
The Basetype (parent) can host more than one Subtype (child).
- There is no single Subtype to be identified.
The Discriminator does not apply to Non-exclusive Subtypes.
The existence of a Subtype is identified by performing an existence check on the Subtype table, using the Basetype PK.
- Simply exclude the CHECK CONSTRAINT that calls the UDF above.

- The PRIMARY KEY, FOREIGN KEY, and the usual Range CHECK CONSTRAINTs adequately support all requirements for Non-exclusive Subtypes.
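A minimal sketch of a Non-exclusive pair (the Party/PartyCustomer/PartySupplier names are hypothetical; plain PKs and FKs, no Discriminator, no UDF):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
CREATE TABLE Party (PartyId INTEGER PRIMARY KEY);    -- Basetype
CREATE TABLE PartyCustomer (                          -- Subtype
    PartyId INTEGER PRIMARY KEY REFERENCES Party (PartyId));
CREATE TABLE PartySupplier (                          -- Subtype
    PartyId INTEGER PRIMARY KEY REFERENCES Party (PartyId));
INSERT INTO Party VALUES (1);
-- Non-exclusive: the same Basetype row may appear in several Subtypes,
-- and each Subtype PK (also the FK) still prevents duplicates within it.
INSERT INTO PartyCustomer VALUES (1);
INSERT INTO PartySupplier VALUES (1);
""")
```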
Reference
For further detail; a diagrammatic overview including details; and the distinction between Subtypes and Optional Column tables, refer to this Subtype document.
Note
I, too, was taken in by C J Date's and Hugh Darwen's constant references to "furthering" the Relational Model. After many years of interaction, based on the mountain of consistent evidence, I have concluded that their work is in fact, a debasement of it. They have done nothing to further Dr E F Codd's seminal work, the Relational Model, and everything to damage and suppress it.
They have private definitions for Relational terms, which of course severely hinders any communication. They have new terminology for terms we have had since 1970, in order to appear that they have "invented" it.
Response to Comment
This section can be skipped by all readers who did not comment.
Unfortunately, some people are so schooled in doing things the wrong way, at massive additional cost, that even when directed clearly in the right way, they cannot understand it. Perhaps that is why proper education cannot be substituted with a Question-and-Answer format.
Sam:
I’ve noticed that this approach doesn't prevent someone from using UPDATE to change a Basetype's Discriminator value. How could that be prevented? The FOREIGN KEY + duplicate Discriminator column in subtypes approach seems to overcome this.
Yes. This Method doesn't prevent someone using UPDATE to change a Key, or a column in some unrelated table, or headaches, either. It answers a specific question, and nothing else. If you wish to prevent certain DML commands or whatever, use the SQL facility that is designed for that purpose. All that is way beyond the scope of this question. Otherwise every answer would have to address every unrelated issue.
Answer. Since we should be using Open Architecture Standards, available since 1993, all changes to the db are via ACID Transactions, only. That means direct INSERT/UPDATE/DELETE against the tables is prohibited; the data retains Integrity and Consistency (ACID terminology). Otherwise, sure, you have a mess, such as your eg. and its consequences. The proponents of this method do not understand Transactions; they understand only single-file INSERT/UPDATE/DELETE.
Further, the FK + duplicate Discriminator + duplicate Index (and the massive cost therein!) does nothing of the sort; I don't know where you got "seems" from.
dtheodor:
This question is about referential integrity. Referential integrity doesn't mean "check that the reference is valid on insert and then forget about it". It means "maintain the validity of the reference forever". The duplicate discriminator + FK method guarantees this integrity; your UDF approach does not. It's without question that UPDATEs should not break the reference.
The problem here is two-fold. First, you need basic education in other areas regarding Relational Databases and Open Architecture Standards. Again, it is best to open a new question here, so a complete answer to that other area of Relational Databases can be provided.
OK, short answer, that really belongs in another question: How is the Discriminator in Exclusive Subtypes Protected from an Invalid UPDATE?
- Clarity. Yes, Referential Integrity doesn't mean "check that the reference is valid on insert and then forget about it”. I didn’t say that it meant that, either.
Referential Integrity means the References in the database (FOREIGN KEY) have Integrity with the PRIMARY KEY that they reference.

Declarative Referential Integrity means the declared References in the database (CONSTRAINT FOREIGN KEY ... REFERENCES ..., CONSTRAINT CHECK ...) are maintained by the RDBMS platform, and not by the application code.

It does not mean "maintain the validity of the reference forever”, either.
- The original question regards RI for Subtypes, and I have answered it, providing DRI.
- The point that massively inefficient structures and duplicated tables are not required must be emphasised.
Your question does not regard RI or DRI.
Your question, although asked incorrectly (you are expecting the Method to provide what it does not provide, and you do not understand that your requirement is fulfilled by other means), is: How is the Discriminator in Exclusive Subtypes Protected from an Invalid UPDATE?
The answer is: use the Open Architecture Standards that we should have been using since 1993. That prevents all invalid UPDATEs. Do please read the linked documents, and understand them; your concern is a non-issue, it does not exist. That is the short answer.

But you did not understand the short answer, so I will explain it here.
No one is allowed to walk up to the database and change a column here or a value there, using either SQL directly or an app that uses SQL directly. If that were allowed, you would not have a secured database.
All updates (lower case) to the database (including multi-row INSERT/UPDATE/DELETE) are implemented as ACID SQL Transactions. And nothing but Transactions. The set of Transactions constitutes the Database API that is exposed to any application that uses the database.

- SQL has ACID Transactions. Non-SQL databases do not have Transactions. Proponents of these database systems know absolutely nothing about Transactions, let alone Open Architecture. Their Non-architecture is a monolithic stack. And a “database” that gets refactored every month.
Since the only Transactions that you write will insert the basetype+subtype in a single Transaction, as a single Logical Unit of Work, the Integrity (data Integrity, not Referential Integrity) of the basetype::subtype relation is maintained, and maintained within the database. Therefore all updates to the database will be Valid, there will not be any Invalid updates.
Since you are not so stupid as to write code that UPDATEs the Discriminator column in a single row without the attendant DELETE Previous_Subtype, and since you will place it in a Transaction and GRANT EXEC permission for it to user ROLES, there will not be an Invalid Discriminator anywhere in the database.
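A sketch of such a Transaction as a single Logical Unit of Work, using SQLite via Python; the Sensor/Reading/ReadingSwitch names are illustrative stand-ins for a Basetype and its two Exclusive Subtypes, and in a real deployment this would be a stored procedure with GRANT EXEC, not ad hoc application code:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
CREATE TABLE Sensor (
    SensorId INTEGER PRIMARY KEY,
    IsSwitch INTEGER NOT NULL CHECK (IsSwitch IN (0, 1))
);
CREATE TABLE ReadingSwitch (
    SensorId INTEGER PRIMARY KEY REFERENCES Sensor (SensorId));
CREATE TABLE Reading (
    SensorId INTEGER PRIMARY KEY REFERENCES Sensor (SensorId));
INSERT INTO Sensor VALUES (1, 1);
INSERT INTO ReadingSwitch VALUES (1);
""")

def change_to_plain_reading(conn, sensor_id):
    # Single Transaction: delete the old Subtype row, update the
    # Discriminator, insert the new Subtype row -- all three or none.
    with conn:
        conn.execute("DELETE FROM ReadingSwitch WHERE SensorId = ?",
                     (sensor_id,))
        conn.execute("UPDATE Sensor SET IsSwitch = 0 WHERE SensorId = ?",
                     (sensor_id,))
        conn.execute("INSERT INTO Reading VALUES (?)", (sensor_id,))

change_to_plain_reading(db, 1)
```

If any step fails, the `with` block rolls the whole unit back, so the database never holds a Basetype row whose Discriminator disagrees with its Subtype row.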
Uniqueness constraint on secondary relation
You could change Candidate's PK to be a composite of electionId, name, or at least make that combination a unique constraint in Candidate.

Then you would change Vote to be userId, electionId, name, where the PK is userId, electionId and there is a FK pointing to Candidate's electionId, name, which is now unique.
This means that (userId, electionId) is unique in the Vote table, and there is no redundancy left.
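A runnable sketch of that design (SQLite via Python; the sample candidates and votes are made up for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
CREATE TABLE Candidate (
    electionId INTEGER NOT NULL,
    name       TEXT    NOT NULL,
    PRIMARY KEY (electionId, name)
);
CREATE TABLE Vote (
    userId     INTEGER NOT NULL,
    electionId INTEGER NOT NULL,
    name       TEXT    NOT NULL,
    PRIMARY KEY (userId, electionId),          -- one vote per user per election
    FOREIGN KEY (electionId, name) REFERENCES Candidate (electionId, name)
);
INSERT INTO Candidate VALUES (1, 'alice'), (2, 'bob');
INSERT INTO Vote VALUES (100, 1, 'alice');     -- ok: alice runs in election 1
""")

try:
    # bob does not run in election 1, so this reference cannot exist
    db.execute("INSERT INTO Vote VALUES (101, 1, 'bob')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The composite FK is what keeps a vote's electionId and its candidate's electionId in agreement without storing anything twice.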