How Do Uniqueness Constraints as (Comma,Separated,Attributes) Work with Swift in Core Data

Core Data iOS 9: multi-field unique constraint

It is possible to set uniqueness constraints for a combination of attributes. You were on the right path putting both attributes on the same line in the constraints:

[Data Model Editor screenshot: both attributes entered on the same constraint line]

You may have found this had no effect due to a problem with Xcode: the constraints are not actually updated in the model unless you modify some other aspect of the model at the same time (e.g., change an attribute type and then change it back).
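
If you want to confirm that the compiled model actually picked up the constraint (rather than re-running the app and waiting for a save to fail), you can inspect the entity's uniquenessConstraints at runtime. A minimal sketch, assuming the entity is named StoreObject (matching the SQL below) and that you already have the loaded NSManagedObjectModel:

import CoreData

func logUniquenessConstraints(in model: NSManagedObjectModel) {
    guard let entity = model.entitiesByName["StoreObject"] else { return }
    // Each inner array is one constraint; a compound constraint shows up as a
    // single array holding both attribute names, e.g. [["lastName", "firstName"]].
    print(entity.uniquenessConstraints)
}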

If you look at the SQL being generated, the table is created with a constraint:

CREATE TABLE ZSTOREOBJECT ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ...., CONSTRAINT ZLASTNAME_ZFIRSTNAME UNIQUE (ZLASTNAME, ZFIRSTNAME))

and when you insert new values that violate this constraint (when the context is saved):

CoreData: sql: COMMIT
CoreData: sql: BEGIN EXCLUSIVE
CoreData: sql: INSERT INTO ZSTOREOBJECT(Z_PK, Z_ENT, Z_OPT) VALUES(?, ?, ?)
CoreData: sql: UPDATE ZSTOREOBJECT SET ZLASTNAME = ?, ZFIRSTNAME = ? WHERE Z_PK = ?
CoreData: sql: ROLLBACK

and the error returned is:

Error Domain=NSCocoaErrorDomain Code=133021 "(null)" UserInfo={conflictList=(
"NSConstraintConflict (0x7fbd18d33c10) for constraint (\n lastName,\n firstName\n): ....

(This works in Xcode 7.2.1 with the iOS 9.2 simulator; I haven't checked earlier versions.)

How to prevent to-many relationship NSManagedObject duplicates on overwrite merge with Swift in CoreData?

Well, I can assure you that neither knowledge of Objective-C nor Apple's (non-existent) documentation on subclassing NSMergePolicy would have helped you figure this one out :)

I confirmed in my own little demo project that Core Data's Uniqueness Constraints do not play as one would expect with Core Data's Cascade Delete Rule. As you reported, in your case, you just keep getting more and more Address objects.

The following code solves the problem of duplicated Address objects in my demo project. However, its complexity makes one wonder whether it would not be better to forego Core Data's uniqueness constraint and write your own old-school uniquifying code instead. I suppose that might perform worse, but you never know.

When de-duplicating Address objects, one could keep either the existing objects in the persistent store or make new ones. It should not matter, if indeed all attributes are equal. The following code keeps the existing objects. That has the aesthetically pleasing effect of not growing the "p" suffixes in the object identifier string representations. They remain as "p1", "p2", "p3", etc.

When you create your persistent container, in the loadPersistentStores() completion handler, you assign your custom merge policy to the managed object context like this:

container.loadPersistentStores(completionHandler: { (storeDescription, error) in
    container.viewContext.mergePolicy = MyMergePolicy(merge: .overwriteMergePolicyType)
    ...
})

Finally, here is your custom merge policy. The Client objects in the merge conflicts passed to resolve(constraintConflicts:) have their new Address objects. The override removes these, and then invokes one of Core Data's standard merge policies, which appends the existing Address objects, as desired.

class MyMergePolicy: NSMergePolicy {

    override func resolve(constraintConflicts list: [NSConstraintConflict]) throws {
        for conflict in list {
            for object in conflict.conflictingObjects {
                if let client = object as? Client {
                    if let addresses = client.addresses {
                        for object in addresses {
                            if let address = object as? Address {
                                client.managedObjectContext?.delete(address)
                            }
                        }
                    }
                }
            }
        }

        /* This is kind of like invoking super, except instead of super
           we invoke a singleton in the CoreData framework. Weird. */
        try NSOverwriteMergePolicy.resolve(constraintConflicts: list)

        /* This section is for development verification only. Do not ship. */
        for conflict in list {
            for object in conflict.conflictingObjects {
                if let client = object as? Client {
                    print("Final addresses in Client \(client.identifier) \(client.objectID)")
                    if let addresses = client.addresses {
                        for object in addresses {
                            if let address = object as? Address {
                                print(" Address: \(address.city ?? "nil city") \(address.objectID)")
                            }
                        }
                    }
                }
            }
        }
    }
}

Note that this code builds on the Overwrite merge policy (NSOverwriteMergePolicy). I'm not sure whether one of the property "trump" policies (NSMergeByPropertyObjectTrumpMergePolicy or NSMergeByPropertyStoreTrumpMergePolicy) would be more appropriate.
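
If you want to experiment with that, the changes would be the merge type passed to the initializer and the singleton invoked inside resolve(constraintConflicts:). This is only a sketch; I have not verified this variant against the duplicate-Address scenario:

// Hypothetical variant: base the custom policy on "merge by property, object trump"
// instead of Overwrite. Unverified against the duplicate-Address problem.
container.viewContext.mergePolicy = MyMergePolicy(merge: .mergeByPropertyObjectTrumpMergePolicyType)

// ...and inside resolve(constraintConflicts:), delegate to the matching singleton:
try NSMergeByPropertyObjectTrumpMergePolicy.resolve(constraintConflicts: list)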

I'm pretty sure that's all you need. Let me know if I left anything out.

Defining an array in JavaScript

They do the same thing. Advantages to the [] notation are:

  • It's shorter.
  • If someone does something silly like redefine the Array symbol, it still works.
  • There's no ambiguity when you only define a single entry, whereas when you write new Array(3), if you're used to seeing entries listed in the constructor, you could easily misread that to mean [3], when in fact it creates a new array with a length of 3 and no entries.
  • It may be a tiny little bit faster (depending on JavaScript implementation), because when you say new Array, the interpreter has to go look up the Array symbol, which means traversing all entries in the scope chain until it gets to the global object and finds it, whereas with [] it doesn't need to do that. The odds of that having any tangible real-world impact in normal use cases are low. Still, though...

So there are several good reasons to use [].

Advantages to new Array:

  • You can set the initial length of the array, e.g., var a = new Array(3);

I haven't had any reason to do that in several years (not since learning that arrays aren't really arrays and there's no point trying to pre-allocate them). And if you really want to, you can always do this:

var a = [];
a.length = 3;

