F# type provider in DAL
If I'm not much mistaken, the context is an instance of DataContext, which isn't thread-safe. None of the data context base classes I'm aware of are thread-safe, and since you want to use them in a web application, you must, at the very least, create one instance per HTTP request; otherwise, the behaviour of your DAL will be defective.
On the other hand, within a single request, it may be worthwhile to reuse the same instance. Therefore, I'd go with this function design:
let insertItem dbContext item =
    dbContext.CreateStoredProc(item.Stuff)
because that would let you create a single dbContext value associated with a single HTTP request and reuse it for multiple database operations: you can pass the same dbContext value to more than one function.
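As a sketch of how this composes, several operations within one request might share a single context. This is only an illustration: getDbContext is the function from the OP, while updateItem is an assumed sibling of insertItem, shown purely to demonstrate reuse.

```fsharp
// Hypothetical sketch: one context per HTTP request, shared across operations.
// getDbContext is the function from the OP; updateItem is an assumed sibling
// of insertItem, shown only to illustrate reuse of the same context.
let handleRequest connectionString newItem changedItem =
    let dbContext = getDbContext connectionString  // one instance per request
    insertItem dbContext newItem                   // both operations reuse the
    updateItem dbContext changedItem               // same non-thread-safe context
```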
If you want to make all of this accessible from C#, it would be easiest on C# client developers if you wrap all the functionality in classes. Assuming the above insertItem function, as well as the getDbContext function from the OP, you could define classes like these:
type ContextShim internal (ctx) =
    member x.InsertItem(item : Item) =
        insertItem ctx item

type ContextFactory() =
    member x.CreateContext connectionString =
        let ctx = getDbContext connectionString
        ContextShim ctx
This will enable a C# client to use a Singleton instance of the ContextFactory class and, for each HTTP request, use its CreateContext method to create an instance of the ContextShim class, and then use the members on the ContextShim instance, such as InsertItem.
F# TypeProviders, how to Change Database?
There is a fundamental flaw in the approach you're trying to take. You want to get a connection string from your application configuration at run time and have the SqlDataConnection type provider work its magic with the underlying database.
But this type provider simply cannot do anything at the run-time stage of the workflow: its job has to be completed at compile time, against a database known at compile time.
Then, you may ask, what is the point of using a type provider if we want our code, once compiled, to work with a database (or databases) configurable at run time?
Fair enough, but we do expect the results of the type provider's work to apply to structurally identical databases, don't we?
So, the way out is to let the type provider do its job against a database compileTimeDB with a connection string compileTimeCC that is literally known at compile time, and to consider all the goodies we get (type checks, Intellisense, ...) as parameterized over a connection string. This connection string parameter value runTimeCC can be set at run time in any desirable way, as long as it points to a database runTimeDB with the same schema as compileTimeDB.
The code below illustrates this fundamental principle:
[<Literal>]
let compileTimeCC = @"Data Source=(localdb)\ProjectsV12;Initial Catalog=compileTimeDB;Integrated Security=True;"
.....
type MySqlConnection = SqlDataConnection<ConnectionString = compileTimeCC>
// Type provider is happy, let it help us writing our DB-related code
.....
let db = MySqlConnection.GetDataContext(runTimeCC)
// at run time db will be the runTimeDB set by connection string runTimeCC, which can be
// anything as long as runTimeDB and compileTimeDB have the same schema
.....
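For completeness, one common way to obtain runTimeCC is from the application configuration. This is only a sketch: the connection-string name "MyDb" is an assumption for illustration.

```fsharp
// Hypothetical sketch: obtain runTimeCC from app.config/web.config at run time.
// The connection-string name "MyDb" is an assumption for illustration.
open System.Configuration

let runTimeCC =
    ConfigurationManager.ConnectionStrings.["MyDb"].ConnectionString
```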
UPDATE: As the question author has made his problem context clearer, I can suggest a more specific recommendation for approaching this with the given TP. As SO answers should be reasonably concise, let's limit consideration to two legacy Person types, OldPersonT1 and OldPersonT2, as data sources, and one contemporary ModernPerson type as the destination. I'm talking types here; there can be as many instances of these as you want around your DB farm.
Now, let's create a single DB at your localdb named myCompileTimeDB and run SQL scripts for creating tables corresponding to OldPersonT1, OldPersonT2, and ModernPerson (it's a one-time exercise and no real data movement is involved). This will be the single source of type info for the SqlDataConnection TP.
Having this ready, let's get back to the code:
type CTSqlConn = SqlDataConnection<ConnectionString = @"Data Source=(LocalDB)\Projectsv12;Initial Catalog=myCompileTimeDB;Integrated Security=True">
type OldPersonT1 = CTSqlConn.ServiceTypes.OldPersonT1 // just for brevity
type OldPersonT2 = CTSqlConn.ServiceTypes.OldPersonT2
type ModernPerson = CTSqlConn.ServiceTypes.ModernPerson
Then, augment each of the legacy types with the following static member (shown below for OldPersonT1 only, for brevity):
type CTSqlConn.ServiceTypes.OldPersonT1 with
    static member MakeModernPersons(rtConn: string) =
        let projection (old: OldPersonT1) =
            // Just a direct copy, but you may be very flexible in spreading verification
            // logic between query, projection, and even the makeModernPersons function
            // that will be processing IQueryable<ModernPerson>
            let mp = ModernPerson()
            mp.Id <- old.Id
            mp.birthDate <- old.birthDate
            mp.firstName <- old.firstName
            mp.lastName <- old.lastName
            mp.dateCreated <- old.dateCreated
            mp
        query {
            for oldPerson in (CTSqlConn.GetDataContext(rtConn)).OldPersonT1 do
            select (projection oldPerson)
        }
Now you can get hold of an IQueryable<ModernPerson> from any data source of type OldPersonT1 by merely evaluating
OldPersonT1.MakeModernPersons("real time connection string to any DB having OldPersonT1 table")
For this to work, the real-time DB need not be identical to the compile-time DB; it just needs to contain everything that OldPersonT1 has and depends upon.
The same holds for OldPersonT2 or any other variation type: by implementing MakeModernPersons once per variation type, you get all data source instances covered.
Dealing with the data destination requires a single function with the signature
let makeModernPersons destinationConnStr (source: IQueryable<ModernPerson>) =
    ...
that now covers all possible combinations of Person data sources and destinations just by manipulating the values of two real-time connection strings.
This is a very rough cut, but the idea should be clear.
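A minimal sketch of what that destination function might look like, assuming the destination schema also matches the compile-time DB so the same CTSqlConn context type can be reused; the ModernPerson table property name and the use of LINQ-to-SQL's InsertAllOnSubmit/SubmitChanges are assumptions:

```fsharp
// Hypothetical sketch: persist the projected rows into the destination DB.
// Assumes the destination schema also matches the compile-time DB, so the
// same CTSqlConn context type can be reused with a run-time connection string.
open System.Linq

let makeModernPersons destinationConnStr (source: IQueryable<ModernPerson>) =
    let db = CTSqlConn.GetDataContext(destinationConnStr)
    // Materialize on the client, then stage the inserts on the destination context
    db.ModernPerson.InsertAllOnSubmit(source.AsEnumerable())
    db.DataContext.SubmitChanges()
```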
Type provider: How to regenerate?
You're right - this seems to be quite tricky. I'm using the SqlDataConnection type provider in a script file, and the only way to update the schema that I've found so far is to make some minor (irrelevant) change in the connection string. For example, add a space after the = of one of the parameters:
[<Generate>]
type Northwind = TypeProviders.SqlDataConnection
<"data source=.\\sqlexpress;initial catalog=Northwind;integrated security=True">
[<Generate>]
type Northwind = TypeProviders.SqlDataConnection
<"data source=.\\sqlexpress;initial catalog=Northwind;integrated security= True">
// ^ here
The schema seems to be cached using the connection string as the key, so if you change it back, you get the old schema again. I guess this is probably a bug, so adding whitespace is a possible workaround.
There is also a ForceUpdate parameter, but it doesn't seem to have any effect, and the documentation doesn't say much about it.
F# Data Type Provider - Create with string variable
The parameter to the CSV type provider needs to be a constant, so that the types can be generated at compile time (without actually evaluating the program). However, you can load a different file with the same schema at runtime.
So, the best way to handle this is to copy one of your actual inputs to some well-known location (e.g. sample.csv in the same directory as your script) and then use the actual path at runtime:
// Generate type based on a statically known sample file
type GenomeFile = CsvProvider<"sample.csv">
// Load the data from the actual input at runtime
let actualData = GenomeFile.Load(fstFile.FullName.ToString())
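The loaded data can then be consumed with full type checking. Purely for illustration, the column names Chromosome and Position below are assumptions; the provider derives the real row properties from sample.csv's header row.

```fsharp
// Hypothetical sketch - the column names Chromosome and Position are assumptions;
// CsvProvider derives the real row properties from sample.csv's header row.
for row in actualData.Rows do
    printfn "%s at position %d" row.Chromosome row.Position
```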
F# type providers vs C# interfaces + Entity Framework
How can I easily manage database up / down migrations in F# world? And, to start from, what is the proper way to actually do the database migrations in F# world when many developers are involved?
The most natural way to manage DB migrations is to use tools native to the database, i.e. plain SQL. On our team we use the dbup package; for every solution we create a small console project to roll up DB migrations in dev and during deployment. Consumer apps are in both F# (type providers) and C# (EF), sometimes against the same database. Works like a charm.
You mentioned EF Code First. F# SQL type providers are all inherently "DB First" because they generate types based on an external data source (the database), not the other way around. I don't think mixing the two approaches is a good idea. In fact, I wouldn't recommend EF Code First to anyone for managing migrations: plain SQL is simpler, doesn't require "extensive shaman dancing", is infinitely more flexible, and is understood by far more people.
If you are uncomfortable with manual SQL scripting and are considering EF Code First just for automatic generation of migration scripts, then even the MS SQL Server Management Studio designer can generate migration scripts for you.
What is the F# way to achieve "the best of C# world" as described above: when I update F# type Person and then fix all places where I need to add / remove properties to the record, what would be the most appropriate F# way to "fail" either at compile time or at least at test time when I have not updated the database to match the business object(s)?
My recipe is as follows:
- Don't use interfaces. As you said, it just does not feel the F# way :)
- Don't let autogenerated types from the type provider leak outside a thin DB access layer. They are not business objects - and neither are EF entities, as a matter of fact.
- Instead, declare F# records and/or discriminated unions as your domain objects. Model them as you please and don't feel constrained by the db schema.
- In the DB access layer, map from the autogenerated db types to your domain F# types. Every usage of the types autogenerated by the Type Provider begins and ends here. Yes, it means you have to write mappings manually and introduce a human factor - e.g. you can accidentally map FirstName to LastName. In practice it's a tiny overhead, and the benefits of decoupling outweigh it by a magnitude.
- How to make sure you don't forget to map some property? You can't forget: the F# compiler will emit an error if a record is not fully initialized.
- How to add a new property and not forget to initialize it? Start with the F# code: add the new property to the domain record(s); the F# compiler will guide you to all record instantiations (usually just one) and force you to initialize it with something (you will then have to add a migration script / upgrade the database schema accordingly).
- How to remove a property and not forget to clean up everything, down to the db schema? Start from the other end: delete the column from the database. All mappings between type provider types and domain F# records will break and highlight the properties that became redundant (more importantly, this will force you to double-check that they are really redundant and reconsider your decision).
- In fact, in some scenarios you may want to preserve the database column (e.g. for historical/audit purposes) and only remove the property from the F# code. That's just one (rather rare) example of the many scenarios where it's convenient to have the domain model decoupled from the db schema.
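The mapping step of this recipe can be sketched self-contained. Here DbPerson stands in for a type-provider-generated entity; all names are assumptions for illustration.

```fsharp
// Stand-in for a type-provider-generated entity (mutable, DB-shaped).
type DbPerson() =
    member val Id = 0 with get, set
    member val FirstName = "" with get, set
    member val LastName = "" with get, set

// Domain type: an immutable F# record, modeled independently of the schema.
type Person =
    { Id: int
      FullName: string }

// The mapping lives only in the DB access layer. If a field is added to
// Person, this record expression stops compiling until it is initialized.
let toDomain (db: DbPerson) : Person =
    { Id = db.Id
      FullName = sprintf "%s %s" db.FirstName db.LastName }
```

Adding a field to Person breaks toDomain at compile time, which is exactly the "fail early" behaviour asked about.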
In Short
- migrations via plain SQL
- domain types are manually declared F# records
- manual mapping from Type Providers to F# domain types
Even Shorter
Stick with Single Responsibility Principle and enjoy the benefits.
FSharp.Data type providers and reflection: how do I examine the properties of an XmlProvider type?
The XML type provider is an erasing type provider: all the objects that represent XML elements become values of the same type, FSharp.Data.Runtime.BaseTypes.XmlElement, in the compiled code. The provided properties are erased and replaced with code that accesses the property value via a name lookup.
This means that reflection will never be able to see the provided properties. The only way to get those is to access the underlying XElement and use it directly. For example, to get the child elements, you can write:
[ for e in firstObject.XElement.Elements() -> e.Name.LocalName ]
On the first element from your sample, this returns the list ["color"; "shape"; "children"].
How do I get an F# fsx script to re-execute and re-pull SQL data each time it's called from C#?
I haven't tried to compile .fsx scripts yet, but my experience with using modules in F# projects makes me think that:
let result = cmd.Execute() |> Seq.toArray
compiles to a static variable on the CalculateCostPrice class. This would mean it'll only get executed once (when it's first used, if not earlier), and the result would be stored in the "result" variable.
Adding a parameter of type "unit" should change it to a method (again, not tested yet):
let result() = cmd.Execute() |> Seq.toArray
And you would call it from C# as:
foreach (var item in CalculateCostPrice.result())
In this case, the execution will happen when you call the method, not when the class gets initialized. I might rename it from "result" to "executeQuery" or something along those lines.
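The difference can be seen without the SQL piece at all: in a plain module, a plain let binding is evaluated once at module initialization and cached as a static field, while a let with a unit parameter compiles to a method evaluated on every call. A self-contained sketch:

```fsharp
module CalculateCostPrice =
    let mutable callCount = 0

    let private fetch () =
        callCount <- callCount + 1
        [| 1; 2; 3 |]   // stands in for cmd.Execute() |> Seq.toArray

    // Evaluated once, when the module is first used; cached as a static field.
    let resultOnce = fetch ()

    // Evaluated on every call - this is the behaviour you want from C#.
    let result () = fetch ()
```

Reading resultOnce repeatedly never re-runs fetch, while each result() call does, which is why the unit parameter gives you a fresh query execution per call.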