Safely execute a dynamic SQL query
Although I have covered much of this in the comments, I felt it worthwhile writing an answer to give a fuller explanation.
Firstly, as I have mentioned, this isn't a route you should be going down. Yes, you can have procedures that do use dynamic SQL, but these shouldn't be handling such basic things as inserting data into a table, or updating said data.
When using dynamic SQL, you first need to ensure that you are properly quoting your dynamic objects. That part isn't too hard: have parameters for the object's schema and name, and wrap them in QUOTENAME when you inject them. The real problem comes from the other half of the requirement: the "dynamic" columns.
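As a sketch of that quoting step (the parameter names and table are illustrative, not from the question):

```sql
-- Illustrative only: quote the schema and object names before injecting them,
-- so that an attacker-controlled name cannot break out of the identifier.
DECLARE @SchemaName sysname = N'dbo',
        @TableName  sysname = N'MyTable',
        @SQL        nvarchar(MAX);

SET @SQL = N'SELECT * FROM ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N';';

EXEC sys.sp_executesql @SQL;
```

QUOTENAME wraps the value in brackets and doubles any embedded closing brackets, which is what makes the object-name half of the problem tractable.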
Firstly, you seem to want a dynamic number of parameters; this is a major problem. You can't trivially, or even easily, parametrise dynamic parameters. You won't be able to pass these parameters as their correct types either; you wouldn't be able to pass a date as a date, for example. I can imagine a solution that uses dynamic dynamic SQL (yes, I said dynamic twice) and the sql_variant data type, but should you be doing that? No. If you understood how to maintain such a solution, I don't for one second think you would have asked the question you have; you would have something that is on its way there, but needed some help.
So, what is the solution? Well, again, as I said in the comments, you should have separate procedures for each table. You might also want separate ones for INSERT and UPDATE operations, but you could instead use a single procedure that implements "upsert" logic; there are plenty of good articles on how to do this, so I won't cover it in detail here.
As I mentioned in the comments as well, that means updating your procedures when you update your objects. That is normal. I routinely update procedures when an underlying table is updated to have more columns.
At the same time, your application developers will need to update their application code to ensure they pass the new parameters to your procedure. Good DevOps practice and good relationships between your DBAs, SQL developers and application developers are key; keep those communication channels open and active. When you or your DBA alters the table in your development environment, adding the new column(s) and amending the object's indexes (if needed), and has informed you, the SQL developer, you can then ALTER the affected procedures. You can then inform the application developers, and they can update the application code.
After that, complete your internal testing, fix any bugs, unexpected behaviour or performance issues, and then progress to the test environment. Get your users to confirm it works as required, and then off it goes to production. In other words, follow the basics of a good development cycle.
TL;DR: The route you want is wrong, and is never going to scale. Stick to a normal development cycle, and update your database and application code in sync so that new functionality can be provided.
Incorrect Syntax Near GO, T-SQL EXEC()
Try removing the comma after [EventType] [nvarchar](64) NULL and see if the error message changes.
So you have two problems:
- As @Tanner has pointed out, you cannot use GO in dynamic SQL.
- You still have that trailing comma in the meta.LogAudit table's column definitions.
Try running this code:
DECLARE @DBName NVARCHAR(20) = 'ABC';

DECLARE @sql NVARCHAR(MAX) = N'EXEC ' + @DBName + N'..sp_executesql N''CREATE SCHEMA meta''';
EXECUTE (@sql);

DECLARE @sql2 NVARCHAR(MAX) = N'
-- Create Log Table
CREATE TABLE ' + @DBName + N'.meta.LogAudit (
    [EventDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [EventType] [nvarchar](64) NULL
);';
EXEC sp_executesql @sql2;
This will allow you to programmatically create the schema in the specified database, as opposed to using the current database.
Dynamic SQL select query in Golang
From what I know (I'm also not very experienced in Go), if you don't assign a concrete type to the value, Scan will give you a []byte, and when that is marshalled it comes out as a base64-encoded string. So you have to assign a type to your columns, and if you want proper JSON, assign keys to the values.
In your example it can be done something like this:
cols := []string{"id", "first_name", "last_name"}

vals := make([]interface{}, len(cols))
result := make(map[string]interface{}, len(cols))

for i, key := range cols {
	switch key {
	case "id", "status":
		vals[i] = new(int)
	default:
		vals[i] = new(string)
	}
	result[key] = vals[i]
}

b, _ := json.Marshal(result)
fmt.Println(string(b))
So, instead of looping over cols and creating a new interface{} for each column, we now create key/value pairs and assign a type based on the column name.
Also, if you have nullable columns in the table (and you probably do), then you'll get an error, because nil can't go into a string. So I suggest the package gopkg.in/guregu/null.v3, and then assigning a type like null.String. That way you'll get null back as a value.
For example:
for i, key := range cols {
	switch key {
	case "id", "status":
		vals[i] = new(int)
	case "updated_at", "created_at":
		vals[i] = new(null.Time)
	default:
		vals[i] = new(null.String)
	}
	result[key] = vals[i]
}
How to execute the GO statement with dynamic count?
If you simply want to insert a repeated row, you could use a recursive CTE or a numbers table.
-- Sample data.
declare @Users as Table ( UserId Int Identity, Name VarChar(16) );
insert into @Users ( Name ) values
( 'Bob' ), ( 'Carol' ), ( 'Ted' ), ( 'Alice' );
select * from @Users;
-- Load another table with repetitions of a single user.
declare @TempUsers as Table ( UserId Int, Name VarChar(16) );
declare @Repetitions as Int = ( select Count(*) from @Users );
with TempUsers as (
select UserId, Name, 1 as Repetitions
from @Users
where Name = 'Ted'
union all
select UserId, Name, Repetitions + 1
from TempUsers
where Repetitions < @Repetitions
)
insert into @TempUsers ( UserId, Name )
select UserId, Name
from TempUsers;
select * from @TempUsers;
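For completeness, a minimal sketch of the numbers-table alternative mentioned above, reusing the same sample tables (it borrows sys.objects as an ad hoc row source, which is assumed to contain at least @Repetitions rows):

```sql
-- Repeat the chosen row by cross joining it against N arbitrary rows.
declare @Repetitions as Int = ( select Count(*) from @Users );

insert into @TempUsers ( UserId, Name )
select U.UserId, U.Name
from @Users as U
    cross join ( select top (@Repetitions) 0 as Placeholder
                 from sys.objects ) as Numbers
where U.Name = 'Ted';
```

Unlike the recursive CTE, this version is not subject to the default MAXRECURSION limit of 100.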
Build dynamic (conditional) WHERE SQL query in Golang
So, I found the solution. Big thanks to Cerise Limón, whose code fit my needs perfectly.
The solution I ended up with
Controller
func Find(c echo.Context) (err error) {
	model := &models.Query{}
	if err = c.Bind(model); err != nil {
		return c.JSON(http.StatusInternalServerError, u.Message(false, "Bad request"))
	}
	resp := model.Find()
	return c.JSON(http.StatusOK, resp)
}
Model
type Query map[string]interface{}

func (model Query) Find() Query {
	var values []interface{}
	var where []string
	for k, v := range model {
		values = append(values, v)
		// MySQL way: where = append(where, fmt.Sprintf("%s = ?", k))
		where = append(where, fmt.Sprintf(`"%s" = $%d`, k, len(values)))
	}
	query := "SELECT name FROM users WHERE " + strings.Join(where, " AND ")
	// For testing purposes I didn't run the actual query; I just print it
	// to the console and return the JSON back.
	fmt.Println(query)
	return model
}
Update: for PostgreSQL users (thanks to @mkopriva and his playground example), the numbered-placeholder approach above works correctly on PostgreSQL.
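The same placeholder-building idea in a self-contained form (buildWhere is a made-up helper name; the keys are sorted only to make the generated SQL deterministic, since Go map iteration order is random):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildWhere turns a filter map into a PostgreSQL-style WHERE clause
// plus the matching slice of bind values ($1, $2, ...).
// Note: only the *values* are parameterised; the column names are
// interpolated, so in real code they must come from a trusted whitelist.
func buildWhere(filters map[string]interface{}) (string, []interface{}) {
	keys := make([]string, 0, len(filters))
	for k := range filters {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for maps

	var where []string
	var values []interface{}
	for _, k := range keys {
		values = append(values, filters[k])
		where = append(where, fmt.Sprintf(`"%s" = $%d`, k, len(values)))
	}
	return "SELECT name FROM users WHERE " + strings.Join(where, " AND "), values
}

func main() {
	q, vals := buildWhere(map[string]interface{}{"last_name": "Doe", "first_name": "John"})
	fmt.Println(q)    // SELECT name FROM users WHERE "first_name" = $1 AND "last_name" = $2
	fmt.Println(vals) // [John Doe]
}
```

The query string and the values slice are then passed together to db.Query, so the driver handles escaping of the values.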