Inserting and Transforming Data from SQL Table

Transform and copy table data

I recommend a third approach: add a new staging table and a history table in Database B on Server B. The staging table will mostly mirror your Table B, but will have no constraints and will have an additional bit column, Status. The history table will also mostly mirror the Table B structure, but will contain two additional columns (ChangeDate and ChangeMade). Lastly, for this approach you will need to identify the column(s) that uniquely identify each record coming from the View.

  • Truncate Staging table (from previous results).
  • Export data from View (Server A) to staging table (Server B).
  • Run a SQL task to check the consistency and quality of each record. For records that pass, set the Status bit to 1; otherwise set it to 0.
  • Apply a MERGE call in a SQL task to both transform the data from the source (staging table) into the target table (Table B) and update the History table. Apply this only to records with Status = 1. With the MERGE you can also OUTPUT the history of what was changed to the new History table, indicating "I" for insert, "U" for update or "D" for delete based on the matching defined in the MERGE.
  • For those records left in the staging table with Status = 0, send an email to whoever needs to know that X problem records were found.
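The MERGE step described above could be sketched roughly like this. All table and column names (TableB, StagingB, HistoryB, KeyCol, Col1) are assumptions for illustration, and $action yields the full word ('INSERT'/'UPDATE'/'DELETE') rather than a single letter:

```sql
-- Sketch only: table and column names are assumed, not from the original post
MERGE TableB AS tgt
USING (SELECT KeyCol, Col1 FROM StagingB WHERE Status = 1) AS src
    ON tgt.KeyCol = src.KeyCol
WHEN MATCHED AND tgt.Col1 <> src.Col1 THEN
    UPDATE SET Col1 = src.Col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyCol, Col1) VALUES (src.KeyCol, src.Col1)
WHEN NOT MATCHED BY SOURCE THEN
    -- Caution: with a filtered source, this also matches target rows whose
    -- staging counterparts were excluded by Status = 1
    DELETE
OUTPUT $action,                                   -- 'INSERT' / 'UPDATE' / 'DELETE'
       COALESCE(inserted.KeyCol, deleted.KeyCol),
       GETDATE()
INTO HistoryB (ChangeMade, KeyCol, ChangeDate);
```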

The idea here is that the process doesn't come to a halt, and you do not need to force a rollback if a single bad record is found. You can also monitor the daily process by viewing the staging table. I've taken this approach in the past and integrated into it email alerts with links to SSRS reports on the problem records. Doing so allowed me to proactively find patterns in the problem records and work with those upstream of me to resolve the problems.

If the view pulls over a million records, you may want to add a surrogate key to the staging table (set up as the primary key) with an identity starting at 1 that automatically increments by 1 with each new record imported from the view. Just before running the MERGE, create a clustered index on the surrogate key; this will greatly improve the performance of the MERGE. Drop the index just before truncating the staging table in the first step.
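The drop/rebuild of the index around the daily load might look like this (StagingB, SurrogateKey and the index name are assumed; DROP INDEX IF EXISTS requires SQL Server 2016+):

```sql
-- 1. Drop last run's index and clear out the staging table (assumed names)
DROP INDEX IF EXISTS IX_StagingB_Sk ON StagingB;
TRUNCATE TABLE StagingB;

-- 2. ... import from the View, then run the validation step that sets Status ...

-- 3. Rebuild the clustered index on the identity surrogate key before the MERGE
CREATE CLUSTERED INDEX IX_StagingB_Sk ON StagingB (SurrogateKey);
```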

Hope this helps.

Adding columns in sql server with transformed data from same table

What you need to do is to first alter the table, then update it.

ALTER TABLE YOUR_TABLE ADD DATETIME_COL DATETIME
ALTER TABLE YOUR_TABLE ADD DATE_COL DATE

UPDATE YOUR_TABLE
SET
DATE_COL = CAST(NVCHAR_DATE AS DATE),
DATETIME_COL = CAST(NVCHAR_DATE AS DATETIME)

However, storing the date as a separate column seems a bit redundant. Maybe a computed column would be a better choice:

ALTER TABLE YOUR_TABLE ADD COMPUTED_DATE_COL AS CAST(DATETIME_COL AS DATE)


SQL - Insert Into - Convert Datatype

Use CONVERT().

INSERT INTO TempSaveArticle (Nr_, Description, [Description 2], Price) 
VALUES (123456789, 'Yarn', 'blue', CONVERT(MONEY, 49.0000000000))

Ideally you'd want to do this more dynamically.

INSERT INTO TempSaveArticle (Nr_, Description, [Description 2], Price) 
SELECT myTable.Nr_, myTable.Description, myTable.[Description 2], CONVERT(MONEY, myTable.Price)
FROM myTable
WHERE myTable.ID = 123 OR <<some other conditions>>

MONEY is effectively a DECIMAL.

You can CAST() or CONVERT() it in either direction, DECIMAL <--> MONEY. The only difference I know of is that MONEY is displayed by default using your computer's language settings. But in my opinion, don't use MONEY; stick with DECIMAL and display it however you want using FORMAT().
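For example, FORMAT() can render a DECIMAL using a chosen culture's currency conventions (FORMAT is available in SQL Server 2012+; the values here are illustrative):

```sql
-- 'C' is the standard currency format; the third argument picks the culture
SELECT FORMAT(CAST(49.00 AS DECIMAL(19,4)), 'C', 'en-US'); -- $49.00
SELECT FORMAT(CAST(49.00 AS DECIMAL(19,4)), 'C', 'de-DE'); -- 49,00 €
```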

Transform line-by-line values from another table to a string and insert it into NOT IN clause in SQL

You don't need the variable; just use your query directly as the NOT IN list.

SELECT *
FROM member
WHERE ID NOT IN (SELECT ID FROM test_user);

Note: It makes life easier to use a consistent casing and layout for your queries.
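One caveat with this pattern: if test_user.ID can contain NULLs, NOT IN returns no rows at all, because the comparison against NULL is never true. A NULL-safe equivalent uses NOT EXISTS:

```sql
-- Behaves the same as NOT IN, but is unaffected by NULLs in test_user.ID
SELECT m.*
FROM member m
WHERE NOT EXISTS (SELECT 1 FROM test_user t WHERE t.ID = m.ID);
```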

T-SQL: convert columns to rows and insert/update another table

Here is an option that will dynamically UNPIVOT your data without using dynamic SQL.

To be clear: a hard-coded UNPIVOT would be more performant, but this way you don't have to enumerate all 50 columns.

This assumes your column names end with a number, i.e. FirstName##
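For reference, a hypothetical #data table matching that naming pattern (sample values invented for illustration) could be built like so:

```sql
-- Invented sample: repeating groups FirstName1/LastName1/UniueNumber1, FirstName2/...
SELECT * INTO #data
FROM (VALUES
    (1, 'John', 'Smith', '111-11-1111', 'Jane', 'Doe', '222-22-2222')
) A (ID, FirstName1, LastName1, UniueNumber1, FirstName2, LastName2, UniueNumber2);
```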

Example

Select ID
,FirstName
,LastName
,UniueNumber -- You could use SSN = UniueNumber
From (
SELECT A.ID
,Grp
,Col = replace([Key],Grp,'')
,Value
FROM #data A
Cross Apply (
Select [Key]
,Value
,Grp = substring([Key],patindex('%[0-9]%',[Key]),25)
From OpenJson( (Select A.* For JSON Path,Without_Array_Wrapper ) )
) B
) src
Pivot ( max(Value) for Col in ([FirstName],[LastName],[UniueNumber]) ) pvt
Order By ID,Grp

Results: one row per ID and Grp, with FirstName, LastName and UniueNumber as columns.

UPDATE: XML Version

Select ID
,FirstName
,LastName
,UniueNumber
From (
SELECT A.ID
,Grp = substring(Item,patindex('%[0-9]%',Item),50)
,Col = replace(Item,substring(Item,patindex('%[0-9]%',Item),50),'')
,Value
FROM #data A
Cross Apply ( values (convert(xml,(Select A.* for XML RAW)))) B(XData)
Cross Apply (
Select Item = xAttr.value('local-name(.)', 'varchar(100)')
,Value = xAttr.value('.','varchar(max)')
From B.XData.nodes('//@*') xNode(xAttr)
) C
Where Item not in ('ID')
) src
Pivot ( max(Value) for Col in (FirstName,LastName,UniueNumber) ) pvt
Order By ID,Grp

