Moving a Point Along a Path in SQL Server 2008

This is a little bit tricky, but it is certainly possible.

Let's start with a function that moves a point towards another point. Given a starting point, an end point, and a distance in meters, the following function first calculates the bearing between the two points and then returns the destination point:

CREATE FUNCTION [dbo].[func_MoveTowardsPoint](@start_point geography,
                                              @end_point geography,
                                              @distance int) /* Meters */
RETURNS geography
AS
BEGIN
    DECLARE @ang_dist float = @distance / 6371000.0; /* Earth's mean radius in meters */
    DECLARE @bearing decimal(18,15);
    DECLARE @lat_1 decimal(18,15) = Radians(@start_point.Lat);
    DECLARE @lon_1 decimal(18,15) = Radians(@start_point.Long);
    DECLARE @lat_2 decimal(18,15) = Radians(@end_point.Lat);
    DECLARE @lon_diff decimal(18,15) = Radians(@end_point.Long - @start_point.Long);
    DECLARE @new_lat decimal(18,15);
    DECLARE @new_lon decimal(18,15);
    DECLARE @result geography;

    /* First calculate the bearing from the start point to the end point */

    SET @bearing = ATN2(sin(@lon_diff) * cos(@lat_2),
                        (cos(@lat_1) * sin(@lat_2)) -
                        (sin(@lat_1) * cos(@lat_2) * cos(@lon_diff)));

    /* Then use the bearing and the start point to find the destination point */

    SET @new_lat = asin(sin(@lat_1) * cos(@ang_dist) +
                        cos(@lat_1) * sin(@ang_dist) * cos(@bearing));

    SET @new_lon = @lon_1 + atn2(sin(@bearing) * sin(@ang_dist) * cos(@lat_1),
                                 cos(@ang_dist) - sin(@lat_1) * sin(@lat_2));

    /* Convert from radians back to degrees */

    SET @new_lat = Degrees(@new_lat);
    SET @new_lon = Degrees(@new_lon);

    /* Return the geography result */

    SET @result = geography::STPointFromText('POINT(' + CONVERT(varchar(64), @new_lon) + ' ' +
                                             CONVERT(varchar(64), @new_lat) + ')',
                                             4326);

    RETURN @result;
END
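
A quick, illustrative sanity check (not part of the original answer; the two points are taken from the linestring used further down, and the exact output will depend on rounding):

DECLARE @p1 geography = geography::STPointFromText('POINT(-122.360 47.656)', 4326);
DECLARE @p2 geography = geography::STPointFromText('POINT(-122.343 47.656)', 4326);

/* Expect a point roughly 100 meters east of @p1, on the way towards @p2 */
SELECT [dbo].[func_MoveTowardsPoint](@p1, @p2, 100).ToString();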

I understand that you require a function that takes a linestring as input, not just start and end points. The point has to move along a path of concatenated line segments, and must continue moving around the "corners" of the path. This might seem complicated at first, but I think it can be tackled as follows:

  1. Iterate through each point of your linestring with STPointN(), from x=1 to x=STNumPoints().
  2. Find the distance with STDistance() between the current point in the iteration and the next point: @linestring.STPointN(x).STDistance(@linestring.STPointN(x+1))
  3. If the above distance > your input distance 'n':

    ...then the destination point is between this point and the next. Simply apply func_MoveTowardsPoint passing point x as start point, point x+1 as end point, and distance n. Return the result and break the iteration.

    Else:

    ...the destination point is further along the path, past the next point in the iteration. Subtract the distance between point x and point x+1 from your distance 'n', and continue the iteration with the reduced distance.

You may have noticed that we can easily implement the above recursively, instead of iteratively.

Let's do it:

CREATE FUNCTION [dbo].[func_MoveAlongPath](@path geography,
                                           @distance int,
                                           @index int = 1)
RETURNS geography
AS
BEGIN
    DECLARE @result geography = null;
    DECLARE @num_points int = @path.STNumPoints();
    DECLARE @dist_to_next float;

    IF @index < @num_points
    BEGIN
        /* There is still at least one point further along the linestring
           than point @index. Find the distance to the next point. */

        SET @dist_to_next = @path.STPointN(@index).STDistance(@path.STPointN(@index + 1));

        IF @distance <= @dist_to_next
        BEGIN
            /* @distance falls between this point and the next. Return
               the destination point with func_MoveTowardsPoint(). */

            SET @result = [dbo].[func_MoveTowardsPoint](@path.STPointN(@index),
                                                        @path.STPointN(@index + 1),
                                                        @distance);
        END
        ELSE
        BEGIN
            /* The destination is beyond the next point. Subtract
               @dist_to_next from @distance and continue recursively. */

            SET @result = [dbo].[func_MoveAlongPath](@path,
                                                     @distance - @dist_to_next,
                                                     @index + 1);
        END
    END
    ELSE
    BEGIN
        /* There is no further point. Our distance exceeds the length
           of the linestring. Return the last point of the linestring.
           You may prefer to return NULL instead. */

        SET @result = @path.STPointN(@index);
    END

    RETURN @result;
END

With that in place, it's time to do some tests. Let's use the original linestring that was provided in the question, and we'll request the destination points at 350m, at 3500m and at 7000m:

DECLARE @g geography;
SET @g = geography::STGeomFromText('LINESTRING(-122.360 47.656,
-122.343 47.656,
-122.310 47.690)', 4326);

SELECT [dbo].[func_MoveAlongPath](@g, 350, DEFAULT).ToString();
SELECT [dbo].[func_MoveAlongPath](@g, 3500, DEFAULT).ToString();
SELECT [dbo].[func_MoveAlongPath](@g, 7000, DEFAULT).ToString();

Our test returns the following results:

POINT (-122.3553270591861 47.6560002502638)
POINT (-122.32676470116748 47.672728464582583)
POINT (-122.31 47.69)

Note that the last distance we requested (7000 m) exceeded the length of the linestring, so the last point of the linestring was returned. You can easily modify the function to return NULL in that case, if you prefer.
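
For reference, returning NULL in that case only requires changing the body of the final ELSE branch in func_MoveAlongPath, roughly like this:

/* There is no further point: the requested distance exceeds the length
   of the linestring, so return NULL instead of the last point */
SET @result = NULL;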

How to move along a path but only between two specified path points

Well, I have to agree with LeBeau, and I see that you do too. Since I answered your last question, I only needed to make some minor changes to the function. However, keep his advice in mind next time: when asking a question, show us some code you've tried, even if it doesn't work, because it shows effort.

Back to the question.

For this solution, I'll wrap everything inside a function named move, which accepts two arguments, the initial position and the final position (both expressed as fractions of the total path length):

function move(initialPosition, finalPosition) {

The initial position, as the name implies, sets the starting position of the circle along the path. The math is this:

var start = path.node()
.getPointAtLength(path.node().getTotalLength() * initialPosition);

Then, I slightly changed the function from my last answer to accept the initial and final positions:

function translateAlong(path) {
var l = path.getTotalLength() * (finalPosition - initialPosition);
return function() {
return function(t) {
var p = path.getPointAtLength(t * l +
(path.getTotalLength() * initialPosition));
return "translate(" + p.x + "," + p.y + ")";
};
};
}

Here is the demo. Clicking on the button calls move with 0.25 (initial position) and 0.5 (final position) as arguments:

var points = [[240, 100], [290, 200], [340, 50], [390, 150], [90, 150], [140, 50], [190, 200]];

var svg = d3.select("body").append("svg").attr("width", 500).attr("height", 300);

var path = svg.append("path").data([points])
    .attr("d", d3.svg.line().tension(0).interpolate("cardinal-closed")); // tension(0) gives a Catmull–Rom curve

var color = d3.scale.category10();
var dataPositions = [{ initial: 0.25, final: 0.5 }, { initial: 0.5, final: 0.6 }];

svg.selectAll(".point").data(points).enter().append("circle")
    .attr("r", 4)
    .attr("transform", function(d) { return "translate(" + d + ")"; });

d3.select("button").on("click", function() { move(0.25, 0.5); });

function move(initialPosition, finalPosition) {
  var start = path.node().getPointAtLength(path.node().getTotalLength() * initialPosition);
  var circle = svg.append("circle").attr("r", 13)
      .attr("fill", function(d, i) { return color(i); })
      .attr("transform", "translate(" + start.x + "," + start.y + ")");
  circle.transition().duration(1000)
      .attrTween("transform", function() { return translateAlong(path.node())(); });
  function translateAlong(path) { var l = path.getTotalLength() * (finalPosition - initialPosition); return function() { return function(t) { var p = path.getPointAtLength(t * l + (path.getTotalLength() * initialPosition)); return "translate(" + p.x + "," + p.y + ")"; }; }; }
}
path {  fill: none;  stroke: #000;  stroke-width: 3px;}
circle { stroke: #fff; stroke-width: 3px;}
<script src="//d3js.org/d3.v3.min.js"></script><button>Move</button><br>

How to average two or more geography linestrings using C#/SQL Server 2008 spatial

I can't really post any sample code as I am working from my iPhone right now, but I do have a suggestion (don't know if it's good or bad) ...

For each line, determine each vertex's position (percentage) along the line.

After getting those values, per line, compute new vertices along each line using all of the OTHER lines' percentage values.

At this point, each line should contain the same number of vertices and the Nth vertex of each line corresponds directly with the Nth vertex of every other line.

Now just average vertex 0 of every line to get vertex 0 of the "averaged" line. Repeat for vertex 1 of each line, and so on.

This should work for lines as well as polygons.

Note that you could also employ a weighted-averaging algorithm if you could determine an accuracy value for each line. In the past I have used this approach when trying to average two lines. We had the ability to weight each line, typically 50:50, but it could go all the way to 100:0 or 0:100, depending on the accuracy of the sources.

I went back and reread your question and saw that you already talked about interpolation. The way you talked about doing it sounds like it would smooth or generalize the lines before computing the averages of the interpolated points (the fixed-interval points). With my approach you would be densifying each line first and then computing the averages. It seems like that might be more accurate, but maybe not.
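
The original answer deliberately has no code, but to make the idea concrete, here is a rough, untested T-SQL sketch of a simplified variant: instead of densifying each line at the other line's vertex positions, it resamples both lines at the same fixed fractions of their lengths (every 10% here) and averages the corresponding sample points 50:50, reusing func_MoveAlongPath from the first answer on this page. The sample linestrings, the sampling step and the weighting are all illustrative assumptions:

DECLARE @a geography = geography::STGeomFromText(
    'LINESTRING(-122.360 47.656, -122.343 47.656, -122.310 47.690)', 4326);
DECLARE @b geography = geography::STGeomFromText(
    'LINESTRING(-122.361 47.657, -122.344 47.657, -122.311 47.691)', 4326);

DECLARE @pa geography, @pb geography;
DECLARE @wkt varchar(max) = '';
DECLARE @f float = 0.0;

WHILE @f <= 1.0
BEGIN
    /* Sample both lines at the same fraction of their own total length */
    SET @pa = [dbo].[func_MoveAlongPath](@a, @a.STLength() * @f, DEFAULT);
    SET @pb = [dbo].[func_MoveAlongPath](@b, @b.STLength() * @f, DEFAULT);

    /* Plain 50:50 average of the corresponding sample points; a weighted
       average (as described above) would simply use different factors here */
    SET @wkt = @wkt + CASE WHEN @wkt = '' THEN '' ELSE ', ' END
             + LTRIM(STR((@pa.Long + @pb.Long) / 2.0, 20, 10)) + ' '
             + LTRIM(STR((@pa.Lat + @pb.Lat) / 2.0, 20, 10));

    SET @f = @f + 0.1;  /* sample every 10% along each line */
END

SELECT geography::STLineFromText('LINESTRING(' + @wkt + ')', 4326).ToString();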

Simple way to transpose columns and rows in SQL?

There are several ways that you can transform this data. In your original post, you stated that PIVOT seems too complex for this scenario, but it can be applied very easily using both the UNPIVOT and PIVOT functions in SQL Server.

However, if you do not have access to those functions this can be replicated using UNION ALL to UNPIVOT and then an aggregate function with a CASE statement to PIVOT:

Create Table:

CREATE TABLE yourTable([color] varchar(5), [Paul] int, [John] int, [Tim] int, [Eric] int);

INSERT INTO yourTable
([color], [Paul], [John], [Tim], [Eric])
VALUES
('Red', 1, 5, 1, 3),
('Green', 8, 4, 3, 5),
('Blue', 2, 2, 9, 1);

Union All, Aggregate and CASE Version:

select name,
sum(case when color = 'Red' then value else 0 end) Red,
sum(case when color = 'Green' then value else 0 end) Green,
sum(case when color = 'Blue' then value else 0 end) Blue
from
(
select color, Paul value, 'Paul' name
from yourTable
union all
select color, John value, 'John' name
from yourTable
union all
select color, Tim value, 'Tim' name
from yourTable
union all
select color, Eric value, 'Eric' name
from yourTable
) src
group by name

See SQL Fiddle with Demo

The UNION ALL performs the UNPIVOT of the data by transforming the columns Paul, John, Tim, and Eric into separate rows. Then you apply the aggregate function sum() with a CASE expression to get the new columns for each color.

Unpivot and Pivot Static Version:

Both the UNPIVOT and PIVOT functions in SQL Server make this transformation much easier. If you know all of the values that you want to transform, you can hard-code them into a static version to get the result:

select name, [Red], [Green], [Blue]
from
(
select color, name, value
from yourtable
unpivot
(
value for name in (Paul, John, Tim, Eric)
) unpiv
) src
pivot
(
sum(value)
for color in ([Red], [Green], [Blue])
) piv

See SQL Fiddle with Demo

The inner query with the UNPIVOT performs the same function as the UNION ALL: it takes the list of columns and turns them into rows. The PIVOT then performs the final transformation into columns.

Dynamic Pivot Version:

If you have an unknown number of columns (Paul, John, Tim, Eric in your example) and an unknown number of colors to transform, you can use dynamic SQL to generate the lists to UNPIVOT and then PIVOT:

DECLARE @colsUnpivot AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX),
@colsPivot as NVARCHAR(MAX)

select @colsUnpivot = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('yourtable') and
C.name <> 'color'
for xml path('')), 1, 1, '')

select @colsPivot = STUFF((SELECT ','
+ quotename(color)
from yourtable t
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')

set @query
= 'select name, '+@colsPivot+'
from
(
select color, name, value
from yourtable
unpivot
(
value for name in ('+@colsUnpivot+')
) unpiv
) src
pivot
(
sum(value)
for color in ('+@colsPivot+')
) piv'

exec(@query)

See SQL Fiddle with Demo

The dynamic version queries both yourtable and the sys.columns table to generate the lists of items to UNPIVOT and PIVOT. These lists are then added to a query string that is executed. The advantage of the dynamic version is that if you have a changing list of colors and/or names, the lists are generated at run-time.

All three queries will produce the same result:

| NAME | RED | GREEN | BLUE |
-----------------------------
| Eric |   3 |     5 |    1 |
| John |   5 |     4 |    2 |
| Paul |   1 |     8 |    2 |
| Tim  |   1 |     3 |    9 |

SQL Server 2008's Filestream location

  1. You can't. AFAIK, FILESTREAM data is stored locally and SQL Server will refuse to read from or write to a UNC path. (A query to check where the data currently lives is sketched just below.)
  2. Your full backups will contain the entire FILESTREAM data. Unmanageable? Definitely a very serious challenge.
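
As an aside, if you just want to see where the FILESTREAM container of a database currently lives on disk, a query along these lines will show it (assuming the database already has a FILESTREAM filegroup):

SELECT name, physical_name, type_desc
FROM sys.database_files
WHERE type_desc = 'FILESTREAM';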

My question would be: what benefit do you want to get from FILESTREAM? The usual benefits come from BLOB integration with database operations while keeping availability for Win32 file-handle-based operations:

Even though FILESTREAM technology has many attractive features, it may not be the optimal choice in all situations. As mentioned earlier, the size of the BLOB data and the access patterns are the most significant factors when deciding whether to store the BLOB data wholly within the database or by using FILESTREAM.

Size affects the following:

  • Efficiency with which the BLOB data can be accessed using either storage mechanism. As mentioned earlier, streaming access of large BLOB data is more efficient using FILESTREAM, but partial updates are (potentially much) slower.
  • Efficiency of backing up the combined structured and BLOB data using either storage mechanism. A backup that combines SQL Server database files and a large number of FILESTREAM files will be slower than a backup of just SQL Server database files of an equivalent total size. This is because of the extra overhead of backing up each NTFS file (one per FILESTREAM data value). This overhead becomes more noticeable when the FILESTREAM files are smaller (as the time overhead becomes a larger percentage of the total time to backup per MB of data).

From a pure performance point of view, there are many steps you can take at the file system level to improve performance. What is your current problem? Why is your system throughput affected by the media size? It means you have a choke point of contention somewhere, perhaps a directory enumeration or some other barrier, that causes your response time to scale with the media size. Your access to the media should be O(1), maybe O(log n), but definitely not O(n).

I'd recommend you go over the SQL Server white paper FILESTREAM Storage in SQL Server 2008, which is where the quote above about use cases comes from.

Migrating MS Access Database to SQL Server 2008 R2

Well, it really comes down to how many tables and how many relationships you have in the Access database.

You could, for example, install the free edition of SQL Server Express on your workstation and do the migration locally (say, to SQL Server 2017 format).

At that point, you can choose to script out the database, and later versions of SSMS (SQL Server Management Studio) certainly support scripting for 2008. E.g. this:

Sample Image

So, don't right-click on the database and choose "Script database as"; instead, choose "Generate Scripts" from the Tasks menu.

From that, we can see that 2008 can still be scripted to.

Sample Image

Of course, even in the latest version of SQL Server you CAN create a database at an older compatibility level. However, a "previous" compatibility level only limits compatibility features; it does NOT let you make a backup that can be restored on a previous version of SQL Server. (SQL Server is kind of nasty that way: you can only go forward with databases. It's a great capability, but once you move, you can't use or create backup files that can be consumed by previous versions. You CAN, however, script out to previous versions.)

And part of this comes down to how much time you spend with SSMA (the SQL Server Migration Assistant for Access).

And if you do go down the above road, make sure you tweak the column mappings. For example, by default a Date/Time column in Access gets converted to datetime2, and I don't quite remember whether 2008 supports datetime2 (regardless, I would stick to datetime anyway).

So, even using the latest migration assistant, you can set (tweak) the column mappings so that, at most, you need 1 or 2 changes (such as defaulting to datetime in place of datetime2 column types).

And setting up SQL Server Express locally is a great way to test the migration over and over, since the first few attempts often fail or don't produce the results you want. This is in fact another HUGE reason to adopt SSMA, since you can spend a day tweaking and changing things, then run the migration again, and again.

Now, if you script out a whole database (and its data), such text files are RATHER large, and in most cases you can't use SSMS to load such a script, so use the command-line sqlcmd.exe to process the script produced by the scripting wizard outlined above. The wizard spits out a "big" text file, and that can be imported into 2008.
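
For example, something like this (the server name, database name and script path here are placeholders, not values from the original post):

sqlcmd -S YourServer\SQL2008 -E -d YourTargetDb -i "C:\scripts\yourdatabase_full.sql"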

I mean, even 2008 and that era of SSMS have an import option, and you can import from Access databases. However, when you do this, the PKs are dropped, indexes are dropped, and so are relationships.

So, for 2-5 tables? Gee, just import using SSMS.

However, if you have 40 tables and boatloads of relationships? Then yes, that is a big job, and I would 100% recommend you use the Access migration assistant tool. It takes a bit of time to learn and is a bit confusing at first, but once you get past the learning curve, I highly recommend this tool if you want to move up many tables, keep the PK settings, and of course keep related data intact.

Getting an error when moving from SQL Server Express to SQL Server asp.net web app logon

The problem is that you are trying to open an older version of the database file in a newer version of SQL Server. You should attach the database to the SQL Server instance using SQL Server Management Studio, which will upgrade the database to the correct version.
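
If you prefer to do the attach with T-SQL instead of the SSMS UI, the equivalent is roughly this (the database name and file paths are placeholders):

/* Attach the existing data and log files; the instance upgrades the
   database to its own version as part of the attach */
CREATE DATABASE YourAppDb
    ON (FILENAME = 'C:\Data\YourAppDb.mdf'),
       (FILENAME = 'C:\Data\YourAppDb_log.ldf')
    FOR ATTACH;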


