Get envelope, i.e. overlapping time spans


Try this one, too. I tested it as best I could, and I believe it covers all the possibilities, including coalescing adjacent intervals (10:15 to 10:30 and 10:30 to 10:40 are combined into a single interval, 10:15 to 10:40). It should also be quite fast, since it doesn't do much work. A small test harness is sketched after the query.

with m as
(
  select ip_address, start_time,
         max(stop_time) over (partition by ip_address order by start_time
                              rows between unbounded preceding and 1 preceding) as m_time
  from   ip_sessions
  union all
  select ip_address, NULL, max(stop_time) from ip_sessions group by ip_address
),
n as
(
  select ip_address, start_time, m_time
  from   m
  where  start_time > m_time or start_time is null or m_time is null
),
f as
(
  select ip_address, start_time,
         lead(m_time) over (partition by ip_address order by start_time) as stop_time
  from   n
)
select * from f where start_time is not null
/
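
If you want to see the coalescing in action without touching a real table, here is a minimal sketch that swaps in a small ip_sessions CTE with hypothetical rows (the IP address and timestamps are made up; the rest is the query above, unchanged):

with ip_sessions ( ip_address, start_time, stop_time ) as
(
  -- hypothetical sample rows, including the adjacent 10:15-10:30 / 10:30-10:40 pair
  select '10.0.0.1', to_date('2016-05-01 10:15', 'yyyy-mm-dd hh24:mi'),
                     to_date('2016-05-01 10:30', 'yyyy-mm-dd hh24:mi') from dual union all
  select '10.0.0.1', to_date('2016-05-01 10:30', 'yyyy-mm-dd hh24:mi'),
                     to_date('2016-05-01 10:40', 'yyyy-mm-dd hh24:mi') from dual union all
  select '10.0.0.1', to_date('2016-05-01 11:00', 'yyyy-mm-dd hh24:mi'),
                     to_date('2016-05-01 11:20', 'yyyy-mm-dd hh24:mi') from dual
),
m as
(
  select ip_address, start_time,
         max(stop_time) over (partition by ip_address order by start_time
                              rows between unbounded preceding and 1 preceding) as m_time
  from   ip_sessions
  union all
  select ip_address, NULL, max(stop_time) from ip_sessions group by ip_address
),
n as
(
  select ip_address, start_time, m_time
  from   m
  where  start_time > m_time or start_time is null or m_time is null
),
f as
(
  select ip_address, start_time,
         lead(m_time) over (partition by ip_address order by start_time) as stop_time
  from   n
)
select * from f where start_time is not null
/

With that data, the first two rows should come back merged into a single 10:15 to 10:40 interval, and the 11:00 to 11:20 session should be reported separately.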

ORACLE SQL - Connecting overlapping timeframes

Here is the older solution (from one of the Comments), adapted to work for pure dates. You may want to compare the different solutions offered here to see which is most efficient for your actual data; different solutions may be best for different situations.

NOTES: I used your input data and created some more for testing. It is assumed that your data is valid (all dates are valid, they have a time component of 00:00:00, and enddate is always greater than or equal to startdate). The solution does not include the inputs factored subquery; it is shown below only for testing. I did NOT order the results by emp# and startdate (the output may be misleading in that regard); if you do need such ordering, you will need to add it explicitly (a short example follows the output below). Note the use of date literals in the test data. The output shows dates in my current session settings; if you need a specific format, use to_char() with the desired display format model.

QUERY:

with
inputs ( emp#, startdate, enddate ) as (
  select 1, date '2016-01-01', date '2016-01-15' from dual union all
  select 1, date '2016-01-03', date '2016-01-05' from dual union all
  select 1, date '2016-01-10', date '2016-01-20' from dual union all
  select 1, date '2016-01-23', date '2016-01-25' from dual union all
  select 1, date '2016-01-24', date '2016-01-27' from dual union all
  select 2, date '2016-01-31', date '2016-02-28' from dual union all
  select 2, date '2016-03-15', date '2016-03-18' from dual union all
  select 2, date '2016-03-19', date '2016-03-19' from dual union all
  select 2, date '2016-03-20', date '2016-03-20' from dual
),
m ( emp#, startdate, mdate ) as (
  select emp#, startdate,
         1 + max(enddate) over (partition by emp# order by startdate
                                rows between unbounded preceding and 1 preceding)
  from   inputs
  union all
  select emp#, NULL, 1 + max(enddate)
  from   inputs
  group  by emp#
),
n ( emp#, startdate, mdate ) as (
  select emp#, startdate, mdate
  from   m
  where  startdate > mdate or startdate is null or mdate is null
),
f ( emp#, startdate, enddate ) as (
  select emp#, startdate,
         lead(mdate) over (partition by emp# order by startdate) - 1
  from   n
)
select * from f where startdate is not null

OUTPUT (for data in the inputs CTE):

  EMP# STARTDATE  ENDDATE
------ ---------- ----------
     1 01/01/2016 20/01/2016
     1 23/01/2016 27/01/2016
     2 31/01/2016 28/02/2016
     2 15/03/2016 20/03/2016
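
If you do need the results ordered by emp# and startdate, and the dates shown in a fixed format regardless of session settings, the final SELECT can be adjusted along these lines (an illustrative sketch; the 'YYYY-MM-DD' format model is just an example):

select emp#,
       to_char(startdate, 'YYYY-MM-DD') as startdate,
       to_char(enddate,   'YYYY-MM-DD') as enddate
from   f
where  startdate is not null
order  by emp#, startdate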

PL/SQL: Finding islands in overlapping date ranges defined by a start and an end

I adapted my older solution to this situation (see comment to the original question). The silly +1/86400 (adding a second) is needed to deal with the weird end date/times in your table.
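
As a quick standalone sanity check of that trick (not part of the solution): adding 1/86400 of a day to an end time of 23:59:59 rolls it over to midnight of the following day, so an interval ending at 23:59:59 will touch one that starts on the next day.

select to_date('31.12.2015 23:59:59', 'dd.mm.yyyy hh24:mi:ss') + 1/86400 as rolled_over
from   dual;

That rollover is what lets the query below treat such intervals as adjacent when coalescing.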

with
inputs ( skp_person, date_insurance_start, date_insurance_end ) as (
  select 1, to_date('7.11.2015', 'dd.mm.yyyy'), to_date('1.1.3000', 'dd.mm.yyyy') from dual union all
  select 1, to_date('7.11.2015', 'dd.mm.yyyy'), to_date('1.1.3000', 'dd.mm.yyyy') from dual union all
  select 2, to_date('10.4.2015', 'dd.mm.yyyy'), to_date('1.8.2016 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual union all
  select 3, to_date('28.3.2016', 'dd.mm.yyyy'), to_date('1.1.3000', 'dd.mm.yyyy') from dual union all
  select 4, to_date('5.12.2015', 'dd.mm.yyyy'), to_date('31.12.2015 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual union all
  select 4, to_date('5.12.2015', 'dd.mm.yyyy'), to_date('1.5.2016 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual union all
  select 4, to_date('1.2.2016', 'dd.mm.yyyy'), to_date('1.5.2016 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual union all
  select 5, to_date('15.1.2016', 'dd.mm.yyyy'), to_date('2.3.2016 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual union all
  select 5, to_date('15.3.2016', 'dd.mm.yyyy'), to_date('2.6.2016 23:59:59', 'dd.mm.yyyy hh24:mi:ss') from dual
),
m ( skp_person, date_insurance_start, m_date ) as (
  select skp_person, date_insurance_start,
         max(date_insurance_end + 1/86400)
           over (partition by skp_person order by date_insurance_start
                 rows between unbounded preceding and 1 preceding)
  from   inputs
  union all
  select skp_person, null, max(date_insurance_end + 1/86400)
  from   inputs
  group  by skp_person
),
f ( skp_person, date_insurance_start, e_date ) as (
  select skp_person, date_insurance_start,
         lead(m_date) over
             (partition by skp_person order by date_insurance_start)
  from   m
  where  date_insurance_start > m_date
     or  date_insurance_start is null or m_date is null
)
select skp_person, date_insurance_start, e_date - 1/86400 as date_insurance_end
from   f
where  date_insurance_start is not null
;

Output: (using my NLS_DATE_FORMAT settings)

SKP_PERSON DATE_INSURANCE_STAR DATE_INSURANCE_END
---------- ------------------- -------------------
         1 07.11.2015 00:00:00 01.01.3000 00:00:00
         2 10.04.2015 00:00:00 01.08.2016 23:59:59
         3 28.03.2016 00:00:00 01.01.3000 00:00:00
         4 05.12.2015 00:00:00 01.05.2016 23:59:59
         5 15.01.2016 00:00:00 02.03.2016 23:59:59
         5 15.03.2016 00:00:00 02.06.2016 23:59:59

Time spent at multiple (same) location - window function, maybe?

Assign a flag of 1 when the location changes, use sum(flag) over (...) to create a grouping column, and finally group the data:

select id, min(loc), min(start_), max(end_)
from (
       select a.*, sum(flag) over (order by start_) grp
       from (
              select t.*,
                     case when loc <> lag(loc) over (order by start_)
                          then 1 else 0
                     end flag
              from t
            ) a
     )
group by id, grp

dbfiddle demo

Tested in Oracle. In your final code, add partition by id to both analytic clauses; a sketch of that version follows.
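
Here is roughly what the partitioned version looks like (a sketch only, assuming the same table t with columns id, loc, start_ and end_ as in the demo):

select id, min(loc) loc, min(start_) start_, max(end_) end_
from (
       -- running sum of the change flags, restarted per id, gives one grp per island
       select a.*, sum(flag) over (partition by id order by start_) grp
       from (
              -- flag = 1 whenever the location differs from the previous row of the same id
              select t.*,
                     case when loc <> lag(loc) over (partition by id order by start_)
                          then 1 else 0
                     end flag
              from t
            ) a
     )
group by id, grp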

select not available hours

Setup (test data):

SQL> select * from machines_hist;

CONTRACT   START_TIME          FIN_TIME
---------- ------------------- -------------------
C1         2015-12-30 05:10:10 2016-01-01 15:10:10
C1         2016-01-02 10:16:20 2016-01-03 12:14:10
C1         2016-01-25 10:16:20 2016-02-10 17:11:10
C1         2016-01-05 02:16:20 2016-01-06 19:18:10
C2         2016-01-15 12:20:22 2016-01-17 13:40:10
C2         2016-02-23 04:13:50 2016-02-24 02:20:44
C3         2016-02-20 10:13:20 2016-02-20 11:16:40
C4         2015-12-23 20:00:00 2015-12-24 12:23:00
C5         2015-12-31 22:34:00 2016-02-23 00:00:00
9 rows selected.
Elapsed: 00:00:00.33

Query (notice the bind variables, which would normally be provided by the application):

with a as (select to_date(:mon || '-' || :yr, 'MON-yyyy') as month_start from dual),
     b as (select add_months(month_start, 1) as month_end from a),
     c as (select contract, greatest(month_start, start_time) as st,
                  least(month_end, fin_time) as fin
           from   machines_hist join a on fin_time >= month_start
                                 join b on start_time <= month_end),
     m as (select contract, st,
                  max(fin) over (partition by contract order by st
                                 rows between unbounded preceding and 1 preceding) as m_time
           from   c
           union all
           select contract, NULL, max(fin) from c group by contract),
     n as (select contract, st, m_time
           from   m
           where  st > m_time or st is null or m_time is null),
     f as (select contract, st as st_downtime,
                  lead(m_time) over (partition by contract order by st) as fin_downtime
           from   n)
select contract, max(:mon || '-' || :yr) as mth,
       round(100 * sum(fin_downtime - st_downtime) /
             ((select month_end from b) - (select month_start from a)), 2) as downtime_pct
from   f
where  st_downtime is not null
group  by contract
order  by contract
/

Bind variables (illustrated here with the SQL*Plus interface; each program has its own mechanism):

SQL> variable yr number
SQL> variable mon varchar2(3)
SQL> begin :mon := 'JAN'; :yr := 2016; end;
2 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.03

Output (notes: the script was saved as "downtime.sql" and called through SQL*Plus; the downtime percentage for the month is expressed as a number such as 22.3, meaning 22.3%; if a contract had NO downtime, it is not included in the output):

SQL> start downtime

CONTRACT   MTH          DOWNTIME_PCT
---------- ------------ ------------
C1         JAN-2016            32.24
C2         JAN-2016             6.63
C5         JAN-2016              100
3 rows selected.
Elapsed: 00:00:00.19
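
As a rough sanity check (hand-computed from the test data above, not part of the script's output): C1's downtime clipped to January 2016 is about 0.63 + 1.08 + 1.71 + 6.57 ≈ 9.99 days, and 9.99 / 31 ≈ 0.3224, which matches the 32.24 shown; C5 is down for the whole month, hence 100.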

SQL Server - cumulative sum on overlapping data - getting date that sum reaches a given value

This is one way to do it:

;WITH CTErn AS (
    SELECT activity_client_id, activity_type,
           activity_start_date, activity_end_date,
           ROW_NUMBER() OVER (PARTITION BY activity_client_id
                              ORDER BY activity_start_date) AS rn
    FROM activities
),
CTEdiff AS (
    SELECT c1.activity_client_id, c1.activity_type,
           x.activity_start_date, c1.activity_end_date,
           DATEDIFF(mi, x.activity_start_date, c1.activity_end_date) AS diff,
           ROW_NUMBER() OVER (PARTITION BY c1.activity_client_id
                              ORDER BY x.activity_start_date) AS seq
    FROM CTErn AS c1
    LEFT JOIN CTErn AS c2 ON c1.rn = c2.rn + 1
    CROSS APPLY (SELECT CASE
                            WHEN c1.activity_start_date < c2.activity_end_date
                            THEN c2.activity_end_date
                            ELSE c1.activity_start_date
                        END) x(activity_start_date)
)
SELECT TOP 1 client_id, client_sign_up_date, activity_start_date,
             hoursOfActivity
FROM CTEdiff AS c1
INNER JOIN clients AS c2 ON c1.activity_client_id = c2.client_id
CROSS APPLY (SELECT SUM(diff) / 60.0
             FROM CTEdiff AS c3
             WHERE c3.seq <= c1.seq) x(hoursOfActivity)
WHERE hoursOfActivity >= 5
ORDER BY seq

Common Table Expressions and ROW_NUMBER() were introduced with SQL Server 2005, so the above query should work for that version.

Demo here

The first CTE, i.e. CTErn, produces the following output:

client_id  activity_type  start_date        end_date          rn
112        Interview      2015-06-01 09:00  2015-06-01 11:00  1
112        CV updating    2015-06-01 09:30  2015-06-01 11:30  2
112        Course         2015-06-02 09:00  2015-06-02 16:00  3
112        Interview      2015-06-03 09:00  2015-06-03 10:00  4

The second CTE, i.e. CTEdiff, uses the above table expression to calculate the time difference for each record, taking into account any overlap with the previous record:

client_id  activity_type  start_date        end_date          diff  seq
112        Interview      2015-06-01 09:00  2015-06-01 11:00  120   1
112        CV updating    2015-06-01 11:00  2015-06-01 11:30  30    2
112        Course         2015-06-02 09:00  2015-06-02 16:00  420   3
112        Interview      2015-06-03 09:00  2015-06-03 10:00  60    4

The final query calculates the cumulative sum of the time differences and selects the first record where the total reaches 5 hours of activity.

The above query will work for simple interval overlaps, i.e. when just the end date of an activity overlaps the start date of the next activity.
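
For instance, a fully contained activity would not be handled correctly, because the start-date adjustment only looks at the immediately preceding row. A hypothetical extra row (not from the question's data) that illustrates the problem:

-- Hypothetical row: this activity lies entirely inside the 09:00-16:00 Course,
-- so its adjusted start becomes 16:00 and its diff comes out negative.
INSERT INTO activities (activity_client_id, activity_type,
                        activity_start_date, activity_end_date)
VALUES (112, 'Phone call', '2015-06-02 10:00', '2015-06-02 11:00');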

Issue with overlapping of div with other div while zooming

For the media query, you can use something like this to make it responsive. Please let me know about the result after applying this.

@media only screen and (max-width: 600px) {
  .office-address-details {
    width: 100%;
    display: block;
    margin: 20px auto;
  }
}

Bootstrap 5 fixed-bottom footer covering content

I fixed this problem by using the min-height property on the page content.

shapefile not overlapping patches

You can use gis:intersecting to do this, but it's not very efficient for what you want to do. For an example, I downloaded the airports dataset from this site, which has some free GIS data. The airports dataset (ne_10m_airports.shp) contains point data for each airport and some info about each airport. To assign some data to patches, see below (some info in the comments):

extensions [ gis ]

globals [ airports ]

patches-own [ airport-name ]

to setup
  ca
  resize-world 0 125 0 50
  set-patch-size 5

  ; Load the dataset
  set airports gis:load-dataset "ne_10m_airports.shp"
  gis:set-world-envelope gis:envelope-of airports

  ; For each point listed in 'airports', ask any patches
  ; that are intersecting that point to take the name
  ; of the airport that intersects them (since these are points,
  ; intersection in this case means the airport coordinates
  ; lie within the patch).
  foreach gis:feature-list-of airports [
    x ->
    ask patches gis:intersecting x [
      set airport-name gis:property-value x "NAME"
      set pcolor red
    ]
  ]

  reset-ticks
end

You could do this with the "MAXTEMPHM" value from your temperature datasets. However, you'll have to play around with your NetLogo world size to make sure that the number of patches corresponds to the number of points you have; gis:set-world-envelope only aligns the GIS datasets to the NetLogo world and doesn't affect the patches present. If you have 800,000 temperature points to load in, you'd need to make your NetLogo world somewhere around 895 patches square, which is a pretty big world, and it will take a while to load the temperature data as described above. It would be simpler and more efficient (and noticeably faster) to use a raster dataset and gis:apply-raster.


