How to count overlapping datetime intervals in Pandas?

chestnut:

I have the following DataFrame with two datetime columns:

    start               end
0   01.01.2018 00:47    01.01.2018 00:54
1   01.01.2018 00:52    01.01.2018 01:03
2   01.01.2018 00:55    01.01.2018 00:59
3   01.01.2018 00:57    01.01.2018 01:16
4   01.01.2018 01:00    01.01.2018 01:12
5   01.01.2018 01:07    01.01.2018 01:24
6   01.01.2018 01:33    01.01.2018 01:38
7   01.01.2018 01:34    01.01.2018 01:47
8   01.01.2018 01:37    01.01.2018 01:41
9   01.01.2018 01:38    01.01.2018 01:41
10  01.01.2018 01:39    01.01.2018 01:55

I would like to count how many intervals are active at any given time, i.e. how many have started but not yet ended (in other words: how many times each row overlaps with the rest of the rows).

E.g. from 00:47 to 00:52 only one interval is active, from 00:52 to 00:54 two are, from 00:54 to 00:55 only one again, and so on.

I tried stacking the columns onto each other, sorting by date, and then iterating through the whole DataFrame, adding +1 to a counter for each "start" and -1 for each "end". It works, but on my original DataFrame, which has a few million rows, the iteration takes forever - I need to find a quicker way.

My original basic-and-not-very-good code:

import pandas as pd
import numpy as np

df = pd.read_csv('something.csv', sep=';')

# parse the dd.mm.yyyy strings as datetimes so sorting is chronological
df['start'] = pd.to_datetime(df['start'], dayfirst=True)
df['end'] = pd.to_datetime(df['end'], dayfirst=True)

# stack 'start'/'end' into one long frame of timestamped events
df = df.stack().to_frame()
df = df.reset_index(level=1)
df.columns = ['status', 'time']
df = df.sort_values('time')
df['counter'] = np.nan
df = df.reset_index(drop=True)

print(df.head(10))

gives:

   status                time  counter
0   start 2018-01-01 00:47:00      NaN
1   start 2018-01-01 00:52:00      NaN
2     end 2018-01-01 00:54:00      NaN
3   start 2018-01-01 00:55:00      NaN
4   start 2018-01-01 00:57:00      NaN
5     end 2018-01-01 00:59:00      NaN
6   start 2018-01-01 01:00:00      NaN
7     end 2018-01-01 01:03:00      NaN
8   start 2018-01-01 01:07:00      NaN
9     end 2018-01-01 01:12:00      NaN

and:

counter = 0

# running count: +1 for every start event, -1 for every end event
for index, row in df.iterrows():
    if row['status'] == 'start':
        counter += 1
    else:
        counter -= 1
    df.loc[index, 'counter'] = counter

final output (first 10 rows):

   status                time  counter
0   start 2018-01-01 00:47:00      1.0
1   start 2018-01-01 00:52:00      2.0
2     end 2018-01-01 00:54:00      1.0
3   start 2018-01-01 00:55:00      2.0
4   start 2018-01-01 00:57:00      3.0
5     end 2018-01-01 00:59:00      2.0
6   start 2018-01-01 01:00:00      3.0
7     end 2018-01-01 01:03:00      2.0
8   start 2018-01-01 01:07:00      3.0
9     end 2018-01-01 01:12:00      2.0

Is there any way I can do this without using iterrows()?

Thanks in advance!

ansev:

Use Series.cumsum with Series.map (or Series.replace):

new_df = df.melt(var_name='status', value_name='time').sort_values('time')
new_df['counter'] = new_df['status'].map({'start': 1, 'end': -1}).cumsum()
print(new_df)
   status                time  counter
0   start 2018-01-01 00:47:00        1
1   start 2018-01-01 00:52:00        2
11    end 2018-01-01 00:54:00        1
2   start 2018-01-01 00:55:00        2
3   start 2018-01-01 00:57:00        3
13    end 2018-01-01 00:59:00        2
4   start 2018-01-01 01:00:00        3
12    end 2018-01-01 01:03:00        2
5   start 2018-01-01 01:07:00        3
15    end 2018-01-01 01:12:00        2
14    end 2018-01-01 01:16:00        1
16    end 2018-01-01 01:24:00        0
6   start 2018-01-01 01:33:00        1
7   start 2018-01-01 01:34:00        2
8   start 2018-01-01 01:37:00        3
9   start 2018-01-01 01:38:00        4
17    end 2018-01-01 01:38:00        3
10  start 2018-01-01 01:39:00        4
19    end 2018-01-01 01:41:00        3
20    end 2018-01-01 01:41:00        2
18    end 2018-01-01 01:47:00        1
21    end 2018-01-01 01:55:00        0
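
Note the tie at 01:38, where a start and an end share the same timestamp: the sort order of those two rows decides whether the counter peaks at 4 (as above, where the start happens to sort first) or only reaches 3. If intervals that merely touch should not count as overlapping, a secondary sort key can force the ends to be processed first. A minimal sketch, assuming df is the parsed two-column frame from the question and pandas is imported as pd (the helper column name 'rank' is just for illustration):

order = {'end': 0, 'start': 1}
new_df = (
    df.melt(var_name='status', value_name='time')
      .assign(rank=lambda d: d['status'].map(order))   # end=0 sorts before start=1
      .sort_values(['time', 'rank'], kind='mergesort')  # stable sort keeps ties deterministic
      .drop(columns='rank')
)
new_df['counter'] = new_df['status'].map({'start': 1, 'end': -1}).cumsum()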

We could also use numpy.cumsum:

new_df['counter'] = np.where(new_df['status'].eq('start'), 1, -1).cumsum()
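
For reference, here is a self-contained sketch of the whole pipeline, with a three-row frame standing in for the full sample data; dayfirst=True handles the dd.mm.yyyy format:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'start': ['01.01.2018 00:47', '01.01.2018 00:52', '01.01.2018 00:55'],
    'end':   ['01.01.2018 00:54', '01.01.2018 01:03', '01.01.2018 00:59'],
})
# parse the dd.mm.yyyy strings as datetimes
df['start'] = pd.to_datetime(df['start'], dayfirst=True)
df['end'] = pd.to_datetime(df['end'], dayfirst=True)

# one event per start/end, sorted chronologically; +1/-1 running sum
new_df = df.melt(var_name='status', value_name='time').sort_values('time')
new_df['counter'] = np.where(new_df['status'].eq('start'), 1, -1).cumsum()
print(new_df)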
