I have a list of unique random integers and a dataframe with a column of lists, like below:
>>> panel
[1, 10, 9, 5, 6]
>>> df
        col1
0     [1, 5]
1  [2, 3, 4]
2  [9, 10, 6]
The output I would like to have is the size of the overlap between panel
and each individual list in the dataframe:
>>> result
        col1  res
0     [1, 5]    2
1  [2, 3, 4]    0
2  [9, 10, 6]    3
Currently, I am using the apply function, but I was wondering if there are faster ways, since I need to create a lot of panels and loop through this task for each panel.
# My version right now
def cntOverlap(panel, series):
    # Typically the lists inside df will be much shorter than panel,
    # so I think the fastest way would be converting the panel into a set
    # and looping through the lists within the dataframe
    return sum(1 for x in series if x in panel)
    #return len(np.setxor1d(list(panel), series))
    #return len(panel.difference(series))

for i, panel in enumerate(list_of_panels):
    panel = set(panel)
    df[f"panel_{i}"] = df["col1"].apply(lambda x: cntOverlap(panel, x))
Owing to the variable-length data per row, we need to iterate (explicitly or implicitly, i.e. under the hood) while staying within Python. But we can optimize to a level where the per-iteration compute is minimized. Going with that philosophy, here's one with array-assignment and some masking -
# l is the input list of unique random integers (the panel)
s = df.col1
max_num = 10 # max number across df and l; if not known use : max(max(map(max, s)), max(l))
map_ar = np.zeros(max_num+1, dtype=bool)
map_ar[l] = True
df['res'] = [map_ar[v].sum() for v in s]
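Since the question needs to do this for many panels, here's a minimal sketch (an assumption on my part, reusing the question's list_of_panels name and the sample data above) of how the lookup-table version could slot into that loop -
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [[1, 5], [2, 3, 4], [9, 10, 6]]})
list_of_panels = [[1, 10, 9, 5, 6]]  # the example panel from the question

s = df["col1"]
# size the table to cover the largest value in either the lists or any panel
max_num = max(max(map(max, s)), max(max(p) for p in list_of_panels))

for i, panel in enumerate(list_of_panels):
    map_ar = np.zeros(max_num + 1, dtype=bool)
    map_ar[panel] = True  # flag the panel's members
    df[f"panel_{i}"] = [map_ar[v].sum() for v in s]

# df now gains one column per panel; for the sample data,
# panel_0 comes out as [2, 0, 3], matching the expected result.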
Alternatively, with 2D array-assignment to further minimize the per-iteration compute -
# One row per dataframe entry, flagged at the panel's positions
map_ar = np.zeros((len(df),max_num+1), dtype=bool)
map_ar[:,l] = True
# Knock out the positions present in each row's list; what's left in
# row i is the part of l missing from that row
for i,v in enumerate(s):
    map_ar[i,v] = False
df['res'] = len(l)-map_ar.sum(1)
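The 2D variant trades memory for fewer Python-level operations: it allocates a len(df) x (max_num+1) boolean table up front, so for a long dataframe or a large max_num the 1D lookup may be the more practical of the two. Note that both versions rely on the values being small non-negative integers, since they are used directly as array indices.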