Check between two dataframes whether the same pattern is present, using groupby in pandas

Grendel :

Hello, I have a dataframe df1 such as:

Acc_number
ACC1.1_CP_Sp1_1
ACC2.1_CP_Sp1_1
ACC3.1_CP_Sp1_1
ACC4.1_CP_Sp1_1

and another df2 such as:

Cluster_nb SeqName
Cluster1    YP_009216714
Cluster1    YP_002051918
Cluster1    JZSA01005235.1:37071-37973(-):Sp1_1
Cluster1    NW_014464344.1:68901-69716(-):Sp2_3
Cluster1    YP_001956729
Cluster1    ACC1.1_CP_Sp1_1
Cluster1    YP_009213712
Cluster2    ACC2.1_CP_Sp1_1
Cluster2    NR_014464231.1:35866-36717(-):Sp1_1
Cluster2    NR_014464232.1:35889-36788(-):Sp1_1
Cluster2    YP_009213728
Cluster3    ACC3.1_CP_Sp1_1
Cluster3    NK_014464231.1:35772-38898(-):Sp1_2
Cluster3    NZ_014464232.1:3533-78787(+):Sp1_2
Cluster3    YP_009213723
Cluster3    YP_009213739

I want to check, for each Acc_number in df1, whether the Cluster_nb group that contains Acc_number[i] also contains another sequence with the same extension (the part after _CP_ in the Acc_number) after its (+ or -): part.
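
To make the pattern explicit, the extension can be pulled out of both name formats like this (just a small sketch to illustrate what I mean by extension):

import re

acc = 'ACC1.1_CP_Sp1_1'
seq = 'JZSA01005235.1:37071-37973(-):Sp1_1'

acc_ext = re.sub(r'.*_CP_', '', acc)   # part after _CP_      -> 'Sp1_1'
seq_ext = seq.rsplit(':', 1)[-1]       # part after last ':'  -> 'Sp1_1'
print(acc_ext == seq_ext)              # True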

For example

For ACC1.1_CP_Sp1_1 as i:

By running:

i = 'ACC1.1_CP_Sp1_1'
df = df2.loc[df2['SeqName'] == i]
Cluster_number = df['Cluster_nb'].iloc[0]
df3 = df2.loc[df2['Cluster_nb'] == Cluster_number]
print(df3)

Cluster_nb SeqName
Cluster1    YP_009216714
Cluster1    YP_002051918
Cluster1    JZSA01005235.1:37071-37973(-):Sp1_1
Cluster1    NW_014464344.1:68901-69716(-):Sp2_3
Cluster1    YP_001956729
Cluster1    ACC1.1_CP_Sp1_1
Cluster1    YP_009213712

I can see that the sequence JZSA01005235.1:37071-37973(-):Sp1_1 on line 3 has the same Sp1_1 ending.

So here the answer is yes: ACC1.1_CP_Sp1_1 is in the same cluster as another sequence with the same ending (but with (-) or (+): in its name).
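
In code, the check I want for this single cluster would be something like the following sketch (using the df3 printed above; 'Sp1_1' is the part after _CP_ in ACC1.1_CP_Sp1_1):

ending = 'Sp1_1'   # part after _CP_ in ACC1.1_CP_Sp1_1
# keep only the names that contain (+) or (-), then ask whether
# any of them ends with the same ':Sp1_1' extension
with_strand = df3['SeqName'].str.contains(r'\([+-]\)', regex=True)
print(df3.loc[with_strand, 'SeqName'].str.endswith(':' + ending).any())   # True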

For ACC3.1_CP_Sp1_1 as i:

By running:

i = 'ACC3.1_CP_Sp1_1'
df = df2.loc[df2['SeqName'] == i]
Cluster_number = df['Cluster_nb'].iloc[0]
df3 = df2.loc[df2['Cluster_nb'] == Cluster_number]
print(df3)

Cluster_nb SeqName
Cluster3    ACC3.1_CP_Sp1_1
Cluster3    NK_014464231.1:35772-38898(-):Sp1_2
Cluster3    NZ_014464232.1:3533-78787(+):Sp1_2
Cluster3    YP_009213723
Cluster3    YP_009213739

I see that in the cluster no other sequence has the same ending as ACC3.1_CP_Sp1_1, so the answer is no.

The results should be summarized in a result dataframe such as:

Acc_number present cluster
ACC1.1_CP_Sp1_1 Yes Cluster1
ACC2.1_CP_Sp1_1 Yes Cluster2
ACC3.1_CP_Sp1_1 No NaN
ACC4.1_CP_Sp1_1 No NaN

Thank you a lot for your help.

I tried:

import re
import pandas as pd

rows = []
for CP in df1['Acc_number']:
    df = df2.loc[df2['SeqName'] == CP]
    if df.empty:  # Acc_number absent from df2
        continue
    Cluster_number = df['Cluster_nb'].iloc[0]
    df3 = df2.loc[df2['Cluster_nb'] == Cluster_number]
    for a in df3['SeqName']:
        if '(+)' in a or '(-)' in a:
            # same extension (part after _CP_) in the sequence name?
            if re.sub('.*_CP_', '', CP) in a:
                rows.append({"Cluster": Cluster_number, "Acc_nb": CP, "present": 'yes'})
                print(CP, 'yes')
new_df = pd.DataFrame(rows, columns=["Cluster", "Acc_nb", "present"])
sammywemmy :

I made comments in the code itself; the overview is to build a unique identifier for each row, merge the dataframes, and keep only the columns you are interested in:

import numpy as np

# note: in this answer df1 is the question's df2 (the Cluster_nb/SeqName table)
# and df is the question's df1 (the Acc_number table)

#create an 'ending' column
#where you split off the part after ':'
df1['ending'] = df1.loc[df1.SeqName.str.contains(':'), 'SeqName']
df1['ending'] = df1['ending'].str.split(':').str[-1]
#get the cluster number and add it to the ending column;
#it will serve as a unique identifier for each row
df1['ending'] = df1.Cluster_nb.str[-1].str.cat(df1['ending'], sep='_')
#get rid of nulls and duplicates; keep only the relevant columns
df1 = df1.dropna().drop('SeqName', axis=1).drop_duplicates('ending')

#create the ending column here as well
df['ending'] = df['Acc_number'].str.extract(r'((?<=ACC)\d)')
#merge the acc number with the ending to serve as a unique identifier
df['ending'] = df['ending'].str.cat(df['Acc_number'].str.extract(r'((?<=P_).*)'), sep='_')

#merge both dataframes
(df
 .merge(df1, on='ending', how='left')
 #keep only the relevant columns
 .filter(['Acc_number', 'Cluster_nb'])
 #create the present column
 .assign(present=lambda x: np.where(x.Cluster_nb.isna(), 'no', 'yes'))
 .rename(columns={'Cluster_nb': 'cluster'})
)

     Acc_number     cluster     present
0   ACC1.1_CP_Sp1_1 Cluster1    yes
1   ACC2.1_CP_Sp1_1 Cluster2    yes
2   ACC3.1_CP_Sp1_1 NaN         no
3   ACC4.1_CP_Sp1_1 NaN         no
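
If you prefer to go through groupby directly (as the title suggests), a sketch along these lines should also work, assuming df1 and df2 are named as in the question (df1 holds Acc_number, df2 holds Cluster_nb/SeqName) and each Acc_number appears at most once in df2:

import re
import numpy as np
import pandas as pd

# group df2 once so every Acc_number can be checked against its own cluster
groups = df2.groupby('Cluster_nb')
# map each SeqName to its cluster for the initial lookup
seq_to_cluster = df2.drop_duplicates('SeqName').set_index('SeqName')['Cluster_nb']

def check(acc):
    ending = re.sub(r'.*_CP_', '', acc)      # extension, e.g. 'Sp1_1'
    cluster = seq_to_cluster.get(acc)        # None if acc is absent from df2
    if cluster is None:
        return pd.Series({'present': 'No', 'cluster': np.nan})
    seqs = groups.get_group(cluster)['SeqName']
    # another member of the same cluster with (+) or (-) and the same ending?
    hit = (seqs.str.contains(r'\([+-]\)', regex=True)
           & seqs.str.endswith(':' + ending)).any()
    return pd.Series({'present': 'Yes' if hit else 'No',
                      'cluster': cluster if hit else np.nan})

out = pd.concat([df1, df1['Acc_number'].apply(check)], axis=1)
print(out)

This checks membership through the cluster group itself rather than through a constructed merge key.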
