Euclidean distance between PyTorch embeddings

Euclidean distance between embeddings

The broadcasting approach below computes the full n x n distance matrix for a single batch of embeddings:

    def compute_D(embeddings):
        # Broadcast the (n, d) embeddings against themselves:
        # t1[i, j] = embeddings[i] and t2[i, j] = embeddings[j], both (n, n, d)
        n, d = embeddings.shape
        t1 = embeddings.unsqueeze(1).expand(n, n, d)
        t2 = embeddings.unsqueeze(0).expand(n, n, d)
        # sum the squared differences over the feature axis, then take the
        # square root (clamp keeps the gradient finite at zero distance)
        return (t1 - t2).pow(2).sum(2).clamp(min=1e-12).sqrt()
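
A quick sanity check (a sketch using a random batch): the result should agree with torch.cdist, PyTorch's built-in pairwise p-norm distance.

    import torch

    embeddings = torch.randn(5, 128)
    d = compute_D(embeddings)
    print(d.shape)  # torch.Size([5, 5])
    print(torch.allclose(d, torch.cdist(embeddings, embeddings), atol=1e-5))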

Euclidean distance between two feature tensors

    import torch.nn.functional as F

    distance = F.pairwise_distance(rep_a, rep_b, p=2)
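
A minimal usage sketch (rep_a and rep_b here are random stand-ins for two batches of features with the same shape [N, d]): pairwise_distance matches rows one-to-one, so it returns N distances rather than an N x N matrix.

    import torch
    import torch.nn.functional as F

    rep_a = torch.randn(4, 128)
    rep_b = torch.randn(4, 128)
    distance = F.pairwise_distance(rep_a, rep_b, p=2)
    print(distance.shape)  # torch.Size([4])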

Quick calculation

The trick is to expand the squared distance, ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 * x_i . y_j, so the whole (m, n) distance matrix comes from the row norms plus a single matrix product, with no (m, n, d) intermediate (from https://blog.csdn.net/IT_forlearn/article/details/100022244):

    import torch

    def euclidean_dist(x, y):
        """
        Args:
          x: pytorch tensor, with shape [m, d]
          y: pytorch tensor, with shape [n, d]
        Returns:
          dist: pytorch tensor, with shape [m, n]
        """
        m, n = x.size(0), y.size(0)
        # pow() squares each element and sum(1) adds over the feature axis,
        # giving xx shape (m, 1); expand() then broadcasts it to (m, n)
        xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)
        # yy is built the same way with shape (n, m) and transposed at the end
        yy = torch.pow(y, 2).sum(1, keepdim=True).expand(n, m).t()
        dist = xx + yy
        # in-place addmm_ computes dist = 1 * dist + (-2) * x @ y.t()
        dist.addmm_(x, y.t(), beta=1, alpha=-2)
        # clamp() bounds the elements from below so sqrt() is safe; the result
        # is the distance matrix between the samples
        dist = dist.clamp(min=1e-12).sqrt()  # for numerical stability
        return dist
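
A short check of euclidean_dist against torch.cdist (a sketch with random inputs; the two should agree up to the numerical-stability clamp):

    x = torch.randn(3, 64)
    y = torch.randn(5, 64)
    dist = euclidean_dist(x, y)
    print(dist.shape)  # torch.Size([3, 5])
    print(torch.allclose(dist, torch.cdist(x, y), atol=1e-5))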

Original post: https://blog.csdn.net/weixin_42764932/article/details/112998284