Knowledge Distillation Survey: Code Collection


# Common imports assumed for all snippets in this post
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np


# Attention transfer (AT) loss; the class header follows the common RepDistiller implementation,
# since the snippet in the original post starts mid-__init__.
class Attention(nn.Module):
    def __init__(self, p=2):
        super(Attention, self).__init__()
        self.p = p

    def forward(self, g_s, g_t):
        return [self.at_loss(f_s, f_t) for f_s, f_t in zip(g_s, g_t)]

    def at_loss(self, f_s, f_t):
        # match the spatial sizes of student/teacher feature maps before comparing
        s_H, t_H = f_s.shape[2], f_t.shape[2]
        if s_H > t_H:
            f_s = F.adaptive_avg_pool2d(f_s, (t_H, t_H))
        elif s_H < t_H:
            f_t = F.adaptive_avg_pool2d(f_t, (s_H, s_H))
        else:
            pass
        return (self.at(f_s) - self.at(f_t)).pow(2).mean()

    def at(self, f):
        # attention map: channel-wise mean of A^p, flattened and L2-normalized
        return F.normalize(f.pow(self.p).mean(1).view(f.size(0), -1))

Adaptive average pooling is first used to bring the two feature maps to the same spatial size, and an MSE loss then measures the difference between the two attention maps.
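As a sketch of what the code above computes, following the AT paper (the paper sums |A_c|^p over channels and squared differences over positions; the code uses means, which only changes constant factors):

Q = \mathrm{vec}\Big(\tfrac{1}{C}\sum_{c=1}^{C}|A_c|^{p}\Big), \qquad
\mathcal{L}_{AT} = \Big\lVert \tfrac{Q_S}{\lVert Q_S\rVert_2} - \tfrac{Q_T}{\lVert Q_T\rVert_2} \Big\rVert_2^2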

4. SP: Similarity-Preserving

Full name: Similarity-Preserving Knowledge Distillation

Link:

Published in: ICCV19

SP is a relation-based knowledge distillation method. The idea is to distill similarity-preserving knowledge: inputs that elicit similar (or dissimilar) activations in the teacher network should also elicit similar (or dissimilar) activations in the student network. The processing flow (shown as a figure in the paper) is: the corresponding feature maps of the teacher and the student are used to compute inner products within a batch, giving bs×bs similarity matrices, and the mean squared error between the two similarity matrices is used as the loss.

The final loss is:
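A sketch following the paper (b is the batch size; G is formed from the batch features and row-wise L2-normalized, as in the code below):

G = Q\,Q^{\top},\quad \tilde{G}_{[i,:]} = G_{[i,:]} / \lVert G_{[i,:]}\rVert_2, \qquad
\mathcal{L}_{SP} = \frac{1}{b^2}\,\big\lVert \tilde{G}_T - \tilde{G}_S \big\rVert_F^2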

The G's here are the bs×bs similarity matrices computed within a batch.

Implementation:

class Similarity(nn.Module):
    """Similarity-Preserving Knowledge Distillation, ICCV2019, verified by original author"""
    def __init__(self):
        super(Similarity, self).__init__()

    def forward(self, g_s, g_t):
        return [self.similarity_loss(f_s, f_t) for f_s, f_t in zip(g_s, g_t)]

    def similarity_loss(self, f_s, f_t):
        bsz = f_s.shape[0]
        f_s = f_s.view(bsz, -1)
        f_t = f_t.view(bsz, -1)

        # batch similarity matrices (bs x bs), row-normalized
        G_s = torch.mm(f_s, torch.t(f_s))
        # G_s = G_s / G_s.norm(2)
        G_s = torch.nn.functional.normalize(G_s)
        G_t = torch.mm(f_t, torch.t(f_t))
        # G_t = G_t / G_t.norm(2)
        G_t = torch.nn.functional.normalize(G_t)

        G_diff = G_t - G_s
        loss = (G_diff * G_diff).view(-1, 1).sum(0) / (bsz * bsz)
        return loss

5. CC: Correlation Congruence

Full name: Correlation Congruence for Knowledge Distillation

Link:

Published in: ICCV19

CC is also a relation-based knowledge distillation method. Distillation should not only minimize the difference between the teacher's and the student's representations of individual samples, but also transfer the relations between pairs of samples; this correlation congruence is measured by the Euclidean distance between the teacher's and the student's correlation matrices.

The overall loss is:
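A rough sketch following the CC paper (the exact weighting may differ): the correlation matrices of the teacher's and student's batch embeddings are matched, and the term is added to the usual cross-entropy and KD losses,

\mathcal{L}_{CC} = \frac{1}{n^2}\big\lVert \psi(F_T) - \psi(F_S)\big\rVert_2^2, \qquad
\mathcal{L} = \alpha\,\mathcal{L}_{CE} + (1-\alpha)\,\mathcal{L}_{KD} + \beta\,\mathcal{L}_{CC}

where \psi(\cdot) maps a batch of n features to an n×n correlation matrix.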

Implementation:

class Correlation(nn.Module):
    """Similarity-preserving loss. My original own reimplementation
    based on the paper before emailing the original authors."""
    def __init__(self):
        super(Correlation, self).__init__()

    def forward(self, f_s, f_t):
        return self.similarity_loss(f_s, f_t)

    def similarity_loss(self, f_s, f_t):
        bsz = f_s.shape[0]
        f_s = f_s.view(bsz, -1)
        f_t = f_t.view(bsz, -1)

        # correlation (Gram) matrices, normalized by their L2 norm
        G_s = torch.mm(f_s, torch.t(f_s))
        G_s = G_s / G_s.norm(2)
        G_t = torch.mm(f_t, torch.t(f_t))
        G_t = G_t / G_t.norm(2)

        G_diff = G_t - G_s
        loss = (G_diff * G_diff).view(-1, 1).sum(0) / (bsz * bsz)
        return loss

6. VID: Variational Information Distillation

Full name: Variational Information Distillation for Knowledge Transfer

Link:

Published in: CVPR19

VID uses mutual information to measure the relationship between the student and the teacher network. Mutual information expresses how strongly two variables depend on each other; the larger it is, the stronger the dependency. It is computed as:
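The standard definition referred to here:

I(t; s) = H(t) - H(t \mid s) = -\mathbb{E}_t[\log p(t)] + \mathbb{E}_{t,s}[\log p(t \mid s)]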

The mutual information is the entropy of the teacher's representation minus the conditional entropy of the teacher given the student. The objective is to maximize the mutual information: a larger value means a smaller H(t|s), i.e., once the student's representation is known there is little uncertainty left about the teacher's representation, which indicates the student has learned the teacher well.

The overall loss (given in the paper) maximizes this mutual information over the selected layer pairs.

Since p(t|s) is hard to compute, a variational distribution q(t|s) is used to approximate the true distribution.

Here q(t|s) is modeled as a Gaussian whose variance is learnable (the log_scale in the code below):
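A sketch of the resulting per-feature objective, matching the neg_log_prob term in the code below (μ(s) is the regressor output and σ_c the learnable per-channel standard deviation); minimizing it maximizes a variational lower bound on I(t; s):

-\log q(t \mid s) = \sum_{c}\Big(\log \sigma_c + \frac{(t_c - \mu_c(s))^2}{2\sigma_c^2}\Big) + \text{const}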

Implementation:

class VIDLoss(nn.Module):
    """Variational Information Distillation for Knowledge Transfer (CVPR 2019),
    code from author: """
    def __init__(self,
                 num_input_channels,
                 num_mid_channel,
                 num_target_channels,
                 init_pred_var=5.0,
                 eps=1e-5):
        super(VIDLoss, self).__init__()

        def conv1x1(in_channels, out_channels, stride=1):
            return nn.Conv2d(
                in_channels, out_channels,
                kernel_size=1, padding=0,
                bias=False, stride=stride)

        self.regressor = nn.Sequential(
            conv1x1(num_input_channels, num_mid_channel),
            nn.ReLU(),
            conv1x1(num_mid_channel, num_mid_channel),
            nn.ReLU(),
            conv1x1(num_mid_channel, num_target_channels),
        )
        # learnable per-channel variance, parameterized through a softplus
        self.log_scale = torch.nn.Parameter(
            np.log(np.exp(init_pred_var - eps) - 1.0) * torch.ones(num_target_channels)
        )
        self.eps = eps

    def forward(self, input, target):
        # pool for dimension match
        s_H, t_H = input.shape[2], target.shape[2]
        if s_H > t_H:
            input = F.adaptive_avg_pool2d(input, (t_H, t_H))
        elif s_H < t_H:
            target = F.adaptive_avg_pool2d(target, (s_H, s_H))
        else:
            pass
        pred_mean = self.regressor(input)
        pred_var = torch.log(1.0 + torch.exp(self.log_scale)) + self.eps
        pred_var = pred_var.view(1, -1, 1, 1)
        # negative log-likelihood of the teacher feature under the Gaussian q(t|s)
        neg_log_prob = 0.5 * (
            (pred_mean - target) ** 2 / pred_var + torch.log(pred_var)
        )
        loss = torch.mean(neg_log_prob)
        return loss

7. RKD: Relation Knowledge Distillation

Full name: Relational Knowledge Distillation

Link:

Published in: CVPR19

RKD is also a relation-based knowledge distillation method. It introduces two loss functions: a second-order distance-wise loss and a third-order angle-wise loss.

Distance-wise loss and angle-wise loss:
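A sketch following the RKD paper (l_δ is the Huber loss and μ the mean pairwise distance within the batch):

\psi_D(t_i, t_j) = \frac{1}{\mu}\lVert t_i - t_j\rVert_2, \qquad
\mathcal{L}_{RKD\text{-}D} = \sum_{(i,j)} l_\delta\big(\psi_D(s_i, s_j),\, \psi_D(t_i, t_j)\big)

\psi_A(t_i, t_j, t_k) = \langle e^{ij}, e^{kj}\rangle,\;\; e^{ij} = \frac{t_i - t_j}{\lVert t_i - t_j\rVert_2}, \qquad
\mathcal{L}_{RKD\text{-}A} = \sum_{(i,j,k)} l_\delta\big(\psi_A(s_i, s_j, s_k),\, \psi_A(t_i, t_j, t_k)\big)

The implementation below combines them as w_d * loss_d + w_a * loss_a.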

Implementation:

class RKDLoss(nn.Module):
    """Relational Knowledge Distillation, CVPR2019"""
    def __init__(self, w_d=25, w_a=50):
        super(RKDLoss, self).__init__()
        self.w_d = w_d
        self.w_a = w_a

    def forward(self, f_s, f_t):
        student = f_s.view(f_s.shape[0], -1)
        teacher = f_t.view(f_t.shape[0], -1)

        # RKD distance loss
        with torch.no_grad():
            t_d = self.pdist(teacher, squared=False)
            mean_td = t_d[t_d > 0].mean()
            t_d = t_d / mean_td

        d = self.pdist(student, squared=False)
        mean_d = d[d > 0].mean()
        d = d / mean_d

        loss_d = F.smooth_l1_loss(d, t_d)

        # RKD Angle loss
        with torch.no_grad():
            td = (teacher.unsqueeze(0) - teacher.unsqueeze(1))
            norm_td = F.normalize(td, p=2, dim=2)
            t_angle = torch.bmm(norm_td, norm_td.transpose(1, 2)).view(-1)

        sd = (student.unsqueeze(0) - student.unsqueeze(1))
        norm_sd = F.normalize(sd, p=2, dim=2)
        s_angle = torch.bmm(norm_sd, norm_sd.transpose(1, 2)).view(-1)

        loss_a = F.smooth_l1_loss(s_angle, t_angle)

        loss = self.w_d * loss_d + self.w_a * loss_a
        return loss

    @staticmethod
    def pdist(e, squared=False, eps=1e-12):
        # pairwise Euclidean distances within the batch
        e_square = e.pow(2).sum(dim=1)
        prod = e @ e.t()
        res = (e_square.unsqueeze(1) + e_square.unsqueeze(0) - 2 * prod).clamp(min=eps)

        if not squared:
            res = res.sqrt()

        res = res.clone()
        res[range(len(e)), range(len(e))] = 0
        return res

8. PKT: Probabilistic Knowledge Transfer

Full name: Probabilistic Knowledge Transfer for Deep Representation Learning

Link:

Published in: CoRR18

PKT proposes a probabilistic knowledge transfer method, using mutual information between probability distributions in feature space for the modeling. Its advantages include cross-modal knowledge transfer, independence from the task type, and the ability to incorporate hand-crafted features into the network.
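As a sketch of the mechanism in the implementation below (which uses a cosine-similarity kernel K rescaled to [0, 1]): pairwise similarities within a batch are converted into conditional probabilities for the student and the teacher, and the two distributions are matched with a KL divergence (up to normalization details):

p_{j|i} = \frac{K(s_i, s_j)}{\sum_{k} K(s_i, s_k)}, \qquad
q_{j|i} = \frac{K(t_i, t_j)}{\sum_{k} K(t_i, t_k)}, \qquad
\mathcal{L}_{PKT} = \sum_i \sum_j q_{j|i}\,\log\frac{q_{j|i}}{p_{j|i}}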

Implementation:

class PKT(nn.Module):
    """Probabilistic Knowledge Transfer for deep representation learning
    Code from author: _kt"""
    def __init__(self):
        super(PKT, self).__init__()

    def forward(self, f_s, f_t):
        return self.cosine_similarity_loss(f_s, f_t)

    @staticmethod
    def cosine_similarity_loss(output_net, target_net, eps=0.0000001):
        # Normalize each vector by its norm
        output_net_norm = torch.sqrt(torch.sum(output_net ** 2, dim=1, keepdim=True))
        output_net = output_net / (output_net_norm + eps)
        output_net[output_net != output_net] = 0

        target_net_norm = torch.sqrt(torch.sum(target_net ** 2, dim=1, keepdim=True))
        target_net = target_net / (target_net_norm + eps)
        target_net[target_net != target_net] = 0

        # Calculate the cosine similarity
        model_similarity = torch.mm(output_net, output_net.transpose(0, 1))
        target_similarity = torch.mm(target_net, target_net.transpose(0, 1))

        # Scale cosine similarity to 0..1
        model_similarity = (model_similarity + 1.0) / 2.0
        target_similarity = (target_similarity + 1.0) / 2.0

        # Transform them into probabilities
        model_similarity = model_similarity / torch.sum(model_similarity, dim=1, keepdim=True)
        target_similarity = target_similarity / torch.sum(target_similarity, dim=1, keepdim=True)

        # Calculate the KL-divergence
        loss = torch.mean(target_similarity * torch.log((target_similarity + eps) / (model_similarity + eps)))

        return loss

9. AB: Activation Boundaries

Full name: Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons

Link:

Published in: AAAI19

Objective: make the activation boundaries of the student network's hidden neurons match those of the teacher as closely as possible. An activation boundary here is the separating hyperplane (for ReLU-style activations) that decides whether a neuron is activated or deactivated. AB proposes an activation transfer loss that encourages the teacher's and student's activation boundaries to coincide.
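A sketch of the margin-based alternative loss used in the implementation below (s: student pre-activation, t: teacher pre-activation, m: margin). It pushes the student at least m below zero where the teacher neuron is inactive and at least m above zero where it is active:

\mathcal{L}_{AB} = \sum \Big[(s+m)^2\,\mathbb{1}[s > -m]\,\mathbb{1}[t \le 0] \;+\; (s-m)^2\,\mathbb{1}[s \le m]\,\mathbb{1}[t > 0]\Big]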

Implementation:

class ABLoss(nn.Module):
    """Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
    code: _distillation
    """
    def __init__(self, feat_num, margin=1.0):
        super(ABLoss, self).__init__()
        self.w = [2 ** (i - feat_num + 1) for i in range(feat_num)]
        self.margin = margin

    def forward(self, g_s, g_t):
        bsz = g_s[0].shape[0]
        losses = [self.criterion_alternative_l2(s, t) for s, t in zip(g_s, g_t)]
        losses = [w * l for w, l in zip(self.w, losses)]
        # loss = sum(losses) / bsz
        # loss = loss / 1000 * 3
        losses = [l / bsz for l in losses]
        losses = [l / 1000 * 3 for l in losses]
        return losses

    def criterion_alternative_l2(self, source, target):
        # penalize the student pre-activation when its sign disagrees with
        # the teacher's activation pattern, with a margin
        loss = ((source + self.margin) ** 2 * ((source > -self.margin) & (target <= 0)).float() +
                (source - self.margin) ** 2 * ((source <= self.margin) & (target > 0)).float())
        return torch.abs(loss).sum()

10. FT: Factor Transfer

Full name: Paraphrasing Complex Network: Network Compression via Factor Transfer

Link:

Published in: NeurIPS18

FT proposes the factor transfer method. The so-called factor is obtained by passing the network's final output feature through an encoder-decoder, extracting a factor representation; the teacher's factor is then used to guide the student's factor.

The FT loss is:
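A sketch following the FT paper (F_T and F_S are the teacher's and student's factors; p = 1 is the default in the code below):

\mathcal{L}_{FT} = \Big\lVert \frac{F_T}{\lVert F_T\rVert_2} - \frac{F_S}{\lVert F_S\rVert_2} \Big\rVert_p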

Implementation:

class FactorTransfer(nn.Module):
    """Paraphrasing Complex Network: Network Compression via Factor Transfer, NeurIPS 2018"""
    def __init__(self, p1=2, p2=1):
        super(FactorTransfer, self).__init__()
        self.p1 = p1
        self.p2 = p2

    def forward(self, f_s, f_t):
        return self.factor_loss(f_s, f_t)

    def factor_loss(self, f_s, f_t):
        # match spatial sizes before comparing factors
        s_H, t_H = f_s.shape[2], f_t.shape[2]
        if s_H > t_H:
            f_s = F.adaptive_avg_pool2d(f_s, (t_H, t_H))
        elif s_H < t_H:
            f_t = F.adaptive_avg_pool2d(f_t, (s_H, s_H))
        else:
            pass
        if self.p2 == 1:
            return (self.factor(f_s) - self.factor(f_t)).abs().mean()
        else:
            return (self.factor(f_s) - self.factor(f_t)).pow(self.p2).mean()

    def factor(self, f):
        # normalized factor representation of a feature map
        return F.normalize(f.pow(self.p1).mean(1).view(f.size(0), -1))

11. FSP: Flow of Solution Procedure

Full name: A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning

Link: _cvpr_2017/papers/Yim_A_Gift_From_CVPR_2017_paper.pdf

Published in: CVPR17

FSP argues that the relations between feature maps produced by different layers of the network carry better knowledge than the network's final outputs alone.

An FSP matrix is defined to capture the relation between feature layers inside the network; it is a Gram matrix that reflects the process of the teacher "solving" the problem, which the student then mimics.

An L2 loss is used to match the teacher's and student's FSP matrices.
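A sketch following the paper: for two feature maps F¹ (h×w×m) and F² (h×w×n) from the same network, the FSP matrix and the matching loss are

G_{i,j}(x) = \sum_{s=1}^{h}\sum_{t=1}^{w}\frac{F^{1}_{s,t,i}(x)\,F^{2}_{s,t,j}(x)}{h\times w}, \qquad
\mathcal{L}_{FSP} = \frac{1}{N}\sum_{x}\sum_{k}\lambda_k\,\big\lVert G^{T}_{k}(x) - G^{S}_{k}(x)\big\rVert_2^2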

Implementation:

class FSP(nn.Module):
    """A Gift from Knowledge Distillation:
    Fast Optimization, Network Minimization and Transfer Learning"""
    def __init__(self, s_shapes, t_shapes):
        super(FSP, self).__init__()
        assert len(s_shapes) == len(t_shapes), 'unequal length of feat list'
        s_c = [s[1] for s in s_shapes]
        t_c = [t[1] for t in t_shapes]
        if np.any(np.asarray(s_c) != np.asarray(t_c)):
            raise ValueError('num of channels not equal (error in FSP)')

    def forward(self, g_s, g_t):
        s_fsp = self.compute_fsp(g_s)
        t_fsp = self.compute_fsp(g_t)
        loss_group = [self.compute_loss(s, t) for s, t in zip(s_fsp, t_fsp)]
        return loss_group

    @staticmethod
    def compute_loss(s, t):
        return (s - t).pow(2).mean()

    @staticmethod
    def compute_fsp(g):
        fsp_list = []
        for i in range(len(g) - 1):
            bot, top = g[i], g[i + 1]
            b_H, t_H = bot.shape[2], top.shape[2]
            if b_H > t_H:
                bot = F.adaptive_avg_pool2d(bot, (t_H, t_H))
            elif b_H < t_H:
                top = F.adaptive_avg_pool2d(top, (b_H, b_H))
            else:
                pass
            bot = bot.unsqueeze(1)
            top = top.unsqueeze(2)
            bot = bot.view(bot.shape[0], bot.shape[1], bot.shape[2], -1)
            top = top.view(top.shape[0], top.shape[1], top.shape[2], -1)
            # Gram matrix between two consecutive feature maps
            fsp = (bot * top).mean(-1)
            fsp_list.append(fsp)
        return fsp_list

12. NST: Neuron Selectivity Transfer

Full name: Like What You Like: Knowledge Distill via Neuron Selectivity Transfer

Link:

Published in: CoRR17

NST uses a new loss function that minimizes the Maximum Mean Discrepancy (MMD) between the teacher and student networks; specifically, it aligns the distributions of the neuron selectivity patterns of the two networks.

Applying the kernel trick (the polynomial kernel in the code below) and expanding gives:
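A sketch of the squared MMD between the two sets of neuron selectivity patterns (f^i is the normalized activation map of channel i flattened into a vector; the code below uses the degree-2 polynomial kernel k(x, y) = (x^T y)^2):

\mathcal{L}_{MMD^2} = \frac{1}{C_T^2}\sum_{i}\sum_{i'} k\big(f_T^{i}, f_T^{i'}\big)
+ \frac{1}{C_S^2}\sum_{j}\sum_{j'} k\big(f_S^{j}, f_S^{j'}\big)
- \frac{2}{C_T C_S}\sum_{i}\sum_{j} k\big(f_T^{i}, f_S^{j}\big)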

The paper provides three kernels: linear, polynomial, and Gaussian. The implementation here only includes the polynomial kernel, because NST with the polynomial kernel can be combined with standard KD, which gives better overall results.

Implementation:

class NSTLoss(nn.Module):
    """like what you like: knowledge distill via neuron selectivity transfer"""
    def __init__(self):
        super(NSTLoss, self).__init__()
        pass

    def forward(self, g_s, g_t):
        return [self.nst_loss(f_s, f_t) for f_s, f_t in zip(g_s, g_t)]

    def nst_loss(self, f_s, f_t):
        s_H, t_H = f_s.shape[2], f_t.shape[2]
        if s_H > t_H:
            f_s = F.adaptive_avg_pool2d(f_s, (t_H, t_H))
        elif s_H < t_H:
            f_t = F.adaptive_avg_pool2d(f_t, (s_H, s_H))
        else:
            pass

        # per-channel activation maps, flattened and normalized
        f_s = f_s.view(f_s.shape[0], f_s.shape[1], -1)
        f_s = F.normalize(f_s, dim=2)
        f_t = f_t.view(f_t.shape[0], f_t.shape[1], -1)
        f_t = F.normalize(f_t, dim=2)

        # set full_loss as False to avoid unnecessary computation
        full_loss = True
        if full_loss:
            return (self.poly_kernel(f_t, f_t).mean().detach() + self.poly_kernel(f_s, f_s).mean()
                    - 2 * self.poly_kernel(f_s, f_t).mean())
        else:
            return self.poly_kernel(f_s, f_s).mean() - 2 * self.poly_kernel(f_s, f_t).mean()

    def poly_kernel(self, a, b):
        a = a.unsqueeze(1)
        b = b.unsqueeze(2)
        res = (a * b).sum(-1).pow(2)
        return res

13. CRD: Contrastive Representation Distillation

Full name: Contrastive Representation Distillation

Link:

Published in: ICLR20

CRD introduces contrastive learning into knowledge distillation. The objective becomes: learn a representation such that, for positive pairs, the teacher's and student's embeddings are as close as possible, while for negative pairs they are pushed as far apart as possible.

The corresponding contrastive learning objective and the overall distillation loss are formulated in the paper.
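A sketch following the CRD paper: a critic h estimates whether a teacher/student pair comes from the joint distribution (same input) or from the product of marginals (different inputs); maximizing the resulting bound on mutual information gives the objective implemented by ContrastLoss below (Eq. (18); N is the number of negatives per positive, M the dataset size, τ the temperature):

h(T, S) = \frac{e^{g^T(T)^{\top} g^S(S)/\tau}}{e^{g^T(T)^{\top} g^S(S)/\tau} + N/M}, \qquad
\max_{h}\; \mathbb{E}_{q(T,S \mid C=1)}\big[\log h(T, S)\big] + N\,\mathbb{E}_{q(T,S \mid C=0)}\big[\log\big(1 - h(T, S)\big)\big]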

Implementation:

eps = 1e-7  # small constant used below; defined at module level in the original repository


class ContrastLoss(nn.Module):
    """
    contrastive loss, corresponding to Eq (18)
    """
    def __init__(self, n_data):
        super(ContrastLoss, self).__init__()
        self.n_data = n_data

    def forward(self, x):
        bsz = x.shape[0]
        m = x.size(1) - 1

        # noise distribution
        Pn = 1 / float(self.n_data)

        # loss for positive pair
        P_pos = x.select(1, 0)
        log_D1 = torch.div(P_pos, P_pos.add(m * Pn + eps)).log_()

        # loss for K negative pair
        P_neg = x.narrow(1, 1, m)
        log_D0 = torch.div(P_neg.clone().fill_(m * Pn), P_neg.add(m * Pn + eps)).log_()

        loss = - (log_D1.sum(0) + log_D0.view(-1, 1).sum(0)) / bsz

        return loss


class CRDLoss(nn.Module):
    """CRD Loss function
    includes two symmetric parts:
    (a) using teacher as anchor, choose positive and negatives over the student side
    (b) using student as anchor, choose positive and negatives over the teacher side

    Args:
        opt.s_dim: the dimension of student's feature
        opt.t_dim: the dimension of teacher's feature
        opt.feat_dim: the dimension of the projection space
        opt.nce_k: number of negatives paired with each positive
        opt.nce_t: the temperature
        opt.nce_m: the momentum for updating the memory buffer
        opt.n_data: the number of samples in the training set, therefore the memory buffer is: opt.n_data x opt.feat_dim
    """
    def __init__(self, opt):
        super(CRDLoss, self).__init__()
        # Embed and ContrastMemory are defined elsewhere in the CRD repository
        self.embed_s = Embed(opt.s_dim, opt.feat_dim)
        self.embed_t = Embed(opt.t_dim, opt.feat_dim)
        self.contrast = ContrastMemory(opt.feat_dim, opt.n_data, opt.nce_k, opt.nce_t, opt.nce_m)
        self.criterion_t = ContrastLoss(opt.n_data)
        self.criterion_s = ContrastLoss(opt.n_data)

    def forward(self, f_s, f_t, idx, contrast_idx=None):
        """
        Args:
            f_s: the feature of student network, size [batch_size, s_dim]
            f_t: the feature of teacher network, size [batch_size, t_dim]
            idx: the indices of these positive samples in the dataset, size [batch_size]
            contrast_idx: the indices of negative samples, size [batch_size, nce_k]

        Returns:
            The contrastive loss
        """
        f_s = self.embed_s(f_s)
        f_t = self.embed_t(f_t)
        out_s, out_t = self.contrast(f_s, f_t, idx, contrast_idx)
        s_loss = self.criterion_s(out_s)
        t_loss = self.criterion_t(out_t)
        loss = s_loss + t_loss
        return loss

14. Overhaul

Full name: A Comprehensive Overhaul of Feature Distillation

Link: _ICCV_2019/papers/

Published in: ICCV19

For the teacher transform, a margin ReLU activation is proposed. For the student transform, a 1x1 convolution is used. The distillation feature position is chosen as pre-ReLU. For the distance function, a partial L2 loss is proposed.
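A sketch of the two key pieces as they appear in the code below: the margin ReLU applied to the teacher feature (m is a per-channel negative margin estimated from the expectation of the teacher's negative responses, as in get_margin) and the partial L2 distance, which ignores positions where the student is already below a non-positive teacher value:

\sigma_m(x) = \max(x, m), \qquad
d_p(T, S) = \sum_i \begin{cases} 0, & S_i \le T_i \le 0 \\ (T_i - S_i)^2, & \text{otherwise} \end{cases}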

Partial implementation:

class OFD(nn.Module):
    '''
    A Comprehensive Overhaul of Feature Distillation
    _ICCV_2019/papers/Heo_A_Comprehensive_Overhaul_of_Feature_Distillation_ICCV_2019_paper.pdf
    '''
    def __init__(self, in_channels, out_channels):
        super(OFD, self).__init__()
        self.connector = nn.Sequential(*[
            nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(out_channels)
        ])

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, fm_s, fm_t):
        margin = self.get_margin(fm_t)
        fm_t = torch.max(fm_t, margin)   # margin ReLU on the teacher feature
        fm_s = self.connector(fm_s)      # 1x1 conv + BN student transform

        # partial L2: ignore positions where both student and teacher are non-positive
        mask = 1.0 - ((fm_s <= fm_t) & (fm_t <= 0.0)).float()
        loss = torch.mean((fm_s - fm_t) ** 2 * mask)

        return loss

    def get_margin(self, fm, eps=1e-6):
        # per-channel expectation of the negative teacher responses
        mask = (fm < 0.0).float()
        masked_fm = fm * mask

        margin = masked_fm.sum(dim=(0, 2, 3), keepdim=True) / (mask.sum(dim=(0, 2, 3), keepdim=True) + eps)

        return margin

