Contents
1. Concept of GCL
1.1 Self-supervised learning
1.2 Contrastive learning
1.3 Data augmentation for GCL
1.3.1 Node dropping
1.3.2 Edge perturbation
1.3.3 Attribute masking
1.3.4 Subgraph
2. GCL Algorithms
2.1 Common steps of a graph contrastive algorithm
2.2 GraphCL Algorithm
2.2.1 Graph data augmentation
2.2.2 GNN-based encoder
2.2.3 Projection head
2.2.4 Contrastive loss function
3. Summary of GCL
4. GCL Implementation
4.1 Semi-supervised implementation
4.2 Unsupervised implementation
4.3 Adversarial implementation
4.4 Transfer learning implementation
References
Code
Paper
Graph Contrastive Learning (GCL): GCL is a self-supervised learning algorithm for graph data.
--> Given a large amount of unlabeled data, a graph contrastive algorithm aims to train a graph encoder, i.e. a GNN, that produces graph representation vectors.
Self-supervised learning: uses auxiliary (pretext) tasks to mine supervision signals out of large-scale unlabeled data, and trains the network on this constructed supervision, so that it learns representations valuable for downstream tasks.
--> In other words, supervision signals for the learning algorithm are mined from the data itself in various ways.
--> Is there a representation learning algorithm that does not focus on concrete details, but instead encodes high-level features well enough to distinguish different objects?
Contrastive learning learns representations from positive and negative examples.
Data augmentations are a prerequisite for contrastive learning: GCL without data augmentation performs even worse than not using contrastive learning at all!
Node dropping: randomly remove a fraction of the nodes to perturb the integrity of the graph; each node's dropping probability follows a uniform distribution (i.e. random dropping).
Edge perturbation: randomly add or remove a fraction of the edges to perturb the adjacency of the graph; each edge's addition or deletion probability follows a uniform distribution.
Attribute masking: randomly mask part of the nodes' attribute information, forcing the model to use context information to reconstruct the masked node attributes.
Subgraph: sample a subgraph from the graph (GraphCL samples it with random walks), on the prior that much of the graph's semantics is preserved in its local structure.
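As a concrete illustration, here is a minimal NumPy sketch of these augmentations, assuming a dense symmetric 0/1 adjacency matrix adj and a node-feature matrix x; the function names and the ratio parameter are illustrative, not GraphCL's actual API:

```python
import numpy as np

def drop_nodes(adj, x, ratio=0.2):
    # Node dropping: keep a uniformly sampled subset of nodes and
    # the subgraph induced by them.
    n = adj.shape[0]
    keep = np.random.choice(n, int(n * (1 - ratio)), replace=False)
    return adj[np.ix_(keep, keep)], x[keep]

def perturb_edges(adj, ratio=0.2):
    # Edge perturbation: flip a uniformly sampled set of entries in the
    # symmetric adjacency matrix, which adds or removes edges.
    adj = adj.copy()
    n = adj.shape[0]
    num_flips = int(ratio * adj.sum() / 2)  # ~ratio of the existing edges
    rows = np.random.randint(0, n, num_flips)
    cols = np.random.randint(0, n, num_flips)
    adj[rows, cols] = 1 - adj[rows, cols]
    adj[cols, rows] = adj[rows, cols]       # keep the matrix symmetric
    return adj

def mask_attributes(x, ratio=0.2):
    # Attribute masking: zero out all attributes of a random fraction
    # of nodes; the model must recover them from context.
    x = x.copy()
    masked = np.random.rand(x.shape[0]) < ratio
    x[masked] = 0.0
    return x
```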
--> Apply contrastive learning techniques to the graph representation learning task.
A graph is a discrete data structure, and in many common graph learning tasks the data points are tightly interrelated (e.g. link prediction).
1) Randomly sample a batch of graphs.
2) Apply two independent random data augmentations to each graph; each augmented graph is called a view.
3) Encode the views with the GNN being trained to obtain node representation vectors and graph representation vectors.
4) Compute the InfoNCE loss on these representation vectors, so that views augmented from the same graph are pulled together in representation space while views from different graphs are pushed apart (a sketch of this loss follows the list).
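A minimal PyTorch sketch of the InfoNCE loss in step 4, assuming z1 and z2 hold the projected embeddings of the two views of a batch of graphs (row i of each is a view of graph i; the function name and temperature value are illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    # z1, z2: [batch, dim]; row i of each is one view of graph i.
    # Positives sit on the diagonal of the similarity matrix; every
    # other row in the batch serves as a negative.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature               # [batch, batch] cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)           # -log softmax of each positive
```

Cross-entropy against the diagonal labels is exactly -log of the softmax-normalized positive similarity; variants differ mainly in whether the loss is symmetrized over the two views.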
GraphCL is a framework for self-supervised pre-training of GNNs. In graph contrastive learning, pre-training is performed by maximizing the agreement between two augmented views of the same graph via a contrastive loss in the latent space.
paper: Graph Contrastive Learning with Augmentations, NeurIPS 2020.
The given graph G undergoes graph data augmentations to obtain two correlated views Gi, Gj, as a positive pair.
A GNN-based encoder f() extracts graph-level representation vectors hi, hj for augmented graphs Gi, Gj. Graph contrastive learning does not apply any constraint on the GNN architecture.
A non-linear transformation g() named projection head maps the augmented representations to another latent space where the contrastive loss is calculated; e.g. an MLP is applied to obtain zi, zj.
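A minimal sketch of f() and g() in plain PyTorch, assuming a dense normalized adjacency matrix; the one-layer encoder is purely illustrative (as stated above, any GNN architecture can serve as f()), and the two-layer MLP mirrors the projection head just described:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # f(): a deliberately minimal one-layer GNN. One mean-style
    # message-passing step through a dense adjacency matrix,
    # followed by a mean readout over nodes.
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: [num_nodes, in_dim], adj: [num_nodes, num_nodes]
        h = torch.relu(self.lin(adj @ x))  # node representations
        return h.mean(dim=0)               # graph-level representation h

class ProjectionHead(nn.Module):
    # g(): a two-layer MLP mapping h into the latent space z
    # where the contrastive loss is computed.
    def __init__(self, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hid_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, hid_dim),
        )

    def forward(self, h):
        return self.mlp(h)
```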
A contrastive loss function L() is defined to enforce maximizing the consistency between positive pairs zi, zj compared with negative pairs.
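Putting the pieces together for a single positive pair, reusing the hypothetical sketches above:

```python
import torch

# adj, x: a graph's dense adjacency and feature matrices (NumPy).
adj1, x1 = drop_nodes(adj, x)   # view Gi: node dropping
x2 = mask_attributes(x)         # view Gj: attribute masking, adjacency unchanged

f = Encoder(in_dim=x.shape[1], hid_dim=64)
g = ProjectionHead(hid_dim=64)

h_i = f(torch.as_tensor(x1, dtype=torch.float), torch.as_tensor(adj1, dtype=torch.float))
h_j = f(torch.as_tensor(x2, dtype=torch.float), torch.as_tensor(adj, dtype=torch.float))
z_i, z_j = g(h_i), g(h_j)

# Over a batch of graphs, stack the z_i's and z_j's into [batch, dim]
# tensors and feed them to info_nce(z1, z2) defined earlier.
```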
Reference: GraphCL/unsupervised_Cora_Citeseer at master · Shen-Lab/GraphCL · GitHub
1) Construct the augmented feature matrices feature1 and feature2, and the augmented adjacency matrices matrix1 and matrix2.
2) Construct the self-supervised labels from an all-ones matrix (torch.ones) and an all-zeros matrix (torch.zeros).
3) Given the augmented features and adjacency matrices, discriminator 1 distinguishes the features under the first augmentation from shuffled features, and discriminator 2 does the same under the second augmentation; their outputs ret1 and ret2 are summed as the model output.
4) Backpropagate the model output against the self-supervised labels and apply gradient descent to learn the optimal model parameters, which are later used to generate feature embeddings. The training loop from the reference implementation:
```python
for epoch in range(nb_epochs):
    model.train()
    optimiser.zero_grad()

    # Shuffle node features across nodes to build negative samples.
    idx = np.random.permutation(nb_nodes)
    shuf_fts = features[:, idx, :]

    # Self-supervised labels: 1 for real features, 0 for shuffled ones.
    lbl_1 = torch.ones(batch_size, nb_nodes)
    lbl_2 = torch.zeros(batch_size, nb_nodes)
    lbl = torch.cat((lbl_1, lbl_2), 1)

    if torch.cuda.is_available():
        shuf_fts = shuf_fts.cuda()
        lbl = lbl.cuda()

    logits = model(features, shuf_fts, aug_features1, aug_features2,
                   sp_adj if sparse else adj,
                   sp_aug_adj1 if sparse else aug_adj1,
                   sp_aug_adj2 if sparse else aug_adj2,
                   sparse, None, None, None, aug_type=aug_type)

    # Under each augmentation, the discriminator learns to tell
    # real features apart from shuffled features.
    loss = b_xent(logits, lbl)
    print('Loss:[{:.4f}]'.format(loss.item()))

    # Track the best loss and stop early if it stops improving.
    if loss < best:
        best = loss
        best_t = epoch
        cnt_wait = 0
        torch.save(model.state_dict(), args.save_name)
    else:
        cnt_wait += 1

    if cnt_wait == patience:
        print('Early stopping!')
        break

    loss.backward()
    optimiser.step()
```
[1] Graph Contrastive Learning with Augmentations, NeurIPS 2020.