GNNModel

model

class neuralkg_ind.model.GNNModel.model.Model(args)[source]

Bases: Module

init_emb()[source]
build_model()[source]
build_hidden_layer()[source]
forward()[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
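
As the note above says, subclasses implement forward(), but callers should invoke the module instance itself so that registered hooks run. A minimal sketch of the difference (the Tiny class is illustrative, not part of this package):

    import torch
    from torch import nn

    class Tiny(nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = nn.Linear(4, 1)

        def forward(self, x):
            return self.lin(x)

    m = Tiny()
    x = torch.randn(2, 4)
    y = m(x)            # preferred: runs registered hooks, then forward()
    y2 = m.forward(x)   # computes the same value here, but silently skips hooks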

layer

class neuralkg_ind.model.GNNModel.layer.GNNLayer[source]

Bases: Module

message(edges)[source]
forward(g, feat)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neuralkg_ind.model.GNNModel.layer.Aggregator(emb_dim)[source]

Bases: Module

forward(node)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

abstract update_embedding(nei_msg)[source]
training: bool
class neuralkg_ind.model.GNNModel.layer.SUMAggregator(emb_dim)[source]

Bases: Aggregator

update_embedding(curr_emb, nei_msg)[source]
training: bool
class neuralkg_ind.model.GNNModel.layer.MLPAggregator(emb_dim)[source]

Bases: Aggregator

update_embedding(curr_emb, nei_msg)[source]
training: bool
class neuralkg_ind.model.GNNModel.layer.GRUAggregator(emb_dim)[source]

Bases: Aggregator

update_embedding(curr_emb, nei_msg)[source]
training: bool
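
The three aggregators share one interface: a reduce step collects neighbor messages, and update_embedding(curr_emb, nei_msg) merges them into the node state. A hedged sketch of a sum-style aggregator under that interface (the DGL field names 'h' and 'msg' are assumptions, and the plumbing is simplified):

    import torch
    from torch import nn

    class SumAggregatorSketch(nn.Module):
        """Illustrative stand-in for SUMAggregator, not the library code."""
        def __init__(self, emb_dim):
            super().__init__()
            self.emb_dim = emb_dim

        def forward(self, node):
            # 'node' is a DGL NodeBatch; the mailbox holds incoming messages.
            curr_emb = node.data['h']                   # [n_nodes, emb_dim]
            nei_msg = node.mailbox['msg'].sum(dim=1)    # sum over neighbors
            return {'h': self.update_embedding(curr_emb, nei_msg)}

        def update_embedding(self, curr_emb, nei_msg):
            return curr_emb + nei_msg                   # the "SUM" update rule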
class neuralkg_ind.model.GNNModel.layer.BatchGRU(hidden_size=300)[source]

Bases: Module

forward(node, a_scope)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

RGCN

class neuralkg_ind.model.GNNModel.RGCN.RGCN(args)[source]

Bases: Model

Modeling Relational Data with Graph Convolutional Networks (RGCN), which uses the GCN framework to model relational data.

args

Model configuration parameters.

init_emb()[source]

Initialize the RGCN model and embeddings

Parameters:
  • ent_emb – Entity embedding, shape:[num_ent, emb_dim].

  • rel_emb – Relation embedding, shape:[num_rel, emb_dim].

forward(graph, ent, rel, norm, triples, mode='single')[source]

The function used in the training and testing phases

Parameters:
  • graph – The knowledge graph recorded in dgl.graph()

  • ent – The entity ids sampled in triples

  • rel – The relation ids sampled in triples

  • norm – The edge norm in graph

  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • mode – Choose head-predict or tail-predict; defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

tri2emb(embedding, triples, mode='single')[source]

Get embedding of triples.

This function gets the embeddings of head, relation, and tail respectively. Each embedding has three dimensions.

Parameters:
  • embedding (tensor) – This embedding stores the entity embeddings.

  • triples (tensor) – This tensor stores the triple ids; its shape is [num_triples, 3].

  • mode (str, optional) – This arg indicates whether the negative entity will replace the head or tail entity. When it is ‘single’, no entity will be replaced. Defaults to ‘single’.

Returns:

Head entity embedding. rela_emb: Relation embedding. tail_emb: Tail entity embedding.

Return type:

head_emb
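
To make the three-dimensional layout concrete, here is a hedged re-implementation of the gather logic this method describes (shapes follow the docstring; the separate rel_emb argument and the ‘head-batch’/‘tail-batch’ mode names are assumptions, and this is a sketch rather than the library code):

    import torch

    def tri2emb_sketch(ent_emb, rel_emb, triples, mode='single'):
        head_emb = ent_emb[triples[:, 0]].unsqueeze(1)   # [n_triples, 1, dim]
        rela_emb = rel_emb[triples[:, 1]].unsqueeze(1)   # [n_triples, 1, dim]
        tail_emb = ent_emb[triples[:, 2]].unsqueeze(1)   # [n_triples, 1, dim]
        if mode == 'head-batch':
            head_emb = ent_emb.unsqueeze(0)              # every entity as candidate head
        elif mode == 'tail-batch':
            tail_emb = ent_emb.unsqueeze(0)              # every entity as candidate tail
        return head_emb, rela_emb, tail_emb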

build_hidden_layer(idx)[source]

The function used to initialize the RGCN layers

Parameters:

idx – Used to identify RGCN layers. The last RGCN layer should use relu as activation function.

Returns:

the relation graph convolution layer

training: bool
class neuralkg_ind.model.GNNModel.RGCN.RelGraphConv(args, in_feat, out_feat, num_rels, regularizer=None, num_bases=None, bias=True, activation=None, self_loop=True, dropout=0.0, layer_norm=False)[source]

Bases: GNNLayer

message(edges)[source]

Message function.

forward(g, feat, etypes, norm=None, *, presorted=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

CompGCN

class neuralkg_ind.model.GNNModel.CompGCN.CompGCN(args)[source]

Bases: Model

Composition-based multi-relational graph convolutional networks (CompGCN), which jointly embeds both nodes and relations in a relational graph.

args

Model configuration parameters.

init_emb()[source]

Initialize the CompGCN model and embeddings

Parameters:
  • ent_emb – Entity embedding, shape:[num_ent, emb_dim].

  • rel_emb – Relation embedding, shape:[num_rel, emb_dim].

  • conv1 – The convolution layer.

  • fc – The full connection layer.

  • bn0 – The batch normalization layer.

  • bn1 – The batch normalization layer.

  • bn2 – The batch normalization layer.

  • inp_drop – The dropout layer.

  • hid_drop – The dropout layer.

  • feg_drop – The dropout layer.

build_hidden_layer(idx)[source]
forward(graph, relation, norm, triples)[source]

The function used in the training phase

Parameters:
  • graph – The knowledge graph recorded in dgl.graph()

  • relation – The relation id sampled in triples

  • norm – The edge norm in graph

  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

concat(ent_embed, rel_embed)[source]
training: bool
class neuralkg_ind.model.GNNModel.CompGCN.CompGCNCov(in_channels, out_channels, act=<function CompGCNCov.<lambda>>, bias=True, drop_rate=0.0, opn='corr')[source]

Bases: GNNLayer

The CompGCN graph convolution layer, similar to https://github.com/malllabiisc/CompGCN

get_param(shape)[source]
message(edges: EdgeBatch)[source]
reduce_func(nodes: NodeBatch)[source]
comp(h, edge_data)[source]
forward(g: graph, x, rel_repr, edge_type, edge_norm)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
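
CompGCNCov composes entity features with relation features before convolving; the opn argument selects the operator. A hedged sketch of the three standard CompGCN compositions (the FFT-based circular correlation follows the CompGCN paper; comp_sketch is an illustrative name, not the library function):

    import torch

    def comp_sketch(h, edge_data, opn='corr'):
        if opn == 'sub':      # subtraction composition
            return h - edge_data
        if opn == 'mult':     # element-wise multiplication
            return h * edge_data
        if opn == 'corr':     # circular correlation via FFT
            return torch.fft.irfft(
                torch.conj(torch.fft.rfft(h, dim=-1)) * torch.fft.rfft(edge_data, dim=-1),
                n=h.shape[-1], dim=-1)
        raise ValueError(f'unknown opn: {opn}')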

SEGNN

class neuralkg_ind.model.GNNModel.SEGNN.SEGNN(args)[source]

Bases: Module

concat(head_emb, rela_emb)[source]
forward(h_id, r_id, kg)[source]

Matching computation between query (h, r) and answer t.

Parameters:
  • h_id – Head entity id, shape: (bs,).

  • r_id – Relation id, shape: (bs,).

  • kg – Aggregation graph.

Returns:

Matching score, shape: (bs, n_ent).
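
The (bs, n_ent) score shape comes from matching each fused query vector against every entity embedding. A minimal sketch of that final matching step (the names and the dot-product form are illustrative assumptions):

    import torch

    def match_all_entities(query_emb, ent_emb):
        # query_emb: (bs, dim), fused from (h, r); ent_emb: (n_ent, dim)
        return query_emb @ ent_emb.t()   # (bs, n_ent) matching scores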

aggragate_emb(kg)[source]

Aggregating embeddings.

Parameters:

kg – Aggregation graph.

training: bool
class neuralkg_ind.model.GNNModel.SEGNN.CompLayer(args)[source]

Bases: Module

forward(kg, ent_emb, rel_emb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neuralkg_ind.model.GNNModel.SEGNN.NodeLayer(args)[source]

Bases: Module

forward(kg, ent_emb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neuralkg_ind.model.GNNModel.SEGNN.EdgeLayer(args)[source]

Bases: Module

training: bool
forward(kg, ent_emb, rel_emb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

XTransE

class neuralkg_ind.model.GNNModel.XTransE.XTransE(args)[source]

Bases: Model

Explainable Knowledge Graph Embedding for Link Prediction with Lifestyles in e-Commerce (XTransE), which introduces attention to aggregate the neighbor node representations.

args

Model configuration parameters.

init_emb()[source]

Initialize the entity and relation embeddings in the form of a uniform distribution.

Parameters:
  • margin – Used to calculate embedding_range and loss.

  • embedding_range – Uniform distribution range.

  • ent_emb – Entity embedding, shape:[num_ent, emb_dim].

  • rel_emb – Relation embedding, shape:[num_rel, emb_dim].

score_func(triples, neighbor=None, mask=None, negs=None, mode='single')[source]

Calculating the score of triples.

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • neighbor – The neighbors of tail entities.

  • mask – The mask of neighbor nodes.

  • negs – Negative samples, defaults to None.

  • mode – Choose head-predict or tail-predict; defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

transe_func(head_emb, rela_emb, tail_emb)[source]

Calculating the score of triples with TransE model.

Parameters:
  • head_emb – The head entity embedding.

  • rela_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

Returns:

The score of triples.

Return type:

score
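
TransE treats a valid triple as a translation h + r ≈ t. A hedged sketch consistent with this method's signature (the margin-minus-distance form and the L1 norm are assumptions about the convention used here):

    import torch

    def transe_score(head_emb, rela_emb, tail_emb, margin=12.0):
        # Larger score means a more plausible triple.
        dist = torch.norm(head_emb + rela_emb - tail_emb, p=1, dim=-1)
        return margin - dist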

forward(triples, neighbor=None, mask=None, negs=None, mode='single')[source]

The function used in the training and testing phases

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • neighbor – The neighbors of tail entities.

  • mask – The mask of neighbor nodes.

  • negs – Negative samples, defaults to None.

  • mode – Choose head-predict or tail-predict; defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

training: bool

KBAT

class neuralkg_ind.model.GNNModel.KBAT.KBAT(args)[source]

Bases: Module

Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs (KBAT), which introduces attention to aggregate the neighbor node representations.

args

Model configuration parameters.

init_GAT_emb()[source]

Initialize the GAT model and embeddings

Parameters:
  • ent_emb_out – Entity embedding, shape:[num_ent, emb_dim].

  • rel_emb_out – Relation embedding, shape:[num_rel, emb_dim].

  • entity_embeddings – The final embedding used in ConvKB.

  • relation_embeddings – The final embedding used in ConvKB.

  • attentions – The graph attention layers.

  • out_att – The graph attention layers.

init_ConvKB_emb()[source]

Initialize the ConvKB model.

Parameters:
  • conv_layer – The convolution layer.

  • dropout – The dropout layer.

  • ReLU – Relu activation function.

  • fc_layer – The full connection layer.

forward(triples, mode, adj_matrix=None, n_hop=None)[source]

The function used in the training and testing phases

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • mode – Indicates which model is used: when it is ‘GAT’, it means the graph attention model; when it is ‘ConvKB’, it means the ConvKB model.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

forward_Con(triples, mode)[source]
forward_GAT(triples, adj_matrix, n_hop)[source]
cal_Con_score(head_emb, rela_emb, tail_emb)[source]

Calculating the score of triples with ConvKB model.

Parameters:
  • head_emb – The head entity embedding.

  • rela_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

Returns:

The score of triples.

Return type:

score
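
ConvKB stacks the three embeddings into a (batch, 1, dim, 3) "image", convolves across the (h, r, t) axis, and maps the flattened feature maps to a scalar score. A hedged sketch of that shape flow (the channel count and layer sizes are illustrative, not the library's configuration):

    import torch
    from torch import nn

    class ConvKBScoreSketch(nn.Module):
        def __init__(self, emb_dim, out_channels=50):
            super().__init__()
            self.conv = nn.Conv2d(1, out_channels, kernel_size=(1, 3))  # slides over (h, r, t)
            self.fc = nn.Linear(out_channels * emb_dim, 1)

        def forward(self, head_emb, rela_emb, tail_emb):
            x = torch.stack([head_emb, rela_emb, tail_emb], dim=2)  # (bs, dim, 3)
            x = x.unsqueeze(1)                                      # (bs, 1, dim, 3)
            x = torch.relu(self.conv(x)).flatten(1)                 # (bs, out_channels * dim)
            return self.fc(x)                                       # (bs, 1) triple score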

cal_GAT_score(head_emb, relation_emb, tail_emb)[source]

Calculating the score of triples with a TransE-style score function.

Parameters:
  • head_emb – The head entity embedding.

  • relation_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

Returns:

The score of triples.

Return type:

score

training: bool
class neuralkg_ind.model.GNNModel.KBAT.SpecialSpmmFunctionFinal(*args, **kwargs)[source]

Bases: Function

Special function for sparse-region backpropagation only, similar to https://arxiv.org/abs/1710.10903

static forward(ctx, edge, edge_w, N, E, out_features)[source]

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can be then retrieved during the backward pass.

static backward(ctx, grad_output)[source]

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors, as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
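
Both this class and MySpMM in the CoMPILE module follow the custom torch.autograd.Function pattern described above. A minimal generic sketch of a sparse matmul Function (not the library's exact kernels; here only the dense operand receives a gradient):

    import torch

    class SpmmSketch(torch.autograd.Function):
        @staticmethod
        def forward(ctx, sp_mat, dense_mat):
            ctx.save_for_backward(sp_mat)
            return torch.sparse.mm(sp_mat, dense_mat)

        @staticmethod
        def backward(ctx, grad_output):
            sp_mat, = ctx.saved_tensors
            grad_dense = None
            if ctx.needs_input_grad[1]:
                grad_dense = torch.sparse.mm(sp_mat.t(), grad_output)
            return None, grad_dense   # one gradient slot per forward() input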

class neuralkg_ind.model.GNNModel.KBAT.SpecialSpmmFinal[source]

Bases: Module

Special spmm final layer, similar to https://arxiv.org/abs/1710.10903.

forward(edge, edge_w, N, E, out_features)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class neuralkg_ind.model.GNNModel.KBAT.GraphAttentionLayer(num_nodes, in_features, out_features, nrela_dim, dropout, alpha, concat=True)[source]

Bases: Module

Sparse version GAT layer, similar to https://arxiv.org/abs/1710.10903.

forward(input, edge, edge_embed, edge_list_nhop, edge_embed_nhop)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

Grail

class neuralkg_ind.model.GNNModel.Grail.Grail(args)[source]

Bases: Module

Inductive Relation Prediction by Subgraph Reasoning (Grail), which reasons over local subgraph structures.

args

Model configuration parameters.

rel_emb

Relation embedding, shape: [num_rel, rel_emb_dim].

gnn

RGCN model.

forward(data)[source]

Calculating subgraph scores.

Parameters:

data – Tuple of subgraphs and relation labels.

Returns:

The score of subgraphs.

Return type:

output

training: bool
class neuralkg_ind.model.GNNModel.Grail.RGCN(args, basiclayer)[source]

Bases: Model

RGCN model

args

Model configuration parameters.

basiclayer

Layer of RGCN model.

inp_dim

Dimension of input.

emb_dim

Dimension of embedding.

has_attn

Whether there is attention mechanism.

attn_rel_emb

Embedding of relation attention.

attn_rel_emb_dim

Dimension of relation attention Embedding.

init_emb()[source]

Initialize the relation attention embedding, aggregator and features.

build_hidden_layer(idx)[source]

Build a hidden layer of RGCN.

Parameters:

idx – The idx of layer.

Returns:

A basic layer, built according to whether it is the first layer.

Return type:

output

forward(graph, rela=None)[source]

Getting node and relation embedding.

Parameters:
  • graph – Subgraph of corresponding triple.

  • rela – Embedding of relation.

Returns:

Node embedding. rela: Relation embedding.

Return type:

graph.ndata.pop(‘h’)

training: bool
class neuralkg_ind.model.GNNModel.Grail.RelAttGraphConv(args, inp_dim, out_dim, aggregator, attn_rel_emb_dim, num_rels, num_bases=-1, bias=None, activation=None, dropout=0.0, edge_dropout=0.0, is_input_layer=False, has_attn=False)[source]

Bases: RelGraphConv

Basic layer of RGCN.

args

Model configuration parameters.

bias

Weight bias.

inp_dim

Dimension of input.

out_dim

Dimension of output.

num_rels

The number of relations.

num_bases

The number of bases.

has_attn

Whether there is attention mechanism.

is_input_layer

Whether it is input layer.

aggregator

Type of aggregator.

weight

Weight matrix.

w_comp

Bases matrix.

self_loop_weight

Self-loop weight.

edge_dropout

Dropout of edge.

training: bool
propagate(g, attn_rel_emb=None)[source]

Message propagate function.

Propagate messages and perform calculations according to the graph traversal order.

Parameters:
  • g – Subgraph of triple.

  • attn_rel_emb – Relation attention embedding.

forward(g, rel_emb=None, attn_rel_emb=None)[source]

Update node representation.

Parameters:
  • g – Subgraph of corresponding triple.

  • rel_emb – Embedding of relation.

  • attn_rel_emb – Embedding of relation attention.

Returns:

Embedding of relation.

Return type:

rel_emb

CoMPILE

class neuralkg_ind.model.GNNModel.CoMPILE.CoMPILE(args)[source]

Bases: Module

Communicative Message Passing for Inductive Relation Reasoning (CoMPILE), which reasons over local directed subgraph structures and strengthens the message interactions between edges and entities through a communicative kernel.

args

Model configuration parameters.

latent_dim

Latent dimension.

output_dim

Output dimension.

node_emb

Dimension of node embedding.

relation_emb

Dimension of relation embedding.

hidden_size

Size of hidden layer.

forward(subgraph)[source]

Calculating subgraph scores.

Parameters:

subgraph – Subgraph of triple.

Returns:

The output of the convolution layer.

Return type:

out_conv

batch_subgraph(subgraph)[source]

Computing embeddings for a batch of subgraphs.

Parameters:

subgraph – Subgraph of triple.

Returns:

Embedding of subgraph. source_embed: Embedding of source entities. target_embed: Embedding of target entities.

Return type:

graph_embed

CoMPILEConv(node_feat, edge_feat, e2n_sp, e2n_sp2, graph_sizes, target_relation, total_source, total_target, source_node, target_node, edge_sizes=None, node_degs=None)[source]

Calculating the graph embedding, source embedding, and target embedding.

Parameters:
  • node_feat – Feature of nodes.

  • edge_feat – Feature of edges.

  • e2n_sp – Sparse matrix of edges to source nodes.

  • e2n_sp2 – Sparse matrix of edges to target nodes.

  • graph_sizes – The number of each graph nodes.

  • target_relation – Target relation label.

  • total_source – Total source nodes.

  • total_target – Total target nodes.

  • source_node – Source node of triple.

  • target_node – Target node of triple.

  • edge_sizes – The sizes of edges.

  • node_degs – The degrees of nodes.

Returns:

Graph embedding. source_embed: Source node embedding. target_embed: Target node embedding.

Return type:

gmol_vecs

training: bool
class neuralkg_ind.model.GNNModel.CoMPILE.MySpMM(*args, **kwargs)[source]

Bases: Function

static forward(ctx, sp_mat, dense_mat)[source]

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can be then retrieved during the backward pass.

static backward(ctx, grad_output)[source]

Defines a formula for differentiating the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non tensor outputs of the forward function), and it should return as many tensors, as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

neuralkg_ind.model.GNNModel.CoMPILE.gnn_spmm(sp_mat, dense_mat)[source]

SNRI

class neuralkg_ind.model.GNNModel.SNRI.SNRI(args)[source]

Bases: Module

Subgraph Neighboring Relations Infomax for Inductive Link Prediction on Knowledge Graphs (SNRI), which sufficiently exploits complete neighboring relations from two aspects and applies mutual information (MI) maximization to the knowledge graph.

args

Model configuration parameters.

gnn

RGCN model.

rel_emb

Relation embedding, shape: [num_rel + 1, inp_dim].

ent_padding

Entity padding, shape: [1, sem_dim].

w_rel2ent

Weight matrix of relation to entity.

init_ent_emb_matrix(g)[source]

Initialize the features of entities in matrix form.

Parameters:

g – The dgl graph of meta task.

comp_ht_emb(head_embs, tail_embs)[source]

Combining the embeddings of head and tail.

Parameters:
  • head_embs – Embedding of heads.

  • tail_embs – Embedding of tails.

Returns:

Embedding of head and tail.

Return type:

ht_embs

comp_hrt_emb(head_emb, tail_emb, rel_emb)[source]

Combining the embeddings of head, relation, and tail.

Parameters:
  • head_emb – Embedding of head.

  • rel_emb – Embedding of relation.

  • tail_emb – Embedding of tail.

Returns:

Embedding of head, relation and tail.

Return type:

hrt_embs

nei_rel_path(g, rel_labels, r_emb_out)[source]

Neighboring relational path module.

Only consider in-degree relations first.

Parameters:
  • g – Subgraph of corresponding triple.

  • rel_labels – Labels of relation.

  • r_emb_out – Embedding of relation.

Returns:

Aggregate paths.

Return type:

output

get_logits(s_G, s_g_pos, s_g_cor)[source]
forward(data, is_return_emb=False, cor_graph=False)[source]

Getting the subgraph-level embedding.

Parameters:
  • data – Subgraphs and relation labels.

  • is_return_emb – Whether to return the embedding.

  • cor_graph – Whether to corrupt the node features.

Returns:

Representation of subgraph. s_G: Global subgraph embeddings. s_g: Local subgraph embeddings.

Return type:

output

training: bool
class neuralkg_ind.model.GNNModel.SNRI.RelCompGraphConv(args, inp_dim, out_dim, aggregator, attn_rel_emb_dim, num_rels, num_bases=-1, bias=None, activation=None, dropout=0.0, edge_dropout=0.0, is_input_layer=False, has_attn=False)[source]

Bases: RelGraphConv

Basic layer of RGCN for SNRI.

args

Model configuration parameters.

bias

Weight bias.

inp_dim

Dimension of input.

out_dim

Dimension of output.

num_rels

The number of relations.

num_bases

The number of bases.

has_attn

Whether there is attention mechanism.

is_input_layer

Whether it is input layer.

aggregator

Type of aggregator.

weight

Weight matrix.

w_comp

Bases matrix.

self_loop_weight

Self-loop weight.

edge_dropout

Dropout of edge.

propagate(g, attn_rel_emb=None)[source]

Message propagate function.

Propagate messages and perform calculations according to the graph traversal order.

Parameters:
  • g – Subgraph of triple.

  • attn_rel_emb – Relation attention embedding.

training: bool
forward(g, rel_emb, attn_rel_emb=None)[source]

Update node representation.

Parameters:
  • g – Subgraph of corresponding triple.

  • rel_emb – Embedding of relation.

  • attn_rel_emb – Embedding of relation attention.

Returns:

Embedding of relation.

Return type:

rel_emb_out

class neuralkg_ind.model.GNNModel.SNRI.Discriminator(n_e, n_g)[source]

Bases: Module

Discriminator module for calculating MI.

n_e

Dimension of edge embedding.

n_g

Dimension of graph embedding.

training: bool
weights_init(m)[source]

Init weights of layers.

Parameters:

m – Model layer.

forward(c, h_pl, h_mi, s_bias1=None, s_bias2=None)[source]

For calculating MI loss.

c

Global subgraph embeddings.

h_pl

Positive local subgraph embeddings.

h_mi

Negative local subgraph embeddings.

s_bias1

Bias of sc_1.

s_bias2

Bias of sc_2.
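
A common realization of such an MI discriminator is a bilinear score between the global summary c and each local embedding; the sketch below assumes that standard Deep InfoMax form rather than reading it from the source:

    import torch
    from torch import nn

    class DiscriminatorSketch(nn.Module):
        def __init__(self, n_e, n_g):
            super().__init__()
            self.f_k = nn.Bilinear(n_e, n_g, 1)   # pairs local (n_e) with global (n_g)

        def forward(self, c, h_pl, h_mi, s_bias1=None, s_bias2=None):
            c_x = c.expand(h_pl.size(0), -1)        # broadcast the global summary
            sc_1 = self.f_k(h_pl, c_x).squeeze(-1)  # positive logits
            sc_2 = self.f_k(h_mi, c_x).squeeze(-1)  # negative logits
            if s_bias1 is not None:
                sc_1 = sc_1 + s_bias1
            if s_bias2 is not None:
                sc_2 = sc_2 + s_bias2
            return torch.cat((sc_1, sc_2))          # fed to a BCE-style MI loss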

RMPI

class neuralkg_ind.model.GNNModel.RMPI.RMPI(args)[source]

Bases: Module

Relational Message Passing for Fully Inductive Knowledge Graph Completion (RMPI), which passes messages directly between relations to make full use of the relation patterns for subgraph reasoning, with new techniques on graph transformation, graph pruning, relation-aware neighborhood attention, addressing empty subgraphs, etc.

args

Model configuration parameters.

rel_emb

Relation embedding, shape: [num_rel, rel_emb_dim].

conc

Whether to apply target-aware attention for 2-hop neighbors.

AggregateConv(graph, u_node, v_node, num_nodes, num_edges, aggr_flag, is_drop)[source]

Function for aggregating relations.

Parameters:
  • graph – Subgraph of corresponding triple.

  • u_node – Node of head entities.

  • v_node – Node of tail entities.

  • num_nodes – The number of nodes.

  • num_edges – The number of edges.

  • aggr_flag – 2: 2-hop neighbors; 1: 1-hop directed neighbors; 0: 1-hop disclosing directed neighbors.

  • is_drop – Whether to mask edges.

Returns:

Embedding of relation neighbors.

Return type:

rel_neighbor_embd

forward(data)[source]

Calculating subgraph scores.

Parameters:

data – Enclosing/disclosing subgraphs and relation labels.

Returns:

Score of subgraphs.

Return type:

output

static sparse_dense_mul(s, d)[source]
static sparse_index_select(s, idx)[source]
training: bool
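
These two static helpers manipulate torch sparse COO tensors. A hedged sketch of what an element-wise sparse-dense product of this kind typically looks like (a generic implementation, not necessarily the library's exact one):

    import torch

    def sparse_dense_mul_sketch(s, d):
        # s: 2-D sparse COO tensor; d: dense tensor of the same shape.
        idx = s._indices()
        vals = s._values() * d[idx[0], idx[1]]   # scale each stored value
        return torch.sparse_coo_tensor(idx, vals, s.size())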

MorsE

class neuralkg_ind.model.GNNModel.MorsE.MorsE(args)[source]

Bases: Module

Meta-Knowledge Transfer for Inductive Knowledge Graph Embedding (MorsE), which learns transferable meta-knowledge that can be used to produce entity embeddings.

args

Model configuration parameters.

ent_init

Entity embedding init class.

rgcn

RGCN model.

KGEModel

KGE model.

forward(sample, ent_emb, mode='single')[source]

Calculating triple score.

Parameters:
  • sample – Sampled triplets.

  • ent_emb – Embedding of entities.

  • mode – This arg indicates that negative entity will replace the head or tail entity.

Returns:

Score of triple.

Return type:

score

get_intest_train_g()[source]

Getting inductive test-train graph.

Returns:

test-train graph.

Return type:

indtest_train_g

get_ent_emb(sup_g_bidir)[source]

Getting entity embeddings.

Parameters:

sup_g_bidir – Undirected supporting graph.

Returns:

Embedding of entities.

Return type:

ent_emb

get_score(batch, mode)[source]

Getting score of triplets.

Parameters:

batch – Including positive sample, entities embedding, etc.

Returns:

Score of positive or negative sample.

Return type:

score

get_num_rel(args)[source]

Getting the number of relations.

Parameters:

args – Model configuration parameters.

Returns:

The number of relations.

Return type:

num_rel

training: bool
class neuralkg_ind.model.GNNModel.MorsE.EntInit(args)[source]

Bases: Module

Class for initializing entities.

args

Model configuration parameters.

rel_head_emb

Embedding of relation to head.

rel_tail_emb

Embedding of relation to tail.

forward(g_bidir)[source]

Initialize entities in graph.

Parameters:

g_bidir – Undirected graph.

training: bool
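
EntInit types each entity by the relations it participates in, so unseen entities still get meaningful features. A heavily hedged sketch of the idea (averaging incident relation-head/relation-tail embeddings is an assumption about the scheme, and the helper name is illustrative):

    import torch

    def init_entity_feature(rel_head_emb, rel_tail_emb, head_rel_ids, tail_rel_ids):
        # head_rel_ids: relations this entity heads; tail_rel_ids: relations it tails.
        parts = torch.cat([rel_head_emb[head_rel_ids], rel_tail_emb[tail_rel_ids]])
        return parts.mean(dim=0)   # relation-derived entity feature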
class neuralkg_ind.model.GNNModel.MorsE.RelMorsGraphConv(args, inp_dim, out_dim, aggregator, num_rels, num_bases=-1, bias=False, activation=None, dropout=0.0, edge_dropout=0.0, is_input_layer=False, has_attn=False)[source]

Bases: RelGraphConv

Basic layer of RGCN.

args

Model configuration parameters.

bias

Weight bias.

inp_dim

Dimension of input.

out_dim

Dimension of output.

num_rels

The number of relations.

num_bases

The number of bases.

has_attn

Whether there is attention mechanism.

is_input_layer

Whether it is input layer.

aggregator

Type of aggregator.

weight

Weight matrix.

w_comp

Bases matrix.

self_loop_weight

Self-loop weight.

edge_dropout

Dropout of edge.

message(edges)[source]

Message function for propagating.

Parameters:

edges – Edges in graph.

Returns:

Embedding of current layer. msg: Message for propagating. a: Coefficient.

Return type:

curr_emb

apply_node_func(nodes)[source]

Function used for nodes.

Parameters:

nodes – nodes in graph.

Returns:

Representation of nodes.

Return type:

node_repr

forward(g)[source]

Update node representation.

Parameters:

g – Subgraph of corresponding triple.

training: bool
class neuralkg_ind.model.GNNModel.MorsE.RGCN(args, basiclayer)[source]

Bases: Model

RGCN model

args

Model configuration parameters.

basiclayer

Layer of RGCN model.

inp_dim

Dimension of input.

emb_dim

Dimension of embedding.

aggregator

Type of aggregator.

build_hidden_layer(idx)[source]

Build a hidden layer of RGCN.

Parameters:

idx – The idx of layer.

Returns:

A basic layer, built according to whether it is the first layer.

Return type:

output

forward(g)[source]

Getting node embeddings.

Parameters:

g – Subgraph of corresponding task.

Returns:

Node embeddings.

Return type:

g.ndata[‘h’]

training: bool
class neuralkg_ind.model.GNNModel.MorsE.KGEModel(args)[source]

Bases: Module

KGE model

args

Model configuration parameters.

model_name

The name of model.

nrelation

The number of relation.

emb_dim

Dimension of embedding.

epsilon

Calculate embedding_range.

margin

Calculate embedding_range and loss.

embedding_range

Uniform distribution range.

relation_embedding

Embedding of relation.

training: bool
forward(sample, ent_emb, mode='single')[source]
Forward function that calculates the score of a batch of triples.

In the ‘single’ mode, sample is a batch of triples. In the ‘head-batch’ or ‘tail-batch’ mode, sample consists of two parts: the first part is usually the positive sample, and the second part is the entities in the negative samples, because negative samples and positive samples usually share two elements of their triples ((head, relation) or (relation, tail)).

Parameters:
  • sample – Positive and negative sample.

  • ent_emb – Embedding of entities.

  • mode – ‘single’, ‘head-batch’ or ‘tail-batch’.

Returns:

The score of sample.

Return type:

score
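
The two-part sample in ‘head-batch’/‘tail-batch’ mode exists so that one positive part broadcasts against many candidate entities. A hedged TransE-style sketch of that broadcasting (KGEModel supports several score functions; TransE is just one concrete choice, and the shapes follow the docstring conventions above):

    import torch

    def kge_score_sketch(head, relation, tail, margin=9.0, mode='single'):
        # 'single': head/relation/tail are all (bs, 1, dim).
        # 'head-batch': head is (bs, n_neg, dim); 'tail-batch': tail is (bs, n_neg, dim).
        if mode == 'head-batch':
            score = head + (relation - tail)   # broadcast over candidate heads
        else:
            score = (head + relation) - tail   # broadcast over candidate tails
        return margin - torch.norm(score, p=1, dim=2)   # (bs, 1) or (bs, n_neg)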