RuleModel

model

class neuralkg_ind.model.RuleModel.model.Model(args)[source]

Bases: Module

init_emb()[source]
score_func(head_emb, relation_emb, tail_emb)[source]
forward(triples, negs, mode)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

tri2emb(triples, negs=None, mode='single')[source]

Get embedding of triples.

This function gets the embeddings of the head, relation, and tail respectively; each returned embedding has three dimensions.

Parameters:
  • triples (tensor) – Tensor of triple ids, shape: [number of triples, 3].

  • negs (tensor, optional) – One-dimensional tensor storing the ids of the entities to be replaced. When negs is None, the model is in the test/eval phase. Defaults to None.

  • mode (str, optional) – Indicates whether the negative entity replaces the head or the tail entity. When it is ‘single’, no entity is replaced. Defaults to ‘single’.

Returns:

head_emb – Head entity embedding. relation_emb – Relation embedding. tail_emb – Tail entity embedding.

Return type:

(head_emb, relation_emb, tail_emb)
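The three-dimensional layout described above can be sketched in plain Python. The helper below is hypothetical (not library code), and the mode names ‘head-batch’/‘tail-batch’ and the [batch_size, num_neg, emb_dim] layout are assumptions inferred from the parameter descriptions:

```python
# Sketch of the shape contract for tri2emb (hypothetical helper, not library code).
# In 'single' mode every embedding has shape [batch_size, 1, emb_dim]; in the
# negative-sampling modes the replaced side expands to [batch_size, num_neg, emb_dim]
# so that scores can broadcast over all negatives at once.
def tri2emb_shapes(batch_size, num_neg, emb_dim, mode="single"):
    one = (batch_size, 1, emb_dim)
    neg = (batch_size, num_neg, emb_dim)
    if mode == "single":          # no entity is replaced
        return one, one, one
    if mode == "head-batch":      # negatives replace the head entity
        return neg, one, one
    if mode == "tail-batch":      # negatives replace the tail entity
        return one, one, neg
    raise ValueError(f"unknown mode: {mode}")
```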

training: bool

ComplEx_NNE_AER

class neuralkg_ind.model.RuleModel.ComplEx_NNE_AER.ComplEx_NNE_AER(args, rel2id)[source]

Bases: Model

Improving Knowledge Graph Embedding Using Simple Constraints (ComplEx-NNE_AER), which examines non-negativity constraints on entity representations and approximate entailment constraints on relation representations.

args

Model configuration parameters.

epsilon

Used to calculate embedding_range.

margin

Used to calculate embedding_range and loss.

embedding_range

Uniform distribution range.

ent_emb

Entity embedding, shape:[num_ent, emb_dim].

rel_emb

Relation embedding, shape: [num_rel, emb_dim].

get_rule(rel2id)[source]

Get rules for rule-based KGE models such as ComplEx_NNE_AER. Rules and their confidences are read from the _cons.txt file. Updates:

(rule_p, rule_q): The rule. confidence: The confidence of the rule.
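A minimal sketch of the rule-loading step, assuming each line of the _cons.txt file holds a relation pair “p,q” followed by a confidence (the line format, the function name, and the example relation names are assumptions for illustration):

```python
# Hypothetical sketch of rule parsing for a rule-based KGE model.
# Assumed line format: "premise_relation,conclusion_relation confidence".
def parse_rules(lines, rel2id):
    rule_p, rule_q, confidence = [], [], []
    for line in lines:
        pair, conf = line.split()
        p, q = pair.split(",")
        if p in rel2id and q in rel2id:   # keep only rules over known relations
            rule_p.append(rel2id[p])
            rule_q.append(rel2id[q])
            confidence.append(float(conf))
    return rule_p, rule_q, confidence
```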

init_emb()[source]

Initialize the entity and relation embeddings in the form of a uniform distribution.
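A minimal sketch of the uniform initialization, assuming the common convention embedding_range = (margin + epsilon) / emb_dim (the exact formula and default values are assumptions; the attributes above only state that margin and epsilon are used to calculate embedding_range):

```python
import random

# Sketch of uniform embedding initialization. Assumed convention:
# embedding_range = (margin + epsilon) / emb_dim, sampled in [-range, range].
def init_uniform(num_emb, emb_dim, margin=6.0, epsilon=2.0):
    rng = (margin + epsilon) / emb_dim
    return [[random.uniform(-rng, rng) for _ in range(emb_dim)]
            for _ in range(num_emb)]
```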

score_func(head_emb, relation_emb, tail_emb, mode)[source]

Calculating the score of triples.

The formula for calculating the score is \(\operatorname{Re}(\langle \mathbf{w}_r, \mathbf{e}_s, \bar{\mathbf{e}}_o \rangle)\).

Parameters:
  • head_emb – The head entity embedding.

  • relation_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score
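Expanding the real part of the trilinear product over the real and imaginary components gives the usual four-term ComplEx score. The sketch below illustrates this per triple in plain Python (the function name and the flat-list representation are for illustration; the library operates on batched tensors):

```python
# Per-triple ComplEx score Re(<w_r, e_s, conj(e_o)>), written out over the
# real (re) and imaginary (im) parts of head (h), relation (r), and tail (t):
# sum_i  r_re*h_re*t_re + r_re*h_im*t_im + r_im*h_re*t_im - r_im*h_im*t_re
def complex_score(h_re, h_im, r_re, r_im, t_re, t_im):
    return sum(rr * hr * tr + rr * hi * ti + ri * hr * ti - ri * hi * tr
               for hr, hi, rr, ri, tr, ti in zip(h_re, h_im, r_re, r_im, t_re, t_im))
```

Note that when all imaginary parts are zero the score reduces to the DistMult trilinear product.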

forward(triples, negs=None, mode='single')[source]

The function used in the training phase.

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • negs – Negative samples, defaults to None.

  • mode – Choose head-predict or tail-predict, Defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase.

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

training: bool

IterE

class neuralkg_ind.model.RuleModel.IterE.IterE(args, train_sampler, test_sampler)[source]

Bases: Model

Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning (WWW ’19) (IterE).

args

Model configuration parameters.

epsilon

Used to calculate embedding_range.

margin

Used to calculate embedding_range and loss.

embedding_range

Uniform distribution range.

ent_emb

Entity embedding, shape:[num_ent, emb_dim].

rel_emb

Relation embedding, shape: [num_rel, emb_dim].

get_axiom()[source]
update_train_triples(epoch=0, update_per=10)[source]

Add the new triples derived from axioms to the training triples.

Parameters:
  • epoch (int, optional) – The current epoch in the training process. Defaults to 0.

  • update_per (int, optional) – The number of epochs between updates. Defaults to 10.

Returns:

The training triples after adding the new triples derived from axioms.

Return type:

updated_train_data
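The periodic update can be sketched as follows (a minimal illustration; the function signature, the triple representation as tuples, and the deduplication step are assumptions, not the library’s implementation):

```python
# Sketch of IterE's periodic training-set update: every `update_per` epochs,
# triples inferred from high-confidence axioms are merged into the training set.
def update_train_triples(train_triples, axiom_triples, epoch, update_per=10):
    if epoch > 0 and epoch % update_per == 0:
        seen = set(train_triples)
        train_triples = train_triples + [t for t in axiom_triples if t not in seen]
    return train_triples
```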

split_embedding(embedding)[source]

Split an embedding into components.

Parameters:

embedding – The embeddings to be split, shape: [None, dim].

Returns:

The embedding split into components.

Return type:

split_emb

sim(head=None, tail=None, arity=None)[source]

Calculate the similarity between two matrices.

Parameters:
  • head – embeddings of head, shape:[batch_size, dim].

  • tail – embeddings of tail, shape:[batch_size, dim] or [1, dim].

  • arity – 1, 2, or 3.

Returns:

The similarity between two matrices.

Return type:

probability

run_axiom_probability()[source]

Generate a probability for each axiom in the axiom pool.

update_valid_axioms(input)[source]

Select high-probability axioms as valid axioms and record their scores.

generate_new_train_triples()[source]

Update the training triples; called after each training epoch ends.

Returns:

The new training dataset (triples).

Return type:

self.train_sampler.train_triples

get_rule(rel2id)[source]

Get rules for rule-based KGE models such as ComplEx_NNE_AER. Rules and their confidences are read from the _cons.txt file. Updates:

(rule_p, rule_q): The rule. confidence: The confidence of the rule.

init_emb()[source]

Initialize the entity and relation embeddings in the form of a uniform distribution.

score_func(head_emb, relation_emb, tail_emb, mode)[source]

Calculating the score of triples.

The score is calculated with the DistMult formula, the trilinear product of the head, relation, and tail embeddings.

Parameters:
  • head_emb – The head entity embedding.

  • relation_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score
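The DistMult trilinear product can be illustrated per triple in plain Python (the function name and flat-list representation are for illustration; the library operates on batched tensors):

```python
# Per-triple DistMult score: <h, r, t> = sum_i h_i * r_i * t_i.
def distmult_score(h, r, t):
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))
```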

forward(triples, negs=None, mode='single')[source]

The function used in the training phase.

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • negs – Negative samples, defaults to None.

  • mode – Choose head-predict or tail-predict, Defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase.

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

training: bool

RugE

class neuralkg_ind.model.RuleModel.RugE.RugE(args)[source]

Bases: Model

Knowledge Graph Embedding with Iterative Guidance from Soft Rules (RugE), a paradigm of KG embedding that iteratively injects guidance from soft rules into the learned embeddings.

args

Model configuration parameters.

epsilon

Used to calculate embedding_range.

margin

Used to calculate embedding_range and loss.

embedding_range

Uniform distribution range.

ent_emb

Entity embedding, shape:[num_ent, emb_dim].

rel_emb

Relation embedding, shape: [num_rel, emb_dim].

init_emb()[source]

Initialize the entity and relation embeddings in the form of a uniform distribution.

score_func(head_emb, relation_emb, tail_emb, mode)[source]

Calculating the score of triples.

The formula for calculating the score is \(\operatorname{Re}(\langle \mathbf{w}_r, \mathbf{e}_s, \bar{\mathbf{e}}_o \rangle)\).

Parameters:
  • head_emb – The head entity embedding.

  • relation_emb – The relation embedding.

  • tail_emb – The tail entity embedding.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

forward(triples, negs=None, mode='single')[source]

The function used in the training phase.

Parameters:
  • triples – The triples ids, as (h, r, t), shape:[batch_size, 3].

  • negs – Negative samples, defaults to None.

  • mode – Choose head-predict or tail-predict, Defaults to ‘single’.

Returns:

The score of triples.

Return type:

score

get_score(batch, mode)[source]

The function used in the testing phase.

Parameters:
  • batch – A batch of data.

  • mode – Choose head-predict or tail-predict.

Returns:

The score of triples.

Return type:

score

training: bool