loss

Adv_Loss

class neuralkg.loss.Adv_Loss.Adv_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

Negative sampling loss with self-adversarial training.

args

Pre-set parameters, such as the self-adversarial temperature.

model

The KG model for training.

forward(pos_score, neg_score, subsampling_weight=None)[source]

Negative sampling loss with self-adversarial training. In math:

L = -\log \sigma\left(\gamma - d_{r}(\mathbf{h}, \mathbf{t})\right) - \sum_{i=1}^{n} p\left(h_{i}^{\prime}, r, t_{i}^{\prime}\right) \log \sigma\left(d_{r}\left(\mathbf{h}_{i}^{\prime}, \mathbf{t}_{i}^{\prime}\right) - \gamma\right)

Args:

pos_score: The score of positive samples.

neg_score: The score of negative samples.

subsampling_weight: The weight for correcting pos_score and neg_score.

Returns:

loss: The training loss for back propagation.

normalize()[source]

Calculates the regularization term.

training: bool
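
A minimal sketch of this loss in plain PyTorch, following the formula above. It assumes pos_score and neg_score are already margin-adjusted (i.e. gamma - d_r has been folded into the score) with shapes (batch,) and (batch, num_neg), and adv_temp stands in for the self-adversarial temperature from args; the actual NeuralKG implementation may differ in naming and conventions.

import torch
import torch.nn.functional as F

def adv_loss_sketch(pos_score, neg_score, adv_temp, subsampling_weight=None):
    # Self-adversarial weights p(h', r, t'): softmax over the negative
    # scores, detached so the weighting itself receives no gradient.
    p = F.softmax(neg_score * adv_temp, dim=-1).detach()
    # Positive part: -log sigma(gamma - d_r(h, t)).
    pos_loss = -F.logsigmoid(pos_score)
    # Negative part: weighted -log sigma(d_r(h', t') - gamma).
    neg_loss = -(p * F.logsigmoid(-neg_score)).sum(dim=-1)
    loss = (pos_loss + neg_loss) / 2
    if subsampling_weight is not None:
        # Weighted mean that corrects for the frequency of each triple.
        return (subsampling_weight * loss).sum() / subsampling_weight.sum()
    return loss.mean()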

ComplEx_NNE_AER_Loss

class neuralkg.loss.ComplEx_NNE_AER_Loss.ComplEx_NNE_AER_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pos_score, neg_score)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
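
In practice, this means constructing the loss module once and invoking it as a callable; the names below are illustrative.

# Call the module instance so registered hooks run; avoid .forward() directly.
loss_fn = ComplEx_NNE_AER_Loss(args, model)
loss = loss_fn(pos_score, neg_score)       # preferred
# loss_fn.forward(pos_score, neg_score)    # silently skips hooks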

training: bool

Cross_Entropy_Loss

class neuralkg.loss.Cross_Entropy_Loss.Cross_Entropy_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pred, label)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
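
The signature suggests a standard cross-entropy objective over predicted scores and 0/1 labels. A minimal sketch under that assumption (the real implementation may add label smoothing or different reduction):

import torch.nn.functional as F

def cross_entropy_loss_sketch(pred, label):
    # pred: raw scores (logits); label: 0/1 targets of the same shape.
    return F.binary_cross_entropy_with_logits(pred, label.float())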

CrossE_Loss

class neuralkg.loss.CrossE_Loss.CrossE_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(score, label)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

IterE_Loss

class neuralkg.loss.IterE_Loss.IterE_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pos_score, neg_score, subsampling_weight=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

normalize()[source]

training: bool

KBAT_Loss

class neuralkg.loss.KBAT_Loss.KBAT_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(model, score, neg_score=None, label=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

Margin_Loss

class neuralkg.loss.Margin_Loss.Margin_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pos_score, neg_score)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
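
Margin-based ranking is the classic TransE-style objective: each positive sample should outscore every negative sample by at least a margin gamma. A sketch, assuming higher scores are better and margin stands in for the corresponding hyperparameter in args:

import torch

def margin_loss_sketch(pos_score, neg_score, margin):
    # pos_score: (batch, 1); neg_score: (batch, num_neg).
    # Hinge on the violation gamma - pos + neg; zero once the gap is met.
    return torch.clamp(margin - pos_score + neg_score, min=0).mean()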

RGCN_Loss

class neuralkg.loss.RGCN_Loss.RGCN_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

reg_loss()[source]

forward(score, labels)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
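
The reg_loss() method suggests the usual R-GCN link-prediction setup: binary cross-entropy on triple scores plus an L2 penalty on the embeddings. A sketch under that assumption, with reg_weight as a hypothetical coefficient:

import torch.nn.functional as F

def rgcn_loss_sketch(score, labels, embeddings, reg_weight=0.01):
    # Binary cross-entropy over scored triples (labels are 0/1).
    bce = F.binary_cross_entropy_with_logits(score, labels.float())
    # L2 regularization over entity/relation embeddings.
    return bce + reg_weight * embeddings.pow(2).mean()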

RugE_Loss

class neuralkg.loss.RugE_Loss.RugE_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pos_score, neg_score, rule, confidence, triple_num, pos_len)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

SimplE_Loss

class neuralkg.loss.SimplE_Loss.SimplE_Loss(args, model)[source]

Bases: torch.nn.modules.module.Module

forward(pos_score, neg_score)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
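
SimplE is commonly trained with a softplus (soft-margin) loss, equivalent to -log sigma(l * score) for labels l in {+1, -1}. A sketch under that assumption:

import torch
import torch.nn.functional as F

def simple_loss_sketch(pos_score, neg_score):
    # softplus(-x) == -log sigmoid(x): penalize low positive scores
    # and high negative scores symmetrically.
    scores = torch.cat([pos_score.view(-1), neg_score.view(-1)])
    labels = torch.cat([torch.ones_like(pos_score).view(-1),
                        -torch.ones_like(neg_score).view(-1)])
    return F.softplus(-labels * scores).mean()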