loss¶
Adv_Loss¶
- class neuralkg.loss.Adv_Loss.Adv_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
Negative sampling loss with self-adversarial training.
- args¶
Some pre-set parameters, such as the self-adversarial temperature.
- model¶
The KG model for training.
- forward(pos_score, neg_score, subsampling_weight=None)[source]¶
Negative sampling loss with self-adversarial training. In math:
L = -\log \sigma\left(\gamma - d_{r}(\mathbf{h}, \mathbf{t})\right) - \sum_{i=1}^{n} p\left(h_{i}^{\prime}, r, t_{i}^{\prime}\right) \log \sigma\left(d_{r}\left(\mathbf{h}_{i}^{\prime}, \mathbf{t}_{i}^{\prime}\right) - \gamma\right)
- Args:
pos_score: The score of positive samples.
neg_score: The score of negative samples.
subsampling_weight: The weight for correcting pos_score and neg_score.
- Returns:
loss: The training loss for backpropagation.
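For reference, a minimal PyTorch sketch of the loss above (a sketch only, assuming model scores of the form gamma - d_r so that higher means more plausible, an illustrative adv_temp temperature argument, and neg_score shaped [batch_size, num_neg]):

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_score, adv_temp=1.0, subsampling_weight=None):
    # p(h'_i, r, t'_i): softmax over the negative scores, detached so the
    # sampling distribution itself receives no gradient.
    neg_weight = F.softmax(neg_score * adv_temp, dim=-1).detach()
    # -log sigma(gamma - d_r(h, t)) == -log sigma(pos_score)
    pos_loss = -F.logsigmoid(pos_score).squeeze(-1)
    # -sum_i p_i * log sigma(d_r(h'_i, t'_i) - gamma) == -sum_i p_i * log sigma(-neg_score_i)
    neg_loss = -(neg_weight * F.logsigmoid(-neg_score)).sum(dim=-1)
    if subsampling_weight is not None:
        # Weighted average that corrects for frequency-based subsampling.
        pos_loss = (subsampling_weight * pos_loss).sum() / subsampling_weight.sum()
        neg_loss = (subsampling_weight * neg_loss).sum() / subsampling_weight.sum()
    else:
        pos_loss, neg_loss = pos_loss.mean(), neg_loss.mean()
    return (pos_loss + neg_loss) / 2
```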
ComplEx_NNE_AER_Loss¶
- class neuralkg.loss.ComplEx_NNE_AER_Loss.ComplEx_NNE_AER_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Cross_Entropy_Loss¶
- class neuralkg.loss.Cross_Entropy_Loss.Cross_Entropy_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pred, label)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
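As a rough illustration only (the exact formulation in NeuralKG may differ), a cross-entropy objective with this signature is typically a binary cross entropy between predicted scores over candidate entities and a (possibly smoothed) multi-hot label vector; the smoothing parameter below is illustrative:

```python
import torch.nn.functional as F

def cross_entropy_loss(pred, label, smoothing=0.0):
    # Optional label smoothing, commonly used with 1-N scoring (illustrative parameter).
    if smoothing > 0.0:
        label = (1.0 - smoothing) * label + smoothing / label.size(-1)
    # Binary cross entropy over every (query, candidate entity) pair;
    # assumes pred contains raw logits.
    return F.binary_cross_entropy_with_logits(pred, label)
```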
CrossE_Loss¶
- class neuralkg.loss.CrossE_Loss.CrossE_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(score, label)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
IterE_Loss¶
- class neuralkg.loss.IterE_Loss.IterE_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pos_score, neg_score, subsampling_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
KBAT_Loss¶
- class neuralkg.loss.KBAT_Loss.KBAT_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(model, score, neg_score=None, label=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Margin_Loss¶
- class neuralkg.loss.Margin_Loss.Margin_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
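A hedged sketch of a margin-based ranking objective with this signature (the margin value and the assumption that higher scores mean more plausible triples are illustrative; the actual class may instead wrap torch.nn.MarginRankingLoss):

```python
import torch

def margin_loss(pos_score, neg_score, margin=6.0):
    # Hinge on the score gap: negatives should trail positives by at least `margin`.
    return torch.clamp(margin + neg_score - pos_score, min=0.0).mean()
```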
RGCN_Loss¶
- class neuralkg.loss.RGCN_Loss.RGCN_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(score, labels)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
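A minimal sketch of a loss with this signature, assuming 0/1 triple labels and raw logit scores (the R-GCN link-prediction objective also adds embedding regularization, which is omitted here):

```python
import torch.nn.functional as F

def rgcn_loss(score, labels):
    # Binary cross entropy over scored triples with 0/1 labels.
    return F.binary_cross_entropy_with_logits(score, labels)
```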
RugE_Loss¶
- class neuralkg.loss.RugE_Loss.RugE_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pos_score, neg_score, rule, confidence, triple_num, pos_len)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
SimplE_Loss¶
- class neuralkg.loss.SimplE_Loss.SimplE_Loss(args, model)[source]¶
Bases: torch.nn.modules.module.Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
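A hedged sketch following the softplus formulation of the SimplE paper, softplus(-y * score) with y = +1 for positives and y = -1 for negatives (any regularization term the class may add is omitted):

```python
import torch
import torch.nn.functional as F

def simple_loss(pos_score, neg_score):
    # softplus(-score) for positives, softplus(+score) for negatives.
    loss = torch.cat([F.softplus(-pos_score).flatten(),
                      F.softplus(neg_score).flatten()])
    return loss.mean()
```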