loss¶
Adv_Loss¶
- class neuralkg_ind.loss.Adv_Loss.Adv_Loss(args, model)[source]¶
Bases: Module
Negative sampling loss with self-adversarial training.
- args¶
Some pre-set parameters, such as self-adversarial temperature, etc.
- model¶
The KG model for training.
- forward(pos_score, neg_score, subsampling_weight=None)[source]¶
Negative sampling loss with self-adversarial training. In math:

L = -\log \sigma\left(\gamma - d_{r}(\mathbf{h}, \mathbf{t})\right) - \sum_{i=1}^{n} p\left(h_{i}^{\prime}, r, t_{i}^{\prime}\right) \log \sigma\left(d_{r}\left(\mathbf{h}_{i}^{\prime}, \mathbf{t}_{i}^{\prime}\right) - \gamma\right)
- Args:
pos_score: The score of positive samples.
neg_score: The score of negative samples.
subsampling_weight: The weight for correcting pos_score and neg_score.
- Returns:
loss: The training loss for back propagation.
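The equation above can be sketched as follows. This is a minimal PyTorch illustration of the self-adversarial negative sampling loss, not the library's actual implementation; the function name `adv_loss_sketch` and the parameter `adv_temp` (the self-adversarial temperature, set via `args` in the real class) are hypothetical.

```python
import torch
import torch.nn.functional as F

def adv_loss_sketch(pos_score, neg_score, adv_temp=1.0, subsampling_weight=None):
    """Self-adversarial negative sampling loss (hypothetical sketch).

    pos_score: (batch,) scores gamma - d_r(h, t) of positive triples.
    neg_score: (batch, n) scores of the n negative samples per triple.
    """
    # Self-adversarial weights p(h'_i, r, t'_i): a softmax over the negative
    # scores, detached so the sampling distribution receives no gradient.
    neg_weight = F.softmax(neg_score * adv_temp, dim=-1).detach()
    # Weighted log-sigmoid over negatives, plain log-sigmoid over positives.
    neg_part = (neg_weight * F.logsigmoid(-neg_score)).sum(dim=-1)
    pos_part = F.logsigmoid(pos_score)
    if subsampling_weight is not None:
        # Correct for triple frequency, as in word2vec-style subsampling.
        w = subsampling_weight / subsampling_weight.sum()
        return -(w * (pos_part + neg_part)).sum() / 2
    # Average over the batch; halve to balance positive and negative parts.
    return -(pos_part + neg_part).mean() / 2
```

Detaching the softmax weights is the key design choice: the negatives are re-weighted by how plausible the model currently finds them, without letting that weighting leak gradients back into the scores.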
ComplEx_NNE_AER_Loss¶
- class neuralkg_ind.loss.ComplEx_NNE_AER_Loss.ComplEx_NNE_AER_Loss(args, model)[source]¶
Bases: Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
CrossE_Loss¶
- class neuralkg_ind.loss.CrossE_Loss.CrossE_Loss(args, model)[source]¶
Bases: Module
- forward(score, label)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Cross_Entropy_Loss¶
- class neuralkg_ind.loss.Cross_Entropy_Loss.Cross_Entropy_Loss(args, model)[source]¶
Bases: Module
- forward(pred, label)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
KBAT_Loss¶
- class neuralkg_ind.loss.KBAT_Loss.KBAT_Loss(args, model)[source]¶
Bases: Module
- forward(model, score, neg_score=None, label=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Logsig_Loss¶
- class neuralkg_ind.loss.Logsig_Loss.Logsig_Loss(args, model)[source]¶
Bases: Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Margin_Loss¶
- class neuralkg_ind.loss.Margin_Loss.Margin_Loss(args, model)[source]¶
Bases: Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
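The class does not document its formula, but its name suggests the standard margin-based ranking loss, max(0, γ + f(neg) - f(pos)) with higher scores meaning more plausible triples. The sketch below is a hypothetical illustration under that assumption; the library's actual implementation and sign convention may differ.

```python
import torch

def margin_loss_sketch(pos_score, neg_score, margin=1.0):
    """Hypothetical margin ranking loss sketch.

    Penalizes each negative triple whose score comes within `margin`
    of its paired positive triple's score (higher score = more plausible).
    """
    return torch.clamp(margin + neg_score - pos_score, min=0).mean()
```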
RGCN_Loss¶
- class neuralkg_ind.loss.RGCN_Loss.RGCN_Loss(args, model)[source]¶
Bases: Module
- forward(score, labels)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
RugE_Loss¶
- class neuralkg_ind.loss.RugE_Loss.RugE_Loss(args, model)[source]¶
Bases: Module
- forward(pos_score, neg_score, rule, confidence, triple_num, pos_len)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
SimplE_Loss¶
- class neuralkg_ind.loss.SimplE_Loss.SimplE_Loss(args, model)[source]¶
Bases: Module
- forward(pos_score, neg_score)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Softplus_Loss¶
- class neuralkg_ind.loss.Softplus_Loss.Softplus_Loss(args, model)[source]¶
Bases: Module
Softplus loss.
- args¶
Some pre-set parameters, etc.
- model¶
The KG model for training.
- forward(pos_score, neg_score, subsampling_weight=None)[source]¶
Negative sampling loss Softplus_Loss. In math:
\begin{aligned}
L(\boldsymbol{Q}, \boldsymbol{W}) = & \sum_{r(h, t) \in \Omega \cup \Omega^{-}} \log \left(1+\exp \left(-Y_{h r t}\, \phi(h, r, t)\right)\right) \\
& +\lambda_{1}\|\boldsymbol{Q}\|_{2}^{2}+\lambda_{2}\|\boldsymbol{W}\|_{2}^{2}
\end{aligned}
- Args:
pos_score: The score of positive samples (with regularization if DualE).
neg_score: The score of negative samples (with regularization if DualE).
- Returns:
loss: The training loss for back propagation.
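The softplus term above, log(1 + exp(-Y φ)), can be sketched directly with PyTorch's `softplus`, using the label Y = +1 for positives and Y = -1 for negatives. This is a minimal illustration with a hypothetical function name; the λ₁, λ₂ regularization terms on Q and W are omitted here (in the real class they arrive folded into the scores for DualE).

```python
import torch
import torch.nn.functional as F

def softplus_loss_sketch(pos_score, neg_score):
    """Hypothetical sketch of the softplus loss log(1 + exp(-Y * phi)).

    pos_score: scores phi(h, r, t) of observed triples (Y = +1).
    neg_score: scores of corrupted triples (Y = -1).
    Regularization terms are left out of this sketch.
    """
    pos_loss = F.softplus(-pos_score)  # Y = +1: log(1 + exp(-phi))
    neg_loss = F.softplus(neg_score)   # Y = -1: log(1 + exp(+phi))
    # Average over all positive and negative triples together.
    return torch.cat([pos_loss.view(-1), neg_loss.view(-1)]).mean()
```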