cenreg.pytorch package
Submodules
cenreg.pytorch.cjd2F module
- class cenreg.pytorch.cjd2F.MseModel(jd_pred: array, copula, learning_rate: float = 0.01, focal_risk: int = -1, init_f: ndarray = None, optimizer=None)
Bases: Module
- configure_optimizers()
- forward()
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- loss()
- cenreg.pytorch.cjd2F.minimize_mse(model, num_epochs: int) → ndarray
Estimate marginal distribution from joint distribution.
- Parameters:
model (pytorch model)
num_epochs (int) – Number of epochs.
- Returns:
F_pred – Estimated CDF.
- Return type:
np.ndarray of shape [batch_size, num_risks, num_bin_predictions+1]
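A minimal usage sketch (not taken from the library's own examples; the layout of jd_pred, the bin counts, and the choice of the independence copula are assumptions):

    import numpy as np
    from cenreg.pytorch.cjd2F import MseModel, minimize_mse
    from cenreg.pytorch.copula_torch import create

    # Hypothetical predicted censored joint distribution over 2 risks and 10 bins.
    jd_pred = np.random.rand(64, 2 * 10).astype(np.float32)
    jd_pred /= jd_pred.sum(axis=1, keepdims=True)  # normalize rows to sum to 1

    copula = create("independence")  # dependence structure between the risks
    model = MseModel(jd_pred, copula, learning_rate=0.01)
    F_pred = minimize_mse(model, num_epochs=200)
    # F_pred: np.ndarray of shape [batch_size, num_risks, num_bin_predictions + 1]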
cenreg.pytorch.copula_torch module
- class cenreg.pytorch.copula_torch.ClaytonCopula(theta: Tensor)
Bases: object
Clayton copula implemented with PyTorch.
- cdf(u: Tensor) → Tensor
- Parameters:
u (torch.Tensor (float)) – Each element should be in [0, 1].
- Returns:
probability
- Return type:
torch.Tensor (float)
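A minimal sketch of calling the copula; the shape [batch_size, 2] for u is an assumption carried over from the IndependenceCopula documentation below:

    import torch
    from cenreg.pytorch.copula_torch import ClaytonCopula

    # Pairs of marginal probabilities; each element must lie in [0, 1].
    u = torch.tensor([[0.2, 0.7], [0.5, 0.5], [0.9, 0.1]])

    copula = ClaytonCopula(theta=torch.tensor(2.0))
    p = copula.cdf(u)  # joint probabilities, one per row of u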
- class cenreg.pytorch.copula_torch.FrankCopula(theta: Tensor)
Bases: object
Frank copula implemented with PyTorch.
- cdf(u: Tensor) → Tensor
- class cenreg.pytorch.copula_torch.GumbelCopula(theta: Tensor)
Bases: object
Gumbel copula implemented with PyTorch.
- cdf(u: Tensor) → Tensor
- class cenreg.pytorch.copula_torch.IndependenceCopula
Bases: object
Independence copula implemented with PyTorch.
- cdf(u: Tensor) → Tensor
- Parameters:
u (torch.Tensor (float)) – tensor of shape [batch_size, 2]. Each element should be in [0, 1].
- Returns:
probability – tensor of shape [batch_size].
- Return type:
torch.Tensor (float)
- class cenreg.pytorch.copula_torch.SurvivalCopula(copula)
Bases: object
Survival copula implemented with PyTorch.
- cdf(u: Tensor) → Tensor
- cenreg.pytorch.copula_torch.create(name: str, theta: float = 0.0)
Create a copula object based on the name and theta parameters.
- Parameters:
name (str) – Name of the copula. Options are “independence” and “frank”.
theta (float) – Parameter for the copula. Default is 0.0.
- Returns:
An instance of the Copula class.
- Return type:
copula
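For example (the theta value is illustrative only):

    from cenreg.pytorch.copula_torch import create

    indep = create("independence")      # theta defaults to 0.0
    frank = create("frank", theta=3.0)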
cenreg.pytorch.datamodule module
- class cenreg.pytorch.datamodule.ProbDataModule(batch_size)
Bases: Dataset
- test_dataloader(features, targets)
- train_dataloader(features, targets)
- val_dataloader(features, targets)
cenreg.pytorch.distribution module
- class cenreg.pytorch.distribution.LinearCDF(boundaries: Tensor, values: Tensor = None, apply_cumsum: bool = True)
Bases: object
Distribution functions with linear interpolation.
A distribution function is represented as a discrete cumulative distribution function (CDF) evaluated at pre-defined points (boundaries). Values between the boundaries are computed by linear interpolation.
If values is a two-dimensional tensor, then each row corresponds to a CDF.
- average_cdf(y, mask=None, add_edge=False)
- cdf(y, mask=None, add_edges=False)
Cumulative distribution function (i.e., inverse of quantile function).
- Parameters:
y (Tensor) – CDF values are computed for values y. If dimension of y is one, then cdf(y) is computed for all CDFs. If dimension of y is two, then cdf(y) is computed for each corresponding CDF.
mask (Tensor) – Mask to compute CDF for a subset of CDFs. Tensor must be one-dimensional and its length must be equal to the number of CDFs.
add_edges (bool) – If True, then the CDF values at the boundaries are added.
- Returns:
cdf_values – CDF values computed for each value in y. Tensor shape is equal to the shape of y.
- Return type:
Tensor
- get_boundary_lengths()
- icdf(alpha, mask=None, add_edges=False)
Quantile function (i.e., inverse of cumulative distribution function).
- Parameters:
alpha (Tensor) – Quantile values are computed for quantile levels alpha. If dimension of alpha is one, then icdf(alpha) is computed for all CDFs. If dimension of alpha is two, then icdf(alpha) is computed for each corresponding CDF.
mask (Tensor) – Mask to compute CDF for a subset of CDFs. Tensor must be one-dimensional and its length must be equal to the number of CDFs.
add_edges (bool) – If True, then the inverse of the CDF values at the boundaries are added.
- Returns:
y – Quantile values computed for each level in alpha. Tensor shape is equal to the shape of alpha.
- Return type:
Tensor
- set_knot_values(values, apply_cumsum=True)
Set the CDF knot values.
- Parameters:
values (Tensor) – One- or two-dimensional tensor containing the CDF values. If values is a two-dimensional tensor, then each row corresponds to a CDF and values[:,j] stores the value of the CDF at boundaries[j]. Tensor shape must be [num_CDF, len(boundaries)].
apply_cumsum (bool) – If True, then values is assumed to contain probability distribution functions (PDFs) and the cumulative sum of values is computed.
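A minimal sketch, assuming the knot values are supplied as PDFs and converted to CDFs via apply_cumsum=True (the boundaries and values below are illustrative only):

    import torch
    from cenreg.pytorch.distribution import LinearCDF

    boundaries = torch.tensor([0.0, 1.0, 2.0, 3.0, 4.0])

    # Two distributions; each row is a PDF over the boundaries and is turned
    # into a CDF by cumulative summation (apply_cumsum=True).
    pdf = torch.tensor([[0.00, 0.10, 0.40, 0.30, 0.20],
                        [0.00, 0.25, 0.25, 0.25, 0.25]])

    dist = LinearCDF(boundaries, pdf, apply_cumsum=True)
    probs = dist.cdf(torch.tensor([1.5]))     # CDF values at y = 1.5 for every CDF
    medians = dist.icdf(torch.tensor([0.5]))  # quantile values at level 0.5 for every CDF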
- class cenreg.pytorch.distribution.LinearQuantileFunction(qk_levels: Tensor, qk_values: Tensor = None, apply_cumsum: bool = True)
Bases: object
Quantile functions with linear interpolation.
A quantile function is defined by a set of quantile values (qk_values) at pre-defined quantile levels (qk_levels). Values between the quantile knots are computed by linear interpolation.
If qk_values is a two-dimensional tensor, then each row corresponds to a quantile function.
- average_cdf(y, mask=None, add_edge=False)
- cdf(y, mask=None, add_edges=False)
Cumulative distribution function (i.e., inverse of quantile function).
- Parameters:
y (Tensor) – Quantile levels are computed for quantile values y. If dimension of y is one, then cdf(y) is computed for all quantile functions. If dimension of y is two, then cdf(y) is computed for each corresponding quantile function.
mask (Tensor) – Mask to compute quantile function for a subset of quantile functions. Tensor must be one-dimensional and its length must be equal to the number of quantile functions.
add_edges (bool) – If True, then the CDF values at the boundaries are added.
- Returns:
q_levels – Quantile levels computed for each value in y. Tensor shape is equal to the shape of y.
- Return type:
Tensor
- get_qk_lengths()
- icdf(alpha, mask=None, add_edges=False)
Quantile function (i.e., inverse of cumulative distribution function).
- Parameters:
alpha (Tensor) – Quantile values are computed for quantile levels alpha. If dimension of alpha is one, then icdf(alpha) is computed for all quantile functions. If dimension of alpha is two, then icdf(alpha) is computed for each corresponding quantile function.
mask (Tensor) – Mask to compute quantile function for a subset of quantile functions. Tensor must be one-dimensional and its length must be equal to the number of quantile functions.
add_edges (bool) – If True, then the inverse of the CDF values at the boundaries are added.
- Returns:
y – Quantile values computed for each level in alpha. Tensor shape is equal to the shape of alpha.
- Return type:
Tensor
- set_knot_values(qk_values, apply_cumsum=True)
Set values of quantile knots.
- Parameters:
qk_values (Tensor) – One- or two-dimensional tensor containing the values of the quantile knots. If qk_values is a two-dimensional tensor, then each row corresponds to a quantile function and qk_values[:,j] stores the value of the quantile function at qk_levels[j]. Tensor shape must be [num_quantile_function, len(qk_levels)].
apply_cumsum (bool) – If True, then qk_values is assumed to be the differences of quantile values and the cumulative sum of qk_values is computed.
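A minimal sketch, assuming qk_values are supplied as increments between successive knots and accumulated via apply_cumsum=True (the levels and values below are illustrative only):

    import torch
    from cenreg.pytorch.distribution import LinearQuantileFunction

    qk_levels = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])

    # Two quantile functions; each row holds differences between successive
    # quantile knots and is accumulated into quantile values (apply_cumsum=True).
    increments = torch.tensor([[0.0, 1.0, 1.0, 2.0, 4.0],
                               [0.0, 0.5, 0.5, 0.5, 0.5]])

    qf = LinearQuantileFunction(qk_levels, increments, apply_cumsum=True)
    y = qf.icdf(torch.tensor([0.5]))     # quantile values at level 0.5
    alpha = qf.cdf(torch.tensor([2.0]))  # quantile levels at value y = 2.0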
cenreg.pytorch.loss_cdf module
- class cenreg.pytorch.loss_cdf.Brier(y_bins: Tensor, apply_cumsum: bool = True)
Bases: object
Loss class for the Brier score.
- loss(pred: Tensor, y: Tensor) → Tensor
- class cenreg.pytorch.loss_cdf.CNLL_CR(boundaries: Tensor, num_risks: int)
Bases: object
Censored Negative Log Likelihood for Competing Risks.
- loss(pred: Tensor, observed_times: Tensor, events: Tensor) → Tensor
- class cenreg.pytorch.loss_cdf.NegativeLogLikelihood(y_bins: Tensor, apply_cumsum: bool = True)
Bases: object
Loss class for negative log-likelihood.
- loss(pred: Tensor, y: Tensor, uncensored: Tensor = None) → Tensor
- class cenreg.pytorch.loss_cdf.RankedProbabilityScore(y_bins: Tensor, apply_cumsum: bool = True)
Bases: object
Loss class for the ranked probability score.
- loss(pred: Tensor, y: Tensor) → Tensor
- cenreg.pytorch.loss_cdf.brier(dist, y: Tensor, y_bins: Tensor = None) → Tensor
Compute the Brier score.
- Parameters:
dist (predicted distribution)
y (Tensor of shape [batch_size])
y_bins (Tensor of shape [num_bins+1])
- Returns:
loss
- Return type:
Tensor of shape [batch_size]
- cenreg.pytorch.loss_cdf.negative_log_likelihood(dist, y: Tensor, y_bins: Tensor = None, uncensored: Tensor = None, EPS: float = 0.0001) → Tensor
Compute the negative log-likelihood.
- Parameters:
dist (predicted distribution)
y (Tensor of shape [batch_size, 1])
y_bins (Tensor of shape [num_bins+1])
uncensored (Tensor of shape [batch_size])
EPS (float)
- Returns:
loss
- Return type:
Tensor of shape [batch_size]
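A minimal sketch; whether a LinearCDF instance satisfies the dist interface and the 1 = uncensored / 0 = censored encoding are assumptions, not documented behavior:

    import torch
    from cenreg.pytorch.distribution import LinearCDF
    from cenreg.pytorch.loss_cdf import negative_log_likelihood

    y_bins = torch.tensor([0.0, 1.0, 2.0, 3.0, 4.0])

    # One predicted distribution per sample (rows are PDFs over y_bins).
    pred_pdf = torch.tensor([[0.05, 0.10, 0.40, 0.30, 0.15],
                             [0.20, 0.20, 0.20, 0.20, 0.20]])
    dist = LinearCDF(y_bins, pred_pdf, apply_cumsum=True)

    y = torch.tensor([[1.3], [2.7]])       # observed times, shape [batch_size, 1]
    uncensored = torch.tensor([1.0, 0.0])  # assumed encoding: 1 = event observed
    nll = negative_log_likelihood(dist, y, y_bins, uncensored)  # shape [batch_size]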
- cenreg.pytorch.loss_cdf.ranked_probability_score(dist, y: Tensor, y_bins: Tensor) → Tensor
Compute the ranked probability score.
- Parameters:
dist (predicted distribution)
y (Tensor of shape [batch_size])
y_bins (Tensor of shape [num_bins+1])
- Returns:
loss
- Return type:
Tensor of shape [batch_size]
cenreg.pytorch.loss_cjd module
- class cenreg.pytorch.loss_cjd.Brier(y_bins: Tensor, num_risks: int)
Bases: object
- loss(pred: Tensor, observed_times: Tensor, events: Tensor) → Tensor
cenreg.pytorch.loss_cont module
cenreg.pytorch.mlp module
- class cenreg.pytorch.mlp.Linear(output_len: int)
Bases: Module
Single Layer Perceptron.
- forward(x: Tensor)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class cenreg.pytorch.mlp.MLP(input_len: int, output_len: int, num_neuron: int)
Bases: Module
Multi Layer Perceptron.
- forward(x: Tensor)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
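A minimal sketch of constructing and calling the module (the layer sizes and batch size are illustrative only):

    import torch
    from cenreg.pytorch.mlp import MLP

    net = MLP(input_len=10, output_len=20, num_neuron=64)

    x = torch.randn(32, 10)  # batch of 32 feature vectors
    out = net(x)             # call the Module instance so registered hooks run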
- class cenreg.pytorch.mlp.MLP_MultiHead(input_len: int, output_len: int, output_num: int, num_neuron: int, use_softmax: bool = True)
Bases: Module
Multi Layer Perceptron with Multiple Outputs.
- forward(x: Tensor)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class cenreg.pytorch.mlp.SMM(embed_size: int)
Bases: Module
Fully monotonic neural network. The output y is a function of input x, and the function is monotonic with respect to all dimensions of x.
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class cenreg.pytorch.mlp.SMM_MultiHead(input_len: int, input_monotone_len: int, output_num: int, num_neuron: int)
Bases: Module
- forward(x: Tensor, t: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
cenreg.pytorch.utils module
- cenreg.pytorch.utils.denormalize_pred(pred: Tensor, min_y: float, max_y: float) → Tensor
Denormalize the predictions using the min and max values.
- Parameters:
pred (torch.Tensor) – The normalized predictions.
min_y (float) – The minimum value used for denormalization.
max_y (float) – The maximum value used for denormalization.
- Returns:
denormalized_prediction – The denormalized predictions.
- Return type:
torch.Tensor
- cenreg.pytorch.utils.normalize_y(y: Tensor, min_y: float, max_y: float) → Tensor
Normalize the input tensor using the min and max values.
- Parameters:
y (torch.Tensor) – The input tensor to be normalized.
min_y (float) – The minimum value used for normalization.
max_y (float) – The maximum value used for normalization.
- Returns:
normalized_tensor – The normalized tensor.
- Return type:
torch.Tensor
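A minimal round-trip sketch; the exact min-max scaling formula used internally is an assumption:

    import torch
    from cenreg.pytorch.utils import normalize_y, denormalize_pred

    y = torch.tensor([3.0, 7.5, 12.0])
    min_y, max_y = 0.0, 15.0

    y_norm = normalize_y(y, min_y, max_y)            # map y into a normalized range
    y_back = denormalize_pred(y_norm, min_y, max_y)  # recover the original scale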