direct.functionals package#
Submodules#
direct.functionals.challenges module#
Direct metrics for the FastMRI and Calgary-Campinas challenges.
- direct.functionals.challenges.fastmri_nmse(gt, pred)[source][source]#
Compute Normalized Mean Square Error metric (NMSE) compatible with the FastMRI challenge.
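A minimal usage sketch is shown below. The tensor type and shape are illustrative assumptions; the documented signature only fixes the argument order (gt, pred).
import torch

from direct.functionals.challenges import fastmri_nmse

# Hypothetical ground truth and reconstruction; shape (slices, height, width) is an assumption.
gt = torch.rand(4, 320, 320)
pred = gt + 0.05 * torch.randn_like(gt)

nmse = fastmri_nmse(gt, pred)  # challenge-compatible normalized mean squared error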
direct.functionals.grad module#
- class direct.functionals.grad.SobelGradL1Loss(reduction='mean', normalized_grad=True)[source][source]#
Bases:
SobelGradLoss
Computes the sum of the l1-loss between the gradient of input and target:
It returns
\[||u_x - v_x ||_1 + ||u_y - v_y||_1\]where \(u\) and \(v\) denote the input and target images. The gradients w.r.t. the \(x\) and \(y\) directions are computed using the Sobel operators.
- training: bool#
- class direct.functionals.grad.SobelGradL2Loss(reduction='mean', normalized_grad=True)[source][source]#
Bases:
SobelGradLoss
Computes the sum of the l2-loss between the gradient of input and target:
It returns
\[||u_x - v_x ||_2^2 + ||u_y - v_y||_2^2\]where \(u\) and \(v\) denote the input and target images. The gradients w.r.t. the \(x\) and \(y\) directions are computed using the Sobel operators.
- training: bool#
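A short sketch of how these gradient losses might be used. Only the constructors are documented in this section, so the forward call signature (input, target) and the (N, C, H, W) image layout are assumptions.
import torch

from direct.functionals.grad import SobelGradL1Loss, SobelGradL2Loss

# Hypothetical batched single-channel images, shape (N, C, H, W).
input_image = torch.rand(2, 1, 64, 64)
target_image = torch.rand(2, 1, 64, 64)

l1_grad_loss = SobelGradL1Loss(reduction="mean", normalized_grad=True)
l2_grad_loss = SobelGradL2Loss(reduction="mean", normalized_grad=True)

# Penalize differences between the Sobel gradients of input and target.
loss_1 = l1_grad_loss(input_image, target_image)
loss_2 = l2_grad_loss(input_image, target_image)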
direct.functionals.hfen module#
direct.functionals.hfen module.
- class direct.functionals.hfen.HFENL1Loss(reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Bases:
HFENLoss
High Frequency Error Norm (HFEN) Loss using L1Loss criterion.
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_1\]where LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_1\).
- training: bool#
- class direct.functionals.hfen.HFENL2Loss(reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Bases:
HFENLoss
High Frequency Error Norm (HFEN) Loss using an L2 criterion.
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_2\]where LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_2\).
- training: bool#
- class direct.functionals.hfen.HFENLoss(criterion, reduction='mean', kernel_size=5, sigma=2.5, norm=False)[source][source]#
Bases:
Module
High Frequency Error Norm (HFEN) Loss as defined in [1].
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_C\]where \(C\) can be any norm, LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_C\).
Code was borrowed and adapted from [2] (not licensed).
References
[1] S. Ravishankar and Y. Bresler, “MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning,” in IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028-1041, May 2011, doi: 10.1109/TMI.2010.2090538.
- forward(inp, target)[source][source]#
Forward pass of the HFENLoss.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- Returns:
- torch.Tensor
HFEN loss value.
- Return type:
Tensor
- training: bool#
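A usage sketch of the HFEN losses defined above. HFENL1Loss and HFENL2Loss fix the criterion for you, while the base HFENLoss additionally takes a user-supplied criterion (not shown here). The batched single-channel (N, C, H, W) layout is an assumption.
import torch

from direct.functionals.hfen import HFENL1Loss, HFENL2Loss

# Hypothetical reconstructed and target images, shape (N, C, H, W).
inp = torch.rand(2, 1, 128, 128)
target = torch.rand(2, 1, 128, 128)

# HFEN with an L1 norm on the LoG-filtered difference.
hfen_l1_loss = HFENL1Loss(reduction="mean", kernel_size=15, sigma=2.5, norm=False)
loss_l1 = hfen_l1_loss(inp, target)

# HFEN with an L2 norm; norm=True would scale by the norm of LoG(target).
hfen_l2_loss = HFENL2Loss(reduction="mean", kernel_size=15, sigma=2.5, norm=True)
loss_l2 = hfen_l2_loss(inp, target)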
- direct.functionals.hfen.hfen_l1(inp, target, reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Calculates HFENL1 loss between inp and target.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- reduction: str
Criterion reduction. Default: “mean”.
- kernel_size: int or list of ints
Size of the LoG filter kernel. Default: 15.
- sigma: float or list of floats
Standard deviation of the LoG filter kernel. Default: 2.5.
- norm: bool
Whether to normalize the loss.
- Return type:
torch.Tensor
- direct.functionals.hfen.hfen_l2(inp, target, reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Calculates HFENL2 loss between inp and target.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- reduction: str
Criterion reduction. Default: “mean”.
- kernel_size: int or list of ints
Size of the LoG filter kernel. Default: 15.
- sigma: float or list of floats
Standard deviation of the LoG filter kernel. Default: 2.5.
- norm: bool
Whether to normalize the loss.
- Return type:
torch.Tensor
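The functional forms mirror the classes above. A brief sketch, with the tensor shape chosen purely for illustration:
import torch

from direct.functionals.hfen import hfen_l1, hfen_l2

inp = torch.rand(2, 1, 128, 128)     # hypothetical reconstruction
target = torch.rand(2, 1, 128, 128)  # hypothetical target

loss_l1 = hfen_l1(inp, target, reduction="mean", kernel_size=15, sigma=2.5, norm=False)
loss_l2 = hfen_l2(inp, target, reduction="mean", kernel_size=15, sigma=2.5, norm=False)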
direct.functionals.nmae module#
- class direct.functionals.nmae.NMAELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Mean Absolute Error (NMAE), i.e.:
\[\frac{||u - v||_1}{||u||_1},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NMAELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
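Since NMAE is simply the ratio of two L1 norms, a short sketch suffices; per the forward documentation, tensors of any matching shape are accepted.
import torch

from direct.functionals.nmae import NMAELoss

target = torch.rand(8, 320, 320)
prediction = target + 0.1 * torch.randn_like(target)

nmae = NMAELoss(reduction="mean")
loss = nmae(prediction, target)  # ||target - prediction||_1 / ||target||_1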
direct.functionals.nmse module#
- class direct.functionals.nmse.NMSELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Mean Squared Error (NMSE), i.e.:
\[\frac{||u - v||_2^2}{||u||_2^2},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NMSELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
- class direct.functionals.nmse.NRMSELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Root Mean Squared Error (NRMSE), i.e.:
\[\frac{||u - v||_2}{||u||_2},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NRMSELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
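A combined sketch for the two normalized error variants above; any matching tensor shapes are accepted according to the forward documentation.
import torch

from direct.functionals.nmse import NMSELoss, NRMSELoss

target = torch.rand(8, 320, 320)
prediction = target + 0.1 * torch.randn_like(target)

nmse = NMSELoss()(prediction, target)    # ||target - prediction||_2^2 / ||target||_2^2
nrmse = NRMSELoss()(prediction, target)  # ||target - prediction||_2 / ||target||_2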
direct.functionals.psnr module#
Peak signal-to-noise ratio (pSNR) metric for the direct package.
- class direct.functionals.psnr.PSNRLoss(reduction='mean')[source][source]#
Bases:
Module
Peak signal-to-noise ratio loss function PyTorch implementation.
- Parameters:
- reduction: str
Batch reduction. Default: “mean”.
- forward(input_data, target_data)[source][source]#
Performs forward pass of PSNRLoss.
- Parameters:
- input_data: torch.Tensor
Input 2D data.
- target_data: torch.Tensor
Target 2D data.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
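A minimal sketch of the PSNR loss; the exact batch layout of the "2D data" (here (N, H, W)) is an assumption.
import torch

from direct.functionals.psnr import PSNRLoss

# Hypothetical batch of 2D magnitude images.
target = torch.rand(4, 320, 320)
prediction = target + 0.05 * torch.randn_like(target)

psnr_loss = PSNRLoss(reduction="mean")
value = psnr_loss(prediction, target)  # mean PSNR over the batch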
direct.functionals.snr module#
Signal-to-noise ratio (SNR) metric for the direct package.
- class direct.functionals.snr.SNRLoss(reduction='mean')[source][source]#
Bases:
Module
SNR loss function PyTorch implementation.
- forward(input_data, target_data)[source][source]#
Performs forward pass of SNRLoss.
- Parameters:
- input_data: torch.Tensor
Input 2D data.
- target_data: torch.Tensor
Target 2D data.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- direct.functionals.snr.snr_metric(input_data, target_data, reduction='mean')[source][source]#
This function is a torch implementation of the SNR metric for batches.
\[SNR = 10 \cdot \log_{10}\left(\frac{\text{square_error}}{\text{square_error_noise}}\right)\]where:
\(\text{square_error}\) is the sum of squared values of the clean (target) data.
\(\text{square_error_noise}\) is the sum of squared differences between the input data and the clean (target) data.
If reduction is “mean”, the function returns the mean SNR value. If reduction is “sum”, the function returns the sum of SNR values. If reduction is “none”, the function returns a tensor of SNR values for each batch.
- Parameters:
- input_data: torch.Tensor
- target_data: torch.Tensor
- reduction: str
- Returns:
- torch.Tensor
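A sketch showing both the module and the functional form of the SNR metric; the (N, H, W) batch layout is an assumption.
import torch

from direct.functionals.snr import SNRLoss, snr_metric

target = torch.rand(4, 320, 320)                  # clean (target) data
noisy = target + 0.05 * torch.randn_like(target)  # input data

# Module form, mean SNR over the batch.
snr_loss = SNRLoss(reduction="mean")
loss_value = snr_loss(noisy, target)

# Functional form; reduction="none" returns one SNR value per batch element.
per_sample = snr_metric(noisy, target, reduction="none")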
direct.functionals.ssim module#
This module contains SSIM loss functions for the direct package.
- class direct.functionals.ssim.SSIM3DLoss(win_size=7, k1=0.01, k2=0.03)[source][source]#
Bases:
Module
SSIM loss module for 3D data.
- Parameters:
- win_size: int
Window size for SSIM calculation. Default: 7.
- k1: float
k1 parameter for SSIM calculation. Default: 0.01.
- k2: float
k2 parameter for SSIM calculation. Default: 0.03.
- forward(input_data, target_data, data_range)[source][source]#
Forward pass of SSIM3DLoss.
- Parameters:
- input_data: torch.Tensor
3D Input data.
- target_data: torch.Tensor
3D Target data.
- data_range: torch.Tensor
Data range.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- class direct.functionals.ssim.SSIMLoss(win_size=7, k1=0.01, k2=0.03)[source][source]#
Bases:
Module
SSIM loss module as implemented in [1].
- Parameters:
- win_size: int
Window size for SSIM calculation. Default: 7.
- k1: float
k1 parameter for SSIM calculation. Default: 0.01.
- k2: float
k2 parameter for SSIM calculation. Default: 0.03.
References
- forward(input_data, target_data, data_range)[source][source]#
Forward pass of SSIMLoss.
- Parameters:
- input_data: torch.Tensor
2D Input data.
- target_data: torch.Tensor
2D Target data.
- data_range: torch.Tensor
Data range.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
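A usage sketch for the 2D and 3D SSIM losses. The (N, 1, H, W) and (N, 1, D, H, W) layouts and the per-sample shape of data_range are assumptions; the documentation above only states that data_range is a tensor.
import torch

from direct.functionals.ssim import SSIMLoss, SSIM3DLoss

# Hypothetical 2D batch and its per-sample dynamic range.
target_2d = torch.rand(2, 1, 128, 128)
input_2d = target_2d + 0.05 * torch.randn_like(target_2d)
data_range_2d = target_2d.amax(dim=(1, 2, 3))

ssim_loss = SSIMLoss(win_size=7, k1=0.01, k2=0.03)
loss_2d = ssim_loss(input_2d, target_2d, data_range_2d)

# Hypothetical 3D (volumetric) batch for the 3D variant.
target_3d = torch.rand(1, 1, 16, 64, 64)
input_3d = target_3d + 0.05 * torch.randn_like(target_3d)
loss_3d = SSIM3DLoss()(input_3d, target_3d, target_3d.amax(dim=(1, 2, 3, 4)))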
Module contents#
direct.functionals module.
This module contains functionals for the direct package as well as the loss functions needed for training models.
- class direct.functionals.HFENL1Loss(reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Bases:
HFENLoss
High Frequency Error Norm (HFEN) Loss using L1Loss criterion.
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_1\]where LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_1\).
- training: bool#
- class direct.functionals.HFENL2Loss(reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Bases:
HFENLoss
High Frequency Error Norm (HFEN) Loss using an L2 criterion.
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_2\]where LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_2\).
- training: bool#
- class direct.functionals.HFENLoss(criterion, reduction='mean', kernel_size=5, sigma=2.5, norm=False)[source][source]#
Bases:
Module
High Frequency Error Norm (HFEN) Loss as defined in [1].
Calculates:
\[|| \text{LoG}(x_\text{rec}) - \text{LoG}(x_\text{tar}) ||_C\]where \(C\) can be any norm, LoG is the Laplacian of Gaussian filter, and \(x_\text{rec}\) and \(x_\text{tar}\) are the reconstructed input and target images. If normalized, the loss is scaled by \(|| \text{LoG}(x_\text{tar}) ||_C\).
Code was borrowed and adapted from [2] (not licensed).
References
[1] S. Ravishankar and Y. Bresler, “MR Image Reconstruction From Highly Undersampled k-Space Data by Dictionary Learning,” in IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028-1041, May 2011, doi: 10.1109/TMI.2010.2090538.
- forward(inp, target)[source][source]#
Forward pass of the HFENLoss.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- Returns:
- torch.Tensor
HFEN loss value.
- Return type:
Tensor
- training: bool#
- class direct.functionals.NMAELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Mean Absolute Error (NMAE), i.e.:
\[\frac{||u - v||_1}{||u||_1},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NMAELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
- class direct.functionals.NMSELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Mean Squared Error (NMSE), i.e.:
\[\frac{||u - v||_2^2}{||u||_2^2},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NMSELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
- class direct.functionals.NRMSELoss(reduction='mean')[source][source]#
Bases:
Module
Computes the Normalized Root Mean Squared Error (NRMSE), i.e.:
\[\frac{||u - v||_2}{||u||_2},\]where \(u\) and \(v\) denote the target and the input.
- forward(input, target)[source][source]#
Forward method of NRMSELoss.
- Parameters:
- input: torch.Tensor
Tensor of shape (*), where * means any number of dimensions.
- target: torch.Tensor
Tensor of same shape as the input.
- training: bool#
- class direct.functionals.PSNRLoss(reduction='mean')[source][source]#
Bases:
Module
Peak signal-to-noise ratio loss function PyTorch implementation.
- Parameters:
- reduction: str
Batch reduction. Default: “mean”.
- forward(input_data, target_data)[source][source]#
Performs forward pass of PSNRLoss.
- Parameters:
- input_data: torch.Tensor
Input 2D data.
- target_data: torch.Tensor
Target 2D data.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- class direct.functionals.SNRLoss(reduction='mean')[source][source]#
Bases:
Module
SNR loss function PyTorch implementation.
- forward(input_data, target_data)[source][source]#
Performs forward pass of SNRLoss.
- Parameters:
- input_data: torch.Tensor
Input 2D data.
- target_data: torch.Tensor
Target 2D data.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- class direct.functionals.SSIM3DLoss(win_size=7, k1=0.01, k2=0.03)[source][source]#
Bases:
Module
SSIM loss module for 3D data.
- Parameters:
- win_size: int
Window size for SSIM calculation. Default: 7.
- k1: float
k1 parameter for SSIM calculation. Default: 0.01.
- k2: float
k2 parameter for SSIM calculation. Default: 0.03.
- forward(input_data, target_data, data_range)[source][source]#
Forward pass of SSIM3DLoss.
- Parameters:
- input_data: torch.Tensor
3D Input data.
- target_data: torch.Tensor
3D Target data.
- data_range: torch.Tensor
Data range.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- class direct.functionals.SSIMLoss(win_size=7, k1=0.01, k2=0.03)[source][source]#
Bases:
Module
SSIM loss module as implemented in [1].
- Parameters:
- win_size: int
Window size for SSIM calculation. Default: 7.
- k1: float
k1 parameter for SSIM calculation. Default: 0.01.
- k2: float
k2 parameter for SSIM calculation. Default: 0.03.
References
- forward(input_data, target_data, data_range)[source][source]#
Forward pass of SSIMLoss.
- Parameters:
- input_data: torch.Tensor
2D Input data.
- target_data: torch.Tensor
2D Target data.
- data_range: torch.Tensor
Data range.
- Returns:
- torch.Tensor
- Return type:
Tensor
- training: bool#
- class direct.functionals.SobelGradL1Loss(reduction='mean', normalized_grad=True)[source][source]#
Bases:
SobelGradLoss
Computes the sum of the l1-loss between the gradient of input and target:
It returns
\[||u_x - v_x ||_1 + ||u_y - v_y||_1\]where \(u\) and \(v\) denote the input and target images. The gradients w.r.t. the \(x\) and \(y\) directions are computed using the Sobel operators.
- training: bool#
- class direct.functionals.SobelGradL2Loss(reduction='mean', normalized_grad=True)[source][source]#
Bases:
SobelGradLoss
Computes the sum of the l2-loss between the gradient of input and target:
It returns
\[||u_x - v_x ||_2^2 + ||u_y - v_y||_2^2\]where \(u\) and \(v\) denote the input and target images. The gradients w.r.t. the \(x\) and \(y\) directions are computed using the Sobel operators.
- training: bool#
- direct.functionals.batch_psnr(input_data, target_data, reduction='mean')[source][source]#
This function is a torch implementation of skimage.metrics.compare_psnr.
- Parameters:
- input_data: torch.Tensor
- target_data: torch.Tensor
- reduction: str
- Returns:
- torch.Tensor
- Return type:
Tensor
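A short sketch of the batch PSNR helper exported at package level; the (N, H, W) layout is an assumption.
import torch

from direct.functionals import batch_psnr

target = torch.rand(4, 320, 320)
prediction = target + 0.05 * torch.randn_like(target)

psnr = batch_psnr(prediction, target, reduction="mean")  # mean PSNR over the batch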
- direct.functionals.fastmri_nmse(gt, pred)[source][source]#
Compute Normalized Mean Square Error metric (NMSE) compatible with the FastMRI challenge.
- direct.functionals.fastmri_psnr(gt, pred)[source][source]#
Compute Peak Signal to Noise Ratio metric (PSNR) compatible with the FastMRI challenge.
- direct.functionals.fastmri_ssim(gt, target)[source][source]#
Compute Structural Similarity Index Measure (SSIM) compatible with the FastMRI challenge.
- direct.functionals.hfen_l1(inp, target, reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Calculates HFENL1 loss between inp and target.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- reduction: str
Criterion reduction. Default: “mean”.
- kernel_size: int or list of ints
Size of the LoG filter kernel. Default: 15.
- sigma: float or list of floats
Standard deviation of the LoG filter kernel. Default: 2.5.
- norm: bool
Whether to normalize the loss.
- Return type:
torch.Tensor
- direct.functionals.hfen_l2(inp, target, reduction='mean', kernel_size=15, sigma=2.5, norm=False)[source][source]#
Calculates HFENL2 loss between inp and target.
- Parameters:
- inp: torch.Tensor
Input tensor.
- target: torch.Tensor
Target tensor.
- reduction: str
Criterion reduction. Default: “mean”.
- kernel_size: int or list of ints
Size of the LoG filter kernel. Default: 15.
- sigma: float or list of floats
Standard deviation of the LoG filter kernel. Default: 2.5.
- norm: bool
Whether to normalize the loss.
- Return type:
torch.Tensor
- direct.functionals.snr_metric(input_data, target_data, reduction='mean')[source][source]#
This function is a torch implementation of the SNR metric for batches.
\[SNR = 10 \cdot \log_{10}\left(\frac{\text{square_error}}{\text{square_error_noise}}\right)\]where:
\(\text{square_error}\) is the sum of squared values of the clean (target) data.
\(\text{square_error_noise}\) is the sum of squared differences between the input data and the clean (target) data.
If reduction is “mean”, the function returns the mean SNR value. If reduction is “sum”, the function returns the sum of SNR values. If reduction is “none”, the function returns a tensor of SNR values for each batch.
- Parameters:
- input_data: torch.Tensor
- target_data: torch.Tensor
- reduction: str
- Returns:
- torch.Tensor