direct.nn.didn package#
Submodules#
direct.nn.didn.didn module#
- class direct.nn.didn.didn.DIDN(in_channels, out_channels, hidden_channels=128, num_dubs=6, num_convs_recon=9, skip_connection=False)[source][source]#
Bases: Module
Deep Iterative Down-Up Convolutional Neural Network (DIDN) implementation, as presented in [1].
References
[1] Yu, Songhyun, et al. “Deep Iterative Down-Up CNN for Image Denoising.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 2095–103. IEEE Xplore, https://doi.org/10.1109/CVPRW.2019.00262.
- static crop_to_shape(x, shape)[source][source]#
Crops x to the specified shape.
- Parameters:
- x: torch.Tensor
Input tensor with shape (*, H, W).
- shape: Tuple[int, int]
Crop shape corresponding to H, W.
- Returns:
- cropped_output: torch.Tensor
Cropped tensor.
- Return type:
Tensor
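The cropping semantics can be sketched with plain array slicing. This is an illustration only, written with numpy rather than torch, and it assumes a top-left corner crop of the trailing two dimensions; the actual static method operates on a torch.Tensor.

```python
import numpy as np

def crop_to_shape(x, shape):
    """Corner-crop the trailing (H, W) dimensions of x to `shape`.

    Sketch of the slicing semantics only; assumes a top-left
    corner crop, and uses numpy in place of torch.
    """
    h, w = shape
    return x[..., :h, :w]

x = np.arange(2 * 5 * 6).reshape(2, 5, 6)
cropped = crop_to_shape(x, (4, 4))
print(cropped.shape)  # (2, 4, 4)
```

Leading batch/channel dimensions are untouched; only H and W are reduced.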
- forward(x, channel_dim=1)[source][source]#
Takes as input a torch.Tensor x and computes DIDN(x).
- Parameters:
- x: torch.Tensor
Input tensor.
- channel_dim: int
Channel dimension. Default: 1.
- Returns:
- out: torch.Tensor
DIDN output tensor.
- Return type:
Tensor
- training: bool#
- class direct.nn.didn.didn.DUB(in_channels, out_channels)[source][source]#
Bases: Module
Down-Up Block (DUB) for the DIDN model as implemented in [1].
References
[1] Yu, Songhyun, et al. “Deep Iterative Down-Up CNN for Image Denoising.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 2095–103. IEEE Xplore, https://doi.org/10.1109/CVPRW.2019.00262.
- static crop_to_shape(x, shape)[source][source]#
Crops x to the specified shape.
- Parameters:
- x: torch.Tensor
Input tensor with shape (*, H, W).
- shape: Tuple[int, int]
Crop shape corresponding to H, W.
- Returns:
- cropped_output: torch.Tensor
Cropped tensor.
- Return type:
Tensor
- forward(x)[source][source]#
- Parameters:
- x: torch.Tensor
Input tensor.
- Returns:
- out: torch.Tensor
DUB output.
- Return type:
Tensor
- static pad(x)[source][source]#
Pads the input along the height and width dimensions if they are odd.
- Parameters:
- x: torch.Tensor
Input to pad.
- Returns:
- x: torch.Tensor
Padded tensor.
- Return type:
Tensor
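The padding step can be sketched as follows. This is a numpy illustration under assumptions: zero padding applied at the bottom/right edge so that both H and W become even. The actual DUB.pad works on a torch.Tensor, and its padding mode and side may differ.

```python
import numpy as np

def pad_to_even(x):
    """Pad the trailing (H, W) dimensions so both become even.

    Sketch only: zero padding at the bottom/right edge is an
    assumption; the real method may pad differently.
    """
    h, w = x.shape[-2:]
    pad_h, pad_w = h % 2, w % 2  # pad by 1 where the dimension is odd
    pad_width = [(0, 0)] * (x.ndim - 2) + [(0, pad_h), (0, pad_w)]
    return np.pad(x, pad_width)

x = np.ones((1, 5, 7))
print(pad_to_even(x).shape)  # (1, 6, 8)
```

Even dimensions are left unchanged, so the operation is a no-op for inputs whose H and W are already even.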
- training: bool#
- class direct.nn.didn.didn.ReconBlock(in_channels, num_convs)[source][source]#
Bases: Module
Reconstruction Block of the DIDN model as implemented in [1].
References
[1] Yu, Songhyun, et al. “Deep Iterative Down-Up CNN for Image Denoising.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 2095–103. IEEE Xplore, https://doi.org/10.1109/CVPRW.2019.00262.
- forward(input_data)[source][source]#
Computes num_convs convolutions, each followed by a PReLU activation, on input_data.
- Parameters:
- input_data: torch.Tensor
Input tensor.
- Return type:
Tensor
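For reference, the PReLU activation used between the convolutions is an elementwise leaky rectifier with a learnable negative slope. A minimal numpy sketch of the activation alone (the convolution layers are omitted; the slope 0.25 is torch's default initial value, not a value taken from this module):

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, slope `a` for negatives.

    In the actual module `a` is a learnable parameter; 0.25 is
    only torch's default initialisation.
    """
    return np.where(x > 0, x, a * x)

out = prelu(np.array([-2.0, 0.0, 3.0]))  # maps -2.0 -> -0.5; 0.0 and 3.0 pass through
```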
- training: bool#
- class direct.nn.didn.didn.Subpixel(in_channels, out_channels, upscale_factor, kernel_size, padding=0)[source][source]#
Bases: Module
Subpixel convolution layer for up-scaling low-resolution features, as used in super-resolution, implemented as in [1].
References
[1] Yu, Songhyun, et al. “Deep Iterative Down-Up CNN for Image Denoising.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 2095–103. IEEE Xplore, https://doi.org/10.1109/CVPRW.2019.00262.
- forward(x)[source][source]#
Computes the Subpixel convolution on input torch.Tensor x.
- Parameters:
- x: torch.Tensor
Input tensor.
- Return type:
Tensor
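The sub-pixel rearrangement that follows the convolution (the same reshuffle performed by torch.nn.PixelShuffle) maps a (C·r², H, W) feature map to (C, H·r, W·r). A numpy sketch of that rearrangement alone, with the preceding convolution omitted:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    Sketch of the sub-pixel shuffle (as in torch.nn.PixelShuffle);
    the convolution inside Subpixel is not shown here.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (c, r, r), interleave the two r axes
    # with the spatial axes, then merge them into the upscaled grid.
    return (x.reshape(c, r, r, h, w)
             .transpose(0, 3, 1, 4, 2)
             .reshape(c, h * r, w * r))

x = np.arange(4).reshape(4, 1, 1)  # four 1x1 channels: 0, 1, 2, 3
y = pixel_shuffle(x, 2)            # one 2x2 channel: [[0, 1], [2, 3]]
```

Each group of r² input channels thus contributes the r×r block of output pixels at the corresponding spatial location.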
- training: bool#