direct.nn.didn package#

Submodules#

direct.nn.didn.didn module#

class direct.nn.didn.didn.Subpixel(in_channels, out_channels, upscale_factor, kernel_size, padding=0)[source]#

Bases: Module

Subpixel convolution layer for up-scaling of low-resolution features for super-resolution, as implemented in [1]_.

References:

__init__(in_channels, out_channels, upscale_factor, kernel_size, padding=0)[source]#

Inits Subpixel.

Parameters:
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • upscale_factor (int) – Subpixel upscale factor.

  • kernel_size (Union[int, Tuple[int, int]]) – Convolution kernel size.

  • padding (int) – Padding size. Default: 0.

forward(x)[source]#

Computes subpixel convolution on the input tensor.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

Output tensor after subpixel convolution.
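The subpixel layer pairs a convolution with torch.nn.PixelShuffle: the convolution produces upscale_factor**2 feature maps per output channel, and the shuffle rearranges them into out_channels maps at upscale_factor times the spatial resolution. A minimal sketch of this pattern (the Conv2d + PixelShuffle composition is the standard construction; the class name here is illustrative, not the library's):

```python
import torch
import torch.nn as nn

class SubpixelSketch(nn.Module):
    """Illustrative subpixel up-scaling block: Conv2d followed by PixelShuffle."""

    def __init__(self, in_channels, out_channels, upscale_factor, kernel_size, padding=0):
        super().__init__()
        # The convolution emits upscale_factor**2 maps per output channel...
        self.conv = nn.Conv2d(
            in_channels,
            out_channels * upscale_factor**2,
            kernel_size,
            padding=padding,
        )
        # ...and PixelShuffle folds them into the spatial dimensions.
        self.shuffle = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        return self.shuffle(self.conv(x))

layer = SubpixelSketch(in_channels=2, out_channels=4, upscale_factor=2, kernel_size=3, padding=1)
y = layer(torch.randn(1, 2, 8, 8))
print(tuple(y.shape))  # (1, 4, 16, 16)
```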

class direct.nn.didn.didn.ReconBlock(in_channels, num_convs)[source]#

Bases: Module

Reconstruction Block of DIDN model as implemented in [1]_.

References:

__init__(in_channels, num_convs)[source]#

Inits ReconBlock.

Parameters:
  • in_channels (int) – Number of input channels.

  • num_convs (int) – Number of convolution blocks.

forward(input_data)[source]#

Computes num_convs convolutions, each followed by a PReLU activation, on the input tensor.

Parameters:

input_data (Tensor) – Input tensor.

Return type:

Tensor
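The conv-plus-PReLU stack described above can be sketched as follows (a loose sketch only: channel-preserving 3x3 convolutions are an assumption, as this page does not specify the kernel size or any residual wiring):

```python
import torch
import torch.nn as nn

def make_recon_sketch(in_channels, num_convs):
    """Illustrative reconstruction stack: num_convs channel-preserving
    3x3 convolutions, each followed by a PReLU activation."""
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1))
        layers.append(nn.PReLU())
    return nn.Sequential(*layers)

block = make_recon_sketch(in_channels=8, num_convs=4)
y = block(torch.randn(1, 8, 16, 16))
print(tuple(y.shape))  # (1, 8, 16, 16)
```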

class direct.nn.didn.didn.DUB(in_channels, out_channels)[source]#

Bases: Module

Down-up block (DUB) for DIDN model as implemented in [1]_.

References:

__init__(in_channels, out_channels)[source]#

Inits DUB.

Parameters:
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

static pad(x)[source]#

Pads the input along the height and width dimensions if they are odd.

Parameters:

x (Tensor) – Input tensor to pad.

Returns:

Padded tensor.

Return type:

Tensor
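The pad-to-even step can be sketched with torch.nn.functional.pad, appending one row or column whenever the corresponding dimension is odd (padding on the bottom/right edge is an assumption about where the extra pixel goes):

```python
import torch
import torch.nn.functional as F

def pad_to_even(x):
    """Pad the trailing (H, W) dimensions of x up to even sizes."""
    _, _, h, w = x.shape
    pad_h = h % 2  # 1 extra row if the height is odd
    pad_w = w % 2  # 1 extra column if the width is odd
    # F.pad pads the last dims first: (left, right, top, bottom).
    return F.pad(x, (0, pad_w, 0, pad_h))

x = torch.randn(1, 1, 5, 7)
print(tuple(pad_to_even(x).shape))  # (1, 1, 6, 8)
```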

static crop_to_shape(x, shape)[source]#

Crops x to the specified shape.

Parameters:
  • x (Tensor) – Input tensor.

  • shape (Tuple[int, int]) – Crop shape, corresponding to (H, W).

Returns:

Cropped tensor.

Return type:

Tensor
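crop_to_shape is the inverse of the padding step; a minimal equivalent simply slices the trailing spatial dimensions:

```python
import torch

def crop_to_shape(x, shape):
    """Crop the trailing (H, W) dimensions of x to the given shape."""
    h, w = shape
    return x[..., :h, :w]

x = torch.randn(1, 3, 6, 8)
print(tuple(crop_to_shape(x, (5, 7)).shape))  # (1, 3, 5, 7)
```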

forward(x)[source]#
Parameters:

x (Tensor) – Input tensor.

Returns:

DUB output tensor.

Return type:

Tensor
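One way to picture how pad and crop_to_shape bracket the down-up computation (a loose sketch only, not the actual DUB internals, which this page does not detail): pad to even sizes, halve the resolution with a strided convolution, restore it with a Conv2d + PixelShuffle step, and crop back to the input shape.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownUpSketch(nn.Module):
    """Loose down-up sketch: a strided conv halves H and W,
    then Conv2d + PixelShuffle restores the resolution."""

    def __init__(self, channels):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        x = F.pad(x, (0, w % 2, 0, h % 2))   # pad H, W to even
        x = self.up(self.down(x))            # halve, then restore resolution
        return x[..., :h, :w]                # crop back to the input shape

block = DownUpSketch(channels=4)
y = block(torch.randn(1, 4, 5, 7))
print(tuple(y.shape))  # (1, 4, 5, 7)
```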

class direct.nn.didn.didn.DIDN(in_channels, out_channels, hidden_channels=128, num_dubs=6, num_convs_recon=9, skip_connection=False)[source]#

Bases: Module

Deep Iterative Down-up convolutional Neural network (DIDN) implementation as in [1]_.

References:

__init__(in_channels, out_channels, hidden_channels=128, num_dubs=6, num_convs_recon=9, skip_connection=False)[source]#

Inits DIDN.

Parameters:
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • hidden_channels (int) – Number of hidden channels. Default: 128.

  • num_dubs (int) – Number of DUB blocks. Default: 6.

  • num_convs_recon (int) – Number of ReconBlock convolutions. Default: 9.

  • skip_connection (bool) – Use skip connection. Default: False.

static crop_to_shape(x, shape)[source]#

Crops x to the specified shape.

Parameters:
  • x (Tensor) – Input tensor.

  • shape (Tuple[int, int]) – Crop shape, corresponding to (H, W).

Returns:

Cropped tensor.

Return type:

Tensor

forward(x, channel_dim=1)[source]#

Takes as input a torch.Tensor x and computes DIDN(x).

Parameters:
  • x (Tensor) – Input tensor.

  • channel_dim (int) – Channel dimension. Default: 1.

Returns:

DIDN output tensor.

Return type:

Tensor
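The skip_connection option suggests a residual formulation of the model output; a sketch of that final step (assuming a simple additive skip around the network body, which is the usual reading of "use skip connection" — this page does not spell out the wiring):

```python
import torch
import torch.nn as nn

class SkipWrapper(nn.Module):
    """Illustrative additive skip connection around an arbitrary body network."""

    def __init__(self, body, skip_connection=False):
        super().__init__()
        self.body = body
        self.skip_connection = skip_connection

    def forward(self, x):
        out = self.body(x)
        if self.skip_connection:
            out = out + x  # residual: the body learns a correction to x
        return out

body = nn.Conv2d(2, 2, kernel_size=3, padding=1)
model = SkipWrapper(body, skip_connection=True)
y = model(torch.randn(1, 2, 8, 8))
print(tuple(y.shape))  # (1, 2, 8, 8)
```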

Module contents#