direct.nn.kikinet package

Submodules

direct.nn.kikinet.config module

class direct.nn.kikinet.config.KIKINetConfig(model_name: str = '???', engine_name: str | None = None, num_iter: int = 10, image_model_architecture: str = 'MWCNN', kspace_model_architecture: str = 'UNET', image_mwcnn_hidden_channels: int = 16, image_mwcnn_num_scales: int = 4, image_mwcnn_bias: bool = True, image_mwcnn_batchnorm: bool = False, image_unet_num_filters: int = 8, image_unet_num_pool_layers: int = 4, image_unet_dropout_probability: float = 0.0, kspace_conv_hidden_channels: int = 16, kspace_conv_n_convs: int = 4, kspace_conv_batchnorm: bool = False, kspace_didn_hidden_channels: int = 64, kspace_didn_num_dubs: int = 6, kspace_didn_num_convs_recon: int = 9, kspace_unet_num_filters: int = 8, kspace_unet_num_pool_layers: int = 4, kspace_unet_dropout_probability: float = 0.0, normalize: bool = False)[source]

Bases: ModelConfig

image_model_architecture: str = 'MWCNN'
image_mwcnn_batchnorm: bool = False
image_mwcnn_bias: bool = True
image_mwcnn_hidden_channels: int = 16
image_mwcnn_num_scales: int = 4
image_unet_dropout_probability: float = 0.0
image_unet_num_filters: int = 8
image_unet_num_pool_layers: int = 4
kspace_conv_batchnorm: bool = False
kspace_conv_hidden_channels: int = 16
kspace_conv_n_convs: int = 4
kspace_didn_hidden_channels: int = 64
kspace_didn_num_convs_recon: int = 9
kspace_didn_num_dubs: int = 6
kspace_model_architecture: str = 'UNET'
kspace_unet_dropout_probability: float = 0.0
kspace_unet_num_filters: int = 8
kspace_unet_num_pool_layers: int = 4
normalize: bool = False
num_iter: int = 10
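
A minimal instantiation sketch, using only the fields and defaults listed above; the override values are illustrative and every other field keeps its documented default.

    from direct.nn.kikinet.config import KIKINetConfig

    # Illustrative sketch: instantiate the config and override a few of the
    # fields listed above; all remaining fields keep their documented defaults.
    config = KIKINetConfig(
        num_iter=10,
        image_model_architecture="MWCNN",  # image-domain sub-model
        kspace_model_architecture="UNET",  # k-space-domain sub-model
        image_mwcnn_hidden_channels=16,
        kspace_unet_num_filters=8,
        normalize=False,
    )
    print(config.num_iter)  # 10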

direct.nn.kikinet.kikinet module

class direct.nn.kikinet.kikinet.KIKINet(forward_operator, backward_operator, image_model_architecture='MWCNN', kspace_model_architecture='DIDN', num_iter=2, normalize=False, **kwargs)[source]

Bases: Module

Based on the KIKINet implementation [1], modified to work with multi-coil k-space data.

References

[1] Eo, Taejoon, et al. “KIKI-Net: Cross-Domain Convolutional Neural Networks for Reconstructing Undersampled Magnetic Resonance Images.” Magnetic Resonance in Medicine, vol. 80, no. 5, Nov. 2018, pp. 2188–201. https://doi.org/10.1002/mrm.27201.
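
A hedged instantiation sketch. The forward/backward operators are assumed here to be DIRECT's centered FFT transforms (direct.data.transforms.fft2 and ifft2); the dim axes passed to them are an assumption and should match the (height, width) axes of your k-space tensors.

    import functools

    from direct.data import transforms as T
    from direct.nn.kikinet.kikinet import KIKINet

    # Assumption: DIRECT's centered 2D FFT/iFFT transforms act as the forward
    # and backward operators. The dim axes are illustrative and should match
    # the (height, width) axes of the k-space tensors in your pipeline.
    forward_operator = functools.partial(T.fft2, dim=(2, 3))
    backward_operator = functools.partial(T.ifft2, dim=(2, 3))

    model = KIKINet(
        forward_operator=forward_operator,
        backward_operator=backward_operator,
        image_model_architecture="MWCNN",
        kspace_model_architecture="DIDN",
        num_iter=2,
        normalize=False,
    )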

forward(masked_kspace, sampling_mask, sensitivity_map, scaling_factor=None)[source]

Computes the forward pass of KIKINet.

Parameters:
masked_kspace: torch.Tensor

Masked k-space of shape (N, coil, height, width, complex=2).

sampling_mask: torch.Tensor

Sampling mask of shape (N, 1, height, width, 1).

sensitivity_map: torch.Tensor

Sensitivity map of shape (N, coil, height, width, complex=2).

scaling_factor: Optional[torch.Tensor]

Scaling factor of shape (N,). If None, no scaling is applied. Default: None.

Returns:
image: torch.Tensor

Output image of shape (N, height, width, complex=2).

Return type:

Tensor
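
A call sketch following the shapes documented above, using the model instance from the previous sketch; the sizes and random tensors are purely illustrative.

    import torch

    # Illustrative shapes per the docstring above: batch N=1, 8 coils, a
    # 128 x 128 grid, and a trailing axis of size 2 for real/imaginary parts.
    masked_kspace = torch.randn(1, 8, 128, 128, 2)
    sensitivity_map = torch.randn(1, 8, 128, 128, 2)
    sampling_mask = (torch.rand(1, 1, 128, 128, 1) > 0.5).float()

    image = model(masked_kspace, sampling_mask, sensitivity_map)
    print(image.shape)  # expected: torch.Size([1, 128, 128, 2])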

training: bool

direct.nn.kikinet.kikinet_engine module

class direct.nn.kikinet.kikinet_engine.KIKINetEngine(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]

Bases: MRIModelEngine

KIKINet Engine.

forward_function(data)[source]

Runs the model’s forward method on data, which contains all tensor inputs.

Must be implemented by child classes.

Return type:

Tuple[Tensor, None]
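
Per the return type above, forward_function returns the model’s output image together with None in the k-space slot. A hypothetical sketch of such an implementation; the batch dictionary keys used below are assumptions, not the engine’s verified internals.

    from typing import Tuple

    import torch

    # Hypothetical sketch only: the keys "masked_kspace", "sampling_mask" and
    # "sensitivity_map" are assumed names for the batch dictionary entries.
    def forward_function(self, data: dict) -> Tuple[torch.Tensor, None]:
        output_image = self.model(
            masked_kspace=data["masked_kspace"],
            sampling_mask=data["sampling_mask"],
            sensitivity_map=data["sensitivity_map"],
        )
        return output_image, None  # KIKINet yields no separate k-space output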

Module contents