direct.nn.crossdomain package#

Submodules#

direct.nn.crossdomain.crossdomain module#

class direct.nn.crossdomain.crossdomain.CrossDomainNetwork(forward_operator, backward_operator, image_model_list, kspace_model_list=None, domain_sequence='KIKI', image_buffer_size=1, kspace_buffer_size=1, normalize_image=False, **kwargs)[source]#

Bases: Module

This performs optimisation in both the k-space (“K”) and image (“I”) domains, in the order given by domain_sequence.

__init__(forward_operator, backward_operator, image_model_list, kspace_model_list=None, domain_sequence='KIKI', image_buffer_size=1, kspace_buffer_size=1, normalize_image=False, **kwargs)[source]#

Inits CrossDomainNetwork.

Parameters:
  • forward_operator (Callable) – Forward Operator.

  • backward_operator (Callable) – Backward Operator.

  • image_model_list (ModuleList) – Image domain model list.

  • kspace_model_list (Optional[ModuleList]) – K-space domain model list. If set to None, a correction step is applied. Default: None.

  • domain_sequence (str) – Domain sequence containing only "K" (k-space domain) and/or "I" (image domain). Default: "KIKI".

  • image_buffer_size (int) – Image buffer size. Default: 1.

  • kspace_buffer_size (int) – K-space buffer size. Default: 1.

  • normalize_image (bool) – If True, input is normalized. Default: False.

  • **kwargs – Keyword Arguments.
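The domain_sequence argument is a plain string over {"K", "I"}; each character selects the next correction block. A minimal, self-contained sketch of how such a sequence could be validated and split into steps (the helper name split_domain_sequence is hypothetical, not part of the library):

```python
def split_domain_sequence(domain_sequence: str) -> list:
    """Validate a domain sequence like 'KIKI' and split it into steps."""
    steps = list(domain_sequence.upper())
    if not set(steps) <= {"K", "I"}:
        raise ValueError(f"Invalid domain sequence: {domain_sequence!r}")
    return steps

# 'KIKI' alternates a k-space and an image-domain correction twice.
print(split_domain_sequence("KIKI"))  # ['K', 'I', 'K', 'I']
```

A sequence such as "KIKI" therefore yields four correction blocks, consumed one character at a time during the forward pass.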

kspace_correction(block_idx, image_buffer, kspace_buffer, sampling_mask, sensitivity_map, masked_kspace)[source]#

Return type:

Tensor

image_correction(block_idx, image_buffer, kspace_buffer, sampling_mask, sensitivity_map)[source]#

Return type:

Tensor

forward(masked_kspace, sampling_mask, sensitivity_map, scaling_factor=None)[source]#

Computes the forward pass of CrossDomainNetwork.

Parameters:
  • masked_kspace (Tensor) – Masked k-space of shape (N, coil, height, width, complex=2).

  • sampling_mask (Tensor) – Sampling mask of shape (N, 1, height, width, 1).

  • sensitivity_map (Tensor) – Coil sensitivity map of shape (N, coil, height, width, complex=2).

  • scaling_factor (Optional[Tensor]) – Scaling factor of shape (N,). Default: None.

Returns:

out_image – Output image of shape (N, height, width, complex=2).

Return type:

torch.Tensor
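The forward pass alternates between the two domains, applying a correction block per character of domain_sequence and mapping between k-space and image space with the forward/backward operators. A hedged toy sketch of this loop, using torch.fft as a stand-in operator pair and a hard data-consistency step in place of the learned correction models (toy_cross_domain and all details are illustrative, not the library's internals):

```python
import torch

def toy_cross_domain(masked_kspace, sampling_mask, domain_sequence="KIKI"):
    kspace = masked_kspace.clone()
    for domain in domain_sequence:
        if domain == "K":
            # K-space step: enforce consistency with the measured samples.
            kspace = torch.where(sampling_mask, masked_kspace, kspace)
        else:  # "I"
            # Image step: go to image space, "correct" (identity here),
            # and return to k-space.
            image = torch.fft.ifft2(kspace)
            kspace = torch.fft.fft2(image)
    return torch.fft.ifft2(kspace)

# Fully sampled toy example: the output reduces to the inverse FFT
# of the measured k-space.
k = torch.randn(1, 8, 8, dtype=torch.complex64)
mask = torch.ones(1, 8, 8, dtype=torch.bool)
out = toy_cross_domain(k, mask)
```

In the real network each "K" and "I" step is a learned model drawn from kspace_model_list and image_model_list, and buffers of size kspace_buffer_size and image_buffer_size are carried between steps.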

direct.nn.crossdomain.multicoil module#

class direct.nn.crossdomain.multicoil.MultiCoil(model, coil_dim=1, coil_to_batch=False)[source]#

Bases: Module

This performs the forward pass of multi-coil data of shape (N, N_coils, H, W, C) through a model.

If coil_to_batch is set to True, the coil dimension is moved into the batch dimension. Otherwise, each coil's data is passed to the model individually.

__init__(model, coil_dim=1, coil_to_batch=False)[source]#

Inits MultiCoil.

Parameters:
  • model (Module) – Any nn.Module that takes 4D data (N, H, W, C) as input. Typically a convolutional-like model.

  • coil_dim (int) – Coil dimension. Default: 1.

  • coil_to_batch (bool) – If True, the batch and coil dimensions are merged before the input is forwarded through the model and unmerged afterwards. Otherwise, the input is forwarded to the model one coil at a time. Default: False.
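The coil_to_batch behaviour amounts to a reshape around the wrapped model. A minimal sketch, assuming coil_dim is 1 for the merged path; apply_multicoil is a hypothetical helper, not the library's implementation:

```python
import torch
from torch import nn

def apply_multicoil(model, x, coil_dim=1, coil_to_batch=False):
    """Run a 4D (N, H, W, C) model over 5D (N, coil, H, W, C) input."""
    if coil_to_batch:
        # Merged path (assumes coil_dim == 1 for simplicity):
        # (N, coil, H, W, C) -> (N * coil, H, W, C).
        n, num_coils = x.shape[0], x.shape[coil_dim]
        out = model(x.reshape(n * num_coils, *x.shape[2:]))
        # Unmerge back to (N, coil, H', W', C').
        return out.reshape(n, num_coils, *out.shape[1:])
    # Per-coil path: feed each (N, H, W, C) slice separately and re-stack.
    outputs = [model(x.select(coil_dim, i)) for i in range(x.shape[coil_dim])]
    return torch.stack(outputs, dim=coil_dim)

# Example with an identity "model": output shape matches the input.
x = torch.randn(2, 4, 8, 8, 2)
y = apply_multicoil(nn.Identity(), x, coil_to_batch=True)
```

The merged path trades memory for speed (one large batched call), while the per-coil path keeps peak memory low at the cost of a Python loop over coils.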

forward(x)[source]#

Performs the forward pass of MultiCoil.

Parameters:

x (Tensor) – Multi-coil input of shape (N, coil, height, width, in_channels).

Return type:

Tensor

Returns:

Multi-coil output of shape (N, coil, height, width, out_channels).

Module contents#