direct.nn.vsharp package#
Submodules#
direct.nn.vsharp.config module#
- class direct.nn.vsharp.config.VSharpNetConfig(model_name='???', engine_name=None, num_steps=10, num_steps_dc_gd=8, image_init=InitType.SENSE, no_parameter_sharing=True, auxiliary_steps=0, image_model_architecture=ModelName.UNET, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, image_resnet_hidden_channels=128, image_resnet_num_blocks=15, image_resnet_batchnorm=True, image_resnet_scale=0.1, image_unet_num_filters=32, image_unet_num_pool_layers=4, image_unet_dropout=0.0, image_didn_hidden_channels=16, image_didn_num_dubs=6, image_didn_num_convs_recon=9, image_conv_hidden_channels=64, image_conv_n_convs=15, image_conv_activation=ActivationType.RELU, image_conv_batchnorm=False)[source]#
Bases: ModelConfig
- num_steps = 10#
- num_steps_dc_gd = 8#
- image_init = 'sense'#
- no_parameter_sharing = True#
- auxiliary_steps = 0#
- image_model_architecture = 'unet'#
- initializer_channels = (32, 32, 64, 64)#
- initializer_dilations = (1, 1, 2, 4)#
- initializer_multiscale = 1#
- initializer_activation = 'prelu'#
- image_resnet_num_blocks = 15#
- image_resnet_batchnorm = True#
- image_resnet_scale = 0.1#
- image_unet_num_filters = 32#
- image_unet_num_pool_layers = 4#
- image_unet_dropout = 0.0#
- image_didn_num_dubs = 6#
- image_didn_num_convs_recon = 9#
- image_conv_n_convs = 15#
- image_conv_activation = 'relu'#
- image_conv_batchnorm = False#
- __init__(model_name='???', engine_name=None, num_steps=10, num_steps_dc_gd=8, image_init=InitType.SENSE, no_parameter_sharing=True, auxiliary_steps=0, image_model_architecture=ModelName.UNET, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, image_resnet_hidden_channels=128, image_resnet_num_blocks=15, image_resnet_batchnorm=True, image_resnet_scale=0.1, image_unet_num_filters=32, image_unet_num_pool_layers=4, image_unet_dropout=0.0, image_didn_hidden_channels=16, image_didn_num_dubs=6, image_didn_num_convs_recon=9, image_conv_hidden_channels=64, image_conv_n_convs=15, image_conv_activation=ActivationType.RELU, image_conv_batchnorm=False)#
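A minimal sketch of building this configuration and overriding a few fields; field names and defaults follow the signature documented above, and the exact construction workflow (e.g. via OmegaConf/Hydra in the training pipeline) is not shown here:

    # Hedged sketch: constructing a 2D vSHARP config with a few overrides.
    from direct.nn.vsharp.config import VSharpNetConfig

    config = VSharpNetConfig(
        num_steps=12,                  # more ADMM iterations than the default 10
        num_steps_dc_gd=8,             # gradient-descent steps inside the x-step
        image_unet_num_filters=32,     # U-Net denoiser width
        image_unet_num_pool_layers=4,  # U-Net denoiser depth
    )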
- class direct.nn.vsharp.config.VSharpNet3DConfig(model_name='???', engine_name=None, num_steps=8, num_steps_dc_gd=6, image_init=InitType.SENSE, no_parameter_sharing=True, auxiliary_steps=-1, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, unet_num_filters=32, unet_num_pool_layers=4, unet_dropout=0.0, unet_norm=False)[source]#
Bases: ModelConfig
- num_steps = 8#
- num_steps_dc_gd = 6#
- image_init = 'sense'#
- no_parameter_sharing = True#
- auxiliary_steps = -1#
- initializer_channels = (32, 32, 64, 64)#
- initializer_dilations = (1, 1, 2, 4)#
- initializer_multiscale = 1#
- initializer_activation = 'prelu'#
- unet_num_filters = 32#
- unet_num_pool_layers = 4#
- unet_dropout = 0.0#
- unet_norm = False#
- __init__(model_name='???', engine_name=None, num_steps=8, num_steps_dc_gd=6, image_init=InitType.SENSE, no_parameter_sharing=True, auxiliary_steps=-1, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, unet_num_filters=32, unet_num_pool_layers=4, unet_dropout=0.0, unet_norm=False)#
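The 3D configuration is built the same way; a brief hedged sketch with the U-Net denoiser options that this config exposes directly:

    # Hedged sketch: constructing a 3D vSHARP config.
    from direct.nn.vsharp.config import VSharpNet3DConfig

    config_3d = VSharpNet3DConfig(
        num_steps=8,
        num_steps_dc_gd=6,
        auxiliary_steps=-1,     # output all intermediate reconstructions
        unet_num_filters=32,
        unet_num_pool_layers=4,
        unet_dropout=0.0,
        unet_norm=False,
    )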
direct.nn.vsharp.vsharp module#
This module provides the implementation of the vSHARP model.
More specifically, vSHARP is the variable Splitting Half-quadratic ADMM algorithm for Reconstruction of inverse-Problems (vSHARP) model as presented in [1]_.
References: .. [1] George Yiasemis et al., "vSHARP: Variable Splitting Half-quadratic ADMM Algorithm for Reconstruction of Inverse Problems" (2023). https://arxiv.org/abs/2309.09954.
- class direct.nn.vsharp.vsharp.LagrangeMultipliersInitializer(in_channels, out_channels, channels, dilations, multiscale_depth=1, activation=ActivationType.PRELU)[source]#
Bases: Module
A convolutional neural network model that initializes the Lagrange multiplier of VSharpNet [1]_.
More specifically, it produces an initial value for the Lagrange multiplier based on the zero-filled image:
\[u^0 = \mathcal{G}_{\psi}(x^0).\]
References: .. [1] George Yiasemis et al., "vSHARP: Variable Splitting Half-quadratic ADMM Algorithm for Reconstruction of Inverse Problems" (2023). https://arxiv.org/abs/2309.09954.
- __init__(in_channels, out_channels, channels, dilations, multiscale_depth=1, activation=ActivationType.PRELU)[source]#
Inits LagrangeMultipliersInitializer.
- Parameters:
  - in_channels (int) – Number of input channels.
  - out_channels (int) – Number of output channels.
  - channels (tuple[int, ...]) – Tuple of integers specifying the number of output channels for each convolutional layer in the network.
  - dilations (tuple[int, ...]) – Tuple of integers specifying the dilation factor for each convolutional layer in the network.
  - multiscale_depth (int) – Number of multiscale features to include in the output. Default: 1.
  - activation (ActivationType) – Activation function to use on the output. Default: ActivationType.PRELU.
- forward(x)[source]#
Forward pass of LagrangeMultipliersInitializer.
- Parameters:
  - x (Tensor) – Input tensor of shape (batch_size, in_channels, height, width).
- Return type: Tensor
- Returns: Output tensor of shape (batch_size, out_channels, height, width).
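A hedged usage sketch matching the documented signature and shapes; the channel counts are illustrative only (two channels standing in for the real and imaginary parts of the zero-filled image):

    # Hedged sketch: initializing u^0 from a dummy zero-filled image.
    import torch

    from direct.nn.vsharp.vsharp import LagrangeMultipliersInitializer

    initializer = LagrangeMultipliersInitializer(
        in_channels=2,                # real + imaginary channels (illustrative)
        out_channels=2,
        channels=(32, 32, 64, 64),
        dilations=(1, 1, 2, 4),
        multiscale_depth=1,
    )
    x0 = torch.randn(1, 2, 128, 128)  # (batch_size, in_channels, height, width)
    u0 = initializer(x0)              # (batch_size, out_channels, height, width)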
- class direct.nn.vsharp.vsharp.VSharpNet(forward_operator, backward_operator, num_steps, num_steps_dc_gd, image_init=InitType.SENSE, no_parameter_sharing=True, image_model_architecture=ModelName.UNET, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, auxiliary_steps=0, **kwargs)[source]#
Bases: Module
Variable Splitting Half-quadratic ADMM algorithm for Reconstruction of Parallel MRI [1]_.
VSharpNet is a deep learning model that solves the augmented Lagrangian formulation of the variable half-quadratic splitting problem using ADMM (Alternating Direction Method of Multipliers). It is specifically designed for solving inverse problems in magnetic resonance imaging (MRI).
The VSharpNet model incorporates an iterative optimization algorithm that consists of three steps: z-step, x-step, and u-step. These steps are detailed mathematically as follows:
\[z^{t+1} = \mathrm{argmin}_{z} \ \lambda \mathcal{G}(z) + \frac{\rho}{2} \big\| x^{t} - z + \frac{u^t}{\rho} \big\|_2^2 \quad \mathrm{[z-step]}\]
\[x^{t+1} = \mathrm{argmin}_{x} \ \frac{1}{2} \big\| \mathcal{A}_{\mathbf{U},\mathbf{S}}(x) - \tilde{y} \big\|_2^2 + \frac{\rho}{2} \big\| x - z^{t+1} + \frac{u^t}{\rho} \big\|_2^2 \quad \mathrm{[x-step]}\]
\[u^{t+1} = u^t + \rho (x^{t+1} - z^{t+1}) \quad \mathrm{[u-step]}\]
During the z-step, the model minimizes the augmented Lagrangian function with respect to z, utilizing DL-based denoisers. In the x-step, it optimizes x by minimizing the data consistency term through unrolling a gradient descent scheme (DC-GD). The u-step updates the Lagrange multiplier u. These steps are iterated for a specified number of cycles (see the sketch below the references).
The model includes an initializer for Lagrange multipliers.
It also allows for outputting auxiliary steps.
VSharpNet is tailored for 2D MRI data reconstruction.
References: .. [1] George Yiasemis et al., "vSHARP: Variable Splitting Half-quadratic ADMM Algorithm for Reconstruction of Inverse Problems" (2023). https://arxiv.org/abs/2309.09954.
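The three steps above can be condensed into a purely illustrative sketch of one unrolled iteration. This is not the library's internal code: denoiser, A, A_adjoint, rho, and eta are hypothetical stand-ins for the learned denoiser block, the subsampled forward/adjoint MRI operators, the penalty parameter, and the (learned) step size.

    # Illustrative sketch of one vSHARP/ADMM iteration (z-step, x-step, u-step).
    def vsharp_iteration(x, z, u, y_tilde, denoiser, A, A_adjoint, rho, eta, num_dc_gd_steps):
        # z-step: DL-based denoising of the current estimate shifted by u / rho.
        z = denoiser(x + u / rho)
        # x-step: unrolled gradient descent on the data-consistency objective.
        for _ in range(num_dc_gd_steps):
            grad = A_adjoint(A(x) - y_tilde) + rho * (x - z + u / rho)
            x = x - eta * grad
        # u-step: Lagrange multiplier update.
        u = u + rho * (x - z)
        return x, z, u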
- __init__(forward_operator, backward_operator, num_steps, num_steps_dc_gd, image_init=InitType.SENSE, no_parameter_sharing=True, image_model_architecture=ModelName.UNET, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, auxiliary_steps=0, **kwargs)[source]#
Inits VSharpNet.
- Parameters:
  - forward_operator (Callable[[tuple[Any, ...]], Tensor]) – Forward operator function.
  - backward_operator (Callable[[tuple[Any, ...]], Tensor]) – Backward operator function.
  - num_steps (int) – Number of steps in the ADMM algorithm.
  - num_steps_dc_gd (int) – Number of steps in the Data Consistency using Gradient Descent step of ADMM.
  - image_init (InitType) – Image initialization method. Default: 'sense'.
  - no_parameter_sharing (bool) – Flag indicating whether parameter sharing is enabled in the denoiser blocks.
  - image_model_architecture (ModelName) – Image model architecture. Default: ModelName.UNET.
  - initializer_channels (tuple[int, ...]) – Tuple of integers specifying the number of output channels for each convolutional layer in the Lagrange multiplier initializer. Default: (32, 32, 64, 64).
  - initializer_dilations (tuple[int, ...]) – Tuple of integers specifying the dilation factor for each convolutional layer in the Lagrange multiplier initializer. Default: (1, 1, 2, 4).
  - initializer_multiscale (int) – Number of multiscale features to include in the Lagrange multiplier initializer output. Default: 1.
  - initializer_activation (ActivationType) – Activation type for the Lagrange multiplier initializer. Default: ActivationType.PRELU.
  - auxiliary_steps (int) – Number of auxiliary steps to output. Can be -1 or a positive integer lower than or equal to num_steps. If -1, all steps are used; if set to I, the last I steps are used.
  - **kwargs – Additional keyword arguments. Can be model_name or image_model_<param>, where <param> represents a parameter of the selected image model architecture beyond the standard parameters. Depending on the chosen image_model_architecture, different kwargs are applicable.
- forward(masked_kspace, sensitivity_map, sampling_mask)[source]#
Computes forward pass of VSharpNet.
- Parameters:
  - masked_kspace (Tensor) – Masked k-space of shape (N, coil, height, width, complex=2).
  - sensitivity_map (Tensor) – Sensitivity map of shape (N, coil, height, width, complex=2).
  - sampling_mask (Tensor) – Sampling mask of shape (N, 1, height, width, 1).
- Return type: list[Tensor]
- Returns: List of output images of shape (N, height, width, complex=2).
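A hedged end-to-end sketch with dummy data; shapes follow the forward() docs above. Using fft2/ifft2 from direct.data.transforms as the forward/backward operators is an assumption made for illustration only; in practice these operators are wired up by the DIRECT configuration and engine.

    # Hedged sketch: running dummy 2D data through VSharpNet.
    import torch

    from direct.data.transforms import fft2, ifft2  # assumed operators (illustrative)
    from direct.nn.vsharp.vsharp import VSharpNet

    model = VSharpNet(
        forward_operator=fft2,
        backward_operator=ifft2,
        num_steps=4,
        num_steps_dc_gd=4,
        auxiliary_steps=-1,       # return all intermediate reconstructions
    )

    masked_kspace = torch.randn(1, 8, 128, 128, 2)    # (N, coil, height, width, complex=2)
    sensitivity_map = torch.randn(1, 8, 128, 128, 2)  # (N, coil, height, width, complex=2)
    sampling_mask = torch.randint(0, 2, (1, 1, 128, 128, 1)).float()

    outputs = model(masked_kspace, sensitivity_map, sampling_mask)
    final_image = outputs[-1]                         # (N, height, width, complex=2)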
- class direct.nn.vsharp.vsharp.LagrangeMultipliersInitializer3D(in_channels, out_channels, channels, dilations, multiscale_depth=1, activation=ActivationType.PRELU)[source]#
Bases: Module
A convolutional neural network model that initializes the Lagrange multiplier of VSharpNet3D.
This is an extension to 3D data of LagrangeMultipliersInitializer.
- __init__(in_channels, out_channels, channels, dilations, multiscale_depth=1, activation=ActivationType.PRELU)[source]#
Initializes LagrangeMultipliersInitializer3D.
- Parameters:
  - in_channels (int) – Number of input channels.
  - out_channels (int) – Number of output channels.
  - channels (tuple[int, ...]) – Tuple of integers specifying the number of output channels for each convolutional layer in the network.
  - dilations (tuple[int, ...]) – Tuple of integers specifying the dilation factor for each convolutional layer in the network.
  - multiscale_depth (int) – Number of multiscale features to include in the output. Default: 1.
  - activation (ActivationType) – Activation function to use on the output. Default: ActivationType.PRELU.
- forward(x)[source]#
Forward pass of LagrangeMultipliersInitializer3D.
- Parameters:
  - x (Tensor) – Input tensor of shape (batch_size, in_channels, z, x, y).
- Return type: Tensor
- Returns: Output tensor of shape (batch_size, out_channels, z, x, y).
- class direct.nn.vsharp.vsharp.VSharpNet3D(forward_operator, backward_operator, num_steps, num_steps_dc_gd, image_init=InitType.SENSE, no_parameter_sharing=True, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, auxiliary_steps=-1, unet_num_filters=32, unet_num_pool_layers=4, unet_dropout=0.0, unet_norm=False, **kwargs)[source]#
Bases: Module
VSharpNet 3D version using 3D U-Nets as denoisers.
This is an extension to 3D of VSharpNet. For the original paper refer to [1]_.
References: .. [1] George Yiasemis et al., "vSHARP: Variable Splitting Half-quadratic ADMM Algorithm for Reconstruction of Inverse Problems" (2023). https://arxiv.org/abs/2309.09954.
- __init__(forward_operator, backward_operator, num_steps, num_steps_dc_gd, image_init=InitType.SENSE, no_parameter_sharing=True, initializer_channels=(32, 32, 64, 64), initializer_dilations=(1, 1, 2, 4), initializer_multiscale=1, initializer_activation=ActivationType.PRELU, auxiliary_steps=-1, unet_num_filters=32, unet_num_pool_layers=4, unet_dropout=0.0, unet_norm=False, **kwargs)[source]#
Inits VSharpNet3D.
- Parameters:
  - forward_operator (Callable[[tuple[Any, ...]], Tensor]) – Forward operator function.
  - backward_operator (Callable[[tuple[Any, ...]], Tensor]) – Backward operator function.
  - num_steps (int) – Number of steps in the ADMM algorithm.
  - num_steps_dc_gd (int) – Number of steps in the Data Consistency using Gradient Descent step of ADMM.
  - image_init (InitType) – Image initialization method. Default: 'sense'.
  - no_parameter_sharing (bool) – Flag indicating whether parameter sharing is enabled in the denoiser blocks.
  - initializer_channels (tuple[int, ...]) – Tuple of integers specifying the number of output channels for each convolutional layer in the Lagrange multiplier initializer. Default: (32, 32, 64, 64).
  - initializer_dilations (tuple[int, ...]) – Tuple of integers specifying the dilation factor for each convolutional layer in the Lagrange multiplier initializer. Default: (1, 1, 2, 4).
  - initializer_multiscale (int) – Number of multiscale features to include in the Lagrange multiplier initializer output. Default: 1.
  - initializer_activation (ActivationType) – Activation type for the Lagrange multiplier initializer. Default: ActivationType.PRELU.
  - auxiliary_steps (int) – Number of auxiliary steps to output. Can be -1 or a positive integer lower than or equal to num_steps. If -1, all steps are used; if set to I, the last I steps are used.
  - unet_num_filters (int) – Number of output channels of the first convolutional layer of the U-Net denoisers. Default: 32.
  - unet_num_pool_layers (int) – Number of down-sampling and up-sampling layers (depth) of the U-Net denoisers. Default: 4.
  - unet_dropout (float) – Dropout probability of the U-Net denoisers. Default: 0.0.
  - unet_norm (bool) – Whether to use a normalized U-Net as denoiser. Default: False.
  - **kwargs – Additional keyword arguments. Can be model_name.
- forward(masked_kspace, sensitivity_map, sampling_mask)[source]#
Computes forward pass of VSharpNet3D.
- Parameters:
  - masked_kspace (Tensor) – Masked k-space of shape (N, coil, slice, height, width, complex=2).
  - sensitivity_map (Tensor) – Sensitivity map of shape (N, coil, slice, height, width, complex=2).
  - sampling_mask (Tensor) – Sampling mask of shape (N, 1, 1 or slice, height, width, 1).
- Return type: list[Tensor]
- Returns: List of output images, each of shape (N, slice, height, width, complex=2).
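A hedged shape sketch for the 3D model, mirroring the 2D example above; the fft2/ifft2 operators are again an illustrative assumption rather than the pipeline's actual wiring.

    # Hedged sketch: running dummy 3D data through VSharpNet3D.
    import torch

    from direct.data.transforms import fft2, ifft2  # assumed operators (illustrative)
    from direct.nn.vsharp.vsharp import VSharpNet3D

    model_3d = VSharpNet3D(
        forward_operator=fft2,
        backward_operator=ifft2,
        num_steps=4,
        num_steps_dc_gd=4,
        auxiliary_steps=-1,
    )

    masked_kspace = torch.randn(1, 8, 12, 64, 64, 2)    # (N, coil, slice, height, width, complex=2)
    sensitivity_map = torch.randn(1, 8, 12, 64, 64, 2)  # (N, coil, slice, height, width, complex=2)
    sampling_mask = torch.randint(0, 2, (1, 1, 1, 64, 64, 1)).float()  # broadcast over slices

    outputs = model_3d(masked_kspace, sensitivity_map, sampling_mask)
    final_image = outputs[-1]                            # (N, slice, height, width, complex=2)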
direct.nn.vsharp.vsharp_engine module#
Engines for vSHARP 2D and 3D models [1].
Includes supervised, self-supervised and joint supervised and self-supervised learning [2] engines.
References:
.. [1] George Yiasemis et al., "vSHARP: Variable Splitting Half-quadratic ADMM Algorithm for Reconstruction of Inverse Problems" (2023). https://arxiv.org/abs/2309.09954.
.. [2] Yiasemis, G., Moriakov, N., Sánchez, C.I., Sonke, J.-J., Teuwen, J.: JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction (2023). https://arxiv.org/abs/2311.15856. https://doi.org/10.48550/arXiv.2311.15856.
- class direct.nn.vsharp.vsharp_engine.VSharpNet3DEngine(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Bases: MRIModelEngine
VSharpNet 3D Model Engine.
- __init__(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Inits VSharpNet3DEngine.
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable[[tuple[Any, ...]], Tensor]]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable[[tuple[Any, ...]], Tensor]]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models.
- forward_function(data)[source]#
Forward function for VSharpNet3DEngine.
- Parameters:
  - data (dict[str, Any]) – Input data dictionary containing keys such as "masked_kspace", "sampling_mask", and "sensitivity_map".
- Return type: tuple[Tensor, None]
- Returns: Tuple containing output images and output k-space.
- class direct.nn.vsharp.vsharp_engine.VSharpNetEngine(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Bases: MRIModelEngine
VSharpNet 2D Model Engine.
- __init__(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Inits VSharpNetEngine.
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable[[tuple[Any, ...]], Tensor]]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable[[tuple[Any, ...]], Tensor]]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models for secondary tasks, such as a sensitivity map estimation model.
- forward_function(data)[source]#
Forward function for VSharpNetEngine.
- Parameters:
  - data (dict[str, Any]) – Input data dictionary containing keys such as "masked_kspace", "sampling_mask", and "sensitivity_map".
- Return type: tuple[Tensor, Tensor]
- Returns: Tuple containing output images and output k-space.
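A hedged illustration of the input dictionary that forward_function consumes; in practice such batches are produced by the DIRECT datasets and transforms rather than assembled by hand, and the engine itself is constructed by the training pipeline with its configuration, model, and device.

    # Hedged sketch of the expected input dictionary (shapes as in VSharpNet.forward).
    import torch

    data = {
        "masked_kspace": torch.randn(1, 8, 128, 128, 2),    # (N, coil, height, width, complex=2)
        "sensitivity_map": torch.randn(1, 8, 128, 128, 2),  # (N, coil, height, width, complex=2)
        "sampling_mask": torch.randint(0, 2, (1, 1, 128, 128, 1)).float(),
    }
    # With an already-constructed engine (cfg, model, device wired up elsewhere):
    # output_image, output_kspace = engine.forward_function(data)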
- class direct.nn.vsharp.vsharp_engine.VSharpNetSSLEngine(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Bases: SSLMRIModelEngine
Self-supervised Learning vSHARP Model 2D Engine.
Used for the main experiments for SSL in the JSSL paper [1].
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models.
References
[1] Yiasemis, G., Moriakov, N., Sánchez, C.I., Sonke, J.-J., Teuwen, J.: JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction (2023). https://arxiv.org/abs/2311.15856. https://doi.org/10.48550/arXiv.2311.15856.
- __init__(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Inits VSharpNetSSLEngine.
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models.
- forward_function(data)[source]#
Forward function for VSharpNetSSLEngine.
- Return type: None
- class direct.nn.vsharp.vsharp_engine.VSharpNetJSSLEngine(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Bases: JSSLMRIModelEngine
Joint Supervised and Self-supervised Learning vSHARP Model 2D Engine.
Used for the main experiments in the JSSL paper [1].
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models.
References
[1] Yiasemis, G., Moriakov, N., Sánchez, C.I., Sonke, J.-J., Teuwen, J.: JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction (2023). https://arxiv.org/abs/2311.15856. https://doi.org/10.48550/arXiv.2311.15856.
- __init__(cfg, model, device, forward_operator=None, backward_operator=None, mixed_precision=False, **models)[source]#
Inits VSharpNetJSSLEngine.
- Parameters:
  - cfg (BaseConfig) – Configuration file.
  - model (Module) – Model.
  - device (str) – Device. Can be "cuda:{idx}" or "cpu".
  - forward_operator (Optional[Callable]) – The forward operator. Default: None.
  - backward_operator (Optional[Callable]) – The backward operator. Default: None.
  - mixed_precision (bool) – Use mixed precision. Default: False.
  - **models (Module) – Additional models.
- forward_function(data)[source]#
Forward function for VSharpNetJSSLEngine.
- Return type: None