Martin Schilling¹, Sebastian Rosenzweig¹,², Moritz Blumenthal¹, and Martin Uecker¹,²,³
¹Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany; ²DZHK (German Centre for Cardiovascular Research), Göttingen, Germany; ³Campus Institute Data Science (CIDAS), University of Göttingen, Göttingen, Germany
Synopsis

Cardiac segmentation is essential for analyzing cardiac function. Manual labeling is relatively slow, so machine learning methods have been proposed to increase segmentation speed and precision. These methods typically rely on cine MR images and supervised learning. However, for real-time cardiac MRI, ground truth segmentations are difficult to obtain due to the lower image quality compared to cine MRI. Here, we present a method to obtain ground truth segmentations for real-time images on the basis of self-gated MRI (SSA-FARY).

Introduction

Precise cardiac segmentation is an essential part of the analysis of cardiac function. Neural networks trained for segmentation of the heart in short-axis view have achieved accuracies of over 90% for segmentation classes such as the left ventricle, right ventricle and myocardium, and have shown the potential of machine learning methods for fast, precise and reproducible computer-assisted diagnosis¹. These algorithms are usually trained with supervised learning, which requires a ground truth segmentation for each individual training image. Creating such a dataset is especially difficult for images obtained with real-time cardiac MRI because of the lower image quality and the corresponding ambiguity of tissue contours. We propose a method to automatically generate segmentation masks for real-time images by segmenting self-gated images obtained with SSA-FARY². The segmentations of the self-gated images can be assigned to the real-time reconstructions by matching the corresponding cardiac and respiratory phases to the acquisition time of the data.

Methods

Cardiac images were acquired on a Siemens Skyra at 3T using a radial FLASH sequence (TR=3 ms, TE=1.9 ms) with a tiny golden angle of 7°, a flip angle of 10° and an in-plane resolution of 1×1 mm². Real-time reconstruction was performed using NLINV³, and self-gated MRI was performed using SSA-FARY² followed by a multi-dimensional reconstruction⁴. All reconstruction steps were performed using BART⁵.

The self-gated images of one healthy volunteer were segmented with a U-Net-based neural network⁶ implemented in BART (FIG_1) and trained on the Automated Cardiac Diagnosis Challenge (ACDC)⁷ dataset, which features time series of cardiac short-axis views of 100 patients (4 pathology groups, 1 healthy group) and ground truth segmentations for the left and right ventricular cavity and the left ventricular myocardium. Images and segmentation masks were augmented with spatial and brightness augmentations⁸.

The self-gated images were cropped to center the heart and reduce the computational cost of training. For this purpose, the heart was localized with the Chan-Vese algorithm for contour detection⁹, based on a method customized for cardiac MRI¹⁰.
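A minimal sketch of this localization-and-cropping step, assuming scikit-image's morphological Chan-Vese and a square crop with a fixed margin (illustrative only, not the authors' implementation):

```python
# Hedged sketch: localize the heart with morphological Chan-Vese and crop around it.
# Iteration count, margin and the "component closest to image center" heuristic are assumptions.
import numpy as np
from scipy import ndimage
from skimage.segmentation import morphological_chan_vese

def crop_around_heart(image, margin=16):
    """image: 2D magnitude image; returns a crop around the detected heart region."""
    mask = morphological_chan_vese(image, 200, init_level_set="checkerboard", smoothing=2)
    labels, n = ndimage.label(mask)
    if n == 0:
        return image
    # Keep the connected component closest to the image center as a heuristic for the heart.
    center = np.array(image.shape) / 2.0
    coms = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    heart = 1 + int(np.argmin([np.linalg.norm(np.array(c) - center) for c in coms]))
    ys, xs = np.nonzero(labels == heart)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
    return image[y0:y1, x0:x1]
```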
The cropped, self-gated images were segmented by the network, and the output was automatically post-processed to remove unsatisfactory results. Segmentation results for the right ventricle were generally not accurate enough, so we focused on the left ventricle and the myocardium. Pixel artifacts were excluded, and holes in the segmented structures were filled by computing convex hulls of the ventricle and myocardium masks. After these post-processing steps, a segmentation was judged successful if the myocardium segmentation described a full circle and no part of the left-ventricle segmentation was in contact with the background. The quality of the segmentation masks was evaluated by visual inspection.
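A minimal sketch of this post-processing and success criterion, assuming the "full circle" test is implemented via hole filling of the myocardium mask (illustrative only, not the authors' code):

```python
# Hedged sketch: remove pixel artifacts, fill holes, and accept a mask only if the
# myocardium forms a closed ring that contains the whole left-ventricle mask.
import numpy as np
from scipy import ndimage
from skimage.morphology import convex_hull_image

def postprocess(lv, myo):
    """lv, myo: boolean masks of left-ventricular cavity and myocardium for one image."""
    def largest_component(mask):
        # Drop isolated pixel artifacts by keeping only the largest connected component.
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return labels == (1 + int(np.argmax(sizes)))

    lv, myo = largest_component(lv), largest_component(myo)
    if lv.any():
        lv = convex_hull_image(lv)  # fill holes in the cavity mask

    # "Full circle" test: a closed myocardial ring encloses its interior, so filling the
    # holes of the myocardium mask must cover the entire left-ventricle mask.
    interior = ndimage.binary_fill_holes(myo)
    success = bool(myo.any() and lv.any() and np.all(lv <= interior))
    return lv, myo, success
```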
Under-sampled radial k-space data with 15 spokes per frame was used for real-time imaging. The real-time images were then matched to the images obtained by self-gating, where multiple real-time images correspond to a single self-gated image (FIG_2). To minimize deviations, only the end-expiration state was used for further analysis; in this state, the spatial movement of the heart is minimal because of the consistently low lung volume. Due to the decreased lung volume, end-expiration can be identified as the respiratory state with the highest average image intensity in the vicinity of the localized heart. The segmentation masks of the self-gated images were assigned to the matched real-time images to obtain new training data.

The training data generated in this way was then used to re-train the network. The network was re-trained once with noise-augmented ACDC training data and once with the generated training data from real-time images. These two networks were then compared on the segmentation of previously unseen real-time images from another healthy volunteer, for slices in which a cardiac segmentation is expected.
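A minimal sketch of the matching and end-expiration selection, assuming the SSA-FARY cardiac and respiratory phases have already been evaluated at each real-time frame's acquisition time (array names and bin counts are assumptions, not the authors' code):

```python
# Hedged sketch: bin real-time frames by cardiac/respiratory phase, pick the respiratory
# bin with the highest mean intensity near the heart as end-expiration, and keep only
# those frames (each then inherits the mask of the matching self-gated cardiac bin).
import numpy as np

def select_end_expiration(rt_images, cardiac_phase, resp_phase, heart_roi,
                          n_card_bins=20, n_resp_bins=4):
    """rt_images: real-time frames (T, Y, X); cardiac_phase/resp_phase: phases in [0, 1)
    at each frame's acquisition time; heart_roi: boolean (Y, X) mask around the heart."""
    card_bin = np.minimum((cardiac_phase * n_card_bins).astype(int), n_card_bins - 1)
    resp_bin = np.minimum((resp_phase * n_resp_bins).astype(int), n_resp_bins - 1)

    # End-expiration: respiratory bin with the highest mean intensity near the heart.
    roi_mean = [rt_images[resp_bin == b][:, heart_roi].mean()
                if np.any(resp_bin == b) else -np.inf
                for b in range(n_resp_bins)]
    end_exp = int(np.argmax(roi_mean))

    keep = resp_bin == end_exp
    return rt_images[keep], card_bin[keep]
```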
Results

For end-expiration, we observe only minimal deviations between the segmentation masks and the structures in the real-time images (FIG_3). In a preliminary test, the network produces a higher proportion of successful segmentations (myocardium segmentation describing a full circle) on a dataset of real-time reconstructions if it has been re-trained with the generated segmentations and corresponding real-time images rather than with noise-augmented ACDC training data. Without re-training, 75% of the images are segmented successfully. After re-training for eight epochs on the training data generated from one healthy volunteer, 98% of the images are segmented successfully. For a similar training time, re-training with noise-augmented images showed no increase in the success rate.

Discussion and Conclusion

We presented a fast and easily accessible method to generate training data for cardiac segmentation of real-time images for supervised learning. The method makes it possible to generate training data with an arbitrary amount of under-sampling while still providing good segmentation masks. In contrast to mere noise augmentation, this approach is more realistic, since training images with genuine under-sampling artifacts can be generated. However, the method needs further analysis.

Acknowledgements

We were supported by the DZHK (German Centre for Cardiovascular Research) and funded in part by the NIH under grant U24EB029240. We acknowledge funding by the "Niedersächsisches Vorab" initiative of the Volkswagen Foundation.

References

1. F. Isensee, et al. "Automatic Cardiac Disease Assessment on cine-MRI via Time-Series Segmentation and Domain Specific Features." STACOM 2017, LNCS, vol. 10663, pp. 120-129, March 2018.
2. S. Rosenzweig, et al. "Cardiac and Respiratory Self-Gating in Radial MRI Using an Adapted Singular Spectrum Analysis (SSA-FARY)." IEEE Trans. Med. Imag., vol. 39, no. 10, pp. 3029-3041, Oct. 2020.
3. S. Rosenzweig, et al. "Simultaneous multi-slice MRI using Cartesian and radial FLASH and regularized nonlinear inversion: SMS-NLINV." Magn. Reson. Med., vol. 79, pp. 2057-2066, 2018.
4. L. Feng, et al. "XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing." Magn. Reson. Med., vol. 75, no. 2, pp. 775-788, 2016.
5. M. Uecker, et al. "Berkeley Advanced Reconstruction Toolbox." In Proc. Intl. Soc. Mag. Reson. Med., vol. 23, p. 2486, 2015.
6. O. Ronneberger, P. Fischer, T. Brox. "U-Net: Convolutional Networks for Biomedical Image Segmentation." MICCAI, LNCS, vol. 9351, pp. 234-241, 2015.
7. O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, et al. "Deep Learning Techniques for Automatic MRI Cardiac Multi-structures Segmentation and Diagnosis: Is the Problem Solved?" IEEE Trans. Med. Imag., vol. 37, no. 11, pp. 2514-2525, Nov. 2018.
8. F. Isensee, et al. "batchgenerators — a python framework for data augmentation." doi:10.5281/zenodo.3632567, 2020.
9. T. Chan, L. Vese. "An Active Contour Model without Edges." Scale-Space Theories in Computer Vision, 1999.
10. G. Ilias, G. Tziritas. "Fast Fully-Automatic Cardiac Segmentation in MRI Using MRF Model Optimization, Substructures Tracking and B-Spline Smoothing." STACOM 2017, ACDC and MMWHS Challenges, pp. 91-100, 2018.
Operating system: Windows 10
Slicer version: 4.10.1 (also tried 4.10.0)
Expected behavior: see segmentation in 2D
Actual behavior: can see it only in 3D!
Please help me. Unfortunately, the software does not show the painted segment in the 2D views, only in 3D. Do you know how I can fix this? It is crucial for me to see them, since I need to make a high-resolution segmentation of the hand. Thanks
Segmentation can be hidden at 6 places in the Segmentations module – check all of them (a scripted way to re-enable them is sketched after the list)!
overall hidden slice view
overall fully transparent slice view
overall not shown in a view
segment-specific hidden slice view
segment-specific fully transparent slice view
Click on the image below to see all these controls highlighted:
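If scripting is easier, the same switches can also be re-enabled from the Python console; a minimal sketch, assuming the segmentation node is named "Segmentation":

```python
import slicer

segmentationNode = slicer.util.getNode("Segmentation")
displayNode = segmentationNode.GetDisplayNode()

displayNode.SetVisibility(True)            # overall visibility
displayNode.SetVisibility2DFill(True)      # overall slice-view fill
displayNode.SetVisibility2DOutline(True)   # overall slice-view outline
displayNode.SetOpacity2DFill(0.5)          # 0 would make the fill fully transparent
displayNode.SetOpacity2DOutline(1.0)
displayNode.RemoveAllViewNodeIDs()         # show the segmentation in all views again

# Segment-specific switches
segmentation = segmentationNode.GetSegmentation()
for i in range(segmentation.GetNumberOfSegments()):
    segmentId = segmentation.GetNthSegmentID(i)
    displayNode.SetSegmentVisibility(segmentId, True)
    displayNode.SetSegmentOpacity2DFill(segmentId, 1.0)
    displayNode.SetSegmentOpacity2DOutline(segmentId, 1.0)
```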
Unfortunately, I have this weird situation:
Thanks so much for your help!
Could you please choose a segment to see if visibility is not turned off for specific segments? Could you also show a screenshot that shows the slices in the 3D view (to make sure they overlap)?
I hope they are ok.
How did you create the segmentation? Maybe the segmentation only contains a closed surface representation. If that’s the case, either change “Representation in 2D views” to “Closed surface”, or create a binary labelmap representation by clicking the “Create” button in the “Binary labelmap” row of the Representations section.
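For reference, a minimal scripted version of the same fix (the node name "Segmentation" is an assumption):

```python
import slicer

segmentationNode = slicer.util.getNode("Segmentation")

# Create a binary labelmap representation so slice (2D) views can render the segments.
segmentationNode.CreateBinaryLabelmapRepresentation()

# Alternatively, keep only the closed surface and tell the display node to slice it in 2D views.
displayNode = segmentationNode.GetDisplayNode()
displayNode.SetPreferredDisplayRepresentationName2D("Closed surface")
```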
I just clicked the “Add segment” button in the Segment Editor… after a while the segmentation no longer showed up in 2D… I did not do anything…
I tried but apparently it does not work
An interesting thing (maybe) is that if I load the saved data into a new project (where the 2D edit view is working), I can’t change the position of the slice. It is fixed at the left, as in the picture.
You need to load a volume to be able to position slice viewers with the slider at the top (you can still position slice views by Shift+MouseMove in the 3D viewer).
How did you create the segmentation? What would you like to do (edit the segmentation, compute statistics, …)?
From the Segment Editor I just click on the Add button… after nearly 12 bones, the segmentation in 2D disappeared.
If I open a new model, it works perfectly.
I need the model to calculate the distributions of electromagnetic currents in the hand. I need to make a precise hand segmentation in order to have reliable values.
I am new to this software. I find it awesome, but this is bad, because after 6 hours of work I probably can’t see any other solution than starting from scratch.
We need to be able to reproduce the problem that you are experiencing to be able to help. Could you record a screen capture video and share that (or give step-by-step description of what you do exactly)?
Dear Andras, I uploaded here how I did it. Basically, I just added segments, used the threshold, painted the segments (sometimes I used Fill between slices), and at the end of the process I used the smoothing tools (once for each segment, so for each bone).
I am wondering if there’s an upper limit to the number of segments one can make.
PS: my project is a master thesis.
PPS: I’m going to delete that YouTube video as soon as this is solved.
OK, I was able to fix the problem just by moving the old segments into a new segmentation! Since a friend of mine had the same problem, I decided to post the solution:
Go to the Data module and drag and drop the segments into a new segmentation (I created a new segmentation from scratch, then did that, and it worked).
If you understand why, maybe you could explain it to me (sorry, I’m new to Slicer). Truly, thanks for your time!
I don’t see any issue in the video above (segments are still visible in the slice views).
Note that you can probably segment the image 10x faster if, after setting the threshold range for masking, you paint a small piece in each bone and then use the Grow from seeds effect to create a complete segmentation.
Yes, in the YouTube video I just show how I did it, as you asked. After a few hours of working like that, the segments in the 2D views disappeared.
Yes, I know the Grow from seeds effect, but unfortunately it’s difficult to do precise work with it (segmentation of all blood vessels, tendons, fat and so on) from a 1T MRI (too low resolution, I guess)… I tried, but I always get overlapping and weird boundaries.
Thanks for the clarification.
If you send one of the scene files where the segments are not visible in 2D, I will investigate why this could happen.
I have the exact same problem:
Operating System: Ubuntu 20.04.2 LTS
Slicer version: 4.11.20210226
Expected behavior: see segmentation in 2D
Actual behavior: can only see segmentation in 3D (even after checking all 6 locations in the Segmentations module)
Context: The segmentation has been generated by a deep neural network. Its dimensions match the dimensions of the CT scans.
@lassoan I can send you the files if you have time to look at it.
Thank you so much for your help!
Yes, please save the scene as .mrb file, upload it somewhere, post the link here, and I’ll have a look.
Here’s the link to the scene saved as an .mrb file. I appreciate your help!
You don’t see the segments in the slice views because the slice view bounds are set to the background volume by default, and the volume is very far from the segmentation:
You can force your slice viewers to show the segmentation by choosing the “Jump slices – centered” crosshair mode and moving the mouse over the segmentation in the 3D view while holding down the Shift key.
This issue happens because the geometry of the segmentation is incorrect: the segments have (0,0,0) origin and (1,1,1) spacing. Its scalar type being float may also cause problems elsewhere. How did you create this segmentation?
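If the network output is saved to a file, one way to repair it before loading it into Slicer is to copy the CT geometry onto the labelmap and cast it to an integer type; a minimal sketch with SimpleITK (file names are assumptions):

```python
import SimpleITK as sitk

ct = sitk.ReadImage("ct.nii.gz")
seg = sitk.ReadImage("segmentation.nii.gz")

seg = sitk.Cast(seg, sitk.sitkUInt8)   # labelmaps should not use a float scalar type
seg.CopyInformation(ct)                # copies origin, spacing and direction from the CT

sitk.WriteImage(seg, "segmentation_fixed.nii.gz")
```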
Expanding mobile #broadband penetration by 10% in #Africa will yield an increase of at least 2.5% in GDP per capita. Smart Africa’s commitment is to increase broadband penetration across Africa.
https://smartafrica.org/knowledge/customer-segmentation/ #smartbroadband2025 #connectivity #smartafrica