
AI-CHD: An AI-Based Framework for Cost-Effective Surgical Telementoring of Congenital Heart Disease

3D heart modeling and AI bring new cardiac surgery to remote and less-developed regions.

Congenital heart disease (CHD), the most common congenital birth defect, has long been known as one of the main causes of infant death during the first year of life.1 More than one million of the world’s approximately 135 million newborns are born each year with CHD.21 Over the last century, cardiac surgery has been an effective approach to tackling CHD; its remarkable advance has decreased the mortality rate of newborns with CHD.10


Key Insights

  • Congenital heart disease (CHD), the most common congenital birth defect, is usually treated with heart surgery. However, such procedures require highly skilled surgeons, who are especially scarce in remote and less-developed regions.
  • Surgical telementoring enables an expert surgeon to remotely guide a less-experienced surgeon. The main challenge is high costs brought on by inefficient planning based on low-quality images.
  • Using AI to automatically construct accurate 3D models of the heart from low-quality images can help during remote surgical planning, via 3D printing or virtual reality (VR) technology, and can improve operational efficiency during surgery.

However, that lower mortality rate is mostly observed in developed countries rather than developing ones. Surgical treatment of CHD requires highly skilled surgeons along with complex infrastructures and equipment. While developed countries have perfected their treatment of CHD for more than 50 years, developing countries are still in the early stages. It is estimated that the number of congenital cardiac surgeons needs to increase by 1,250 times to satisfy only the basic needs of CHD treatment worldwide,16 and most of those surgeons reside in developed countries. As a result, the mortality rate in developing countries is currently at 20%, strikingly higher than the 3% to 7% in developed countries,16 not to mention the fact that mortality rates in developing countries are likely underreported due to the lack of proper diagnosis.


Remote Surgery

Remote surgery has been an active field for decades, enabling experienced surgeons to remotely instruct robots (telerobotics) or guide less-experienced surgeons (surgical telementoring).8 It enables high-quality surgical expertise to be passed from surgeons in developed countries to those in developing ones, or from high-end urban medical centers to rural hospitals inside developing nations.

Telerobotics enables surgeons to remotely control robots in a master-slave relationship. Stable camera systems are deployed at both sites. At the robot site, multiple cameras construct a virtual image of the operative field, which is provided to the surgeon site. At the surgeon site, multiple cameras and 3D imaging systems capture the surgeon’s hand movements, which are sent to the robot site and emulated by the robot operating on the patient. Images and movements must be transmitted in real time; large latency may degrade the surgeon’s performance or even lead to surgical failure. Telerobotics also lets surgeons sit comfortably while performing delicate operations requiring fine movements. However, due to its highly demanding nature and potential risks, telerobotics has very limited clinical application; only a few systems, including da Vinci14 and Zeus,13 are approved for use. At present, telerobotics is still in its early stages.

Surgical telementoring, on the other hand, consists of an expert surgeon remotely guiding a less-experienced surgeon. Such guidance is achieved with real-time audio and video transmission. Thus, the two surgeons can discuss the procedure in real time, and the expert surgeon can deliver precise guidance based on the real-time video streaming of the surgery process. Surgical telementoring can be performed in rural areas and austere environments, not only for difficult surgeries but also for surgery education. Like telerobotics, surgical telementoring requires real-time data transmission. Fortunately, 4G and 5G communication technologies have made this possible across great distances. Surgical telementoring has been widely adopted and explored in clinical use thanks to improved transmission quality, and the technology carries less potential risk compared with telerobotics.11

Figure 1. Surgical telementoring: Technical assistance and guidance via real-time video and audio streams.

Though also in its early stage, surgical telementoring is still more mature than telerobotics, with its relatively lower cost and lower technical complexity. However, surgical telementoring for CHD still faces challenges. First, cardiac surgery for the treatment of CHD is rather complex, generally regarded as the jewel in the crown of surgery. As such, CHD diagnosis and surgical planning are usually time-consuming and costly; delivering this expertise to developing countries or rural areas can often be time- and cost-prohibitive. For example, examining the medical image of a CHD patient for diagnosis takes even a very experienced radiologist several hours, whereas that time is usually on the order of minutes for common heart diseases. Second, the machine quality and operator skill in developing countries or rural areas may be limited, leading to issues such as low imaging quality under non-ideal settings.


AI-CHD

One potential solution to reduce costs is to use artificial intelligence (AI) to automatically construct accurate 3D models of the heart from medical images, a critical yet otherwise time-consuming process in CHD surgical telementoring. Before surgery, this model can help during remote surgical planning and discussions via 3D printing or virtual reality (VR) technologies. During surgery, viewing the model via a 2D screen can enhance operational efficiency by fostering communication between the surgeon and the novice.

Our novel solution, an AI-based framework called AI-CHD, aims to construct accurate heart models efficiently for surgical telementoring of CHD based on 3D computed tomography (CT) images. Considering that the artifact types and patterns in CT images acquired with medium- and low-end machines, or by users with limited skills, may differ from those in standard training sets, the framework first uses a weakly supervised approach to remove artifacts from a CT image, one that does not require a prior training set. Further, considering that hearts in CHD exhibit large variations in structure and/or great-vessel connections without local tissue changes, the framework then deploys hybrid deep learning and graph analysis to tackle model construction. We evaluate each step with collected datasets and the overall system with a case study.

Figure 2. Overall flow of AI-CHD.

Medical image artifact reduction. Medical images exhibit various types of artifacts, with different patterns and mixtures that depend on many factors, including scan setting, machine condition, patient size and age, and the surrounding environment. This problem is even worse on mid-range and low-end CT imaging machines, which are common in rural areas of developing countries and are often operated by less-skilled technologists. On the other hand, existing deep learning-based artifact-reduction methods for medical images are restricted by training data that contains predetermined artifact types and patterns, which cannot exhaustively capture all possibilities. Accordingly, they only work well under the scenarios defined by the training data. In this step, we exploit the power of deep learning but without using pre-trained networks for artifact reduction. Specifically, at test time we train a lightweight, image-specific, artifact-reduction network using data synthesized from the input image. Without requiring any prior training data, our method can work with almost any medical image containing varying or unknown artifacts.

The main flow of artifact reduction contains two modules: artifact synthesis and artifact removal.3 In the first module, radiologists annotate a total of 10–20 regions of interest (RoI) in the 3D input CT image. These RoI are then used to train a lightweight, five-layer synthesis network, which synthesizes a large number of paired patches. Note that because different medical images have different ranges of pixel values, we normalize pixel values so that each lies between 0 and 1. With the synthesized paired patches, theoretically any existing CNN-based artifact-reduction network could be trained. However, a key issue is that we perform this task on each input image: deep and complex networks may need many data pairs and long training times, while smaller networks may not attain the desired performance. Thus, in the second module, we resort to an artifact-removal network, a compact, attentive generative architecture that pays special attention to artifacts and is trained adversarially for faster convergence. It is formed by a two-step attentive-recurrent network followed by a 10-layer contextual autoencoder that reduces artifacts and restores the information obstructed by them. Once trained, the artifact-removal network is applied to all slices of the 3D volume.
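
To make the per-image training concrete, here is a minimal Python/PyTorch sketch of the removal step under simplifying assumptions: a small residual convolutional network stands in for the attentive-recurrent architecture, plain L1 supervision stands in for the adversarial training, and the paired patches are assumed to come from the synthesis module (one possible way to form them is sketched after the next paragraph).

```python
# Simplified stand-in for AI-CHD's per-image artifact removal; not the
# actual attentive-recurrent, adversarially trained network.
import numpy as np
import torch
import torch.nn as nn

def normalize(volume: np.ndarray) -> np.ndarray:
    """Scale CT pixel values to [0, 1], as described in the text."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)

class CompactRemovalNet(nn.Module):
    """Tiny convolutional stand-in for the artifact-removal network."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the artifact residual and subtract it from the input.
        return x - self.body(x)

def reduce_artifacts(volume: np.ndarray, paired_patches, epochs: int = 50):
    """Train an image-specific network on synthesized (dirty, clean) patch
    pairs, then apply it slice by slice to the whole 3D volume.
    paired_patches = (list_of_dirty_patches, list_of_clean_patches)."""
    net = CompactRemovalNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    dirty, clean = (torch.from_numpy(np.stack(p)).float().unsqueeze(1)
                    for p in paired_patches)       # shapes: (N, 1, H, W)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.l1_loss(net(dirty), clean)  # L1 as a stand-in loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        slices = torch.from_numpy(volume).float().unsqueeze(1)  # (Z, 1, H, W)
        return net(slices).squeeze(1).numpy()
```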

Fundamental to our method is the fact that artifacts in most medical images exhibit localized patterns; they do not cover the entire image uniformly. It is almost always possible to identify “clean” regions (with few artifacts) and “dirty” regions (with significant artifacts) within an image. This makes it possible to synthesize paired, dirty-clean training patches from an image with artifacts. In addition, as the visual entropy inside a single image is much smaller than in a general external collection of images,30 the synthesized training data does not need to be large, and the associated artifact-reduction network can be compact and converge quickly.
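
As an illustration of this principle, the following sketch mines candidate clean patches with a simple local standard-deviation test and synthesizes their dirty counterparts by injecting Poisson noise. Both the threshold and the noise model are assumptions for illustration; the actual framework learns the synthesis from radiologist-annotated RoI, as described above.

```python
# Illustrative pairing of dirty-clean patches from one normalized CT slice.
# Local standard deviation is a simple proxy for "clean" regions, and Poisson
# noise is a crude stand-in for the learned five-layer synthesis network.
import numpy as np

def mine_clean_patches(slice_2d: np.ndarray, patch: int = 64,
                       std_thresh: float = 0.02, stride: int = 32):
    """Collect patches whose pixel standard deviation is low (few artifacts)."""
    clean = []
    h, w = slice_2d.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = slice_2d[y:y + patch, x:x + patch]
            if p.std() < std_thresh:
                clean.append(p.copy())
    return clean

def corrupt(patch: np.ndarray, photons: float = 1e4,
            rng=np.random.default_rng(0)) -> np.ndarray:
    """Synthesize a 'dirty' counterpart by injecting Poisson noise."""
    noisy = rng.poisson(np.clip(patch, 0, 1) * photons) / photons
    return noisy.astype(np.float32)

def make_pairs(slice_2d: np.ndarray):
    clean = mine_clean_patches(slice_2d)
    dirty = [corrupt(p) for p in clean]
    return dirty, clean  # feed these to the removal-network training step
```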


We evaluate the performance of this step with CT images containing different levels of Poisson noise collected by our wide-detector, 256-slice MDCT scanner with 8 cm of coverage, using the following protocol: collimation, (96–128)×0.625 mm; rotation time, 270 ms, which corresponds to a 135-ms standard temporal resolution; slice thickness, 0.9 mm; and reconstruction interval, 0.45 mm. Adaptive axial z-collimation was used to optimize the cranio-caudal length. Data was obtained at 40%–50% of the RR interval, using a 5% phase tolerance around the 45% phase. All CT images are qualitatively evaluated by our radiologists on structure preservation and artifact levels. For quantitative evaluation, due to the lack of ground truth, we follow most existing works24,28 and select the most homogeneous area in regions of interest chosen by radiologists. The standard deviation (artifact level) of the pixels in the area should be as low as possible.
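
For clarity, the quantitative criterion amounts to the pixel standard deviation inside the selected homogeneous region, as in the short sketch below; the RoI coordinates are hypothetical.

```python
# Artifact level = standard deviation of pixel values within a
# radiologist-selected homogeneous RoI (lower is better); no ground truth needed.
import numpy as np

def artifact_level(image: np.ndarray, roi: tuple[slice, slice]) -> float:
    """roi is e.g. (slice(120, 160), slice(200, 240)), chosen by a radiologist."""
    return float(image[roi].std())

# Relative improvement over a baseline can then be computed as
# (std_baseline - std_ours) / std_baseline.
```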

Our method trains and tests on each individual patient’s image, and no pre-training is involved. We compare it with a state-of-the-art, deep learning-based, medical-image artifact-reduction method, the cycle-consistent adversarial denoising network (CCADN),9 trained following the exact settings reported by its authors. The CT training data set for CCADN contains 100,000 image patches. We consider both the ideal situation, where test images only contain Poisson noise at levels present in the training set, and non-ideal situations, where test images have higher noise levels. Figure 3(a) shows the results. Qualitatively, our method and CCADN both preserve structures well in ideal and non-ideal situations. Our method outperforms CCADN even at the noise levels that CCADN is trained to reduce, that is, the regions in Figure 3(a)(1). Quantitatively, our method beats CCADN in both ideal and non-ideal situations, achieving up to 29.2% lower standard deviation (18.6% lower on average).

Figure 3. Step-by-step performance of AI-CHD.

Though our method is trained and tested on each test image, it has almost the same execution time as CCADN, thanks to the significantly reduced network complexity and the faster convergence enabled by the low visual entropy within a single image.

Medical image segmentation. CHD usually comes with significant variations in heart structures and great-vessel connections, which renders general whole-heart and great-vessel segmentation methods18,22 designed for normal anatomy ineffective. Most existing segmentation methods for CHD target only the blood pool and myocardium.23,29 Recently, semi-automated segmentation in CHD has also been explored,17 requiring users to locate an initial seed. However, fully automated whole-heart and great-vessel segmentation in CHD remains a missing piece in the literature. Inspired by the success of graph matching in several applications with large variations,12 we propose to combine deep learning25 and graph matching for fully automated whole-heart and great-vessel segmentation in CHD.26 Specifically, we leverage deep learning to segment the four chambers and myocardium, followed by the blood pool, where variations are usually small and accuracy can be high. We then extract the vessel-connection information and apply graph matching to determine the categories of all the vessels.

The overall flow for whole-heart and great-vessel segmentation of the left ventricle (LV), right ventricle (RV), left atrium (LA), right atrium (RA), myocardium (Myo), aorta (Ao), and pulmonary artery (PA) contains two modules: whole-heart segmentation and great-vessel segmentation. In whole-heart segmentation, RoI cropping is first performed to extract the area that includes the heart and its surrounding vessels. We resize the input image to a low resolution of 64×64×64 and then adopt the same segmentation-based extraction as Payer et al.18 to get the RoI. The RoI is then resized to 64×64×64 and fed into a 3D U-net5 for segmentation.
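
The two-stage flow can be summarized by the following simplified sketch, in which a tiny 3D convolutional stack is only a placeholder for the localization network and the 3D U-net, and the coordinate handling is an illustrative assumption rather than the exact AI-CHD implementation.

```python
# Simplified two-stage whole-heart flow: localize on a downsampled volume,
# crop the RoI at full resolution, then segment the resized RoI.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_3d_net(out_classes: int) -> nn.Module:
    # Placeholder for both the localization network and the 3D U-net.
    return nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, out_classes, 1),
    )

def resize_volume(vol: torch.Tensor, size=(64, 64, 64)) -> torch.Tensor:
    # vol: (1, 1, D, H, W); trilinear resampling to 64x64x64.
    return F.interpolate(vol, size=size, mode="trilinear", align_corners=False)

def segment_whole_heart(volume: np.ndarray, loc_net: nn.Module, seg_net: nn.Module):
    """Assumes both networks are already trained and class 0 is background."""
    vol = torch.from_numpy(volume).float()[None, None]      # (1, 1, D, H, W)
    with torch.no_grad():
        fg = loc_net(resize_volume(vol)).argmax(1)[0] > 0   # coarse heart mask at 64^3
        idx = fg.nonzero().float()                          # heart voxel coordinates
        scale = torch.tensor(volume.shape, dtype=torch.float32) / 64.0
        lo = (idx.min(0).values * scale).long().tolist()    # RoI bounds at full resolution
        hi = (idx.max(0).values * scale).long().tolist()
        roi = vol[:, :, lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1]
        labels = seg_net(resize_volume(roi)).argmax(1)[0]   # 7-class map at 64^3
    return labels.numpy(), (lo, hi)
```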

In great-vessel segmentation, blood-pool segmentation is conducted on each 2D slice of the input using a 2D U-net19 with an input size of 512×512. Note that to detect the blood-pool boundary for easy graph extraction in the graph-matching step later, we add another class, blood-pool boundary, to the segmentation. With the high-resolution blood-pool segmentation, the whole-heart module refines the chamber and myocardium boundaries. By removing the blood pool corresponding to the low-resolution whole-heart segmentation, the great-vessel module obtains the blood pool corresponding to the great vessels and adopts graph matching to identify Ao, PA, and anomalous vessels.
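
The graph step can be illustrated with the heavily simplified sketch below: connected components of the great-vessel blood pool become graph nodes, spatial adjacency to the segmented chambers supplies the edges, and a toy matching rule assigns the Ao and PA labels. The actual graph matching in AI-CHD is considerably richer; this only shows the data flow.

```python
# Toy vessel-graph construction and labeling; a drastically simplified
# stand-in for AI-CHD's graph matching.
import numpy as np
from scipy import ndimage

def vessel_graph(vessel_mask: np.ndarray, chamber_labels: np.ndarray):
    """vessel_mask: binary 3D mask of the great-vessel blood pool.
    chamber_labels: 3D map with LV=1, RV=2, LA=3, RA=4 from the whole-heart step."""
    comps, n = ndimage.label(vessel_mask)
    edges = {}
    for c in range(1, n + 1):
        # Dilate each vessel component and record which chambers it touches.
        grown = ndimage.binary_dilation(comps == c, iterations=2)
        edges[c] = {int(v) for v in np.unique(chamber_labels[grown])} - {0}
    return edges

def label_vessels(edges):
    """Toy matching rule: a vessel leaving the LV is the aorta, one leaving
    the RV is the pulmonary artery; the rest are anomalous-vessel candidates."""
    LV, RV = 1, 2
    labels = {}
    for c, touched in edges.items():
        if LV in touched:
            labels[c] = "Ao"
        elif RV in touched:
            labels[c] = "PA"
        else:
            labels[c] = "anomalous"
    return labels
```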

For evaluation, we collected a dataset composed of 68 3D CT images captured by a Siemens Biograph 64 machine. The ages of the associated patients range from one month to 21 years, with the majority between one month and two years. The size of the images is 512×512×(130–340), and the typical voxel size is 0.25×0.25×0.5 mm³. The dataset covers 14 types of CHD. The six common types are atrial septal defect (ASD), atrioventricular septal defect (AVSD), patent ductus arteriosus (PDA), pulmonary stenosis (PS), ventricular septal defect (VSD), and coarctation (CA); the eight less-common ones are Tetralogy of Fallot (ToF), transposition of the great arteries (TGA), pulmonary artery sling (PAS), anomalous drainage (AD), common arterial trunk (CAT), aortic arch anomalies (AAA), single ventricle (SV), and pulmonary atresia (PuA).

Table. Mean and standard deviation (SD) of Dice score (in %) of the state-of-the-art method, Seg-CNN,18 and our method for the seven substructures of whole-heart and great-vessel segmentation.

All labeling was performed by experienced radiologists, and the labeling time per image was two to three hours. The labels include seven substructures: LV, RV, LA, RA, Myo, Ao, and PA. For easy processing, the venae cavae (VC) and the pulmonary vein (PV) are also labeled as part of RA and LA, respectively, as they are connected and their boundaries are relatively hard to define. Anomalous vessels are also labeled as one of the above seven substructures based on their connections. The comparison with Seg-CNN18 is shown in the Table. Our method achieves a mean Dice score between 5.8% and 19.2% higher across the seven substructures (12% higher on average) with almost the same standard deviation. The highest improvement is achieved on Ao, due to its simple graph connection and successful graph matching. The smallest improvement is obtained on the myocardium because it is not well considered in the high-resolution blood-pool segmentation. Figure 3(b) shows a visualization of CAT segmentation using our method and Seg-CNN. Our method clearly segments Ao and PA, with some slight mis-segmentation between PA and LA. Seg-CNN, however, segments the main part of Ao as PA, since pixel-level segmentation by U-net is based only on the surrounding pixels and the connection information is not well exploited.
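
For reference, the Dice score reported in the Table is computed per substructure as twice the overlap between the predicted and ground-truth masks divided by their total size, as in this short sketch.

```python
# Per-substructure Dice score (in %): Dice = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 100.0 * 2.0 * np.logical_and(p, g).sum() / denom if denom else float("nan")

# Example over the seven substructures (assuming LV=1 ... PA=7 in the label maps):
# scores = {name: dice(pred, gt, k) for k, name in enumerate(
#     ["LV", "RV", "LA", "RA", "Myo", "Ao", "PA"], start=1)}
```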


Real-Time Video and Audio Transmission

Real-time video and audio transmission is also a critical part of surgical telementoring. Such transmission needs high data rates and low latency so that important decisions can be made in real time to avoid potential complications. In addition, wireless transmission is always preferred over wired transmission in an operating room. The most common wireless transmission method so far, 4G, offers about 50-ms latency and a 10-Mbps average data rate. Such moderate transmission quality and speed can support less-complex telementoring procedures, such as addiction management20 and training.2

5G wireless communication is emerging, offering about 10-ms latency (1 ms in special cases) and higher average data rates of 50+ Mbps. Such high transmission quality has enabled many complex procedures, including an intestinal tumor procedure,15 liver removal in a laboratory test animal,6 laparoscopic cholecystectomy,27 and gall bladder surgery.4 However, surgical telementoring has not been reported in cardiac surgery for CHD treatment, possibly due to its extremely high complexity and risk.


AI-CHD Case Study

On April 3, 2019, we used AI-CHD to perform China’s first 5G-based heart telementoring procedure, with collaboration between Guangdong Provincial People’s Hospital (GPPH) and the People’s Hospital of Gaozhou (PHG). Guangdong Mobile Communications Group and Huawei provided 5G capabilities. AI-CHD produced an accurate heart model of the patient, which was used for pre-surgical planning and training, and for real-time guidance during the surgery.

The 41-year-old patient, Ms. Green (alias), from Gaozhou, Guangdong Province, was experiencing shortness of breath, chest pain, difficulty walking, and insomnia. She was diagnosed at PHG with atrial septal defect (ASD) with tricuspid insufficiency, severe pulmonary hypertension, and cardiac failure. ASD is a moderate type of CHD that can be treated through a relatively easy, low-risk operation at a young age; however, her condition had not been detected due to the lack of screening availability in the rural area where she lived. The untreated ASD brought about other conditions, such as severe pulmonary hypertension and heart failure, turning a relatively simple CHD into a complex one. Only surgery could save her heart.

Because PHG is in a less-developed region, its surgeons only have experience in conventional open-heart surgery, a higher-risk procedure, especially for a long-suffering patient such as Ms. Green. A minimally invasive, lower-risk surgery was preferred. However, Ms. Green would have had to travel 250 miles to GPPH, one of China’s largest cardiac medical centers and the nearest hospital with surgeons capable of performing such a complex procedure. Considering Ms. Green’s weakened condition, the journey was not feasible; telementoring was therefore the most suitable approach. After comprehensive discussion and analysis, Dr. Huiming Guo, GPPH’s chief physician of cardiac surgery, agreed to serve as the telementor for this surgery. Dr. Guo has extensive expertise in minimally invasive surgery and is internationally well known in the surgical treatment of CHD.

Before Ms. Green’s surgery, we first collected her 3D cardiac CT images from PHG. AI-CHD then produced her 3D heart model, see Figure 4(a), with a clinically acceptable accuracy of 0.81 (Dice score) for the surgery. Runtime was less than two minutes, much faster than manual segmentation—which could take two to three hours or even longer—thus significantly reducing the cost. With the 3D heart model, surgical planning and training were then performed. As shown in Figure 4(b), the heart model was first printed out with a 3D printer (thin vessels of pulmonary artery and pulmonary vein were removed, as they were not related to the surgery). The printed model provided a straightforward view of the heart’s appearance and structure and showed where the problem was. Dr. Guo used the printed model to discuss the surgical plan with other members of his team and the remote team at PHG. Once the plan was set, virtual surgical training was carried out via VR, shown in Figure 4(c).

Figure 4. AI-CHD case study.

As seen in Figure 4(d), VR enables doctors to enlarge and shrink specific heart structures, including the heart’s inner structures, and to perform virtual operations such as infusion and suture as practice. In this way, Dr. Guo could gain a comprehensive understanding of the surgery to be telementored and establish the best operations and parameters for effective guidance.

After Dr. Guo confirmed the surgical plan and the detailed process with the help of AI-CHD, telementoring of Ms. Green’s surgery began at 9:35 AM on April 3, 2019. Figure 4(d) shows pictures of the guidance team at GPPH and the actual surgery at PHG. The procedure featured four real-time video streams: the view of the surgeon, the corresponding VR view of the heart, the telementor (Dr. Guo), and the operating room. Since this was China’s first CHD telementoring surgery, doctors from the cardiac surgery, cardiac ultrasound, and cardiac imaging departments all showed up to observe the event, as seen in the telementor view.

Based on the real-time view of the surgeon and the corresponding VR scene, Dr. Guo could easily recognize the current view of the heart and offer immediate guidance via a real-time audio stream. For example, when determining the opening point at the pericardium, the surgeon asked, “Should the pericardium be opened here?” Dr. Guo answered instantaneously, “Move up three centimeters.” For the suture, also a key part of the surgery, Dr. Guo reminded the surgeon, “Do not be too close to Koch’s triangle when stitching; otherwise, it is easy to cause myocardial injury and block the conduction of the heart rhythm.” Koch’s triangle was drawn on the VR video and shown to the surgeons in the operating room 250 miles away in real time.

Throughout the telementoring, the transmission rate for the video streams stabilized at around 25 Mbps with a latency of 30 ms. The surgery went smoothly and finished at 1:00 PM. The heart was sutured, Ms. Green’s heart resumed beating, and after a week in recovery, she was discharged. The post-operative review showed that pulmonary artery pressure and mitral regurgitation were within the normal range. The patient remains in good health as of the writing of this article (Dec. 1, 2019). Our case study has received extensive coverage from some of the biggest and most influential news organizations in China, including Xinhua27 and Global Times.7


Looking Forward

This is the third year of collaboration between computer scientists, cardiac surgeons, and radiologists in our team. The original work of artifact reduction and segmentation of cardiac CT images has gradually evolved into a holistic system of 3D heart-model construction. In the future, in addition to further optimizing AI-CHD, we plan to explore the following four promising areas for automatic and cost-efficient treatment of CHD:

Artifact-aware segmentation. The current approach involves two steps: artifact reduction and segmentation. However, it may be possible to improve efficiency by performing segmentation directly on noisy images in a single step, as shown in Figure 5(a). We believe this is promising for two reasons. First, artifacts typically display certain patterns, making it possible to capture and remove them while segmenting targets jointly in one neural network. Second, while obtaining a training dataset may be challenging, we can use our artifact-reduction method or other existing methods to get clean images and then perform manual labeling.
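
One speculative way to realize this idea, shown purely for illustration, is a single network with a shared encoder and two heads trained jointly with an artifact-reduction loss and a segmentation loss. The architecture and loss weighting below are assumptions, not an implemented AI-CHD component.

```python
# Illustrative joint artifact-reduction-and-segmentation model: a shared
# encoder with a cleaning head and a segmentation head, trained jointly.
import torch
import torch.nn as nn

class ArtifactAwareSeg(nn.Module):
    def __init__(self, classes: int = 8, ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.clean_head = nn.Conv3d(ch, 1, 1)       # artifact-reduced image
        self.seg_head = nn.Conv3d(ch, classes, 1)   # segmentation logits

    def forward(self, x):
        feats = self.encoder(x)
        return self.clean_head(feats), self.seg_head(feats)

def joint_loss(model, noisy, clean, labels, w: float = 0.5):
    """'clean' images and 'labels' are assumed to come from cleaning first
    and labeling manually, as suggested in the text; w is an arbitrary weight."""
    pred_clean, logits = model(noisy)
    return (w * nn.functional.l1_loss(pred_clean, clean)
            + nn.functional.cross_entropy(logits, labels))
```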

Figure 5. Future work of AI-CHD on medical image-artifact reduction and segmentation: Artifact-aware segmentation (a) that directly performs segmentation on noisy images rather than artifact reduction first and then segmentation, and graph-aware segmentation (b) that takes graph information among thin vessels into consideration for accurate segmentation.

Graph-aware segmentation. Our method and other existing methods still struggle to correctly segment thin vessels, especially thin PA vessels. The main reason is that the rich connection information among these vessels is not well exploited. As shown in Figure 5(b), we may extract graphs of these thin vessels from the blood-pool segmentation results and the whole-heart and great-vessel segmentation results to represent connection information. Then, graph-aware analysis that takes both segmentation and connection information into consideration can be performed to obtain more accurate segmentation results.

Automatic diagnosis. Accurate diagnosis of CHD is even more important than artifact reduction and segmentation. The lack of CHD diagnosis experience in developing regions means many cases are not diagnosed correctly and miss timely treatment.16 Gaining the expertise to make such a diagnosis requires more than 10 years of training for a radiologist, which is time-consuming and costly. Even experienced radiologists may need up to half an hour to diagnose a patient with CHD. Thus, automatic CHD diagnosis is preferred, for its ability to provide large-scale, high-quality, cost-efficient medical care. To be clinically acceptable, automatic CHD diagnosis also needs to report the features or reasons behind the diagnosis with an accompanying confidence score. Radiologists could then more easily verify the results, and low confidence scores would flag cases in need of manual diagnosis.

Automatic surgery planning. Due to the large structural variations in CHD, dozens of surgical procedures exist, each with parameters such as opening point, incision size, and direction. Currently, surgeons plan based on their experience, which may or may not be the optimal choice in terms of prognosis. We will further extend AI-CHD to enable accurate, automatic surgical planning for optimal treatment.


Conclusion

AI-CHD is an accurate, AI-based framework for surgical telementoring of CHD. It is developed through deep collaborations between computer scientists, radiologists, and surgeons. The technology enables cost-effective and timely model construction of hearts in CHD, which assists radiologists and surgeons with performing efficient surgical planning and training in CHD surgery, as demonstrated by the case study. AI-CHD can reduce costs while improving the quality of CHD surgery telementoring in developing countries and regions.


Acknowledgments

This work was approved by the Research Ethics Committee of Guangdong General Hospital, Guangdong Academy of Medical Science under Protocol No. 20140316. It was supported by the National Key Research and Development Program of China (2018YFC1002600), the Science and Technology Planning Project of Guangdong Province, China (No.2017B090904034, 2017B030314109, 2018B090944002, 2019B020230003), Guangdong Peak Project (DFJH201802), and the National Natural Science Foundation of China (No.62006050).

    1. Bernier, P-L., Stefanescu, A., Samoukovic, G., and Tchervenkov, C.I. The challenge of congenital heart disease worldwide: Epidemiologic and demographic facts. In Seminars in Thoracic and Cardiovascular Surgery: Pediatric Cardiac Surgery Annual 13, Elsevier (2010), 26–34.

    2. Bogen, E.M., Augestad, K.M., Patel, H.R.H., and Lindsetmo, R-O. Telementoring in education of laparoscopic surgeons: An emerging technology. World Journal of Gastrointestinal Endoscopy 6, 5 (2014), 148.

    3. Chen, Y-J., Chang, Y-J., Wen, S-C., Shi, Y., Xu, X., Ho, T-Y., Jia, Q., Huang, M., and Zhuang, J. Zero-shot medical image artifact reduction. In 2020 IEEE 17th Intern. Symp. on Biomedical Imaging (ISBI), 862–866.

    4. Chinese surgeons conduct remote surgery using 5G technology. The Times of India (June 11, 2019), https://timesofindia.indiatimes.com/world/china/chinese-surgeons-conduct-remote-surgery-using-5g-technology/articleshow/69742530.cms.

    5. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., and Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Intern. Conf. on Medical Image Computing and Computer-Assisted Intervention. Springer (2016), 424–432.

    6. Cuthbertson, A. Surgeon performs world's first remote operation using 5G surgery on animal in China. Independent (Jan. 17, 2019), https://www.independent.co.uk/life-style/gadgets-and-tech/news/5g-surgery-china-robotic-operation-a8732861.html.

    7. First AI+5G surgery completed with Huawei's technological support. Global Times (2019), http://www.globaltimes.cn/content/1144600.shtml.

    8. Huang, E.Y., Knight, S., Guetter, C.R., Davis, C.H., Moller, M., Slama, E., and Crandall, M. Telemedicine and telementoring in the surgical specialties: A narrative review. The American Journal of Surgery 218, 4 (2019), 760–766.

    9. Kang, E., Koo, H.J., Yang, D.H., Seo, J.B., and Ye, J.C. Cycle consistent adversarial denoising network for multiphase coronary CT angiography. (2018), arXiv preprint arXiv:1806.09748.

    10. Kempny, A., Dimopoulos, K., Uebing, A., Diller, G-P., Rosendahl, U., Belitsis, G., Gatzoulis, M.A., and Wort, S.J. Outcome of cardiac surgery in patients with congenital heart disease in England between 1997 and 2015. PLoS One 12, 6 (2017), e0178963.

    11. Lacy, A.M., Bravo, R., Otero-Piñeiro, A.M., Pena, R., De Lacy, F.B., Menchaca, R., and Balibrea, J.M. 5G-assisted telementored surgery. British Journal of Surgery 106, 12 (2019), 1576–1579.

    12. Lajevardi, S.M., Arakala, A., Davis, S.A., and Horadam, K.J. Retina verification system based on biometric graph matching. IEEE Transactions on Image Processing 22, 9 (2013), 3625–3635.

    13. Marescaux, J. and Rubino, F. The ZEUS robotic system: Experimental and clinical applications. Surgical Clinics 83, 6 (2003), 1305–1315.

    14. Nifong, L.W., Chu, V.F., Bailey, B.M., Maziarz, D.M., Sorrell, V.L., Holbert, D., and Chitwood, Jr., W.R. Robotic mitral valve repair: Experience with the da Vinci system. The Annals of Thoracic Surgery 75, 2 (2003), 438–443.

    15. Nita, R. World's first 5G-powered surgery: Dr. Antonio de Lacy. World Record Academy (2019), https://www.worldrecordacademy.org/medical/worlds-first-5g-powered-surgery-dr-antonio-de-lacy-219142.

    16. Ntiloudi, D., Giannakoulas, G., Parcharidou, D., Panagiotidis, T., Gatzoulis, M.A., and Karvounis, H. Adult congenital heart disease: A paradigm of epidemiological change. International J. of Cardiology 218 (2016), 269–274.

    17. Pace, D.F., Dalca, A.V., Brosch, T., Geva, T., Powell, A.J., Weese, J., Moghari, M.H., and Golland, P. Iterative segmentation from limited training data: Applications to congenital heart disease. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer (2018), 334–342.

    18. Payer, C., Štern, D., Bischof, H., and Urschler, M. Multi-label whole heart segmentation using CNNs and anatomical label configurations. In Intern. Workshop on Statistical Atlases and Computational Models of the Heart. Springer (2017), 190–198.

    19. Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Intern. Conf. on Medical Image Computing and Computer-Assisted Intervention, Springer (2015), 234–241.

    20. Sagi, M.R., Aurobind, G., Chand, P., Ashfak, A., Karthick, C., Kubenthiran, N., Murthy, P., Komaromy, M., and Arora, S. Innovative telementoring for addiction management for remote primary care physicians: A feasibility study. Indian Journal of Psychiatry 60, 4 (2018), 461.

    21. Van Der Linde, D., Konings, E.E.M., Slager, M.A., Witsenburg, M., Helbing, W.A., Takkenberg, J.J.M., and Roos-Hesselink, J.W. Birth prevalence of congenital heart disease worldwide: A systematic review and meta-analysis. Journal of the American College of Cardiology 58, 21 (2011), 2241–2247.

    22. Wang, C., MacGillivray, T., Macnaught, G., Yang, G., and Newby, D. A two-stage 3D Unet framework for multi-class segmentation on full resolution image. (2018), arXiv preprint arXiv:1804.04341.

    23. Wolterink, J.M., Leiner, T., Viergever, M.A., and Išgum, I. Dilated convolutional neural networks for cardiovascular MR segmentation in congenital heart disease. In Reconstruction, Segmentation, and Analysis of Medical Images. Springer (2016), 95–102.

    24. Wolterink, J.M., Leiner, T., Viergever, M.A., and Išgum, I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Transactions on Medical Imaging 36, 12 (2017), 2536–2545.

    25. Xu, X., Liu, Q., Yang, L., Hu, S., Chen, D., Hu, Y., and Shi, Y. Quantization of fully convolutional networks for accurate biomedical image segmentation. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (2018), 8300–8308.

    26. Xu, X., Wang, T., Shi, Y., Yuan, H., Jia, Q., Huang, M., and Zhuang, J. Whole heart and great vessel segmentation in congenital heart disease using deep neural networks and graph matching. In Intern. Conf. on Medical Image Computing and Computer-Assisted Intervention, Springer (2019), 477–485.

    27. Yamei. 5G remote surgery conducted in central China. Xinhua (2019), http://www.xinhuanet.com/english/2019-06/11/c_138134223.htm.

    28. Yang, Q., Yan, P., Zhang, Y., Yu, H., Shi, Y., Mou, X., Kalra, M.K., Zhang, Y., Sun, L., and Wang, G. Low dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Transactions on Medical Imaging (2018).

    29. Yu, L., Yang, X., Qin, J., and Heng, P-A. 3D FractalNet: Dense volumetric segmentation for cardiovascular MRI volumes. In Reconstruction, Segmentation, and Analysis of Medical Images. Springer (2016), 103–110.

    30. Zontak, M. and Irani, M. Internal statistics of a single natural image. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference, 977–984.
