1
00:00:00,100 --> 00:00:02,255
This video presents a new approach

2
00:00:02,260 --> 00:00:04,695
to reconstruction of Computed Tomography

3
00:00:04,735 --> 00:00:06,915
data driven by geometric techniques.

4
00:00:07,000 --> 00:00:08,875
Computed tomography is one

5
00:00:08,900 --> 00:00:10,510
of the most popular modalities

6
00:00:10,520 --> 00:00:12,615
because of its capability to visualize

7
00:00:12,710 --> 00:00:14,198
internal organs of the body

8
00:00:14,200 --> 00:00:16,480
with high resolution.

9
00:00:16,485 --> 00:00:18,445
In computed tomography, an unknown

10
00:00:18,530 --> 00:00:21,685
object is reconstructed from projection images.

11
00:00:21,695 --> 00:00:23,485
To acquire the projection images,

12
00:00:23,545 --> 00:00:25,165
we have an x-ray source

13
00:00:25,175 --> 00:00:26,920
and an image detector.

14
00:00:27,015 --> 00:00:29,375
Then, data are obtained by

15
00:00:29,385 --> 00:00:31,615
exposing a patient or an

16
00:00:31,860 --> 00:00:32,990
object to a beam of x-rays,

17
00:00:33,040 --> 00:00:34,781
which can be understood as

18
00:00:34,850 --> 00:00:36,211
a set of rays penetrating

19
00:00:36,270 --> 00:00:37,625
a weighted region.

20
00:00:38,035 --> 00:00:39,180
Then we move our imaging system

21
00:00:39,265 --> 00:00:40,870
to a different position and

22
00:00:41,030 --> 00:00:43,050
generate an additional projection image.

23
00:00:43,130 --> 00:00:45,040
We continue this rotation and

24
00:00:45,135 --> 00:00:47,095
acquire many projection images

25
00:00:47,100 --> 00:00:48,965
at up to a thousand different positions.

26
00:00:49,070 --> 00:00:51,436
Here you see an example of

27
00:00:51,550 --> 00:00:52,475
projection images of a rabbit.

28
00:00:52,550 --> 00:00:55,515
We used 220 positions

29
00:00:55,515 --> 00:00:56,820
to capture these images.
30
00:00:57,045 --> 00:00:58,653
These projection data are then

31
00:00:58,725 --> 00:01:00,685
reconstructed using standard techniques

32
00:01:00,925 --> 00:01:02,680
based on the Radon transform.

33
00:01:02,700 --> 00:01:04,890
The volume is reconstructed by

34
00:01:04,980 --> 00:01:06,765
back-projecting the acquired data.

35
00:01:06,933 --> 00:01:09,215
The summation is performed over

36
00:01:09,365 --> 00:01:10,065
all projection angles.

37
00:01:10,160 --> 00:01:11,850
Here you see how a

38
00:01:11,900 --> 00:01:14,090
slice of the rabbit data is reconstructed.

39
00:01:14,330 --> 00:01:16,785
The Nyquist sampling criterion limits

40
00:01:16,825 --> 00:01:19,140
techniques based on the Radon transform.

41
00:01:19,145 --> 00:01:21,795
Therefore, these techniques provide

42
00:01:21,815 --> 00:01:23,310
reduced image quality when

43
00:01:23,340 --> 00:01:24,630
using a reduced number of

44
00:01:24,675 --> 00:01:27,080
projections. But using fewer

45
00:01:27,085 --> 00:01:29,305
projections reduces the radiation dose

46
00:01:29,355 --> 00:01:31,260
and the incidence of

47
00:01:31,280 --> 00:01:32,995
cancer in the patient population.

48
00:01:33,100 --> 00:01:34,875
The literature shows that

49
00:01:34,895 --> 00:01:36,955
the high radiation dose used

50
00:01:36,965 --> 00:01:39,635
in standard CT can lead to cancer.

51
00:01:39,740 --> 00:01:41,260
Compressed sensing presents a new

52
00:01:41,285 --> 00:01:44,080
method to capture and represent compressible

53
00:01:44,095 --> 00:01:46,395
signals at a sampling rate significantly

54
00:01:46,490 --> 00:01:49,215
below the Nyquist rate. The main

55
00:01:49,340 --> 00:01:50,805
idea behind this new approach can

56
00:01:50,885 --> 00:01:52,970
be summarized as follows: instead of

57
00:01:53,010 --> 00:01:54,905
directly reconstructing a signal

58
00:01:55,075 --> 00:01:57,018
or an image, the projection

59
00:01:57,070 --> 00:01:59,550
data are transformed into a sparse version.
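[Note: the back-projection summation described above can be sketched in code. This is a minimal toy illustration, not the pipeline from the video: it assumes a tiny NumPy image and only two parallel-beam angles (0 and 90 degrees), with names like `project` and `backproject` chosen for illustration.]

```python
import numpy as np

def project(img, angles=(0, 90)):
    """Forward-project a square image: sum intensities along rays.
    Only 0 and 90 degrees are supported in this toy sketch."""
    sinogram = {}
    for a in angles:
        if a == 0:
            sinogram[a] = img.sum(axis=0)   # vertical rays
        elif a == 90:
            sinogram[a] = img.sum(axis=1)   # horizontal rays
    return sinogram

def backproject(sinogram, n):
    """Unfiltered back-projection: smear each projection back
    across the image and average over all projection angles."""
    recon = np.zeros((n, n))
    for a, p in sinogram.items():
        if a == 0:
            recon += np.tile(p, (n, 1)) / n
        elif a == 90:
            recon += np.tile(p[:, None], (1, n)) / n
    return recon / len(sinogram)

phantom = np.zeros((4, 4))
phantom[1:3, 1:3] = 1.0                     # a small square "object"
recon = backproject(project(phantom), 4)
```

With only two angles the result is blurry, which mirrors the point made above: Radon-based methods need many projection angles to satisfy the Nyquist criterion.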
60
00:01:59,585 --> 00:02:02,115
Then the signal or image is

61
00:02:02,245 --> 00:02:04,390
reconstructed in a dimensionality-reduced

62
00:02:04,415 --> 00:02:07,635
space by minimizing the L1 distance.

63
00:02:07,840 --> 00:02:09,995
To use this method, the

64
00:02:10,010 --> 00:02:11,960
signal to be reconstructed needs

65
00:02:11,985 --> 00:02:13,910
to have sparse-like attributes,

66
00:02:14,085 --> 00:02:16,540
which standard medical images do not have.

67
00:02:16,650 --> 00:02:18,320
We observed that standard CT

68
00:02:18,355 --> 00:02:20,525
techniques converge quickly when the

69
00:02:20,760 --> 00:02:22,895
intensity of all voxels is similar.

70
00:02:22,990 --> 00:02:24,745
This occurs because the value

71
00:02:24,750 --> 00:02:26,135
placed in each voxel along each

72
00:02:26,300 --> 00:02:29,050
penetrating ray will be close to its

73
00:02:29,175 --> 00:02:31,360
actual intensity when all voxels

74
00:02:31,445 --> 00:02:33,013
have similar intensities.

75
00:02:33,135 --> 00:02:35,865
Additionally, we observed that when

76
00:02:36,040 --> 00:02:38,105
plotting the entropy as a function of the

77
00:02:38,288 --> 00:02:40,235
number of projections used in the

78
00:02:40,320 --> 00:02:42,595
reconstruction, a plateau is visible

79
00:02:42,710 --> 00:02:45,365
even after a small number of projections.

80
00:02:45,395 --> 00:02:48,520
Thus, it should be possible to

81
00:02:48,590 --> 00:02:50,465
reconstruct high-quality images from

82
00:02:50,555 --> 00:02:52,255
a limited number of projections.

83
00:02:52,315 --> 00:02:54,695
Based on these observations, our

84
00:02:54,760 --> 00:02:56,880
reconstruction strategy resembles a

85
00:02:56,935 --> 00:02:59,205
geometric compressed-sensing approach.

86
00:02:59,330 --> 00:03:01,615
As the sparse transform, we

87
00:03:01,705 --> 00:03:03,240
treat each region of the

88
00:03:03,295 --> 00:03:06,165
to-be-reconstructed image as a separate entity.
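[Note: the L1-minimizing recovery mentioned above can be illustrated with a generic toy, not the video's method: a few iterations of ISTA (iterative soft thresholding) recovering a 2-sparse vector from under-sampled Gaussian measurements. All names, sizes, and the measurement matrix are assumptions of this sketch.]

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrinks values toward zero,
    which is what drives sparsity in compressed-sensing recovery."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.05, steps=1000):
    """ISTA iterations for  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm
    for _ in range(steps):
        x = soft_threshold(x - t * A.T @ (A @ x - y), t * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))           # 20 measurements, 40 unknowns
x_true = np.zeros(40)
x_true[[3, 17]] = [2.0, -1.5]               # a 2-sparse signal
x_hat = ista(A, A @ x_true)                 # recover below the Nyquist rate
```

The recovery succeeds even though there are half as many measurements as unknowns, precisely because the unknown is sparse, which is the prerequisite noted above that standard medical images do not meet directly.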
89
00:03:06,220 --> 00:03:08,635
Let's review our approach step by step.

90
00:03:08,695 --> 00:03:10,340
We have an unknown object.

91
00:03:10,380 --> 00:03:12,490
A Radon-based technique is used

92
00:03:12,520 --> 00:03:14,075
to generate a poor-quality

93
00:03:14,145 --> 00:03:16,340
reconstruction. To obtain a rough

94
00:03:16,385 --> 00:03:18,500
segmentation of our object of interest,

95
00:03:18,750 --> 00:03:21,085
we make use of models or priors.

96
00:03:21,105 --> 00:03:23,990
To make use of our earlier

97
00:03:24,030 --> 00:03:26,235
observation, we have to classify the rays

98
00:03:26,310 --> 00:03:27,790
that penetrate the object.

99
00:03:27,985 --> 00:03:29,885
As you see here, the rays

100
00:03:29,955 --> 00:03:31,365
that penetrate the object are

101
00:03:31,390 --> 00:03:33,320
shown in red, and the complement

102
00:03:33,490 --> 00:03:35,075
of these rays is shown in green.

103
00:03:35,120 --> 00:03:37,620
To classify our rays, we reduce

104
00:03:37,895 --> 00:03:39,860
the problem from 3D to 2D.

105
00:03:39,990 --> 00:03:42,930
Each plane of rays is handled separately.

106
00:03:43,060 --> 00:03:44,660
Now let's take a closer look

107
00:03:44,670 --> 00:03:47,270
at one of our 2D arrangements.

108
00:03:47,275 --> 00:03:49,125
In the 2D plane, we have

109
00:03:49,155 --> 00:03:50,825
a set of objects obtained from

110
00:03:51,035 --> 00:03:52,775
the rough segmentation, surrounded

111
00:03:52,860 --> 00:03:54,420
by lower-density regions called

112
00:03:54,500 --> 00:03:57,470
complements. To separate the reconstruction

113
00:03:57,495 --> 00:03:58,745
of the objects from that of

114
00:03:58,985 --> 00:04:01,345
the complements, we first approximate

115
00:04:01,498 --> 00:04:03,870
the rough boundary of the objects by

116
00:04:03,985 --> 00:04:07,640
simple polygons and then treat the polygons as obstacles.
117
00:04:08,000 --> 00:04:10,665
To efficiently classify all the rays,

118
00:04:10,880 --> 00:04:13,810
we use a point-line duality transformation

119
00:04:13,835 --> 00:04:15,725
on the set of boundary segments

120
00:04:15,960 --> 00:04:18,640
of the polygons. Each vertex of

121
00:04:18,665 --> 00:04:21,130
the polygons transforms to a dual

122
00:04:21,215 --> 00:04:24,135
line, and each ray maps to a point in

123
00:04:24,165 --> 00:04:27,095
the dual space. The set of dual lines

124
00:04:27,140 --> 00:04:29,815
forms an arrangement, which partitions the

125
00:04:29,895 --> 00:04:32,420
space into a set of convex cells.

126
00:04:32,585 --> 00:04:34,820
Each cell contains the set of rays

127
00:04:34,895 --> 00:04:37,850
intersecting the same subset of boundary

128
00:04:38,040 --> 00:04:40,525
segments of the polygons, that is,

129
00:04:40,750 --> 00:04:42,900
all rays traveling through the hourglass

130
00:04:43,030 --> 00:04:46,060
defined by the set of segments. All

131
00:04:46,175 --> 00:04:48,620
types of rays can be identified by sweeping the

132
00:04:48,675 --> 00:04:50,665
arrangement in an online fashion.

133
00:04:50,960 --> 00:04:53,200
To report all the cells and rays

134
00:04:53,235 --> 00:04:55,050
more efficiently, we use

135
00:04:55,065 --> 00:04:57,695
the topological peeling algorithm because of its

136
00:04:57,960 --> 00:05:00,085
optimal time and space complexities.

137
00:05:00,175 --> 00:05:02,360
Let's go back to 3D.

138
00:05:02,610 --> 00:05:05,125
We have shown how to optimally classify the

139
00:05:05,130 --> 00:05:08,425
rays for one plane by using geometric techniques.

140
00:05:08,770 --> 00:05:10,510
In our algorithm, we perform

141
00:05:10,575 --> 00:05:12,375
this for all projection images.
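[Note: the duality-based classification above can be sketched with a toy example. Under the common duality where the point (a, b) maps to the line y = a*x - b and the line y = m*x + c maps to the point (m, -c), a ray's line crosses a boundary segment pq exactly when its dual point lies between the dual lines of p and q, i.e. when p and q fall on opposite sides of the ray. This sketch handles only non-vertical rays and ignores the arrangement/topological-peeling machinery; function names are illustrative.]

```python
def side(m, c, p):
    """Signed position of point p relative to the ray's line y = m*x + c.
    Equals the dual line of p evaluated at x = m, minus the ray's
    dual height, so its sign is the same in primal and dual space."""
    return p[1] - (m * p[0] + c)

def ray_hits_polygon(m, c, polygon):
    """Classify a ray: does the line y = m*x + c cross any boundary
    segment of the polygon (vertices in order)?"""
    k = len(polygon)
    return any(side(m, c, polygon[i]) * side(m, c, polygon[(i + 1) % k]) < 0
               for i in range(k))

square = [(1, 1), (3, 1), (3, 3), (1, 3)]   # the "object" polygon
hit = ray_hits_polygon(0.0, 2.0, square)    # y = 2 passes through the square
miss = ray_hits_polygon(0.0, 5.0, square)   # y = 5 passes above it
```

Testing every ray against every segment this way costs O(rays x segments); the arrangement sweep and topological peeling mentioned above exist precisely to batch rays that hit the same subset of segments.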
142
00:05:12,490 --> 00:05:14,650
For the remainder of this presentation,

143
00:05:14,760 --> 00:05:17,070
two projections will represent

144
00:05:17,185 --> 00:05:20,225
the much larger number of images actually used.

145
00:05:20,260 --> 00:05:22,260
We reconstruct from the outside

146
00:05:22,360 --> 00:05:24,690
inward in a layer-by-layer fashion.

147
00:05:24,800 --> 00:05:27,350
First, we reconstruct the outer region

148
00:05:27,380 --> 00:05:29,910
by using the green rays. Next,

149
00:05:29,955 --> 00:05:32,808
we project the reconstructed outer region.

150
00:05:32,945 --> 00:05:34,915
The resulting data are subtracted

151
00:05:34,975 --> 00:05:36,690
from the original projections.

152
00:05:37,020 --> 00:05:39,960
Second, we reconstruct the outer object

153
00:05:39,990 --> 00:05:42,335
by using the red rays. We project

154
00:05:42,435 --> 00:05:44,160
the reconstructed object by

155
00:05:44,190 --> 00:05:46,440
using the remaining rays. The

156
00:05:46,510 --> 00:05:50,060
resulting data are subtracted from the projection space.

157
00:05:50,185 --> 00:05:52,375
Finally, we reconstruct the remaining

158
00:05:52,450 --> 00:05:54,775
object by using the yellow rays.

159
00:05:54,780 --> 00:05:57,050
In the end, we have produced a

160
00:05:57,120 --> 00:05:58,790
reliable reconstruction from

161
00:05:58,850 --> 00:06:00,630
a reduced number of projections.

162
00:06:00,815 --> 00:06:02,300
Now to our results:

163
00:06:02,375 --> 00:06:04,275
Here you see real data from our

164
00:06:04,355 --> 00:06:06,740
rabbit study. To the left is

165
00:06:06,835 --> 00:06:09,645
a rough segmentation of the object of interest.

166
00:06:09,770 --> 00:06:12,275
The center image shows the reconstruction

167
00:06:12,315 --> 00:06:15,525
from a reduced number of projections obtained

168
00:06:15,610 --> 00:06:17,425
using standard techniques.

169
00:06:17,495 --> 00:06:20,330
On the right, you see our reconstruction.
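[Note: the outside-in peeling loop described above — reconstruct a layer from the rays classified as seeing only that layer, project the estimate, subtract it from the measured data, repeat — can be sketched with a deliberately tiny toy. The constant-density layers, the column-sum projector, and all names here are assumptions of this sketch, not the operators used in the video.]

```python
import numpy as np

def forward_project(img):
    """Toy projector: one projection made of column sums."""
    return img.sum(axis=0)

def reconstruct_layer(residual, mask, cols):
    """Estimate the layer's (constant) density from rays in `cols`,
    which pass through this layer alone, and fill it into the mask."""
    counts = mask.sum(axis=0)
    value = residual[cols[0]] / counts[cols[0]]
    return value * mask

n = 4
outer = np.zeros((n, n)); outer[0, :] = 1.0    # outer layer, density 1
inner = np.zeros((n, n)); inner[2, 1:3] = 2.0  # inner object, density 2
data = forward_project(outer + inner)          # measured projections

residual = data.copy()
recon = np.zeros((n, n))
layers = [((outer > 0).astype(float), [0, 3]), # "green" rays miss the object
          ((inner > 0).astype(float), [1, 2])] # "red" rays hit the object
for mask, cols in layers:                      # outside-in, layer by layer
    est = reconstruct_layer(residual, mask, cols)
    recon += est
    residual = residual - forward_project(est) # subtract from projection space
```

After the final subtraction the residual is empty and `recon` matches the object, mirroring how each peeled layer removes its contribution from the projection data before the next layer is reconstructed.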
170
00:06:20,405 --> 00:06:22,755
Compared to the center image, which is

171
00:06:22,810 --> 00:06:25,315
calculated from the same number of projections,

172
00:06:25,400 --> 00:06:29,985
our result on the right has significantly better image quality.

173
00:06:31,310 --> 00:06:32,940
From the University at Buffalo, we thank

174
00:06:33,075 --> 00:06:34,875
you for your time and interest.