High resolution photography of Alcator C-Mod to develop compelling composite photos and virtual tours


R.T. Mumgaard, C. Bolin*


* Bolin Photography, Cambridge MA USA




A large set of photos of the interior of Alcator C-Mod was obtained at the end of the 2013 maintenance period. The purpose of the photos was to create realistic high-resolution representations of the interior of Alcator C-Mod for display online and in print. These will be distributed for public relations, outreach, and engineering documentation. A novel camera mounting and movement system was developed to allow automated photography and provide a stable platform for long exposures. A high-quality fisheye lens was rented and photography settings were developed to optimize the end product in the challenging invessel setting. 1500+ photos were taken over three days and post-processed thereafter. The photos were bracketed to allow for high dynamic range, which was tone mapped for presentation. Some of the photos were stitched together using open source software into a 92 megapixel image showing 93% of the Alcator C-Mod outer wall. Photos were also stitched together to form 360 deg x 180 deg projections at nine of the ten ports; these are then used to drive virtual tours on web platforms. Composed single photos were also taken. The photo repository was used to derive movies and a 3D point cloud of the vessel. Ideas for future efforts are discussed.


General setup


The photos were taken just prior to the vacuum vessel being pumped down, immediately before the LEDs, which are only used during maintenance periods, were removed. One of the ten large flanges (Gport) remained off to enable access, so this region is missing from the photographs. All the covers on delicate instrumentation were removed to enable photography of the system in a state as close to operational as possible.


A Nikon D80 DSLR (10.2 MP) with a Nikkor 10.5mm fisheye lens was used. This lens has a very large 180 deg diagonal field of view, enabling large portions of the vessel to be seen in a single photo, though with large distortion, which was later removed. This fast lens (f/2.8) was stopped down (to f/4.5) to increase depth of field, since features on the wall were 20 cm to 150 cm from the camera. Low gain (ISO 100) was used to reduce graininess at the expense of long exposures, which was tolerable since the camera and scene were stationary. The lens was manually focused to best balance the focus of the various features, and the camera was controlled remotely to enable automated operation and a stable platform for bracketing.
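The depth-of-field trade-off at these settings can be sketched with the standard thin-lens formulas. The 0.02 mm circle of confusion for the D80's DX-format sensor and the 1 m focus distance below are assumed values for illustration, not from this report:

```python
# Depth-of-field sketch for the settings described above (10.5 mm lens, f/4.5).
# Assumes the standard thin-lens DoF formulas and a 0.02 mm circle of
# confusion for the D80's DX sensor; the numbers are illustrative only.

def hyperfocal_mm(f_mm, N, coc_mm=0.02):
    """Hyperfocal distance: focusing here keeps everything from half this
    distance out to infinity acceptably sharp."""
    return f_mm ** 2 / (N * coc_mm) + f_mm

def dof_limits_mm(s_mm, f_mm, N, coc_mm=0.02):
    """Near and far limits of acceptable sharpness for focus distance s_mm."""
    H = hyperfocal_mm(f_mm, N, coc_mm)
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    if s_mm >= H:
        return near, float("inf")
    far = s_mm * (H - f_mm) / (H - s_mm)
    return near, far

H = hyperfocal_mm(10.5, 4.5)                 # ~1.24 m for this lens/aperture
near, far = dof_limits_mm(1000, 10.5, 4.5)   # focused at an assumed 1 m
print(f"hyperfocal {H/1000:.2f} m; DoF at 1 m focus: "
      f"{near/1000:.2f} m to {far/1000:.2f} m")
```

Note the near limit falls short of the closest 20 cm features, consistent with the need to manually balance focus across the scene rather than rely on depth of field alone.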


High dynamic range (HDR) and mild tone mapping were necessary to create an appealing photo for the web and print due to the weak lighting inside the tokamak and the many reflections from the metal components. At each position a bracket of three photos was taken at three different shutter speeds (0.5 s, 2.0 s, 8.0 s; roughly -2 EV, 0 EV, +2 EV). These photos were then cross-registered and merged into a 32-bit depth HDR TIFF, which was then tone mapped using the Photomatix Pro HDR software as shown in Figure 1.
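The merge itself was done in Photomatix Pro, but the underlying idea can be sketched in a few lines, assuming a linear sensor response: each bracketed frame is divided by its relative exposure, the results are averaged with weights that discount clipped pixels, and the recovered radiance is compressed with a simple Reinhard-style operator for display. The synthetic scene values and hat weighting are illustrative, not Photomatix's actual algorithm:

```python
# Minimal exposure-merge and tone-map sketch (linear sensor assumed).
import numpy as np

def merge_hdr(frames, rel_times):
    """frames: float arrays in [0, 1]; rel_times: exposures relative to 0 EV."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, rel_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
        num += w * img / t                  # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(radiance):
    """Reinhard global operator: compresses highlights, lifts shadows."""
    return radiance / (1.0 + radiance)

# Synthetic 2x2 "scene" shot with the 0.5/2.0/8.0 s bracket from the report.
times = [0.5, 2.0, 8.0]
scene = np.array([[0.05, 0.2], [0.6, 2.5]])          # true radiance
frames = [np.clip(scene * t / 2.0, 0, 1) for t in times]
hdr = merge_hdr(frames, [t / 2.0 for t in times])    # exposures relative to 2 s
print(tone_map(hdr))
```

The clipped highlight (2.5) is recovered from the short exposure alone, since its weight is zero in the saturated frames; this is the mechanism that keeps the bright reflections unsaturated in Figure 1.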





Figure 1: Tone mapping process reduces bright reflections and enhances dark recesses.


Note how the bright reflections are not saturated and the dark recesses of the port can be seen in the tone-mapped result. All photo sets were tone mapped with the exact same settings as a batch in order to enable stitching. Several composed photographs were taken from different ports looking inward using clamping and accurate camera positioning hardware developed for this purpose. The distortion from the tone-mapped images was removed and the photos were mildly sharpened. An example is shown in Figure 2.





Figure 2: A composed shot looking in from Fport. One author is behind the center column while the other triggers the camera remotely.


A camera positioning system was developed consisting of a flexible rail and a car that rides along it. The rail was wrapped around the tokamak center column in a manner similar to a band clamp. The flexibility and light weight of the rail enabled it to be installed and repositioned by only one person invessel, an important consideration due to the reduced vessel access. The car rolls around the center column carrying various tripod heads, which can hold the camera in different configurable orientations. An engineering diagram and photo of the system are shown in Figure 3.




Figure 3: A novel camera rail and car system allows the camera position to be automated.


The car and camera are pulled around the tokamak with fishing line using a stepper-motor actuated reel, allowing the camera to be moved precisely. The camera is remotely controllable, triggered via the same computer that controls its location. A MATLAB program synchronizes the movement and camera triggering. This allows a very large set of photos of the vessel to be taken automatically and precisely with minimal oversight.


Composite photo of the outer wall


To photograph the outer wall as if it were unrolled and laid flat, the camera must observe it straight on. To do this, the camera was mounted looking straight out from the center column and placed as close as possible to the center column. This enabled the outer wall and the top and bottom shelves to be photographed in a single wide-angle photo.


Since there are many items on the outer wall at different depths, and the camera cannot be placed at the center of the torus due to the center column, parallax and perspective create problems as the camera moves. Therefore, a very large number of photos is needed so that only the center portion of each photo, where the camera observes the wall most radially, is used in the final composite. To accomplish this, the camera was moved in very small increments (1 inch along the track, ~3 deg around the torus), taking the set of three bracketed photos at each stop. Ninety-two sets of three photos were taken automatically over the course of ~1 hr. A schematic of the process showing the field of view is shown in Figure 4.
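As a consistency check on these numbers: 92 stops spread over the rail's usable ~330 deg arc gives the quoted ~3-4 deg per stop, and combining that with the 1 inch spacing implies the track radius (a derived value, not a measurement from the report):

```python
# Consistency check on the stop spacing quoted above, assuming the 92 stops
# span the rail's ~330 deg usable arc.
import math

n_stops, span_deg, step_in = 92, 330.0, 1.0
step_deg = span_deg / (n_stops - 1)                    # ~3.6 deg per stop
radius_m = step_in * 0.0254 / math.radians(step_deg)   # arc length / angle
print(f"{step_deg:.1f} deg per stop, implied track radius ~{radius_m:.2f} m")
```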





Figure 4: The camera is moved around the torus capturing the outer wall (left). The wide field of view captures the outer wall and shelf (right).


Each set of three bracketed photos was merged into an HDR image and then tone mapped. The photos were stitched together using the open source stitching software Hugin, which identifies features in each photo that also occur in neighboring photos. The software then rotates, stretches, and morphs the photos to align them to each other. To create a good stitch, the photos were first cropped down into slices showing only the region in the center of the photo where the outer wall is viewed nearly radially. The software was instructed to use only the center portion of each cropped photo.
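The center-slice cropping can be sketched as follows. The pixels-per-degree figure and overlap factor are assumptions for illustration, chosen so that adjacent slices overlap enough for Hugin's feature matching:

```python
# Sketch of the centered-slice crop. The slice must be at least as wide as the
# view advance per stop, plus margin so neighboring slices share features.

def slice_bounds(img_w, step_deg, px_per_deg, overlap=1.5):
    """Return (left, right) pixel bounds of the centered crop."""
    half = int(step_deg * px_per_deg * overlap / 2)
    cx = img_w // 2
    return max(0, cx - half), min(img_w, cx + half)

# D80 frames are 3872 px wide; ~3.6 deg advance per stop; the ~30 px/deg near
# the fisheye center is an estimate, not a calibrated value.
left, right = slice_bounds(3872, 3.6, 30)
print(left, right, right - left)
```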



Figure 5: Photos were cropped into overlapping vertical slices (top) to create a seamless radial view (bottom).


The aligned photos were made horizontal using the row of bolts on the bottom shelf near the bottom of the view. The stitching paths were modified so stitches did not disrupt geometric patterns such as the antenna screens and the lower hybrid grill, as this would be very evident to the eye. It was also important to route stitching so it did not cut through diagnostics on the walls, as they will be pointed out in future outreach uses of the photo. Generally, the stitching was difficult because the items on the wall range in distance from the camera from 0.4 m to nearly 2 m and the lens distorts the top and bottom shelves. Items extending far radially were the most difficult to stitch due to parallax errors; the tubes low in Fport and Hport are good examples. The seams were then blended along the stitching paths. The process is shown in Figure 5. There are 91 vertical seams in the final photo; the worst is at Jport, where the tiles do not align. This was due to an operator error in which several photos were missed, forcing the stitch to use wider, non-radial slices.


The end result is a stitched photo that is 24050 x 3845 pixels (92.5 Megapixels), covering approximately 93% of the outer wall and omitting Gport, which was used for access. This yields an approximate resolution of 100 dpi for features at the wall. The height of the photo encompasses both vertical and horizontal surfaces. Tall items are distorted, as are items that extend radially. The view and dimensions are shown in Figure 6. Note the white balance appears to change from left to right due to the different LEDs used.




Figure 6: The composite stitched photo covers 93% of the width of the wall, and the entire height of the wall and top and bottom shelves.


This photo will be used for engineering documentation and future facility planning, and will be enlarged to near life size to show visitors the configuration of Alcator C-Mod when they enter the laboratory. The image could also serve as an interactive roll-over graphic for outreach purposes, where clicking on or hovering over items on the wall brings up auxiliary information about them and links to more information.


Projection photos for a virtual tour


To form the views used for a virtual tour, a special set of photos must be created that captures the sphere around the camera as if it were viewed from a single point. This differs from the long outer-wall stitch because the camera must not translate, only pivot in angle.


To accomplish this, the camera was mounted on a gimbaled tripod head, which rotates the camera about its entrance pupil (the zero-parallax point) in all three axes. In this configuration the relationship between objects at different distances stays constant as the camera rotates, because there is no parallax. The gimbaled tripod head was attached to the car on the track, placing the camera near the center of the plasma and allowing it to be pointed in arbitrary directions, as shown in Figure 7.




Figure 7: The gimbaled tripod head is attached to the car, allowing the camera to point in arbitrary directions about its no-parallax point.

The car was then moved to the center of each port and locked into place. Ten sets of three bracketed photos were taken: eight viewing horizontally through 360 deg, one upwards, and one downwards, thus encompassing the entire sphere around the camera. The camera was pivoted by one author operating the tripod head invessel while the other author operated the camera remotely. The sets of three photos were then merged into HDR images and tone mapped. The ten resulting photos were then stitched onto the surface of a sphere using Hugin, and the result was projected onto a flat plane using an equirectangular projection at high resolution. The process is shown in Figure 8.
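The equirectangular projection itself is a simple mapping: longitude goes linearly to the horizontal axis and latitude to the vertical, so a W x H image spans 360 x 180 deg. A minimal forward mapping (view direction to pixel; the axis convention is an assumption, not Hugin's internal one) is:

```python
# Forward equirectangular mapping: unit view direction -> pixel coordinates.
import math

def dir_to_equirect(x, y, z, W, H):
    """Unit view direction -> (col, row) in a W x H equirectangular image.
    Convention assumed here: +z up, longitude 0 along +x."""
    lon = math.atan2(y, x)                      # -pi .. pi around the horizon
    lat = math.asin(max(-1.0, min(1.0, z)))     # -pi/2 .. pi/2
    col = (lon + math.pi) / (2 * math.pi) * W
    row = (math.pi / 2 - lat) / math.pi * H
    return col, row

W, H = 4096, 2048
print(dir_to_equirect(1, 0, 0, W, H))   # straight ahead -> image center
print(dir_to_equirect(0, 0, 1, W, H))   # straight up -> top row
```

Virtual-tour viewers apply the inverse of this mapping per screen pixel to reproject the flat image back onto a sphere.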




Figure 8: The 10 tone mapped photos encompass the sphere around the camera. They are projected onto that sphere in software, then flattened into an equirectangular projection, which is stitched and blended.



The process was repeated at nine of the ten ports, totaling 270 photos merged into nine tone mapped equirectangular projections.


The resulting equirectangular photos cover 360 deg horizontally and 180 deg vertically at each port, thus encompassing the entire sphere around the camera. This is a standardized format for interactive virtual reality viewers, which take the image and reproject it back onto the surface of a sphere, and which the user can navigate by tilting and panning the camera. Note that each of the nine photos contains the camera mount car and tripod head, which appear to end in space due to the blended photo stitching.

Commercial virtual tour software is then used to display the photos online interactively. Links are added in the virtual sphere to navigate between the photos as if somebody was navigating through the space. Additional links can be added which when clicked on call up other supporting information, such as embedded webpages, auxiliary images or supplemental information. This can be used to create an interactive experience where the user navigates through the tokamak, clicking on interesting items and receiving further information.  Additionally, preplanned tours can be created from these photos where the camera is tilted and panned to show the observer specific items.


Other uses of the photos


The automation of the rail, car and camera system allows for a variety of photos to be taken easily.  In addition to the view looking at the outer wall radially, sets of photos were taken observing the wall at other angles. Thus a composite outer wall photo could be constructed, but instead of looking radially, it would show everything slightly in profile. Examples are shown in Figure 9.




Figure 9: In addition to radial views of the wall, it was also photographed in profile and downward (not shown).


Additionally, the rail was used to photograph the divertor looking straight downward from the ceiling of the vessel.


The spacing between the photos was larger (30 photo sets for the panned and downward views vs. 92 for the radial view), which would make the stitching more complex, but likely still achievable. In all, 278 high-quality tone-mapped photos of the vessel are available at many different angles. These have been used as frames to make movies.


With so many photos available, it is conceivable to use open source 3D mapping software to construct a 3D model of the vessel and project the photos back onto it, similar to what is done in CGI for movies and video games. This was attempted using the tutorial located at: http://wedidstuff.heavyimage.com/index.php/2013/07/12/open-source-photogrammetry-workflow/


A few screenshots of the process using the 92 radial photos are shown in Figure 10.




Figure 10: Photos were used to create a 3D point cloud of the vessel wall.


For this to be successful, the photos would need to have the fisheye projection mapped to rectilinear prior to analysis, and the entire set of 278 photos should be used. Note the above point cloud is inverted: the solver placed the cameras looking inward at the wall instead of outward, likely due to "bulging" in the fisheye photos.
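The fisheye-to-rectilinear remap is a per-radius warp. Assuming the 10.5 mm lens is approximately equidistant (r = f*theta; Nikon's actual projection is close to, but not exactly, this), each rectilinear radius pulls from the fisheye radius f*atan(r/f). The focal length in pixels below is a rough estimate from the D80's ~6 um pixel pitch:

```python
# Radius remap for an equidistant-model fisheye (an approximation of the
# Nikkor 10.5mm's actual projection).
import math

def fisheye_radius(r_rect, f):
    """Source radius in the fisheye image for a target rectilinear radius,
    both measured in pixels from the image center."""
    theta = math.atan(r_rect / f)   # angle off the optical axis
    return f * theta                # equidistant model: r = f * theta

f_px = 1736                         # assumed: 10.5 mm / ~6 um pixel pitch
for r in (0, 500, 1500, 3000):
    print(r, round(fisheye_radius(r, f_px), 1))
```

The compression away from center (the source radius grows slower than the target radius) is exactly the "bulging" that likely confused the photogrammetry solver.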


Microsoft Research has developed an application called Photosynth, which maps the photos together and allows them to be navigated online. This was attempted with marginal success, and the result can be found here: http://photosynth.net/view.aspx?cid=6b948a0c-8942-4ed8-9569-9abd54332565


Future work


Suggestions for improving the process if it is revisited in the future are:


    Take more gimbaled shots in the cell, power room, and control room to expand the virtual tour outside the vessel.

    Rent a high-quality full-frame digital camera. This would increase the resolution per photo and thus the overall resolution. The D80 is fairly antiquated.

    Modify the rail to allow the car to travel a full 360 deg; it is currently limited to ~330 deg by the gap in the rail containing the clamp used to secure it to the central column.

    Use a rectilinear lens and take more photos at different heights for the outer wall stitch. This would allow higher resolution and less distortion. Also consider focus stacking to capture details at different distances.

    Consider temporarily installing Gport during the automated sequence (perhaps just positioning it) to show the entire vessel.

    Purchase a motorized gimbaled tripod head to allow this sequence to be automated without requiring manned presence invessel.

    Take an eleventh photo during the gimbaled sequence with the tripod head rotated to mask the black tripod arm out.

    Take more scans in different directions to try to build a more accurate 3D photogrammetry model.

    Take a set of photos with the camera pointed toroidally using very small movements. This could be used to make a very high resolution fly-through movie in which the viewer is transported around the vessel at the center of the plasma. Likely >400 steps through the 360 deg would be needed to make the result smooth.

    Take a similar small-movement series along a linear track entering the vessel. This would show the view traversing the rather small entrance port.

    Take a few high quality posed photos with people working invessel for human interest.


A key thing to note is that the photo taking process is automated: increasing the number of photos by a factor of 10 requires more time with the system in vessel but only marginally more work. The tone-mapping, cropping, processing, and stitching are all batch driven, so the workflow is fairly insensitive to the number of photos involved, aside from computation time, which was not a limiting factor for this work (note: Hugin is parallelized).




The authors would like to thank Christian Theiler and Ted Golfinopoulos for their assistance overseeing the work done invessel, and Ed Fitzgerald and Bill Forbes for their work machining and assembling the camera rail.


Software used:


To control the camera: Camera Control Pro V1.3 http://www.nikonusa.com (commercial)


To automate the motion: MATLAB R2012b  www.mathworks.com/products/matlab/ (commercial)


To combine the brackets and tonemap: Photomatix Pro 4.2.7 www.hdrsoft.com (commercial)


To stitch the photos: Hugin 2012 http://hugin.sourceforge.net/ (open source)


To batch crop and convert: IrfanView v4.36 www.irfanview.com (freeware)


To crop/rotate and edit levels/curves: GIMP 2.8.6 www.gimp.org (open source)


To create the 3D point cloud: VisualSFM  http://ccwu.me/vsfm/ (open source)


Location of the photo products:


The finalized jpeg photos: \\psfc\psfcfiles\Engineering Drawings\Engineering\EngImages\C-Mod_Interior_2013\finalized


Derived videos: \\psfc\psfcfiles\Engineering Drawings\Engineering\EngImages\C-Mod_Interior_2013\finalized\movies

Repository for good photos: \\psfc\psfcfiles\Engineering Drawings\Engineering\EngImages\C-Mod_Interior_2013\tone_mapped


This report and the figures it contains: \\psfc\psfcfiles\Engineering Drawings\Engineering\EngImages\C-Mod_Interior_2013\operations

