Abstract
Optical accuracy is a primary driver of parabolic trough concentrating solar power (CSP) plant performance, but it can be degraded by wind loads, gravity, installation errors, and regular plant operation. Collecting and analyzing optical measurements over an entire operating parabolic trough plant is difficult, given the large scale of typical installations. Distant Observer, a software tool developed at the National Renewable Energy Laboratory, uses images of the absorber tube reflected in the collector mirror to measure both surface slope in the parabolic mirror and offset of the absorber tube from the ideal focal point. This technology has been adapted for fast data collection using low-cost commercial drones, but until recently it still required substantial human labor to process large amounts of data. A new method leveraging advanced deep learning and computer vision tools can drastically reduce the time required to process images. This new method addresses the primary analysis bottleneck: identifying featureless, reflective mirror corner points to a high degree of accuracy. Recent work has shown promising results using computer vision methods alone. The combined deep learning and computer vision approach presented here proved highly effective and has the potential to further automate data collection and analysis, making the tool more robust. The method presented in this paper automatically identified 74.3% of mirror corners within 2 pixels of their manually marked counterparts and 91.9% within 3 pixels. This level of accuracy is sufficient for practical Distant Observer analysis within the targeted uncertainty. A commercial drone collected video of over 100 parabolic trough modules at an operating CSP plant to demonstrate the deep learning and computer vision method’s usefulness in processing large amounts of data. These troughs were successfully analyzed using Distant Observer paired with the new deep learning and computer vision algorithm, and the results can provide plant operators and trough designers with valuable insight into plant performance, operating strategies, and plant-wide optical error trends.
1 Introduction
Parabolic trough is the most widely deployed concentrating solar power (CSP) technology, with over 90 plants operating worldwide [1,2]. The optical efficiency of the solar field directly affects the performance, and ultimately the economics, of the whole plant. Optical characterization of parabolic trough collectors can provide useful information on trough performance and operating strategy. Among the common sources of optical error in parabolic trough collectors, mirror slope error is generally considered the dominant one [3,4]. Characterizing this error in the field provides more realistic information than data collected in the lab, as it captures deviations from design point performance due to installation, wind loads, in-field degradation, gravity, and wear from daily operation. Of the tools available for collector optical characterization [5–8], few are suitable for use in the field, and fewer can process data quickly enough for field-scale optical characterization encompassing thousands of trough modules. This work details a three-step approach to automating the most labor-intensive steps in an optical measurement tool using object detection and computer vision (CV). Recent work by the German Aerospace Center (DLR) to automate their QFly/Trough Absorber Reflection MEasurement System (TARMES) shows promising results [9,10] and is discussed in further detail below.
Deep learning, computer vision, and optical analysis techniques can be used to identify poorly performing troughs in the field and possible corrective measures. The ability to quickly and accurately take optical measurements on a large portion of the field will:
help define plant-wide operating strategies and corrections;
identify under-performing collectors and indicate how they can be adjusted;
find systemic failure points, like sagging receiver tubes or twisting down the length of an entire collector assembly, which can be corrected with structural improvements to increase optical performance.
Since 2012, the National Renewable Energy Laboratory (NREL) has been developing an optical measurement tool for parabolic solar collectors that directly measures reflector slope deviations and absorber misalignment [11–13]. This tool, called Distant Observer (DO), uses a digital camera to photograph the reflection of the absorber tube on parabolic trough reflectors. From these images, DO calculates optical errors based on the distortion of these reflections. It has been successfully used on various parabolic trough designs [13]. Similar techniques have been successfully demonstrated by researchers at the German Aerospace Center (DLR) [6,9].
1.1 Distant Observer.
Characterizing a complete parabolic trough module with DO requires a video or series of photos, taken in such a way that the absorber tube’s reflection sweeps from one edge of the collector mirror to the other. From these input images—like the ones in Fig. 1—DO generates optical error maps for the mirror slope and absorber position offset.

Examples of possible images used in a DO analysis. Sixty to one hundred and twenty images are used to capture a full slope map of a collector surface. The white box shows a single module that would be analyzed. Trough images courtesy of Solar Dynamics LLC.
DO analyzes these images by locating the reflected absorber tube and the actual absorber tube in each image. From these measurements, DO generates several different optical characterization outputs, including the offset of the absorber tube from its ideal location in both the x and z directions, and slope error maps, which show the collector mirror slope error at each point on its surface. This is illustrated in Fig. 2.

Left, a diagram of the camera capturing the absorber tube and its reflection. Right, two types of results of a DO analysis: (1) plots of absorber tube offset and (2) maps of slope error on the surface of the mirror.
Figure 2 also shows two significant results of a DO analysis: (1) offset plots of the absorber tube from the ideal focal point in the x and z directions and (2) surface slope maps. Plant operators can use this information to make operating corrections and identify under-performing areas for maintenance or replacement.
1.2 Aerial Data Collection and Automation.
Unmanned aerial system (UAS) technology can be leveraged to quickly collect the images used in DO on operating parabolic trough plants. It allows for rapid, non-intrusive measurements, giving plant operators insight into trough performance in the field.
In recent years, commercial UAS technology has drastically improved. Low-cost, off-the-shelf consumer drones have high enough camera resolutions for DO analysis (many are better than the cameras DO was developed on) and can be flown with relatively little training. More advanced UAS support flight paths with hundreds of pre-programmed waypoints, allowing an operator to efficiently collect data and enabling off-site flight planning [14]. One goal of this work was to adapt the Distant Observer tool to lower-cost UAS, which usually do not have sub-meter global positioning system (GPS) accuracy and do not constantly record GPS information or camera pose while collecting video. This design choice lets plants use their own (<$5,000 USD) drones, making the measurement system more accessible. It also means that very little information about the UAS and camera position was used in the automation methods described in this paper.
As described in the subsection below, the four corners of a parabolic trough module must be identified with near pixel-level accuracy to automate the most labor-intensive processes of the DO software and generate accurate optical measurements. Until recently, DO still required substantial human labor to post-process the massive amounts of data that would be collected at a commercial-scale solar field. Computer vision and deep learning methods can drastically speed up this process but pose an interesting challenge: the mirrored surface of solar collectors lacks traditional “features” used for identification. While the reflected image (often sky) is usually different from the background (often ground), reflected structures in the mirror can create challenges for classical computer vision methods. DLR has adapted their TARMES/QFly tool for aerial data collection [15] and for automated trough module identification using computer vision methods [9]. The DO analysis described here is most similar to the “high resolution” data collection in Refs. [9,10,16]. While Ref. [9] demonstrates accurate computer vision trough identification, there is little information about the robustness of this technique or about what information it requires (internal trough geometries, GPS and camera pointing information, images without reflections, etc.). Combining similar computer vision techniques with a deep learning model may lead to a more robust trough module detection system. The deep learning model presented here was trained on a custom data set of a wide array of UAS images from two operating solar fields. The trough identification performance results presented here are on a similarly varied set of images, which include different lighting, troughs with missing or broken mirror panels, troughs with reflected absorber tubes, etc. The deep learning model identifies troughs without any UAS GPS or camera pose information, or information about the plant; it is given only an image as input. The two computer vision methods presented here determine the trough orientation and then identify the module corner points with high accuracy using the deep learning model results. Because this method can identify trough modules with very little information, less effort is likely needed to tune the computer vision tools to specific plants, lighting conditions, or irregular troughs. Some additional testing and software development are needed to commercialize this tool, but the combined deep learning and computer vision approach has the potential to further reduce the equipment, time, and labor required for a high-resolution optical analysis of a solar field. The work in this paper describes an effective combined deep learning and computer vision approach and examines its accuracy.
1.3 Required Corner Identification Accuracy.
In the DO software, the UAS/camera’s position is calculated directly in each image using photogrammetry and the four corners of a trough module as known reference points. Photogrammetry with a properly calibrated camera yields a much more accurate camera position than UAS GPS, which can be off by more than a meter for off-the-shelf UAS systems. In addition, a photogrammetry camera positioning system makes the DO software much more versatile, usable with inexpensive drones, and more straightforward because it only requires images, camera parameters (lens focal length, sensor size, and distortion parameters), and trough parameters (trough dimensions, focal length, and absorber tube diameter) as inputs, not detailed positioning data.
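For illustration, the snippet below sketches this photogrammetric positioning step as a standard perspective-n-point solve in OpenCV. It is not the DO implementation: the planar corner layout, the module dimensions, and the inputs `corners_px`, `K`, and `dist` (the identified 2D corners and the calibrated camera intrinsics) are assumptions made for the sketch.

```python
import cv2
import numpy as np

def camera_position_from_corners(corners_px, K, dist, W=5.0, L=12.0):
    """Recover the camera center from four module corners (a sketch).

    corners_px: (4, 2) identified corner pixels; K, dist: calibrated
    camera intrinsics; W, L: illustrative module aperture width/length (m).
    """
    # Hypothetical planar corner layout in trough coordinates; the actual
    # DO trough geometry model may differ.
    object_pts = np.array([[0, 0, 0],
                           [W, 0, 0],
                           [W, L, 0],
                           [0, L, 0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_px.astype(np.float64),
                                  K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # Camera center in trough coordinates: C = -R^T t
    return (-R.T @ tvec).ravel()
```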
Accurate identification of the known reference points in the image is required for accurate photogrammetry. Part of this work was to determine the corner identification accuracy necessary to achieve an uncertainty of ±0.25 milliradians in the final DO mean mirror optical error measurement result. This was accomplished using a simple sensitivity study: the four manually identified module corner points in each image of a representative DO data set (approximately 100 images) were shifted in the 2D image to a random position within some radius r, in centimeters, of the manually marked corner point. The overall mean DO combined slope and absorber error result in milliradians was then compared to the original result with the unperturbed corners. Using centimeters instead of pixels decouples corner identification sensitivity from many of the factors that influence it, such as camera resolution and distance from the trough. In addition, using centimeters allows for a general metric that can be applied to each new camera, UAS, and data collection condition. This conversion from pixels to centimeters was done using a known length scale in the image.
The results of this study are shown in Fig. 3. By repeating the DO analysis many times at random perturbations within a series of maximum offset radii, we can find statistical means and values at the 95th percentile. A linear fit with the intercept fixed at the origin (indicating no reference point offset and no error) accurately captures this trend. This analysis shows that with 95% certainty, if the module corner points identified by the deep learning and computer vision algorithms are within 1.18 cm of manually marked points, the overall effect on the mean DO error result will be ≤±0.25 milliradians. This centimeter offset corresponds to 2–3 pixels for the data collected in the field, depending on UAS altitude and camera resolution.
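The sensitivity study can be sketched as a small Monte Carlo loop. In the sketch below, `run_do_analysis` is a hypothetical wrapper around the DO software that returns the mean combined optical error in milliradians, and `px_per_cm` is the known image length scale used to convert the offset radius from centimeters to pixels; none of these names come from the published tool.

```python
import numpy as np

def perturb(corners_px, radius_px, rng):
    """Shift each 2D corner to a uniformly random point within radius_px."""
    n = len(corners_px)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = radius_px * np.sqrt(rng.uniform(0.0, 1.0, n))  # uniform over the disk
    return corners_px + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

rng = np.random.default_rng(0)
baseline = run_do_analysis(images, manual_corners)   # unperturbed result, mrad
deviations = []
for _ in range(200):                                 # repeated random trials
    shifted = [perturb(c, radius_cm * px_per_cm, rng) for c in manual_corners]
    deviations.append(abs(run_do_analysis(images, shifted) - baseline))
p95 = np.percentile(deviations, 95)                  # 95th-percentile deviation
```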
2 Methodology
The most effective algorithm for mirrored corner identification developed in this work uses deep learning and computer vision in tandem. This three-step process uses deep learning and two computer vision techniques, leveraging the strengths of each method, and is depicted in Fig. 4. Each of these steps is described in detail below, followed by an analysis of the accuracy of this method. Using deep learning and computer vision together is essential for achieving accurate results. Deep learning alone can effectively identify objects of interest without being misled by reflected objects, but it lacks the precision required for optical analysis with DO because segmentation masks are probabilistic and thus have rough, rounded corners. Computer vision can accurately capture mirror edges and shapes but is not robust enough to distinguish shadows, reflected objects, and interfering structures from the mirrors themselves. In this system, the deep learning model identifies the trough in an input image, and the computer vision techniques refine this into precise trough module corner points.

A graphical depiction of the final automatic corner identification algorithm, which uses three techniques: deep learning, a computer vision ROI method, and a computer vision edge detection method
2.1 Deep Learning Module Identification and Classification.
A deep learning model was created to identify parabolic trough modules in images collected by a UAS. Existing deep learning tools provided a useful starting point for DO efforts. Detectron2 is a software library from Facebook Artificial Intelligence (AI) Research for deep learning object detection and segmentation [17]. Detectron2’s pre-trained models and release under an Apache license make it a powerful tool for object detection research projects. For this work we started with a mask region-based convolutional neural network (Mask R-CNN) model from the Detectron2 Model Zoo. This model uses feature pyramid networks and region proposal networks to identify objects in each image, and its instance segmentation outputs provide a bounding box and a segmentation mask for each detected region of interest (ROI). The current image segmentation deep learning model was trained on three custom classes:
A DO module is a module that is fully in-frame, with a reflected absorber tube.
An empty module is a module that is fully in-frame, but does not contain a reflected absorber.
A partial module is a trough module that is cut off by the edge of the image.
Examples of these classes are shown in Fig. 5.

Examples of the three new segmentation classes: DO module (left), empty module (center), and partial module (right). Each image also has partial modules at the left and right edges.
The deep learning model was trained on labeled images collected at an operating CSP plant. Individual modules were roughly contoured and labeled using VGG Image Annotator and compiled into a .json annotation file. Table 1 shows the number of instances of each class. It was immediately clear that a deep learning model would not be able to identify every module for analysis, but that this would not be necessary for this application. Because images are sequential video frames, it is straightforward to extrapolate corner points into images without a deep learning estimate. For this reason, a relatively small number of images were used to train the model. We determined that an average precision of above 50% on DO modules would be adequate to accurately identify and track a module’s position.
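A minimal fine-tuning setup in Detectron2 might look like the sketch below. The dataset names, file paths, and solver settings are illustrative (the exact configuration used in this work is not published), and the sketch assumes the VGG Image Annotator labels have been converted to COCO-format json.

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Illustrative dataset registration; paths are placeholders.
register_coco_instances("do_troughs_train", {},
                        "train_annotations.json", "train_images/")

cfg = get_cfg()
# Start from a COCO-pretrained Mask R-CNN from the Model Zoo.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("do_troughs_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3   # DO, empty, and partial modules
cfg.SOLVER.MAX_ITER = 1500            # small custom dataset; illustrative

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```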
Deep learning model results, number of training and testing instances of each class, and mean average precision (mAP) generated using common objects in context (COCO) evaluator [21]
| | DO modules | Empty modules | Partial modules |
| --- | --- | --- | --- |
| Training instances | 112 | 94 | 272 |
| Testing instances | 28 | 51 | 129 |
| mAP: bounding box | 80.7% | 79.0% | 75.9% |
| mAP: segmentation | 91.1% | 85.1% | 86.3% |
2.2 Corner Detection.
The algorithm uses two distinct computer vision methods to turn segmentation masks into precise corner estimates. First, a “region of interest” method extracts the nearly rectangular module shape from the organically shaped segmentation mask area. Opening and closing functions remove noise in the image caused by structures and reflections. This method passes corner estimates to the second computer vision step, which uses edge detection and Hough transforms to refine the corner positions to their final values. The two techniques are described in detail below.
2.2.1 Region of Interest Analysis and Corner Estimation.
This method is an intermediate step between deep learning and corner refinement. It uses the DO module tensor masks generated by the deep learning model. The deep learning tensor mask is padded using a computer vision operation to ensure that the complete DO module is captured inside the mask. Two cases arise during corner detection. Case-1 illustrates the main flow of stages in the ROI method and ultimately determines the corner estimates, as shown in Fig. 6. Case-2 is an edge-case scenario handled in this implementation and is illustrated in Fig. 7.

Stages of case-1 showing CV ROI analysis and corner estimation: (a) padded tensor mask, (b) initial hue, saturation, value mask, (c) erosion applied, (d) contours are applied, (e) the significant contour highlighted, (f) minimum rectangle fitted to contour, (g) polyfill produces the fitted rectangle's mask, and (h) Shi-Tomasi method used to find mask corners


Stages of case-2 showing CV ROI analysis and corner estimation: (a) padded tensor mask, (b) initial hue, saturation, value mask, (c) erosion applied, (d) contours are applied, (e) two significant contours are identified, (f) minimum rectangles are fitted to both contours, (g) polyfill produces two rectangle masks, (h) Shi-Tomasi method used to find all eight mask corners, and (i) classification system is applied to recognize the four prominent corners

Case-1: Figure 6(a) shows the padded tensor mask of the DO module. Figure 6(b) is the initial mask extracted using the hue, saturation, value (HSV) color space with a balanced filter on the parabolic trough’s hue, saturation, and value components. As a result, a small portion of the mirror surfaces of the parabolic troughs adjacent to the main DO module is still present in the mask. Therefore, a series of erosions is performed on the mask to separate unwanted regions, as shown in Fig. 6(c). Using OpenCV contours, continuous points can be obtained along the boundaries of objects in the image with the same color or intensity. Figure 6(d) shows the contours obtained for the mask. The significant contour is the one with the maximum area of all the contours; Fig. 6(e) shows the significant contour in the mask. In Fig. 6(f), considering the rotation of the contour from Fig. 6(e), a fitted rectangle with minimum area is constructed. A fitted rectangle can restore contours damaged by uneven mirror reflections or broken mirrors on parabolic troughs. In Fig. 6(g), CV polyfill derives the mask of the fitted rectangle. Finally, the Shi-Tomasi method [18] identifies the corners of the DO module at the final stage, as shown in Fig. 6(h).
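The case-1 stages (b) through (h) map directly onto standard OpenCV calls. The sketch below illustrates this pipeline; the HSV bounds, kernel size, and erosion count are illustrative placeholders, not the tuned values used in this work.

```python
import cv2
import numpy as np

def estimate_module_corners(image_bgr, tensor_mask):
    """Estimate four DO module corners from a padded tensor mask (a sketch).

    tensor_mask is the padded deep learning mask as a uint8 0/255 image.
    """
    # Restrict the image to the (padded) deep learning mask region.
    roi = cv2.bitwise_and(image_bgr, image_bgr, mask=tensor_mask)

    # (b) Initial HSV mask; the bounds here are illustrative placeholders.
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 0, 120]), np.array([180, 60, 255]))

    # (c) Erode to detach slivers of adjacent mirror surfaces.
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=3)

    # (d)-(e) Find contours and keep the one with the largest area.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    significant = max(contours, key=cv2.contourArea)

    # (f) Fit a minimum-area rotated rectangle; this restores contours
    # damaged by uneven reflections or broken mirror facets.
    box = cv2.boxPoints(cv2.minAreaRect(significant))

    # (g) Rasterize the fitted rectangle back into a clean mask.
    rect_mask = np.zeros(mask.shape, np.uint8)
    cv2.fillPoly(rect_mask, [np.int32(box)], 255)

    # (h) Shi-Tomasi corner detection on the clean rectangle mask.
    corners = cv2.goodFeaturesToTrack(rect_mask, maxCorners=4,
                                      qualityLevel=0.01, minDistance=50)
    return corners.reshape(-1, 2)
```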
Case-2 occurs when the absorber tube is prominent in the HSV mask extraction of the parabolic trough, creating a partition at the absorber tube. For example, in case-2, Fig. 7(e), two significant contours are detected with similar areas. Then, in Fig. 7(f), considering the rotation and the minimum regions of both significant contours, corresponding fitted rectangles are constructed. In Fig. 7(g), CV polyfill is used to produce the mask of the fitted rectangles. The Shi-Tomasi method then identifies the mask’s corners, as shown in Fig. 7(h). Because eight possible corners are detected for the two contours, a classification method based on the coordinate system is applied to recognize the four prominent corners of the DO module, as shown in Fig. 7(i). The final corners from both cases are then further refined using the Hough corner refinement method described in the next section.
2.2.2 Hough Corner Refinement.
The final algorithm step uses computer vision edge detection techniques to further refine the corner point estimates. The previous two steps only identify corner points when the confidence level is above a high threshold, leaving many of the images in a DO data set (approximately 100 images) without corner estimates. The final algorithm step accounts for this. As shown in Fig. 8, the corner refinement algorithm skips forward to the first point with two consecutive corner estimates from the deep learning and ROI steps. Corner points are then refined for each image. In cases where no corner estimate is available from the deep learning and ROI model, corner estimates are extrapolated forward from the previous images. Once the algorithm reaches the end of the data set, it returns to the first refined corner point and extrapolates backwards to the beginning of the data set if necessary.
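The bookkeeping in this loop can be sketched as follows, where `estimates` holds per-frame corner arrays from the deep learning and ROI steps (or None) and `hough_refine` is a hypothetical stand-in for the refinement described below; the constant-motion extrapolation is an assumption of the sketch.

```python
def refine_sequence(frames, estimates):
    """Refine corners across a frame sequence (a sketch, not DO code).

    frames: list of images; estimates: list of (4, 2) numpy corner arrays
    or None. Assumes at least one pair of consecutive estimates exists.
    """
    refined = {}
    # Skip forward to the first pair of consecutive frames with estimates.
    start = next(i for i in range(len(frames) - 1)
                 if estimates[i] is not None and estimates[i + 1] is not None)
    for i in range(start, len(frames)):
        if estimates[i] is not None:
            guess = estimates[i]
        else:
            # Extrapolate forward assuming constant frame-to-frame motion.
            guess = 2 * refined[i - 1] - refined[i - 2]
        refined[i] = hough_refine(frames[i], guess)
    # Extrapolate backwards to cover frames before the starting point.
    for i in range(start - 1, -1, -1):
        refined[i] = hough_refine(frames[i],
                                  2 * refined[i + 1] - refined[i + 2])
    return refined
```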

The final refinement step loops through images, using corner estimates from the deep learning and ROI method when available, and extrapolating from previous corner points when not. The algorithm skips forward, then extrapolates backwards, if there are no estimates for the initial corner points.
Starting with a corner estimate from either the deep learning and ROI method or extrapolation from previous images, the refinement process follows the steps shown on the right side of Fig. 8. First, lens distortion is removed from the image using intrinsic camera parameters estimated with a calibration procedure. This removes radial “barreling” distortion, ensuring that the straight edges of the parabolic trough appear straight in the image. Next, an edge detection method is applied to the image, as shown in the top section of Fig. 9. From here, a Hough transform is applied to identify strong edges. Using previously identified edges and information from the deep learning and ROI methods, we can constrain the algorithm to search for strong edges within a small range of angles. A Hough transform and peaks corresponding to strong edges are shown in the bottom left part of Fig. 9. The bottom right portion shows these peaks overlaid onto an image, matching nicely with trough edges.
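The edge-finding portion of this step can be sketched with OpenCV as below. The Canny thresholds and the ±5 deg angular window are illustrative stand-ins for the adaptive values used in practice; `K` and `dist` are the calibrated camera intrinsics.

```python
import cv2
import numpy as np

def strongest_edge(gray, K, dist, expected_theta_deg):
    """Find the strongest edge near an expected orientation (a sketch)."""
    # Remove radial "barreling" so straight trough edges appear straight.
    undistorted = cv2.undistort(gray, K, dist)

    # Edge detection ahead of the Hough transform; thresholds illustrative.
    edges = cv2.Canny(undistorted, 50, 150)

    # Standard Hough transform at 0.1 deg angular discretization.
    lines = cv2.HoughLines(edges, rho=1, theta=np.deg2rad(0.1), threshold=200)
    if lines is None:
        return None

    # Constrain the search to strong edges near the expected orientation,
    # using information from the deep learning and ROI steps.
    lo = np.deg2rad(expected_theta_deg - 5)
    hi = np.deg2rad(expected_theta_deg + 5)
    candidates = [(r, t) for r, t in lines[:, 0] if lo <= t <= hi]

    # HoughLines returns lines sorted by accumulator votes, so the first
    # in-range candidate corresponds to the strongest edge.
    return candidates[0] if candidates else None
```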

Edge detection (above) followed by a Hough transform is used to identify strong edges in the region of the corners from the previous step. Angular edge constraints help filter out incorrect edges.
A similar technique is repeated to find vertical trough edges. For these, edges are detected in a small area around the estimated corner points. By using a small region, we can ensure that the curved vertical trough edges appear roughly straight and can be fitted with a straight Hough line, as shown in Fig. 10. Figure 10 also shows the final step in the process, where corner estimates are adjusted to the intersection point of the side and top edges of the trough module.
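Each corner is then the intersection of two Hough lines in (ρ, θ) form, where a line satisfies x cos θ + y sin θ = ρ. A minimal sketch of that calculation:

```python
import numpy as np

def line_intersection(line1, line2):
    """Intersect two Hough lines given as (rho, theta) pairs."""
    (r1, t1), (r2, t2) = line1, line2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    # Solve the 2x2 linear system for the (x, y) pixel coordinates.
    return np.linalg.solve(A, np.array([r1, r2]))
```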
3 Results and Discussion
The three-step algorithm detailed above effectively identified mirror corners to a high degree of accuracy. Images were extracted from DO videos collected at an operating CSP plant and fed into the deep learning model. The model generated segmentation masks for each class it identified, as shown in Fig. 11. The identified masks for the DO module class were used as a starting point for the computer vision analysis. Table 1 shows the measured average precision of the final deep learning model, along with the total training instances of each class.

DO data collected at an operating CSP plant overlaid with segmentation masks generated by the deep learning model
Deep learning masks were passed through the two computer vision steps. To test the accuracy of the combined algorithm, 1768 module corners were labeled manually by members of the research team. These serve as a “ground truth” for comparison with automatically identified corners; the size of this comparison set was limited only by the time required to manually click corners, not by the (near-instantaneous) speed of the deep learning model. Due to image resolution and human error, we determined that the uncertainty in manual corner identification was approximately ±1 pixel.
Figure 12 shows a distribution of distances between deep learning and computer vision identified corners and the associated manually identified corners, which peaks between 1 and 1.5 pixels, and has very few far misses. All misses >10 pixels are captured in the bin furthest to the right.
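For reference, the comparison metric is a simple Euclidean pixel distance between matched corner pairs; a sketch, assuming matched (N, 2) arrays of automatic and manual corners:

```python
import numpy as np

# Euclidean pixel distance between each automatically identified corner
# and its manually labeled counterpart.
dists = np.linalg.norm(auto_corners - manual_corners, axis=1)
print(f"within 2 px: {np.mean(dists <= 2):.1%}")   # 74.3% reported in Sec. 3
print(f"within 3 px: {np.mean(dists <= 3):.1%}")   # 91.9% reported in Sec. 3
```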

Distribution of absolute pixel distances from automatically identified corner points to manually labeled points for the final corner detection algorithm. Bin width is 0.25 pixels, and the right-most bin contains all values >10 pixels. Data were collected on 4K resolution video frames using a calibrated, fixed 22 mm focal length camera (35 mm format equivalent). 50–90 images were used per trough, and trough dimensions were supplied by the plant. Edge detection was performed with adaptive edge thresholds, and Hough lines were identified at a discretization of 0.1 deg to accurately capture trough edges. The search region for vertical trough edges was 10.4% of the trough height and 2.5% of the trough width around the estimated corner position. Data available on request.

Figure 13 and Table 2 show similar information. This plot shows the cumulative fraction of corners within a given radius of the manually identified point. The final algorithm places roughly 74% of corners within 2 pixels of the manually identified point, and over 90% within 3 pixels. Given the uncertainty in human corner labeling, the final algorithm result is as accurate as can reasonably be expected. In some cases, however, automatic corner identification produced a larger deviation in the mean measured optical error than the sensitivity study predicted. This is primarily because the sensitivity study assumed random error, whereas in reality the automatic corner detection methods consistently offset corners in specific directions with respect to the manually identified corners. Overall, automatically identified corners are more consistent and likely more correct than manually identified corners. Using automatically identified corners (versus manually identified) will induce less than 0.25 milliradians of additional uncertainty in the measured mean optical error. The original uncertainty of this method is described in previous work [11,12].

Normalized plot showing the fraction of automatically identified corner points within a given absolute pixel distance of manually identified points. The step size on the plot is 0.25 pixels. As in Fig. 12, data were collected on 4K resolution video frames using a calibrated, fixed 22 mm focal length camera (35 mm format equivalent). 50–90 images were used per trough, and trough dimensions were supplied by the plant. Edge detection was performed with adaptive edge thresholds, and Hough lines were identified at a discretization of 0.1 deg to accurately capture trough edges. The search region for vertical trough edges was 10.4% of the trough height and 2.5% of the trough width around the estimated corner position. Data available on request.

3.1 Field Test.
The primary goal of this work is to enable fast, automatic, large-scale optical analysis of parabolic troughs. To test this, data were collected at an operating parabolic trough CSP plant. Videos were collected by UAS at different times of day in different lighting conditions. The data collection team determined that lighting was best in the evening when troughs are being returned to stow position. Aiming collectors off-sun is important for a high-contrast receiver tube reflection, and dusk provides a good opportunity for data collection without disturbing plant operation. Troughs can be paused at zenith orientation during their daily return to the eastern horizon, allowing for long data collection flights without inducing extra plant parasitic losses from re-orienting troughs or interrupting daily operation.
To test the system’s large-scale processing effectiveness, data from over 100 trough modules were analyzed using the deep learning and computer vision algorithms described above. The complete analysis consisted of roughly 10,000 trough images, or approximately 40,000 individual module corners, and required minimal human input. The time required to collect and process data is still significant. Even for a UAS and automatic analysis method, it takes several seconds per module to collect video, and several more to travel to the next module. Further techno-economic analysis will be necessary to determine the value of fast optical analysis to the plant and the scale of future analysis (full-field, random sampling, focus on problematic sections, etc.).
4 Summary and Conclusions
Automatic identification of mirrors is a non-trivial problem, given the lack of (non-reflected) surface features and the high degree of accuracy required for optical analysis. The combined deep learning and computer vision approach described above leverages state-of-the-art deep learning and computer vision libraries and techniques to accurately identify parabolic trough modules and track their corners with good precision. Images of parabolic troughs required for the DO software were collected using a UAS at an operating solar power plant. Automatically identified corners using the methods described above were very close to manually labeled points, with 74.3% falling within 2 pixels and 91.9% falling within 3 pixels. Given a manual identification accuracy of ±1 pixel, this is a highly accurate result.
This work demonstrates deep learning and computer vision as powerful tools for optical analysis of CSP systems, and shows that both techniques together can yield robust mirror corner identification. The techniques described here are designed for use on parabolic troughs, but also have promising applications in central receiver heliostat plants, which can be analyzed using similar techniques [19,20].
4.1 Future Work.
Both automation and validation of the DO software have opportunities for improvement. For one, further validation of the UAS-based DO software in the field would instill confidence and help quantify software uncertainty. A number of uncertainty studies were performed as part of this work to determine the required automatic corner identification accuracy. These studies successfully identified the corner accuracy needed to keep variation in the DO results small; however, UAS field data collection may introduce new sources of uncertainty, and validating the DO results collected in the field is difficult due to the lack of comparable tools.
4.2 Commercialization.
The end objective is for this method to be a tool for both parabolic trough CSP plant operators and trough designers. DO has been licensed to Solar Dynamics LLC, a research, development, and service company in the CSP industry. Solar Dynamics also has the source code for the deep learning model and computer vision corner identification. Accurate automatic trough module identification can be a valuable tool for DO optical analysis and for automating other field metrology methods (identifying broken mirrors, absorber vacuum tubes, etc.).
A modified version of Distant Observer may be applicable for quality assurance (QA) measurements, either on the assembly line or immediately after troughs are assembled in the field. The simplicity of the UAS-based DO system gives it an advantage over traditional QA systems, which often require large costly installations for optical measurement. DO’s simplicity and affordability make it especially practical as a QA tool for small-scale industrial process heat applications, an emerging application for CSP.
Acknowledgment
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding is provided by U.S. Department of Energy Office of Energy Efficiency and Renewable Energy (EERE) Solar Energy Technologies Office (SETO). The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government.
The authors would also like to acknowledge Ryan Shininger, Tim Wendelin, and Patrick Marcotte from Solar Dynamics LLC for the raw data collection and very helpful discussions during the development of these methods.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.