1. Field of the Invention
The present invention relates generally to damage assessment, and in particular, to a method, apparatus, and article of manufacture for generating a damage proxy map from radar signals.
2. Description of the Related Art
(Note: This application references a number of different publications as indicated throughout the specification by reference numbers enclosed in brackets, e.g., [x]. A list of these different publications ordered according to these reference numbers can be found below in the section entitled “References.” Each of these publications is incorporated by reference herein.)
Major earthquakes cause buildings to collapse, often claiming the lives of many. When a major earthquake hits an area with a significant human population, a critical component of situational awareness for rescue operations and disaster size estimation is rapid, accurate, and comprehensive detection of building damage. Thus, spaceborne and airborne remote sensing techniques are crucial. So far, private companies and government agencies have mostly manually interpreted high-resolution optical images acquired before and after an earthquake to create damage maps, and automation of this process is still an active research area.
Radar remote sensing has a few advantages when compared to optical sensors: 1) it does not require sunlight; 2) radar images are unaffected by cloud cover (clouds are transparent to radar); and 3) the coherent character of a radar signal makes it highly sensitive to changes in surface scattering properties. For this reason, interferometric synthetic aperture radar (InSAR) coherence differencing has been tested for its usefulness in generating damage maps of urban areas hit by a major earthquake. However, it turns out that the perpendicular and temporal baselines of the interferometric pairs need to be almost the same in order for the technique to be useful.
To better understand these problems, a detailed description of damage assessment requirements and interferometric coherence may be useful.
Damage Assessment Requirements
Earthquakes of significant magnitude cause buildings to collapse, often claiming the lives of many. For example, the 2010 Mw 7.0 Haiti earthquake killed about 230,000 people, with large uncertainty in the death toll, causing about 280,000 buildings to collapse or be severely damaged [1]. The 2003 Mw 6.6 Bam earthquake in Iran killed at least 26,000 people, about 27% of the population of the city of Bam [2]. More recently, the February 2011 Mw 6.3 Christchurch earthquake killed 185 people and caused a wide range of building damage and extensive liquefaction damage with about 400,000 tons of silt. The 2011 M9.0 Tohoku-oki earthquake and the triggered tsunami damaged over one million buildings, leaving more than 15,000 people dead [3], [4]. The majority of casualties are in cities, but the extent of damage to smaller towns and villages can be even more difficult to assess, especially on short time scales, because communications networks are frequently damaged and ineffective.
When a major earthquake hits an area with significant human population, a rapid, accurate, and comprehensive assessment of building damage is needed for timely situational awareness for rescue operations and loss estimation. Thus, space-borne and airborne remote sensing techniques can play a critical role. However, operational use of remote sensing data for disaster response, especially for earthquakes, volcanic eruptions, landslides, and tsunamis, has not yet been effectively put into practice. Academia, private companies, government agencies, and international responding organizations have mostly relied on high-resolution optical [5] and radar imagery to create damage assessment maps using image processing and classification algorithms [6]. Recently, a crowd sourcing approach was tested using high-resolution optical imagery analyzed by a group of trained experts for post-disaster damage mapping of the 2010 Mw 7.0 Haiti earthquake [7]. However, automation of these processes is still an active research area, and an optimal methodology both for the mapping and the subsequent data integration and processing has yet to be developed [8].
Radar remote sensing has some advantages over optical remote sensing: 1) radar does not require sunlight, 2) clouds are transparent to radar signals, and 3) the coherent character of radar makes it highly sensitive to surface property change. For these reasons, the use of synthetic aperture radar (SAR) and interferometric SAR (InSAR) has been explored by many groups. Brunner et al. (2010) used the similarity between post-event SAR images and a predicted SAR signature derived from pre-event high-resolution optical imagery [9]. 3-D model based ray tracing can also be used to simulate high-resolution reflectivity maps [10], [11]. These techniques have the potential to improve the quality of SAR-based damage assessment. However, such models require ground truth measurements or independent remote sensing data such as LiDAR (light detection and ranging) measurements.
Coherence from repeat-pass interferometry provides a quantitative measure of ground surface property change during the time span of the interferometric pair. Major damage to a building significantly increases the interferometric phase variance in the impacted resolution element. This change shows up as a decorrelation, or a decrease in coherence. Zebker et al. (1996) recognized the use of InSAR coherence to detect ground surface change due to lava flow on Kilauea volcano observed with Space Shuttle Imaging Radar-C (SIR-C) [12], and Dietterich et al. (2012) used coherence images from multiple tracks of the Envisat satellite to track lava flow decorrelation on Kilauea volcano [13]. Simons et al. (2002) used decorrelation from ERS radar data as a tool for mapping surface rupture and ground surface disturbance from intense shaking during the 1999 Mw 7.1 Hector Mine earthquake [14], and Fielding et al. (2005) used coherence changes to map surface ruptures and damage to the city of Bam, Iran from the 2003 Bam earthquake [15].
Building damage is often much more localized than lava flow and surface rupture. Moreover, damaged buildings are often surrounded by other decorrelation sources, such as changes in vegetation during the time span of the interferometric pair. This background variability makes it difficult to isolate building damage decorrelation in a single coherence map. One way to reduce such noise is to take the difference [15] or ratio [16] of two coherence maps. However, these approaches work only under the special condition that the two coherence images contain similar amounts of background noise decorrelation, a condition that can be achieved when the perpendicular and temporal baselines of the two interferometric pairs are almost the same [15].
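As a minimal sketch of the difference [15] and ratio [16] operations described above, the following compares a pre-event coherence map with a co-event coherence map spanning the earthquake. The function name, argument names, and the `mode` switch are illustrative assumptions, not drawn from the cited references.

```python
import numpy as np

def coherence_change(coh_pre, coh_co, mode="difference"):
    """Compare a pre-event coherence map (coh_pre) with a co-event map
    spanning the earthquake (coh_co). Where co-event coherence drops
    relative to the pre-event pair, the output flags candidate damage.
    Names and the `mode` switch are illustrative assumptions."""
    coh_pre = np.asarray(coh_pre, dtype=float)
    coh_co = np.asarray(coh_co, dtype=float)
    if mode == "difference":               # difference of coherence maps, per [15]
        return coh_pre - coh_co            # positive where coherence dropped
    if mode == "ratio":                    # ratio of coherence maps, per [16]
        return coh_pre / np.maximum(coh_co, 1e-6)  # > 1 where coherence dropped
    raise ValueError(f"unknown mode: {mode}")
```

As the surrounding text notes, either comparison is only meaningful when the two pairs carry similar background decorrelation; otherwise the output mixes baseline and vegetation effects with damage.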
The perpendicular baseline that causes geometric decorrelation varies from one interferometric pair to another, although the decorrelation can be mitigated with range spectral filtering at the expense of range resolution when volume scattering is negligible [17]. Temporal decorrelation occurs everywhere at various rates, and the Doppler centroid changes at every data acquisition. Volume scattering causes different amounts of decorrelation depending on perpendicular and temporal baselines [18]-[20]. A recent study by Lavalle et al. (2011) provided a theoretical explanation of the coupling between temporal decorrelation and the vertical structure of vegetation, implying that it is not possible to decouple the temporal and volumetric decorrelation of vegetation [21]. These inherent couplings of different decorrelation sources challenge the ability to isolate coherence change due to building damage.
Interferometric Coherence
Interferometric coherence, a measure of similarity between two radar echoes, has been used to estimate the quality of InSAR data. This statistical quantity is calculated as
γ = |⟨c1c2*⟩| / √(⟨c1c1*⟩⟨c2c2*⟩), 0 ≤ γ ≤ 1  (1)
where c1 and c2 are complex pixel values of two coregistered SAR images, and ⟨·⟩ denotes the ensemble average, an average over multiple realizations of a random variable that is in practice calculated with a spatial average assuming ergodicity [20]. Without the absolute value notation, the quantity is called the complex correlation (or coherence) function, a coherent sum of the interferogram weighted with the product of the root mean square of the amplitudes of the two SAR images [22]. The term correlation refers to mathematical similarity, whereas coherence refers to physical similarity.
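A concrete sketch of the coherence estimator of equation (1), replacing the ensemble average with a spatial boxcar average under the ergodicity assumption noted above. The window size and function names are illustrative choices, not prescribed by the specification.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def estimate_coherence(c1, c2, win=5):
    """Estimate the coherence magnitude of Eq. (1) from two coregistered
    complex SAR images, substituting a win-by-win spatial average for the
    ensemble average (ergodicity assumption). `win` is illustrative."""
    def boxcar(x):
        # mean over every win-by-win neighborhood ("valid" region only)
        return sliding_window_view(x, (win, win)).mean(axis=(-2, -1))
    num = np.abs(boxcar(c1 * np.conj(c2)))                      # |<c1 c2*>|
    den = np.sqrt(boxcar(np.abs(c1) ** 2) * boxcar(np.abs(c2) ** 2))
    return num / np.maximum(den, 1e-12)                          # 0 <= gamma <= 1
```

By construction, an image compared with itself yields coherence 1 everywhere, while two statistically independent speckle fields yield values well below 1, shrinking toward zero as the averaging window grows.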
As used herein, the term coherence is used as the magnitude of the complex correlation function.
The coherence is determined by many factors, such as the radar system, imaging geometry, ground surface property change, and InSAR processing methods. The dependency of coherence on each parameter is known to be multiplicative and can be written as:
γ = γBp γBt γBf γV γK γP  (2)
where the γB's are coherences due to baselines (perpendicular, temporal, and spectral, respectively), γV is due to volume scattering, γK is due to system thermal noise, and γP represents coherence due to processing, such as image focusing, coregistration, and interpolation. Each term takes a value between 0 and 1, and any process that decreases the coherence is called decorrelation, defined as 1−γX, where X is Bp, Bt, Bf, V, K, or P [23].
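A minimal numeric sketch of the multiplicative model of equation (2). The individual factor values below are illustrative assumptions, not measured quantities; the sketch only demonstrates that the total coherence is the product of the factors and that each source X contributes a decorrelation of 1−γX.

```python
# Illustrative factor values for Eq. (2); assumed, not from the specification.
gamma = {"Bp": 0.95, "Bt": 0.85, "Bf": 0.98, "V": 0.90, "K": 0.99, "P": 0.97}

# Total coherence is the product of all factor terms.
gamma_total = 1.0
for g in gamma.values():
    gamma_total *= g

# Decorrelation contributed by each source X is defined as 1 - gamma_X.
decorrelation = {k: 1.0 - v for k, v in gamma.items()}
```

Because every factor lies in [0, 1], the total coherence can never exceed the smallest factor; a single strong decorrelation source (here the temporal term) dominates the product.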
When an earthquake causes buildings to collapse, the microwave scattering properties of the buildings change. This change causes a decrease in coherence, which appears as an additional term γD (i.e., coherence due to earthquake damage) multiplied into the original coherence. What is needed is the ability to spatially isolate the damaged areas (γD<1) from undamaged areas (γD=1), so that DPMs (Damage Proxy Maps) can be reliably produced from radar images following major earthquakes or other natural or anthropogenic disasters.