Abstract: Structural health monitoring (SHM) aims to assess the performance of civil infrastructure and ensure its safety. Automated detection of in situ events of interest, such as earthquakes, within extensive continuous monitoring data is important for the timeliness of subsequent data analysis. To overcome the poor timeliness of manual identification and the inconsistency among sensors, this paper proposes an automated seismic event detection procedure that is both interpretable and robust. The sensor-wise raw time series is transformed into image data, which enhances class separability while endowing the data with visual understandability. Vision Transformers (ViTs) and Residual Networks (ResNets), aided by a heat map–based visual interpretation technique, are used for image classification. Multiple types of faulty data that could disturb seismic event detection are considered in the classification. Divergent results from multiple sensors are then fused by Bayesian fusion to output a consistent seismic detection result. A real-world monitoring data set containing four seismic responses of a pair of long-span bridges is used for validation. At the classification stage, ResNet-34 achieved the best accuracy, over 90%, with minimal training cost. After Bayesian fusion, globally consistent and accurate detection results were obtained with either a ResNet or a ViT. The proposed approach effectively localizes seismic events within multisource, multifault monitoring data, achieving automated and consistent seismic event detection.
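The time-series-to-image step can be illustrated with a minimal sketch. The abstract does not specify the exact transform, so the function below simply windows, min–max normalizes, and reshapes a 1-D acceleration record into a square grayscale image; the window size and normalization are assumptions for illustration only.

```python
import numpy as np

def series_to_image(x, size=64):
    """Convert a 1-D sensor series into a size x size grayscale image.

    A hypothetical sketch: truncate (or zero-pad) to size*size samples,
    min-max normalize to [0, 1], and reshape row-wise. The paper's actual
    transformation may differ.
    """
    x = np.asarray(x, dtype=float)[: size * size]
    if x.size < size * size:
        # zero-pad segments shorter than one full image
        x = np.pad(x, (0, size * size - x.size))
    lo, hi = x.min(), x.max()
    img = (x - lo) / (hi - lo + 1e-12)  # scale to [0, 1]
    return img.reshape(size, size)

# Example: a noisy sinusoid mimicking an acceleration record
rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 40 * np.pi, 64 * 64))
sig = sig + 0.1 * rng.standard_normal(sig.size)
img = series_to_image(sig)
print(img.shape)  # (64, 64)
```

Images produced this way can be fed directly to standard image classifiers such as the ResNets and ViTs named above.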
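The multi-sensor fusion step can likewise be sketched. Assuming the per-sensor classifier outputs a probability that a given segment is seismic, and assuming sensors are conditionally independent given the true class (a naive-Bayes assumption made here for illustration; the paper's exact fusion rule is not stated in the abstract), the posteriors combine in log-odds space:

```python
import numpy as np

def bayesian_fusion(sensor_probs, prior=0.5):
    """Fuse per-sensor seismic probabilities into one global posterior.

    Hypothetical naive-Bayes sketch: each p_i is a classifier output
    P(seismic | sensor i) computed under the same prior. Under conditional
    independence, the fused log-odds are
        (1 - n) * logit(prior) + sum_i logit(p_i).
    """
    p = np.clip(np.asarray(sensor_probs, dtype=float), 1e-12, 1 - 1e-12)
    n = p.size
    prior_logit = np.log(prior / (1 - prior))
    log_odds = (1 - n) * prior_logit + np.sum(np.log(p / (1 - p)))
    return 1.0 / (1.0 + np.exp(-log_odds))  # back to probability

# Three sensors agree on "seismic"; one faulty sensor disagrees
fused = bayesian_fusion([0.9, 0.8, 0.95, 0.3])
print(round(fused, 3))
```

Because the fused posterior pools evidence across channels, a single faulty or noisy sensor is outvoted, which is how divergent sensor-wise decisions collapse into one consistent detection result.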