
In recent years, machine learning techniques have been widely used in data-driven battery diagnostics due to their flexibility and nonlinear fitting capabilities [[32], [33], [34], [35], [36]]. Such learning machines extract relevant features that characterize battery degradation from battery data and establish relationships between these features and health conditions [37]. AI-driven hierarchical frameworks harness cutting-edge advancements to predict battery behavior, providing both qualitative and quantitative diagnostics across the entire lifecycle [38,39]. For example, one study used charging data to design 39 domain features based on differential methods (incremental capacity and differential voltage) [40]. A stacked ensemble learning model, consisting of four base learners and one meta-learner, achieved accurate capacity estimation on a dataset comprising 420 batteries and nine battery packs. This study addressed the challenge of accurately estimating battery capacity and predicting capacity decay under complex working conditions through feature engineering and machine learning models. It overcame the noise and missing data typical of field data, providing high-accuracy and physically consistent predictions of battery capacity and remaining useful life (RUL). Extracting well-represented features is therefore the focus of traditional machine learning methods. Another study used filter-based, wrapper-based, and embedded methods for selecting health indicators (HI) and evaluated these methods in combination with four widely used machine learning algorithms: artificial neural networks, support vector machines, relevance vector machines, and Gaussian process regression (GPR), on three public datasets. The results indicated that the HI selection method combined with GPR showed superior estimation performance in terms of accuracy and computational efficiency [41].
However, the feature extraction process involves significant computation and testing, and it is challenging to identify features suitable for various operational and environmental conditions.
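To make the differential feature engineering above concrete, the following sketch derives an incremental capacity (dQ/dV) curve from a synthetic constant-current charging segment and extracts simple peak features. The sigmoidal charging curve and the two peak features are illustrative assumptions for this example, not the 39 domain features of the cited study [40].

```python
import numpy as np

def incremental_capacity(voltage, capacity, dv=0.01):
    """Compute an incremental capacity (dQ/dV) curve from a charging segment.

    Minimal sketch: real pipelines smooth the data (e.g. Gaussian filtering)
    before differencing. `voltage` and `capacity` are assumed monotonically
    sampled over a constant-current charge.
    """
    # Resample capacity onto a uniform voltage grid so dQ/dV is well defined.
    v_grid = np.arange(voltage.min(), voltage.max(), dv)
    q_grid = np.interp(v_grid, voltage, capacity)
    dq_dv = np.gradient(q_grid, dv)
    return v_grid, dq_dv

def ic_peak_features(v_grid, dq_dv):
    """Example domain features: IC peak height and its voltage location."""
    i = int(np.argmax(dq_dv))
    return {"peak_height": float(dq_dv[i]), "peak_voltage": float(v_grid[i])}

# Synthetic charging curve: capacity rises fastest near 3.7 V (the IC peak).
v = np.linspace(3.0, 4.2, 400)
q = 2.0 / (1.0 + np.exp(-(v - 3.7) / 0.05))   # Ah, sigmoidal vs. voltage
v_grid, dq_dv = incremental_capacity(v, q)
feats = ic_peak_features(v_grid, dq_dv)
```

As the peak height and position of the IC curve shift with aging, such features serve as inputs to the downstream learners described above.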

Unlike traditional machine learning techniques, which require extensive manual feature engineering, deep learning approaches employ multi-layer neural networks to automatically extract features from raw inputs. For example, a study used a deep convolutional neural network (CNN) to automatically extract highly representative features from partial charging curves, which effectively prevents the loss of important information and results in accurate battery capacity estimation. Additionally, the study used traditional machine learning models as baselines to validate the generalization and accuracy of the proposed deep learning model [42]. Another study utilized raw current, voltage, and temperature signals as inputs to CNNs to achieve precise state of health (SOH) estimation [43]. However, these methods assume that batteries operate within a predefined voltage range to collect the necessary input data. To enhance the flexibility of this approach, another study employed CNNs to map short-term charging data collected within a few hundred seconds to SOH estimation [44]. This method allows input data to be collected at any stage during battery charging, enabling the rapid acquisition of data across different voltage ranges. Additionally, a study demonstrated that long short-term memory (LSTM) excels in feature extraction and prediction accuracy [45]. However, as battery application scenarios and operating conditions continue to evolve, the applicability and prediction accuracy of a single model may be limited under varying conditions. To address these limitations, a study introduced an automatic framework, CNN-active state tracking LSTM (ASTLSTM), by combining one-dimensional (1-D) CNNs and ASTLSTM networks [46]. This end-to-end framework can automatically extract hierarchical features and optimize model hyperparameters during battery degradation processes.
CNN-ASTLSTM reduces the cost of manual modeling, minimizes the impact of manual intervention on SOH estimation accuracy, and achieves an optimal performance trade-off. It is evident that the effectiveness of deep learning methods still hinges on the feature extraction capability of the models. However, existing deep learning methods often fall short in extracting battery features, especially in capturing long-range dependencies and handling large-scale sequential data. The Transformer model, an emerging deep learning architecture [47], addresses these limitations and has demonstrated exceptional performance in natural language processing [48] and time series analysis [49]. The Transformer model leverages a self-attention mechanism to effectively capture global information and long-range dependencies without relying on traditional recurrent structures. This endows the Transformer model with higher flexibility and accuracy in handling complex battery capacity estimation tasks.
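The self-attention computation at the heart of the Transformer can be sketched in a few lines. This is a generic single-head example over a toy feature sequence, assuming random weights and data, not any cited model's implementation; it shows how every output timestep mixes information from all input timesteps, which is what enables long-range dependency capture without recurrence.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence (minimal sketch).

    x: (T, d_model) sequence of per-timestep battery features.
    Each output row is a weighted mix of *all* T timesteps, so distant
    cycles can influence the current estimate directly.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 16, 8                                        # 16 timesteps, 8-dim features
x = rng.standard_normal((T, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

Note that `scores` is a full T x T matrix, which is the source of the quadratic cost and memory discussed below for long sequences.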

The Transformer-based approach has been widely adopted for time-series analysis, particularly in the fields of battery status estimation and diagnostics [[50], [51], [52], [53], [54]]. For example, one study extracted high-dimensional feature information using electrochemical impedance spectroscopy (EIS) and then used a Transformer model to estimate the SOH of batteries [55]. The results showed that feature extraction significantly improved the model estimation accuracy across the entire lifecycle, highlighting the superior performance of the Transformer model in handling long-term data. Additionally, another study extracted 28 features related to charge and discharge, applied Pearson correlation coefficients (PCC) for feature selection, and utilized standard Transformer and encoder-only Transformer neural networks for SOH estimation, demonstrating strong performance [56]. Further, a separate study improved the long-term prediction accuracy and temperature adaptability of lithium-ion battery SOH estimation by combining incremental capacity analysis (ICA) with a Transformer network [57]. Specifically, in the feature extraction process, peak features of the incremental capacity curve were extracted using a dual-filtering method, ensuring the quality of the input data. The Transformer model, with its multi-head attention mechanism, enhanced the ability to capture critical features, leading to more accurate SOH estimation. Thus, by integrating feature extraction methods such as EIS, ICA, and PCC, the effectiveness of the Transformer model in estimating battery SOH is significantly enhanced. This approach to feature extraction improves data quality, enabling the model to more accurately capture key features, which in turn markedly improves prediction accuracy and adaptability. To further leverage the strengths of deep learning architectures, another study proposed a novel SOH estimation method based on data preprocessing and a CNN-Transformer framework [58].
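The PCC-based filter step mentioned above can be sketched as follows. The 0.8 threshold and the synthetic features are illustrative assumptions for this example, not values from the cited studies: one feature tracks the SOH trend closely, the other is pure noise, and only the first survives selection.

```python
import numpy as np

def select_by_pcc(features, target, threshold=0.8):
    """Keep features whose |Pearson correlation| with SOH exceeds a threshold.

    features: (n_cycles, n_features); target: (n_cycles,) SOH labels.
    Sketch of a filter-style selection step; the threshold is illustrative.
    """
    pcc = np.array([np.corrcoef(features[:, j], target)[0, 1]
                    for j in range(features.shape[1])])
    keep = np.abs(pcc) >= threshold
    return features[:, keep], pcc, keep

rng = np.random.default_rng(1)
n = 200
soh = np.linspace(1.0, 0.8, n)                   # SOH fading over n cycles
f_good = soh + 0.01 * rng.standard_normal(n)     # tracks degradation closely
f_noise = rng.standard_normal(n)                 # uninformative feature
X = np.column_stack([f_good, f_noise])
X_sel, pcc, keep = select_by_pcc(X, soh)
```

Filtering out weakly correlated features before training keeps the Transformer's input compact and improves the signal-to-noise ratio of what the attention layers must model.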
Features were selected and processed using PCC, principal component analysis, and min-max feature scaling. Validation on the NASA battery dataset demonstrated the model's high accuracy and stability in estimating battery SOH. However, the Transformer model faces high computational complexity and memory consumption when processing long sequential data, limiting its efficiency in practical applications. To address this issue, one study introduced a multi-head probabilistic sparse self-attention mechanism and "distillation" techniques based on the Transformer model, significantly reducing computational complexity and memory consumption while enhancing prediction speed [59]. Another study proposed a method based on a probsparse self-attention Transformer and multi-scale temporal feature fusion for estimating cell SOH [60]. By incorporating the cross-stage partial probsparse self-attention mechanism, each key (K) in the scaled dot-product computation can focus on the primary queries (Q), significantly optimizing computational efficiency and memory usage. Furthermore, the introduction of dilated causal convolution effectively expands the receptive field without increasing computational load, enabling the Transformer model to capture long-range dependencies more efficiently. In summary, these improvements significantly enhance the performance and applicability of the Transformer model in handling complex time-series data.
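A minimal sketch of the probsparse idea, in the style of such sparse-attention variants: only the most "active" queries attend over all keys, while the remaining queries fall back to the mean of the values. The max-minus-mean activity measure and the fixed top-u selection here are illustrative simplifications, not the exact formulation of the cited methods.

```python
import numpy as np

def probsparse_attention(q, k, v, u):
    """ProbSparse-style self-attention (illustrative sketch).

    Only the `u` most 'active' queries (largest max-minus-mean score gap)
    perform full attention over all keys; lazy queries output mean(V).
    This cuts the dominant cost from O(T^2) toward O(u * T).
    """
    T, d_k = q.shape
    scores = q @ k.T / np.sqrt(d_k)                  # (T, T) full scores (for clarity)
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(sparsity)[-u:]                  # indices of active queries
    out = np.tile(v.mean(axis=0), (T, 1))            # lazy queries -> mean of values
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # softmax over active rows only
    out[top] = w @ v                                 # active queries attend fully
    return out, top

rng = np.random.default_rng(2)
T, d = 32, 8
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
out, top = probsparse_attention(q, k, v, u=8)
```

In a real implementation the score matrix itself is only computed for sampled key subsets, which is where the actual complexity savings come from; this sketch computes it fully for readability.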

Overall, these advancements have significantly enhanced the performance and applicability of Transformer models in handling complex time-series data. However, despite these improvements, enhanced Transformer models often focus on long-range dependencies while neglecting the capture of local features. Moreover, the high computational demands of traditional attention mechanisms considerably increase computational costs, severely hindering the adoption of this approach. In addition, the complexity of the model structure makes it challenging to adapt to scenarios with insufficient training data or varying data distributions. To address these issues, we propose the predictive pretrained Transformer (PPT) model. Built on the Transformer architecture, this model employs a multi-head probsparse self-attention mechanism, which not only maintains high performance but also significantly reduces computational complexity and memory usage. By integrating 1-D convolutions with the probsparse attention mechanism, the model effectively captures correlations across different temporal dimensions. Furthermore, the incorporation of transfer learning reduces model training costs and enhances accuracy and generalization capabilities. The specific battery SOH estimation workflow is illustrated in Fig. 1. The main contributions of this study are summarized as follows: