Academic Journal
Predicting Screening Efficiency of Probability Screens Using KPCA-GRNN with WP-EE Feature Reconstruction
Title: Predicting Screening Efficiency of Probability Screens Using KPCA-GRNN with WP-EE Feature Reconstruction
Authors: Qingtang Chen, Yijian Huang
Source: Advances in Mathematical Physics, Vol 2024 (2024)
Publisher Information: Wiley, 2024
Publication Year: 2024
Collection: LCC:Physics
Subject Terms: Physics, QC1-999
More Details: The screening system is a nonlinear and non-Gaussian complex system. To better characterize its attributes and improve the prediction accuracy of screening efficiency, this study involves the acquisition of vibration signals and screening efficiency data under various operational conditions. Subsequently, empirical mode decomposition energy entropy (EMD-EE), variational mode decomposition energy entropy (VMD-EE), and wavelet packet energy entropy (WP-EE) features are extracted from the time series vibration signals, and three single-input energy entropy-generalized regressive neural network (GRNN) prediction models are established and compared. Furthermore, we introduce the kernel principal component analysis (KPCA)-WP-EE feature reconstruction-GRNN prediction algorithm. This approach involves reconstructing the feature vector by optimizing WP-EE-GRNN prediction under varying parameters. The parameterized GRNN model is then analyzed through secondary reconstruction involving KPCA dimensionality reduction of the features. The results show that WP-EE-GRNN achieves superior prediction accuracy compared to box dimension (d)-GRNN, box dimension-back propagation neural network (BPNN), d-weighted least squares support vector machine, WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, and VMD-EE-GRNN. Additionally, the WP-EE feature reconstruction-GRNN algorithm exhibits higher prediction accuracy than the single-input WP-EE-GRNN algorithm. The WP-EE-GRNN prediction algorithm using KPCA dimensionality reduction and secondary reconstruction not only achieves higher prediction accuracy than prior to KPCA dimensionality reduction but also improves prediction efficiency. After extracting the two core principal components, with model parameters KPCA σ² = 0.85, optimal GRNN Spread = 0.051, and optimal number of training samples N = 19, the average prediction error is 1.434%, the minimum prediction error reaches 0.708%, the minimum root mean square error reaches 0.836%, and the Pearson correlation coefficient is the closest to 1; these results all represent the optimum achievable values. The prediction model selects the optimal parameter combination scheme for the system.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 1687-9139
Relation: https://doaj.org/toc/1687-9139
DOI: 10.1155/2024/5588864
Access URL: https://doaj.org/article/fc06623ae72940be9d84b64a36598944
Accession Number: edsdoj.fc06623ae72940be9d84b64a36598944
Database: Directory of Open Access Journals
Full Text:

Predicting Screening Efficiency of Probability Screens Using KPCA-GRNN with WP-EE Feature Reconstruction

1. Introduction

The probability screen is a typical nonlinear, non-Gaussian, time-varying, multi-input and multi-output vibration system, producing different vibration signal outputs under varying operational conditions. Screening efficiency is an important indicator for gauging the overall performance of the screening system.
However, numerous factors influence the system's screening efficiency, including vibration frequency, amplitude, inclination, and various operational conditions, alongside structural characteristics, material properties, and other relevant factors [1]. Currently, some scholars use simulation software to simulate the screening structure characteristics by establishing mechanical models. This approach enables the exploration of how screening parameters impact screening efficiency [2]. Li and Huang [3], Shi and Huang [4], Tang [5], and Zheng and Huang [6] delved into the influence of kernel parameters in the least squares support vector machine (LS-SVM) on the prediction of screening efficiency for probability screens. Their research also encompassed screening efficiency prediction through SVM based on higher-order cumulant auto-regressive (AR) models and examined the effect of vibration parameters on the system's screening efficiency by studying the Wigner higher-order spectrum characteristics of vibration signal time series. In these studies, feature extraction and machine learning techniques for probability screen time series were relatively straightforward, primarily focusing on features such as AR model coefficients and fractal dimensions, or on using the time series itself as a high-dimensional or single-input feature vector. Moreover, SVM was predominantly chosen as the machine learning method. Consequently, ample room remains for advancing time series feature extraction, classification and prediction methodology, and prediction efficiency within the domain of probability screen vibration systems. Selecting feature extraction and prediction algorithms suitable for the system's screening efficiency, along with the corresponding model parameter optimization, continues to be a pivotal area of research in probability screen studies.

In recent years, a growing body of research has explored time series feature extraction and prediction algorithms. Many scholars have achieved promising results in state feature extraction, classification prediction, equipment fault diagnosis, state recognition and monitoring, target detection, and other aspects by combining the empirical mode decomposition energy entropy (EMD-EE) of signals with machine learning methods such as SVM [7]. Additionally, the wavelet packet energy entropy (WP-EE) of a signal has emerged as a characteristic parameter of system state, often paired with optimization methods such as the gray wolf optimizer-SVM, back propagation neural network (BPNN), LSSVM, and particle swarm optimization. This approach has been applied in mechanical equipment, medical health, and power systems for tasks like fault diagnosis, feature parameter extraction, and state recognition [9, 11]. The generalized regressive neural network (GRNN) is a radial basis function network based on mathematical statistics. It has robust nonlinear mapping ability and rapid learning, making it particularly well-suited for scenarios with limited sample sizes [12].
GRNN models have been widely used in state prediction, pattern recognition, fault diagnosis, and other areas. For instance, in a study [13] comparing several time series feature extraction methods, BPNN and GRNN were selected to predict and compare the damping efficiency of a magnetorheological system, with the GRNN model achieving favorable prediction results. Kernel principal component analysis (KPCA) is a nonlinear method for data dimensionality reduction and feature extraction that can effectively reduce vector dimensions and reconstruct feature vectors [14]. When combined with machine learning, KPCA can reduce the dimensions of feature vectors through parameter optimization, improving computational efficiency in prediction and classification tasks while preserving prediction accuracy. Despite these advancements, there is a dearth of literature concerning the integration of WP-EE with GRNN [15]. Furthermore, there is a notable absence of research on the combination of WP-EE and GRNN applied to probability screens, as well as on the potential of WP-EE with GRNN-KPCA.

This paper focuses on an experimental prototype of the probability screen as its research subject. It tests and obtains the time series and screening efficiency of the system vibration signal under various operational conditions, calculates the energy entropy of the signal decomposed by wavelet packet decomposition (WPD) under different decomposition layers and wavelet basis functions, and computes the energy entropy with EMD and VMD. This study also uses a GRNN model to predict the screening efficiency from these single-input characteristics and conducts a comparative analysis of the prediction performance among d-GRNN, d-BPNN, d-Weighted LSSVM, WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, variational mode decomposition energy entropy (VMD-EE)-GRNN, and WP-EE-GRNN. By analyzing the prediction effect of single-input WP-EE-GRNN, a WPD decomposition scheme is extracted for feature vector reconstruction, and a second feature reconstruction is then performed through KPCA dimensionality reduction. On the basis of setting different WPD, GRNN, and KPCA parameters, the KPCA-WP-EE-GRNN prediction method is established to predict the screening efficiency, analyze the prediction effect, and select the optimal parameter combination scheme for the probability screen system's prediction algorithm.

2. Basic Theory

2.1. WP-EE

WPD involves projecting a time series signal into the space of the basis function and decomposing the signal into subsignals of high- and low-frequency components through a series of filters with different center frequencies but the same bandwidth; the signal can be decomposed over multiple layers.
WPD is a decomposition method without redundancy or omission [9, 11].

For a time series signal x(t), given the scale function φ(t) and wavelet function ∅(t), the two-scale equations of the wavelet packet transform are as follows [11, 16]:

$$\omega_{2n}(t) = 2\sum_{k} h_k\,\omega_n(2t-k), \qquad \omega_{2n+1}(t) = 2\sum_{k} g_k\,\omega_n(2t-k), \tag{1}$$

where ω0(t) = φ(t) and ω1(t) = ∅(t), while h(k) and g(k) represent the corresponding low-pass and high-pass filter coefficients, respectively. The sequence {ωn} constructed by Equation (1) is the wavelet packet determined by the basis function. Notable wavelet basis functions include the Haar wavelet, the DB (Daubechies) series, the Biorthogonal system, the Coiflet system, the Symlets system, the Morlet wavelet, the Mexican Hat wavelet, and the Meyer wavelet, among others [17].

Information entropy can reflect the uncertainty of a signal or system, or the complexity of a random signal [8]. The information entropy computed from the energies of the subsignals obtained by WPD is called WP-EE [8]; different WP-EE values are obtained under different WPD layers and wavelet basis functions. The WP-EE of a signal is expressed as follows [9, 11]:

$$H = -\sum_{k=0}^{2^{j}-1} \varepsilon_{jk}\,\lg \varepsilon_{jk}, \tag{2}$$

where εjk is the relative energy of the kth WPD component of the jth layer.
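As a concrete illustration of Equation (2), the following is a minimal sketch of the WP-EE computation for one analysis segment, assuming the PyWavelets package; the function name wp_energy_entropy and the db3/level-3 defaults are illustrative choices echoing the paper's settings, not the authors' code.

```python
import numpy as np
import pywt

def wp_energy_entropy(x, wavelet="db3", level=3):
    """WP-EE of signal x for one decomposition level and wavelet basis."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    # Energy of each terminal node (sub-band) at the chosen level.
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    eps = energies / energies.sum()          # relative energies eps_jk
    eps = eps[eps > 0]                       # avoid log(0)
    return -np.sum(eps * np.log10(eps))      # H = -sum eps * lg eps

# Example: a 1,024-point segment, matching the paper's analysis windows.
x = np.random.randn(1024)
print(wp_energy_entropy(x, wavelet="db3", level=3))
```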
2.2. GRNN

GRNN is a radial basis function network based on mathematical statistics. Its theoretical basis is nonlinear regression analysis. Its main structure includes four layers: the input, pattern, summation, and output layers. The smoothing factor Spread is an important parameter of GRNN models; the model's output can typically be changed by adjusting the smoothing factor value [13].
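Since GRNN has no iterative training loop, the whole model reduces to a kernel-weighted average controlled by Spread. Below is a minimal sketch of such a regressor in plain numpy, written to make the role of Spread explicit; it is a generic textbook GRNN, not the authors' implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, spread):
    """Generalized regression NN: Gaussian-kernel weighted average of y_train."""
    X_train, y_train, X_test = map(np.asarray, (X_train, y_train, X_test))
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # pattern layer: squared distances
        w = np.exp(-d2 / (2.0 * spread ** 2))         # Gaussian activations
        preds.append(np.dot(w, y_train) / np.sum(w))  # summation and output layers
    return np.array(preds)
```

A small Spread makes the prediction follow the nearest training samples closely, while a large Spread smooths toward the global mean, which is why Spread is swept when tuning each model.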
2.3. KPCA

KPCA is based on the principle of kernel functions. It involves projecting the input space into a high-dimensional space through nonlinear mapping and then conducting principal component analysis on the mapped data within that high-dimensional space. This method possesses robust nonlinear processing ability. Typically, KPCA is applied by adjusting the kernel function parameter σ² to obtain different feature vectors and assessing the contribution rate of each principal component. The principal components with high contribution rates are selected as the features for analysis, thereby accomplishing dimensionality reduction [14].
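The following is a minimal numpy sketch of RBF-kernel KPCA with contribution rates; the kernel form k(x, z) = exp(-||x - z||² / (2σ²)) is our assumption about how σ² parameterizes the kernel here.

```python
import numpy as np

def kpca(X, sigma2, n_components):
    """Project X onto its leading kernel principal components (RBF kernel)."""
    X = np.asarray(X)
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2.0 * sigma2))               # kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                 # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]         # sort descending
    vals = np.clip(vals, 0.0, None)                # guard tiny negative round-off
    contrib = vals / np.sum(vals)                  # contribution rate of each PC
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas, contrib                    # projected features, rates
```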
3. Data Source and Processing

3.1. Probability Screen Structure and Vibration Test

This experiment uses the probability screen experimental machine (Figure 1) developed by Huang Yijian's team. Its structure comprises a screen, feed inlet, discharge outlet, screen body, vibration excitation motor, support, and other components. The experimental test system consists of a computer, a data acquisition card (PCI6014), a piezoelectric acceleration sensor (HK9103), a charge amplifier (HK9205), and a LabVIEW software environment. Factors affecting screening efficiency during operation include material properties, screen surface inclination, feed speed, screen amplitude, and vibration frequency. This experiment mainly considers the impact of screen amplitude A and excitation frequency f on screening efficiency. The vibration test is conducted at A = 3, 4, 5, 6, and 7 mm and f = 15, 20, 25, 30, 35, and 40 Hz, and the two parameters are combined into 30 working states. The signal sampling frequency is set to fs = 1,000 Hz, and the time series of vibration acceleration in the Z direction and the corresponding screening efficiency are obtained. The screen aperture in the experiment measures 1.0 mm × 1.0 mm, and the selected material has a particle size of 0.6 mm [1, 5].

[Figure 1(a): The probability screen experimental machine.]

3.2. WP-EE Characteristics of Time Series

For the detected vibration signal time series, 1,024 test values from the stable signal section are chosen for analysis under each working condition. For the extracted analysis segments, WPD is applied across decomposition layers 1–6. The wavelet basis functions used include Daubechies wavelets (db1–5), Biorthogonal wavelets (bior2.2, 1.3, 1.5), Coiflet wavelets (coif1–5), Symlets wavelets (sym1–5), Fejer–Korovkin orthogonal wavelets (fk14, 8, 6, 4), and the Discrete Meyer wavelet (dmey). WPD is employed to extract the wavelet coefficients of each subsignal, followed by the calculation of the information entropy of the total energy. Figure 2 shows the WP-EE of the vibration signal of the probability screen at A = 3 mm and f = 15 Hz. It illustrates that WP-EE exhibits a consistent trend in its variation with the number of decomposition layers s: a greater number of decomposition layers s results in higher WP-EE values, reflecting increased information about the complexity of the state signal. For the same number of layers s, WP-EE values differ significantly across wavelet basis functions. The energy entropy values corresponding to Daubechies wavelets are relatively large, followed by those corresponding to Biorthogonal wavelets, while the energy entropy values corresponding to the other basis functions are relatively close. Furthermore, the WP-EE values calculated under various operational conditions exhibit substantial numerical variations. This disparity effectively captures the distinctions in signal complexity across working conditions, serving as a significant characteristic of the system's state.

[Figure 2: WP-EE of vibration signal at A = 3 mm and f = 15 Hz under different parameters.]

4. Prediction Algorithm Design

4.1. KPCA-GRNN Prediction Algorithm Based on WP-EE

Considering the distinct characteristics of WP-EE in different states, it is proposed to obtain the energy entropy reconstruction feature vector through the WPD of the time series and use the KPCA-GRNN model to predict and analyze the probability screening efficiency. The steps of the algorithm are as follows (a code sketch of the error measures in steps (4) and (5) follows Figure 3):

(1) Extract and analyze the signal. Detect the time series and screening efficiency under different operating conditions and select a stable signal segment for analysis.

(2) Compute the WP-EE. WPD is applied to the time series of each state analysis signal for different layers and wavelet basis functions. The wavelet decomposition coefficients are obtained for each node, along with the total energy and its information entropy for each coefficient resulting from the time series decomposition under different states.

(3) Set up training and test sample sets. The WP-EE of the time series in different states is taken as the single input eigenvalue, and the corresponding screening efficiency is taken as the state output value to form 30 sample sets. The first N (N ranging from 16 to 25) samples constitute the training set, and the last five samples represent the test set.

(4) GRNN model prediction. Set the training sample number N and the smoothing factor Spread range of the GRNN model, predict the screening efficiency with the GRNN model, and obtain the mean absolute value of the relative error between the predicted and experimental values of the five test samples, called the absolute average error R_abs_av:

$$R_{\mathrm{abs\_av}} = \frac{1}{5}\sum_{i=1}^{5}\left|Y_i^{\mathrm{test}} - Y_i^{\mathrm{out}}\right|. \tag{3}$$

For the same number of training samples N, the minimum absolute average error obtained over the m different Spread values is designated as the prediction error R:

$$R = \min\left(R_{\mathrm{abs\_av},1},\; R_{\mathrm{abs\_av},2},\; \ldots,\; R_{\mathrm{abs\_av},m}\right). \tag{4}$$

The average of the prediction errors R over the different training sample numbers N (N ranging from 16 to 25) is termed the average prediction error R_av, expressed as follows:

$$R_{\mathrm{av}} = \frac{1}{10}\sum_{N=16}^{25} R_N. \tag{5}$$

(5) Optimize the WP-EE eigenvector of the reconstructed features. Compare and analyze the prediction accuracy of WP-EE under different decomposition levels and wavelet basis functions, and select the WP-EE with the smaller average prediction errors R_av for the reconstruction feature vector.

(6) KPCA and secondary feature vector reconstruction. KPCA is performed on the reconstructed energy entropy feature vector, and the kernel function parameter σ² is adjusted. Determine the contribution rate of each principal component, take the first several principal components with high contribution rates, and reduce the dimensions to reconstruct the feature vector a second time.

(7) KPCA-GRNN prediction and optimal prediction parameter combination determination. Use different parameter values for σ². Input the secondary reconstruction feature vector after dimensionality reduction to GRNN prediction according to step (4). Select the best prediction result and the corresponding parameter combination scheme based on the analysis of the prediction results.

To clearly illustrate the above process of the KPCA-GRNN prediction algorithm based on WP-EE, the algorithm flowchart is drawn in Figure 3.

[Figure 3: Flowchart of KPCA-GRNN prediction algorithm based on WP-EE.]
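The error measures of steps (4) and (5), Equations (3)-(5), can be written directly as in the following sketch. It reuses grnn_predict from the GRNN sketch above; X is assumed to be the (30, n_features) feature matrix and y the 30 efficiencies, and the N and Spread ranges mirror those stated in the text.

```python
import numpy as np

def prediction_errors(X, y, N_range=range(16, 26),
                      spreads=np.arange(0.001, 10.0, 0.05)):
    """R per training-set size N (Eq. 4) and the average R_av (Eq. 5)."""
    X, y = np.asarray(X), np.asarray(y)
    R = {}
    for N in N_range:
        X_tr, y_tr = X[:N], y[:N]          # first N samples train
        X_te, y_te = X[-5:], y[-5:]        # last five samples test
        errs = [np.mean(np.abs(y_te - grnn_predict(X_tr, y_tr, X_te, s)))  # Eq. (3)
                for s in spreads]
        R[N] = min(errs)                   # Eq. (4): best Spread for this N
    return R, np.mean(list(R.values()))    # Eq. (5): average over the N values
```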
4.2. Comparison Prediction Algorithm Design

For comparative analysis, the GRNN, BPNN, and weighted LSSVM probability screening prediction methods based on time-series box dimension feature extraction in reference [18] were used to compare the prediction accuracy of screening efficiency. The WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, and VMD-EE-GRNN prediction algorithms with single feature input were then designed and compared with the single-feature-input WP-EE-GRNN prediction algorithm. The WP-d-GRNN, WP-EE-BPNN, and EMD-EE-GRNN prediction algorithms follow steps (1)–(4) in Section 4.1, where the EE in steps (2) and (3) is replaced with the box dimension (d) for WP-d-GRNN, the GRNN in step (4) is replaced with BPNN for WP-EE-BPNN, and the WP-EE in steps (2) and (3) is replaced with EMD-EE for EMD-EE-GRNN. The flow of the EMD-EE-GRNN prediction algorithm is illustrated in Figure 4 (an EMD-EE sketch follows the figures). The VMD-EE-GRNN prediction algorithm follows steps (1) and (4) in Section 4.1, while in steps (2) and (3), the VMD penalty factor M is initially set, followed by configuring the range of the decomposition number K, and the energy entropy of the corresponding decomposed signal is obtained using VMD instead of WP-EE. After step (4), an additional step (5) is introduced: obtain the prediction errors corresponding to different K values, select the K value with the lowest average prediction error under varying N values, and then define the M range. Subsequently, repeat steps (2)–(4) to obtain the minimum prediction error of the VMD-EE-GRNN model across different M values. The flowchart of the VMD-EE-GRNN prediction algorithm is illustrated in Figure 5. It is worth noting that the time series data selected for all three comparison algorithms are the same, and the parameter ranges set during GRNN model prediction (within the dashed box) are consistent. Finally, the prediction performance is assessed by comparing the prediction errors of the algorithms.

[Figure 4: Flowchart of EMD-EE-GRNN prediction algorithm.]

[Figure 5: Flowchart of VMD-EE-GRNN prediction algorithm.]
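For the EMD-based comparison branch, the energy entropy of the intrinsic mode functions can be computed along the same lines as WP-EE. A hedged sketch follows, assuming the PyEMD package (installed as EMD-signal); the paper's exact treatment of the residue and of envelope energy may differ.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def emd_energy_entropy(x):
    """EMD-EE: information entropy of the relative IMF energies."""
    imfs = EMD().emd(np.asarray(x))                 # intrinsic mode functions
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    eps = energies / energies.sum()                 # relative energies
    eps = eps[eps > 0]
    return -np.sum(eps * np.log10(eps))
```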
5. Prediction Result Analysis

5.1. Analysis of Prediction Results of Three Algorithms with Single Feature Input

The EMD-EE is obtained by calculating the modal components, envelope energy, and total energy information entropy of the time series across the 30 operational conditions using EMD, followed by prediction with the GRNN model. In the case of VMD, the procedure begins with setting M = 99 and K between 1 and 15. The energy information entropy of each mode component, envelope energy, and total energy is computed after applying VMD, and VMD-EE is thereby derived. After the initial prediction using the GRNN model, the value of K that yields the smallest average prediction error is determined, and K is set accordingly. With M = 1 : 10 : 200, the energy entropy is then computed after VMD, and a second prediction is carried out using the GRNN model. For the WP-EE of the 30 states obtained by WPD of the time series data under decomposition levels s = 1–6 with the DB3 wavelet basis function, the GRNN model is likewise employed for prediction. The energy entropy obtained by the three methods is used as a single input characteristic value, and the screening efficiency is used as the output value to construct 30 samples. The first N samples, with N ranging from 16 to 25, constitute the training sample set, and the last five samples are used as the test set. Different Spread values are set accordingly, GRNN is used for prediction, and the prediction error R is determined for various numbers N of training samples. A graphical representation of the R–N curves is displayed in Figure 6, showcasing the prediction performance of WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, VMD-EE-GRNN, and WP-EE-GRNN when decomposing layers 1–6 and using DB3 as the basis function. Chen and Huang [18] used three methods, d-GRNN, d-BPNN, and d-Weighted LSSVM, to predict screening efficiency under different training sample sizes. The predicted results are listed in Table 1.

[Figure 6: Comparison of prediction algorithms.]
Table 1: Prediction results of screening efficiency (minimum prediction error R, %) under different training samples. The d-GRNN, d-BPNN, and d-Weighted-LSSVM columns are based on box dimension d [18]; the remaining columns are based on energy entropy.

| Number of training samples | d-GRNN | d-BPNN | d-Weighted-LSSVM | WP-d-GRNN | WP-EE-BPNN | EMD-EE-GRNN | VMD-EE-GRNN | WP-EE-GRNN | KPCA-WP-EE-GRNN |
|---|---|---|---|---|---|---|---|---|---|
| 16 | 5.35 | 5.41 | 6.38 | 3.90 | 4.59 | 5.77 | 4.57 | 2.48 | 1.62 |
| 17 | 5.67 | 5.21 | 6.62 | 2.43 | 4.57 | 5.48 | 4.69 | 1.80 | 1.64 |
| 18 | 5.41 | 5.51 | 6.20 | 1.75 | 4.15 | 5.48 | 5.13 | 1.45 | 1.42 |
| 19 | 5.85 | 5.29 | 6.42 | 1.79 | 4.24 | 5.48 | 5.05 | 1.34 | 0.78 |
| 20 | 5.84 | 5.20 | 6.05 | 1.81 | 3.43 | 5.48 | 5.53 | 1.99 | 0.71 |
| 21 | 5.75 | 5.46 | 5.49 | 2.85 | 2.83 | 5.48 | 2.66 | 1.92 | 0.76 |
| 22 | 5.46 | 4.20 | 5.33 | 3.44 | 3.05 | 5.71 | 2.66 | 1.92 | 0.95 |
| 23 | 5.55 | 3.28 | 4.14 | 3.22 | 1.85 | 5.72 | 2.66 | 1.65 | 0.95 |
| 24 | 5.10 | 3.45 | 4.40 | 1.77 | 1.57 | 5.70 | 2.66 | 1.51 | 0.92 |
| 25 | 6.04 | 3.62 | 4.80 | 3.03 | 1.57 | 5.72 | 2.66 | 2.03 | 0.92 |
| Average | 5.60 | 4.66 | 5.58 | 2.60 | 3.18 | 5.60 | 3.83 | 1.81 | 1.07 |
From the data in Table 1, it can be seen that the minimum prediction errors of the d-GRNN method under different sample sizes lie between 5.10% and 6.04%, with an average of 5.60%. The minimum prediction errors of the d-BPNN method lie between 3.28% and 5.51%, with an average of 4.66%. The minimum prediction errors of the d-Weighted LSSVM method lie between 4.14% and 6.62%, with an average of 5.58%.

Figure 6 provides several noteworthy insights. First, the EMD-EE-GRNN prediction error R demonstrates a slight upward trend with varying N (16–25), ranging from 5.48% to 5.7%, with an average prediction error Rav of 5.60%, reaching its minimum of 5.48% when N = 17–21.
Second, in VMD-EE-GRNN predictions with M = 99, the average prediction error varies from 5% to 7% for different values of K, with the lowest average prediction error Rav = 5.15% found at K = 4. With K = 4, the VMD-EE-GRNN prediction yields an average error ranging from 4.5% to 7% under different M and N values, with the smallest average prediction error of 4.57% achieved for M = 50 and the minimum prediction error dropping to 2.66% within N = 21–25. While VMD-EE-GRNN offers smaller average and minimum prediction errors than EMD-EE-GRNN, its prediction process is more complex. Third, the WP (DB3)-d-GRNN prediction error R demonstrates a slight upward trend with varying N (16–25), ranging from 1.75% to 3.90%, with an average prediction error Rav of 2.60%, reaching its minimum of 1.75% when N = 18 and s = 3. Fourth, the WP (DB3)-EE-BPNN prediction error R demonstrates a slight upward trend with varying N (16–25), ranging from 1.57% to 4.59%, with an average prediction error Rav of 3.18%, reaching its minimum of 1.57% when N = 24–25 and s = 3. Finally, the error R of the WP-EE (DB3)-GRNN prediction is mostly stable or declining with N but varies significantly across decomposition layers, with average prediction errors between 1.90% and 6.17%. Notably, when s = 6, the prediction error is relatively stable and small, with Rav = 1.91%. When N = 21 and s = 3, the prediction error reaches its minimum value of 1.34%.

Overall, the WP-EE-GRNN prediction algorithm consistently demonstrates higher prediction accuracy than d-GRNN, d-BPNN, d-Weighted LSSVM, WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, and VMD-EE-GRNN, showcasing its superior predictive capability.

5.2. KPCA-GRNN Prediction Results Based on WP-EE

5.2.1. Analysis of WP-EE-GRNN Prediction Results

Different wavelet basis functions, including Daubechies wavelets (db1–5), Coiflet wavelets (coif1–5), Symlets wavelets (sym1–5), Biorthogonal wavelets (bior2.2, 1.3, 1.5), reverse Biorthogonal wavelets (rbio2.2, 1.3, 1.5), Fejer–Korovkin orthogonal wavelets (fk14, 8, 6, 4), and the Discrete Meyer wavelet (dmey), were selected, resulting in 26 types of WPD for s = 1–6. This process yields 156 groups of WP-EE (a sketch of this sweep follows).
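The sweep over bases and levels can be organized as below, reusing wp_energy_entropy from the sketch in Section 2.1; the segments array is a stand-in for the 30 measured analysis segments, and the basis list is abbreviated (PyWavelets has no sym1, and the rbio and fk families are omitted here).

```python
import numpy as np

segments = [np.random.randn(1024) for _ in range(30)]   # stand-ins for the 30 states

bases = (["db%d" % i for i in range(1, 6)]
         + ["coif%d" % i for i in range(1, 6)]
         + ["sym%d" % i for i in range(2, 6)]            # PyWavelets starts at sym2
         + ["bior2.2", "bior1.3", "bior1.5", "dmey"])

# One WP-EE vector (30 states) per (basis, level) combination.
wp_ee = {(w, s): [wp_energy_entropy(x, wavelet=w, level=s) for x in segments]
         for w in bases for s in range(1, 7)}
```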
By adjusting the Spread value of the GRNN model parameters and predicting with GRNN, prediction errors were determined for various numbers of training samples N. The average prediction errors corresponding to N values between 7 and 25 were calculated, and the relationship curve between the average prediction error and the number of layers s was plotted for each wavelet basis function, as illustrated in Figure 7.

[Figure 7(a): Average prediction error versus decomposition layer s for each wavelet basis function.]

Figure 7 reveals that the average prediction error Rav presents an obviously nonlinear relationship with the number of decomposition layers s, but the changing trend of the relationship curve remains relatively consistent across wavelet basis functions. Notably, the average prediction error is generally small when s = 3 and s = 6, with Rav ranging between 1.8% and 6.7%.

5.2.2. WP-EE Reconstruction Feature Vector-GRNN Prediction

To further improve the prediction accuracy, the 14 WP-EE features with small average prediction errors (1.8%–2.1%) are taken as the state eigenvectors and, together with the screening efficiency, form a 30 × 15 sample set. The first N = 7–25 samples form the training set, and the last five samples form the test set. The optimized prediction is carried out by setting distinct GRNN model parameter Spread values. The R–N curve is presented in Figure 8. It is evident that the prediction error after reconstruction decreases with the increase in the number of training samples N, with error values ranging between 1.25% and 2.29%. The change is relatively stable, and the average prediction error Rav is 1.703%. When N = 24 and 25, the prediction error achieves a minimum of 1.264%. In comparison to using a single WP-EE input without reconstruction, the prediction error is significantly reduced, and the sensitivity to the number of training samples is substantially diminished. However, since the feature vector comprises 14 dimensions, the prediction requires a significant amount of computational time.

[Figure 8: R–N curve of KPCA-WP-EE-GRNN prediction.]

5.2.3. KPCA Dimensionality Reduction and Feature Vector Secondary Reconstruction

The WP-EE feature vector reconstruction process expands the original 1-dimensional vector to a 14-dimensional vector, significantly increasing the computational time required for prediction. To reduce the dimensions while improving prediction efficiency and enhancing prediction accuracy, the KPCA method is adopted with the RBF kernel function. The kernel function parameter σ² is calculated and adjusted, taking into consideration the contribution rate of each principal component, and the first n eigenvectors with the highest contribution rates are selected for secondary reconstruction, effectively achieving dimensionality reduction. Parameter adjustments involve varying σ² and extracting 1, 2, 3, 4, and 5 principal components for further feature vector reconstruction to optimize predictions using the GRNN model (see the sketch below).
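Putting the pieces together, here is a sketch of the secondary reconstruction at the paper's reported optimum (two principal components, σ² = 0.85, Spread = 0.051, N = 19), reusing kpca and grnn_predict from the earlier sketches; F and y are stand-ins for the 30 × 14 WP-EE feature matrix and the 30 measured efficiencies.

```python
import numpy as np

F = np.random.rand(30, 14)         # stand-in: reconstructed WP-EE features
y = 60 + 30 * np.random.rand(30)   # stand-in: screening efficiencies (%)

Z, contrib = kpca(F, sigma2=0.85, n_components=2)    # secondary reconstruction
print("leading contribution rates:", np.round(contrib[:5], 3))
y_hat = grnn_predict(Z[:19], y[:19], Z[-5:], spread=0.051)  # N = 19; last 5 test
```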
5.2.4. KPCA-WP-EE-GRNN Prediction Results

To evaluate the KPCA-WP-EE-GRNN prediction algorithm, we extracted the first n (ranging from 1 to 5) principal components obtained by KPCA and reconstructed the resulting n-dimensional feature vectors. Subsequently, we established a sample set, adjusting the KPCA kernel function parameter σ² = 0.05 : 0.2 : 2.85, the GRNN model parameter Spread = 0.001 : 0.05 : 10, and the number of training samples N = 7–25. Following the KPCA-WP-EE-GRNN prediction algorithm process, we calculated the average prediction error for different N values and different numbers n of principal components. Additionally, we employed the 14 eigenvectors as input and maintained consistent parameter settings of σ² and Spread for GRNN prediction. Comparing the average prediction error before and after the secondary reconstruction, we plotted the R–N curve (as depicted in Figure 8) to analyze the variance in prediction outcomes. We also considered principal component contribution rates, average prediction errors, minimum errors, and the optimal σ² for the prediction algorithm. The prediction results of screening efficiency under different training samples, namely the average prediction error under different N (Rav), the minimum prediction error under different N (Rmin), the minimum RRMSE under different N (RRMSEmin), the minimum root mean square error under different N (RMSEmin), and the Pearson correlation coefficient, together with the relevant parameters of the algorithm before and after KPCA dimensionality reduction and reconstruction, such as the optimal Spread and the optimal number of training samples N, are listed in Table 2.
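For reference, the two goodness measures reported alongside the errors in Table 2 can be computed as in this minimal numpy sketch; the function names are illustrative.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, in the units of y (percentage points here)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between measured and predicted values."""
    yt = np.asarray(y_true) - np.mean(y_true)
    yp = np.asarray(y_pred) - np.mean(y_pred)
    return np.sum(yt * yp) / np.sqrt(np.sum(yt ** 2) * np.sum(yp ** 2))
```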
Table 2: Prediction results and optimal parameters related to the algorithm before and after KPCA dimensionality reduction and reconstruction.

| Feature reconstruction | Rav (%) | Rmin (%) | RRMSEmin (%) | RMSEmin (%) | Pearson correlation coefficient | KPCA parameter σ² | Spread value | Number of training samples |
|---|---|---|---|---|---|---|---|---|
| 14 energy entropies | 1.703 | 1.264 | 1.274 | 1.051 | 0.988 | — | 0.001 | 24, 25 |
| Extracting 5 principal components | 1.685 | 0.979 | 1.089 | 0.907 | 0.993 | 0.85 | 0.101 | 25 |
| Extracting 4 principal components | 1.666 | 1.004 | 1.103 | 0.924 | 0.994 | 2.25 | 0.101 | 24, 25 |
| Extracting 3 principal components | 1.614 | 0.996 | 1.219 | 1.030 | 0.995 | 1.05 | 0.151 | 25 |
| Extracting 2 principal components | 1.434 | 0.708 | 1.166 | 0.836 | 0.997 | 0.85 | 0.051 | 19 |
| Extracting 1 principal component | 1.703 | 1.301 | 1.454 | 1.209 | 0.996 | 0.25 | 0.051 | 25 |
Tables 1 and 2 and Figure 8 reveal that the R–N curve for the input of the secondary reconstruction feature vector, after the principal component dimensionality reduction by KPCA, is basically consistent with the trend before dimensionality reduction; it likewise exhibits a stepwise decline as N increases. However, there are notable improvements in Rav, Rmin, RRMSEmin, RMSEmin, and the Pearson correlation coefficient compared to before dimensionality reduction. The prediction outcomes display a clear relationship with the number of extracted principal components. In particular, when extracting 2, 3, 4, or 5 principal components, Rav, Rmin, RRMSEmin, and RMSEmin are significantly reduced and the Pearson correlation coefficient is improved compared to the results before dimensionality reduction. Extracting two principal components yields the lowest Rav at 1.434%, Rmin at 0.708%, and RMSEmin at 0.836%, the lowest values achieved, while the Pearson correlation coefficient of 0.997 is the closest to 1. In this scenario, the corresponding model parameters are as follows: KPCA parameter σ² = 0.85, optimal GRNN model parameter Spread = 0.051, and optimal number of training samples N = 19. These parameters constitute the optimal parameter combination scheme for the model. The application of KPCA dimensionality reduction and secondary reconstruction of feature vectors not only reduces prediction time but also enhances prediction accuracy.

6. Conclusion

Through the design of the KPCA-GRNN prediction algorithm and comparative analysis with WP-EE-based algorithms, several key conclusions can be drawn regarding feature extraction from probability screen vibration signals and screening efficiency prediction:

(1) Compared with the d-GRNN, d-BPNN, d-Weighted LSSVM, WP-d-GRNN, WP-EE-BPNN, EMD-EE-GRNN, and VMD-EE-GRNN algorithms, the WP-EE-GRNN algorithm exhibits the highest prediction accuracy for screening efficiency in general.
Furthermore, the prediction error for both algorithms either decreases or remains stable as the number of training samples increases.

(2) Using the WP-EE-GRNN algorithm, the relationship between the predicted average error and the number of training samples N and decomposition layers s demonstrates a clear nonlinear pattern. Notably, most of the predicted average errors are minimal when the number of layers s = 3 or 6.

(3) The optimized feature vector for GRNN prediction reconstruction, derived from WP-EE under different decomposition levels and basis functions, outperforms GRNN with a single energy entropy input feature. However, it entails increased computational time.

(4) The WP-EE-GRNN prediction algorithm incorporating KPCA dimensionality reduction and secondary reconstruction not only achieves superior prediction accuracy compared to pre-KPCA dimensionality reduction but also reduces prediction time and improves prediction efficiency.

(5) The KPCA-GRNN prediction algorithm based on WPD energy entropy demonstrates a high level of prediction accuracy and facilitates the selection of an optimal model parameter combination for screening efficiency prediction in probability screens.

Thanks go to Huang Yijian's team at Huaqiao University for their strong support, guidance, and help.

Data Availability

The (grnnquxianhuizhi_KPCA.m) data used to support the findings of this study were supplied by Chen Qingtang under license and so cannot be made freely available. Requests for access to these data should be made to Chen Qingtang, e-mail: chenqingt@yeah.net.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was funded by the Guiding Science Project of Fujian Province (2018H0031 and 2021H0059), team construction funds for advanced materials and laser processing of Putian University, and the Fujian Province Key Laboratory of CNC Machine Tools and Intelligent Manufacturing (KJTPT2019ZDSYS02020063).

Footnotes

1. Academic Editor: Zine El Abiddine Fellah

References

1. Tang Q. and Huang Y. J., Analysis of probability screen efficiency using bispectrum estimation based on AR model, Journal of Huaqiao University (Natural Science). (2011) 32, no. 3, 253–257.

2. Qiao J., Duan C., Zhao Y., Jiang H., and Diao H., Study on screening efficiency of banana vibrating screen based on 3D DEM simulation, Proceedings of the 7th International Conference on Discrete Element Methods, December 2017, Springer Singapore, 1265–1275, https://doi.org/10.1007/978-981-10-1926-5_130.
3. Li Z. B. and Huang Y. J., Analysis of screen vibration signals of probability sieve using AR bispectrum and its diagonal slices, Mechanical Science and Technology for Aerospace Engineering. (2012) 31, no. 1, 113–117, https://doi.org/10.13433/j.cnki.1003-8728.2012.01.019.

4. Shi Z. Z. and Huang Y. J., Research on screening efficiency based on AR model of high-order cumulant LS-SVM, China Mechanical Engineering. (2011) 22, no. 16, 1965–1969.

5. Tang Q., Study on Time Frequency Characteristics of Wigner Higher Order Spectrum and Its Application in Screening Operation, 2011, Huaqiao University.

6. Zheng G. X. and Huang Y. J., Since the synchronous performance testing research, Mechanical Design and Manufacturing. (2010) 28, no. 7.

7. Yang Z., Kong C., Wang Y., Rong X., and Wei L., Fault diagnosis of mine asynchronous motor based on MEEMD energy entropy and ANN, Computers & Electrical Engineering. (2021) 92, 107070, https://doi.org/10.1016/j.compeleceng.2021.107070.

8. Chen X., Yang Y., Cui Z., and Shen J., Vibration fault diagnosis of wind turbines based on variational mode decomposition and energy entropy, Energy. (2019) 174, no. 3, 1100–1109, https://doi.org/10.1016/j.energy.2019.03.057.

9. Liu X., Li J., Shi B., Ding G., Dong F., and Zhang Z., Intelligent detection technology for leakage bag of baghouse based on distributed optical fiber sensor, Optical Fiber Technology. (2019) 52, 101947, https://doi.org/10.1016/j.yofte.2019.101947.

10. Hao Y., Zhu L., Yan B., Qin S., Cui D., and Lu H., Milling chatter detection with WPD and power entropy for Ti-6Al-4V thin-walled parts based on multi-source signals fusion, Mechanical Systems and Signal Processing. (2022) 177, 109225, https://doi.org/10.1016/j.ymssp.2022.109225.

11. Zhang X. J., Ding Y. H., Huang L. Y., and Cheng X. F., Automatic Classification Method of MEG Based on Wavelet Packet and Energy Entropy, 2016, Computer Technology and Development.

12. Zhang Z., Wang S., and Fu J., Application of improved GRNN algorithm for task man-hours prediction in metro project, Signal and Information Processing, Networking and Computers, 2023, 917, Springer, Singapore, 1421–1430, https://doi.org/10.1007/978-981-19-3387-5_169.

13. Yi-ze C. and Qing-tang C., State prediction of MR system by VMD-GRNN based on fractal dimension, Advances in Mechanical Engineering. (2022) 14, no. 12, https://doi.org/10.1177/16878132221145899.

14. Yang F., Ma Z., and Xie M., Image classification with parallel KPCA-PCA network, Computational Intelligence. (2022) 38, no. 2, 397–415, https://doi.org/10.1111/coin.12503.

15. Chen H., Assala P. D. S., Cai Y., and Yang P., Intelligent transient overvoltage location in distribution systems using wavelet packet decomposition and general regression neural networks, IEEE Transactions on Industrial Informatics. (2016) 12, no. 5, 1726–1735, https://doi.org/10.1109/TII.2016.2520909.
16. Chickaramanna S. G., Veerabhadrappa S. T., Shivakumaraswamy P. M., Sheela S. N., Keerthana S. K., Likith U., Swaroop L., and Meghana V., Classification of arrhythmia using machine learning algorithm, Revue d'Intelligence Artificielle (2022) 36, no. 4, 529–534.
17. Zeng X. W., Zhao W. M., Shi H. K., and Li Z. R., Selection of wavelet basis function in process of time-frequency analysis of earthquake signals using wavelet packet transform, Journal of Seismological Research (2010) 33, no. 4, 323–328.
18. Chen Q. and Huang Y., Prediction of comprehensive dynamic performance for probability screen based on AR model-box dimension, Journal of Measurements in Engineering (2023) 11, no. 4, 525–535, https://doi.org/10.21595/jme.2023.23522.