International Journal of Mechanical Engineering and Applications
Volume 4, Issue 6, December 2016, Pages: 212-225

Artificial Neural Network Approach for Transient Forced Convective Heat Transfer Optimization

Ahmet Tandiroglu

Department of Mechanical Engineering Technology, Vocational High School of Erzincan, Erzincan University, Erzincan, Turkey

To cite this article:

Ahmet Tandiroglu. Artificial Neural Network Approach for Transient Forced Convective Heat Transfer Optimization. International Journal of Mechanical Engineering and Applications. Vol.4, No. 6, 2016, pp. 212-225. doi: 10.11648/j.ijmea.20160406.12

Received: October 29, 2016; Accepted: November 16, 2016; Published: November 23, 2016


Abstract: This study uses artificial neural networks (ANNs) to analyze and estimate the influence of transfer functions and training algorithms on experimentally determined Nusselt numbers, friction factors, entropy generation numbers and irreversibility distribution ratios for nine baffle plate inserted tubes. The nine tubes carry baffles with various geometric parameters: a baffle blockage area ratio of two, different pitch to diameter ratios, different baffle orientation angles and different baffle spacings. Experimental data sets from the author's previous studies were applied as the input data of the ANNs. The MATLAB toolbox was used to search for a better network configuration using the commonly applied multilayer feed-forward neural network (MLFNN) with a back propagation (BP) learning algorithm, thirteen different training functions, the mean square error adaptation learning function and the TANSIG transfer function. Eighteen data samples were used in a series of runs for each of the nine baffle-inserted tubes. Reynolds number, tube length to baffle spacing ratio, baffle orientation angle and pitch to diameter ratio were taken as the input variables of the ANNs, and the time averaged values of Nusselt number, friction factor, entropy generation number and irreversibility distribution ratio were set as the target data. Of the experimental data, 70% was used to train, 15% to test and the remainder to check the validity of the ANNs. The TRAINBR training function was found to be the best model for predicting the target experimental outputs.
Almost perfect agreement between the neural network predictions and the experimental data was achieved, with a mean relative error (MRE) of 0.000105816% and a correlation coefficient (R) of 0.999160176 for all data sets, which confirms the reliability of ANNs as a strong tool for predicting the performance of transient forced convective heat transfer applications.

Keywords: Heat Transfer Enhancement, Transient Forced Convection, Baffle Inserted Tubes, Artificial Neural Network, Training Function


1. Introduction

Artificial Neural Networks (ANNs) have been successfully used in many engineering applications to simulate nonlinear complex systems without requiring explicit input-output models, such as dynamic control, system identification and performance prediction of thermal systems in heat transfer applications. ANNs have been widely used for the thermal analysis of heat exchangers during the last two decades; these applications are reviewed in detail in [1].

Various network architectures were tested in [2], suggesting a feed-forward network with log-sigmoid node functions in the first layer and a linear node function in the output layer as the most advantageous architecture for predicting helically-finned tube performance. A feed-forward ANN trained by the Levenberg-Marquardt algorithm was developed to predict the friction factor in experimentally investigated serpentine microchannels with rectangular cross section [3]. A hybrid high order neural network and a feed-forward neural network were developed and applied to find an optimized empirical correlation for the prediction of dryout heat transfer; the values predicted by the models were compared with each other and with the previous values of the empirical correlation [4]. ANNs were applied for the heat transfer analysis of shell-and-tube heat exchangers with segmental baffles or continuous helical baffles: three heat exchangers were experimentally investigated, the limited experimental data obtained were used for training and testing network configurations with the commonly used back propagation algorithm, the outlet temperature differences on each side and the overall heat transfer rates were predicted, and different network configurations were studied in search of a relatively better network for prediction [5]. An ANN was used for heat transfer analysis in corrugated channels: an experimentally evaluated data set was prepared for processing with neural networks, and the back propagation algorithm, the most common learning method for ANNs, was used in training and testing the network [6]. The capability of an ANN approach for predicting the performance of a liquid desiccant dehumidifier, in terms of the water condensation rate and dehumidifier effectiveness, was proposed in [7]. An ANN application characterized the thermo-hydraulic behavior of helical wire coil inserts inside tubes.
An experimental study was carried out to investigate the effects of four types of wire coil inserts on heat transfer enhancement and pressure drop; the performance of the ANN was found to be superior to the corresponding power-law regressions [8]. The selection of the training function of an ANN for modeling the heat transfer of a horizontal tube immersed in a gas-solid fluidized bed of large particles was described in [9]: the ANN model was developed to study the effect of fluidizing gas velocity on the average heat transfer coefficient between the fluidized bed and the horizontal tube surface, a feed-forward network with back propagation structure was implemented using the Levenberg-Marquardt learning rule, and the performances of five training functions were compared. The results of an experimental investigation characterizing the thermal performance of different configurations of phase change material based pin fin heat sinks were reported in [10], where an ANN was developed to determine the pin fin heat sink configuration that maximizes the operating time of the n-eicosane based heat sink. A non-iterative method utilizing an ANN and principal component analysis was applied to estimate the parameters that define a boundary heat flux, and the potential of covariance analysis for reducing the dimensionality of the inverse problem was demonstrated [11]. A generalized neural network analysis for natural convection heat transfer from a horizontal cylinder was developed, with a three-layer network used for predicting the Nusselt number.
The number of neurons in the hidden layer was determined by trial and error together with cross-validation of the experimental data, evaluation of network performance and standard sensitivity analysis [12]. A heat transfer correlation was developed [13] to assist the heat exchanger designer in predicting, by ANN, the heat transfer coefficient along a horizontal straight circular tube with uniform wall heat flux for a specified inlet configuration in the transition region. An application of ANNs was presented to predict the pressure drop and heat transfer characteristics of plate-fin heat exchangers [14]. A new and detailed three-layer BP network model for the prediction of performance parameters of prototype wet cooling towers was developed in [15], using an improved BP algorithm, the gradient descent algorithm with momentum. The results of an experimental investigation characterizing the thermal performance of different configurations of phase change material based pin fin heat sinks were given in [16]. An ANN approach was utilized to characterize the thermo-hydraulic behavior of corrugated tubes combined with twisted tape inserts in the turbulent flow regime; the experimental data sets were used in training and validating the ANN to predict the heat transfer coefficients and friction factors inside the corrugated tubes combined with twisted tape inserts, and the results were compared with the experimental data [17]. ANNs were utilized to compile values of the mean Nusselt number for binary gas mixtures in the Prandtl number sub-region, and these values were then used to generate a heat transfer correlation obtained from a combination of available data and predicted values [18]. A linear regression approach was used to correlate experimentally determined Colburn j-factors and Fanning friction factors for the flow of liquid water in helically-finned tubes.
The principal finding of the investigation in [19] is that in helically-finned tubes both Fanning friction factors and Colburn j-factors can be correlated with exponentials of linear combinations of the same five simple groups of parameters and a constant. ANNs have also been applied to the prediction of unsteady heat transfer in a rectangular duct [20]. An experimental study was carried out to investigate the axial variation of inlet temperature and the impact of inlet frequency on decay indices in the thermal entrance region of a parallel plate channel; the investigation was conducted with laminar forced flows.

Although comprehensive studies on heat transfer applications exist in the literature, research on the effectiveness and comparison of different ANN models with respect to transfer functions and training algorithms in the broader sense is not sufficient. The present study is based on the experimental data obtained from the author's previous studies [21-23] and aims at optimizing transient forced convective heat transfer for turbulent flow in a circular tube with baffle inserts, using the tangent sigmoid (TANSIG) transfer function and thirteen training algorithms, with ANN performance assessed by the mean relative error (MRE) and the correlation coefficient (R) for all data sets.

2. Experimental Procedure and Data Collection

2.1. Experimental Setup

The experimental setup illustrated in Figure 1 was used for data gathering for the heat transfer analysis. A detailed description of the setup is available in the author's previous studies [21-23].

Figure 1. Schematic diagram of experimental setup.

The flow geometries and related parameters are shown in Figures 2-4.

Figure 2. Schematic of 45 degree half circle baffled tubes.

Figure 3. Schematic of 90 degree half circle baffled tubes.

Figure 4. Schematic of 180 degree half circle baffled tubes.

The detailed geometrical parameters of the baffled tubes are tabulated in Table 1. Heat loss calibration tests were performed before taking measurements on the system for each type of baffle inserted tube in the following manner: each baffle inserted tube was completely filled with multilayer glass wool insulation, and a constant heat flux was supplied through the pipe wall by means of a PLC integrated DC power supply.

Table 1. Details of investigated baffle inserted tubes.

Baffle Type Designation β (°) H (×10⁻³ m) H/D
Type 1 18031 180 31 1
Type 2 18062 180 62 2
Type 3 18093 180 93 3
Type 4 9031 90 31 1
Type 5 9062 90 62 2
Type 6 9093 90 93 3
Type 7 4531 45 31 1
Type 8 4562 45 62 2
Type 9 4593 45 93 3

The average wall temperatures were evaluated at eleven points along the test section in terms of the heat flux and the difference between wall and ambient temperatures. The wall temperature variations with time were recorded using an online data acquisition system. Once steady state was established, ensuring that external thermal equilibrium had been achieved, heat loss calibration tests for different values of power supply were recorded for the steady state case. The heat loss was found to be directly proportional to the difference between the wall and ambient temperatures, and the required constant of proportionality was taken from the previously determined heat loss calibrations. The maximum heat loss did not exceed 5% throughout the test runs. A more detailed explanation of the heat loss calibration technique is given in [21-23].

2.2. Data Reduction

The data reduction for the baffle inserted tubes presented in Figures 2-4 is available in detail in the author's previous studies [21-23] for fully developed turbulent flow. The independent parameters are the Reynolds number and the tube diameter. The Reynolds number, based on the tube hydraulic diameter, is given by

Re = ρ u D / μ    (1)

The average fully developed heat transfer coefficient is evaluated as follows:

h = Q / [A (T_w − T_b)]    (2)

where A is the convective heat transfer area. The Nusselt number and the friction factor for fully developed turbulent flow are evaluated by using Eq. (3) and Eq. (4), respectively.

Nu = h D / k    (3)

f = (−dP/dx) D / (ρ u² / 2)    (4)

where dP/dx is the pressure gradient.

Consider a circular pipe flow with constant wall heat flux and given cross sectional area, as shown schematically in Figure 5.

Figure 5. Schematic view of circular pipe flow with constant heat flux.

An incompressible viscous fluid with mass flow rate ṁ passes through a pipe of length L. In general, this heat transfer arrangement is characterized, over a control volume of finite length, by a pressure gradient dP/dx and by a finite wall-to-bulk fluid temperature difference ΔT. For the investigated heat transfer region, the rate of entropy generation per unit length is

S′_gen = q′ ΔT / T_b² + (ṁ / ρ T_b)(−dP/dx)    (5)

where the first term on the right hand side is the contribution made by heat transfer, while the second term is the contribution due to fluid friction, that is

S′_gen,ΔP = (ṁ / ρ T_b)(−dP/dx)    (6)
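The split of the entropy generation rate into a heat-transfer term and a fluid-friction term can be sketched numerically. The Python fragment below assumes the standard Bejan form S′_gen = q′ΔT/T_b² + (ṁ/ρT_b)(−dP/dx) for Eq. (5); all numerical values are hypothetical and are not taken from the experiments.

```python
# Illustrative evaluation of the entropy generation rate per unit length,
# assuming S'_gen = q' * dT / T_b**2 + (m_dot / (rho * T_b)) * (-dP/dx).

def entropy_generation_per_length(q_prime, delta_T, T_b, m_dot, rho, dPdx):
    """Return (total, heat-transfer term, fluid-friction term) in W/(m K)."""
    heat_term = q_prime * delta_T / T_b**2           # irreversibility due to heat transfer
    friction_term = (m_dot / (rho * T_b)) * (-dPdx)  # irreversibility due to fluid friction
    return heat_term + friction_term, heat_term, friction_term

total, ht, fr = entropy_generation_per_length(
    q_prime=500.0,   # wall heat flux per unit length, W/m (hypothetical)
    delta_T=20.0,    # wall-to-bulk temperature difference, K (hypothetical)
    T_b=310.0,       # bulk temperature, K (hypothetical)
    m_dot=0.01,      # mass flow rate, kg/s (hypothetical)
    rho=1.15,        # air density, kg/m^3 (hypothetical)
    dPdx=-150.0,     # pressure gradient, Pa/m (hypothetical)
)
phi = fr / ht  # ratio of friction to heat-transfer irreversibility
```

The ratio of the two terms is the irreversibility distribution ratio discussed later in the text.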

The second law requires S′_gen > 0 for all real processes. Since classical macroscopic thermodynamics does not provide any theoretical way to calculate the entropy generation of an irreversible process directly, the only way to determine how much S′_gen exceeds zero is to use data obtained from experiments. To describe the effects of the flow conditions and geometry parameters, namely the Reynolds number Re, Prandtl number Pr, pitch to diameter ratio H/D, baffle orientation angle β, ratio of smooth to baffled cross section area and ratio of tube length to baffle spacing L/H, on transient forced convection heat transfer for turbulent flow in a circular tube with baffle inserts, the time averaged Nusselt number, time averaged friction factor, time averaged entropy generation per unit time and irreversibility distribution ratio are related as follows:

N̄u = f(Re, Pr, H/D, β, L/H)    (7)

In Eq. (7), the Prandtl number, which should be an important parameter affecting the heat transfer of baffle inserted tube, is defined as:

Pr = μ c_p / k    (8)

However, the Prandtl number has not been considered separately in this investigation, because air is the only working fluid used and its Prandtl number remains almost constant over the considered experimental temperature range. Eq. (7) can therefore be simplified as:

N̄u = f(Re, H/D, β, L/H)    (9)

Similarly, the time averaged friction factor and the time averaged entropy generation per unit time are related as given below.

f̄ = f(Re, H/D, β, L/H)    (10)

N̄s = f(Re, H/D, β, L/H)    (11)

An important dimensionless parameter in the second law analysis of convective heat transfer is the irreversibility distribution ratio [25]:

φ = S′_gen,ΔP / S′_gen,ΔT    (12)

The parameter φ describes the relative importance of fluid friction in the total irreversibility of a flow passage. The augmentation entropy generation number N_s can be rewritten as

(13)

where

(14)

(15)

Irreversibility distribution ratio can be obtained for the reference smooth tube as

(16)

where

(17)

(18)

Substituting Eqs. (17) and (18) into Eq. (16), the expression can be simplified and rewritten for turbulent flow in the form of

(19)

where

(20)

The Reynolds-Colburn analogy between heat transfer and momentum for turbulent flow is given by

St Pr^(2/3) = f / 8    (21)

Introducing Eq. (21) into Eq. (19), the irreversibility distribution ratio is obtained for turbulent flow as

(22)

Eq. (22) permits a quick estimate of φ without having to calculate the Reynolds number [26]. For the case where the rate of entropy generation per unit length of the smooth pipe is equal to that of the baffle inserted augmented pipe,

S′_gen,s = S′_gen,a    (23)

the irreversibility distribution ratio φ can be expressed as

(24)

For the present case, the proposed augmentation technique having the minimum area under the graph of N_s versus Re can be optimally selected in order to yield the maximum reduction in heat exchanger duct irreversibility; this is called irreversibility minimization analysis.

2.3. Experimental Uncertainty Analysis

The uncertainties of the experimental quantities were computed using the method presented in [27], which involves calculating the derivatives of the desired result with respect to the individual experimental quantities and applying the known uncertainties. The general equation presented in [27] for the magnitude of the uncertainty w_R in the result R is

w_R = [ (∂R/∂x_1 · w_1)² + (∂R/∂x_2 · w_2)² + … + (∂R/∂x_n · w_n)² ]^(1/2)    (25)

where w_1, w_2, …, w_n are the uncertainties in the independent variables x_1, x_2, …, x_n that affect the result R.

The experimental results up to a Reynolds number of 20000 were correlated with a standard deviation of at most 5%. The experimental uncertainties in the Reynolds number, friction factor and Nusselt number were estimated by the procedure described above [27]. The mean uncertainties are 2.5% in the Reynolds number and 4% in the friction factor. The highest uncertainty in the Nusselt number is 9%, for the type 9031 tube. Uncertainties in the Nusselt number range between 5% and 8% for 3000 ≤ Re ≤ 20000 for the type 18093 tube and between 8% and 10% for 3000 ≤ Re ≤ 20000 for the type 9031 tube, the highest uncertainties occurring at the lowest Reynolds number [21-23].
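The root-sum-square propagation of Eq. (25) (the Kline-McClintock method of [27]) can be sketched as follows, using the Reynolds number Re = ρuD/μ as an example result. The measurement values and their uncertainties below are assumed for illustration only, not taken from the experiments.

```python
# Sketch of root-sum-square uncertainty propagation:
# w_R = sqrt(sum((dR/dx_i * w_i)**2)).
import math

def rss_uncertainty(partials, uncertainties):
    """Combine partial derivatives and measurement uncertainties per Eq. (25)."""
    return math.sqrt(sum((p * w) ** 2 for p, w in zip(partials, uncertainties)))

# Hypothetical measured quantities for Re = rho*u*D/mu:
rho, u, D, mu = 1.15, 10.0, 0.031, 1.85e-5
Re = rho * u * D / mu

# Analytical partial derivatives of Re with respect to each quantity.
partials = [u * D / mu, rho * D / mu, rho * u / mu, -rho * u * D / mu**2]
w = [0.01, 0.2, 1e-4, 2e-7]  # assumed uncertainties in rho, u, D, mu

w_Re = rss_uncertainty(partials, w)
rel = w_Re / Re  # relative uncertainty in the Reynolds number
```

For a pure product such as Re, the relative uncertainty equals the quadrature sum of the individual relative uncertainties, which gives a quick sanity check on the result.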

2.4. Development of Artificial Neural Network

Figure 6. Artifical neural network architecture in the study.

An ANN is a numerical model that simulates the human brain's biological neural network and its ability to learn and recognize complicated nonlinear functions. This learning ability makes the ANN more powerful than parametric approaches. ANN usage in heat transfer applications is popular because of its ability to approximate the functional relation between inputs and desired outputs. In the present study a MLFNN with a BP learning algorithm [28] has been used; it is simple, has high learning rates and is therefore widely used to train networks.

The ANN model was developed for the system with four independent parameters in the input layer (Reynolds number, tube length to baffle spacing ratio, baffle orientation angle and pitch to diameter ratio), four parameters in the output layer (time averaged values of Nusselt number, friction factor, entropy generation number and irreversibility distribution ratio) and ten neurons in the hidden layer. The architecture of the network for the current study is shown in Figure 6.
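A minimal sketch of the 4-10-4 architecture described above is given below, with a TANSIG hidden layer and an assumed linear output layer. The weights here are random placeholders; in the study they are fitted by back propagation in MATLAB.

```python
# Sketch of a 4-10-4 feed-forward network: four inputs, ten tansig hidden
# neurons, four linear outputs. Weights are random stand-ins, not fitted.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 4)), np.zeros(10)  # input -> hidden
W2, b2 = rng.normal(size=(4, 10)), np.zeros(4)   # hidden -> output

def tansig(x):
    """Hyperbolic-tangent sigmoid, as in MATLAB's TANSIG."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def forward(x):
    """Map normalized [Re, L/H, beta, H/D] to normalized [Nu, f, Ns, phi]."""
    return W2 @ tansig(W1 @ x + b1) + b2

y = forward(np.array([0.5, 0.5, 0.5, 0.5]))  # one normalized input pattern
```

The hidden-layer nonlinearity is the TANSIG function used throughout the study; its output is bounded in (−1, 1), which is why the training data must first be normalized.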

The neural network tool in MATLAB R2011b was used for the ANN modelling of the system. There are fourteen different back propagation (BP) training algorithms in the MATLAB ANN toolbox [29]. In this study, multilayer feed-forward neural networks (MLFNN) with back propagation (BP) training and validation algorithms were applied for each of the thirteen different training functions given in Table 2.

Table 2. Descriptions of the ANN training functions used in the study.

Training function Description
TRAINBR Bayesian regularization. Modification of the Levenberg-Marquardt training algorithm to produce networks that generalize well. Reduces the difficulty of determining the optimal network architecture [30, 31]
TRAINCGB Powell-Beale conjugate gradient algorithm. Slightly larger storage requirements than TRAINCGP. Generally faster convergence [32].
TRAINSCG Scaled conjugate gradient algorithm. The only conjugate gradient algorithm that requires no line search. A very good general purpose training algorithm [33, 34].
TRAINCGP Polak-Ribiere conjugate gradient algorithm. Slightly larger storage requirements than traincgf. Faster convergence on some problems [35].
TRAINCGF Fletcher-Reeves conjugate gradient algorithm. Has smallest storage requirements of the conjugate gradient algorithms [35].
TRAINLM Levenberg-Marquardt algorithm. Fastest training algorithm for networks of moderate size. Has memory reduction feature for use when the training set is large [36, 37].
TRAINRP Resilient backpropagation. Simple batch mode training algorithm with fast convergence and minimal storage requirements [38].
TRAINR Random order incremental training w/learning functions. TRAINR trains a network with weight and bias learning rules with incremental updates after each presentation of an input. Inputs are presented in random order.
TRAINGD Basic gradient descent. Slow response, can be used in incremental mode training.
TRAINGDM Gradient descent with momentum. Generally faster than traingd. Can be used in incremental mode training.
TRAINGDA Gradient descent with adaptive lr backpropagation. TRAINGDA is a network training function that updates weight and bias values according to gradient descent with adaptive learning rate.
TRAINBFG BFGS quasi-Newton method. Requires storage of approximate Hessian matrix and has more computation in each iteration than conjugate gradient algorithms, but usually converges in fewer iterations [39,40].
TRAINGDX Adaptive learning rate. Faster training than TRAINGD, but can only be used in batch mode training.

2.5. Normalization of Experimental Data

It is desirable to normalize all the input and output data with the largest and smallest values of each data set, since the input and output variables have different physical units and ranges. All of the input and output data were therefore normalized between 0.1 and 0.9, due to the restriction of the sigmoid function [41-43], using the rearranged formula:

x_norm = 0.1 + 0.8 (x − x_min) / (x_max − x_min)    (26)

where x is the measured value, and x_min and x_max are the minimum and maximum values found in the training set. The ranges of the data employed for normalization are given in Table 3.

Table 3. The range of employed data in the modelling.

Variable Minimum Maximum
f̄ 0.01517 0.1835
N̄u 52.87346 2712.383
N̄s 1.50E-04 0.00142
φ 0.00228 0.06986
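The scaling of Eq. (26), and the inverse mapping needed to recover physical values from network outputs, can be sketched as below. The example value is arbitrary; only the [0.1, 0.9] mapping itself is taken from the text.

```python
# Min-max scaling onto [0.1, 0.9], as required by the sigmoid-type
# transfer function, and its inverse for de-normalizing predictions.
def normalize(x, x_min, x_max):
    return 0.1 + 0.8 * (x - x_min) / (x_max - x_min)

def denormalize(x_norm, x_min, x_max):
    return x_min + (x_norm - 0.1) * (x_max - x_min) / 0.8

# Example with the Nusselt-number range of Table 3 and an arbitrary value:
nu_norm = normalize(1000.0, 52.87346, 2712.383)
nu_back = denormalize(nu_norm, 52.87346, 2712.383)
```

The minimum of each training-set variable maps to 0.1 and the maximum to 0.9, so all normalized inputs and targets stay well inside the active range of the sigmoid.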

The TANSIG transfer function gives better results than the logarithmic sigmoid (LOGSIG) function in the present investigation, in agreement with [44], and is used as the activation function in the hidden layer of the ANN [24]. It is given as

f(x) = 2 / (1 + e^(−2x)) − 1    (27)

3. Results and Discussion

The MATLAB toolbox was used to search for a better network configuration using the commonly applied feed-forward back propagation algorithm with thirteen different training functions, the MSE adaptation learning function and the TANSIG transfer function. Eighteen data samples were used in a series of runs for each of the nine baffle-inserted tubes. Reynolds number, tube length to baffle spacing ratio, baffle orientation angle and pitch to diameter ratio were considered as input variables of the ANNs, and the time averaged values of Nusselt number, friction factor, entropy generation number and irreversibility distribution ratio were set as the target data. Of the whole experimental data, 70% was used to train the models, 15% was used to test the outputs, and the remaining data points, which were not used for training, were used to evaluate the validity of the ANNs. As mentioned above, the ANN was trained using all thirteen training functions available in the MATLAB toolbox. To determine the optimal neural network structure, the error convergence rate was checked while changing the number of hidden neurons and while decreasing the momentum rate from 0.9 to 0.7 in successive decrements of 0.025 to increase the learning rate of the networks. It was observed that the optimal number of hidden neurons mostly varies from one training function to another, but the optimal momentum rate was found to be 0.825 for all training functions. The TRAINBR training function showed better performance than the other twelve training functions under constant network parameters. The constructed configuration of the TRAINBR network has ten neurons in the hidden layer, as shown in Figure 7.
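The momentum-rate sweep described above (0.9 down to 0.7 in decrements of 0.025) can be sketched as a simple grid search. The function train_once below is a hypothetical stand-in for one full back propagation run; it is constructed only to illustrate selecting the rate with the lowest validation MSE, not to reproduce the study's training.

```python
# Sketch of the momentum-rate grid search: nine candidate rates from 0.9
# down to 0.7 in steps of 0.025, keeping the one with the lowest MSE.
import numpy as np

def train_once(momentum, seed=0):
    """Hypothetical stand-in returning a validation MSE for one training run."""
    rng = np.random.default_rng(seed)
    # Toy error surface with a minimum near 0.825 plus a little noise.
    return (momentum - 0.825) ** 2 + rng.uniform(0, 1e-6)

momenta = np.arange(0.9, 0.7 - 1e-9, -0.025)  # 0.900, 0.875, ..., 0.700
best = min(momenta, key=train_once)           # momentum with lowest MSE
```

With a real training routine in place of train_once, the same loop reproduces the selection procedure that led to the reported optimum of 0.825.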

Figure 7. Constructed configuration of TRAINBR network.

The absolute fraction of variance values (R²) and the optimal number of hidden neurons for each training function were determined and are tabulated in Table 4.

Table 4. Absolute fraction of variance (R²) values for different training algorithms.

Training algorithm Number of optimal hidden neurons R2
Training Validation Test All
TRAINBR 10 0.99960 0.99957 0.99911 0.99958
TRAINCGB 5 0.99949 0.99928 0.99914 0.99940
TRAINSCG 5 0.99938 0.99927 0.99899 0.99895
TRAINCGP 8 0.99929 0.99894 0.99855 0.99890
TRAINCGF 6 0.99907 0.99875 0.99815 0.99885
TRAINLM 6 0.99900 0.99855 0.99812 0.99885
TRAINRP 7 0.99877 0.99804 0.99774 0.99842
TRAINR 5 0.99823 0.99731 0.99696 0.99812
TRAINGD 7 0.99792 0.99729 0.99690 0.99762
TRAINGDM 7 0.99715 0.99716 0.99688 0.99702
TRAINGDA 6 0.99681 0.99689 0.99651 0.99696
TRAINBFG 9 0.99629 0.99468 0.99618 0.99621
TRAINGDX 5 0.99372 0.99371 0.99394 0.99371

Training regression plots for the best training algorithm, TRAINBR, are shown in Figure 8.

Figure 8. Neural network training regression plots for TRAINBR.

The thirteen different ANN training models have been compared by the mean square error (MSE), mean relative error (MRE) and absolute fraction of variance (R²), mathematically expressed by the following equations:

MSE = (1/n) Σᵢ (y_exp,i − y_pred,i)²    (28)

MRE (%) = (1/n) Σᵢ |(y_exp,i − y_pred,i) / y_exp,i| × 100    (29)

R² = 1 − [Σᵢ (y_exp,i − y_pred,i)² / Σᵢ (y_exp,i)²]    (30)

where y_exp is the actual (experimental) value, y_pred is the predicted (output) value and n is the number of data points. The networks were trained for all thirteen training functions under the same network parameters, and training was continued until the least value of the MSE at a definite number of epochs was attained for each of the thirteen training functions separately. The MSE is an excellent numerical criterion for evaluating the performance of a prediction tool. Table 5 shows the MRE, MSE and R² values for the different training algorithms. After analysing all the results, the TRAINBR training function, which has the least MSE value, showed the best performance among the thirteen training functions for predicting the target experimental outputs.
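The three error measures can be sketched directly. This assumes MRE is reported as a percentage and that the "absolute fraction of variance" follows the common form 1 − Σ(y_exp − y_pred)²/Σ(y_exp)²; the sample arrays are small illustrative vectors, not the full data set.

```python
# Sketch of the comparison metrics: MSE, MRE (%) and absolute fraction
# of variance R^2, computed over paired experimental/predicted arrays.
import numpy as np

def mse(y_exp, y_pred):
    return float(np.mean((y_exp - y_pred) ** 2))

def mre_percent(y_exp, y_pred):
    return float(np.mean(np.abs((y_exp - y_pred) / y_exp)) * 100.0)

def r2(y_exp, y_pred):
    return float(1.0 - np.sum((y_exp - y_pred) ** 2) / np.sum(y_exp ** 2))

# Illustrative experimental vs. predicted values:
y_exp = np.array([0.07135, 0.07234, 0.07313])
y_pred = np.array([0.0708365, 0.0718166, 0.0725987])
scores = (mse(y_exp, y_pred), mre_percent(y_exp, y_pred), r2(y_exp, y_pred))
```

A well trained model drives MSE and MRE toward zero and R² toward one, which is exactly the ranking criterion applied to the thirteen training functions in Table 5.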

Table 5. MRE, MSE and R² values for different training algorithms.

Training algorithm MRE MSE R2
TRAINBR 0.000105816 9.99956E-09 0.999160176
TRAINCGB 0.000158725 2.24993E-08 0.998800360
TRAINSCG 0.000603162 3.24897E-07 0.997901103
TRAINCGP 0.005820003 3.025E-05 0.997801210
TRAINCGF 0.010581825 1E-04 0.997701323
TRAINLM 0.996613818 0.88701807 0.997701323
TRAINRP 0.996825454 0.887394837 0.996842496
TRAINR 0.997248727 0.88814861 0.996243534
TRAINGD 0.998306910 0.890034442 0.995245664
TRAINGDM 0.998306910 0.890034442 0.994048880
TRAINGDA 0.998962983 0.891204663 0.993929242
TRAINBFG 0.999492075 0.89214895 0.992434364
TRAINGDX 0.999788366 0.892677968 0.987459564

The graphs in Figures 9-12 were generated using the friction factor, Nusselt number, entropy generation number and irreversibility distribution ratio values produced by all the tested ANN training algorithms, plotted against the Reynolds number.

Figure 9. Scatter plot indicating the performance of the friction factor predictions.

Figure 10. Scatter plot indicating the performance of the Nusselt number predictions.

Figure 11. Scatter plot indicating the performance of the entropy generation number predictions.

Figure 12. Scatter plot indicating the performance of the irreversibility distribution ratio predictions.

The best training performance plot for the TRAINBR algorithm is shown in Figure 13, which plots the mean square error against the number of epochs (iterations). The mean square error decreases with increasing iteration number and converges to a steady state value, a characteristic of the TRAINBR algorithm; the best training performance is achieved at 246 epochs.

Figure 13. Performance curve for the best training algorithm of TRAINBR.

The training state of the best training algorithm, TRAINBR, is shown in Figure 14. The graph shows that the optimized network is developed with a mean squared error of 9.99956×10⁻⁹ and a sum of squared network parameters of 308.7523. The performance goal of the optimized network, having an effective number of parameters of 89.8305, is achieved in 246 epochs.

Figure 14. Performance plots of the best training function of TRAINBR.

A comparison of the values predicted using the best training function, TRAINBR, and the experimental values of the system is given in Table 6 for performance evaluation. The deviation values (MSE, MRE and R²) of the thirteen training functions are presented in Table 5. A well trained ANN model produces small MSE and large R² values; according to that table, the optimal network configuration, the TRAINBR training function, has the lowest MSE and the highest R² values. Parity plots of the output layer parameters are drawn in Figures 15-18 to show the performance of the optimal TRAINBR training function. All of the graphs clearly show that the TRAINBR training function works very well: based on these figures and the MSE values of Table 5, the parity plots show the accuracy with which the optimal ANN predicts the output layer parameters of friction factor, Nusselt number, entropy generation number and irreversibility distribution ratio obtained from the experiments. The coefficient of determination R² for the best training function TRAINBR is close to unity for all outputs. The results show that the optimal neural network configuration with the TRAINBR training function is successful in predicting the solution of transient forced convective heat transfer problems to determine the friction factor, Nusselt number, entropy generation number and irreversibility distribution ratio.

Table 6. Comparison of experimental and ANNs results for best training function TRAINBR.

Re Experimental data (f̄, N̄u, N̄s, φ) ANN results (f̄, N̄u, N̄s, φ)
3000 0.07135 122.717090 0.00118 0.00684 0.0708365 121.4901191 0.0013682 0.0069716
4000 0.07234 196.236060 0.00112 0.00579 0.0718166 194.2738994 0.0013088 0.0059321
5000 0.07313 282.433110 0.00106 0.00509 0.0725987 279.6089789 0.0012494 0.0052391
6000 0.07377 380.296890 0.00100 0.00458 0.0732323 376.4941211 0.0011900 0.0047342
7000 0.07432 489.064210 0.00094 0.00419 0.0737768 484.1737679 0.0011306 0.0043481
8000 0.07480 608.130180 0.00088 0.00388 0.0742520 602.0490782 0.0010712 0.0040412
9000 0.07523 736.998720 0.00081 0.00362 0.0746777 729.6289328 0.0010019 0.0037838
10000 0.07561 875.252480 0.00075 0.00341 0.0750539 866.5001552 0.0009425 0.0035759
11000 0.07596 1022.53344 0.00069 0.00323 0.0754004 1012.308306 0.0008831 0.0033977
12000 0.07628 1178.52965 0.00063 0.00307 0.0757172 1166.744554 0.0008237 0.0032393
13000 0.07657 1342.96585 0.00057 0.00293 0.0760043 1329.536392 0.0007643 0.0031007
14000 0.07685 1515.59658 0.00051 0.00280 0.0762815 1500.440814 0.0007049 0.0029720
15000 0.07710 1696.20096 0.00045 0.00269 0.0765290 1679.239150 0.0006455 0.0028631
16000 0.07734 1884.57877 0.00039 0.00259 0.0767666 1865.733182 0.0005861 0.0027641
17000 0.07757 2080.54730 0.00033 0.00250 0.0769943 2059.742027 0.0005267 0.0026750
18000 0.07778 2283.93883 0.00027 0.00242 0.0772022 2261.099642 0.0004673 0.0025958
19000 0.07799 2494.59867 0.00021 0.00235 0.0774101 2469.652883 0.0004079 0.0025265
20000 0.07818 2712.38344 0.00015 0.00228 0.0775982 2685.259806 0.0003485 0.0024572

Figure 15. Scatter diagram of the friction factor showing the performance of the optimal ANN.

Figure 16. Scatter diagram of the Nusselt number showing the performance of the optimal ANN.

Figure 17. Scatter diagram of the entropy generation number showing the performance of the optimal ANN.

Figure 18. Scatter diagram of the irreversibility distribution ratio showing the performance of the optimal ANN.

4. Conclusions

In this paper, the performance of transient forced convection heat transfer in nine different baffle-inserted tubes has been analyzed to determine the optimal training function, using the commonly employed MLFNN with the BP learning algorithm, thirteen different training functions, the mean square error adaptation learning function and the TANSIG transfer function. The contribution of this study is the development of an optimal ANN configuration, selected from among thirteen candidate configurations using an actual experimental data set, together with an optimal ANN architecture.

The ANN architecture consists of four independent parameters in the input layer and four dependent parameters in the output layer. All of the training functions are in good agreement with the experimental data set, but TRAINBR is the best training function for predicting the output layer parameters. Almost perfect agreement between the TRAINBR neural network predictions and the experimental data was achieved, with a mean relative error MRE of 0,000105816% and a correlation coefficient R of 0,999160176 over all data sets, which confirms the reliability of ANNs as a strong tool for predicting the performance of transient forced convective heat transfer applications.
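The MLFNN/BP/TANSIG combination referred to above can be illustrated compactly. The sketch below is not the paper's MATLAB model (which used the Neural Network Toolbox and TRAINBR's Bayesian regularization); it is a minimal one-hidden-layer network with a tanh (TANSIG-like) hidden layer trained by plain back propagation on a synthetic four-input target, with all sizes and data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data set: 4 inputs -> 1 output, standing in for
# (Re, L/H, beta, H/D) -> target; values are invented, not experimental.
X = rng.uniform(-1.0, 1.0, size=(50, 4))
y = np.tanh(X @ np.array([0.5, -0.3, 0.8, 0.1]))[:, None]

# One hidden layer with a tanh (TANSIG) activation and a linear output layer.
n_hidden = 8
W1 = rng.normal(0.0, 0.5, size=(4, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden layer activations
    return h, h @ W2 + b2           # linear output

_, y0 = forward(X)
mse_start = np.mean((y0 - y) ** 2)

lr = 0.05                           # learning rate for batch gradient descent
for _ in range(500):
    h, yhat = forward(X)
    err = yhat - y                  # output-layer error signal
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # back-propagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y1 = forward(X)
mse_end = np.mean((y1 - y) ** 2)
```

TRAINBR adds Bayesian regularization on top of this basic scheme, penalizing large weights so that the trained network generalizes from the limited experimental data rather than memorizing it.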

Nomenclature

measured value
specific heat
velocity
pressure gradient
tube inlet diameter
heat transfer coefficient
baffle spacing or pitch
ratio of pitch to tube inlet diameter
dimensionless pressure drop
thermal conductivity
tube length
mass flow rate
augmentation entropy generation number
Nusselt number
Prandtl number
heat transferred to fluid
coefficient of correlation
coefficient of determination
Reynolds number
cross sectional area
rate of entropy generation
temperature

Greek symbols

baffle orientation angle
density
irreversibility distribution ratio
Pi number
dynamic viscosity
kinematic viscosity

Subscripts

baffle inserted tube
average
bulk
mean
smooth pipe
transient condition
wall
maximum
minimum
fluid friction
heat transfer

Abbreviations

ANN: artificial neural network

BP: back propagation

DC: direct current

LOGSIG: logarithmic sigmoid

MLFNN: multilayer feed-forward neural network

MSE: mean square error

MRE: mean relative error

PLC: programmable logic controller

TANSIG: tangent sigmoid

Acknowledgements

The author is grateful to F. KAYALAR and M. Ç. YILMAZ for their valuable help in improving the quality of this paper.

