Betulinic acid, 3β-hydroxy-lup-20(29)-en-28-oic acid (1), is a naturally occurring pentacyclic lupane-type triterpene. It shows a broad range of biological properties, such as inhibition of human immunodeficiency virus (HIV) and antibacterial, antimalarial, anti-inflammatory, anthelmintic, antioxidant and anticancer activities (Yogeeswari and Sriram, 2005). However, the medical use of betulinic acid in the pharmaceutical industry is strongly limited, since it is almost insoluble in water (0.02 mg/mL); this hampers the preparation of injectable formulations for biological assays and decreases its bioavailability in the organism. The introduction of polar groups, such as phthalates, amino acids or sugar moieties, at the C-3 and C-28 positions increases the hydrosolubility and, in certain cases, the anticancer activity (Thibeault et al. 2007; Gauthier et al. 2008).

An artificial neural network (ANN) is a highly simplified model of the structure of a biological neural network (Mandal et al. 2009). The fundamental processing element of an ANN is an artificial neuron (or simply a neuron). A biological neuron receives inputs from other sources, combines them, generally performs a nonlinear operation on the result, and then outputs the final result (Bas and Boyaci, 2007). The basic advantage of an ANN is that it does not need any mathematical model: an ANN learns from examples and recognizes patterns in a series of input and output data without any prior assumptions about their nature and interrelations (Mandal et al. 2009). ANN thus eliminates the limitations of classical approaches by extracting the desired information directly from the input data. Applying an ANN to a system requires sufficient input and output data rather than a mathematical equation (Ali Akcayol and Cinar, 2005). ANN is a good alternative to conventional empirical modeling based on polynomial and linear regressions (Kose, 2008). Recently, ANNs have become the most popular artificial learning tool in
biotechnology, with applications ranging from pattern recognition in
chromatographic spectra and expression profiles, to functional analyses of
genomic and proteomic sequences (Manohar and Divakar, 2005). Many applications of the ANN for
prediction of biotechnological processes have been reported in the
literature (Manohar and Divakar, 2005; Szaleniec et al. 2006; Bas
et al. 2007; Silva et al. 2008; Abdul Rahman et
al. 2009). For example, Manohar and Divakar (2005) reported an analysis of
enzymatic esterification of anthranilic acid with methanol using artificial
neural network. Methanol concentration, enzyme concentration, period of
incubation, buffer volume and log P-values were input parameters, while the percentage yield of ester was
the output. The optimized values of network for
learning rate and momentum were 0.6 and 0.8, respectively. The learning was
completed in 388 cycles with an average absolute deviation of 5.7%. For testing
data, the absolute deviation of the predicted yield varied between 1.3 and 33%. The
average absolute deviation of the predicted values from experimental values was
15%. Abdul Rahman et al.
(2009) presented an artificial neural network (ANN) trained by the backpropagation algorithm to predict the yield of
enzymatic synthesis of dioctyl adipate. Reaction temperature, reaction time,
amount of enzyme, and substrate molar ratio were the four input variables.
The best network was found to
be composed of seven hidden nodes using a hyperbolic tangent sigmoid transfer
function, and a high correlation coefficient (R) between predicted and actual yields was reported. Employing neural network models saves time and cost by predicting the results of reactions, so that only the most promising conditions need be verified experimentally (Abdul Rahman et al. 2009).

The aim of the present work is to obtain an optimized ANN for predicting the yield of enzymatic acylation of betulinic acid with phthalic anhydride through a proper selection of the training algorithm. To this end, four training algorithms belonging to two broad classes were evaluated: gradient descent algorithms and the Levenberg-Marquardt algorithm.

Immobilized lipase (triacylglycerol hydrolase, EC
3.1.1.3; Novozym 435, 10000 PLU/g) from Candida antarctica was used as the biocatalyst.

The enzymatic reaction performed in this study is shown in Figure 1. To a magnetically stirred solution of betulinic acid (25 mg, 0.0547 mmol), phthalic anhydride and the immobilized lipase were added. After the reaction, the mixture was dried over K_{2}SO_{4} and concentrated under reduced pressure.
The residue was chromatographed on silica gel 60 with a gradient elution (n-hexane/ethyl
acetate, 9:1-5:1, v/v). The ester fractions were combined and weighed after
evaporation of the solvents. The percentage isolated yield of ester (%Yield) is
defined as:

%Yield = (weight of isolated ester / theoretical weight of ester) × 100
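As a minimal sketch of this yield calculation (the masses below are hypothetical, not taken from the paper):

```python
# Minimal sketch of the isolated-yield calculation.
# The masses below are hypothetical, not taken from the paper.
isolated_mg = 18.5        # mass of ester recovered after chromatography (mg)
theoretical_mg = 33.0     # mass expected at 100% conversion (mg)

percent_yield = 100.0 * isolated_mg / theoretical_mg
print(f"%Yield = {percent_yield:.1f}%")  # -> %Yield = 56.1%
```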
The structure of the product was confirmed by ^{1}H and ^{13}C NMR
spectra recorded on a Varian Unity Inova 500 NMR spectrometer operating
at 26°C; the data matched literature values (Kvasnica et al. 2005).
The determination of the number of neurons in the hidden layers is very important, as it affects the training time and the generalization ability of the network. Too many hidden neurons may force the network to memorize (as opposed to generalize) the patterns it has seen during training, whereas too few hidden neurons waste a great deal of training time in the search for an optimal representation (Hussain et al. 1992). There is no general rule for selecting the number of neurons in a hidden layer; it depends on the complexity of the system being modelled (Cheng et al. 2008). The most popular approach to finding the optimal number of hidden neurons is trial and error (Ahmed, 2005). In this study, the trial-and-error approach was used to determine the optimum number of hidden neurons (examined from 1 to 20 neurons).

Scaled data are passed into the input layer and then propagated from the input layer to the hidden layer and finally to the output layer of the network (Hussain et al. 2002). Every node in the hidden or output layer first acts as a summing junction, which combines and modifies the inputs from the previous layer using the following equation (Jorjani et al. 2008):
z_j = Σ_i (w_ij · y_i) + b_j

where y_i is the output of node i in the previous layer, w_ij is the weight of the connection between node i and node j, b_j is the bias of node j, and z_j is the combined input to node j; the node output is then obtained by applying a transfer function to z_j.

There are many types of learning algorithms in the literature that can be used for training the network (Krose and Smagt, 1996; Haykin, 1998; Christodoulou and Georgiopoulos, 2001). However, it is difficult to know in advance which learning algorithm will be more efficient for a given problem (Saracoglu, 2008). The algorithms used to train the ANNs in this study are standard or incremental backpropagation (IBP), batch backpropagation (BBP), quick propagation (QP) and Levenberg-Marquardt backpropagation (LM). These algorithms belong to two classes: gradient descent backpropagation algorithms and the Levenberg-Marquardt backpropagation algorithm. The details of the algorithms have been reported elsewhere (Ghaffari et al. 2006). The learning rate and momentum coefficient for the networks were chosen as the default values of the NeuralPower software: learning rate 0.15 and momentum 0.8 for incremental backpropagation (IBP), and learning rate 0.1 and momentum 0.4 for batch backpropagation (BBP). The default learning rate for quick propagation (QP) is 0.8; no momentum coefficient is employed in QP mode (Ghaffari et al. 2006).
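As an illustrative sketch of the summing junction, transfer function and one incremental backpropagation (IBP) update with momentum: the learning rate (0.15) and momentum (0.8) mirror the NeuralPower IBP defaults quoted above, while the toy 4-3-1 network, the weights and the input pattern are assumptions for demonstration, not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-3-1 feedforward network: 4 inputs, 3 tanh hidden nodes, 1 linear output.
W1, b1 = rng.normal(scale=0.5, size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(scale=0.5, size=(1, 3)), np.zeros(1)

def forward(x):
    z1 = W1 @ x + b1     # summing junction: z_j = sum_i(w_ij * y_i) + b_j
    y1 = np.tanh(z1)     # hyperbolic tangent sigmoid transfer function
    z2 = W2 @ y1 + b2    # linear output node
    return z1, y1, z2

def ibp_step(x, target, vel, lr=0.15, mom=0.8):
    """One incremental (per-pattern) backpropagation update with momentum."""
    z1, y1, y_hat = forward(x)
    err = y_hat - target                  # output error
    d2 = err                              # derivative of linear output = 1
    d1 = (W2.T @ d2) * (1 - y1 ** 2)      # backpropagate through tanh
    grads = [np.outer(d2, y1), d2, np.outer(d1, x), d1]
    params = [W2, b2, W1, b1]
    for i, (p, g) in enumerate(zip(params, grads)):
        vel[i] = mom * vel[i] - lr * g    # momentum-smoothed step
        p += vel[i]                       # in-place weight update
    return float(err[0]) ** 2             # squared error before the update

vel = [0.0, 0.0, 0.0, 0.0]
x, t = np.array([0.2, -0.1, 0.5, 0.3]), np.array([0.7])
losses = [ibp_step(x, t, vel) for _ in range(50)]
print(losses[0] > losses[-1])  # squared error decreases on this pattern
```

The per-pattern update is what distinguishes IBP from batch backpropagation, which would accumulate the gradients over all patterns before applying a single step.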
The root mean square error (RMSE) was used to measure the performance of the networks:

RMSE = sqrt( Σ_{i=1..n} (y_i − y_{d,i})² / n )

where n is the number of points, y_i is the predicted value and y_{d,i} is the actual value. The coefficient of determination, R^{2}, measures the goodness of fit:

R^{2} = 1 − Σ_{i=1..n} (y_i − y_{d,i})² / Σ_{i=1..n} (y_{d,i} − ȳ)²

where ȳ is the average of the actual values. Absolute average deviation (AAD) is another important index to evaluate the ANN output error between the actual and the predicted output (Bas and Boyaci, 2007):

AAD = (100/n) Σ_{i=1..n} | (y_i − y_{d,i}) / y_{d,i} |

where y_i and y_{d,i} are again the predicted and actual values, respectively.
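The three indices can be computed as follows; the yield values below are hypothetical, chosen only to exercise the formulas.

```python
import numpy as np

# Hypothetical actual vs. ANN-predicted yields (%); not data from the paper.
y_actual = np.array([22.0, 35.5, 41.2, 18.7, 29.3])
y_pred   = np.array([21.4, 36.1, 40.0, 19.5, 28.8])

# Root mean square error
rmse = np.sqrt(np.mean((y_pred - y_actual) ** 2))

# Coefficient of determination (R^2)
r2 = 1.0 - np.sum((y_actual - y_pred) ** 2) / np.sum((y_actual - y_actual.mean()) ** 2)

# Absolute average deviation, in percent of the actual values
aad = 100.0 * np.mean(np.abs((y_pred - y_actual) / y_actual))

print(f"RMSE = {rmse:.4f}, R2 = {r2:.4f}, AAD = {aad:.2f}%")
```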
At first, the three gradient descent backpropagation
algorithms (IBP, BBP and QP) were used to train the neural networks. In order to determine the optimum number
of neurons in hidden layer, a series of topologies was examined, in which the
number of neurons was varied from 1 to 20. The root mean square error (RMSE)
was used as the error function, and the coefficient of determination (R^{2}) was monitored for each topology.
Various topologies (from 1 to 20 hidden neurons) using the Levenberg-Marquardt (LM) algorithm were examined. The results show that a network with 8 hidden neurons produced the best performance. The performance of the network for testing with different numbers of hidden neurons using the LM algorithm is also shown in Figure 3.
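The trial-and-error search over hidden-layer sizes described above can be sketched as follows. The synthetic data, the simple batch-gradient-descent training routine and all hyperparameters are illustrative stand-ins; NeuralPower's actual training is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the experiment: 4 scaled inputs -> 1 scaled yield.
X = rng.uniform(-1, 1, size=(60, 4))
y = np.tanh(X @ np.array([0.8, -0.5, 0.3, 0.6]))  # hypothetical relation
X_train, y_train, X_test, y_test = X[:40], y[:40], X[40:], y[40:]

def train_net(n_hidden, epochs=400, lr=0.05):
    """Batch gradient descent on a 4-n-1 tanh network; returns a predictor."""
    W1 = rng.normal(scale=0.3, size=(4, n_hidden))
    W2 = rng.normal(scale=0.3, size=(n_hidden,))
    for _ in range(epochs):
        H = np.tanh(X_train @ W1)        # hidden-layer outputs
        err = H @ W2 - y_train           # output error (linear output node)
        W2 -= lr * H.T @ err / len(y_train)
        W1 -= lr * X_train.T @ ((err[:, None] * W2) * (1 - H ** 2)) / len(y_train)
    return lambda X_: np.tanh(X_ @ W1) @ W2

# Trial and error: examine 1..20 hidden neurons, keep the lowest test RMSE.
results = {}
for n_hidden in range(1, 21):
    predict = train_net(n_hidden)
    results[n_hidden] = np.sqrt(np.mean((predict(X_test) - y_test) ** 2))

best = min(results, key=results.get)
print(f"best hidden size: {best}, test RMSE: {results[best]:.4f}")
```

Evaluating on held-out test data, as here, is what guards against the memorization problem noted above: an oversized hidden layer can drive the training error down while the test RMSE worsens.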
The results for various algorithms are summarized and
presented in Table 2. As shown in Table 2, the quick propagation algorithm performs better than the incremental backpropagation, batch backpropagation and Levenberg-Marquardt backpropagation algorithms: the best result was obtained with the QP algorithm and a 4-9-1 topology, which gave the minimum RMSE and the maximum R^{2}.

The predictive ability of the generated model was estimated
using validation data (unseen data) which were excluded from training. The
actual and predicted yields for validation data are also presented in Table 1.
The root mean squared error (RMSE) for validation data is 0.6626; the coefficient of determination (R An artificial neural network for enzymatic synthesis of betulinic acid ester has been optimized through a proper selection of the training algorithm. Different ANNs, trained with standard or incremental backpropagation (IBP), batch backpropagation (BBP), quick propagation (QP) and Levenberg-Marquardt backpropagation (LM), have been evaluated with respect to their predictive ability. A robust comparison of the performances of the above four algorithms was made employing standard statistical indices. The results of this study show that the quick propagation algorithm implemented by NeuralPower software gave the best performances. The optimal configuration of the ANN model using quick propagation algorithm found to be 4-9-1. Therefore, it can be concluded that the ANN model described in this paper is an efficient quantitative tool to predict the isolated yield of ester in the enzymatic synthesis of betulinic acid ester. Finally, it was reported in literature that the appropriate selection of the training algorithm allows maximizing the predictive ability of the artificial neural network (Torrecilla et al. 2007). Thus, the results of this paper show that a correct selection of training algorithm is essential for successful data modeling using artificial neural network. The authors wish to thank all the staff in the Department of Chemistry of Universiti Putra Malaysia for their help in this study. ABDUL
RAHMAN, Mohd Basyaruddin; CHAIBAKHSH, Naz; BASRI, Mahiran; SALLEH,
Abu Bakar and ABDUL RAHMAN, Raja Noor Zaliha Raja. Application
of artificial neural network for yield prediction of lipase-catalyzed
synthesis of dioctyl adipate. AHMAD, F.B.H; OMAR, J. and ALI, A.M. Chemical
examination of local plant: triterpene from leaf of Malaysian AHMED, Farid E. Artificial neural networks for
diagnosis and survival prediction in colon cancer. ALI AKCAYOL, M. and CINAR, Can. Artificial neural
network based modeling of heated catalytic converter performance. AMANI, Amir; YORK, Peter; CHRYSTYN, Henry; CLARK,
Brain J. and DO, Duong Q. Determination of factors controlling the particle
size in nanoemulsions using artificial neural networks. BAS, Deniz and BOYACI, Ismail H. Modeling and
optimization II: comparison of estimation capabilities of response surface
methodology with artificial neural networks in a biochemical reaction. BAS, Deniz; DUDAK, Fahriye C. and BOYACI, Ismail H.
Modeling and optimization IV: Investigation of reaction kinetics and kinetic
constants using a program in which artificial neural network (ANN) was
integrated. BASHEER, I.A. and HAJMEER, M. Artificial neural
networks: fundamentals, computing, design, and application. BASRI, Mahiran; ABDUL RAHMAN, Raja Noor Zaliha Raja;
EBRAHIMPOUR, Afshin; SALLEH, Abu Bakar; GUNAWAN, Erin Ryantin and ABD RAHMAD,
Mohd Basyarrudin. Comparison of estimation capabilities of response
surfacemethodology (RSM) with artificial neural network (ANN) in
lipase-catalyzed synthesis of palm-based wax ester. BOWEN, W. Richard; JONES, Meirion G. and YOUSEF,
Haitham N.S. Dynamic ultrafiltration of proteins-a neural network approach. CHENG, Jin; LI, Q.S. and XIAO, Ru-Cheng. A new
artificial neural network-based response surface method for structural
reliability analysis. CHRISTODOULOU, Christos and GEORGIOPOULOS, Michael.
CPC-X Software, NeuralPower User Guide. Regsoft Inc.,
2004. Available from Internet: CYBENKO, G. Approximation by superpositions of a sigmoidal function. GAUTHIER, Charles; LEGAULT, Jean; LAVOIE, Serge;
RONDEAU, Simon; TREMBLAY, Samuel and PICHETTE, André A. Synthesis of two
natural betulinic acid saponins containing
α-l-rhamnopyranosyl-(1→2)-α-l-arabinopyranose and their
analogues. GHAFFARI, A.; ABDOLLAHI, H.; KHOSHAYAND, M.R.;
SOLTANI BOZCHALOOI, I.; DADGAR,
A. and RAFIEE-TEHRANI, M.
Performance comparison of neural network training algorithms in modeling of
bimodal drug delivery. HAYKIN, Simon. HUSH, D. and HORNE, B.G. Progress in supervised
neural networks. HUSSAIN, Mukhtar; BEDI, Jatinder S. and SINGH,
Harpreet. Determining number of neurons in hidden layers for binary error
correcting codes. In: HUSSAIN, M.A.; SHAFIUR RAHMAN, M. and NG, C.W.
Prediction of pores formation (porosity) in foods during drying: generic models
by the use of hybrid neural network. IRIE, B. and MIYAKE, S. Capabilities of three-
layered perceptrons. In: IZADIFAR, M. and ZOLGHADRI JAHROMI, M. Application of
genetic algorithm for optimization of vegetable oil hydrogenation process. JAIN, Sanjay K.; SARKAR, Archana and GARG, Vaibhav.
Impact of declining trend of flow on Harike Wetland, India. JORJANI, E.; CHEHREH CHELGANI, S. and MESROGHLI, S.H.
Application of artificial neural networks to predict chemical desulfurization
of Tabas coal. KASIRI, M.B.; ALEBOYEH, H. and ALEBOYEH, A. Modeling
and optimization of heterogeneous photo-fenton process with response surface
methodology and artificial neural networks. KHARE, Mukesh and SHIVA NAGENDRA S.M. KOSE, Erdogan. Modelling of colour perception of different age groups using
artificial neural networks. KROSE, Ben and SMAGT, Patrick van der. KVASNICA, Miroslav; SAREK, Jan; KLINOTOVA, Eva;
DZUBAK, Petr and HAJDUCH, Marian. Synthesis of phthalates of betulinic acid and betulin with cytotoxic
activity. LOUGHLIN, Wendy A. Biotransformations in organic
synthesis. MANDAL,
Sumantra; SIVAPRASAD, P.V; VENUGOPAL, S. and MURTHY, K.P.N. Artificial
neural network modeling to evaluate and predict the deformation
behavior of stainless steel type AISI 304L during hot torsion. MANOHAR, Balaraman and DIVAKAR,
Soundar. An artificial neural network analysis of porcine
pancreas lipase catalysed esterification of anthranilic acid with methanol. MOHANTY, Swati. Estimation of vapour liquid
equilibria of binary systems, carbon dioxide-ethyl caproate, ethyl caprylate
and ethyl caprate using artificial neural networks. MUKHERJEE, Rama; JAGGI, Manu; RAJENDRAN, Praween;
SRIVASTAVA, Sanjay K.; SIDDIQUI, Mohammad J.A.; VARDHAN, Anand and BURMAN,
Anand C. Synthesis of 3-O-acyl/3-benzylidene/3-hydrazone/3-hydrazine/
17-carboxyacryloyl ester derivatives of betulinic acid as anti-angiogenic
agents. MUKHERJEE, Rama; KUMAR, Vivek; SRIVASTAVA, Sanjay K.;
AGARWAL, Shiva K. and BURMAN, Anand C. Betulinic acid derivatives as anticancer
agents: structure activity relationship. NATH, A. and CHATTOPADHYAY, P.K. Optimization of oven
toasting for improving crispness and other quality attributes of ready to eat
potato-soy snack using response surface methodology. RAJENDRAN, Praveen; JAGGI, Manu; SINGH,
Manoj K.; MUKHERJEE, Rama and BURMAN, Anand C. Pharmacological evaluation of
C-3 modified betulinic acid derivatives with potent anticancer activity. RAZAVI, Mohammad A.; MORTAZAVI, Ali and MOUSAVI,
Mahmoud. Dynamic modeling of milk ultrafiltration by artificial neural network. SARACOGLU, O. Galip. An artificial neural network
approach for the prediction of absorption measurements of an evanescent field
fiber sensor. SHANKAR, T.J. and BANDYOPADHYAY, S. Prediction of
extrudate properties using artificial neural networks. SILVA, James A.; COSTA NETO, Edilson Holanda;
ADRIANO, Wellington S.; FERREIRA, Andrea L.O. and GONÇALVES, Luciana R.B. Use of neural networks in
the mathematical modelling of the enzymic synthesis of amoxicillin catalysed by
penicillin G acylase immobilized in chitosan. SIN, H.N.; YUSOF, S.; SHIKH ABDUL HAMID, N. and ABDU.
RAHMAN, R. Optimization of enzymatic clarification of sapodilla juice using
response surface methodology. SONG,
Xiaowei; MITNITSKI, Arnold; MACKNIGHT, Chris and ROCKWOOD, Kenneth.
Assessment of individual risk of death using self-report data:
an artificial neural network compared with a frailty index. SZALENIEC, Maciej; WITKO, Małgorzata;
TADEUSIEWICZ, Ryszard and GOCLON, Jakub. Application of artificial neural
networks and DFT-based parameters for prediction of reaction kinetics of
ethylbenzene dehydrogenase. THIBEAULT, Dominic; GAUTHIER, Charles; LEGAULT, Jean;
BOUCHARD, Jimmy; DUFOUR, Philippe and PICHETTE Andre. Synthesis and
structure-activity relationship study of cytotoxic germanicane and lupane-type
3b-O-monodesmosidic saponins starting from betulin. TORRECILLA, J.S.; OTERO, L. and SANZ, P.D.
Optimization of an artificial neural network for thermal/pressure food
processing: Evaluation of training algorithms. WANG, Lingzhao; YANG, Bao;
WANG, Rui and DU, Xiuqiao. Extraction of pepsin-soluble collagen
from grass carp ( YASIN, Yamin; BASRIi, Mahiran; FAUJAN, Ahmad and
SALLEH, Abu Baker Salleh. Response
surface methodology as a tool to study the lipase-catalyzed synthesis of
betulinic acid ester. YOGEESWARI, Prumal and SRIRAM, Dharmarajan. Betulinic acid and its derivatives: a review on
their biological properties. ZAREVUKA, Marie and WIMMER, Zdenek. Plant products
for pharmacology: Application of enzymes in their transformations. Note: Electronic Journal of Biotechnology is not responsible if on-line references cited on manuscripts are not available any more after the date of publication. |