Thermo-acoustic instabilities arising in combustion processes cause significant deterioration and safety issues in various human-engineered systems such as land-based and airborne gas turbine engines. The phenomenon is characterized by self-sustaining, large-amplitude pressure oscillations accompanied by periodic coherent vortex shedding at varying spatial scales. Early detection and close monitoring of combustion instability are key to extending the remaining useful life (RUL) of any gas turbine engine. However, the transition of a stable combustion process toward such impending instability is extremely difficult to detect from pressure data alone due to its sudden (bifurcation-type) nature. Toolchains able to detect the onset of instability early would therefore have a transformative impact on the safety and performance of modern engines. This paper proposes an end-to-end deep convolutional selective autoencoder approach to capture the rich information in hi-speed flame video for instability prognostics. In this context, an autoencoder is trained to selectively mask stable flame image frames while allowing unstable frames to pass through. Performance is compared with a well-known image processing tool, the conditional random field, trained to be selective in the same manner, and an information-theoretic threshold value is derived for detection. The proposed framework is validated on real data collected from a laboratory-scale combustor over varied operating conditions, where it is shown to effectively detect subtle instability features as the combustion process transitions from the stable to the unstable regime.
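The selective masking described above amounts to an implicit labeling of the training pairs: stable frames map to blank targets, unstable frames reconstruct themselves. A minimal sketch of that target construction (the function name and array shapes are illustrative, not from the paper) might look like:

```python
import numpy as np

def make_selective_targets(frames, labels):
    """Build selective-autoencoder training targets.

    frames: (N, H, W) grayscale flame frames.
    labels: (N,) array with 1 = unstable, 0 = stable.
    Stable frames get all-zero (masked) targets so the network learns to
    suppress them; unstable frames are their own reconstruction targets.
    """
    mask = (labels == 1)[:, None, None]  # broadcast over H and W
    return np.where(mask, frames, np.zeros_like(frames))
```

At inference time, a frame that survives the learned mask (i.e., produces a non-blank reconstruction) is evidence of instability.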
Prognostics and Health Monitoring (PHM), deep convolutional network, selective autoencoder, combustion instabilities, implicit labeling
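The information-theoretic threshold mentioned in the abstract builds on the Kullback-Leibler divergence (Kullback & Leibler, 1951, below). The paper's exact derivation is not reproduced here, but a minimal sketch of the divergence computation on two discrete distributions (e.g., intensity histograms of masked network outputs; the smoothing constant `eps` is an assumption for numerical safety) could be:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete distributions.

    Both inputs are normalized, and a small eps is added to
    avoid log(0) for empty histogram bins.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

A frame whose output-histogram divergence from a blank (fully masked) reference exceeds the derived threshold would be flagged as unstable.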
Akintayo, A., Lee, N., Chawla, V., Mullaney, M., Marett, C., Singh, A., . . . Sarkar, S. (2016, March). An end-to-end convolutional selective autoencoder approach to soybean cyst nematode eggs detection. arXiv (arXiv:1603.07834v1), 1-10.
Akintayo, A., Lore, K. G., Sarkar, S., & Sarkar, S. (2016, March). Early detection of combustion instabilities using convolutional selective autoencoders on hi-speed video. arXiv (arXiv:1603.07839v1), 1-10.
Barbu, A. (2009). Learning real-time MRF inference for image denoising. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1574-1581.
Berger, A. L., Pietra, S. A. D., & Pietra, V. J. D. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1), 39-71.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., . . . Bengio, Y. (2010, June). Theano: A CPU and GPU math expression compiler. Proceedings of the Python for Scientific Computing Conference (SciPy). (Oral presentation)
Berkooz, G., Holmes, P., & Lumley, J. L. (1993). The proper orthogonal decomposition in the analysis of turbulent flows. Annual Review of Fluid Mechanics, 25(1), 539- 575. doi: 10.1146/annurev.fl.25.010193.002543
Bishop, C. M. (2006). Pattern recognition and machine learning. New York, NY, USA: Springer.
Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. 25th International Conference on Machine Learning, 1-7.
Domke, J. (2013). Learning graphical model parameters with approximate marginal inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10), 2454-2467.
Duchi, J., Hazan, E., & Singer, Y. (2011, July). Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12, 2121-2159.
Erdogan, H. (2010, December). A tutorial on sequence labeling. ICMLA.
Farabet, C., Couprie, C., Najman, L., & LeCun, Y. (2013). Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1915-1929.
Fung, J., & Mann, S. (2004, August). Using multiple graphics cards as a general purpose parallel computer: Applications to computer vision. International Conference on Pattern Recognition (ICPR), 1, 805-808.
Graves, A. (2014, June). Generating sequences with recurrent neural networks. arXiv:1308.0850v5 [cs.NE], 1-43.
Graves, A., & Schmidhuber, J. (2005). Framewise phoneme classification with bidirectional LSTM and other neural network architectures. IJCNN, 1-8.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. (2012, July). Improving neural networks by preventing co-adaptation of feature detectors. arXiv (1207.0580v1), 1-18.
Hussain, A. K. M. F. (1983). Coherent structures - reality and myth. Physics of Fluids, 26(10), 2816-2850.
Jones, S. (2015, April). Convolutional autoencoders in python/theano/lasagne. Retrieved Feb 03, 2016, from https://swarbrickjones.wordpress.com/
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
Kulesza, T., Amershi, S., Caruana, R., Fisher, D., & Charles, D. (2014, April). Structured labeling to facilitate concept evolution in machine learning. ACM, 1-10.
Kullback, S., & Leibler, R. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1), 79-86.
Lafferty, J., McCallum, A., & Pereira, F. C. (2001, June). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. International Conference on Machine Learning, 282-289.
LeCun, Y., & Bengio, Y. (1998). Convolutional networks for images, speech, and time-series. In The handbook of brain theory and neural networks. Cambridge, MA: MIT Press.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998, November). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Li, H., Zhou, X., Jeffries, J. B., & Hanson, R. K. (2007, February). Sensing and control of combustion instabilities in swirl-stabilized combustors using diode-laser absorption. AIAA, 45(2), 1-9.
Liu, C., Ghosal, S., Jiang, Z., & Sarkar, S. (2016). An unsupervised spatiotemporal graphical modeling approach to anomaly detection in cps. Proceedings of the International Conference on Cyber-physical Systems (ICCPS), 1 - 10.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. CVPR, 3431-3440.
Lore, K. G., Akintayo, A., & Sarkar, S. (2016, June). LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition (doi:10.1016/j.patcog.2016.06.008), 1-13.
Masci, J., Meier, U., Ciresan, D., & Schmidhuber, J. (2011). Stacked convolutional auto-encoders for hierarchical feature extraction. In T. Honkela et al. (Eds.), ICANN 2011 (pp. 52-59). Springer-Verlag Berlin Heidelberg.
Meyer, A. (2011-2012). HMM and part-of-speech tagging. Lecture note.
Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech processing. Proceedings of the IEEE, 77(2), 257-286.
Sarkar, S., Lore, K. G., & Sarkar, S. (2015, December). Early detection of combustion instability by neural-symbolic analysis on hi-speed video. In Workshop on cognitive computation: Integrating neural and symbolic approaches (coco @ nips 2015). Montreal, Canada.
Sarkar, S., Lore, K. G., Sarkar, S., Ramanan, V., Chakravarthy, S. R., Phoha, S., & Ray, A. (2015). Early detection of combustion instability from hi-speed flame images via deep learning and symbolic time series analysis. Annual Conference of the Prognostics and Health Management Society, 1-10.
Scherer, D., Müller, A., & Behnke, S. (2010). Evaluation of pooling operations in convolutional architectures for object recognition. International Conference on Artificial Neural Networks (ICANN), 1-10.
Schmid, P. J. (2010). Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656, 5-28. doi:10.1017/S0022112010001217
Sercu, T., Puhrsch, C., Kingsbury, B., & LeCun, Y. (2016). Very deep multilingual convolutional neural networks for LVCSR. ICASSP, 1-5.
Simonyan, K., & Zisserman, A. (2015, April). Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR) (arXiv:1409.1556v6), 14.
Sutskever, I., Martens, J., Dahl, G., & Hinton, G. (2013). On the importance of initialization and momentum in deep learning. International Conference on Machine Learning, JMLR, 28.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., . . . Rabinovich, A. (2015). Going deeper with convolutions. CVPR, 9.
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331, 1279-1285.
Thoma, M. (2016, February). Lasagne for Python newbies. Retrieved 03, from https://martinthoma.com/lasagne-for-python-newbies/
Zeiler, M. D. (2012, December). ADADELTA: An adaptive learning rate method. arXiv:1212.5701v1, 1-6.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. ECCV(8689), 813-833.