Research

Recent Research Projects and Publications

Dec 2016 – Nov 2017: Semantic Segmentation of Skin Lesions and Tissue Classification

While the focus of this research is on wound healing prediction using visual properties of wound images learnt with a deep neural network, we have also worked in parallel on skin lesion classification and segmentation. We have investigated deep learning methods for melanoma classification and for semantic segmentation of lesions such as seborrhoeic keratosis, nevus, and melanoma.
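
As a rough illustration of the hypercolumn idea behind the CVIS 2017 paper listed below (not the paper's actual pipeline; the VGG-16 backbone, tapped layer indices, and 1x1-convolution head here are hypothetical choices), the sketch below builds per-pixel features by upsampling and concatenating activations from several depths of a pretrained CNN:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical sketch: per-pixel "hypercolumn" features obtained by upsampling
# and concatenating activations from several depths of a pretrained CNN.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
TAP_LAYERS = {4, 9, 16, 23}  # a few intermediate conv/pool stages (illustrative)

def hypercolumns(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) -> (B, C_total, H, W) stacked hypercolumn features."""
    feats, x = [], image
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i in TAP_LAYERS:
            # Upsample each tapped feature map back to the input resolution.
            feats.append(F.interpolate(x, size=image.shape[-2:],
                                       mode="bilinear", align_corners=False))
    return torch.cat(feats, dim=1)

with torch.no_grad():
    hc = hypercolumns(torch.randn(1, 3, 224, 224))
    # A 1x1 convolution over the hypercolumns acts as a per-pixel classifier,
    # producing a lesion-vs-background logit map at full resolution.
    head = torch.nn.Conv2d(hc.shape[1], 2, kernel_size=1)
    mask_logits = head(hc)  # (1, 2, 224, 224)
```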

UPDATES:
1. We participated in the ISIC 2017 Skin Lesion Classification and Segmentation Challenge and placed in the top 5 out of 23 teams in the Classification Challenge.
2. My paper “Lesion Segmentation using Deep Hypercolumn Features” won the Best Imaging Paper Award at the CVIS 2017 conference, Oct 2017.

Publications:
1. Ramachandram, D. and Taylor, G.W., 2017. “Lesion Segmentation using Deep Hypercolumn Features.” Journal of Computational Vision and Imaging Systems: Special Issue on CVIS 2017, Vol. 3, No. 1, pp. 144-148.
2. DeVries, T. and Ramachandram, D., 2017. “Skin Lesion Classification Using Deep Multi-scale Convolutional Neural Networks.” arXiv preprint arXiv:1703.01402.
3. Ramachandram, D. and DeVries, T., 2017. “LesionSeg: Semantic segmentation of skin lesions using Deep Convolutional Neural Network.” arXiv preprint arXiv:1703.03372.

June 2015 – Oct 2016: Deep Multimodal Fusion and Architecture Optimization
Deep learning for multimodal problems such as activity recognition involves learning a joint representation of disparate modalities such as video, audio, and skeletal pose. However, most deep multimodal architectures are designed manually; questions such as “When to fuse?” or “What to fuse?” are often left to the experimenter. In this research, we study the feasibility of casting the choice of fusion structure as a Bayesian optimization problem, focusing on the gesture recognition domain, which involves four to five different modalities.
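
To make the "when to fuse" question concrete, here is a deliberately small, hypothetical sketch: the layer at which two modality streams are merged is treated as a single integer hyperparameter and searched with an off-the-shelf Gaussian-process optimizer (scikit-optimize). The published work searches much richer, graph-structured fusion architectures using graph-induced kernels, and evaluates candidates on real gesture-recognition data rather than the stub objective used here.

```python
import torch
import torch.nn as nn
from skopt import gp_minimize
from skopt.space import Integer

DEPTH, WIDTH = 4, 64  # total layers per path and hidden width (illustrative)

class TwoStreamFusionNet(nn.Module):
    """Two unimodal MLP streams concatenated after `fuse_at` layers."""
    def __init__(self, fuse_at: int, video_dim=128, pose_dim=32, n_classes=20):
        super().__init__()
        def mlp(d_in, n_layers):
            layers = []
            for _ in range(n_layers):
                layers += [nn.Linear(d_in, WIDTH), nn.ReLU()]
                d_in = WIDTH
            return nn.Sequential(*layers)
        self.video = mlp(video_dim, fuse_at)           # unimodal video stream
        self.pose = mlp(pose_dim, fuse_at)             # unimodal pose stream
        self.joint = mlp(2 * WIDTH, DEPTH - fuse_at)   # shared layers after fusion
        self.out = nn.Linear(WIDTH, n_classes)

    def forward(self, video, pose):
        fused = torch.cat([self.video(video), self.pose(pose)], dim=1)
        return self.out(self.joint(fused))

def validation_error(params):
    fuse_at, = params
    model = TwoStreamFusionNet(int(fuse_at))
    # Placeholder objective: train `model` on gesture data here and return
    # (1 - validation accuracy); a random value keeps the sketch runnable.
    return float(torch.rand(1))

result = gp_minimize(validation_error,
                     [Integer(1, DEPTH - 1, name="fuse_at")],
                     n_calls=15, random_state=0)
print("best fusion depth found:", result.x[0])
```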

Publications:

1. Ramachandram, D., Lisicki, M., Shields, T., Amer, M. and Taylor, G.W., “Structure optimization for deep multimodal fusion networks using graph-induced kernels”, Proc. of the 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 26-28, Bruges, Belgium.

2. Ramachandram, D. and Taylor, G.W., “Deep Multimodal Learning: A survey on recent advances and trends,” IEEE Signal Processing Magazine: Special Issue on Deep Learning for Visual Understanding, Vol. 34, No. 6, pp. 96-108, 2017.

3. Ramachandram, D., Lisicki, M., Shields, T., Amer, M. and Taylor, G.W., “Bayesian Optimization on Graph-structured Search Spaces: Optimizing Deep Multimodal Fusion Architectures,” Neurocomputing (to appear).

Past Research

My PhD research topic was in Robotic Vision. In particular, I designed and implemented a novel laser-based structured illumination system for scene characterization and investigated the effectiveness of numerous image descriptors for a robotic manipulator visual positioning task. Neural networks were used to learn the highly non-linear sensory-motor mapping between image features and the robot’s joint angles. In addition, I solved machine vision problems related to semiconductor die inspection.

Between 2003 and 2015, I focused on two inter-related research areas within the domain of Computer Vision:

(a) Semantic Image Understanding: this includes problems such as object class recognition, the semantic gap problem, knowledge-guided segmentation, context verification, multimodal (image and text) integration, and image knowledge representation. The scope covers both natural and medical imagery.

(b) Medical Image Analysis: covering aspects such as tumour segmentation, computer-aided diagnosis, medical image classification and retrieval, and collaborative architectures for tele-radiological applications.

Past Research Grants

2015-2017 Adaptive Appearance Models in Long Term Tracking, ScienceFund Grant, Ministry of Science, Technology and Innovation, Primary Researcher (Note: I left USM while the project was underway)

2013-2015 Vehicle Detection and Tracking in Aerial Surveillance Video, USM Short Term Research Grant, Primary Researcher

2007-2010 Narrowing the Semantic Gap: Feature to Concept Mapping, Fundamental Research Grant, Ministry of Higher Education, Malaysia, Primary Researcher

2007-2010 Delineation and Visualization of Tumours and Risk Structures, Research University Grant, USM, Co-Researcher

2007-2010 Multimodal Meaning Normalization Through Ontologies, Research University Grant, Universiti Sains Malaysia, Co-Researcher

2006-2008 Segmentation and 3D Visualization of Tumor from 2D CT Datasets, Science Fund Research Grant, Ministry of Science, Technology and Innovation, Malaysia, Primary Researcher

2006-2008 Development of a Web front-end to the Grid for Resource Sharing, Image analysis and Visualization, Science Fund Research Grant, Ministry of Science, Technology and Innovation, Malaysia, Co-Researcher

2005-2007 Hybrid Multiscale Image Segmentation, USM Short Term Research Grant, Co-Researcher

2004-2007 Ocular Fundus Image Segmentation for Early Detection of Diabetic Retinopathy, Universiti Sains Malaysia Short Term Research Grant, Primary Researcher

1999-2002 Intelligent Vision Inspection System for Quality Control in Semiconductor Industry, IRPA Top-Down Grant, Ministry of Science, Technology and Environment, Malaysia, Co-Researcher

Snapshots of Past Research

Contextual Models for Object Class Recognition

Here, the role of contextual information in improving object class recognition is investigated. We define the notion of context in computer vision, and our investigations have led to a framework that elegantly utilizes semantic and spatial context to boost the performance of a given object classifier. The contextual knowledge for a given image domain is modelled through a probabilistic framework.
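
A toy, hedged example of the semantic side of this idea (the label set, co-occurrence prior, and naive-Bayes style combination below are invented for illustration; the actual framework also models spatial context): re-score a base classifier's per-region probabilities with a co-occurrence prior conditioned on objects already detected in the scene.

```python
import numpy as np

labels = ["car", "road", "boat", "water"]
# p(label | context object already detected in the scene); rows: context object.
# These values are made up purely for illustration.
cooccurrence = np.array([
    [0.50, 0.40, 0.02, 0.08],   # context: car
    [0.45, 0.45, 0.02, 0.08],   # context: road
    [0.05, 0.05, 0.50, 0.40],   # context: boat
    [0.05, 0.05, 0.45, 0.45],   # context: water
])

def rescore(appearance_probs: np.ndarray, context_idx: int) -> np.ndarray:
    """Combine appearance scores with the contextual prior and renormalize."""
    combined = appearance_probs * cooccurrence[context_idx]
    return combined / combined.sum()

# Ambiguous region: appearance alone cannot separate "car" from "boat" ...
appearance = np.array([0.40, 0.05, 0.40, 0.15])
# ... but knowing the scene already contains "water" tips the decision.
best = np.argmax(rescore(appearance, labels.index("water")))
print(labels[int(best)])  # -> "boat"
```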

Selected Publications

  1. Mozaherul Abu Hasanat, Dhanesh Ramachandram, and Rajeswari Mandava. Conves: A context verification framework for object recognition system. In Proceedings of the International Conference on Information Systems, Technology and Applications. Kuwait University Press, 2009.
  2. Mozaherul Abul Hasanat, Dhanesh Ramachandram, and Rajeswari Mandava. Bayesian belief network learning algorithms for modelling contextual relationships in natural imagery: a comparative study. Artificial Intelligence Review, pages 1-18, 2010. DOI: 10.1007/s10462-010-9176-8.
  3. Mozaherul Abu Hasanat, Siti Zubaidah Harun, Dhanesh Ramachandram, and Rajeswari Mandava. Object class recognition using NEAT-evolved artificial neural network. In Proceedings of the International Conference on Computer Graphics, Imaging and Visualization, pages 271-275, Penang, Malaysia, 2008. IEEE Computer Society.

Learning the Kernel

Today, an unprecedented amount of data is generated daily by electronic devices such as mobile phones, cameras, medical imaging systems, and satellites. While data mining focuses on discovering patterns in vast amounts of data, a branch of machine learning focuses on learning from unstructured data. In many practical problems that require machine learning, there is usually an abundance of unlabelled or partially labelled data while labelled data is scarce, and semi-supervised learning techniques have been used successfully in these settings.

In this research, we propose an approach that learns the kernel using knowledge transferred from unlabelled data, to cope with situations where training examples are scarce. In our approach, unlabelled data is used to construct an optimized kernel that generalizes better on the target dataset. In the proposed kernel learning algorithm, Fisher Discriminant Analysis (FDA) is used in conjunction with the Maximum Mean Discrepancy (MMD) test statistic to optimize a base kernel using both labelled and unlabelled data. The constructed kernel is then used in an SVM, which was shown to improve prediction accuracy.
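
The sketch below is a heavily simplified, hypothetical version of that pipeline on synthetic data: a single RBF bandwidth is selected so that (a) the labelled and unlabelled samples look alike under the MMD statistic and (b) the labelled classes remain separable (a crude stand-in for the FDA criterion used in the published work), and the resulting Gram matrix is passed to an SVM as a precomputed kernel.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def mmd2(K_ss, K_tt, K_st):
    """Biased squared Maximum Mean Discrepancy from precomputed kernel blocks."""
    return K_ss.mean() + K_tt.mean() - 2.0 * K_st.mean()

def class_separation(K, y):
    """Within-class minus between-class mean similarity (crude FDA surrogate)."""
    same = y[:, None] == y[None, :]
    return K[same].mean() - K[~same].mean()

rng = np.random.default_rng(0)
X_lab = rng.normal(0.0, 1.0, size=(60, 5))        # scarce labelled data (synthetic)
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(0.2, 1.1, size=(200, 5))     # abundant unlabelled data (synthetic)

def objective(gamma):
    K_ll = rbf_kernel(X_lab, X_lab, gamma=gamma)
    K_uu = rbf_kernel(X_unlab, X_unlab, gamma=gamma)
    K_lu = rbf_kernel(X_lab, X_unlab, gamma=gamma)
    # Small MMD keeps the kernel consistent across the two samples; the
    # separation term discourages the degenerate "everything similar" kernel.
    return mmd2(K_ll, K_uu, K_lu) - class_separation(K_ll, y_lab)

best_gamma = min([0.01, 0.1, 0.5, 1.0, 5.0], key=objective)

# Train the SVM on the selected kernel, supplied as a precomputed Gram matrix.
clf = SVC(kernel="precomputed").fit(rbf_kernel(X_lab, X_lab, gamma=best_gamma), y_lab)
pred = clf.predict(rbf_kernel(X_unlab, X_lab, gamma=best_gamma))  # rows: test, cols: train
```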

Selected Publications

  1. M. Abbasnejad, Dhanesh Ramachandram, and Rajeswari Mandava. An unsupervised approach to learn the kernel functions: from global influence to local similarity. Neural Computing and Applications, pages 1-13, 2010. DOI: 10.1007/s00521-010-0411-7.
  2. Mohamad Ehsan Abbasnejad, Dhanesh Ramachandram, and Rajeswari Mandava. Optimizing kernel functions using transfer learning from unlabelled data. In Proceedings of the 2nd International Conference on Machine Vision (ICMV 2009), Dubai, U.A.E., December 2009. IEEE.


Medical Imaging

I work on computer-aided diagnosis for a number of medical problems, which involves the analysis of medical imaging modalities such as CT and MRI scans. Since this area of research is multi-disciplinary in nature, we have engaged in long-standing collaborations with radiologists, surgeons, and graphics and visualization experts. A difficult research issue in medical image analysis is the accurate segmentation or delineation of soft tissues. Because the boundary separating tumour and viable tissue is often only vaguely characterized, we have developed improved delineation algorithms based on level-set active contours. Anatomical atlases can also be used intelligently to “guide” the complete segmentation of organs and specific structures. Specific research problems include the automated detection and segmentation of white matter lesions in MRI, segmentation of osteosarcoma using Harmony Search-optimized fuzzy clustering, and medical image retrieval.
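
As a minimal, hedged illustration of level-set style segmentation (here the off-the-shelf morphological Chan-Vese variant from scikit-image applied to a synthetic image, not the improved formulations or the CT/MRI data used in the published work):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic "scan": a faint circular lesion with a fuzzy boundary on a noisy
# background, standing in for real CT/MRI data.
yy, xx = np.mgrid[:128, :128]
lesion = ((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2
image = 0.3 * lesion + 0.1 + 0.05 * np.random.default_rng(0).normal(size=(128, 128))

# Evolve an implicit contour for a fixed number of iterations; the result is a
# two-region partition driven by the Chan-Vese mean-intensity criterion.
mask = morphological_chan_vese(image, 200, init_level_set="checkerboard",
                               smoothing=3).astype(bool)
if mask.mean() > 0.5:          # the labelling of the two regions is arbitrary;
    mask = ~mask               # keep the smaller region as the "lesion"

# Overlap with the synthetic ground truth (Dice coefficient).
dice = 2.0 * (mask & lesion).sum() / (mask.sum() + lesion.sum())
print(f"Dice vs. synthetic ground truth: {dice:.2f}")
```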

Selected Publications
  1. Kok Haur Ong, Dhanesh Ramachandram, and Rajeswari Mandava. Automated white matter lesion segmentation in MRI using Box-whisker plot outlier detection. In Abhir Bhalerao and Nasir Rajpoot, editors, Proceedings of the Medical Image Analysis and Understanding Conference (MIUA 2010), volume 1, pages 227-231. British Machine Vision Association, University of Warwick, 2010.
  2. Mahmoud Saleh Jawarneh, Rajeswari Mandava, Dhanesh Ramachandram, and Ibrahim Lutfi Shuaib. Automatic initialization of contour for level set algorithms guided by integration of multiple views to segment abdominal CT scans. In Proceedings of the 2nd International Conference on Computational Intelligence, Modelling and Simulation, Bali, Indonesia, September 2010. UKSim and IEEE.
  3. Anusha Achutan, Rajeswari Mandava, Dhanesh Ramachandram, Mohd Ezane Aziz, and Ibrahim Lutfi Shuaib. Wavelet energy-guided level set-based active contour: A segmentation method to segment highly similar regions. Computers in Biology and Medicine, 40:608-620, 2010.
  4. Mahmoud Jawarneh, Rajeswari Mandava, Dhanesh Ramachandram, and Ibrahim Lutfi Shuaib. Segmentation of abdominal volume dataset slices guided by single annotated image. In 2nd International Conference on BioMedical Engineering and Informatics (BMEI’09), Tianjin, China, 2009.
  5. Osama Moh’d Alia, Rajeswari Mandava, and Dhanesh Ramachandram. A novel image segmentation algorithm based on harmony fuzzy search algorithm. In International Conference on Soft Computing and Pattern Recognition, Malacca, Malaysia, 2009. IEEE, IEEE Computer Society.