Rohit Philip, Ph.D.

COMPUTER VISION ENGINEER

Postdoctoral Research Associate I
Dept. of Medical Imaging

  1. Generalized Precision, Recall, and F Scores.
  2. Classical detection theory has long used traditional measures such as precision, recall, and F-measures to evaluate the quality of detection results, either during performance analysis of competing detection algorithms or during parameter tuning to find the parameter set that produces optimal detection results. While these measures are impeccable at the pixel level, or at the object level in cases of perfect detection, i.e., a one-to-one correspondence between objects produced by the detection algorithm and objects present in the ground truth (true positives), or no correspondence at all between objects (false positives and false negatives), cases of many-to-one correspondence are ignored entirely. To elaborate: situations where a ground-truth object is split into multiple detected objects (splits), or multiple ground-truth objects are represented by a single detection (merges), are not included in the quantification, resulting in incorrect evaluation. These situations arise commonly in practice, either as the end result of an imperfect detection algorithm or as intermediate results of parameter tuning, making it impossible to evaluate performance objectively. At its core, the problem consists of precisely determining how all detected objects (and misses) are classified with respect to the underlying ground truth. This paper precisely defines the relationship between the complete set of detected objects and the underlying ground-truth objects. Once a precise definition is established, we develop two new precision measures and two new recall measures, resulting in four new F and G measures, all of which reduce to their classical detection theory counterparts in the case of perfect detection (no splits and/or merges). We introduce the shared positive detection - shared positive truth (SPD-SPT) curve, and evaluate the detection performance of eight object detection algorithms using our novel kallynodetection framework.



    We present a hypothetical scenario to highlight a potential disparity that arises in the case of multiple detections on a single ground-truth object. In the figure above, we represent ground truth by circles and detected objects by squares and/or rectangles. Case (a) shows a single ground-truth object detected by precisely one detection, i.e., the perfect detection case. There is exactly one true positive, and no false positives or false negatives. The precision and recall for case (a) are therefore both 1 (1/(1+0)), and the F and G measures are also both 1. Case (c) is also straightforward, with one true positive, one false positive, and no false negatives. The precision and recall for case (c) are 0.5 (1/(1+1)) and 1 (1/(1+0)), respectively, while the F measure is 0.67 and the G measure is 0.71. Case (b) is the most interesting case, where multiple objects are detected on a single ground-truth object. It is assumed that both detected objects satisfy the chosen match criterion, typically degree of overlap or centroid localization. The literature on precision and recall is quite ambiguous when dealing with such scenarios; it is typically assumed that there is 1 true positive (since the ground-truth object is detected, after all, albeit twice), and no false positives or false negatives. In essence, the second object is ignored, and we wind up with precision, recall, F, and G measures identical to those of case (a). However, the number of detected objects is 2 in this case, while the denominator of precision, as defined, is 1 (TP + FP). Precision therefore takes a value of 1 when there is only one true positive and two detected objects, which is clearly incorrect. A less common approach is to consider the second detected object a false positive, despite the fact that it also satisfies the match criterion.
    In this situation, we would end up with precision, recall, F, and G measures identical to those of case (c). While the denominator of precision is now satisfied by forcing one of the detections to count as a false positive, the object detection algorithm is unfairly penalized: it appears identical to an algorithm that produced an actual false positive, as in case (c), rather than a multiple detection. Thus, under classical detection theory and the performance metrics commonly used to evaluate detection algorithms, case (b) is scored identically to either case (a) or case (c), both of which are incorrect. Ideally, a measure that quantifies the performance of these three cases would assign case (b) a score worse than the perfect detection case (a) and better than case (c).
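The classical scores for the three cases can be reproduced in a few lines (a sketch using only the standard definitions; the generalized measures from the paper are not shown here):

```python
import math

def detection_scores(tp, fp, fn):
    """Classical precision, recall, F (harmonic mean), and G (geometric mean)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    g = math.sqrt(precision * recall)
    return precision, recall, f, g

# Case (a): one detection on one truth object -- perfect detection.
print(detection_scores(tp=1, fp=0, fn=0))   # (1.0, 1.0, 1.0, 1.0)

# Case (c): one true positive plus one genuine false positive.
p, r, f, g = detection_scores(tp=1, fp=1, fn=0)
print(round(p, 2), round(r, 2), round(f, 2), round(g, 2))  # 0.5 1.0 0.67 0.71

# Case (b) is the ambiguous one: scoring it as (tp=1, fp=0, fn=0) reproduces
# case (a), while forcing the second detection to be a false positive
# reproduces case (c); neither is satisfactory.
```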

    R.C. Philip, and J.J. Rodríguez. "Generalized Precision, Recall, and F Scores for Evaluation of Object Detection Performance." IEEE Transactions on Pattern Analysis and Machine Intelligence (Undergoing final edits).
    [PDF Placeholder] [BibTeX Placeholder]

    Classical detection theory has long used traditional measures such as precision, recall, F measure, and G measure to evaluate the quality of detection results. Such evaluation can be done for performance analysis of competing detection algorithms, or for parameter tuning to optimize parameters based on training data. The performance analysis can be done at the sample level (e.g., pixel-by-pixel) or at the object level (e.g., a set of connected pixels are grouped together into one object). Conventional performance measures are effective when applied at the sample level or when applied at the object level with simple detection outcomes. In many cases, however, object-level detection often results in hybrid detections such as a single ground-truth object split into multiple detected objects or multiple ground-truth objects merged into a single detected object and combinations thereof. In such cases, the conventional performance measures are ineffective. We propose a generalized framework for evaluating object detection algorithms, which involves two new precision measures and two new recall measures, resulting in four new F and G measures, all of which reduce to their classical detection theory counterparts in cases with simple detection outcomes (no split detections or merged truths). We introduce the shared positive detection vs. shared positive truth (SPD-SPT) curve, and evaluate the detection performance of eight object detection algorithms using our generalized framework.
    In preparation. To be updated soon.

  3. Zebrafish Anatomic Assay - Automatic Damage Scoring.
  4. We developed a machine learning solution to automatically determine the damage score of confocal microscopy images of zebrafish neuromasts after exposure to ototoxins. Raw images with damage scores ranging from 0 (least damage) to 3 (most damage) are shown below.



    The first step is segmenting the zebrafish neuromast from the surrounding cell structure. Since all the information was present in the green channel of the image, segmentation was accomplished by simple thresholding (all pixels with intensity less than the 93rd percentile were attenuated; this corresponded to the best Dice score with respect to manually obtained ground truth segmentations).
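This thresholding step and the Dice evaluation can be sketched as follows (assuming an RGB image array; the 93rd-percentile value comes from the tuning described above, and the function names are illustrative):

```python
import numpy as np

def segment_green_channel(img_rgb, pct=93):
    """Keep only pixels at or above the given intensity percentile of the
    green channel; everything below the threshold is attenuated."""
    green = img_rgb[..., 1].astype(float)
    thresh = np.percentile(green, pct)
    return green >= thresh

def dice(mask, truth):
    """Dice similarity between a binary segmentation and a ground-truth mask."""
    inter = np.logical_and(mask, truth).sum()
    return 2.0 * inter / (mask.sum() + truth.sum())
```

In practice the percentile would be swept over a range and the value maximizing the Dice score against the manual segmentations retained.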



    We then extracted 99 features, including size, shape, intensity, and texture features, three of which are shown below.
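A few of these feature types can be illustrated as follows (an illustrative sketch, not the actual 99-feature extractor; the specific feature definitions here are assumptions):

```python
import numpy as np

def basic_features(green, mask):
    """A few illustrative features of a segmented neuromast: size, mean
    intensity, intensity spread (a crude texture proxy), and a shape feature
    (ratio of principal-axis lengths from the pixel-coordinate covariance)."""
    ys, xs = np.nonzero(mask)
    vals = green[mask]
    cov = np.cov(np.vstack([xs, ys]).astype(float))
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    elongation = np.sqrt(eigvals[1] / max(eigvals[0], 1e-12))
    return {"area": int(mask.sum()),
            "mean_intensity": float(vals.mean()),
            "intensity_std": float(vals.std()),
            "elongation": float(elongation)}
```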



    Cross-validation on a training set of manually annotated damage scores was performed to reduce the feature set. We then cascaded four individual support vector machine (SVM) classifiers to determine hyperplanes in the reduced feature space that best separate the binary classes corresponding to each damage score. A Bland-Altman analysis comparing the two damage scoring methods (manual versus automatic) was performed; the results are presented below.
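The cascade idea can be sketched as follows. This is a minimal illustration, not the actual pipeline: it substitutes a hypothetical 1-D threshold "stump" for each SVM, and decomposes the four damage classes into three cumulative binary stages (score > 0, > 1, > 2), with each passed stage bumping the predicted score.

```python
import numpy as np

class ThresholdStump:
    """Stand-in for a binary SVM: learns the best single cut on a 1-D
    feature. In the real pipeline each stage is an SVM hyperplane in the
    reduced feature space."""
    def fit(self, x, y):
        xs = np.unique(x)
        cands = (xs[:-1] + xs[1:]) / 2.0          # midpoints between values
        errs = [np.mean((x >= c).astype(int) != y) for c in cands]
        self.cut = cands[int(np.argmin(errs))]
        return self
    def predict(self, x):
        return (x >= self.cut).astype(int)

def fit_cascade(x, scores, n_classes=4):
    """Stage k learns to separate {score <= k} from {score > k}."""
    return [ThresholdStump().fit(x, (scores > k).astype(int))
            for k in range(n_classes - 1)]

def predict_cascade(stages, x):
    pred = np.zeros(len(x), dtype=int)
    for stage in stages:
        pred += stage.predict(x)                  # each passed stage adds one
    return pred
```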

    The automated damage scoring algorithm computes damage scores at an average of 0.51 seconds per image (including the overhead for training the classifiers), a process that took much longer when done manually. The results are also consistent and free from operator bias and human error, and the approach eliminates the need for trained experts to painstakingly annotate the data, giving them back time better spent elsewhere.



    R.C. Philip, J.J. Rodríguez, M. Niihori, R.H. Francis, J.A. Mudery, J.S. Caskey, E. Krupinski, and A. Jacob, "Automated High-Throughput Damage Scoring of Zebrafish Lateral Line Hair Cells After Ototoxin Exposure," Zebrafish 2018;15(2):145-155.
    [PDF via Pubmed]

    Zebrafish have emerged as a powerful biological system for drug development against hearing loss. Zebrafish hair cells, contained within neuromasts along the lateral line, can be damaged with exposure to ototoxins, and therefore, pre-exposure to potentially otoprotective compounds can be a means of identifying promising new drug candidates. Unfortunately, anatomical assays of hair cell damage are typically low-throughput and labor intensive, requiring trained experts to manually score hair cell damage in fluorescence or confocal images. To enhance throughput and consistency, our group has developed an automated damage-scoring algorithm based on machine-learning techniques that produce accurate damage scores, eliminate potential operator bias, provide more fidelity in determining damage scores that are between two levels, and deliver consistent results in a fraction of the time required for manual analysis. The system has been validated against trained experts using linear regression, hypothesis testing, and the Pearson's correlation coefficient. Furthermore, performance has been quantified by measuring mean absolute error for each image and the time taken to automatically compute damage scores. Coupling automated analysis of zebrafish hair cell damage to behavioral assays for ototoxicity produces a novel drug discovery platform for rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
    @article{Philip18,
    author = {Philip, Rohit C. and Rodriguez, Jeffrey J. and Niihori, Maki and Francis, Ross H. and Mudery, Jordan A. and Caskey, Justin S. and Krupinski, Elizabeth and Jacob, Abraham},
    title = {Automated High-Throughput Damage Scoring of Zebrafish Lateral Line Hair Cells After Ototoxin Exposure},
    journal = {Zebrafish},
    volume = {15},
    number = {2},
    publisher = {Mary Ann Liebert, Inc.},
    doi = {10.1089/zeb.2017.1451},
    pages = {145--155},
    year = {2018},
    }

    R.C. Philip, S.R.S.P. Malladi, M. Niihori, A. Jacob and J.J. Rodríguez, "Performance of Supervised Classifiers for Damage Scoring of Zebrafish Neuromasts," 2018 IEEE Southwest Symp. on Image Analysis and Interpretation, Apr. 2018, Las Vegas, NV, pp. 113-116.
    [PDF via IEEEXplore]

    Supervised machine learning schemes are widely used to perform classification tasks. There is a wide variety of classifiers in use today, such as single- and multi-class support vector machines, k-nearest neighbors, decision trees, random forests, naive Bayes classifiers with or without kernel density estimation, linear discriminant analysis, quadratic discriminant analysis, and numerous neural network architectures. Our prior work used high-level shape, intensity, and texture features as predictors in a single-class support vector machine classifier to classify images of zebrafish neuromasts obtained using confocal microscopy into four discrete damage classes. Here, we analyze the performance of a multitude of supervised classifiers in terms of mean absolute error using these high-level features as predictors. In addition, we also analyze performance while using raw pixel data as predictors.
    @INPROCEEDINGS{Philip18a,
    author={Philip, R.C. and Malladi, S.R.S.P. and Niihori, M. and Jacob, A. and Rodriguez, J.J.},
    booktitle={Image Analysis and Interpretation (SSIAI), 2018 IEEE Southwest Symposium on},
    title={Performance of Supervised Classifiers for Damage Scoring of Zebrafish Neuromasts},
    year={2018},
    month={April},
    pages={113-116},
    keywords={Bayes methods;biology computing;decision trees;feature extraction;image classification;image texture;nearest neighbour methods;neural net architecture;support vector machines;high-level shape;predictors;single-class support vector machine classifier;zebrafish neuromasts;discrete damage classes;supervised classifiers;high-level features;supervised machine learning schemes;multiclass support vector machines;k-nearest neighbors;decision trees;random forests;naive Bayes classifiers;kernel density estimation;linear discriminant analysis;quadratic discriminant analysis;neural network architectures;damage scoring;intensity;texture features;confocal microscopy;image classification;Support vector machines;Biological neural networks;Feature extraction;Backpropagation;Forestry;Neurons;Decision trees;Neural network;support vector machine;random forest;naive Bayes classifier;supervised learning},
    doi={10.1109/SSIAI.2018.8470377},}

  5. Zebrafish Tracking.
  6. Automating the behavioral assay was successful, and we achieved high throughput with minimal error. However, we noticed that damaged fish were sometimes stuck to the bottom of the tank, oriented against the flow, contributing to a higher rheotaxis index than expected. We improve the behavioral assay by tracking individual zebrafish from frame to frame and using this motion information to obtain a complete picture of zebrafish behavior. We associate zebrafish across frames by using optical flow to determine where each detected zebrafish is headed. This frame-to-frame association sometimes produces broken track fragments, which are later connected to produce complete tracking information.
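The frame-to-frame association step can be sketched as a greedy nearest-neighbor matcher on flow-predicted positions (an illustrative sketch: the flow vectors here stand in for the per-fish optical-flow estimates, and `max_dist` is a hypothetical gating threshold):

```python
import numpy as np

def associate(prev_centroids, flows, next_centroids, max_dist=20.0):
    """Advance each previous centroid by its flow vector, then match it to
    the nearest unclaimed detection in the next frame. Unmatched centroids
    and detections become the ends/starts of broken track fragments."""
    predicted = prev_centroids + flows
    matches, claimed = {}, set()
    # Match the most confident (closest) predictions first.
    order = np.argsort([np.min(np.linalg.norm(next_centroids - p, axis=1))
                        for p in predicted])
    for i in order:
        d = np.linalg.norm(next_centroids - predicted[i], axis=1)
        for j in np.argsort(d):
            if j not in claimed and d[j] <= max_dist:
                matches[int(i)] = int(j)
                claimed.add(int(j))
                break
    return matches   # prev index -> next index; absent keys are fragment ends
```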

    We use this tracking information to compute additional metrics based on zebrafish swimming activity: distance traveled, direction of angular motion, deviation from the direction of water flow, etc.
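Two of these metrics can be computed directly from a centroid track (a sketch with illustrative definitions; the assay's actual metric definitions may differ):

```python
import numpy as np

def track_metrics(track, flow_dir=np.array([1.0, 0.0])):
    """Per-track metrics: total distance traveled, and mean angular deviation
    (degrees) of the swimming heading from the water-flow direction.
    `track` is an (N, 2) array of centroid positions over frames."""
    steps = np.diff(track, axis=0)
    dist = np.linalg.norm(steps, axis=1)
    total = float(dist.sum())
    moving = dist > 1e-9                     # ignore frames with no motion
    if not moving.any():
        return total, 0.0
    headings = steps[moving] / dist[moving, None]
    cosang = np.clip(headings @ flow_dir / np.linalg.norm(flow_dir), -1, 1)
    deviation = float(np.degrees(np.arccos(cosang)).mean())
    return total, deviation
```

A fish swimming head-to-current (rheotaxis) would show a deviation near 180 degrees from the flow direction.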




    The work on zebrafish tracking is currently under review with Tech Launch Arizona on a possible patent.

  7. Zebrafish Behavioral Assay.
  8. We built a zebrafish detection module to detect all the zebrafish in an individual video frame. The central premise of the detection algorithm is that the zebrafish are darker than the surrounding water. Classical image processing techniques that detect dark objects of interest were surveyed, analyzed, and implemented. The user was given a choice of six detection algorithms: 1) Bottom Hat Transformation, 2) Kernel Density Estimation, 3) Multiscale Wavelets, 4) Local Comparison, 5) Locally Enhancing Filtering, and 6) Band Pass Filtering. Each of these algorithms was implemented and provides the user with different trade-offs in terms of accuracy, speed, etc. Detection also resulted in good segmentation (grouping of detected pixels into objects of interest).
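The bottom-hat transformation, the first of these options, can be sketched with plain NumPy (a minimal illustration with a square structuring element; production code would use an optimized morphology library):

```python
import numpy as np

def gray_dilate(img, k=3):
    """Maximum filter with a k-by-k square structuring element (edge-padded)."""
    r = k // 2
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    out = np.full_like(img, -np.inf, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, padded[r + dy:r + dy + h, r + dx:r + dx + w])
    return out

def gray_erode(img, k=3):
    # Erosion is dilation of the negated image, negated back.
    return -gray_dilate(-img, k)

def bottom_hat(img, k=3):
    """Morphological closing minus the image: responds strongly to objects
    darker than their surroundings and smaller than the structuring element
    -- exactly the dark-fish-on-bright-water premise above."""
    closing = gray_erode(gray_dilate(img, k), k)
    return closing - img
```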



    The results of the detection algorithm module (using the Bottom Hat Transformation) are quantified in terms of precision, recall, and F-scores. Precision and recall of the detection algorithm with respect to ground truth obtained manually from four human observers (X) are plotted against the precision-recall values of each observer measured against the others (dots) in a scatterplot, showing that the detection algorithm achieves high precision and recall and performs comparably with the inter-observer variation. The performance of the principal component orientation determination module is plotted as a distribution of mean absolute angular error relative to ground-truth orientations obtained from each of the four human observers. Finally, the RI at each flow epoch (flow off, flow stabilization, and flow on) was plotted to evaluate the performance of our high-throughput, fully automated behavioral assay.
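The principal-component orientation step can be sketched as follows (an illustrative version: the orientation is taken as the direction of the leading eigenvector of the pixel-coordinate covariance of a detected blob; note that image row coordinates increase downward, so a sign convention must be fixed in practice):

```python
import numpy as np

def blob_orientation(mask):
    """Orientation (degrees in [0, 180)) of a binary blob, taken as the
    direction of the principal eigenvector of the covariance of the
    blob's pixel coordinates."""
    ys, xs = np.nonzero(mask)
    coords = np.vstack([xs, ys]).astype(float)
    coords -= coords.mean(axis=1, keepdims=True)
    cov = coords @ coords.T / coords.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    vx, vy = eigvecs[:, np.argmax(eigvals)]    # leading eigenvector
    return np.degrees(np.arctan2(vy, vx)) % 180.0
```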







    D.W. Todd, R.C. Philip, M. Niihori, R.A. Ringle, K.R. Coyle, S.F. Zehri, J.A. Mudery, R.H. Francis, J.J. Rodríguez, and A. Jacob, "A Fully-Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay," Zebrafish 2017;14(4):331-342.
    [PDF via Pubmed]

    Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have built a new, fully-automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and imaging system consisting of network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully-automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs (currently no FDA approved treatment exists) into preclinical mammalian models of hearing loss.
    @article {Todd17,
    author = {Todd, Douglas W. and Philip, Rohit C. and Niihori, Maki and Ringle, Ryan A. and Coyle, Kelsey R. and Zehri, Sobia F. and Zabala, Leanne and Mudery, Jordan A. and Francis, Ross H. and Rodriguez, Jeffrey J. and Jacob, Abraham},
    title = {A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay},
    journal = {Zebrafish},
    volume = {14},
    number = {4},
    publisher = {Mary Ann Liebert, Inc.},
    issn = {},
    doi = {10.1089/zeb.2016.1412},
    pages = {331--342},
    year = {2017},
    }

  9. Raspberry Pi Imaging System.
  10. We built a swimming apparatus consisting of a swimming tank (A) comprising sixteen swimming lanes. We also built removable swimming chambers (B) to house zebrafish test populations, facilitating quick turnaround time between experiments. We then built a low-cost imaging system using a network of interconnected dedicated Raspberry Pi microcomputers (C) with NoIR infrared cameras for each lane, which were connected via a zebrafish behavioral assay intranet to a control computer (D). The entire system was housed in a cabinet (E & F) built by an external contractor.



    We then built a zebrafish server on the control computer to serve a control webpage that can be accessed from anywhere in the world using secure University of Arizona NetID authentication. This control webpage is used to configure various testing conditions, start tests, view videos, manage RI computation settings, enable/disable cameras, and view the RI results. The architecture of the system is shown below.



    By capturing video in 1080p high definition (1920 x 1080 pixels) at 30 frames per second, we achieved improved resolution (B) compared to the previous camera (A). The improved resolution produced larger fish images (note that the eyes of the zebrafish larvae are visible), making the downstream tasks of detecting zebrafish, determining their orientation, and computing RI much easier. The low cost of each Raspberry Pi microcomputer (~$40) and NoIR infrared camera (~$25) allowed us to build the entire imaging system for one tank comprising sixteen swimming lanes for around $1,000. Adding in the cost of the control computer, which can serve multiple tanks (projected maximum of 6), the entire swimming apparatus consisting of two tanks and the control computer was built for under $3,000.

    The control system was built on an Intel NUC computer running Ubuntu 16.04 LTS with a custom network configuration to serve the behavioral assay intranet and communicate with the individual Raspberry Pis. The webpage is served by an Apache web server running CGI scripts, with Perl modules accessing configuration information and RI results from XML files. SSH and Samba share access to the Pis is also available.

    The system was built by my good friend and colleague, Douglas W. Todd; I inherited the system and helped maintain it, making modifications such as improved result access.


    D.W. Todd, R.C. Philip, M. Niihori, R.A. Ringle, K.R. Coyle, S.F. Zehri, J.A. Mudery, R.H. Francis, J.J. Rodríguez, and A. Jacob, "A Fully-Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay," Zebrafish 2017;14(4):331-342.
    [PDF via Pubmed]


  11. COMET - Continuous Object Motion Estimation and Tracking.
  12. A crucial element of any video is a moving object - something that we can clearly perceive to have changed its position with time. An immediate intellectual reaction to this observed motion is to determine how much the object has moved from one frame to the next, with respect to its surroundings. Once the motion of the object (and consequently its speed and direction of motion) is estimated, the logical progression is to track it from frame to frame, until the object leaves the area of interest or until the end of the video.

    This motion estimation and tracking of an object is a crucial and complex problem for applications such as real-time video surveillance, traffic monitoring and tracking organs in medical images.

    There are many factors that make object detection and tracking a challenging task, such as camera motion, low contrast between objects and their backgrounds, illumination variation, clutter and occlusion. This research project aims to develop an automated system capable of detecting objects, continuously estimating their motion and tracking them.


    R.C. Philip, S. Ram, X. Gao and J.J. Rodríguez, "A Comparison of Tracking Algorithm Performance for Objects in Wide Area Imagery," 2014 IEEE Southwest Symp. on Image Analysis and Interpretation, Apr. 2014, San Diego, CA, pp. 109-112.
    [PDF via IEEEXplore]

    Object tracking is currently one of the most active research areas in computer vision. In this paper we compare and analyze the performance of six recent object tracking algorithms on a raw, low resolution, unregistered, interlaced aerial video of multiple cars moving on a roadway. This dataset comprising 50 frames of video offers a wide variety of challenges related to imaging issues such as low resolution, unregistered frames, camera motion, and interlaced video, as well as object detection problems such as low contrast, background clutter, object occlusion and varying degrees of motion. We present the performance of these algorithms in terms of both overlap accuracy and the Euclidean distance of the center pixel returned by the tracking algorithm from the ground truth.
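The two performance measures described above can be written down directly (a sketch assuming axis-aligned bounding boxes in (x1, y1, x2, y2) form):

```python
import numpy as np

def overlap_accuracy(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

def center_error(box_a, box_b):
    """Euclidean distance between box centers (localization error)."""
    ca = np.array([(box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0])
    cb = np.array([(box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0])
    return float(np.linalg.norm(ca - cb))
```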
    @INPROCEEDINGS{6806041,
    author={Philip, R.C. and Ram, S. and Xin Gao and Rodriguez, J.J.},
    booktitle={Image Analysis and Interpretation (SSIAI), 2014 IEEE Southwest Symposium on},
    title={A comparison of tracking algorithm performance for objects in wide area imagery},
    year={2014},
    month={April},
    pages={109-112},
    keywords={computer vision;image registration;image resolution;object tracking;road vehicles;traffic engineering computing;video signal processing;Euclidean distance;computer vision;interlaced aerial video;low resolution video;multiple cars;object tracking algorithm;roadway;unregistered video;wide area imagery;Accuracy;Clutter;Computer vision;Image resolution;Object tracking;Robustness;Object tracking;localization error;overlap area;partial occlusion;wide area imagery},
    doi={10.1109/SSIAI.2014.6806041},}

  13. Segmentation of Breast Cancer from High-Resolution Immunohistochemistry.
  14. Accurate segmentation of breast cancer tissue is imperative to help the oncologist study the effects of the tumor on the blood vasculature surrounding it, as well as to determine whether angiogenesis or apoptosis is occurring.

    These are high-resolution (~0.5 microns x 0.5 microns) color images of immunohistochemistry (IHC) data of breast cancer tissue stained with a coloring agent. The original images are at 20X magnification, corresponding to image dimensions of ~16000 microns x 27000 microns (or 32000 x 54000 pixels), while the images displayed here are at 1X magnification.

    The goal of this project was to develop an automated system to segment these regions of interest accurately. Since the coloring agent is retained by the regions of interest, there is a corresponding variation in color between these regions and the background. Principal component analysis using the Karhunen-Loève transform is performed first to find features that distinguish the regions of interest from the background. Luminance information is contained in the first principal component, while chrominance dominates the second and third principal components. The second principal component is chosen, since the third principal component is weak in energy.
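This transform step can be sketched in a few lines (an illustrative version of the KLT projection; the real pipeline operates on the full-resolution IHC images):

```python
import numpy as np

def second_principal_component(img_rgb):
    """Project each pixel's RGB vector onto the second principal component
    of the image's color distribution (the Karhunen-Loeve transform).
    The first component carries mostly luminance; the second carries the
    dominant chrominance contrast used to find the stained tissue."""
    pixels = img_rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)
    cov = pixels.T @ pixels / len(pixels)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc2 = eigvecs[:, -2]                     # second-largest component
    return (pixels @ pc2).reshape(img_rgb.shape[:2])
```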

    Accurate segmentation results were obtained by a region growing scheme, wherein seed pixels are generated by thresholding the second principal component. However, at 20X magnification, ~237,000 seed pixels were produced, and region growing from all of them was not practical. Some of these seeds are redundant, as they grow into the same region, while others are bad seeds that cause the region growing scheme to spill over the borders, reducing accuracy. We therefore developed a novel seed pruning scheme using a multi-resolution pyramid to prune the ~237,000 seeds to a mere ~346 good seed pixels, which accurately grow into the regions of interest.
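The pruning idea can be sketched with a single coarse pyramid level (illustration only; the actual scheme also rejects bad seeds near region borders, which this sketch does not model):

```python
import numpy as np

def prune_seeds(seed_mask, cell=8):
    """Coarse-grid seed pruning: keep at most one seed per cell-by-cell block
    of a low-resolution pyramid level, discarding redundant seeds that would
    grow into the same region."""
    ys, xs = np.nonzero(seed_mask)
    kept, seen = [], set()
    for y, x in zip(ys, xs):
        block = (y // cell, x // cell)
        if block not in seen:            # first seed in this coarse block wins
            seen.add(block)
            kept.append((int(y), int(x)))
    return kept
```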


    R.C. Philip, J.J. Rodríguez, and R.J. Gillies, "Seed Pruning using a Multi-Resolution Approach for Automated Segmentation of Breast Cancer Tissue," 2008 IEEE Intl. Conf. on Image Processing, Oct. 2008, San Diego, CA, pp. 1436-1439.
    [PDF via IEEEXplore]

    This paper proposes a new automated system for segmentation of breast cancer tissue. The segmentation algorithm involves a principal component region growing scheme for high-resolution images. The number of candidate seed pixels is extremely large due to the high resolution. The main focus of this paper is to present a multi-resolution scheme for accurate selection of seed pixels to be presented as inputs to the region growing segmentation algorithm. The system is tested for accuracy, and the efficiency is measured in terms of percentage reduction in number of seed pixels, as well as accuracy of the segmentation results.
    @INPROCEEDINGS{4712035,
    author={Philip, R.C. and Rodríguez, J.J. and Gillies, R.J.},
    booktitle={Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on},
    title={Seed pruning using a multi-resolution approach for automated segmentation of breast cancer tissue},
    year={2008},
    month={Oct},
    pages={1436-1439},
    keywords={biological tissues;cancer;image resolution;image segmentation;medical image processing;principal component analysis;automated segmentation;breast cancer tissue segmentation;multiresolution approach;principal component region growing scheme;region growing segmentation algorithm;seed pruning;Breast cancer;KL transform;region growing;seed pixels},
    doi={10.1109/ICIP.2008.4712035},
    ISSN={1522-4880},}
