Anil Kumar Vadathya
I am a Research Scientist at the Neal Cancer Center, Houston Methodist Research Institute, where I work with Dr. Jennifer Cullen's group deploying AI models for cancer epidemiology. Prior to this, I worked for six years as a research engineer in the Department of ECE at Rice University, with Prof. Ashok Veeraraghavan's computational imaging group and Dr. Teresia O'Connor's group at Baylor College of Medicine, where I was a lead engineer on NIH grants (R01/P01) developing, testing, and deploying state-of-the-art computer vision algorithms to objectively measure children's TV and mobile device use.
Before joining Rice, I completed my master's in Electrical Engineering with Dr. Kaushik Mitra at IIT Madras, where I worked on generative models for image restoration in computational imaging frameworks. My master's thesis work won the Qualcomm Innovation Fellowship for 2016-2017. I received my B.Tech in Electronics and Communications from RGUKT, Basar, India, in 2015.
email /
resume /
cv /
LinkedIn /
GitHub /
Google Scholar
News:
- (Nov 18, 2024) Joining Houston Methodist as Research Scientist (AI/ML)
- (Jul 05, 2024) FLASH-TV validation paper accepted at Scientific Reports (in press)
- (Jan 04, 2024) FLASH-TV technical paper accepted at the MTAP journal
- (Oct 23, 2023) FLASH-TV live demo at Children and Screens 2023
Research
Broadly, I'm interested in computer vision, machine learning, and generative models, along with their applications. I love building state-of-the-art AI products for challenging problems.
FLASH-TV: a machine learning pipeline to passively measure children's TV viewing: validation studies of the system
Anil Kumar Vadathya, Tom Baranowski, Teresia M O'Connor, Alicia Beltran, Salma M Musaad, Uzair Alam, Alex Ho, Jason A Mendoza, Sheryl O Hughes, Ashok Veeraraghavan
In press at Scientific Reports, 2024
code / project
We validated a camera-based, privacy-preserving technology to track TV viewing in real time in participants' homes. This technology is now being deployed in a 5-year NIH study with 200 children to measure the impact of screen time on their health.
Development of family level assessment of screen use in the home for TV (FLASH-TV)
Anil Kumar Vadathya, Tom Baranowski, Teresia M O'Connor, Alicia Beltran, Salma M Musaad, Oriana Perez, Jason A Mendoza, Sheryl O Hughes, Ashok Veeraraghavan
Multimedia Tools and Applications, 2023
code / paper
Technical report on FLASH-TV, an objective tool to measure TV time.
The Family Level Assessment of Screen Use-Mobile Approach: Development of an Approach to Measure Children's Mobile Device Use
Oriana Perez, Anil Kumar Vadathya, Salma Musaad, Alicia Beltran, R Matthew Barnett, Tom Baranowski, Sheryl O Hughes, Jason A Mendoza, Ashutosh Sabharwal, Ashok Veeraraghavan, Teresia O'Connor
JMIR Formative Research, 2022
code / paper
An Objective System for Quantitative Assessment of Television Viewing Among Children (Family Level Assessment of Screen Use in the Home-Television): System Development Study
Anil Kumar Vadathya, Salma Musaad, Alicia Beltran, Oriana Perez, Leo Meister, Tom Baranowski, Sheryl O Hughes, Jason A Mendoza, Ashutosh Sabharwal, Ashok Veeraraghavan, Teresia O'Connor
JMIR Pediatrics and Parenting, 2022
code / paper / project
Technical report on FLASH-TV alpha and beta tests.
A unified learning-based framework for light field reconstruction from coded projections
Anil Kumar Vadathya, Sharath Girish, Kaushik Mitra
IEEE Trans. on Computational Imaging, 2018
poster / code / arxiv
We propose a learning algorithm for reconstructing a light field from coded projections captured by a conventional camera.
Solving Inverse Computational Imaging Problems using Deep Pixel-level Prior
Akshat Dave, Anil Kumar V., Ramana Subramanyam, Rahul Baburajan, Kaushik Mitra
IEEE Trans. on Computational Imaging
Also accepted at the ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM)
arxiv / code
We use a deep autoregressive image prior, PixelCNN++, for solving inverse imaging problems. Since the autoregressive formulation explicitly models pixel-level dependencies, it reconstructs pixel-level details much better than existing state-of-the-art methods.
Learning Light Field Reconstruction from a Single Coded Image
Anil Kumar V., Saikiran C., Gautham R., Vijayalakshmi Kanchana, Kaushik Mitra
Asian Conference on Pattern Recognition, 2017
project page / poster
Using deep neural networks, we reconstruct a full sensor-resolution light field from a single coded image. Our approach involves depth-based rendering, where depth is learned in an unsupervised manner.
Compressive Image Recovery Using Recurrent Generative Model
Akshat Dave, Anil Kumar Vadathya, Kaushik Mitra
International Conference on Image Processing (ICIP), 2017
project page / github / poster
We use a deep recurrent image prior, RIDE, which models long-range dependencies in images very well. Using it for compressive image recovery, we show much better reconstructions, especially at lower measurement rates.
Denoising High Density Gene Expression in Whole Mouse Brain Images
Mayug Maniparambil, Anil Kumar V., Kannan U. V., Kaushik Mitra, Pavel Osten
Society for Neuroscience (SfN), 2017
poster
We use a deep autoencoder with an adversarial loss to denoise gene expression images and improve registration accuracy.
Butterfly Communication Strategies: A prospect for soft-computing techniques
Sowmya Charugundla, Anjumara Shaik, Chakravarthi Jada, Anil Kumar Vadathya
International Joint Conference on Neural Networks (IJCNN), 2014
BMO code for the three-peaks function.
We present mathematical models of the communication mechanisms employed by butterflies. This work was later extended into an optimization algorithm, Butterfly Mating Optimization (BMO; Jada et al., 2015), which we applied to image clustering.
ROBOG: Autonomously Navigating Outdoor Robo-Guide
Kranthi Kumar R., Irfan Feroz G. M., Chakravarthi Jada, Harish Y., Anil Kumar V.
International Conference on Swarm, Evolutionary, and Memetic Computing, 2014
Neural networks are used to learn navigation information; for outdoor navigation, we propose an image processing pipeline for road detection.