More than twenty years ago I started building computer vision algorithms to detect lung cancer in CT scans. That work pulled me deep into AI/ML engineering, medical imaging, and eventually into leading teams that ship production software in one of the hardest domains there is: healthcare, life sciences, and medtech.
Along the way I've architected cloud-native ML platforms, built RAG pipelines and multi-agent workflows, led GPU-accelerated inference deployments on AWS, and taken products through FDA 510(k) clearance. Before moving into enterprise work, I spent nearly two decades as a Principal Investigator leading multidisciplinary R&D teams across computer vision, medical imaging, and clinical AI — work that taught me how to bridge the gap between research ambition and production reality.
What shipping production AI in regulated, safety-critical environments teaches you is that the hard part is never the model. It's the infrastructure, the evaluation rigor, the governance, the observability, and the ability to work across engineering, clinical, and business teams simultaneously. Most AI initiatives stall because they treat these as afterthoughts.
That's exactly the problem I'm now focused on at enterprise scale. I work with large organizations to move AI out of the experiment phase and into their actual operating model. Designing agent architectures with retrieval, orchestration, routing, and observability built in from day one. Building abstraction layers across AI providers so teams aren't locked into a single vendor. Rethinking workflows from first principles around what AI makes possible, rather than wrapping AI around processes that were designed for a pre-AI world.
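One way to picture such a provider abstraction layer is as a thin routing interface that application code talks to instead of any vendor SDK. The sketch below is illustrative only: the class names and the stand-in `EchoProvider` are hypothetical, chosen so the example runs without any vendor dependency.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so application code never imports a vendor SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in provider so this sketch runs without any real vendor SDK."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class ProviderRouter:
    """Routes requests to a named provider; swapping vendors is a registry change."""

    def __init__(self) -> None:
        self._providers: dict[str, LLMProvider] = {}

    def register(self, name: str, provider: LLMProvider) -> None:
        self._providers[name] = provider

    def complete(self, provider_name: str, prompt: str) -> str:
        return self._providers[provider_name].complete(prompt)


router = ProviderRouter()
router.register("default", EchoProvider())
print(router.complete("default", "triage this referral"))  # echo: triage this referral
```

Swapping vendors then means registering a different `LLMProvider` implementation; none of the calling code changes.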
The regulatory stack runs deep: HIPAA, FDA 21 CFR Part 11, GxP, the EU AI Act, IEC 62304, ISO 14971. A single audit failure can halt production or delay a drug launch by years. Compliance isn't a checkbox — it's the operating environment.
Unlike in e-commerce, an operations failure in life sciences can mean compromised drug quality, delayed trials, or direct patient harm. Models that degrade silently in production aren't a data science problem — they're a patient safety problem.
EHR, PACS, LIMS, QMS, EDMS, MES — often multiple instances per site with inconsistent data architectures. Decades of acquisitions and site-by-site decisions mean no two environments look alike.
The infrastructure, the evaluation rigor, the governance, the observability, and the ability to work across engineering, clinical, and business teams simultaneously: this is what separates production AI from demos.
Clinical AI deployment pipelines, DICOM/FHIR integration, continuous model performance monitoring, drift detection, and regulatory-compliant model update workflows. IEC 62304 and FDA PCCP aligned.
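One standard building block for the drift detection mentioned above is the Population Stability Index (PSI), which compares a production score distribution against a validation baseline. This is a minimal, dependency-free sketch, not the monitoring stack itself; the common rule of thumb that PSI above 0.2 signals meaningful drift is an industry convention, not a regulatory threshold.

```python
import math


def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a production score sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log term stays finite.
        return [(c + 1e-6) / (len(values) + bins * 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


baseline = [i / 100 for i in range(100)]        # validation-time scores
production = [0.5 + i / 200 for i in range(100)]  # scores shifted upward
drifted = psi(baseline, production) > 0.2         # flags drift here
```

In a real pipeline this check would run on a schedule against rolling production windows, with alerts feeding the model update workflow.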
Extending GAMP 5 validation frameworks for AI systems. Risk-based classification aligned to EU AI Act and FDA guidance. Change management protocols for when a prompt update requires revalidation vs. documentation-only.
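The revalidation-vs-documentation decision can be encoded as an explicit, auditable rule rather than a judgment call buried in a meeting. The sketch below is purely illustrative: the change categories and dispositions are hypothetical examples, not a statement of GAMP 5 or FDA policy.

```python
# Illustrative only: these categories and dispositions are hypothetical,
# not GAMP 5 or FDA policy.
HIGH_IMPACT_CHANGES = {"model_weights", "training_data", "decision_threshold"}
LOW_IMPACT_CHANGES = {"prompt_wording", "ui_copy", "logging"}


def change_disposition(change_type: str, touches_clinical_output: bool) -> str:
    """Classify a proposed change into a validation disposition."""
    if change_type in HIGH_IMPACT_CHANGES or touches_clinical_output:
        return "revalidation"
    if change_type in LOW_IMPACT_CHANGES:
        return "documentation-only"
    return "risk-assessment"  # anything unrecognized defaults to human review
```

The useful property is the default: an unknown change type never silently skips review.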
Regulatory knowledge bases indexed on submission history. RAG pipelines for clinical decision support with source citations. Multi-agent workflows for clinical trial operations, pharmacovigilance, and regulatory intelligence.
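The core of citation-bearing RAG is that retrieval returns the source identifier alongside the text, so every generated answer can point at its evidence. A real pipeline would use dense embeddings and a vector store; this stdlib-only sketch substitutes bag-of-words cosine similarity, and the corpus keys are hypothetical.

```python
import math
from collections import Counter


def _vec(text: str) -> Counter:
    """Toy bag-of-words vector; a production system would use dense embeddings."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve_with_citation(query: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return (best-matching chunk, source id) so answers can cite their evidence."""
    qv = _vec(query)
    source, chunk = max(corpus.items(), key=lambda kv: _cosine(qv, _vec(kv[1])))
    return chunk, source


corpus = {
    "510k-guidance": "predicate device comparison for 510k submission",
    "ae-reporting": "adverse event reporting timelines for pharmacovigilance",
}
chunk, src = retrieve_with_citation("adverse event timelines", corpus)  # src: "ae-reporting"
```

The retrieved `src` travels with the chunk into the prompt, so the model's answer carries a verifiable citation.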
GPU-accelerated inference on AWS, containerized deployment pipelines, platform-agnostic architecture. Abstraction layers across AI providers so teams aren't locked into a single vendor's roadmap.
AI-assisted requirements generation, automated validation documentation, compliance pre-screening on every PR. Moving from periodic audit scrambles to continuously maintained compliance posture.
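A per-PR compliance pre-screen can be as simple as a CI step that blocks merges touching validated code paths unless the PR carries a risk-assessment note. The paths and the required tag below are hypothetical, shown only to illustrate the shape of such a gate.

```python
# Hypothetical rules: the path prefixes and required tag are illustrative,
# not a real compliance standard.
VALIDATED_PATHS = ("src/clinical/", "src/dose_calc/")


def prescreen(changed_files: list[str], pr_description: str) -> list[str]:
    """Return blocking findings; an empty list means the PR may merge."""
    findings = []
    touches_validated = any(f.startswith(VALIDATED_PATHS) for f in changed_files)
    if touches_validated and "Risk-Assessment:" not in pr_description:
        findings.append("validated code changed without a Risk-Assessment note")
    return findings
```

Run on every PR, this turns compliance evidence into something accumulated continuously rather than reconstructed at audit time.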
Direct experience with FDA 510(k) clearance. RAG-powered regulatory knowledge systems, submission document generation, and regulatory intelligence monitoring across FDA, EU MDR, and ICH frameworks.
Autonomous AI-guided ultrasound system for neonatal congenital heart disease screening. Integrates mechatronic robotics, embedded firmware (ESP32), mobile diagnostics, and deep learning–based image interpretation. A compound agentic system with autonomous decision-making, sensor integration, and real-time inference.
Retrieval-augmented generation system for X-ray interpretation using Qwen2-VL multimodal vision-language model. Full agentic pipeline with PubMedBERT embeddings, ChromaDB semantic search, DocLing OCR parsing, LangChain/LangGraph orchestration, and GPU-enabled Docker deployment with Prometheus/Grafana observability.
Scalable SaaS platform for medical imaging visualization, AI model deployment, and remote inference. Deployed cloud-native infrastructure on AWS with Docker containerization and Terraform infrastructure-as-code.
Integrated open-source radiology viewer with SMART-on-FHIR patient data and a modular Python AI backend for multimodal clinical analysis. Plug-in APIs for deep learning models, real-time VTK.js overlays, and secure medical LLM calls with EHR and imaging context.
Browser-native pipeline integrating NVIDIA Clara open models (Reason, Segment, Generate) into VolView via REST/WebSocket AI services. Real-time GPU-accelerated 3D AI model inference for interactive conversational image exploration, organ segmentation, and synthetic image generation.
AI platform for synthesizing orthopedic X-ray images (DRRs from CT scans) for surgical planning, AI data augmentation, and pathology research. End-to-end MLOps pipeline with MONAI/PyTorch deployed on AWS.
Advanced virtual surgical planning tool for nasal airway obstruction. Integrated 3D image processing, cloud-based model deployment, airflow simulation, and user-centric customization. Earned a 3.48/5 usability rating from ENT surgeons.
Open-source Image-Guided Surgery Toolkit adopted by research labs and medical device companies worldwide for prototyping and commercializing image-guided surgical applications. Co-authored "IGSTK: The Book."
AI-powered surgical planning system (iCSPlan) using 3D statistical shape models to optimize skull correction in children. Achieved 40–50% reduction in cranial malformations (p<0.001) across surgeries.
Suite of virtual surgical trainers for renal biopsy (>4.4/5 effectiveness), laparoscopic surgery (FLS credentialing), neurosurgery, and orthognathic surgery. Open-source simulation frameworks (iMSTK, Pulse Physiology Engine).
Developing a mechatronic ultrasound device with AI algorithms to autonomously guide scanning, interpret images, and diagnose congenital heart disease in newborns.
End-to-end precision surgical imaging platform for autonomous characterization of surgical margins and 3D rendering of critical anatomical structures during surgery.
Developing methodology and open-source software for active learning, cross-institute ML model generalization, and scalable cloud-based data labeling platforms.
Designing open-source software templates to train and advance surgeon performance for improved patient safety and healthcare outcomes.
Quantitative imaging technology using non-invasive low-radiation X-ray imaging to assess respiratory disease risk in premature babies.
Open-source platform to generate and share high-quality anatomical models with reduced expert labor and community-driven improvement over time.
Image-based morphometric analysis using optimal transport methods to discover regional tissue changes without fine-grained segmentations.
Virtual simulator achieving >4.4/5 effectiveness ratings from clinical experts for improving procedural skill competence in real-time ultrasound-guided renal biopsy.
Showing 8 of 20+ grants. Additional completed projects include neurosurgery simulation, PET/CT calibration, craniosynostosis planning, laparoscopic surgery training, orthognathic surgery guidance, and more.
Adjunct Assistant Professor, Old Dominion University (2024–) — Guest lectures on Deep Learning for Medical Image Analysis (MSIM/BME 462/562, MSIM 762/862)
Course Instructor, NC A&T State University (2016) — BMEN 311: Biomedical Imaging and Devices
MICCAI Systems & Architectures for CAI Workshop (2009–2013, 5 editions) · MICCAI Medical Device Software Tutorial (2023) · NCI-ISBI Segmentation Challenge (2013) · CARS Open-Source Workshops (2008–2009)
On X-ray Genius: generating synthetic X-rays (DRRs) from CT scans for surgical planning, data augmentation, and AI training.
Connecting cloud-native DICOM storage to browser-based medical image visualization.
Data pipeline orchestration for population-scale osteoarthritis imaging studies.
On building complete medical software products — from customization to regulatory compliance.
Integrating VTK visualization capabilities with NVIDIA's real-time AI sensor processing platform.
Using NVIDIA's Medical Open Network for AI to build production segmentation pipelines.
I'm always interested in conversations about AI engineering, agentic systems, and operationalizing technology at scale.