Christopher J. Buswinka

PhD — Computational Auditory Neuroscience
Email: buswinka@gmail.com
Phone: 1-231-887-7090
GitHub: github.com/buswinka
ORCID: 0000-0002-3618-5356


About

I am a graduate of the Speech and Hearing Bioscience and Technology (SHBT) program at Harvard Medical School, where I worked in the lab of Artur Indzhykulian, developing deep learning software to accelerate image analysis for auditory neuroscientists. I received my undergraduate degree in Biomedical Engineering (with a minor in Electrical Engineering) from the University of Michigan, Ann Arbor, where I studied the biophysical interaction of cochlear implants with the ear under Dr. Bryan Pfingst.

My graduate work produced the Hair Cell Analysis Toolbox (HCAT), the first comprehensive and generalized deep learning tool for histological hair cell analysis in the hearing field, and SKOOTS, a novel skeleton-oriented instance segmentation method for mitochondria in volumetric electron microscopy datasets.


Education

2018–2025 PhD, Computational Auditory Neuroscience
Harvard University, Cambridge, MA
Principal Investigator: Artur Indzhykulian, MD, PhD
2014–2018 BSE, Biomedical Engineering (Minor in Electrical Engineering)
University of Michigan, College of Engineering, Ann Arbor, MI
Undergraduate Research Advisor: Bryan Pfingst, PhD

Technical Skills

Python (expert), MATLAB (proficient), Rust (proficient), Bayesian Modeling (PyMC3), Deep Learning (PyTorch, OpenAI Triton), 2D/3D Instance Segmentation, Computer Vision, Confocal Microscopy, Scanning Electron Microscopy


Projects

SKOOTS

SKOOTS (Skeleton-Oriented Object Segmentation) is the instance segmentation method I developed for my PhD thesis. Rather than predicting object boundaries directly, it learns a skeleton representation of each object and uses that skeleton to separate densely packed structures, a problem that trips up conventional approaches when objects are large, touching, or irregularly shaped. It was designed to quantify mitochondria in volumetric FIB-SEM cochlear datasets, was published in Advanced Science, and generalizes to arbitrary 2D and 3D biomedical images.
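The core idea can be illustrated with a toy 2D NumPy sketch: given one foreground mask shared by touching objects and one skeleton per object, each pixel is assigned to the instance of its nearest skeleton. The function name and the brute-force nearest-skeleton assignment below are illustrative assumptions, not the published SKOOTS pipeline:

```python
import numpy as np

def label_by_nearest_skeleton(mask, skeletons):
    """Assign each foreground pixel the id of its nearest skeleton.

    mask      : (H, W) bool array of foreground pixels
    skeletons : dict {instance_id: (N, 2) array of skeleton coordinates}
    """
    labels = np.zeros(mask.shape, dtype=np.int32)
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)          # (P, 2) foreground coordinates
    best = np.full(len(pts), np.inf)          # best squared distance so far
    for inst_id, skel in skeletons.items():
        # squared distance from every foreground pixel to its closest skeleton point
        d = ((pts[:, None, :] - skel[None, :, :]) ** 2).sum(-1).min(axis=1)
        closer = d < best
        best[closer] = d[closer]
        labels[ys[closer], xs[closer]] = inst_id
    return labels

# Two touching blobs sharing a single foreground mask
mask = np.zeros((5, 8), dtype=bool)
mask[1:4, 1:7] = True
skeletons = {1: np.array([[2, 2]]), 2: np.array([[2, 5]])}
labels = label_by_nearest_skeleton(mask, skeletons)
```

Because the separator is the skeleton rather than a boundary, the assignment stays well defined even where the two objects touch.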

HCAT — Hair Cell Analysis Toolbox

HCAT is the first comprehensive, fully automated deep-learning pipeline for detecting and classifying cochlear hair cells across whole-cochlea fluorescence microscopy images. It ships with a GUI that handles multi-piece dissections automatically, making quantitative cochleograms accessible to labs without programming expertise. The underlying dataset (collected with collaborators across many institutions) is open-sourced separately and described in a Scientific Data paper.

ryomen

A lightweight, zero-dependency Python utility for tiling arbitrarily large biological microscopy images. Modern microscopes produce images hundreds of gigabytes in size that cannot fit into GPU memory whole; ryomen slices them into overlapping crops, lets you apply any function to each crop, and stitches the results back together, all through a clean three-value iterator. It works with NumPy, PyTorch, Zarr, or any array-like type.
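The crop-apply-stitch pattern can be sketched in plain NumPy. The iterator below is an illustrative stand-in, not ryomen's actual API; its name, parameters, and slicing details are assumptions:

```python
import numpy as np

def iter_crops(image, crop, overlap):
    """Yield (tile, source, destination) for each overlapping crop of `image`.

    tile        : the crop, read with `overlap` pixels of extra context per side
    source      : slices selecting the tile's valid interior (tile-relative)
    destination : slices placing that interior into the output array
    """
    h, w = image.shape
    step = crop - 2 * overlap                 # interior each tile contributes
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = max(y - overlap, 0), max(x - overlap, 0)
            y1, x1 = min(y + step + overlap, h), min(x + step + overlap, w)
            tile = image[y0:y1, x0:x1]
            source = (slice(y - y0, min(y + step, h) - y0),
                      slice(x - x0, min(x + step, w) - x0))
            destination = (slice(y, min(y + step, h)),
                           slice(x, min(x + step, w)))
            yield tile, source, destination

# Toy stand-in for a huge microscopy image and an expensive per-crop model
image = np.arange(36, dtype=float).reshape(6, 6)
out = np.zeros_like(image)
for tile, source, destination in iter_crops(image, crop=4, overlap=1):
    out[destination] = (tile * 2)[source]     # "model" = multiply by two
```

The overlap gives each crop context from its neighbors, while only the non-overlapping interior is written back, so tile seams never appear in the stitched result.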

PIEFS

PIEFS (Python Instance Eikonal Function Solver) solves the eikonal equation over 2D and 3D instance segmentation masks using the Fast Iterative Method. The core kernel is written in OpenAI Triton for memory-efficient GPU execution. It also computes gradients of the eikonal field, which are useful as an intermediate representation in flow-based segmentation pipelines like Omnipose.
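What the solver computes can be illustrated with a minimal Jacobi-style Godunov iteration for the unit-speed eikonal equation |∇u| = 1 on a dense 2D grid. This is a sketch of the underlying math in NumPy, assuming a single seed region; it is not the Fast Iterative Method scheduling or the Triton kernel:

```python
import numpy as np

def eikonal_unit_speed(seed_mask, h=1.0, n_iter=200):
    """Iterate the Godunov upwind update for |grad u| = 1 with u = 0 on the seed.

    Values start at infinity and relax monotonically down to the geodesic
    distance from the seed, the kind of field PIEFS computes per instance.
    """
    u = np.where(seed_mask, 0.0, np.inf)
    for _ in range(n_iter):
        with np.errstate(invalid="ignore"):   # inf - inf in untouched cells
            pad = np.pad(u, 1, constant_values=np.inf)
            a = np.minimum(pad[:-2, 1:-1], pad[2:, 1:-1])   # vertical upwind min
            b = np.minimum(pad[1:-1, :-2], pad[1:-1, 2:])   # horizontal upwind min
            lo = np.minimum(a, b)
            one_sided = lo + h                               # wave from one axis only
            disc = 2.0 * h * h - (a - b) ** 2
            two_sided = np.where(
                disc > 0,
                (a + b + np.sqrt(np.maximum(disc, 0.0))) / 2.0,
                np.inf,
            )
            new = np.where(np.abs(a - b) >= h, one_sided, two_sided)
        u = np.where(seed_mask, 0.0, np.minimum(u, new))
    return u

seed = np.zeros((5, 5), dtype=bool)
seed[0, 0] = True                  # point source in the corner
u = eikonal_unit_speed(seed)
```

Restricting the same update to the voxels of one instance mask yields a per-instance distance field, and its spatial gradient is the vector field used by flow-based pipelines.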

bism — Biomedical Image Segmentation Models

A training and evaluation framework for 3D biomedical instance segmentation models, roughly what timm is for 2D image classification. It supports UNet, UNeXT, Recurrent UNet, UNet++, and CellposeNet in both 2D and 3D variants, all configurable via YAML, and is intended to make reproducing and benchmarking volumetric segmentation experiments straightforward.


Publications

  1. Buswinka, C. J., Osgood, R. T., Nitta, H., & Indzhykulian, A. A. (2026). SKOOTS: Skeleton-oriented object segmentation for mitochondria in high-resolution cochlear EM datasets. Advanced Science, e17738.
  2. Buswinka, C. J., Rosenberg, D. B., Simikyan, R. G., Osgood, R. T., Fernandez, K., Nitta, H., … Indzhykulian, A. A. (2024). Large-scale annotated dataset for cochlear hair cell detection and classification. Scientific Data, 11(1), 416. doi:10.1038/s41597-024-03218-y
  3. Buswinka, C. J., Osgood, R. T., Simikyan, R. G., Rosenberg, D. B., & Indzhykulian, A. A. (2023). The hair cell analysis toolbox is a precise and fully automated pipeline for whole cochlea hair cell quantification. PLoS Biology, 21(3), e3002041.
  4. Buswinka, C. J., Colesa, D. J., Swiderski, D. L., Raphael, Y., & Pfingst, B. E. (2022). Components of impedance in a cochlear implant animal model with TGFβ1-accelerated fibrosis. Hearing Research, 426, 108638.
  5. Schvartz-Leyzac, K. C., Holden, T. A., Zwolan, T. A., Arts, H. A., Firszt, J. B., Buswinka, C. J., & Pfingst, B. E. (2020). Effects of electrode location on estimates of neural health in humans with cochlear implants. Journal of the Association for Research in Otolaryngology, 21(3), 259–275.
  6. Schvartz-Leyzac, K. C., Colesa, D. J., Buswinka, C. J., Rabah, A. M., Swiderski, D. L., Raphael, Y., & Pfingst, B. E. (2020). How electrically evoked compound action potentials in chronically implanted guinea pigs relate to auditory nerve health and electrode impedance. The Journal of the Acoustical Society of America, 148(6), 3900–3912.
  7. Schvartz-Leyzac, K. C., Colesa, D. J., Buswinka, C. J., Swiderski, D. L., Raphael, Y., & Pfingst, B. E. (2019). Changes over time in the electrically evoked compound action potential (ECAP) interphase gap (IPG) effect following cochlear implantation in guinea pigs. Hearing Research, 383, 107809.

Honors & Awards

2024 Helen Carr Peake Research Prize — Research Laboratory of Electronics, Eaton Peabody Laboratories, MIT / Mass Eye and Ear
2022 Certificate of Distinction in Teaching — Bok Center for Teaching and Learning, Harvard University
2017 Travel Award — Conference on Implantable Auditory Prostheses

Teaching

Fall 2021 Engineering the Acoustical World (GENED 1080), Harvard College
Undergraduate Teaching Fellow, 20 hrs/week
Fall 2020 Inner Ear Biology (SHBT 201), Harvard Medical School
Graduate Teaching Fellow, 20 hrs/week
Spring 2020 Python for Research, Harvard Medical School
Graduate Teaching Fellow, 5 hrs/week

Mentored Students

2021–2022 Ella Wesson, Undergraduate, Harvard College — ML instance detection datasets for murine auditory hair cells
2021–2024 Emily Nguyen, High School, Fenway High School — High-quality segmentations of cochlear outer hair cell stereocilia
2022 Hidetomi Nitta, Undergraduate, Northeastern University — 3D mitochondria segmentation pipeline and dataset
2022 Rubina Simikyan, Undergraduate, Emmanuel College — ML instance detection datasets for human auditory hair cells
2022–2023 Mona Jawad, Undergraduate, University of Illinois — Hair cell stereocilia segmentation pipeline; international conference presentations
2023–2024 Sophia Wu, High School — ML-ready dataset of hair cell stereocilia in electron microscopy
2023–2024 Alex Wang, Undergraduate, Western University — Web-based ML platform for hair bundle analysis of scanning electron micrographs
2023–2024 Kruthi Gundu, High School — Stereocilia dataset annotation
2024 Medhita Sinha, High School — Stereocilia dataset annotation
2024 Navya Gobinathan, High School — Stereocilia dataset annotation


Service

2025 Manuscript Reviewer, Scientific Reports (Springer Nature)
2018–2025 Interview Planning Committee, Speech and Hearing Bioscience and Technology Graduate Program, Harvard Medical School
2018–2021 Social Committee, Eaton Peabody Lab, Massachusetts Eye and Ear / Harvard Medical School


Last updated April 2026