PhD — Computational Auditory Neuroscience
Email: buswinka@gmail.com
Phone: 1-231-887-7090
GitHub: github.com/buswinka
ORCID: 0000-0002-3618-5356
I am a graduate of the Speech and Hearing Biosciences and Technology (SHBT) program at Harvard Medical School, where I worked in the lab of Artur Indzhykulian developing deep learning software to accelerate image analysis for auditory neuroscientists. I received my undergraduate degree in Biomedical Engineering (with a minor in Electrical Engineering) from the University of Michigan, Ann Arbor, where I studied the biophysical interaction of cochlear implants with the ear under Dr. Bryan Pfingst.
My graduate work produced the Hair Cell Analysis Toolbox (HCAT), the first comprehensive and generalized deep learning tool for histological hair cell analysis in the hearing field, and SKOOTS, a novel skeleton-oriented instance segmentation method for mitochondria in volumetric electron microscopy datasets.
| 2018–2025 | PhD, Computational Auditory Neuroscience, Harvard University, Cambridge, MA. Principal Investigator: Artur Indzhykulian, MD, PhD |
| 2014–2018 | BSE, Biomedical Engineering (Minor in Electrical Engineering), University of Michigan College of Engineering, Ann Arbor, MI. Undergraduate Research Advisor: Bryan Pfingst, PhD |
Python (very proficient), MATLAB (proficient), Rust (proficient), Bayesian Modeling (PyMC3), Deep Learning (PyTorch, OpenAI Triton), 2D/3D Instance Segmentation, Computer Vision, Confocal Microscopy, Scanning Electron Microscopy
SKOOTS (SKeletOn ObjecT Segmentation) is the instance segmentation method developed for my PhD thesis. Rather than predicting object boundaries directly, it learns a skeleton representation of each object and uses that representation to separate densely packed structures, a problem that defeats conventional approaches when objects are large, touching, or irregularly shaped. It was designed specifically for quantifying mitochondria in volumetric FIB-SEM cochlear datasets and published in Advanced Science, and it works on arbitrary 2D and 3D biomedical images.
HCAT is the first comprehensive, fully automated deep-learning pipeline for detecting and classifying cochlear hair cells across whole-cochlea fluorescence microscopy images. It ships with a GUI that handles multi-piece dissections automatically, making quantitative cochleograms accessible to labs without programming expertise. The underlying dataset (collected with collaborators across many institutions) is open-sourced separately and described in a Scientific Data paper.
Ryomen is a lightweight, zero-dependency Python utility for tiling arbitrarily large biological microscopy images. Modern microscopes produce images hundreds of gigabytes in size that cannot fit into GPU memory whole; ryomen slices them into overlapping crops, lets you apply any function to each crop, and stitches the results back together, all through a clean three-value iterator. It works with NumPy, PyTorch, Zarr, or any array type.
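To make the three-value iterator idea concrete, here is a minimal sketch of the overlap-crop-stitch pattern in plain NumPy. This is not ryomen's actual API; the function name `tile_2d` and the `(crop, source, destination)` naming are illustrative assumptions only.

```python
# Illustrative sketch of the overlap-crop-stitch pattern (NOT ryomen's API).
import numpy as np

def tile_2d(image, crop_size, overlap):
    """Yield (crop, source_slice, destination_slice) triples.

    source_slice indexes the full image; destination_slice indexes the crop,
    trimming the overlap so stitched results are written exactly once.
    """
    step = tuple(c - 2 * o for c, o in zip(crop_size, overlap))
    h, w = image.shape[:2]
    for y in range(0, h, step[0]):
        for x in range(0, w, step[1]):
            # crop bounds, padded by the overlap and clipped to the image
            y0, x0 = max(y - overlap[0], 0), max(x - overlap[1], 0)
            y1, x1 = min(y0 + crop_size[0], h), min(x0 + crop_size[1], w)
            crop = image[y0:y1, x0:x1]
            # region of the full image this crop is responsible for
            dy1, dx1 = min(y + step[0], h), min(x + step[1], w)
            source = (slice(y, dy1), slice(x, dx1))
            destination = (slice(y - y0, dy1 - y0), slice(x - x0, dx1 - x0))
            yield crop, source, destination

# Usage: run any function on each crop, then stitch into an output array.
image = np.random.rand(300, 300).astype(np.float32)
output = np.zeros_like(image)
for crop, source, destination in tile_2d(image, crop_size=(128, 128), overlap=(16, 16)):
    processed = crop * 2.0  # stand-in for a GPU model call on one crop
    output[source] = processed[destination]
```

The key design point, which ryomen handles for you, is that each crop carries extra context (the overlap) into the model call, while the destination slice discards that context at write time so adjacent crops never double-write.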
PIEFS (Python Instance Eikonal Function Solver) solves the eikonal equation over 2D and 3D instance segmentation masks using the Fast Iterative Method. The core kernel is written in OpenAI Triton for memory-efficient GPU execution. It also computes gradients of the eikonal field, which are useful as an intermediate representation in flow-based segmentation pipelines like Omnipose.
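For reference, the per-instance boundary-value problem takes the standard eikonal form (notation is mine, not necessarily the package's):

```latex
% Within each instance mask \Omega_i, with speed f(\mathbf{x}) > 0:
\|\nabla u(\mathbf{x})\| = \frac{1}{f(\mathbf{x})}, \quad \mathbf{x} \in \Omega_i,
\qquad u(\mathbf{x}) = 0 \ \text{on} \ \partial\Omega_i.
% With unit speed f \equiv 1, u is the geodesic distance to the instance
% boundary, and its gradient field \nabla u supplies the per-pixel flow
% vectors used by flow-based segmentation methods such as Omnipose.
```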
A training and evaluation framework for 3D biomedical instance segmentation models, roughly what timm is for 2D classification. It supports UNet, UNeXT, Recurrent UNet, UNet++, and CellposeNet in both 2D and 3D variants, all configurable via YAML, and is intended to make reproducing and benchmarking volumetric segmentation experiments straightforward.
| 2024 | Helen Carr Peake Research Prize — Research Laboratory of Electronics, Eaton Peabody Laboratories, MIT / Mass Eye and Ear |
| 2022 | Certificate of Distinction in Teaching — Bok Center for Teaching and Learning, Harvard University |
| 2017 | Travel Award — Conference on Implantable Auditory Prostheses |
| Fall 2021 | Engineering the Acoustical World (GENED 1080), Harvard College. Undergraduate Teaching Fellow, 20 hrs/week |
| Fall 2020 | Inner Ear Biology (SHBT 201), Harvard Medical School. Graduate Teaching Fellow, 20 hrs/week |
| Spring 2020 | Python for Research, Harvard Medical School. Graduate Teaching Fellow, 5 hrs/week |
| 2021–2022 | Ella Wesson, Undergraduate, Harvard College — ML instance detection datasets for murine auditory hair cells |
| 2021–2024 | Emily Nguyen, High School, Fenway High School — High-quality segmentations of cochlear outer hair cell stereocilia |
| 2022 | Hidetomi Nitta, Undergraduate, Northeastern University — 3D mitochondria segmentation pipeline and dataset |
| 2022 | Rubina Simikyan, Undergraduate, Emmanuel College — ML instance detection datasets for human auditory hair cells |
| 2022–2023 | Mona Jawad, Undergraduate, University of Illinois — Hair cell stereocilia segmentation pipeline; international conference presentations |
| 2023–2024 | Sophia Wu, High School — ML-ready dataset of hair cell stereocilia in electron microscopy |
| 2023–2024 | Alex Wang, Undergraduate, Western University — Web-based ML platform for hair bundle analysis of scanning electron micrographs |
| 2023–2024 | Kruthi Gundu, High School — Stereocilia dataset annotation |
| 2024 | Medhita Sinha, High School — Stereocilia dataset annotation |
| 2024 | Navya Gobinathan, High School — Stereocilia dataset annotation |
| 2025 | Manuscript Reviewer, Springer Nature / Scientific Reports |
| 2018–2025 | Interview Planning Committee, Speech and Hearing Biosciences and Technology Graduate Program, Harvard Medical School |
| 2018–2021 | Social Committee, Eaton Peabody Lab, Massachusetts Eye and Ear / Harvard Medical School |
Last updated April 2026