Hello! I am an EECS PhD Candidate at UC Berkeley advised by Laura Waller. I am interested in computational imaging, which is the joint design of imaging hardware and algorithms. My work is at the intersection of signal processing, optics, optimization, and machine learning.
I received my B.S. in Electrical Engineering from the State University of New York at Buffalo in May 2016. At Buffalo, I was involved in a nanosatellite mission and several other space-related research projects, which you can read more about on my old website here.
K. Monakhova*, K. Yanny*, N. Aggarwal, L. Waller
Project Page / Video / Code / Paper (Optica)
In this work, we propose a compact, inexpensive computational camera for snapshot hyperspectral imaging. Our system consists of a repeated spectral filter array placed directly on the image sensor and a diffuser placed close to the sensor. Each point in the world maps to a unique pseudorandom pattern on the spectral filter array, encoding multiplexed spatio-spectral information. A sparsity-constrained inverse problem solver then recovers the hyperspectral volume with good spatio-spectral resolution. Because it is built on a spectral filter array, our hyperspectral imaging framework is flexible: the filters can be contiguous or non-contiguous and chosen for a given application.
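To make the recovery step concrete, here is a minimal, illustrative sketch (not the paper's released code) of this kind of sparsity-constrained solve. It assumes a shift-invariant diffuser PSF `psf`, a known per-pixel spectral filter response `filters`, and a simple l1 prior minimized with FISTA; the actual reconstruction uses a calibrated forward model and a more carefully chosen prior.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(v, psf, filters):
    # v:       (ny, nx, n_lambda) hyperspectral volume
    # psf:     (ny, nx) diffuser point spread function (assumed shift-invariant)
    # filters: (ny, nx, n_lambda) per-pixel spectral filter response
    blurred = np.stack([fftconvolve(v[..., k], psf, mode="same")
                        for k in range(v.shape[-1])], axis=-1)
    return (filters * blurred).sum(axis=-1)  # single 2D sensor image

def adjoint(b, psf, filters):
    # Adjoint of `forward`: mask by each filter, then correlate with the PSF
    # (convolution with the flipped kernel).
    psf_flip = psf[::-1, ::-1]
    return np.stack([fftconvolve(filters[..., k] * b, psf_flip, mode="same")
                     for k in range(filters.shape[-1])], axis=-1)

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink values toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(b, psf, filters, tau=1e-3, step=1e-1, n_iters=200):
    # Minimize 0.5*||forward(v) - b||^2 + tau*||v||_1 with FISTA.
    # `step` must be below the reciprocal Lipschitz constant of the gradient.
    v = adjoint(b, psf, filters)  # initialize from the adjoint of the measurement
    z, t = v.copy(), 1.0
    for _ in range(n_iters):
        grad = adjoint(forward(z, psf, filters) - b, psf, filters)
        v_new = soft_threshold(z - step * grad, step * tau)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = v_new + ((t - 1.0) / t_new) * (v_new - v)  # momentum step
        v, t = v_new, t_new
    return v
```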
K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, L. Waller
Project Page / Paper (Nature LS&A)
In this work, we replace the tube lens of a Miniscope with an engineered, optimized diffuser printed on a Nanoscribe 3D printer. The resulting imager is inexpensive, tiny (the size of a quarter), and captures 3D fluorescent volumes from a single image, achieving 3 micron lateral resolution and 10 micron axial resolution at video rates with no moving parts. Check out more of our 3D videos of water bears here.
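The single-shot 3D recovery follows the same compressed-sensing recipe as above: a calibrated forward model plus a sparsity-constrained solver. As a toy illustration (assuming, for simplicity, a shift-invariant PSF at each depth plane, which only approximates the real optics), the measurement model can be sketched as:

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_3d(volume, psf_stack):
    # volume:    (nz, ny, nx) fluorescent sample
    # psf_stack: (nz, ny, nx) calibrated PSF for each depth plane
    # One 2D snapshot is the sum of every depth slice blurred by that
    # depth's PSF; 3D structure is encoded in the depth dependence of
    # the PSF and decoded by a sparsity-constrained solver.
    return sum(fftconvolve(volume[z], psf_stack[z], mode="same")
               for z in range(volume.shape[0]))
```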
K. Monakhova, J. Yurtsever, G. Kuo, N. Antipa, K. Yanny, L. Waller
Project Page / Paper (Optics Express)
Mask-based lensless imagers, like DiffuserCam, can be compact and can capture higher-dimensional information (3D, temporal), but reconstruction is slow and image quality is often degraded. In this work, we combine knowledge of the optical system's physics with deep learning to form an unrolled model-based network that solves the reconstruction problem, speeding up and improving image reconstructions. Compared to traditional methods, our architecture achieves better perceptual image quality and runs 20× faster, enabling interactive previewing of the scene.
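To illustrate the unrolling idea (a simplified sketch of the general approach, not the paper's exact architecture; `A` and `At` stand for the known lensless forward model and its adjoint), each network stage alternates a physics-based gradient step with a small learned CNN correction, and all stages are trained end-to-end:

```python
import torch
import torch.nn as nn

class UnrolledNet(nn.Module):
    # Each stage: a gradient step on the data term ||A(x) - b||^2 using
    # the known optics, then a small learned CNN refinement. `A` and `At`
    # are assumed to be differentiable functions on (batch, 1, H, W) tensors.
    def __init__(self, A, At, n_stages=5):
        super().__init__()
        self.A, self.At = A, At
        self.step = nn.Parameter(torch.full((n_stages,), 0.1))  # learned step sizes
        self.priors = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )
            for _ in range(n_stages)
        ])

    def forward(self, b):
        x = self.At(b)                     # physics-based initialization
        for k, prior in enumerate(self.priors):
            grad = self.At(self.A(x) - b)  # gradient of the data-fidelity term
            x = x - self.step[k] * grad    # model-based update
            x = x + prior(x)               # learned residual correction
        return x
```

Because the optics are baked into every stage, the learned parts only have to correct residual artifacts, which is what lets a small network reconstruct well and run quickly.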