Computational photography has become an increasingly active area of research within the computer vision community. In the last few years, the amount of research has grown tremendously, with dozens of papers published each year across a variety of vision, optics, and graphics venues. A similar trend can be seen in the emerging field of computational displays: spurred by the widespread availability of precise optical and material fabrication technologies, the research community has begun to investigate the joint design of display optics and computational processing. Such displays are designed not only for human observers but also for computer vision applications, providing high-dimensional structured illumination that varies in space, time, angle, and the color spectrum. This workshop aims to unite the computational camera and display communities by examining how concepts from computational cameras can inform the design of emerging computational displays, and vice versa, with a focus on computer vision applications.
The Computational Cameras and Displays (CCD) workshop series serves as an annual gathering place for researchers and practitioners who design, build, and use computational cameras, displays, and imaging systems for a wide variety of applications. The workshop solicits poster and demo submissions on all topics related to computational imaging systems.
Previous CCD Workshops: CCD2024, CCD2023, CCD2022, CCD2021, CCD2020, CCD2018, CCD2017, CCD2016, CCD2014
| Time (Nashville local) | Session |
| --- | --- |
| 8:45 - 9:00 | Welcome / Opening Remarks |
| 9:00 - 9:30 | Keynote by Ioannis Gkioulekas: A ray tracer for physics |
| 9:30 - 9:50 | Invited Talk by Manasi Muglikar: Computational imaging with event cameras |
| 9:50 - 10:10 | Invited Talk by Suyeon Choi: Design of Holographic Display Systems Based on Artificial Intelligence |
| 10:10 - 10:40 | Morning Break |
| 10:40 - 11:10 | Keynote by Shree K. Nayar: Can a Camera be Self-Sustaining? |
| 11:10 - 11:25 | Spotlight Presentations |
| 11:25 - 12:45 | Poster Session |
| 12:45 - 13:50 | Lunch Break |
| 13:50 - 14:20 | Keynote by Laura Waller: Computational Aberration Correction |
| 14:20 - 14:40 | Invited Talk by Rick Chang: Learning a good 3D representation via flow matching |
| 14:40 - 15:00 | Invited Talk by Florian Willomitzer: Coherent Computational Imaging with Synthetic Waves |
| 15:00 - 15:30 | Afternoon Break |
| 15:30 - 16:00 | Keynote by Ren Ng: Hi olo! Meet saq and mal |
| 16:00 - 16:45 | Panel Discussion: Ioannis Gkioulekas, Shree K. Nayar, Laura Waller, Ren Ng |
| 16:45 - 16:55 | Closing Remarks |
| ID | Board Number | Title | Presenter |
| --- | --- | --- | --- |
| 1 | #179 | BayesiaNF: Scalable Posterior Estimation for Bayesian Inverse Imaging | Tianao Li |
| 2 | #180 | Blending optimizations for segmented content in headset-free multifocal displays | Ahmed Othman |
| 3 | #181 | Blurry-Edges: Photon-Limited Depth Estimation from Defocused Boundaries | Wei Xu |
| 4 | #182 | Coherent Optical Modems for Full-Wavefield Lidar | Parsa Mirdehghan |
| 5 | #183 | Dense Dispersed Structured Light for Hyperspectral 3D Imaging of Dynamic Scenes | Suhyun Shin |
| 7 | #185 | Dual Exposure Stereo for Extended Dynamic Range 3D Imaging | Juhyung Choi |
| 8 | #186 | Event Ellipsometer: Event-based Mueller-Matrix Video Imaging | Ryota Maeda |
| 9 | #187 | Flash-Split: 2D Reflection Removal with Flash Cues and Latent Diffusion Separation | Tianfu Wang |
| 10 | #188 | Focal Split: Untethered Snapshot Depth from Differential Defocus | Junjie Luo |
| 11 | #189 | Gaussian Wave Splatting for Computer-Generated Holography | Suyeon Choi |
| 12 | #190 | Hardware Coding Function Design for Compressive Single-photon 3D Cameras | David Parra |
| 13 | #191 | NeuSee: Neural Imaging to See Through Dazzle | Xiaopeng Peng |
| 14 | #192 | Pixel-aligned RGB-NIR imaging for robot vision | Jinnyeong Kim |
| 15 | #193 | Practical single photon color imaging | Tianyi Zhang |
| 16 | #194 | PS-EIP: Robust Photometric Stereo Based on Event Interval Profile | Kazuma Kitazawa |
| 17 | #195 | Rapid wavefront shaping using an optical gradient acquisition | Sagi Monin |
| 18 | #196 | Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation | Jingxi Chen |
| 19 | #197 | Seeing A 3D World in A Grain of Sand | Yufan Zhang |
| 20 | #198 | Event fields: Capturing light fields at high speed, resolution, and dynamic range | Ziyuan Qu |
| 21 | #199 | Solving partial differential equations in participating media | Ioannis Gkioulekas |
| 22 | #200 | Spectrum from Defocus: Fast, Compact, and Interpretable Hyperspectral Imaging | Mehmet Kerem Aydin |
| 23 | #201 | Text-Guided Image Restoration via a Unified Plug-and-Play Diffusion Framework | Zihui Wu |
| 24 | #202 | Time of the Flight of the Gaussians: Optimizing Depth Indirectly in Dynamic Radiance Fields | Runfeng Li |
| 25 | #203 | Vision with Heat and Light | Mani Ramanagopal |
| 26 | #204 | Opportunistic Single-Photon Time of Flight | Mian Wei |
Computational Cameras and Displays Workshop - June 11, 2025