I am a doctoral researcher at the CVLab at EPFL, working on Energy-Efficient Deep Networks under the supervision of Prof. Pascal Fua and Dr. Mathieu Salzmann. My current research focuses on 3D reconstruction using compressed Gaussian splatting, developing efficient quantized diffusion models, and reducing inference time for vision-language and large language models (VLMs and LLMs). These projects align with my broader goal of optimizing machine learning models for performance and efficiency.
Prior to my PhD, I worked on hardware-friendly mixed-precision neural networks under the supervision of Prof. Luca Benini at the Integrated Systems Laboratory, ETH Zurich. I hold a master’s degree from TU Munich and ETH Zurich, where I cultivated a strong interest in machine learning algorithms designed for low-power devices.
QT-DoG leverages weight quantization to promote flatter minima, enhancing generalization across unseen domains while reducing model size and computational costs.
Miscellanea
Experience
Logitech, Research Intern – Immersive Video Conferencing.
Agile Robots AG, Research Intern – Applied Machine Learning.
Max Planck Institute, Research Intern – Deep Learning on FPGAs.
BMW Group, Autonomous Driving Campus, Research Intern – HW/SW Optimization of CNNs.
GE Healthcare, Intern – Software Development and Testing.
Siemens AG, Research Intern – Deep Learning Model Deployment.
Intel, Working Student – Software Development.
Supervised Students
Chengkun Li (now PhD student at EPFL)
[Modular Quantization for Object Detection]
Ziqi Zhao (now PhD student at HKU)
[Modular Quantization for 6D Pose]
Ahmad Jarrar Khan (master's student at EPFL)
[Compressed Gaussians for Dynamic 3D Scenes]
Kepler Warrington-Arroyo (Consultant, Aubep)
[Vision-Language Model Compression]