Search results
Prafull Sharma. PhD Student @ CSAIL, MIT. Research Interests: Computer Vision, Graphics, Machine Learning. Contact: prafull (at) mit (dot) edu. Bio. I am a PhD student advised by Prof. William T. Freeman and Prof. Frédo Durand at the MIT Computer Science & Artificial Intelligence Laboratory.
- Materialistic: Selecting Similar Materials in Images
Prafull Sharma 1,2, Julien Philip 2, Michaël Gharbi 2,...
- Neural Groundplans: Persistent Neural Scene Representations
Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov,...
- Alchemist: Parametric Control of Material Properties with ...
We propose a method to control material attributes of objects like roughness, metallic, albedo, and transparency in real images. Our method capitalizes on the generative prior of text-to-image models known for photorealism, employing a scalar value and instructions to alter low-level material properties. Addressing the lack of datasets with ...
- Neural Groundplans: Persistent Neural Scene ... - Prafull Sharma
We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene.
Articles 1–11. PhD Student, MIT - Cited by 224 - Computer Vision - Computer Graphics - Machine Learning.
View Prafull Sharma’s profile on LinkedIn, a professional community of 1 billion members. Professor of Cardiology at Army Hospital Research and Referral · Experience: Army Hospital...
View Prafull Sharma’s profile on LinkedIn, a professional community of 1 billion members. Absolute Studios · Producer · Experience: Absolute Studios · Education: infinity business school ·...