Abstract
Images as an artistic medium often rely on specific camera angles and lens distortions to convey ideas or emotions; however, such precise control is missing in current text-to-image models. We propose an efficient and general solution that allows precise control over the camera when generating both photographic and artistic images. Unlike prior methods that rely on predefined shots, we rely solely on four simple extrinsic and intrinsic camera parameters, removing the need for pre-existing geometry, reference 3D objects, and multi-view data. We also present a novel dataset with more than 57,000 images, along with their text prompts and ground-truth camera parameters. Our evaluation shows precise camera control in text-to-image generation, surpassing traditional prompt engineering approaches.
Results
Here, we provide some results using PreciseCam. The generated image for a given prompt adapts to changes in extrinsic parameters (roll and pitch) and intrinsic parameters (vertical field of view and distortion xi). Users can adjust these parameters via sliders, generating images that reflect both the camera parameters and the text description. Explore more examples in your web browser, and refer to our paper for further details about PreciseCam.
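The four slider-controlled parameters described above can be sketched as a simple container with plausible slider ranges. This is a minimal illustrative sketch only: the class name, field names, and the ranges are assumptions, not part of the PreciseCam implementation.

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Hypothetical container for the four camera parameters (names illustrative)."""
    roll: float   # extrinsic: rotation about the view axis, in degrees
    pitch: float  # extrinsic: upward/downward tilt, in degrees
    vfov: float   # intrinsic: vertical field of view, in degrees
    xi: float     # intrinsic: distortion parameter

    def clamp(self) -> "CameraParams":
        # Keep each value inside an assumed slider range (ranges are guesses).
        return CameraParams(
            roll=max(-180.0, min(180.0, self.roll)),
            pitch=max(-90.0, min(90.0, self.pitch)),
            vfov=max(10.0, min(170.0, self.vfov)),
            xi=max(0.0, min(1.0, self.xi)),
        )

# Example: an out-of-range roll value is clamped before use.
params = CameraParams(roll=200.0, pitch=15.0, vfov=60.0, xi=0.2).clamp()
print(params.roll)  # 180.0
```

In a slider-driven UI like the one described, clamping in one place keeps every downstream consumer of the parameters within valid bounds.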


Supplementary Video
Paper: PDF
Code and Model
Coming soon!