The paper introduces a technique for text-based editing of real images using the Stable Diffusion model. To modify an image with text, the image must first be inverted, together with a meaningful text prompt, into the pretrained model's domain. The proposed inversion technique consists of two novel components: pivotal inversion for diffusion models, which uses a single pivotal noise vector per diffusion timestep and optimizes around it, and null-text optimization, which modifies only the unconditional textual embedding used for classifier-free guidance rather than the input text embedding. This enables prompt-based editing while avoiding cumbersome tuning of the model's weights. The resulting null-text inversion technique is evaluated extensively on a variety of images and prompt edits, demonstrating high-fidelity editing of real images.
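The core idea can be sketched in a few lines: under classifier-free guidance, the noise prediction is an extrapolation from the unconditional branch, so optimizing the null (unconditional) embedding alone can steer the guided trajectory back toward the pivotal one without touching the model weights. The toy below illustrates this, replacing the diffusion U-Net with a hypothetical linear predictor (`predict_noise` and all parameter values are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def predict_noise(z, embedding):
    # Stand-in for the diffusion model's noise prediction eps_theta(z_t, t, embedding);
    # a linear toy so the example is self-contained and differentiable by hand.
    return z * 0.5 + embedding

def guided_noise(z, cond_emb, null_emb, w=7.5):
    # Classifier-free guidance: extrapolate from the unconditional prediction
    # toward the text-conditioned one with guidance scale w.
    eps_uncond = predict_noise(z, null_emb)
    eps_cond = predict_noise(z, cond_emb)
    return eps_uncond + w * (eps_cond - eps_uncond)

def optimize_null_embedding(z, target_eps, cond_emb, null_emb,
                            w=7.5, lr=0.01, steps=200):
    # Null-text optimization sketch: gradient descent on the unconditional
    # embedding so the guided prediction matches the pivotal noise target
    # (here, target_eps plays the role of the DDIM-inverted trajectory).
    # The model itself and the conditional embedding stay frozen.
    for _ in range(steps):
        eps = guided_noise(z, cond_emb, null_emb, w)
        # For the linear toy, d(eps)/d(null_emb) = (1 - w), so the gradient
        # of the squared error is 2 * (eps - target_eps) * (1 - w).
        grad = 2.0 * (eps - target_eps) * (1.0 - w)
        null_emb = null_emb - lr * grad
    return null_emb
```

In the actual method this optimization is run per timestep along the DDIM inversion trajectory, using automatic differentiation through the U-Net rather than a hand-derived gradient; the sketch only shows why adjusting the null embedding suffices to control the guided output.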