# CLIP Interrogator

**Model Card:**

**The CLIP Interrogator** uses OpenAI's CLIP models to test a given image against a variety of artists, mediums, and styles, studying how the different models perceive the image's content. It then combines those results with a BLIP caption to suggest a text prompt for generating more images similar to the one given.

Recommended GPU: NVIDIA T4

Inference Time: 94 seconds

Use **OpenAI's CLIP and Salesforce's BLIP** to optimize text prompts that match a given image, then use the resulting prompts to create art with text-to-image models.

{% embed url="https://playground.katonic.ai/model/14" %}
[**https://playground.katonic.ai/model/14**](https://playground.katonic.ai/model/14)
{% endembed %}
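
The same pipeline is also available as the open-source `clip-interrogator` Python package, so prompts can be generated locally as well. Below is a minimal sketch, assuming the package is installed (`pip install clip-interrogator`); the image path is a placeholder, and the CLIP model name follows the package's usual choice for Stable Diffusion 1.x prompts.

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# Load the image to interrogate (path is illustrative).
image = Image.open("example.jpg").convert("RGB")

# ViT-L-14/openai is the CLIP model commonly paired with Stable Diffusion 1.x.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

# Returns a BLIP caption extended with the artist, medium, and style terms
# that score highest against the image under CLIP.
prompt = ci.interrogate(image)
print(prompt)
```

The first run downloads the CLIP and BLIP weights, so expect it to be slow; subsequent calls reuse the cached models.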


