# Organize and Search Images
Nomic Atlas natively supports image datasets, allowing users to interactively explore large image collections.
Atlas automatically organizes your image collection into clusters, placing visually and semantically similar images near each other on the map.

For example, you can explore hundreds of thousands of images of artworks from the Metropolitan Museum of Art in this Atlas data map, which groups similar artworks into different neighborhoods based on their visual content.
## Uploading images
Currently, image datasets must be uploaded programmatically via the Nomic Python SDK. You can upload images stored locally, or pass URL strings to store remotely hosted images in an Atlas dataset. Supported file types are `.png`, `.jpg`, and `.webp`.

Pass a list of image URLs, local filepaths, bytes, or `PIL.Image` objects to the `blobs` parameter of `map_data`:
```python
from nomic import atlas

atlas.map_data(
    blobs=your_images,            # Your list of images (URLs, local filepaths, bytes, or PIL.Image objects)
    data=your_metadata,           # Optional metadata for each image
    identifier=your_dataset_name  # Dataset name
)
```
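When uploading from local storage, it can help to gather only files of the supported types first. Below is a minimal stdlib-only sketch of such a helper; `collect_image_paths` is a hypothetical name, not part of the Nomic SDK:

```python
from pathlib import Path

# Supported Atlas image file types, per the docs above
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".webp"}

def collect_image_paths(directory: str) -> list[str]:
    """Return sorted paths of all supported image files under `directory`."""
    return sorted(
        str(p)
        for p in Path(directory).rglob("*")
        if p.suffix.lower() in SUPPORTED_EXTENSIONS
    )
```

The resulting list can be passed directly as the `blobs` argument to `map_data`.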
## Bring your own image embeddings
You can also bring your own image embeddings to Atlas. For custom pre-computed image embeddings, pass them via the `embeddings` parameter:
```python
from nomic import atlas

atlas.map_data(
    embeddings=your_image_embeddings,  # np.array of shape (n_images, embedding_dim)
    data=your_metadata,                # Optional metadata for each image
    identifier=your_dataset_name       # Dataset name
)
```
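Before uploading, it is worth checking that your embeddings really form an `(n_images, embedding_dim)` matrix and line up with your metadata. A small sanity-check sketch (this helper is illustrative, not part of the SDK):

```python
def validate_embeddings(embeddings, metadata=None):
    """Check that embeddings form a rectangular (n_images, embedding_dim)
    matrix and, if metadata is given, that there is one record per image.

    Returns (n_images, embedding_dim) on success."""
    if len(embeddings) == 0:
        raise ValueError("embeddings is empty")
    dim = len(embeddings[0])
    if any(len(row) != dim for row in embeddings):
        raise ValueError("all embeddings must share the same dimension")
    if metadata is not None and len(metadata) != len(embeddings):
        raise ValueError("metadata length must match the number of embeddings")
    return len(embeddings), dim
```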
Developers can find more examples of uploading image datasets in the Nomic Python SDK's API reference for data upload functionality.
## Nomic Embed Vision
Image support in Atlas is powered by Nomic Embed Vision, our Apache 2.0-licensed image embedding model. You can read the release announcement for this model on our blog, and developers looking to use the model programmatically can consult its API reference.
## Multimodality
Atlas enables multimodal interaction with your image data, such as searching your image datasets with text queries to find images whose visual content relates to your query.
For example, searching for `animals` over the Metropolitan Museum of Art map returns artworks that depict animals.
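Conceptually, this kind of text-to-image search works by embedding the text query into the same space as the image embeddings and ranking images by similarity. A minimal stdlib-only sketch with toy vectors, assuming cosine similarity as the ranking metric (function names are hypothetical, not the Atlas implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_images(query_embedding, image_embeddings):
    """Return image indices sorted by descending similarity to the query."""
    scores = [cosine_similarity(query_embedding, e) for e in image_embeddings]
    return sorted(range(len(image_embeddings)), key=lambda i: scores[i], reverse=True)
```

In Atlas, the heavy lifting is done server-side; this sketch only illustrates why a shared embedding space lets a text query retrieve semantically related images.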

You can read more about multimodality in Atlas in our multimodality guide.