Nomic Text Embeddings semantically encode text so computers can manipulate it. They are useful
for understanding large unstructured datasets, for semantic search, and for building retrieval-augmented LLM apps.
Creates embeddings from a batch of text documents. Optionally, specify the embedding task_type to specialize the embeddings to a certain task. Notably, RAG workflows should use search_query for queries and search_document for documents.
The Nomic Text Embedding model to use.
A list of text documents to embed.
The task your embeddings should be specialized for: search_query, search_document, clustering, classification. Defaults to search_document.
The output size of the embedding model. Applicable only to models that support variable dimensionality; defaults to the model's largest embedding size.
The list of text embeddings and the total tokens utilized by the request.
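The parameters above can be sketched as a request body. This is a minimal illustration of assembling the payload, not an official client; the field names mirror the parameters described here, and the default model name is an assumption.

```python
import json

def build_embedding_request(texts, model="nomic-embed-text-v1.5",
                            task_type="search_document", dimensionality=None):
    """Assemble a JSON body for a batch text-embedding request.

    task_type is one of: search_query, search_document, clustering,
    classification. In RAG workflows, use search_query for queries and
    search_document for the documents being retrieved.
    """
    payload = {
        "model": model,
        "texts": texts,          # list of text documents to embed
        "task_type": task_type,  # specializes the embeddings to a task
    }
    if dimensionality is not None:
        # Only meaningful for models that support variable dimensionality;
        # omitting it falls back to the model's largest embedding size.
        payload["dimensionality"] = dimensionality
    return json.dumps(payload)

# A query-side request for a RAG workflow:
body = build_embedding_request(["What is vector search?"],
                               task_type="search_query")
```

The response then carries the list of embeddings plus the total tokens used, which you can track for usage accounting.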