## About
This is a minimal implementation of the OpenAI Embeddings API meant to be used with the QdrantSearch backend.
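Because the server mimics the OpenAI Embeddings API, standard OpenAI client libraries can be pointed at it by overriding the base URL. A minimal sketch, assuming the server listens on port 8000, exposes the usual `/v1/embeddings` route, and accepts the model name shown (none of which are taken from this repository's configuration):

```python
# Sketch only: base URL, port, and model name are assumptions; check
# compose.yml and fastembed-server.py for the values this repo actually uses.
from openai import OpenAI

# The API key is ignored by a local server but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.embeddings.create(
    model="BAAI/bge-small-en-v1.5",  # any model supported by fastembed
    input="Qdrant is a vector database.",
)
print(len(response.data[0].embedding))  # dimensionality of the returned vector
```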
## Usage
The easiest way to run it is with `docker compose up`. This starts the server on the default configured port. Different models can be used; for a full list of supported models, check the fastembed documentation. The first time a model is requested it will be downloaded, which can take a few seconds.
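Once the server is up, the endpoint can also be exercised with a plain HTTP request; the sketch below again assumes port 8000 and the standard `/v1/embeddings` path:

```python
# Sketch only: port, path, and model name are assumptions, not taken from
# this repository's configuration.
import requests

resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "BAAI/bge-small-en-v1.5",
        "input": ["first document", "second document"],
    },
    timeout=60,
)
resp.raise_for_status()
for item in resp.json()["data"]:
    print(item["index"], len(item["embedding"]))  # one vector per input
```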