Hi all,
I recently started learning about vector embeddings in Xano, but I'm running into issues with query speed.
I have a table of about 4,000 records and want to search the description field with a query string. I followed the guide at https://www.xano.com/learn/vector-embeddings-with-openai/, set up an inner product vector index on the embeddings field, and made calls to OpenAI to populate each of the 4,000 records with an embedding based on its description text.
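For context, the population step boils down to something like this (a minimal sketch using the official OpenAI Python client; the `rows` input and the `embed_descriptions` / `batches` helpers are hypothetical names, not Xano internals — in Xano this is done with built-in functions, this is just the equivalent logic):

```python
def batches(items, size):
    # Yield fixed-size chunks so many descriptions can be embedded per API call
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed_descriptions(rows, model="text-embedding-3-small"):
    # rows: list of (record_id, description) tuples exported from the table.
    # Import lazily so the batching helper above works without the package.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    embeddings = {}
    for batch in batches(rows, 100):
        resp = client.embeddings.create(
            model=model,
            input=[description for _, description in batch],
        )
        for (record_id, _), item in zip(batch, resp.data):
            # Each item.embedding would be written back to the record's vector field
            embeddings[record_id] = item.embedding
    return embeddings
```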
Now, to test this out, I created an endpoint that calls OpenAI to generate an embedding for the search query, then runs a Query All Records function with an inner product similarity eval between the table's embeddings field and the query embedding, and sorts by that eval.
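In plain terms, what the endpoint computes is roughly this (a framework-free sketch with hypothetical names, not actual Xano internals; note that for unit-length OpenAI embeddings the inner product equals cosine similarity):

```python
def inner_product(a, b):
    # Dot product of two vectors; OpenAI embeddings are unit-length,
    # so this is the same ranking as cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def search(query_embedding, records):
    # records: list of (record_id, embedding) tuples
    scored = [(rid, inner_product(query_embedding, emb)) for rid, emb in records]
    # Sort by similarity, highest first
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Tiny usage example with 3-dimensional toy vectors:
records = [
    ("a", [1.0, 0.0, 0.0]),
    ("b", [0.0, 1.0, 0.0]),
    ("c", [0.7, 0.7, 0.0]),
]
results = search([1.0, 0.0, 0.0], records)
# "a" ranks first since it points in the same direction as the query
```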
This all works and the search results are accurate, but the search takes about 15 seconds, which seems far too slow for only 4,000 records. I would have expected a search like this to take well under a second.
Is there anything I could be doing wrong that could be causing this, considering the functionality is working fine?
I was wondering if my index could be broken, but I tried removing the index and then the search time went up to about a minute (!!), which means the index must be working and helping.
Is this just the reality of vector embeddings within Xano, or is it likely that I have made a mistake or am implementing this incorrectly?
Thanks in advance!