OpenAI - How to enable streaming with Xano?

Hey Xano, right now I'm sending API requests to OpenAI. The problem is that it takes a very long time for the whole response to appear. LLM generation tends to be slow in general, so displaying the message incrementally, line by line, is a UI trick most sites use so the user doesn't feel like they're waiting forever. How do I do this in Xano? Right now the output only appears once the entire generation is done, which can take up to 3 minutes. OpenAI has a streaming option to send responses back as they are generated. How do I set that up in Xano?
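For context on what the streaming option involves outside Xano: when you set `"stream": true` on a Chat Completions request, OpenAI returns server-sent events, where each event is a `data: {...}` line carrying a JSON chunk with a partial text delta, and the stream ends with `data: [DONE]`. Below is a minimal sketch of a parser for that wire format; the helper name `parse_openai_sse` and the sample lines are my own, not anything from Xano or the OpenAI SDK.

```python
import json


def parse_openai_sse(lines):
    """Yield text deltas from an OpenAI-style SSE stream.

    Each event arrives as a line of the form 'data: {json chunk}',
    and the stream terminates with the sentinel 'data: [DONE]'.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return  # end-of-stream sentinel
        chunk = json.loads(payload)
        # Chat Completions chunks carry partial text in choices[0].delta.content
        delta = chunk["choices"][0]["delta"].get("content")
        if delta is not None:
            yield delta


# Hypothetical sample of what the raw stream lines look like:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print("".join(parse_openai_sse(sample)))  # prints "Hello"
```

The point is that the client has to consume the HTTP response incrementally and append each delta to the UI as it arrives, rather than waiting for one final JSON body.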
