Hi, I'm running the same prompt against both OpenAI and Google Gemini and asking them to return the results as valid JSON, then using the json_decode filter to decode the response. It works about half the time; the other half, Xano tells me the response is invalid JSON, even though it's technically valid JSON. Is there a better way to get this parsed besides looping the workflow on error until it decides it likes the JSON?
It seems like Xano wants a comma after the last object, but no matter what I do, I can't get either LLM to return one 100% of the time, regardless of the model I use or how explicit I am in asking for it.
I've even gone as far as providing a template in my prompts, and neither LLM is consistent every time. I'm hoping there might be a solution to this.
For reference, this is an example of a JSON response that fails to parse:
{ "title": "Bird Identification and Education", "problem": { "description": "Beginner bird watchers often struggle to identify different species, which can hinder their enjoyment of the hobby.", }, "solution": { "description": "Develop interactive online courses, mobile apps, or field guides that provide comprehensive information and identification tools for various bird species.", "valueProposition": "This would empower bird watchers of all levels to enhance their knowledge and deepen their appreciation for birdwatching." } }, { "title": "Wildlife Conservation and Bird Protection", "problem": { "description": "Protecting bird habitats and addressing threats to endangered species requires effective conservation efforts." }, "solution": { "description": "Offer educational programs, fundraising campaigns, or advocate for policy changes to support wildlife conservation and bird protection initiatives.", "valueProposition": "This would contribute to preserving bird populations and ensuring the longevity of birdwatching as a recreational activity." } }