
How can I stream a text response back to the end user word by word using the Freshchat API? In my use case, the response is generated by a custom LLM, and I'd like to stream the reply gradually instead of sending the entire response at once. Long responses can take a while to generate, and I want to reduce the perceived wait for the user.

How can I achieve this?
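For context, the shape I have in mind on my side is roughly the sketch below: consume the LLM's token stream, batch tokens into small chunks, and deliver each chunk as it becomes ready. The `send_chunk` callback is a placeholder for whatever Freshchat call turns out to be appropriate (that delivery step is exactly what I'm asking about), not confirmed API usage:

```python
from typing import Callable, Iterable, Iterator

def chunk_stream(tokens: Iterable[str], flush_every: int = 5) -> Iterator[str]:
    """Group an LLM token stream into small batches, so each batch can be
    delivered as a partial message instead of waiting for the full reply."""
    buf: list[str] = []
    for tok in tokens:
        buf.append(tok)
        if len(buf) >= flush_every:
            yield "".join(buf)
            buf = []
    if buf:  # flush whatever remains when the stream ends
        yield "".join(buf)

def relay_reply(tokens: Iterable[str], send_chunk: Callable[[str], None]) -> None:
    """Forward each batch to the end user. `send_chunk` is a placeholder
    for the actual Freshchat delivery call -- the part I need help with."""
    for chunk in chunk_stream(tokens):
        send_chunk(chunk)

# Simulated LLM token stream, just for illustration:
fake_tokens = ["Hello", " there", ",", " how", " can", " I", " help", "?"]
sent: list[str] = []
relay_reply(fake_tokens, sent.append)
```

So the batching side is straightforward; what I'm missing is how to get Freshchat to surface those partial chunks to the user as one gradually growing reply rather than several separate messages.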
