LLM Tab
Large language model (LLM) settings
Access the Bolna playground at https://playground.bolna.dev/.
LLM Tab on Bolna Playground
- Choose your LLM Provider (OpenAI, DeepInfra, Groq) and the respective model (gpt-4o, Meta Llama 3 70B instruct, Gemma - 7b, etc.)
- Tokens - increasing this number allows longer responses to be queued before sending to the synthesiser, but slightly increases latency
- Temperature - increasing the temperature enables heightened creativity, but increases the chance of deviating from the prompt. Keep the temperature low if you want more control over how your AI converses
- Filler words - reduce perceived latency by smartly responding <300ms after the user stops speaking, but recipients can feel that the AI agent is not letting them complete their sentence
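The settings above can be sketched as a configuration object. This is a minimal illustrative sketch only; the field names (`provider`, `model`, `max_tokens`, `temperature`, `use_fillers`) are assumptions, not the exact Bolna API schema.

```python
# Hypothetical LLM settings for a Bolna agent. Field names are
# illustrative assumptions, not the actual Bolna configuration schema.
llm_config = {
    "provider": "openai",      # or "deepinfra", "groq"
    "model": "gpt-4o",         # e.g. "Meta Llama 3 70B instruct", "Gemma - 7b"
    "max_tokens": 150,         # higher -> longer queued responses, slightly more latency
    "temperature": 0.2,        # lower -> closer adherence to the prompt
    "use_fillers": True,       # respond <300ms after the user stops speaking
}

print(llm_config["model"])
```

Keeping `temperature` low trades creativity for predictability, which usually suits voice agents that must stay on script.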