Get agent
Retrieve an agent
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
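As a sketch of the authentication scheme described above, the following builds a GET request with the Bearer header. The base URL, agent id, and token are placeholder values, not taken from this page; replace them with your own.

```python
import urllib.request

# Hypothetical values -- replace with your real API host, agent id, and token.
BASE_URL = "https://api.example.com"
AGENT_ID = "agent_123"
AUTH_TOKEN = "your-auth-token"

# Build a GET request for the agent, sending the Bearer token in the
# Authorization header exactly as described above.
request = urllib.request.Request(
    f"{BASE_URL}/agent/{AGENT_ID}",
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
)

# urllib.request.urlopen(request) would perform the actual call.
print(request.get_header("Authorization"))  # -> Bearer your-auth-token
```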
Path Parameters
Response
Unique identifier for the agent
Human-readable agent name
Type of agent
Current status of the agent. Available options: seeding, processed
Timestamp of agent creation
Timestamp of last update for the agent
An array of tasks that the agent can perform
Type of task. Available options: conversation, extraction, summarization, webhook
Configuration of multiple tools that form a task
Configuration of the LLM model for the agent task. Available options: simple_llm_agent, knowledgebase_agent
streaming
Semantic routing layer
Since we use fastembed, all models supported by fastembed are supported by us.
These are predefined routes that can be used to answer FAQs, set basic guardrails, or make a static function call.
LLM configuration
streaming
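To illustrate how the agent type, flow type, and semantic routing layer described above might fit together, here is a sketch of an LLM task configuration. The field names (agent_type, routes, embedding_model, etc.) and the embedding-model name are assumptions for illustration; the actual schema may differ.

```python
import json

# Hypothetical field names -- a sketch of an LLM task configuration with a
# semantic routing layer, not the exact API schema.
llm_config = {
    "agent_type": "simple_llm_agent",  # or "knowledgebase_agent"
    "agent_flow_type": "streaming",
    "routes": {
        # Any fastembed-supported embedding model (model name is an example).
        "embedding_model": "BAAI/bge-small-en-v1.5",
        "routes": [
            {
                # Predefined route answering an FAQ with a static response.
                "route_name": "pricing_faq",
                "utterances": ["how much does it cost", "what are your prices"],
                "response": "Our pricing starts at $20 per month.",
            }
        ],
    },
}

print(json.dumps(llm_config, indent=2))
```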
Configuration of the Synthesizer model for the agent task. Available options: polly, elevenlabs, deepgram, styletts
Name of the voice (e.g. Matthew)
Engine of the voice (e.g. generative)
Language of the voice (e.g. en-US)
Sampling rate of the voice. Available options: 8000, 16000
Audio format (e.g. wav)
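Putting the synthesizer fields above together, a configuration might look like the sketch below. The key names (provider, provider_config, etc.) are assumptions; the option values come from the lists above.

```python
# Hypothetical key names -- a sketch of a synthesizer configuration using the
# provider and voice options listed above.
synthesizer_config = {
    "provider": "polly",  # one of: polly, elevenlabs, deepgram, styletts
    "provider_config": {
        "voice": "Matthew",
        "engine": "generative",
        "language": "en-US",
        "sampling_rate": 8000,  # or 16000
    },
    "audio_format": "wav",
}

print(synthesizer_config["provider"])
```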
Configuration of the Transcriber model for the agent task
Transcription provider. Available options: deepgram
Model used for transcription. Available options: nova-2, nova-2-meeting, nova-2-phonecall, nova-2-finance, nova-2-conversationalai, nova-2-medical, nova-2-drivethru, nova-2-automotive
Language of transcription. Available options: en, hi, es, fr
Encoding of the audio (e.g. linear16)
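Analogously, a transcriber configuration built from the options above could be sketched as follows; the key names are assumptions, while the values are the options listed above.

```python
# Hypothetical key names -- a sketch of a transcriber configuration using the
# Deepgram model, language, and encoding options listed above.
transcriber_config = {
    "provider": "deepgram",
    "model": "nova-2-phonecall",  # any of the nova-2 variants listed above
    "language": "en",             # one of: en, hi, es, fr
    "encoding": "linear16",
}

print(transcriber_config["model"])
```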
API tools you'd like the agent to have access to
Description of all the tools you'd like to add to the agent. It must be a JSON string, as it will be passed to the LLM.
Any unique name for this function tool (e.g. transfer_call)
Description of the tool (e.g. "Use this tool to transfer the call")
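Since the tools description must be a JSON string passed to the LLM, one way to build it is to define the tools as Python objects and serialize them. The parameters schema below is a hypothetical example; the page only specifies the name and description fields.

```python
import json

# Hypothetical tool schema -- the page requires only that the tools
# description be a JSON string, since it is passed verbatim to the LLM.
tools = [
    {
        "name": "transfer_call",  # any unique name for this function tool
        "description": "Use this tool to transfer the call",
        "parameters": {  # parameter schema is an illustrative assumption
            "type": "object",
            "properties": {
                "phone_number": {
                    "type": "string",
                    "description": "Number to transfer the call to",
                }
            },
            "required": ["phone_number"],
        },
    }
]

tools_json = json.dumps(tools)  # serialize to the required JSON string
print(type(tools_json).__name__)  # -> str
```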
Should be used only in the conversation task for now; it consists of all the required configuration for conversational nuances.
Time to wait in seconds before hanging up if the user doesn't say anything
Since we work with interim results, this dictates the linear delay to add before speaking every time we get a partial transcript from the ASR
To avoid accidental interruptions, how many words we should wait for before interrupting
Whether to use the LLM prompt to hang up or not. This will soon be replaced by a predefined function.
This enables the agent to acknowledge when the user is speaking long sentences
Gap between successive acknowledgements. We also add random jitter to this value to make it feel more natural
Basic delay after which we should start backchanneling
Toggle to add ambient noise to the call to make it sound more natural
Track for ambient noise. Available options: office-ambience, coffee-shop, call-center
The call automatically disconnects after reaching this limit.
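The conversational-nuance settings above could be gathered into a single configuration object like the sketch below. Every key name here is a hypothetical label for the corresponding field described above; the real API's names may differ.

```python
# Hypothetical key names -- one possible shape for the conversational-nuance
# settings described above, not the exact API schema.
conversation_config = {
    "hangup_after_silence": 10,      # seconds of user silence before hanging up
    "incremental_delay": 400,        # ms of linear delay before speaking on partial ASR results
    "words_before_interruption": 2,  # words to wait for before allowing interruption
    "hangup_via_llm_prompt": False,  # whether the LLM prompt decides hang-up
    "backchanneling": True,          # acknowledge long user sentences
    "backchanneling_gap": 5,         # seconds between acknowledgements (jitter is added)
    "backchanneling_start_delay": 5, # basic delay before backchanneling starts
    "ambient_noise": True,           # add ambient noise for a more natural call
    "ambient_noise_track": "office-ambience",  # or coffee-shop, call-center
    "call_duration_limit": 300,      # call disconnects after reaching this limit (seconds)
}

print(len(conversation_config))
```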