API
Mendable Chat API
POST https://api.mendable.ai/v0/mendableChat
This endpoint lets you chat with the Mendable AI assistant, which generates responses based on the given question, the conversation history, and your project's documentation.
Ingesting your documents
For this endpoint to work, your documents must already be ingested into our system. To do that, go to our platform or schedule an intro call.
We will soon provide a way to ingest documents through our API.
Example Usage with Streaming
Request
Here is an example request using cURL:
curl -X POST https://api.mendable.ai/v0/mendableChat \
-H "Content-Type: application/json" \
-d '{
"api_key": "YOUR_API_KEY",
"question": "How do I create a new project?",
"history": [
{ "prompt" : "How do I create a new project?", "response" : "You can create a new project by going to the projects page and clicking the new project button." }
],
"conversation_id": 12345,
}'
or using JavaScript:
Installing Microsoft Fetch Event Source
This package is needed to consume the Server-Sent Events (SSE) stream.
npm install @microsoft/fetch-event-source
Example Code
import { fetchEventSource } from '@microsoft/fetch-event-source'
const url = 'https://api.mendable.ai/v0/mendableChat'
const data = {
api_key: 'YOUR_API_KEY',
question: 'How do I create a new project?',
history: [
{
prompt: 'How do I create a new project?',
response:
'You can create a new project by going to the projects page and clicking the new project button.',
},
],
conversation_id: 12345,
}
let fullResponse = ''
let sources: any = []
let response_message_id: any = null
fetchEventSource(url, {
method: 'POST',
headers: {
Accept: 'text/event-stream',
'Content-Type': 'application/json',
},
openWhenHidden: true, // This is important to avoid the connection being closed when the tab is not active
body: JSON.stringify(data),
onopen(res: any) {
if (res.ok && res.status === 200) {
console.log('Connection made ', res)
} else if (res.status >= 400 && res.status < 500 && res.status !== 429) {
console.log('Client side error ', res)
}
return res
},
onmessage(event: any) {
const parsedData = JSON.parse(event.data)
const chunk = parsedData.chunk
if (chunk === '<|source|>') {
sources = parsedData.metadata
return
} else if (chunk === '<|message_id|>') {
response_message_id = parsedData.metadata
return
}
// Regular chunk: append it to the accumulated response text
fullResponse += chunk
return
},
onclose() {
// on close functionality
return
},
onerror(err: any) {
// on error functionality
return
},
})
  // fetchEventSource resolves with no value once the stream closes,
  // so read the accumulated response rather than calling response.json()
  .then(() => console.log(fullResponse))
  .catch((error) => console.error('Error:', error))
Response
data: {"chunk" : "<|source|>" , "metadata": [{"id": "string", "content": "string", "link": "string"}]}
data: {"chunk": "To create a new project in Mendable, follow these steps:"}
data: {"chunk": "\n\n1. Log in to your Mendable account."}
data: {"chunk": "\n2. Click the 'New Project' button in the dashboard."}
data: {"chunk": "\n3. Fill in the required project details and click 'Create Project'."}
data: {"chunk": "<|message_id|>", "metadata" : 12345 }
Note that the first chunk of the response is a special chunk containing the metadata for the response. You can detect it because its chunk value is "<|source|>". The metadata is an array of objects, each containing the id, content, and link of a source used to generate the response.
The last chunk is a <|message_id|> chunk, whose metadata is the ID of the message in Mendable. You can ignore this chunk for now.
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | true | Your unique Mendable API key. |
| question | string | true | The user's input or question. |
| history | array | true | An array of conversation objects containing prompt and response strings and an optional array of source objects. |
| conversation_id | number | false | ID of the conversation. |
| temperature | float | false | Controls the randomness of the AI's response (0.0 to 1.0). |
| additional_context | string | false | Additional context from your API to be added to the prompt |
| relevance_threshold | number | false | Filters out sources whose relevance score falls below the given threshold. The value ranges from 0 to 1, where 0 filters no sources and 1 filters all sources. |
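For instance, a request body that uses the optional parameters might look like this (the values here are purely illustrative):

```json
{
  "api_key": "YOUR_API_KEY",
  "question": "How do I create a new project?",
  "history": [],
  "temperature": 0.2,
  "additional_context": "The user is on the free plan.",
  "relevance_threshold": 0.5
}
```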
Response
The response is sent as a series of Server-Sent Events (SSE). Each chunk of the AI-generated response is sent as a data event.
The <|source|> chunk contains metadata for the sources of the content.
The <|message_id|> chunk contains the ID of the message in Mendable.
All other chunks are parts of the AI-generated response, which you can concatenate into the full answer.
data: {"chunk" : "<|source|>" , "metadata": [{"id": "string", "content": "string", "link": "string"}]}
data: {"chunk": "To create a new project in Mendable, follow these steps:"}
data: {"chunk": "\n\n1. Log in to your Mendable account."}
data: {"chunk": "\n2. Click the 'New Project' button in the dashboard."}
data: {"chunk": "\n3. Fill in the required project details and click 'Create Project'."}
data: {"chunk": "<|message_id|>", "metadata" : 12345 }
Disabling Streaming
Alternatively, you can disable streaming by passing the shouldStream parameter as false in the request. With streaming disabled, the bot's response is returned in the response body.
Here is an example of the request body with streaming disabled:
{
"question": "How to deploy my application?",
"history": [],
"anon_key": "<ANON_KEY>",
"conversation_id": "<convo_id>",
"shouldStream": false
}
And this is the response:
{
"answer": {
"text": "This is how to deploy it..."
},
"message_id": 123,
"sources": [
{
"id": 866,
"content":"",
"link": "",
"relevance_score": 0.99
}
]
}
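A small sketch of consuming this shape on the client (the type and function names are illustrative, not part of the API):

```typescript
interface NonStreamingResponse {
  answer: { text: string }
  message_id: number
  sources: {
    id: number
    content: string
    link: string
    relevance_score: number
  }[]
}

// Extracts the answer text and the links of sufficiently relevant
// sources from a non-streaming response body.
function extractAnswer(
  body: NonStreamingResponse,
  minScore = 0.5
): { text: string; links: string[] } {
  const links = body.sources
    .filter((s) => s.relevance_score >= minScore)
    .map((s) => s.link)
  return { text: body.answer.text, links }
}
```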