Tool calling with Serverless Endpoints
Build AI agents with Friendli Serverless Endpoints using tool calling for dynamic, real-time interactions with LLMs.
Goals
- Use tool calling to build your own AI agent with Friendli Serverless Endpoints
- Check out the examples below to see how you can interact with state-of-the-art language models while letting them search the web, run Python code, etc.
- Feel free to make your own custom tools!
Getting Started
- Head to https://suite.friendli.ai, and create an account.
- Grab a `FRIENDLI_TOKEN` to use Friendli Serverless Endpoints within an agent.
🚀 Step 1. Playground UI
Experience tool calling on the Playground
- On your dashboard, click the “Go to Playground” button for Friendli Serverless Endpoints.
- Choose a model that best aligns with your desired use case.
- Click a `web:search` tool calling example and see the response. 😀
🚀 Step 2. Tool Calling
Search for interesting information using the `web:search` tool. This time, let’s try it by writing Python code.
- Turn on the `web:search` tool on the playground.
- Ask something interesting!
- Click the “View code” button to get the tool calling code in Python/JavaScript.
- Copy and paste the code into your IDE.
- Click here to generate a Friendli Token.
- Fill the token value into the copied code and run it.
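For reference, the snippet below is a minimal sketch of what “View code” produces, assuming Friendli’s OpenAI-compatible serverless endpoint and Python client; the model id is a placeholder, so replace it with the model you chose on the playground.

```python
# Minimal sketch of a web:search tool call, assuming Friendli's
# OpenAI-compatible serverless endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.friendli.ai/serverless/v1",
    api_key=os.environ["FRIENDLI_TOKEN"],
)

response = client.chat.completions.create(
    model="meta-llama-3.1-70b-instruct",  # placeholder: use your chosen model
    messages=[{"role": "user", "content": "What happened in AI news this week?"}],
    tools=[{"type": "web:search"}],  # enable the built-in web search tool (beta)
)
print(response.choices[0].message.content)
```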
🚀 Step 3. Multiple tool calling
Use multiple tools at once to work out “How long will it take to buy a house in the San Francisco Bay Area on your annual salary?” Here is the list of available built-in tools (beta); a sketch combining two of them follows the list.
- `math:calculator` (tool for calculating arithmetic operations)
- `math:statistics` (tool for analyzing statistical data)
- `math:calendar` (tool for handling date-related data)
- `web:search` (tool for retrieving data through web search)
- `web:url` (tool for extracting data from a given website)
- `code:python-interpreter` (tool for writing and executing Python code)
- `file:text` (tool for extracting text data from a given file)
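Enabling several built-in tools is just a matter of listing more of them in the `tools` array. Here is a hedged sketch for the house-affordability question, under the same assumptions as the Step 2 snippet (OpenAI-compatible client, placeholder model id):

```python
# Sketch: combine web search (look up Bay Area home prices) with the
# calculator (do the arithmetic on the results).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.friendli.ai/serverless/v1",
    api_key=os.environ["FRIENDLI_TOKEN"],
)

response = client.chat.completions.create(
    model="meta-llama-3.1-70b-instruct",  # placeholder: use your chosen model
    messages=[
        {
            "role": "user",
            "content": "My annual salary is $150,000. How long would it take "
                       "me to buy a house in the San Francisco Bay Area?",
        }
    ],
    tools=[
        {"type": "web:search"},       # fetch current median home prices
        {"type": "math:calculator"},  # compute years of salary needed
    ],
)
print(response.choices[0].message.content)
```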
🚀 Step 4. Build a custom tool
Build your own creative tool. We will show you how to make a custom tool that retrieves temperature information. (The complete code snippet is provided at the bottom.)
- Define a function to use as a custom tool.
- Send a function calling inference request.
  - Add the user’s input as a `user` role message.
  - The information about the custom function (e.g., `get_temperature`) goes into the `tools` option. The function’s parameters are described in JSON schema.
  - The response includes an `arguments` field, whose values are extracted from the user’s input and can be passed as parameters to the custom function.
- Generate the final response using the tool calling results.
  - Add the `tool_calls` response as an `assistant` role message.
  - Add the result obtained by calling the `get_temperature` function as a `tool` role message and send it to the Chat API again.
- Complete code snippet: see the sketch below.
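The following is a hedged end-to-end sketch of the steps above, assuming Friendli’s OpenAI-compatible Chat API; `get_temperature`, the model id, and the dummy weather data are illustrative stand-ins, not the official sample.

```python
# End-to-end sketch of the custom-tool flow, assuming Friendli's
# OpenAI-compatible serverless endpoint. Swap the dummy get_temperature
# implementation for a real weather API in production.
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.friendli.ai/serverless/v1",
    api_key=os.environ["FRIENDLI_TOKEN"],
)
MODEL = "meta-llama-3.1-70b-instruct"  # placeholder: any tool-calling-capable model

# 1. Define a function to use as a custom tool.
def get_temperature(location: str) -> str:
    # Dummy data for the sketch; a real tool would query a weather service.
    return json.dumps({"location": location, "temperature_c": 21})

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_temperature",
            "description": "Get the current temperature for a location.",
            "parameters": {  # the function's parameters, described in JSON schema
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }
]

# 2. Send a function calling inference request.
messages = [{"role": "user", "content": "How warm is it in Seoul right now?"}]
first = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
# (A production version would handle the case where no tool is called.)
tool_call = first.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)  # values extracted from the user's input

# 3. Generate the final response using the tool calling results.
messages.append(first.choices[0].message)  # the tool_calls assistant message
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": get_temperature(**args),  # actual function result
    }
)
final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
print(final.choices[0].message.content)
```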
🎉 Congratulations!
By following the steps above, you’ve experienced the whole process of defining and using a custom tool to get accurate, rich answers from LLM models!
Brainstorm creative ideas for your agent by reading our blog articles!