Quick Start
Add anytool to your LLM app in 5 minutes
Step 1: Get Your API Key
Sign up at anytoolhq.com/signup and navigate to Settings → API Keys to create an API key.
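The examples below read the key from an ANYTOOL_API_KEY environment variable. One way to set it, assuming a Unix-style shell (adjust for your environment):

export ANYTOOL_API_KEY=your-api-key-here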
Step 2: Install Dependencies
For this example, we'll use Vercel's AI SDK, which works with any LLM provider:
npm install ai @ai-sdk/openai zod
Step 3: Add anytool as a Tool
Create a single tool that generates everything on demand:
import { generateText, tool } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'
const anytool = tool({
  description: 'Generate and execute any tool on demand',
  parameters: z.object({
    prompt: z.string().describe('What the tool should do'),
    input: z.string().describe('Input data to process')
  }),
  execute: async ({ prompt, input }) => {
    const response = await fetch('https://anytoolhq.com/api/tool', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.ANYTOOL_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ prompt, input })
    })
    if (!response.ok) throw new Error('Tool execution failed')
    const result = await response.json()
    return result.output
  }
})
// Use it
const result = await generateText({
  model: openai('gpt-5'),
  tools: { anytool },
  prompt: 'Generate a QR code for https://example.com'
})
console.log(result.text)
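If result.text comes back empty, your SDK version may be stopping as soon as the tool call finishes. Allowing one extra step lets the model turn the tool output into a final answer. A minimal sketch, assuming AI SDK 4.x where this is the maxSteps option:

// Same call as above, with one follow-up step allowed after the tool call.
// maxSteps is the AI SDK 4.x option; check your SDK version for the equivalent setting.
const result = await generateText({
  model: openai('gpt-5'),
  tools: { anytool },
  maxSteps: 2, // step 1: call anytool, step 2: write the final answer
  prompt: 'Generate a QR code for https://example.com'
})
console.log(result.text)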
Alternative: Use a Specific Tool
If you want to use a specific tool you've created, you can call it directly:
// Get your tool ID from the dashboard
const toolId = 'your-tool-id-here'
// Call the specific tool
const response = await fetch(`https://anytoolhq.com/api/tool/${toolId}/run`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.ANYTOOL_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ input: 'your input here' })
})
const result = await response.json()
console.log(result.output)
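You can also expose a saved tool to the model rather than calling it yourself, by wrapping the /run endpoint in a tool() definition exactly as in Step 3. A minimal sketch, reusing the toolId above (the name, description, and input schema here are illustrative):

// Wraps one specific saved tool (instead of the open-ended anytool) for the LLM.
const specificTool = tool({
  description: 'Run my saved tool', // describe what your saved tool actually does
  parameters: z.object({
    input: z.string().describe('Input data to process')
  }),
  execute: async ({ input }) => {
    const response = await fetch(`https://anytoolhq.com/api/tool/${toolId}/run`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.ANYTOOL_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input })
    })
    if (!response.ok) throw new Error('Tool execution failed')
    const result = await response.json()
    return result.output
  }
})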
That's It!
Your LLM now has access to unlimited tools. When it needs a capability (QR codes, charts, validators, etc.), it calls anytool with a prompt describing what it needs.
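For example, the same setup handles a completely different capability with no code changes; only the prompt differs from Step 3 (an illustrative call):

// No new tool definitions needed, just ask for a different capability.
const validation = await generateText({
  model: openai('gpt-5'),
  tools: { anytool },
  prompt: 'Validate whether "support@example.com" is a well-formed email address'
})
console.log(validation.text)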
How It Works
- Your LLM decides it needs a tool (e.g., "QR code generator")
- It calls anytool with prompt + input
- anytool generates the code (or uses cached version)
- Executes securely and returns the result
- Your LLM uses the output
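To check this flow end to end without an LLM, you can call the endpoint directly from a small script. A minimal sketch; the request body mirrors the execute handler in Step 3, and the file name and runner are only suggestions:

// check.ts: hit the anytool endpoint directly, no LLM involved.
// Assumes ANYTOOL_API_KEY is set (Step 1); run with e.g. `npx tsx check.ts`.
const response = await fetch('https://anytoolhq.com/api/tool', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.ANYTOOL_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    prompt: 'Generate a QR code for a URL',
    input: 'https://example.com'
  })
})
const result = await response.json()
console.log(result.output)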
Next Steps
- See more examples (LangChain, Python, custom)
- Test tools in the web dashboard
- Learn about output types