Node.js Quickstart
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of Prompt Engineering and HTTP Streaming, you can optionally read these documents first.
Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- A Vercel AI Gateway API key.
If you haven't obtained your Vercel AI Gateway API key, you can do so by signing up on the Vercel website.
Set Up Your Application
Start by creating a new directory using the mkdir command. Change into your new directory and then run the pnpm init command. This will create a package.json in your new directory.
```bash
mkdir my-ai-app
cd my-ai-app
pnpm init
```

Install Dependencies
Install ai, the AI SDK, along with other necessary dependencies.
The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about available providers and building custom providers in the providers section.
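For example, here is a minimal sketch using streamText (introduced later in this tutorial); both model IDs are just examples, and swapping providers is nothing more than changing the string:

```ts
import { streamText } from 'ai';

// Changing model or provider is a one-line edit to this string.
const result = streamText({
  model: 'anthropic/claude-sonnet-4.5', // e.g. swap to 'openai/gpt-5.1'
  messages: [{ role: 'user', content: 'Hello!' }],
});
```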
```bash
pnpm add ai@beta zod dotenv
pnpm add -D @types/node tsx typescript
```

The `ai` package contains the AI SDK. You will use `zod` to define type-safe schemas that you will pass to the large language model (LLM). You will use `dotenv` to access environment variables (your Vercel AI Gateway key) within your application. There are also three development dependencies, installed with the `-D` flag, that are necessary to run your TypeScript code.
Configure Vercel AI Gateway API Key
Create a .env file in your project's root directory and add your Vercel AI Gateway API Key. This key is used to authenticate your application with the Vercel AI Gateway service.
```bash
touch .env
```
Edit the .env file:
```
AI_GATEWAY_API_KEY=xxxxxxxxx
```

Replace xxxxxxxxx with your actual Vercel AI Gateway API key.

The AI SDK will use the `AI_GATEWAY_API_KEY` environment variable to authenticate with Vercel AI Gateway.
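If you want to confirm the key is being picked up, here is a small optional sanity check (our own addition, not part of the tutorial code):

```ts
// Optional: verify dotenv loaded the key before calling the SDK.
import 'dotenv/config';

if (!process.env.AI_GATEWAY_API_KEY) {
  throw new Error('AI_GATEWAY_API_KEY is not set. Did you create the .env file?');
}
```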
Create Your Application
Create an index.ts file in the root of your project and add the following code:
```ts
import { ModelMessage, streamText } from 'ai';
import 'dotenv/config';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: ModelMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.5',
      messages,
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Let's take a look at what is happening in this code:
- Set up a readline interface to take input from the terminal, enabling interactive sessions directly from the command line.
- Initialize an array called `messages` to store the history of your conversation. This history allows the agent to maintain context in ongoing dialogues (see the sketch after this list).
- In the `main` function:
  - Prompt for and capture user input, storing it in `userInput`.
  - Add user input to the `messages` array as a user message.
  - Call `streamText`, which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider and `messages`.
  - Iterate over the text stream returned by the `streamText` function (`result.textStream`) and print the contents of the stream to the terminal.
  - Add the assistant's response to the `messages` array.
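For example, after one round trip through the loop, the `messages` array might look like this (the contents are illustrative):

```ts
import { ModelMessage } from 'ai';

// Illustrative conversation history after one exchange.
const messages: ModelMessage[] = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi there! How can I help you today?' },
];
```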
Running Your Application
With that, you have built everything you need for your agent! To start your application, use the command:
```bash
pnpm tsx index.ts
```
You should see a prompt in your terminal. Test it out by entering a message and watching the AI agent respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Node.js.
Choosing a Provider
The AI SDK supports dozens of model providers through first-party, OpenAI-compatible, and community packages.
This quickstart uses the Vercel AI Gateway provider, which is the default global provider. This means you can access models using a simple string in the model configuration:
```ts
model: 'anthropic/claude-sonnet-4.5';
```

You can also explicitly import and use the gateway provider in two other equivalent ways:

```ts
// Option 1: Import from 'ai' package (included by default)
import { gateway } from 'ai';

model: gateway('openai/gpt-5.1');
```

```ts
// Option 2: Install and import from '@ai-sdk/gateway' package
import { gateway } from '@ai-sdk/gateway';

model: gateway('openai/gpt-5.1');
```

Using other providers
To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly:
```bash
pnpm add @ai-sdk/openai@beta
```

```ts
import { openai } from '@ai-sdk/openai';

model: openai('gpt-5.1');
```

Enhance Your Agent with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where tools come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the agent would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your agent by adding a simple weather tool.
Update Your Application
Modify your index.ts file to include the new weather tool:
```ts
import { ModelMessage, streamText, tool } from 'ai';
import 'dotenv/config';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: ModelMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.5',
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          inputSchema: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

In this updated code:
- You import the `tool` function from the `ai` package.
- You define a `tools` object with a `weather` tool. This tool:
  - Has a description that helps the agent understand when to use it.
  - Defines `inputSchema` using a Zod schema, specifying that it requires a `location` string to execute this tool. The agent will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information.
  - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your agent can "fetch" weather information for any location the user asks about. When the agent determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The execute function will then be automatically run, and the results will be used by the agent to generate its response.
Try asking something like "What's the weather in New York?" and see how the agent uses the new tool.
Notice the blank "assistant" response? This is because instead of generating a text response, the agent generated a tool call. You can access the tool call and subsequent tool result in the `toolCalls` and `toolResults` properties of the `result` object.
```ts
import { ModelMessage, streamText, tool } from 'ai';
import 'dotenv/config';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: ModelMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.5',
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          inputSchema: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    console.log(await result.toolCalls);
    console.log(await result.toolResults);

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Now, when you ask about the weather, you'll see the tool call and its result displayed in your chat interface.
Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the agent isn't using this information to answer your original query. This is because once the agent generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using stopWhen. This feature will automatically send tool results back to the agent to trigger an additional generation until the stopping condition you define is met. In this case, you want the agent to answer your question using the results from the weather tool.
Update Your Application
Modify your index.ts file to configure stopping conditions with stopWhen:
```ts
import { ModelMessage, streamText, tool, stepCountIs } from 'ai';
import 'dotenv/config';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: ModelMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.5',
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          inputSchema: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
      },
      stopWhen: stepCountIs(5),
      onStepFinish: async ({ toolResults }) => {
        if (toolResults.length) {
          console.log(JSON.stringify(toolResults, null, 2));
        }
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

In this updated code:
- You set `stopWhen` to `stepCountIs(5)`, allowing the agent to use up to 5 "steps" for any given generation.
- You add an `onStepFinish` callback to log any `toolResults` from each step of the interaction, helping you understand the agent's tool usage. This means you can also delete the `toolCalls` and `toolResults` `console.log` statements from the previous example.
Now, when you ask about the weather in a location, you should see the agent using the weather tool results to answer your question.
Setting `stopWhen: stepCountIs(5)` enables more complex interactions, allowing the agent to gather and process information over several steps if needed. You can see this in action by adding another tool that converts temperatures from Fahrenheit to Celsius.
Adding a second tool
Update your index.ts file to add a new tool that converts temperatures from Fahrenheit to Celsius:
```ts
import { ModelMessage, streamText, tool, stepCountIs } from 'ai';
import 'dotenv/config';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: ModelMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.5',
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          inputSchema: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
        convertFahrenheitToCelsius: tool({
          description: 'Convert a temperature in fahrenheit to celsius',
          inputSchema: z.object({
            temperature: z
              .number()
              .describe('The temperature in fahrenheit to convert'),
          }),
          execute: async ({ temperature }) => {
            const celsius = Math.round((temperature - 32) * (5 / 9));
            return {
              celsius,
            };
          },
        }),
      },
      stopWhen: stepCountIs(5),
      onStepFinish: async ({ toolResults }) => {
        if (toolResults.length) {
          console.log(JSON.stringify(toolResults, null, 2));
        }
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
- The agent will call the weather tool for New York.
- You'll see the tool result logged.
- It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
- The agent will then use that information to provide a natural language response about the weather in New York.
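For instance, if the weather tool returns 72°F, the conversion tool computes Math.round((72 - 32) * 5 / 9) = 22, so the agent can answer with roughly 22°C.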
This multi-step approach allows the agent to gather information and use it to provide more accurate and contextual responses, making your agent considerably more useful.
This example demonstrates how tools can expand your agent's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the agent to access and process real-world data in real time and perform actions that interact with the outside world. Tools bridge the gap between the agent's knowledge cutoff and current information, while also enabling it to take meaningful actions beyond just generating text responses.
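As a rough sketch of what such a tool might look like, here is the weather tool backed by an HTTP API. The endpoint URL and response shape below are hypothetical placeholders; substitute whatever weather service you actually use:

```ts
import { tool } from 'ai';
import { z } from 'zod';

// Sketch of a tool backed by a real HTTP API. The endpoint and response
// shape below are hypothetical placeholders, not a real service.
const weather = tool({
  description: 'Get the current weather in a location (fahrenheit)',
  inputSchema: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => {
    // Hypothetical endpoint; replace with your provider's URL and auth.
    const response = await fetch(
      `https://api.example.com/weather?q=${encodeURIComponent(location)}`,
    );
    if (!response.ok) {
      throw new Error(`Weather API request failed: ${response.status}`);
    }
    // Assumed response shape: { temperatureF: number }
    const data = (await response.json()) as { temperatureF: number };
    return { location, temperature: data.temperatureF };
  },
});
```

Because `execute` runs in your Node.js process, it can use any server-side capability: HTTP requests, database queries, file access, and so on.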
Where to Next?
You've built an AI agent using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the documentation.
- If you're interested in diving deeper with guides, check out the RAG (retrieval-augmented generation) and multi-modal chatbot guides.
- To jumpstart your first AI project, explore available templates.