# Getting Started with GPT-3 DaVinci Codex
Learn how to integrate the GPT-3 DaVinci Codex language model into your applications.
## Installation
Install the DaVinci Codex client library using your preferred package manager:
```bash
# Using npm
npm install @davinci/codex

# Using yarn
yarn add @davinci/codex

# Using pnpm
pnpm add @davinci/codex
```
## Quick Start
Get up and running with DaVinci Codex in minutes. Here's a simple example to generate your first AI completion:
```javascript
import { DaVinciCodex } from '@davinci/codex';

const codex = new DaVinciCodex({
  apiKey: 'your-api-key'
});

async function generateText() {
  const response = await codex.completions.create({
    model: 'davinci-codex',
    prompt: 'Write a function to calculate fibonacci numbers',
    max_tokens: 150,
    temperature: 0.7
  });

  console.log(response.choices[0].text);
}

generateText();
```
## API Authentication
To use DaVinci Codex, you'll need an API key. Here's how to authenticate your requests:
> **Keep your API key secure.** Never expose your API key in client-side code. Always use environment variables or secure server-side storage.
```javascript
// Set your API key as an environment variable outside your code,
// e.g. in your shell: export DAVINCI_CODEX_API_KEY=your-secret-key
// Then read it when constructing the client:
const codex = new DaVinciCodex({
  apiKey: process.env.DAVINCI_CODEX_API_KEY
});
```
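During local development, a common alternative is to keep the key in a `.env` file and load it with the `dotenv` package. This is a minimal sketch assuming `dotenv` is installed and your `.env` file defines `DAVINCI_CODEX_API_KEY`; `dotenv` is not part of `@davinci/codex` itself.

```javascript
// Minimal sketch (development only): load variables from a local .env file.
// Assumes `dotenv` is installed and .env contains DAVINCI_CODEX_API_KEY=your-secret-key
import 'dotenv/config';
import { DaVinciCodex } from '@davinci/codex';

const codex = new DaVinciCodex({
  apiKey: process.env.DAVINCI_CODEX_API_KEY
});
```

Make sure `.env` is listed in `.gitignore` so the key is never committed.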
## Text Completions
The completions endpoint is the core of DaVinci Codex. Use it to generate text, complete code, or create conversational AI:
javascript
const completion = await codex.completions.create({
model: 'davinci-codex',
prompt: 'Explain quantum computing in simple terms:',
max_tokens: 100,
temperature: 0.7,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0
});
console.log(completion.choices[0].text);
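The sampling settings above are illustrative. For code completion you often want more deterministic output; a temperature of 0 makes the model pick the most likely token at each step. The prompt and values below are examples, not required settings.

```javascript
// Illustrative only: temperature 0 gives more deterministic, repeatable output,
// which is usually preferable when completing code.
const codeCompletion = await codex.completions.create({
  model: 'davinci-codex',
  prompt: '// JavaScript: a function that reverses a string\n',
  max_tokens: 80,
  temperature: 0
});

console.log(codeCompletion.choices[0].text);
```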
## Rate Limits
To ensure fair usage and optimal performance, DaVinci Codex implements rate limiting:
| Plan | Requests per minute | Tokens per minute |
|---|---|---|
| Free | 20 | 40,000 |
| Pro | 3,500 | 350,000 |
| Enterprise | Custom | Custom |
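When you exceed these limits, requests are rejected until the window resets, so production code should retry with backoff rather than failing outright. The exact error shape returned by `@davinci/codex` isn't documented here, so the helper below is a sketch: it assumes rate-limit failures surface as a thrown error with a `status` property of `429`; adjust the check to match the client's actual behavior.

```javascript
// Sketch of a retry wrapper with exponential backoff.
// Assumes rate-limit errors are thrown with `status === 429` (an assumption, not a documented API).
async function withBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const isRateLimit = err && err.status === 429;
      if (!isRateLimit || attempt === maxRetries) throw err;

      // Wait 1s, 2s, 4s, ... before retrying.
      const delayMs = 1000 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage
const result = await withBackoff(() =>
  codex.completions.create({
    model: 'davinci-codex',
    prompt: 'Explain quantum computing in simple terms:',
    max_tokens: 100
  })
);
```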