Models & Providers

FlowStack supports multiple AI providers and models, giving you flexibility to choose the right model for your use case and budget.

Supported Providers

OpenAI

| Model | Best For | Context Window |
| --- | --- | --- |
| GPT-4o | Complex reasoning, multi-modal (text + images) | 128K tokens |
| GPT-4o Mini | Fast, cost-effective general tasks | 128K tokens |
| GPT-4 Turbo | Advanced reasoning with large context | 128K tokens |
| GPT-3.5 Turbo | Simple tasks, high throughput | 16K tokens |

Anthropic

| Model | Best For | Context Window |
| --- | --- | --- |
| Claude 3.5 Sonnet | Best balance of speed and intelligence | 200K tokens |
| Claude 3 Opus | Most capable, complex analysis | 200K tokens |
| Claude 3 Haiku | Fastest, simple tasks | 200K tokens |

Google

| Model | Best For | Context Window |
| --- | --- | --- |
| Gemini 1.5 Pro | Long-context reasoning, multi-modal | 1M tokens |
| Gemini 1.5 Flash | Fast inference, cost-effective | 1M tokens |

Groq

| Model | Best For | Context Window |
| --- | --- | --- |
| Llama 3.1 70B | Open-source, fast inference | 128K tokens |
| Llama 3.1 8B | Ultra-fast, simple tasks | 128K tokens |
| Mixtral 8x7B | Balanced performance | 32K tokens |

Mistral

| Model | Best For | Context Window |
| --- | --- | --- |
| Mistral Large | Complex reasoning | 128K tokens |
| Mistral Medium | Balanced performance | 32K tokens |
| Mistral Small | Cost-effective | 32K tokens |
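
A model's context window bounds the combined size of the prompt and the response. As a sketch (not a FlowStack API; the model identifiers and token counts below are assumptions drawn from the tables above), a small lookup can sanity-check whether a request fits before it is sent:

```python
# Hypothetical lookup of context windows (in tokens), taken from the tables above.
CONTEXT_WINDOWS = {
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
    "claude-3-5-sonnet": 200_000,
    "gemini-1.5-pro": 1_000_000,
    "llama-3.1-70b": 128_000,
    "mistral-large": 128_000,
}

def fits_in_context(model: str, prompt_tokens: int, max_response_tokens: int) -> bool:
    """True if the prompt plus the reserved response budget fit the model's window."""
    window = CONTEXT_WINDOWS.get(model)
    if window is None:
        raise KeyError(f"Unknown model: {model}")
    return prompt_tokens + max_response_tokens <= window
```

Note that the response budget counts against the same window, which is why a 128K-token prompt cannot be sent to a 128K-token model if any output is expected.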

Configuring a Provider

  1. Go to AI Studio → Settings (or configure inline when creating an agent)
  2. Select your provider
  3. Enter your API key
  4. The key is encrypted and stored securely (AES-256)
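
API keys should never be displayed in full once stored. As an illustrative sketch (this is not FlowStack's implementation; the function name is hypothetical), a dashboard typically masks all but the last few characters:

```python
def mask_api_key(key: str, visible: int = 4) -> str:
    """Show only the last few characters of an API key, as dashboards commonly do."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]
```

Server-side, the full key is what gets encrypted at rest (AES-256, per the step above); masking is purely a display concern.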

Choosing a Model

| Priority | Recommended Model |
| --- | --- |
| Best quality | GPT-4o or Claude 3.5 Sonnet |
| Fastest response | Groq Llama 3.1 8B or Gemini Flash |
| Lowest cost | GPT-4o Mini or Claude 3 Haiku |
| Longest context | Gemini 1.5 Pro (1M tokens) |
| Best for code | Claude 3.5 Sonnet or GPT-4o |
| Open-source | Groq Llama 3.1 70B |
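
If you route requests programmatically, the table above maps naturally onto a lookup. The sketch below is an assumption, not a FlowStack API; the priority keys and model identifiers are hypothetical names for the rows above:

```python
# Hypothetical mapping mirroring the recommendation table above.
RECOMMENDATIONS = {
    "best_quality": ["gpt-4o", "claude-3-5-sonnet"],
    "fastest_response": ["groq/llama-3.1-8b", "gemini-1.5-flash"],
    "lowest_cost": ["gpt-4o-mini", "claude-3-haiku"],
    "longest_context": ["gemini-1.5-pro"],
    "best_for_code": ["claude-3-5-sonnet", "gpt-4o"],
    "open_source": ["groq/llama-3.1-70b"],
}

def recommend(priority: str) -> str:
    """Return the first recommended model for a given priority."""
    try:
        return RECOMMENDATIONS[priority][0]
    except KeyError:
        raise ValueError(f"Unknown priority: {priority!r}") from None
```

Keeping a second entry per priority gives you an obvious fallback if the first choice is unavailable or rate-limited.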

Using Models in Workflows

In the Workflow Builder, use the Chat Model node:

  1. Add a Chat Model node to your workflow
  2. Select the provider and model
  3. Configure the connection (API key)
  4. Set parameters:
    • Temperature — 0 (deterministic) to 1 (creative)
    • Max Tokens — Maximum response length
    • Top P — Nucleus sampling threshold
  5. Connect it to an AI Agent node or use it standalone for text generation
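
The parameter ranges in step 4 can be captured in a small validated structure. This is a hedged sketch, not FlowStack's actual node schema; the class name, defaults, and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChatModelParams:
    """Hypothetical container for the Chat Model node parameters described above."""
    temperature: float = 0.7   # 0 = deterministic, 1 = creative
    max_tokens: int = 1024     # maximum response length
    top_p: float = 1.0         # nucleus sampling threshold

    def __post_init__(self) -> None:
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be between 0 and 1")
        if self.max_tokens <= 0:
            raise ValueError("max_tokens must be positive")
        if not 0.0 < self.top_p <= 1.0:
            raise ValueError("top_p must be in (0, 1]")
```

Validating at configuration time surfaces a bad value immediately rather than as a provider-side error mid-workflow. As a rule of thumb, tune either temperature or top_p, not both at once.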