
Llama 4 Scout Instruct

meta-llama/llama-4-scout-17b-16e-instruct
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a 109B total. It accepts native multimodal input (text and image) and produces multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout routes each token through one of 16 experts and offers a native context length of up to 10 million tokens (this endpoint exposes 131,072 tokens; see below), with a training corpus of roughly 40 trillion tokens. Built for high efficiency and local or commercial deployment, Llama 4 Scout uses early fusion for seamless modality integration and is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. Released under the Llama 4 Community License, it has a knowledge cutoff of August 2024 and launched publicly on April 5, 2025.

Features

Pay-as-you-go

$0.1 / $0.5
per 1M tokens (input/output)
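
For reference, here is a minimal sketch of how these rates translate into a per-request cost, assuming the prices apply linearly with no minimum charge and that token counts come from the API's usage statistics:

INPUT_PRICE_PER_M = 0.10   # USD per 1,000,000 input tokens (rate listed above)
OUTPUT_PRICE_PER_M = 0.50  # USD per 1,000,000 output tokens (rate listed above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Estimated USD cost of a single request at the listed pay-as-you-go rates.
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 2,000 input tokens and 500 output tokens ≈ $0.00045
print(f"${estimate_cost(2_000, 500):.6f}")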

Use the following code example to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=131072,
    temperature=0.7
)

print(response.choices[0].message.content)
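
The same endpoint can also stream tokens as they are generated. The sketch below assumes the standard streaming behavior of the OpenAI Python SDK (stream=True) and is otherwise unverified against this provider:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

# Request a streamed response; chunks arrive incrementally as they are generated.
stream = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "Write a haiku about mountains."}],
    stream=True,
    temperature=0.7
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()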

Information

Provider
Llama
Quantization
bf16

Supported Features

Context Length
131072
Max Output
131072
Function Calling
Supported (see the sketch below)
Input Capabilities
text
Output Capabilities
text
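
Since function calling is listed as supported, a minimal tool-use sketch with the OpenAI SDK might look like the following. The get_weather tool and its schema are hypothetical examples, not part of this API; whether the model actually emits a tool call depends on the prompt:

import json
from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

# A hypothetical tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; inspect its name and JSON arguments.
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)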