ZeroReader · Llama 3.2 3B Inference

Served by api.zeroreader.com · Indexed external

Provides text generation through Meta's Llama 3.2 3B model, optimized for speed and quality balance on straightforward natural language tasks.

What it does

  • Generate quick responses for chatbot interactions
  • Draft simple content with minimal latency
  • Classify or summarize short text inputs efficiently

Ideal buyer

Developers and agents needing fast, cost-effective LLM inference for simple tasks without premium model overhead.

Use with AXON

Run this through your governed agent wallet.

  1. Bootstrap AXON once with npx @axon402/init.
  2. Use the AXON runtime MCP tools search_x402_services or inspect_x402_offer for this service.
  3. Quote, test-buy, then run the governed paid fetch through AXON.

Send this

Prompt for your agent

A natural-language instruction for your LLM agent — with this endpoint exposed as a tool — to call this resource. Not sent to the endpoint; the endpoint consumes the JSON body below.

Pasting this prompt into a raw ChatGPT or unconfigured agent will not execute the paid endpoint flow. Run it through an agent with the AXON runtime / MCP tools exposed (see “Use with AXON” above) so the 402 challenge, quote, and governed fetch are handled for you.

Summarize this customer feedback in one sentence: 'The app crashes when I try to upload photos, but the design is beautiful and intuitive otherwise.'

Endpoint request body

The JSON payload your agent sends to the endpoint.

application/json
{
  "messages": [
    {
      "role": "user",
      "content": "Summarize this customer feedback in one sentence: 'The app crashes when I try to upload photos, but the design is beautiful and intuitive otherwise.'"
    }
  ],
  "max_tokens": 256,
  "temperature": 0.7
}
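For reference, a minimal Python sketch that builds this request body programmatically. The build_body helper and its defaults are illustrative (taken from the example values above), not part of the service's API:

```python
import json

def build_body(prompt, max_tokens=256, temperature=0.7):
    # Assemble the endpoint's JSON payload: a single user message plus
    # the generation parameters shown in the listing's example.
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

body = build_body(
    "Summarize this customer feedback in one sentence: "
    "'The app crashes when I try to upload photos, but the "
    "design is beautiful and intuitive otherwise.'"
)
parsed = json.loads(body)  # round-trip to confirm the payload is valid JSON
```

Serializing through json.dumps also sidesteps the shell-quoting pitfalls that hand-written JSON strings (like the curl example below) run into.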

Advanced HTTP details

For integrators who need the raw protocol surface. Most agents should use AXON above instead of calling these directly.

curl fallback

curl https://api.zeroreader.com/v1/ai/llama-3b \
  -H "Content-Type: application/json" \
  -H "X-PAYMENT: [signed_payment_envelope]" \
  -d '{"messages":[{"role":"user","content":"Summarize this customer feedback in one sentence: '\''The app crashes when I try to upload photos, but the design is beautiful and intuitive otherwise.'\''"}],"max_tokens":256,"temperature":0.7}'
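The curl call presupposes an already-signed payment envelope. The underlying 402 exchange can be sketched as follows; the helper names (paid_fetch, sign_payment) and the stub transport are hypothetical stand-ins for an x402 client and wallet, not part of this service:

```python
import json

def paid_fetch(post, url, body, sign_payment):
    # post(url, headers, body) -> (status, payload); sign_payment(reqs) -> envelope.
    status, payload = post(url, {"Content-Type": "application/json"}, body)
    if status != 402:
        return payload  # no payment challenge issued
    # The 402 response is expected to carry the payment requirements
    # (amount, pay-to address, asset) that the wallet signs into an envelope.
    envelope = sign_payment(payload)
    status, payload = post(
        url, {"Content-Type": "application/json", "X-PAYMENT": envelope}, body
    )
    if status != 200:
        raise RuntimeError(f"payment not accepted: {status}")
    return payload

# Offline demonstration with a stub transport (no real network or wallet).
def stub_post(url, headers, body):
    if "X-PAYMENT" not in headers:
        return 402, {"maxAmountRequired": "2000"}
    return 200, {"result": "one-sentence summary"}

out = paid_fetch(
    stub_post,
    "https://api.zeroreader.com/v1/ai/llama-3b",
    json.dumps({"messages": [{"role": "user", "content": "..."}]}),
    lambda reqs: "signed-envelope",
)
print(out["result"])
```

Injecting the transport keeps the challenge/retry logic testable without touching the network; AXON's governed fetch plays the role of both post and sign_payment here.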

Payment & settlement details

Raw on-chain settlement parameters. AXON above handles these automatically through quote / test-buy / governed fetch.

Network: base
Scheme: exact
Price: $0.0020 per call
Pay-to address: 0xca99149c1a5959f7e5968259178f974aacc70f55
Timeout: 300s
Asset: 0x8335…2913
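As a worked example of the settlement arithmetic: assuming the asset is a 6-decimal USD stablecoin (an assumption — the asset address above is truncated, so this is not confirmed), the $0.0020 per-call price maps to on-chain atomic units like so:

```python
from decimal import Decimal

PRICE_USD = Decimal("0.0020")   # listed price per call
ASSET_DECIMALS = 6              # assumption: 6-decimal stablecoin

# Convert the dollar price into the asset's smallest on-chain unit.
atomic = int(PRICE_USD * 10 ** ASSET_DECIMALS)
print(atomic)  # 2000 atomic units per call under these assumptions
```

Using Decimal avoids the binary-float rounding that would make 0.0020 * 10**6 come out slightly off.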

Price & network

Cheapest call: $0.0020
Networks: base

Trust & risk

Trust tier: Indexed external
Pricing sanity: Cheap outlier (ratio 0.10×)
Risk flags: No risks flagged

Indexed from facilitator discovery data
