{
  "generated_at": "2026-04-24T15:01:35.625676+00:00",
  "slug": "hirescrape-com-api-tools-reddit",
  "title": "Hirescrape \u00b7 Reddit Scraping Tool",
  "url": "https://hirescrape.com/api/tools/reddit",
  "category": "data",
  "summary": "Extract Reddit posts, comments, and metadata via automated web scraping.",
  "seo": {
    "title": "Hirescrape Reddit API | x402 Web Scraper",
    "description": "Scrape Reddit posts and comments via Hirescrape x402 API. 0.07 USDC per call on Base. Pay-per-call web scraper for AI agents."
  },
  "use_cases": [
    "Monitor Reddit discussions for brand sentiment",
    "Extract market intelligence from finance subreddits",
    "Research trending topics across Reddit communities"
  ],
  "ideal_buyer": "AI agents and researchers requiring programmatic Reddit data extraction.",
  "example_prompt": "Scrape recent posts from r/MachineLearning about AI agents",
  "example_request_body": {
    "limit": 10,
    "query": "AI agents",
    "subreddit": "MachineLearning"
  },
  "risk_notes": [],
  "pricing_sanity": {
    "flag": "expensive",
    "ratio": 3.5,
    "median_category_atomic": 20000
  },
  "pricing_review_required": false,
  "pricing_decimal_suspect": false,
  "trust_tier": "indexed_external",
  "accepts": [
    {
      "scheme": "exact",
      "network": "base",
      "pay_to": "0xb5194a98dbdbb7028b585db26b972e7f0f3f826a",
      "asset": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
      "max_amount_required_atomic": "70000",
      "max_timeout_seconds": 60,
      "verified": false,
      "hints": {
        "input": {
          "type": "http",
          "method": "POST",
          "bodyFields": {
            "sort": {
              "enum": [
                "hot",
                "new",
                "top",
                "rising",
                "controversial",
                "relevance",
                "comments"
              ],
              "type": "string",
              "description": "**How to sort results**\n\nFor Scrape Subreddit:\n• 🔥 **hot** (trending now)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 📈 **rising** (gaining traction)\n• ⚡ **controversial** (most debated)\n\nFor Search:\n• 🎯 **relevance** (best match)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 💬 **comments** (most discussed)"
            },
            "limit": {
              "type": "integer",
              "description": "**Maximum number of posts/results to return**\n\n⚠️ Higher = more data but slower and costs more\n💰 Each post with comments costs more than posts only\n\nRecommended:\n• 🟢 Quick test: 10-25\n• 🟡 Normal: 50-100\n• 🔴 Large: 200-500"
            },
            "query": {
              "type": "string",
              "description": "**Keyword to search or question to ask**\n\n✅ Required for: Search Posts, Search Comments, Find Subreddits, Reddit AI Answers\n❌ Ignore for: Scrape Subreddit, Fetch Post\n\nExamples:\n• `best web scraping tools 2026`\n• `how to learn Python`\n• `AI automation`"
            },
            "action": {
              "enum": [
                "scrape_subreddit",
                "search_posts",
                "search_comments",
                "search_subreddits",
                "fetch_post",
                "reddit_answers",
                "ads_search",
                "ad"
              ],
              "type": "string",
              "required": true,
              "description": "Choose your action — then scroll down and fill only the fields marked for that action:\n\n• **Scrape Subreddit** → ✅ Subreddit\n• **Search Posts** → ✅ Query + ⚠️ Subreddit (optional)\n• **Search Comments** → ✅ Query + ⚠️ Subreddit (optional)\n• **Find Subreddits** → ✅ Query\n• **Fetch Post** → ✅ Post URL\n• **Reddit AI Answers** → ✅ Query"
            },
            "postUrl": {
              "type": "string",
              "description": "**Full Reddit post URL**\n\n✅ Required for: Fetch Single Post\n❌ Ignore for: All other actions\n\nExample: `https://www.reddit.com/r/technology/comments/abc123/my_post/`"
            },
            "threads": {
              "type": "integer",
              "description": "**Number of parallel workers**\n\n🟢 Low (1-5): Slower but safer, less proxy bandwidth\n🟡 Medium (6-10): Balanced speed and reliability\n🔴 High (11-20): Fastest but uses more proxy bandwidth\n\n⚠️ Only applies to: **Scrape Subreddit**, **Find Subreddits**\n💡 Higher = faster but may trigger rate limits"
            },
            "subreddit": {
              "type": "string",
              "description": "**Subreddit name without the r/ prefix**\n\n✅ Required for: Scrape Subreddit\n⚠️ Optional for: Search Posts, Search Comments\n❌ Ignore for: Find Subreddits, Fetch Post, Reddit AI Answers\n\nExample: `technology` (not `r/technology`)"
            },
            "timeFilter": {
              "enum": [
                "hour",
                "day",
                "week",
                "month",
                "year",
                "all"
              ],
              "type": "string",
              "description": "**Time range for results**\n\n⚠️ Only applies when Sort = **top** or **controversial**\n\nOptions: past hour, day, week, month, year, or all time"
            },
            "includeComments": {
              "type": "boolean",
              "description": "**Fetch full comment trees for each post**\n\n✅ Enabled: Get complete discussions with nested replies\n❌ Disabled: Posts only (faster, cheaper)\n\n⚠️ Only applies to: **Scrape Subreddit**\n💰 Comments significantly increase cost and time\n📊 A post with 500 comments = 500 API calls"
            },
            "proxyConfiguration": {
              "type": "object",
              "description": "**Reddit requires residential proxies for best results**\n\n📌 **Default Setup (Recommended):**\n• ✅ Use Apify Proxy\n• 🏠 Residential proxies (best success rate)\n• 💰 Costs: $8/GB\n\n💡 **Alternative:**\n• 🏢 BUYPROXIES94952 (free datacenter, 5 IPs)\n• ⚠️ Higher chance of Reddit blocks\n• 💵 Free but less reliable"
            }
          }
        },
        "output": {
          "items": [
            {
              "score": 124,
              "title": "Building AI agents that pay per call",
              "author": "ai_dev",
              "permalink": "/r/MachineLearning/comments/abc123/building_ai_agents/",
              "subreddit": "MachineLearning",
              "created_utc": 1776480000,
              "num_comments": 37
            }
          ],
          "runId": "sc_mo4example",
          "payment": {
            "amount": "0.040000",
            "currency": "USD",
            "protocol": "x402"
          },
          "duration": 4
        }
      }
    }
  ],
  "duplicate_cluster_id": "data-cl-09a755d6df5e",
  "origin": {
    "slug": "hirescrape-com",
    "host": "hirescrape.com",
    "title": "Hirescrape \u2014 Pay-per-call scraper API for AI agents",
    "description": "Pay-per-call web scrapers for AI agents. 28 tools across Reddit, 8-board job search (LinkedIn \u00b7 Indeed \u00b7 Glassdoor \u00b7 Google Jobs \u00b7 +5), TikTok \u00b7 Douyin \u00b7 Bilibili, cross-platform trend research, social media, and ad libraries. No API keys. Agent wallets settle USDC on Tempo or Base via x402 + MPP.",
    "url": "https://hirescrape.com",
    "og_image": "https://www.hirescrape.com/opengraph-image?c645bc0ba1f3236d",
    "favicon": "https://hirescrape.com/favicon.ico"
  },
  "json_ld": {
    "@id": "https://x402all.com/resource/hirescrape-com-api-tools-reddit",
    "url": "https://x402all.com/resource/hirescrape-com-api-tools-reddit",
    "name": "Hirescrape \u00b7 Reddit Scraping Tool",
    "@type": "WebAPI",
    "offers": {
      "url": "https://x402all.com/resource/hirescrape-com-api-tools-reddit",
      "@type": "Offer",
      "price": "0.07",
      "availability": "https://schema.org/InStock",
      "priceCurrency": "USDC",
      "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "price": "0.070000",
        "unitText": "call",
        "priceCurrency": "USDC"
      },
      "eligibleCustomerType": "Agent",
      "additionalProperty": [
        {
          "@type": "PropertyValue",
          "name": "paymentNetwork",
          "value": "base"
        },
        {
          "@type": "PropertyValue",
          "name": "paymentAsset",
          "value": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"
        }
      ]
    },
    "sameAs": "https://hirescrape.com/api/tools/reddit",
    "@context": "https://schema.org",
    "provider": {
      "@id": "https://x402all.com/server/hirescrape-com",
      "url": "https://hirescrape.com",
      "name": "Hirescrape \u2014 Pay-per-call scraper API for AI agents",
      "@type": "Organization"
    },
    "identifier": "hirescrape-com-api-tools-reddit",
    "description": "Scrape Reddit posts and comments via Hirescrape x402 API. 0.07 USDC per call on Base. Pay-per-call web scraper for AI agents.",
    "potentialAction": {
      "@type": "BuyAction",
      "target": "https://axon402.com/test-buy?resource=hirescrape-com-api-tools-reddit",
      "description": "Test-buy this endpoint on AXON"
    },
    "applicationCategory": "data"
  },
  "axon_deep_link": "https://axon402.com/test-buy?resource=hirescrape-com-api-tools-reddit"
}
