
MemeStack API

Search visual content programmatically

MemeStack is a search engine for memes, infographics, charts, and visual content. Every image is AI-analyzed for captions, tags, and OCR text. All read endpoints are public — no authentication required.

What is MemeStack?

MemeStack is a searchable library of self-sufficient visual content — filled memes, editorial cartoons, infographics, charts, multi-panel comparisons, diagrams, and screenshots that make a point. Every approved image is automatically enriched with AI-generated metadata: a vision pipeline produces a human-readable caption, a structured set of topic tags, and full OCR extraction of any text visible in the image. That metadata is indexed and made available through the search API.

Search is unified — every query combines semantic similarity (AI vector embeddings) with keyword matching in a single pass. This means you can search for "proof of work explained" and find a diagram that never uses those exact words but visually explains the concept, while still surfacing images that contain the literal phrase in their caption or OCR-extracted text.

Images are ranked by Lightning Network zaps — Bitcoin micropayments sent by users to signal that an image is useful, accurate, or entertaining. A higher zap count indicates community-vetted quality. The leaderboard endpoints expose the top-ranked images by time period (24h, 7d, 30d, all-time).

Quick Start

Search for images

curl "https://api.memestack.ai/v1/images/search?q=bitcoin+halving"

Get image metadata

curl "https://api.memestack.ai/v1/images/{id}/meta"

Retrieve the image

curl "https://api.memestack.ai/v1/images/{id}/canonical"

Search

Unified search

Every query runs semantic similarity (AI vector embeddings) combined with keyword matching — no mode parameter needed. Works equally well for conceptual queries ("inflation explained", "proof of work diagram") and literal phrase lookups ("21 million", "Satoshi Nakamoto", "HODL").

Multi-tag filtering — tags

The tags parameter accepts a comma-separated list of tag slugs and applies AND logic — only images matching all specified tags are returned. The older tag parameter, which accepts a single tag, still works for backward compatibility.

curl "https://api.memestack.ai/v1/images/search?tags=bitcoin,charts&sort_by=zap_total_sats"
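The same request can be built programmatically by joining the tag slugs with commas and URL-encoding the query string. A minimal Python sketch (the helper name is ours, not part of any official client):

```python
from urllib.parse import urlencode

BASE = "https://api.memestack.ai/v1/images/search"

def build_search_url(tags, sort_by="zap_total_sats"):
    """Build a multi-tag (AND-logic) search URL for /v1/images/search."""
    query = urlencode({"tags": ",".join(tags), "sort_by": sort_by})
    return f"{BASE}?{query}"

url = build_search_url(["bitcoin", "charts"])
# → https://api.memestack.ai/v1/images/search?tags=bitcoin%2Ccharts&sort_by=zap_total_sats
```

Note that urlencode percent-encodes the comma as %2C, which is equivalent to a literal comma in a query value.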

Endpoints Reference

Base URL: https://api.memestack.ai. All endpoints listed below are public — no API key or authentication header required. All are GET requests except /v1/images/reverse-search, which is a POST.

| Method | Path | Description |
| --- | --- | --- |
| GET | /v1/images/search | Search images — unified semantic + keyword |
| GET | /v1/images/{id}/meta | Image metadata (caption, tags, OCR) |
| GET | /v1/images/{id}/canonical | Web-optimized image (max 2500px) |
| GET | /v1/images/{id}/thumbnail | Thumbnail (max 768px) |
| GET | /v1/images/{id}/similar | Perceptually similar images |
| GET | /v1/images/{id}/related | Semantically related images |
| POST | /v1/images/reverse-search | Reverse image search by phash (HTTPS or data: URL, 10/min/IP) |
| GET | /v1/users/{pubkey} | User profile and stats |
| GET | /v1/users/{pubkey}/images | User's images (paginated) |
| GET | /v1/leaderboard/images | Top zapped images by period |
| GET | /v1/leaderboard/zappers | Top zappers by period |
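The reverse-search endpoint is the only POST. Its exact request body is not documented here; assuming it accepts a JSON object carrying the image URL (the field name "url" is our assumption — consult the full reference), building the request in Python might look like:

```python
import json
from urllib.request import Request

def build_reverse_search_request(image_url):
    """Build (but do not send) a POST request for /v1/images/reverse-search.

    Assumed body shape: {"url": "<https or data: URL>"} — unverified.
    """
    body = json.dumps({"url": image_url}).encode("utf-8")
    return Request(
        "https://api.memestack.ai/v1/images/reverse-search",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_reverse_search_request("https://example.com/chart.png")
# Send with urllib.request.urlopen(req); mind the 10 requests/min/IP limit.
```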

For AI Agents

If you are an AI agent looking for images to answer a user's question, here is the recommended workflow:

Step 1 — Search for relevant images

GET https://api.memestack.ai/v1/images/search?q={concept}&limit=3

Use a concise natural-language description of what the user is asking about. The response includes an array of image records, each with id, caption, alt_text, tags, and zap_total_sats.
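The concept string must be URL-encoded before it goes into the q parameter. A minimal sketch of Step 1 (the helper name is ours):

```python
from urllib.parse import quote

def search_url(concept, limit=3):
    """Step 1: build a search request for a natural-language concept."""
    return f"https://api.memestack.ai/v1/images/search?q={quote(concept)}&limit={limit}"

url = search_url("lightning network explained")
# → https://api.memestack.ai/v1/images/search?q=lightning%20network%20explained&limit=3
```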

Step 2 — Pick the best result

Read the caption and alt_text fields to verify relevance. Prefer images with higher zap_total_sats when multiple results are equally relevant — community zaps signal quality and accuracy.
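Once irrelevant results are filtered out, the zap tiebreak from Step 2 is a one-liner. Sketch with illustrative records mirroring the fields named above (the values are made up):

```python
def pick_best(results):
    """Step 2: among equally relevant results, prefer the highest zap total."""
    return max(results, key=lambda r: r.get("zap_total_sats", 0))

results = [
    {"id": "img_a", "caption": "Halving schedule chart", "zap_total_sats": 2100},
    {"id": "img_b", "caption": "Halving meme", "zap_total_sats": 350},
]
best = pick_best(results)  # → the "img_a" record
```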

Step 3 — Retrieve the image

GET https://api.memestack.ai/v1/images/{id}/canonical

Returns the web-optimized version of the image (max 2500px on the longest side). For a smaller preview, use /thumbnail (max 768px) instead.

Step 4 — Use metadata when presenting the image

Use the caption as a human-readable description and alt_text as the image alt attribute for accessibility. The text_in_image field contains full OCR output — useful for images that are text-heavy (charts, slides, screenshots).
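Step 4 might be rendered as HTML like this — a sketch, with a hypothetical image id and made-up metadata values, escaping the fields before embedding them:

```python
import html

def render_image_html(image_id, meta):
    """Step 4: present the image using caption and alt_text from /meta."""
    src = f"https://api.memestack.ai/v1/images/{image_id}/canonical"
    alt = html.escape(meta.get("alt_text", ""))
    caption = html.escape(meta.get("caption", ""))
    return (f'<figure><img src="{src}" alt="{alt}">'
            f"<figcaption>{caption}</figcaption></figure>")

meta = {"caption": "Bitcoin supply curve",
        "alt_text": "Chart of Bitcoin issuance over time"}
snippet = render_image_html("abc123", meta)
```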

MCP Server

The Model Context Protocol (MCP) lets AI agents use MemeStack tools natively — no HTTP requests needed. Connect your AI client to the remote MCP server and call tools like search_images, get_image, and find_similar directly.

Claude Desktop / Claude Code

{
  "mcpServers": {
    "memestack": {
      "type": "url",
      "url": "https://mcp.memestack.ai/mcp"
    }
  }
}

Fallback (clients without remote MCP support)

{
  "mcpServers": {
    "memestack": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.memestack.ai/mcp"]
    }
  }
}

Available Tools

| Tool | Description |
| --- | --- |
| search_images | Unified semantic + keyword image search |
| get_image | Full metadata for one image |
| find_similar | Visually similar images (perceptual hash) |
| find_related | Semantically related images (AI embeddings) |
| browse_images | Browse by tag, trending, or recent |
| get_user_profile | User profile and stats |
| get_leaderboard | Top images or top zappers by period |

All tools return rich metadata including captions, tags, zap stats, and direct URLs. No authentication required.
