
analytiq.one – Technical Specification

1. Overview

analytiq.one is an API-only crypto analytics service focused on the Base ecosystem. It provides four primary analytics endpoints:

1. Wallet Transaction Analysis – /v1/transaction/analyze
2. Token Sentiment Analysis – /v1/token/sentiment
3. Token Technical Analysis – /v1/token/technicals
4. Trending Token Discovery (Base) – /v1/token/trending

All complex logic is delegated to OpenAI Agents, which orchestrate a set of MCP tools exposed by a separate FastMCP server. The service is monetized via a dynamic, usage-based credit system driven by:

- LLM token usage (GPT-5)
- External API usage (BaseScan, Dexscreener, Binance, CoinGecko, Brightdata)

2. Stack & Core Components

2.1 Technology Stack

- Language: Python
- Web Framework: Flask
- AI Orchestration: OpenAI Agents SDK with GPT-5
- Database: PostgreSQL
- External Providers (via FastMCP tools):
  - BaseScan (Base chain tx data)
  - Dexscreener (on-chain Base token & pool metrics)
  - Binance (CEX OHLCV + stats)
  - CoinGecko (token metadata, socials, trending)
  - Brightdata / scrapers (Twitter, Reddit, news, Google)

2.2 High-Level Architecture

1. Client → Flask API
   - Sends an HTTP request with an Authorization header containing the API key.
   - Calls one of the /v1/... endpoints.
2. Flask API
   - Authenticates the API key.
   - Performs light input validation.
   - Initiates a corresponding OpenAI Agent run.
   - After the Agent returns, computes cost based on:
     - LLM token usage.
     - External API/tool usage.
   - Deducts credits in PostgreSQL.
   - Returns the Agent's JSON response.
3. OpenAI Agent (GPT-5)
   - One Agent per endpoint:
     - transaction_agent
     - sentiment_agent
     - technicals_agent
     - trending_agent
   - Each Agent has:
     - A dedicated system prompt.
     - A fixed tool set (MCP tools).
     - A contractually defined JSON output schema.
4. FastMCP Server
   - Exposes the tools used by Agents (BaseScan, Dexscreener, Binance, CoinGecko, Brightdata, TA utilities).
   - Handles all HTTP calls to external APIs and returns normalized data structures.
5. PostgreSQL
   - Stores:
     - Users and API keys.
     - Credit balances.
     - Request and pricing logs.
   - Supports dynamic, usage-based billing.

3. Authentication & Authorization

3.1 API Key

- Header: Authorization: <api-key>
- Keys are issued per user and stored hashed in PostgreSQL.
- Keys can be enabled or disabled (for abuse, non-payment, etc.).

3.2 Request Flow

1. Flask middleware extracts the Authorization header.
2. Looks up the API key in PostgreSQL; if invalid or inactive, returns 401 Unauthorized.
3. Attaches the user context (user ID, current balance) to the request.

4. Pricing & Credits

4.1 Pricing Model

Pricing is dynamic, driven by:

1. GPT-5 usage
   - Prompt tokens.
   - Completion tokens.
2. Tool usage
   - Each MCP tool call (BaseScan, Dexscreener, Brightdata scrape, etc.) has an associated cost weight.

4.2 Usage Accounting Per Request

For every API request:

1. Before the Agent run
   - Ensure the user has a positive credit balance (soft check).
2. Agent run
   - Execute the Agent with the appropriate tools.
   - OpenAI returns token usage stats (prompt + completion).
   - The FastMCP / tool layer records tool calls and counts by tool name.
3. Cost computation
   - Combine token usage and tool-call counts into a total credit cost.
   - The formula and weights are configurable.
4. Post-request billing
   - In a DB transaction:
     - Deduct total_credits from the user's credit balance.
     - Insert a usage + billing log row.
5. Response
   - Return the Agent's analytics result to the client.
   - Optionally include cost metadata in headers or via a separate endpoint (for cost transparency) later.
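The cost computation in step 3 could look like the sketch below. All weights are illustrative placeholders; the spec only states that the formula and weights are configurable.

```python
# Illustrative weights only -- the real formula and values are configurable.
PROMPT_CREDITS_PER_1K = 0.5
COMPLETION_CREDITS_PER_1K = 1.5
TOOL_WEIGHTS = {  # hypothetical per-call cost weights
    "basescan.txlist": 1.0,
    "dexscreener.token_pairs": 1.0,
    "scrape_twitter": 5.0,
}
DEFAULT_TOOL_WEIGHT = 1.0

def compute_total_credits(prompt_tokens: int, completion_tokens: int,
                          tool_calls: dict[str, int]) -> float:
    """Combine GPT-5 token usage and MCP tool-call counts into total credits."""
    llm_cost = (prompt_tokens / 1000) * PROMPT_CREDITS_PER_1K \
             + (completion_tokens / 1000) * COMPLETION_CREDITS_PER_1K
    tool_cost = sum(count * TOOL_WEIGHTS.get(name, DEFAULT_TOOL_WEIGHT)
                    for name, count in tool_calls.items())
    return round(llm_cost + tool_cost, 4)
```

The resulting `total_credits` value is what step 4 deducts inside a single DB transaction, so the balance update and the billing log row commit or roll back together.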

5. MCP Tools (Conceptual)

The FastMCP server exposes tools with clear responsibilities. Agents never see raw HTTP; they only call tools.
Key tool groups:

5.1 BaseScan Tools (Base chain)

- basescan.txlist – fetches normal transactions for a wallet on Base.
- basescan.tokentx – fetches ERC-20 token transfers for a wallet on Base.
- basescan.balance (optional) – fetches the Base ETH balance for a wallet.

5.2 Dexscreener Tools (Base DEX data)

- dexscreener.token_pairs – given a Base token contract, returns pools and metrics (priceUsd, liquidity, volume, txns, priceChange, etc.).
- dexscreener.search_pairs – text or contract search returning matching Base pools.
- dexscreener.get_pair – given a Base pair address, returns detailed current metrics.
- dexscreener.scrape_trending_page – scrapes Dexscreener's Base trending/hot pools from the website UI.

5.3 Binance Tools (CEX OHLCV & stats)

- binance.klines – returns OHLCV candles for a symbol / interval.
- binance.ticker_24h – returns 24h stats for a symbol (change %, volume, etc.).
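binance.klines presumably wraps Binance's public klines endpoint; a minimal URL-building sketch is shown below. Symbol mapping, pagination, and error handling are omitted, and the default interval/limit values are assumptions.

```python
from urllib.parse import urlencode

BINANCE_BASE = "https://api.binance.com"

def klines_url(symbol: str, interval: str = "1h", limit: int = 200) -> str:
    """Build the request URL for Binance's public klines (candlestick) endpoint."""
    query = urlencode({"symbol": symbol.upper(), "interval": interval, "limit": limit})
    return f"{BINANCE_BASE}/api/v3/klines?{query}"
```

The FastMCP tool would issue this request, parse the returned candle arrays, and hand the Agent a normalized OHLCV structure rather than the raw response.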

5.4 CoinGecko Tools (metadata, socials, trending)

- coingecko.get_token_by_contract – given a Base contract, returns the symbol, name, and social links (Twitter/X, Reddit, website, etc.).
- coingecko.search_trending – returns a global trending list (by search interest) from which Base tokens can be extracted.

5.5 Brightdata / Scraper Tools (content)

- scrape_twitter – fetches tweets based on queries and/or handles.
- scrape_reddit – fetches Reddit posts/comments based on search queries and relevant subreddits.
- scrape_news – fetches news articles matching token-related search queries.
- scrape_google – fetches search results and snippets for token-related queries.

5.6 TA Utilities

- ta.compute_indicators – accepts OHLCV data and returns EMA, RSI, MACD, Bollinger Bands, and any other derived technical metrics required by the Agents.
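Two of these indicators can be sketched as below. This is a simplified version: the RSI uses plain averages rather than Wilder smoothing, the EMA is seeded with the first close, and the tool's real signatures are not specified in this document.

```python
def ema(closes: list[float], period: int) -> float:
    """Exponential moving average, seeded with the first close."""
    k = 2 / (period + 1)
    value = closes[0]
    for price in closes[1:]:
        value = price * k + value * (1 - k)
    return value

def rsi(closes: list[float], period: int = 14) -> float:
    """Simplified RSI over the last `period` changes (no Wilder smoothing)."""
    changes = [b - a for a, b in zip(closes[-period - 1:], closes[-period:])]
    gains = sum(c for c in changes if c > 0) / period
    losses = sum(-c for c in changes if c < 0) / period
    if losses == 0:
        return 100.0  # all gains in the window
    rs = gains / losses
    return 100 - 100 / (1 + rs)
```

A production ta.compute_indicators would likely delegate to a TA library and return all indicators in one normalized payload per candle series.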

6. Agents & Endpoint Responsibilities

Each endpoint has a dedicated OpenAI Agent configured with:

- A system prompt describing:
  - Its role.
  - Its data sources (tools).
  - The expected JSON output shape.
- A specific tool set (a subset of the MCP tools).
- A well-defined analytics responsibility.

Below is the conceptual behavior per endpoint.
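The agent-to-endpoint binding could be captured in a single config table. The names come from this spec; the dict structure itself is an assumption, not the actual configuration format.

```python
# Maps each Agent to its endpoint and fixed MCP tool set.
AGENTS = {
    "transaction_agent": {
        "endpoint": "/v1/transaction/analyze",
        "tools": ["basescan.txlist", "basescan.tokentx",
                  "basescan.balance", "dexscreener.token_pairs"],
    },
    "sentiment_agent": {
        "endpoint": "/v1/token/sentiment",
        "tools": ["coingecko.get_token_by_contract", "scrape_twitter",
                  "scrape_reddit", "scrape_news", "scrape_google"],
    },
    "technicals_agent": {
        "endpoint": "/v1/token/technicals",
        "tools": ["binance.klines", "binance.ticker_24h",
                  "dexscreener.search_pairs", "dexscreener.get_pair",
                  "ta.compute_indicators"],
    },
    "trending_agent": {
        "endpoint": "/v1/token/trending",
        "tools": ["coingecko.search_trending", "coingecko.get_token_by_contract",
                  "dexscreener.scrape_trending_page", "dexscreener.get_pair"],
    },
}
```

Keeping the tool sets fixed per Agent is what makes per-request cost accounting tractable: every tool a run can invoke is known up front.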

6.1 POST /v1/transaction/analyze

Purpose

Perform transaction analysis for a Base wallet:

- Trading intensity and style.
- Token distribution and top tokens.
- Approximate sizing and estimated performance indicators.
- Behavioral classification (e.g. "short-term degen", "swing", "holder").

Agent: transaction_agent

Tools used:

- basescan.txlist
- basescan.tokentx
- basescan.balance (optional)
- dexscreener.token_pairs (for pricing of the Base tokens involved)

Core logic:

1. Fetch Base wallet tx data:
   - Normal txs via basescan.txlist.
   - ERC-20 transfers via basescan.tokentx.
2. Filter by the requested lookback window (e.g. last 30 days).
3. Build token-level activity:
   - Group transfers per token contract.
   - Derive:
     - Number of trades per token.
     - Total in/out amounts.
     - Direction of flow per token (net accumulation vs. distribution).
4. Price enrichment:
   - When needed for top tokens, use dexscreener.token_pairs to get priceUsd and relevant metrics.
   - Approximate:
     - avg_buy_size_usd
     - avg_sell_size_usd
     - top_tokens_by_volume
5. Behavioral metrics & profile:
   - Estimate:
     - num_trades
     - num_tokens_traded
     - Holding-time proxies (time between buys and sells).
   - GPT-5 synthesizes:
     - style (scalper, swing, degen, holder, etc.).
     - risk_profile (low/medium/high).
     - A short summary and descriptive narrative.

The Agent returns a structured JSON document with metrics and a concise interpretation.
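The grouping in step 3 could be sketched as follows. The transfer record shape and field names are assumptions for illustration, not BaseScan's actual response format.

```python
from collections import defaultdict

def token_activity(transfers: list[dict], wallet: str) -> dict:
    """Group ERC-20 transfers per token contract and derive net flow.
    Each transfer: {"contract": ..., "from": ..., "to": ..., "value": float}."""
    wallet = wallet.lower()
    stats: dict = defaultdict(lambda: {"trades": 0, "in": 0.0, "out": 0.0})
    for t in transfers:
        s = stats[t["contract"]]
        s["trades"] += 1
        if t["to"].lower() == wallet:
            s["in"] += t["value"]       # tokens flowing into the wallet
        elif t["from"].lower() == wallet:
            s["out"] += t["value"]      # tokens flowing out of the wallet
    for s in stats.values():
        # > 0 means net accumulation, < 0 means net distribution.
        s["net_flow"] = s["in"] - s["out"]
    return dict(stats)
```

The resulting per-token stats feed directly into step 4's price enrichment and step 5's behavioral metrics.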

6.2 POST /v1/token/sentiment

Purpose

Compute token sentiment for a Base token across:

- Twitter (X)
- Reddit
- News
- Google / web search

Return:

- Overall sentiment score & label.
- Per-platform sentiment.
- Key bullish & bearish reasons.

Agent: sentiment_agent

Tools used:

- coingecko.get_token_by_contract
- scrape_twitter
- scrape_reddit
- scrape_news
- scrape_google

Core logic:

1. Resolve token identity:
   - Use coingecko.get_token_by_contract (Base contract) to get the official symbol and name.
   - Extract social links (Twitter handle, subreddit, website).
2. Build search queries:
   - Keyword queries: "SYMBOL crypto", "TOKEN_NAME base".
   - Handle-based queries (Twitter, Reddit) if available.
3. Collect platform data:
   - For each included platform, use the respective scrape tool with the queries and time window; time_window_hours defines the freshness.
4. Sentiment analysis:
   - GPT-5 reads the aggregated content (with internal summarization).
   - Per platform:
     - Assign a numeric score in [-1, 1].
     - Map it to a label (very_bearish → very_bullish).
     - Count the sample size.
   - Overall:
     - Weighted aggregate score & label.
     - A 1–2 line summary.
   - Key reasons:
     - Extract the top bullish & bearish narratives.

The Agent outputs a JSON object containing all platform-level scores and a summary.
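The weighted aggregation in step 4 could be sketched as follows; the label thresholds and the choice of sample-size weighting are illustrative assumptions.

```python
# Lower-bound thresholds for each label; values are illustrative only.
LABELS = [(-1.0, "very_bearish"), (-0.6, "bearish"), (-0.2, "neutral"),
          (0.2, "bullish"), (0.6, "very_bullish")]

def to_label(score: float) -> str:
    """Map a score in [-1, 1] onto a five-bucket label."""
    for threshold, label in reversed(LABELS):
        if score >= threshold:
            return label
    return "very_bearish"

def overall_sentiment(platforms: list[dict]) -> dict:
    """Weight each platform's score by its sample size.
    Each entry: {"score": float in [-1, 1], "samples": int}."""
    total = sum(p["samples"] for p in platforms)
    score = sum(p["score"] * p["samples"] for p in platforms) / total
    return {"score": round(score, 3), "label": to_label(score)}
```

Sample-size weighting keeps a platform with three posts from dominating one with three hundred.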

6.3 POST /v1/token/technicals

Purpose

Perform technical analysis on a token:

- Compute EMA, RSI, MACD, Bollinger Bands.
- Derive trend/overbought/oversold states.
- Provide a simple narrative and "what's going on" snapshot.

Supports Base tokens via:

- Binance (where a CEX symbol exists).
- Dexscreener (for Base-only DEX pairs).

Agent: technicals_agent

Tools used:

- binance.klines
- binance.ticker_24h
- dexscreener.search_pairs
- dexscreener.get_pair
- ta.compute_indicators

Core logic:

1. Determine the price feed:
   - If the request explicitly chooses source = "binance" and a mapping exists:
     - Call binance.klines.
     - Optionally call binance.ticker_24h for 24h context.
   - Otherwise, source = "dex":
     - Use dexscreener.search_pairs with the Base contract/symbol.
     - Identify the primary Base pool (e.g. highest liquidity).
     - Use Dexscreener data (and/or a chart data pipeline) for candles.
2. Compute indicators:
   - Pass the candles to ta.compute_indicators.
   - Receive:
     - EMA 20, EMA 50.
     - RSI 14.
     - MACD values.
     - Bollinger Bands (middle, upper, lower, bandwidth).
3. Technical interpretation:
   - GPT-5 evaluates:
     - Price relative to the EMAs.
     - RSI (oversold/overbought).
     - MACD crossovers and histogram.
     - Bollinger Bands (squeeze, expansion, near upper/lower band).
   - Outputs:
     - trend classification.
     - rsi_state, macd_state, bb_state.
     - A short summary describing the current technical picture.
     - trader_takeaway, phrased as informational context, not advice.

The Agent returns this analysis as structured JSON including raw indicator values and symbolic states.
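The symbolic states in step 3 could be derived with simple threshold rules before GPT-5 writes the narrative. The thresholds below are conventional defaults (30/70 RSI, EMA stacking), not values specified by this document.

```python
def rsi_state(rsi: float) -> str:
    """Classify RSI using the conventional 30/70 thresholds."""
    if rsi >= 70:
        return "overbought"
    if rsi <= 30:
        return "oversold"
    return "neutral"

def macd_state(macd: float, signal: float) -> str:
    """MACD line above its signal line is read as bullish momentum."""
    if macd > signal:
        return "bullish"
    if macd < signal:
        return "bearish"
    return "flat"

def trend(price: float, ema20: float, ema50: float) -> str:
    """Stacked EMAs (price > EMA20 > EMA50) indicate an uptrend, and vice versa."""
    if price > ema20 > ema50:
        return "uptrend"
    if price < ema20 < ema50:
        return "downtrend"
    return "sideways"
```

Emitting symbolic states alongside raw values lets clients build their own logic without re-deriving the thresholds.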

6.4 GET /v1/token/trending

Purpose

Return a ranked list of trending Base tokens and a short, metric-based explanation of why each is trending.

Agent: trending_agent

Tools used:

- coingecko.search_trending
- coingecko.get_token_by_contract (or an equivalent coin metadata lookup)
- dexscreener.scrape_trending_page
- dexscreener.get_pair

Core logic:

1. Global interest (search-based trending):
   - Call coingecko.search_trending.
   - Extract tokens with a Base deployment / Base contract.
   - These represent coins with high search interest.
2. On-chain activity (Base DEX trending):
   - Call dexscreener.scrape_trending_page for chain = base.
   - Gather a list of trending/hot Base pools (pair addresses).
   - For each, call dexscreener.get_pair to retrieve:
     - Price and liquidity.
     - 1h / 24h volume and transactions.
     - 1h / 24h price change.
     - Dexscreener URL and token info.
3. Merge & filter:
   - Prefer tokens that appear in CoinGecko trending and have strong on-chain metrics on Base.
   - If fewer than the requested limit, fill with purely on-chain trending tokens.
4. Label and explain:
   - For each Base token:
     - Evaluate recent price change vs. volume and txns.
     - Compute a label (e.g. breakout, pump, dump, cooldown, reversal_risk).
     - Generate a concise reason referencing the metrics, e.g. "24h volume up 4x, 1h price +35%, and a sharp increase in transactions."

The Agent returns a JSON array of tokens with symbol, contract, Dexscreener URL, core metrics, and a narrative reason.
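Steps 3 and 4 could be sketched as below. The label thresholds, the field names, and the "volume ratio" input are illustrative assumptions on top of the labels named in the spec.

```python
def trend_label(price_change_1h: float, volume_ratio_24h: float) -> str:
    """Map recent price change (%) and volume expansion (x) to a coarse label.
    Thresholds are illustrative only."""
    if price_change_1h >= 20 and volume_ratio_24h >= 2:
        return "breakout"   # price and volume both expanding
    if price_change_1h >= 20:
        return "pump"       # price spike without volume confirmation
    if price_change_1h <= -20:
        return "dump"
    if volume_ratio_24h < 0.5:
        return "cooldown"   # activity fading
    return "reversal_risk"

def merge_trending(coingecko_contracts: set[str], dex_tokens: list[dict],
                   limit: int) -> list[dict]:
    """Prefer tokens trending on both sources, then fill with on-chain-only ones."""
    both = [t for t in dex_tokens if t["contract"] in coingecko_contracts]
    rest = [t for t in dex_tokens if t["contract"] not in coingecko_contracts]
    return (both + rest)[:limit]
```

The narrative reason for each token would then be generated by GPT-5 from the same metrics the label was computed from, keeping label and explanation consistent.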

7. Non-goals / Out of Scope

- No frontend; API-only service.
- No manual dashboard or UI for users in this phase.
- No on-chain write operations (purely analytical reads).
- No direct trading or trading signals; outputs are informational only.
- No fixed, per-endpoint pricing; all pricing is dynamic based on usage.

8. Summary

analytiq.one is a thin Flask shell in front of:

- OpenAI Agents (GPT-5) for orchestration.
- A FastMCP tool server for all external data.
- A PostgreSQL-backed credit system for dynamic, usage-based billing.

The system is intentionally:

- Stateless at the computation layer (each request is self-contained).
- Tool-driven, making it easy to extend with additional data sources or features.
- Base-focused, ensuring deep, specific coverage of the Base ecosystem rather than generic cross-chain analytics.
Modified at 2026-03-19 13:34:45