# PRISM Agent — API Issue Fixes Required

Read `API_ISSUE_RESPONSES.md` in this repo for the full root cause analysis. Here's exactly what to fix:

## Fix #1: Semantic search display (CRITICAL)

**File:** `crates/cli/src/main.rs` ~line 4360

The CLI reads `name` and `entity_type` from semantic search results, but those fields don't exist there: semantic search returns a completely different set of fields than graph search.

**Graph search fields:** `name, entity_type, label, tenant`
**Semantic search fields:** `doc_id, content, similarity, corpus_id, metadata, chunk_idx, id`

Change:
```rust
// WRONG — these fields don't exist in semantic search results:
let name = r.get("name");             // use r.get("doc_id") instead
let kind = r.get("entity_type");      // remove, or display a fixed "paper_chunk"

// ADD — surface what semantic search actually returns:
let content = r.get("content");       // paper text; show a short preview
let similarity = r.get("similarity"); // relevance score, 0.0 to 1.0
```
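
As a sketch, the corrected display path could format each hit from the real fields. This helper is hypothetical (the CLI's actual rendering code differs), and the 80-character preview length is an arbitrary choice:

```rust
// Hypothetical helper: render one semantic search hit for the CLI.
// `doc_id`, `content`, and `similarity` are the real response fields;
// truncating the preview to 80 chars is an assumption, not the CLI's rule.
fn format_semantic_hit(doc_id: &str, content: &str, similarity: f64) -> String {
    let preview: String = content.chars().take(80).collect();
    format!("{doc_id}  [{similarity:.2}]  {preview}")
}

fn main() {
    let line = format_semantic_hit("doc-00042", "We propose a method for...", 0.87);
    println!("{line}");
}
```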

## Fix #3: Gemini usage tracking (MEDIUM)

**File:** `crates/ingest/src/llm.rs` in `chat_marc27_simple()`

Google's streaming endpoint doesn't return `usage` in its SSE events. This is a Google limitation, so handle it gracefully:

```rust
// Fall back across the two field-name conventions providers use.
let prompt_tokens = usage.get("prompt_tokens")
    .or_else(|| usage.get("input_tokens"))
    .and_then(|v| v.as_u64());
let completion_tokens = usage.get("completion_tokens")
    .or_else(|| usage.get("output_tokens"))
    .and_then(|v| v.as_u64());
// If neither is found, show "N/A" instead of "?/?"
```
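
A minimal sketch of the "N/A" fallback, assuming the lookup above produced two `Option<u64>` values (the function name and output format here are illustrative, not the CLI's actual strings):

```rust
// Render token counts without inventing numbers for providers (like
// Google's streaming endpoint) that omit usage entirely.
fn fmt_token_usage(prompt: Option<u64>, completion: Option<u64>) -> String {
    let show = |t: Option<u64>| t.map_or("N/A".to_string(), |n| n.to_string());
    format!("tokens: {} in / {} out", show(prompt), show(completion))
}

fn main() {
    println!("{}", fmt_token_usage(Some(1200), Some(340))); // normal case
    println!("{}", fmt_token_usage(None, None));            // Google streaming case
}
```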

## Fix #4: Support ticket field name (EASY)

**File:** wherever `prism report` builds the request body

Change `body` to `description`:
```rust
// WRONG
json!({"title": title, "body": body})
// RIGHT
json!({"title": title, "description": body, "severity": "medium"})
```

The endpoint itself works — it returns `{"ticket_id":"TKT-00001","status":"open"}`.

## Fix #5: Upstream 429 retry (MEDIUM)

Add retry-with-backoff for 429 responses from upstream providers. Our platform doesn't return 429s — it's OpenAI/OpenRouter being rate-limited.

```rust
for attempt in 0..3 {
    let resp = client.post(url).send().await?;
    if resp.status() == 429 {
        // Exponential backoff: 1s, 2s, 4s
        let wait = 2u64.pow(attempt);
        tokio::time::sleep(Duration::from_secs(wait)).await;
        continue;
    }
    return Ok(resp);
}
// All retries exhausted — surface the rate limit instead of falling through
Err(anyhow::anyhow!("upstream still rate-limited after 3 attempts")) // or your error type
```

## Fix #6: README discourse command

`prism discourse events` → should be `prism discourse turns`. Update the README, or add `events` as an alias.
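
If wiring the alias into the argument parser is awkward, one std-only sketch is to normalize the subcommand name before dispatch. The function name here is hypothetical:

```rust
// Hypothetical pre-dispatch shim: map the README's legacy `events`
// spelling onto the real `turns` subcommand, leaving all others untouched.
fn normalize_discourse_subcommand(sub: &str) -> &str {
    if sub == "events" { "turns" } else { sub }
}

fn main() {
    println!("{}", normalize_discourse_subcommand("events"));
    println!("{}", normalize_discourse_subcommand("turns"));
}
```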

## Fix #8: Node up Docker check

Before starting the dashboard Axum server, check whether Docker is available:
```rust
if which::which("docker").is_err() {
    eprintln!("Docker not found — running in headless mode");
    // Skip the dashboard; just register with the platform
}
```

## Fix #9: Bare prism TUI

Skip the TUI binary check — go straight to help if no subcommand is given:
```rust
if args.is_empty() {
    print_help();
    return;
}
```

## What's NOT fixable on your side

- **#2 depth > 0:** Web search providers (Semantic Scholar, arXiv) are not configured on the platform. Depth 0 works fully (111 queries, 15 LLM calls verified). Add a timeout for depth > 0 and return partial results.
- **#3 Google usage:** Google's streaming endpoint genuinely doesn't return usage. Show "N/A" for Google models.
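
The depth > 0 timeout suggested above could look like the following std-only sketch. The real crawler is async (tokio), so this channel layout and the time budget are purely illustrative:

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

// Collect results until the channel closes or the time budget runs out,
// then return whatever arrived — partial results instead of an error.
fn collect_with_timeout(rx: mpsc::Receiver<String>, budget: Duration) -> Vec<String> {
    let deadline = Instant::now() + budget;
    let mut results = Vec::new();
    while let Some(left) = deadline.checked_duration_since(Instant::now()) {
        match rx.recv_timeout(left) {
            Ok(item) => results.push(item),
            Err(_) => break, // timed out or all senders dropped
        }
    }
    results
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("hit-1".to_string()).unwrap();
    drop(tx); // simulate the crawler finishing early
    println!("{:?}", collect_with_timeout(rx, Duration::from_millis(100)));
}
```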

## What's verified working on the API (2026-04-08)

- 28/28 provenance tests passed
- All 4 LLM providers respond (Claude, Gemini, DeepSeek, Llama)
- Marketplace: 15 resources with full model cards
- Support tickets: endpoint works with correct field names
- Billing: credits system live; debits on every LLM call
- 519 models, 517 with pricing