Slicing Audit

LLM-powered review of 466 text[:N] truncation patterns. Haiku bulk scan → Opus re-audit. 44 sites need code changes (2 Remove + 42 Raise Limit).
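The bulk-scan stage amounts to flagging every sizable `[:N]` slice for review. A minimal sketch of such a scanner, assuming a simple regex heuristic and an illustrative size cutoff (the audit's actual scan criteria are not shown in this report):

```python
import re

# Matches name[:N] style truncation slices. Both the pattern and the
# minimum-limit cutoff are illustrative, not the audit's real criteria.
SLICE_RE = re.compile(r"\w+\s*\[\s*:\s*(\d+)\s*\]")

def find_truncation_sites(source: str, min_limit: int = 50) -> list[tuple[int, int]]:
    """Return (line_number, limit) pairs for candidate truncation slices."""
    sites = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for m in SLICE_RE.finditer(line):
            limit = int(m.group(1))
            if limit >= min_limit:  # skip tiny slices like s[:2]
                sites.append((lineno, limit))
    return sites
```

Each hit would then be packaged with surrounding context and handed to the bulk-scan model for a verdict.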

2 Remove · 42 Raise Limit · 256 Named Constant · 166 Fine as-is
By directory
Directory   Remove  Raise  Constant  Fine  Total
vario/           1     15        15    14     45
learning/              14        76    46    136
lib/                    9        54    41    104
intel/           1      1        22    21     45
finance/                2         3    12     17
doctor/                 1        35    13     49
draft/                            4            4
helm/                            19     8     27
jobs/                            22     5     27
projects/                         3     4      7
tools/                            3     1      4
watch/                                  1      1
Total            2     42       256   166    466
Location | Verdict | Limit | Reasoning | Code
intel/people/lookup.py:448 Remove Wikipedia extract at 500 chars is fed into an LLM scoring/lookup context where the full extract (typically ~1-2k chars) would improve quality and fits easily in modern context windows. opus
     445          if li.get("og_title"):
     446              lines.append(f"  Title: {li['og_title']}")
     447          if li.get("og_description"):
>>>  448              lines.append(f"  Summary: {li['og_description'][:200]}")
     449          if li.get("headline"):
     450              lines.append(f"  Headline: {li['headline']}")
     451          if li.get("locationName"):
vario/engine/execute.py:90 Remove Truncating LLM response content to 500 chars and storing it in RunResult.content permanently loses data that callers may need; the full content should be preserved and truncation left to display layers. opus
      87          name=run.name or run.prompt[:30],
      88          prompt=run.prompt,
      89          model=model_alias,
>>>   90          content=content[:500] + "..." if len(content) > 500 else content,
      91          output=output,
      92          error=error,
      93          elapsed_seconds=elapsed,
doctor/analyze.py:661 Raise Limit 1000 → 5000 This is a raw log line included in an LLM prompt for error analysis. 1000 chars may cut off important stack traces or context. However, since this is just one error's raw log and the prompt asks for brief analysis, we don't need unlimited content. Raising to 5000 chars would capture most stack traces while keeping the prompt focused. opus
     658  Message: {error.message}
     659  
     660  Raw log line:
>>>  661  {error.raw[:1000]}
     662  
     663  Provide a brief analysis (2-4 lines):
     664  1. What this error means
finance/ceo_quality/assess.py:55 Raise Limit 3000 → 15000 This truncates archived web content included in a CEO assessment prompt. 3000 chars of company web content is quite limited for a nuanced CEO quality assessment. Raising to 15k chars would provide much richer context for scoring across multiple dimensions while keeping the overall prompt reasonable. opus
      52  
      53      context_parts = []
      54      if wayback_context:
>>>   55          context_parts.append(f"### Company web content (archived as of ~{as_of_date}):\n{wayback_context[:3000]}")
      56      if profile_context:
      57          context_parts.append(f"### Company profile:\n{profile_context}")
      58      context_block = "\n\n".join(context_parts) if context_parts else "(No archived content available)"
finance/jobs/vic_ideas.py:227 Raise Limit 3000 → 8000 HTML content sent to a 'cheap flash-lite' LLM for a sanity check. The comment explicitly says it's a cheap call on a sample. However, 3000 chars of HTML (with tags) may not contain enough actual text content. Raising to 8k gives a better sample while keeping the cost-conscious intent, especially since flash-lite models are very cheap. opus
     224      log.info(f"[vic] {item_key}: saved {len(html)} chars to file, {img_count} images in HTML")
     225  
     226      # LLM sanity check — cheap flash-lite call on a sample to verify content
>>>  227      llm_ok = await _llm_sanity_check(item_key, html[:3000], company)
     228  
     229      # Generate static server URL for cached content
     230      cache_url = f"https://static.localhost/db/vic_ideas?table=ideas&idea_id={item_key}&column=raw_html"
intel/people/web_presence.py:226 Raise Limit Prompt text sent to LLM is truncated to 500 chars which could lose critical user intent; should be raised to at least 2000. opus
     223                  if title:
     224                      lines.append(f"    {title}")
     225                  if snippet:
>>>  226                      lines.append(f"    {snippet[:120]}")
     227                  if url:
     228                      lines.append(f"    {url}")
     229                  lines.append("")
learning/cli.py:1850 Raise Limit This full_text truncation at 300 chars is fed into an LLM prompt for principle generalization, where more context would improve quality; raise to 1000 or remove. opus
    1847          children.append(p)
    1848  
    1849      child_text = "\n".join(
>>> 1850          f"- {p.id}: {p.name}\n  Text: {p.text}\n  Full: {(p.full_text or '')[:300]}"
    1851          for p in children
    1852      )
    1853  
learning/cli.py:1945 Raise Limit Instance content truncated to 200 chars is fed into an LLM prompt for specialization proposals, where richer context improves quality; raise to 500. opus
    1942      # Get linked instances for context
    1943      instances = store.get_instances_for_principle(principle_id)
    1944      instance_text = "\n".join(
>>> 1945          f"- {inst.content[:200]}"
    1946          for inst in instances[:10]
    1947      ) or "(no instances yet)"
    1948  
learning/cli.py:2069 Raise Limit Principle full_text truncated to 500 chars is fed into an LLM prompt for session analysis; with modern context windows this should be raised to 2000 or removed. opus
    2066    ID: {principle.id}
    2067    Name: {principle.name}
    2068    Text: {principle.text}
>>> 2069    Full: {(principle.full_text or '')[:500]}
    2070  
    2071  For each moment, identify:
    2072  1. What happened (the specific situation in the session)
learning/eval.py:64 Raise Limit 8000 → 50000 Episode text sent to an LLM judge for annotation against principles. 8000 chars is better than 2000 but still quite limiting for complex multi-turn episodes. The principles catalog also takes space, but modern models can handle much more. Raising to 50k allows thorough analysis while keeping costs controlled. opus
      61  {principles_catalog}
      62  
      63  ## Session Episode
>>>   64  {episode_text[:8000]}"""
      65  
      66      response = await call_llm(
      67          model,
learning/eval.py:89 Raise Limit 2000 → 8000 This is episode_text being stored in a database via store.add_eval_annotation. It's for storage/reference, not LLM input. However, 2000 chars is quite small for later review of what was evaluated. Raising to 8k gives better traceability while keeping DB rows reasonable. opus
      86              principle_id=pid,
      87              session_id=session_id,
      88              episode_index=episode_index,
>>>   89              episode_text=episode_text[:2000],
      90              annotation=annotation,
      91              confidence=ann.get("confidence"),
      92              annotator_model=model,
learning/gyms/claim_extraction/gym.py:243 Raise Limit 3000 → 15000 Document snippet used as context for an LLM judge evaluating claim extraction quality. 3k chars may not capture enough of the source document for the judge to verify whether extracted claims are accurate and complete. Raise to 15k to give the judge better coverage of the source material. opus
     240          doc_id = candidate.metadata.get("doc_id", "")
     241          entry = corpus_map.get(doc_id, {})
     242          ctx = {
>>>  243              "document_snippet": entry.get("text", "")[:3000],
     244              "reference_extraction": entry.get("reference_output", "(no reference)"),
     245              "model": candidate.metadata.get("model", "?"),
     246              "prompt_variant": candidate.metadata.get("prompt", "?"),
learning/gyms/claim_extraction/gym.py:271 Raise Limit 3000 → 15000 This document snippet is sent to an LLM judge as context for evaluating claim extraction quality. 3000 chars may cut off important document content that the extraction was based on. Raising to 15k gives the judge much better context while keeping costs reasonable for batch evaluation. opus
     268          for c in candidates:
     269              entry = corpus_map.get(c.metadata.get("doc_id", ""), {})
     270              items.append((c.content, {
>>>  271                  "document_snippet": entry.get("text", "")[:3000],
     272                  "reference_extraction": entry.get("reference_output", "(no reference)"),
     273                  "model": c.metadata.get("model", "?"),
     274                  "prompt_variant": c.metadata.get("prompt", "?"),
learning/gyms/llm_task/gym.py:158 Raise Limit 2000 → 10000 Raw snippet is context for an LLM judge scoring a candidate. 2k may cut off important context that the judge needs to evaluate correctness. Raise to 10k to give the judge adequate context while keeping judging costs reasonable. opus
     155          session_id = candidate.metadata.get("session_id", "")
     156          entry = corpus_map.get(session_id, {})
     157          ctx = {
>>>  158              "raw_snippet": entry.get("raw_text", "")[:2000],
     159              "reference": entry.get("reference_output", "(no reference available)"),
     160              "model": candidate.metadata.get("model", "?"),
     161              "prompt": candidate.metadata.get("prompt", "?"),
learning/gyms/llm_task/gym.py:178 Raise Limit 2000 → 10000 Same pattern as site 6 — batch version of the same judge context. Same reasoning applies: raise to 10k for better judge accuracy while controlling costs on batch operations. opus
     175          for c in candidates:
     176              entry = corpus_map.get(c.metadata.get("session_id", ""), {})
     177              items.append((c.content, {
>>>  178                  "raw_snippet": entry.get("raw_text", "")[:2000],
     179                  "reference": entry.get("reference_output", "(no reference available)"),
     180                  "model": c.metadata.get("model", "?"),
     181                  "prompt": c.metadata.get("prompt", "?"),
learning/schema/link_instances.py:41 Raise Limit Principle text truncated to 150 chars is fed to an LLM for linking analysis — too short to capture meaning; raise to 500. opus
      38      principles = store.list_principles(limit=200)
      39      lines = []
      40      for p in principles:
>>>   41          lines.append(f"- **{p.id}**: {p.name} — {p.text[:150]}")
      42      return "\n".join(lines)
      43  
      44  
learning/schema/link_instances.py:49 Raise Limit Instance content truncated to 300 chars is the primary input for LLM classification and loses important context; raise to 800. opus
      46      """Build instance descriptions for a batch."""
      47      lines = []
      48      for i, inst in enumerate(instances):
>>>   49          content = inst["content"][:300]
      50          lines.append(f"{i+1}. [{inst['id'][:12]}] {content}")
      51      return "\n".join(lines)
      52  
learning/session_extract/extract.py:61 Raise Limit 15000 → 50000 Episode text sent to an LLM for learning extraction. The comment says 'safety truncation'. 15k chars is reasonable but conservative — longer episodes with rich tool interactions could benefit from more context. Raising to 50k captures most episodes fully while still providing a safety cap. opus
      58      """
      59      prompt = EXTRACT_PROMPT.format(
      60          topic=episode.topic or "unknown",
>>>   61          text=episode.text[:15000],  # safety truncation
      62      )
      63  
      64      try:
learning/session_review/retroactive_study.py:267 Raise Limit 2000 → 15000 This episode text is sent directly to an LLM for analysis against principles. 2000 chars is very aggressive and likely discards most of the episode content, severely limiting the LLM's ability to analyze behavior. Episodes can contain multi-turn interactions that need full context. opus
     264      system_prompt: str,
     265  ) -> dict | None:
     266      """Send one episode to LLM with cached principles system prompt."""
>>>  267      prompt = EPISODE_PROMPT.format(episode=episode["episode_text"][:2000])
     268  
     269      try:
     270          response = await call_llm(
learning/session_review/sandbox_replay.py:562 Raise Limit 4000 → 30000 This is the actual LLM judge prompt where result_text is truncated to 4k chars. The judge needs to see the full result to score it accurately. Truncating at 4k means long outputs get partially judged. Raise to 30k to cover most results while keeping judge costs reasonable. opus
     559      # Direct litellm used here because call_llm doesn't support n= (multiple choices).
     560      # See lib/llm/cost_log.py for the tracking system.
     561      messages = [{"role": "user", "content": JUDGE_PROMPT.format(
>>>  562          prompt=prompt, result_text=result_text[:4000],
     563      )}]
     564      try:
     565          response = await litellm.acompletion(
lib/gym/improve.py:162 Raise Limit Candidate reasons truncated to 200 chars are fed to an LLM for pattern extraction; important nuance is lost — raise to 500. opus
     159      from lib.llm.json import repair_json
     160  
     161      top_summaries = "\n".join(
>>>  162          f"- [{c.variant}] score={c.score:.0f}: {c.reason[:200]}"
     163          for c in top
     164      )
     165      bottom_summaries = "\n".join(
lib/ingest/literature_review.py:163 Raise Limit 8000 → 50000 Fetched article text capped for LLM context in a literature review pipeline. The comment explicitly says 'cap for LLM context'. 8k chars is quite limiting for academic/technical content. Modern models can handle much more, and literature review quality benefits significantly from seeing more of each source. opus
     160  
     161                  return {
     162                      "url": url,
>>>  163                      "text": text[:8000],  # cap for LLM context
     164                      "fetch_status": "success",
     165                      "content_format": content_format,
     166                  }
lib/ingest/literature_review.py:307 Raise Limit 6000 → 50000 This truncates source content sent to a 'flash' model for structured extraction in a literature review. 6000 chars (~1.5k tokens) is very low and likely discards most of the article content, hurting extraction quality. Since it's using a flash/cheap model, cost is low — raising to 50k chars would capture most articles while keeping costs reasonable. opus
     304  
     305      async with extract_sem:
     306          try:
>>>  307              prompt = f"Source content:\n{text[:6000]}"
     308              data = await call_llm_json(
     309                  "flash",
     310                  prompt,
lib/ingest/related_work.py:254 Raise Limit 2000 → 10000 Content excerpt from a fetched search result sent to an LLM for relevance judging. 2000 chars may miss key information in longer articles. Raising to 10k gives the relevance judge much better signal while keeping the prompt focused. opus
     251          f"Snippet: {snippet}",
     252      ]
     253      if content:
>>>  254          context_parts.append(f"Content excerpt:\n{content[:2000]}")
     255  
     256      prompt = f"""Topic: {topic}
     257  
lib/llm/tool_defs/parallel_web.py:51 Raise Limit 300 → 1000 This truncates search result excerpts that are assembled into tool output returned to an LLM. 300 chars per excerpt is quite short and may cut off useful context. Raising to ~1000 chars per excerpt would provide better context while keeping the overall tool response manageable across multiple results. opus
      48      for r in result.results[:num_results]:
      49          line = f"- {r.url}"
      50          if r.excerpts:
>>>   51              line += f"\n  {r.excerpts[0][:300]}"
      52          lines.append(line)
      53  
      54      if not lines:
lib/llm/tool_defs/parallel_web.py:83 Raise Limit 500 → 2000 This truncates extracted page excerpts returned as tool output to an LLM. At 500 chars per excerpt (up to 3 excerpts per result), the content is quite limited for meaningful extraction. Raising to 2000 chars per excerpt would provide substantially better context for the LLM to work with. opus
      80          excerpts = r.get("excerpts", [])
      81          line = f"Source: {rurl}"
      82          for exc in excerpts[:3]:
>>>   83              line += f"\n{str(exc)[:500]}"
      84          lines.append(line)
      85  
      86      if not lines:
lib/llm/tool_defs/parallel_web.py:123 Raise Limit 3000 → 20000 This truncates research output returned as a tool result to an LLM. 3000 chars is very restrictive for research output that may contain synthesized findings from multiple sources. Raising to 20k chars would preserve much more of the research while still keeping tool responses bounded. opus
     120      if output is None:
     121          return "No research output."
     122  
>>>  123      text = str(output)[:3000]
     124      logger.debug(f"parallel_research '{query}' ({processor}): {len(text)} chars")
     125      return text
     126  
lib/semnet/concepts.py:58 Raise Limit Transcript chunks truncated to 400 chars are sent to an LLM for concept extraction — important content is lost; raise to 800. opus
      55      for c in chunks:
      56          mm = int(c["start_s"] // 60)
      57          ss = int(c["start_s"] % 60)
>>>   58          text = c["text"][:400]
      59          chapter = f" [{c.get('chapter_title', '')}]" if c.get("chapter_title") else ""
      60          lines.append(f"CHUNK {c['chunk_id']} ({mm:02d}:{ss:02d}){chapter}: {text}")
      61  
lib/semnet/concepts.py:90 Raise Limit 15000 → 100000 Transcript text sent to LLM for topic segmentation. 15k chars (~3.5k tokens) is quite limiting for transcripts which can be very long. The model needs to see the full transcript to properly segment topics. Raise to 100k to handle most transcripts while staying well within modern context windows. opus
      87  Example: [{{"chunk_id": 1, "topic": "venture capital deal structures"}}, ...]
      88  
      89  Transcript:
>>>   90  {transcript[:15000]}"""
      91  
      92      try:
      93          resp = await call_llm(model=model, prompt=prompt, temperature=0.0)
vario/eval.py:119 Raise Limit 4000 → 16000 Candidate output truncated to 4000 chars for evaluation scoring. If the output is longer, the judge can't see the full response and may score inaccurately. Raising to 16k covers most outputs while keeping evaluation costs manageable. opus
     116  Task Goal: {goal}
     117  
     118  Candidate Output:
>>>  119  {content[:4000]}
     120  
     121  {criteria_prompt}
     122  
vario/ng/blocks/reduce.py:165 Raise Limit 2000 → 8000 Problem text truncated to 2000 chars in a combine/reduce prompt. The approaches_text (which contains the actual solutions) is not truncated, but the problem context is. For complex problems, 2000 chars may lose important constraints. Raise to 8000 to capture fuller problem descriptions. opus
     162      approaches_text = "\n\n".join(approaches_parts)
     163  
     164      prompt = _COMBINE_PROMPT.format(
>>>  165          problem=ctx.problem[:2000],
     166          n=len(top),
     167          approaches=approaches_text,
     168      )
vario/ng/blocks/revise.py:110 Raise Limit 2000/3000 → 8000/16000 Problem truncated to 2000 and answer to 3000 in a revision prompt. The model needs to see the full answer to revise it properly — truncation means it can only revise the first portion. Raise to 8k/16k to handle longer content. opus
     107          feedback_section = _build_feedback_section(thing)
     108  
     109          prompt = _REVISE_PROMPT.format(
>>>  110              problem=ctx.problem[:2000],
     111              answer=thing.content[:3000],
     112              feedback_section=feedback_section,
     113          )
vario/ng/blocks/score.py:207 Raise Limit 2000/3000 → 8000/16000 Problem truncated to 2000 and answer to 3000 chars for scoring. Both limits are quite restrictive — complex problems and long answers will be cut off, leading to inaccurate scores. Raising to 8k for problem and 16k for answer would better capture the full content while maintaining cost control. opus
     204  
     205      async def _score_one(thing: Thing) -> tuple[dict[str, Any], int, float]:
     206          prompt = _SCORE_PROMPT.format(
>>>  207              problem=ctx.problem[:2000],
     208              answer=thing.content[:3000],
     209              criteria=criteria,
     210              output_schema=output_schema,
vario/ng/blocks/score.py:226 Raise Limit 2000/3000 → 8000/16000 Same pattern as idx 8 but for verification. The verify prompt truncates problem to 2000 and answer to 3000 chars. Verification accuracy depends on seeing the full answer; truncation could cause false passes/fails. Raise to 8k/16k respectively. opus
     223  
     224      async def _verify_one(thing: Thing) -> tuple[dict[str, Any], int, float]:
     225          prompt = _VERIFY_PROMPT.format(
>>>  226              problem=ctx.problem[:2000],
     227              answer=thing.content[:3000],
     228          )
     229          result = await call_llm(model=model, prompt=prompt, temperature=0, **ctx.llm_kwargs)
vario/review.py:547 Raise Limit 15000 → 100000 This sends numbered text to gemini-flash to infer document structure/sections. The model only needs to see the structure, but 15k chars is quite restrictive for longer documents — it may miss sections in the latter half entirely, producing incomplete structure. Gemini Flash has a large context window; raising to 100k would cover most documents while keeping costs reasonable. opus
     544      numbered = "\n".join(f"{i}: {line}" for i, line in enumerate(text.split('\n'), 1))
     545      # Truncate if very long — model only needs enough to see structure
     546      if len(numbered) > 15000:
>>>  547          numbered = numbered[:15000] + "\n... (truncated)"
     548  
     549      try:
     550          raw = await call_llm(
vario/review.py:639 Raise Limit 8000 → 30000 This truncates a section's text before asking gemini-flash to identify sub-sections within it. 8000 chars may cut off topic shifts in longer sections, causing the model to miss sub-sections. Since this is per-section (not the whole document), 30k should cover most sections while keeping the prompt manageable. opus
     636              f"within it based on topic shifts.\n\n"
     637              f"Return a JSON array: [{{\"title\": \"...\", \"line\": N}}]\n"
     638              f"where line is the approximate starting line number.\n"
>>>  639              f"Return ONLY the JSON array.\n\n{section_text[:8000]}"
     640          )
     641          try:
     642              raw = await call_llm(model="gemini-flash", prompt=prompt, temperature=SECTION_INFERENCE_TEMPERATURE)
vario/review.py:1055 Raise Limit 2000 → 8000 This truncates prior stage findings used as context for stage 2 review lenses. At 2000 chars per synthesis, important findings may be lost, causing stage 2 to duplicate work on content already flagged. Since there could be multiple syntheses, raising to 8000 per synthesis balances completeness with total prompt size. opus
    1052               "Focus your review on content that is NOT being removed or reorganized:\n"]
    1053  
    1054      for name, synth in syntheses.items():
>>> 1055          parts.append(f"### {name.title()} findings\n{synth.content[:2000]}\n")
    1056  
    1057      return "\n".join(parts)
    1058  
vario/strategies/benchmark.py:167 Raise Limit 4000 → 16000 This truncates an answer being scored by an LLM judge. If the answer is longer than 4000 chars, the judge cannot evaluate the full response, leading to inaccurate scores. Raising to 16k would cover most answers while keeping judge costs reasonable. opus
     164  
     165      prompt = JUDGE_PROMPT.format(
     166          question=question,
>>>  167          answer=answer[:4000],
     168          expected_section=expected_section,
     169      )
     170  
vario/strategies/blocks/evaluate.py:119 Raise Limit 2000/3000 → 20000/30000 Problem text and answer are sent to an LLM for critique/scoring. The 2k/3k limits are very aggressive and will truncate most non-trivial problems and answers, degrading scoring quality. However, since this is a scoring call (not the main generation), some limit is reasonable for cost control. Raise to 20k/30k to handle most real content while keeping costs bounded. opus
     116      criteria = _build_criteria(rubric)
     117      output_schema = _build_output_schema(rubric, feedback)
     118      prompt = _CRITIQUE_PROMPT.format(
>>>  119          problem=problem_text[:2000],
     120          answer=candidate.content[:3000],
     121          criteria=criteria,
     122          output_schema=output_schema,
vario/strategies/blocks/evaluate.py:316 Raise Limit 2000/3000 → 20000/30000 Same pattern as site 0 — verification of a candidate answer against a problem. Truncating at 2k/3k means the verifier may not see the full problem or answer, leading to incorrect verification. Raise significantly but keep some limit since this is a utility call, not the primary generation. opus
     313  ) -> ScoredCandidate:
     314      """Verify a single candidate."""
     315      prompt = _VERIFY_PROMPT.format(
>>>  316          problem=problem_text[:2000],
     317          answer=candidate.content[:3000],
     318      )
     319  
vario/strategies/blocks/improve.py:136 Raise Limit 2000/3000 → 20000/30000 This is an improvement/refinement prompt where the LLM needs to see the full problem and previous answer to improve it. Truncating at 2k/3k actively hurts quality — the model can't improve what it can't see. Raise to 20k/30k for practical coverage while maintaining cost control. opus
     133              source_model = c.model
     134  
     135          prompt = _APPLY_PROMPT.format(
>>>  136              problem=context.problem.prompt[:2000],
     137              answer=content[:3000],
     138              feedback_section=feedback,
     139          )
vario/strategies/blocks/improve.py:544 Raise Limit 2000 → 20000 Problem text is truncated to 2k before a combine/synthesis prompt. The LLM needs the full problem to properly combine approaches. Note that approaches_text is NOT truncated, creating an inconsistency. Raise the problem limit to 20k. opus
     541      problem_text = context.problem.prompt
     542      approaches_text = _format_approaches(top_candidates)
     543      prompt = _COMBINE_PROMPT.format(
>>>  544          problem=problem_text[:2000],
     545          n=len(top_candidates),
     546          approaches=approaches_text,
     547      )
vario/strategies/blocks/meta.py:102 Raise Limit 2000 → 8000 Candidate content truncated to 2000 chars when formatting failures for meta-analysis. This is building context for a diagnostic prompt — if candidates are long, the truncation may hide the parts that actually failed. Raise to 8000 to give the diagnostic model more to work with. opus
      99  
     100          parts.append(
     101              f"--- Attempt {i} ({score_info}{verification_info}) ---\n"
>>>  102              f"{c.candidate.content[:2000]}"
     103              f"{critique_info}"
     104          )
     105      return "\n\n".join(parts)
vario/strategies/blocks/meta.py:167 Raise Limit 2000 → 8000 Problem text truncated to 2000 chars in the diagnose/analyze prompt. Complex problems with detailed requirements may exceed this, causing the diagnostic model to miss key constraints when analyzing why attempts failed. Raise to 8000. opus
     164      problem_text = context.problem.prompt
     165      failures_text = _format_failures(failed)
     166      prompt = _ANALYZE_PROMPT.format(
>>>  167          problem=problem_text[:2000],
     168          failures=failures_text,
     169      )
     170  
doctor/analyze.py:145 Named Constant Error message stored in ErrorEvent object; 500 is a magic number that should be a named constant haiku
     142                  return ErrorEvent(
     143                      raw=line,
     144                      error_type=data.get("error_type", "Unknown"),
>>>  145                      message=data.get("message", "")[:500],
     146                      timestamp=data.get("ts", ""),
     147                  )
     148          except json.JSONDecodeError as e:
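The Named Constant verdicts in this file all resolve the same way: hoist the inline limits to module level so they are tuned in one place. A sketch of the refactor; the constant names are suggestions, and ErrorEvent here is a stand-in dataclass rather than the file's actual class:

```python
from dataclasses import dataclass

# One place to tune how much of an error is retained.
MAX_ERROR_MESSAGE_CHARS = 500
MAX_RAW_LINE_CHARS = 500

@dataclass
class ErrorEvent:
    raw: str
    error_type: str
    message: str
    timestamp: str = ""

def make_event(line: str, error_type: str, message: str, ts: str = "") -> ErrorEvent:
    """Build an ErrorEvent with all truncation expressed via named constants."""
    return ErrorEvent(
        raw=line[:MAX_RAW_LINE_CHARS],
        error_type=error_type,
        message=message[:MAX_ERROR_MESSAGE_CHARS],
        timestamp=ts,
    )
```

The same constants would replace the inline 200/500 literals flagged in the remaining doctor/ entries below.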
doctor/analyze.py:159 Named Constant Error message stored in ErrorEvent object; 200 is a magic number that should be a named constant haiku
     156  
     157      # Try plain text patterns
     158      if "Traceback" in line:
>>>  159          return ErrorEvent(raw=line, error_type="Traceback", message=line[:200], timestamp=timestamp)
     160  
     161      if "ERROR" in line:
     162          return ErrorEvent(raw=line, error_type="ERROR", message=line[:200], timestamp=timestamp)
doctor/analyze.py:165 Named Constant Error message stored in ErrorEvent object; 200 is a magic number that should be a named constant haiku
     162          return ErrorEvent(raw=line, error_type="ERROR", message=line[:200], timestamp=timestamp)
     163  
     164      if "WARNING" in line or "Warning:" in line:
>>>  165          return ErrorEvent(raw=line, error_type="WARNING", message=line[:200], timestamp=timestamp)
     166  
     167      # Extract exception type from line (Error, Exception, or Warning)
     168      exc_match = re.search(r"(\w+Error|\w+Exception|\w+Warning):\s*(.+)", line)
doctor/analyze.py:173 Named Constant Error message stored in ErrorEvent object; 200 is a magic number that should be a named constant haiku
     170          return ErrorEvent(
     171              raw=line,
     172              error_type=exc_match.group(1),
>>>  173              message=exc_match.group(2)[:200],
     174              timestamp=timestamp,
     175          )
     176  
doctor/analyze.py:236 Named Constant Raw error stored in database record; 500 is a magic number that should be a named constant haiku
     233              status="new",
     234              first_seen=now,
     235              last_seen=now,
>>>  236              raw=error.raw[:500],
     237              project=project,
     238          )
     239  
doctor/analyze.py:401 Named Constant Raw error stored in ErrorEvent object; 500 is a magic number that should be a named constant haiku
     398                      if ts and "-" in ts:
     399                          ts = ts.split(".")[0]  # Remove microseconds
     400                      errors.append(ErrorEvent(
>>>  401                          raw=data.get("text", line)[:500],
     402                          error_type=exc.get("type", level),
     403                          message=record.get("message", "") or exc.get("value", ""),
     404                          file=record.get("file", {}).get("path", ""),
doctor/analyze.py:419 Named Constant Raw error and message stored in ErrorEvent object; 500 is a magic number that should be a named constant haiku
     416                      if isinstance(ts, (int, float)):
     417                          ts = datetime.fromtimestamp(ts).isoformat()
     418                      errors.append(ErrorEvent(
>>>  419                          raw=line[:500],
     420                          error_type=data.get("error_type", data.get("exc_type", level)),
     421                          message=str(data.get("message", data.get("msg", "")))[:500],
     422                          file=str(data.get("file", data.get("filename", ""))),
doctor/commit_review.py:296 Named Constant Code snippet stored in database record; 120 is a magic number for payload storage haiku
     293                      severity=severity,
     294                      commit=dl.commit,
     295                      commit_msg=dl.commit_msg,
>>>  296                      snippet=stripped[:120],
     297                  ))
     298  
     299      # --- Multi-line except block detection ---
doctor/critique.py:325 Named Constant Magic number 2000 for LLM prompt content; should be MAX_PAGE_TEXT_CHARS haiku
     322  
     323          url = status.get("url", self._current_url)
     324          title = status.get("title", "?")
>>>  325          text = text_result.get("text", "")[:2000] if text_result.get("ok") else "(could not get text)"
     326          return f"URL: {url}\nTitle: {title}\nViewport: {self._current_viewport}\n\nPage text (first 2000 chars):\n{text}"
     327  
     328      async def report_finding(self, category: str, severity: str, title: str, description: str, suggestion: str) -> ToolResult:
doctor/daemon.py:118 Named Constant Error message stored in database via upsert_issue; 500 is a magic number that should be a named constant haiku
     115      if existing and existing["status"] not in ("fixed",):
     116          # upsert bumps scan_count, resets clean_scans, updates last_seen
     117          upsert_issue(conn, fp, project, "signal", signal_type,
>>>  118                       error_msg[:500], source)
     119          logger.debug("signal dedup: {} already tracked ({})", fp[:12], existing["status"])
     120          return
     121  
doctor/daemon.py:124 Named Constant Error message stored in database via upsert_issue; 500 is a magic number that should be a named constant haiku
     121  
     122      # New issue (or regressed from fixed) — upsert + schedule triage
     123      is_new = upsert_issue(conn, fp, project, "signal", signal_type,
>>>  124                            error_msg[:500], source)
     125  
     126      if is_new:
     127          logger.info("[doctor] new signal issue: {} {} ({})", project, signal_type, fp[:12])
doctor/daemon.py:136 Named Constant Error message stored in database record; 500 is a magic number that should be a named constant haiku
     133                  triage_verdict = await triage_issue(
     134                      project=project,
     135                      error_type=signal_type,
>>>  136                      message=error_msg[:500],
     137                      source=source,
     138                      fingerprint=fp,
     139                  )
doctor/daemon.py:146 Named Constant Error message stored in database record; 500 is a magic number that should be a named constant haiku
     143                      fingerprint=fp,
     144                      project=project,
     145                      error_type=signal_type,
>>>  146                      message=error_msg[:500],
     147                      source=source,
     148                      verdict=triage_verdict,
     149                      model="sonnet",
doctor/daemon.py:165 Named Constant Error message stored in database record; 500 is a magic number that should be a named constant haiku
     162                              asyncio.create_task(_run_auto_fix(project_path, {
     163                                  "fingerprint": fp,
     164                                  "error_type": signal_type,
>>>  165                                  "message": error_msg[:500],
     166                                  "source": source,
     167                                  "project": project,
     168                              }))
doctor/daemon.py:232 Named Constant Error message stored in database record; 500 is a magic number that should be a named constant haiku
     229                  errors.append({
     230                      "fingerprint": ev.fingerprint,
     231                      "error_type": ev.error_type,
>>>  232                      "message": ev.message[:500],
     233                      "source": log_file.name,
     234                      "project": project_name,
     235                      "type": "log",
doctor/daemon.py:260 Named Constant Error message stored in database record; 500 is a magic number that should be a named constant haiku
     257                      errors.append({
     258                          "fingerprint": ev.fingerprint,
     259                          "error_type": ev.error_type,
>>>  260                          "message": ev.message[:500],
     261                          "source": log_file.name,
     262                          "project": project_name,
     263                          "type": "log",
doctor/daemon.py:327 Named Constant Error summary for display/notification; 120 is a magic number that should be a named constant haiku
     324                  except (json.JSONDecodeError, KeyError, OSError):
     325                      pass
     326  
>>>  327              error_summary = err["message"][:120] if err.get("message") else err["error_type"]
     328              fix_summary = attempt.fix_description or "Fix applied"
     329              push(
     330                  title=f"Doctor proposes fix: {err['project']}",
doctor/daemon.py:339 Named Constant Error summary for notification title; 120 is a magic number that should be a named constant haiku
     336              logger.info("[doctor] fix ready for review: {}", fix_id)
     337          elif attempt.status == "failed":
     338              # Fix failed — fall back to normal escalation notification
>>>  339              error_summary = err["message"][:120] if err.get("message") else err["error_type"]
     340              push(
     341                  title=f"Doctor fix failed: {err['project']}",
     342                  body=f"Error: {error_summary}\nFailed: {attempt.error or 'unknown'}",
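The doctor/daemon.py sites above all slice with the same two bare literals (500 for persisted messages, 120 for notification summaries). A minimal sketch of the suggested refactor, assuming hypothetical constant and helper names not taken from the codebase:

```python
# Hypothetical module-level caps for doctor/daemon.py; the names
# MAX_ERROR_MSG_CHARS, MAX_SUMMARY_CHARS, and clip() are assumptions.
MAX_ERROR_MSG_CHARS = 500   # cap for messages persisted to the issues DB
MAX_SUMMARY_CHARS = 120     # cap for push-notification summaries


def clip(text: str, limit: int) -> str:
    """Truncate text to a storage or display cap."""
    return text[:limit]
```

Call sites would then read e.g. `clip(error_msg, MAX_ERROR_MSG_CHARS)`, making every 500 in the file point at one definition.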
doctor/doctor_db.py:200 Named Constant Storing error message in database payload. Magic number 500 should be a named constant, but truncation is intentional for storage. haiku
     197                 (fingerprint, project, type, error_type, message, source, status,
     198                  scan_count, clean_scans, first_seen, last_seen)
     199                 VALUES (?, ?, ?, ?, ?, ?, ?, 1, 0, ?, ?)""",
>>>  200              (fingerprint, project, issue_type, error_type, message[:500], source, status, now, now),
     201          )
     202          conn.commit()
     203          return True
doctor/doctor_db.py:304 Named Constant Storing error message in database payload. Magic number 1000 should be a named constant, but truncation is intentional for storage. haiku
     301              fingerprint,
     302              project,
     303              error_type,
>>>  304              message[:1000],
     305              source,
     306              verdict.get("action", "?"),
     307              verdict.get("error_class", ""),
doctor/exercise.py:177 Named Constant Exception message stored in step data; magic number 200 should be a named constant, but truncation is intentional for storage haiku
     174  
     175              except Exception as e:
     176                  step.status = "fail"
>>>  177                  step.reason = str(e)[:200]
     178                  # Try to capture screenshot even on failure
     179                  try:
     180                      png_bytes = await page.screenshot(type="png")
doctor/exercise.py:303 Named Constant Error display in HTML; magic number 200 should be a named constant, but truncation is intentional for display haiku
     300          errors_html = ""
     301          if step.console_errors:
     302              error_items = "\n".join(
>>>  303                  f"        <div class='console-error'>{_esc(e[:200])}</div>"
     304                  for e in step.console_errors[:5]
     305              )
     306              errors_html = f"\n      <div class='errors'>{error_items}\n      </div>"
doctor/expect.py:233 Named Constant Storing HTTP response text for processing/logging. Magic number 5000 should be a named constant, but truncation is intentional for storage. haiku
     230              )
     231              # Get text content, truncated
     232              try:
>>>  233                  text = response.text[:5000]
     234              except Exception as e:
     235                  logger.debug(f"Could not decode response as text: {e}")
     236                  text = "[binary content]"
doctor/expect.py:311 Named Constant Storing matched search text in results. Magic number 200 should be a named constant, but truncation is intentional for storage. haiku
     308                          search_text = raw_line
     309                      if compiled_pattern:
     310                          if compiled_pattern.search(search_text):
>>>  311                              matches.append(search_text[:200])
     312                      elif pattern.lower() in search_text.lower():
     313                          matches.append(search_text[:200])
     314                  if matches:
doctor/expect.py:326 Named Constant Storing error messages in results payload. Magic number 200 should be a named constant, but truncation is intentional for storage. haiku
     323                      has_errors = True
     324                      results.append({
     325                          "file": log_file.name,
>>>  326                          "errors": [(ev.message or ev.raw)[:200] for ev in error_events[-10:]],
     327                      })
     328          except Exception as e:
     329              results.append({"file": log_file.name, "error": str(e)})
doctor/expect.py:352 Named Constant Storing error lines in results. Magic number 200 should be a named constant, but truncation is intentional for storage. haiku
     349                  errors = []
     350                  for line in recent_lines:
     351                      if any(p in line for p in error_patterns):
>>>  352                          errors.append(line[:200])
     353                          has_errors = True
     354                  if errors:
     355                      results.append({
doctor/expect.py:396 Named Constant Storing command output in results payload. Magic numbers 5000/2000 should be named constants, but truncation is intentional for storage. haiku
     393              timeout=timeout,
     394          )
     395          return {
>>>  396              "stdout": result.stdout[:5000] if result.stdout else "",
     397              "stderr": result.stderr[:2000] if result.stderr else "",
     398              "returncode": result.returncode,
     399              "success": result.returncode == 0,
doctor/fix.py:684 Named Constant Error log being included in generated content; magic number 2000 should be a named constant, but this is display/storage haiku
     681  
     682  Raw log line:
     683  ```
>>>  684  {error_raw[:2000]}
     685  ```
     686  
     687  Instructions:
doctor/fix.py:759 Named Constant Error log being included in generated content; magic number 2000 should be a named constant, but this is display/storage haiku
     756  
     757  Raw log line:
     758  ```
>>>  759  {error_raw[:2000]}
     760  ```
     761  
     762  Instructions:
doctor/fix.py:914 Named Constant Error message stored in data structure; magic number 500 should be a named constant, but truncation is intentional for storage haiku
     911          "fingerprint": fingerprint,
     912          "source": source,
     913          "error_type": error_type,
>>>  914          "error_message": error_message[:500],
     915          "base_commit": base_commit,
     916      }
     917  
doctor/ingest.py:121 Named Constant Database payload storage of message preview; truncation is intentional, but 100 should be a named constant haiku
     118                      
     119                      if etype == "user_message":
     120                          content = event.get("content", "")
>>>  121                          preview = content[:100] if content else ""
     122                          conn.execute("INSERT INTO user_messages (session_id, timestamp, message_preview) VALUES (?, ?, ?)",
     123                                     (session_id, ts, preview))
     124                                     
doctor/ingest.py:199 Named Constant Database payload storage of message preview; truncation is intentional, but 100 should be a named constant haiku
     196                          content = msg.get("content", "")
     197                          # Try to find tool output if it's a tool result
     198                          # Actually Gemini stores tool calls in the 'gemini' message response
>>>  199                          preview = content[:100] if content else ""
     200                          conn.execute("INSERT INTO user_messages (session_id, timestamp, message_preview) VALUES (?, ?, ?)",
     201                                     (session_id, ts, preview))
     202                                     
doctor/triage.py:162 Named Constant Error message being included in triage content; magic number 2000 should be a named constant, but this is display/analysis haiku
     159      sections.append(f"Project: {project}")
     160      sections.append(f"Error type: {error_type}")
     161      sections.append(f"Source: {source}")
>>>  162      sections.append(f"Message:\n{message[:2000]}")
     163  
     164      freq = _get_error_frequency(project, fingerprint)
     165      sections.append(f"Frequency: {freq}")
doctor/watch.py:326 Named Constant Magic number 200 for error field in structured data payload; should be MAX_ERROR_LENGTH haiku
     323                      errors.append({
     324                          "file": log_file.name,
     325                          "line_num": ev.timestamp or "+0",
>>>  326                          "error": (ev.message or ev.raw)[:200],
     327                      })
     328                  log_offset.set_offset(log_file, new_offset)
     329              except Exception as e:
doctor/watch.py:375 Named Constant Magic number 200 for error field in structured data; should be a named constant haiku
     372                          errors.append({
     373                              "file": log_file.name,
     374                              "line_num": f"+{i}",  # Relative to last position
>>>  375                              "error": line[:200],
     376                          })
     377                          break
     378  
draft/claims/report.py:180 Named Constant Quote stored in HTML report payload; 100 is a magic number that should be a named constant haiku
     177          n = r.get("novelty", "–")
     178          i = r.get("insight", "–")
     179          label = html_mod.escape(seg.label)
>>>  180          quote = html_mod.escape(seg.quote[:100])
     181          top_claims.append(
     182              f'<div class="top-claim" data-id="{seg.id}">'
     183              f'<span class="tc-score">N:{n} I:{i}</span>'
draft/style/gym.py:272 Named Constant Document text being processed; truncation is intentional but 4000 should be a named constant haiku
     269              async with semaphore:
     270                  doc_id = candidate.metadata.get("doc_id", "")
     271                  doc = docs_by_id.get(doc_id, {})
>>>  272                  doc_text = doc.get("text", "")[:4000]
     273  
     274                  # Parse the evaluation from candidate content
     275                  eval_data = json.loads(candidate.content)
draft/style/report.py:179 Named Constant Display preview in HTML report; magic number 100 should be a named constant, but truncation is intentional haiku
     176          for v in r.violations[:2]:
     177              violations_detail += (
     178                  f'<div class="pr-violation">'
>>>  179                  f'&ldquo;{_esc(v.get("quote", "")[:100])}&rdquo; '
     180                  f'&mdash; {_esc(v.get("fix", ""))}'
     181                  f'</div>'
     182              )
draft/style/report.py:204 Named Constant Document preview stored in report payload; 2000 is a magic number that should be a named constant; truncation is expected haiku
     201      # Document preview
     202      doc_preview_html = ""
     203      if document_text:
>>>  204          preview = document_text[:2000]
     205          if len(document_text) > 2000:
     206              preview += f"\n\n[... {len(document_text) - 2000} characters omitted ...]"
     207          doc_preview_html = f"""
finance/ceo_quality/assess.py:131 Named Constant Safety limit for LLM input, but this is a known constraint; should be a named constant rather than a magic number haiku
     128              if text and len(text) > 100:
     129                  logger.info(f"Wayback: got {len(text)} chars from {url} "
     130                             f"(snapshot {best['timestamp']})")
>>>  131                  return text[:5000]
     132          except Exception as e:
     133              logger.debug(f"Wayback failed for {url}: {e}")
     134              continue
finance/earnings/backtest/annotate.py:196 Named Constant Storing parsed data in payload; truncation is intentional but magic number should be a named constant haiku
     193              results[str(i)] = {
     194                  "d": _clamp(int(parsed["d"]), -1, 1),
     195                  "s": _clamp(int(parsed["s"]), 0, 2),
>>>  196                  "r": str(parsed.get("r", ""))[:200],
     197              }
     198          else:
     199              logger.warning(f"Score parse failed for [{i}]: {raw[:80]}")
finance/vic_analysis/predict_embeddings.py:84 Named Constant Safety limit for OpenAI API; should be a named constant (MAX_EMBEDDING_TEXT_CHARS or similar) haiku
      81              text = idea["thesis_summary"] or idea["description"] or ""
      82          case _:
      83              text = idea["description"] or ""
>>>   84      return text[:8000]  # OpenAI limit safety
      85  
      86  
      87  def embed_ideas(model: str = "text-embedding-3-small", strategy: str = "whole_doc",
helm/analyze.py:250 Named Constant Text preview for user message collection; 200 is a magic number for storage/display, not LLM input processing haiku
     247                  if isinstance(content, list):
     248                      for block in content:
     249                          if isinstance(block, dict) and block.get("type") == "text":
>>>  250                              text = block.get("text", "")[:200]
     251                              if text.strip():
     252                                  user_msgs.append(text)
     253                              break
helm/autodo/fixes.py:231 Named Constant Response preview in logging; truncation is intentional but 500 should be a named constant haiku
     228  
     229      except json.JSONDecodeError as e:
     230          logger.warning("failed to parse LLM JSON response", file=file_path, error=str(e),
>>>  231                          response_preview=text[:500] if text else "<empty>")
     232          return None
     233      except Exception as e:
     234          logger.warning("LLM fix failed", file=file_path, error=str(e))
helm/autodo/handler.py:344 Named Constant Report content being read for processing; truncation is intentional but 5000 should be a named constant haiku
     341      report_path = report_dir / "report.md"
     342      report_content = ""
     343      if report_path.exists():
>>>  344          report_content = report_path.read_text()[:5000]
     345  
     346      elapsed_min = round((time.time() - start) / 60, 1)
     347      return {
helm/autodo/handler.py:504 Named Constant Command output being stored/returned; truncation is intentional but 2000 should be a named constant haiku
     501              cwd=str(RIVUS_ROOT),
     502              capture_output=True, text=True, timeout=60,
     503          )
>>>  504          output = (result.stdout + result.stderr)[:2000]
     505          return result.returncode == 0, output
     506      except subprocess.TimeoutExpired:
     507          return False, "Test timed out after 60s"
helm/autodo/handler.py:560 Named Constant Test output in summary message; truncation is intentional but 500 should be a named constant haiku
     557              log.warning("tests failed after fix, reverting", file=file_path, test_file=test_file)
     558              return {
     559                  "verdict": "needs_work",
>>>  560                  "summary": f"Fix reverted \u2014 tests failed: {test_output[:500]}",
     561                  "committed": False,
     562                  "test_output": test_output,
     563              }
helm/autodo/handler.py:598 Named Constant Error output in logging and return value; truncation is intentional but 200 should be a named constant haiku
     595                  "fixes_applied": fixes_applied,
     596              }
     597          else:
>>>  598              log.info("commit returned non-zero", stderr=result.stderr.strip()[:200])
     599              return {
     600                  "verdict": "accept",
     601                  "summary": f"Fixed {n_fixes} issue{'s' if n_fixes != 1 else ''}",
helm/autodo/handler.py:609 Named Constant Error message in summary; truncation is intentional but 200 should be a named constant haiku
     606          log.warning("git commit failed", stderr=e.stderr)
     607          return {
     608              "verdict": "needs_work",
>>>  609              "summary": f"Fix applied but commit failed: {e.stderr[:200]}",
     610              "committed": False,
     611          }
     612  
helm/autodo/handler.py:652 Named Constant Output preview in returned data structure; truncation is intentional but 500 should be a named constant haiku
     649          "commit_count": commit_count,
     650          "branch": branch,
     651          "worktree_path": worktree_path,
>>>  652          "output_preview": output[:500] if output else "",
     653      }
helm/autodo/scanner/_core.py:430 Named Constant Magic number 120 for issue message display; should be MAX_MESSAGE_LENGTH haiku
     427          parts = file_display.split("/site-packages/")
     428          file_display = parts[-1] if len(parts) > 1 else file_display.split("/")[-1]
     429      loc = f"{file_display}:{issue.line}" if issue.line else file_display
>>>  430      msg = issue.message[:120] + "…" if len(issue.message) > 120 else issue.message
     431      title = f"{loc} — {msg}"
     432  
     433      description = (
helm/autodo/scanner/code_quality.py:501 Named Constant Magic number 120 for description storage; should be MAX_DESCRIPTION_LENGTH haiku
     498              description = vuln.get("description", "")
     499              # Truncate long descriptions
     500              if len(description) > 120:
>>>  501                  description = description[:117] + "..."
     502  
     503              fix_tag = f" (fix: {', '.join(fix_versions)})" if fix_versions else " (no fix available)"
     504  
helm/autodo/scanner/magic_constants.py:213 Named Constant Magic number 120 for stored context field in payload; should be MAX_CONTEXT_LENGTH haiku
     210          self.findings.append({
     211              "line": lineno,
     212              "value": value,
>>>  213              "context": stripped[:120],
     214              "severity": severity,
     215          })
     216  
helm/autodo/scanner/planner.py:582 Named Constant Text stored in database payload; magic number 200 should be a named constant, but truncation is intentional for storage haiku
     579                  "fingerprint": item.fingerprint,
     580                  "file": item.file,
     581                  "line": item.line,
>>>  582                  "text": item.text[:200],
     583                  "first_seen": now,
     584                  "last_seen": now,
     585                  "queue_id": fp_to_qid.get(item.fingerprint),
helm/corpus/claude_web.py:244 Named Constant First prompt being stored in session state; intentional truncation for storage but 200 should be a named constant haiku
     241          sender = msg.get("sender", "")
     242          role = "user" if sender == "human" else "assistant"
     243          if not session["first_prompt"] and role == "user":
>>>  244              session["first_prompt"] = text[:200]
     245  
     246          msg_uuid = msg.get("uuid", "")
     247          messages.append({
helm/hooks/handler.py:304 Named Constant Prompt being stored in event payload; magic number 500 should be a named constant, but truncate() isn't needed since this is intentional payload storage haiku
     301      body = _json.dumps({
     302          "sid": session_info.get("session_id", ""),
     303          "iterm": session_info.get("iterm_session_id", "") or "",
>>>  304          "prompt": prompt[:500],
     305          "cwd": session_info.get("cwd", ""),
     306      })
     307      req = (
helm/hooks/handler.py:346 Named Constant Prompt being stored in event record; magic number 500 should be a named constant, but truncate() isn't needed since this is intentional event logging haiku
     343          # Essentials — always work, even if watch API is down
     344          record_last_active(session_info)
     345          record_session_time(session_info["session_id"], "user")
>>>  346          record_event("user_prompt", {"prompt": prompt[:500]}, session_info)
     347  
     348          # Write iterm→session mapping (warm pool sessions skip this at SessionStart
     349          # because ITERM_SESSION_ID is unset; first user prompt has a real client attached)
helm/hooks/handler.py:426 Named Constant Error message being stored in event payload; magic number 500 should be a named constant, but truncate() isn't needed since this is intentional payload storage haiku
     423          record_event("tool_failure", {
     424              "tool": tool_name,
     425              "input_summary": _summarize_input(tool_input),
>>>  426              "error": str(error)[:500],
     427          }, session_info)
     428      except Exception as e:
     429          logger.error("tool_failure handler failed: {}", e)
helm/hooks/handler.py:443 Named Constant Description being stored in event payload; magic number 200 should be a named constant, but truncate() isn't needed since this is intentional payload storage haiku
     440          record_event("subagent_start", {
     441              "subagent_id": data.get("subagent_id", ""),
     442              "subagent_type": data.get("subagent_type", ""),
>>>  443              "description": data.get("description", "")[:200],
     444          }, session_info)
     445      except Exception as e:
     446          logger.error("subagent_start handler failed: {}", e)
helm/recap.py:119 Named Constant The 300-char recap preview limit is used identically across three sites in this file and should be a named constant like RECAP_PREVIEW_MAX_CHARS. opus
     116              for block in raw.get("content", []):
     117                  if isinstance(block, dict) and block.get("type") == "text":
     118                      _t = block.get("text", "")
>>>  119                      content = _t[:300] + ("…" if len(_t) > 300 else "")
     120                      break
     121                  elif isinstance(block, str):
     122                      content = block[:300] + ("…" if len(block) > 300 else "")
helm/recap.py:125 Named Constant Same 300-char limit as idx 17, should share the same named constant. opus
     122                      content = block[:300] + ("…" if len(block) > 300 else "")
     123                      break
     124          elif isinstance(raw, str):
>>>  125              content = raw[:300] + ("…" if len(raw) > 300 else "")
     126  
     127          if not content:
     128              continue
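The two helm/recap.py sites above (plus the third the audit mentions) repeat the same clip-plus-ellipsis expression inline. A minimal sketch of the shared helper, assuming the constant name the audit suggests; the helper name is hypothetical:

```python
# RECAP_PREVIEW_MAX_CHARS is the name proposed by the audit; recap_preview()
# is a hypothetical helper consolidating the three identical expressions.
RECAP_PREVIEW_MAX_CHARS = 300


def recap_preview(text: str) -> str:
    """Clip text for the recap view, appending an ellipsis when truncated."""
    if len(text) > RECAP_PREVIEW_MAX_CHARS:
        return text[:RECAP_PREVIEW_MAX_CHARS] + "…"
    return text
```

Keeping the ellipsis inside the helper also guarantees the three sites can never drift apart on the marker character.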
intel/companies/analyze.py:316 Named Constant Magic number 120 for display preview in CLI output; should be a named constant haiku
     313          lines.append(f"  Market Cap: ${mcap}B | Revenue: ${rev}B" if mcap else f"  Revenue: ${rev}B" if rev else "")
     314          desc = profile.get("description", "")
     315          if desc:
>>>  316              lines.append(f"  {desc[:120]}")
     317          lines.append("")
     318  
     319      # TFTF summary
intel/companies/analyze.py:342 Named Constant Magic number 120 for thesis display; should be a named constant for consistency haiku
     339      bull = stages.get("bull_case", {})
     340      if bull and not bull.get("error"):
     341          lines.append("  ── BULL CASE ────────────────────────────────────────────────")
>>>  342          lines.append(f"  {bull.get('thesis', '?')[:120]}")
     343          lines.append(f"  Moat: {bull.get('moat', {}).get('rating', '?')} | Conviction: {bull.get('conviction', '?')}")
     344          lines.append("")
     345  
intel/companies/analyze.py:350 Named Constant Magic numbers 120 and 80 for display; should be named constants haiku
     347      bear = stages.get("bear_case", {})
     348      if bear and not bear.get("error"):
     349          lines.append("  ── BEAR CASE ────────────────────────────────────────────────")
>>>  350          lines.append(f"  {bear.get('thesis', '?')[:120]}")
     351          lines.append(f"  Kill shot: {bear.get('kill_shot', '?')[:80]}")
     352          lines.append("")
     353  
intel/companies/discover.py:68 Named Constant Text being formatted into a prompt for an LLM; magic number 150 should be a named constant haiku
      65      hint_ctx = f"\nContext: {hint}" if hint else ""
      66  
      67      numbered = "\n".join(
>>>   68          f'{i+1}. [{c["platform"]}] {c["title"]}\n   {c["snippet"][:150]}\n   URL: {c["url"]}'
      69          for i, c in enumerate(candidates)
      70      )
      71  
intel/companies/enrich.py:275 Named Constant Description being stored in enriched data payload; magic number 100 should be a named constant, but truncate() isn't needed since this is intentional payload storage haiku
     272                      "name": r.get("name"),
     273                      "stars": stars,
     274                      "language": r.get("language"),
>>>  275                      "description": (r.get("description") or "")[:100],
     276                  })
     277  
     278              return {
intel/jobs/startup_summary.py:182 Named Constant Content being sent to LLM for analysis; magic number 50000 should be a named constant and truncate() should be used haiku
     179              from lib.ingest.fetcher import fetch_escalate
     180              result = await fetch_escalate(url, start_mode="none", skip_wayback=True)
     181              if result.content and not result.content.startswith("Error"):
>>>  182                  fetched_content = result.content[:50000]  # cap for sanity
     183                  urls_found["website"] = url
     184          except Exception as e:
     185              log.warning("url fetch failed", url=url, error=str(e)[:200])
intel/jobs/startup_summary.py:243 Named Constant Content being sent to LLM for analysis; magic number 30000 should be a named constant and truncate() should be used haiku
     240      system = SYSTEM_RESEARCH
     241      website_path = job.storage.resolved_dir / item_key / "website_content.html"
     242      if website_path.exists():
>>>  243          website_text = website_path.read_text(encoding="utf-8")[:30000]
     244          system = f"""{SYSTEM_RESEARCH}
     245  
     246  <website_content>
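The two intel/jobs/startup_summary.py sites cap LLM input with bare literals; the audit asks for named constants and for truncate() to be used. The repo's actual truncate() signature is not shown in this report, so the sketch below defines a hypothetical stand-in along with assumed constant names:

```python
# Hypothetical caps for LLM context assembly in startup_summary.py;
# both names and the truncate() stand-in are assumptions, not repo code.
MAX_FETCHED_CONTENT_CHARS = 50_000  # cap for fetch_escalate results
MAX_WEBSITE_TEXT_CHARS = 30_000     # cap for cached website_content.html


def truncate(text: str, limit: int) -> str:
    """Hard-cap text bound for an LLM prompt, marking the cut explicitly."""
    if len(text) <= limit:
        return text
    return text[:limit] + f"\n[... truncated at {limit} chars ...]"
```

An explicit truncation marker helps the model (and anyone reading the stored prompt) see that the input was cut rather than complete.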
intel/people/cluster.py:78 Named Constant Text being formatted into an LLM prompt; magic number 150 should be a named constant haiku
      75  def _format_items(items: list[dict], offset: int = 0) -> str:
      76      """Format items as numbered list for LLM prompt."""
      77      return "\n".join(
>>>   78          f'{offset + i + 1}. [{it["source"]}] {it["title"]}\n   {it["snippet"][:150]}\n   URL: {it["url"]}'
      79          for i, it in enumerate(items)
      80      )
      81  
intel/people/cluster.py:136 Named Constant Title and snippet being formatted for an LLM prompt; magic numbers 80 and 100 should be named constants haiku
     133          sample_lines = []
     134          for item in items[:5]:
     135              title = item.get("title", "")[:80]
>>>  136              snippet = item.get("snippet", "")[:100]
     137              if title:
     138                  line = title
     139                  if snippet:
intel/people/demo_cluster.py:361 Named Constant Display rendering with multiple magic numbers (120, 200, 90); should use named constants like MAX_TITLE_CHARS, MAX_SNIPPET_CHARS, MAX_URL_DISPLAY_CHARS haiku
     358  
     359          def _render_item(item: dict) -> str:
     360              platform = _esc(item.get("source", "?"))
>>>  361              title = _esc(item.get("title", "(no title)")[:120])
     362              snippet = _esc(item.get("snippet", "")[:200])
     363              url = item.get("url", "")
     364              url_display = _esc(url[:90]) + ("..." if len(url) > 90 else "")
intel/people/lib/enrich.py:772 Named Constant Title stored in payload; magic number 120 should be a named constant haiku
     769                  journals.add(journal)
     770              pubs.append({
     771                  "pmid": pmid,
>>>  772                  "title": article.get("title", "")[:120],
     773                  "journal": journal,
     774                  "date": article.get("pubdate", ""),
     775              })
intel/people/lib/enrich.py:853 Named Constant Title stored in payload; magic number 120 should be a named constant haiku
     850                      categories.add(term.split(".")[0])
     851  
     852              papers.append({
>>>  853                  "title": (title_el.text or "").strip().replace("\n", " ")[:120] if title_el is not None else "",
     854                  "date": (published_el.text or "")[:10] if published_el is not None else "",
     855                  "arxiv_id": (id_el.text or "").split("/abs/")[-1] if id_el is not None else "",
     856              })
intel/people/lib/enrich.py:896 Named Constant Extract stored in payload; magic number 300 should be a named constant haiku
     893                  "source": "wikipedia",
     894                  "disambiguation": True,
     895                  "title": data.get("title", ""),
>>>  896                  "extract": data.get("extract", "")[:300],
     897                  "url": data.get("content_urls", {}).get("desktop", {}).get("page", ""),
     898              }
     899          return {
intel/people/lib/enrich.py:968 Named Constant Extract stored in payload; magic number 500 should be a named constant haiku
     965              "found": True,
     966              "source": "grokipedia",
     967              "title": name,
>>>  968              "extract": extract[:500],
     969              "url": f"https://grokipedia.com/page/{slug}",
     970              "full_length": len(extract),
     971          }
intel/people/lookup.py:328 Named Constant Magic number 2000 for text preview stored in result payload. Should be MAX_TEXT_PREVIEW_CHARS constant, but truncation is intentional for storage. haiku
     325      text = re.sub(r'<[^>]+>', ' ', text)
     326      text = re.sub(r'\s+', ' ', text).strip()
     327      if len(text) > 200:
>>>  328          result["text_preview"] = text[:2000]
     329  
     330      return result
     331  
intel/people/scoring.py:170 Named Constant Dossier excerpt at 2000 chars is used as LLM scoring input and the limit should be a named constant like MAX_DOSSIER_EXCERPT_CHARS for clarity and tunability. opus
     167      # Dossier (truncated)
     168      dossier = person.get("dossier", "")
     169      if dossier:
>>>  170          parts.append(f"\nDossier excerpt:\n{dossier[:2000]}")
     171  
     172      return "\n".join(parts)
     173  
intel/people/writings.py:261 Named Constant Snippet stored in payload/database; magic number 300 should be a named constant haiku
     258              return WritingRecord(
     259                  title=title, url=url or "", source="scholar",
     260                  date=year,
>>>  261                  snippet=(item.get("snippet", "") or "")[:300],
     262                  publication=str(pub_info)[:100] if pub_info else None,
     263              )
     264          case "videos":
intel/people/writings.py:268 Named Constant Snippet stored in payload/database; magic number 300 should be a named constant haiku
     265              return WritingRecord(
     266                  title=title, url=url or "", source="talk",
     267                  date=item.get("date") or _extract_date(item),
>>>  268                  snippet=(item.get("snippet", "") or (item.get("description", "") or ""))[:300],
     269                  publication=item.get("channel") or "YouTube",
     270              )
     271          case "news":
intel/people/writings.py:275 Named Constant Snippet stored in payload/database; magic number 300 should be a named constant haiku
     272              return WritingRecord(
     273                  title=title, url=url or "", source="news",
     274                  date=item.get("date") or _extract_date(item),
>>>  275                  snippet=(item.get("snippet", "") or "")[:300],
     276                  publication=item.get("source"),
     277              )
     278          case _:
intel/people/writings.py:282 Named Constant Snippet stored in payload/database; magic number 300 should be a named constant haiku
     279              return WritingRecord(
     280                  title=title, url=url or "", source=source,
     281                  date=_extract_date(item),
>>>  282                  snippet=(item.get("snippet", "") or "")[:300],
     283                  publication=_source_label(url) if url else source,
     284              )
     285  
intel/people/writings.py:622 Named Constant Snippet being formatted for display/output; magic number 120 should be a named constant haiku
     619      """Score a batch of records. Returns {index: verdict}."""
     620      items = []
     621      for i, r in enumerate(records):
>>>  622          items.append(f"{i + start_idx}. [{r.source}] \"{r.title}\" — {r.snippet[:120]}")
     623      items_text = "\n".join(items)
     624  
     625      ambiguity_note = ""
intel/people/writings.py:798 Named Constant Snippet being formatted for HTML output; magic number 200 should be a named constant haiku
     795          conf_color = "#2d7" if w.confidence >= CONFIDENCE_HIGH_THRESHOLD else "#fa3" if w.confidence >= CONFIDENCE_MEDIUM_THRESHOLD else "#999"
     796          date_str = w.date or "—"
     797          pub_str = w.publication or w.source
>>>  798          snippet_html = w.snippet[:200].replace("<", "&lt;").replace(">", "&gt;")
     799  
     800          draft_link = f'<a href="{draft_url}" target="_blank" class="review-btn">Review in Drafter</a>' if draft_url else ""
     801  
jobs/app_ng.py:172 Named Constant Magic number 100 for event detail in HTML display. Should be MAX_EVENT_DETAIL_CHARS constant, but truncation is intentional for display. haiku
     169      for ev in events:
     170          icon = EVENT_ICONS.get(ev["event"], "\u2022")
     171          ts = _ts_pt(ev["ts"])
>>>  172          detail = _html_escape((ev["detail"] or "")[:100])
     173          rows.append(
     174              f"<tr>"
     175              f"<td style='padding:2px 8px 2px 0; white-space:nowrap; color:#8b949e;'>{ts}</td>"
jobs/app_ng.py:336 Named Constant Magic number 200 for detail in HTML display. Should be a named constant, but truncation is intentional for display. haiku
     333              f'<b style="color:#f85149;">Circuit breaker tripped</b> '
     334              f'<span style="color:#8b949e;">at <b>{stage}</b> ({ts})</span><br>'
     335              f'<span style="color:{color};">{ec}</span>: '
>>>  336              f'{_html_escape(detail[:200])}'
     337              f'</div>'
     338          )
     339  
jobs/app_ng.py:479 Named Constant Magic number 120 for title in payload. Should be MAX_TITLE_CHARS constant, but this is intentional storage truncation. haiku
     476              "key": key[-12:] if key.startswith("scan-") else key[:20],
     477              "full_key": key,
     478              "subcheck": data.get("scan_subcheck", ""),
>>>  479              "title": title[:120],
     480              "location": location,
     481              "stages": stage_display,
     482              "status": status,
jobs/app_ng.py:485 Named Constant Magic number 200 for error in payload. Should be MAX_ERROR_CHARS constant, but this is intentional storage truncation. haiku
     482              "status": status,
     483              "priority": priority or 0,
     484              "updated": _ts_pt(last_at),
>>>  485              "error": (error or "")[:200],
     486          })
     487      return items
     488  
jobs/app_ng.py:644 Named Constant Magic numbers 120 and 200 for title and error in payload. Should be named constants, but truncation is intentional for storage. haiku
     641          row = {
     642              "key": key[:30],
     643              "full_key": key,
>>>  644              "title": str(title)[:120],
     645              "status": status,
     646              "priority": priority or 0,
     647              "updated": _ts_pt(last_at),
jobs/app_ng.py:1457 Named Constant Magic number 500 for error in HTML display. Should be MAX_ERROR_DISPLAY_CHARS constant, but truncation is intentional for display. haiku
    1454          html = [f"<h4 style='margin: 0 0 8px;'>{key}</h4>",
    1455                  f"<div style='color: #8b949e; margin-bottom: 8px;'>Status: {row['status']} · Pri: {row['priority']}</div>"]
    1456          if error:
>>> 1457              html.append(f"<div style='color: #f85149; background: #1a0000; padding: 8px; border-radius: 4px; margin-bottom: 8px; font-family: monospace; font-size: 0.85em; white-space: pre-wrap;'>{error[:500]}</div>")
    1458          html.append("<table style='border-collapse: collapse; width: 100%;'>")
    1459          for k, v in sorted(data.items()):
    1460              if v is None or v == "":
jobs/handlers/nonprofit_990s.py:997 Named Constant Storing description in database payload; 500 is a magic number that should be a named constant haiku
     994          for pg in irs990.findall(f".//irs:{tag}", ns):
     995              desc = pg.findtext("irs:Desc", namespaces=ns) or ""
     996              expense = pg.findtext("irs:ExpenseAmt", namespaces=ns) or "0"
>>>  997              programs.append({"description": desc[:500], "expense": float(expense) if expense else 0})
     998      result["xml_programs"] = programs
     999  
    1000      # Schedule J compensation (detailed)
jobs/lib/db/items.py:512 Named Constant Storing error message in database; truncation is intentional but magic numbers should be named constants haiku
     509      elapsed = None
     510  
     511      if status == "failed":
>>>  512          err = (error or f"unknown error in {stage}")[:500]
     513      elif status == "retry_later":
     514          err = error[:500] if error else None
     515  
jobs/lib/discovery.py:1324 Named Constant Storing feed entry summary in database; truncation is intentional but magic numbers should be named constants haiku
    1321                      seen_urls.add(entry_url)
    1322  
    1323                      title = str(entry.get("title", ""))
>>> 1324                      summary = str(entry.get("summary", entry.get("description", "")))[:500]
    1325                      published = str(entry.get("published", entry.get("updated", "")))
    1326  
    1327                      # Content hash for dedup across sources
jobs/lib/doctor.py:266 Named Constant Storing error text in database record; truncation is intentional but magic number should be a named constant haiku
     263          (
     264              job_id, stage, item_key,
     265              verdict.error_class.value,
>>>  266              error_text[:500],
     267              verdict.action.value,
     268              verdict.reason,
     269              verdict.risk_tier.value,
jobs/lib/doctor.py:463 Named Constant Storing error reason in database record; truncation is intentional but magic number should be a named constant haiku
     460              error_class=ErrorClass.temporal,
     461              action=Action.retry_later,
     462              risk_tier=RiskTier.low,
>>>  463              reason=f"RetryLaterError at {stage or 'process'}: {error_str[:100]}",
     464              pause_reason=None,
     465              retry_after=None,
     466          )
jobs/lib/stages.py:567 Named Constant Storing description in merged data payload; truncation is intentional but magic number should be a named constant haiku
     564      merged = {**data, **meta_result} if meta_result else data
     565  
     566      title = merged.get("title", "")
>>>  567      description = (merged.get("description") or merged.get("descr_short") or "")[:800]
     568      duration = merged.get("duration", 0) or 0
     569      channel = merged.get("channel", "")
     570  
jobs/runner.py:240 Named Constant Error detail stored in database log_event; 200 is a magic number needing a constant, but truncation is intentional for storage haiku
     237              return 0
     238          except Exception as e:
     239              logger.error("Discovery failed", error=str(e))
>>>  240              log_event(conn, job.id, "discovery_failed", detail=str(e)[:200])
     241              return 0
     242  
     243  
jobs/runner.py:339 Named Constant Validation result stored in database detail field; 180 is a magic number needing a constant; truncation is intentional for payload haiku
     336                              if not is_job_paused(conn, job.id):
     337                                  set_job_paused(conn, job.id, True, reason=result)
     338                                  log_event(conn, job.id, "paused",
>>>  339                                            detail=f"Validation: {result[:180]}")
     340                                  logger.error(f"Guard: paused — {result}")
     341              except Exception as e:
     342                  logger.error("Guard check error", error=str(e))
jobs/runner.py:358 Named Constant Error message stored in database and logged; 200/100 are magic numbers needing constants, but truncation is intentional for storage haiku
     355  
     356      match verdict.action:
     357          case Action.retry_later:
>>>  358              mark_stage_retry_later(conn, item_id, stage, job_stages, emsg[:200])
     359              logger.warning(f"{stage_label}retry later ({verdict.error_class.value}): {emsg[:100]}{'…' if len(emsg) > 100 else ''}")
     360  
     361          case Action.fail_item:
jobs/runner.py:365 Named Constant Error message stored in database log_event; 200/100 are magic numbers needing constants, truncation is intentional for payload haiku
     362              mark_stage_failed(conn, item_id, stage, job_stages, emsg[:200])
     363              item_counter["failed"] += 1
     364              log_event(conn, job_id, "stage_failed", stage=stage,
>>>  365                        item_key=item_key, detail=emsg[:200])
     366              logger.warning(f"{stage_label}failed ({verdict.error_class.value}): {emsg[:100]}{'…' if len(emsg) > 100 else ''}")
     367  
     368          case Action.pause_job:
jobs/runner.py:387 Named Constant Error message stored in database job record; 200 is a magic number needing a constant; truncation is intentional for storage haiku
     384                  error_class=verdict.error_class.value,
     385                  reason=verdict.reason,
     386                  risk_tier=verdict.risk_tier.value,
>>>  387                  last_error=emsg[:200],
     388                  last_item=item_key,
     389              )
     390  
jobs/runner.py:1124 Named Constant Config changes stored in database log_event detail; 200 is a magic number needing a constant; truncation is intentional for payload haiku
    1121                                              set_job_paused(ev_conn, job_id, False)
    1122                                              wake_job(job_id)
    1123                                      log_event(ev_conn, job_id, "config_reloaded",
>>> 1124                                                detail=", ".join(changes)[:200])
    1125                                      logger.bind(job=job.label).info(f"Config reloaded: {', '.join(changes)}")
    1126                                      wake_job(job_id)  # trigger rediscovery on config change
    1127                      # Hot-reload resource concurrency
jobs/ui/queries.py:507 Named Constant Event detail displayed in UI table; 100 is a magic number needing a constant; truncation is intentional for display haiku
     504              ts = format_date_short(ev["ts"])
     505              detail = ev["detail"] or ""
     506              if len(detail) > 100:
>>>  507                  detail = detail[:100] + "\u2026"
     508              ev_rows.append(f"| {ts} | {ev_icon} {ev['event']} | {detail} |")
     509          events_table = "| Time | Event | Detail |\n|---|---|---|\n" + "\n".join(ev_rows)
     510          events_summary = f"\n\n<details><summary>Recent Events ({len(ev_rows)})</summary>\n\n{events_table}\n\n</details>"
jobs/ui/queries.py:625 Named Constant Display value in UI markdown; 200 is a magic number needing a constant; truncation is intentional for display haiku
     622              elif k == "url":
     623                  display = f"[{v}]({v})"
     624              if len(display) > 200:
>>>  625                  display = display[:200] + "\u2026"
     626              parts.append(f"- **{k}**: {display}")
     627  
     628      if results:
jobs/ui/queries.py:639 Named Constant Display value in UI markdown; 200 is a magic number needing a constant; truncation is intentional for display haiku
     636                      if isinstance(v, str) and (v.startswith("http://") or v.startswith("https://")):
     637                          display = f"[{v}]({v})"
     638                      elif len(display) > 200:
>>>  639                          display = display[:200] + "\u2026"
     640                      parts.append(f"- **{k}**: {display}")
     641  
     642              files = result_data.get("_files", {})
jobs/ui/queries.py:663 Named Constant Transcript preview for UI display; 500 is a magic number needing a constant; truncation is intentional for display haiku
     660  
     661      if transcript_path and transcript_path.exists():
     662          text = transcript_path.read_text()
>>>  663          preview = text[:500] + ("\u2026" if len(text) > 500 else "")
     664          parts.append(f"\n**Transcript preview** ({len(text)} chars):\n```\n{preview}\n```")
     665  
     666      return "\n\n".join(parts)
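The UI entries above repeat the same inline idiom — `text[:N] + ("…" if len(text) > N else "")` — with N as a bare literal each time. A shared helper would remove both the magic number and the duplicated conditional; this sketch is hypothetical (constant name and default are illustrative) and deliberately preserves the idiom's exact behavior, where the ellipsis is appended after the cut rather than counted inside the limit:

```python
MAX_PREVIEW_CHARS = 500  # illustrative default; each call site keeps its own tuned limit


def ellipsize(text: str, limit: int = MAX_PREVIEW_CHARS) -> str:
    """Same behavior as the repeated `text[:N] + ("…" if len(text) > N else "")` idiom."""
    return text[:limit] + ("\u2026" if len(text) > limit else "")


short = ellipsize("short")       # unchanged, no ellipsis
cut = ellipsize("a" * 600)       # 500 chars of text plus one ellipsis char
```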
learning/cli.py:1358 Named Constant Full text being included in email body payload; intentional truncation for storage but 500 should be a named constant haiku
    1355      if principle.learning_type:
    1356          body_parts.append(f"Type: {principle.learning_type.value}")
    1357      if principle.full_text:
>>> 1358          body_parts.append(principle.full_text[:500])
    1359      return f"Title: {title}\n{title}\n\n{'. '.join(body_parts)}"
    1360  
    1361  
learning/cli.py:1397 Named Constant Content being stored in payload; intentional truncation but 500 should be a named constant haiku
    1394          logger.debug(f"Embedded {table}/{item_id[:12]}")
    1395  
    1396          # Also upsert to vector store
>>> 1397          payload = {"content": text[:500], "type": table}
    1398          if table == "principles" and hasattr(item, "domain_scope"):
    1399              payload["domain_tags"] = item.domain_scope or []
    1400              payload["confidence"] = item.abstraction_level
learning/cli.py:1536 Named Constant Full text being stored in vec_meta payload; intentional truncation but 500 should be a named constant haiku
    1533                  if r["learning_type"]:
    1534                      body_parts.append(f"Type: {r['learning_type']}")
    1535                  if r["full_text"]:
>>> 1536                      body_parts.append(r["full_text"][:500])
    1537                  # Structured prefix: title repeated for emphasis, then body
    1538                  text = f"Title: {title}\n{title}\n\n{'. '.join(body_parts)}"
    1539                  to_embed.append(("principles", r["id"], text))
learning/cli.py:1578 Named Constant Content being stored in vec_meta payload; intentional truncation but 500 should be a named constant haiku
    1575                  text = "\n".join(parts)
    1576                  to_embed.append(("learning_instances", r["id"], text))
    1577                  vec_meta[("learning_instances", r["id"])] = {
>>> 1578                      "content": text[:500],
    1579                      "type": "learning_instances",
    1580                      "project": r["project"],
    1581                      "domain_tags": domain_tags,
learning/cli.py:1657 Named Constant Content being stored in vec_meta payload; intentional truncation but 500 should be a named constant haiku
    1654                  total += 1
    1655  
    1656                  source = "principles" if table == "principles" else "learnings"
>>> 1657                  meta = vec_meta.get((table, item_id), {"content": _text[:500], "type": table})
    1658                  point = {"id": item_id, "text": _text or item_id, "vector": vec, "payload": meta}
    1659                  if table == "principles":
    1660                      vec_points_p.append(point)
learning/cli.py:1962 Named Constant Full text being included in prompt/email body; intentional truncation but 500 should be a named constant haiku
    1959    ID: {parent.id}
    1960    Name: {parent.name}
    1961    Text: {parent.text}
>>> 1962    Full: {(parent.full_text or '')[:500]}
    1963  
    1964  Supporting instances:
    1965  {instance_text}
learning/gyms/apply/gym.py:305 Named Constant Text being formatted into a prompt for LLM analysis; magic number 200 should be a named constant haiku
     302              is_correct = "(EXPECTED)" if item.get("id") == result.test_case.principle_id else ""
     303              retrieved_desc.append(
     304                  f"{i}. [{item.get('title', '?')}] {is_correct}\n"
>>>  305                  f"   {item.get('text', '')[:200]}"
     306              )
     307          retrieved_text = "\n".join(retrieved_desc) if retrieved_desc else "(no results)"
     308  
learning/gyms/apply/spot_check.py:111 Named Constant HTML display preview; magic number 150 should be a named constant haiku
     108              item_cls = "retrieved-correct" if is_correct else "retrieved-other"
     109              marker = " (EXPECTED)" if is_correct else ""
     110              title = _esc(item.get("title", "?"))
>>>  111              text = _esc((item.get("text", "") or "")[:150])
     112              score_val = item.get("score", 0)
     113              retrieved_html += (
     114                  f'<div class="retrieved-item {item_cls}">'
learning/gyms/badge/gym.py:144 Named Constant Magic number 200 for stored prompt field in timeline; should be MAX_PROMPT_LENGTH haiku
     141              current_badge = badge_text
     142  
     143          timeline.append({
>>>  144              "prompt": prompt[:200],
     145              "badge": badge_text or "(no change)",
     146              "topic": topic,
     147          })
learning/gyms/badge/report.py:46 Named Constant Magic number 120 for HTML display preview; should be a named constant haiku
      43              badge_class = "badge-same" if step["badge"] == "(no change)" else "badge-changed"
      44              timeline_html += f"""
      45              <div class="step">
>>>   46                  <div class="prompt">{_esc(step['prompt'][:120])}</div>
      47                  <div class="{badge_class}">{_esc(step['badge'])}</div>
      48              </div>"""
      49  
learning/gyms/claim_extraction/corpus_prep.py:195 Named Constant Document text stored in corpus payload; 15000 is a magic number needing a constant, but truncation is intentional for storage haiku
     192  
     193          # Truncate very long docs
     194          if len(text) > 15000:
>>>  195              text = text[:15000] + "\n[truncated]"
     196  
     197          docs.append({
     198              "doc_id": idea_id,
learning/gyms/claim_extraction/corpus_prep.py:227 Named Constant Document text stored in corpus payload; 15000 is a magic number needing a constant; truncation is intentional for storage haiku
     224      for f in selected:
     225          text = f.read_text()
     226          if len(text) > 15000:
>>>  227              text = text[:15000] + "\n[truncated]"
     228          docs.append({
     229              "doc_id": f.stem,
     230              "text": text,
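The two corpus-prep sites above share a third idiom: a hard cap with an explicit `[truncated]` marker so downstream readers know content was dropped. A hedged sketch of that shape (names are illustrative, not from the audited code):

```python
MAX_DOC_CHARS = 15_000           # illustrative cap for corpus documents
TRUNCATION_MARKER = "\n[truncated]"


def cap_document(text: str, limit: int = MAX_DOC_CHARS) -> str:
    """Mirror the corpus-prep idiom: hard cap plus an explicit marker when cut."""
    if len(text) > limit:
        return text[:limit] + TRUNCATION_MARKER
    return text


capped = cap_document("y" * 20_000)  # capped, marker appended
```

Unlike a silent slice, the marker makes the truncation visible to whoever (human or LLM) reads the stored document later.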
learning/gyms/claim_extraction/report.py:257 Named Constant Content preview being rendered in HTML report; magic number 2000 should be a named constant, but truncate() is not needed as this is an intentional display preview haiku
     254                              f'<div class="bar-bg"><div class="bar" style="width:{bw}%;background:{_score_color(bw)}"></div></div>'
     255                              f'<span class="crit-score">{v}</span></div>'
     256                          )
>>>  257                  preview = c.content[:2000] if c.content else "(empty)"
     258                  return f"""
     259                  <div class="example-card">
     260                      <div class="example-label" style="background:{color}">{label}: {c.metadata.get('model','?')}:{c.metadata.get('prompt','?')} &mdash; {c.score:.0f}</div>
learning/gyms/claim_extraction/report.py:315 Named Constant Content preview being rendered in HTML report; magic number 3000 should be a named constant, but truncate() is not needed as this is an intentional display preview haiku
     312          stats_html = " &middot; ".join(stats_items) if stats_items else ""
     313  
     314          doc_id = c.metadata.get("doc_id", "?")
>>>  315          content_preview = c.content[:3000] if c.content else ""
     316  
     317          detail_cards += f"""
     318          <div class="card">
learning/gyms/extraction/report.py:236 Named Constant Text preview in HTML report; magic number 3000 should be a named constant, but truncation is intentional for display haiku
     233  
     234          text_preview = ""
     235          if r.text_extracted or r.text_cleaned:
>>>  236              extracted_preview = escape(r.text_extracted[:3000]) if r.text_extracted else "(empty)"
     237              cleaned_preview = escape(r.text_cleaned[:3000]) if r.text_cleaned else "(empty)"
     238              text_preview = f"""
     239              <details style="margin-top:8px;">
learning/gyms/llm_task/report.py:66 Named Constant HTML display of LLM output; magic number 2000 should be a named constant for consistency haiku
      63              <div class="reason">{escape(c.reason)}</div>
      64              {subscores_html}
      65              <details><summary>Output ({len(c.content)} chars)</summary>
>>>   66                  <pre>{escape(c.content[:2000])}</pre>
      67              </details>
      68          </div>"""
      69  
learning/schema/import_principles.py:136 Named Constant Extracted content stored in schema; magic numbers 500 and 300 should be named constants haiku
     133      # Look for "Why" sections
     134      why_match = re.search(r"###?\s*(?:Why|The (?:Problem|Insight))\s*\n(.+?)(?=\n###?\s|\Z)", body, re.DOTALL)
     135      if why_match:
>>>  136          rationale = why_match.group(1).strip()[:500]
     137  
     138      # Look for anti-patterns or "Bad" examples
     139      bad_match = re.search(r"(?:❌|Bad:|Anti-pattern|Signs of)(.+?)(?=\n(?:✅|Good:|###?\s)|\Z)", body, re.DOTALL)
learning/schema/import_principles.py:168 Named Constant Summary text stored in schema; magic numbers 200 and 500 should be named constants haiku
     165  
     166              # First paragraph as summary
     167              paragraphs = [p.strip() for p in body.split("\n\n") if p.strip()]
>>>  168              summary = paragraphs[0] if paragraphs else body[:200]
     169              # Strip markdown formatting for summary
     170              summary = re.sub(r"\*\*(.+?)\*\*", r"\1", summary)
     171              summary = re.sub(r"```[\s\S]*?```", "", summary)
learning/schema/learning_store.py:904 Named Constant Text preview stored in payload; magic number 120 should be a named constant haiku
     901                          "type": "principle",
     902                          "id": r["id"],
     903                          "title": r["name"],
>>>  904                          "text": (r["text"] or "")[:120],
     905                          "learning_type": r["learning_type"],
     906                      })
     907  
learning/schema/materialize.py:89 Named Constant Display preview in materialized output; magic number 200 should be a named constant haiku
      86      ]
      87  
      88      for p in principles:
>>>   89          text = (p["text"] or "")[:200].replace("\n", " ").strip()
      90          lines.append(f"- **{p['name']}** ({p['instance_count']}x) — {text}")
      91  
      92      # Group recent instances by project
learning/schema/materialize.py:105 Named Constant Display preview in materialized output; magic number 200 should be a named constant haiku
     102          for proj, instances in by_project.items():
     103              lines.append(f"### {proj}")
     104              for inst in instances:
>>>  105                  content = inst["content"][:200].replace("\n", " ")
     106                  lines.append(f"- {content}")
     107              lines.append("")
     108  
learning/schema/migrate_failures.py:89 Named Constant Error content stored in schema; magic number 200 should be a named constant haiku
      86  
      87          if repair:
      88              repair_tool = repair.get("tool_name", "unknown")
>>>   89              repair_summary = repair.get("input_summary", "")[:200]
      90              content = (
      91                  f"Error in {tool_name}: {error_msg[:200]}\n"
      92                  f"Repair: {repair_tool} - {repair_summary}"
learning/schema/migrate_failures.py:95 Named Constant Error content stored in schema; magic numbers 300 and 500 should be named constants haiku
      92                  f"Repair: {repair_tool} - {repair_summary}"
      93              )
      94          else:
>>>   95              content = f"Error in {tool_name}: {error_msg[:300]}"
      96  
      97          # Build context snippet
      98          context = row["context_prompt"] or ""
learning/schema/ui/detail.py:105 Named Constant HTML display preview; magic number 150 should be a named constant haiku
     102                  f"<div style='border-left:3px solid {tc};padding:4px 8px;margin:3px 0;background:#fafafa;border-radius:3px;'>"
     103                  f"<span style='font-size:0.8em;color:#888;'>{inst.get('link_type', '')} \u00b7 {inst.get('project', '-')}"
     104                  f"{(' \u00b7 ' + strength_str) if strength_str else ''}</span><br>"
>>>  105                  f"<span style='font-size:0.88em;'>{esc((inst.get('content') or '')[:150])}</span></div>"
     106              )
     107          lines.append("</details>")
     108  
learning/schema/ui/detail.py:121 Named Constant HTML display preview; magic number 200 should be a named constant haiku
     118                  f"<span style='color:#888;font-size:0.82em;'>{relative_time(a.get('applied_at'))} \u00b7 {a.get('project', '-')}</span>"
     119              )
     120              if a.get("outcome_notes"):
>>>  121                  lines.append(f"<br><span style='font-size:0.88em;'>{esc(a['outcome_notes'][:200])}</span>")
     122              lines.append("</div>")
     123          lines.append("</details>")
     124  
learning/schema/ui/detail.py:272 Named Constant HTML display preview; magic number 150 should be a named constant haiku
     269              f"<div style='padding:6px 10px;background:#e8f5e9;border-left:3px solid #4caf50;"
     270              f"border-radius:4px;margin-bottom:8px;font-size:0.88em;'>"
     271              f"<b>\U0001f4d6 Instance</b> ({instance.get('learning_type', '?')}): "
>>>  272              f"{esc((instance.get('content') or '')[:150])}</div>"
     273          )
     274      if principles:
     275          p_chips = " ".join(
learning/schema/ui/detail.py:313 Named Constant HTML display preview; magic number 100 should be a named constant haiku
     310              lines.append(
     311                  f"<div style='margin:3px 0;padding:4px 8px;background:{bg};border-radius:3px;font-size:0.85em;'>"
     312                  f"{icon} <b>#{i+2}</b> {attempt.get('tool_name', '?')}: "
>>>  313                  f"{esc((attempt.get('input_summary') or '')[:100])}</div>"
     314              )
     315          lines.append("</details>")
     316  
learning/schema/ui/detail.py:327 Named Constant HTML display preview; magic number 120 should be a named constant haiku
     324              lines.append(
     325                  f"<div style='border:{border};border-radius:4px;padding:6px;margin:4px 0;background:{bg};font-size:0.85em;'>"
     326                  f"<b>#{i}</b> \u2014 {cand.get('tool_name', '?')}{badge}<br>"
>>>  327                  f"{esc((cand.get('input_summary') or '')[:120])}</div>"
     328              )
     329          lines.append("</details>")
     330  
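The detail.py verdicts above all point the same way; a minimal sketch of the constant hoist, with hypothetical names (none of these constants exist in the codebase yet):

```python
# Sketch of the Named Constant fix for the detail.py HTML previews; the
# constant names below are illustrative suggestions, not existing code.
INSTANCE_PREVIEW_CHARS = 150
OUTCOME_NOTES_PREVIEW_CHARS = 200
ATTEMPT_SUMMARY_PREVIEW_CHARS = 100


def preview(text, limit):
    """Clamp optional display text to a named limit (None-safe)."""
    return (text or "")[:limit]


# Call sites then read as intent rather than magic numbers:
inst = {"content": "x" * 500}
snippet = preview(inst.get("content"), INSTANCE_PREVIEW_CHARS)
assert len(snippet) == INSTANCE_PREVIEW_CHARS
```

A None-safe helper also removes the repeated `(x or '')[:N]` dance visible in the quoted snippets.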
learning/schema/ui/queries.py:246 Named Constant DataFrame column for display; magic number 120 should be a named constant haiku
     243      if df.empty:
     244          return pd.DataFrame(columns=["_id", "Content", "Type", "Source", "Project", "Created"])
     245  
>>>  246      df["content_short"] = df["content"].str[:120]
     247      df["type_label"] = df["learning_type"].map(lambda t: f"{TYPE_ICONS.get(t, '?')} {t}" if t else "?")
     248      df["source_label"] = df["source_type"].map(lambda s: SOURCE_MAP.get(s, s or "?"))
     249      df["created_rel"] = df["created_at"].map(relative_time)
learning/session_extract/episodes.py:73 Named Constant This 150-char truncation is fed into an LLM prompt for episode segmentation, so the limit matters for quality and should be a named constant like TURN_SUMMARY_MAX_CHARS. opus
      70          prefix = "USER" if t.role == "user" else "ASST"
      71          err = " [ERROR]" if t.has_error else ""
      72          tools = f" [{', '.join(t.tools_used)}]" if t.tools_used else ""
>>>   73          summary_lines.append(f"Turn {t.turn_idx}: {prefix}{err}{tools} {t.text[:150]}")
      74  
      75      prompt = f"""\
      76  Analyze this session transcript and identify where the topic/task changes.
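The verdict proposes the name TURN_SUMMARY_MAX_CHARS; a minimal sketch of that refactor, with a simplified stand-in function (the error/tool flags from the real code are omitted here):

```python
# Hoist the per-turn truncation limit into the constant the verdict names,
# so prompt-quality tuning for the segmentation prompt happens in one place.
TURN_SUMMARY_MAX_CHARS = 150


def summarize_turn(turn_idx, role, text):
    """Simplified stand-in for the real turn-summary line builder."""
    prefix = "USER" if role == "user" else "ASST"
    return f"Turn {turn_idx}: {prefix} {text[:TURN_SUMMARY_MAX_CHARS]}"


line = summarize_turn(3, "user", "refactor the session loader " * 20)
assert len(line) <= len("Turn 3: USER ") + TURN_SUMMARY_MAX_CHARS
```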
learning/session_review/app.py:349 Named Constant Magic number 200 for HTML display preview. Should be MAX_RESULT_PREVIEW_CHARS constant, but this is intentional display truncation. haiku
     346                  f"<div style='border:{border};border-radius:6px;padding:8px;margin-bottom:8px;background:{bg};'>"
     347                  f"<b>Candidate {i}</b> — {cand.get('tool_name', '?')}{badge}"
     348                  f"<pre style='white-space:pre-wrap;margin:4px 0;font-size:0.9em;'>{_esc(cand.get('input_summary', ''))}</pre>"
>>>  349                  f"<pre style='white-space:pre-wrap;margin:0;font-size:0.85em;color:#666;'>{_esc(cand.get('result_summary', '')[:200])}</pre>"
     350                  f"</div>"
     351              )
     352  
learning/session_review/failure_mining.py:341 Named Constant Storing result_summary in a data structure (repair candidate); 300 is a magic number that should be MAX_RESULT_SUMMARY_CHARS haiku
     338                          candidate = {
     339                              "tool_name": tool_name,
     340                              "input_summary": input_summary,
>>>  341                              "result_summary": result_text[:300],
     342                              "candidate_index": len(active_failure.repair_candidates),
     343                          }
     344                          active_failure.repair_candidates.append(candidate)
learning/session_review/failure_mining.py:362 Named Constant Same as failure_mining.py:341: storing result_summary in a data structure; should be a named constant haiku

     359                              project=meta["project"],
     360                              tool_name=tool_name,
     361                              input_summary=input_summary,
>>>  362                              result_summary=result_text[:300],
     363                              context_prompt=context_prompt,
     364                              turn_index=tool_info["turn_index"],
     365                              transcript_path=str(path),
learning/session_review/failure_mining.py:565 Named Constant Building display content for storage/logging; 200 is magic number for repair_summary, should be a constant haiku
     562  
     563          if repair:
     564              repair_tool = repair.get("tool_name", "unknown")
>>>  565              repair_summary = repair.get("input_summary", "")[:200]
     566              content = (
     567                  f"Error in {tool_name}: {error_msg[:200]}\n"
     568                  f"Repair: {repair_tool} - {repair_summary}"
learning/session_review/failure_mining.py:571 Named Constant Building display content; 300 is magic number for error_msg truncation, should be a constant haiku
     568                  f"Repair: {repair_tool} - {repair_summary}"
     569              )
     570          else:
>>>  571              content = f"Error in {tool_name}: {error_msg[:300]}"
     572  
     573          # Save both in one connection to avoid locking
     574          with sqlite3.connect(store.db_path) as conn:
learning/session_review/failure_mining.py:606 Named Constant Storing context_prompt in database payload; 500 is magic number that should be MAX_CONTEXT_PROMPT_CHARS haiku
     603                     VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
     604                  (
     605                      instance_id, content,
>>>  606                      p.context_prompt[:500] if p.context_prompt else None,
     607                      SourceType.SESSION_REFLECTION.value,
     608                      p.session_id,
     609                      f"turn_{p.turn_index}" if p.turn_index else None,
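The five failure_mining findings above could share one refactor; a sketch gathering the limits as module constants (names follow the verdicts' suggestions and are illustrative, not existing code):

```python
# failure_mining truncation limits, gathered in one place. The asymmetry
# (200 vs 300 for the error message) mirrors the current call sites.
MAX_REPAIR_ERROR_CHARS = 200
MAX_REPAIR_SUMMARY_CHARS = 200
MAX_ERROR_MSG_CHARS = 300
MAX_RESULT_SUMMARY_CHARS = 300
MAX_CONTEXT_PROMPT_CHARS = 500


def build_content(tool_name, error_msg, repair=None):
    """Rebuild of the content string from failure_mining.py lines 562-571."""
    if repair:
        repair_tool = repair.get("tool_name", "unknown")
        repair_summary = repair.get("input_summary", "")[:MAX_REPAIR_SUMMARY_CHARS]
        return (
            f"Error in {tool_name}: {error_msg[:MAX_REPAIR_ERROR_CHARS]}\n"
            f"Repair: {repair_tool} - {repair_summary}"
        )
    return f"Error in {tool_name}: {error_msg[:MAX_ERROR_MSG_CHARS]}"
```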
learning/session_review/judge_calibration.py:124 Named Constant Extracting result_text for storage; 4000 is magic number that should be MAX_RESULT_TEXT_CHARS haiku
     121                  if isinstance(entry, dict) and entry.get("role") == "assistant":
     122                      content = entry.get("content", "")
     123                      if isinstance(content, str) and len(content) > 50:
>>>  124                          result_text = content[:4000]
     125                          break
     126  
     127          samples.append({
learning/session_review/judge_calibration.py:130 Named Constant Storing result_text in data structure; 4000 is magic number that should be a constant haiku
     127          samples.append({
     128              "id": row["id"],
     129              "prompt": row["prompt"],
>>>  130              "result_text": result_text[:4000],
     131              "existing_score": row["quality_score"],
     132              "tag": row["tag"],
     133          })
learning/session_review/pair_judge.py:182 Named Constant Magic number 200 for prompt context being sent to LLM judge. Should be MAX_CONTEXT_CHARS constant, but truncation is intentional for prompt formatting. haiku
     179          # Single-candidate (legacy) prompt
     180          s = pair["success"]
     181          return JUDGE_SINGLE_PROMPT.format(
>>>  182              context=pair["context_prompt"][:200],
     183              fail_tool=ff.get("tool_name", "?"),
     184              fail_input=ff.get("input_summary", "")[:200],
     185              fail_error=ff.get("error_text", "")[:300],
learning/session_review/pair_judge.py:188 Named Constant Magic number 200 for result summary in LLM prompt. Should be named constant, truncation is intentional for prompt size control. haiku
     185              fail_error=ff.get("error_text", "")[:300],
     186              success_tool=s.get("tool_name", "?"),
     187              success_input=s.get("input_summary", "")[:200],
>>>  188              success_result=s.get("result_summary", "")[:200],
     189          )
     190  
     191  
learning/session_review/parallelism_analysis.py:75 Named Constant Tool result being stored for LLM processing; magic number 200 should be named constant haiku
      72              for c in msg.get("content", []):
      73                  if isinstance(c, dict) and c.get("type") == "tool_result":
      74                      rc = c.get("content", "")
>>>   75                      results[c.get("tool_use_id", "")] = str(rc if not isinstance(rc, list) else rc)[:200]
      76      return results
      77  
      78  
learning/session_review/principle_propose.py:208 Named Constant Magic number 200 for context prompt in markdown output sent to LLM. Should be named constant, but truncation is intentional for prompt formatting. haiku
     205          thinking = json.loads(f["thinking_blocks"]) if f["thinking_blocks"] else []
     206  
     207          lines.append(f"### Failure {i} [{f['error_category']}]")
>>>  208          lines.append(f"**Context:** {f['context_prompt'][:200]}")
     209          lines.append(f"**Failed:** {first.get('tool_name', '?')} → {first.get('error_text', '')[:200]}")
     210          if thinking:
     211              lines.append(f"**Thinking:** {thinking[0][:200]}")
learning/session_review/principle_propose.py:214 Named Constant Magic number 100 for error text in markdown output. Should be named constant, truncation is intentional for display. haiku
     211              lines.append(f"**Thinking:** {thinking[0][:200]}")
     212          lines.append(f"**Attempts:** {f['attempt_count']}")
     213          for j, r in enumerate(recovery[:2]):
>>>  214              lines.append(f"  Retry {j+1}: {r.get('tool_name', '?')} → {'❌' if r.get('is_error') else '✅'} {r.get('error_text', r.get('input_summary', ''))[:100]}")
     215          if success:
     216              lines.append(f"**Fix:** {success.get('tool_name', '?')}: {success.get('input_summary', '')[:200]}")
     217          lines.append("")
learning/session_review/replay.py:303 Named Constant Magic number 2000 for block content truncation with explicit '(truncated)' marker. Should be MAX_BLOCK_CONTENT_CHARS constant for consistency. haiku
     300              # Truncate large tool results
     301              bc = block.get("content", "")
     302              if isinstance(bc, str) and len(bc) > 2000:
>>>  303                  block = {**block, "content": bc[:2000] + "\n... (truncated)"}
     304              elif isinstance(bc, list):
     305                  # Truncate text blocks within
     306                  new_bc = []
learning/session_review/replay.py:311 Named Constant Magic number 2000 for item text truncation with explicit '(truncated)' marker. Should be MAX_ITEM_TEXT_CHARS constant for consistency. haiku
     308                      if isinstance(item, dict) and item.get("type") == "text":
     309                          text = item.get("text", "")
     310                          if len(text) > 2000:
>>>  311                              new_bc.append({**item, "text": text[:2000] + "\n... (truncated)"})
     312                          else:
     313                              new_bc.append(item)
     314                      else:
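The two replay.py sites above repeat the same cut-plus-marker pattern; a sketch of a shared helper (name and default are illustrative):

```python
# Sketch of a shared helper for the replay.py call sites that append an
# explicit "(truncated)" marker; the constant name is a suggestion only.
MAX_BLOCK_CONTENT_CHARS = 2000


def truncate_with_marker(text, limit=MAX_BLOCK_CONTENT_CHARS):
    """Cut at limit, appending the visible marker only when something was cut."""
    if len(text) <= limit:
        return text
    return text[:limit] + "\n... (truncated)"
```

Using one helper also guarantees the marker format stays consistent between the string and list branches.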
learning/session_review/replay.py:354 Named Constant Magic number 3000 for block content truncation. Should be MAX_BLOCK_CONTENT_CHARS constant, but truncation is intentional for payload size control. haiku
     351                  if isinstance(bc, list):
     352                      bc = " ".join(
     353                          b.get("text", "") for b in bc if isinstance(b, dict)
>>>  354                      )[:3000]
     355                  elif isinstance(bc, str):
     356                      bc = bc[:3000]
     357  
learning/session_review/replay.py:553 Named Constant Tool result content stored in payload/message structure; 3000 is a magic number that should be named constant, but truncation is intentional for payload size haiku
     550              tool_results.append({
     551                  "type": "tool_result",
     552                  "tool_use_id": tu["id"],
>>>  553                  "content": result_content[:3000],
     554              })
     555  
     556          conversation.append({"role": "user", "content": tool_results})
learning/session_review/replay.py:686 Named Constant Text block stored in tool_calls structure; 200 is magic number needing constant, truncation is intentional for payload haiku
     683              if block.type == "tool_use":
     684                  tool_calls.append({"name": block.name, "input": block.input})
     685              elif block.type == "text":
>>>  686                  tool_calls.append({"type": "text", "text": block.text[:200]})
     687          return tool_calls
     688  
     689      base_system = (
learning/session_review/retroactive_study.py:78 Named Constant Building HTML display; 120 and 60 are magic numbers that should be named constants haiku
      75              "rationale": p.rationale or "",
      76              "anti_pattern": p.anti_pattern or "",
      77          })
>>>   78          brief = (p.text or "")[:120].replace("\n", " ")
      79          anti = f" ≠ {p.anti_pattern[:60]}" if p.anti_pattern else ""
      80          lines.append(f"<p id=\"{p.id}\">{p.name}: {brief}{anti}</p>")
      81  
learning/session_review/retroactive_study.py:97 Named Constant Summarizing tool input for display/storage; 100 is magic number, should be MAX_TOOL_INPUT_SUMMARY_CHARS haiku
      94  def _summarize_input(tool_input: dict) -> str:
      95      """Summarize a tool_use input dict to a brief string."""
      96      if not isinstance(tool_input, dict):
>>>   97          return str(tool_input)[:100]
      98      if "command" in tool_input:
      99          return tool_input["command"][:120]
     100      if "file_path" in tool_input:
learning/session_review/retroactive_study.py:106 Named Constant Building tool input summary; 50, 40, and 150 are magic numbers that should be constants haiku
     103              s += f" edit: {tool_input['old_string'][:50]}→{tool_input.get('new_string', '')[:50]}"
     104          elif "pattern" in tool_input:
     105              s += f" pattern={tool_input['pattern'][:40]}"
>>>  106          return s[:150]
     107      if "pattern" in tool_input:
     108          path = tool_input.get("path", "")
     109          return f"{path} pattern={tool_input['pattern'][:60]}"[:150]
learning/session_review/retroactive_study.py:113 Named Constant Building tool input summary; 60, 150, and 120 are magic numbers that should be named constants haiku
     110      if "prompt" in tool_input:
     111          return tool_input["prompt"][:120]
     112      if "query" in tool_input:
>>>  113          return tool_input["query"][:120]
     114      return json.dumps(tool_input)[:120]
     115  
     116  
learning/session_review/retroactive_study.py:161 Named Constant Storing current_prompt for episode processing; 500 is magic number that should be MAX_PROMPT_CHUNK_CHARS haiku
     158                                  episodes.append(_finalize_episode(
     159                                      current_prompt, current_tools, metadata
     160                                  ))
>>>  161                              current_prompt = text[:500]
     162                              current_tools = []
     163                      elif block.get("type") == "tool_result":
     164                          tool_id = block.get("tool_use_id", "")
learning/session_review/retroactive_study.py:177 Named Constant Storing tool result in episode data structure; 150 is magic number that should be MAX_TOOL_RESULT_CHARS haiku
     174                              current_tools.append({
     175                                  "tool": tool_info["name"],
     176                                  "input": tool_info["input"],
>>>  177                                  "result": str(result_content)[:150],
     178                                  "is_error": is_error,
     179                              })
     180              elif isinstance(content, str) and content.strip() and len(content.strip()) > 15:
learning/session_review/retroactive_study.py:183 Named Constant Storing current_prompt for episode; 500 is magic number that should be a constant haiku
     180              elif isinstance(content, str) and content.strip() and len(content.strip()) > 15:
     181                  if current_prompt and current_tools:
     182                      episodes.append(_finalize_episode(current_prompt, current_tools, metadata))
>>>  183                  current_prompt = content.strip()[:500]
     184                  current_tools = []
     185  
     186      # Finalize last episode
learning/session_review/retroactive_study.py:302 Named Constant Storing task summary in stats payload; 120 is magic number that should be MAX_TASK_SUMMARY_CHARS haiku
     299      stats = {
     300          "session_id": session_id,
     301          "project": project,
>>>  302          "task": episodes[0]["prompt"][:120],
     303          "episode_count": len(episodes),
     304          "error_count": sum(e["error_count"] for e in episodes),
     305          "tool_count": sum(e["tool_count"] for e in episodes),
learning/session_review/retroactive_study.py:331 Named Constant Storing episode_prompt in stats; 120 is magic number that should be a constant haiku
     328              continue
     329  
     330          stats["analyses"].append({
>>>  331              "episode_prompt": episode["prompt"][:120],
     332              "analysis": analysis,
     333          })
     334          stats["followed"] += len(analysis.get("principles_followed", []))
learning/session_review/retroactive_study.py:353 Named Constant Storing context_snippet and outcome_notes in database; 300 and 200 are magic numbers that should be constants haiku
     350                  principle_id=item["principle_id"],
     351                  session_id=session_id,
     352                  project=project,
>>>  353                  context_snippet=item.get("evidence", "")[:300],
     354                  outcome=ApplicationOutcome.SUCCESS,
     355                  outcome_notes=f"Exemplified: {item.get('evidence', '')[:200]}",
     356                  recorded_by=RECORDED_BY,
learning/session_review/retroactive_study.py:376 Named Constant Storing context_snippet and outcome_notes in database; 300 is magic number that should be a constant haiku
     373                  principle_id=item["principle_id"],
     374                  session_id=session_id,
     375                  project=project,
>>>  376                  context_snippet=item.get("evidence", "")[:300],
     377                  outcome=ApplicationOutcome.FAILURE,
     378                  outcome_notes=item.get("how_it_would_have_helped", "")[:300],
     379                  recorded_by=RECORDED_BY,
learning/session_review/retroactive_study.py:454 Named Constant Storing evidence in stats payload; 150 is magic number that should be MAX_EVIDENCE_CHARS haiku
     451                  pid = item["principle_id"]
     452                  principle_stats.setdefault(pid, {"id": pid, "followed": 0, "violated": 0, "followed_evidence": [], "violated_evidence": []})
     453                  principle_stats[pid]["followed"] += 1
>>>  454                  principle_stats[pid]["followed_evidence"].append(item.get("evidence", "")[:150])
     455  
     456              for item in analysis.get("principles_violated", []):
     457                  pid = item["principle_id"]
learning/session_review/retroactive_study.py:461 Named Constant Storing evidence in stats payload; 100 is magic number that should be a constant haiku
     458                  principle_stats.setdefault(pid, {"id": pid, "followed": 0, "violated": 0, "followed_evidence": [], "violated_evidence": []})
     459                  principle_stats[pid]["violated"] += 1
     460                  principle_stats[pid]["violated_evidence"].append(
>>>  461                      f"{item.get('evidence', '')[:100]} → {item.get('how_it_would_have_helped', '')[:100]}"
     462                  )
     463  
     464              for item in analysis.get("principle_refinements", []):
learning/session_review/retroactive_study.py:602 Named Constant Building HTML display; 120 is magic number that should be MAX_EVIDENCE_DISPLAY_CHARS haiku
     599      for p in ranked:
     600          evidence_items = []
     601          for e in p.get("followed_evidence", [])[:2]:
>>>  602              evidence_items.append(f'<span class="followed">+</span> {_esc(e[:120])}')
     603          for e in p.get("violated_evidence", [])[:2]:
     604              evidence_items.append(f'<span class="violated">−</span> {_esc(e[:120])}')
     605          evidence_html = "<ul class='evidence-list'>" + "".join(f"<li>{e}</li>" for e in evidence_items) + "</ul>" if evidence_items else ""
learning/session_review/retroactive_study.py:609 Named Constant Building HTML display; 200 is magic number that should be MAX_PRINCIPLE_TEXT_CHARS haiku
     606  
     607          name = _esc(p.get("name", ""))
     608          pid = _esc(p["id"])
>>>  609          text = _esc(p.get("text", "")[:200].replace("\n", " "))
     610          principle_cell = (
     611              f'<a href="{LEARNING_URL}" title="{pid}" '
     612              f'style="color:#1a73e8">'
learning/session_review/sandbox_replay.py:223 Named Constant Error output being stored in the result; magic numbers 500 and 2000 should be named constants, though a truncate() helper is unnecessary since this is intentional error storage haiku
     220              wall_clock = time.monotonic() - t0
     221  
     222              if proc.returncode != 0:
>>>  223                  logger.warning(f"Container exited {proc.returncode}: {stderr.decode()[:500]}")
     224                  return SandboxResult(
     225                      run=run,
     226                      wall_clock_seconds=wall_clock,
learning/session_review/sandbox_replay.py:366 Named Constant Prompt being stored in comparison dict for analysis; magic number 200 should be named constant haiku
     363          }
     364  
     365      comparison = {
>>>  366          "prompt": prompt[:200],
     367          "commit": commit[:12],
     368          "principle": principle,
     369          "model": model,
learning/session_review/tool_error_analysis.py:143 Named Constant Extracting error_msg for storage/analysis; 500 and 200 are magic numbers that should be constants haiku
     140              error_msg = ""
     141              bc = block.get("content", "")
     142              if isinstance(bc, str):
>>>  143                  error_msg = bc[:500]
     144              elif isinstance(bc, list):
     145                  error_msg = " ".join(
     146                      b.get("text", "")[:200] for b in bc if isinstance(b, dict)
learning/session_review/tool_error_analysis.py:168 Named Constant Storing tool_input in data structure; 200 is magic number that should be MAX_TOOL_INPUT_DISPLAY_CHARS haiku
     165                      ):
     166                          tool_name = pb.get("name", "unknown")
     167                          raw_input = pb.get("input", {})
>>>  168                          tool_input = json.dumps(raw_input)[:200] if raw_input else ""
     169                          tool_use_ts = parse_ts(msgs[j].get("timestamp"))
     170  
     171              # Find next assistant message (recovery)
learning/session_review/tool_error_analysis.py:186 Named Constant Storing next_action in data structure; 200 is magic number that should be a constant haiku
     183                          for nb in nc:
     184                              if isinstance(nb, dict):
     185                                  if nb.get("type") == "text":
>>>  186                                      next_action = nb.get("text", "")[:200]
     187                                      break
     188                                  elif nb.get("type") == "tool_use":
     189                                      next_action = f"[tool_use: {nb.get('name')}]"
learning/session_review/webfetch_fallback_test.py:108 Named Constant Storing original_error in results payload; 200 is magic number that should be MAX_ERROR_MESSAGE_CHARS haiku
     105          if url:
     106              results.append({
     107                  "url": url,
>>>  108                  "original_error": row["error_message"][:200],
     109              })
     110  
     111      # Deduplicate by URL
learning/session_review/webfetch_fallback_test.py:140 Named Constant Storing content_preview in test results; 200 is magic number that should be MAX_CONTENT_PREVIEW_CHARS haiku
     137              success=is_real,
     138              content_length=len(content),
     139              elapsed_ms=elapsed,
>>>  140              content_preview=content[:200] if content else "",
     141              error="" if is_real else "content looks like refusal or too short",
     142          ))
     143      except Exception as e:
learning/session_review/webfetch_fallback_test.py:147 Named Constant Error message storage in FetchResult payload; truncation is intentional for database field, but 200 should be a named constant haiku
     144          elapsed = int((time.monotonic() - t0) * 1000)
     145          results.append(FetchResult(
     146              url=url, method="brain_direct", success=False,
>>>  147              elapsed_ms=elapsed, error=str(e)[:200],
     148          ))
     149  
     150      # Method 2: With proxy (if direct failed)
learning/session_review/webfetch_fallback_test.py:164 Named Constant Content preview stored in FetchResult payload; intentional truncation for display/storage, but 200 should be a named constant haiku
     161                  success=is_real,
     162                  content_length=len(content),
     163                  elapsed_ms=elapsed,
>>>  164                  content_preview=content[:200] if content else "",
     165                  error="" if is_real else "content looks like refusal or too short",
     166              ))
     167          except Exception as e:
learning/session_review/webfetch_fallback_test.py:171 Named Constant Error message storage in FetchResult payload; same as [0] haiku
     168              elapsed = int((time.monotonic() - t0) * 1000)
     169              results.append(FetchResult(
     170                  url=url, method="brain_proxy", success=False,
>>>  171                  elapsed_ms=elapsed, error=str(e)[:200],
     172              ))
     173  
     174      # Method 3: JS rendering (if still failing)
learning/session_review/webfetch_fallback_test.py:188 Named Constant Content preview stored in FetchResult payload; same as webfetch_fallback_test.py:164 haiku
     185                  success=is_real,
     186                  content_length=len(content),
     187                  elapsed_ms=elapsed,
>>>  188                  content_preview=content[:200] if content else "",
     189                  error="" if is_real else "content looks like refusal or too short",
     190              ))
     191          except Exception as e:
learning/session_review/webfetch_fallback_test.py:195 Named Constant Error message storage in FetchResult payload; same as webfetch_fallback_test.py:147 haiku
     192              elapsed = int((time.monotonic() - t0) * 1000)
     193              results.append(FetchResult(
     194                  url=url, method="brain_js", success=False,
>>>  195                  elapsed_ms=elapsed, error=str(e)[:200],
     196              ))
     197  
     198      return results
learning/session_review/webfetch_fallback_test.py:228 Named Constant Content preview being stored in database; intentional truncation but 500 should be a named constant haiku
     225                         VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)""",
     226                      (now, r.url, info.get("original_error", ""), r.method,
     227                       1 if r.success else 0, r.content_length,
>>>  228                       r.elapsed_ms, r.error, str(r.content_preview)[:500]),
     229                  )
     230          db.commit()
     231  
lib/billing/providers.py:81 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
      78              month=month_cost if month_cost > 0 else None,
      79          )
      80      except httpx.HTTPStatusError as e:
>>>   81          return CostSnapshot(provider="Anthropic", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
      82      except Exception as e:
      83          return CostSnapshot(provider="Anthropic", error=str(e)[:120])
      84  
lib/billing/providers.py:148 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
     145              month=month_cost if month_cost > 0 else None,
     146          )
     147      except httpx.HTTPStatusError as e:
>>>  148          return CostSnapshot(provider="OpenAI", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
     149      except Exception as e:
     150          return CostSnapshot(provider="OpenAI", error=f"{type(e).__name__}: {e}"[:120])
     151  
lib/billing/providers.py:222 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
     219              balance=balance,
     220          )
     221      except httpx.HTTPStatusError as e:
>>>  222          return CostSnapshot(provider="Bright Data", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
     223      except Exception as e:
     224          return CostSnapshot(provider="Bright Data", error=str(e)[:120])
     225  
lib/billing/providers.py:263 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
     260              month=total_monthly,
     261          )
     262      except httpx.HTTPStatusError as e:
>>>  263          return CostSnapshot(provider="Cloudflare", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
     264      except Exception as e:
     265          return CostSnapshot(provider="Cloudflare", error=str(e)[:120])
     266  
lib/billing/providers.py:325 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
     322              balance=balance,
     323          )
     324      except httpx.HTTPStatusError as e:
>>>  325          return CostSnapshot(provider="xAI", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
     326      except Exception as e:
     327          return CostSnapshot(provider="xAI", error=str(e)[:120])
     328  
lib/billing/providers.py:432 Named Constant Error message storage in CostSnapshot payload; magic numbers 100 and 120 should be named constants haiku
     429              balance=balance,
     430          )
     431      except httpx.HTTPStatusError as e:
>>>  432          return CostSnapshot(provider="Deepgram", error=f"HTTP {e.response.status_code}: {e.response.text[:100]}")
     433      except Exception as e:
     434          return CostSnapshot(provider="Deepgram", error=str(e)[:120])
     435  
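The six providers.py findings are identical; a sketch of a shared error formatter replacing the repeated `[:100]`/`[:120]` literals (helper and constant names are hypothetical):

```python
# Sketch: one formatter for the CostSnapshot error paths across providers.
MAX_HTTP_BODY_CHARS = 100
MAX_ERROR_CHARS = 120


def http_error(status_code, body):
    """Format an HTTP failure for CostSnapshot.error."""
    return f"HTTP {status_code}: {body[:MAX_HTTP_BODY_CHARS]}"


def generic_error(exc):
    """Format any other exception for CostSnapshot.error."""
    return f"{type(exc).__name__}: {exc}"[:MAX_ERROR_CHARS]
```

A shared helper would also settle the inconsistency visible in the snippets, where some providers format `str(e)[:120]` and others `f"{type(e).__name__}: {e}"[:120]`.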
lib/browser_observer.py:39 Named Constant Message stored in log payload; magic number 1000 should be named constant, but truncation is intentional haiku
      36          entry = {
      37              "ts": datetime.now().isoformat(),
      38              "level": level,
>>>   39              "message": text[:1000],  # Truncate long messages
      40              "source": source,
      41              "project": project,
      42              "port": port,
lib/ingest/benchmark_pdf.py:82 Named Constant Preview being stored in results payload; 200 char limit should be a named constant (MAX_PREVIEW_CHARS) haiku
      79              "peak_mb": peak_mb,
      80              "output_len": len(result),
      81              "word_count": len(result.split()),
>>>   82              "preview": result[:200].replace("\n", " "),
      83              "error": None,
      84          }
      85      except Exception as e:
lib/ingest/browser/conversation_log.py:110 Named Constant Error message being logged to persistent log; 500 char limit should be a named constant (MAX_ERROR_MESSAGE_CHARS) haiku
     107      log_event(
     108          "error",
     109          error_type=error_type,
>>>  110          message=message[:500],  # Truncate long messages
     111          session=session_id,
     112          **kwargs,
     113      )
lib/ingest/browser/extractor.py:265 Named Constant Sample HTML being stored in results; 500 char limit should be a named constant (MAX_SAMPLE_HTML_CHARS) haiku
     262                  results["item_candidates"].append({
     263                      "selector": sel,
     264                      "count": count,
>>>  265                      "sample_html": sample[:500] + "..." if len(sample) > 500 else sample,
     266                  })
     267          except Exception as e:
     268              logger.warning("Failed to check item selector", selector=sel, error=str(e))
lib/ingest/browser/extractor.py:295 Named Constant Sample value being stored in results; 100 char limit should be a named constant (MAX_SAMPLE_VALUE_CHARS) haiku
     292                          if val:
     293                              field_results.append({
     294                                  "selector": sel,
>>>  295                                  "sample_value": val[:100] if val else None,
     296                              })
     297                  except Exception as e:
     298                      logger.warning("Failed to analyze field", field_name=field_name, selector=sel, error=str(e))
lib/ingest/browser/reflection.py:67 Named Constant Hash computation from base64 data; 1000 char limit should be a named constant (MAX_HASH_INPUT_CHARS) haiku
      64      """Save base64 image data to cache and return a key."""
      65      cache = _get_cache()
      66  
>>>   67      content_hash = hashlib.md5(b64_data[:1000].encode()).hexdigest()[:12]
      68      ext = img_format.lower()
      69      if ext == "jpeg":
      70          ext = "jpg"
lib/ingest/browser/reflection.py:80 Named Constant Base64 data being cached; 1000 char limit should be a named constant (MAX_CACHE_DATA_CHARS) haiku
      77      except Exception as e:
      78          logger.warning("Failed to decode base64 image, saving as text", error=str(e))
      79          key = f"img_{content_hash}.txt"
>>>   80          cache.set(key, b64_data[:1000] + "...", expire=CACHE_TTL)
      81  
      82      return f"cache:{key}"
      83  
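Both reflection.py sites slice the base64 payload at 1000. A sketch of the key-derivation half using the constant the verdict names; `image_cache_key` is a hypothetical helper, not an existing function:

```python
import hashlib

MAX_HASH_INPUT_CHARS = 1000  # only a prefix of the base64 data feeds the cache key
CACHE_KEY_HEX_LEN = 12


def image_cache_key(b64_data: str, img_format: str) -> str:
    """Derive a short cache key from a bounded prefix of the base64 payload."""
    digest = hashlib.md5(b64_data[:MAX_HASH_INPUT_CHARS].encode()).hexdigest()
    ext = "jpg" if img_format.lower() == "jpeg" else img_format.lower()
    return f"img_{digest[:CACHE_KEY_HEX_LEN]}.{ext}"
```

Note that hashing only a prefix means two images sharing their first 1000 base64 characters collide on the same key; if that is acceptable, it deserves a comment at the call site.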
lib/ingest/cli.py:382 Named Constant Link text being stored in data structure; 100 char limit should be a named constant (MAX_LINK_TEXT_CHARS) haiku
     379          href = a["href"]
     380          if text and href and not href.startswith(("#", "javascript:")):
     381              link_url = urljoin(base_url, href) if base_url else href
>>>  382              links.append({"text": text[:100], "url": link_url})
     383      if links:
     384          data["links"] = links
     385  
lib/ingest/cli.py:924 Named Constant Snippet being formatted for display output; 200 char limit should be a named constant (MAX_SNIPPET_CHARS) haiku
     921              results_text.append(f"{i}. [{title}]({link})\n   {source} - {dims}")
     922          else:
     923              link = item.get("link", "")
>>>  924              snippet = item.get("snippet", item.get("description", ""))[:200]
     925              source = item.get("source", "")
     926              results_text.append(f"{i}. [{title}]({link})\n   {source} - {snippet}")
     927  
lib/ingest/fetcher.py:349 Named Constant Title extraction for storage/display; 100 char limit should be a named constant (MAX_TITLE_CHARS) haiku
     346      soup = BeautifulSoup(html, "html.parser")
     347      title_tag = soup.find("title")
     348      if title_tag:
>>>  349          return title_tag.get_text().strip()[:100]
     350  
     351      # Try h1 as fallback
     352      h1 = soup.find("h1")
lib/ingest/fetcher.py:376 Named Constant Text normalization for hashing; 10000 char limit should be a named constant (MAX_HASH_TEXT_CHARS) haiku
     373  def _compute_hashes(text: str) -> tuple[str, int]:
     374      """Compute content_hash (sha256 prefix) and simhash for extracted text."""
     375      from simhash import Simhash
>>>  376      norm = " ".join(text.split())[:10000]
     377      sha = hashlib.sha256(norm.encode()).hexdigest()[:16]
     378      sh = int(Simhash(norm).value or 0) & 0x7FFFFFFFFFFFFFFF  # fit SQLite signed int64
     379      return sha, sh
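A sketch of `_compute_hashes` with the named cap; the simhash half is omitted here to keep the example dependency-free, so this shows only the sha256 prefix:

```python
import hashlib

MAX_HASH_TEXT_CHARS = 10_000  # cap on whitespace-normalized text fed to the hash
CONTENT_HASH_HEX_LEN = 16


def compute_content_hash(text: str) -> str:
    """Whitespace-normalize, cap length, and return a sha256 hex prefix."""
    norm = " ".join(text.split())[:MAX_HASH_TEXT_CHARS]
    return hashlib.sha256(norm.encode()).hexdigest()[:CONTENT_HASH_HEX_LEN]
```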
lib/ingest/literature_review.py:122 Named Constant Response text being returned in data structure; 2000 char limit should be a named constant (MAX_PAYWALL_TEXT_CHARS) haiku
     119                  if status == 403:
     120                      return {
     121                          "url": url,
>>>  122                          "text": resp.text[:2000] if resp.text else "",
     123                          "fetch_status": "paywall",
     124                          "content_format": "html" if "html" in ct else "unknown",
     125                      }
lib/ingest/related_work.py:206 Named Constant Content being returned for downstream processing/LLM; 4000 char limits should be named constants (MAX_CONTENT_CHARS) haiku
     203              text = extract_text_from_html(resp.text)
     204              if not text or text == "No content found":
     205                  raise ValueError("empty HTML after extraction")
>>>  206              return text[:4000]
     207          elif "text" in ct or "json" in ct:
     208              return resp.text[:4000]
     209          raise ValueError(f"unsupported content-type: {ct[:60]}")
lib/llm/claude_oauth.py:253 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     250          )
     251          if resp.status_code != 200:
     252              raise OAuthError(
>>>  253                  f"Token refresh failed ({resp.status_code}): {resp.text[:200]}"
     254              )
     255          data = resp.json()
     256          if "access_token" not in data:
lib/llm/claude_oauth.py:542 Named Constant Error response truncation stored in a variable for later use; the limit should be a named constant haiku
     539  
     540              if resp.status_code == 429:
     541                  error_body = await resp.aread()
>>>  542                  error_text = error_body.decode()[:500]
     543                  cooldown = rl["retry_after"] or 60
     544                  claim = rl["claim"]  # "seven_day", "five_hour", etc.
     545                  _update_profile_utilization(profile_label, rl)
lib/llm/claude_oauth.py:560 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     557              if resp.status_code != 200:
     558                  error_body = await resp.aread()
     559                  raise OAuthError(
>>>  560                      f"Subscription API error ({resp.status_code}): {error_body.decode()[:300]}"
     561                  )
     562  
     563              # Track utilization from successful response
lib/llm/codex_oauth.py:156 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     153      )
     154      if resp.status_code != 200:
     155          raise CodexOAuthError(
>>>  156              f"Token refresh failed ({resp.status_code}): {resp.text[:200]}"
     157          )
     158      data = resp.json()
     159      if "access_token" not in data:
lib/llm/codex_oauth.py:285 Named Constant Two error response truncations in exception messages; the magic numbers should be named constants haiku
     282              error_body = await resp.aread()
     283              retry_after = resp.headers.get("retry-after", "")
     284              retry_msg = f", retry-after={retry_after}s" if retry_after else ""
>>>  285              raise CodexOAuthError(f"Rate limited (429){retry_msg}: {error_body.decode()[:200]}")
     286          if resp.status_code != 200:
     287              error_body = await resp.aread()
     288              raise CodexOAuthError(
lib/llm/gemini_oauth.py:122 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     119              "grant_type": "refresh_token",
     120          })
     121          if resp.status_code != 200:
>>>  122              raise GeminiOAuthError(f"Token refresh failed for {self.name} ({resp.status_code}): {resp.text[:200]}")
     123          data = resp.json()
     124          if "access_token" not in data:
     125              raise GeminiOAuthError(f"Token refresh missing access_token for {self.name}: {data}")
lib/llm/gemini_oauth.py:148 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     145              json={"metadata": {"ideType": "IDE_UNSPECIFIED", "platform": "PLATFORM_UNSPECIFIED"}},
     146          )
     147          if resp.status_code != 200:
>>>  148              raise GeminiOAuthError(f"loadCodeAssist failed for {self.name} ({resp.status_code}): {resp.text[:300]}")
     149          data = resp.json()
     150          self.project_id = data.get("cloudaicompanionProject")
     151  
lib/llm/gemini_oauth.py:348 Named Constant Two error response truncations in exception messages; the magic numbers should be named constants haiku
     345                      logger.info(f"Gemini 429 on {acct.name}, rotating to {next_acct.name}")
     346                      acct = next_acct
     347                      continue
>>>  348                  raise GeminiOAuthError(f"Rate limited (429) on all accounts: {resp.text[:200]}")
     349  
     350              if resp.status_code != 200:
     351                  raise GeminiOAuthError(
lib/llm/gemini_oauth.py:444 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     441                          acct = next_acct
     442                          continue
     443                      raise GeminiOAuthError(
>>>  444                          f"Rate limited (429) on all accounts: {error_body.decode()[:200]}"
     445                      )
     446                  if resp.status_code != 200:
     447                      error_body = await resp.aread()
lib/llm/gemini_oauth.py:451 Named Constant Error response excerpt in a log message; the magic number should be a named constant for consistency haiku
     448                      acct.subscription_broken = True
     449                      logger.warning(
     450                          "Gemini account subscription broken: HTTP {} — {}",
>>>  451                          resp.status_code, error_body.decode()[:300],
     452                          account=acct.name,
     453                      )
     454                      next_acct = _get_next_account(acct, model)
lib/llm/gemini_oauth.py:460 Named Constant Error response excerpt in an exception message; the magic number should be a named constant for consistency haiku
     457                          acct = next_acct
     458                          continue
     459                      raise GeminiOAuthError(
>>>  460                          f"streamGenerateContent failed ({resp.status_code}): {error_body.decode()[:300]}"
     461                      )
     462  
     463                  last_usage = None
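The OAuth modules above truncate upstream error bodies at 200 or 300 characters in at least a dozen places. A sketch of one shared helper; the helper name and the single 300-char cap are suggestions, not existing code:

```python
MAX_UPSTREAM_ERROR_CHARS = 300  # shared cap for upstream bodies quoted in errors


def format_upstream_error(prefix: str, status: int, body) -> str:
    """Build an error message with a bounded excerpt of the response body."""
    text = body.decode(errors="replace") if isinstance(body, bytes) else body
    return f"{prefix} ({status}): {text[:MAX_UPSTREAM_ERROR_CHARS]}"
```

Call sites would become, e.g., `raise OAuthError(format_upstream_error("Token refresh failed", resp.status_code, resp.text))`.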
lib/llm/probe.py:129 Named Constant Exception string used for classification logic; the magic number should be a named constant haiku
     126  def _classify_error(e: Exception) -> tuple[str, str, float | None]:
     127      """Classify an exception into (status, message, retry_after_s)."""
     128      msg = str(e).lower()
>>>  129      full = str(e)[:300]
     130      retry_after = _extract_retry_after(full)
     131  
     132      # Rate limiting — check first (429 can appear with auth-like messages)
lib/llm/stream.py:802 Named Constant Two error response truncations, one in a log message and one in an exception; the magic numbers should be named constants haiku
     799          # Re-raise as RuntimeError with the upstream message so the runner can
     800          # detect rate limits (429/RESOURCE_EXHAUSTED) and retry.
     801          if "validation error" in emsg.lower() and "ErrorEvent" in emsg:
>>>  802              logger.warning("stream_llm: provider error response parse failure", model=model, error=emsg[:200])
     803              raise RuntimeError(f"Provider returned error (unparseable): {emsg[:300]}") from e
     804          logger.error("stream_llm error (responses)", error=str(e))
     805          raise
lib/notify/pushover.py:83 Named Constant Notification history sent to an LLM gate for processing; the magic number should be a named constant haiku
      80          return True
      81  
      82      history_lines = "\n".join(
>>>   83          f"  - {t}: {b[:100]}" for t, b in recent[-10:]
      84      )
      85      prompt = f"""You are a notification gate. The user gets push notifications on their phone.
      86  Recent notifications sent in the last 5 minutes:
lib/notify/pushover.py:154 Named Constant Storing text preview in JSON payload file; truncation is intentional but magic numbers should be named constants haiku
     151              with open(_PUSH_LOG, "a") as f:
     152                  f.write(_json.dumps({
     153                      "ts": datetime.now().isoformat(),
>>>  154                      "title": title[:100],
     155                      "body": body[:200],
     156                      "caller": caller_stack[:500],
     157                  }) + "\n")
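For the push-log write above, the three field caps could live together as named constants; a sketch with hypothetical names (the timestamp field is left to the caller as in the original):

```python
MAX_LOG_TITLE_CHARS = 100
MAX_LOG_BODY_CHARS = 200
MAX_LOG_CALLER_CHARS = 500


def push_log_fields(title: str, body: str, caller_stack: str) -> dict:
    """Bounded fields for one JSON line in the push log."""
    return {
        "title": title[:MAX_LOG_TITLE_CHARS],
        "body": body[:MAX_LOG_BODY_CHARS],
        "caller": caller_stack[:MAX_LOG_CALLER_CHARS],
    }
```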
lib/observe/core.py:336 Named Constant Same repeated 200-char error truncation as the tune/core and race modules; should share a named constant. opus
     333                      result = await fn(*args, **kwargs)
     334                      return result
     335                  except Exception as e:
>>>  336                      error = str(e)[:200]
     337                      raise
     338                  finally:
     339                      _record(chosen, result, error, (time.perf_counter() - t0) * 1000, kwargs)
lib/observe/core.py:359 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
     356                      result = fn(*args, **kwargs)
     357                      return result
     358                  except Exception as e:
>>>  359                      error = str(e)[:200]
     360                      raise
     361                  finally:
     362                      _record(chosen, result, error, (time.perf_counter() - t0) * 1000, kwargs)
lib/observe/evaluate.py:77 Named Constant Candidate input/output being sent to LLM for evaluation; magic numbers 500 and 1000 should be named constants, and truncation-induced data loss should be logged haiku
      74              continue
      75  
      76          # Build candidate text for judge
>>>   77          candidate = f"Input: {json.dumps(cached.get('kwargs', {}), default=str)[:500]}\n\nOutput: {str(cached.get('result', ''))[:1000]}"
      78          context = {
      79              "experiment": experiment,
      80              "choice": cached.get("choice", ""),
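A sketch of the judge-candidate builder with named caps and the data-loss logging the verdict asks for; the constant names and the `build_candidate` helper are illustrative:

```python
import json
import logging

logger = logging.getLogger(__name__)

MAX_JUDGE_INPUT_CHARS = 500
MAX_JUDGE_OUTPUT_CHARS = 1000


def build_candidate(kwargs: dict, result) -> str:
    """Format a cached call for the LLM judge, logging when truncation drops data."""
    input_json = json.dumps(kwargs, default=str)
    output_text = str(result)
    if len(input_json) > MAX_JUDGE_INPUT_CHARS or len(output_text) > MAX_JUDGE_OUTPUT_CHARS:
        logger.warning(
            "judge candidate truncated: input=%d chars, output=%d chars",
            len(input_json), len(output_text),
        )
    return (
        f"Input: {input_json[:MAX_JUDGE_INPUT_CHARS]}\n\n"
        f"Output: {output_text[:MAX_JUDGE_OUTPUT_CHARS]}"
    )
```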
lib/observe/race.py:46 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
      43          result = await action(choice)
      44          return RaceEntry(choice, result, (time.perf_counter() - t0) * 1000, None)
      45      except Exception as e:
>>>   46          return RaceEntry(choice, None, (time.perf_counter() - t0) * 1000, str(e)[:200])
      47  
      48  
      49  async def _collect_and_log(pending, entries, experiment, observers, store, timeout, all_choices):
lib/observe/race.py:60 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
      57                  try:
      58                      entries.append(task.result())
      59                  except Exception as e:
>>>   60                      entries.append(RaceEntry(pending[task], None, 0, str(e)[:200]))
      61              for task in timed_out:
      62                  task.cancel()
      63      except Exception as e:
lib/observe/race.py:150 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
     147              try:
     148                  entry = task.result()
     149              except Exception as e:
>>>  150                  entry = RaceEntry(choice, None, 0, str(e)[:200])
     151              entries.append(entry)
     152  
     153              valid = entry.error is None and validator(entry.result)
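The `str(e)[:200]` pattern repeats across lib/observe and lib/tune (core, race, and the decorators above). A sketch of the shared constant the verdicts call for; the home module and names are suggestions:

```python
MAX_ERROR_CHARS = 200  # e.g. in lib/observe/constants.py, imported by lib/tune as well


def truncate_error(e: Exception) -> str:
    """Bounded error string for RaceEntry fields and _record payloads."""
    return str(e)[:MAX_ERROR_CHARS]
```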
lib/private_data/claude_web.py:229 Named Constant Storing first_prompt in session payload; truncation is intentional but 200 should be a named constant haiku
     226          sender = msg.get("sender", "")
     227          role = "user" if sender == "human" else "assistant"
     228          if not session["first_prompt"] and role == "user":
>>>  229              session["first_prompt"] = text[:200]
     230  
     231          messages.append({
     232              "source": SOURCE,
lib/semnet/api.py:161 Named Constant Text stored in an API response payload; magic number 300 should be a named constant haiku
     158              "channel": r["channel"],
     159              "start_s": r["start_s"],
     160              "end_s": r["end_s"],
>>>  161              "text": r["text"][:300],
     162              "topic_label": r["topic_label"],
     163              "chapter_title": r["chapter_title"],
     164              "yt_url": f"https://youtube.com/watch?v={r['doc_id']}&t={int(r['start_s'])}",
lib/semnet/api.py:214 Named Constant Text stored in an API response payload; magic number 300 should be a named constant haiku
     211                  "channel": vid_channel,
     212                  "start_s": payload.get("start_s", 0),
     213                  "end_s": payload.get("end_s", 0),
>>>  214                  "text": payload.get("text", "")[:300],
     215                  "topic_label": payload.get("topic_label", ""),
     216                  "chapter_title": payload.get("chapter_title", ""),
     217                  "score": hit.get("score", 0),
lib/semnet/api.py:582 Named Constant Text accumulated into a span data structure; magic number 200 should be a named constant haiku
     579                  # Extend current span
     580                  current_span["end_s"] = vc["end_s"]
     581                  current_span["chunks"].append(vc["chunk_id"])
>>>  582                  current_span["text"] += " " + vc["text"][:200]
     583              else:
     584                  if current_span:
     585                      spans.append(current_span)
lib/semnet/api.py:590 Named Constant Text stored in a span payload; magic number 200 should be a named constant haiku
     587                      "start_s": vc["start_s"],
     588                      "end_s": vc["end_s"],
     589                      "chunks": [vc["chunk_id"]],
>>>  590                      "text": vc["text"][:200],
     591                      "duration": vc["end_s"] - vc["start_s"],
     592                      "yt_url": f"https://youtube.com/watch?v={vid}&t={int(vc['start_s'])}",
     593                  }
lib/semnet/models.py:116 Named Constant Text preview for display; the 197 should be derived from a named 200-char constant minus the ellipsis length rather than hard-coded, though truncation is intentional haiku
     113      def text_preview(self) -> str:
     114          if len(self.text) <= 200:
     115              return self.text
>>>  116          return self.text[:197] + "..."
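Deriving the 197 from the cap removes the coupled literal pair; a sketch with illustrative names:

```python
MAX_PREVIEW_CHARS = 200
ELLIPSIS = "..."


def text_preview(text: str) -> str:
    """Return text unchanged when short, else a preview no longer than the cap."""
    if len(text) <= MAX_PREVIEW_CHARS:
        return text
    return text[:MAX_PREVIEW_CHARS - len(ELLIPSIS)] + ELLIPSIS
```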
lib/semnet/portal/generate.py:750 Named Constant Excerpt for HTML display; magic number 400 should be a named constant, but truncation is expected for UI haiku
     747  
     748              # Excerpt — show first ~400 chars
     749              merged = " ".join(t.strip() for t in span["texts"] if t)
>>>  750              excerpt = merged[:400] + ("..." if len(merged) > 400 else "")
     751              parts.append(f'    <div class="occurrence-text">{html.escape(excerpt)}</div>')
     752              parts.append(f'  </div>')
     753              parts.append(f'</div>')
lib/semnet/presenter/app_ng.py:906 Named Constant Preview text for UI display stored in payload; magic number 3000 should be a named constant, but truncation is expected haiku
     903              doc = adapter.load_document(doc_id)
     904              text = doc.get("text", "") if doc else ""
     905              title = doc.get("title", doc_id) if doc else doc_id
>>>  906              preview_text = text[:3000] + ("..." if len(text) > 3000 else "")
     907              import html as html_lib
     908              escaped = html_lib.escape(preview_text)
     909              content_preview.content = (
lib/tune/core.py:356 Named Constant The 200-char error truncation limit is used identically across multiple files and should be a shared module-level constant like MAX_ERROR_LENGTH. opus
     353                      result = await fn(*args, **kwargs)
     354                      return result
     355                  except Exception as e:
>>>  356                      error = str(e)[:200]
     357                      raise
     358                  finally:
     359                      _record(chosen, result, error, (time.perf_counter() - t0) * 1000, kwargs)
lib/tune/core.py:379 Named Constant Same repeated 200-char error truncation as tune/core.py:356; should share the same named constant. opus
     376                      result = fn(*args, **kwargs)
     377                      return result
     378                  except Exception as e:
>>>  379                      error = str(e)[:200]
     380                      raise
     381                  finally:
     382                      _record(chosen, result, error, (time.perf_counter() - t0) * 1000, kwargs)
lib/tune/evaluate.py:77 Named Constant Candidate input/output being sent to LLM for evaluation; magic numbers 500 and 1000 should be named constants, and truncation-induced data loss should be logged haiku
      74              continue
      75  
      76          # Build candidate text for judge
>>>   77          candidate = f"Input: {json.dumps(cached.get('kwargs', {}), default=str)[:500]}\n\nOutput: {str(cached.get('result', ''))[:1000]}"
      78          context = {
      79              "experiment": experiment,
      80              "choice": cached.get("choice", ""),
lib/tune/race.py:46 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
      43          result = await action(choice)
      44          return RaceEntry(choice, result, (time.perf_counter() - t0) * 1000, None)
      45      except Exception as e:
>>>   46          return RaceEntry(choice, None, (time.perf_counter() - t0) * 1000, str(e)[:200])
      47  
      48  
      49  async def _collect_and_log(pending, entries, experiment, observers, store, timeout, all_choices):
lib/tune/race.py:60 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
      57                  try:
      58                      entries.append(task.result())
      59                  except Exception as e:
>>>   60                      entries.append(RaceEntry(pending[task], None, 0, str(e)[:200]))
      61              for task in timed_out:
      62                  task.cancel()
      63      except Exception:
lib/tune/race.py:150 Named Constant Same repeated 200-char error truncation pattern; should reference a shared constant. opus
     147              try:
     148                  entry = task.result()
     149              except Exception as e:
>>>  150                  entry = RaceEntry(choice, None, 0, str(e)[:200])
     151              entries.append(entry)
     152  
     153              valid = entry.error is None and validator(entry.result)
projects/supplychain/union_report.py:239 Named Constant Example data stored in report payload; magic number 100 should be a named constant, but truncation is intentional haiku
     236          issues.append({
     237              "issue": "Public company without ticker",
     238              "count": row[0],
>>>  239              "examples": (row[1] or "")[:100],
     240          })
     241  
     242      # Duplicate canonical names
projects/supplychain/union_report.py:257 Named Constant Example data stored in report payload; magic number 100 should be a named constant, but truncation is intentional haiku
     254          issues.append({
     255              "issue": "Duplicate canonical names",
     256              "count": row[0],
>>>  257              "examples": (row[1] or "")[:100],
     258          })
     259  
     260      # Missing exchange for companies with ticker
projects/supplychain/union_report.py:271 Named Constant Example data stored in report payload; magic number 100 should be a named constant, but truncation is intentional haiku
     268          issues.append({
     269              "issue": "Ticker without exchange",
     270              "count": row[0],
>>>  271              "examples": (row[1] or "")[:100],
     272          })
     273  
     274      return issues
tools/extract/ui.py:262 Named Constant Text preview in UI display; magic number 5000 should be a named constant, but truncation is intentional for display haiku
     259          char_count = len(text)
     260          yield (
     261              text,
>>>  262              f"<div style='padding: 16px; font-family: monospace; white-space: pre-wrap;'>{text[:5000]}{'...' if len(text) > 5000 else ''}</div>",
     263              "",
     264              text,
     265              "",
tools/extract/ui.py:357 Named Constant Text preview in UI display; magic number 8000 should be a named constant, but truncation is intentional for display haiku
     354  
     355      # Fallback: show text as content
     356      char_count = len(text)
>>>  357      html = f"<div style='padding: 16px; font-family: sans-serif; white-space: pre-wrap;'>{text[:8000]}</div>"
     358      yield (
     359          text,
     360          html,
tools/extract/ui.py:612 Named Constant Card content preview in UI; magic number 5000 should be a named constant, but truncation is intentional for display haiku
     609  
     610      def _show_card(card, card_id, store):
     611          """Return outputs to display a card in the viewer."""
>>>  612          clean_html = card.content_html or f"<div style='padding: 16px;'>{card.content[:5000]}</div>"
     613          # Build source tab iframe from URL if available
     614          if card.url:
     615              source_html = (
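The three tools/extract/ui.py sites could share one bounded-preview helper; a sketch that also HTML-escapes the text, which the originals do not (the names and the added escaping are suggestions):

```python
import html

MAX_TEXT_PREVIEW_CHARS = 5000
MAX_FALLBACK_PREVIEW_CHARS = 8000


def preview_html(text: str, limit: int = MAX_TEXT_PREVIEW_CHARS) -> str:
    """Bounded, pre-wrapped HTML preview; appends an ellipsis only when truncating."""
    body = html.escape(text[:limit]) + ("..." if len(text) > limit else "")
    return f"<div style='padding: 16px; white-space: pre-wrap;'>{body}</div>"
```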
vario/cli.py:578 Named Constant Logging function argument; truncation is intentional for log readability, but 200 should be a named constant haiku
     575  
     576      # Log experiment
     577      stats_data = {"comparisons": {f"{a}_vs_{b}": c for (a, b), c in comparisons.items()}} if comparisons else None
>>>  578      log_run(eval_prompt[:200], result.config, config_path, result.results,
     579                     f"vario eval -i {input_path}", result.duration_seconds, stats_data)
     580  
     581  
vario/ng/ui_studio.py:184 Named Constant Magic number 500 for content preview; should be MAX_PREVIEW_CHARS haiku
     181              )
     182  
     183          # Truncate content for display
>>>  184          preview = thing.content[:500]
     185          if len(thing.content) > 500:
     186              preview += "..."
     187  
vario/review_report.py:601 Named Constant HTML display content; truncation is intentional for rendering, but 5000 should be a named constant haiku
     598              synth_html = f"""\
     599              <div style="margin-bottom:1rem">
     600                  <h3 style="color:var(--highlight)">Synthesis</h3>
>>>  601                  <div class="prose">{_markdown_to_html(synthesis.content[:5000])}</div>
     602              </div>"""
     603  
     604          # Per-model details
vario/review_report.py:861 Named Constant HTML display quote preview; truncation is intentional for UI, but 100 should be a named constant haiku
     858      rows = []
     859      for f in findings:
     860          sev_class = f.severity if f.severity in ("critical", "moderate", "minor") else "minor"
>>>  861          quote_html = f'<br><small style="color:var(--text-muted)">&ldquo;{html.escape(f.quote[:100])}&rdquo;</small>' if f.quote else ""
     862  
     863          rows.append(f"""\
     864          <tr>
vario/server.py:142 Named Constant Storing prompt and content in database. Magic number 2000 should be a named constant, but truncation is intentional for storage. haiku
     139                      strategy_name,
     140                      strategy_hash,
     141                      json.dumps(strategy_spec) if strategy_spec else None,
>>>  142                      prompt[:2000],
     143                      content[:2000],
     144                      cost_usd,
     145                      tokens_total,
vario/server.py:421 Named Constant Storing content in API response payload. Magic number 2000 should be a named constant, but truncation is intentional for the response payload. haiku
     418          all_results=[
     419              {
     420                  "variant": r.get("variant"),
>>>  421                  "content": r.get("content", "")[:2000],
     422                  "error": r.get("error"),
     423              }
     424              for r in results
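Both server.py sites cap stored and echoed text at 2000; a one-constant sketch (the name is a suggestion):

```python
MAX_STORED_TEXT_CHARS = 2000  # shared cap for persisted prompt/content and echoed results


def clip_for_storage(text: str) -> str:
    """Truncate text before persisting it or echoing it in an API payload."""
    return text[:MAX_STORED_TEXT_CHARS]
```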
vario/strategies/blocks/meta.py:212 Named Constant Storing summary in result payload/metadata. Magic number 200 should be a named constant, but truncation is intentional for storage. haiku
     209                  "total_candidates": len(candidates),
     210                  "num_failures": len(failed),
     211                  "num_patterns": len(analysis.get("error_patterns", [])),
>>>  212                  "summary": analysis.get("summary", "")[:200],
     213              },
     214          )
     215      )
vario/strategies/report.py:719 Named Constant Display preview in HTML report; magic number 300 should be a named constant haiku
     716              best = scored_results[0]
     717              best_content = best.get("selected_content", "")
     718              if best_content and len(best_content) > 300:
>>>  719                  best_content = best_content[:300] + "..."
     720              parts.append(
     721                  f'<h3>Best Problem</h3>'
     722                  f'<p><strong>{html.escape(str(best["problem_id"]))}</strong> '
vario/strategies/report.py:731 Named Constant Display preview in HTML report; magic number 300 should be a named constant haiku
     728              worst = scored_results[-1]
     729              worst_content = worst.get("selected_content", "")
     730              if worst_content and len(worst_content) > 300:
>>>  731                  worst_content = worst_content[:300] + "..."
     732              parts.append(
     733                  f'<h3>Worst Problem</h3>'
     734                  f'<p><strong>{html.escape(str(worst["problem_id"]))}</strong> '
vario/ui_compare.py:66 Named Constant Magic number 2000 for file content preview; should be MAX_FILE_PREVIEW_CHARS haiku
      63          return f'<div style="padding:12px; text-align:center; color:#666;">[{ext} file — {path.name}]</div>'
      64      else:
      65          try:
>>>   66              text = path.read_text()[:2000]
      67          except Exception as e:
      68              logger.warning("Failed to read file", path=str(path), error=str(e))
      69              text = "(unreadable)"
vario/ui_compare.py:145 Named Constant Magic number 300 for thumbnail text preview; should be MAX_THUMBNAIL_TEXT_CHARS haiku
     142              thumb = render_artifact_html(best, max_height="200px")
     143          elif v.get("items"):
     144              try:
>>>  145                  text = v["items"][0].read_text()[:300]
     146              except Exception as e:
     147                  logger.warning("Failed to read variant thumbnail", error=str(e))
     148                  text = "(error)"
vario/ui_compare.py:248 Named Constant Magic number 200 for prompt markdown display; should be MAX_PROMPT_DISPLAY_CHARS haiku
     245          prompt_md = ""
     246          if data.get("prompt"):
     247              p = data["prompt"]
>>>  248              prompt_md = f"**Prompt:** {p[:200]}{'...' if len(p) > 200 else ''}"
     249          grid = build_thumbnail_grid_html(data)
     250          names = [v["name"] for v in data.get("variants", [])]
     251          return data, prompt_md, grid, gr.update(choices=names, value=[])
vario/ui_evaluate.py:172 Named Constant HTML display preview of answer; truncation is intentional for UI rendering, but 3000 should be a named constant haiku
     169          <div class="eval-score-badge {score_cls}">{score_display}</div>
     170          <div class="eval-meta">{cost_str} &middot; {latency_str} &middot; {result.tokens_total:,} tokens</div>
     171  
>>>  172          <div class="eval-answer">{_esc_multiline(answer[:3000])}</div>
     173  
     174          {"<div class='eval-judge-reasoning'><strong>Judge:</strong> " + _esc(judge_reason) + "</div>" if judge_reason else ""}
     175  
vario/validate_extraction.py:121 Named Constant Sample text in validation result payload; truncation is intentional but 500 should be a named constant haiku
     118              passed=False,
     119              detail=f"Missing required content: {missing}",
     120              notes=notes,
>>>  121              sample=text[:500] if verbose else "",
     122          )
     123  
     124      # Check for junk that shouldn't be there
vario/validate_extraction.py:195 Named Constant Sample text in validation result payload; truncation is intentional but 500 should be a named constant haiku
     192          passed=True,
     193          detail=f"{len(text)} chars extracted",
     194          notes=notes,
>>>  195          sample=text[:500] if verbose else "",
     196      )
     197  
     198  
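The Named Constant sites above all call for the same mechanical refactor: hoist the magic number into a module-level constant whose name documents intent. A minimal sketch — constant and function names are illustrative, not taken from the audited codebase:

```python
# Named limits that document intent (names are illustrative).
MAX_CONTENT_PAYLOAD_CHARS = 2000  # API response payload cap
MAX_SUMMARY_CHARS = 200           # stored analysis summary cap


def build_result_payload(results: list[dict]) -> list[dict]:
    """Build the stored payload, truncating with named limits instead of bare slices."""
    return [
        {
            "variant": r.get("variant"),
            "content": r.get("content", "")[:MAX_CONTENT_PAYLOAD_CHARS],
            "summary": r.get("summary", "")[:MAX_SUMMARY_CHARS],
            "error": r.get("error"),
        }
        for r in results
    ]
```

The truncation behavior is unchanged; the constant gives each limit a single definition, a grep target, and a natural place to comment why the cap exists.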
doctor/analyze.py:771 Fine CLI output preview of a log line for display purposes, 250 chars is reasonable. opus
     768          return "continue"
     769  
     770      # Show the error
>>>  771      print(f"│log│ {line[:250]}{'…' if len(line) > 250 else ''}", flush=True)
     772  
     773      # Analyze
     774      asyncio.run(analyzer.analyze(error))
doctor/commit_review.py:126 Fine Logging truncation of stderr from a failed git command, 200 chars is fine for diagnostics. opus
     123          return []
     124  
     125      if result.returncode != 0:
>>>  126          logger.warning(f"git log failed: {result.stderr[:200]}")
     127          return []
     128  
     129      lines: list[DiffLine] = []
doctor/expect.py:637 Fine Truncating uncertain verification content to 100 chars for a reason string is reasonable debug/logging output. opus
     634          if match:
     635              reason = match.group(1).strip().split("\n")[0]
     636      else:
>>>  637          reason = f"Uncertain result: {content[:100]}"
     638  
     639      return VerifyResult(
     640          expectation=expectation,
doctor/fix.py:433 Fine Truncating stderr to 200 chars for an error message string is reasonable logging/CLI output behavior. opus
     430              subprocess.run(
     431                  ["git", "rebase", "--abort"], cwd=git_root, capture_output=True
     432              )
>>>  433              return False, f"Rebase conflict - manual merge needed: {result.stderr[:200]}"
     434  
     435          clog.success("<green>[doctor.fix]</green> Rebase successful")
     436  
doctor/learn.py:187 Fine 100 → none This is a display/UI truncation for a clarification question shown to the user, not LLM input. The user message is embedded in a human-readable question string. 100 chars is appropriate for a preview in a UI prompt. opus
     184      return Clarification(
     185          id=clarify_id,
     186          ts=now,
>>>  187          question=f"Is this a general principle or project-specific? '{user_message[:100]}'",
     188          context=context,
     189          options=["general", "project", "ignore"],
     190          status="pending",
doctor/resmon/__main__.py:138 Fine Truncating to 200 chars for macOS notification body which has real platform limits is correct. opus
     135      body = f"{diagnosis} | {cpu_time_min:.0f}min CPU time | kill {pid}"
     136      # Truncate for macOS notification limits
     137      if len(body) > 200:
>>>  138          body = body[:197] + "..."
     139  
     140      subprocess.run([
     141          "osascript", "-e",
doctor/resmon/checks.py:259 Fine Display truncation of command string in alert messages for readability. opus
     256          # Build a useful description from cmdline
     257          cmd_str = " ".join(cmdline[:4]) if cmdline else name
     258          if len(cmd_str) > 120:
>>>  259              cmd_str = cmd_str[:117] + "..."
     260          age_min = age / 60
     261  
     262          level = "critical" if cpu_pct >= CPU_CRIT_PCT else "warning"
doctor/tasks.py:614 Fine Truncating raw error text to 180 chars for CLI log display is reasonable. opus
     611              ts_display = f"[{error.timestamp[11:16]}] " if error.timestamp and len(error.timestamp) > 16 else ""
     612              # Show fingerprint, type, and message
     613              fp_short = error.fingerprint[:8]
>>>  614              display = error.raw[:180] if error.raw else error.message[:180]
     615              print(f"│log│ {ts_display}[{fp_short}] {display}", flush=True)
     616              if analyzer.is_new(error):
     617                  if analyze or fix:
doctor/tasks.py:815 Fine Truncating error display to 200 chars for CLI log output is reasonable. opus
     812              # Display with source file context
     813              fp_short = error.fingerprint[:8]
     814              source_name = source_file.relative_to(target) if source_file.is_relative_to(target) else source_file.name
>>>  815              display = (error.raw or error.message)[:200]
     816              print(f"│log│ [{source_name}] [{fp_short}] {display}", flush=True)
     817  
     818              if analyze:
doctor/tasks.py:896 Fine Truncating error message to 100 chars for a dim informational log line about skipped infra errors is fine. opus
     893      msg_lower = error.message.lower()
     894      if any(kw in msg_lower for kw in INFRA_ERROR_KEYWORDS):
     895          clog.warning(f"<yellow>⚠ Infrastructure error - may not be fixable by code changes</yellow>")
>>>  896          clog.info(f"<dim>Error: {error.message[:100]}</dim>")
     897          return  # Skip fixer for infra errors
     898  
     899      # Show fingerprint, type, and short summary
doctor/tasks.py:1082 Fine Truncating pattern-matched log lines to 100 chars for diagnostic summary output is reasonable. opus
    1079          for line in lines:
    1080              for pattern in LOG_ERROR_PATTERNS:
    1081                  if pattern in line:
>>> 1082                      pattern_errors.append(line[:100])
    1083                      break
    1084  
    1085          clog.info(f"<blue>{Path(filepath).name}</blue>")
doctor/watch.py:189 Fine Returning a truncated error message summary for display, 200 chars is reasonable for test failure messages. opus
     186              # Strip the E prefix
     187              msg = line[2:].strip()
     188              if msg and "assert" in msg.lower() or "error" in msg.lower():
>>>  189                  return msg[:200] + ("…" if len(msg) > 200 else "")
     190              if msg:
     191                  return msg[:200] + ("…" if len(msg) > 200 else "")
     192  
doctor/watch.py:197 Fine Same function as idx 4, fallback branch returning a truncated line for display purposes. opus
     194      for line in lines:
     195          stripped = line.strip()
     196          if stripped and not stripped.startswith("_"):
>>>  197              return stripped[:200] + ("…" if len(stripped) > 200 else "")
     198  
     199      return "Unknown failure"
     200  
finance/earnings/backtest/chart.py:118 Fine Truncating transcript utterance text to 120 chars, with an ellipsis, for chart hover labels is standard display truncation. opus
     115      if transcript:
     116          t_dts = [_epoch_to_dt(u["epoch_utc"]) for u in transcript]
     117          hover_texts = [
>>>  118              f"<b>{u.get('time_et', '')}</b><br>{u['text'][:120]}{'…' if len(u['text']) > 120 else ''}"
     119              for u in transcript
     120          ]
     121          fig.add_trace(go.Scatter(
finance/earnings/live/audio_capture.py:246 Fine Truncating stderr for a warning log message is standard debug/logging output. opus
     243                      url=manifest_url, client="android",
     244                      format_info=format_info, manifest_type=mtype,
     245                  )
>>>  246          err = stderr.decode(errors="replace")[:200] if stderr else ""
     247          logger.warning(f"yt-dlp: android client failed: {err}")
     248          logger.warning("yt-dlp: falling back to default client")
     249  
finance/earnings/live/audio_capture.py:258 Fine Truncating stderr for a warning log message is standard debug/logging output. opus
     255      )
     256      stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=15)
     257      if proc.returncode != 0:
>>>  258          err = stderr.decode(errors="replace")[:200]
     259          logger.warning(f"yt-dlp -g failed (rc={proc.returncode}): {err}")
     260          return None
     261      lines = stdout.decode().strip().splitlines()
finance/earnings/live/audio_capture.py:331 Fine Truncating a URL for a log message is a standard logging preview pattern. opus
     328          # Direct HLS/DASH/audio — ffmpeg reads it directly
     329          cmd = f"ffmpeg {ffmpeg_low_latency} -i '{url}' {pcm_out}"
     330          mode = "direct"
>>>  331          logger.info(f"Direct stream: type={_detect_manifest_type(url)}, url={url[:120]}...")
     332      elif _is_youtube(url):
     333          # YouTube — extract manifest URL, then ffmpeg direct
     334          logger.info(f"Extracting manifest URL: {url}")
finance/earnings/live/audio_capture.py:425 Fine Truncating ffmpeg stderr to 300 chars for a warning log is standard logging output. opus
     422          await process.wait()
     423          stderr_out = await process.stderr.read() if process.stderr else b""
     424          if stderr_out:
>>>  425              logger.warning(f"ffmpeg stderr: {stderr_out.decode(errors='replace')[:300]}")
     426          duration_s = bytes_read / (sample_rate * CHANNELS * SAMPLE_WIDTH)
     427          logger.info(f"Audio capture stopped: {bytes_read:,} bytes ({duration_s:.1f}s of audio), mode={mode}")
finance/earnings/live/test_latency.py:67 Fine Truncating stderr output for an error field in a test/latency result is standard error logging. opus
      64      result["extract_time_ms"] = round(elapsed)
      65  
      66      if proc.returncode != 0:
>>>   67          err = stderr.decode(errors="replace")[:200]
      68          result["error"] = err
      69          return result
      70  
finance/earnings/live/test_latency.py:97 Fine Truncating exception string for an error field in a test result is standard error logging. opus
      94              resp = await client.get(manifest_url)
      95              text = resp.text
      96      except Exception as e:
>>>   97          result["error"] = str(e)[:200]
      98          return result
      99  
     100      if "#EXTM3U" not in text:
finance/eval/evaluator.py:377 Fine Truncating unparseable input for an error message is a standard logging/debug preview pattern. opus
     374          if match:
     375              parsed = json.loads(match.group(1))
     376          else:
>>>  377              return {"content": f"Could not parse JSON from input: {input[:200]}", "score": None}
     378  
     379      configs = parsed if isinstance(parsed, list) else [parsed]
     380  
finance/jobs/company_research.py:54 Fine Error message truncation for RuntimeError, 200 chars is sufficient for diagnostics. opus
      51              content = result.content
      52              mode_used = result.fetch_mode
      53              if content.startswith("Error"):
>>>   54                  raise RuntimeError(f"Fetch failed: {content[:200]}")
      55  
      56              # fetch_escalate auto-extracts PDFs — check content-type to pick format
      57              resp_ct = (result.headers or {}).get("content-type", "").lower()
finance/jobs/company_research.py:69 Fine Log warning truncation of error string, 200 chars is sufficient for diagnostics. opus
      66                  log.info(f"[{item_key}] html ({mode_used}): {title} ({len(content)} chars)")
      67      except Exception as e:
      68          breaker.record_failure(url)
>>>   69          log.warning("fetch failed", url=url, error=str(e)[:200])
      70          raise
      71  
      72      breaker.record_success(url)
finance/jobs/company_research.py:169 Fine Sampling first 2000 chars for language detection regex is a reasonable heuristic that doesn't need the full text. opus
     166      if words and len(set(words)) / len(words) < 0.2:
     167          flags.append("low_diversity")
     168      # Check Japanese content has Japanese characters
>>>  169      if expected_lang == "ja" and not re.search(r'[\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF]', text[:2000]):
     170          flags.append("language_mismatch")
     171      return flags
finance/jobs/earnings_backfill.py:391 Fine Log warning truncation of stderr from subprocess, 200 chars is sufficient for diagnostics. opus
     388                  capture_output=True, text=True, timeout=YTDLP_SEARCH_TIMEOUT,
     389              )
     390              if r.returncode != 0:
>>>  391                  log.warning(f"yt-dlp search failed for query={query!r}: {r.stderr.strip()[:200]}")
     392                  continue
     393  
     394              for line in r.stdout.strip().split("\n"):
helm/api.py:461 Fine Truncating the latest prompt to 500 chars when assembling a compact context string for the fast models is a deliberate focusing limit. opus
     458              f"Current tree: {json.dumps(state.tree)}\n"
     459              f"Current theme: {state.theme or '(none)'}\n"
     460              f"Recent actions: {json.dumps(recent[-10:]) if recent else '[]'}\n"
>>>  461              f"Latest prompt: {prompt[:500]}"
     462          )
     463  
     464          # Race all fast models — fastest valid JSON wins, all logged
helm/autodo/scanner/code_conventions.py:316 Fine This is documentation/code in a scanner that detects raw slicing — not itself a truncation site. opus
     313      percentage lost. Raw slicing hides data loss — the #1 cause of bad extraction results.
     314  
     315      Only flags 4+ digit numbers (>=1000 chars) — these are content limits for LLM
>>>  316      calls and processing. Smaller slices ([:200], [:500]) are typically display/payload
     317      truncation and acceptable without logging.
     318      """
     319      issues: list[ScanIssue] = []
helm/corpus/cli.py:230 Fine CLI listing preview of message content, 100 chars is appropriate for tabular terminal output. opus
     227                  ts = (r["timestamp"] or "")[:16]
     228                  src = "cc" if r["source"] == "claude_code" else "web"
     229                  role_ch = "U" if r["role"] == "user" else "A"
>>>  230                  text = r["content"][:100].replace("\n", " ")
     231                  acct = (r["account"] or "")
     232                  if "@" in acct:
     233                      acct = acct.split("@")[0][:12]
helm/hooks/handler.py:573 Fine Display preview of tool input for logging/UI purposes. opus
     570          url = tool_input['url']
     571          return f"url: {url[:50]}\u2026" if len(url) > 50 else f"url: {url}"
     572      s = str(tool_input)
>>>  573      return s[:100] + "\u2026" if len(s) > 100 else s
     574  
     575  
     576  def _track_resource(resource_type: str, path: str, session_info: dict):
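Several of the Fine display sites repeat the `s[:N] + "…" if len(s) > N else s` idiom inline. A shared helper — hypothetical, not present in the audited code — would centralize the ellipsis logic:

```python
def truncate(text: str, limit: int, ellipsis: str = "\u2026") -> str:
    """Cap text at limit characters, appending an ellipsis only when something was cut."""
    # The conditional spans the whole expression, so the "+" applies
    # only in the truncated branch.
    return text[:limit] + ellipsis if len(text) > limit else text
```

Callers then write `truncate(msg, 200)` instead of repeating the slice-plus-ellipsis expression at each display site.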
helm/periodic.py:126 Fine Error message truncation for database storage, 500 chars is reasonable for a last_error column. opus
     123  
     124          except Exception as e:
     125              elapsed = time.monotonic() - start
>>>  126              error_msg = str(e)[:500]
     127              next_due = (datetime.now(tz=timezone.utc) + task.interval).isoformat()
     128  
     129              conn = open_raw_connection()
helm/periodic.py:236 Fine Stderr truncation for error reporting from subprocess, 500 chars is reasonable. opus
     233      stdout, stderr = await proc.communicate()
     234  
     235      if proc.returncode != 0:
>>>  236          error = stderr.decode().strip()[:500] if stderr else "unknown error"
     237          raise RuntimeError(f"Backup failed (exit {proc.returncode}): {error}")
     238  
     239      return None
helm/recap.py:284 Fine Truncating to 200 chars explicitly for a grid card display is a reasonable UI constraint with a clear inline comment. opus
     281          content = re.sub(r"`(.+?)`", r"\1", content)
     282          # Collapse to single line
     283          content = " ".join(content.split())
>>>  284          return content[:200] + ("…" if len(content) > 200 else "")  # cap for grid card
     285  
     286      return _extract_section(recap_text, "Goal"), _extract_section(recap_text, "Result")
     287  
helm/stats.py:500 Fine Used as a deduplication signature for messages, 100 chars is sufficient for that purpose. opus
     497                          text = content if isinstance(content, str) else ""
     498                      if len(text.strip()) <= 5:
     499                          continue
>>>  500                      sig = text.strip()[:100]
     501                      if sig in seen_msgs:
     502                          continue
     503                      seen_msgs.add(sig)
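The dedup-signature pattern judged Fine above can be isolated into a small helper. A sketch under the same assumption the audit accepts — that a 100-char stripped prefix is a good-enough identity for near-duplicate messages (helper name is hypothetical):

```python
def dedup_by_prefix(messages: list[str], sig_len: int = 100) -> list[str]:
    """Keep the first message for each distinct stripped-prefix signature."""
    seen: set[str] = set()
    kept: list[str] = []
    for msg in messages:
        sig = msg.strip()[:sig_len]
        if sig in seen:
            continue
        seen.add(sig)
        kept.append(msg)
    return kept
```

Messages differing only after the first `sig_len` characters collapse to one entry, which is the intended trade-off of a prefix signature.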
intel/app.py:514 Fine UI pagination limit for displayed cards, not text truncation. opus
     511              if multi_only:
     512                  items = [s for s in items if s.get("investor_count", 0) > 1]
     513              with startup_grid:
>>>  514                  shown = items[:120]
     515                  for s in shown:
     516                      is_exit = bool(s.get("status") and ("acquired" in s["status"].lower() or "ipo" in s["status"].lower()))
     517                      with ui.element("div").classes("intel-card").style(
intel/cli.py:264 Fine CLI output showing score summary, 120 chars is a reasonable display limit. opus
     261          # Show score if available
     262          if "score" in results and not results["score"].get("error"):
     263              s = results["score"]
>>>  264              click.echo(f"Score: {s.get('overall', '?')}/10 — {s.get('summary', '')[:120]}")
     265  
     266  
     267  async def _run_stage(stage_name: str, slug: str, name: str, hint: str, person_dir: Path, prior_results: dict) -> dict:
intel/companies/report_verify.py:91 Fine Truncating company name and detail for HTML table cell display is standard UI behavior. opus
      88              <td>{i.issue_type.replace('_', ' ')}</td>
      89              <td><strong>{_esc(i.company_name[:60])}</strong></td>
      90              <td><code>{i.ticker or '—'}</code></td>
>>>   91              <td>{_esc(i.detail[:120])}</td>
      92              <td>{fix_cell}</td>
      93          </tr>\n"""
      94  
intel/companies/scrape_portfolio.py:111 Fine Debug logging of an LLM response that failed to parse, 500 chars is sufficient for diagnostics. opus
     108              companies = json.loads(match.group(0))
     109          else:
     110              logger.error(f"Failed to parse LLM response as JSON")
>>>  111              logger.debug(f"Response: {text[:500]}")
     112              return []
     113  
     114      # Ensure slugs exist
intel/jobs/startup_summary.py:173 Fine Logging a preview of an error string for debugging. opus
     170                  elif "angellist" in link or "wellfound" in link:
     171                      urls_found["angellist"] = link
     172          except Exception as e:
>>>  173              log.warning("search failed", query=query, error=str(e)[:200])
     174  
     175      # Fetch provided URL if given
     176      fetched_content = None
intel/jobs/vc_01_scrape_extract.py:97 Fine Logging a preview of error content for debugging. opus
      94              pages_fetched += 1
      95              log.info("scraped main", firm=item_key, chars=len(result.content))
      96          else:
>>>   97              log.warning("main page fetch returned error", firm=item_key, content=result.content[:200])
      98      except Exception as e:
      99          log.warning("main page fetch failed", firm=item_key, error=str(e)[:200])
     100  
intel/jobs/vc_01_scrape_extract.py:115 Fine Logging a preview of an error string for debugging. opus
     112                      log.info("scraped page", firm=item_key, page=page_type, suffix=suffix, chars=len(result.content))
     113                      break  # found a working page for this type
     114              except Exception as e:
>>>  115                  log.warning("suffix miss", firm=item_key, url=url, error=str(e)[:100])
     116                  continue  # try next suffix
     117  
     118      # Fetch blog/writing URL if provided (e.g. codingvc.com for Susa)
intel/jobs/vc_01_scrape_extract.py:130 Fine Logging a preview of an error string for debugging. opus
     127                  pages_fetched += 1
     128                  log.info("scraped blog", firm=item_key, url=blog_url, chars=len(result.content))
     129          except Exception as e:
>>>  130              log.warning("blog fetch failed", firm=item_key, url=blog_url, error=str(e)[:200])
     131  
     132      if pages_fetched == 0:
     133          raise RuntimeError(f"Could not fetch any pages for {item_key} at {base_url}")
intel/people/common.py:380 Fine Headline truncated to 150 chars for display preview in profile summary lines. opus
     377      if rname:
     378          lines.append(f"    {rname}")
     379      if headline:
>>>  380          lines.append(f"    {headline[:150]}")
     381      if url:
     382          lines.append(f"    {url}")
     383      return lines
intel/people/discover.py:165 Fine CLI display preview of search result snippets for human readability. opus
     162              tag = c["platform"]
     163              lines.append(f"  [{tag}] {c['title']}")
     164              if c.get("snippet"):
>>>  165                  lines.append(f"        {c['snippet'][:120]}")
     166              lines.append(f"        {c['url']}")
     167              lines.append("")
     168          # Web search organics (not already in candidates)
intel/people/discover.py:175 Fine Snippet preview truncated to 120 chars for display formatting — standard display preview. opus
     172                  continue
     173              lines.append(f"  [web] {w['title']}")
     174              if w.get("snippet"):
>>>  175                  lines.append(f"        {w['snippet'][:120]}")
     176              lines.append(f"        {w['url']}")
     177              lines.append("")
     178  
intel/people/discover.py:295 Fine Wikipedia extract truncated to 150 chars, with ellipsis, for aligned columnar display output. opus
     292                      lines.append(f"  Wikipedia          {edata.get('title', '')} — {desc[:80]}")
     293                      extract = edata.get("extract", "")
     294                      if extract:
>>>  295                          lines.append(f"                     {extract[:150]}...")
     296                      if edata.get("url"):
     297                          lines.append(f"                     {edata['url']}")
     298  
intel/people/discover.py:304 Fine Grokipedia extract truncated to 150 chars for display preview with ellipsis. opus
     301                  lines.append(f"  Grokipedia         {edata.get('title', '')} ({chars} chars)")
     302                  extract = edata.get("extract", "")
     303                  if extract:
>>>  304                      lines.append(f"                     {extract[:150]}...")
     305                  if edata.get("url"):
     306                      lines.append(f"                     {edata['url']}")
     307  
intel/people/lib/enrich.py:218 Fine Error response body truncation for error reporting, 200 chars is sufficient. opus
     215              },
     216          )
     217          if resp.status_code != 200:
>>>  218              return {"error": f"Apollo HTTP {resp.status_code}: {resp.text[:200]}"}
     219          data = resp.json()
     220          person = data.get("person")
     221          if not person:
intel/people/lookup.py:295 Fine Raw HTML truncated to 500 chars in a parse-failure error payload for debug diagnostics. opus
     292              "thumbnail": data.get("thumbnail", {}).get("source"),
     293          }
     294      except json.JSONDecodeError:
>>>  295          return {"error": "parse failed", "raw": html[:500]}
     296  
     297  
     298  async def fetch_generic_page(url: str, source_name: str) -> dict:
intel/people/lookup.py:434 Fine Wikipedia extract truncated to 500 chars for the profile summary display. opus
     431          lines.append(f"  {wiki.get('description', 'N/A')}")
     432          extract = wiki.get("extract", "")
     433          if extract:
>>>  434              lines.append(f"  {extract[:500]}")
     435          if wiki.get("url"):
     436              lines.append(f"  URL: {wiki['url']}")
     437          lines.append("")
intel/people/lookup.py:475 Fine Crunchbase OG description truncated to 300 chars for display summary. opus
     472          if cb.get("og_title"):
     473              lines.append(f"  Title: {cb['og_title']}")
     474          if cb.get("og_description"):
>>>  475              lines.append(f"  {cb['og_description'][:300]}")
     476          if cb.get("description"):
     477              lines.append(f"  Bio: {cb['description'][:300]}")
     478          lines.append(f"  URL: {cb.get('url', 'N/A')}")
intel/people/lookup.py:489 Fine Source descriptions truncated to 300 chars for display summary. opus
     486              if src.get("title"):
     487                  lines.append(f"  Title: {src['title']}")
     488              if src.get("description"):
>>>  489                  lines.append(f"  {src['description'][:300]}")
     490              if src.get("text_preview"):
     491                  # Show first meaningful chunk
     492                  text = src["text_preview"][:500]
intel/people/lookup.py:502 Fine Google bio snippets truncated to 200 chars for display listing. opus
     499      if gb and gb.get("snippets"):
     500          lines.append(f"🌐 GOOGLE BIO SNIPPETS")
     501          for s in gb["snippets"][:3]:
>>>  502              lines.append(f"  • {s[:200]}")
     503          lines.append("")
     504  
     505      return "\n".join(lines)
intel/people/network_browser.py:365 Fine About section truncated to 300 chars for display preview in the profile view. opus
     362  
     363      about = profile.get("about", "")
     364      if about:
>>>  365          lines.append(f"\n> {about[:300]}")
     366  
     367      # Experience
     368      exp = profile.get("experience") or []
intel/people/scoring.py:150 Fine Grokipedia bio extract truncated to 500 chars when building the profile summary. opus
     147      if grok:
     148          extract = grok.get("extract", "")
     149          if extract:
>>>  150              parts.append(f"\nBio extract: {extract[:500]}")
     151  
     152      # Net worth
     153      net_worth = person.get("net_worth")
jobs/job_inspect.py:245 Fine CLI error display truncated to 150 chars, reasonable for inspection output. opus
     242  
     243          print(" ".join(parts))
     244          print(f"     Attempts: {attempts} | Last: {_format_timestamp(last_attempt)}")
>>>  245          print(f"     Error: {error[:150]}")
     246          print()
     247  
     248  
jobs/lib/discovery.py:276 Fine Truncating yt-dlp stderr to 200 chars in an error log is standard logging practice. opus
     273              if rc == -1:
     274                  logger.error("youtube_channel: yt-dlp timed out")
     275              else:
>>>  276                  logger.error(f"youtube_channel: yt-dlp failed: {stderr[:200]}")
     277              return []
     278  
     279          items = []
jobs/lib/doctor.py:155 Fine 800 → 800 This truncates an error message string sent to an LLM for classification. Error messages are typically short and repetitive — 800 chars is sufficient to capture the meaningful part of most error traces. The LLM only needs to classify the error type, not analyze the full stack trace, so this is a reasonable deliberate focusing limit. opus
     152                  f"Job: {job_id}\n"
     153                  f"Stage: {stage}\n"
     154                  f"Item: {item_key}\n"
>>>  155                  f"Error: {error[:800]}"
     156              ),
     157              system=_CLASSIFY_SYSTEM,
     158              temperature=0,
jobs/lib/stages.py:119 Fine Truncating error content to 200 chars in a RuntimeError message is fine for error reporting. opus
     116              )
     117              content = result.content
     118              if content.startswith("Error"):
>>>  119                  raise RuntimeError(f"Fetch failed: {content[:200]}")
     120              content_path = data_dir / "content.html"
     121              content_path.write_text(content, encoding="utf-8")
     122              content_type = "html"
jobs/lib/stages.py:648 Fine Truncating the exception string to 200 chars in a scoring-stage error log is standard logging practice. opus
     645          # Clamp to 0-10
     646          return {k: max(0, min(10, int(scores.get(k, 0)))) for k in expected}
     647      except Exception as e:
>>>  648          log.error("score LLM error", error=str(e)[:200])
     649          raise
learning/cli.py:246 Fine CLI display of instance content, 120 chars is a reasonable preview. opus
     243          instance = matches[0]
     244  
     245      click.echo(f"  {instance.id[:12]}  [{instance.learning_type.value if instance.learning_type else '?'}]")
>>>  246      click.echo(f"  {instance.content[:120]}")
     247  
     248      # Check links
     249      principles = store.get_principles_for_instance(instance.id)
learning/cli.py:490 Fine CLI provenance display of instance content, 100 chars is fine. opus
     487  
     488      click.echo(f"\n{'Provenance':─^50}")
     489      click.echo(f"  ID:               {inst.id}")
>>>  490      click.echo(f"  Content:          {inst.content[:100]}")
     491      click.echo(f"  Source type:      {inst.source_type.value}")
     492      click.echo(f"  Source ID:        {inst.source_id or '—'}")
     493      click.echo(f"  Source location:  {inst.source_location or '—'}")
learning/cli.py:638 Fine CLI error output showing raw invalid JSON response, 300 chars is fine for debugging. opus
     635          data = json.loads(text)
     636      except json.JSONDecodeError:
     637          click.echo(f"LLM returned invalid JSON", err=True)
>>>  638          click.echo(f"Raw: {text[:300]}", err=True)
     639          return None
     640  
     641      data["_model"] = model
learning/cli.py:685 Fine CLI display preview of principle rationale, 100 chars is fine. opus
     682      click.echo(f"  File:     {file_path}")
     683      click.echo(f"  Text:     {principle.text}")
     684      if principle.rationale:
>>>  685          click.echo(f"  Why:      {principle.rationale[:100]}")
     686  
     687      # Auto-materialize
     688      materialize_all(quiet=True)
learning/cli.py:727 Fine CLI error output showing raw invalid JSON response, 200 chars is fine for debugging. opus
     724          data = json.loads(text)
     725      except json.JSONDecodeError:
     726          click.echo(f"LLM returned invalid JSON, using raw input.", err=True)
>>>  727          click.echo(f"Raw response: {text[:200]}", err=True)
     728          data = {
     729              "content": observation,
     730              "learning_type": "observation",
learning/cli.py:863 Fine CLI dry-run display preview of rationale and anti-pattern, 100 chars is fine. opus
     860              click.echo(f"  Name:     {data.get('name')}")
     861              click.echo(f"  File:     {data.get('file_path')}")
     862              click.echo(f"  Text:     {data.get('text')}")
>>>  863              click.echo(f"  Why:      {data.get('rationale', '')[:100]}")
     864              click.echo(f"  Anti:     {data.get('anti_pattern', '')[:100]}")
     865              suggested = parent or data.get("suggested_parent")
     866              if suggested:
learning/cli.py:1456 Fine CLI search result snippet display, 100 chars is a reasonable preview. opus
    1453      """Print a single search result."""
    1454      type_tag = "P" if r["type"] == "principle" else "I"
    1455      item_id = r["id"] if r["type"] == "principle" else r["id"][:12]
>>> 1456      snippet = (r.get("snippet") or r.get("text", ""))[:100].replace("\n", " ")
    1457      proj = f" [{r['project']}]" if r.get("project") else ""
    1458      click.echo(f"  {type_tag}  {item_id}{proj}")
    1459      click.echo(f"     {r['title']}")
learning/cli.py:1736 Fine CLI verbose listing of instance content and context, 120/100 chars are fine display previews. opus
    1733  
    1734      if verbose:
    1735          click.echo(f"{prefix}")
>>> 1736          click.echo(f"    Content:  {inst.content[:120]}")
    1737          if inst.context_snippet:
    1738              click.echo(f"    Context:  {inst.context_snippet[:100]}")
    1739          if inst.project:
learning/cli.py:1760 Fine CLI verbose display preview of principle text, 150 chars is fine for terminal output. opus
    1757      if verbose:
    1758          click.echo(f"{prefix}")
    1759          click.echo(f"{pad}    Name:     {p.name}")
>>> 1760          click.echo(f"{pad}    Text:     {p.text[:150]}")
    1761          if p.rationale:
    1762              click.echo(f"{pad}    Rationale:{p.rationale[:100]}")
    1763          if p.anti_pattern:
learning/cli.py:1877 Fine Error message logging of invalid JSON response, 300 chars is sufficient for debugging. opus
    1874      try:
    1875          data = json.loads(text)
    1876      except json.JSONDecodeError:
>>> 1877          click.echo(f"LLM returned invalid JSON:\n{text[:300]}", err=True)
    1878          raise SystemExit(1)
    1879  
    1880      name = data.get("name", "")
learning/cli.py:1988 Fine Error message logging of invalid JSON response, 300 chars is sufficient for debugging. opus
    1985      try:
    1986          proposals = json.loads(text)
    1987      except json.JSONDecodeError:
>>> 1988          click.echo(f"LLM returned invalid JSON:\n{text[:300]}", err=True)
    1989          raise SystemExit(1)
    1990  
    1991      if not isinstance(proposals, list):
learning/cli.py:2098 Fine Error message logging of invalid JSON response, 300 chars is sufficient for debugging. opus
    2095          m = _re.search(r'\{[\s\S]*\}', text)
    2096          data = json.loads(m.group()) if m else {"adaptations": []}
    2097      except (json.JSONDecodeError, AttributeError):
>>> 2098          click.echo(f"LLM returned invalid JSON:\n{text[:300]}", err=True)
    2099          raise SystemExit(1)
    2100  
    2101      adaptations = data.get("adaptations", [])
learning/cli.py:2112 Fine CLI display truncation of a quote to 120 chars for readable terminal output. opus
    2109          click.echo(f"\n  {i}. [{followed}] {a.get('situation', '?')}")
    2110          click.echo(f"     Outcome: {a.get('outcome', '?')}")
    2111          if a.get("quote"):
>>> 2112              click.echo(f"     Quote: \"{a['quote'][:120]}\"")
    2113  
    2114      if dry_run:
    2115          click.echo("\n(dry run — not storing)")
learning/gyms/apply/gym.py:197 Fine Fallback search query from context text — 100 chars is reasonable for a search query string. opus
     194          # Take up to 12 terms, join with OR for broad match
     195          terms = filtered[:12]
     196          if not terms:
>>>  197              return context[:100]
     198          return " OR ".join(terms)
     199  
     200      def evaluate_retrieval(self, case: RetrievalCase) -> RetrievalResult:
learning/gyms/apply/gym.py:312 Fine 200 These are display/summary truncations for building a human-readable evaluation report of retrieval results. The principle_text and item text are being formatted into a structured summary, not sent as primary LLM input for analysis. 200 chars is appropriate for a preview/summary format. opus
     309          candidate = (
     310              f"Context: {result.test_case.context_snippet}\n\n"
     311              f"Expected principle: {result.test_case.principle_name}\n"
>>>  312              f"  {result.test_case.principle_text[:200]}\n\n"
     313              f"Retrieved (top 5):\n{retrieved_text}"
     314          )
     315  
learning/gyms/apply/spot_check.py:152 Fine HTML report display preview of context snippet for spot-check UI. opus
     149  
     150              <div class="section">
     151                  <div class="section-label">Context (search query)</div>
>>>  152                  <div class="context">{_esc(tc.context_snippet[:300])}</div>
     153              </div>
     154  
     155              <div class="section">
learning/gyms/apply/spot_check.py:159 Fine HTML report display preview of principle text for spot-check UI. opus
     156                  <div class="section-label">Expected principle</div>
     157                  <div class="expected">
     158                      <strong>{_esc(tc.principle_name)}</strong>
>>>  159                      <div class="principle-text">{_esc(tc.principle_text[:250])}</div>
     160                  </div>
     161              </div>
     162  
learning/gyms/badge/gym.py:35 Fine Error message truncation showing the unexpected response, 200 chars is fine for a ValueError message. opus
      32      result = repair_json(text)
      33      if isinstance(result, dict):
      34          return result
>>>   35      raise ValueError(f"Expected JSON object but got {type(result).__name__}: {text[:200]}")
      36  
      37  
      38  @dataclass
learning/gyms/badge/gym.py:212 Fine Truncating prompt text to 100 chars for building a readable evaluation summary, appropriate for display. opus
     209                  # Build a readable prompt→badge sequence
     210                  sequence_lines = []
     211                  for step in timeline:
>>>  212                      sequence_lines.append(f"  prompt: {step['prompt'][:100]}")
     213                      sequence_lines.append(f"  badge: {step['badge']}")
     214                      sequence_lines.append("")
     215                  sequence_text = "\n".join(sequence_lines)
learning/gyms/extraction/app.py:122 Fine Truncating diff lines to 120 chars for UI display in a Gradio app is a reasonable display preview. opus
     119          ext_lines = set(result.text_extracted.splitlines())
     120          cln_lines = set(result.text_cleaned.splitlines())
     121          removed = ext_lines - cln_lines
>>>  122          diff_info = "Removed by cleaning:\n" + "\n".join(f"  - {line[:120]}" for line in sorted(removed) if line.strip())
     123      else:
     124          diff_info = "(no changes from cleaning)" if use_clean else "(cleaning disabled)"
     125  
learning/gyms/extraction/app.py:148 Fine Same diff display context as idx 4, truncating lines to 120 chars for UI readability. opus
     145          c_lines = set(cleaned.text_cleaned.splitlines())
     146          removed = b_lines - c_lines
     147          diff = "Removed by LLM cleaning:\n" + "\n".join(
>>>  148              f"  - {line[:120]}" for line in sorted(removed) if line.strip()
     149          )
     150      else:
     151          diff = "No difference"
learning/gyms/iterm2/test_panes.py:77 Fine Debug/error output preview in a test file, 200 chars is fine. opus
      74      if "Hello from gym test" in text:
      75          log("Buffer contains expected output ✓")
      76      else:
>>>   77          print(f"  ✗ Expected 'Hello from gym test' in buffer, got:\n{text[:200]}")
      78  
      79      # Cleanup
      80      await s2.async_close()
learning/schema/link_instances.py:116 Fine Logging a preview of a failed JSON response for debugging purposes. opus
     113          text = strip_fences(response)
     114          results = json.loads(text)
     115      except json.JSONDecodeError as e:
>>>  116          logger.error("Failed to parse LLM response", error=str(e), response_preview=response[:200])
     117          return 0
     118  
     119      link_count = 0
learning/session_extract/triage.py:68 Fine 100 This is a display/summary truncation — using ep.text[:100] as a fallback label when ep.topic is None, for building a summary list sent to an LLM for triage classification. 100 chars is fine as a topic-level preview. opus
      65      summaries = []
      66      for i, ep in enumerate(episodes):
      67          err = " [HAS ERRORS]" if ep.has_errors else ""
>>>   68          summaries.append(f"Episode {i}: {ep.topic or ep.text[:100]}{err}")
      69  
      70      prompt = f"""\
      71  Classify each episode of this coding session. Return JSON array of objects:
learning/session_extract/turns.py:119 Fine 500 This truncates error content when building turn representations for episode segmentation. Error messages beyond 500 chars are typically stack traces with repetitive frames. This is a reasonable limit for capturing the essential error info. opus
     116                      b.get("text", "") for b in content if isinstance(b, dict)
     117                  )
     118              if is_error and content:
>>>  119                  asst_text_parts.append(f"ERROR: {content[:500]}")
     120                  asst_has_error = True
     121  
     122      _flush_assistant()
learning/session_review/failure_mining.py:169 Fine 500 This extracts a context prompt for storage/display in failure mining records, not for LLM consumption. 500 chars is a reasonable preview of the user's initial prompt for logging and later human review. opus
     166          msg = entry.get("message", {})
     167          content = msg.get("content", "")
     168          if isinstance(content, str) and content.strip():
>>>  169              return content.strip()[:500]
     170          if isinstance(content, list):
     171              for block in content:
     172                  if isinstance(block, dict):
learning/session_review/failure_mining.py:176 Fine 500 Same function as the previous site — extracting a user prompt preview from a different content format (list of blocks). This is for storage/display in failure records, not LLM input. 500 chars is appropriate. opus
     173                      if block.get("type") == "text":
     174                          text = block.get("text", "").strip()
     175                          if text:
>>>  176                              return text[:500]
     177                      # Skip tool_result blocks when looking for user prompt
     178                      if block.get("type") == "tool_result":
     179                          continue
learning/session_review/failure_mining.py:248 Fine 500500 This truncates thinking text being buffered into a failure mining record. It's stored as structured data for later review, not sent to an LLM. 500 chars per thinking block is a reasonable storage limit to keep failure records manageable. opus
     245                  if block.get("type") == "thinking":
     246                      thinking_text = block.get("thinking", "")
     247                      if thinking_text and active_failure:
>>>  248                          thinking_buffer.append(thinking_text[:500])
     249  
     250                  elif block.get("type") == "tool_use":
     251                      has_tool_use = True
learning/session_review/failure_mining.py:285 Fine 500500 This truncates tool result text for storage in failure mining records. It's structured data capture, not LLM input. 500 chars per tool result is reasonable for diagnostic storage. opus
     282                      result_content = " ".join(
     283                          b.get("text", "") for b in result_content if isinstance(b, dict)
     284                      )
>>>  285                  result_text = str(result_content)[:500]
     286  
     287                  tool_info = pending_tool_uses.pop(tool_id, None)
     288                  if not tool_info:
learning/session_review/judge_calibration.py:187 Fine Debug/error logging of unparseable raw LLM response, 200 chars is fine for diagnostics. opus
     184              reasons.append(parsed.get("reason", ""))
     185          except (json.JSONDecodeError, ValueError):
     186              scores.append(None)
>>>  187              reasons.append(f"parse_error: {raw[:200]}")
     188  
     189      return {"scores": scores, "reasons": reasons, "latency_ms": latency_ms, "cost": cost}
     190  
learning/session_review/pair_judge.py:168 Fine 200 These are summaries (input_summary, result_summary, context_prompt) being formatted into a structured judge prompt. The fields are already summaries by nature — the 200-char limit is a safety cap on summary fields, not truncation of primary content. The judge is evaluating the repair pattern, not the full content. opus
     165          for i, c in enumerate(candidates):
     166              candidates_text += f"### Candidate {i}\n"
     167              candidates_text += f"Tool: {c.get('tool_name', '?')}\n"
>>>  168              candidates_text += f"Input: {c.get('input_summary', '')[:200]}\n"
     169              candidates_text += f"Result: {c.get('result_summary', '')[:200]}\n\n"
     170  
     171          return JUDGE_CANDIDATES_PROMPT.format(
learning/session_review/pair_judge.py:174 Fine 200/300 Same as the previous site — these are summary fields (input_summary, error_text, context_prompt) being capped for a structured judge prompt. These are intentionally concise summaries for pattern-level judging, not full content analysis. The limits are appropriate for summary fields. opus
     171          return JUDGE_CANDIDATES_PROMPT.format(
     172              context=pair["context_prompt"][:200],
     173              fail_tool=ff.get("tool_name", "?"),
>>>  174              fail_input=ff.get("input_summary", "")[:200],
     175              fail_error=ff.get("error_text", "")[:300],
     176              candidates_text=candidates_text.strip(),
     177          )
learning/session_review/pair_judge_compare.py:904 Fine HTML report display preview of context, 120 chars is fine for UI rendering. opus
     901  
     902          h.append('<div class="spotlight">')
     903          h.append(f'<div class="spotlight-header">Pair #{pid} — [{cat}] {_esc(ff.get("tool_name", "?"))}</div>')
>>>  904          h.append(f'<div class="meta">Context: {_esc(pd["context"][:120])}</div>')
     905  
     906          # Show error and repair side by side
     907          h.append('<div class="pair-detail">')
learning/session_review/pair_judge_compare.py:912 Fine HTML report display preview of error text, 200 chars is reasonable. opus
     909          # Error side
     910          h.append('<div class="pair-error">')
     911          h.append(f'<div class="pair-label">Error ({_esc(ff.get("tool_name", "?"))})</div>')
>>>  912          error_text = ff.get("error_text", "")[:200]
     913          h.append(f'<code>{_esc(error_text)}</code>')
     914          if ff.get("input_summary"):
     915              h.append(f'<div class="meta" style="margin-top:0.3em">Input: {_esc(ff["input_summary"][:120])}</div>')
learning/session_review/pair_judge_compare.py:928 Fine HTML report display preview of repair candidate input summary, 200 chars is fine. opus
     925                  c = cands[best_idx]
     926                  h.append('<div class="pair-repair">')
     927                  h.append(f'<div class="pair-label">Repair candidate #{best_idx} ({_esc(c.get("tool_name", "?"))})</div>')
>>>  928                  h.append(f'<code>{_esc(c.get("input_summary", "")[:200])}</code>')
     929                  if c.get("result_summary"):
     930                      h.append(f'<div class="meta" style="margin-top:0.3em">Result: {_esc(c["result_summary"][:120])}</div>')
     931                  h.append('</div>')
learning/session_review/pair_judge_compare.py:938 Fine HTML report display preview of success input summary, 200 chars is fine. opus
     935              s = pd["success"]
     936              h.append('<div class="pair-repair">')
     937              h.append(f'<div class="pair-label">Next success ({_esc(s.get("tool_name", "?"))})</div>')
>>>  938              h.append(f'<code>{_esc(s.get("input_summary", "")[:200])}</code>')
     939              if s.get("result_summary"):
     940                  h.append(f'<div class="meta" style="margin-top:0.3em">Result: {_esc(s["result_summary"][:120])}</div>')
     941              h.append('</div>')
learning/session_review/principle_propose.py:230 Fine Snippet preview truncated to 120 chars for display formatting. opus
     227          text = p.read_text()
     228          # Get first non-header, non-empty line as summary
     229          summary_lines = [l.strip() for l in text.split("\n") if l.strip() and not l.startswith("#")]
>>>  230          summary = summary_lines[0][:150] if summary_lines else ""
     231          principles.append(f"- **{p.stem}**: {summary}")
     232      return "\n".join(principles) if principles else "(none)"
     233  
learning/session_review/principle_propose.py:509 Fine Principle summary line truncated to 150 chars for display listing — standard display preview. opus
     506          if analysis:
     507              logger.info(f"Latest analysis: {len(analysis.get('root_causes', []))} root causes")
     508              for rc in analysis.get("root_causes", []):
>>>  509                  logger.info(f"  [{rc.get('frequency', '?')}] {rc['name']}: {rc['description'][:100]}")
     510          proposals = load_proposals()
     511          if proposals:
     512              logger.info(f"\n{len(proposals)} proposed principles:")
learning/session_review/principle_propose.py:544 Fine Root cause description truncated to 100 chars for log output display. opus
     541              analysis = await analyze_root_causes(train_failures, model)
     542              save_analysis(analysis)
     543              for rc in analysis.get("root_causes", []):
>>>  544                  logger.info(f"  [{rc.get('frequency', '?')}] {rc['name']}: {rc['description'][:120]}")
     545              if meta := analysis.get("meta_observations"):
     546                  logger.info(f"  Meta: {meta[:200]}")
     547  
learning/session_review/principle_propose.py:562 Fine Root cause description truncated to 120 chars for log output display. opus
     559              principles = await propose_principles(analysis, train_failures, model)
     560              for p in principles:
     561                  save_proposal(p)
>>>  562                  logger.info(f"  [{p.get('abstraction_level', '?')}] {p['name']}: {p['text'][:120]}")
     563  
     564          # Stage 3
     565          if coverage:
learning/session_review/principle_refine.py:303 Fine Headline truncated to 150 chars for display formatting in search result lines. opus
     300              ),
     301          )
     302  
>>>  303          logger.info(f"Worst regression prompt: {worst_reg['prompt'][:100]}...")
     304          logger.info(f"Failure analysis:\n{worst_reg['failure_analysis']}")
     305  
     306          # Generate guard clause
learning/session_review/replay.py:486 Fine Logger info preview of trigger text, 100 chars is fine for log output. opus
     483          )
     484          logger.info(f"Constructed trigger from context ({len(orig_tools)} prior tools)")
     485      else:
>>>  486          logger.info(f"Trigger: {trigger_text[:100]}")
     487  
     488      conversation = [{"role": "user", "content": trigger_text}]
     489      replayed_turns = []
learning/session_review/replay.py:775 Fine CLI debug print of tool input JSON, 120 chars is a reasonable display preview. opus
     772          print(f"\n  WITHOUT principle — tool details:")
     773          for c in baseline_calls:
     774              if "name" in c:
>>>  775                  inp = json.dumps(c["input"])[:120]
     776                  print(f"    {c['name']}: {inp}")
     777  
     778          print(f"\n  WITH principle — tool details:")
learning/session_review/replay.py:781 Fine Same CLI debug print pattern as idx 1, display preview is fine. opus
     778          print(f"\n  WITH principle — tool details:")
     779          for c in principle_calls:
     780              if "name" in c:
>>>  781                  inp = json.dumps(c["input"])[:120]
     782                  print(f"    {c['name']}: {inp}")
     783  
     784          # Parallelism comparison
learning/session_review/replay.py:830 Fine CLI display preview of user message text blocks, 120 chars is fine. opus
     827          print(f"\n  First user messages:")
     828          user_turns = [t for t in seg if t.role == "user" and t.text_blocks]
     829          for t in user_turns[:5]:
>>>  830              text = t.text_blocks[0][:120] if t.text_blocks else "(tool results only)"
     831              print(f"    [{t.index}] {text}")
     832          return
     833  
learning/session_review/sandbox_replay.py:505 Fine 40004000 This truncation is for computing a cache key hash, not for LLM input. The 4k limit ensures cache keys are stable and representative without hashing enormous strings. This is a hashing/caching concern, not a content truncation issue. opus
     502  
     503  def _judge_cache_key(prompt: str, result_text: str, judge_model: str) -> str:
     504      """Compute cache key for judge scoring."""
>>>  505      data = f"{prompt}|{result_text[:4000]}|{judge_model}"
     506      return hashlib.sha256(data.encode()).hexdigest()
     507  
     508  
lib/brightdata/youtube.py:67 Fine Warning log truncation of API error messages is standard logging practice. opus
      64      good = []
      65      for r in records:
      66          if r.get("error") or r.get("error_code"):
>>>   67              logger.warning(f"BD record error for {r.get('input', {}).get('url', '?')}: {r.get('error', '')[:100]}")
      68          else:
      69              good.append(r)
      70  
lib/eval/parse.py:51 Fine Error message truncation showing the unparseable LLM response, 200 chars is fine for a ValueError message. opus
      48      if bracket_match:
      49          return _unwrap(json.loads(bracket_match.group(0)))
      50  
>>>   51      raise ValueError(f"Could not parse JSON from LLM response: {text[:200]}")
lib/finnhub/client.py:76 Fine Error message truncation in exception text is standard practice for preventing huge payloads in error messages. opus
      73          logger.warning(f"🔴 429 Rate limited! (#{_429_count}, limiter says {count}/{limit} in window)")
      74          raise FinnhubError("429 Rate limited — wait a moment", 429)
      75      if r.status_code != 200:
>>>   76          raise FinnhubError(f"HTTP {r.status_code}: {r.text[:200]}", r.status_code)
      77  
      78  
      79  def api(endpoint: str, **params) -> dict:
lib/gen/calibrate_temp_variety.py:363 Fine Display preview of creative text output for CLI debugging, classic preview truncation. opus
     360          for temp in TEMPERATURES:
     361              tk = temp_key(temp)
     362              for i, r in enumerate(creative.get(model, {}).get(tk, [])):
>>>  363                  text = r["text"][:120].replace("\n", " ") if r["text"] else "[ERROR]"
     364                  print(f"    t={tk:>7s} #{i}: {text}...")
     365  
     366  
lib/gradio/ingest.py:110 Fine Checking the first 500 chars for HTML detection markers is a reasonable heuristic that doesn't truncate output. opus
     107          "border:1px solid #ddd;padding:12px;border-radius:6px"
     108      )
     109  
>>>  110      if stripped.startswith("<!") or stripped.startswith("<html") or "<body" in stripped[:500].lower():
     111          return f'<div style="{style};background:#fff">{content}</div>'
     112  
     113      escaped = html_lib.escape(content)
lib/gym/base.py:141 Fine Truncating the reason string to 200 chars for feedback in a gym iteration loop is reasonable logging. opus
     138  
     139              # Build feedback from weak areas
     140              weak = [k for k, v in best.subscores.items() if isinstance(v, (int, float)) and v < 60]
>>>  141              feedback = f"Score {best.score:.0f}. Weak: {weak or 'none'}. {best.reason[:200]}"
     142  
     143          return best_candidates
lib/ingest/browser/reflection.py:56 Fine Checking only the first 100 chars of a string against a base64 regex pattern is a valid optimization for format sniffing. opus
      53          return (match.group(1), match.group(2))
      54  
      55      # Check for raw base64 that looks like an image (long base64 string)
>>>   56      if len(value) > MIN_BASE64_LENGTH and re.match(r'^[A-Za-z0-9+/=]+$', value[:100]):
      57          # Likely raw base64 image data
      58          return ("png", value)  # Assume PNG if no format specified
      59  
lib/ingest/browser/reflection.py:178 Fine Preview truncation of a large result string for display purposes is standard practice. opus
     175              else:
     176                  parts.append(
     177                      f"Result (large, {len(result_str)} chars, saved to {file_path}):\n"
>>>  178                      f"{preview_str[:500]}..."
     179                  )
     180  
     181      return "\n\n".join(parts)
lib/ingest/browser/session.py:150 Fine Debug log truncation of browser console messages is standard practice to avoid log flooding. opus
     147                  }
     148                  with open(self._console_log, "a") as f:
     149                      f.write(json.dumps(entry) + "\n")
>>>  150                  logger.debug(f"│console│ {msg.type}: {msg.text[:100]}")
     151  
     152          def log_pageerror(err):
     153              entry = {
lib/ingest/browser/session.py:162 Fine Warning log truncation of JS error messages is standard practice to avoid log flooding. opus
     159              }
     160              with open(self._console_log, "a") as f:
     161                  f.write(json.dumps(entry) + "\n")
>>>  162              logger.warning(f"│js-error│ {str(err)[:100]}")
     163  
     164          page.on("console", log_console)
     165          page.on("pageerror", log_pageerror)
lib/ingest/fetcher_media.py:224 Fine Error message truncation of stderr from a failed subprocess is standard practice. opus
     221      Path(pdf_path).unlink(missing_ok=True)
     222  
     223      if result.returncode != 0:
>>>  224          raise RuntimeError(f"marker-pdf failed: {result.stderr[:500]}")
     225  
     226      return result.stdout
     227  
lib/ingest/fetcher_media.py:256 Fine Error message truncation of unexpected response bytes is standard practice for diagnostics. opus
     253      if not pdf_bytes.startswith(b"%PDF"):
     254          clog.warning(f"Response doesn't look like PDF (first bytes: {pdf_bytes[:20]})")
     255          return (
>>>  256              f"Error: Expected PDF but got: {pdf_bytes[:200].decode('utf-8', errors='replace')}",
     257              headers,
     258          )
     259  
lib/ingest/fetcher_media.py:289 Fine Truncating a fallback PDF title to 100 chars is reasonable for display/metadata purposes. opus
     286          elif line and not line.startswith("#"):
     287              # Use first non-empty, non-heading line as fallback
     288              if len(line) > 10 and len(line) < 200:
>>>  289                  pdf_title = line[:100]
     290              break
     291  
     292      # Fallback to filename from URL
lib/ingest/related_work.py:141 Fine Error message truncation of unparseable LLM output in exception is standard practice. opus
     138          if match := re.search(r"\[.*\]", raw, re.DOTALL):
     139              queries = json.loads(match.group())
     140          else:
>>>  141              raise RuntimeError(f"Query generation returned unparseable output: {raw[:200]}")
     142      if not isinstance(queries, list) or not queries:
     143          raise RuntimeError(f"Query generation returned empty or non-list: {type(queries)}")
     144      return queries
lib/ingest/url_fetch_tool.py:95 Fine Sniffing the first 2000 chars to detect HTML content type is a reasonable heuristic for format detection, not a data truncation. opus
      92  def _extract_text_for_llm(content: str, headers: dict) -> str:
      93      """Best-effort text extraction to minimize tokens for downstream models."""
      94      content_type = str(headers.get("content-type", "")).lower()
>>>   95      snippet = content[:2000].lower()
      96      looks_html = "html" in content_type or "<html" in snippet or "<body" in snippet
      97      if not looks_html:
      98          return content
lib/llm/claude_oauth.py:552 Fine Truncating error body to 200 chars in a warning log message is standard logging practice. opus
     549                      f" ({cooldown / 3600:.1f}h)\n"
     550                      f"  5h: {_fmt_pct(rl['5h_util'])} ({rl['5h_status']})"
     551                      f"  7d: {_fmt_pct(rl['7d_util'])} ({rl['7d_status']})\n"
>>>  552                      f"  body: {error_text[:200]}"
     553                  )
     554                  mark_profile_rate_limited(profile_label, cooldown_seconds=cooldown, limit_reason=claim)
     555                  last_error = OAuthError(f"Rate limited (429) [{claim}]: {error_text}")
lib/llm/gemini_oauth.py:370 Fine Truncating notification body to 200 chars for an LLM dedup-gate prompt is a reasonable preview for deciding whether to send. opus
     367              if "429" not in emsg:
     368                  # Persistent failure (404, 403, etc.) — mark account broken and try next
     369                  acct.subscription_broken = True
>>>  370                  logger.warning("Gemini account subscription broken", account=acct.name, error=emsg[:150])
     371                  next_acct = _get_next_account(acct, model)
     372                  if next_acct:
     373                      acct = next_acct
lib/llm/json.py:162 Fine 500 → 500 This truncates raw LLM output in an error log message, not content sent to an LLM. Logging 500 chars of failed output is reasonable for debugging without flooding logs. This is display/logging truncation, not content truncation. opus
     159                      f"call_llm_json: JSON parse failed (attempt {attempt + 1}/{retries + 1}): {e}"
     160                  )
     161  
>>>  162      logger.error(f"call_llm_json: all {retries + 1} attempts failed, raw output: {raw_text[:500]}")
     163      raise last_error  # type: ignore[misc]
lib/llm/parallel.py:303 Fine CLI output preview truncation at 500 chars for parallel LLM results display is standard practice. opus
     300              print(
     301                  f"\n=== {chunk.model} ({chunk.latency:.1f}s, {chunk.tokens} tokens, {n_blocks} code blocks) ==="
     302              )
>>>  303              print(content[:500] + "..." if len(content) > 500 else content)
     304  
     305          elif chunk.chunk_type == "error":
     306              print()
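The ternary idiom at this site recurs across many of the "Fine" display rows. Where the audit's Named Constant verdict applies, one hedged option is to fold the idiom into a shared helper with a named limit (names below are illustrative, not from this codebase):

```python
# Hypothetical utility for the repeated display-truncation idiom
#   content[:500] + "..." if len(content) > 500 else content
# A named helper removes the magic number and the easy-to-misread
# conditional-expression precedence.
PREVIEW_LEN = 500  # chars shown in CLI previews

def preview(text: str, limit: int = PREVIEW_LEN, ellipsis: str = "...") -> str:
    """Truncate for display, appending an ellipsis only when something was cut."""
    return text if len(text) <= limit else text[:limit] + ellipsis
```

Call sites then read `print(preview(content))` instead of repeating the conditional.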
lib/llm/stream.py:418 Fine Truncating error message to 150 chars in a warning log is fine for logging. opus
     415              if fallback_to_api:
     416                  emsg = str(e)
     417                  if "429" in emsg:
>>>  418                      logger.warning("Subscription rate limited, falling back to API billing", route=route, error=emsg[:150])
     419                  else:
     420                      logger.warning("Subscription failed, falling back to API billing", error=str(e))
     421              else:
lib/llm/stream.py:454 Fine Debug trace output showing first 100 chars of prompt is standard debug preview. opus
     451      import sys as _sys
     452      print(f"[DEBUG stream_llm] full_model={full_model}, uses_responses={uses_responses_api(full_model)}", file=_sys.stderr, flush=True)
     453      print(f"[DEBUG stream_llm] cached_content len={len(cached_content) if cached_content else 0}, system len={len(system) if system else 0}", file=_sys.stderr, flush=True)
>>>  454      print(f"[DEBUG stream_llm] prompt len={len(prompt)}, prompt[:100]={repr(prompt[:100])}", file=_sys.stderr, flush=True)
     455  
     456      if uses_responses_api(full_model):
     457          # Responses API (OpenAI/xAI) — no explicit caching support,
lib/llm/stream.py:579 Fine Truncating error message to 500 chars in a warning log is reasonable for logging. opus
     576                  logger.warning(
     577                      f"429 RATE LIMIT (litellm path) on {profile_label} | model={model}\n"
     578                      f"  retry-after parsed: {cooldown}s\n"
>>>  579                      f"  full error: {emsg[:500]}\n"
     580                      f"  trying next profile (attempt {attempt + 1})"
     581                  )
     582                  last_error = e
lib/llm/stream.py:755 Fine Debug trace output showing first 300 chars of input is standard debug preview. opus
     752      import sys as _sys
     753      print(f"[DEBUG _stream_responses_api] model={model}", file=_sys.stderr, flush=True)
     754      print(f"[DEBUG] prompt len={len(prompt)}, system len={len(system) if system else 0}, full_input len={len(full_input)}", file=_sys.stderr, flush=True)
>>>  755      print(f"[DEBUG] full_input[:300]={repr(full_input[:300])}", file=_sys.stderr, flush=True)
     756  
     757      kwargs: dict[str, Any] = {
     758          "model": model,
lib/llm/stream.py:1320 Fine Truncating error message to 150 chars in a warning log is fine for logging. opus
    1317              if fallback_to_api:
    1318                  emsg = str(e)
    1319                  if "429" in emsg:
>>> 1320                      logger.warning("Subscription rate limited, falling back to API billing", route=route, error=emsg[:150])
    1321                  else:
    1322                      logger.warning("Subscription failed, falling back to API billing", error=str(e))
    1323              else:
lib/nicegui/doc_input.py:103 Fine Sniffing the first 500 chars to detect HTML is a reasonable heuristic that doesn't need to scan the entire document. opus
     100          "min-width:0;max-width:100%;overflow-wrap:break-word;word-break:break-word"
     101      )
     102  
>>>  103      if stripped.startswith("<!") or stripped.startswith("<html") or "<body" in stripped[:500].lower():
     104          return f'<div style="{style}">{content}</div>'
     105  
     106      escaped = html_lib.escape(content)
lib/notify/pushover.py:91 Fine Truncating the notification body to 200 chars inside an LLM dedup-gate prompt is a reasonable preview for deciding whether to send. opus
      88  
      89  New notification about to be sent:
      90    Title: {title}
>>>   91    Body: {body[:200]}
      92  
      93  Should this notification be sent? Consider:
      94  - Is this genuinely new information, or redundant with recent sends?
lib/semnet/cli.py:197 Fine CLI display preview of evidence text, 150 chars is fine for terminal output. opus
     194              click.echo(f"    [{c.category}/{c.sub_type}] {conf} {c.direction or ''}")
     195              click.echo(f"      {c.claim_text}")
     196              if c.evidence:
>>>  197                  ev = c.evidence[:150] + "..." if len(c.evidence) > 150 else c.evidence
     198                  click.echo(f"      evidence: {ev}")
     199  
     200      succeeded = sum(1 for r in results if not r.error)
lib/semnet/concepts.py:355 Fine CLI display preview of chunk text for human readability. opus
     352              click.echo(f"  ── {topic} ──")
     353              prev_topic = topic
     354  
>>>  355          text_preview = c["text"][:120].replace("\n", " ")
     356          chapter = f" [{c['chapter_title']}]" if c["chapter_title"] else ""
     357          click.echo(f"    {mm_s:02d}:{ss_s:02d}-{mm_e:02d}:{ss_e:02d}{chapter}  {text_preview}")
     358  
lib/semnet/frames.py:82 Fine Logging stderr output from subprocess, 300 chars is appropriate for error diagnostics. opus
      79          return None
      80  
      81      if result.returncode != 0:
>>>   82          stderr = result.stderr[:300]
      83          logger.error(f"yt-dlp -g failed for {video_id}: {stderr}")
      84          return None
      85  
lib/semnet/frames.py:123 Fine Logging stderr output from subprocess, 300 chars is appropriate for error diagnostics. opus
     120          return False
     121  
     122      if result.returncode != 0:
>>>  123          stderr = result.stderr[:300]
     124          logger.error(f"ffmpeg failed: {stderr}")
     125          return False
     126  
lib/sessions/reader.py:133 Fine Truncating a user prompt to 200 chars for a display preview/summary is a legitimate UI preview use case. opus
     130                          if isinstance(content, str) and not rec.get("isMeta"):
     131                              text = content.strip()
     132                              if len(text) > 5 and "<command-name>" not in text:
>>>  133                                  first_prompt = text[:200] + ("\u2026" if len(text) > 200 else "")
     134                  elif not started_at and ts:
     135                      started_at = ts
     136  
lib/sessions/reader.py:373 Fine Truncating user message content for a summary list is a legitimate display preview use case. opus
     370              msg = record.get("message", {})
     371              content = msg.get("content", "")
     372              if isinstance(content, str):
>>>  373                  user_msgs.append(content[:200] + ("\u2026" if len(content) > 200 else ""))
     374              elif isinstance(content, list):
     375                  for block in content:
     376                      if isinstance(block, dict) and block.get("type") == "text":
lib/tempmail/client.py:216 Fine Error message truncation of an HTTP response body in a RuntimeError, 200 chars is fine. opus
     213              json={"address": address, "password": password},
     214          )
     215          if r.status_code not in (200, 201):
>>>  216              raise RuntimeError(f"Failed to create account: {r.status_code} {r.text[:200]}")
     217  
     218          acct_data = r.json()
     219          account_id = acct_data.get("id", "")
lib/vision/analyze.py:141 Fine Logging truncation of raw LLM response on parse failure is standard debug logging practice. opus
     138      try:
     139          return json.loads(text)
     140      except json.JSONDecodeError as e:
>>>  141          logger.warning(f"Failed to parse analysis JSON: {e}\nRaw: {text[:500]}")
     142          return {"score": 0, "subscores": {}, "findings": [], "suggestions": [], "_parse_error": str(e)}
     143  
     144  
lib/vision/capture.py:48 Fine Error message truncation of invalid JSON output from subprocess is standard practice. opus
      45      try:
      46          items = json.loads(result.stdout)
      47      except json.JSONDecodeError as e:
>>>   48          raise RuntimeError(f"appctl returned invalid JSON: {e}\nOutput: {result.stdout[:200]}")
      49  
      50      if not items:
      51          raise RuntimeError(f"appctl returned no screenshots for {app_name!r}")
lib/vision/capture.py:79 Fine Error message truncation of invalid JSON output from subprocess is standard practice. opus
      76      try:
      77          items = json.loads(result.stdout)
      78      except json.JSONDecodeError as e:
>>>   79          raise RuntimeError(f"it2 returned invalid JSON: {e}\nOutput: {result.stdout[:200]}")
      80  
      81      if not items:
      82          raise RuntimeError("it2 returned no screenshots")
lib/vision/capture.py:113 Fine Error message truncation of HTTP response text in exception is standard practice. opus
     110              json={"action": "navigate", "url": url},
     111          )
     112          if nav_resp.status_code != 200:
>>>  113              raise RuntimeError(f"Playwright navigate failed ({nav_resp.status_code}): {nav_resp.text[:200]}")
     114          nav_data = nav_resp.json()
     115          if not nav_data.get("ok"):
     116              raise RuntimeError(f"Playwright navigate error: {nav_data.get('error', 'unknown')}")
lib/vision/capture.py:128 Fine Error message truncation of HTTP response text in exception is standard practice. opus
     125          # Step 3: Screenshot the current page (returns PNG bytes)
     126          shot_resp = await c.get(f"{PLAYWRIGHT_API}/screenshot")
     127          if shot_resp.status_code != 200:
>>>  128              raise RuntimeError(f"Playwright screenshot failed ({shot_resp.status_code}): {shot_resp.text[:200]}")
     129  
     130          save_path = output_path or Path(f"/tmp/playwright_{url.split('/')[-1].replace('.', '_')}_{id(shot_resp)}.png")
     131          save_path.write_bytes(shot_resp.content)
lib/vision/compare.py:67 Fine Logging truncation of raw LLM response on parse failure is standard debug logging practice. opus
      64      try:
      65          return json.loads(text)
      66      except json.JSONDecodeError as e:
>>>   67          logger.warning(f"Failed to parse comparison JSON: {e}\nRaw: {text[:500]}")
      68          return {"before_score": 0, "after_score": 0, "regressions": [], "improvements": [], "summary": "Parse error"}
      69  
      70  
lib/ytdl/core.py:90 Fine Logging truncation of stderr from yt-dlp for error diagnostics, 500 chars is fine. opus
      87          return None
      88  
      89      if result.returncode != 0:
>>>   90          stderr = result.stderr[:500]
      91          # Bot detection is fatal
      92          if "Sign in to confirm" in stderr or "bot" in stderr.lower():
      93              raise BotDetectedError(f"yt-dlp bot detection: {stderr[:150]}")
lib/ytdl/core.py:96 Fine Exception message and warning log truncation of stderr, 100-150 chars is reasonable for error messages. opus
      93              raise BotDetectedError(f"yt-dlp bot detection: {stderr[:150]}")
      94          # Proxy auth errors
      95          if "ip_forbidden" in stderr or "403 Forbidden" in stderr:
>>>   96              raise ProxyAuthError(f"yt-dlp proxy auth error: {stderr[:150]}")
      97          if "Private video" in stderr or "removed" in stderr:
      98              logger.warning(f"yt-dlp: video unavailable — {stderr[:100]}")
      99          elif result.stdout.strip():
projects/supplychain/browse.py:227 Fine CLI table display truncation for readability. opus
     224          # Truncate description for table
     225          desc = r["description"] or ""
     226          if len(desc) > 100:
>>>  227              desc = desc[:100] + "..."
     228  
     229          rows.append([
     230              r["name"],
projects/supplychain/export_universe.py:205 Fine Truncating descriptions to 120 chars for display/export output is a standard UI concern and the limit is reasonable. opus
     202          # Truncate long descriptions
     203          desc = c.description.strip()
     204          if len(desc) > 120:
>>>  205              desc = desc[:117] + "..."
     206          parts.append(desc)
     207  
     208      if c.fan_in > 0:
projects/vic/enrich.py:127 Fine 200 → 200 This truncates LLM response text in an error log message when JSON parsing fails. It's purely for logging/debugging — showing enough of the malformed response to diagnose the issue. 200 chars is appropriate for error logs. opus
     124          result = json.loads(text)
     125      except json.JSONDecodeError as e:
     126          log.error("check_enrich JSON parse failed", idea_id=item_key, error=str(e),
>>>  127                    response=text[:200])
     128          raise RuntimeError(f"check_enrich LLM returned invalid JSON: {e}")
     129  
     130      # Validate content_ok — if LLM says content is bad, log warning
projects/vic/signup.py:501 Fine Debug log preview of email body, 500 chars is appropriate for logging. opus
     498          # Extract password
     499          body = msg.get("text", "") or msg.get("html", "")
     500          log.info(f"Email received: {msg.get('subject', '?')}")
>>>  501          log.debug(f"Email body preview: {body[:500]}")
     502  
     503          password = _extract_password(body)
     504          if not password:
tools/todos/analyze.py:169 Fine CLI summary output truncated to 2000 chars with a truncation notice, reasonable for terminal display. opus
     166      click.echo("=" * 60)
     167      for r in succeeded:
     168          click.echo(f"\n--- {r['variant']} ---")
>>>  169          click.echo(r["content"][:2000])
     170          if len(r["content"]) > 2000:
     171              click.echo("... (truncated)")
     172  
vario/cli_ng.py:261 Fine CLI output preview of candidate content, 500 chars is a reasonable display truncation. opus
     258          click.echo(f"\n--- {len(result.candidates)} candidates ---")
     259          for i, c in enumerate(result.candidates[:5]):
     260              score_str = f" (score={c.score:.1f})" if c.score is not None else ""
>>>  261              click.echo(f"\n[{i+1}]{score_str}: {c.candidate.content[:500]}")
     262  
     263      # Traces
     264      click.echo(f"\n--- Traces ---")
vario/cli_v1.py:578 Fine 200 → none This truncates the eval prompt for logging/storage purposes (log_run), not for LLM input. 200 chars is a reasonable preview for a log entry. No change needed. opus
     575  
     576      # Log experiment
     577      stats_data = {"comparisons": {f"{a}_vs_{b}": c for (a, b), c in comparisons.items()}} if comparisons else None
>>>  578      log_run(eval_prompt[:200], result.config, config_path, result.results,
     579                     f"vario eval -i {input_path}", result.duration_seconds, stats_data)
     580  
     581  
vario/directives.py:64 Fine 500 → 500 This extracts search directives from a prompt — it only needs to detect trigger phrases like 'use search' or 'no search'. These directives appear at the beginning of prompts. 500 chars is sufficient for detecting a search preference keyword, and the function already has a fast-path trigger-word check. opus
      61      try:
      62          extraction_prompt = f"""Does this prompt specify a web search preference?
      63  
>>>   64  Prompt: "{prompt[:500]}"
      65  
      66  Options:
      67  - "native" = wants native/provider search (says "native search", "provider search")
vario/engine/__init__.py:19 Fine This is in a docstring example showing preview output, purely illustrative. opus
      16  
      17      result = await execute(config)
      18      for r in result.results:
>>>   19          print(f"{r.name}: {r.output[:100]}...")
      20  """
      21  
      22  from vario.engine.execute import (
vario/macro_analyze.py:127 Fine Warning log truncation of unparseable JSON text, 100 chars is sufficient for debugging. opus
     124      try:
     125          return json.loads(text[start : end + 1])
     126      except json.JSONDecodeError:
>>>  127          logger.warning(f"Failed to parse JSON from: {text[:100]}...")
     128          return []
     129  
     130  
vario/ng/cli.py:467 Fine CLI display truncation of a problem description, 200 chars is a reasonable preview. opus
     464          # Problem (truncated)
     465          problem = data["problem"]
     466          if len(problem) > 200:
>>>  467              problem = problem[:200] + "..."
     468          click.echo(f"\n  Problem: {problem}")
     469  
     470          # Things
vario/ng/cli.py:479 Fine CLI display truncation of content in a compact list view, 117 chars is reasonable for single-line display. opus
     476                  model = f"  model={t['model']}" if t["model"] else ""
     477                  content = t["content"].replace("\n", " ").strip()
     478                  if len(content) > 120:
>>>  479                      content = content[:117] + "..."
     480                  click.echo(f"  [{t['rank']}]{score}{model}  {content}")
     481  
     482          # Traces
vario/ng/db.py:408 Fine 300 → none This truncates the problem text to generate a 3-6 word topic label using Haiku. For generating a short label, 300 chars of the problem is more than sufficient — the model just needs the gist. The truncation is deliberate cost control on a labeling task where more context adds no value. opus
     405      """Generate a short LLM topic label for a run and save it."""
     406      from lib.llm import call_llm
     407  
>>>  408      truncated = problem[:300]
     409      result = await call_llm(
     410          "haiku",
     411          f"Generate a 3-6 word topic label for this task. "
vario/render.py:50 Fine Truncating LLM response in an error message for debugging when script extraction fails is fine. opus
      47      if script:
      48          return script
      49  
>>>   50      raise ValueError(f"Could not extract bash script from LLM response:\n{full[:300]}")
      51  
      52  
      53  def execute_render_script(script: str, work_dir: Path, timeout: int = 30) -> dict:
vario/render_d2.py:38 Fine Warning log truncation of an error response body, 100 chars is sufficient for diagnostics. opus
      35              )
      36              if response.status_code == 200:
      37                  return response.text
>>>   38              logger.warning(f"Kroki error {response.status_code}: {response.text[:100]}")
      39              return None
      40      except Exception:
      41          logger.warning(f"Kroki request failed")
vario/strategies/blocks/create.py:327 Fine 2000 → 2000 This is a classification call using haiku (a cheap, small model) just to categorize the problem type (e.g., math vs. code). The model only needs enough context to classify, not the full problem. The 2k limit is reasonable for this lightweight classification task and helps keep costs minimal. opus
     324      if approach == "auto":
     325          classify_result = await call_llm(
     326              model="haiku",
>>>  327              prompt=problem_text[:2000],
     328              system=_CLASSIFY_SYSTEM,
     329              temperature=0,
     330              )
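For deliberate cost-control truncations like this one, a module-level constant makes the budget and the intent explicit, which is what the audit's Named Constant verdict asks for at similar sites. A sketch under assumed names (`CLASSIFY_CONTEXT_CHARS` and the wrapper are hypothetical):

```python
# Hypothetical refactor of the magic-number truncation: the limit gets a
# name stating why it exists, so a reader does not mistake it for
# accidental data loss.
CLASSIFY_CONTEXT_CHARS = 2000  # cheap-model classification needs only the gist

def classification_prompt(problem_text: str) -> str:
    """Bound the prompt fed to a lightweight classifier model."""
    return problem_text[:CLASSIFY_CONTEXT_CHARS]
```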
vario/strategies/executor.py:114 Fine Debug/trace summary truncation of arbitrary data to 120 chars, appropriate for concise tracing output. opus
     111                  return f"{n} scored candidate(s), scores {min(scores):.0f}-{max(scores):.0f}"
     112              return f"{n} scored candidate(s)"
     113          return f"list[{n}]"
>>>  114      return str(data)[:120]
     115  
     116  
     117  def _summarize_output(data: Any) -> str:
vario/validate_extraction.py:103 Fine Checking the first 100 chars for '<' to detect HTML is a lightweight heuristic probe, not a truncation of data. opus
     100          )
     101  
     102      content = result.content
>>>  103      text = extract_text_from_html(content) if "<" in content[:100] else content
     104  
     105      # Check length
     106      if len(text) < exp.min_length:
vario/validate_extraction.py:227 Fine Truncating a sample preview to 300 chars for CLI display output is reasonable. opus
     224  
     225          if result.sample:
     226              print(f"\n    --- Sample ---")
>>>  227              print(f"    {result.sample[:300]}...")
     228              print(f"    --- End Sample ---")
     229  
     230      # Summary
watch/live.py:57 Fine Display preview of user messages in a live watch UI, 120 chars is a reasonable preview length. opus
      54      all_msgs.sort(key=lambda x: x[0], reverse=True)
      55      for ts, proj, msg in all_msgs[:30]:
      56          time_str = ts[11:19] if len(ts) >= 19 else ""
>>>   57          preview = escape(msg[:120] + ("…" if len(msg) > 120 else ""))
      58          msg_rows += f"""
      59          <div style="display:flex;gap:10px;padding:4px 0;border-bottom:1px solid #ffffff08">
      60            <span style="color:#888;font-size:12px;white-space:nowrap">{time_str}</span>