# VC Intel Restructure Design

Date: 2026-03-06
Reviewed: vario maxthink (opus, gpt-pro, grok, gemini) — all feedback incorporated

## Problem

VC firm data lives in a separate `vc_firms` table inside `intel/people/data/people.db` (4,481 rows, all score=0). This conflicts with the unified company registry design (`intel/companies/data/companies.db` with tag-based universes). Scoring logic exists only in the standalone `build_vc_list.py` script and was never applied to the DB. The `projects/vc_intel/` folder is a strategic placeholder that should dissolve into `intel/`.

## Schema Changes

### New tables in `companies.db`

```sql
-- Canonical VC firm profile (one per company)
CREATE TABLE IF NOT EXISTS vc_firm_profile (
    company_id    INTEGER PRIMARY KEY REFERENCES companies(id),
    website       TEXT,
    city          TEXT,
    state         TEXT,
    focus_tags    TEXT,          -- comma-separated: ai,software,fintech
    stages        TEXT,          -- comma-separated: Seed,A,B,Growth
    israel_ops    INTEGER DEFAULT 0,
    fund_size_usd REAL,
    deal_count    INTEGER DEFAULT 0,
    crd_number    TEXT,
    linkedin_url  TEXT,
    has_midas     INTEGER DEFAULT 0,  -- denormalized from vc_people
    source        TEXT,          -- primary/best source
    score         INTEGER,
    score_breakdown TEXT,        -- JSON
    score_version TEXT,
    scored_at     TEXT,
    created_at    TEXT DEFAULT (datetime('now')),
    updated_at    TEXT DEFAULT (datetime('now'))
);

-- Per-source raw observations (provenance)
CREATE TABLE IF NOT EXISTS vc_firm_observations (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    company_id    INTEGER NOT NULL REFERENCES companies(id),
    source        TEXT NOT NULL,    -- forbes_midas, sec_adv, nvca, sec_formd, signal_nfx, manual_seed
    source_id     TEXT,
    raw_data      TEXT,             -- JSON blob from original source
    observed_at   TEXT DEFAULT (datetime('now')),
    UNIQUE(company_id, source, source_id)
);
```

### Registry API changes

Fix `ensure_company()` in `registry.py`:
- Add `is_public: int | None = None` parameter (currently hardcoded to `1`)
- Add `website: str | None = None` parameter
- Use the parameter value on INSERT; on UPDATE, `COALESCE` with the existing value so a `NULL` parameter never clobbers stored data

### `vc_people` schema change in `people.db`

- Add `company_id INTEGER` column — stores the `companies.db` company ID (logical FK, not enforced cross-DB)
- Resolve at write time: after `ensure_company()` returns `company_id`, write it to `vc_people`
- Keep `firm_name` as denormalized display field
- Drop `firm_id` column (currently FK to `vc_firms` which is being removed)
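The column changes above can run as an idempotent in-place migration; one caveat is that `ALTER TABLE ... DROP COLUMN` requires SQLite 3.35+, so older runtimes need a table rebuild instead. A sketch (column names as assumed in this design):

```python
import sqlite3

def migrate_vc_people_schema(conn: sqlite3.Connection) -> None:
    """Add company_id and drop firm_id on vc_people (idempotent sketch)."""
    cols = {r[1] for r in conn.execute("PRAGMA table_info(vc_people)")}
    if "company_id" not in cols:
        conn.execute("ALTER TABLE vc_people ADD COLUMN company_id INTEGER")
    if "firm_id" in cols and sqlite3.sqlite_version_info >= (3, 35):
        # older SQLite: rebuild the table (CREATE new, copy, rename) instead
        conn.execute("ALTER TABLE vc_people DROP COLUMN firm_id")
```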

## Data Migration

### Phase 1: Migrate firms

One-time script `intel/companies/migrate_vc_firms.py`:

1. Read all `vc_firms` rows from `people.db`
2. Deduplicate by `name_normalized` (run `SELECT COUNT(DISTINCT name_normalized) FROM vc_firms` first to know actual unique count)
3. For each unique firm:
   - `ensure_company(name=..., country=..., is_public=0)` — creates company row
   - `tag_company(company_id, "vc-firm")` — membership tag only (no metadata blob)
   - INSERT into `vc_firm_profile` with merged canonical fields
   - INSERT into `vc_firm_observations` for each source record (preserves `raw_data`)
4. Dry-run name collision check: verify no existing public company gets accidentally matched
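The core of the migration loop, sketched with the registry calls inlined (instead of `ensure_company()`/`tag_company()`) so it is self-contained; `vc_firms` column names are assumed from this design:

```python
import sqlite3
from collections import defaultdict

def migrate_firms(people: sqlite3.Connection, companies: sqlite3.Connection) -> int:
    """Move vc_firms rows into companies.db; returns unique firms migrated."""
    people.row_factory = sqlite3.Row
    groups = defaultdict(list)
    for row in people.execute("SELECT * FROM vc_firms"):
        groups[row["name_normalized"]].append(row)

    for rows in groups.values():
        cur = companies.execute(
            "INSERT INTO companies (name, is_public) VALUES (?, 0)",
            (rows[0]["name"],),
        )
        cid = cur.lastrowid
        companies.execute(
            "INSERT INTO company_tags (company_id, tag) VALUES (?, 'vc-firm')",
            (cid,),
        )
        # canonical fields would be merged per the precedence rules below
        companies.execute(
            "INSERT INTO vc_firm_profile (company_id, website) VALUES (?, ?)",
            (cid, rows[0]["website"]),
        )
        # one observation per source row preserves full provenance
        for r in rows:
            companies.execute(
                "INSERT OR IGNORE INTO vc_firm_observations "
                "(company_id, source, source_id, raw_data) VALUES (?, ?, ?, ?)",
                (cid, r["source"], r["source_id"], r["raw_data"]),
            )
    return len(groups)
```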

### Merge precedence (per field, when multiple sources report a value)

| Field          | Rule                                        |
|----------------|---------------------------------------------|
| `website`      | manual_seed > nvca > signal_nfx > other     |
| `city`, `state`| manual_seed > forbes_midas > sec_adv > other|
| `focus_tags`   | manual_seed > union of all sources          |
| `stages`       | manual_seed > union of all sources          |
| `fund_size_usd`| max across sources (largest fund vehicle)   |
| `deal_count`   | max across sources                          |
| `israel_ops`   | OR (any source says true)                   |
| `crd_number`   | first non-empty                             |
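The table above maps to a handful of merge helpers; a sketch (observations passed as `(source, value)` pairs):

```python
# lower index = higher precedence; unknown sources rank last ("other")
SOURCE_RANK = {
    "website": ["manual_seed", "nvca", "signal_nfx"],
    "city": ["manual_seed", "forbes_midas", "sec_adv"],
    "state": ["manual_seed", "forbes_midas", "sec_adv"],
}

def pick_by_precedence(field, obs):
    order = SOURCE_RANK[field]
    non_empty = [(s, v) for s, v in obs if v not in (None, "")]
    non_empty.sort(key=lambda sv: order.index(sv[0]) if sv[0] in order else len(order))
    return non_empty[0][1] if non_empty else None

def merge_tags(obs):
    """manual_seed wins outright; otherwise union across sources."""
    manual = [v for s, v in obs if s == "manual_seed" and v]
    if manual:
        return manual[0]
    tags = set()
    for _, v in obs:
        if v:
            tags.update(v.split(","))
    return ",".join(sorted(tags))

def merge_max(obs):
    vals = [v for _, v in obs if v is not None]
    return max(vals) if vals else None

def merge_or(obs):
    return int(any(v for _, v in obs))

def merge_first_nonempty(obs):
    for _, v in obs:
        if v not in (None, ""):
            return v
    return None
```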

### Phase 2: Migrate seed firms

65 curated firms from `build_vc_list.py::SEED_FIRMS`:
- Extract to `intel/companies/data/vc_seed_firms.json` (persistent, version-controlled)
- Migrate with `source: "manual_seed"` in observations
- Manual seed data takes precedence over automated sources (see merge rules above)
- Handler upsert logic must not overwrite manual_seed fields
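One way to express the no-overwrite guard, assuming the profile's `source` column records `manual_seed` for seeded firms (a field-level guard would instead consult `vc_firm_observations` per field):

```python
import sqlite3

def apply_fetched_website(conn: sqlite3.Connection, company_id: int, website: str) -> None:
    """Automated fetches update the profile only when it is not manually seeded."""
    conn.execute(
        "UPDATE vc_firm_profile SET website = ? "
        "WHERE company_id = ? AND COALESCE(source, '') != 'manual_seed'",
        (website, company_id),
    )
```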

### Phase 3: Migrate vc_people links

For the 120 `vc_people` rows:
- Match `firm_name` to migrated companies (normalize + verify)
- Write resolved `company_id` to new column
- Log non-exact matches for manual review
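A sketch of the match-and-log logic. The normalization here is a stand-in and must agree with whatever produced `name_normalized`; the suffix-stripping fallback is a hypothetical heuristic whose hits are exactly the "non-exact matches" to log for review:

```python
import re

_SUFFIXES = re.compile(r"\b(ventures?|capital|partners|vc|llc|lp|management)\b")

def normalize_firm_name(name: str) -> str:
    """Placeholder normalization; must match the existing name_normalized rule."""
    name = re.sub(r"[^a-z0-9 ]", " ", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def resolve_company_id(firm_name, index):
    """index: {normalized name: company_id}. Returns (company_id, exact)."""
    key = normalize_firm_name(firm_name)
    if key in index:
        return index[key], True
    # fallback: compare with common VC suffixes stripped; log these matches
    stripped = re.sub(r"\s+", " ", _SUFFIXES.sub(" ", key)).strip()
    for k, cid in index.items():
        if re.sub(r"\s+", " ", _SUFFIXES.sub(" ", k)).strip() == stripped:
            return cid, False
    return None, False
```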

### Phase 4: Archive and cutover

- RENAME `vc_firms` to `vc_firms_archived` (keep 30 days, then drop)
- Remove `CREATE TABLE vc_firms` from handler's `_init_schema()`
- Validation: count firms per source before/after, spot-check 50 random firms
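The before/after validation reduces to comparing per-source counts of `vc_firms_archived` against `vc_firm_observations`; a tiny helper (table name is an internal constant, not user input):

```python
import sqlite3

def source_counts(conn: sqlite3.Connection, table: str) -> dict:
    """Per-source row counts, for before/after migration comparison."""
    return dict(conn.execute(f"SELECT source, COUNT(*) FROM {table} GROUP BY source"))
```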

## Job Handler Update

`jobs/handlers/vc_data.py`:

- Fix `DATA_DIR` path (currently `projects/people/data`)
- Firms: write via `ensure_company()` + `tag_company("vc-firm")` + INSERT into `vc_firm_profile` + `vc_firm_observations`
- People: keep writing `vc_people` to `people.db`, but resolve and store `company_id` at write time
- Import from `intel.companies.registry` for firm operations
- Remove `vc_firms` and `vc_people` DDL from `_init_schema()` (firms move to the companies.db schema; the people schema stays but is updated)
- Fix `process_stage` dispatcher — currently broken (stage functions not registered)
- Wire `enrich` stage to call `vc_scoring.score_firm()`, store score in `vc_firm_profile`
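The enrich wiring can stay thin if `score_firm` takes a profile dict and returns `(score, breakdown)` as specified in the Scoring Logic section; a sketch (column names from the schema above):

```python
import json
import sqlite3

def run_enrich_stage(conn: sqlite3.Connection, score_firm) -> int:
    """Score every vc_firm_profile row and persist the result; returns count."""
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT * FROM vc_firm_profile").fetchall()
    for row in rows:
        score, breakdown = score_firm(dict(row))
        conn.execute(
            "UPDATE vc_firm_profile SET score = ?, score_breakdown = ?, "
            "scored_at = datetime('now') WHERE company_id = ?",
            (score, json.dumps(breakdown), row["company_id"]),
        )
    return len(rows)
```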

## Scoring Logic

New file: `intel/companies/vc_scoring.py`

- Move `build_vc_list.py::score_firm()` logic here
- Input: `vc_firm_profile` row → output: `(score, breakdown)`
- `has_midas` is denormalized into profile (no cross-DB query at score time)
- Scoring dimensions: US (+10), Silicon Valley (+15), LA (+10), AI focus (+20), Software focus (+15), Series A (+10), Israel ops (+5), Fund size >$500M/1B (+10/15), Prominence by deal count (+5-20), Midas presence (+10)
- Add `score_version` (hash of scoring function) and `scored_at` for audit
- Callable from: job handler enrich stage, standalone re-score CLI, `intel/people/scoring.py`
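A sketch of `vc_scoring.py` with the dimensions above. Assumptions: `country` is joined in from the companies table (the profile itself has no country column), the Silicon Valley city list and the deal-count prominence tiers are placeholders pending the real `build_vc_list.py` logic:

```python
import hashlib
import inspect

def score_firm(profile: dict) -> tuple[int, dict]:
    """Score one vc_firm_profile row; returns (score, breakdown)."""
    b = {}
    city = (profile.get("city") or "").lower()
    if profile.get("country") == "US":  # joined from companies table
        b["us"] = 10
    if city in {"san francisco", "palo alto", "menlo park"}:  # placeholder list
        b["silicon_valley"] = 15
    elif city == "los angeles":
        b["la"] = 10
    focus = set((profile.get("focus_tags") or "").split(","))
    if "ai" in focus:
        b["ai_focus"] = 20
    if "software" in focus:
        b["software_focus"] = 15
    if "A" in (profile.get("stages") or "").split(","):
        b["series_a"] = 10
    if profile.get("israel_ops"):
        b["israel_ops"] = 5
    fund = profile.get("fund_size_usd") or 0
    if fund > 1e9:
        b["fund_size"] = 15
    elif fund > 500e6:
        b["fund_size"] = 10
    deals = profile.get("deal_count") or 0
    if deals:
        b["prominence"] = min(20, 5 + deals // 25 * 5)  # placeholder tiering
    if profile.get("has_midas"):
        b["midas"] = 10
    return sum(b.values()), b

try:
    # score_version = hash of the scoring function source, per the audit note
    SCORE_VERSION = hashlib.sha256(
        inspect.getsource(score_firm).encode()
    ).hexdigest()[:12]
except OSError:  # source unavailable (REPL/frozen); fall back to a manual tag
    SCORE_VERSION = "v1"
```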

## File Disposition

| File                                     | Action                                                           |
|------------------------------------------|------------------------------------------------------------------|
| `intel/people/build_vc_list.py`          | Delete — seeds to JSON, scoring to vc_scoring.py                 |
| `intel/people/data/vc_firms_scored.csv`  | Delete — replaced by registry + vc_firm_profile query            |
| `intel/people/data/vc_firms_scored.json` | Delete — replaced by registry + vc_firm_profile query            |
| `intel/people/scoring.py`                | Update — read VC firms from companies.db vc_firm_profile         |
| `intel/people/tests/test_scoring.py`     | Update — adjust imports and data source                          |
| `jobs/handlers/vc_data.py`               | Rewrite — companies registry + vc_firm_profile + observations    |
| `projects/vc_intel/README.md`            | Move strategic content into intel/README.md, delete folder       |

## Documentation

- `intel/README.md` — absorb strategic vision from `projects/vc_intel/README.md`
- `intel/companies/CLAUDE.md` — add `vc-firm` tag and vc_firm_profile/observations docs
- `intel/people/README.md` — stays as founder evaluator spec, references companies registry for VC data

## Consumers to Update

- `intel/people/scoring.py` — `_load_vc_firms()` reads CSV → switch to `vc_firm_profile` table query
- `intel/people/tests/test_scoring.py` — fixtures and imports
- `intel/people/network_browser.py` — if it references vc_firms
- Any other imports of `build_vc_list` (grep to confirm)

## Risks and Mitigations

| Risk                                    | Mitigation                                               |
|-----------------------------------------|----------------------------------------------------------|
| Name collisions with existing companies | Dry-run collision check before migration                 |
| Data loss from dedup/merge              | Archive table (30 days), per-source observations table   |
| Handler recreates old schema            | Remove DDL from `_init_schema()` as part of cutover      |
| Manual seed data clobbered by fetches   | Source precedence rules, `source != 'manual_seed'` guard |
| Circular dependency (companies↔people)  | One-way: people→companies only. Midas denormalized.      |
| Cross-DB FK integrity                   | Resolve company_id at write time, store as integer       |
