
AI Hygiene Review — Parent Admin Guide

This guide covers the AI Hygiene Review feature for Parent admins (Portfolio Principal in a PE-firm deployment, Group Admin in a conglomerate deployment). It assumes you are already familiar with campaign management — see docs/ADMIN_GUIDE.md for general campaign workflow.


What the AI Hygiene Review is

The AI Hygiene Review is a campaign-driven self-attestation that lets the parent fund verify tenant cybersecurity hygiene around AI features shipped to customers. It anchors on the open-source AI SAFE² Framework v1.0 (Cyber Strategy Institute, dual-licensed MIT + CC-BY-SA) with crosswalks rendered to NIST AI RMF, ISO/IEC 42001, EU AI Act, and OWASP LLM Top 10.

The assessment is scoped to AI in product — customer-facing AI shipped by tenants. Internal employee tooling and AI-assisted developer tooling are out of scope by design; the Q0 gate handles that boundary automatically.


Creating an AI Hygiene Campaign

The AI Hygiene Review is an add-on assessment module that can be layered onto any campaign or run standalone.

Standalone (AI-only campaign)

  1. Open Create Campaign.
  2. In Step 2 (Framework & Control Baseline), click “Running an add-on-only campaign? Skip framework selection.”
  3. Scroll to Add-on Assessment Modules and tick AI Hygiene Review.
  4. Continue through the wizard and submit.

The campaign is created with 30 SAFE² scoring questions and no framework controls. Each assigned tenant gets the Q0 scope gate, the Q1 third-party override, and (on Q0=Yes / Q1=No) the full 30-question questionnaire.
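
The Q0/Q1 gating described above can be sketched as a small routing function. This is an illustration only; the function name and signature are hypothetical, not the platform's actual code:

```python
def route_assignment(q0_ships_ai: bool, q1_has_third_party_audit: bool) -> str:
    """Route a tenant through the AI Hygiene entry gates (hypothetical sketch)."""
    if not q0_ships_ai:
        # Q0 = No: tenant signs the scope-out attestation; score stays null
        return "not_applicable_attested"
    if q1_has_third_party_audit:
        # Q1 = Yes: upload an existing third-party assessment for parent review
        return "submitted_via_third_party_pending"
    # Q0 = Yes / Q1 = No: answer the full 30-question SAFE² questionnaire
    return "full_questionnaire"
```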

Combined with CIS or NIST

  1. Open Create Campaign.
  2. In Step 2, choose your primary framework (CIS / NIST / etc.) as you would today.
  3. Scroll to Add-on Assessment Modules and tick AI Hygiene Review.
  4. Continue through the wizard and submit.

The campaign now has both the framework’s controls AND the 30 SAFE² scoring questions. Tenants see the AI Hygiene flow alongside the framework’s standard assessment.

Required prerequisites

The AI Hygiene add-on requires two fixtures to be loaded on the deployment. If either is missing at campaign-creation time, the API returns 409 with a message naming the missing load command. Tenants reach the AI Hygiene flow at /assessments/<assignment_id>/ai-hygiene from their assignment detail page.
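
A minimal sketch of that prerequisite check, assuming the check runs at campaign creation. The fixture names below are placeholders, not the deployment's real fixture names:

```python
def check_ai_hygiene_prerequisites(loaded_fixtures: set) -> tuple:
    """Return an HTTP-style (status, detail) pair (illustrative sketch).

    The fixture names here are hypothetical placeholders; substitute the
    actual fixtures required by your deployment.
    """
    required = {"ai_hygiene_questions", "ai_hygiene_crosswalks"}  # placeholders
    missing = sorted(required - loaded_fixtures)
    if missing:
        return 409, f"Missing fixture(s): {', '.join(missing)}"
    return 200, "ok"
```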


1. Launching an AI Hygiene Review campaign

Navigate to Compliance > Campaigns, then click New Campaign.

In the campaign wizard:

  1. Name and description — give the campaign a name (e.g., “AI Hygiene — Q2 2026”) and optional description.
  2. Framework — select the AI Hygiene Review preset from the framework dropdown. The preset pre-populates the question bank (30 SAFE² questions across 5 pillars), the AI Hygiene Officer attestation requirement (CampaignPolicyAttestation.policy_type = 'ai_hygiene_officer', backend/apps/assessments/models.py:2611), and the default scoring weight profile.
  3. Control scope — for AI Hygiene campaigns the scope is automatically set to ai_in_product (backend/apps/assessments/services/ai_hygiene_constants.py:18). No manual scope adjustment is needed.
  4. Scoring config — the default pillar weights are loaded from AI_HYGIENE_DEFAULT_WEIGHTS (ai_hygiene_constants.py:31–37): audit_inventory 0.25, sanitize_isolate 0.20, fail_safe_recovery 0.15, engage_monitor 0.20, evolve_educate 0.20. Leave these at their defaults unless you have a fund-specific rationale — audit is weighted highest because procurement DDQs concentrate on that pillar.
  5. Document requirements — optional. You may require tenants to upload an AI Bill of Materials, model cards, or red-team reports as supplemental evidence. These are separate from Q1 third-party override documents.
  6. Assignments — select which tenants to assign. A tenant with subsidiary_oversight_enabled = True in your family is visible here when your role passes the IsSubsidiaryOverseerOrPortfolioAdmin permission check (backend/apps/core/permissions.py:135–143).
  7. Activate — click Create & Activate to notify assigned tenants immediately, or Save Draft to review before activating.

2. Reading the rollup

Once tenants start responding, navigate to Compliance > Campaigns, open the campaign, and select the AI Hygiene tab (or the Scores tab, depending on your deployment version).

The rollup table has one row per assigned tenant. Columns:

| Column | What it shows |
| --- | --- |
| Tenant name | Rendered via the terminology dictionary: “Portfolio Company” (PE-firm deployment) or “Subsidiary” (conglomerate deployment). |
| AI Hygiene Score | 0–100 overall score; empty until the tenant submits (status assigned or in_progress); null when the tenant took the Q0 scope-out path. |
| Pillar scores | Per-pillar breakdown (each 0–100), shown on hover or in the detail pane. |
| Status badge | Current CampaignAssignment status (see Section 4 below). |
| Provenance | One of Self-attested, Audited externally — accepted, or AI Out-of-Scope. Sourced from AIHygieneListItemSerializer.provenance_label (backend/apps/assessments/serializers/ai_hygiene_serializers.py:82–87). |
| Evidence-backed | Icon shown when the tenant attached at least one optional evidence file to a questionnaire response, or when the status is submitted_via_third_party_accepted. |

The table is sortable by any column. Click a column header to sort; click again to reverse. Use the status filter to show only a specific status (e.g., show all submitted_via_third_party_pending to work the review queue).


3. Reviewing third-party uploads (the Q1 path)

When a tenant takes the Q1 override path — uploading an existing third-party AI governance assessment — their assignment moves to submitted_via_third_party_pending. A badge appears on the Third-Party Review tab of the campaign.

Finding the review queue

Open the campaign and select Third-Party Review. Each row in the queue shows the tenant name, submission time, document filename, assessment type, and SHA-256 hash. The hash matches what was recorded at upload time and is re-verified on every download — chain of custody is intact even if the file is later re-retrieved.

What qualifies

The acceptable third-party assessment types are defined in THIRD_PARTY_ASSESSMENT_TYPES (ai_hygiene_constants.py:61–69):

| Enum value | Assessment type |
| --- | --- |
| iso_42001_cert | ISO/IEC 42001 Certification |
| hitrust_ai_risk_mgmt | HITRUST AI Risk Management Assessment |
| hitrust_ai_security_cert | HITRUST AI Security Certification |
| nist_ai_rmf_audit | NIST AI RMF Audit (Big4 or accredited auditor) |
| big4_ai_audit | Big4 AI Audit Report |
| ai_red_team_report | Independent AI Red-Team Report (last 12 months) |
| other | Other — tenant must provide description |

Red-team reports have a 12-month recency constraint (enforced by Q1 question text). For all other types the constraint is scope and coverage: the assessment must cover the tenant’s customer-facing AI practices, not just internal AI governance.
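
The 12-month recency window for red-team reports can be sketched as a date check. This is illustrative only; per the text above, the platform enforces recency via the Q1 question text rather than in code:

```python
from datetime import date, timedelta

def red_team_report_in_window(report_date: date, today: date) -> bool:
    """True when an ai_red_team_report falls within the last 12 months
    (approximated here as 365 days; an illustrative sketch, not platform code)."""
    return report_date >= today - timedelta(days=365)
```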

Accepting a submission

Click Review on the row. A PDF preview pane opens alongside the metadata (type, hash, submitter, submission timestamp). When you are satisfied the document covers the scope:

  1. Click Accept.
  2. The assignment transitions to submitted_via_third_party_accepted and the AI Hygiene Score for that tenant is set to 100 (this is the terminal accepted score — see Section 5 for how the score is computed for questionnaire paths).
  3. The provenance label changes to Audited externally — accepted.
  4. The tenant is notified.

Rejecting a submission

  1. Click Reject.
  2. Enter a rejection_reason (required — the serializer enforces this: ThirdPartyReviewDecisionSerializer.validate(), ai_hygiene_serializers.py:111–115).
  3. The assignment transitions to submitted_via_third_party_rejected, and then immediately back to in_progress so the tenant can either upload a different document or switch to the full questionnaire.
  4. Your rejection reason is visible to the tenant on their assessment dashboard.
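
The required-reason rule can be expressed as a small validation function. This mirrors the behavior attributed above to ThirdPartyReviewDecisionSerializer.validate(), but the function below is a simplified standalone sketch, not the serializer itself:

```python
def validate_review_decision(decision: str, rejection_reason: str = "") -> dict:
    """Accept or reject a third-party submission; rejections must carry a reason."""
    if decision == "reject" and not rejection_reason.strip():
        raise ValueError("rejection_reason is required when rejecting a submission")
    return {"decision": decision, "rejection_reason": rejection_reason}
```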

4. Assignment status reference

All statuses are defined in CampaignAssignment.STATUS_CHOICES (backend/apps/assessments/models.py:1300–1311). The four statuses added for AI Hygiene Review are from NEW_ASSIGNMENT_STATES (ai_hygiene_constants.py:86–91).

| Status | Meaning |
| --- | --- |
| assigned | Campaign has been activated and the tenant notified. No action taken yet. |
| in_progress | Tenant has opened the assessment and answered at least one question, OR has been sent back from a rejected third-party submission. |
| submitted | Tenant submitted the full questionnaire. Awaiting your review if your campaign requires review; auto-completes if not. |
| under_review | You have opened the submission for review. |
| completed | You approved the submission. Score is final. |
| overdue | Campaign due date passed before the tenant submitted. |
| not_applicable_attested | Tenant answered Q0 = No (does not ship AI features). Signed attestation text is stored. Score is null; this tenant is out of scope. |
| submitted_via_third_party_pending | Tenant uploaded a third-party assessment via Q1. Awaiting your accept/reject decision in the review queue. |
| submitted_via_third_party_accepted | You accepted the third-party submission. Score is 100 and provenance is Audited externally — accepted. Terminal state. |
| submitted_via_third_party_rejected | You rejected the third-party submission. Assignment returns to in_progress. |
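
The lifecycle in the table can be summarized as a transition map. This is assembled from the descriptions above as an illustration; the authoritative state machine is CampaignAssignment in models.py:

```python
# Which statuses each status may move to (sketch derived from the status table).
ALLOWED_TRANSITIONS = {
    "assigned": {"in_progress", "not_applicable_attested", "overdue"},
    "in_progress": {"submitted", "submitted_via_third_party_pending", "overdue"},
    "submitted": {"under_review", "completed"},
    "under_review": {"completed"},
    "submitted_via_third_party_pending": {
        "submitted_via_third_party_accepted",  # terminal; score set to 100
        "submitted_via_third_party_rejected",
    },
    "submitted_via_third_party_rejected": {"in_progress"},
}
```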

5. How the AI Hygiene Score is computed

Note: The scoring service is backend/apps/assessments/services/ai_hygiene_score.py (ships with the p1-services branch). The algorithm below documents what that service implements.

Per-pillar score

For each of the five SAFE² pillars, collect all question responses from the submission where the response is not N/A. Map:

| Response | Score |
| --- | --- |
| Yes | 1.0 |
| Partial | 0.5 |
| No | 0.0 |
| N/A | Excluded from the denominator |

The pillar score is the arithmetic mean of the in-scope responses, multiplied by 100. If every question in a pillar is answered N/A, that pillar is excluded from the overall score calculation (it does not count as zero).
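
The per-pillar rule above can be sketched in a few lines. This is an illustration of the documented algorithm, not the ai_hygiene_score.py service code itself:

```python
RESPONSE_SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def pillar_score(responses):
    """Mean of in-scope (non-N/A) responses, scaled to 0-100.

    Returns None when every response is N/A so the pillar can be excluded
    from the overall score instead of counting as zero.
    """
    in_scope = [RESPONSE_SCORES[r] for r in responses if r != "na"]
    if not in_scope:
        return None
    return 100.0 * sum(in_scope) / len(in_scope)
```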

Overall score

The five pillar scores are combined using the weight profile from AI_HYGIENE_DEFAULT_WEIGHTS (ai_hygiene_constants.py:31–37):

audit_inventory:    0.25
sanitize_isolate:   0.20
fail_safe_recovery: 0.15
engage_monitor:     0.20
evolve_educate:     0.20

When one or more pillars are excluded (all-N/A), the remaining pillar weights are renormalized so they sum to 1.0 before applying. The result is an overall score between 0 and 100.
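
Putting the weights and the renormalization rule together, the overall computation can be sketched as follows (again an illustration of the documented algorithm, not the service code):

```python
AI_HYGIENE_DEFAULT_WEIGHTS = {
    "audit_inventory": 0.25,
    "sanitize_isolate": 0.20,
    "fail_safe_recovery": 0.15,
    "engage_monitor": 0.20,
    "evolve_educate": 0.20,
}

def overall_score(pillar_scores, weights=AI_HYGIENE_DEFAULT_WEIGHTS):
    """Weighted mean of the non-None pillar scores; excluded (all-N/A)
    pillars drop out and the remaining weights are renormalized to 1.0."""
    in_scope = {p: s for p, s in pillar_scores.items() if s is not None}
    if not in_scope:
        return None
    total_weight = sum(weights[p] for p in in_scope)
    return sum(s * weights[p] for p, s in in_scope.items()) / total_weight
```

For example, with fail_safe_recovery excluded the remaining weights (0.85 total) are scaled up before combining the other four pillar scores.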

Special cases

Two paths bypass the weighted computation entirely: a Q0 scope-out (not_applicable_attested) leaves the score null, and an accepted third-party submission (submitted_via_third_party_accepted) fixes the score at 100.

Where this score shows up

The AI Hygiene Score appears in the campaign rollup table (Section 2) and in each tenant's per-pillar detail pane.

Integration of the AI Hygiene Score into the existing Exit Readiness Score (backend/apps/core/services/exit_readiness.py:182) is planned for Phase 3 of the AI Hygiene roadmap, when the apps/ai_governance/ module ships per-tenant AIInventoryItem rows.


6. Subsidiary-overseer access

If your organization uses subsidiary-oversight (a parent admin with Tenant.subsidiary_oversight_enabled = True), you can see AI Hygiene assessments across your entire family of tenants in the same rollup view.

The permission gate on AI Hygiene endpoints is IsSubsidiaryOverseerOrPortfolioAdmin (backend/apps/core/permissions.py:135–143), which accepts both Portfolio Principals and subsidiary-overseer admins. Cross-tenant queries use User.get_visible_tenants() (backend/apps/core/models.py:303–347).

To enable subsidiary oversight for a parent-tenant, navigate to Administration > Organization, find the tenant, and toggle subsidiary_oversight_enabled. See docs/KNOWN_BUGS.md for current limitations with the subsidiary-overseer create path (the read-side rollup works; create operations have a deferred bug).


Framework attribution

The AI Hygiene Review is anchored on the AI SAFE² Framework v1.0, an open-source taxonomy by Cyber Strategy Institute (https://github.com/CyberStrategyInstitute/ai-safe2-framework), dual-licensed MIT (code) + CC-BY-SA (taxonomy). Attribution is included in every fixture header per the CC-BY-SA license requirement.