Risk Assessment Policy
Bridgit Platform (askbridgit.ca)
Version 1.0 | Effective: April 29, 2026 | Next Review: October 29, 2026
1. Purpose and Scope
This policy establishes the framework for identifying, assessing, treating, and monitoring information security risks affecting the Bridgit platform, its infrastructure, and the data of its users and their organizations.
Applies to: All personnel with access to Bridgit production systems, source code, or cloud infrastructure.
In scope: The Bridgit SaaS application, all cloud infrastructure (GCP Cloud Run, Cloud SQL, GCS, Redis, Secret Manager), third-party integrations (AI providers, Stripe, Google OAuth, Tavily, Apify), source code, and all data processed through the platform.
Compliance mapping: ISO 27001 A.8.1, A.14, A.17; SOC 2 CC3, CC5, CC8, CC9; GDPR Art. 35.
2. Risk Management Framework
Methodology: Custom, asset-based risk management approach scaled to a small SaaS team operating on managed cloud infrastructure.
Risk management lifecycle:
- Identification: threats and vulnerabilities identified through code review, dependency auditing, infrastructure review, and incident history
- Analysis: risks scored using a 5x5 likelihood x impact matrix
- Evaluation: risks ranked against the risk appetite and prioritized for treatment
- Treatment: controls selected and implemented (code fixes, configuration changes, process updates)
- Monitoring: residual risks tracked and re-evaluated semi-annually or after significant changes
Risk ownership: the Platform Administrator owns all identified risks and is responsible for treatment decisions.
3. Risk Appetite
Data security and privacy: Zero tolerance for cross-tenant data exposure, unauthorized access to personal data, or credential leakage. Any risk in this category requires immediate treatment.
Operational risk: Moderate tolerance. Acceptable: brief Cloud Run scaling delays, non-critical feature outages under 4 hours. Unacceptable: data loss, database corruption, or any outage exceeding 8 hours.
Third-party risk: Moderate tolerance. AI provider outages accepted as transient. AI provider data breaches treated as P2 incidents.
Compliance risk: Zero tolerance for GDPR or PIPEDA violations.
Risk acceptance above the defined appetite requires documented justification and review at the next semi-annual cycle.
4. Asset Inventory and Classification
Asset Categories
Infrastructure:
- Cloud Run services (API, frontend) in GCP northamerica-northeast1
- Cloud SQL PostgreSQL 15 database
- Redis 7 (JWT blacklisting, session management)
- GCS buckets (file storage)
- GCP Secret Manager (API keys, credentials)
Data:
- activity_instances table (JSONB, polymorphic user-generated content)
- users table (email, system_role)
- User Profile activity instances (personal data)
- ai_usage_logs (prompts sent to AI providers)
- File uploads in GCS
- OAuth tokens (AES-256-GCM encrypted)
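For reference, a minimal AES-256-GCM sketch using Node's built-in crypto module. Key sourcing, IV storage, and error handling are simplified; this illustrates the cipher mode named above, not the platform's actual implementation.

```typescript
// Minimal AES-256-GCM illustration (Node built-in crypto only).
// The 32-byte key is assumed to come from GCP Secret Manager.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

export function encryptToken(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authentication tag; tampering fails decryption
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

export function decryptToken(encoded: string, key: Buffer): string {
  const buf = Buffer.from(encoded, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```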
Services:
- AI providers: OpenAI, Anthropic, Google, Cohere
- Stripe (billing)
- Tavily (web search)
- Apify (social media data)
- Google OAuth
- GitHub (source code, CI/CD)
Classification
Critical: Cloud SQL database, GCP Secret Manager, activity_instances table, source code repository.
High: Cloud Run services, OAuth tokens, AI provider API keys, Stripe integration.
Medium: Redis, GCS file storage, ai_usage_logs.
Low: Frontend static assets, development/staging environments, documentation.
Ownership
All assets are owned by the Platform Administrator, who maintains the inventory, ensures classification is current, and approves access to Critical and High assets.
5. Risk Identification
Threat Identification
Sources:
- Post-incident reviews from the breach register
- Problem Report form submissions flagged as security concerns
- npm audit and GitHub Dependabot alerts
- GCP security bulletins
- AI provider security disclosures
Platform-specific threat categories:
- Multi-tenant data isolation failure (cross-org JSONB data leakage)
- API key exposure (git history, logs, error messages)
- AI provider data exposure (user content in prompts)
- Authentication bypass (JWT, OAuth, embed auth)
- RBAC escalation
- Supply chain compromise (malicious npm packages)
- Cloud infrastructure misconfiguration
Vulnerability Assessment
- Dependency scanning: npm audit and GitHub Dependabot. Critical/High findings patched within 7 days, Medium within 30 days (a CI gate sketch follows this list).
- Code review: all changes reviewed before deployment, ESLint enforcement
- Application testing: automated API tests, E2E tests (Playwright), staging validation
- Infrastructure review: GCP IAM and Cloud SQL authorized networks reviewed semi-annually
- Vulnerability scanning frequency: monthly
- Gap: formal penetration testing is not yet conducted; to be addressed as the team scales.
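As an illustration of how the patch-window thresholds above could be enforced in CI, the following sketch parses npm audit --json output and fails the build on Critical/High findings. The script itself is hypothetical, not an existing repository script.

```typescript
// Hypothetical CI gate over `npm audit --json`: block the build on
// Critical/High findings (7-day SLA) and warn on Medium (30-day SLA).
import { execSync } from "node:child_process";

interface AuditReport {
  metadata: {
    vulnerabilities: { critical: number; high: number; moderate: number; low: number };
  };
}

function auditGate(): void {
  let raw: string;
  try {
    raw = execSync("npm audit --json", { encoding: "utf8" });
  } catch (err: any) {
    // npm audit exits non-zero when vulnerabilities exist; the JSON report
    // is still on stdout.
    raw = err.stdout;
  }
  const counts = (JSON.parse(raw) as AuditReport).metadata.vulnerabilities;
  if (counts.critical > 0 || counts.high > 0) {
    console.error(`Blocking: ${counts.critical} critical / ${counts.high} high findings`);
    process.exit(1);
  }
  if (counts.moderate > 0) {
    console.warn(`${counts.moderate} moderate findings: patch within 30 days`);
  }
}

auditGate();
```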
6. Risk Assessment and Scoring
Likelihood Scale (1-5)
1 (Very Low): Less than once in 5 years. Example: GCP region-wide outage exceeding 24 hours.
2 (Low): Once in 2-5 years. Example: npm supply chain compromise of a direct dependency.
3 (Medium): About once per year. Example: API key accidentally logged in error output.
4 (High): Multiple times per year. Example: npm audit High severity finding in a transitive dependency.
5 (Very High): Expected to occur frequently. Example: automated bot scanning of public API endpoints.
Impact Scale (1-5)
1 (Negligible): No data exposure, no disruption. Example: cosmetic UI bug.
2 (Minor): Non-personal data affected, disruption under 1 hour. Example: staging briefly inaccessible.
3 (Moderate): Small-scale personal data exposure, 1-4 hours of disruption, possible regulatory inquiry. Example: one organization's data briefly visible to another.
4 (Major): Multi-user data exposure, 4-24 hours of disruption, regulatory notification required, financial impact of $5K-$50K. Example: database backup exposed publicly.
5 (Catastrophic): Large-scale breach, outage exceeding 24 hours, enforcement action, financial impact exceeding $50K. Example: full database compromise.
Risk Matrix
Score = Likelihood x Impact (1-25)
Low (1-4): Accept with monitoring. Review at next semi-annual cycle.
Medium (5-9): Treatment plan within 90 days.
High (10-15): Treatment plan within 30 days. Requires Platform Administrator attention.
Critical (16-25): Immediate action within 7 days. Escalate per Incident Response Policy.
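The banding is mechanical and can be automated, for example when logging risks in the platform itself. A minimal sketch (the scoreRisk helper is illustrative, not existing platform code):

```typescript
// Computes the 5x5 matrix score and maps it to the treatment bands above.
type RiskBand = "Low" | "Medium" | "High" | "Critical";

function scoreRisk(likelihood: number, impact: number): { score: number; band: RiskBand } {
  if (
    !Number.isInteger(likelihood) || !Number.isInteger(impact) ||
    likelihood < 1 || likelihood > 5 || impact < 1 || impact > 5
  ) {
    throw new RangeError("likelihood and impact must be integers from 1 to 5");
  }
  const score = likelihood * impact; // 1-25
  const band: RiskBand =
    score >= 16 ? "Critical" :
    score >= 10 ? "High" :
    score >= 5 ? "Medium" : "Low";
  return { score, band };
}

// Example: an API key logged in error output (likelihood 3, impact 3)
// scores 9 and falls in the Medium band: treatment plan within 90 days.
console.log(scoreRisk(3, 3)); // { score: 9, band: "Medium" }
```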
7. Risk Treatment and Mitigation
Treatment Options
- Accept: document justification, assign review trigger
- Mitigate: implement controls to reduce likelihood or impact
- Transfer: shift the risk via insurance or to a managed service provider
- Avoid: eliminate the activity causing the risk
Control Selection
Controls are selected based on risk priority, effectiveness, implementation cost, and compliance requirements.
Technical controls: HTTPS, AES-256-GCM encryption, JWT with Redis blacklisting, organization_id isolation, input validation, RBAC middleware, GCP Secret Manager, automated testing and linting.
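To make the organization_id isolation control named above concrete, the sketch below assumes an Express/Sequelize route middleware; the ActivityInstance import and loadInstance name are illustrative, not confirmed platform code.

```typescript
// Hypothetical middleware: every activity_instances lookup is scoped to the
// caller's organization, so records from another tenant are unreachable even
// when a foreign id is supplied.
import type { Request, Response, NextFunction } from "express";
import { ActivityInstance } from "../models"; // assumed Sequelize model

// The auth layer is assumed to attach the verified JWT payload as req.user.
interface AuthedRequest extends Request {
  user: { organizationId: string };
}

export async function loadInstance(req: AuthedRequest, res: Response, next: NextFunction) {
  const instance = await ActivityInstance.findOne({
    where: {
      id: req.params.id,
      organization_id: req.user.organizationId, // tenant scoping, never omitted
    },
  });
  if (!instance) {
    // Return 404 rather than 403 so the response does not confirm that the
    // id exists in another organization.
    return res.status(404).json({ error: "Not found" });
  }
  res.locals.instance = instance;
  return next();
}
```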
Administrative controls: code review, staging validation, deployment guide with mandatory backup, Incident Response Policy, semi-annual policy review.
Residual Risk
After controls are applied, residual risk is re-scored using the same matrix. Residual risk in the High or Critical bands requires documented justification, additional compensating controls, and accelerated review.
8. Fraud Risk
Relevant fraud risks:
- Unauthorized use of AI provider API keys (financial exposure from usage-based billing)
- Unauthorized data export from production database
- Creation of unauthorized admin accounts
Mitigating controls:
- All code changes tracked in git with author attribution
- Production deployments only through GitHub Actions
- GCP IAM limits access to Secret Manager and Cloud SQL
- Stripe handles all payment processing (no platform-level financial data)
- AI usage logged in ai_usage_logs (see the anomaly-check sketch after this list)
- Semi-annual access review of GCP IAM roles
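To illustrate how the ai_usage_logs control could surface unauthorized API key use, the sketch below flags a day whose AI spend exceeds a multiple of the trailing 30-day average. The cost column, timestamps, and helper name are assumptions rather than confirmed schema.

```typescript
// Hypothetical spend-anomaly check over ai_usage_logs. A `cost` column per
// request is assumed; a zero baseline (new deployment) flags any spend.
import { QueryTypes, Sequelize } from "sequelize";

export async function checkAiSpend(db: Sequelize, multiplier = 3): Promise<boolean> {
  const [today] = await db.query<{ total: string }>(
    `SELECT COALESCE(SUM(cost), 0) AS total
       FROM ai_usage_logs
      WHERE created_at >= CURRENT_DATE`,
    { type: QueryTypes.SELECT }
  );
  const [baseline] = await db.query<{ avg: string }>(
    `SELECT COALESCE(AVG(daily_total), 0) AS avg
       FROM (SELECT SUM(cost) AS daily_total
               FROM ai_usage_logs
              WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
                AND created_at < CURRENT_DATE
              GROUP BY created_at::date) d`,
    { type: QueryTypes.SELECT }
  );
  const suspicious = Number(today.total) > multiplier * Number(baseline.avg);
  if (suspicious) {
    // Escalate per the Incident Response Policy rather than auto-disabling keys.
    console.warn(`AI spend today (${today.total}) exceeds ${multiplier}x the 30-day average`);
  }
  return suspicious;
}
```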
No formal whistleblower mechanism exists at the current scale.
9. Change Management
All changes follow:
- Local development in Docker containers
- Automated testing (npm run test:quick) and linting (npm run lint)
- Staging deployment via GitHub Actions to staging.askbridgit.ca
- Staging validation
- Production deployment via pull request merge to main
- Database backup before every production push (scripts/prepare-deployment.sh)
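For illustration, the mandatory backup step amounts to something like the following; the real scripts/prepare-deployment.sh may differ, and the connection string would come from Secret Manager rather than the repository.

```typescript
// Illustrative pre-deployment backup: a pg_dump in custom format, taken
// before any production push so a selective pg_restore is possible.
import { execFileSync } from "node:child_process";

export function backupBeforeDeploy(databaseUrl: string): string {
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  const outfile = `backup-${stamp}.dump`;
  // --format=custom produces an archive restorable table-by-table via pg_restore
  execFileSync("pg_dump", ["--format=custom", `--file=${outfile}`, databaseUrl]);
  return outfile; // uploaded to GCS before the deployment proceeds
}
```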
Change categories:
- Standard (code_only): features, bug fixes, UI changes
- Schema change: requires Sequelize migration, tested against production-clone data
- Emergency: deployed directly with post-deployment review within 48 hours
10. Business Continuity and Disaster Recovery
Recovery objectives:
Cloud Run API: RTO 15 minutes (auto-recovery), RPO 0 (stateless).
Cloud SQL database: RTO 1 hour; RPO 24 hours via automated backups, or the most recent deployment-time manual pg_dump if newer.
GCS file storage: RTO 0, RPO 0 (regionally redundant).
Redis: RTO 5 minutes, RPO session data only (users re-authenticate).
Disaster recovery: restore database from Cloud SQL backup or pg_dump, redeploy application from known-good git commit via GitHub Actions, rotate all secrets if compromise suspected.
BCP/DR testing frequency: semi-annually.
11. Security in Development
Security integrated at each SDLC phase:
- Requirements: security requirements in PRDs for features involving user data or external integrations
- Design: data flow analysis, multi-tenant isolation verification
- Implementation: OWASP Top 10 reference, ESLint, single-provider pattern for SDK imports, secrets in Secret Manager
- Testing: automated API tests, E2E tests, staging validation
- Deployment: GitHub Actions CI/CD, mandatory backup
- Operations: Cloud Run monitoring, Report a Problem form, Incident Response Policy
Environment separation: local development, staging, production. Secrets differ per environment.
12. Data Protection Impact Assessment
DPIA required for:
- New AI integrations transmitting user data to external providers
- New data processing purposes beyond original consent or legal basis
- Large-scale changes to how activity_instances data is queried, exported, or shared
- Cross-border data transfers to vendors outside Canada or the EU
- Automated decision-making with significant effects on users
- New third-party integrations receiving personal data
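Screening is a disjunction: any single trigger above is sufficient to require a DPIA. A minimal sketch of that check (all field names are hypothetical):

```typescript
// Hypothetical DPIA screening checklist mirroring the triggers above.
interface ProposedChange {
  sendsUserDataToNewAiProvider: boolean;
  introducesNewProcessingPurpose: boolean;
  changesActivityInstanceSharing: boolean;
  transfersDataOutsideCanadaOrEu: boolean;
  addsAutomatedDecisionMaking: boolean;
  newThirdPartyReceivesPersonalData: boolean;
}

function dpiaRequired(change: ProposedChange): boolean {
  // Any single trigger requires a full DPIA before the change ships.
  return Object.values(change).some(Boolean);
}
```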
Current DPIA-relevant integrations: OpenAI, Anthropic, Google, Cohere (AI), Stripe (billing), Google OAuth (authentication).
Process: screening, data flow mapping, necessity assessment, risk identification, mitigation, documentation as an activity instance in the platform, and review on material change or semi-annually.
13. Policy Administration
- Version: 1.0
- Effective Date: April 29, 2026
- Last Review: April 29, 2026
- Next Review: October 29, 2026
- Owner: Platform Administrator
- Review Frequency: Semi-annually, or after any P1/P2 incident, or after significant infrastructure change
- Approved By: (signature / name / date)
This policy is maintained alongside the platform source code and is subject to version control. Changes require review and re-approval.