Compliance Information

Last Updated: February 14, 2026

Overview

NeuroPathmaker builds AI systems for regulated industries. Compliance is a core architecture consideration, not a post-deployment checkbox. This page outlines our approach to data security and regulatory requirements.

HIPAA Compliance

For healthcare clients, HIPAA requirements are built into the system architecture from design through deployment:

  • Encryption: All data encrypted at rest (AES-256) and in transit (TLS 1.2+).
  • Access Controls: Role-based access controls (RBAC) with principle of least privilege. Multi-factor authentication for administrative access.
  • Audit Logging: Comprehensive audit trails for all data access, modifications, and system events. Logs are immutable and retained per compliance requirements.
  • BAA Execution: Business Associate Agreements executed with all subprocessors handling protected health information (PHI).
  • Data Segregation: Client data is logically or physically segregated. PHI is never commingled across client environments.
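As a simplified illustration, role-based access with least privilege can be enforced in application code by granting each role only the permissions it explicitly needs. The role and permission names below are hypothetical, not an actual NeuroPathmaker configuration:

```python
# Minimal RBAC sketch: each role carries only the permissions it needs
# (principle of least privilege). Role and permission names are illustrative.

ROLE_PERMISSIONS = {
    "clinician": {"phi:read"},
    "billing": {"claims:read", "claims:write"},
    "admin": {"phi:read", "users:manage"},  # admin access also requires MFA
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

A default-deny posture follows naturally: an unknown role or an unlisted permission is rejected rather than allowed.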

Infrastructure Security

  • SOC 2 Compliant Infrastructure: Systems are deployed on SOC 2 Type II certified infrastructure providers.
  • Network Security: Virtual private clouds, security groups, and network ACLs restrict traffic to authorized sources only.
  • Vulnerability Management: Regular security assessments and dependency scanning. Critical vulnerabilities are remediated within 24 hours.
  • Incident Response: Documented incident response procedures with defined escalation paths and notification timelines.

Data Handling

  • Data Residency: Client data is stored within the United States unless otherwise specified.
  • Data Retention: Retention policies are defined per engagement and aligned with industry and regulatory requirements.
  • Data Deletion: Upon engagement termination, client data is securely deleted from all systems within 30 days, with certification provided upon request.
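The 30-day deletion window above can be expressed as a simple deadline check. This is a sketch only; the function names and policy object are illustrative, not a description of our internal tooling:

```python
from datetime import date, timedelta

DELETION_WINDOW_DAYS = 30  # per the data deletion policy above

def deletion_deadline(termination_date: date) -> date:
    """Latest date by which client data must be securely deleted."""
    return termination_date + timedelta(days=DELETION_WINDOW_DAYS)

def is_overdue(termination_date: date, today: date) -> bool:
    """True if the deletion window has elapsed."""
    return today > deletion_deadline(termination_date)
```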

AI-Specific Considerations

  • Model Training Data: Client data is never used to train third-party AI models. API calls to LLM providers are configured with data privacy settings that prevent training data retention.
  • Output Validation: AI-generated outputs in regulated contexts include human review workflows and confidence scoring.
  • Explainability: Systems are designed with audit-friendly architectures that can demonstrate how outputs were generated.
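One way a confidence-scored review gate can work is to auto-release only outputs above a threshold and queue everything else for human review. The threshold and routing labels below are assumptions for illustration, not our production pipeline:

```python
REVIEW_THRESHOLD = 0.90  # illustrative cutoff; tuned per engagement in practice

def route_output(text: str, confidence: float) -> str:
    """Route an AI-generated output: auto-release only above the
    confidence threshold, otherwise queue it for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto-release"
    return "human-review"
```

In regulated contexts the threshold errs conservative, so borderline outputs default to the human-review path.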

Contact

For compliance inquiries or to request additional documentation, please contact us at info@neuropathmaker.ai.