Introduction:
Looom AI leverages artificial intelligence and advanced analytics to provide actionable insights for risk management, audit, and compliance. This page explains how client and user data is processed, how AI models operate, and the safeguards implemented to ensure privacy, security, and compliance.
Types of Data Processed:
- Personal Data: Names, email addresses, contact information, or other identifiers necessary for platform access and communication.
- Operational Client Data: Financial transactions, journal entries, audit records, and other business-specific information required for risk analysis.
- System & Usage Data: IP addresses, device information, platform usage logs, and session activity used for monitoring and system improvements.
Purpose of Data Processing:
- AI-driven risk detection, anomaly analysis, and predictive scoring.
- Automated generation of audit reports, client communications, and email drafts.
- Dashboard visualizations, alerts, and notifications for risk management and compliance monitoring.
- Continuous platform improvements and operational monitoring to ensure reliability and integrity.
How AI Processes Data:
- AI Model Integration: Looom AI uses the OpenAI API or the Azure OpenAI Service for natural language processing and analytical insights.
- Data Handling During AI Processing:
- Only anonymized or necessary data is sent to AI models.
- AI requests are transmitted over secure HTTPS/TLS connections.
- Responses are returned to the platform for display, reporting, or further processing.
- No Retention for Training:
- By default, AI input data is not stored or reused for model training.
- Temporary retention may occur only for abuse prevention and monitoring, with strict deletion policies in place.
- Enterprise Azure OpenAI deployments ensure zero data retention for full compliance with POPIA/GDPR.
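The data-handling steps above can be sketched as follows. This is an illustrative outline only, not Looom AI's actual implementation: the field names, the `anonymize` helper, and the payload shape are assumptions made for the example.

```python
import hashlib

# Illustrative assumption: which fields count as sensitive identifiers.
SENSITIVE_FIELDS = {"client_name", "email"}

def anonymize(record):
    """Replace sensitive identifiers with irreversible SHA-256 digests."""
    return {
        k: hashlib.sha256(str(v).encode()).hexdigest()[:12]
        if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

def build_ai_request(record):
    """Assemble the payload sent to the AI service over HTTPS/TLS.

    Only anonymized or task-necessary data leaves the platform.
    """
    return {"model": "gpt-4o", "input": anonymize(record)}

payload = build_ai_request({
    "client_name": "Acme Ltd",
    "email": "cfo@acme.example",
    "journal_entry": {"account": "6100", "debit": 1200.00},
})
```

The response returned by the AI service would then be displayed, attached to a report, or passed on for further processing, as described above.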
Human-in-the-Loop Verification:
- All AI-generated insights, risk assessments, and email drafts undergo human review before actions are taken or client communications are sent.
- This ensures accuracy, contextual relevance, and regulatory compliance.
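In outline, the review gate might look like the sketch below; the function name, field names, and status values are hypothetical, not Looom AI's actual workflow code.

```python
def release_for_sending(draft, reviewed_by=None):
    """Gate an AI-generated draft behind a named human reviewer.

    Hypothetical sketch: a draft stays in 'pending_review' until a
    reviewer approves it; only approved drafts may be sent to clients.
    """
    if reviewed_by is None:
        return {**draft, "status": "pending_review"}
    return {**draft, "status": "approved", "reviewed_by": reviewed_by}
```

Recording the reviewer's identity alongside the approval also supports the audit-logging requirements described later in this document.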
Data Minimization & Anonymization:
- The platform applies data minimization principles:
- Only essential information is processed for each AI task.
- Sensitive identifiers are anonymized where feasible.
- Users can configure which data fields are processed by AI to align with internal policies and client agreements.
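A configurable field allow-list is one way to sketch this; the field names and the default allow-list below are illustrative assumptions, not Looom AI's real configuration schema.

```python
# Hypothetical per-client configuration: only fields on the allow-list
# are ever passed to an AI task (data minimization).
AI_ALLOWED_FIELDS = {"journal_entry", "transaction_date", "amount"}

def minimize(record, allowed=AI_ALLOWED_FIELDS):
    """Drop every field not explicitly approved for AI processing."""
    return {k: v for k, v in record.items() if k in allowed}
```

Under this design, adjusting the allow-list is how an administrator would align AI processing with internal policies and client agreements.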
Auditability & Logging:
- Every data processing event, AI query, and response is logged in the platform.
- Logs include:
- Timestamp of the request
- Risk or data context
- User initiating the action
- AI response delivered
- These logs support SOC 2 audits, internal compliance checks, and regulatory inspections.
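A log record covering the fields listed above could be sketched like this; the class and function names are hypothetical, and a real deployment would persist entries to durable storage rather than return them.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditEntry:
    """One log record per AI query, mirroring the fields listed above."""
    timestamp: str    # when the request was made (UTC, ISO 8601)
    user: str         # who initiated the action
    context: str      # risk or data context of the query
    ai_response: str  # the AI response delivered

def log_ai_event(user, context, response):
    entry = AIAuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user,
        context=context,
        ai_response=response,
    )
    return asdict(entry)  # in practice: write to an append-only store
```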
Security Measures:
- Encryption: AES-256 at rest, TLS 1.2+ in transit.
- Access Controls: Role-based access, multi-factor authentication, and session management.
- Secure API Keys: AI service credentials are stored securely and never exposed on the client side.
- Incident Monitoring: Continuous detection of anomalies or unauthorized access attempts.
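The secure-credential point above can be illustrated with a minimal sketch; the environment-variable name is an assumption made for the example. The principle is that keys live server-side only and never reach client-facing code.

```python
import os

def load_ai_key():
    """Read the AI service credential from the server environment.

    Hypothetical variable name; keys are never hard-coded, committed
    to source control, or exposed in responses sent to the client.
    """
    key = os.environ.get("AI_SERVICE_API_KEY")
    if not key:
        raise RuntimeError("AI credential not configured; refusing to start")
    return key
```

Failing fast when the credential is absent avoids silently falling back to an embedded or shared key.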
User Rights & Transparency:
- Users can:
- Access data processed by the platform.
- Correct or update personal or operational data.
- Request deletion of data where legally permissible.
- Full transparency about AI processing logic and purpose is provided to users.
Continuous Compliance:
- Looom AI continuously monitors changes in data privacy laws, AI ethics guidelines, and audit regulations.
- Platform updates and AI model usage policies are adapted to maintain ongoing POPIA, GDPR, and SOC 2 compliance.
Conclusion:
Looom AI is designed to process client and user data securely, responsibly, and transparently. Our AI-driven insights are generated with strict compliance controls, anonymization, and human review, ensuring that sensitive data is never misused or stored for unintended purposes.