BP Logix Responsible AI Usage Policy
Last updated on: June 13, 2025
1.0 Purpose
The purpose of this policy is to provide guidelines for the ethical, responsible, and secure use of artificial intelligence (AI) technologies within BP Logix. This policy aims to minimize risks associated with AI systems and ensure they are used in a manner that is compliant with laws/regulations, aligns with BP Logix values, and promotes fairness and transparency.
1.1 Quick-start / Need to Know
Below is a non-exhaustive but prioritized list of need-to-know considerations regarding AI:
- AI limitations - Understand that chatbots and other AI systems have a limited ability to truly understand context, nuance, and emotion. Do not assume their responses are flawless
- Guard personal info - Be wary of sharing sensitive personal details with AI, as data collection practices may not be transparent
- Verify unusual recommendations - Cross-check any highly unusual or improper suggestions from AI before acting on them
- Consider biases - AI can reflect inherent human biases; be alert to signs of prejudice or lack of diversity
- Use ethically - Avoid leveraging AI in ways that could harm others or violate ethical norms
- Credit sources - When publishing output created by AI, disclose that the content was machine-generated
- Monitor for mistakes - Routinely review AI interactions to catch potential errors
- Provide feedback - Alert developers/companies to observed limitations, risks, or harmful content generated by their AI
2.0 Scope
This policy applies to all BP Logix employees, contractors, and business partners who use, develop, procure, or interact with AI technologies on behalf of the company. This includes, but is not limited to, technologies such as:
- Machine learning systems
- Natural language processing
- Computer vision
- Conversational AI/chatbots
- Robotic process automation
- Predictive modeling
3.0 Policy
3.1 Ethical AI Principles
All use of AI technology must be consistent with the following core principles:
- Users must understand the origins, capabilities, and limitations of AI systems before utilizing them
- AI system outputs must be carefully validated by users and must not dictate decisions without human oversight
- Users must monitor AI systems to ensure continued fair, ethical, and intended performance
- Users must not utilize AI systems for applications that could violate laws, ethics, or company values
- Humans remain fully accountable for decisions informed by AI system outputs
- Users must promptly report unethical or unintended AI system behavior to system administrators
- When using AI systems that process personal data, privacy rights must be protected in accordance with existing company policies and procedures
- Caution must be taken to avoid the introduction of biases into operational decisions or processes informed by AI systems
3.2 Responsible AI Practice
- AI systems may only be procured/developed to aid core business functions in a manner that respects civil liberties and aligns with BP Logix values
- Employees must review and acknowledge this policy before utilizing AI systems for their work
- Third parties supplying AI systems must adhere to the same standards of responsible/ethical AI as BP Logix and follow Vendor Management policies and procedures
- All AI systems must be evaluated for potential risks and undergo impact assessments focused on privacy, security, fairness, and societal impact
- AI systems deemed high-risk will receive heightened oversight, testing, and approval requirements
3.3 AI Development Risk Management
- AI systems must be designed and developed following secure software development lifecycle practices
- AI infrastructure and data must be secured, with access controlled as appropriate for confidential data
- Incidents involving AI systems must be documented, reviewed, and mitigated under BP Logix incident response protocols
4.0 Oversight
- The Head of IT/Operations (the Principal), with approval from the executive team, will provide governance of all AI initiatives
- The Principal will maintain a register of all production AI systems and review new AI proposals
- Periodic reviews will be conducted on operational AI systems to validate performance, fairness, and responsible use
- The Principal will update this policy and associated procedures periodically to adapt to changes in technology and regulations
5.0 Compliance
5.1 Violations
- Employees found to have violated this policy may face disciplinary action per standard protocols
- Partners/vendors must adhere to this policy; non-compliance may result in termination of contracts
5.2 Policy Review
- This policy will be reviewed annually and updated as needed to account for advances in technology and evolving regulatory guidance
6.0 Applicability
This policy takes effect upon review and acknowledgment.