In today’s rapidly evolving technological landscape, artificial intelligence (AI) plays a pivotal role in transforming businesses across various sectors. From enhancing operational efficiency to revolutionizing customer experiences, AI offers immense potential. However, with great power comes great responsibility. Creating a robust AI policy is imperative for companies to address the ethical, legal and operational challenges that come with AI implementation.
Understanding the need for an AI policy
As AI technologies become more sophisticated, concerns around privacy, bias, transparency and accountability have intensified. Companies must address these issues proactively through well-defined policies that guide AI development, deployment and usage. An AI policy serves as a framework to ensure that AI systems align with ethical standards, legal requirements and business objectives.
For instance, companies in sectors like manufacturing or consumer goods often leverage AI to optimize their supply chain. While this leads to efficiency, it also raises questions about transparency and data usage. A clear policy helps ensure that AI not only improves operations but also aligns with legal and ethical standards.
Key components of an effective AI policy
Ethical principles and values
It’s important to define the ethical principles that guide AI development and deployment within your company. These principles should reflect your organization’s values and commitment to responsible AI use, such as fairness, transparency, accountability, safety and inclusivity. If your company uses AI for targeted marketing, for example, ensure that its use respects customer privacy and prevents discriminatory targeting practices.
Data governance
Strong data governance is the foundation of any successful AI strategy. Companies need to establish clear guidelines for how their data is collected, stored and used, and ensure compliance with data protection regulations such as GDPR in the EU, CCPA in California, LGPD in Brazil and PIPL in China, as well as AI-specific regulations such as the EU AI Act. Regular audits should verify data privacy, data quality and security throughout the AI lifecycle.
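To make this concrete, below is a minimal sketch, in Python, of the kind of automated check a data governance process might run before a dataset is used for model training. The column names (email, consent_given, retention_expiry) and the specific rules are illustrative assumptions rather than requirements from any particular regulation; adapt them to your own data catalog and legal obligations.

```python
# A minimal sketch of an automated pre-training data governance check.
# Column names and rules are illustrative; adapt to your own data catalog.
from datetime import date

import pandas as pd

PII_COLUMNS = {"email", "phone", "full_name"}  # fields that must not reach training data

def audit_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of governance issues found in a candidate training set."""
    issues = []

    # 1. PII should be removed or pseudonymized before model training.
    leaked_pii = PII_COLUMNS & set(df.columns)
    if leaked_pii:
        issues.append(f"PII columns present: {sorted(leaked_pii)}")

    # 2. Every record should carry an explicit, documented consent flag.
    if "consent_given" not in df.columns or not df["consent_given"].all():
        issues.append("Records without documented consent")

    # 3. Data past its retention date should be excluded from training.
    if "retention_expiry" in df.columns:
        expired = (pd.to_datetime(df["retention_expiry"]).dt.date < date.today()).sum()
        if expired:
            issues.append(f"{expired} records past retention expiry")

    return issues

# Example: a candidate dataset that fails two of the three checks.
sample = pd.DataFrame({
    "email": ["a@example.com"],
    "consent_given": [False],
    "purchase_total": [42.0],
})
print(audit_training_data(sample))
```

In practice, a check like this would run inside the data pipeline and block or flag a training job whenever it returns any issues.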
Algorithmic transparency and explainability
AI systems often operate as ‘black boxes,’ making decisions that are difficult to interpret. To foster trust, it is important to promote transparency in your AI processes. For instance, companies implementing AI-driven supply chains should ensure the technology explains to managers why specific decisions — such as routing inventory — are made. Providing such clarity builds confidence in AI decision-making.
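As one illustration, here is a minimal sketch of how a team might surface per-decision explanations from a simple scoring model, so a supply chain manager can see which factors drove a routing recommendation. The feature names, toy data and linear model are hypothetical; a production system with a non-linear model would typically reach for an attribution library such as SHAP instead, but the goal is the same: a short, human-readable reason for each decision.

```python
# A minimal sketch of per-decision explanations from a simple linear routing model.
# Feature names and data are illustrative; swap in your own model and attribution method.
import numpy as np
from sklearn.linear_model import LinearRegression

features = ["forecast_demand", "current_stock", "transport_cost", "lead_time_days"]

# Toy data standing in for historical routing decisions and their outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(features)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

def explain(decision_row: np.ndarray, top_n: int = 3) -> list[str]:
    """List the features that contributed most to this routing score."""
    contributions = model.coef_ * decision_row           # per-feature contribution
    order = np.argsort(-np.abs(contributions))[:top_n]   # largest absolute effect first
    return [f"{features[i]}: {contributions[i]:+.2f}" for i in order]

# Explain a single recommendation in terms a planner can act on.
print(explain(X[0]))
```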
Bias mitigation
Bias in AI models — such as retail video surveillance systems that involve facial recognition — can cause serious damage, both culturally and to your business’ reputation. It’s essential to regularly audit your AI systems to detect and mitigate biases in data collection, algorithm design and decision-making processes. This can involve using diverse data sources, conducting regular bias audits and maintaining human oversight to ensure fairness at every stage.
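A recurring bias audit can be as simple as comparing outcome rates across groups. The sketch below measures each group's positive-outcome rate relative to the best-served group and flags large gaps; the column names and the 0.8 "four-fifths" threshold are illustrative assumptions, and the right metric and threshold depend on your use case and jurisdiction.

```python
# A minimal sketch of a recurring bias audit: compare positive-outcome rates
# across groups and flag large gaps. Column names and the 0.8 threshold are
# illustrative; the right metric depends on your use case and jurisdiction.
import pandas as pd

def selection_rate_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group, as a ratio to the best-served group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Example audit data: model decisions labelled with a (hypothetical) group attribute.
audit = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 1, 0, 0, 0],
})
ratios = selection_rate_ratios(audit, "group", "approved")
flagged = ratios[ratios < 0.8]  # groups falling below the "four-fifths" rule of thumb
print(ratios.round(2))
print("Flagged groups:", list(flagged.index))
```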
Risk management
Imagine having to evacuate a six-mile radius because a toxic substance was released into the air from one of your plants, as happened in 2020 at a well-known company’s food plant in Camilla, GA.
In this incident, liquid nitrogen leaked from the plant’s refrigeration system. Liquid nitrogen is commonly used in the food industry for freezing products, but it becomes dangerous when it vaporizes and displaces oxygen in the air. The leak caused the oxygen levels to drop, leading to a hazardous environment. As a result, six workers died and several others were hospitalized. Authorities evacuated the surrounding area within a six-mile safety radius to prevent further casualties from nitrogen exposure.
This tragic event prompted an investigation into safety protocols and emergency response procedures at food processing facilities that use hazardous chemicals, underscoring the importance of stringent safety measures in such industries.
Every AI system introduces certain risks, whether related to cybersecurity, operational disruptions or legal liabilities. If AI systems are used to manage safety-critical processes, companies should ensure transparency, auditing mechanisms and human oversight are in place to mitigate potential risks.
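One common control here is a human-in-the-loop gate: recommendations above a risk threshold are logged and held for operator confirmation rather than executed automatically. The sketch below shows the idea in Python; the threshold and action names are placeholders, and in a real deployment the gate would integrate with your own control and alerting systems.

```python
# A minimal sketch of a human-in-the-loop gate for safety-critical AI recommendations:
# actions above a risk threshold are logged and held for operator confirmation rather
# than executed automatically. Threshold and action names are placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_safety_gate")

RISK_THRESHOLD = 0.7  # illustrative cut-off, set from your own risk assessment

def handle_recommendation(action: str, risk_score: float, approved_by: str | None = None) -> str:
    """Execute low-risk actions automatically; escalate high-risk ones for human review."""
    log.info("AI recommended %r (risk=%.2f)", action, risk_score)  # audit trail
    if risk_score >= RISK_THRESHOLD and approved_by is None:
        log.warning("Held for human review: %r", action)
        return "pending_review"
    log.info("Executing %r (approved_by=%s)", action, approved_by)
    return "executed"

handle_recommendation("reduce refrigeration cycle time", risk_score=0.9)
handle_recommendation("reorder packaging film", risk_score=0.2)
```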
Regulatory compliance
AI regulations vary by industry and geography, and your AI policy must adhere to all relevant laws. For example, the food and beverage industry is governed by regulations such as the Food Safety Modernization Act (FSMA) in the US, which requires preventive controls to address potential hazards in production and distribution. Ensure your AI policy is designed to comply with all applicable regulations and is adaptable to changing legal landscapes.
Employee training and awareness
AI adoption is only successful when employees are well-informed about its ethical use and their roles in supporting responsible practices. Training programs, workshops and interactive learning tools can help employees understand AI technologies, ethical considerations and their importance in ensuring fairness and compliance.
External stakeholder engagement
Communicating your AI policy to customers, partners and other stakeholders is critical for building trust. By engaging in dialogue with and soliciting feedback from external parties, companies can address concerns and foster a positive relationship with those affected by their AI initiatives.
Steps to develop and implement an AI policy
1. Assessment and gap analysis
Amazon faced scrutiny when its AI-powered recruiting tool was found to exhibit bias against women. In 2018, during an internal assessment, Amazon discovered that the AI, trained on resumes from predominantly male candidates, was penalizing resumes that included the word “women’s,” such as in “women’s chess club.” As a result, Amazon discontinued the tool and began reevaluating its AI systems to identify biases and improve fairness in future models.
Start by evaluating your current AI capabilities and practices. Identify gaps related to ethics, transparency, risk and compliance. This gap analysis will help pinpoint areas that need improvement as you craft your AI policy.
2. Cross-functional collaboration
Google established its Advanced Technology External Advisory Council (ATEAC) in 2019 to include input from ethicists, human rights specialists and industry experts when developing its AI systems. This cross-functional collaboration aimed to ensure that Google’s AI developments — such as its facial recognition technology — adhered to ethical standards and avoided biases that could harm minority communities. Although the council was disbanded due to internal conflicts, the initiative highlighted the importance of cross-functional collaboration in AI development.
A comprehensive AI policy requires input from diverse stakeholders. Legal experts, data scientists, ethicists and business leaders should work together to ensure the policy integrates technical expertise with ethical considerations.
3. Policy formulation
Microsoft formulated an AI ethics policy after recognizing the risks associated with AI use in facial recognition. The policy required safeguards to prevent misuse and bias, particularly in government and law enforcement applications. In 2020, Microsoft decided to stop selling its facial recognition technology to police departments until there was a national law regulating its use, emphasizing a risk-based framework that prioritized human rights and ethical concerns.
Create clear, actionable policies that align with your company’s values and regulatory requirements. Using a risk-based framework, similar to the EU AI Act, can help guide policy development.
A risk-based framework for the Food & Beverage industry, for example, would consider the following, from highest to lowest impact across functional business units (a sketch of how these tiers could be encoded follows the list):
- Supply chain management. AI systems used for optimizing supply chain operations, forecasting demand and managing inventory require risk assessment and mitigation. Companies need to ensure that these systems are designed to prevent risks and that their use of AI does not lead to unfair practices or safety issues; for example, ensure that AI bias does not unfairly favor one supplier over another.
- Quality control and manufacturing. AI systems used in quality control, predictive maintenance and manufacturing processes must be validated for reliability and accuracy. However, their impact is generally lower than that of systems directly interacting with consumers or making high-risk decisions.
- Innovation & product development. If AI is used in ways that could impact consumer safety or involve high-risk technologies, these systems must meet rigorous standards for testing and validation.
- Human resources & talent acquisition. AI tools used for HR functions, such as recruitment, performance evaluations and employee engagement, are less affected than consumer-facing applications. However, they still need to comply with requirements related to fairness and transparency, particularly if AI influences hiring or promotion decisions.
- Finance and procurement. Lower impact, focusing on transparency and accuracy.
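One way to make such a framework operational is to encode the tiers and their required controls as data that every AI use case must be registered against before deployment. The sketch below is a minimal, illustrative encoding of the list above; the tier names and controls are examples, not a complete control catalog.

```python
# A minimal sketch of encoding the risk tiers above as data, so every AI use case
# is registered against a tier and its required controls before deployment.
# Tier names and controls are illustrative, not a complete control catalog.
from dataclasses import dataclass, field

@dataclass
class RiskTier:
    level: str
    required_controls: list[str] = field(default_factory=list)

RISK_TIERS = {
    "supply_chain": RiskTier("high", ["bias audit", "human oversight", "incident playbook"]),
    "quality_control": RiskTier("medium-high", ["model validation", "drift monitoring"]),
    "product_development": RiskTier("medium", ["safety testing", "pre-launch review"]),
    "human_resources": RiskTier("medium-low", ["fairness audit", "transparency notice"]),
    "finance_procurement": RiskTier("low", ["accuracy checks", "audit trail"]),
}

def controls_for(business_unit: str) -> list[str]:
    """Look up the controls an AI use case must satisfy before it can go live."""
    return RISK_TIERS[business_unit].required_controls

print(controls_for("supply_chain"))
```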
4. Internal review and approval
Facebook (now Meta) reviewed and modified its AI content moderation policies after facing backlash for allowing hate speech and disinformation to spread on its platform. In 2020, Facebook’s internal review, conducted with senior management and external legal advisors, led to the refinement of its AI content moderation algorithms. They incorporated human oversight to reduce errors in identifying harmful content, ensuring that the policy aligned with global standards and Facebook’s mission of maintaining a safe online environment.
Before implementing an AI policy, it’s important to review it with senior management, legal advisors and key stakeholders. Feedback should be incorporated, and a consensus should be reached to ensure the policy aligns with the organization’s goals and legal obligations.
5. Implementation and monitoring
Tesla has implemented AI in its Autopilot and Full Self-Driving (FSD) systems for its vehicles. After launching these AI systems, Tesla provided extensive training for its drivers on how to use the technology safely, emphasizing that drivers must stay alert and be ready to take over control if needed. Tesla continuously monitors the system’s performance through data collection and real-world feedback, making software updates when issues are detected, such as the infamous “phantom braking” problem where cars abruptly stop due to misinterpretations by the AI.
Once the AI policy is approved, it should be communicated across the organization. Employees need to be trained in how to follow the policy guidelines, and monitoring systems should be established to ensure compliance and address any violations promptly.
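Monitoring can include technical checks as well as process ones. The sketch below uses the population stability index (PSI), a common drift metric, to compare live prediction scores against a baseline and raise an alert when the distribution shifts materially; the 0.2 threshold is a widely used rule of thumb, not a regulatory requirement.

```python
# A minimal sketch of post-deployment monitoring: compare the live prediction
# distribution against a baseline with the population stability index (PSI)
# and alert on material drift. The 0.2 threshold is a common rule of thumb.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between two score distributions; larger values mean more drift."""
    lo = min(baseline.min(), live.min())
    hi = max(baseline.max(), live.max())
    edges = np.linspace(lo, hi, bins + 1)
    base_p = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_p = np.histogram(live, bins=edges)[0] / len(live)
    base_p = np.clip(base_p, 1e-6, None)  # avoid division by zero / log(0)
    live_p = np.clip(live_p, 1e-6, None)
    return float(np.sum((live_p - base_p) * np.log(live_p / base_p)))

rng = np.random.default_rng(1)
baseline_scores = rng.normal(0.0, 1.0, 5000)   # scores captured at approval time
live_scores = rng.normal(0.5, 1.0, 5000)       # simulated shift in production
psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI={psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> OK")
```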
6. Regular review and updates
IBM regularly updates its AI ethics policy through its AI Ethics Board, established to oversee AI development and deployments. In 2020, IBM discontinued its facial recognition technology and updated its AI policies in response to concerns over the technology’s potential for mass surveillance and racial profiling. The company’s ongoing reviews ensure that its AI practices align with evolving legal, ethical and societal standards, particularly regarding fairness, privacy and transparency.
AI is constantly evolving, and your AI policy needs to evolve with it. Regularly reviewing and updating the policy to reflect technological advancements, regulatory changes and lessons learned from deployments ensures that your AI practices remain relevant and effective.
An AI policy is a living document
Crafting an AI policy for your company is increasingly important due to the rapid growth and impact of AI technologies. By prioritizing ethical considerations, data governance, transparency and compliance, companies can harness the transformative potential of AI while mitigating risks and building trust with stakeholders. Remember, an effective AI policy is a living document that evolves with technological advancements and societal expectations. By investing in responsible AI practices today, businesses can pave the way for a sustainable and ethical future tomorrow.
Leo Rajapakse is the Head of Platform Infrastructure & Advanced Technology for Grupo Bimbo. He leads the company’s Technology Platform organization, which provides critical technology infrastructure platforms on-premises and in the cloud. Before joining Bimbo Bakeries, Leo held several leadership positions with the technology arms of leading institutions, including the Australian Government. He has extensive experience in managing large, global and diverse technology organizations, where he has transformed and modernized complex technology platforms to greatly improve the stability, resiliency and cybersecurity of applications and infrastructure.