IT leaders should measure and balance fairness in AI models, Forrester says

Bias in artificial intelligence development has been a growing concern as AI's use increases across the world. But despite efforts to create AI standards, it ultimately falls to organizations and IT leaders to adopt best practices and ensure fairness throughout the AI life cycle, or risk dire regulatory, reputational, and revenue impacts, according to a new Forrester Research report.

While eliminating bias in AI entirely is impossible, CIOs must determine when and where AI should be used and what the ramifications of its use could be, said Forrester vice president Brandon Purcell.

Bias has become so entrenched in AI models that companies are considering a new C-level executive, the chief ethics officer, tasked with navigating the ethical implications of AI, Purcell said. Salesforce, Airbnb, and Fidelity already have ethics officers, and more companies are expected to follow suit, he told CIO.com.

Ensuring AI model fairness

CIOs can take several steps not only to measure but also to balance the fairness of AI models, he said, even though there is a lack of regulatory guidelines dictating the specifics of fairness.

The first step, Purcell said, is to make sure that the model itself is fair. He recommended using an accuracy-based fairness criterion that optimizes for equality, a representation-based fairness criterion that optimizes for equity, and an individual-based fairness criterion. Companies should bring together multiple fairness criteria to check their impact on the model’s predictions.

While the accuracy-based fairness criterion ensures that no group in the data set receives preferential treatment, the representation-based criterion ensures that the model delivers equitable results across the groups in the data.

“Demographic parity, for example, aims to ensure that equal proportions of different groups are selected by an algorithm. For example, a hiring algorithm optimized for demographic parity would hire a proportion of male to female candidates that is representative of the overall population (likely 50:50 in this case), regardless of potential differences in qualifications,” Purcell said.
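As a rough illustration of how two of these criteria can be checked in practice, the sketch below computes per-group selection rates (the representation-based view, via demographic parity) and per-group accuracy (the accuracy-based view) for a hypothetical hiring model's decisions. The data, column names, and the 0.8 parity threshold are assumptions for illustration, not from the Forrester report.

```python
import pandas as pd

# Hypothetical hiring-model decisions; data and column names are assumed.
df = pd.DataFrame({
    "group":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected":  [1, 0, 1, 0, 1, 1, 0, 1],   # model's decision
    "qualified": [1, 0, 1, 1, 1, 0, 0, 1],   # ground-truth label
})

# Representation-based check: demographic parity compares selection rates
# across groups; the "four-fifths rule" flags ratios below 0.8.
rates = df.groupby("group")["selected"].mean()
print(rates, f"parity ratio: {rates.min() / rates.max():.2f}", sep="\n")

# Accuracy-based check: does any group get systematically worse accuracy?
df["correct"] = df["selected"] == df["qualified"]
print(df.groupby("group")["correct"].mean())
```

An individual-based criterion would instead compare predictions for pairs of similar applicants, which requires a task-specific similarity metric and is omitted here.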

One example of bias in AI was the Apple Card credit model, which was revealed in late 2019 to be allocating more credit to men. The issue came to light when the model offered Apple cofounder Steve Wozniak a credit limit 10 times that of his wife, even though they share the same assets.

Balancing fairness in AI

Balancing fairness across the AI life cycle is important to ensure that a model’s predictions are as close to bias-free as possible.

To do so, Purcell said, companies should solicit feedback from stakeholders to define business requirements, seek more representative training data during data understanding, use more inclusive labels during data preparation, experiment with causal inference and adversarial AI in the modeling phase, and account for intersectionality in the evaluation phase. “Intersectionality” refers to how various elements of a person’s identity combine to compound the impacts of bias or privilege.

“Spurious correlations account for most harmful bias,” he said. “To overcome this problem, some companies are starting to apply causal inference techniques, which identify cause-and-effect relationships between variables and therefore eliminate discriminatory correlations.” Other companies are experimenting with adversarial learning, a machine-learning technique that pits two competing cost functions against each other.
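Before turning to adversarial learning, here is a heavily simplified illustration of the conditioning idea behind causal adjustment. In the synthetic data below, the sensitive attribute appears predictive of a lending decision only because it correlates with income, the actual causal driver; adjusting for income shrinks the attribute's coefficient toward zero. All variables and the causal story are assumptions, and real causal inference requires an explicit causal model with stronger assumptions than shown here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic setup (an assumption for illustration): the sensitive
# attribute s has no direct effect on the decision y; it only correlates
# with income, which does drive y.
rng = np.random.default_rng(0)
n = 20_000
s = rng.integers(0, 2, n).astype(float)             # sensitive attribute
income = 50 + 5 * s + rng.normal(scale=5, size=n)   # historical disparity
y = (income + rng.normal(scale=2, size=n) > 52.5).astype(int)

# Naive model: s alone looks strongly predictive of the decision.
naive = LogisticRegression(max_iter=1000).fit(s.reshape(-1, 1), y)
print("coef on s, alone:      ", naive.coef_[0, 0])

# Adjusting for income shrinks the coefficient on s toward zero,
# exposing the s-y association as a spurious correlation.
adjusted = LogisticRegression(max_iter=1000).fit(
    np.column_stack([s, income]), y)
print("coef on s, with income:", adjusted.coef_[0, 0])
```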

For example, Purcell said, “In training its VisualAI platform for retail checkout, computer vision vendor Everseen used adversarial learning to both optimize for theft detection and discourage the model from making predictions based on sensitive attributes, such as race and gender. In evaluating the fairness of AI systems, focusing solely on one classification such as gender may obscure bias that is occurring at a more granular level for people who belong to two or more historically disenfranchised populations, such as non-white women.”
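The mechanics of that two-objective setup can be sketched in a few lines: a predictor is trained on the main task while an adversary tries to recover a sensitive attribute from the predictor's output, and the predictor is penalized whenever the adversary succeeds. This is a generic, minimal PyTorch sketch on random data, not Everseen's system; the architectures, data, and the `lam` weight are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 8)                    # features (random placeholder)
y = torch.randint(0, 2, (256,)).float()    # task labels
s = torch.randint(0, 2, (256,)).float()    # binary sensitive attribute

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the fairness penalty (an assumption)

for step in range(200):
    # 1) Train the adversary to recover the sensitive attribute from the
    #    predictor's output; if it succeeds, the output is leaking bias.
    logits = predictor(X).detach()
    opt_a.zero_grad()
    adv_loss = bce(adversary(logits).squeeze(1), s)
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor on the task while *maximizing* the adversary's
    #    loss, so its output carries less information about s.
    opt_p.zero_grad()
    logits = predictor(X)
    task_loss = bce(logits.squeeze(1), y)
    leak_loss = bce(adversary(logits).squeeze(1), s)
    (task_loss - lam * leak_loss).backward()
    opt_p.step()
```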

He cited Joy Buolamwini and Timnit Gebru’s seminal paper on algorithmic bias in facial recognition, which found that the error rate of Face++’s gender classification system was 0.7% for men and 21.3% for women across all races, and that the error rate jumped to 34.5% for dark-skinned women.
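That kind of finding falls out of a simple intersectional breakdown of a classifier's results: the error rate sliced by a single attribute can look moderate while hiding a failure concentrated at the intersection of two attributes. The toy results below are invented for illustration and are not the Gender Shades data.

```python
import pandas as pd

# Invented classifier results for illustration; NOT the Gender Shades data.
df = pd.DataFrame({
    "gender":    ["M", "M", "M", "M", "F", "F", "F", "F"],
    "skin_tone": ["light", "dark", "light", "dark",
                  "light", "light", "dark", "dark"],
    "correct":   [1, 1, 1, 1, 1, 1, 0, 0],
})

# A single-attribute view shows a moderate gap...
print(1 - df.groupby("gender")["correct"].mean())

# ...while the intersectional view shows the errors are concentrated
# entirely in one subgroup (dark-skinned women, in this toy example).
print(1 - df.groupby(["gender", "skin_tone"])["correct"].mean())
```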

More ways to adjust fairness in AI

There are a couple of other methods companies might employ to ensure fairness in AI: deploying different models for different groups in the deployment phase, and crowdsourcing with bias bounties — rewarding users who detect biases — in the monitoring phase.

“Sometimes it is impossible to acquire sufficient training data on underrepresented groups. No matter what, the model will be dominated by the tyranny of the majority. Other times, systemic bias is so entrenched in the data that no amount of data wizardry will root it out. In these cases, it may be necessary to separate groups into different data sets and create separate models for each group,” Purcell said.
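A minimal sketch of that per-group pattern, assuming a tabular data set with a known group column (the data, grouping, and model choice are all hypothetical): one model is fitted per group, and each record is routed to its own group's model at inference time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data with a group column; all values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)
group = rng.choice(["A", "B"], size=300)

# Fit one model per group so the majority group cannot dominate training.
models = {g: LogisticRegression().fit(X[group == g], y[group == g])
          for g in np.unique(group)}

# At inference time, route each record to its own group's model.
def predict(x_row: np.ndarray, g: str) -> int:
    return int(models[g].predict(x_row.reshape(1, -1))[0])

print(predict(X[0], group[0]))
```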



Read More from This Article: IT leaders should measure and balance fairness in AI models, Forrester says
January 25, 2022
