Where Ethics and Development Converge: Building Responsible AI

October 21, 2024 · 5 min read
AltaML Team

Artificial intelligence (AI) now shapes not only how data is analyzed but also how it is produced and accessed, thanks to generative AI (GenAI), a type of AI that learns patterns from existing data to create new content such as text, images, audio, or video. As that influence grows, we have to take steps to ensure the technology is used responsibly. The challenge is defining what that responsibility entails and turning it into a workable practice of responsible artificial intelligence (RAI). AltaML proposes a sustainable approach to addressing algorithmic bias, upholding ethical standards, and delivering better outcomes for all.


AI Business Aspirations

As AI becomes an integral part of operations across organizations, it raises a host of ethical questions about the fairness, safety, privacy, and transparency of its applications. These concerns are only amplified by the growing business use of GenAI.

According to Harvard Business Review, two-thirds of senior IT leaders plan to introduce GenAI into their operations within the year. Yet even while embracing the technology, they express concerns about its safety (79%) and its association with biased outcomes (73%).

This duality echoes a warning from Nikhil Rathi, CEO of the UK’s Financial Conduct Authority (FCA), about the need for “an open conversation about the risks and trade-offs” associated with AI.

“We want safe and responsible use of AI to drive beneficial innovation,” Rathi said.

Finding a way to use AI safely and responsibly begins with acknowledging the problems associated with its use. People cannot simply be subjected to AI determinations that lead to unfair and inequitable outcomes. Planning for solutions starts with identifying those problems.

Algorithmic Pitfalls in Hiring and Facial Recognition

“Even algorithms have parents, and those parents are computer programmers, with their values and assumptions,” wrote Alberto Ibargüen, President and CEO of the John S. and James L. Knight Foundation.

The assumptions built into data models can have far-reaching effects. We’ve seen this happen with the biases that emerged from algorithmic screening of job applicants and from facial recognition.

Several years ago, Amazon experimented with an automated rating system for job candidates. Because the training data contained far more men than women in tech roles, the algorithm learned to rate male candidates as the better choice for jobs.

The algorithm applied this gender bias in its selections, treating any indicator of female identity on a resume as a reason to filter out a candidate. The results were so skewed that Amazon was compelled to shut down its AI recruiting tool.

Amazon also drew fire for the biases in its facial recognition software, Rekognition, which had been adopted by some law enforcement agencies before pressure from researchers and civil liberties groups brought that use to a halt in 2020.

“Facial Recognition Is Accurate, if You’re a White Guy” was the headline that encapsulated the problem. The article reported on Joy Buolamwini’s research, which found that Rekognition misidentified women with darker skin tones as men 31% of the time.

In addition to the problems of racial bias and inaccuracy, Buolamwini pointed out the privacy violations that can result from adopting facial recognition. That problem did not go away in later iterations of the technology, as Clearview AI demonstrates. The company builds its facial database by scraping people’s photos from the web without their consent, and European regulators have fined it multiple times for violating the General Data Protection Regulation (GDPR). Clearview claims it is exempt from GDPR because it is based in the U.S. and does not sell to European agencies, but the regulators disagree: its latest fine brings the total it could owe to over $110 million, a substantial liability for any business.

What Must Be Done

The fact that an American company can still be held to GDPR rules is highly relevant to how AI will evolve now that the EU AI Act is set to take effect in August 2026. Violators of the new regulations face significant penalties, with fines reaching up to €35 million or 7% of global annual turnover for the most serious infringements.

There are three compelling reasons for taking immediate action on RAI:

  1. Legal requirements
  2. The moral imperative
  3. The consequences of inaction

While no international AI law is in effect, the EU law will have far-reaching consequences, likely setting a precedent for AI regulation in jurisdictions across the globe, much as the GDPR did for privacy. That means even businesses in the U.S. or Canada without European interests would be prudent to keep the EU rules in mind as they plan future AI integrations, since similar regulations are likely to emerge in other countries.

Being proactive about RAI is also essential to staying on the right track with the technology itself. That means ensuring AI is not used to reinforce unfair treatment of individuals through biases veiled by algorithmic operation. Businesses that fail to avert such outcomes end up paying to catch up later.

The costs include the technical debt and stagnation that result from failing to plan ahead for AI implementations that meet regulatory standards. An ounce of prevention is far more cost-effective than a pound of cure, especially when considering the delays that can derail projects if the technology requires adjustments or updates.

Ignoring RAI can also mean losing the trust of customers and employees, who will look elsewhere for a company that does not misuse AI. That, in turn, makes it harder to recruit and retain talent.

The 7 Principles of RAI

To guide businesses in balancing AI development with responsibility, AltaML has defined seven fundamental principles of RAI:

  1. Inclusivity
  2. Fairness
  3. Transparency and Explainability
  4. Safety and Security
  5. Accountability
  6. Privacy and Data Protection
  7. Awareness and Empowerment

Inclusivity: Because AI affects all kinds of people, their perspectives should be represented in AI solutions. Diverse teams must be recruited to participate in design and development to avert the racial, ethnic, and gender biases that homogeneous teams can bake in.

Fairness: We have to strive to keep human values at the center of AI development. That means not losing sight of the Human Rights Act, which recognizes the need to protect people from discrimination based on traits like age, gender, race, and marital status. For that reason, AI use cases likely to have a discriminatory effect should be avoided. This, along with the lesson from Amazon, may be why AI is not yet widely used in recruitment: as of early 2024, only 14% of companies reported using AI for talent acquisition, according to a global survey of professionals.
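
To make this concrete, here is a minimal sketch of one common fairness check, the “four-fifths rule” for disparate impact, run against hypothetical screening decisions. The data, field names, and threshold are illustrative assumptions, not part of any real system.

```python
# A minimal sketch of a disparate-impact check (the "four-fifths rule"),
# run over hypothetical screening decisions; all data here is illustrative.

def selection_rate(decisions, group):
    """Fraction of applicants in `group` that the model selected."""
    in_group = [d["selected"] for d in decisions if d["gender"] == group]
    return sum(in_group) / len(in_group)

# Hypothetical outputs from a resume-screening model.
decisions = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "female", "selected": False},
    {"gender": "male",   "selected": True},
    {"gender": "male",   "selected": True},
    {"gender": "male",   "selected": False},
]

rate_f = selection_rate(decisions, "female")
rate_m = selection_rate(decisions, "male")
ratio = min(rate_f, rate_m) / max(rate_f, rate_m)

print(f"female rate={rate_f:.2f}, male rate={rate_m:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule flags ratios below 0.8
    print("Potential adverse impact: review the model before acting on it.")
```

Simple checks like this catch the kind of skew that sank Amazon’s recruiting tool before a model ever reaches production.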

Transparency and Explainability: We have to break through the black box. At the most basic level, users must be informed when they are interacting with an AI system rather than a human. Beyond that, we should provide visibility into the applications, development, and operations associated with AI so that users and affected parties understand the what, why, and how of outcomes and can flag mistakes.
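
One hedged way to open the black box is permutation feature importance, sketched below with scikit-learn: each input feature is shuffled in turn to measure how much the model’s accuracy depends on it. The dataset and model are stand-ins chosen only so the example runs end to end.

```python
# A sketch of permutation feature importance: shuffle each feature and
# measure how much held-out accuracy drops. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model leans on most heavily.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```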

Safety and Security: We need to ensure that AI is used only for valid purposes and prioritize mitigating safety and security risks. One practical safeguard is comparing in-production predictions with the ground truths that emerge later.
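
A minimal sketch of that comparison might look like the following, assuming predictions are logged at inference time and matched to ground-truth labels as they arrive; the baseline accuracy and alert threshold are illustrative.

```python
# A minimal monitoring sketch: log predictions in production, attach ground
# truths as they become available, and alert on accuracy drift.
from dataclasses import dataclass

@dataclass
class LoggedPrediction:
    record_id: str
    predicted: int
    actual: int | None = None  # filled in once the ground truth arrives

def production_accuracy(log):
    """Accuracy over predictions whose ground truth has arrived."""
    resolved = [p for p in log if p.actual is not None]
    if not resolved:
        return None
    return sum(p.predicted == p.actual for p in resolved) / len(resolved)

log = [
    LoggedPrediction("a1", predicted=1, actual=1),
    LoggedPrediction("a2", predicted=0, actual=1),
    LoggedPrediction("a3", predicted=1),  # ground truth not yet known
]

BASELINE = 0.90  # validation accuracy at deployment time (illustrative)
acc = production_accuracy(log)
if acc is not None and acc < BASELINE - 0.05:
    print(f"Alert: production accuracy {acc:.2f} has drifted below baseline.")
```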

Accountability: All AI systems are selected and deployed by humans, who must be held accountable for their operation and output. This entails keeping a subject matter expert (SME) human-in-the-loop to validate predictions before they are acted on. There should also be regularly scheduled audits to confirm that all outputs are consistent with human rights and environmental values.
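
One way to wire in that human-in-the-loop step is a simple confidence gate, sketched below. The threshold, record identifiers, and review queue are hypothetical choices made for illustration, not a prescribed implementation.

```python
# A sketch of a human-in-the-loop gate: low-confidence predictions are routed
# to an SME review queue instead of being acted on automatically.
REVIEW_THRESHOLD = 0.85  # illustrative cutoff for automatic action
review_queue = []

def route_prediction(record_id, label, confidence):
    """Act on confident predictions; queue the rest for SME validation."""
    if confidence >= REVIEW_THRESHOLD:
        return {"id": record_id, "label": label, "source": "model"}
    review_queue.append(
        {"id": record_id, "label": label, "confidence": confidence}
    )
    return None  # decision withheld pending human review

decision = route_prediction("loan-123", "approve", confidence=0.62)
print(decision, len(review_queue))  # None 1 -> an SME must validate this one
```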

Privacy and Data Protection: When a program uses data that includes sensitive personal information, that data must be anonymized, and privacy must be safeguarded throughout the AI life cycle by adhering to established privacy frameworks. For example, if the data on individuals includes their addresses, only aggregated locations should appear in displayed results.
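
A minimal sketch of that aggregation idea, with assumed field names, might look like this: individual addresses never leave the pipeline, and only per-region counts above a minimum group size are reported.

```python
# A sketch of privacy-preserving aggregation: report per-region counts only,
# never individual addresses. Field names and threshold are assumptions.
from collections import Counter

records = [
    {"name": "...", "postal_region": "T5J"},  # personal fields stay internal
    {"name": "...", "postal_region": "T5J"},
    {"name": "...", "postal_region": "T6E"},
]

MIN_GROUP_SIZE = 2  # suppress small groups that could re-identify someone
counts = Counter(r["postal_region"] for r in records)
report = {region: n for region, n in counts.items() if n >= MIN_GROUP_SIZE}
print(report)  # {'T5J': 2} -- aggregates only, no individual addresses
```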

Awareness and Empowerment: Public trust in AI depends on transparent communication that fosters understanding and empowerment. Teams should be encouraged to share any concerns about AI usage so that those concerns can be addressed. It all starts with a commitment to RAI that is integrated into the company culture.

As AI takes on an ever-larger role in daily operations, it’s important to remember that while it brings incredible efficiencies, human insight remains essential for guiding its use. The road to RAI begins with awareness of the problems and a commitment to striving for better outcomes.


