

Are AI Ethics the New ESG?

Staggering investments in generative AI raise the question of whether AI ethics will follow the same corporate missteps as ESG.

Article summary:

Ethical AI and ESG go hand-in-hand: Just like with sustainability, where upfront investment leads to better products, ethical considerations in AI development result in more valuable, efficient and cost-effective solutions that are more aligned with human goals. Ethical AI can also minimize environmental impact and potential legal risks.

Focus on the right tool for the job: Don't blindly chase the latest AI technology. Consider the specific task and choose the most appropriately scaled solution, whether that's a small language model, a non-AI tool or a larger model where the task truly demands it. Right-sizing reduces costs and environmental impact while ensuring optimal performance.

Ethical AI is good business: Ethical AI practices lead to improved user trust, cost savings and better products. It avoids potential ethical pitfalls and brand damage, ultimately benefiting the company's bottom line.

Despite a two-decade rise in environmental, social and governance (ESG) investing, corporate response has been slow and uneven. While ESG reporting is mandatory in the EU, it's practically nonexistent in the U.S.

Even among companies claiming to engage with sustainability efforts, few have concrete plans to measure progress. This sluggishness mirrors the development of ethical considerations in artificial intelligence (AI). Just as AI regulations struggle to keep pace with rapid innovation, ethical ESG practices have yet to gain traction.

The key difference? Ethical AI has a clear path to both short- and long-term profitability of the products and services it guides, while also ensuring that these products and services meet sustainability, human rights and equity objectives.

The potential for financially rewarding AI products that deliver greater value could drive companies to develop AI responsibly. This stands in contrast to other ESG practices, where the benefits often take much longer to materialize.

Despite the growing interest in ESG, corporate action on sustainability remains slow

While the term ESG was coined two decades ago, in 2004, ESG reporting only became mandatory in the EU last year, and there’s still no federal mandate for any type of ESG reporting in the U.S., although the EU’s requirements affect more than 10,000 non-EU enterprises.

In the U.S. specifically, only 23 percent of Fortune 500 companies claim to engage with the Sustainable Development Goals (SDG) framework (the U.N.’s 17 goals to end poverty, protect the planet and secure peace and prosperity), and a peer-reviewed study found that only 0.2 percent of those companies have actually developed concrete methods and tools to evaluate their progress toward relevant SDGs.

Beyond ESG reporting, fewer than half of Fortune Global 500 companies actually reduced their reported emissions last year. Despite 20 years of work, experts like Google’s Chief Sustainability Officer, Kate Brandt, say that when it comes to sustainable business, “The world is still not on track.”

Lagging AI ethics mirrors ESG's slow progress, so far

Many experts see parallels between corporate efforts to improve sustainability, or ESG as a whole, and corporate efforts to implement AI, ethically or not.

For instance, most companies hadn’t invested in generative AI, a facet of artificial intelligence that can create new content, until 2023 at the earliest, and just one year later, government regulations around responsible generative AI are already far behind corporate action.

At the same time, both ESG and AI advancements hinge on reliable data. For ESG, a lack of data hinders progress in tracking sustainability efforts, and for AI, poor quality data can lead to biased and harmful models.

In an ideal world, investments in AI could not only generate revenue but actually further ESG goals rather than hurt them. Many tech experts have made this point in response to concerns about the climate impact of resource-intensive large language models (LLMs), citing examples ranging from AI-driven traffic light sequencing to transportation routing powered by machine learning.

But in reality, ethical AI implementation has thus far lagged behind profitable AI implementation, which has lagged behind AI implementation in general, which has lagged behind the discussion of AI. The topic of AI ethics is framed as a conversation about limits rather than a conversation about possibilities.

This raises concerns about future unethical AI tools. However, ethical practices could lead to better and cheaper AI.

An ethical AI solution is a more valuable AI solution

In the world of AI, the ethical design of algorithms is not just a moral imperative but also a key determinant of product quality and value. Just like with sustainability, where planning for environmental impact from the beginning leads to a higher-quality product, ethical considerations built into AI development from the outset result in better overall solutions that are more aligned with human needs and goals.

Bias in AI algorithms can result in inaccurate predictions and unfair results, damaging both the user experience and the deploying company's reputation.

Example: An AI hiring tool that hasn't been properly audited for bias could inadvertently discriminate against qualified candidates from underrepresented groups, leading to a less diverse and potentially less effective workforce.

Conversely, an AI algorithm that has been meticulously designed and tested to minimize bias can deliver more accurate and equitable results.


Example: Google's celebrity facial recognition app, which underwent rigorous bias testing, not only avoided potential PR disasters but also produced a superior product capable of recognizing a diverse array of celebrities.

Ethical AI solutions in practice: energy and transportation sectors

In the energy sector, AI algorithms are used to optimize grid performance and predict energy demand. If these algorithms are biased, they could lead to inefficient energy distribution and higher costs. However, an ethically designed AI can ensure more accurate predictions, leading to improved efficiency and cost savings.

In the transportation sector, AI is used in everything from route optimization for logistics companies to autonomous vehicles. An AI that hasn't been properly tested for bias could lead to inefficient routes or, in the case of autonomous vehicles, even safety issues. However, an ethically designed AI can ensure optimal performance and safety.

Sustainability requires customer-centric design, not just good intentions

In the past, a more sustainable product hasn't always equated to a better product. This has been a significant roadblock in the path of corporate sustainability initiatives, especially when companies have prioritized a sustainable reputation rather than actual climate impact.

Take the case of cardboard straws, for instance. In an effort to reduce plastic waste, many companies have switched to cardboard straws. However, some versions of these straws become soggy and unusable after a short period, leading to customer dissatisfaction. This has led some businesses to question the viability of such sustainable alternatives.

Some might argue that neither plastic nor cardboard straws are ideal solutions, and perhaps a completely new option is needed (or maybe we should ditch straws altogether, as they weren't necessary before their invention).

This highlights the importance of achieving the right design and execution, which requires time and development. Similarly, current AI models may perform well in specific areas but have limitations. Future models, built on the learnings of today's versions, can potentially overcome these limitations.

“The key takeaway isn't that sustainable products or ethical AI products inherently have higher quality. Both require intentional effort throughout their development—it's a continuous journey. There's no finish line; it's about constantly refining and improving.”

Sucharita Venkatesh, Senior Director, General Management at Publicis Sapient

Sustainability investments are viewed as a long-term play, with a costly upfront burden

At the same time, corporations still face a long wait before seeing a return on their sustainability investments. A blog published by MSCI found that firms with higher environmental ratings had a roughly 0.6 percent lower cost of capital, indicating that sustainable investments can eventually pay for themselves. But because that payoff is distant, when faced with a short-term economic squeeze, many companies put ESG goals on pause.

This situation reflects what former Bank of England Governor Mark Carney called the “tragedy of the horizon.” With ESG investing, the benefits—like avoiding a potential environmental catastrophe—lie far in the future. However, the investments need to be made now, when the risks seem distant and less urgent. This disconnect between long-term benefits and short-term costs can make it difficult to take decisive action.

Still, companies like Unilever have shown that sustainability is profitable in the long term, and research supports this. Unilever's “Sustainable Living” brands, which are designed with a clear social or environmental purpose, delivered 70 percent of the company's growth in 2017.

Ethical AI boosts ESG goals

An ethical approach to AI, one that creates and uses AI solutions with precision and efficiency, can not only be more valuable and therefore drive more revenue in the short and long term, but can also be more sustainable, making ethical AI solutions a wise investment for forward-thinking companies.

In the world of AI, ethical considerations extend beyond fairness, accuracy and bias; they also encompass the ESG principles themselves: environmental, social and governance.

“Ethical AI will be a crucial part of ESG itself, and not a metric measured on its own.”

Francesca Sorrentino, AI Ethics Taskforce Lead

Reducing the environmental footprint with targeted AI models

Generative AI models, particularly LLMs, can have a significant environmental impact. Training these models requires substantial computational resources, which in turn leads to high energy consumption. According to a study by the University of Massachusetts Amherst, training a single large AI model can generate as much carbon as five cars emit over their lifetimes.

However, companies can reduce this environmental impact by adopting a more targeted approach to AI. Instead of using large, resource-intensive models for all tasks, companies can use smaller, more efficient models for specific use cases. This not only reduces energy consumption but also results in more accurate and useful AI applications.

Example: A company might use a small language model (SLM) to power a customer service chatbot. This model can be trained specifically on customer service interactions in a particular industry, making it more efficient and accurate than a larger, more general model. This targeted approach reduces the computational resources required, thereby reducing the environmental impact.

Beyond AI efficiency, toward responsible social use

Ethical considerations for LLMs go beyond environmental efficiency: addressing bias, toxicity and copyright through proper testing is equally crucial to ethical development.

The “S” (social) of ESG, referring to an organization’s relations with stakeholders, including how it addresses human rights and equity, can be addressed through careful consideration of when and how to integrate AI. Don't force the use of an LLM when a simpler solution, like an SLM or even a non-AI tool, is better suited. Similarly, using AI for tasks it's not designed for, like using a generative AI algorithm to make hiring or financial decisions, can be counterproductive.

How ethical AI models cut costs in the short term and long term

Ethical AI algorithms can actually save companies money, not just over time but in the short term, through the use of SLMs, through bias and copyright considerations, and through the creation of an AI ethics framework.

While LLMs can generate more diverse responses, they require more computational resources to train and run, making them more expensive. SLMs, on the other hand, have fewer parameters, making them less resource-intensive and therefore cheaper to use. Despite their smaller size, they can still generate high-quality responses, especially when trained on a specific task or domain.

Example: A customer service chatbot powered by an SLM can handle a large volume of customer inquiries efficiently and accurately. The model can be trained specifically on customer service interactions in a particular industry, allowing it to provide relevant and accurate responses. This reduces the need for human customer service representatives, leading to significant cost savings.

Ethical AI models can also prevent higher costs in the long term by avoiding customer service frustrations and legal fees.

Example: AI trained on licensed content carries reduced legal risk, helping companies avoid lawsuits from copyright holders and potential future fines for algorithmic bias.

“We need to thread the needle between the regulatory concerns and the moral good. This could involve the use of small language models, more precise use of AI and the creation of an AI ethics framework.”

Zachary Paradis, Global Experience Offering Lead, CX&I

Finally, following ethical principles such as beneficence (as defined at Publicis Sapient) or sustainability might lead us to choose an SLM over an LLM, or a non-AI solution over an AI one, and to take the time and extra resources for bias testing. This approach can be not only more ethically sound but also more cost-effective.


Ethical AI starts with mission alignment

While ethical AI does offer clear financial and environmental benefits, a focus on mission alignment is still crucial throughout the AI development process.

Just like corporate sustainability initiatives have received backlash for “greenwashing,” it’s possible for well-intentioned companies to “AI wash,” or make misleading or false claims about their use of AI, either negligently or due to the struggle of ethically implementing AI.

A generative AI mismatch can hurt customer experience, brand identity and profits. Imagine a customer service-focused company deploying impersonal chatbots, or a creative content company solely relying on generic AI generation—both miss the mark.

Example: When Air Canada deployed a chatbot that hallucinated, the company attempted to dissociate itself from the error by claiming it could not be held liable for information provided by its own chatbot, damaging its reputation and customer trust.

However, a strong AI strategy goes beyond just “use cases” that drive the most ROI. Identifying “non-AI use cases” aligned with a company’s mission is equally important.

Example: Dove, a company committed to real beauty, has publicly chosen not to use AI-generated images of women in advertising to avoid unrealistic beauty standards. This aligns with their mission and protects their brand.

“Ethical discussions can get bogged down in technicalities, focusing on ‘can we’ instead of ‘should we.’”

Todd Cherkasky, Publicis Sapient Generative AI Ethics Taskforce

Defining these "non-use cases" ensures ethical AI use is aligned with a company's mission and avoids wasting money and environmental resources on technology that doesn't add value.

As we move forward, it's essential to keep asking the question: Should we be doing this? And if so, are we putting the right governance and framework in place to address it responsibly?

Ethical AI is an enabler, not a hindrance

Ethical AI isn't a roadblock; it's a booster. Leaders who bridge the gap to ethical AI gain better products, stronger user trust and tighter cost control. Like user-centered design, it's a proactive approach that ensures mutual value.

Getting started with ethical AI

Digital business transformation is the foundation for successful and ethical generative AI solutions. This approach curates enterprise data and evaluates and prioritizes generative AI use cases based on potential value and impact, preparing organizations for their journey.

Publicis Sapient strategies are tailored to each client's unique requirements, ensuring alignment with environmental sustainability, social responsibility and governance standards. Digital teams design and build generative AI solutions that meet specific needs, from testing use cases to full business transformation with native large language models.


Unlock the full value of generative AI with an ethical and sustainable strategy.


Todd Cherkasky
GVP Customer Experience & Innovation Consulting, Publicis Sapient, Chicago, IL
Francesca Sorrentino
Client Partner, Publicis Sapient, London, United Kingdom
Sucharita Venkatesh
Senior Director, General Management, Publicis Sapient, London, United Kingdom