How to get the best out of generative AI while avoiding the worst

20 August, 2024


By the time you read this article, AI tools and applications will have developed by leaps and bounds. This is precisely why companies need to establish a coherent framework and set of principles for leveraging the benefits of generative AI while avoiding its pitfalls.

At the beginning of 2023, the world was waking up to the vast potential of AI applications to make many aspects of human work easier, faster, and more efficient. It was already clear, too, that tools trained on massive datasets and repetitive routines could even eliminate some jobs.

However, it was also evident that AI had limitations: it lacked a human's three-dimensional perception of our world. Nowhere was this more apparent than in the generation of certain types of images.

We collectively mocked image generators’ clumsy attempts to reproduce human hands, and we chuckled over their attempts at moving images when we viewed videos of actor Will Smith eating spaghetti or AI-generated gymnastics routines. But technology is advancing exponentially and getting better at imitating real life. Remember that this is the worst generative AI will ever be.

 

When art (too closely) imitates life

“Her” actor Scarlett Johansson has accused OpenAI of using a replica of her voice for its virtual assistant, while more recently, others have begun sounding the alarm over content created by Grok, an AI embedded in the X microblogging platform that appears to flirt with copyright infringement and produce controversial and problematic images due to extremely loose or non-existent filters.

Even before the current crescendo of generative AI abilities, there had been concerns over bad-faith actors using applications to alter real videos or create deepfake content for misinformation and disinformation.

Any company that does not have a well-thought-through, intentional, and transparent policy for using generative AI for business will be playing with fire if it decides to use the technology without the guardrails required to protect its stakeholders, brand, and reputation.

We encourage our clients to think strategically about which areas of their operations will benefit most from deploying generative AI and where it might pose the biggest risks. We also encourage them to develop robust policies to steer and monitor that use.

 

Empowered Intelligence at Aspidistra

At Aspidistra, we believe that human intelligence augments generative AI with empathy, context, real-world expertise, and ethical oversight. We promote the innovative and responsible use of generative AI and add value by interpreting its results. We use generative AI to empower clients, improve outcomes, and create a positive impact. To that end, we have enshrined the following core principles, among others, in policy.

  • Protection of confidential data

We do not enter confidential data, such as non-public client data or financial, HR, personnel, or partner records, into publicly available generative AI tools.

  • Respect for intellectual property

When we use generative AI tools, we respect intellectual property and avoid violating the rights of creators and copyright holders.

  • Content validation

AI-generated content can be inaccurate, misleading, or entirely fabricated (“hallucinations”) or may contain copyrighted material. We continually review and fact-check information provided by generative AI assistants.

  • Generative AI transparency

We always disclose to our clients and other stakeholders when generative AI has been used in the content we deliver, including which tools were used and how.

  • Selection of generative AI tools

We are committed to the ethical and responsible use of generative AI. As such, we select tools with care and in accordance with the principles of ethical use and accountability.

We call our approach Empowered Intelligence.

We encourage everyone interested in incorporating AI tools to improve their speed and efficiency to establish a robust framework within which to use them and to mitigate the risks inherent in their use. This is one area in which moving fast and breaking things has the potential to harm individuals as well as brands.

Written by Denise Wall, Aspidistra CEO

*No generative AI was used in the creation of this article.
