News, Trends, and Insights for IT & Managed Services Providers

NIST Launches GenAI Program to Set Benchmarks for Generative AI and Enhance Transparency

Written by

Dave Sobel, host of the Business of Tech podcast

Published on

May 1, 2024

The National Institute of Standards and Technology (NIST) has launched NIST GenAI, a program to assess generative AI technologies. The program will release benchmarks, develop deepfake-checking systems, and encourage the creation of software to detect the source of AI-generated information. NIST GenAI’s first project is a pilot study to differentiate between human-created and AI-generated media, starting with text. The program will invite teams to submit AI systems to generate or identify AI-generated content. The launch of NIST GenAI is part of NIST’s response to President Joe Biden’s AI transparency rules and will inform the work of NIST’s AI Safety Institute.

Eight major daily newspapers, including the Chicago Tribune and the New York Daily News, have sued OpenAI and Microsoft for using their news articles to train AI tools without compensating content owners. The lawsuit raises concerns about the impact of AI tools on the news industry and calls for fair compensation for the use of copyrighted work. AI companies argue that using news articles for training qualifies as fair use, but news publishers disagree and demand a stop to the practice.

And while I’m on AI, a report by Getty Images highlights that consumers value authentic content and are wary of AI-generated images. Businesses should use synthetic images tactfully, considering their audience and intent, keeping authenticity as the focus, and understanding the data on which AI tools were trained. Industries with higher trust expectations, such as healthcare and finance, should be especially cautious. AI-generated content may not be suitable for campaigns emphasizing authenticity or featuring real people, but it can work for non-human elements.

Why do we care?

Businesses need to consider carefully how they integrate AI-generated content into their marketing strategies, particularly in sectors where trust is paramount, such as healthcare and finance.  The key insight is transparency.  

I won’t dwell on the lawsuits – this is a tactical update – but I will dwell on the benchmarks. Establishing rigorous benchmarks for generative AI will help standardize how these technologies are assessed and used, fostering safer and more reliable applications. By setting benchmarks and developing systems to assess and identify AI-generated content, NIST aims to enhance transparency and safety in the use of such technologies.

And the opportunity is in applying that in business.   
