
Frisco-Based HITRUST Releases First AI Assurance Program

The program is aimed at ensuring secure and sustainable AI use

Generative AI, exemplified by OpenAI's ChatGPT, is driving a surge of AI advancement. Frisco-based HITRUST recently released the industry's first AI assurance program.

HITRUST, an organization specializing in information risk management, standards, and certification, released a comprehensive AI strategy aimed at ensuring secure and sustainable AI use. The strategy lays out the components the organization considers essential for delivering dependable AI, and the HITRUST AI Assurance Program accordingly emphasizes integrating AI risk management into the newly revised version 11.2 of the HITRUST CSF.

"Risk management, security and assurance for AI systems requires that organizations contributing to the system understand the risks across the system and agree how they together secure the system," said Robert Booker, Chief Strategy Officer, HITRUST. 

According to Goldman Sachs research, Generative AI has the potential to increase global GDP by 7% over the coming decade. Organizations are keen to transform their operations and boost productivity across business functions to tap into the expanding realm of enterprise AI applications and unlock additional value. Like any new technology, however, Generative AI also introduces new risks.

"Trustworthy AI requires an understanding of how controls are implemented by all parties and shared and a practical, scalable, recognized, and proven approach for an AI system to inherit the right controls from their service providers,” Booker said. “We are building AI Assurances on a proven system that will provide the needed scalability and inspire confidence from all relying parties, including regulators, that care about a trustworthy foundation for AI implementations."

According to HITRUST, the AI Assurance Program enables AI users to integrate risk management into their AI initiatives effectively. This clarity about shared risks and responsibilities allows organizations to rely on information protection controls that are already in place, whether provided by internal shared IT services or by external third-party organizations.

Within the framework of the AI Assurance Program, HITRUST will soon unveil risk management recommendations for AI systems and introduce the concept of inheritance to bolster shared responsibility in AI.