veryLLM Checks the Basis of Every Sentence in an LLM Output; Provides Critical Capability for LLM Providers to Bring Transparency to LLM Responses
PALO ALTO, Calif., Sept. 15, 2023 /PRNewswire/ — Vianai Systems, the leader in human-centered AI (H+AI) for the enterprise, today announced the release of veryLLM, an open-source toolkit that enables reliable, transparent and transformative AI systems for enterprises. The veryLLM toolkit empowers developers and data scientists to build a much-needed transparency layer into Large Language Models (LLMs) and to evaluate the accuracy and authenticity of AI-generated responses, addressing a critical challenge that has prevented many enterprises from deploying LLMs: the risk of false responses.
AI hallucinations
AI hallucinations, in which LLMs produce false, offensive or otherwise inaccurate or unethical responses, raise particularly challenging issues for enterprises, because the risks of financial, reputational, legal and/or ethical consequences are extremely high. Left unaddressed by LLM providers, the hallucination problem has continued to plague the industry and hinder adoption, with many enterprises simply unwilling to bring the risk of hallucinations into their mission-critical systems. Vianai is releasing the veryLLM toolkit under the Apache 2.0 open-source license to make this capability available for anyone to use, to build trust and to drive adoption of AI systems.
How veryLLM works
The veryLLM toolkit introduces a foundational ability to understand the basis of every sentence generated by an LLM via several built-in functions. These functions classify statements into distinct categories using context pools that LLMs are trained on (e.g., Wikipedia, Common Crawl, Books3 and others), with the introductory release of veryLLM based on a subset of Wikipedia articles. Given that most publicly disclosed LLM training datasets include Wikipedia, this approach provides a robust foundation for the veryLLM verification process. Developers can use veryLLM in any application that leverages LLMs to provide transparency on AI-generated responses. The veryLLM functions are designed to be modular and extensible and to work alongside any LLM, supporting both existing and future language models.
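The release does not document veryLLM's actual function signatures, but the approach it describes, checking each generated sentence against a context pool such as a Wikipedia subset, can be illustrated with a minimal Python sketch. Everything below is an assumption made for illustration and is not the toolkit's real interface: the verify_sentence helper, the sentence-transformers embedding model, the two sample passages standing in for a context pool, and the similarity threshold standing in for a trained support classifier.

    # Illustrative sketch only -- NOT the actual veryLLM API.
    # Assumptions: a tiny in-memory "context pool" of Wikipedia-style passages,
    # sentence-transformers for retrieval, and a similarity threshold in place
    # of whatever classifier the real toolkit provides.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    # Hypothetical context pool: passages drawn from a Wikipedia subset.
    context_pool = [
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "Python is a high-level, general-purpose programming language.",
    ]
    pool_embeddings = model.encode(context_pool, convert_to_tensor=True)

    def verify_sentence(sentence: str, threshold: float = 0.6) -> dict:
        """Label one generated sentence as 'supported' or 'unverified'
        based on its similarity to the closest passage in the context pool."""
        emb = model.encode(sentence, convert_to_tensor=True)
        scores = util.cos_sim(emb, pool_embeddings)[0]
        best = int(scores.argmax())
        supported = float(scores[best]) >= threshold
        return {
            "sentence": sentence,
            "label": "supported" if supported else "unverified",
            "closest_passage": context_pool[best],
            "score": round(float(scores[best]), 3),
        }

    # Check every sentence of an LLM response independently.
    llm_response = (
        "The Eiffel Tower is located in Paris. "
        "It was painted bright green in 2021."
    )
    for s in llm_response.split(". "):
        print(verify_sentence(s.strip(". ")))

In a real deployment the context pool would be the Wikipedia-derived corpus the toolkit is built on, and the threshold check would be replaced by the library's own classification functions; the point of the sketch is only the per-sentence loop that gives each statement a verifiable basis.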
“AI hallucinations pose serious risks for enterprises, holding back their adoption of AI. As a student of AI for many years, it is also just well-known that we cannot allow these powerful systems to be opaque about the basis of their outputs, and we need to urgently solve this. Our veryLLM library is a small first step to bring transparency and confidence to the outputs of any LLM – transparency that any developer, data scientist or LLM provider can use in their AI applications,” said Dr. Vishal Sikka, Founder and CEO of Vianai Systems and advisor to Stanford University’s Center for Human-Centered Artificial Intelligence. “We are excited to bring these capabilities, and many other anti-hallucination techniques, to enterprises worldwide, and I believe this is why we are seeing unprecedented adoption of our solutions.”
Try veryLLM here.
Access the code here.
hila™ Enterprise
hila Enterprise, Vianai’s platform and applications for safely and reliably deploying large language model (LLM) solutions for finance, contracts, legal, HR and more, focuses first and foremost on the accuracy and transparency of AI systems for enterprises. All of Vianai’s hila applications leverage powerful techniques and technologies, including its Zero Hallucination™ capabilities and the veryLLM code with its proprietary UI, together with many other AI techniques, to help enterprises minimize the risks of AI and take full advantage of the transformative potential of reliable AI systems.
About Vianai Systems
Vianai Systems, Inc. is a human-centered AI (H+AI™) platform and products company focused on bringing trustworthy, responsible and transformative AI systems to enterprises worldwide. Vianai’s customers include many of the largest, most-respected businesses in the world; its team has unmatched expertise in building enterprise platforms and breakthrough applications; and it is backed by investors globally recognized as industry luminaries for their entrepreneurship, innovation and leadership. Follow @VianaiSystems on Twitter and LinkedIn.
Vianai Systems Media Contact
Jackie D’Andrea
781-820-5476
365430@email4pr.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/vianai-introduces-powerful-open-source-toolkit-to-verify-accuracy-of-llm-generated-responses-301929386.html
SOURCE Vianai Systems