04/24/2025, 07:46 AM UTC
Reducing Computational Costs for Reliable AI Responses
Researchers at ETH Zurich have developed a method that makes AI answers more reliable over time. Their algorithm is highly selective in choosing data. In addition, AI models up to 40 times smaller can achieve the same output performance as the best large AI models.
ChatGPT and similar tools often amaze us with the accuracy of their answers, but they also often give cause for doubt. The trouble with powerful AI answer machines is that they serve up perfect answers and obvious nonsense with equal ease. A major challenge lies in how the underlying large language models (LLMs) deal with uncertainty. Until now, it has been very difficult to judge whether LLMs, which are designed for text processing and generation, base their answers on a solid foundation of data or whether they are operating on uncertain ground.
Researchers at the Institute for Machine Learning in the Department of Computer Science at ETH Zurich have now developed a method that specifically reduces the uncertainty of AI. 'Our algorithm can selectively enrich the general language model of the AI with additional data from the thematic area relevant to the question. Combined with the specific question, we can then retrieve from the depths of the model and from the enrichment data precisely those relationships that are likely to generate a correct answer,' explains Jonas Hübotter from the Learning & Adaptive Systems Group, who developed the new method as part of his PhD studies.
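The idea can be pictured with a small, hypothetical sketch: given an embedding of the user's question and embeddings of candidate text snippets, data is chosen greedily so that each new snippet is relevant to the question but not redundant with snippets already selected; the chosen snippets could then be used to enrich a smaller model at answer time. The function names, the relevance-minus-redundancy scoring rule, and the use of cosine similarity below are illustrative assumptions, not the group's published algorithm.

```python
# Illustrative sketch only (not the ETH Zurich authors' code): greedy,
# query-aware data selection that rewards relevance to the question and
# penalizes redundancy among snippets already chosen. The selected snippets
# would then serve to enrich (e.g. fine-tune or prompt) a small model.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def select_enrichment_data(query_emb: np.ndarray,
                           doc_embs: list[np.ndarray],
                           k: int = 3,
                           redundancy_weight: float = 0.5) -> list[int]:
    """Greedily pick k documents: high similarity to the query,
    low similarity to documents that were already selected."""
    selected: list[int] = []
    candidates = set(range(len(doc_embs)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            relevance = cosine(query_emb, doc_embs[i])
            redundancy = max((cosine(doc_embs[i], doc_embs[j]) for j in selected),
                             default=0.0)
            return relevance - redundancy_weight * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.normal(size=64)                       # embedding of the question
    docs = [rng.normal(size=64) for _ in range(20)]   # embeddings of candidate snippets
    picked = select_enrichment_data(query, docs, k=3)
    print("indices of snippets chosen to enrich the model:", picked)
```

The redundancy penalty is what makes such a selection scheme different from plain nearest-neighbour retrieval, which can return many near-copies of the same information instead of complementary pieces of evidence.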