Trustworthy and Efficient LLMs Meet Databases

In the rapidly evolving AI era, with large language models (LLMs) at its core, making LLMs more trustworthy and efficient, especially during output generation (inference), has gained significant attention. These efforts aim to reduce plausible but faulty LLM outputs (a.k.a. hallucinations) and to meet rapidly growing inference demands. This tutorial surveys such efforts and makes them accessible to the database community. Understanding them is essential for harnessing LLMs in database tasks and for adapting database techniques to LLMs. Furthermore, we delve into the synergy between LLMs and databases, highlighting new opportunities and challenges at their intersection. This tutorial aims to share with database researchers and practitioners essential concepts and strategies around LLMs, to reduce unfamiliarity with LLMs, and to inspire work at the intersection of LLMs and databases.
