When a human-AI conversation involves many rounds of continuous dialogue, the powerful large language machine-learning models that drive chatbots like ChatGPT sometimes start to collapse, causing the bots' performance to rapidly deteriorate.

A team of researchers from MIT and elsewhere has pinpointed a surprising cause of this problem and developed a simple solution that enables a chatbot to maintain a nonstop conversation without crashing or slowing down.

Their method involves a tweak to the key-value cache (which is like a conversation memory) at the core of many large language models. In some methods, when this cache needs to hold more information than it has capacity for, the first pieces of data are bumped out. This can cause the model to fail.

By ensuring that these first few data points remain in memory, the researchers' method allows a chatbot to keep chatting no matter how long the conversation goes.

The method, called StreamingLLM, enables a model to remain efficient even when a conversation stretches on for more than 4 million words. When compared to another method that avoids crashing by constantly recomputing part of the past conversations, StreamingLLM performed more than 22 times faster.

This could allow a chatbot to conduct long conversations throughout the workday without needing to be continually rebooted, enabling efficient AI assistants for tasks like copywriting, editing, or generating code.

“Now, with this method, we can persistently deploy these large language models. By making a chatbot that we can always chat with, and that can always respond to us based on our recent conversations, we could use these chatbots in some new applications,” says Guangxuan Xiao, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on StreamingLLM.

Xiao’s co-authors include his advisor, Song Han, an associate professor in EECS, a member of the MIT-IBM Watson AI Lab, and a distinguished scientist of NVIDIA; as well as Yuandong Tian, a research scientist at Meta AI; Beidi Chen, an assistant professor at Carnegie Mellon University; and senior author Mike Lewis, a research scientist at Meta AI. The work will be presented at the International Conference on Learning Representations.

Large language models encode data, like the words in a user query, into representations called tokens. Many models employ what is known as an attention mechanism that uses these tokens to generate new text.

Typically, an AI chatbot writes new text based on text it has just seen, so it stores recent tokens in memory, called a KV Cache, to use later. The attention mechanism builds a grid that includes all tokens in the cache, an “attention map” that charts how strongly each token, or word, relates to each other token.

Understanding these relationships is one feature that enables large language models to generate human-like text.
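To make the attention-map idea concrete, here is a minimal NumPy sketch (an illustration with assumed dimensions and names, not the researchers' implementation): given the key vectors cached for earlier tokens and a query vector for the newest token, scaled dot products passed through a softmax yield one row of the attention map.

```python
import numpy as np

# Toy attention-map row over a KV cache. Dimensions and names are
# illustrative assumptions, not taken from the StreamingLLM paper.
d = 64          # per-token embedding dimension (assumed)
cache_len = 8   # tokens currently held in the cache

rng = np.random.default_rng(0)
K = rng.standard_normal((cache_len, d))  # one cached key vector per token
q = rng.standard_normal(d)               # query vector for the newest token

# Scaled dot-product scores: how strongly the new token relates to each
# cached token; the softmax converts the scores to weights that sum to 1.
scores = K @ q / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
print(weights)
```

Computing such a row for every token in the cache produces the full cache_len × cache_len grid the article describes, which is also why the map, and the computation behind it, balloons as the cache grows.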
But when the cache gets very large, the attention map can become even more massive, which slows down computation.

Also, if encoding content requires more tokens than the cache can hold, the model’s performance drops. For instance, one popular model can store 4,096 tokens, yet an academic paper contains about 10,000 tokens.

To get around these problems, researchers employ a “sliding cache” that bumps out the oldest tokens to add new tokens. However, the model’s performance often plummets as soon as that first token is evicted, rapidly reducing the quality of newly generated words.

In this new paper, the researchers realized that if they keep the first token in the sliding cache, the model will maintain its performance even when the cache size is exceeded. But this didn’t make any sense.
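For illustration, here is a minimal Python sketch of that eviction policy, assuming a toy cache that pins the first “sink” tokens in place and keeps a sliding window over everything after them; the class name, capacity, and number of pinned tokens are assumptions, not the StreamingLLM code.

```python
from collections import deque

class SinkedSlidingCache:
    """Toy eviction policy: pin the first `num_sinks` tokens and keep a
    sliding window of the most recent tokens after them. An illustrative
    sketch, not the StreamingLLM implementation."""

    def __init__(self, capacity: int, num_sinks: int = 1):
        self.num_sinks = num_sinks
        self.pinned = []                                   # earliest tokens, never evicted
        self.window = deque(maxlen=capacity - num_sinks)   # a full deque drops its oldest entry

    def add(self, token):
        if len(self.pinned) < self.num_sinks:
            self.pinned.append(token)   # retain the first token(s) permanently
        else:
            self.window.append(token)   # evicts the oldest non-pinned token when full

    def contents(self):
        return self.pinned + list(self.window)

cache = SinkedSlidingCache(capacity=8)
for token in range(12):      # stream in more tokens than the cache can hold
    cache.add(token)
print(cache.contents())      # [0, 5, 6, 7, 8, 9, 10, 11]; token 0 survives eviction
```

A plain sliding cache would have evicted token 0 along with tokens 1 through 4; pinning it mirrors the paper’s observation that retaining the first token keeps the model’s performance from collapsing.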