Together AI, an AI research company, published a post detailing their work on extending the context length of large language models (LLMs) to 32,000 tokens, releasing LLaMA-2-7B-32K, an open-source 32K-context model built on top of the 4K-context LLaMA-2 base model.
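Because the model is released as open source, it can be loaded like any other causal LM through the Hugging Face transformers library. The sketch below assumes the checkpoint is published under the togethercomputer/LLaMA-2-7B-32K repository id and that the repository ships custom attention code requiring trust_remote_code; adjust to the actual model card if these assumptions differ.

```python
# Minimal sketch: loading the 32K-context checkpoint with transformers.
# Assumes the Hugging Face repo id "togethercomputer/LLaMA-2-7B-32K" and
# an installed `accelerate` package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/LLaMA-2-7B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # place layers on available GPUs automatically
    trust_remote_code=True,  # assumption: repo includes custom attention code
)

# Long-context prompts (up to 32K tokens) are the intended use case.
prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```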