Databricks has released KARL, an RL-trained RAG agent that it says handles all six enterprise search categories at 33% lower cost than frontier models.
MIT researchers developed Attention Matching, a KV cache compaction technique that compresses LLM memory by 50x in seconds — ...
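The brief doesn't say how Attention Matching actually works, but the general idea of KV cache compaction can be sketched with a simple, hypothetical attention-score-based eviction policy (in the spirit of heavy-hitter pruning, not the MIT method itself): keep only the small fraction of cached tokens that received the most attention.

```python
import numpy as np

def compact_kv_cache(keys, values, attn_scores, keep_ratio=0.02):
    """Illustrative KV cache compaction: keep the most-attended tokens.

    keys, values: (seq_len, d) arrays for one attention head.
    attn_scores: (seq_len,) cumulative attention each cached token received.
    keep_ratio=0.02 keeps 1 in 50 tokens, i.e. roughly 50x compression.
    """
    seq_len = keys.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k highest-scoring tokens, restored to original order
    keep = np.sort(np.argsort(attn_scores)[-k:])
    return keys[keep], values[keep], keep

# Toy usage: 1000 cached tokens with 64-dim keys/values
rng = np.random.default_rng(0)
K = rng.standard_normal((1000, 64))
V = rng.standard_normal((1000, 64))
scores = rng.random(1000)
K2, V2, idx = compact_kv_cache(K, V, scores)
print(K.shape[0] / K2.shape[0])  # compression factor: 50.0
```

The function names, the eviction heuristic, and the 2% keep ratio are all assumptions chosen to illustrate what a 50x cache reduction looks like mechanically; the actual technique may score, merge, or reconstruct entries very differently.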