- User-LLM: Efficient LLM Contextualization with User Embeddings
  Paper • 2402.13598 • Published • 21
- ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
  Paper • 2403.03853 • Published • 65
- From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples
  Paper • 2404.07544 • Published • 20
Shannon Sands
ssands1979
AI & ML interests
None yet
Recent Activity
- liked a model about 12 hours ago: armand0e/Qwen3.5-9B-Pi-Agent-LoRA
- upvoted a collection about 12 hours ago: Qwen3.5
- upvoted a paper 1 day ago: Efficient Pre-Training with Token Superposition