EMOVA Online Interactive Demo
Live interactive demo for EMOVA with the Qwen-2.5 backbone
- Emova-ollm/emova-qwen-2-5-3b (Text Generation • 4B • Updated • 5 • 2)
- Emova-ollm/emova-qwen-2-5-3b-hf (Feature Extraction • 4B • Updated • 14 • 5)
- Emova-ollm/emova-qwen-2-5-7b (Text Generation • 8B • Updated • 5 • 1)
Collections including paper arxiv:2409.18042

- CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmentation Generation (Paper • 2410.23090 • Published • 55)
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs (Paper • 2410.13276 • Published • 29)
- Personalized Visual Instruction Tuning (Paper • 2410.07113 • Published • 70)
- Differential Transformer (Paper • 2410.05258 • Published • 180)

- Building and better understanding vision-language models: insights and future directions (Paper • 2408.12637 • Published • 133)
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model (Paper • 2408.11039 • Published • 63)
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming (Paper • 2408.16725 • Published • 53)
- Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders (Paper • 2408.15998 • Published • 86)

- iVideoGPT: Interactive VideoGPTs are Scalable World Models (Paper • 2405.15223 • Published • 17)
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models (Paper • 2405.15574 • Published • 55)
- An Introduction to Vision-Language Modeling (Paper • 2405.17247 • Published • 90)
- Matryoshka Multimodal Models (Paper • 2405.17430 • Published • 34)

- impira/layoutlm-document-qa (Document Question Answering • 0.1B • Updated • 7.7k • 1.15k)
- EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions (Paper • 2409.18042 • Published • 39)
- LLM-Powered GUI Agents in Phone Automation: Surveying Progress and Prospects (Paper • 2504.19838 • Published • 22)

- RLHF Workflow: From Reward Modeling to Online RLHF (Paper • 2405.07863 • Published • 71)
- Chameleon: Mixed-Modal Early-Fusion Foundation Models (Paper • 2405.09818 • Published • 132)
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models (Paper • 2405.15574 • Published • 55)
- An Introduction to Vision-Language Modeling (Paper • 2405.17247 • Published • 90)