Active filters: rlhf
gyung/lfm2-1.2b-koen-mt-v5-rl-10k-adapter • Text Generation • 72 downloads • 1 like
sileod/deberta-v3-base-tasksource-nli • Zero-Shot Classification • 0.2B params • 16.3k downloads • 133 likes
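The tasksource NLI checkpoints in this listing are tagged for zero-shot classification. A minimal sketch of the usual transformers pipeline usage is below; the input sentence and candidate labels are illustrative, not from the model card.

from transformers import pipeline

# Zero-shot classification via an NLI model from the listing above.
classifier = pipeline(
    "zero-shot-classification",
    model="sileod/deberta-v3-base-tasksource-nli",
)

result = classifier(
    "The reward model assigns higher scores to helpful answers.",
    candidate_labels=["machine learning", "cooking", "sports"],  # illustrative labels
)
print(result["labels"][0], result["scores"][0])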
stanfordnlp/SteamSHP-flan-t5-xl • 50 downloads • 43 likes
stanfordnlp/SteamSHP-flan-t5-large • 324 downloads • 33 likes
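The two stanfordnlp/SteamSHP checkpoints are FLAN-T5-based preference models: given a post and two candidate responses, they generate the letter of the preferred response. A rough sketch using the text2text-generation pipeline follows; the prompt template is paraphrased from memory of the model card and should be checked against it.

from transformers import pipeline

# SteamSHP is a seq2seq preference model: it emits "A" or "B" for the preferred response.
judge = pipeline("text2text-generation", model="stanfordnlp/SteamSHP-flan-t5-large")

post = "How do I keep my sourdough starter alive while on vacation?"
resp_a = "Refrigerate it and feed it once before you leave."
resp_b = "Just throw it out and buy a new one later."

# Prompt format below is an assumption; verify the exact template on the model card.
prompt = (
    f"POST: {post}\n\n"
    f"RESPONSE A: {resp_a}\n\n"
    f"RESPONSE B: {resp_b}\n\n"
    "Which response is better? RESPONSE"
)
print(judge(prompt, max_new_tokens=1)[0]["generated_text"])  # expected output: "A" or "B"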
sileod/deberta-v3-large-tasksource-nli • Zero-Shot Classification • 0.4B params • 218 downloads • 37 likes
sileod/deberta-v3-large-tasksource-rlhf-reward-model • Text Classification • 21 downloads • 11 likes
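The rlhf-reward-model checkpoint above is listed under Text Classification, the usual packaging for a scalar reward model. The sketch below assumes a standard sequence-classification head whose logit serves as the reward and a prompt/response sentence-pair input; both assumptions should be checked against the model card.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: single-logit sequence-classification head used as a scalar reward.
name = "sileod/deberta-v3-large-tasksource-rlhf-reward-model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

prompt = "Explain RLHF in one sentence."
response = "RLHF fine-tunes a language model against a learned reward model of human preferences."

inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    reward = model(**inputs).logits.squeeze()
print(reward)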
trl-lib/llama-7b-se-rl-peft • 103
trl-lib/llama-7b-se-rm-peft
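The two trl-lib checkpoints above are PEFT (LoRA) adapters rather than full models, so a base model has to be loaded first. A minimal sketch with the peft library is below; access to the underlying LLaMA base weights recorded in the adapter config is an assumption, and the example prompt is illustrative.

from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the adapter config to find the base model it was trained on.
adapter_id = "trl-lib/llama-7b-se-rl-peft"
config = PeftConfig.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Question: How do I reverse a list in Python?\n\nAnswer:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))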
toloka/gpt2-large-rl-prompt-writing • Text Generation • 0.8B params • 17 downloads • 3 likes
AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed • Text Generation • 31 downloads • 5 likes
AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed • Text Generation • 21 downloads • 3 likes
AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed • Text Generation • 16 downloads • 8 likes
sileod/mdeberta-v3-base-tasksource-nli • Zero-Shot Classification • 0.3B params • 141 downloads • 18 likes
argilla/roberta-base-reward-model-falcon-dolly • Text Classification • 22 downloads • 4 likes
PKU-Alignment/beaver-7b-v1.0 • Reinforcement Learning • 7B params • 115 downloads • 13 likes
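Although beaver-7b-v1.0 is tagged "Reinforcement Learning" in the listing, it is a LLaMA-based chat model and loads with the standard causal-LM classes, as sketched below. The conversation template is paraphrased and should be verified against the model card; the companion -reward and -cost checkpoints further down are score models intended to be loaded through PKU-Alignment's safe-rlhf codebase rather than this path.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# beaver-7b-v1.0 is a LLaMA-based chat model trained with Safe RLHF.
name = "PKU-Alignment/beaver-7b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

# Conversation template below is paraphrased; check the model card for the exact format.
prompt = "BEGINNING OF CONVERSATION: USER: How can I dispose of old batteries safely? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))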
lyogavin/Anima33B-DPO-Belle-1k • Text Generation • 1
lyogavin/Anima33B-DPO-Belle-1k-merged • Text Generation • 22 downloads • 12 likes
PKU-Alignment/beaver-7b-v1.0-reward • Reinforcement Learning • 7B params • 1.08k downloads • 17 likes
PKU-Alignment/beaver-dam-7b • 2.23k downloads • 13 likes
PKU-Alignment/beaver-7b-v1.0-cost • Reinforcement Learning • 7B params • 1.06k downloads • 10 likes
Ablustrund/moss-rlhf-reward-model-7B-zh • 5 downloads • 23 likes
OpenMOSS-Team/moss-rlhf-reward-model-7B-en
OpenMOSS-Team/moss-rlhf-sft-model-7B-en
OpenMOSS-Team/moss-rlhf-policy-model-7B-en
lightonai/alfred-40b-0723 • Text Generation • 41 downloads • 46 likes