# mlx-community/Qwen3-Reranker-8B-mxfp8
The model `mlx-community/Qwen3-Reranker-8B-mxfp8` was converted to MLX format from `Qwen/Qwen3-Reranker-8B` using mlx-embeddings version 0.0.3.
## Use with mlx

```bash
pip install mlx-embeddings
```
```python
from mlx_embeddings import load, generate
import mlx.core as mx

# Load the model and tokenizer
model, tokenizer = load("mlx-community/Qwen3-Reranker-8B-mxfp8")

# Generate text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # L2-normalized embeddings

# Dot products of normalized embeddings give cosine similarities
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
```
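Because `text_embeds` are L2-normalized, the dot product above is exactly cosine similarity, so no extra normalization step is needed. A minimal sketch of that equivalence with NumPy stand-in vectors (illustrative values, not actual model output):

```python
import numpy as np

# Two stand-in "embedding" vectors (not actual model output)
a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])

# L2-normalize each vector to unit length
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

# Dot product of unit vectors equals cosine similarity of the originals
dot = float(a_n @ b_n)
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(dot, cos)  # both 0.6
```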
## Model tree for mlx-community/Qwen3-Reranker-8B-mxfp8

Base model: `Qwen/Qwen3-8B-Base`