# llama_pro_base_dare_tie
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as the base model.
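TIES turns each fine-tuned model into a task vector (its delta from the base), trims each vector to its largest-magnitude `density` fraction, elects a per-parameter sign, and sums only the entries that agree with that sign. The sketch below illustrates the idea for a single tensor in PyTorch; it is a minimal illustration under those assumptions, not mergekit's actual implementation, and the `ties_merge` helper and toy tensors are hypothetical.

```python
import torch

def ties_merge(base, tuned, densities, weights, normalize=True):
    """Illustrative single-tensor TIES merge (not mergekit's code)."""
    task_vectors = []
    for ft, density in zip(tuned, densities):
        tv = ft - base                                  # task vector
        k = max(1, int(density * tv.numel()))           # entries to keep
        cutoff = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        task_vectors.append(tv * (tv.abs() >= cutoff))  # trim small entries
    tvs = torch.stack(task_vectors)                     # (n_models, *shape)
    w = torch.tensor(weights, dtype=tvs.dtype).view(-1, *([1] * base.dim()))
    elected = torch.sign((w * tvs).sum(dim=0))          # elect per-parameter sign
    keep = torch.sign(tvs) == elected                   # models agreeing with it
    merged = (w * tvs * keep).sum(dim=0)
    if normalize:
        # Rescale by the total weight that actually contributed to each entry.
        denom = (w * keep).sum(dim=0).clamp(min=1e-8)
        merged = merged / denom
    return base + merged

# Toy usage mirroring the config: two models at density 0.5, weight 0.2 each.
base = torch.randn(4, 4)
tuned = [base + torch.randn(4, 4), base + torch.randn(4, 4)]
merged = ties_merge(base, tuned, densities=[0.5, 0.5], weights=[0.2, 0.2])
```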
### Models Merged
The following models were included in the merge:
* [GreatCaptainNemo/ProLLaMA](https://huggingface.co/GreatCaptainNemo/ProLLaMA)
* [dnagpt/llama-dna](https://huggingface.co/dnagpt/llama-dna)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Llama-2-7b-hf
  - model: GreatCaptainNemo/ProLLaMA
    parameters:
      density: 0.5
      weight: 0.2
  - model: dnagpt/llama-dna
    parameters:
      density: 0.5
      weight: 0.2
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
  normalize: true
dtype: float16
```
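Assuming the YAML above is saved as `config.yaml`, the merge can be reproduced with mergekit's `mergekit-yaml config.yaml ./merged` command line. The resulting checkpoint then loads like any Llama-2 model in `float16`; a brief sketch with transformers, where `./merged` is a hypothetical output directory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # hypothetical path; use the merge's output directory
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

prompt = "The primary structure of a protein is"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```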