Paper: Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp as the base model.
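As background on the method, the sketch below illustrates the per-tensor idea from the Model Stock paper: the fine-tuned weights are averaged and then interpolated back toward the base weights, with the ratio derived from the angle between the fine-tuned task vectors. This is a rough illustration of the paper's formula, not mergekit's exact implementation; the function name and structure are hypothetical.

# Per-tensor sketch of the Model Stock idea (arXiv:2403.19522).
# Assumes at least two fine-tuned checkpoints; illustrative only.
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]   # task vectors relative to the base
    # Average pairwise cosine similarity between the task vectors.
    cos = torch.stack([
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]).mean()
    t = n * cos / ((n - 1) * cos + 1)            # interpolation ratio from the paper
    w_avg = torch.stack(finetuned).mean(dim=0)   # plain average of the fine-tuned weights
    return t * w_avg + (1 - t) * base            # pull the average back toward the base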
The following models were included in the merge:
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- mlabonne/NeuralDaredevil-8B-abliterated
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- bunnycore/HyperLlama-3.1-8B
The following YAML configuration was used to produce this model:
models:
- model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- model: mlabonne/NeuralDaredevil-8B-abliterated
- model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- model: bunnycore/HyperLlama-3.1-8B
- model: darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
merge_method: model_stock
base_model: darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
dtype: bfloat16
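To reproduce the merge, the configuration above can be saved locally (merge-config.yml is a hypothetical filename) and run with mergekit's mergekit-yaml CLI, or through its Python API. The snippet below is a minimal sketch assuming the entry points described in mergekit's documentation (MergeConfiguration, run_merge, MergeOptions); the output path is illustrative.

import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as merge-config.yml).
with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the Model Stock merge and write the merged model to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
    ),
)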