bartowski/Hermes-3-Llama-3.1-8B-lorablated-GGUF

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| Hermes-3-Llama-3.1-8B-lorablated-f32.gguf | f32 | 32.13GB | false | Full F32 weights. |
| Hermes-3-Llama-3.1-8B-lorablated-Q8_0.gguf | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. |
| Hermes-3-Llama-3.1-8B-lorablated-Q6_K_L.gguf | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q6_K.gguf | Q6_K | 6.60GB | false | Very high quality, near perfect, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q5_K_L.gguf | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q5_K_M.gguf | Q5_K_M | 5.73GB | false | High quality, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q5_K_S.gguf | Q5_K_S | 5.60GB | false | High quality, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q4_K_L.gguf | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q4_K_M.gguf | Q4_K_M | 4.92GB | false | Good quality, default size for most use cases, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q3_K_XL.gguf | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| Hermes-3-Llama-3.1-8B-lorablated-Q4_K_S.gguf | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-IQ4_XS.gguf | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-Q3_K_L.gguf | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. |
| Hermes-3-Llama-3.1-8B-lorablated-Q3_K_M.gguf | Q3_K_M | 4.02GB | false | Low quality. |
| Hermes-3-Llama-3.1-8B-lorablated-IQ3_M.gguf | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| Hermes-3-Llama-3.1-8B-lorablated-Q2_K_L.gguf | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| Hermes-3-Llama-3.1-8B-lorablated-Q3_K_S.gguf | Q3_K_S | 3.66GB | false | Low quality, not recommended. |
| Hermes-3-Llama-3.1-8B-lorablated-IQ3_XS.gguf | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Hermes-3-Llama-3.1-8B-lorablated-Q2_K.gguf | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. |
| Hermes-3-Llama-3.1-8B-lorablated-IQ2_M.gguf | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
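
A common way to choose from this table is to take the largest quant whose file fits your RAM/VRAM budget with some headroom left for the KV cache and runtime overhead. The sketch below encodes the sizes from the table above; `pick_quant` and the 1.5GB headroom default are illustrative assumptions, not part of the model card.

```python
# Sketch: pick the largest quant that fits a memory budget.
# Sizes (GB) come from the table above; the headroom figure is a
# rough rule of thumb for KV cache/runtime overhead, not exact.
QUANTS = [  # ordered largest-first
    ("Q8_0", 8.54), ("Q6_K_L", 6.85), ("Q6_K", 6.60),
    ("Q5_K_L", 6.06), ("Q5_K_M", 5.73), ("Q5_K_S", 5.60),
    ("Q4_K_L", 5.31), ("Q4_K_M", 4.92), ("Q3_K_XL", 4.78),
    ("Q4_K_S", 4.69), ("IQ4_XS", 4.45), ("Q3_K_L", 4.32),
    ("Q3_K_M", 4.02), ("IQ3_M", 3.78), ("Q2_K_L", 3.69),
    ("Q3_K_S", 3.66), ("IQ3_XS", 3.52), ("Q2_K", 3.18),
    ("IQ2_M", 2.95),
]

def pick_quant(budget_gb: float, headroom_gb: float = 1.5):
    """Return the largest quant whose file size fits budget minus headroom."""
    for name, size_gb in QUANTS:
        if size_gb <= budget_gb - headroom_gb:
            return name
    return None  # nothing fits; consider CPU offload or a smaller model
```

For example, with an 8GB GPU this picks Q5_K_L, while 12GB comfortably fits Q8_0.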