bartowski/Hermes-3-Llama-3.1-8B-GGUF

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| Hermes-3-Llama-3.1-8B-f32.gguf | f32 | 32.13GB | false | Full F32 weights. |
| Hermes-3-Llama-3.1-8B-Q8_0.gguf | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. |
| Hermes-3-Llama-3.1-8B-Q6_K_L.gguf | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
| Hermes-3-Llama-3.1-8B-Q6_K.gguf | Q6_K | 6.60GB | false | Very high quality, near perfect, recommended. |
| Hermes-3-Llama-3.1-8B-Q5_K_L.gguf | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, recommended. |
| Hermes-3-Llama-3.1-8B-Q5_K_M.gguf | Q5_K_M | 5.73GB | false | High quality, recommended. |
| Hermes-3-Llama-3.1-8B-Q5_K_S.gguf | Q5_K_S | 5.60GB | false | High quality, recommended. |
| Hermes-3-Llama-3.1-8B-Q4_K_L.gguf | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, recommended. |
| Hermes-3-Llama-3.1-8B-Q4_K_M.gguf | Q4_K_M | 4.92GB | false | Good quality, default size for most use cases, recommended. |
| Hermes-3-Llama-3.1-8B-Q3_K_XL.gguf | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| Hermes-3-Llama-3.1-8B-Q4_K_S.gguf | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, recommended. |
| Hermes-3-Llama-3.1-8B-IQ4_XS.gguf | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Hermes-3-Llama-3.1-8B-Q3_K_L.gguf | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. |
| Hermes-3-Llama-3.1-8B-Q3_K_M.gguf | Q3_K_M | 4.02GB | false | Low quality. |
| Hermes-3-Llama-3.1-8B-IQ3_M.gguf | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| Hermes-3-Llama-3.1-8B-Q2_K_L.gguf | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| Hermes-3-Llama-3.1-8B-Q3_K_S.gguf | Q3_K_S | 3.66GB | false | Low quality, not recommended. |
| Hermes-3-Llama-3.1-8B-IQ3_XS.gguf | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Hermes-3-Llama-3.1-8B-Q2_K.gguf | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. |
| Hermes-3-Llama-3.1-8B-IQ2_M.gguf | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
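
To grab a single quant from this repo rather than the whole thing, you can fetch just the file you want with `huggingface_hub`. A minimal sketch, assuming `huggingface_hub` is installed; the Q4_K_M filename is picked from the table above purely as an example:

```python
# Minimal sketch: download one quant file from the repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; swap `filename` for the quant you chose from the table.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Hermes-3-Llama-3.1-8B-GGUF",
    filename="Hermes-3-Llama-3.1-8B-Q4_K_M.gguf",  # any filename from the table above
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF file
```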