bartowski/Meta-Llama-3.1-70B-Instruct-GGUF-torrent

bartowski/Meta-Llama-3.1-70B-Instruct-GGUF · Hugging Face
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| Meta-Llama-3.1-70B-Instruct-Q8_0.gguf | Q8_0 | 74.98GB | true | Extremely high quality, generally unneeded but max available quant. |
| Meta-Llama-3.1-70B-Instruct-Q6_K_L.gguf | Q6_K_L | 58.40GB | true | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q6_K.gguf | Q6_K | 57.89GB | true | Very high quality, near perfect, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q5_K_L.gguf | Q5_K_L | 50.60GB | true | Uses Q8_0 for embed and output weights. High quality, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q5_K_M.gguf | Q5_K_M | 49.95GB | true | High quality, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q4_K_L.gguf | Q4_K_L | 43.30GB | false | Uses Q8_0 for embed and output weights. Good quality, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q4_K_M.gguf | Q4_K_M | 42.52GB | false | Good quality, default size for most use cases, recommended. |
| Meta-Llama-3.1-70B-Instruct-IQ4_XS.gguf | IQ4_XS | 37.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Meta-Llama-3.1-70B-Instruct-Q5_K_S.gguf | Q5_K_S | 36.13GB | false | High quality, recommended. |
| Meta-Llama-3.1-70B-Instruct-IQ3_M.gguf | IQ3_M | 31.94GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| Meta-Llama-3.1-70B-Instruct-Q3_K_S.gguf | Q3_K_S | 30.91GB | false | Low quality, not recommended. |
| Meta-Llama-3.1-70B-Instruct-IQ3_XS.gguf | IQ3_XS | 29.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Meta-Llama-3.1-70B-Instruct-Q2_K_L.gguf | Q2_K_L | 27.40GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| Meta-Llama-3.1-70B-Instruct-Q2_K.gguf | Q2_K | 26.38GB | false | Very low quality but surprisingly usable. |
| Meta-Llama-3.1-70B-Instruct-IQ2_M.gguf | IQ2_M | 24.12GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
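A common way to use this table is to pick the largest quant that fits in your available RAM/VRAM, leaving some headroom for the KV cache and runtime overhead. The sketch below encodes the file sizes from the table above and selects a quant for a given memory budget; `pick_quant` and the 2 GB default headroom are illustrative assumptions, not part of the model card.

```python
# File sizes in GB, taken from the quant table above.
QUANTS = {
    "Q8_0": 74.98, "Q6_K_L": 58.40, "Q6_K": 57.89,
    "Q5_K_L": 50.60, "Q5_K_M": 49.95, "Q4_K_L": 43.30,
    "Q4_K_M": 42.52, "IQ4_XS": 37.90, "Q5_K_S": 36.13,
    "IQ3_M": 31.94, "Q3_K_S": 30.91, "IQ3_XS": 29.31,
    "Q2_K_L": 27.40, "Q2_K": 26.38, "IQ2_M": 24.12,
}

def pick_quant(budget_gb, headroom_gb=2.0):
    """Return the largest quant whose file fits in budget_gb minus headroom.

    headroom_gb is a rough allowance for KV cache and runtime overhead
    (a hypothetical default; tune it for your context length and backend).
    Returns None if nothing fits.
    """
    usable = budget_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANTS.items() if size <= usable]
    return max(fitting)[1] if fitting else None

print(pick_quant(48.0))   # 48 GB total: largest fit is Q4_K_L (43.30 GB)
print(pick_quant(20.0))   # too small for any quant listed: None
```

Note that the quality descriptions in the table matter as much as raw size: IQ4_XS, for example, is smaller than Q4_K_S with similar performance, so a pure size-based pick is only a starting point.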