bartowski/Phi-3.5-mini-instruct-GGUF-torrent

bartowski/Phi-3.5-mini-instruct-GGUF · Hugging Face
| Filename | Quant Type | File Size | Split | Description |
| --- | --- | --- | --- | --- |
| Phi-3.5-mini-instruct-f32.gguf | f32 | 15.29GB | False | Full F32 weights. |
| Phi-3.5-mini-instruct-Q8_0.gguf | Q8_0 | 4.06GB | False | Extremely high quality, generally unneeded but max available quant. |
| Phi-3.5-mini-instruct-Q6_K_L.gguf | Q6_K_L | 3.18GB | False | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
| Phi-3.5-mini-instruct-Q6_K.gguf | Q6_K | 3.14GB | False | Very high quality, near perfect, recommended. |
| Phi-3.5-mini-instruct-Q5_K_L.gguf | Q5_K_L | 2.88GB | False | Uses Q8_0 for embed and output weights. High quality, recommended. |
| Phi-3.5-mini-instruct-Q5_K_M.gguf | Q5_K_M | 2.82GB | False | High quality, recommended. |
| Phi-3.5-mini-instruct-Q5_K_S.gguf | Q5_K_S | 2.64GB | False | High quality, recommended. |
| Phi-3.5-mini-instruct-Q4_K_L.gguf | Q4_K_L | 2.47GB | False | Uses Q8_0 for embed and output weights. Good quality, recommended. |
| Phi-3.5-mini-instruct-Q4_K_M.gguf | Q4_K_M | 2.39GB | False | Good quality, default size for most use cases, recommended. |
| Phi-3.5-mini-instruct-Q4_K_S.gguf | Q4_K_S | 2.19GB | False | Slightly lower quality with more space savings, recommended. |
| Phi-3.5-mini-instruct-Q3_K_XL.gguf | Q3_K_XL | 2.17GB | False | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| Phi-3.5-mini-instruct-Q3_K_L.gguf | Q3_K_L | 2.09GB | False | Lower quality but usable, good for low RAM availability. |
| Phi-3.5-mini-instruct-IQ4_XS.gguf | IQ4_XS | 2.06GB | False | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Phi-3.5-mini-instruct-Q3_K_M.gguf | Q3_K_M | 1.96GB | False | Low quality. |
| Phi-3.5-mini-instruct-IQ3_M.gguf | IQ3_M | 1.86GB | False | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| Phi-3.5-mini-instruct-Q3_K_S.gguf | Q3_K_S | 1.68GB | False | Low quality, not recommended. |
| Phi-3.5-mini-instruct-IQ3_XS.gguf | IQ3_XS | 1.63GB | False | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Phi-3.5-mini-instruct-Q2_K_L.gguf | Q2_K_L | 1.51GB | False | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| Phi-3.5-mini-instruct-Q2_K.gguf | Q2_K | 1.42GB | False | Very low quality but surprisingly usable. |
| Phi-3.5-mini-instruct-IQ2_M.gguf | IQ2_M | 1.32GB | False | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
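Picking a file from the table above comes down to matching its size against the memory you can spare. The helper below is a minimal sketch of that selection logic (it is not part of the model card, and the `pick_quant` name is hypothetical); the filenames and sizes are taken verbatim from the table, sorted largest first so the first file that fits is the highest-quality one within budget.

```python
# Hypothetical helper: choose the largest (highest-quality) quant that fits a
# memory budget. Filenames and sizes (GB) are copied from the table above,
# ordered from largest to smallest.
QUANTS = [
    ("Phi-3.5-mini-instruct-Q8_0.gguf", 4.06),
    ("Phi-3.5-mini-instruct-Q6_K_L.gguf", 3.18),
    ("Phi-3.5-mini-instruct-Q6_K.gguf", 3.14),
    ("Phi-3.5-mini-instruct-Q5_K_L.gguf", 2.88),
    ("Phi-3.5-mini-instruct-Q5_K_M.gguf", 2.82),
    ("Phi-3.5-mini-instruct-Q5_K_S.gguf", 2.64),
    ("Phi-3.5-mini-instruct-Q4_K_L.gguf", 2.47),
    ("Phi-3.5-mini-instruct-Q4_K_M.gguf", 2.39),
    ("Phi-3.5-mini-instruct-Q4_K_S.gguf", 2.19),
    ("Phi-3.5-mini-instruct-IQ4_XS.gguf", 2.06),
    ("Phi-3.5-mini-instruct-IQ3_M.gguf", 1.86),
    ("Phi-3.5-mini-instruct-IQ2_M.gguf", 1.32),
]

def pick_quant(budget_gb: float):
    """Return the first (largest) quant file whose size fits the budget,
    or None if even the smallest listed file does not fit."""
    for name, size_gb in QUANTS:
        if size_gb <= budget_gb:
            return name
    return None
```

Note that the budget should be the memory left over after the OS and other processes, and the KV cache adds overhead on top of the file size, so leaving some headroom beyond the raw file size is prudent.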