| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| SmolLM-1.7B-Instruct-v0.2-f32.gguf | f32 | 6.85GB | false | Full F32 weights. |
| SmolLM-1.7B-Instruct-v0.2-Q8_0.gguf | Q8_0 | 1.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| SmolLM-1.7B-Instruct-v0.2-Q6_K_L.gguf | Q6_K_L | 1.43GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q6_K.gguf | Q6_K | 1.41GB | false | Very high quality, near perfect, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q5_K_L.gguf | Q5_K_L | 1.25GB | false | Uses Q8_0 for embed and output weights. High quality, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q5_K_M.gguf | Q5_K_M | 1.23GB | false | High quality, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q5_K_S.gguf | Q5_K_S | 1.19GB | false | High quality, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q4_K_L.gguf | Q4_K_L | 1.08GB | false | Uses Q8_0 for embed and output weights. Good quality, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q4_K_M.gguf | Q4_K_M | 1.06GB | false | Good quality, default size for most use cases, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q4_K_S.gguf | Q4_K_S | 1.00GB | false | Slightly lower quality with more space savings, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q3_K_XL.gguf | Q3_K_XL | 0.96GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| SmolLM-1.7B-Instruct-v0.2-IQ4_XS.gguf | IQ4_XS | 0.94GB | false | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| SmolLM-1.7B-Instruct-v0.2-Q3_K_L.gguf | Q3_K_L | 0.93GB | false | Lower quality but usable, good for low RAM availability. |
| SmolLM-1.7B-Instruct-v0.2-IQ3_M.gguf | IQ3_M | 0.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| SmolLM-1.7B-Instruct-v0.2-IQ3_XS.gguf | IQ3_XS | 0.74GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| SmolLM-1.7B-Instruct-v0.2-Q2_K_L.gguf | Q2_K_L | 0.70GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| SmolLM-1.7B-Instruct-v0.2-IQ3_XXS.gguf | IQ3_XXS | 0.68GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| SmolLM-1.7B-Instruct-v0.2-Q2_K.gguf | Q2_K | 0.67GB | false | Very low quality but surprisingly usable. |
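To fetch just one of the quants above rather than cloning the whole repo, you can use `huggingface-cli` with an `--include` filter. A minimal sketch, assuming the files live in a repo named `bartowski/SmolLM-1.7B-Instruct-v0.2-GGUF` (substitute the actual repo id for this model card):

```shell
# Install the Hugging Face CLI if needed:
#   pip install -U "huggingface_hub[cli]"

# Download only the Q4_K_M quant (the default recommendation above)
# into the current directory. The repo id is an assumption -- replace
# it with the repo this README belongs to.
huggingface-cli download bartowski/SmolLM-1.7B-Instruct-v0.2-GGUF \
  --include "SmolLM-1.7B-Instruct-v0.2-Q4_K_M.gguf" \
  --local-dir ./
```

Swap the filename in `--include` for any other row in the table to grab a different quant.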