Minitron-4B-Base-f32.gguf | f32 | 16.77GB | false | Full F32 weights. |
Minitron-4B-Base-Q8_0.gguf | Q8_0 | 4.46GB | false | Extremely high quality, generally unneeded but max available quant. |
Minitron-4B-Base-Q6_K_L.gguf | Q6_K_L | 3.83GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended. |
Minitron-4B-Base-Q5_K_L.gguf | Q5_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. High quality, recommended. |
Minitron-4B-Base-Q6_K.gguf | Q6_K | 3.45GB | false | Very high quality, near perfect, recommended. |
Minitron-4B-Base-Q4_K_L.gguf | Q4_K_L | 3.28GB | false | Uses Q8_0 for embed and output weights. Good quality, recommended. |
Minitron-4B-Base-Q3_K_XL.gguf | Q3_K_XL | 3.14GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
Minitron-4B-Base-Q5_K_M.gguf | Q5_K_M | 3.06GB | false | High quality, recommended. |
Minitron-4B-Base-Q5_K_S.gguf | Q5_K_S | 2.99GB | false | High quality, recommended. |
Minitron-4B-Base-Q4_K_M.gguf | Q4_K_M | 2.70GB | false | Good quality, default size for most use cases, recommended. |
Minitron-4B-Base-Q4_K_S.gguf | Q4_K_S | 2.58GB | false | Slightly lower quality with more space savings, recommended. |
Minitron-4B-Base-IQ4_XS.gguf | IQ4_XS | 2.46GB | false | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
Minitron-4B-Base-Q3_K_L.gguf | Q3_K_L | 2.45GB | false | Lower quality but usable, good for low RAM availability. |
Minitron-4B-Base-IQ3_M.gguf | IQ3_M | 2.18GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
Minitron-4B-Base-IQ3_XS.gguf | IQ3_XS | 2.06GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
Minitron-4B-Base-Q2_K.gguf | Q2_K | 1.90GB | false | Very low quality but surprisingly usable. |
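
As a minimal sketch, any single quant from the table can be fetched with the `huggingface_hub` Python API rather than cloning the whole repo; the `repo_id` below is an assumption and should be replaced with the actual repository hosting these files:

```python
from huggingface_hub import hf_hub_download

# Download one quant file from the hosting repo.
# NOTE: repo_id is a hypothetical placeholder, not confirmed by this card.
model_path = hf_hub_download(
    repo_id="bartowski/Minitron-4B-Base-GGUF",   # assumed repo id
    filename="Minitron-4B-Base-Q4_K_M.gguf",     # pick any file from the table above
    local_dir=".",
)
print(model_path)  # local path to the downloaded GGUF
```

The downloaded GGUF can then be loaded by any llama.cpp-compatible runtime.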