bartowski/MadWizardOrpoMistral-7b-v0.3-GGUF

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| MadWizardOrpoMistral-7b-v0.3-Q8_0.gguf | Q8_0 | 7.70GB | Extremely high quality, generally unneeded but max available quant. |
| MadWizardOrpoMistral-7b-v0.3-Q6_K.gguf | Q6_K | 5.94GB | Very high quality, near perfect, recommended. |
| MadWizardOrpoMistral-7b-v0.3-Q5_K_M.gguf | Q5_K_M | 5.13GB | High quality, recommended. |
| MadWizardOrpoMistral-7b-v0.3-Q5_K_S.gguf | Q5_K_S | 5.00GB | High quality, recommended. |
| MadWizardOrpoMistral-7b-v0.3-Q4_K_M.gguf | Q4_K_M | 4.37GB | Good quality, uses about 4.83 bits per weight, recommended. |
| MadWizardOrpoMistral-7b-v0.3-Q4_K_S.gguf | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, recommended. |
| MadWizardOrpoMistral-7b-v0.3-IQ4_XS.gguf | IQ4_XS | 3.91GB | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| MadWizardOrpoMistral-7b-v0.3-Q3_K_L.gguf | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| MadWizardOrpoMistral-7b-v0.3-Q3_K_M.gguf | Q3_K_M | 3.52GB | Even lower quality. |
| MadWizardOrpoMistral-7b-v0.3-IQ3_M.gguf | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| MadWizardOrpoMistral-7b-v0.3-Q3_K_S.gguf | Q3_K_S | 3.16GB | Low quality, not recommended. |
| MadWizardOrpoMistral-7b-v0.3-IQ3_XS.gguf | IQ3_XS | 3.02GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| MadWizardOrpoMistral-7b-v0.3-IQ3_XXS.gguf | IQ3_XXS | 2.83GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| MadWizardOrpoMistral-7b-v0.3-Q2_K.gguf | Q2_K | 2.72GB | Very low quality but surprisingly usable. |
| MadWizardOrpoMistral-7b-v0.3-IQ2_M.gguf | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| MadWizardOrpoMistral-7b-v0.3-IQ2_S.gguf | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. |
| MadWizardOrpoMistral-7b-v0.3-IQ2_XS.gguf | IQ2_XS | 2.20GB | Very low quality, uses SOTA techniques to be usable. |
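The file sizes in the table follow roughly from bits per weight: a GGUF quant's size is approximately (parameter count × bits per weight) / 8, plus a small amount of metadata. The sketch below checks the Q4_K_M entry against its stated 4.83 bits per weight; the 7.25B parameter count for Mistral-7B-v0.3 and the ~8.5 bits-per-weight figure for Q8_0 (8-bit weights plus per-block scales) are assumptions, not values taken from this table.

```python
# Rough sanity check on listed GGUF quant sizes.
# Assumptions: ~7.25e9 parameters for Mistral-7B-v0.3, and ~8.5 bits
# per weight for Q8_0; only Q4_K_M's 4.83 bpw is stated in the table.
PARAMS = 7.25e9  # assumed parameter count

def estimated_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Estimate GGUF file size in decimal gigabytes (metadata ignored)."""
    return params * bits_per_weight / 8 / 1e9

# Q4_K_M: table lists 4.37GB at ~4.83 bits per weight.
print(f"Q4_K_M: ~{estimated_size_gb(4.83):.2f} GB")
# Q8_0: table lists 7.70GB; assumed ~8.5 bits per weight.
print(f"Q8_0:   ~{estimated_size_gb(8.5):.2f} GB")
```

To fetch one of these files programmatically, the Hugging Face Hub client's `hf_hub_download(repo_id=..., filename=...)` can retrieve a single quant rather than the whole repository, which matters given the multi-gigabyte sizes above.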