Human visual object categorization is best described by a model with few stored exemplars

Despite the success of exemplar-based models of categorization, their full complexity is not captured by their statistical number of degrees of freedom: the generalized context model (GCM) stores all training exemplars in memory, whereas a prototype model (WPSM) stores only one memory trace per category, yet both have the same number of free parameters. Since natural visual categories contain many exemplars, the GCM's advantage over the WPSM might rely on an implausibly dense neural representation. To make memory use explicit, we developed a radial basis function "roaming exemplar model" (RXM[n]). Like the GCM and WPSM, the RXM[n] classifies stimuli according to a weighted sum of their attention-weighted similarities to stored exemplars, but unlike them it allows the number of stored exemplars per category ([n]) to vary and their positions to be refined through training. The RXM[n]'s degrees of freedom therefore reflect its memory use. We asked human observers (N = 9) to classify sets of Brunswik faces (12 sets, each with 2 × 10 training exemplars across two categories and 60 test exemplars). We fitted the RXM[n] (with n = 1-10 stored exemplars per category), the GCM, the WPSM, and a linear decision bound model (PBI) to individual subjects' data, and compared the fits using the Akaike Information Criterion (AIC), which penalizes models with more free parameters. The best-fitting model was the RXM[1], followed by the PBI and the GCM. Thus, when models' memory use is made explicit, categorization behavior is best described with few stored exemplars, which are refined through training. In contrast to the dense representation predicted by the GCM, this result suggests that categorization relies on a sparse neural representation that is nonetheless distinct from prototype abstraction.
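
For illustration, the classification rule shared by these exemplar models and the AIC comparison can be sketched in a few lines of Python. The sketch below assumes the standard GCM formulation (similarity decaying exponentially with an attention-weighted Minkowski distance, combined via a Luce choice rule); the function names, the sensitivity parameter c, the distance exponent r, and the toy data are illustrative assumptions, not the fitted models from the study. In the RXM[n], the stored exemplar coordinates themselves would be additional free parameters, so the parameter count k entering the AIC grows with n.

```python
import numpy as np

def similarity(x, exemplar, w, c=1.0, r=2.0):
    """Exponential similarity over an attention-weighted Minkowski distance
    (standard GCM form; c and r are illustrative free parameters)."""
    d = np.sum(w * np.abs(x - exemplar) ** r) ** (1.0 / r)
    return np.exp(-c * d)

def p_category_A(x, exemplars_A, exemplars_B, w, c=1.0):
    """Luce choice rule: summed similarity to category A's stored exemplars
    divided by summed similarity to all stored exemplars."""
    s_A = sum(similarity(x, e, w, c) for e in exemplars_A)
    s_B = sum(similarity(x, e, w, c) for e in exemplars_B)
    return s_A / (s_A + s_B)

def neg_log_likelihood(responses, stimuli, exemplars_A, exemplars_B, w, c=1.0):
    """Binomial negative log-likelihood of observed responses
    (1 = category A, 0 = category B)."""
    nll = 0.0
    for x, resp in zip(stimuli, responses):
        pA = np.clip(p_category_A(x, exemplars_A, exemplars_B, w, c), 1e-9, 1 - 1e-9)
        nll -= resp * np.log(pA) + (1 - resp) * np.log(1 - pA)
    return nll

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L, where k counts free parameters."""
    return 2 * k - 2 * log_likelihood

# Illustrative usage with two hypothetical stimulus dimensions and an RXM[3]-like
# memory of three stored exemplars per category (toy values, not study data).
rng = np.random.default_rng(0)
ex_A = rng.normal(-1.0, 0.5, size=(3, 2))
ex_B = rng.normal(+1.0, 0.5, size=(3, 2))
w = np.array([0.5, 0.5])  # attention weights over dimensions, summing to 1
print(p_category_A(np.array([-0.8, -1.2]), ex_A, ex_B, w))
```

Under this formulation, fitting the RXM[n] would mean optimizing the attention weights, the sensitivity c, and the coordinates of the n stored exemplars per category against each observer's responses, then comparing models by AIC so that larger memories are penalized for their extra parameters.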