We study the L0-penalized and L0-constrained quantile regression estimators. For both estimators, we derive non-asymptotic upper bounds on the mean excess quantile prediction risk as well as on the mean-square parameter and regression function estimation errors. Further, we characterize the expected Hamming loss for the L0-penalized estimator. We implement the proposed procedure via mixed integer linear programming and also via a more scalable first-order approximation algorithm. We illustrate the finite-sample performance of our approach in Monte Carlo experiments and its usefulness in a real data application concerning conformal prediction of infant birth weights. In sum, our L0-based method produces a much sparser estimator than the L1-penalized and non-convex penalized approaches without compromising precision.
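For concreteness, a standard way to write the two estimators is sketched below; the notation here is assumed rather than taken from the paper (check function $\rho_\tau$, covariates $x_i$, responses $y_i$, penalty level $\lambda$, sparsity level $s$):
\[
\hat\beta_{\mathrm{pen}} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right) + \lambda \|\beta\|_0,
\qquad
\hat\beta_{\mathrm{con}} \in \arg\min_{\beta \in \mathbb{R}^p:\, \|\beta\|_0 \le s} \; \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^{\top}\beta\right),
\]
where $\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\})$ is the quantile check loss at level $\tau \in (0,1)$ and $\|\beta\|_0$ counts the nonzero coefficients of $\beta$.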