Abstract: Students often struggle during programming homework and may need help getting started or localizing errors. One promising and scalable solution is to provide automated programming hints, generated from prior student data, that suggest how a student can edit their code to get closer to a solution. However, little work has explored how to design these hints for large-scale, real-world classroom settings, or evaluated such designs. In this paper, we present CodeChecker, a system that generates hints automatically from student data and integrates them into an existing CS1 online homework environment used by over 1,000 students per semester. We present insights from survey and interview data about student and instructor perceptions of the system. Our results highlight the affordances and limitations of automated hints and suggest how specific design choices may have affected their effectiveness.