This summer, for the first time, I took three Dauphine undergraduate students into research projects, thinking they had had enough R training (with me!) and several stats classes to undertake such projects. In each case, the concept was pre-defined and “all they had to do” was to run a massive flow of simulations in R (or whatever language suited them best!) to check whether or not the idea was sound.

Unfortunately, for two of the projects, by the end of the summer we had not made any progress in any of the directions I wanted to explore, despite a fairly regular round of meetings and emails with those students. In one case the student had not even managed to reproduce the (fairly innocuous) method I wanted to improve upon. In the other case, despite programming inputs from me, the outcome was impossible to trust. A mostly failed experiment, which makes me wonder why it went that way.

Granted, those students had no earlier training in research, either in exploiting the literature or in pushing experiments towards logical extensions. But I gave them entry points, discussed those possible new pathways with them, and kept updating schedules and work-charts. And the students were volunteers, with no other incentive than discovering research (I even had two more candidates in the queue). So it may be (based on this sample of 3!) that our local training system falls short in this respect, somewhat failing to promote critical thinking and innovation by imposing overly long presence hours and by evaluating students only through standard formalised tests. I do wonder, as I regularly see [abroad] undergraduate internships and seminars advertised in the stats journals, or even at conferences.