The next big challenge for large language models will lie in solving creative, open-ended tasks such as scientific discovery. In this talk, I will not propose state-of-the-art algorithms for this purpose, nor will I showcase a large-scale evaluation of LLMs on the complex creative tasks everyone cares about. Instead, I will do the opposite: I will propose a suite of highly controlled tasks that form a minimal abstraction of real-world creative tasks, in order to study how LLMs learn such tasks. This study will tease apart two fundamental notions of creativity put forth in cognitive science: “combinational” and “exploratory” creativity.