Reading-time research in cognitive science has long established that humans process frequent text faster and more accurately than rare text. Yet the analogous question for Large Language Models, whether textual frequency systematically governs their prompting and fine-tuning behavior, has remained largely unexplored. This talk introduces the Textual Frequency Law (TFL): the empirical principle that LLMs consistently prefer frequent textual expressions during both inference and training. The talk concludes with directions for refining this law and for applying it in other research areas.