Currently the most powerful systems are omni-talented, having expert-level competence in most, or more likely all, fields. However, the problems we face often require particular solutions. Moreover, there is no way for the AI to know exactly what we want unless we tell it explicitly. You have to steer the AI to the solution you want.
As a thought experiment, suppose you wanted to create a website with the same design as Gwern’s, but in a world where Gwern’s site doesn’t exist yet, so the design lives only in your head. You would have to give a very detailed and comprehensive description of the pop-up boxes, in-line link styles, colour palette, typography, drop caps and so on.
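To make that concrete, here is a minimal sketch of a small fraction of such a description, written as a TypeScript spec rather than prose. Every field name and value here is invented for illustration; none of it is taken from Gwern.net’s actual styles.

```typescript
// Hypothetical design spec: the names and values are illustrative only,
// not Gwern.net's real stylesheet. The point is how much you must spell out.
interface DesignSpec {
  typography: { bodyFont: string; dropCapFirstParagraph: boolean };
  links: { inlineStyle: "underline" | "dotted-underline"; hoverPopupPreviews: boolean };
  palette: { background: string; text: string; accent: string };
}

const mySiteSpec: DesignSpec = {
  typography: { bodyFont: "a traditional serif", dropCapFirstParagraph: true },
  links: { inlineStyle: "dotted-underline", hoverPopupPreviews: true },
  palette: { background: "#ffffff", text: "#000000", accent: "#888888" },
};

console.log(JSON.stringify(mySiteSpec, null, 2));
```

And this covers only a handful of choices; the full description would run far longer.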
Now, of course, what is more likely is that you would use some kind of Figma-plus-AI tool which, in a way, acts as a translator between you and the AI. But there will always be problems for which no such neat app exists. So the point is that being able to take in a lot of context and then precisely and quickly describe what you want is more valuable now.
This is a kind of fluency, and we can assess it in much the same way we assess whether kids are fluent: skills like describing an image, reading comprehension, and vocabulary. Luckily, everyone learns this stuff. And although it was always valuable, even necessary, to be fluent, it is more valuable now. So is the incentive to be honest with yourself about how good your reading comprehension really is.
This fluency also involves learning the relevant nomenclature of your focus area; it helps to know what a “masthead” is. But there are some differences in how you talk to the AI. Take, for example, this prompt Kelsey Piper used to play GeoGuessr with o3; the snippet is only ~20% of the full prompt.
Obviously this is different from how we talk to humans. So this is a new, maybe even evolved, kind of “fluency” which requires some practice.