It’s about time I disagreed with you about something.
I think OP’s dystopian scenario is not impossible. The thing about machine learning is, it doesn’t need to understand its input in terms of rules and emotional state; you don’t need a good notation to do it. It just needs the input to be quantifiable, which music performances are. If you feed a music composition system lots of Bach, it will spin out more Bach (and that’s been possible for the last twenty years). Maybe not divinely inspired Bach, but certainly competent Bach: there are, after all, rules and regularities recoverable from Bach’s music.
Well, the same goes for what we impute as emotion in music. Rubato may be ineffable in its effect on humans, but it’s not ineffable in execution. Neither is articulation, nor dynamics. I think they can be learned.
The thing is that, as Curtis said, we have had player pianos for a century, and they were much more accurate than humans. Conlon Nancarrow relied on that for his pieces. But they didn’t put pianists out of business.
The reason is that, even if technically—or even emotionally—a machine does replicate a good musician, that’s not why we go to concerts. Live gigs have in fact taken a downturn in attendance, and performers will tell you they’re already losing out in competition to digitised sound; except the digitised sound is recordings of Billie Holiday or Miles Davis or AC/DC or Itzhak Perlman.
If people would rather show up to your live gig than listen to Horowitz at home, it’s not because they expect you’ll do a “better” job than Horowitz. It’s because the live performance is the point, and they want to see humans, imperfections and all, grappling with the piece.
But that means that live performances will be more of a niche thing: they’ll be competing with computer performances, as well as YouTube and CDs and DVDs. They’re already a niche thing, though, and have been for decades.