Two thoughts:
1) I think that the culture of BotB has a few features which make it, at the moment, resistant to serious damage from AI-generated entries:
- the community is fairly small and tight-knit;
- that community is, through this thread and elsewhere, expressing a clear collective rejection of AI-generated entries;
- as mentioned by kilowatt, roz, and others, there's a strong emphasis on self-improvement and self-expression, in which context making tracks with AI is kind of pointless;
- there's a culture of sharing one's process -- while not everybody talks about how they make their entries, a lot of people do, further emphasizing that the process matters just as much as the output. (The use of source files rather than mp3s for most formats is part of this too!)
I don't mean to say that AI-generated entries aren't a problem here -- as discussed above, they've appeared already. I absolutely agree that a clear policy on AI tools is needed. But I don't think they're going to destroy the site any time soon.
I bring this up because I think any attempt to enforce a policy in a way that impedes participation -- like requiring people to include proof that they worked on their entries -- will cause substantially more harm than it prevents.
Instead, we should promote that culture of process sharing by talking about our own entries and asking others how they made theirs, which is a good thing to do anyway.
2) I think the first situation that comes to mind when this issue is raised is an algorithm spitting out an entire song or artwork which someone then submits with minimal touch-ups. But as the technology gets more advanced and people learn about it, its use cases are going to get more nuanced, and if AI becomes a serious factor on BotB it's likely to be in a less straightforward role.
So here are six potential scenarios -- not exhaustive, but touching on some of the ways AI could be relevant. Which ones do you think should be allowed? How would you frame a policy to address the ones which shouldn't be?
a) Someone types a prompt into a generative neural network, it spits out an audio or image file, and they submit it with minor or no changes. (I figure almost nobody is okay with this, but it's the baseline. And I believe this includes the example of the Winter Chip cover cited above, so a policy is certainly needed here.)
b) Someone types a prompt into a GNN, it spits out an audio file, and they substantially remix it, moving parts around and adding some original material of their own.
c) Someone uses a GNN to generate a chord progression or melody, then arranges it using their own sounds, and adds accompaniment and production of their own devising.
d) Someone uses a GNN to generate a set of short samples, then arranges those samples into an entirely original composition.
e) Someone uses a GNN to generate a set of short samples and uses them as the bitpack for a remix battle. (And does it matter whether the host explicitly mentions that they got the samples from a GNN?)
f) Someone generates a single sample using a GNN and includes it in an otherwise ordinary bitpack for a remix battle. (Here we're in the territory of Sample Pack Contest XVI, whose pack included (obviously) fake voice clips of Barack Obama and Rick from Rick and Morty.)
Personally, I think a), b), and c) are not okay; I have mixed feelings about d); and I think e) and f) are probably okay. But you may feel differently, and I think some clarity here is needed for any policy discussion.