Would it be possible to reverse-engineer generative AI content to find out what original sources were used to create it?

In principle, no. Once entropy (random sampling) enters the generative process, it becomes inherently irreversible: many different combinations of sources can produce the same output. If the results were traceable back to the specific sources used, the process would not be truly stochastic.
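A toy sketch of why sampling destroys traceability. The distributions below are made up for illustration: two very different "source mixtures" over next-token candidates can emit the identical token, so observing the output alone cannot tell you which distribution (or which sources) produced it.

```python
import random

def sample_token(probs, rng):
    """Draw one token from a probability distribution (the stochastic step)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

# Two hypothetical, very different source mixtures over the same candidates.
dist_a = {"cat": 0.5, "dog": 0.3, "bird": 0.2}
dist_b = {"cat": 0.1, "dog": 0.1, "bird": 0.8}

# With the same random draw, both mixtures can emit the same token --
# the mapping from sources to output is many-to-one, hence irreversible.
rng_a, rng_b = random.Random(0), random.Random(0)
print(sample_token(dist_a, rng_a), sample_token(dist_b, rng_b))
```

The point is not the specific numbers but the shape of the problem: inverting the sample would require information that the sampling step has already thrown away.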

BUT you can use metadata to credit specific sources, and that’s where things get interesting. A new discipline called “mechanistic interpretability” tries to understand in broad strokes what neural networks do, in order to reverse-engineer the algorithms encoded in their parameters.
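To make the idea concrete, here is a minimal sketch of "reading the algorithm out of the parameters". The 3x3 kernel below is hand-picked for illustration (it is not taken from InceptionV1): inspecting its weights reveals it computes a left-minus-right contrast, i.e. a vertical-edge detector, and convolving it over a synthetic image confirms that reading.

```python
# Hypothetical learned filter: +1 weights on the left column, -1 on the
# right. Reading the weights directly tells us the "algorithm": respond
# where left pixels are bright and right pixels are dark (a vertical edge).
KERNEL = [
    [1.0, 0.0, -1.0],
    [1.0, 0.0, -1.0],
    [1.0, 0.0, -1.0],
]

def convolve(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Synthetic 5x6 image: bright left half, dark right half -> one vertical edge.
image = [[1.0] * 3 + [0.0] * 3 for _ in range(5)]

response = convolve(image, KERNEL)
# The filter fires only at the columns straddling the edge.
print(response[0])
```

This is the same move, in miniature, that the InceptionV1 work makes at scale: instead of treating the network as a black box, it matches learned weights to human-readable operations like edge and curve detection.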

In this video, researchers were able to partially reverse-engineer how InceptionV1, a convolutional neural network, recognises images.