lukajk / blog / ai art iii

apr 05 25 07:10pm
last modified apr 08 25 02:45am

Resistance to gen AI is certainly understandable--livelihoods being threatened pretty inarguably sucks. However, the industry it would supplant comes from a place of pragmatism rather than being a product in and of itself. This often segues into an indictment of capitalist structures in general, but that's a whole other question. The fundamental fact is that if something is entertaining, exactly how it was produced is of secondary concern. The broader audience of entertainment consumers like gamers and moviegoers consistently reaffirms this--a smaller, vocal population might insist entertainment products be held to higher standards, but for many it's just a bit of fun and one tiny aspect of their lives (interestingly, this is true in general, generative AI-assisted or otherwise). Because of this, intervention in this process from a moral angle seems untenable. If it were just up to me, gen AI would only have application in automating the relatively rote aspects of large projects, so that one-man movies, manga that don't take an irrevocable toll on the mangaka's health, ambitious game projects, etc. become a lot more feasible--not something that supplants the entire process. Again, however, intervention on the basis of it conflicting with some self-interested desire unrelated to general human rights is not something I can really substantiate.

In any case, gen AI at its current "magnitude" of ability will never be able to adequately circumvent the entire manual process of creation. It is simply not good enough--ai art, ai art ii--as it is. In fact, as it stands it is still quite unhelpful for more experienced artists: at the least, it needs some sort of tooling suite built around it that would allow for more fine-grained control and rapid iteration.

As for what happens in the future, I imagine there are really only two routes, with different "magnitudes" of achievement. The first is where AI becomes more technically competent but is ultimately still only ever copying from the surface-level expressions of human emotions and experiences, which, while serviceable up to some limited threshold, would not be able to produce an "original" synthesis--one founded on applying its own understanding. As an aside, humans are far from immune to this pitfall: it's a common critique that some entertainment product merely copies the form of another without understanding the underlying mechanisms behind those decisions, and the specific contexts that allow them to work. Regardless, in this future AI would likely be quite useful for automation, but products where a significant portion of decisions have been left to it wouldn't amount to much more than additions to the pile of questionable-quality products that has always existed. One lasting effect, though, might be a deemphasis on illustration as an end in and of itself and more emphasis on larger creative products.

The second is where AI understands the mechanisms and experiences of humans well enough to create works of better reflection than we are generally capable of ourselves--that true "original" synthesis. However, this sounds like artificial general intelligence would nearly need to be a prerequisite for this sort of creation, in which case we have much bigger problems to worry about. I have pretty large doubts that agi would arise out of further continued mimicry of our creative output, however, so we're probably safe from development on that angle at the least.