Something similar happened in the late 1990s, when a PlayStation memory card was sold pre-loaded with save data for a dating sim (Tokimeki Memorial). Konami claimed it violated the integrity of the work because selling a hacked save with all stats maxed tampered with the game's natural progression. (And they won twice: the original case and the appeal.)
That's such a weird theory of harm. They still had to buy a copy of the game, right? So the harm is that the consumer would experience the story in the way they chose instead of the way the author intended? Death of the audience I suppose?
That's a good clarification, but it does lead us back to where we started. What's the harm of violating "integrity?" That the author doesn't get to present the story in the manner they chose.
It only makes sense in the context of Japanese copyright law. Here's the actual text:
> (Right to Integrity)
> Article 20 (1) The author of a work has the right to preserve the integrity of that work and its title, and is not to be made to suffer any alteration, cut, or other modification thereto that is contrary to the author's intention.
If you distribute hacked save files, you are "cutting out" parts of the original game story, and thus violate this article.
Yes, it is ludicrous, but it's consistent, at least.
Czechia also has this clause in the copyright law. It sucks, because you can be sued for modifying FLOSS to do something that does not sit well with the original author(s).
The last time I saw it used, it was by an architect to prevent us from using garbage bags of a different color. I am not kidding you. Their interior design was very specific (and good); the different bags were an attempt to save costs and did look weird, but still.
So, in that case, I wonder what would happen if someone just flooded the internet with modified save files for free. Remove the money incentive and just publish thousands of uploads around the web from outside Japan.
If the sites are hosted in Japan, then they could be taken down by Japanese courts. If the sites are hosted outside Japan, it's very likely nothing will happen.
Trying to profit off someone else's IP is the problematic part (as with any counterfeit brand products). They don't go after every ad-supported save-file repository run out of curiosity.
The most capitalist translation of this "integrity" is sales + customer loyalty, although it's slightly more than just direct financial income. Spoiled media content sells worse in the short term, and drags down related products in the long term.
I guess, "there's effectively a game in the market with our same branding and IP, allowing people to access only the scenes in our dating sim they want to, turning it into a different kind of application with deleterious associations to our brand" is a theory of harm I can grok. I don't think it would hurt short term sales in this particular case, but that could hurt the brand and thus long term sales. I have trouble parsing the rationale because it seems very anti-consumer, at least with this being my only exposure to Japanese law, but that probably wouldn't have flown in the US either.
I once played an indie JRPG that took inspiration from Cordwainer Smith and some other authors. It's untranslated but the Santaclara drug is a usable item that acts like a Phoenix Down.
Recently I've tended to look at mid-2000s Web 2.0 as a source of nostalgia. It was back in the day when twttr felt like people were optimistic about building a new community for the future and the possibilities were still unexplored. Poring through all the dense skeuomorphic design trends of the era is like unearthing a time capsule.
Back then, not everything had been flattened to fit the needs of the app stores. Dedicated hardware existed for portable music, movies, games, and photo/video. Each of these had its own particular design language and fanbase that made the world feel larger and more varied than the one-device-fits-all reality we have today.
I think of it like: the only reason humans still drive cars is that we have yet to find a good enough way of replacing ourselves with something more effective. Driving is merely an implementation detail of "getting from A to B" that would be disrupted if a true autonomous solution were discovered. Many would want to optimize away drunk drivers and road rage if it were possible in some faraway future. So something like a steering wheel could be seen as a compromise of sorts, until the next big thing makes it obsolete.
That, and the experience of living without a technology is irrecoverable once that technology has been discovered. Nobody can live in an era without social media anymore, barring a global-scale catastrophic reset. So I believe it's important to consider which technologies are not yet totally pervasive, for example by realizing there is still a steering wheel for you to grip in your car.
And in my mind, the sinister feeling stems from the fact that all it takes to irreversibly shift society like that is enough smart people with honest intentions but little foresight into what will happen a few decades after all this proliferates. The resulting problems stop being in anyone's control, "thrown over the wall" so to speak, and instead become yet another fact of life that could weigh us down (mostly I think of the ubiquity of social media and how it has changed human interaction). And it all stems from a few engineering-type people getting overexcited about cool possibilities they can grasp at, not considering that there are billions of people unlike them who may have other ideas.
Given that Palworld is the new thing, I doubt its predecessor will ever exit EA now. I hope the devs commit to a full release of the new game instead of dropping support again after the hype dies down.
It's sort of the same thing with "all regulations are written in blood", right? For example, many of the current set of laws, regulations and best practices that prevent people on planes from dying were only enacted after people in the past died from scenarios that were either unknown unknowns or had never been observed in practice before. And that I think applies to many types of regulations besides just aviation.
I think what I am disagreeing with is the idea that India has to learn the lessons the hard way which other places learned 100 years ago. With good administration a lot of those lessons could be learned through research, not deaths. This reduces the need to follow the same path in the same way. But it requires a will people are saying is not there.
> But that doesn't mean that there is any remotely moral case for undoing the green revolution and allowing billions to starve.
A question is, is it possible to advance technology to fulfill the green revolution without changing the value of human creativity due to the creation/advancement of genAI? Or past a certain point, the results of discovering improved health and ecological outcomes will become inextricably linked with discovering new technologies that cause conflict? What actually drives such a process?
I think more people might become interested in why we end up here talking about new possibilities conflicting with stability again and again, similar to how the negative effects of the invention of smartphones are being discussed now.
> A question is, is it possible to advance technology to fulfill the green revolution without changing the value of human creativity due to the creation/advancement of genAI?
I have to admit I'm not quite sure what you mean, and I do admit full guilt in starting us down the path of "mixed analogies" :). I'll try my best, though.
> Or past a certain point, the results of discovering improved health and ecological outcomes will become inextricably linked with discovering new technologies that cause conflict? What actually drives such a process?
I do think with respect to life-sustaining things -- medicine, pharma, food, shelter, water, energy -- that a combination of specialization and automation is necessary to increase the collective standard of living, and that labor alienation stems from a combination of specialization and automation.
Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
> Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
> Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
Ah, yes. The alienated workers of the world will warm their weary souls at the hearth of derivative algorithmic creativity units. The reduced price and efficient delivery of each drone's creativity units will obviously give them more free time.
Perhaps we can even come up with a pill that'll let the drones feel entertained without any content at all. If the side effects are well-tolerated, they can take it before work.
> AI's do not make a copy of the source material. It very much just adjusts their internal weights, which from a broadminded perspective, can be seen as simple inspiration, and not copying.
I think the term "AI" is one of the most loaded and misleading to come up in recent discourse. We don't say that relational databases "pack and ship" data, or web clients "hold a conversation" with each other. But for some reason we can say that LLMs and generative models "get inspired" by the data they ingest. It's all just software.
In my own opinion, I don't think the models can copy verbatim except in cases of overfitting, but people like the author of the post have a right to feel that something is very wrong with the current system. It's the same principle as compressing a JPEG of the Mona Lisa to 20% quality and calling the result an original work. I believe the courts don't care that it's just a new set of numbers; they want to know where those numbers originated. It is a color of bits[1] situation.
When software is anthropomorphized, it seems like a lot of criticisms against it are pushed aside. Maybe it is because if you listen to the complaints and stop iterating on something like AI, it's like abandoning your child before their potential is fully realized. You see glimpses of something like yourself within its output and become (parentally?) invested in the software in a way beyond just saying "it's software." I feel as if people are getting attached to this kind of software unlike they would to a database, for example.
A thought experiment I have: whenever the term "AI" appears, mentally replace it with "advanced technology." The seeming intent behind many headlines changes with this replacement: "Advanced developments in technology will displace jobs." The AI itself isn't the one coming for people.
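The substitution above is mechanical enough to sketch in a few lines. This is just a toy illustration of the thought experiment, not a real tool; the function name is made up for the example.

```python
import re

def deanthropomorphize(headline: str) -> str:
    """Swap the standalone term 'AI' for 'advanced technology' in a headline."""
    # \b word boundaries keep us from mangling words that merely contain "AI".
    return re.sub(r"\bAI\b", "advanced technology", headline)

print(deanthropomorphize("AI is coming for your job"))
# -> advanced technology is coming for your job
```

Reading a week of headlines through a filter like this makes the agency shift obvious: the actor becomes the people deploying the technology, not the software itself.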
My own perspective is that humans do not have an exclusive right to intelligence, or ultimately to personhood. I am not anthropomorphizing when I defend the rights of AI. Instead, I am doing so in the abstract sense, without claiming that the current technology should be rightly classified as AI or not. But since the arguments are being framed against the rights of AI to consume media, I think the defense needs to be framed in the same way.
https://ja.m.wikipedia.org/wiki/%E3%81%A8%E3%81%8D%E3%82%81%...
http://gaming.moe/?p=2938