Hacker News | jltsiren's comments

Frodo is determined and reasonably competent, but he ultimately fails his quest. In the end, Frodo is not strong enough to let the Ring go; instead, he claims it as his own. Middle-earth is only saved because Frodo had decided to spare Gollum earlier. Gollum proves treacherous yet again, fights for the Ring, wins, and falls to his doom.

When the hobbits return home, Merry and Pippin (and to a lesser extent Sam) are the ones leading the liberation of the Shire. Frodo has been traumatized by his experiences and no longer wants to see any violence, no matter the cause. But he cannot adjust to civilian life either. He is invited to live in Valinor. Not so much as an honor, but because his involvement with the One Ring has made him a relic of the past, like the elves. Middle-earth is no longer a place for him.


Patent revenue is mostly irrelevant, as it's too unpredictable and typically decades in the future. Academics rarely do research that can be expected to produce economic value in the next 10–20 years, because industry can easily outspend academia in such topics.

The population is growing older. Young adults rarely live alone, while retirees often do. There are more old people than there used to be, and people often want to continue living in their own home after their spouse dies.

There are plenty of Western journalists in Iran, but they are subject to the same internet blackout as everyone else. Embassies can use satellite communications due to diplomatic immunity, while journalists are just average nobodies who face extra scrutiny due to their jobs.

I would be surprised if there were many Western journalists left in Iran…

Here is an excellent podcast from a Washington Post journalist who was captured and held hostage - it's called 544 Days (that's the amount of time he was jailed there).

https://crooked.com/podcast-series/544-days/


Why can't they use starlink?

Starlink is illegal in Iran. Being a foreign journalist is a huge red flag in totalitarian countries, making it harder to smuggle in illegal devices than for the average citizen or visitor. And because journalists are probably under surveillance by the regime, it's harder for them to obtain Starlink terminals in the country than for the average person.

The government was ignoring Starlink until it was being used by Western clandestine agencies and Israel to foment violence and the burning of property. People were being paid for each act of violence they committed, by those spy agencies.

The Iranian government then used Chinese tech to block Starlink, shut down the external internet, and the violence stopped.


> People were being paid for each act of violence they committed, by those spy agencies.

Do you have evidence of this? At least in the USA, mobs angry at the government will conduct arson and property destruction without being paid a dime.


TFA mentions one reason: the "recent Iranian law that would equate the use of Starlink with espionage, punishable by death"

Are there? It seems like an extremely dangerous place to do journalism.

I don't think the question is really about whether AI art is real art. (But it could be about that, as I'm not familiar with commercial cons in the US.)

Some years ago, around the time I became aware that AI art is a thing, the artist scene around Finnish cons had already decided to ban it. And the reason was obvious, as the same people are also very eager to police others who might be selling pirated products.

They don't care about legal constructs such as intellectual property. They don't really care about economic constructs such as copyright. What they care about are authors' moral rights. If the model was trained without obtaining permission from the authors of every work in the training data, they think using the model to create art is immoral.


> What they care about are authors' moral rights. If the model was trained without obtaining permission from the authors of every work in the training data, they think using the model to create art is immoral.

Art is not created in isolation. It is a result of the artist's exposure (aka training), both intentional and incidental. If an artist wants an AI model to get permission before training on their work, then the artist should get permission from all the artists they were exposed to that shaped their artistic expression.

It's training and copying all the way down.


> Art is not created in isolation. It is a result of the artist's exposure (aka training), both intentional and incidental.

"aka training" is doing A LOT of work here


But it's fundamentally a correct view.

(Not to take away from human artists' unhappiness - it's completely understandable.)


In what way? It certainly does not mean the same thing to a developing artist as it does in the context of an LLM, so I do not even know why people bother with this wordsmithing.

The problem is that if this argument is allowed to stand, art, as a human endeavor, will shrink by 99% or maybe even 100%.

Oh and this happens in a very underhanded way. Courts, governments and companies (including OpenAI and others) demand copyright is respected by humans. They impose great penalties when humans cheat, and then this happens:

https://torrentfreak.com/nvidia-contacted-annas-archive-to-s...

https://torrentfreak.com/authors-accuse-openai-of-using-pira...

https://torrentfreak.com/meta-torrented-over-81-tb-of-data-t...

If these companies were forced to abide by the rules courts impose on humans, they would have to buy billions worth of books. But of course, "that's not how copyright works". Of course, these companies ARE using copyright to avoid reciprocating:

https://openai.com/policies/row-terms-of-use/

So this is yet another "rules for thee, not for me" situation involving companies worth billions of dollars. A situation that's really hurting people's livelihoods ...


I can't disagree with your AI doomerism perspective. I firmly believe that AI companies should buy one copy of whatever work they use for training. While this won't provide the never-ending royalty stream on copyrighted material that corporations strive for, it would foster the mindset that AI companies must pay society in some way. And I truly think that if AI companies are going to train on all the knowledge in the world, their profits should go back to everyone in the world. i.e., LLM models are a public good.

I have an almost unshakable conviction that LLM-type AI systems should become a repository of all human knowledge. When LLMs give you an answer, you should be able to ask: what are the sources behind your answer? Most people won't do this, but curious, wanting-to-learn people will. Which leads to one of the important questions: how do you keep people curious?


But all these companies violated that on a massive scale. It's done. They're not paying. Oh, and when asked what the consequences are for people doing illegal downloading, ChatGPT helpfully answers:

> About $750 to $30,000 per copyrighted work

> Can go up to $150,000 per work if it’s considered willful

... it was definitely willful. And these are amounts that would bankrupt even OpenAI. But I guess only you and me will have to pay these sorts of amounts, not big companies ...


I don't disagree with either of you regarding the doomerism, but Anthropic just paid out the largest US copyright settlement ever, based upon their exposure to the liability of $150k per copyrighted work they faced.

I haven't gotten my $150k for one (like a lot of people, I wrote an IT book from which ChatGPT can repeat sentences 95% verbatim), and nobody I know has gotten theirs either.

Your publisher probably did. (Figuratively speaking, it always seems to be publisher corpos getting the money in such cases).

The settlement is for $3k per protected work of class members. Are you a class member? You should've been contacted by your publisher if you were. If you weren't in the shadow library, then you are not in the settlement.

(I'm European)

(Europeans are able to obtain copyrights over their works in the US)

or

(so is J.K. Rowling)


People would say: I love it when a person does that; it's cool to see someone's inspirations and to participate in the process and journey of them developing their artistic talent. And I don't really care to be involved in an AI doing that.

To clarify, I'm not saying AI created stuff can't be art, I'm saying that someone that enters a text prompt is not the creator of the AI's output

Technically, it's a legal grey area, and currently any image by AI can be considered public domain.

This is a good change for a society with protectionist IP laws, which were long overdue for fixing but never were.


AI generative art doesn't exist by definition. You cannot generate art. Actually we have a term for this - kitsch.

I think that you can probably do some interesting things. Sol LeWitt made art that was just instructions to be interpreted by another human. But the medium needs to be the instructions rather than the isolated output of the machine.

Ethnic Greenlanders living in Greenland are ordinary Danish citizens. Any Danish citizen can obtain almost the same legal rights by moving to Greenland.

Citizens of other Nordic countries can also live and work in Greenland without any permits. However, some jobs are restricted to Danish citizens who were born or raised in Greenland. EU citizens need a residence permit, because Greenland is not in the EU.


The Annapurna Circuit has changed much over the years. It feels busier than EBC, because roads go all the way up to Manang and Muktinath, with only three days between them. And Muktinath is a big pilgrimage destination, with ~800k visitors a year.

You must be thinking of 2008. And the EU economy was not bigger than the US back then. There was a temporary distortion in currency exchange rates, as people saw a significant risk that the financial crisis would cause serious long-term damage to the US economy. It didn't, the exchange rates normalized, and the illusion that the EU economy was bigger vanished.

Currencies are speculative instruments, not reliable measures. If you measure the economy of one entity in the currency of another entity, you should never accept the numbers at face value.


If you want to use national security as a justification for subsidies, you need to be careful with what you are subsidizing. Only essential things should be subsidized. Non-essential things can be left to the market, or at least their subsidies require other justifications.

From a national security perspective, it is essential to provide basic nutrition to people when international trade is disrupted. Having access to food people enjoy eating is not essential. The viability of existing agricultural businesses is not essential. The preservation of cultural traditions related to food and agriculture is not essential. And so on.

It's also important to consider where the subsidies should be directed. Here in Finland, the explicit justification for agricultural subsidies has always been the assumption that food produced in "European countries that still have a strong farming industry" might not be available during a crisis.


Concurrency is easy by default. The hard part is when you are trying to be clever.

You write concurrent code in Rust pretty much in the same way as you would write it in OpenMP, but with some extra syntax. Rust catches some mistakes automatically, but it also forces you to do some extra work. For example, you often have to wrap shared data in Arc when you convert single-threaded code to use multiple threads. And some common patterns are not easily available due to the limited ownership model. For example, you can't get mutable references to items in a shared container by thread id or loop iteration.


> For example, you can't get mutable references to items in a shared container by thread id or loop iteration.

This would be a good candidate for a specialised container that internally used unsafe. Well, thread id at least; since the user of an API doesn't provide it, you could mark the API safe, since you wouldn't have to worry about incorrect inputs.

Loop iteration would be an input to the API, so you'd mark the API unsafe.


There’s split_at_mut to avoid writing unsafe yourself in this case.
