Hacker News | sonofaragorn's comments

That's what the Basic Attention Token (BAT) from the Brave team tried (is still trying?) to do: https://brave.com/brave-rewards/


Kinda?

The fact that they chose to tie it to advertising and pitch it as "get paid to see ads" is a significant turn-off in my mind, even if the rest of the ecosystem theoretically works in functionally the same way.

In my mind, the entire point is to get away from advertising as a revenue stream entirely. I want to pay for the things I consume. If the advertising market has decided that my page impression is worth less than pocket change, I'd far rather just give that money to the publisher directly and avoid ads being part of the equation.

The core idea behind BAT isn't bad, but the marketing is pretty terrible if you're targeting people like me.


> The core idea behind BAT isn't bad

I think it is bad because it legitimizes the marketing industry's bad practices. "How bad could grabbing as much data as possible from the population really be? We're sharing our profits!"


I'm not sure that is fair. I've been reading her blog for over a decade, and she has always taken a contrarian view.


I think it's fairly accurate. It's true that she was always contrarian, but back in the 2010s it was a more respectable type of questioning.

These days she's clearly rage-baiting with titles, which is typical algorithm-chasing submission behaviour.


Loved this. Would it be possible to add subtitles?


What about a post or comment that includes proper names?


Aidan Gomez, Nick Frosst, and Ivan Zhang, all of whom were Hinton's students at UofT, started Cohere (https://cohere.com/about).


Not true. I've done SR&ED every year for the past ~7 years. It is work, but there are specialized consultants who do most of it. If the work is truly R&D (which would be the case for a cutting-edge AI company) and you track your work in Jira or something like that, then it's mostly just writing a few pages describing the efforts.


I get your sentiment, but I think it's important for science communication to adapt to the times. Decades ago (and even as recently as one decade ago), most scientists (Hawking perhaps being the exception) who dared appear in those 1-hour documentaries would be belittled by the "hardcore" scientists with the same words you used: "Science should not be over-simplified like that", "they are not real scientists, they just want to be on TV", etc.

The truth is that young people are mostly on TikTok et al, so this type of content needs to get there.


> "Science should not be over-simplified like that",

It is a difficult balance to strike, but science should be

> as simple as possible, and no simpler

The hard truth is that any simpler means inaccurate, which means that when educating the public there __are__ inaccuracies. So the balance to strike is accuracy vs. understanding. Most people do not understand the Schrödinger's Cat thought experiment, wave-particle duality, or many similar things (like how my namesake's Incompleteness Theorems, superposition, and the Halting Problem are linked), yet will confidently "correct" explanations that are in fact more accurate (even "teaching" those where it is clear one side is vastly more qualified than the other).

But think about it this way: the people getting mad at the over-simplification are not a force preventing public explanation but rather a pressure to find a better, more accurate explanation. The problem is when we frame these things as adversarial in the sense of enemies fighting rather than adversarial in the sense of improving one's own position/arguments/discourse.

Both sides can benefit from reframing these interactions: don't treat positions as equivalent to one's intellect, but recognize that statements stand on their own. We are better as a coalition than separated, even with disagreement (especially with it). Everyone here is on the same side after all, as all parties are seeking the same goal: better public education. It then becomes a duty to read between the lines and extract the actual complaint, because that is often difficult to express (without a lengthy process) given we do not know one another's priors.

We should not confuse critiques for attacks or dismissals. Nor should we dismiss or attack when we should critique! Though that is acceptable when errors are egregious or when people intentionally mislead. Unfortunately there is a lot of that, but let's also distinguish idiocy from maliciousness, as the former can be fixed (given the framing above).


I wouldn't say the TikTok audience is a pond where you'll find future scientists. I'd rather invest in promoting alternative spaces, both virtual and local.


Yeah that's a fair point. As an early career scientist myself now and as someone not that interested in current social media trends, I certainly do risk being in the same spot as those 'hardcore' scientists.


What if they were liable? Say the company that offers the LLM lawyer is liable. Would that make this feasible? In terms of being convincingly wrong, it's not like lawyers never make mistakes...


You'd require them to carry liability insurance (this is usually true for meat lawyers as well), which basically punts the problem up to "how good do they have to be to convince an insurer to offer them an appropriate amount of insurance at a price that leaves the service economically viable?"


Given orders-of-magnitude better cost efficiency, they will have plenty of funds to lure in any insurance firm in existence. And then replace the insurance firms too.


"What if they were liable?"

They'd be sued out of existence.

"In terms of being convincingly wrong, it's not like lawyers never make mistakes..."

They have malpractice insurance, they can potentially defend their position if later sued, and most importantly they have the benefit of appeal to authority image/perception.


All right, what if legal GPTs had to carry malpractice insurance? Either they give good advice, or the insurance rates will drive them out of business.

I guess you'd have to have some way of knowing that the "malpractice insurance ID" that the GPT gave you at the start of the session was in fact valid, and with an insurance company that had the resources to actually cover if needed...


It's funny how any conversation ends with this question unanswered.


Weirdly, HN is full of anti-AI people who refuse to discuss the actual point at hand and instead fall back on the same story about a wrong answer they once got. They present anecdotal evidence as truth, while there is no clear evidence on whether an AI lawyer is more or less likely to be wrong than a human. Surely an AI can remember more, and models have been shown to pass the bar exam.


"while there is no clear evidence if AI lawyer has more or less chance to be wrong than human."

In the tests they are shown to be pretty close. The point I made wasn't about more mistakes, but about other factors influencing liability and how it would be worse for AI than humans at this point.


> at this point.

This is the key point. Even if we assume the AI won't get better, the liability and insurance premiums will likely become comparable in the very near future. There is a clear business opportunity in insuring AI lawyers.


Are these safe? Being clones doesn't really inspire much confidence.

rarbg was my go-to for years :S


They're not clones, just proxies, so they are down too.

This is usually done to make it harder for the courts to force providers to block these sites. Whack-a-mole...

PS: Why am I getting downvoted? Try clicking on the links, you will see the same goodbye message.

Edit: Oh, the first 2 do seem to be clones; now I understand. The rest are proxies though. I do think even the clones basically lifted the original's database, so I doubt they will have much going forward.


Did you actually try any of these yourself? Some are clearly clones, and are not down.


The first two are 'clones' but they seem to have just taken a backup of the actual rarbg content from a few days ago (last torrents from the 25th of May!)


Not even that: they cloned the UI, but (some of) the torrents are stuff that was never on rarbg.


Agreed on the confusing page. I use notebooks every single day, but a quick glance at the README gave me zero indication that this is something I need.


On the other hand, the link description alone was enough to convince me that this is something I want.

Having a very specific target makes it easier to reach that target in writing, I guess, and harder for people outside the target to understand what it's about.


Well that was my point. I am part of the target audience and I would find this useful but it took me a while to realise that.

Of course, maybe I am just incompetent or missing knowledge that everyone else in the target has.


Well, other than Observable, reactive notebooks are not that common or well known (precisely because Jupyter, the most famous one, didn't support that model before).

So maybe today is the first day that you are exposed to that model and you learn about it? There's always a first time.
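For anyone new to the model: the core idea is that cells form a dependency graph over the variables they define and read, and editing one cell automatically re-runs its dependents. A minimal sketch in Python (purely illustrative, with made-up names; this is not the API of Observable or any real notebook tool):

```python
# Toy reactive-notebook runtime: each "cell" declares which names it
# defines and which it reads. Re-defining a cell re-runs it and then
# re-runs every cell that (transitively) reads one of its outputs.
class ReactiveNotebook:
    def __init__(self):
        self.cells = {}   # cell_id -> (code, defines, reads)
        self.env = {}     # shared namespace all cells execute in

    def set_cell(self, cell_id, code, defines, reads):
        self.cells[cell_id] = (code, set(defines), set(reads))
        self._run(cell_id)
        self._rerun_dependents(cell_id)

    def _run(self, cell_id):
        code, _, _ = self.cells[cell_id]
        exec(code, self.env)

    def _rerun_dependents(self, cell_id):
        # Names invalidated so far; grows as dependent cells re-run.
        _, defines, _ = self.cells[cell_id]
        dirty = set(defines)
        changed = True
        while changed:
            changed = False
            for cid, (_, d, r) in self.cells.items():
                # Re-run a cell that reads a dirty name, unless its own
                # outputs are already marked dirty (i.e. already re-run).
                if cid != cell_id and r & dirty and not d <= dirty:
                    self._run(cid)
                    dirty |= d
                    changed = True

nb = ReactiveNotebook()
nb.set_cell("a", "x = 2", defines=["x"], reads=[])
nb.set_cell("b", "y = x * 10", defines=["y"], reads=["x"])
nb.set_cell("a", "x = 5", defines=["x"], reads=[])  # cell "b" re-runs on its own
print(nb.env["y"])  # -> 50
```

In Jupyter's classic model you would have to remember to re-run cell "b" by hand after editing "a"; the reactive model makes stale results impossible, which is the whole pitch. Real implementations infer `defines`/`reads` by parsing the cell's code rather than asking the user to declare them.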

