The fact that they chose to tie it to and advertise it as "get paid to see ads" is a significant turn-off in my mind even if the rest of the ecosystem theoretically works in functionally the same way.
In my mind, the entire point is to get away from advertising as a revenue stream entirely. I want to pay for the things I consume. If the advertising market has decided that my page impression is worth less than pocket change, I'd far rather just give that money to the publisher directly and avoid ads being part of the equation.
The core idea behind BAT isn't bad, but the marketing is pretty terrible if you're targeting people like me.
I think it is bad because it legitimizes the bad practices of the marketing industry. "How bad could grabbing as much data as possible from the population really be? We're sharing our profits!"
Not true. I've done SRED every year for the past ~7 years. It is work, but there are specialized consultants that do most of it. If the work is truly R&D (which would be the case for a cutting-edge AI company) and you track your work in JIRA or something like that, then it's mostly just writing a few pages describing the efforts.
I get your sentiment, but I think it's important for science communication to adapt to the times. Decades ago (and even as little as one decade ago), most scientists (maybe Hawking being the exception) who would dare appear in these 1hr documentaries would be belittled by the "hardcore" scientists with the same words you used "Science should not be over-simplified like that", "they are not real scientists, they just want to be on TV", etc.
The truth is that young people are mostly on TikTok et al, so this type of content needs to get there.
> "Science should not be over-simplified like that",
It is a difficult balance to strike, but science should be
> as simple as possible, and no simpler
The hard truth is that any simpler means inaccurate, which means when educating the public there __are__ inaccuracies. So the balance to strike is accuracy vs understanding. Most people do not understand the Schrodinger's Cat thought experiment, wave-particle duality, or many similar things (like how my namesake's Incompleteness Theorems, superposition, and the Halting Problem are linked), yet will confidently "correct" explanations that are in fact more accurate (even "teaching" people who are clearly far more qualified than they are).
But think about it this way: the people getting mad at the over-simplification are not a force preventing public explanation, but rather a pressure to find a better and more accurate explanation. The problem is when we frame these things as adversarial in the sense of enemies fighting, rather than adversarial in the sense of improving one's own position, arguments, and discourse. Both sides can benefit from reframing these interactions: don't treat positions as extensions of one's intellect, but recognize that statements stand on their own. We are better as a coalition than separated, even with disagreement (especially with it). It is clear that everyone here is on the same side, after all, as all parties involved are seeking the same goal: better public education.

It then becomes our duty to read between the lines and extract what the actual complaint is, because this is often difficult to express (without a lengthy process) given that we do not know one another's priors. We should not confuse critiques for attacks or dismissals, nor should we dismiss or attack when we should critique! Attacking is acceptable when errors are egregious or when people intentionally mislead, and unfortunately there is a lot of that, but let's also distinguish idiocy from maliciousness, as the former can be fixed (if we approach it with the above framing).
I wouldn't say the TikTok audience is a pond where you'll find future scientists. I'd rather invest in promoting alternative spaces, both virtual and local.
Yeah that's a fair point. As an early career scientist myself now and as someone not that interested in current social media trends, I certainly do risk being in the same spot as those 'hardcore' scientists.
What if they were liable? Say the company that offers the LLM lawyer is liable. Would that make this feasible? In terms of being convincingly wrong, it's not like lawyers never make mistakes...
You'd require them to carry liability insurance (this is usually true for meat lawyers as well), which basically punts the problem up to "how good do they have to be to convince an insurer to offer them an appropriate amount of insurance at a price that leaves the service economically viable?"
Given orders of magnitude better cost efficiency, they will have plenty of funds to lure in any insurance firm in existence. And then replace insurance firms too.
"In terms of being convincingly wrong, it's not like lawyers never make mistakes..."
They have malpractice insurance, they can potentially defend their position if later sued, and most importantly they have the benefit of appeal to authority image/perception.
All right, what if legal GPTs had to carry malpractice insurance? Either they give good advice, or the insurance rates will drive them out of business.
I guess you'd have to have some way of knowing that the "malpractice insurance ID" that the GPT gave you at the start of the session was in fact valid, and with an insurance company that had the resources to actually cover if needed...
Weirdly, HN is full of anti-AI people who refuse to discuss the point actually being discussed and instead fall back on the same story about a wrong answer they once got. Then they present anecdotal evidence as truth, while there is no clear evidence whether an AI lawyer is more or less likely to be wrong than a human. Surely an AI can remember more, and it has been shown to pass the bar exam.
"while there is no clear evidence if AI lawyer has more or less chance to be wrong than human."
In the tests they are shown to be pretty close. The point I made wasn't about more mistakes, but about other factors influencing liability and how it would be worse for AI than humans at this point.
This is the key point. Even if we assume the AI won't get better, the liability and insurance premiums will likely become comparable in the very near future. There is a clear business opportunity in insuring AI lawyers.
They're not clones. Just proxies so they are down too.
This is usually done to make it harder for the courts to force providers to block these sites. Whack-a-mole..
PS: Why am I getting downvoted? Try clicking on the links, you will see the same goodbye message.
Edit: Oh, the first 2 do seem to be clones; now I understand. The rest are proxies, though. I do think even the clones basically just copied the original's database, so I doubt they will have much going forward.
The first two are 'clones' but they seem to have just taken a backup of the actual rarbg content from a few days ago (last torrents from the 25th of May!)
On the other hand, the link description alone was enough to convince me that this is something I want.
Having a very specific target makes it easier to reach that target in writing, I guess, and harder for people outside the target to understand what it's about.
Well, other than Observable, reactive notebooks are not that common or well known (precisely because Jupyter, the most famous notebook, didn't support that model before).
So maybe today is the first day that you are exposed to that model and you learn about it? There's always a first time.
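For anyone new to the idea, here's a minimal sketch of what "reactive" means in this context. This is purely illustrative (not Observable's or any real notebook's API): each cell declares what it reads and writes, and changing one cell automatically re-runs everything downstream, instead of leaving stale results the way a classic Jupyter notebook does.

```python
# Toy reactive notebook: cells form a dependency graph over variable names.
# Changing a cell re-runs every cell that (transitively) reads its output.
# (No cycle detection; a real implementation would need it.)

class ReactiveNotebook:
    def __init__(self):
        self.cells = {}    # cell_id -> (reads, writes, fn)
        self.values = {}   # variable name -> current value

    def cell(self, cell_id, reads, writes, fn):
        """Define a cell and run it immediately."""
        self.cells[cell_id] = (reads, writes, fn)
        self._run(cell_id)

    def update(self, cell_id, fn):
        """Change a cell's code; dependents re-run automatically."""
        reads, writes, _ = self.cells[cell_id]
        self.cells[cell_id] = (reads, writes, fn)
        self._run(cell_id)

    def _run(self, cell_id):
        reads, writes, fn = self.cells[cell_id]
        self.values[writes] = fn(*(self.values[name] for name in reads))
        # Re-run every cell that reads what this cell just wrote.
        for other_id, (other_reads, _, _) in self.cells.items():
            if writes in other_reads:
                self._run(other_id)

nb = ReactiveNotebook()
nb.cell("a", reads=[], writes="x", fn=lambda: 2)
nb.cell("b", reads=["x"], writes="y", fn=lambda x: x * 10)
print(nb.values["y"])        # 20
nb.update("a", lambda: 5)    # unlike plain Jupyter, cell "b" re-runs by itself
print(nb.values["y"])        # 50
```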