Hacker News | ijk's comments

I'm not sure why you're using Zuckerberg's sites as examples of internet freedoms.

TFA mentions that EFF continues to post on Facebook and Instagram.

Surely if you read the article, you read the “But You're Still on Facebook and TikTok?” section and don’t need me to explain what it said - but I can summarize:

Twitter is unaligned with their goals and has dismal reach. Facebook and Instagram are unaligned with their goals but are how they reach a lot of new people.

Not super complicated, though if I am reading between the lines, calling out the numbers feels like a call to action for other orgs: run your own numbers, and get off Twitter.


> We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.

Given that social media posts are not free, in the sense that someone or something has to put in some effort to format the message for that particular site, I can see how a simple cost calculation would show that it is no longer worth it.
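That cost calculation can be sanity-checked against the quoted figures. This is a rough sketch: the midpoints of the stated ranges ("five to ten times a day", "50 and 100 million") are my assumptions, not EFF's exact numbers.

```python
# Back-of-the-envelope check of the per-post impression decline.
# Midpoints of the quoted ranges are assumptions.
posts_per_day_2018 = 7.5            # midpoint of "five to ten times a day"
impressions_per_month_2018 = 75e6   # midpoint of "50 and 100 million"
per_post_2018 = impressions_per_month_2018 / (posts_per_day_2018 * 30)

posts_last_year = 1500              # "our 1,500 posts"
impressions_last_year = 13e6        # "roughly 13 million ... for the entire year"
per_post_now = impressions_last_year / posts_last_year

ratio = per_post_now / per_post_2018
print(f"2018: ~{per_post_2018:,.0f} impressions/post")
print(f"now:  ~{per_post_now:,.0f} impressions/post")
print(f"ratio: {ratio:.1%}")  # ~2.6%, consistent with "less than 3%"
```

The midpoint assumption lands at about 2.6%, consistent with the article's "less than 3%" claim.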


They are posting the same content in virtually identical format to other Twitter clones. The whole process can be automated; the marginal cost is nothing.
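The automation in question can be as simple as trimming one message to each network's limit before handing it to that network's API. A toy sketch, where the network names and character limits are the commonly cited defaults, not anyone's actual pipeline:

```python
# Hypothetical cross-posting formatter: one message, trimmed per network.
# Limits are the commonly cited defaults and are assumptions here.
LIMITS = {"mastodon": 500, "bluesky": 300, "x": 280}

def format_for(network: str, text: str) -> str:
    """Truncate text to fit a network's limit, appending an ellipsis if cut."""
    limit = LIMITS[network]
    if len(text) <= limit:
        return text
    return text[: limit - 1] + "…"

post = "EFF update: " + "details " * 50
for network in LIMITS:
    print(network, len(format_for(network, post)))
```

The per-network posting call is the only part that differs, which is why the marginal cost of one more destination is near zero.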

I hope they ran the numbers and did some cold surveying/analysis/postmortem before deciding that.

What's worse, those aren't shitty ad impressions. Interested people will be following, maybe even expecting to see them. Ironically, other interested people will also be algorithmed into their orbit.

E.g. I read more of a blogger I like because I follow him on LinkedIn rather than via his RSS feed.


> Interested people will be following maybe even expecting to see them.

But they won't. That isn't how modern social networks work, and X definitely isn't an exception. The chronological feed of people you follow is long gone.


That is my point. Who sees them? Whoever the algo predicts will engage.

X suppresses posts from people you follow in favor of algorithmically boosted posts, so at scale the follow counts don't matter as much.

Show me how to uninstall it, because I've tried and failed.

Open Control Panel. OneDrive is in it. Click the 'Uninstall' button. Then click yes.

And this keeps it from reinstalling?

Yes.

It feels a lot like storing your data as an essay in a Word doc instead of a spreadsheet. It can work and all of the math is probably correct, but it's very much the wrong tool when the structured data was right there to be used instead.

The structured data is scattered all over the place. This does the very important work of aggregating it and bringing it together. If you had to do that manually, it could take weeks.

What’s the point of getting the wrong answer quickly?

https://news.ycombinator.com/item?id=47587662


Well, we’re just going in circles now. I just said LLMs cite what they find, so it’s not going to be the wrong answer if you do your due diligence.

Missing entries don’t get corrected by looking at the LLM output. That only helps when the LLM makes something up from thin air or mangles the output.
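The asymmetry can be illustrated with a toy example (the book titles are made up for illustration):

```python
# Toy illustration: checking each citation catches fabrications,
# but omissions are invisible without the full ground-truth list.
ground_truth = {"Book A", "Book B", "Book C"}   # what a complete search would find
llm_output = {"Book A", "Book B", "Book D"}     # one fabrication, one omission

fabricated = llm_output - ground_truth   # flagged by verifying each citation
missing = ground_truth - llm_output      # only found by redoing the search

print(fabricated)  # {'Book D'} — due diligence on citations catches this
print(missing)     # {'Book C'} — nothing in the output hints at this
```

Verifying what the model cited tells you nothing about what it never mentioned.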

Of course it’s not the kind of question you can get an objectively correct answer to, but you could come up with the correct answer for a given methodology.


Isn't verifying sources a much harder problem than just searching the list of works in the first place?

Especially in cases such as this. For well-known works of literature and music, structured data already exists.


Doing extra work in step 2 because you got lazy in step 1 is not my idea of efficient or complete.

It’s a long way from “got lazy” to “didn’t write their own internet scraper to scan for books, authors’ ages, and opinions.”

That depends on how much more quickly and efficiently you can do the extra work in step 2 than in step 1.

In this case it’s strictly less efficient.

You can only correct for missing entries by doing the same work you’d need to start from scratch. But after that you now have a second list to consider.


Also, the rules and norms of the subreddit have changed over time, which has led to spin-off subreddits that serve those purposes.

That's not necessarily a downside for traffic safety, though. Though I imagine someone must have studied the effects of various wavelengths on drivers...


Advertisers definitely did - there's (some) money in billboards, but only as long as you don't kill your prospective customers.

Which is GOG's selling point, versus Steam.


Everything is DRM free and they provide offline installers. They are also proactive in making sure the games they sell run on modern systems.


I've been reaching for BAML when I really need prompt iteration at speed.


This matches my experience with DSPy. I ended up removing it from our production codebase because, at the time, it didn't work as effectively as just using Pydantic and so forth.

The real killer feature is the prompt compilation; it's also the hardest to get to an effective place, and I frequently found myself needing more control over the context than it would allow. This was a while ago, so things may have improved. But good evals are hard, and the really fancy algorithms will burn a lot of tokens to optimize your prompts.
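For context, the "just using Pydantic" approach mentioned above amounts to validating the model's raw JSON against a typed schema. A minimal sketch, where the `Answer` schema and the `raw` string are illustrative stand-ins, not anything from DSPy:

```python
# Sketch of schema-validating LLM output with Pydantic (v2 API).
# The schema and the raw string are illustrative assumptions.
from pydantic import BaseModel, ValidationError

class Answer(BaseModel):
    summary: str
    confidence: float

raw = '{"summary": "looks fine", "confidence": 0.9}'  # stand-in for an LLM reply

try:
    parsed = Answer.model_validate_json(raw)
    print(parsed.summary, parsed.confidence)
except ValidationError as err:
    # Malformed or mistyped output fails loudly instead of propagating.
    print("invalid model output:", err)
```

You lose the prompt-optimization machinery, but you keep full control over the prompt and context while still getting structured, type-checked output.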


Yes! I have also felt this. I highly recommend taking a look at Maxime's template adapter: https://github.com/dspy-community/dspy-template-adapter

I think it solves some of this friction!


Yeah, there's often a heavy instruction and recency bias that just squeezes all of the nuance and subtlety out of it.

