Surely if you read the article you read the “But You're Still on Facebook and TikTok?” section and don't need me to explain what it said, but I can summarize:
Twitter is unaligned with their goals and has dismal reach. Facebook and Instagram are also unaligned with their goals, but they are how they reach a lot of new people.
Not super complicated, though if I am reading between the lines, calling out the numbers feels like a call to action for other orgs: suggesting they run their own numbers and get off Twitter.
> We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
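The "less than 3%" figure checks out from the numbers quoted above. A quick sketch, assuming the midpoints of the 2018 ranges (7.5 posts/day, 75 million impressions/month) and using last year's figures:

```python
# 2018 baseline, using midpoints of the ranges quoted in the post
posts_per_day_2018 = 7.5            # "five to ten times a day"
impressions_per_month_2018 = 75e6   # "between 50 and 100 million"
posts_per_month_2018 = posts_per_day_2018 * 30
per_tweet_2018 = impressions_per_month_2018 / posts_per_month_2018

# Last year: 1,500 posts earned roughly 13 million impressions total
per_post_now = 13e6 / 1500

ratio = per_post_now / per_tweet_2018
print(f"{per_tweet_2018:,.0f} then vs {per_post_now:,.0f} now -> {ratio:.1%}")
# prints "333,333 then vs 8,667 now -> 2.6%"
```

So per-post reach dropped from roughly a third of a million impressions to under nine thousand, consistent with the sub-3% claim.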
Given that social media posts are not free, in the sense that someone or something has to put some effort in to format the message for that particular site, I can see how a simple cost calculation would show that it is no longer worth it.
They are posting the same content in a virtually identical format to other Twitter clones.
The whole process can be automated; the marginal cost is nothing.
I hope they ran the numbers and did some cold surveying/analysis/postmortem before deciding that.
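To illustrate why the marginal cost is near zero: the per-platform work is mostly formatting one message to fit each site's constraints, which is trivially scriptable. A minimal sketch, with invented character limits (the real ones vary by platform):

```python
# Hypothetical cross-posting formatter; limits below are illustrative only.
LIMITS = {"mastodon": 500, "bluesky": 300, "threads": 500}

def format_for(platform: str, message: str, link: str) -> str:
    """Truncate the message so message + link fits the platform's limit."""
    budget = LIMITS[platform] - len(link) - 1  # one space before the link
    body = message if len(message) <= budget else message[:budget - 1] + "…"
    return f"{body} {link}"

msg = "A very long announcement " * 20
posts = {p: format_for(p, msg, "https://example.org/post") for p in LIMITS}
```

Sending each formatted post to the platform's API is the only remaining step, and that too can run unattended.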
What's worse is that those aren't shitty ad impressions. Interested people will be following, maybe even expecting to see them. Ironically, other interested people will also be algorithmed into their orbit.
E.g., I read more of a blogger I like because I follow him on LinkedIn rather than via his RSS feed.
> Interested people will be following maybe even expecting to see them.
But they won't. That isn't how modern social networks work, and X definitely isn't an exception. The chronological feed of people you follow is long gone.
It feels a lot like storing your data as an essay in a Word doc instead of a spreadsheet. It can work and all of the math is probably correct, but it's very much the wrong tool when the structured data was right there to be used instead.
The structured data is scattered all over the place. This does the very important thing of aggregating it and bringing it together. If you had to do that manually, it could take weeks.
Missing entries don’t get corrected by looking at the LLM output. That only helps when the LLM makes something up from thin air or mangles the output.
Of course it’s not the kind of question you can get an objectively correct answer for, but you could come up with the correct answer for a given methodology.
You can only correct for missing entries by doing the same work you’d need to start from scratch. But after that you now have a second list to consider.
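The "second list to consider" point can be made concrete: once you have both a hand-built list and an LLM-built list, the disagreement between them is the part that needs review. A small sketch with invented entry names:

```python
# Hypothetical entries; the names are placeholders, not real data.
manual = {"Entry A", "Entry B", "Entry C"}   # built from scratch by hand
llm    = {"Entry B", "Entry C", "Entry D"}   # aggregated by the LLM

missed_by_llm = manual - llm   # gaps in the LLM's aggregation
only_in_llm   = llm - manual   # hallucinated, or genuinely found by the LLM
```

The second set is the interesting one: each entry there is either a fabrication or something the manual pass missed, and only more manual work can tell which.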
That's not necessarily a downside for traffic safety. Though I imagine someone must have studied the effects of various wavelengths on drivers...
This matches my experience with DSPy. I ended up removing it from our production codebase because, at the time, it didn't work as effectively as just using Pydantic and so forth.
The real killer feature is the prompt compilation; it's also the hardest to get to an effective place and I frequently found myself needing more control over the context than it would allow. This was a while ago, so things may have improved. But good evals are hard and the really fancy algorithms will burn a lot of tokens to optimize your prompts.
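For context, the "just using Pydantic" alternative amounts to validating the model's JSON output against an explicit schema rather than compiling prompts. A minimal sketch; the `Ticket` model and its fields are hypothetical, not from the original comment:

```python
# Validate raw LLM output against a schema with Pydantic (v2 API).
from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    title: str
    priority: int

raw = '{"title": "Fix login bug", "priority": 2}'  # pretend LLM output
try:
    ticket = Ticket.model_validate_json(raw)
except ValidationError:
    ticket = None  # in a real pipeline: retry the call or fall back
```

You give up automatic prompt optimization, but you keep full control over the context window, which was the sticking point above.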