The current title (“Pakistani newspaper mistakenly prints AI prompt with the article”) isn’t correct: it wasn’t the prompt that was printed, but trailing chatbot fluff:
> If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout—perfect for maximum reader impact. Do you want me to do that next?
The article in question is titled “Auto sales rev up in October” and is an exceedingly dry slab of statistic-laden prose, of the sort that LLMs love to err in (though there’s no indication of whether they have or not), and for which alternative (non-prose) presentations can be drastically better. Honestly, if the entire thing came from “here’s tabular data, select insights and churn out prose”… I can understand not wanting to do such drudgework.
The newspaper in question is Pakistan's English-language "newspaper of record", which has a wide readership.
For some reason, they rarely add graphs or tables to financial articles, which I have never understood; their readership is all college-educated. One time I read an op-ed where the author wrote something like: if you go to this government webpage, take the data, put it into Excel, and plot this thing against that thing, you will see X trend.
Why would they not just take the Excel chart, clean it up, and put it in the article?
Gemini has been doing this to me at the end of basically every single response for the past few weeks now, and it often seems to send subsequent responses off track and lower their quality as all these extra tangents start polluting the context. Not to mention how distracting it is: it throws off the reply I was already halfway through composing by the time I read it.
In similar "news" re: tangents, I've noticed Claude suddenly starting to give up on problem analyses. After about three rounds of trying to figure out something I told it is a requirement, it suggests we don't need to do that since it's only a nice-to-have. In auto-accept mode it'll even just start doing the other thing. I have to catch it and tell it not to effing give up so easily and to stop giving me BS excuses that turn hard requirements (literally the reason we're doing what we're doing) into a nice-to-have it can skip.
I guess they recently re-trained on too much "perpetually junior senior dev" stuff.
Add "Complete this request as a single task and do not ask any follow-up questions." Or some variation of that. They keep screwing with default behavior, but you can explicitly direct the LLM to override it.
This is why I wish chat UIs had separate categories of chats (with a few generic system prompts) that let you have more back-and-forth style discussions, or more "answers only" without any extra noise, or even an "exploration"/"tangent" slider.
The fact that system prompts / custom instructions have to be typed in manually in every major LLM chat UI is a missed opportunity, IMO.
I think AI should present those continuation prompts as dynamic buttons, like "Summarize", "Yes, explain more" etc. based on the AI's last message, like the NPC conversation dialogs in some RPGs
You can if you script the request yourself, or you could have a front end that lets you cut those paragraphs out of the conversation. I only say that because yesterday I followed this guide: https://fly.io/blog/everyone-write-an-agent/ except I had to figure out how to do it with the Gemini API instead. The context is always just (essentially) a list of strings (or "parts" anyway; it doesn't have to be strings) that you pass back to the model, so you can make the context whatever you like. It shouldn't be too hard to make a front end that lets you edit the context, and it's fairly easy to mock up if you just put the request in a script and build on it.
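A minimal sketch of that idea: keep the history yourself in the Gemini-style "contents" shape (a list of role/parts dicts), and prune the trailing follow-up-offer paragraphs from model replies before they ever re-enter the context. The `prune_reply` heuristic and the sample turns here are hypothetical illustrations, not anything from the linked guide; the actual API request is omitted since this is just about editing the context list.

```python
import re

# Hypothetical heuristic: trailing paragraphs that look like follow-up
# offers ("If you want, I can also ... Do you want me to ...?").
FOLLOW_UP = re.compile(r"^(If you want|Would you like|Do you want)", re.IGNORECASE)

def prune_reply(text: str) -> str:
    """Strip trailing follow-up-offer paragraphs from a model reply."""
    paragraphs = text.split("\n\n")
    while paragraphs and FOLLOW_UP.match(paragraphs[-1].strip()):
        paragraphs.pop()
    return "\n\n".join(paragraphs)

def append_turn(history: list, role: str, text: str) -> None:
    """Add a turn; model turns get pruned so the fluff never pollutes context."""
    if role == "model":
        text = prune_reply(text)
    history.append({"role": role, "parts": [{"text": text}]})

# The context is just this list; edit it however you like before re-sending.
history = []
append_turn(history, "user", "Summarize October auto sales.")
append_turn(history, "model",
            "Sales rose 14% year on year.\n\n"
            "If you want, I can also create a snappier version. "
            "Do you want me to do that next?")
print(history[-1]["parts"][0]["text"])  # -> Sales rose 14% year on year.
```

The same `history` list is what you'd pass as `contents` in the next request, so the pruning happens once and stays pruned for every later turn.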
For years, both the financial and sports news sides of things have generated increasingly templated "articles", this just feels like the latest iteration.
This dates back to at least the late 1990s for financial reports. A friend demoed such a system to me at that time.
Much statistically based news (finance, business reports, weather, sport, disasters, astronomical events) is heavily formulaic and can, at least in large part or for the initial report, be automated, which speeds information dissemination.
Of course, it's also possible to distribute raw data tables, charts, or maps, which ... mainstream news organisations seem phenomenally averse to doing. Even "better" business-heavy publications (FT, Economist, Bloomberg, WSJ) do so quite sparingly.
A few days ago I was looking at a Reuters report on a strategic chokepoint north of the Philippines, which the Philippines and the US are looking to as a way to help contain possible Chinese naval operations. Lots of pictures of various equipment, landscapes, and people. Zero maps. Am disappoint.
I believe the word is depression, which seems apt when thinking of the idea of people using AI to make content longer and then the readers all using AI to make it shorter again.
But there's the approach the Economist takes. For many decades, it's relied on a three-legged revenue model: subscriptions, advertising, and bespoke consulting and research through the Economist Intelligence Unit (EIU). My understanding is that revenues are split roughly evenly amongst these, and that they tend to even out cash-flow throughout economic cycles (advertising is famously pro-cyclical, subscriptions and analysis somewhat less so).
To that extent, the graphs and maps the Economist actually does include in its articles (as well as many of its "special reports") are both teasers and loss-leader marketing for EIU services. I believe that many of the special reports arise out of EIU research.
...
The rules for the race: Both contenders waited for Denny's, the diner company, to come out with an earnings report. Once that was released, the stopwatch started. Both wrote a short radio story and got graded on speed and style.
StatSheet, an online platform covering college basketball, runs entirely on an automated program. In 2006, Thomson Reuters announced its switch to automation to generate financial news stories on its online news platform. Reuters used a tool called Tracer. An algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped.
Sports and financial are the two easiest to do since they both work from well structured numeric statistics.
> Quakebot is a software application developed by the Los Angeles Times to report the latest earthquakes as fast as possible. The computer program reviews earthquake notices from the U.S. Geological Survey and, if they meet certain criteria, automatically generates a draft article. The newsroom is alerted and, if a Times editor determines the post is newsworthy, the report is published.
> The computer program reviews earthquake notices from the U.S. Geological Survey
Probably a service that is provided to the general public for free, similar to NOAA and weather data - so chances are rather high it ends up on the chopping block or for-money only.
In the mid-to-late aughts, there used to be a content farm called "Associated Content". They would get daily lists of top searched terms from various search engines (Yahoo, Dogpile, AltaVista, etc.) and, for each search term, pay an English major to write a two-page fluff article. Regardless of the topic, they churned out articles by the bushel. Then they placed ads on these articles and sat back and watched the dollars roll in.
A non-"AI" template is probably getting filled in with numbers straight from some relevant source. AI may produce something more conversational today, but as someone else observed, this is a high-hallucination point for them. Even if they get one statistic right, they're pretty inclined to start making up statistics that weren't provided to them at all, so long as they sound good.
Not just that; we know from heavy Reddit posters that they have branching-universe templates for all eventualities, so that they are "ready" whatever the outcome.
I guess in the end the journalist didn't feel it necessary to impact his readers with punchy one-line stats and bold, infographic-ready layouts, considering he opted for the first draft.
Nobody outside Pakistan knows Dawn, even though it was founded by Muhammad Ali Jinnah (considered the founding father of the nation) and is one of the country's largest and most prestigious newspapers as well.
It is like the NYT for the country. But the relevant detail here is the printing of the prompt in a nationally recognized newspaper. The brand, as local as it may be, still provides more context than some random newspaper in a foreign country would.
And I have run into the Dawn newspaper on the Google News front page several times, usually for entertainment stuff.
Do we know it was an AI? I realize that it rings with a sycophantic tone that the AIs love to use, but I've worked with some humans who speak the same way. AIs didn't invent brownnosing.