I think if they'd teased a phone game it would have been well received. From memory, the problem was they teased something much larger/more exciting (a new Diablo, not a Chinese ARPG reskin), so when the reveal hit everyone was massively let down.
I guess this is kind of similar though. What is promised isn't, and likely won't be, delivered.
I wouldn't mind if the conversations actually were smart, not sycophantic, or were otherwise useful. I find more often than not it creates more work for me than it saves, and even if I were to break even on time invested I lose massively on comprehension/understanding.
I'd rather see this technology used as a method of input and control of the machine, just like keyboard, mouse and monitor are. Without humanized bits of conversation or suggestions, or exaggerated remarks like "Got it!" - you would ask the OS to perform a task and it would do what it was told. And if I had a specific question or task I'd use some dedicated application that would stay dormant until used.
Basically sums up why I still don't use any kind of voice assistant. Until the computer can DO exactly and precisely what I asked -- not what its faulty recognition model thinks I asked -- there's zero point in trying to talk to it.
I have one device in my house on Alexa called "under cabinet lights". I asked Alexa to "turn on the cabinet lights", and she said "several things share the name cabinet lights, which one do you mean?" (as if there's enough information there to answer the question). I told her "all of them", thinking maybe the lights got duplicated or something.
She turned on every smart home light in the house.
Sex is great, but if you constantly try to force it on me, sneak it into deals we make, or do it while I sleep, etc., it will leave me quite hostile towards the topic, and that's where we are at. Consent is important in all things, no less here.
Plus, if it does happen, folks need to learn a bunch of new hostile stuff. Given how Linux is taking off, why not just move to treating Linux as the first-class platform?
> why not just move to treating Linux as the first-class platform
This is where the argument goes back to "Win32 is the most stable API in Linux land." There is no such thing as *the* Linux API, so that would have to be invented first. Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that. Don't get me wrong, I primarily use and love Linux, but reality is quite complicated.
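To make the breakage concrete, here's a minimal sketch (the binary path is hypothetical) that uses `ldd` to list the shared libraries an old binary can no longer resolve on a newer distro:

```python
import subprocess

def missing_libs(binary_path: str) -> list[str]:
    """Run ldd on a binary and return the shared libraries it can't resolve."""
    out = subprocess.run(
        ["ldd", binary_path], capture_output=True, text=True
    ).stdout
    # ldd prints lines like "libpng12.so.0 => not found" for broken dependencies
    return [line.split()[0] for line in out.splitlines() if "not found" in line]

# e.g. an app built against Ubuntu 16.04-era libraries (hypothetical path):
print(missing_libs("/opt/oldapp/bin/oldapp"))
```

Run that on a decade-old proprietary binary and you'll typically see a pile of versioned .so names that simply don't ship on the new release anymore.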
Stability has critical mass. When something is relied on by a small group of agile nerds, we tend not to worry about how fast we move or what is broken in the process. Once we have large organisations relying on a thing, we get LTS versions of OSes, etc.
The exact same is true here. If large enough volumes of folks start using these projects and contribute to them in a meaningful way, then we end up with less noisy updates: things continue to receive input from a large portion of the population, and updates begin to more closely resemble some sort of moving average rather than a high-variance process around that moving average. If not less noisy updates, then at least some fork that may be many commits behind, but which, when it does update things in a breaking way, comes with a version change and plenty of warning.
> Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that.
Yea, this is a really bad state of affairs for software distribution, and I feel like Linux has always been like this. The culture of always having source for everything perhaps contributes to the mess: the "oh, the user can just recompile" attitude.
macOS apps are primarily bound to versioned toolchains and SDKs, not to the OS version. If you are not using newer features, your app will run just fine. Any compatibility breaks are published.
Neither does Windows, tbh. You're not getting most early-2000s, let alone '90s, games working on W11 without a lot of time and effort having gone into getting them to work. E.g. try running the original (not GOG) Vampire: The Masquerade - Bloodlines, or Black & White, without the community patches. Running both in original form is feasible on Linux, but straight up not possible on W11 without patches.
I'm not trying to be recalcitrant; rather, I am genuinely curious. The reason I ask is that no one talks like an LLM, but LLMs do talk like someone. LLMs learned to mimic human speech patterns, and some unlucky soul(s) out there have had their voice stolen. Earlier versions of LLMs that more closely followed the pattern and structure of a Wikipedia entry were mimicking a style that was based off someone else's style, and given that some wiki users had prolific levels of contribution, much of their naturally generated text would register as highly likely to be "AI" via those bullshit AI-detector tools.
So, given what we know of LLMs (transformers at least) at this stage, it seems more likely to me that current speech patterns are again mimicry of someone's style rather than an organically grown/developed thing that is personal to the LLM.
Looks like AI to me too. Em dashes (albeit nonstandard) and the ‘it’s not just x, it’s y’ ending phrases were everywhere. Harder to put into words but there’s a sense of grandiosity in the article too.
Not saying the article is bad, it seems pretty good. Just that there are indications.
The content ChatGPT returns is non-deterministic (you will get different responses on the same day for the same email), and these models change over time. Even if you're an expert in your field and you can assess that the chatbot returned correct information for one entry, that's not guaranteed to be repeated.
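To make the non-determinism concrete, here's a minimal sketch (assuming the current OpenAI Python SDK; the model name, prompt, and email are placeholders): the same request made twice can return different text.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify(email_text: str) -> str:
    """Ask the model whether an email looks like a scam; output may vary per call."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"Is this email a scam?\n\n{email_text}"}],
    )
    return resp.choices[0].message.content

email = "Your account is locked. Click here to verify."  # toy example
print(classify(email))
print(classify(email))  # same input; sampling means the verdict can differ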
You're staking personal reputation on the output of something you can expect to be wrong. When someone gets a suspicious email, follows your advice, and ChatGPT incorrectly assures them that it's fine, then the scammed person would be correct in thinking you're a person with bad advice.
And if you don't believe my arguments, maybe just ask ChatGPT to generate a persuasive argument against using ChatGPT to identify scam emails.
This blog post isn't human speech, it's typical AI slop. (heh, sorry.)
Way too verbose to get the point across, excessive use of un/ordered bullets, em dashes, "what I reported / what Coinbase got wrong" - it all reeks of slop.
Once you notice these micro-patterns, you can't unsee them.
Would you like me to create a cheat sheet for you with these tell tale signs so you have it for future reference?
Overuse of bold markup, particularly to begin each bullet point.
Overuse of "Here's..." to introduce or further every concept or idea.
A few parts of this article particularly jump out, such as the two lists following the "The SMS Flooding Attack" section (which incidentally begins "Here's where..."). A human wouldn't write them as lists (the first list in particular); they'd be normal paragraphs. Short bulleted lists are a good way to get simple bite-sized pieces of information across quickly, but that's in cases where people aren't going to read a large block of text, e.g. in ads. Overusing them in the wrong medium, breaking up a piece of prose like this, just hurts its flow and readability.
Sorry, but I think you just don't know a lot about LLMs. Why did they start spamming code with emojis? It's not because that is what people actually do, something that is in the training data. It's because someone reinforcement-trained the LLM to do it by asking clueless people if they prefer code with emojis.
And so at this point the excessive bullet points and similar filler trash are also just an expression of whatever stupid people think they prefer.
Maybe I'm being too harsh, and it's not that the raters are the stupid ones in this constellation; rather, it's the people who thought you could improve the LLM by asking raters to make a few very thin judgements.
I know the style that most LLMs are mimicking quite well, and I also know people who wrote like that prior to the LLM deluge that is washing over us. The reason people are choosing to make LLMs mimic those behaviours is that the style used to be associated with high-effort content. The irony is that it is now associated with the lowest-effort content. The further irony is that I have stopped proofreading my comments and put zero effort into styling or flow, because right now the only human thing left to do is make low-effort content of the kind only a human can.
Just chiming in here - any time I've written something online that considers things from multiple angles or presents more detailed analysis, the likelihood that someone will ask if I just used ChatGPT goes way up. I worry that people have gotten really used to short, easily digestible replies, and conflate that with "human". Because of course it would be crazy for a human to expend "that much effort" on something /s.
EDIT: having said that, many of the other articles on the blog do look like what would come from AI assistance. Stuff like pervasive emojis, overuse of bulleted lists, excessive use of very small sections with headers, art that certainly appears similar in style to AI generated assets that I've seen, etc. If anything, if AI was used in this article, it's way less intrusive than in the other articles on the blog.
Author here - yes, this was written using guided AI. I consider this different from giving a vague prompt and telling it to write an article. My process was to provide all the information myself. For example, I used AI to:
1. transcribe the phone call into text using a Whisper model (a sketch of this step follows the list)
2. review all the email correspondence
3. research industry news about the breach
4. brainstorm different topics and blog structures to target based on the information, then pick one
5. review the style of my other blog articles
6. write the article and redact any personal info
7. review the article and iterate on changes multiple times
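For step 1, a minimal sketch of what that transcription might look like (assuming the open-source openai-whisper package; the audio filename is hypothetical):

```python
import whisper  # pip install openai-whisper

# Load a small pretrained model; "medium"/"large" trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe the recorded call to text (hypothetical filename).
result = model.transcribe("support_call.mp3")
print(result["text"])
```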
To me this is more akin to having a writer on staff who can save you a lot of time. I can do all of the above in less than 30 minutes, whereas it could take a full day to do it manually. I had a blog 20 years ago, but since then I never had time to write content again (too time-consuming and no ROI) - so the alternative would be nothing.
There are still some signs that tell you content is AI-written, based on verbosity, use of bold, specific HTML styling, etc. I see no issues with the approach. I've noticed some people have an allergic reaction to any hint of AI, and when the content produced is "fluff" with no real substance I get annoyed too - however, that isn't the case for all content.
The issue is that the article is excessively verbose; the time you saved in writing and editing comes at the cost of wasting readers' time. There is nothing wrong with using AI to improve writing, but using it to insert fluff that came at no cost to you and no benefit to me feels like a violation of the social contract.
Please, at least put a disclaimer on top so I can ask an AI to summarize the article and complete the cycle of entropy.
> [...] I can do all the above in less than 30mins, where it could take a full day to do it manually [...]
Generating thousands of words because it's easy is exactly the problem with AI-generated content. The people generating AI content think about quantity, not quality. If you have to type out the words yourself, if you have to invest the time and energy into writing the post, then you're showing respect for your readers by making the same investment you're asking them to make... and you're creating a natural constraint on verbosity, because you're spending your valuable time.
Just because you can generate 20 hours of output in 30 minutes doesn't mean you should. I don't really care about whether or not you use AI on principle; if you can generate great content with AI, go for it. But your post is classic AI slop: it's a verbose nightmare, it's words for the sake of words, it's from the quantity-over-quality school of slop.
> I had a blog 20 years ago but since then I never had time to write content again (too time consuming and no ROI) - so the alternative would be nothing.
Posting nothing is better than posting slop, but you're presenting a false dichotomy. You could have spent the 30 minutes writing the post yourself and posted 30 minutes of output. Or, if you absolutely must use ChatGPT to generate blog posts, ask it to produce something that is a few hundred words at most. Remember the famous quote...
"If I had more time, I would have written a shorter letter."
If ChatGPT can do hundreds of hours of work for you then it should be able to produce the shortest possible blog post, it should be able to produce 100 words that say what you could in 3,000. Not the other way around!
If you can't be bothered to spend even an hour writing something up, especially allegations of this magnitude, then chances are you know it's actually not an article with any content worth reading.
Sure, the problem here isn't a lack of veracity with regard to your source material. Many readers are also concerned with the style and prose of the articles they read. I don't particularly care that the complete article wasn't written by a human. The generic LLM style, however, is utterly unbearable to me. It is overly sensational and verbose, while lacking normal-sized paragraphs of natural text. It's reminiscent of a poor comic, except extrapolated to half the stuff that gets posted to HN.
I get you; it grinds my gears. I've been told that I "talk" like an LLM because I go into detail and give thorough explanations on topics. I'm not easily insulted, but that was a first for me. I used to get "human wikipedia" before, and before that "walking dictionary", which I always thought was reductive, but it didn't quite irk me as much as being told my entire way of communicating is reminiscent of a bot. So perhaps I take random accusations of LLM use to heart, even if it does seem overwhelmingly likely to be true.
You're getting downvoted for being right. Attempt being nuanced and people will call you a robot.
Well if that's how we identify humans I for one prefer our new LLM overlords.
A lot of people who say stuff like "boo AI!" are not only setting the bar for humanity very low, they're also discouraging intellectualism and intelligent discourse online. Honestly, if an LLM wrote a good think piece, I'd prefer that over "human slop".
I just wish people would critique a text on its own merits instead of inventing strawman arguments about how it was written.
Oh and, for the provocative effect — I'll end my comment with an em dash.
I dunno man, looks like Goodhart's law in action to me. That isn't to say the models won't be good at what is stated, but it does mean it might not signal a general improvement in competence but rather a targeted gain, with more general deficits rising up in untested/ignored areas, some of which may or may not be catastrophic. I guess we will see, but for now Imma keep my hype in the box.
That's like accepting Vader's "altered" deal and being grateful it hasn't been altered further.
If Google wants a walled garden, let it wall off its own devices, but what right does it have to command other manufacturers to bow down as well? At this stage we've got the choice of dictato-potato phone prime, or misc flavour of peasant.
If you want a walled garden, go use Apple. The option is there. We don't need to bring that here.
A "Google Certified Device" is any device that has GMS (Google Mobile Services) installed - ergo almost all of them. It's worth noting that a _lot_ of apps stop functioning when GMS is missing, because Google has been purposefully putting as much functionality as possible into GMS instead of into AOSP. So you end up in a situation where, to make an Android phone compatible with most apps, you need GMS. Which in turn means you need your phone to be Google Certified, and hence it must implement this specification.