Yes, please! I also regularly write in three different languages (Spanish, French, and English in my case), and typing on my phone is insufferable compared to a keyboard; I can't really interact fluently with my phone and third-party services.
I can only speak positively of Hetzner. I particularly like that they have a Terraform provider.
I've only used them for personal projects, though, so I can't speak to their suitability in a corporate environment, in particular their support. That's quite an important factor when working in a corporate setting.
I wish they already offered that S3-like service they've been announcing for some time. It would be really handy.
They have outstanding professional support with in-depth Linux knowledge, and they provide help even when it's out of scope, e.g. a failed distro update on a VPS.
I've now been working with cloud stuff for around 8 years. Not a lot of years compared with many of you here, but still enough to feel confident that my dislikes are more than just ignorance or inexperience.
I really wish AWS would concede the IaC war and stop putting resources into CloudFormation. Nothing makes me suffer as much as having to work with CF. The only worse thing I can think of is having to interact with Azure, which around 5 years ago was a terrible experience all around with regard to automation.
Going back to my CF rant: as soon as you get into any amount of complexity (and this also includes CDK, as it inherits all of CF's problems), for example using nested stacks and custom resources, it becomes almost impossible to troubleshoot incidents and problems. Error messages are obtuse. Fail states are too frequent. Update and deploy times are incredibly slow. Working with CF makes me reconsider my whole job every time. I curse the day I chose to ignore my general caution with CF and went for a database (OpenSearch) managed with CDK.
There's a night-and-day difference between managing infrastructure state with Terraform and with CF. Terraform also has its quirks and warts, of course, but at the very least there's very little you can't recover from yourself. And it's also fast enough. CF is mostly a black box of misery.
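For what it's worth, when Terraform state does get out of sync, you can usually do the surgery yourself from the CLI. A minimal sketch, assuming an AWS-provider setup; the resource address and domain name are made up for illustration:

    # see what Terraform currently tracks in its state
    terraform state list

    # drop the broken entry from state without touching the real infrastructure
    terraform state rm aws_opensearch_domain.example

    # re-adopt the real resource under the same address (the import ID is the domain name)
    terraform import aws_opensearch_domain.example my-domain

    # verify the plan is clean again
    terraform plan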
Which is mostly what I feel also happens with LLMs producing code: useful to start with, but not more than that. We programmers still have a job. For the moment.
Producing code is like producing syntactically correct algebra. It has very little value on its own.
I’ve been trying to pair system design with ChatGPT and it feels just like talking with a person who’s confident and regurgitates trivia, but doesn’t really understand. No sense of self-contradiction, doubt, curiosity.
I’m very, very impressed with the language abilities and the regurgitation can be handy, but is there a single novel discovery by LLMs? Even a (semantic) simplification of a complicated theory would be valuable.
I wouldn't blindly trust what the LLM says, but I take it to be mostly right, and that gives me at the very least explorable vocabulary that I can expand on my own, or keep grilling it about.
I've already used some LLMs to ask questions about licenses and legal consequences of software-related matters, and they gave me a base without having to involve a very expensive professional for what are mostly questions about hobby projects.
If there were a significant amount of money involved in the decision, though, I would of course use the services of a professional. These are the kinds of topics where you can't be "mostly right".
I don't understand how everyone keeps making this mistake over and over.
They explicitly just said "in 5-10 years".
So many people keep using arguments that revolve around "I used it once and it wasn't the best and/or it messed things up", and imply that this will always be the case.
There are many solutions already for knowledge editing, there are many solutions for improving performance, and there will very likely continue to be many improvements across the board for this.
It took ~5 years from when people in the NLP literature noticed BERT and knew the powerful applications that were coming, until the public at large was aware of the developments via ChatGPT.
It may take another 5 before the public sees the developments happening now in the literature hit something in a company's web UI.
> It took ~5 years from when people in the NLP literature noticed BERT and knew the powerful applications that were coming, until the public at large was aware of the developments via ChatGPT. It may take another 5 before the public sees the developments happening now in the literature hit something in a company's web UI.
It also may take 10, 20, 50, or 100 years. Or it may never actually happen. Or it may happen next month.
The issue with predicting technological advances is that no one knows how long it'll take to solve a problem until it's actually solved. The tech world is full of seemingly promising technologies that never actually materialized.
Which isn't to say that generative AI won't improve. It probably will. But until those improvements actually arrive, we don't know what those improvements will be, or how long it'll take. Which ultimately means that we can only judge generative AI based on what's actually available. Anything else is just guesswork.
I'm concerned that until they do improve, we're in a weird place. For example, if you were 16, would you go and invest a bunch of time and money studying law with the prospect of this hanging over your future? Same for radiology: would you go study that now that Geoffrey Hinton has proclaimed the death of radiologists in 3 years or whatever? Photography and filmmaking?
My concern is that we're going to get to a place where we think the machines can just take over all the important professions, but they're not quite there yet. Meanwhile, people don't bother learning those professions because they look like a career dead end, so we end up with a skill shortage and mediocre services, and when something goes wrong, you just have to trust that "the machine" was correct.
How do we avoid this? It's almost as if we need government-funded "career insurance" or something like that.
I'm not so sure that truth and trustworthiness are something we can just hand-wave away as things they'll sort out in just a few more years. I don't think a complex concept like whether or not something is actually true can simply be tacked onto models whose core function is to generate what they think the next word of a body of text is most likely to be.
On the other hand, the rate of change isn't constant, and there's no guarantee that the incredible progress of the past ~2 years in the LLM/diffusion/"AI" space will continue. As an example, take computer gaming graphics: compare the evolution between Wolfenstein 3D (1992) and Quake 3 Arena (1999), which is an absolute quantum leap. Now compare Resident Evil 7 (2017) and Alan Wake 2 (2023): it's an improvement, but nowhere near the same scale.
We've already seen a fair bit of stagnation in the past year as ChatGPT gets progressively worse, with the company focusing more on neutering results to limit its exposure to legal liability.
Yes, again, it's very strange to see one particular instance from one particular company taken to represent the entire idea of the technology in general.
If Windows 11 is far worse by many metrics than Windows XP or Linux, does that mean that technology is useless?
It's one instance of something with a very particular vision being imposed. Windows 11 being slow because it reports several GB of user data in the first few minutes of interaction with the system does not mean that all new OSes are slow. Similarly, some older tech in a web UI (ChatGPT) for genAI producing non-physical data does not mean that all multimodal models will produce data unsupported by physics. Many works have already shown that a good portion of the problems in GPTs can be fixed with different methods stemming from ROME, rl-sr, sheaf NNs, etc.
My point isn't even that certain capabilities may get better in the future, but rather that they already are better now, just not integrated into certain models.
That website doesn't load for me, but anyone who uses ChatGPT semi-regularly can see that it's getting steadily worse if you ever ask for anything that begins to border on the risqué. It has even refused to provide me with things like bolt torque specs because of risk.
It could be a bias; that's why we do blinded comparisons for a more accurate rating. If we have to go by my opinion, since I use it often: no, it hasn't gotten worse over time.
Well I can't load that website so I can't assess their methodology. But I am telling you it is objectively worse for me now. Many others report the same.
Edit: the website finally loaded for me, and while their methodology is listed, the actual prompts they use are not. The only example prompt is "correct grammar: I are happy", which does nothing to assess what we're talking about: ChatGPT's inability to deal with subjects that are "risky" (where "risky" is defined as "Americans think it's icky to talk about").
"Worse" is really subjective. More limited functionality on a specific set of topics? Sure. More difficult to trick into getting around said topic bans? Sure.
Worse overall? You can use ChatGPT 4 and 3.5 side by side and see an obvious difference.
Your specific example seems fairly reasonable. Is there liability in saying bolt x can handle torque y if that ends up not being true? I don't know. What if that bolt causes an accident and someone dies? I'm sure a lawyer could argue that case if ChatGPT gave a bad answer.
No. Malpractice insurance would be at the professional level. There could be lawyers using a legal ChatGPT, but the professional liabilities would still rest with the licensed professional.
More legal malpractice? No, because they aren't attorneys and you cannot rely upon them for legal advice such that they'd be liable to you for providing subpar legal advice.
Why? Because there's no word for "insurance of AI advice accuracy"? The whole point of progress is that we create something that is not a thing at the moment.
No, because, like I said, GPTs are not legally allowed to represent individuals, so they cannot obtain malpractice insurance. You can make up an entirely ancillary kind of insurance; it does not change the fact that GPTs are not legally allowed to represent clients, so they cannot be liable to clients for legal advice. Seeing as you think GPTs are so useful here... why are you asking me these questions, when a GPT should be perfectly capable of providing you with the policy considerations that underlie attorney licensing procedures?
I like the term "explorable vocabulary." I can see using LLMs to get an idea of what the relevant issues are before I approach a professional, without assuming that any particular claim in the model's responses is correct.
This is an area for further development and thought...
If an LLM can pass the bar, and has a corpus of legal work instantly accessible, what prevents the deployment of the LLM (or other AI structure) to provide legitimate legal services?
If the AI is providing legal services, how do we assign responsibility for the work (to the AI, or to its owner)? How to insure the work for Errors and Omissions?
More practically, if willing to take on responsibility for yourself, is the use of AI going to save you money?
A human that screws up either too often or too spectacularly can be disbarred, even if they passed the bar. They can also be sued. If a GPT screws up, it could in theory be disbarred. But you can't sue it for damages, and you can't tell whether the same model under a different name is the next legal GPT you consult.
> If an LLM can pass the bar, and has a corpus of legal work instantly accessible, what prevents the deployment of the LLM (or other AI structure) to provide legitimate legal services?
The law, which you can bet will be used with full force to prevent such systems from upsetting the (obscenely profitable) status quo.
Re your first point: it's not conscious. It has no understanding. It's perfectly possible the model could successfully answer an exam question but fail to reach the same or a similar conclusion when it has to reason its own way there based on the information provided.
Great point: an LLM will not be great at groundbreaking law... but most lawyers aren't either. That is to say, most law isn't cutting-edge. The law is mostly a day-to-day administrative matter.
Careful, there are plenty of True Believers on this website who really think that these "guess the next word" machines really do have consciousness and understanding.
The obvious intermediate step, in terms of using LLMs for this purpose, is to add an actual expert into the workflow.
Basically, add a "validate" step: you'd first chat with the LLM and form conclusions, then vet those conclusions with an expert specifically trained to be skeptical of LLM-generated content.
I would be shocked if there aren't law firms already doing something exactly like this.
What if they were liable? Say the company that offers the LLM lawyer is liable. Would that make this feasible? In terms of being convincingly wrong, it's not like lawyers never make mistakes...
You'd require them to carry liability insurance (this is usually true for meat lawyers as well), which basically punts the problem up to "how good do they have to be to convince an insurer to offer them an appropriate amount of insurance at a price that leaves the service economically viable?"
Given orders of magnitude better cost efficiency, they will have plenty of funds to lure in any insurance firm in existence. And then replace insurance firms too.
"In terms of being convincingly wrong, it's not like lawyers never make mistakes..."
They have malpractice insurance, they can potentially defend their position if later sued, and, most importantly, they have the benefit of the image and perception that come with appeal to authority.
All right, what if legal GPTs had to carry malpractice insurance? Either they give good advice, or the insurance rates will drive them out of business.
I guess you'd have to have some way of knowing that the "malpractice insurance ID" that the GPT gave you at the start of the session was in fact valid, and with an insurance company that had the resources to actually cover if needed...
Weirdly, HN is full of anti-AI people who just refuse to discuss the point being discussed and instead fall back on the same argument about a wrong answer they got at some point. Then they present anecdotal evidence as truth, while there is no clear evidence on whether an AI lawyer has a higher or lower chance of being wrong than a human. Surely an AI can remember more, and it has been shown to pass the bar exam.
"while there is no clear evidence if AI lawyer has more or less chance to be wrong than human."
In the tests they are shown to be pretty close. The point I made wasn't about more mistakes, but about other factors influencing liability and how it would be worse for AI than humans at this point.
This is the key point. Even if we assume the AI won't get better, the liability and insurance premiums will likely become similar in the very near future. There is a clear business opportunity in insuring AI lawyers.
I feel like this is a common sentiment among those of us who started our lives when physical books were the only way to read a book (and I count myself among them).
As the years have passed, I've found that physical books are great for gifting but terrible to keep. I never read any given book more than once, twice at most. Having those books occupy space in my home, having to manage them across moves, and then just forgetting them, perhaps only remembering them with some nostalgia when my gaze lingers idly on them from time to time... I find that wasteful.
I've been very happy reading on an ebook device for almost two decades now. I don't find the tactile sensation of a book that great anyway. It's much more convenient, and, above all, I don't have the burden of managing all that space. In the same way, I was burdened after my father's passing with having to manage his books and comics (of course, this was only a minor annoyance, but not something I would ever have wished to do, because of course... I couldn't bring myself to just throw it all into the recycling boxes, and giving them away is not that easy now that almost no one takes in physical books anymore). All of that... for an ephemeral experience. A passing distraction. Much like a movie.
Now... how does this relate to the library disappearing? I think libraries as they're designed don't provide that much value when you can get almost everything you want on a single Kindle. Indeed, physical books make no sense to me, especially having to store them in a public building whose space could be put to better use for the community.
The concept of accessing literature freely should never disappear; I'm not advocating for that. But I would prefer that this be done by lending or allowing the use of e-books rather than physical books, or even just PDF files to be read on the user's own devices. Perhaps a self-deleting PDF, if some kind of lending model is wanted?
Instead, the physical space currently occupied by books could be used to promote communal activities, between kids and between adults. There are public buildings like that in Barcelona that can be used by cultural neighborhood associations, allowing the people of the city to bond in much the same way that older communal places like the church once did. So, for example: food distribution citizen cooperatives, role-playing/board game groups, singing and dancing groups, theater associations or courses, historical recreation societies... the sky's the limit!
All of these activities provide more integration and community than a traditional library ever could, reading being mostly a personal experience, even taking potential reading clubs into account.
And what I've come to believe is that, for enabling opportunities and social integration, it's much more important to provide shared spaces than to try to cover that need by giving free access to self-learning or leisure resources. I would prefer my public money to be used for communal integration rather than for reinforcing the loneliness crisis we're all drifting towards.
> Instead, the physical space currently occupied by books could be used to promote communal activities, between kids and between adults. There are public buildings like that in Barcelona that can be used by cultural neighborhood associations, allowing the people of the city to bond in much the same way that older communal places like the church once did. So, for example: food distribution citizen cooperatives, role-playing/board game groups, singing and dancing groups, theater associations or courses, historical recreation societies... the sky's the limit!
What you're describing is the function of many public libraries in the United States, especially in the last 30 years. At least, one of the major functions: a place to meet, a place to study, a place to take classes, a place to get resources about social services, a place to sit inside when it's raining and you don't have a home to go to.
When I said that switching to a digital model would eventually diminish the role of libraries, I was thinking about this exactly. The literal loss of a shared community resource. But I am also noting that there are almost certainly knock-on effects of having such a place that we don't fully understand, and would not understand until years after we'd lost it.
I am thinking, abstractly, about the consequences of over-indexing on a particular metric, at the cost of everything else. Especially everything else which was not known or considered in the original problem statement. If our metric is "access to information", we could maximize that by getting rid of books and focusing exclusively on digital lending, and we could call it a success. What I'm saying is: have we accounted for everything that would happen if we actually did that?
As a species, we're pretty bad at guessing these things. We thought the internet would make us more civilized, we thought social media would bring us together, etc., etc.
This starts to sound like a Chesterton's Fence argument, and I was trying to avoid that, but there's no denying it now. That's basically what I'm saying I guess!
You've put into words something that has been on my mind for a long time. Thanks for sharing the thought! I would say some of my problems are related to what you just wrote.
I've been an avid reader all my life, mostly of fiction novels, but not only.
Over time, as the habits of my work have permeated into my daily life I've found the following:
- I skim over most written text to get the gist of it, even novels
- as a consequence, I usually lose the nuance and details of a text, even though I get maybe 80% of the content. I find myself needing to go back when I don't understand why something happens, or a character seems to come out of nowhere, etc.
The same thing has also filtered to my non-reading habits, for example when listening to people. I quickly switch to getting the gist of what someone is telling me and making up the rest in my mind.
And this has gotten me into trouble several times, even though it's an effective technique most of the time when programming or doing similar tasks, where you research across many tabs and search results and need to pick up knowledge quickly without stopping to read everything.
I've tried to slow myself down and really listen until the end when people talk to me, making a conscious effort to remember what they're saying, or what I'm reading. This works better.
I also feel that having access to all those aids, reminders, and search engines has greatly deteriorated my memory capacity, or my willingness to store facts, plans, and ideas in my mind, as I know I can retrieve them later if I need to, or I trust tools to remind me of things. I've become more forgetful over time.
Well... at least being aware of it is the first step in trying to fix it.
I'm not from the US; I think it's a global phenomenon across similar cultures.
Instead of skimming, I just won't read something, as I don't believe skimming imparts any actual knowledge. Your comment, for instance: I read the first few sentences and skipped the rest.
I couldn't agree more with you! I played through the game, and the Horizon Zero Dawn story was so rewarding! I kept playing for it even after the gameplay itself had long since run its course in keeping my interest. The expansion also expanded the lore a little, broadening the world outside of the main arc, which was nice.
This game surprised me so much, because I just picked it up on a whim a short while ago. It is so much more than what the cover suggests.
A very emotional and impactful story; I'm really looking forward to what they do in the next chapter of her story.