Hacker News | davidparks21's comments

Near flawless is an unreasonable expectation. SpaceX had the advantage of running years of cargo missions first. The error was simply not requiring that same proven model for every new vehicle.


It is subjective language, but for me "near flawless" is a reasonable expectation for a qualification flight. Not perfect, because that would be unreasonable. A leaking toilet would have been fine, if Starliner had one; it isn't a critical system. An off-nominal thruster or two would also be acceptable. They have redundancy for a reason.

The helium leaks were not desirable, but helium is difficult. Boeing engineers could demonstrate that the leak rates did not threaten the mission, so it was manageable.

The thruster issues are completely different. Boeing couldn't present NASA with a convincing model of how the thrusters would perform. If they can't characterize such a fundamental part of the vehicle, it is impossible to make an informed decision, which is a huge failure.

In hindsight, a cargo-first requirement would have been valuable for Commercial Crew whether the winner was Sierra, NG, or Boeing; however, that would have disqualified Sierra and Boeing. Boeing's involvement in Commercial Crew legitimized it politically. Boeing, directly, via acquisitions, and through suppliers including Aerojet, had decades of experience in spaceflight, far beyond SpaceX's at the time. Boeing were not the underdogs.


What a weird argument. Boeing has had the advantage of building things for space and NASA since the '60s: orbiters, rockets, etc. They had a 40-year head start and some of the best engineers in the world. Compared to a 20-year-old company, I would say near flawless is a very reasonable expectation.


The company is a Ship of Theseus whose process exchanges all components within a generation. It has now even replaced its culture, and it is unable to ship. A history of ability cannot be extrapolated to current capability.


I struggle with this question. The editor plays a valuable role, and it's not hard to swallow the argument that editors make science better; they're the front-line filter. Employing them full time is a core value that the current system provides. I'm wary of (but not closed-minded about) the idea that volunteer orgs could scale to the top echelons of scientific publication, which require a lot of filtering. Perhaps we could look to the conference-paper model, where funding primarily comes from the conference? The conference brings benefits beyond publishing, and in that model publishing equates to a discount on, or free entrance to, the conference, so it flips the equation.


I work in CS and we do (mostly) conference publishing.

I could be wrong, but I don't think PC chairs are paid by anybody. Maybe they get free housing at the conference, but that is more of a consolation prize given the amount of work involved.

Certainly the rank and file PC members don't get anything. I was on one PC mailing list where one of the organizers accidentally let slip that (some) organizers get free housing, and there was a big uproar in the PC. None of us get anything like that.

So this is literally a system where the expert reviewers get nothing, and even the chairs in charge do it nearly for free. What part of this needs to cost money?

The peer review comes from the community. The exclusivity and filter comes from the community. Even the funding comes from the community, because community members pay to go to the conference.

What the publisher does is, as GP noted, mainly to host PDFs on a website and make sure they stay up. That costs something, but nothing like what the licensing fees for these services are (or the so-called "open access" fees that we now pay).


I'd guess something more mundane: someone probably had a spreadsheet of a few thousand updates or improvements, checked a reasonable sample of them, then accepted the lot of them. I bet you they don't make that mistake next time.


This feels like something of a non-story to me. Using AI for product descriptions seems like an obvious and reasonable use case, and data-entry errors are neither uncommon nor terribly harmful in this context.


I think it's a sign of what's to come. A world where many things are done so shoddily and with such little regard that it becomes nearly impossible to navigate. You sift through a pile of crap trying to get basic shopping done, descriptions that are wrong and nonsensical, fictional product photos that are useless to judge scale or fitness for purpose, etc. When you finally find what you need, you end up receiving the wrong item because no one in the supply chain gave a shit. You try to get this resolved, and are bounced around a series of half-broken customer service bots. It's difficult and expensive to find alternatives who do give a shit, because the automated companies have driven prices (and quality) into the ground, and it ends up being cheaper and easier to let it go and try your luck again.

I don't know if that will come to pass or not, I sure hope not, but I think it's a real possibility and that things like this are the early warnings.


That way of looking at it feels like it focuses on the one error while ignoring that, in all likelihood, the same action that caused the error probably improved 1,000 other listings.

I guess I'm a glass-half-full kinda person; this shows me that someone is working on improving things. And I bet they're quite flushed from all the attention caused by their oversight in a big spreadsheet. I bet they won't miss the next one. :)


Why do you suggest that?

What do you think the seller's goal was in employing an LLM? Was it to improve quality, or to drive down costs?


I suspect someone was tasked with using the latest tools to improve a bunch of listings. 50 years ago they'd have been given a typewriter for the same task; today they were given an LLM. It just feels like someone doing their job to me. Different year, different tool. We no longer hand-transcribe books, we don't lament that, and one day we won't lament LLMs either.


I think the purpose of automating product descriptions is far more likely to be to pay fewer people than to improve the quality of the listings.

I think if the purpose was to improve the quality rather than to crank them out, they probably wouldn't have let such severe and obvious errors get through, certainly not in such a large quantity. If I were tasked with doing this, at a minimum I would kick any listing that contained the word "OpenAI" into a QA queue rather than publishing it. Since they obviously didn't have even minimal filters to catch errors, I have to infer they never spot-checked their output for sanity. Because they didn't really give a shit.
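The kind of filter I mean could be trivially simple. A sketch (hypothetical field names and a made-up blocklist; nothing here is the seller's actual pipeline):

```python
# Minimal sanity filter for LLM-generated listings: anything that trips
# an obvious red-flag phrase goes to a human QA queue instead of going live.
RED_FLAGS = ("openai", "as an ai", "language model", "i cannot")

def triage(listings):
    """Split generated listings into (publishable, needs_qa)."""
    publish, qa = [], []
    for listing in listings:
        text = listing["description"].lower()
        if any(flag in text for flag in RED_FLAGS):
            qa.append(listing)
        else:
            publish.append(listing)
    return publish, qa

publish, qa = triage([
    {"id": 1, "description": "Stainless steel travel mug, 16 oz."},
    {"id": 2, "description": "I'm sorry, but I cannot generate that as it "
                             "violates OpenAI's use policy."},
])
# listing 2 lands in the QA queue; listing 1 can go live
```

Twenty lines of code would have caught the whole batch, which is what makes the omission feel like indifference rather than bad luck.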

It feels like someone doing their job to me too, sure. That job being to spam. When I see a watch, I infer the existence of a watchmaker. When I see a pile of spam, I infer the existence of a spammer.


I think this is a fair point. I see other comments pointing out the value of this as exploratory research. There's some middle ground here, though. I think studies like this would serve the lay reader and non-scientific community by including a disclaimer sentence in the abstract about not extrapolating conclusions from it. Their opening sentence does the opposite, "Soybean oil consumption has increased greatly in the past half-century", which seems to set the reader up to think about the impact on humans. This paper is written by scientists, for scientists, but the scientific community as a whole would be well advised to realize that lay readers and the lay press will skim these articles, in particular the abstract, for the key takeaways.


If you want something to be written by scientists for scientists, you have to make it inaccessible to non-scientists. Otherwise people are going to take it as the gospel truth. And honestly, they should put up major guard rails on any article interpreting a scientific study. It's too easy to miss things that other people will take to heart.

As a lay person, when I come to this article, I have no way to evaluate the information in the study, which is why the veneer of a scientific study makes it actively harder for me to trust. And then if I ever find out the study is wrong, I trust science less, because I put my bets on a finding that was basically a coin-flip conclusion.

Now, I'm not inclined to trust the scientific process less because it finds something is wrong, but I also don't think I'm the average internet person.


I feel like they are leading with it so that very thing happens. Notoriety and buzz are the goals.


The article also states: "A caveat for readers concerned about their most recent meal is that this study was conducted on mice, and mouse studies do not always translate to the same results in humans."


I'm excited to see a goodnewsnetwork.org article make it to the top of hacker news. This site has become a favorite of mine.


This study is getting a fair bit of media attention, and I'm curious what others think of it. The study is large and appears to follow sound methods. My concern is its treatment of bias. They address age, gender, and racial bias well. However, there's a more obvious bias that I don't see mentioned: dialysis patients know they are immunocompromised and seem more likely than the average population to take extra precautions during the pandemic.


On Windows I could consistently open a 32 GB matrix in Matlab with 16 GB of RAM on my laptop and perform operations on the matrix. The disk would spin, and a simple operation would take 20 minutes because of the swapping, but I could open it, perform the operation, save, and exit successfully. I could easily background Matlab and do email or other common tasks such as browsing with very little impact on those applications. On Linux Mint that same task locks the mouse and brings the system to its knees; I can't even kill Matlab and would typically resort to a hard reboot. I learned quickly that I can't do the same things on Linux Mint that I used to do pretty easily on Windows.


For me, Windows (7, 64 bit) behaves exactly as you report for Linux. I would love to be able to get Windows to behave like it does for you. What version were you using? Did you tweak any settings?


I haven't run Windows for a year or two now, so this was a while back, but I think I was on Windows 10 (possibly 8 back then), no tweaks. But I was quite successful at this in Matlab specifically. If you overload memory across many processes, perhaps you can get into a bad place, but when it was just one process abusing swap, Windows was quite good about making sure other processes weren't dramatically affected, in my experience (there was some lag, but it was usable).


In Windows on a 16GB RAM laptop, I've often fired up Matlab, opened a 32GB matrix, and performed a few simple operations on it. In Windows Matlab dutifully chugs away on the problem, the disk spins like mad, and I put Matlab in the background and do email for 20 minutes. This identical use case completely cripples my Linux Mint OS, the mouse hangs, nothing functions, and I've never gotten it to even complete the operation. I just can't operate on a 32GB matrix with 16GB of RAM in Linux, but I can in Windows with relative ease.

To me, this is the Linux kernel's biggest weakness relative to Windows. Most other gripes about Linux (poor power management, poor driver support, etc.) lie outside the kernel's domain, but this one is a glaring win for Windows over Linux.
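For what it's worth, one way to sidestep swap thrash on either OS is to memory-map the matrix explicitly, so the pager only faults in the pages you actually touch and can evict them cheaply to the backing file. A sketch using numpy's `memmap` (illustrative only; this is not what Matlab does internally, and the file name and shape are made up):

```python
import numpy as np

# Disk-backed matrix: pages are faulted in on demand, and eviction is
# cheap because the file itself is the backing store (no anonymous swap).
shape = (10_000, 1_000)  # ~80 MB here for demonstration; in practice,
                         # pick a shape larger than RAM
m = np.memmap("big_matrix.dat", dtype=np.float64, mode="w+", shape=shape)

# Work in row chunks so only a small slice is resident at any one time.
chunk = 1_000
for start in range(0, shape[0], chunk):
    block = m[start:start + chunk]
    block[:] = 1.0   # initialize this slice
    block *= 2.0     # then operate on it in place
m.flush()            # push dirty pages back to disk
```

Chunked access like this keeps the working set small enough that the rest of the desktop stays responsive, which is effectively what Windows seemed to be enforcing for you automatically.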


I reported 4 fake reviews to Amazon on a product I purchased before.

The product page had changed from the old one I originally purchased on to a new one with 4 obviously fake 5-star reviews. I found this when I went to re-purchase the product after it (an eBike) was stolen. The page is here (https://www.amazon.com/dp/B07K2VLSX5/ref=cm_sw_r_cp_apa_i_He...)

There is a review from me, calling out the fakes and providing detail, alongside the other 4 clear-as-day fakes. The previous product page, which I referenced in my report to Amazon, had numerous high-quality, legit reviews averaging around 3 stars, with tons of detail.

Amazon has made no changes after my report.

