Hacker News | npsomaratna's comments

Why is there a significant time lag? Is it because monitoring is done via cultures?


Correct, the workflow usually goes something like this.

You would take contact, settle, and active air sample plates within the cleanroom, followed by approximately one day before culturing is initiated in QC. Incubation then typically takes around seven days to cover both bacteria and fungi. You then get the colony-forming unit (CFU) count, which is the key parameter. Some companies take this further and perform organism identification, which adds additional days to the timeline but is great for reactive investigations.

There is also a lag until the data becomes available in a digital format.

This of course differs between companies. Some companies may opt for shorter or longer incubation times, but in general, the key takeaway is that the process takes time.
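
To put rough numbers on that lag, here is a minimal back-of-the-envelope sketch in Python. The stage durations are illustrative assumptions for the workflow described above, not any particular company's SOP:

    # Rough turnaround estimate for cleanroom environmental monitoring.
    # All stage durations are illustrative assumptions, in days.
    stages = {
        "sampling (contact/settle/active air plates)": 0.5,
        "transfer to QC before culturing starts": 1.0,
        "incubation (bacteria + fungi)": 7.0,
        "CFU counting and review": 1.0,
        "optional organism identification": 2.0,
        "entry into digital systems": 1.0,
    }

    for stage, days in stages.items():
        print(f"{stage:<45} {days:>4} days")
    print(f"{'approximate lag, sampling to usable data':<45} {sum(stages.values()):>4} days")

Even with optimistic assumptions, the CFU result trails the actual sampling event by well over a week.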


That makes a lot of sense. After 30+ years of programming, I still have to do a search (or use an LLM) to do anything useful with sed, xargs, etc. Perl never really clicked with me either.

On the other hand, I was able to easily pick up just about any "traditional" language I tried--from Basic and C in the 80s all the way to Dart and Go more recently.


I remember playing Ultima VII back in '94 (or was it '95?). When the game started up, I could only feel wonder. It was so far ahead of the competition at the time.

The only other instance where I got the same "this game is way ahead of everyone else" feel was when I first played Morrowind.


Same here. Had a 7 for years. Upgraded to a 13. So far not felt the need to upgrade.

I compare this to when I had a 3G and the 4 came out. The gap between the two was so huge that I upgraded quickly. Reminded me of how quickly PCs evolved in the 90s.


The difference was going from “hang on, let me pull over” to “just do it live!”.

With 4G, you could actually do something quickly.


I have no understanding of the physics involved, but could the broadcast location be reverse engineered? (With triangulation and clever math?)


Your location is very very visible to any plane or satellite passing overhead.


Indeed, though it would take some coordination to actually narrow it down precisely. You'd need a few different planes/satellites to detect the signal and share their readings to allow triangulation. With only a single plane, or a single satellite not in geosynchronous orbit, you could take multiple readings over time and get a rough idea of the location, but being stuck on a straight-line track would be a hindrance (turning is not impossible for a plane, of course, but it requires intent and willingness from the crew/commanders and typically isn't cheap, since it disrupts whatever flight plan they previously had). That said, with how many satellites are up there, I doubt the coordination would take much extra effort if the satellite operators were motivated to do it.
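
To make the triangulation point concrete, here is a minimal bearings-only localization sketch in Python. Everything in it is hypothetical (made-up observer positions, a flat 2D plane, idealized noise); real direction finding has to contend with earth curvature, orbital geometry, and much messier error models:

    # Minimal 2D bearings-only localization sketch (hypothetical data).
    # Each observer reports its position and a bearing toward the emitter;
    # we intersect the bearing lines in a least-squares sense.
    import numpy as np

    true_emitter = np.array([30.0, 40.0])                      # km, for generating test data
    observers = np.array([[0.0, 0.0], [50.0, 5.0], [20.0, 80.0]])

    # Bearings each observer would measure, with a little noise.
    rng = np.random.default_rng(0)
    diffs = true_emitter - observers
    bearings = np.arctan2(diffs[:, 1], diffs[:, 0]) + rng.normal(0, 0.01, len(observers))

    # The emitter lies on each line p + t*d; equivalently n . (x - p) = 0,
    # where n is the unit normal to the bearing direction d.
    d = np.column_stack([np.cos(bearings), np.sin(bearings)])
    n = np.column_stack([-d[:, 1], d[:, 0]])
    A, b = n, np.einsum("ij,ij->i", n, observers)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("true:", true_emitter, "estimate:", estimate)

With several well-separated bearings the least-squares intersection pins the emitter down; with readings taken along a single straight-line track the bearing lines are nearly parallel and the estimate degrades, which is the hindrance mentioned above.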


Highly directional antennas on a moving platform can effectively perform radio direction finding on their own.


Let's remember that there is also an antenna array with LOS to the moon...


The moon is visible to roughly half the earth at a time? That’s a huge search area. Certainly the antennas broadcasting to the moon are quite directional, and outside the main beam they would be hard to detect?


I don’t understand how? Wouldn’t the signal be highly directional? Surely it wouldn’t be easily detectable unless the viewer’s POV intersects the path of the beam?


On the topic of professions: Joseph Lister was a surgeon. Modern surgery (which I define as surgery aided by anesthesia) is a relatively recent discipline, dating to the mid-19th century. The introduction of anesthesia made lengthy and intricate operations possible but also ushered in novel problems and complications. Surgery as a field had to learn tough lessons over time.


He was known more for antiseptics, but the biggest surgery moment for me will always be “using soap”, and I wonder what the software equivalent is.

Like I said, we are still young, so it feels sort of arrogant to say we have figured something out, when I know how many things that are industry standard now almost resulted in shouting matches to get done even 20 years ago. Maybe our soap moment is coming up ten years from now.

But I suspect automated testing may be the “wash your hands” of software, because it represents a sort of hygiene that “we” used to just say fuck it about, or make only a minimal effort at.


Programmer from DOS times here. I've ingrained pressing CTRL + S every few seconds as a reflex. That has saved my bacon more than a few times.


I started programming later than DOS (probably around Win95 or so), but I still have a reflex to hit Ctrl-S three times basically any time I stop typing.

You only need to lose hours of work once or twice to learn that lesson!


Out of curiosity, are there any algorithms faster than BLAST? (For DNA search).


For DNA? Not sure. https://pmc.ncbi.nlm.nih.gov/articles/PMC3197634/ claims it's reached parity on protein sequences.

I'd much rather have a slower but more accurate searcher, or one that is easier to use as an API.


There have been countless specializations and improvements on the original BLAST (which itself evolved).

When I left the field, the latest hottest thing was diamond (https://github.com/bbuchfink/diamond).


Anecdotally, what I've seen is that folks who learned typing in the 80s and earlier use two dashes '--' instead of the em-dash (although modern word processors seem to replace this combination with an em-dash). Something else I've noticed is their tendency to put two spaces between sentences.

I'm a self-taught typist, with all the quirks that come with that (I can type programming stuff very accurately at 100+ WPM; I can type normal text at a high WPM as well, but the error rate goes up).


I learned to type in the nineties and I used two hyphens. I also learned to put two spaces between sentences but dropped that in the oughts.


Are you the CEO of Cerebras? (Guessing from the handle)


I wonder why he (Andrew Feldman) didn't push back on the incorrect SRAM-vs-HBM assumption in the OP comment; maybe he was so busy that he couldn't even cite the sibling comment? That's a bigger wrong assumption than being off by maybe 30-50% at most on Cerebras's single-server price (it definitely doesn't cost less than $1.5-2M).


I have followed Cerebras for some time. Several comments:

1. Yes, I think that is Feldman. I have seen him intervene on Hacker News before, though I don't remember the handle specifically.

2. Yes, the OP's technical assumption is generally correct. Cerebras loads the model onto the wafer to get the speed; minimizing the distance between memory and compute is the whole point of their architecture. They could do otherwise in a "low cost" mode--they announced something like that in a partnership with Qualcomm that AFAIK has never been implemented--but it would not be a high-speed mode.

3. The OP is also incorrect on the costs. They picked those numbers up from a dated customer quotation seen online (which Cerebras has an incentive to jack up), but that is not how anything works commercially, and at that time Cerebras was operating at a much smaller scale. But you wouldn't expect Feldman to tell you what their actual costs are; that would be nuts. My thinking is the number could be off by up to 80% by now, assuming Cerebras has been making progress on its cost curve and the original number carried very high margins (which it must have).


Probably because they are loading the entire model into SRAM. That's how they can achieve 1.5k tokens/s.
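
A rough back-of-the-envelope sketch shows why keeping weights in on-wafer SRAM matters for decode speed. The figures below are approximate published numbers (roughly 3.35 TB/s of HBM bandwidth on a high-end GPU, roughly 21 PB/s of aggregate SRAM bandwidth claimed for a Cerebras WSE-3), and the model is a deliberate oversimplification: it ignores batching, KV cache, interconnect, and the fact that a ~140 GB model would have to be split across several wafers, since one wafer holds about 44 GB of SRAM:

    # Memory-bound decode estimate: generating one token requires streaming
    # (roughly) all model weights from memory once, so
    #   tokens/s per stream ~ memory bandwidth / model size in bytes.
    # All figures are approximate and illustrative, not vendor-verified.

    def tokens_per_second(bandwidth_bytes_per_s: float, model_bytes: float) -> float:
        return bandwidth_bytes_per_s / model_bytes

    model_bytes = 70e9 * 2      # ~70B parameters at 16-bit precision, ~140 GB

    hbm_gpu = 3.35e12           # ~3.35 TB/s HBM on a high-end GPU
    wafer_sram = 21e15          # ~21 PB/s aggregate on-wafer SRAM (claimed)

    print(f"HBM-bound estimate:  {tokens_per_second(hbm_gpu, model_bytes):,.0f} tokens/s")
    print(f"SRAM-bound estimate: {tokens_per_second(wafer_sram, model_bytes):,.0f} tokens/s")

Even this crude estimate puts an HBM-bound single stream in the tens of tokens per second, while the SRAM-bound figure is orders of magnitude higher, which is at least consistent with the 1.5k tokens/s observation once real-world overheads are subtracted.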

