So there is internal visibility into the "degraded functionality" state from the start (the "3-15" of "degraded functionality" one) - people are asking: why not share THAT, then?
Nobody cares about internal escalations, whether a manager is taking shit or not - that's not service status, that's the internal dealing-with-the-shit process - it can surface as extra timestamped comments next to the service STATUS.
When you've guaranteed 4 or 5 nines worth of uptime to the customer, every acknowledged outage results in refunds (and potentially being sued over breach of contract)
Meh, I’ve never seen an uptime (SLA) guarantee that was worth anything anyway. They’re consistently toothless, the publicly-offered ones at least (can’t comment on privately-negotiated ones). I’ve written about it a few times, with a couple of specific examples: https://hn.algolia.com/?type=comment&query=sla+chrismorgan.
But not acknowledging actual outages, yeah, that would open you up to accusations of fraud, which is probably in theory much more serious.
Because the systems are so complex and capable of emergent behavior that you need a human in the loop to truly interpret behavior and impact. Just because an alert is going off doesn't mean that the alert was written properly, or is measuring the correct thing, or the customer is interpreting its meaning correctly, etc.
Health probes are at the easy end of the software complexity spectrum. This has nothing to do with complexity and everything to do with managing reputational damage in a shady way.
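To illustrate just how trivial a basic probe is, here's a minimal sketch (the /healthz URL and the single-endpoint check are made up for illustration, not anyone's actual setup):

    # minimal health probe: hit an endpoint, report up/down
    import urllib.request

    def probe(url: str, timeout: float = 2.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    print("up" if probe("https://example.com/healthz") else "down")

A real status page would aggregate many such checks, but each individual one is roughly this simple.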
Because they're incentivized to delay it, ideally until it's resolved - that way their SLA uptime stays at 100%. Less reported downtime is better for them, so they push it as far as possible. If they reported every failure, their pretty green history would be filled with red. What are you going to do, sue them? They can get away with it, so they do.
By what definition of "sentience"? Wikipedia claims "Sentience is the ability to experience feelings and sensations" as an opening statement, which I think would be trivial depending again on your definition of "experience" and "sensations". Can a LLM hooked up to sensor events be considered to "experience sensations"? I could see arguments both ways for that.
> The only basis I have for assuming you are sentient according to that definition is trust in your self-reports
Because the other person is part of your own species, so you project your own base capabilities onto them, because so far they have behaved pretty similarly to how you behave. Which is the most reasonable thing to do.
Now, the day we have cyborgs that also mimic the human body, a la Battlestar Galactica, we will have an interesting problem.
I'm fairly sure we can measure human "sensation", in the sense of detecting physiological activity in the body: someone who is under anesthesia still reacts in different ways to touch or pain.
We can measure the physiological activity, but not whether it gives rise to the same sensations that we experience ourselves. We can reasonably project and guess that they are the same, but we can not know.
In practical terms it does not matter - it is reasonable for us to act as if others experience the same things we do. But if we are to talk about the nature of consciousness and sentience, it does matter that the only basis we have for knowing about other sentient beings is their self-reported experience.
We know that others do not experience the exact same sensations, because there are reported differences, some of which have been discussed on HN, such as aphantasia. The opposite would be visual thinkers. Then you have super tasters and smellers, people with very refined palates, perhaps because their gustatory and/or olfactory senses are heightened. Then you have savants like the musical genius who would hear three separate strands of music in his head at the same time.
Absolutely - I have aphantasia myself, and assumed for 40+ years that my experience was like everyone else's, but I didn't want to make the argument more complex. It's indeed true that we have often assumed we all think the same way, but we now have good reason to believe that isn't actually the case. Still, it feels reasonable to accept that we're probably close enough. But we absolutely can't prove it.
How do you know that a model processing text or image input doesn't go through a feeling of confusion or excitement, or that a corrupted image doesn't "smell" right to it?
Just the fact that you can pause and restart it doesn't mean it doesn't emerge.