My tinnitus started with local anesthesia given by a dentist. It caused a spasm in my jaw muscle that lasted almost a month, and as a side effect there was tinnitus in my left ear. The spasm eventually disappeared, but the tinnitus never did. It's been 2 years now, and by now I've realized I'll probably never enjoy silence again. So let this be a warning about what a banal anesthetic can cause…
I had a similar experience. I was put to sleep with general anesthetic for a minor operation. When I woke up I immediately thought that the surgeon must have used loud equipment during the procedure, because my ears were ringing. Turns out it was the anesthetic. I haven't heard true silence for 2 years now.
That really sucks. I occasionally get tinnitus. The only thing I've found that helps is focusing on relaxing the jaw / facial muscles / something in that area that's hard to describe. It feels like popping your ears, but not quite as much. Which is probably as helpful as telling someone to roll their tongue, but I thought I'd put it out there.
I have it 24/7, and nothing seems to help. It can only be masked by external noise. Alcohol doesn't give me temporary relief either - it actually makes it worse, louder.
But by far the worst consequence of tinnitus, and the real problem (at least for me), is disturbed sleep. At first I couldn't fall asleep because of the noise, which manifests most strongly in a silent environment. So I would lie in bed for a few hours before falling asleep (I've since grown accustomed to it and can fall asleep faster). Even worse, the noise wakes me up after 4-5 hours. That leaves me completely exhausted all day, because I need 7-8 hours of sleep to feel refreshed. I tried Zolpidem, which can get me 6 hours of sleep, but its effect wears off rapidly if taken for a few consecutive nights. Now I take Mirtazapine, which works better. Taking magnesium supplements has also helped.
I went to a tinnitus clinic and they did a profile of the tinnitus I hear. Then they configured this app to amplify sounds of those frequencies to mask my tinnitus. When I'm hearing a low buzz the app can almost completely remove it, but it doesn't work quite as well with high frequencies.
You'll need rubber ear buds that form a tight fit with your ear canal.
Since the app only amplifies sounds it doesn't really do anything in a totally quiet environment. You might want to play pink noise to give your hearing something to work with. I've experimented with different kinds of noise, but this type seems to work the best: https://www.youtube.com/watch?v=ZXtimhT-ff4
Btw, I've already tried some noise generators for Android and observed a strange effect: I have ringing only in the left ear, so I used only one ear bud with the generator. When I turned it off, it induced noise in the other ear. It was as if the brain was generating a signal to compensate for the external noise in the left ear, and when the noise generator was turned off, the spurious signal in the right ear became audible. Fortunately this effect was only temporary.
Mine has actually helped me sleep better. I can't hear anything in my left ear (save for the constant ringing and some static accompanying particularly loud noises), so I am able to successfully drown out annoying background noises (like my wife's ridiculously loud alarms that never wake her up) by burying my right ear in my pillow.
https://www.vice.com/video/the-ambien-effect - Ambien let one man (a former voice actor, no less) regain the ability to control his jaw/face after an unfortunate dental anesthesia incident
Well, in Polish the country's name is Czechy; nobody in Poland would use the full Republika Czeska (except in some official documents). So I guess if the name gets changed, at least Poles won't have to change their convention :)
There's barely anyone who would cheer even for a "Lewis Hamiltonbot". In current F1 the driver is merely ballast - the result is basically predetermined by the technology.
Also there's no need for a "super fast human lap" - the optimal lap can be computed; it's been done. The real challenge will be designing an AI to beat your opponents - that's when it gets interesting.
I really don't understand why it was chosen over S/MIME. Maybe they gave the money to that German guy who wrote it and now they don't want it to be completely wasted :)
S/MIME has very little adoption - the kind of people who care about encrypting their email are usually the same kind of people who don't trust the CA system.
More precisely: "S/MIME has very little adoption outside of the corporate/enterprise market."
Getting a key signed by a CA is just as bad (or even worse - you have to generate the key and then have it signed by the CA) as generating a key under the OpenPGP scheme, and then there's the issue of cost. (Although I've seen some free ones out there.)
No, it's not. WoSign, StartSSL and iirc Comodo create the private key in your browser. This functionality has afaik been around since Netscape's first SSL-enabled browser and was originally intended for client-side certificates. Today it's implemented using the <keygen> tag [0].
In practice this means you press a button in your browser, the browser generates a public+private key pair, stores them in your keychain (on OS X, for example), sends the public key to the CA, and the CA mails you the certificate.
It's really done in seconds, and for Mail.app or iOS Mail you just need to enable S/MIME and sign/encrypt. There are many tutorials out there for various MUAs. Except on Android, nearly every popular MUA can speak S/MIME, including Outlook, Thunderbird… [1], and many tutorials exist [2].
He meant Werner Koch, the guy who maintains GnuPG.
A few months ago he asked again for donations; this time he got good media exposure and got funded.
cf https://news.ycombinator.com/item?id=9011138
Facebook pledged to donate $50,000 a year to Koch’s project.
As you have written, a formal test/analysis will always detect that a monotonically increasing tick counter will not be bounded by an upper limit. And the obvious solution is not to rely on such a thing, but to define your API so that the (preferably unsigned, but it doesn't matter) tick counter rolls over in a well-defined way.
If the algorithms really depend on an always monotonically increasing tick counter (which I doubt), the solution is quite easy: after 2^30 ticks, set a flag which raises the "service needed" light in the cockpit, taking the plane out of service until it's power-cycled. This way you explicitly state that your device cannot be used for longer than ~120 days continuously.
Agree with the first paragraph, but in the second I don't see how requiring a periodic reboot is a solution. Your "service needed" light is a "Case closed, WON'T FIX" message made real.
Airplanes already have an elaborate schedule for mandatory periodic service. Pressing a "reset" button once every 30 days is pretty trivial compared to dismantling the entire engine every couple of years.
What made this bug dangerous is that nobody knew about it, that's the main problem that needs to be solved.
Assume there's a complicated control algorithm which, unfortunately, does arithmetic on (now-then) tick values everywhere... but this algorithm has been extensively validated to be correct in the non-overflowing case, and it will take a while to find out how it handles discontinuities in its time scale.
Then the simple "raise service-needed signal" would be a valid workaround and an easily testable local change for the next two years, until the properly fixed algorithm went through testing and certification.
A general solution to the overflowing-clock problem is to deal with the clock in modular arithmetic. When wanting to know if t2 comes before t1, check the MSB of the modular difference.
uint32_t t1 = ...;
uint32_t t2 = ...;
if ((uint32_t)(t2 - t1) >= UINT32_C(0x80000000)) {
    // t2 is before t1
} else {
    // t2 is after or equal to t1
}
What this gives us: if the difference of the actual times (not these uint32 representations, which are ambiguous modulo 2^32) is less than 2^31 units (plus or minus one, maybe), this check will give the expected result. This does allow a correct system that never fails, provided the timing/duration of the events is suitably limited.
For example, you can time events at a fixed interval, and the loop will keep going forever in spite of clock roll-over.
uint32_t next_time = now();
while (1) {
    // spin until now() has caught up with next_time (modular comparison)
    while ((uint32_t)(now() - next_time) >= UINT32_C(0x80000000));
    printf("Shoot\n");
    next_time += interval;
}
The timing events also need to be processed quickly enough, of course (that printf shouldn't block for longer than about 2^31 ticks).
It would. Which would fail the condition that I mentioned "if the timing/duration of the events is suitably limited". So you should just not do what you suggest :)
I disagree. You can reduce space usage to logarithmic complexity. A couple tens of bytes is enough to store milliseconds until the heat death of the universe.
The "doubles its range of positive values" argument is weak, because you should never be reaching values of more than a few decades, never mind 10-100x the age of the universe. Such a state is a bug.
The "cannot hold negative values" argument is also weak, because a uint does not prevent generating negative values - it only prevents you from knowing that you've generated them. Such a state is a bug.
Using a uint only serves to make it harder to test when your system is in an invalid state.
The concept of timer "ticks" is well established as a unit of time in embedded programming; it's almost universally provided by your embedded (realtime) OS and might increase at any conceivable rate, limited both by hardware constraints (e.g. a fixed, simple, 16-bit ripple counter clocked by the main 8 MHz CPU clock will overflow at 122.07 Hz) and by your application requirements (you let a slightly more configurable timer count only to 40000 at half the CPU clock to get exactly 100 Hz). Hence you shouldn't explicitly inscribe the tick rate into your symbol names, as it can change when requirements change.
You'll almost always have a global variable, preprocessor define, or something similar that gives the frequency (or time increment per tick), which you should use whenever you have to convert "ticks" to actual physical units. If the actual effective tick rate is visible in many places in your code, whether in symbol names or in comments, you are most certainly doing something wrong.
I think you kind of missed the point of my post (which was a bit tongue-in-cheek). The original code fragment had the tick duration embedded in a comment, so changing a global variable that defines it to something other than 10 ms is going to cause all sorts of problems in maintaining that code. (Possibly leading to the very problem Boeing had.)
Another good practice is to initialize the time counters to something close to the overflow point, rather than zero. This encourages overflow bugs to show up at timescales where they will be noticed during testing, rather than after 248 days of service.
This is a scary-ass bug in a codebase that was supposed to be authored to strict professional standards.
I honestly can't decide whether this is serious, satire, or a conspiracy theory, but it's awesome nonetheless. My first project working on the 787 was converting an Ada codebase to C.
Invariably, cost. SPARK Ada is demonstrably superior to C for safety-critical development (I can't cite the sources for this, but a major company developing safety-critical software has shown this to be the case).
But, SPARK Ada requires a lot of highly skilled manpower and it's slow to develop. C gets the job done, albeit with lots of bugs.
If the industry is unwilling to invest in the training or tooling for a safe language like SPARK Ada, is there research into "easier" safe languages, something between C and Ada? Or do companies like Boeing still expect to be writing new safety-critical avionics code in C in 2030 or 2040?
Realistically, it seems to me that avionics etc. will be written in C for a very long time to come. It all comes down to the cost and availability of programmers.
Sure, but shouldn't safety be the number one concern here? Programmers can always be trained, as long as they demonstrate competence. It seems like an unfortunate case of cutting costs at the expense of safety.