Wait, this is news to me - which VPS providers do not have persistent data storage? Are you thinking of Heroku-like deployments? I feel like every VPS provider I've encountered always listed storage as a feature?
I was thinking about EC2's default instance storage - it's ephemeral: data survives a reboot, but gets wiped when you stop or terminate the instance. Without paying for EBS volumes, EC2's storage is non-persistent by design.
You can't rely on VPS disk - backups, data retention, and recovery are all up to you in case of node failure. There are other, much more expensive and much slower, products (external networked volumes) that do offer durability guarantees, but at an additional charge.
Well, Amazon might fail as a company at some point and then all your data will be gone. Theoretically.
Much more likely, though, is that you, or some sysadmin at your company, or even some user will accidentally hit the "delete" button on something important, and then without a backup, you can't get it back. Which is honestly the thing that people usually need their backups for, anyway. This is what most "data loss incidents" are: people just messing up and deleting things they shouldn't have. Wetware is much more prone to failure than hardware, after all.
Yes, and you never use those, because if the VPS company fails, your backups are gone. So use the backup services of a second (and third) company if you value your data.
I didn't realize they had an (effectively) scale-to-zero option! Is it possible to store a bit of state, or is the bot entirely for ephemeral responses? I wasn't clear on what storage would be available from their pricing page.
Both bots I run are just for ephemeral responses, so unfortunately I'm not sure. I tend to be well under the $5 threshold, though, even at 2x bots, so it seems like there's plenty of wiggle room for more storage without incurring totally nutso costs.
(And even if you do exceed the $5 threshold, it's still waaaaaay cheaper than other options I've seen. RIP to Heroku's free hobby tier.)
It depends on where you live and what the building codes in that area try to address. Here in Tokyo the biggest worry is earthquakes, so my apartment's walls incorporate wood instead of concrete, and the 5GHz signal through my closed door and six layers of rooms drops only to 2/3 on my phone.
Unfortunately, this also means I'm competing with nearly a hundred different APs in the apartment building alone - many of which broadcast on both 2.4GHz and 5GHz - to the point that my MacBook, less than a meter away from the router, still takes a hot minute to automatically find the AP unless I manually select it.
Even right now, in modern-day Japan, I have to transcribe my name into katakana (the syllabary designed for foreign/loan words), and all the systems strictly expect a single-word first name and a single-word family name. If you have a middle name, it effectively gets thrown out. Multi-word first and/or last names need to be smooshed together or cut down.
I have encountered even worse issues with digital forms that only accept kanji (Chinese characters) or hiragana (the syllabary designed for native Japanese words), the latter of which usually does not support certain sounds that katakana does. Ashley Tisdale, for example, is normally rendered as アシュレイ・ティスデイル (ashurei tisudeiru) - ティ is actually te with a small -i modifier, a combination that does not traditionally exist in hiragana. Forcibly converted to hiragana, it turns into あしゅれい・てぃすでいる - but ぃ is not accepted by the form, even though it exists in UTF-8. Your options are converting the ティ into either ち (chi) or て (te), neither of which is ideal, and both of which may cause mismatches with other systems that properly support the katakana version.
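For the curious, the two syllabaries are laid out in parallel in Unicode, with each katakana exactly 0x60 code points above its hiragana counterpart, so the naive forced conversion these systems do can be sketched in a few lines of TypeScript (my own sketch, not any form's actual code):

```typescript
// Naive katakana → hiragana conversion: katakana ァ (U+30A1) through
// ヶ (U+30F6) map directly onto hiragana ぁ (U+3041) through ゖ (U+3096),
// offset by 0x60. Anything outside that range (like the ・ separator)
// passes through unchanged.
function kataToHira(s: string): string {
  return Array.from(s)
    .map((c) => {
      const cp = c.codePointAt(0)!;
      return cp >= 0x30a1 && cp <= 0x30f6 ? String.fromCodePoint(cp - 0x60) : c;
    })
    .join("");
}

console.log(kataToHira("アシュレイ・ティスデイル")); // あしゅれい・てぃすでいる
```

Note that the small ぃ (U+3043) comes out of this conversion as a perfectly ordinary code point - it's the form's validation rules, not the encoding, that reject it.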
The problem extends further into physical paper forms, which often provide a very limited number of boxes for characters, because native Japanese and Chinese names easily fit within 8 characters. Combine this with the digital systems above and you're bound to have several versions of your name floating around on official documents, all mismatching each other.
Some systems that need to print onto physical cards (e.g. getting a 1/3/6 month route pass on your SUICA or PASMO contactless smart card) are even worse and turn dakuten (diacritics for hiragana/katakana) into characters of their own. As an example, the character ほ (ho) can be turned into ぼ (bo) with a dakuten, or ぽ (po) with a handakuten. The system will instead render those as two separate characters - ほ゛ and ほ゜ respectively - which cuts further into the already limited number of boxes you're dealing with.
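What those card-printing systems do is essentially Unicode canonical decomposition (NFD): the precomposed ぼ splits into ほ plus a combining mark, so one visible character occupies two slots. A quick illustration (the printed forms above use the standalone ゛/゜ marks rather than the combining U+3099/U+309A shown here, but the effect on character count is the same):

```typescript
// NFD splits a precomposed kana into its base character plus a
// combining (han)dakuten mark - one glyph becomes two code points.
const bo = "ぼ".normalize("NFD"); // ほ + U+3099 (combining dakuten)
const po = "ぽ".normalize("NFD"); // ほ + U+309A (combining handakuten)

console.log([...bo].length); // 2 - twice the space in a per-character textbox
console.log(bo === "ほ\u3099"); // true
console.log(po === "ほ\u309A"); // true
```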
The world is full of presumptions about names even today.
> The problem extends further into physical paper forms, where often they provide a very limited amount of boxes for characters, because native Japanese and Chinese names can easily fit within 8 characters.
This happens in Europe quite often, even though many people have longer names.
Any idea if this is why, in Japanese-dubbed anime, the voice actors seriously mangle some English words/names? E.g., they often add a vowel sound to the ends of English words that should end with a percussive syllable.
I.e., do you think it comes from those words/names being written in katakana or hiragana in the dialog scripts, and those systems just can't express the correct pronunciation of such English words/names?
Actually, it's probably a simpler reason than that. Japanese is largely a string of CV syllables (each consisting of a consonant followed by a vowel); consonant clusters do not exist, and the only final consonant permitted is 'n'. English, by contrast, is a much more phonotactically complex language - consonants can appear pretty freely both before and after vowels in a syllable, and English also has several consonant clusters. Imagine trying to pronounce the word "strengths" if your native language lacks consonant clusters - it's like an English speaker trying to pronounce the Czech phrase "Strč prst skrz krk". On top of that, Japan is not great at English proficiency (it's definitely weaker than any other rich country; see https://www.ef.com/wwen/epi/).
It's not really that the written language makes the names hard for them to pronounce, it's that the spoken language doesn't make it easy, and there's probably not enough care to try to pronounce them. Where the written language does make it hard, it's usually when people try to localize Japanese media into foreign languages, and the intended references in names are lost because of the mangling process of transcription into katakana.
As an English speaker who has traveled to Japan without learning much of the Japanese language, I agree generally, but I also noticed that there are some cases where a vowel is written but not pronounced. For example, "gozaimasu" is mostly pronounced without the final "u" (a counterexample to final consonants other than "n" being forbidden), and "gozaimashita" is mostly pronounced without the second "i" (a counterexample to consonant clusters such as "sht" being forbidden). It gives me the impression that these rules exist more in written Japanese than in spoken Japanese, at which point it becomes less clear why adding a vowel to the end of foreign/imported words is so common. Maybe it's just my English perception that the sounds /s/ and /sh/ consist of pronouncing only a consonant, when in reality the fact that those sounds have duration (not just a moment) means they act more like a vowel, even when totally unvoiced!
As I think on this further, even these voiceless /s/ and /sh/ sounds involve putting the lips into either an /u/ or an /i/ shape based on the following vowel even if that is also voiceless, creating that which is not a syllable in English, but perhaps is for this purpose in Japanese. The C-V cadence and final vowel (given lack of final -n) rules are satisfied...
Also, in Japanese dubs these words are usually not actual English words but Japanese words that originated as borrowings from English, so the voice actors aren't actually mangling them - the same way English speakers don't mangle the word "coffee" when they pronounce it, despite it being different from how Italians pronounce "caffè".
> Any idea if this is why, in Japanese-dubbed anime, the voice actors seriously mangle some English words/names? E.g., they often add a vowel sound to the ends of English words that should end with a percussive syllable.
I don't know anything about anime, and little about Japanese, but I think Japanese (and Chinese) has a fairly strict consonant-vowel form for all its syllables. That makes foreign words that have runs of consonants, or that do not end in a vowel, hard to pronounce, so speakers of those languages tend to insert extra vowels to make pronunciation easier for themselves.
It's kind of like how English speakers will usually change the Pinyin "X" (as in Xi Jinping) into an English S or SH sound when they try to speak it, because the actual sound doesn't exist in English.
I think it's more that Japanese speakers just don't have those types of sounds in their phonetic repertoire. Some may be able to pronounce them, but most will not (and may not even notice the difference).
Every person has a certain limited set of consonants, vowels, diphthongs, triphthongs, tones, and even syllables that they are able to recognize and reproduce. This is something you can train to expand, but you will probably never be able to pronounce, or even distinguish, the totality of all those used in all languages, even just the living languages on Earth.
Even if you did, there is an added complication: some languages actually use multiple sounds interchangeably, and explicitly distinguishing them may actually confuse you. For example, most European languages treat various consonants as the same "R" sound, even though they are vastly different (the French R is a trill at the back of the throat, the Italian R is a trill near the palate, and the English R is articulated next to the palate without any trill). If you come from a language where these are distinct sounds, you may have trouble understanding that two people using different R sounds are pronouncing the same word.
There is also the R/L problem: a pair of sounds that, to me, a native English speaker, are fairly distinct. However, these are the same sound in Japanese. Because of this, I think it is very hard for Japanese speakers to figure out which one to use, and they get switched all the time.
Also, the default target configuration is "es2016," and modern browsers only support up to "es2015."
I had to recheck that this was indeed an article from 2023, because this part surprised me greatly. To my understanding, es2016 only added Array.prototype.includes, the exponentiation operator (**), and a restriction preventing generator functions from being used as constructors. Even the slowest adopters already had these in 2017. Is the author assuming IE11 support?
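For reference, here's roughly the entirety of what targeting es2016 buys you over es2015 (my own sketch):

```typescript
// 1. Array.prototype.includes - like indexOf, but it also finds NaN,
//    because it uses SameValueZero comparison instead of ===.
const xs = [1, 2, NaN];
console.log(xs.indexOf(NaN) !== -1); // false - === never matches NaN
console.log(xs.includes(NaN)); // true

// 2. The exponentiation operator, equivalent to Math.pow.
console.log(2 ** 10); // 1024
```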
This cheatsheet would have you publish dist/*.{js,d.ts}. Presumably you would use "files":["dist"] in package.json to exclude sources from being published.
The OP recommends additionally packaging src/*.ts along with sourceMaps and declarationMaps.
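Assuming a conventional dist/src layout (the names here are illustrative, not from the article), the combined recommendation might look something like this in package.json:

```json
{
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "files": ["dist", "src"]
}
```

with `"declaration": true`, `"declarationMap": true`, and `"sourceMap": true` under `compilerOptions` in tsconfig.json, so that consumers can step from the published .js through the maps back to the original .ts sources.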
I poke around in node_modules with some regularity (often in combination with the debugger) and it’s always nice to find actual source files, not just source maps.
Browser support for ES2015 is about 95% [0], while most ES2022 features sit at around 90% [1]. You can find the individual benchmarks on compat-table [2].
Some, but not all of these, are transpilable/polyfillable (refer to compat-table).
When we talk about "modern browsers", we're not talking about usage percentage, we're talking about browsers released in the past few years. If you need to support browsers older than that, that's fine, but we don't call those modern browsers; that's legacy support.
Most modern browsers are evergreen, so targeting es2022 would be fine for most users. The exception is Safari, which is slower to incorporate new features and doesn't roll out its updates nearly as quickly as Chromium-based browsers or Firefox - but even Safari has had es2016 support since 2016.
Sure, but the reason why browser compatibility is being measured is to quantify that notion of "modern".
You can then use that to make your own decisions; if you're working on a high-tech video streaming platform for kids you can probably safely ignore the 5%, but if you're providing information on behalf of the government you might need to support much more than that.
Also, Safari greatly picked up the pace a few years ago, and honestly it's hard to categorize them as behind anymore. They lead in several areas and have caught up in most.
Wait, they're planning on charging the Game Pass game fees to Microsoft? The Aggro Crab example alone could incur hundreds of thousands of dollars per game for Microsoft, they're not going to be happy about that. Surely that's going to drag Microsoft legal very quickly into this mess?
The paper is about trying to statically analyze this. As I understand it, fip-annotated functions are ones that are checked to neither allocate nor deallocate.
"Instead, ask your colleagues or your community on Twitter why you should pick this project over another. You can also start a new discussion or create an issue on GitHub asking for other people's experiences."
As it gets easier and cheaper to run LLM-based bots, I wonder how long this approach will keep working. Devs in mid-to-large companies should have colleagues they can ask directly, but smaller startups might be vulnerable to being artificially swayed towards specific options.
Assuming the issues aren't also hijacked, it's pretty easy to do your own sanity check of a project. It takes no more than a few minutes of archeology in the code and the issue tracker.
Also, Twitter has a fomo/cool bias which is really terrible for software imo. Boring is usually better.