Hacker News | centur's comments

100% agree. I did the same back in the early OAuth2 days, before the main platforms got libraries and support (we were transitioning from OpenID 2.0, not yet OIDC). The OAuth2 spec is surprisingly straightforward and readable; coupled with a basic understanding of the ABNF used in all RFCs, it was a joy to read and implement. That understanding also stuck with me for many years and helped massively in my career :).
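
To show how little machinery the core of RFC 6749 actually requires, here is a minimal Python sketch of the authorization-code token exchange; the token endpoint URL and credentials are hypothetical placeholders, and a real client would also handle the authorization redirect, state validation, errors and token refresh.

```
# Minimal sketch of the RFC 6749 authorization-code token exchange (section 4.1.3).
# TOKEN_ENDPOINT and all credentials below are hypothetical placeholders.
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"

def exchange_code_for_token(code, redirect_uri, client_id, client_secret):
    """Swap an authorization code for an access token."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
        },
        auth=(client_id, client_secret),  # HTTP Basic client authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically: access_token, token_type, expires_in, refresh_token
```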


IIRC the dot is one of the characters that can't be discarded when interpreting the local part of an address (RFC 5322), so fubar@domain.tld and fu.bar@domain.tld really are different addresses. As far as I understand, it's the Gmail team's decision to interpret the local part that way and treat `helloworld@gmail.com` and `hello.world@gmail.com` as the same address. I'd expect the dot trick to rarely work anywhere outside of the Gmail world.
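
To make the distinction concrete, a small Python sketch (my illustration, not anything from the RFC): per RFC 5322 the local part is effectively opaque to everyone except the receiving server, so collapsing dots is only safe for providers known to ignore them, like Gmail.

```
# Sketch: dot-collapsing is Gmail-specific behaviour, not something RFC 5322 permits
# a third party to assume for arbitrary domains.
GMAIL_DOMAINS = {"gmail.com", "googlemail.com"}

def normalize(address):
    local, _, domain = address.rpartition("@")
    domain = domain.lower()
    if domain in GMAIL_DOMAINS:
        local = local.replace(".", "")  # Gmail ignores dots in the local part
    return local + "@" + domain

assert normalize("hello.world@gmail.com") == normalize("helloworld@gmail.com")
assert normalize("fu.bar@domain.tld") != normalize("fubar@domain.tld")  # distinct per RFC 5322
```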

The + sign is part of the standard (`atext` token, RFC 5322), so sites that disallow it in addresses are doing it wrong. The fact that the industry adopted the practice of treating everything after the + sign as a "tag" is not captured anywhere, which creates even more mess in an already messy space (e.g. MS followed GSuite here too and added subaddressing - https://learn.microsoft.com/en-us/exchange/recipients-in-exc...)
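
As an illustration of how that convention is usually applied, a short Python sketch: splitting on the first + reflects the common subaddressing practice (Gmail, Exchange Online), not anything RFC 5322 itself defines, so it remains a per-provider assumption.

```
# Sketch of the "+tag" subaddressing convention; the split is a provider practice,
# not part of RFC 5322 itself.
def split_subaddress(address):
    local, _, domain = address.rpartition("@")
    base, plus, tag = local.partition("+")
    return base + "@" + domain, (tag if plus else None)

print(split_subaddress("hello+newsletters@example.com"))  # ('hello@example.com', 'newsletters')
print(split_subaddress("hello@example.com"))              # ('hello@example.com', None)
```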


You can still buy standalone Office licenses with classic Office (aka the desktop apps). I got myself the 2019 and 2021 versions (for different OSes) and prefer them, even though I have a personal MS365 business subscription; I simply like permanent licenses more. On a related note, I've bought permanent licenses for devtools that advertised a subscription model only :) I reached out to the developers directly and asked whether it was possible. It works well for both sides - I probably paid more for the tool than their average subscription lifetime, but I got a few tools that I only need occasionally over a longer period of time - e.g. one tool I used maybe 10-15 times, but over 7 years. That doesn't work with bigger companies though :(


I don't think it will see any; people left Russia not because of a single person in power, but because of systemic problems at all levels - kindergartens, schools, police and safety, the right to run a legitimate business. It's never the person at the top, it's always the system that has been enabled and groomed by that person or party.


Back in my Windows and PowerShell days, this was my favorite "security reminder" prank. Everyone who forgot to lock their machine and walked away got this script executed on their machine. TL;DR: it schedules a task called "Security Reminder" that plays "I'm watching you" via the voice synthesizer every 30 minutes.

```
# Build the action: a hidden PowerShell process that speaks the reminder via System.Speech
$action = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -command "& {[System.Reflection.Assembly]::LoadWithPartialName(""System.Speech"") | Out-Null; (New-Object (""System.Speech.Synthesis.SpeechSynthesizer"")).Speak(""I''m watching you"")}"'

# Repeat every 30 minutes, effectively forever
$period = New-TimeSpan -Minutes 30
$trigger = New-ScheduledTaskTrigger -Once -At 9am -RepetitionInterval $period -RepetitionDuration (New-TimeSpan -Days 1024)

# Register the task under an innocent-looking name
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "Security Reminder" -Description "Security Reminder"
```

You can imagine people's faces, especially since many were sitting with headphones on in open-plan offices. Those were the days...


I think it has less to do with the fact of putting data somewhere and more with the way the data is accessed. When you store something in S3, you can encrypt it on top of the built-in encryption. You have control (more or less) over your data in your cloud tenants and databases. Not so with AI models trained on your data: you can't extract or remove it, or even see how it's used. Literally zero traceability and transparency. That is the problem, not the fact that it's not stored on your physical hardware.

I suspect that when any AI model starts training on patent databases, it will be a watershed moment for what one can do with open data. The old regulations simply won't put up any meaningful fight against the volume and quality of model hallucinations that may turn out to be valuable and patentable inventions and improvements under those same regulations.


I'd put it differently - programmers of all levels tried to get into companies where their work is a "profit center", where the engineers' product adds to profitability, and not a "cost center", where the company sells something else but spends some of its profits on IT. The latter positions are usually associated with a lot of horror stories and uphill battles, so naturally people avoid them if they can.

So when the outlook is not so great and even profit centers have started cutting their OpEx, there is a surplus of engineers on the market and higher competition for the positions that are left in profit centers (and those are compensated less). But I don't believe cost-center companies have actually given up on IT; they may have found different ways to cover the need, but they still have some form of IT, either in-house or via outsourcing / off-the-shelf products.

But it is indeed a humbling moment in time for some engineers, to realize that they were in a very, very premium position compared to many, and got too used to it. Why build networks and stay in touch when you receive 10 inbound messages from recruiters a day? Why keep your skills sharp and up to date when, the moment you're on the market, there is a queue to snatch you off it? Why part ways on good terms instead of burning bridges when there seem to be unlimited offers in front of you, so you never want to look back...

Well, now we know why - to get through the "thin" until the next "thick".


I can't speak to comparable search result quality, as I've used DDG as the default search on all my devices for many years, so I'm in my own echo chamber here.

What I can say is that, given the current trend of mass enshittification of available content and the noticeable decline of the other major players in this space (from time to time I accidentally search via Google/Bing or specialized engines), if DDG just keeps its current level of search quality and modesty of interface, with the rich !bang syntax, that will be totally fine by me.

I don't need AI-generated results, and I don't need a rampage of widgetry, ads, previews and helpers on my search page. I like that they haven't changed their search page colours, CSS classes, layouts and everything else that compounds into the UX for ages. Every now and then, when I search with other engines, I feel like I'm back in the IE6 era, when everything tried to install a toolbar into your browser or steal your attention, or on those old link portals riddled with flashy banners. And then Google was born, with a clear and simple search results page.

So it's not so much about the best search quality or an ever-changing landscape; it's the consistency of experience from DDG that I'm after.

DDG is my daily driver; I'm used to its style, results, visuals and syntax. In a way, I've mastered DDG search like one can master regex syntax or a programming language. I'm used to it the way someone is used to their old tools. I don't want a new "shiny" hammer or screwdriver, I want the one that is stable, that I can adapt to and master.

And if/when they ask for a small donation or a subscription, I'd happily help them, the same way one would help an old reliable friend in need, rather than flock to some new hype-driven, money-burning, born-in-AI, vendor-locked native application.

Being predictably reliable through the last few years of the internet-worsening landscape is not a bad thing at all. It's a great perk (and kudos to the DDG team) that some people value much more than a fire hose of degenerative AI novelty.


This hits very close to home for me. I'd prefer that certain things stay in ASCII, even though my name and surname are not originally written in English.

This problem is all too well known to everyone who has their original documents written in a non-English alphabet (read: everyone with diacritics, Cyrillic script, Hebrew, or heaps of other non-Latin-based writing systems).

To add insult to injury, this is not even about some common transliteration tables from non-Latin to Latin script. Sometimes the rules of transliteration change on local government whims, and if you get caught in that bureaucracy, all hell breaks loose. You can't obtain a document with the previous "Latin spelling" because it's been changed in the system. This way you may end up with a ticket in one spelling and a passport in another. Or, as in my case, my brother and I have different transliterations of our surname into English, and my first name would be spelled with 4 letters different from all my other documents if I had to issue a document from scratch. Good luck explaining all this nonsense to anyone who is trying to "character-match" someone's hard-to-pronounce surname from an official document. It's even worse when they try to re-type UTF-8 characters without knowing the correct character codes or having the corresponding layout installed (most of the time at international airports and customs).

So I'm personally fine with something being incompatible with GDPR, as long as certain systems stay ASCII for as long as they can. We'd be opening a whole new can of worms if internationally used documents were in UTF-8, imo.


This one is my favourite concept; the bloom filter comes right after it. The cool thing about Geohash is that you can extend the concept into 3 or more dimensions, and the whole idea makes you think a bit differently about encoding and operating over small tokens of data. Fantastic stuff.
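
For anyone curious how small the core idea is, here is a rough Python sketch of the classic 2D geohash encoding (alternate longitude/latitude range bisections, then pack the bits into base32); it's a simplified illustration rather than a replacement for a proper geohash library, and extending it to more dimensions mostly means interleaving more ranges.

```
# Simplified geohash encoder: interleave lon/lat range bisections into bits,
# then pack every 5 bits into the geohash base32 alphabet.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=11):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, even = [], True  # classic geohash starts with a longitude bit
    while len(bits) < precision * 5:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    # Pack bits into 5-bit groups and map them to base32 characters
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

print(geohash(57.64911, 10.40744))  # u4pruydqqvj - the classic Jutland example
```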

