The backend speaks SMTP, IMAP, and HTTP. It is basically glue (or a proxy) between SMTP/IMAP (including those protocols over TLS) and HTTP. Nylas Pro talks HTTP to that glue (I assume over TLS, so HTTPS). Nylas Mail also talks HTTP to the backend, but the backend runs locally, so you're self-hosting it. I suppose this incurs some performance cost. (With Nylas Pro you can point it at a separate backend server if you desire, but that requires work. That is a really cool feature for a business, though.)
So I would assume you are hosed. The passwords need to be stored in plaintext (or at least in a recoverable form), since SMTP/IMAP authentication requires the original password.
With OAuth it makes sense that no password is saved: a unique token is stored and validated against Google. If that token leaks, you are hosed too, but at least you can revoke it.
I tried grep mypasswd ~/.nylas-mail/* and grep said "Binary file shared.sqlite matches". This did not occur in ~/.nylas. It makes sense and is inevitable; a client like Thunderbird suffers from the same problem.
It can be mitigated by saving the password encrypted and decrypting it with a master password. That is akin to how LastPass and Mozilla protect their synced data.
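The master-password idea can be sketched in a few lines. This is a toy, stdlib-only illustration under stated assumptions: the PBKDF2 key derivation is standard practice, but the SHA-256 counter-mode keystream below is only for demonstration; a real client would use a vetted authenticated cipher such as AES-GCM (e.g. via the `cryptography` package or NSS, as Mozilla does).

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, n: int) -> bytes:
    # Toy pseudo-random stream derived from the key; illustrative only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(master_password: str, plaintext: bytes) -> bytes:
    # Fresh random salt per encryption, so equal passwords give different blobs.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, salt + ct, "sha256").digest()  # integrity check
    return salt + tag + ct

def decrypt(master_password: str, blob: bytes) -> bytes:
    salt, tag, ct = blob[:16], blob[16:48], blob[48:]
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    if not hmac.compare_digest(tag, hmac.new(key, salt + ct, "sha256").digest()):
        raise ValueError("wrong master password or corrupted data")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

The point is that the account password only ever touches disk encrypted; the key to recover it lives in the user's head, not in shared.sqlite.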
Using containers, etc., would also lower the threat.
In a way it's good the password is saved locally. The engine also runs locally. It moves the threat model to the client, away from Nylas servers. Kudos.
The app looks great, but I'm just not in favor of piping everything I have with email through your servers. A standalone app as the basic version, with opt-in Google OAuth, would justify its use case.
Since those are accomplished via a tracking pixel in the email and you need to serve that image from a public server, then yes, you'll need a cloud service to run that. As stated before, you can choose to run that server yourself if you don't trust Nylas with that kind of data.
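The mechanism described above is simple enough to sketch. Assuming a hypothetical tracking host (`track.example.com` below is made up), the server hands each outgoing message a unique image URL and logs whoever requests it, returning a transparent 1x1 GIF:

```python
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

# A transparent 1x1 GIF: the classic "tracking pixel" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00"
         b"\x01\x00\x00\x02\x02\x44\x01\x00\x3b")

def pixel_url(base: str, message_id: str) -> str:
    # Each message embeds a unique <img src="..."> URL, so a request for
    # it tells the sender that this particular message was rendered.
    return f"{base}/open.gif?{urllib.parse.urlencode({'m': message_id})}"

class OpenTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        print("opened:", qs.get("m", ["?"])[0])  # record the open event
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# To actually serve: HTTPServer(("", 8080), OpenTracker).serve_forever()
```

This also makes clear why the feature can't be purely local: the image has to be fetched from somewhere the sender controls, which is exactly the cloud dependency being discussed.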
Unfortunately, standardization != widespread use. Tracking pixels are still the most reliable way to know whether a recipient (or many) opened your message.
Gmail, Outlook.com/Hotmail, Thunderbird, and Outlook all block remote images unless the sender is whitelisted... Now, there may be other providers/clients that don't, but the above accounts for a significant number of users (if not most Western mail users).
Gmail does something more interesting: it shows the image by default if the URL is not unique, or if the image is an attachment. It also uses Google's servers to download and cache the image when receiving it.
That's not a good excuse for not implementing a standard. It's also a little like saying "private investigators are still the most reliable way to know whether a person is at home": it's true, but not really a better thing to implement.
Tracking pixels are ubiquitous, used in virtually every situation where you want to know whether a person opened your email, which in my experience equates to virtually all e-commerce.
As you said, in the newsletter/spam business. But common mail apps don't do that - there is already a de facto standard for that. It's clear that Nylas's business is built around tracking and collecting data. It's like Win10: the user is the product and everything gets collected.
It's good that Outlook and other mail apps block third-party pictures by default.
The parent mentioned probably the two biggest mail clients in the world, and you mentioned two that I haven't heard of despite 20 years of sending email and toying around with new software. I'm not sure it's an effective rebuttal.
So this won't work with emails coming from Gmail (and probably other providers) since these services cache these images on their server as soon as they receive them (and not when recipients open them) precisely to defeat this kind of tracking.
And of course, a lot of email clients only open such images on demand anyway.
I don't believe this is correct. Gmail will only request an image to proxy once you've opened the email, according to MailChimp, so this would only prevent tracking multiple opens.
It's actually hard to track down exactly when these images are cached. I don't really trust MailChimp to be truthful about this, but I also can't find anything specific from Google themselves about whether the caching happens when their SMTP servers receive the email or when the user actually opens the message.
Since Google's goal with this was clearly to defeat tracking, I would strongly expect the former, but I can't back this up.
1. All local. Unless you don't want that, and a 100-200 KB JS file is too much of a strain on your server's bandwidth. Or are you serving 15 MB of JS files?
2. Screw Disqus. Screw Facebook Comments. Start thinking about your visitors; as someone said on another related thread, you are responsible for the tracking of your visitors by 3rd-party sites. Host comments locally, or turn them off if you don't care what others are saying. Don't save any information about commenters except what they enter in the boxes. One-way hash the IPs if you need to compare them for spam reasons.
3. If you need your ego stroked when you see you had xx visitors on your site, go ahead, use Google Analytics and screw us all. We're gonna block it anyway.
[1] This is a privacy policy I use and respect very much when interacting with the visitors/commenters on my personal blog.
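The one-way IP hashing suggested in point 2 needs one caveat: the IPv4 space is small enough (~4 billion addresses) that a plain unsalted hash can be brute-forced back to the address. The usual fix is a keyed HMAC with a secret kept server-side. A minimal sketch (the `IP_HASH_KEY` environment variable name is my own choice, not from the comment):

```python
import hashlib
import hmac
import os

# Secret key kept server-side (config or environment). Without it, an
# attacker cannot enumerate the small IPv4 space to reverse the hashes.
SECRET = os.environ.get("IP_HASH_KEY", "change-me").encode()

def hash_ip(ip: str) -> str:
    # One-way and keyed: equal IPs give equal digests, so spam filters
    # can still compare commenters, but the raw address is never stored.
    return hmac.new(SECRET, ip.encode(), hashlib.sha256).hexdigest()
```

Store only `hash_ip(remote_addr)` with each comment; spam comparison still works, and a database leak reveals nothing about visitors.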
> 3. If you need your ego stroked when you see you had xx visitors on your site
That's a bit harsh - wanting to know whether your blog post got 0 visitors yesterday or 2,000 isn't just about ego; it helps you understand the value of your posts (and whether you should bother). Knowing how many people visited isn't the same as bragging about it.
Maybe it was harsh, the idea is: You might write better when you don't know how many people read your articles. On the other hand, you might write better when you know. Plan accordingly :)
2. I need to look into self-hosted comments; but I was hoping to make the blog portion of the site static to keep it simpler. Project pages may have demos/etc that pull in JS libraries. But you raise a good point in 1 that also applies to 2 - given that I'm probably going to be reaching only a handful of people initially (and perhaps longer), worrying about bandwidth is a premature optimization.
3. I've just about talked myself into going with log-based analytics here. I find GA's omnipresence too worrisome to contribute to it, even with consent.
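Log-based analytics can be very little code. A sketch that counts unique client IPs per day from a standard Common Log Format access log (the exact log path and format are deployment-specific assumptions):

```python
import re
from collections import defaultdict

# Matches the start of a Common Log Format line:
# "1.2.3.4 - - [05/Jan/2017:10:00:00 +0000] ..."
LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4})')

def unique_visitors(lines):
    # day -> set of client IPs seen that day
    days = defaultdict(set)
    for line in lines:
        m = LINE.match(line)
        if m:
            ip, day = m.groups()
            days[day].add(ip)
    return {day: len(ips) for day, ips in days.items()}
```

Run it over `/var/log/nginx/access.log` (or Apache's equivalent) from cron and you get visitor counts without shipping anything about your readers to a third party; combine with IP hashing above if you don't even want raw IPs in the logs long-term.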
Thanks for that link, that's essentially the policy in my head before I started thinking about things like comment support. It's way better written than I would've come up with.
Can't say I would disagree. A lot of folks these days just seem to append an HN/Reddit link to posts that get discussions on those sites. There's the blog as an expression of author's personality, and then there's the discussion space as an area with a life of its own.
All due respect for Leah Rowe and her work on libreboot, but this kind of drama only hurts the OpenSource (regardless of the name you like to use) initiative.
What are you talking about? The FSF handled it in an exemplary fashion from the get-go.
Leah Rowe however was the toxic and crazy drama queen who couldn't keep a civil tone, and even her initial email included the word "fuck".
To top it off, she had zero proof for her allegations, and the person she claimed to represent specifically didn't want her to raise those allegations in her name.
In the rest of the world, we refer to that as slander and libel. She should be happy nobody bothered to sue her, on either end.
Given Leah's behavior, I cannot see how on earth the FSF could have acted more level headed, reasonable and professional.
I don't have much to add to this discussion, but historically the FOSS community loves sharing Linus's nastygrams. I find people having a problem with this developer's tone somewhat sexist, and irrelevant.
If the FSF thought that minimizing drama was a priority, they could have accepted that the maintainer wanted to take libreboot out of the GNU project without prolonging the whole thing. Like, basically this submission, but a couple of months earlier.
Also, more generally, I maintain that there's more nuance to the question of professional behavior than whether your communication includes common swear words.
> could have accepted that the maintainer wanted to take libreboot out of the GNU project without prolonging the whole thing
If I read the FSF statements correctly, there were very good reasons for taking some time, which I found quite convincing.
For instance, they were looking for a new maintainer of this GNU project, especially since GNU projects belong to GNU/FSF, not to the individual maintainers. The maintainers are free to step down and let other maintainers continue a project.
Would it have been more professional if they had thrown away their management process for GNU projects in this single case?
The only documentation that I see[0] about this just says "The program remains a GNU package unless/until the GNU project decides to decommission it.", but that's not a legal document. Assigning the copyright to the FSF would be a good indicator, but is not actually required to be a GNU package. So unless they have some other document to sign, then this seems to be a mutual contract which can be terminated by either party at any time. I don't know if that document exists (if it does, you'd think they'd link it on that page so people could review it), but if it doesn't, then I don't see any reason why the original submitter can't revoke GNU package status, like any other business relationship.
When you get your project into GNU, you practically transfer the ownership and become a maintainer only. Maintainers come and maintainers go, GNU tries to ensure that projects live on when staff changes. GNU projects thus belong to the community and their fate should not be left to the whims of the individuals.
In this case, though, it's nice that the project is detached, given that it's not all that important and is not worth the burden.
Exactly. The people that think Medium is innovative are the same people that think Farm From the Box (https://news.ycombinator.com/item?id=13309610) is innovative. Some people just live in their pretty bubbles.
Probably not, but statistically I'm vastly more likely to be pissed off by a human driver killing them. I think in the last year roughly a million people were killed by human drivers, and one or two by self-driving cars. If the percentage killed by self-driving cars gets over 0.1%, I'll worry about it.