"The idea is that you'll have an authentication server (or it could be part of your core API) which is responsible for giving users their token. Once a user has a token, they can hit any of the applications, and the application can very easily check the validity of the token before deciding what to do with the request."
Are people really trying to reinvent Kerberos just so they can use the familiar JSON?
Well, yeah, since Kerberos stubbornly hasn't re-invented itself to use HTTP over port 80.
Another option you could have mentioned is client-side SSL certificates, though they're simply not portable enough for most uses.
Finally, why bother implementing a sign-in service at all? You could follow the AccountChooser model and simply allow users to sign in using their own existing email address. Details at the recently redesigned http://www.accountchooser.net/developers ... though to support email providers you don't already have IdP relationships for, I guess that means letting them sign up for a password via email validation after all.
Well, if nothing else, I think it points out that Kerberos was designed for one institution, but today we often want multiple identities on the web, and that better fits a decentralized, browser-stored model of "SSO" or login, e.g. AccountChooser screens.
I agree with you both; at the same time, not everything using a session/token auth/authz combination needs to be "Kerberos". While one might argue whether it's good or bad, we've long let the web server be the authentication/authorization boundary -- and there's not really anything wrong with formalizing the architecture into an auth.example.com and a service[1-through-n].example.com.
Let serviceN.example.com check for a valid serviceN cookie; if it's missing, let serviceN.example.com set a temp cookie and push a token to auth.example.com. Then the client that's missing a valid session for serviceN is redirected to auth.example.com with a ?token=<encrypted>. Auth does the authentication and bounces back to serviceN.
[edit: Hm, I'm completely missing the SSO bit here, actually -- at minimum there's a redirect bounce for every new service N+1 the client accesses after obtaining a valid session for auth.x.c. That would probably be a problem for AJAX? Maybe it's possible to wrap it with JavaScript in a sane way.]
Client has a session (flagged not authenticated, not authorized) for ServiceN. ServiceN asks auth for the status of the session token and gets a reply (optionally along with authorization data -- depends what "authenticated" means). Assuming a valid reply, ServiceN sets up a "proper" session (e.g. a PHP session id, or whatever framework ServiceN uses).
Basically single sign-in -- without single sign-out (unless ServiceN can/does sign the client out via auth.x.c on sign-out from ServiceN).
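The bounce described above can be sketched as three plain functions standing in for the two hosts (a toy simulation; the dict-backed stores, names, and return shapes are all illustrative, and the token check would really be a server-to-server call):

```python
import secrets

auth_sessions = {}     # auth.example.com: token -> user
service_sessions = {}  # serviceN: local session id -> user

def service_request(cookie):
    """serviceN: serve if the session cookie is valid, else bounce to auth."""
    if cookie in service_sessions:
        return ("200 OK", service_sessions[cookie])
    token = secrets.token_hex(16)
    return ("302 Redirect", f"https://auth.example.com/?token={token}")

def auth_login(token, user):
    """auth.example.com: authenticate, associate the token, bounce back."""
    auth_sessions[token] = user
    return ("302 Redirect", f"https://serviceN.example.com/?token={token}")

def service_callback(token):
    """serviceN: check the token against auth, set up a proper local session."""
    user = auth_sessions.get(token)  # in reality: a trusted call to auth
    if user is None:
        return ("403 Forbidden", None)
    sid = secrets.token_hex(16)
    service_sessions[sid] = user
    return ("200 OK + Set-Cookie", sid)
```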
Yes, this is basically CAS/Shibboleth/etc -- but for medium sized architectures it might actually be simpler.
All interactions assume trusted communication paths (SSL for client-to-server/service; VPN/SSL/internal network for service-to-service).
The other way (less web-centric?) is to simply have Service1-N look up users via LDAP/AD/RADIUS against some central internal user database.
I've just heard about 'API backend' and consumers (Web, Mobile, etc) and it makes a lot of sense to me on paper. The engineer in me foams at the mouth to dive into this.
I just haven't had the time or right project to really use it. For example, in one of my pet projects[0] I could write an API data store and easily consume it client side or in my planned mobile applications. IT MAKES SENSE!
Does this architecture have an official name? I would love to learn more and avoid mistakes made during its infancy.
Isn't this just "3.4.7 Remote Data Access (RDA)"[1] from Fielding's REST thesis[2] (where "client" means JavaScript + browser, and "server" means some kind of wrapper over SQL or NoSQL so that querying is quite simple)? I sometimes wonder if people haven't read it, despite it being quite accessible and a fantastic introduction to architectural analysis, given all the hype around the Representational State Transfer acronym.
For those that have somehow missed it (is it really obscure?) -- the introduction alone is well worth the read, even if you don't care about the argument that leads up to REST being a good idea for an architecture for a hypertext application/system.
Thanks for the links. I hadn't read the paper. It seems like the design patterns book in that it's a good reference to, um, refer back to when looking at the latest "new" thing.
Going back to the parent comment,
> I would love to learn more and avoid mistakes made during its infancy.
You'll probably want to look at ZooKeeper (or some system built on top of ZooKeeper, like Storm) if you expect the thing to grow into a large project with lots of services.
And to the article,
> Should all your applications use the same language?
There are definitely reasons (library availability, runtime capabilities, the simple fact that you don't have to, and so on) not to use the same language, but I've found being able to refactor between applications very helpful.
"Client agnostic" (I may have just made this term up, not sure) & service oriented architecture would probably be the most accurate descriptions.
This really isn't anything new though. Many applications have been moving in this direction since 2008 and on. Mobile apps and heavy JS frameworks like Ember and Angular have made it even more prominent.
The real shift is thinking about the user interface as just another service, and one best provided by a static web application that talks to your API server and other services via CORS.
As long as you completely split the front-end and back-end, you're able to start with a simple single API and break it into services once you recognize pieces of your application that would work better independently. This also makes it stupid simple to implement new interfaces e.g. mobile apps.
I don't think that, long-term, I'd want any client code speaking directly to the backend over a generic API. It's far more efficient to move toward code that sends fewer requests with as little traffic as possible over the wire, and often that means keeping state on both the server and the client, which isn't all that RESTful. http://blog.programmableweb.com/2012/05/15/why-rest-keeps-me...
The point is the different design goals: Yes, you can send page after page of HTML, refreshing content that hasn't changed. But we moved to AJAX because that was inefficient. Better to have the browser ask for just the data it needed. Well, efficiency will then lead to either better protocols to bundle up multiple requests or the simpler approach of bundling up data into one request designed for that particular user's session. Hard to put RESTful, meaningful unique IDs on that one.
Of course, the risk you run is one I've frequently encountered in Google Music, for instance. The state graph is messed up somehow and duplicate data or corrupt data starts appearing in the JSON stream to the browser and in the UI. Not much you can do except refresh, logout, or wait for a code update to clear such caches, unfortunately. That can be the downside to "smarter" clients and why even today we have cache clearing and a "force-reload" action.
This is also why automated backend services should use REST for simplicity, and why UIs need to be built to consider network traffic, re-sending failed requests, checking for invalid data, etc.
The sensible way to combine classical REST and AJAX (which I think original AJAX did allow for, with it being Asynchronous JavaScript and XML) is to allow requesting partial documents. So if you have, at /document.xml:
<xml><d><p>I am initial document, I have only one thing to say, and that is hello!</p></d></xml>
You should be able to get a partial -- <p>I am p2</p> -- from something like /document.xml?2 or /document.xml/2. Then all that could be cached, and you only send new sections.
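A toy version of serving such a partial, assuming ?N selects the Nth <p> element (the document, the selector convention, and the function are all made up for illustration):

```python
import xml.etree.ElementTree as ET

# Illustrative document with two paragraphs.
DOCUMENT = "<xml><d><p>hello</p><p>I am p2</p></d></xml>"

def get_partial(doc: str, n: int) -> str:
    """Return the nth <p> of the document as a standalone cacheable fragment."""
    root = ET.fromstring(doc)
    paragraphs = root.findall(".//p")
    return ET.tostring(paragraphs[n - 1], encoding="unicode")
```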
But yes, the client does need to keep some state, because HTML doesn't support transclusion[1] of the "most important" HTML elements: paragraphs, divs etc. So you can transclude an image, or (badly, IMHO) an entire document through an iframe -- or via JavaScript (hence modern AJAX, which is basically asynchronous JavaScript and JavaScript, or JSON, which is basically JavaScript).
There's no real contrast between REST and AJAX as an architecture, as such. What's maybe missing is server-to-client PATCH or something (e.g. the client says "I've got d1.html as of <some-date>, give me a diff"). Wouldn't that have solved almost all our problems?
[edit: I hereby reserve the http method "DIFF" as a reverse "PATCH", as for some strange reason no-one seems to have defined this before (as far as I can google, anyway). Semantics to be hammered out, but in general a client does a "DIFF /<uri>" along with a cache header/timestamp/sha512-hash, and gets back either an unchanged header, or a reply with a patch to be merged with the document/uri in question in order to get an up-to-date copy.
In other words, a DIFF is to GET as PATCH is to PUT ]
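The proposed DIFF semantics could look something like this toy server-side handler (the method is the commenter's proposal, not any standard; this sketch assumes the server keeps recent document versions keyed by hash so it can compute a patch):

```python
import difflib
import hashlib

def sha(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# Assumption for this sketch: server retains recent versions keyed by hash.
versions = {}

def publish(text: str) -> str:
    """Store a document version; returns its hash (the client's cache key)."""
    h = sha(text)
    versions[h] = text
    return h

def diff_request(client_hash: str, current_hash: str) -> dict:
    """DIFF /uri: client sends the hash of its cached copy; server replies
    'unchanged', or with a patch from the client's version to the current one."""
    if client_hash == current_hash:
        return {"status": "unchanged"}
    old, new = versions[client_hash], versions[current_hash]
    patch = list(difflib.unified_diff(
        old.splitlines(), new.splitlines(), lineterm=""))
    return {"status": "changed", "patch": patch}
```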