LinkedIn epitomizes everything wrong with today's front-end.
npm, grunt, gulp, ES6, TS, Babel, webpack, Yeoman, Browserify, React, React's mother and its dog, Yarn, Bower, JSX... and so on.
I can certainly chime in on this. Yes, it is Ember, but I'd blame that on the way it is "abused".
At one point it had become so bad that we had to purge the excess whitespace from the HTML at the traffic layer with middleware. The pages actually contained megabytes of whitespace.
Not to mention the server side rendering mess.
However, when it comes to the subject matter of this thread, I don't think this is as sketchy as the OP makes it sound. This is LinkedIn's anti-scraping team at work, and nothing nefarious is going on.
This would be the application security team's work. They have a pretty extensive anti-scraping initiative, and I know for a fact that these checks are used to determine whether an account is scraping or not.
Someone in a different comment mentioned the "email-hunter" extension. That's exactly the kind of extension they are targeting. I remember many requests coming in to support asking why an account had been terminated, and the response was usually "oh, you used email-hunter", etc.
It was probably a combination of bad practices. The fix ended up at the traffic layer because the problem spanned many application origins, and addressing it at the source would have been a huge horizontal initiative.
I don't have any examples from the past because I no longer work there, but when this middleware was accidentally turned off for a few days, the homepage would become two-thirds whitespace. The DOM would render correctly, of course, and the user wouldn't notice anything was wrong; however, if they were to "View Source", they'd realize they had just downloaded a bunch of whitespace.
Imagine having 5 kilobytes of "\n" after each HTML element, that kind of thing.
As for the middleware itself, it just parses and minifies the HTML source; it was implemented as an Apache Traffic Server plugin.
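To illustrate the kind of transformation such a minifying middleware performs (the real thing was a native Apache Traffic Server plugin; this is just a toy sketch of the idea, not the actual code):

```python
import re

# Collapse runs of whitespace sitting between tags. A real minifier has
# to be smarter (e.g. leave <pre>, <textarea>, and inline text spacing
# alone); this only demonstrates the shape of the fix.
INTER_TAG_WS = re.compile(rb">\s+<")

def minify_html(body: bytes) -> bytes:
    """Strip whitespace between adjacent tags, leaving text content as-is."""
    return INTER_TAG_WS.sub(b"><", body)

# Simulate "5 KB of newlines after each element":
bloated = b"<ul>" + b"\n" * 5000 + b"<li>item</li>" + b"\n" * 5000 + b"</ul>"
slim = minify_html(bloated)
print(len(bloated), len(slim))  # ~10 KB shrinks to a couple dozen bytes
```

Multiply that per-element waste by the hundreds of elements on a page and you get to "megabytes of whitespace" quickly, which is why doing it once at the traffic layer beat chasing every template origin.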
13 MB of JS/CSS/HTML: https://imgur.com/a/oehQQzJ