A fundamental difference between Japanese and Western animation is their approach to motion. Western animation emphasized fluid motion, sticking as closely as possible to the maximum frame rate (24 FPS) allowed by the film medium. This could produce striking results, but it tended to limit artists to simpler, slower, more deliberate squash-and-stretch movements that would make sense when interpolated across key frames. It was also very expensive to produce, which is part of why Western studios have mostly abandoned traditional animation, and why the ones that still do 2D often use Flash to cheaply "tween" between keyframes, getting fluid motion for the minimum amount of effort.

Japanese animation, on the other hand, plays fast and loose with the rules. Very little is animated at the full 24 FPS; typical scenes drift between 12, 8, and 6 FPS (called animating on twos, threes, or fours). This lets backgrounds and stills carry the viewer's attention in expository scenes, and encourages a different style of exaggerated movement that emphasizes the "feeling" of the movement over brute-force motion quality. It also allows the budget to be conserved for the most important scenes, the sakuga ("moments in a show or movie when the quality of the animation improves drastically, typically for the sake of making a dramatic point or enlivening the action"). So-called "key animators" work exclusively on these parts of an animation, and here their skill at creating expressive motion shines when given the time and resources to completely bring a dramatic scene to life.

As the game, comic, and animation art forms grew alongside one another, early games took direct influence from the styles of animation native to their creators. The concept of sakuga fit Japanese games well: they would often animate unimportant "cannon fodder" enemies sparingly to save budget and development time for elaborate boss monsters. In contrast, I was always struck by the fluidity of motion in Western games produced by groups like The Bitmap Brothers for platforms like the Amiga, yet unimpressed by the gameplay underneath. It seemed almost as if these groups spent so much time lavishly animating their characters that they forgot they were making a game.

Another important consideration is that more frames can work against an action game. Every frame of a sword swing, for instance, is another sixtieth of a second (roughly 16 ms at 60 fps) of delay between the player's button press and the intended action occurring. A 1- or 2-frame anime-style sword swish fits perfectly into this mold, allowing for responsive control and an expression of energy. Many Western games I played, by contrast, would lavishly animate the start of a jump or the thrust or swing of a sword, compromising the feel of responsiveness for the sake of the artist's ego.

So if you notice a lack of frames in a modern "pixel art" game, the artists might simply have been lazy (the typical "pixelated platformer" drawn at a lower resolution than an Atari, with more colors than a SNES game), or they might have been copying older Japanese artists without understanding the reasoning behind those animation choices. Or perhaps you just don't appreciate this style of animation.


No source for this, but I remember hearing an interesting anecdote some time ago, and I'd love to know whether there's any truth to it:

Back in the '50s or so, when the major American animation studios had a couple of decades of experience under their belts but anime was just getting off the ground, some Japanese animators took a trip to Hollywood to learn how the industry worked in the US. They were given a tour of a major animation studio, but through some kind of miscommunication, they came away with the impression that the studio was spending an order of magnitude fewer resources (time, money, manpower) than it actually was.

When they returned to Japan, they budgeted their productions using those lowered expectations, forcing their animators to cut corners and squeeze as much expressiveness into as few frames as possible. Which in turn led to audiences getting used to that style of animation, and created the recognizable Japanese aesthetic that has persisted to this day.


I have no idea, sorry. It would be very interesting if it were true, but it sounds like a bit of an urban legend. Working within the constraints of limited resources seemed to be a constant in all of Japan's pre-bubble successes, from comics to calculators, so I wouldn't assume early Japanese animators needed any extraordinary reason to follow the same principles.

My interpretation was that anime's more limited, stylized animation was due to its direct influence from the prolific manga artist and animator Osamu Tezuka (sometimes called "the Walt Disney of Japan"). All comics, Western or Eastern, must heavily stylize motion in order to have any hope of expressing it in still frames. So having an accomplished dramatic comic artist lead the earliest and most influential Japanese animations, many of them direct adaptations of his own comics, using the low-framerate, stylized motion that extended naturally from comic-book stills, seems to have had a profound effect on the Japanese school.

In contrast, Disney's early animations were inspired by conventional cartooning in subject, and by early film and vaudeville in motion, so given Disney's influence, it should similarly not be surprising that the Western animation school tended towards comedy and fluid motion that attempted to emulate the impression of film.


Ex-animator from the US here.

Most Western animation has historically been done at 12fps ("on twos"); you go to 24fps for very fast motion or stuff you want to be super-smooth. This holds even in big-budget 2D theatrical features[1]. You will also find no small amount of super-stylized motion - dig up "The Dover Boys" (dir. Chuck Jones), commonly cited as the first cartoon to experiment with the smears, multiple images, and other crazy stylizations of motion. Single-frame your way through that someday; it's full of amazing semi-abstract images.

(First in the US, at least; there may have been people in the East playing with this as well, but there was very little cross-pollination at this time.)

The big difference between the Western and Eastern animation traditions, in my mind, comes from different decisions about how best to allocate "pencil mileage". You have a budget of X miles of lines drawn by your animators for each project; how do you spend it? Westerners generally pursued "the illusion of life", favoring really fluid and subtle motion spread across a huge number of drawings. This meant that very few individual drawings could be very complicated. Easterners, on the other hand, gravitated towards complicated character designs; you might spend four or five times as much effort drawing a single frame. This meant that very few motions could be fluid.

Fluidity of motion × character design complexity = pencil mileage. Pencil mileage is fixed by your budget. Which one of those do you want to emphasize? You can't have both unless your budget is amazing. (This is also why animated commercials can have both; they can cost more per second than some features.)

To eyes trained in one tradition, the other one looks like crap. Why is this Eastern cartoon barely more than a slide show half the time? Why is this Western cartoon full of such simplified designs? Personally, it took me many years to learn how to appreciate Eastern animation.

(Also it is worth noting that most lovingly-animated Western games quickly converged on popping into a motion as a result of a control input, with exaggerated settling into the new pose - look at Crash Bandicoot, Earthworm Jim, or Aladdin.)

I am rambling. Mostly I just wanted to mention that my experience of the Western tradition (including years of single-stepping cartoons) is largely on twos, with occasional bursts of ones.

[1] except for anything Richard Williams directed because he is kind of a total obsessive who would wander around the studio late at night and inbetween stuff onto ones that maybe should have never been on ones. See 'Raggedy Ann and Andy' for instance. Or if you want to actually see a decent movie hunt up the Recobbled Cut of 'The Thief and the Cobbler' because RA&A is pretty much 80 minutes of pure nightmare fuel.


Thank you for sharing your experience. Pencil mileage is a great way to sum up the difference between the two traditions. Sorry for sounding so down on Western animation; it's true that I'm more used to the look of Eastern animation, having invested much more time in it (hence my mistake about Western films animating on ones vs. twos; it's been a while since I watched one of the classic Western animated films, oops), but I don't want to discount the work of Western animators. I have my biases, but there are gems in both traditions, and of course I'm saddened by the way Western 2D animation has fallen by the wayside in mainstream Western (or at least American) culture since the dawn of 3D animation.

As for games, I wanted to add a comment to my post about how I was mostly thinking about mid-80s to early-90s games, but Hacker News seems to have some restrictions keeping new users and throwaways from editing posts. Many European games from that era stood out to me as compromising gameplay for the sake of their animation. In contrast, many of the 90s and early-2000s Western mascot platformers, as you mentioned, did a great job of working charismatic, fluid animation into the medium. Crash Bandicoot in particular is notable for being one of the few games to use per-vertex animation: for each key frame of animation, the vertices that made up Crash were placed manually, giving the artists an unprecedented degree of control.

Since then, sadly, most Western AAA devs have tended overwhelmingly towards more "realistic" art styles that make heavy use of inverse kinematics-driven procedural animation for gameplay, and motion capture for cutscenes, leaving the animators much less room for expression.

Regardless, my intent was not to comment on Western animation as a whole, but (bringing it all back to the original commenter's remark about mobile games being sparsely animated) simply to state that sparse animation doesn't have to be a bad thing; in fact, there's an entire school of animation built around it.

Anyway, thanks again for replying, and I'll be sure to check your suggestions out.


* WARNING * "Raggedy Ann & Andy", IMHO, is NOT A GOOD MOVIE. It is a fascinating trainwreck of obsession. It is a weird damn thing you see when you are five, then halfway remember when you are twenty and wonder if you dreamed it.

The idea of "pencil mileage" works both ways; really thinking about that helped me be able to finally actually enjoy some Japanese stuff.

There is also a strong tradition of Western limited animation; it actually started as the artistic cutting edge with the work of UPA (Gerald McBoing-Boing, Rooty Toot Toot). Even Disney played in that domain with stuff like Pigs Is Pigs or Toot, Whistle, Plunk and Boom.

But then in the 60s, TV came along, and William Hanna and Joseph Barbera's sweet gig doing beautifully animated Tom & Jerry shorts was gone; they started doing super-simplified cartoons that were carried largely by the voice talent. This got worse in the 70s and 80s, with tons of terrible toy-based cartoons done on the smallest budget possible (He-Man! G.I. Joe!). "Limited animation" is a dirty word to an entire generation of American animators.

Or at least it was until John K's "Ren & Stimpy" hit the cable networks and inspired a new generation of animators, who're busy making a lot of the TV stuff of today.

(But honestly I think that deep in the heart of every Western animator, there is a grey-toned Fleischer character bopping up and down in time to their heartbeat. Their hallucinatory version of Snow White - https://www.youtube.com/watch?v=CNG8GYrh1mg - is a good example.)

Crash and Spyro are great examples of what can be done when a 2D animator's mindset is applied to 3D games! It carries on nicely from all the gorgeous 2D work on games like Aladdin or Earthworm Jim. I would love to see people come back to that mindset in video games. Guilty Gear Xrd is the first ray of light in the clunkily-puppeted darkness in a long time.


What great 2D animation is left? Ghibli stopped producing features (or put it on hold), and Disney and everybody else in the US moved to 3D.

I was thoroughly impressed by Knights of Sidonia on Netflix, and though it can match some feature animation, it is still far from Ghibli/Disney. And it isn't even 2D!

The other day someone posted a gallery from Ghost in the Shell on Reddit and I thought "When was the last time we saw art like this on the big screen?"



Thank you for your detailed explanation.


Sounds like they should have been developing for PC instead of mobile. The average mobile user plays exploitative "freemium" garbage to kill time in lines. How can you expect them to appreciate the art behind it?

Meanwhile, "retro"-style games are doing gangbusters on Steam, selling to people that actually appreciate those styles of art, music, and game design.


There's this cognitive dissonance with a lot of game devs who damn well know mobile is a ghetto, but want the big bucks mobile can occasionally deliver. So they try to justify mobile development thinking they'll be the ones who make the "mature" game that will be "appreciated." Instead, freemium mouthbreathers leave 1 star reviews because "it looks funny on my phone."

Don't blame the game or the style. Blame your chosen audience.


You hit the nail on the head.

Funding is also a big part of it. It's much easier to get funding for a mobile game than for an indie PC game, even though the mobile game is significantly less likely to make a return on investment. So perhaps some mobile devs know they aren't going to make it, but take the money anyway so that they can work on something on someone else's dime, while they build up experience and add another piece to their portfolio.

I'm not convinced it's a worthwhile trade, however, if it makes you lose your faith in humanity and your art form in the process.


I'm working on a PC game and have spoken to a lot of potential investors, and this is spot on.

"Funding an original game that has already proved itself by actual sales, on a proven and stable platform?" Not interested.

"Giving away a few millions to take a crap-shot chance at a totally unethical business on a overcrowded platform?" Yes please!


While you have a point (and I've given you a +1 vote because of it), the fact remains that the "average person" I know thinks that Street Fighter 4's animations were better than Street Fighter 3's.

The Chun Li discussion happens a lot in my circle of friends, and I'm always in the minority. Very few people appreciate classic pixel art.


So, if we look at digital animation like Pixar's, there are a lot of guidelines from traditional animation used to make a scene come alive:

* Squash and stretch
* Follow through
* Secondary action

These principles really help make dynamic scenes. If you look at the Chun Li animations, you can see that the older one does a better job of using these.

I feel like if the HD version took advantage of these, it would look the best out of all of them, but it failed to reach its full potential in its medium.

Watch this early Pixar short: https://www.youtube.com/watch?v=D4NPQ8mfKU0

See how alive and full of emotion the little lamp is?


I can't directly compare the SF3 and SF4 animations: they're just totally different art forms.

That said, as somebody who loves pixel art, I still really love what Capcom did with SF4's art style.

The artists of SF4 definitely took a bold direction; the SF4 games don't really look like any other 3D fighting games. The characters are cartoonish without falling back on the crutch of cel shading; they're realistic without looking like drab pseudo-photo-realism. To me they look like children's action figures, fighting it out on the screen.

As the author says, SF4 could have been animated better. Specifically it probably would have benefited from some squash-and-stretch as he suggests.

It's also worth noting that SF3's traditional cel animation, while awesome, also has room for improvement. The animation style isn't very consistent from character to character. In a lot of cases (Chun Li in particular) it's not even consistent between her various moves.

A great critique here: http://www.finalformgames.com/uncategorized/style-study-moti...


> The artists of SF4 definitely took a bold direction; the SF4 games don't really look like any other 3D fighting games. The characters are cartoonish without falling back on the crutch of cel shading; they're realistic without looking like drab pseudo-photo-realism. To me they look like children's action figures, fighting it out on the screen.

Actually, it was Arc System Works, ironically, that pushed the envelope here.

SF4 took a huge number of cues from Battle Fantasia. The "Super-Attack Zoom-in" animation, the dynamic camera movement, "Super Freeze", and so forth.

It's ironic, because Arc System Works made Battle Fantasia as a "learning project". In various interviews, Arc System Works noted that they had very little 3D skill and needed to train everyone up on 3D animation, and the best way to do that was to make a 3D-animated video game.

Then a few years later, Capcom basically took all the cues from Battle Fantasia and added a decent style on top of it (the "black lines" and a unique style of cel shading). And of course, Capcom's SF4 had a much larger cast, more detailed animation, and all that. Nonetheless, it is clear that it was Arc System Works that pushed the envelope with their experiments in the one-off Battle Fantasia.

https://www.youtube.com/watch?v=YZDJenxXPuM

IMO, Arc System Works has done 3D a massive favor here with Battle Fantasia, and they are once again pushing the envelope with Guilty Gear Xrd.

Not to hate on SF4's style, of course. I think I prefer Capcom's SF4 style over, say, Tekken, DOA, or even MvC3. And Capcom definitely added a lot of "love" to the art style. But the _core_ of the animation techniques was more or less copied from Battle Fantasia years earlier.

Super Smash Bros always had the right idea with its attacks, however. If you watch SSB:M carefully, the bones of the various characters expand with the hitboxes. For example, Mario's FAir attack has a huge exaggerated fist, and other characters shrink/grow with their hitboxes. (Which makes for some interesting strategies and counter-strategies, since hitbox/hurtbox manipulation is a major element of high-level fighting games.)

The Super Smash Brothers series has been the best at communicating hurtboxes and hitboxes in a 3D environment. And it looks like Guilty Gear Xrd is finally a second series that communicates those important cues as well.

----------------------------

Still, it is clear that the 3D art style of SF4 is relatively new and not as refined. Again, the Chun-Li animation from SF3 is near the peak of Capcom's animation prowess, while SF4 is probably better described as a great first step for Capcom (even if it is in many ways copied over from Battle Fantasia).

Despite that fact, people are wowed by the zooms, the buttery-smooth animations, the dynamic camera angles, and the automated shading. Things that honestly didn't take much effort on the part of the SF4 artists. Heck, all of those things basically come for free when you use 3D models.


I wasn't impressed with Battle Fantasia's animation at all. I thought it was kind of poor, actually, compared to other 360/PS3 efforts like the Soul Calibur games of the day. That's not to say you're wrong; we just had very different impressions of it. What animation techniques are you referring to when you talk about Battle Fantasia's innovations? (I'm not an animator; I'm surely missing some things there)


Again, Battle Fantasia was a learning project for Arc System Works. It was never meant to be an advanced art form. Which is why it is deeply ironic that almost all modern 2D fighting games using 3D art are based on Battle Fantasia's camera mechanics.

Compare the super-attack animations between SF4 and Battle Fantasia. And note the following similarities:

1. The 'Super Zoom' that changes the camera angle to focus on the character performing the super-attack.
2. The 'freeze frame' mechanic, which "pauses" all other 3D animations while the super-attack user remains fully animated.
3. The way the background melts into a new environment, and then melts back into the stage as the super-attack either hits the opponent... or misses.

True, SF4 has better character models, better backgrounds, and better animations. But the camera mechanics were all invented and pioneered by Battle Fantasia.


Hasn't the zoom thing been around forever? It goes back at least as far as Rival Schools and probably earlier games I can't remember - I think Soul Edge did it too?

https://www.youtube.com/watch?v=Wb5rTtSrP3E

Rival Schools also changed the background during supers/knockouts, although it was just fades to black and 2D overlays rather than a new environment. But looking at this video of Battle Fantasia supers (it's been so long since I played it!), that's all that game did as well:

https://youtu.be/3TXk38j1wUc

The "freeze other animations while the super move executes" thing is such a minor stylistic thing that I'm having trouble really thinking of it as an innovation and to boot... is that even what Battle Fantasia does? I mean look at this super: the steam in the background is still animating:

https://youtu.be/3TXk38j1wUc?t=115


John Kricfalusi always laughs at and criticizes Pixar animation. Mainly because he's a hipster (even looks and dresses the part) who wants to go back to the good ole days of Bob Clampett, Chuck Jones, Tex Avery, etc. For most Pixar or other 3D animated productions, there just isn't enough squash and stretch or exaggeration to appeal to old time animation enthusiasts.

I think it's possible though. It's been done to a limited extent in video games: note how Link's sword arm and sword grow when he takes a swing in Wind Waker.


When it comes to cel animation, I recommend watching Chuck Jones: Extremes and In Betweens: https://www.youtube.com/watch?v=vrD0aog7Kts


That's fair, but if you want to make art, there's no sense in trying to appeal to everyone. The guy that disregards pixel art probably wouldn't see your traditional art for what it really is, either. You can be depressed that the "mainstream" doesn't appreciate what you do, you can crush your spirit trying to pander to them, or you can have fun in your cozy niche with a small handful of fans and peers that really get you.


The average mobile user plays exploitative "freemium" garbage to kill time in lines. How can you expect them to appreciate the art behind it?

How does your second statement follow from the first? Does choosing to play freemium mobile games somehow mean you have poor artistic taste?


I meant to imply that they (in the typical case) don't really have an interest in games as a medium, and are simply looking for small distractions to fill gaps in their lives. If they don't appreciate the games themselves, I don't expect them to appreciate the art behind them either.

But since you went there, I do think that exclusively playing freemium slot machine games is an indicator of bad taste. Just as someone that exclusively eats at fast food restaurants probably has bad taste in food (yeah, I made a fast food analogy).

I feel sorry for the kids that grew up with smart phones and have never known a game that wasn't vapid and exploitative. There are worse fates, but still.


Good question. It's also fairly simple to answer. Players of freemium games fall into two categories:

People who have some emotional draw to it (maybe they like the art) and play it despite recognizing that the mobile freemium game industry is a giant social engineering experiment aimed at extracting money by exploiting weaknesses in the unaware human mentality.

People who don't recognize this, either by simply lacking awareness or by lacking enough experience to draw a comparison to non-exploitative games. Regardless of which it is, in both of these cases, averaged over the entire population, it is fair to assume that they will either not be attentive enough to appreciate details in the art, or won't have enough knowledge to draw a useful comparison to other kinds of art.


Are you implying that Players of paid games do not fall into the same two categories?


No, I am absolutely and determinedly not saying they do not fall into the same two categories.

I'm not implying, I'm explicitly saying that people who primarily play games that are either sold up-front without any IAP, or with only the kind of IAP that unlocks the full-game part of a "free demo", are, on average over the entire population, most likely to be found in a third category: people who either have enough experience with games that are not exploitative, or are able to recognize games that are exploitative and consciously avoid them, and who are thus also more likely to have the experience or attention to appreciate small detail art decisions.

These two statements are wildly different.


I hate this attitude. It reeks of privilege and naivety of the working world outside the speaker's bubble, which is especially ironic considering that its proponents are usually very liberal (and thus love to talk up how pro-worker and considerate of their privilege they are, except when it concerns xenophobic straw conservatives). But I digress, so let me tell a story instead.

My dad was working in $TECH_FIELD for a subsidiary of a multinational megacorp ($BIGCO; you might have heard of them). Said subsidiary had decided to branch out into providing $CERTAIN_KIND_OF_TECH_SERVICES to $OTHER_BIGCOS.

My dad's team was one of the few that got their work done without trying to play "the game" too hard. Almost everyone else in the company would fight them every step of the way, maneuvering to gain control over some aspect of his account (for the power and influence, of course), then never doing any work towards it, forcing my dad's team to pick up the slack for everyone else while they took the credit. Say, a team would receive the job of designing $TECH_SOLUTION, but the deadline would loom and my dad's team would never, ever receive the design from the design team to implement, so it would end up being all on him and his teammates to design and implement $TECH_SOLUTION.

So it was just under a dozen people with my dad, working their asses off to provide services to this particular account. A friend in middle management let slip once that they were the only profitable account in the entire division, and their customer was the only one happy about the service they were receiving.

After a few years of depressingly poor management and vicious office politics, $BIGCO decided it was time for a change, and brought in a new CTO to turn the ship around. Naturally, said CTO decided that the best course of action would be to lay off almost the entire team working on the only profitable account in the whole fucking division and replace them with offshore contractors. Only a few months before a critical infrastructural change required by the contract needed to be completed, and just over a year before the contract was to expire. I'm sure you can see where this is going.

My dad was one of the few spared from this show of gratitude, and was promptly tasked with training the offshore workers. Pretty straightforward, right? Employees are just cogs. It doesn't matter if they've spent decades, almost their entire working careers, mastering this field. You can just take any random college graduate and bring them up to speed in a month, right? Better yet, get an offshore one that costs a 10th or a 20th of what a Red Blooded American would demand, and pocket the difference. That's like, free money!

Wrong. Said contractors barely spoke English, and knew less about $TECH_FIELD than I did. As futile as it would be to train a Western college grad up to the required proficiency before the deadline, it is downright impossible to do the same with a language barrier erected in front of you. My dad and the rest of his team would spend hours on the phone with the outsourced workers, trying to walk them through a process, starting from very basic first principles that anyone with a degree in the field should know, and... silence.

Needless to say, my dad and most of the other remaining members of the team got out of there ASAP. $BIGCO realized its incompetence too late, tucked its tail between its legs and tried to hire back the laid-off team members, but unlike most of the stories you'd scoffed at, they had all been able to get new jobs in the meantime. Service quality plummeted, the customer was appalled when they realized what had happened, and when the time came, they decided not to renew their contract. A few months later, $BIGCO decided to get out of $BUSINESS and laid off the rest of the division.

---

Ok, so what can we learn from this? Let's consider a few (not necessarily mutually exclusive) possibilities:

1. (Some) corporations are mind-bogglingly stupid, with the foresight of a goldfish and greed that would make Ebenezer Scrooge blush. They will happily ruin a profitable business to save a few pennies in the short term.

2. (Some) offshore firms from third world countries know this, and build their business around pulling fast ones on these stupid corporate executives. They tell them everything they want to hear about how the workers in $COUNTRY are just as good as the ones in America, but will work for pennies on the dollar, and so much harder! Then, when they seal the deal, they go out and hire a bunch of newly minted college grads with zero experience in said field, and tell them to play the part while they cook up some nice resumes. Yeah, I said it. It's stupid enough to begin with to fire 75% of a business, leave it in the hands of a few college grads, and expect everything to work out. It's downright suicide when you consider the rampant degree and resume fraud that these offshore firms perpetrate, and how brazenly corrupt many universities from the third world are. And the beautiful thing is, the language barrier makes it extremely difficult for management to tell that anything is wrong until it's already too late.

3. Of course not all foreign workers, or even all foreign workers from the third world, are like this. When people talk about incompetent offshore workers taking their jobs, this is the kind of downright fraudulent practice they speak of, not the honest workers that really are just as good as their western counterparts (and will probably end up moving as soon as they can...)

4. Don't be intellectually lazy and lean on the perception of racism or xenophobia. Said stupid corporations will also happily fire older workers with decades of experience in favor of clueless American college grads, and ruin businesses that way. It never occurs to them that you can train young employees while the old guard keeps things running smoothly, because they're chasing petty short-term profit to the long-term detriment of the business. Why?

5. Corporations are managed by psychopaths. They ruin their businesses in these ways because the go-getters all want the short-term boost in profitability that will promote them up the corporate ladder quickly enough that they won't have to deal with the consequences. Even if it destroys the company, these psychos will have long since bounced to another job. Said psychopaths wage war in the office. An interpretation of my dad's story that I didn't initially consider was that maybe said CTO or one of his new managers was deliberately trying to justify axing the division by destroying the only profitable team. So it's also entirely possible that in many cases of offshoring, the "incompetent" actions of the corporation at large are really just one person trying to snuff out someone else vying for the promotion they want.

---

So, all of this giant wall of text considered, my point is that, well, there are a lot of reasons beyond employee incompetence why a corporation might offshore a worker. It's intellectually lazy and downright rude to imagine some straw factory worker screaming "DEY TOOK ERR JEERRRBS" and shut off your brain every time you hear someone complain about the practice.

As for the question of "what about the guy that got offshored and has been unemployed since," I have more stories (some my own, some from others) I could tell, but since I've already overstayed my welcome, I'll be explicit: Economic downturns suck. Getting laid off or offshored during one could very well leave you unemployed for years, during which no one is willing to hire you. Even when the economy picks back up, it's going to look bad on your resume if you spent years unemployed (or employed in an unrelated field). It's even worse if you're older, and ageism kicks in.

In this scenario, you'd probably need to change careers to survive. As programmers, this doesn't sound so bad to us, because we know (knock on wood) that some kind of programmer will be in demand for the foreseeable future, and we should always be able to switch technology "stacks" or platforms or whatever and find a new job doing very similar things. But not everyone is as fortunate as us. For most people, having to change careers means throwing everything away and learning something new. If you need to do that to keep the lights on, you do it, but it gets harder and harder as you get older. So don't be so hard on people who made the wrong choice and picked a career that disappeared out from under them.


IMO, shorter comments much more clearly express ideas.

I hate this attitude. It reeks of privilege and naivety of the working world outside the speaker's bubble. But I digress.

team got their work done without trying to play "the game" too hard. Almost everyone else in the company would fight them to gain control over a certain aspect of his account, then never do any work towards it, forcing my dad's team to pick up the slack for everyone else while they took the credit.

A friend in middle management let slip that they were the only profitable account in the entire division, and their customer was the only one happy. A new CTO decided to lay off almost the entire team on the only profitable account in the whole division and replace them with offshore contractors. Remaining members of the team got out of there ASAP.

Said contractors barely spoke English...

$BIGCO tried to hire back the laid off team members, but, they were all able to get new jobs in the mean time. Service quality plummeted, the customer decided not to renew their contract. A few months later, $BIGCO decided to get out of $BUSINESS and laid off the rest of the division.


yeah man who the fuk has time to read two whole PAGES of words this aint social studies LMAO #yolo ☺☺☺☺☺☺☺☺☺☺☺☺

can i get the sparknotes for your middlebrow hacker news dismissal i got SHIT TO DO SON 💩


Overreacting much? His summary of your comment actually got me interested enough to read the whole comment, which I would otherwise have never read (it's not exactly a unique story).

I would like to add some constructive criticism: it seems like you are idealizing your dad in this story (his team was really the only one in this really big company making any money), which is a very normal thing to do, but it is not needed to make your point. Also, you are dehumanizing "corporate people" by calling them psychopaths. They are not (usually) psychopaths, they have feelings and empathy, but are just very good at rationalizing those feelings away. I think it is important to recognize that they are no different from you or me, since that might prevent you from doing the same thing in the future.


> They are not (usually) psychopaths, they have feelings and empathy, but are just very good at rationalizing those feelings away.

I'm really glad you made this point, and I'll add that we have to remember that most people can be induced to make callous decisions with the wrong incentive structures and the right pressures from their management. In some environments, behavior we might deem callous has simply become institutional for pragmatic reasons.

All too often, we forget that when building institutions (commercial, government, etc.), it's critical we don't inadvertently construct systems that give people incentives to do the wrong thing. We have to stop labeling normal people as psychopaths and remind each other we can all act callously under normal circumstances. Not exceptional circumstances, but normal pressures from management and colleagues.

Dan Ariely wrote a good book that tries to explain some of the mechanics: http://danariely.com/tag/the-honest-truth-about-dishonesty/


Thank you for the whole post.


Because you commented before reading the link.


That's assuming that a "full pointer" (i.e. an unpredictable address to an arbitrary point in heap memory) is the only way to get a reference to an object. What about if you want to stick a bunch of objects in a vector, and iterate over them linearly? In the C++ approach, the data will have a nice linear cache access pattern. In the Rust approach (unless I'm missing something) you'd be storing a vector of (double sized) pointers, to god knows where in heap memory... suffering a cache miss on every access, on average.
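
Concretely, the pattern I'm imagining would be something like this (just a sketch; Shape, Circle, and Square here are made up for illustration, not anything from the discussion):

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { s: f64 }

    impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r } }
    impl Shape for Square { fn area(&self) -> f64 { self.s * self.s } }

    // Each Box<dyn Shape> is a fat pointer (data pointer + vtable pointer) stored
    // contiguously in the Vec, but every pointee is its own heap allocation, so
    // iterating chases a pointer per element.
    fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    fn main() {
        let shapes: Vec<Box<dyn Shape>> =
            vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })];
        println!("{}", total_area(&shapes));
    }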


Since you are talking about a heterogeneous [edit: homogeneous] array, you would store the concrete structs contiguously in Rust as well. Rust would also not waste space in the vector for storing a vtable-pointer, and would instead construct fat pointers dynamically when needed (since it knows the type, it knows which vtable to inject in the fat pointer).
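
For example, a minimal sketch of what I mean (Area and Circle are made up for illustration):

    // Made-up trait and struct, purely for illustration.
    trait Area { fn area(&self) -> f64; }

    #[derive(Clone, Copy)]
    struct Circle { r: f64 }

    impl Area for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    fn main() {
        // The concrete structs sit contiguously in the Vec; no vtable pointer
        // is stored per element.
        let circles: Vec<Circle> = vec![Circle { r: 1.0 }; 1000];

        // Iteration here is statically dispatched; nothing is boxed.
        let total: f64 = circles.iter().map(|c| c.area()).sum();

        // A fat pointer (&dyn Area = data pointer + vtable pointer) is only
        // built at the point where dynamic dispatch is actually needed.
        let first: &dyn Area = &circles[0];
        println!("{} {}", total, first.area());
    }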


> you would store the concrete structs contiguously in Rust as well. Rust would also not waste space in the vector for storing a vtable-pointer, and would instead construct fat pointers dynamically when needed (since it knows the type, it knows which vtable to inject in the fat pointer).

Uh, what? How is the compiler just supposed to magically know which of the structures in the array are of what type, without any additional identifying information? I'm assuming that in this optimized case, there's a hidden type field in each struct, that it would use to index into a table of vtable pointers? If so, there you go, that's yet another level of indirection.


I meant to say homogenous array, which I assume you were talking about.


No, I was talking about iterating over an array of objects calling their virtual functions (or either of the additional cases listed above). Of course it's easy to "do the right thing" with homogeneous arrays, either in the compiler or by hand if need be. But if you're iterating over a homogeneous array, calling the same virtual function on every single one, and your compiler somehow manages to notice this before you do, you probably screwed up in your design somewhere, so that's not the kind of problem I'm talking about.

It usually is smarter for performance to do the "data oriented design" thing and break the heterogeneous arrays into separate homogeneous arrays, so that you can potentially avoid a few levels of indirection, hoist loop invariants out, and maybe even make use of SIMD. But the whole point of the conversation was to talk about a nontrivial abstraction that (supposedly) trades performance for clarity. So I gave a scenario that would exercise that overhead.
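
Roughly this kind of layout, as a sketch with made-up types:

    // Made-up types; one homogeneous Vec per concrete type instead of one
    // Vec of boxed trait objects.
    struct Circle { r: f64 }
    struct Square { s: f64 }

    struct Shapes {
        circles: Vec<Circle>,
        squares: Vec<Square>,
    }

    fn total_area(shapes: &Shapes) -> f64 {
        // Each loop is monomorphic: static dispatch, linear access over
        // contiguous data, and the compiler is free to hoist or vectorize.
        let c: f64 = shapes.circles.iter().map(|c| std::f64::consts::PI * c.r * c.r).sum();
        let s: f64 = shapes.squares.iter().map(|s| s.s * s.s).sum();
        c + s
    }

    fn main() {
        let shapes = Shapes {
            circles: vec![Circle { r: 1.0 }],
            squares: vec![Square { s: 2.0 }],
        };
        println!("{}", total_area(&shapes));
    }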


> That's assuming that a "full pointer" (i.e. an unpredictable address to an arbitrary point in heap memory) is the only way to get a reference to an object.

No, it's assuming that a "full pointer" is the only way to get a reference to a polymorphic object. This is true in both C++ and Rust.

> What about if you want to stick a bunch of objects in a vector, and iterate over them linearly?

You can do this in both C++ and Rust. But you can only put a bunch of objects in a vector if they have a statically-known type. The point of dynamic dispatch is calling methods of an object where you don't statically know its type.
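
To make that distinction concrete, here's a quick sketch (Draw and Button are made up, not from any real API):

    // Made-up trait and type, just to show the two dispatch forms.
    trait Draw { fn draw(&self); }

    struct Button;
    impl Draw for Button { fn draw(&self) { println!("button"); } }

    // Static dispatch: T is known at compile time, so the call is resolved
    // (and can be inlined) per concrete type.
    fn render_static<T: Draw>(item: &T) { item.draw(); }

    // Dynamic dispatch: the concrete type is erased; &dyn Draw is a fat
    // pointer (data pointer + vtable pointer) and the call goes through the vtable.
    fn render_dyn(item: &dyn Draw) { item.draw(); }

    fn main() {
        let b = Button;
        render_static(&b);
        render_dyn(&b);
    }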


How would you put or index objects in a vector in C++ if they are virtual / dynamically sized? I'm under the impression that unless you keep the dynamically sized object behind a pointer, you get object slicing.


I didn't say they were dynamically sized. You could either have objects of the same type, but with virtual functions, or you could make a union of all applicable objects (thus guaranteed to be constant sized, at the size of the largest object) and store those in the array/vector, switching on a type enum or calling a function from an inherited base class.


> You could either have objects of the same type, but with virtual functions

If objects have the same type, which is statically known, there is no need for virtual functions because the compiler can resolve the specific method implementation at compile-time.

> or you could make a union of all applicable objects (thus guaranteed to be constant sized, at the size of the largest object) and store those in the array/vector, switching on a type enum

You can do this in both C++ and Rust easily, with equivalent efficiency in both. In Rust you would just use an enum type. This design is generally highly discouraged though, because it requires callers to be aware of all possible "derived classes".

My comments were only about the case where you are using true language-level polymorphism.
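
For what it's worth, the Rust enum version of that union-plus-type-tag approach would look roughly like this (hypothetical types again, just a sketch):

    // Made-up closed set of variants; the enum is stored by value, roughly the
    // size of its largest variant plus a tag, so a Vec of these is contiguous.
    enum Shape {
        Circle { r: f64 },
        Square { s: f64 },
    }

    fn area(shape: &Shape) -> f64 {
        // The "switch on a type enum" step is just a match.
        match shape {
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Square { s } => s * s,
        }
    }

    fn main() {
        let shapes = vec![Shape::Circle { r: 1.0 }, Shape::Square { s: 2.0 }];
        let total: f64 = shapes.iter().map(area).sum();
        println!("{}", total);
    }

The tradeoff mentioned above is real, though: because the set of variants is closed, adding a new "derived class" means touching every match.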

