
Very interesting idea. I remember reading that in face-to-face spoken communication, only 20% is the actual words. The rest is tone of voice, body language, context, emphasis, expressions, ... all that stuff.

I don't know if 20% is correct, but I feel it's very close to it. I also think a lot of internet arguments happen as a direct result of miscommunication. Emojis are great, but they get abused to the point that HN filters them out. Perhaps allow readers to toggle if they want to see emojis or not?



Easy to check: try talking with someone who speaks a foreign language you don't know and estimate what percentage of what they said you understood from tone of voice etc. I would guess it's less than 80%.


That's very easy and very wrong. Let's say you have a 100-page book. Page 1 contains fundamental knowledge that allows you to understand the rest of it. If you skip page 1, you won't understand the other 99.

How much of the book will you understand if you only read page 1?


That then raises the question: what is a unit of communication?

If communication is 20% verbal and 80% nonverbal, and if communication is very nonlinear in understanding (as with your book example), how do we know what 1% of communication is? What does it mean, and how can we tell that the figure is correct, when our main or only way of detecting whether communication succeeded is through understanding or lack thereof?


> when our main or only way of detecting whether communication succeeded is through understanding or lack thereof

That's not even a good test, due to miscommunication. Both parties might think it succeeded, but then much later on you find out the truth (maybe).


But tonal information can be parsed without lexical understanding and vice versa.

Somebody cursing in French can still be interpreted as anger even if you don't understand French, and written profanity can still be interpreted as anger even if you didn't hear it spoken.

Tone and language do complement each other, but neither is a prerequisite for the other the way your book analogy would suggest.


> but tonal information can be parsed without lexical understanding

Parsed, perhaps, but it's so context-sensitive that it's not useful except at the extremes. The same tone of voice can carry many different meanings depending on what's actually being said, and yet another once you add context.


Maybe also control for cultural similarity, but I definitely agree


There's an acting exercise (it's from Joan Littlewood via Clive Barker) where one speaks "gibberish" - making language sounds, but not words - which, almost automatically, once they drop their terror of doing it, opens students up to all of those other avenues of communication. Later, you can switch students back and forth between the script and gibberish, and it becomes plain that if you can't play a scene as clearly (to those in it, not considering the audience) in gibberish as you can with words then you don't fully understand it.



