Can I Talk to that William Fellow? He was so Helpful (2009) (msdn.microsoft.com)
246 points by brudgers on Feb 14, 2017 | 126 comments


This sort of thing is a very helpful exercise - to any of you who manage developers that work on anything customer-facing, consider setting up a day for them to spend fielding customer support contacts for your product.

It's amazing how myopic you can get after working on a product for a while. The time spent in contact with ordinary users who are reporting issues helps to bring you back to reality: how do users typically use the product, what common edge cases do you fail to consider, what are common pain points in the UI, etc.


At KAYAK we had these red phones that would ring loudly until someone picked them up, and some technical worker would have to answer it and solve a customer's problem. This was controversial to say the least, for many reasons, but it was also extremely educational to see the misconceptions or worries or troubles that people had about the product you create. I enjoyed these even as a backend developer - often a problem or annoyance that manifests as a very surface-level thing to someone has deep code/architectural implications, and it grounded us to the people we were building things for, instead of striving for the perfect internals and architecture in la-la land.


Even taking the basic idea as a given, the "someone" part seems far more likely to cause controversy than a basic assigned rotation would.


Every developer at my company has to go through roughly 3 or 4 months of just customer support calls before they ever write a line of code. I hated the policy when I was going through it (I mean, what did I go to college for?) but I have to admit it did make me a better developer to know our customers and the industry.


3-4 months is way too long to spend on that. One month I could just about forgive, but 3-4 months doing something that isn't your real job just to start your real job isn't some super cool progressive forward-thinking initiative - it's just plain stupid.


Without going too far into detail our product and industry are a bit more complicated than most. It's not a phone app or a startup in a new industry. Our software is primarily used by independent contractors who are subject to a lot of federal and state regulations. In fact, I'd say that our support staff spends more time in an advocacy role than actually supporting the software itself.

So I would disagree when you say it's "not really your job" and "just plain stupid". Our job (as developers) is to create a product that helps our clients navigate a very tricky business environment -- one which most people have never experienced before working here. The time frame of the initial support training and continued learning is justified.

That's not to say that our system's perfect, mind you. In the 6-plus years since I was hired the company has gone from around 25 employees to almost 100. My initial training before actually taking support calls lasted about 4 days, then I was thrown into the fire. The current new employees spend like 3 weeks in the classroom before ever picking up a phone, which I think is far too long.


You could probably get a large part of the benefit of that by just having new developers sit within earshot of the customer support people for a while.

That's been my experience when I've sat close enough to hear support calls (although in my case it has always been because of small offices, not because of any deliberate attempt by management to make developers more aware of real customer experiences).


Perhaps, but you'll also have the huge negative of your developers being unable to focus.


People hate flexible working spaces where you don't have your own desk, but they have at least this advantage: you can sit near the customer support people to hear some feedback when you can, and sit somewhere quiet when you need focus.


No giant monitors though? Typically I have the choice of "quiet side room" or "giant monitor at desk".


Well, I'm only speaking from personal experience: when I need focus I just need any old screen with fullscreen Emacs running on it; extra screen space just invites distractions. I'm probably a bit of an outlier there...


Depends how frequently you need documentation, and I do iOS so I'm typically running a large iPhone Simulator instance on screen as well.


I'll take the quiet side room every time. I honestly question our need for giant monitors. The likes of Visual Studio are very wasteful of screen real estate.


Which was absolutely the case for us, at least for my first 5 years here before we opened up a new wing dedicated to people who are only sporadically on the phone (development staff in one form or another). I'm still distracted in this quieter but still open environment, but it's better than it used to be.


It's on-the-job training. Most CS degrees don't teach you shit about anticipating the needs of the typical end user.


I'm not arguing against on the job training. I'm arguing against 3-4 months of on the job training before you can write code.


It would take at least 3 or 4 months to even become good at customer service, if you had no experience. They probably wanted employees to be good at CS, for whatever reason.


The reason is that the industry in which we work is complicated and full of regulations, and the owner believes that if we really understand our customers we can, as developers, better tend to their needs.


This. I went from engineering to sales (solutions architect, woo!) for almost two years. Amazing experience, and one of the best and most informative parts was actually getting hands-on with the people who use the product. Get a much better idea of what matters, what doesn't, and why.


Yup. Same reason the medical instrumentation company I worked for used to try to get developers to shadow medical lab techs for a day. The things we as developers thought were important were often completely irrelevant to our end users and vice versa.


My company recently changed its organisational structure. As a result, developers now know about issues that the users of our website are facing (before, those issues were filtered by a non-technical manager, and only those thought to be 'critical' were passed on to the developers).

The above is done by simply CC-ing some specific developers into every customer service ticket (which developer depends on what type of problem the ticket is about). Those developers then investigate and, if needed, communicate with the rest of the dev team and coordinate appropriate action.

Since the new policy kicked in 2-3 months ago, we had two main results:

- The development team discovered (and rapidly fixed in almost all cases) several bugs that were unknown up until that point. Some of those bugs had probably been present for some time.

- The UX/frontend devs had significant feedback and redesigned the interface based on it. As a result, we have a more user-friendly website now.


On our team at https://resin.io every engineer takes turns on customer support - something like a 5-day week with a 4-hour shift each day (with 4 such shifts spread across people each day), repeating every few weeks. Sometimes it feels like a big drain on people's time (for those weeks, half your time is on support), but the end result is an amazing amount of learning across the entire team: what the most important issues are at any given time, which improvements would have the biggest impact, etc. It also makes it very easy to ask for help across the whole team when debugging any issue you find, as people are very open, supportive, and aware of what this "support-driven" approach requires to work. I'm only half on the technical side, but you can find me doing support too. ;)


If only we could put the government folks who sign the contracts to build customer-facing government web applications through this exercise.


I answer second-level support. I hate doing it, but it's true that we get to see some of the things that confuse our customers.

Plus, it's therapeutic: you're never going to worry about looking stupid to a support person again.


Or it can harden your view that all users are idiots, as seems to have happened with certain _nix DE devs...


This is apparently a strongly recommended exercise offered at Amazon. You can shadow a CS rep and follow along with voice calls, chat support, or whatever. I believe they also have an internal program where, for a day, you can relinquish your "normal" duties and spend your time working in a warehouse, or an Amazon Fresh freezer, or on a delivery fleet* - just for the empathetic experience.

* Definitely not sure how accurate this info is. Comes second-hand.


At the food delivery company I work for, part of the onboarding process for all new hires is to have them place an order, handle some customer support calls, go out for a shift with a delivery driver, and shadow one of the shops packing orders. In my opinion it's one of the best policies we have, as everyone goes into the job with at least some understanding of what we actually do. Working remotely, I don't get the chance as often as I'd like, but even six years in I still learn new things every time I go out and actually watch people using the systems and processes we design in the office.


While I was at Twilio (mid-2011 through late 2013), every new employee was required to respond to 20 tickets, regardless of role. It may have been a holdover from when we were really small, but I always thought it was a great reminder that at the other end of that API were people building real projects that might make or break their company or teach them something new - people who might be having a bad day that you can help just a little.


Automattic does something similar too:

"When you join full-time, you’ll do customer support for WordPress.com for your first three weeks and spend a week in support annually, for evermore, regardless of your position"

https://automattic.com/work-with-us/


I like that. We do something semi-similar at Optimizely, where we have all go-to-market employees walk through the ticket answering process with our Technical Support Engineers. Makes the customer demands feel very real, and gives other parts of the org ideas for discrete improvements.


I spend a few days in the support trenches now and then to see what our agents are commonly facing when dealing with clients. One such day, it became apparent that most calls involved investigating credit card charges, since we're a SaaS that powers many sites. Watching the CSRs navigate an ugly and slow transaction search screen made it painfully obvious we needed to re-tool it for them. Within days we had a much faster db query and more flexible search parameters that helped them quickly find transactions. I often poll the CSRs for new feature requests and look for common threads. A small % increase in efficiency over many reps adds up to a lot of $$$ saved and happier customers.


I worked in a callcenter doing software/dev stuff, and would sometimes talk to the folks handling calls. I got in trouble for this, because they were supposed to talk to their manager for requests and such, and I'd occasionally fix stuff.

It was a west coast company that moved most operations to the east coast. However, the manager was still on the west coast and wasn't seeing the day-to-day issues. In one major instance, I remember seeing that the initial 'customer info' screen was taking a LONG time to load. Like... 30-40 seconds. In some cases it was only a few seconds, but for important clients it was taking a long time. The agents could vamp and chat for a bit, but it was getting worse.

The agents were hitting a webserver in California and pulling HTML back across a private network, and... it was a LONG time for older clients, because it was pulling up the entire client history. For important clients - who did a lot of business - it was pulling back megs and megs of data.

When I looked under the hood, the data was being repeated - an entire block of data was duplicated inside HTML comment tags. Someone had put it there for debugging and never got rid of it. So we were pulling back, say, 35 megs when maybe only 18 were needed. I simply deleted the commented-out info - it hadn't been touched in 6 months - noted the reason in the svn commit log, requested a new push go out, and the next day things were hugely faster.
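A fix like that - stripping a block that only lives inside HTML comments - can be sketched in a few lines. This is just a rough illustration of the idea, not the actual code from that system, and a regex pass like this is only safe when `-->` never appears inside scripts or attribute values:

```python
import re

def strip_html_comments(html: str) -> str:
    """Remove <!-- ... --> blocks, including ones spanning multiple lines."""
    return re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)

# A page where a debug dump was left behind inside a comment.
page = "<html><body>live data<!--\n" + "duplicated debug dump\n" * 100 + "--></body></html>"
cleaned = strip_html_comments(page)
print(len(page), "->", len(cleaned))  # the comment block is gone from the payload
```

Half the payload disappearing with a one-line change is exactly the kind of win that stays invisible until someone actually watches the screen load.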

I got my ass kicked. I "went behind someone's back" (untrue) and "jeopardized production systems" (untrue). Other people had seen the code and it was pushed out via the normal process. It was simply that the original developer didn't like that I'd highlighted something he'd forgotten to do. :(

I then angled for actually using gzip compression but was shot down at the time (no, "there's a bug in IE6 with gzip compression and printing" or something like that), even though gzip would have brought this down to under 1 meg, and every call would have been 'up' in under 4 seconds.
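The gzip estimate is plausible: HTML that repeats near-identical rows thousands of times compresses extremely well. A quick check with Python's stdlib (the row content here is made up, but the ratio is representative of highly repetitive markup):

```python
import gzip

# Simulate a client-history page: the same row shape repeated tens of thousands of times.
row = "<tr><td>2009-01-01</td><td>order #12345</td><td>shipped</td></tr>\n"
page = (row * 50000).encode()

compressed = gzip.compress(page)
# Repetitive markup routinely compresses by well over 10x.
print(f"{len(page) / 1e6:.1f} MB -> {len(compressed) / 1e6:.3f} MB")
```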

We were further shielded from ever talking to CSRs, because they "might overload us with requests". It was (and still is) a bizarre rationale. :/


I work in a similar role where we previously did have a lot of contact with call centre folks, and the whole "overload us with requests" thing is very real. You become the go-to IT guy in no time at all, because if you make stuff for computers, surely when one breaks you can fix it, right? Or you could build something else in place of it? And suddenly, you have nearly no time to actually finish projects anymore. We had to distance ourselves by a floor and implement a ticketing system to avoid exactly this. Sounds like you had the opposite extreme though, which sucks just as bad but for the floor staff instead of the devops guys. Seems to be a fine balance.


It's not bizarre if you look at it from someone else's point of view: someone (e.g. your manager, or your manager's manager, or higher up) had said there wasn't anything they could do; you made them look bad.

It's stupid behavior as far as the company is concerned, but from the perspective of whoever you made look bad - well, they stopped that from happening again, didn't they?


wasn't specifically a manager above me directly, but yeah, of course someone looked bad. other people looked bad directly to the people who were paying us money - our CS team said they kept raising the issue in meetings. just really bad rules - groups of people sitting literally 100 feet from each other not being allowed to talk and fix things.


Welcome to the third world.


My largest frustration with technology, and specifically with software developers, is that they don't exist to code - yet most in the field seem to forget that simple fact.

They exist to solve real-world problems for people, using software as a tool. I don't know how anyone can possibly do that without actively engaging their user base and proactively solving workflow issues the end-users may not even know are technically possible.

Very frustrating to watch entire teams spend years on internal corporate business logic apps and never even once spend time with the teams consuming them. The results are always entirely predictable.


> They exist to solve real world problems for people, using software as a tool.

It's really, really frustrating when developers don't realize that. The most valuable thing I ever did at any company, saving just under a million dollars a year, was literally just me talking to three people, realizing that none of them were on the same page, and just talking to them to solve the problem.

More valuable than any code I've ever written. We're problem solvers, not just programmers.


The problem is that in many companies, sadly, the management do everything they can to isolate developers from their users.


Yep. I get told what to do by BA's at this company (I've had more direct interaction at some previous companies). I'm not allowed to talk with the users of the apps we make. I just get told what needs to get changed here by others. Presumably they're the ones going through that process.


To balance it out a bit: a lot of developers willingly isolate themselves from anything that doesn't obviously involve coding right away, throwing up their hands in despair - even if the work does involve coding a bit later on.


Phone support is hard. I did it as a full-time job for about 18 months and still do it regularly as part of my job now. Trying to convince someone who is frustrated, confused, angry or just obstinate to work with you is its own skill. They are your eyes, ears and hands, and it takes a lot more than just being good with the technology - you have to be good at support.


If you can take the person aside and ask them in the right way, they can tell you everything about what's not working - both what is blocking them today, and also the smaller problems that have been haunting them for a while.

The first part is to try to learn from them - how do they normally use the system, what isn't working today, and any errors that are (or aren't) showing up. The hard part in this is in actually wanting to help the person and/or solve the problem, and not getting caught up in your own superior ability to avoid the problem. The first step to good troubleshooting is checking your own ego at the door.


I met that guy.

I was working for Tandy, repairing TRS-80 Model I circuit boards, when he and his partner, Paul, came to Tandy Apparatus with some engineers from Tandy R&D. They had the first masked ROMs for Level II BASIC. They put them on a board and tried it, and it didn't work.

I took the board I had just repaired and went to them and told them that they were in a repair area and all of the boards were in need of repair. I offered the board I had repaired and told them that it should be able to run their ROMs.

They moved the ROMs to that board and powered it up. It came up with a MEMORY SIZE? prompt and William said to press enter. I did and it gave a READY prompt. I typed in a one-line program to print numbers and it started scrolling numbers down the screen.

William was so happy that he offered me a job at his company. He said that he had about a dozen employees. For personal reasons I was unwilling to locate outside Texas and turned down his job offer.


Jeff Bezos does the same; also, if you're working on one of the retail teams you're encouraged to take support calls once a year.


There are a bunch of tools and procedures that Amazon customer service reps have which came directly out of Jeff doing customer support work every now and then - e.g. the Andon Cord (any customer service rep can pull any item from sale worldwide, instantly, through the andon cord process, after which a team of dedicated staff evaluate the product and why the cord was pulled).

http://www.shmula.com/customer-service-andon-cord-jeff-bezos...


I believe this was pioneered by Toyota - anyone on the floor could flag a problem with production or an inefficiency, and work would stop until a lasting solution was found.

EDIT - Wikipedia has more detail. Apparently the main reasons workers pulled the cord were "missing parts, safety issues, tool malfunctions and defects found/created."

https://en.wikipedia.org/wiki/Andon_(manufacturing)


Yeah it's one of the first things they tell you about in factory training. And they definitely emphasize pulling the cord (or pressing the button, depending on the line you're on) if in doubt at all.


Absolutely - Jeff nods to Toyota when he talks about the cord's origins.


At Blizzard, Mike Morhaime came and answered support tickets as a Game Master for a day. The next day, several terrible policies disappeared.


That reminds me of that other episode in which Bill Gates sends an email to some Microsoft people about his failure to find information and run some program in his Windows machine, saying that everything was horribly difficult and obscure.



"So they told me that using the download page to download something was not something they anticipate"

This is comedy gold.

What boggles my mind is, there are lots of people who have good usability taste. But so many software vendors seem to have no-one with taste in a position to do anything about it as their priority. Hence so many crappy UIs…


The delta between the ability to identify that something is poorly designed (taste) and the ability to create a well designed version of a system is immense. The problem isn't that people with taste aren't in positions capable of effecting change. The problem is that design is very very hard.


The rant was about the difficulty of buying and installing Windows MovieMaker: http://blog.seattlepi.com/microsoft/2008/06/24/full-text-an-...

Funnily enough, I just shared this with a friend two days ago.


Legend has it that he'd call in to tech support for various products his company made when he couldn't get them working just to see what the experience was like.

Say what you will about Gates, he's a pretty normal guy at heart.

I remember when he was in town for the Windows 95 launch he was just poking around in various local shops, checking things out, no security detail or anything, just one scruffy haired guy going about his day. It's just like, oh, there's Bill Gates looking at bonsai trees.


Something feels a bit off about this story. At the end, the customer is said to exclaim - oh my god, that was Bill Gates!

At the end of 1989, Microsoft was not as famous as it later became. MS-DOS was just one of several competing DOSes in a much smaller market. And DOS was their main product.

I doubt that even amongst most MS customers there'd have been name recognition for Bill Gates, let alone such reverence.


It does seem a bit apocryphal, but I think that's okay if you take the story as more of a learning parable than an account of facts.


I agree, to the extent that it's not also hagiography. Unfortunately, the cult of Bill / Steve etc. does tend to become dominant.


In 1987 he became the youngest person to become a billionaire, in 1990 he was the 29th richest person in the world. Those were definitely news stories.

I don't have any particular reason to believe the story as it is told, but I am not sure that is a reason to disbelieve it either.


What other DOSs? I know of DR-DOS, but not any others. Also, not sure, but I think MS-DOS was the biggest from pretty soon after it was created. I'm not even sure that DOS was their main product (though it was a big one). I had read the book Le Noveaux Magiciens, about Microsoft, and though I don't remember the chronology now, IIRC, language products were also big for them, like BASICs for various platforms. They even had an MS-C (used it for a product) and MS-Pascal and MS-Lisp. Could be wrong about languages being big earners for them, though.


There were a few :

https://en.m.wikipedia.org/wiki/Timeline_of_DOS_operating_sy...

I recall Amstrad in the UK had its own DOS in the early 1990s. They had to declare MS compatibility, which supports your view of MS already being dominant.

I also don't remember much before 1989, but I believe the languages business was earlier and more niche. Ever read Gates' 1976 rant about hobbyists ripping him off?

https://en.m.wikipedia.org/wiki/Open_Letter_to_Hobbyists


> Amstrad in UK had its own DOS

I remember my PC-1512 (XT clone with 512 Kb RAM, two 5.25" floppies and a mono CGA monitor) and it came with MS-DOS 3.x and DR-DOS boot floppies and also included the GEM window manager. I believe the version of DR-DOS used could also run CP/M 86 executables as well as normal DOS programs.


Interesting that there were so many DOSes.

Yes, I had read his rant, I think it was originally to members of the Homebrew Computer Club, maybe paraphrased, in Steven Levy's book "Hackers: Heroes of the Computer Revolution", some years ago. Pretty interesting book, though I guess Levy may have hyped it up some for sales. Still remember about (Peter?) Deutsch, a lot of stuff about GNU, Lisp and Emacs development (some of the most interesting parts of the book), milliblatts, TECO and many other topics .... :)

And my favorite slogan of all from the book, "Tools to make tools" :)


There were other DOSs, but even by 1989 the use of unqualified "DOS" was usually taken to mean "IBM-compatible PC running MS-DOS" (eg. "Our school has both Apple and DOS computers in the library").


No. Microsoft had more than just MS DOS, ex. MS Word for DOS, which competed head-to-head against Wordperfect. You forget that the PC craze was well on its way by 1989, and Microsoft had their IPO 3 years previous, so obviously Bill Gates was known by many at that point because he was already a billionaire, and very young.

Sure, Bill Gates was even more famous later on, and Microsoft was even more of a dominating factor in the 90s, but he wasn't chump change in 1989 in the least.


I was thinking along those lines too! I only really heard of Bill Gates after Windows 3.1 came out.


I wonder if there was a reason for taking the call? Possibly to see some aspect of support calls by doing it?


I think it's interesting to see real-world customers and how they use software.

Being in university, I get to see a whole range of people. You realize that people don't really have a robust mental model of how their computer works or how the software they use is structured. Then we 'computer people' wonder why they struggle with our software.

There are so many times I'll watch someone go through a laborious process to find a google doc, despite having seen a link to it on their screen at least 4 or 5 times. Or I'll see them go to the search bar, type in google, then start their search. Or go to the file menu rather than use shortcuts (every. single. time.). Or use tabs to manually center things rather than use the center button.

Even relatively advanced users will completely miss the existence of styles in Docs & Office.

And all of these are just the basics. We really do need to make software easier/more intuitive.


I'm running out of people to do it with, but I love taking someone in the company who has never used, read about, or even seen a program and setting aside some time and just handing it to them and saying "do x".

It's such a massive help in finding what can be confusing, what requires industry knowledge, what is easy for a developer vs a user, etc... It's so often that we as developers take some knowledge for granted, and helping the user either get that knowledge, or not need it is monumentally helpful.

Sadly, I've found that once they do that, they are "tainted" and can't really do it again as well with that product, so it's something I try to do as late in the process as possible. But it has been what prompted me to throw out an entire redesign for a screen and start over very late in the process, and to good effect.

I'm seriously afraid our usability will suffer when we run out of people to do this with.


I do this from time to time to test one-off service/support docs prior to shipping them off to my customers.

(I work in the marine industry; when a customer has a problem, he's often way out on the high seas, outside of even helicopter range - so, effectively, if whatever issue he's facing cannot be solved with what he's already got on board and some detailed instructions, he's SOL.)

After breaking down the solution (hopefully, we're able to isolate the problem!), I bring all the required hardware, software, scripts and my first draft for the solution document to someone Not Into Automation & Control(tm) - say, the cafeteria or cleaning staff - and ask them to work their way through it.

It never ceases to amaze me how many things we just take for granted after working in a field for a few years. ("Oh, but surely it is self-evident that..." "F--k no, it isn't!")

Result: someone not at all involved in the day-to-day management of our products feels a much stronger attachment to the company and what we do to stay in existence; a customer gets a procedure so detailed they can have anyone do it; and I get to sleep well, knowing the tasks sent offshore are as detailed and correct as I can possibly make them. Everybody is better off. At least I like to think so.


Interns are a great solution to the "running out of new, untainted eyes" problem.


> Or use tabs to manually center things rather than use the center button. (...) We really do need to make software easier/more intuitive.

First thought: what if the software could guess that you're trying to use tabs to center text, and offer a hint to use the "center" button instead? Then I realized that Microsoft Office already went this way once, and we all know the result.
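A guess like that could start from a heuristic as crude as checking whether a tab-indented line ends up roughly mid-page. This is a hypothetical sketch - the function name, the 8-column tab stops, and the tolerance are all invented for illustration; a real word processor would work off actual layout geometry instead:

```python
def looks_tab_centered(line: str, page_width: int = 80) -> bool:
    """Guess whether a line was manually centered with leading tabs."""
    text = line.lstrip("\t")
    tabs = len(line) - len(text)
    if tabs == 0 or not text.strip():
        return False
    start = tabs * 8                      # assume 8-column tab stops
    end = start + len(text.rstrip())
    midpoint = (start + end) / 2
    # Within 15% of the page's center column? Probably an attempt at centering.
    return abs(midpoint - page_width / 2) < page_width * 0.15

print(looks_tab_centered("\t\t\t\tChapter One"))   # True
print(looks_tab_centered("\tleft-aligned note"))   # False
```

The hard part, as the Office Assistant showed, isn't detecting the pattern - it's offering the hint without infuriating the user.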

It's a tough thing, making people use your software well, especially if the target audience is not to be expected to actually want to learn anything.

Makes me wonder though, where does the difference between Word and e.g. Photoshop come from? Both are professional tools, but with the latter, you don't get people stuck at the most basic level and utterly clueless. They all know this is a tool and they need to learn to use it.


Word processors are professional tools, but they're also student tools, amateur tools, etc. They (and MS Word in particular) come pre-loaded on computers, given out by schools, etc. You've got a larger population with a larger range of goals trying to use them. And when someone transitions from student to professional, they're likely to continue making do with however they were using Word before.

Photoshop is a tool that you have to specifically seek out, whether that means buying it or "acquiring through other means". It's got a smaller population of professionals and hobbyists who are used to thinking of it as a tool. I think that's where the difference comes in.


How come 'we need to make this easier' is always the answer?

Why can't society shoulder some of this burden? We should be teaching people to understand computers and difficult software; the 'make it easier' answer leaves everyone worse off, because software is everywhere.


Let's transpose the difficult/easier dilemma to other areas, where you might be the newbie:

- Prepackaged food. Why should lettuce leaves be separated, cleaned and packaged in handy sizes? People should be totally okay with tearing apart heads of romaine if they want to make a salad.

- Zippers. Way back when, people fastened and unfastened buttons or laces to get their clothes on. That seemed like a reasonable burden at the time, so why didn't we just leave it like that, instead of switching to zippers?

- Highway exits and lane merges. Designing a network of roads can be difficult. How many signs alerting you to the next exit are sufficient? What if we asked everyone to pay attention to one clearly posted sign, because anything more is redundant?

I'm grateful for all the simplicity that keeps coming into my world. If grocers, clothes makers and urban planners can embrace simplicity, the software industry can, too.


> Prepackaged food. Why should lettuce leaves be separated, cleaned and packaged in handy sizes? People should be totally okay with tearing apart heads of romaine if they want to make a salad.

Tangent, but: prepackaged food irks me. So you pack lettuce leaves in nice small plastic bags, or wrap each cheese slice in a piece of paper and put ten of them in a plastic box - basically creating a humongous amount of plastic waste, all because people can't be bothered to use a cheese cutter or pick the lettuce with their own hands, and express this as a market preference.

> Highway exits and lane merges. (...)

I like to point to cars as something that somewhat got this right. We don't let a person enter a car for the first time in their life and go driving on public roads immediately. Everyone understands that a car is a tool and needs appropriate training to use. People are willing to spend 30+ hours on driving lessons without blinking, because this is obviously expected by society.

I think we could use a bit of this expectation in the software industry too. A lot of software is "unintuitive" only because the user couldn't be bothered to spend 30 seconds thinking about what's on the screen, not to mention using the "help" menu or reading a manual. We can keep dumbing down our software, but there's a trade-off - beyond some point, the only way you can make something easier to use is by removing its essential features. There's no amount of modern UX and Angular code that can make a CAD tool or an accounting package masterable in 30 seconds.


None of those things were designed for the fulfillment of complex, open-ended tasks. They are simple devices, ergo using these things should be simple.

I don't want simple software because my problems are not simple. Computers need to move with me at the speed of my thoughts, and simple software simply cannot do that. Our modern world of software feels like shit-plastic Duplo blocks when I grew up building skyscrapers in Lego.


What tools for building skyscrapers used to be available but no longer are? My computer still has a CLI and all the esoteric flags my heart could ever wish for.


A few areas that I feel we've lost functionality:

* MP3 players: I used to have a device with buttons that fit into my pocket that was able to locally hold my entire music library. I could control the device through my pocket without needing to look at it or hold it in my hand, and it presented a filesystem with folders that I could simply drop music into (or whatever files I wanted; at the time, 20/60gb in your pocket was a big deal). Since everything's transitioned, first into library-based management and then into touch interfaces, actually using an MP3 player (or more specifically, whatever music program is on your phone) sucks so much more now.

* Software used to work regardless of what happened to its creator. Desktop software still largely does. Nowadays it's almost like entrepreneurs enjoy writing emotional sunset posts, completely oblivious to all the customers they've fucked over because they suck at business.

* Computer internals and abstractions were more exposed. Computing is slowly moving away from keeping users close to abstractions -- using modern software it feels like Product Managers want to erase and destroy users' concepts of things like files and folders. Cloud services would much rather you think of their software as an interface to their silos, and the kind of interoperability that you used to get for free by sticking to a 'protocol' of files and folders, is no longer there. (How hard is it to examine iOS or Android at the individual file level? How hard did it used to be?)

* I used to be able to open most software on my phone without being nagged with a popup for some reason or another. If it isn't a "HEY LOOK AT THIS NEW FEATURE YOU STUPID USER WHO NEVER EXPLORES ANYTHING" it's a "PLEASE VOTE ME FIVE STARS". Software talks too much, and I don't want to have a relationship with its creator. Leave me alone!

* I used to be able to boot up my computer (or, hell, even my Playstation!) without being bombarded with update requests. Software used to be finished, and as a program matured you could expect patches and update frequency to fall off dramatically. Our brave new world of evergreen programming means I am perpetually hassled with updates that I don't want, don't care about, and are frankly unnecessary. Sometimes those updates even remove features, sometimes even features that I am using! (How the fuck is that acceptable nowadays anyway? In what industry is it acceptable to take something away from users that they have paid for? We have compromised our standards too far!)


I so much agree with all of your points. I want to add a few thoughts.

> Since everything's transitioned, first into library-based management and then into touch interfaces

I can forgive touch interfaces (smartphones are awesome, and a good headset will have buttons that can be used for track control), but I don't understand the whole library management thing. How on Earth did that happen? Why did the iTunes-like interface win, with all its bloat and pointless misfeatures, when all one needs is a simple way to filter a list of your music files and group them into logical playlists?

Moreover though, in the better days, you controlled your music. It was made of discrete data files. The modern way is all cloud bullshit you have to stream over the Internet every time you want to listen to it.
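To make the point concrete: the whole feature set being asked for here fits in a few lines. This is only an illustrative sketch (the `filter_tracks`/`to_m3u` names and the example library paths are made up), but it shows how little machinery "filter my files and group them into playlists" actually needs when music is just files and folders:

```python
from pathlib import PurePath

def filter_tracks(files, keyword):
    """Return the files whose path contains keyword (case-insensitive)."""
    kw = keyword.lower()
    return [f for f in files if kw in str(PurePath(f)).lower()]

def to_m3u(tracks):
    """Render a list of file paths as a minimal .m3u playlist."""
    return "\n".join(["#EXTM3U", *tracks])

# A "library" is nothing more than a list of paths on disk.
library = [
    "music/Queen/A Night at the Opera/Bohemian Rhapsody.mp3",
    "music/Queen/News of the World/We Will Rock You.mp3",
    "music/Daft Punk/Discovery/One More Time.mp3",
]

queen = filter_tracks(library, "queen")
print(to_m3u(queen))
```

Because the playlist is just a text file listing paths, any player that understands .m3u can consume it - the interoperability-for-free that the sibling comment about files-and-folders "protocols" is describing.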

> Computing is slowly moving away from keeping users close to abstractions

This is huge, IMO. The "low-level" abstractions of files and folders are good, because that's the level software operates on. All those attempts at abstracting files away only lead to your system actually lying to you about how the data is structured, and this can be confusing to people because different software now tells different lies, and they don't add up to a coherent whole.


> This is huge, IMO. The "low-level" abstractions of files and folders are good, because that's the level software operates on. All those attempts at abstracting files away only lead to your system actually lying to you about how the data is structured, and this can be confusing to people because different software now tells different lies, and they don't add up to a coherent whole.

I honestly feel that the only reason this happens is so that business-people can insert more "value-add" into the equation.


I feel things would be much better if business-oriented people asked what value something adds for their user, instead of "value-add" being a code word for "more money for us".


I call this productification. "Can this feature be made into a product?"

Ah, your word processor can count words. Can we sell a separate product that counts words in documents? Ah, your network interface can be taken up or down. Can we sell a Network Management Product?


> I used to have a device with buttons that fit into my pocket that was able to locally hold my entire music library. I could control the device through my pocket without needing to look at it or hold it in my hand

Companies still make standalone MP3 players that use dropped-in files and dedicated button controls. In fact, they're cheaper and more available than ever. https://www.amazon.com/Sandisk-8GB-Clip-Player-Black/dp/B00V...

> How hard is it to examine iOS or Android at the individual file level? How hard did it used to be?

It's easier than it used to be, as iCloud on iOS effectively added a shared file system that didn't originally exist.


Actually, these aren't simple devices at all. Good food-sorting machines can cost $30,000 or more. They are loaded up with optical sensors, intelligent paddle ejectors and other goodies. Lots of software in them, too! Distinguishing between a "good" and "bad" leaf of lettuce at feed rates of 3,000 pieces a minute is a lot harder than it sounds.

They succeed in the market because they make complex tasks seem simple. Sorry about the Duplo feeling, and it's a familiar lament of craftspeople everywhere. But we aren't going back to the days of artisans trimming lettuce heads one at a time. Too slow; too costly.


Yeah but the use-case for a food sorter is to sort food (specifically, separating heads of lettuce at scale), and that's all.


It's a very UNIX-like machine.


Why? We make products for end users, most of which are relatively trivial and not very dangerous. It's unlikely you'd ever see government-mandated or school-taught education on their usage like we get with cars. And cars are just about the only common product I can think of where users are forced into being trained in their usage - cars and, depending on the state, certain guns. If we make a product for untrained users that is hard for them to actually use, isn't that on us?


No, we make products to do stuff. The problem with simplifying is that you take power out of the hands of those who know how to use it, by reducing the combinatoric explosion of paths through your program down to just a few easy-to-understand use-cases. In the end you get software that looks and feels like it was built for children, but is instead targeted at adults, and if that adult wants to do anything outside of the easy-to-use program's very narrow and opinionated understanding of the world, that adult is shit-out-of-luck.

In a model of software consumption where users aspire to better themselves, yes the beginner is stymied but the possibility is still there for completing the task. That's not possible in our current model -- either the program works and you use it the way the designer intended, or you're fucked.


I know it's an unpopular opinion, but I agree with your general point. The trend in the last ~10 years of simplifying UIs did a lot of good; as low an opinion as I have of Apple, they deserve a lot of credit for spearheading and popularizing this trend. But for whatever reason, people are unable to understand the concept of trade-offs and flipped from "all power, no usability" to systematically ignoring the value of "ability to do custom things with your computing devices". The mantra seems to have become: every design decision should be made in favor of the user being a perpetual neophyte, no matter what the costs to the value and capability of the system. Systems that allow for gently increasing competence create the most value for the user, but we're stuck in a shitty low-utility equilibrium where it's easier to market systems where you can do ~100% right off the bat. The rub is that this was achieved by lowering the value of 100%, not by making it easier to achieve.

There's an alternative which satisfies both constraints: simplify UIs for the most common use cases while still allowing access to arbitrary amounts of capability for those who want it. There's a reason that every personal computer in my family runs on some flavor of Ubuntu: the default GNOME setup is 50x easier to use than any Windows computer[1] for people like my dad, who's a non-savvy enough user that he still struggles with copy and paste. Yet, it still allows the motivated to Google around and do things like change a single file in a theme's assets folder to make your panel transparent, something I actually did back in college when I cared more about things like themes.

[1] Note that I'm talking purely about the OS itself as an illustrative example of my point: having to be careful and buy a computer without hardware compatibility issues is a separate issue and is an actual pain in the ass that very reasonably prevents many people from following my example and switching their family to Linux. I just did it because installing Linux once every few years was a hell of a lot less work for me than debugging Windows over the phone once a week.


Agree entirely! It isn't the simplified UI itself that I lament, it's the removal of advanced possibilities. In a lot of cases, the fix is easy - just add an "Advanced" button with options for the power users.


Nobody makes products to do stuff, except maybe people working on open source and so on. Products are made to sell, for as much as possible to as many people as possible; their functionality is incidental. If power users are too small a market to outbuy uneducated end users, which they are, there is nothing to be done. Why would there be any reversal of this?


The other commenter addressed this well, but I want to add something of an ethical dilemma to the mix.

Sure, we can have schools teach people how to use PCs, but then people only know how to use certain software. If you've been taught extensively how to use Office and Windows, and that's all a computer is, then LibreOffice, Google Docs, and Linux will be a challenge. Microsoft loves this, and Apple has used it extensively in the past.

The danger for competitors is real. Office has caught up in the few features it was lacking; I've seen a resurgence in usage and very rarely see freshmen using Google Docs anymore (when I entered uni it was the opposite). Relying on a dated, unintuitive interface that kids don't learn in school anymore is going to kill Google if they don't react.


So don't teach them Windows and Office. Teach them software and computing concepts.

Stuff like, "most software has a menu bar, you can expect to find File, Edit, Window, and Help menus, here's what they usually mean," and so on.


Pretty much this, teaching people "Office" or "Windows" is the wrong thing to do, the goal is to give someone tools to interact with a word processor, whether that be LibreOffice, MS Word, Etherpad Lite, or another word processor.

If you teach a narrow set of skills and don't train for flexibility, you're robbing the students by pigeonholing their skills into only being applicable to a segment of the market.


I agree. Skills should be transferable.

I've seen friends who can do mail-merge in Word but cannot navigate the internet on a desktop with the ease and speed I can, because they learned mail-merge in school but never bothered to browse the internet on a laptop/desktop (instead they use phones all the time).


Difficult-to-use software is most often the fault of bad design - and "software is everywhere" is precisely the reason we should care about making things easier and more accessible. The dozens or hundreds of engineering hours to revamp a feature or a set of controls could save tens of thousands of user hours over the lifetime of the software (especially if you have a significant number of users).

And gives you a leg up on the competition.


For some cases of software I agree with you: my IDE should not be made less powerful just to make it easier to master. Even so, it should be built up logically - e.g. if editor settings are in one place, the option for which font to use should not be elsewhere. If the shortcut key to rename is ctrl+r r, the shortcut key to inline is ctrl+r i, and the shortcut key to add a new method is ctrl+shift+m, then that is bad design.

Most software should aim for simplicity though, I am interested in using your software to pay my bills then get back to reddit, I don't want to learn a complex system.


> Most software should aim for simplicity though, I am interested in using your software to pay my bills then get back to reddit, I don't want to learn a complex system.

Software complexity should match the complexity of the problem space. The problem today is that the trend is to make oversimplified software. I guess it's fine if it lets you pay your bills, but maybe a little more complex software would let you earn more money to pay those bills with, by expanding the amount or complexity of things you're able to do.


A comment lower down recalls that the staff suggested it - and that William did not perform so well.


I've seen the CEO of https://en.wikipedia.org/wiki/Wegmans ($7B grocery store chain) in the Pittsford Wegmans parking lot picking up trash all by himself (no photo op). He drives in to work in a Ferrari.


I don't know what you do for a primary job, but don't you get a feeling of refreshment and accomplishment from doing something different now and then? Especially when you can see actual progress and improvement instead of going to meetings and reading summary and analysis documents all day.

For example, I'm a programmer, but every now and then I open up and clean the office coffee machine, or mop up a spill in the break room. This isn't what I'm being paid for but doing a good job at it makes me happy, and NOT doing it would be being a jerk.


Wow, I think you have written down what makes me not want to go into management (I've just graduated). The feeling of "seeing"/"measuring" impact (either code churn or shipping or affecting people) is a very good and satisfying one. I also think the satisfaction factor is different for different people, but that is what people eventually call "job satisfaction", and it is a very important thing in the long run. Without satisfaction, I don't think someone can do the same job (maybe at the same place) for their whole life.


I don't see how a role in management is at odds with job satisfaction. You mentioned you just graduated -- maybe your impression of management is based on the caricature that tends to be portrayed in pop culture? As a manager, I feel like I am able to see a (hopefully positive) impact of the work I do -- impact both on customers/users and on the members of my team.


I am not closed to the idea of management. I will likely try it once (once qualified enough) to get a feel for it and decide for myself if I like it or not.

My impression doesn't come from the pop culture portrayals but rather from people who have been/are managers and hearing them on podcasts. But my opinion isn't based on that.

To me (which is a very narrow worldview at the moment), the only measurable things in management are:

- product delivery

- product cost

- customer satisfaction

- dev team satisfaction

And two of those cannot be measured in any meaningful way. Whereas in coding (and I don't like these measures) I have lines written/deleted, work items fixed, product completion, feature parity with the design document (and quite a lot more I'm sure) and most of them are measurable.

This is the basis for my view.

PS: Thanks for sharing your views and telling me that management can also allow you to see the impact of your work. Also, I do think that more managers are needed who are knowledgeable (or at least willing to learn/work with others who know) about business and development, and who have domain knowledge of the product (e.g. an ERP dev should have at least used an ERP), so I also want to improve the situation by going into management if I like it.


(I'm a manager.) One big thing this overlooks is helping people learn: management is almost an ideal job if you enjoy teaching and watching others grow. Your "product" is the team you build.


Well, I think there's a lot to learn about management then. This makes it interesting. Nobody in my family (even extended) works in tech, and at the places they work, management exclusively means giving people work and collecting the output. Also micromanagement.

I'll have to experience it once I get into a job (starting this July). Thanks for the insight.


It's 2015. The scene: Wife and I (both experienced decision makers) are playing Fallout 4.

Me: "I'm supposed to be running a paramilitary campaign. Why am I repairing someone's carrot?"

Her: "All managers ask themselves this darling".

Cue hilarity all round.


Back in 2003 I saw him (Danny Wegman) just seeming to observe people inside the Pittsford store.

I approached him and spoke to him for a couple of minutes - primarily about some additional British products I thought they should have in that section of the store.


(Real) Cadbury's chocolates or Spotted Dick?


A chance to catch something obviously wrong that gets in the way of the workers but that they lack the power to fix. Big companies sometimes drop the ball. No grunt complains to the face of the CEO.

Apparently he missed one such thing by sheer bad luck. From the comments:

"Bill queried the knowledge base, which was normally painfully slow, but this time it was snappy and responsive."


It was also a very useful employee relations move: Bill wasn't "above" the task of a customer support call and demonstrated respect for the support workers by performing his best in their role, even if very temporarily.


I wonder if Bill wouldn't have happily fielded the second call from that customer too, had he been asked.


Well, TFA says he was on a guided tour, so I imagine he wanted to get a better sense of what the support folks have to go through. We do something similar at work, where you can do a "ride along" to get a better idea of what the folks using our internal tools need.


This is a best practice amongst many high performing organizations. Engineers who intuitively understand how their products are being used become much better designers. Similarly, managers get to understand the real pain of their customers.


I worked for some friends a long time ago doing traffic control plan software (at http://www.invarion.com/). We were a startup, three of us in the office, that's it, so when customers would phone one of us would have to talk them. A lot of the customers were not super technical, and I would have to answer questions about features that I had implemented, or bugs that I couldn't have imagined. It was illuminating, and gave me a new respect for developing software and really considering the end user and their needs and abilities.


Is this so surprising? Kevin Hale, YC partner, talks about how it was implemented at his startup Wufoo, with interesting results about how it drove engineering to improve the product. It's in his Stanford lecture video, for those who are interested.


Believable but at the same time might be just marketing, especially the last part. Call me cynical.


The story is consistent with Joel Spolsky's first-person account of Bill Gates: https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...


Ugh, the guy who discourages people from working on side projects because of potential contract violations?


I would say that's a misreading of his intent - informing developers about the contract law governing their employment is to help people not get burned at a later date because they were iffy about the details of this or that clause.

Of course, being informed about it may very well discourage some people from pursuing side projects. But then this comes down to whether you'd rather have developers continue with their side projects oblivious to the potential consequences of doing so.


This actually happened. I wrote the account lower on that page attributed to 't-mikeha' which was my Microsoft username back in 1989. I also made the same comment here when this story originally appeared on HN years ago.

My friend and fellow intern Tad fielded the return call from the guy who had the issue, and while I didn't speak to the customer myself, I believe Tad's account. It makes sense to me that Bill's answer wasn't quite correct, he did just essentially read the customer the first result from querying the error message on the Knowledge Base.


That's what I thought too. Especially since the story mentions the username "billg." Wait, so did he log out the support person and then log himself in before taking the call?

I mean it's possible, but it's also possible it's just a nice story.


"Okay, let me see if William is available." The product support engineer brings up the customer's service record and looks at the name of the support engineer who handled the earlier call: billg.

Those details were fictionalized by the author. Everyone who wasn't actively on a call at the time knew all about Bill's call, because we were all standing around my cube when it happened. Not long after the call was over, our entire team knew all about the call, what the issue was, and how Bill handled it. If I recall, it was about an obscure linker error message.


Given past history, it is highly unlikely that Raymond Chen is lying. Neutrality is not the same as rationality.

