I have worked with someone who used to work at Wealthfront, and they said that he had told them, “do not speak to me unless I speak to you.” They left within months of starting.
I normally wouldn’t repeat something I couldn’t independently verify myself, but this person is one of the nicest people I’ve ever met, so I totally believe it. They left a very cushy job to take the job at Wealthfront, so leaving after a few months must have taken something pretty substantive.
Wealthfront isn't a very large company, so if you are telling the truth, you have potentially given enough information about your friend for people there to identify him/her.
I just finished the Eric Weinstein/Peter Thiel podcast, and came away mostly agreeing with their assessment that we’ve really stagnated when it comes to scientific progress.
I definitely feel like there’s this illusion of tech innovation coming from these big companies that suck up all the tech talent, but at the end of the day the best and brightest are working on optimizing ad clicks (FB, Goog) or getting people to buy crap (Amazon) or working on incremental hardware improvements (Apple).
If anything, I would hope any outcome against big tech would level the playing field when it comes to attracting talent, and create an environment where working on true “moonshot” tech was not so risky.
Historically, scientific progress has extremely rarely been made by companies. Most of the time, it's the result of academic research. Most researchers have neither the time nor the inclination to build a company or a product based on their own research, and having great research is no guarantee that the company/product will succeed. So the way research results become visible is typically through publications and teaching.
When companies hire graduates who have learnt from researchers, some of these graduates end up in a position to "innovate", quite a few years after the actual research has been done.
For instance, I'm going to talk about the field that I know best: programming languages (to keep it simple, I'm not talking about VMs or compilers, just the languages). Pretty much everything you see in Java, C#, Python or Go dates back to the 70s (with broader testing and quality-of-life improvements, of course), and Swift gets a few improvements imported from the 80s, but not that many. The only industrial languages that seem to have real innovations are F#, Rust and Scala, which are three cases in which the actual researchers managed to convince (or found) a company to support the language.
Anyway, it's really, really hard to measure scientific progress, and if you look at companies to try and gauge it, you're looking at stuff that is typically quite old.
Apple hired people who did academic work on LLVM and static analysis, and this turned into real compilers that people use every day (as well as Swift, which enabled SwiftUI). There are many other examples at other companies.
I would also note: in spite of "novelty" requirements for publication, we tend to see many of the same ideas recycled over and over. Which makes sense, because they tend to be good and useful ideas that may be fundamental to the discipline.
I didn't want to talk about compilers, because the examples are different, and despite the fact that I work on compilers at the moment, I don't have as much in-depth knowledge as I do of programming languages. In particular, while I'm sure that there are a number of novel things in LLVM, I have no clue which ones. In Swift, though? I see quality-of-life improvements, but nothing remotely novel scientifically.
Agreed about ideas being recycled over and over in academia.
Regardless, I believe that my point holds: it's very hard to judge "scientific progress" by looking at industry, because most of the time, you're looking at stuff that was discovered decades earlier.
Aside from the transistor, negative feedback, Unix, troff, radio astronomy, the charge-coupled device, cryptography, and information theory, what did Bell Labs ever do for us?
It's hard to argue Bell Labs was much of a purely private company; AT&T was bound by law to invest a certain amount of money in hard scientific research, in return for its monopoly position in telecom.
So at the very least, Bell Labs was the direct result of regulation, and pretty arguably just an alternate form of taxation (AT&T was also legally barred from commercializing Bell Labs' research outside of telecom).
The issue is that academic research doesn't always translate into a product that can be useful. There are many things that can be done at small scale that look awesome, but once you try to apply them at a larger scale or account for all the corner cases, they start to fall apart. While this may not be as true for tech, it tends to happen a lot in the physical sciences.
A big recent tech-focused one that started in academia and is now in the hands of large companies is self-driving cars. It looked like it might work, but then everyone started hitting those 1% cases and realized they can't actually ship a fully self-driving car (and driver-assist features can lead to worse issues as people think they can trust the tech more than they should).
Also, during war, patents are shared between all companies in one country or ignored altogether. I think this is an underappreciated reason why wartime creates all sorts of innovation.
Woah, I never thought about that. Do you know where I might be able to read more about copyrights / patent enforcement (or lack thereof) during wartime?
As did lots of awesome materials. The material science advancements that happened with WW2 and the space race (fueled by the Cold War) are pretty amazing. Nylon, among many others, came out of those drives.
Edit: someone corrected me about nylon. It was invented between 1927 and 1938 by DuPont. But it was still a commercial enterprise doing the R&D for it.
Nylon did not. You’re probably thinking of Teflon, which also did not, but that is at least a common misconception, unlike nylon, which is widely known to have first been used for women’s ‘nylons’ in the 30s.
My pet theory is that the best innovations come from contexts where loss of human life is acceptable - war being one such case.
The flip side of greater consumer safety is that there's less room to get things wrong, which in turn means it's harder to meaningfully innovate.
If my theory is correct, I expect the next paradigm shift will come from a country with less consumer/worker/civil protection. China (and to a lesser extent, India) are on my watch-list.
When I say this, people sometimes pigeonhole me as an anti-regulation nutcase, but that's not me. I don't know how I feel about this. There's a real Kantian dilemma, here.
The best innovations come without profit-driven motives. The thing about defense research is that it's more popular with the public than, let's call it, nerd research.
It's having money without trying to optimize solely on future money.
It's probably just as easy to say that it's the funding that comes from a war economy, rather than the war itself, that drives the research. However, I am also not entirely surprised that it's the high levels of funding combined with the adversarial nature of war that push the envelope of things getting bigger, smaller, faster, higher, etc.
However, while we do see this trickle back into civilian life, I imagine that if funding were directed towards solving civil needs instead (housing, food, water, environmental contamination, energy, community centers, schools, etc.), we'd see very different things produced, and often better suited for peaceful purposes.
What about the fact that some of the most prosperous times for average people come after epidemics and such which result in massive population declines?
I would say certain types of problem domains will be more likely to find solutions in academia vs companies.
For example, it's going to be pretty hard to get much innovation from academia in the area of operating planet-scale networks/datacenters, simply because academia doesn't have access to such environments, the money to support them, or the day-to-day challenges of actually running something like that.
That said, much of the innovation happening in companies is likely part of the secret sauce that makes the company successful, so we're not going to see much of it at the time the innovation takes place; over time, though, companies and the people working for them do open up about their solutions.
I have had similar thoughts about big tech companies, but lately I have started to realize the progress they have brought.
To name a few:
- Google, FB, Amazon basically wrote the book on distributed systems (read Designing Data-Intensive Applications), both from a research perspective and in the form of very well architected, open source solutions
- Google, FB have profoundly impacted front-end development with cutting-edge JavaScript runtimes and open source front-end frameworks
- Amazon, Google, Microsoft have basically invented/popularized a way of doing computing (cloud) and server management that has enabled tiny tech companies to become giants by outsourcing IT infrastructure
- Apple/Google have created devices, OSes, and software that are nearly impossible to live without these days, additionally creating platforms for millions of developers to make a living on (the App Store)
- Amazon has set the bar pretty high for automation in operations and made two-day shipping a thing we expect from everyone
There are many other things I can’t think of right now, but long story short: most of the companies you listed do have crappy parts of their business, but they have also made incredible platforms that third parties can leverage to make a ton of money.
I caveat all of this by saying that there are some practices that I don’t agree with at all of those firms, but by and large they have gotten so big because they are platforms.
There's a lot going on, but if you go back a few decades the list is more like: radio, television, cars, airplanes, skyscrapers, subways, refrigeration, plastic, aluminum, etc. This is a strange time period, in that technologically things seem to be moving too fast and stalled out at the same time.
> - Google, FB, Amazon basically wrote the book on distributed systems (read Designing Data-Intensive Applications), both from a research perspective and in the form of very well architected, open source solutions
Most of the science behind these things is actually older. They industrialized it, removed the kinks, built upon actual experience, all of which is extremely precious, but I don't think it's as groundbreaking as people believe.
> - Google, FB have profoundly impacted front-end development with cutting-edge JavaScript runtimes and open source front-end frameworks
If you're talking about JITs, that's gradual improvements on prior work on JITs (started during the 60s, ignored by industry until Sun picked it up during the 90s... for an academic project). Again, very useful, but not necessarily groundbreaking.
> - Amazon, Google, Microsoft have basically invented/popularized a way of doing computing (cloud) and server management that has enabled tiny tech companies to become giants by outsourcing IT infrastructure
Again, industrialization on prior academic work (e.g. virtualization, distributed component-based architectures, etc.)
> - Apple/Google have created devices, OSes, and software that are nearly impossible to live without these days, additionally creating platforms for millions of developers to make a living on (the App Store)
> - Amazon has set the bar pretty high for automation in operations and made two-day shipping a thing we expect from everyone
Mmmmh... I was talking about "scientific progress"; you seem to be talking about something different :)
If you recall, my point was that it's very hard to measure "scientific progress" by looking at industry, because industrialization typically happens decades after the actual discoveries/inventions. I think your point is that "industrial progress" may be good, which I'm not debating :)
Those are all nice product developments and industry standards, but nothing scientifically relevant. Some actual scientific research from Google is being done in autonomous vehicles, medical imaging, sensors and diagnostics automation, various areas of optimisation and, of course, information retrieval.
Front-end development, if anything, has regressed with JavaScript and front-end frameworks; about the only way it's improved is in the questionable cross-platform story.
The smartphone is obviously a gradual (occasionally not so gradual) improvement on PDAs, and app stores existed before it too, although usually belonging to the phone operator or network.
We hear about Amazon because they're in the tech industry; I bet if you followed the logistics industry as closely you'd see a lot of improvements, particularly outside of the US, where Amazon doesn't have the same footprint.
Science hasn't stagnated yet. As someone with a long career in physics/biology, I think this attitude only reflects the myopia and ignorance of the people who hold it. We're in the middle of a revolution in our understanding of biological systems and our ability to engineer them. But because that field is so technical and intrinsically slower-paced, people discount it in these discussions because they don't know anything about it.
There's much more to science and technology than "tech" companies.
> There's much more to science and technology than "tech" companies.
A specific observation of mine: actually important tech takes decades to make its presence felt. That's a totally different world from "tech companies" that are mostly fronts for ad-dollar scams, casinos for international crooks, and laundromats for central bank money.
Let's talk solar panels, first developed in the 1950s. Variable-frequency drives, first developed in the 1980s. Lithium-ion batteries, developed in the late 1970s. All this stuff is decades old and still working its way through the day-to-day infrastructure.
I feel like a bunch of biological and agricultural science stuff is coming, and massive changes in the energy sector are coming, but it's painfully slow.
You seem very certain that today's tech companies aren't doing any real research, or if they are it won't have a significant impact. How could you possibly know this? It can't be learned from reading the news.
Lasers were invented in the early 1960s, and there was a bunch of hype for half a decade. A dozen years later people were referring to lasers as a 'failed' technology.
Transistors were invented more or less in 1947. Twenty years later, in 1967, my parents bought a B&W TV, with tubes.
LEDs: first invented in 1927, practical devices 40 years later in the mid-1960s, and widespread use in lighting not until 40 years after that.
This is a reductive generalization, sort of like assuming people at Bell Labs were working on telephone equipment, or people at Xerox PARC were working on copiers. How a company makes its money isn't what determines the biggest impact of its research. In the end, it may be some other research area that they never figure out how to make money on at all.
Google famously has lots of "moonshots" and publishes lots of research and open source code. It's too soon to say what's going to turn out to be a real advance versus an "illusion".
What Google calls "moonshots" is a pathetic joke compared to the real thing.
They called a 15-million-dollar cancer research investment a "moonshot", as well as the various absurdly low X Prizes (vs. what they are asking for, and ONLY IF you succeed).
> I definitely feel like there’s this illusion of tech innovation coming from these big companies that suck up all the tech talent, but at the end of the day the best and brightest are working on optimizing ad clicks (FB, Goog) or getting people to buy crap (Amazon) or working on incremental hardware improvements (Apple).
Wasn't just this sort of thing said about the financial world? (That all of the brains were being sucked up by finance, because that's what paid the most for technical and math talent.) I remember talking to coworkers about 15 years ago, quipping that pretty soon, it would all just be enigmatic, undecipherable servers humming in server rooms, exchanging arcane signals, causing money to move around for inscrutable purposes.
Doesn't this fit the way people talk about "The Algorithm" at YouTube and Google? It just means that the enigmatic, undecipherable servers humming in server rooms have also begun to take over media and the culture. It's not just money moving around, commanded by machines for inscrutable purposes. It's now also the information, the connections, and the interactions comprising human culture itself.
Supposedly we're not yet at the point of AGI, but somehow we've already built the tools for the not yet existing AI overlords to control all of humanity. (In pAIperclips, this would correspond to "Release the Hypnodrones!")
> If anything, I would hope any outcome against big tech would level the playing field when it comes to attracting talent, and create an environment where working on true “moonshot” tech was not so risky.
At this point, maybe we need all of the talent just to try and keep the "Hypnodrones" out of the hands of evil hackers?
I used to think this, but I've come around somewhat. The economy as a basic premise is expected to grow at least 3% a year or bad stuff starts happening. Obviously no kind of exponential growth is sustainable, even at a low rate. The best minds are getting sucked into this at least partly because the work is necessary to keep the economic wheels spinning. If we give up on having an average % of yearly growth, we're committing to living in an economy that's a zero or negative sum game.
So yes, we are stagnating scientifically, but it's not necessarily for no reason. The longer we keep running the economy as is, the more smart people are going to have to be allocated to figuring out how to keep up with the exponential growth our system demands. Unfortunately, no one seems to have cracked steady-state economics, so it looks like we'll be doing this for the foreseeable future...
Additionally, I don't know how we're supposed to keep up with this absurd need for exponential growth without leaning hard on technology (at least if we presume we don't want to wreck the planet more). I feel like big tech is actually doing something important with regard to growing the economy disproportionately to the physical resources they consume.
> The economy as a basic premise is expected to grow at least 3% a year or bad stuff starts happening.
That isn't really a thing. The 3% growth aligns closely with population growth (including immigration), and that's where it really comes from. If the average company makes widgets and the population grows by 3% then they hire 3% more people and sell 3% more widgets to 3% more customers, and the company is worth 3% more.
The unsustainable thing is really unlimited population growth, but steady state populations don't require some kind of cataclysm. They work a little differently, in particular people have to on average work longer before they retire because there are fewer working people to support them in retirement, but it's hardly mass starvation and nuclear war. And even the drawbacks are offset by things like automation -- not as good as both automation and population growth, but still probably not worse than your grandfather had it.
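To make the widget arithmetic concrete, here's a toy sketch (all numbers are made up for illustration) showing total output growing ~3% a year purely from head count while output per person stays flat:

```python
# Toy model of the widget example: the economy "grows" ~3% a year only
# because the population (workers and customers) grows ~3% a year.
# All numbers are illustrative assumptions, not real data.

population = 1_000_000        # people
widgets_per_person = 10       # productivity held constant
growth_rate = 0.03            # annual population growth

for year in range(1, 6):
    population = int(population * (1 + growth_rate))
    total_output = population * widgets_per_person
    per_capita = total_output / population
    print(f"year {year}: total output {total_output:,}, output per person {per_capita:.1f}")

# Total output rises ~3% a year, but output per person never changes:
# the "growth" here is entirely a head-count effect.
```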
The real problem is that there is no iron law that says people have to spend their working hours on pie-growing activities like automation and honest medical research instead of pie-stealing activities like adtech and patent trolling, so if we have rules and institutions that make the latter more profitable, that's what we get.
Is it really unthinkable that a fixed population could each year turn a finite set of resources into a set of resources that is 3% more valuable?
If a tree falls in a forest, and someone turns it into some chairs, while planting at least one tree in its place, then that has made the economy more valuable (assuming there are people who want the chairs, and that leaving the fallen tree in place would be less valuable than having newly planted ones).
Zero percent growth means that every time someone creates something valuable out of less valuable materials, there has to be an equivalent amount of value destroyed somewhere. I suppose that entropy takes care of the destruction, to some extent, but it seems arbitrary to try to limit value creation to precisely match that level.
> Is it really unthinkable that a fixed population could each year turn a finite set of resources into a set of resources that is 3% more valuable?
No, of course not. The bulk of that 3% has always been population growth, but economic growth outside of population growth is possible -- it's even beneficial -- it's just not mandatory. There is nothing forcing it to happen, and the bad thing that happens if you don't have it is that you don't have it.
But that's still not great. Humans have had periods of stagnation that have lasted hundreds of years, and nobody really wants to live the lifestyle of the middle ages anymore.
Which is the point. If people spend their time optimizing click through rates and suing each other then humans don't immediately go extinct, but we also don't get any further toward curing cancer or colonizing other planets. And that's bad, directly, on its own, not because of some nebulous impact on economic indicators.
Population growth is an even more significant part of that "3%" given that real GDP growth during that period was less than 3%, in part due to declining birthrates and a population growth rate below historical norms.
But it's also kind of cherry picking numbers because that bracket starts just after the housing crisis and is measuring the recovery. The 2000-2010 bracket was 0.71%, compared to historical numbers around 2% (but also historical real GDP growth of more like 4%).
You also have to discount some amount of real GDP growth for the general category of "banks doing bank stuff" which increases GDP a lot on paper even though it's somewhere between net neutral and actually destructive of underlying value, e.g. credit availability and low interest rates causing higher housing prices and higher total interest paid which both get booked as GDP "growth" even though they don't imply any actual productivity increase and negatively impact quality of life for working people.
The result is that population growth is the majority -- but by no means the entirety -- of "real" real GDP growth. (There is also an interesting effect where population growth has non-linear effects, because more people results in both more minds working on productivity improvements and more people who can benefit from each of those improvements, resulting in a quadratic productivity increase with population.)
But it's concerning that real GDP growth per capita has declined from its historical norms since 2000, even though we now have even more "banks doing bank stuff" than we used to. And it's probably not a coincidence that this has coincided with increasing amounts of regulatory capture and business consolidation.
Zero percent growth means that each person made the same contribution as the previous year, not that there was no value added to the world. We shouldn't conflate income and wealth.
Completely agree with the last paragraph. The economy is tilted way too far towards incentivizing value extraction instead of incentivizing value creation.
Don't know that I agree with some of the previous stuff though. Even if we assume a steady state population, if the economy doesn't grow every year, we're playing a zero or negative sum game literally by definition.
RE your second paragraph, I both agree and disagree. As far as I can see, the US system (at least) is predicated on ~4-7% economic growth being the way that people systematically grow relatively small savings into a large nest egg for retirement (not that this appears to be working particularly well, see your comments on pie stealing). If we assumed a steady state for the population and economy, I don't see how people working for average salaries can set enough aside for retirement (without pushing themselves to a pretty low standard of living) without the benefit of ~4-7% compounding.
I have a developer's salary, so I could probably put away a couple million cash before my retirement even without the benefit of compound interest paid by economic growth. That's not the norm, or even the average though. Participating in society/the economy has to be an attractive enough proposition to incentivize people to do it, or bad social problems start popping up. In a steady state system, I don't see how we can make an offer more attractive than "Enjoy your subsistence lifestyle up to retirement, so you can have barely enough to have a subsistence level lifestyle in retirement."
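To put rough numbers on how much that compounding matters, here's a back-of-the-envelope sketch; the salary, savings rate and return figures are made-up assumptions, not any real plan:

```python
# Back-of-the-envelope nest-egg comparison: fixed annual savings over a
# 40-year career at different real rates of return. All figures are
# arbitrary assumptions for illustration only.

annual_savings = 10_000   # dollars saved per year
years = 40                # length of a working career

def nest_egg(rate: float) -> float:
    """Future value of fixed annual contributions at a given real return."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual_savings
    return total

for rate in (0.00, 0.04, 0.07):
    print(f"{rate:.0%} real return -> ${nest_egg(rate):,.0f}")

# 0% -> $400,000; 4% -> ~$950,000; 7% -> ~$2,000,000.
# That gap is why a no-growth, no-return steady state makes retirement
# saving on an average salary look so much harder.
```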
> Even if we assume a steady state population, if the economy doesn't grow every year, we're playing a zero or negative sum game literally by definition.
It's really two independent games that just happen to have the same prize for winning. The people stealing the pie are the bums regardless of whether any other people are growing it.
And zero sum games aren't always the end of the world. If everybody is fine in year zero and then nothing changes, everybody is still fine. Not as good as things getting better, but stable. The problem is if things are getting worse -- increasing levels of consolidation over time. But you can have that even in the presence of growth. It's an independent problem.
> If we assumed a steady state for the population and economy, I don't see how people working for average salaries can set enough aside for retirement (without pushing themselves to a pretty low standard of living) without the benefit of ~4-7% compounding.
Interest rates and ROI are related to GDP growth, but they're not the same thing. Time value of money is a thing even in the absence of economic growth.
Suppose every year Farmer Joe has to borrow money to buy seeds and pay workers to plant, grow and harvest the crops. Then after harvest he sells the crops and uses the money to pay back the loan. Every year it's the same, every year he starts and ends the year with the same amount of money, there is no economic growth, but the lender is still earning interest on the loan every year.
Meanwhile the lender is your 401K, so it makes money over time which you then spend down in your retirement and die with the same amount of money in real terms as your parents did, and so on indefinitely.
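Here's a tiny sketch of that cycle with made-up numbers (a $100k seed loan at 5%), just to show the lender earning the same return every year while total output never grows:

```python
# Farmer Joe's steady-state year: borrow for seeds and wages, sell the
# harvest, repay principal plus interest, and end where he started.
# The economy doesn't grow, but the lender still earns a return.
# All figures are made-up assumptions for illustration.

loan_principal = 100_000   # borrowed each spring
interest_rate = 0.05       # agreed loan rate
harvest_revenue = 120_000  # the same every year: zero growth

for year in range(1, 4):
    repayment = loan_principal * (1 + interest_rate)
    farmer_keeps = harvest_revenue - repayment
    lender_income = repayment - loan_principal
    print(f"year {year}: farmer keeps ${farmer_keeps:,.0f}, lender earns ${lender_income:,.0f}")

# Every year is identical: the 401K holding the loan earns $5,000
# even though total output (the harvest) never grows.
```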
And the interest rate has to do with supply and demand for currency, for which economic growth is a factor (affecting demand), but not the only one. The other obvious big one is that the Fed sets interest rates (supply) by fiat, as well as other things like bank reserve ratios.
I agree that in a vacuum, zero-sum games aren't necessarily a bad thing. However, in our hypothetical economy, the actors obviously aren't at parity in their skill at playing the game. In a zero-sum economy, consolidation is inevitable. At least in the presence of growth, the avoidance of consolidation is a possibility.
> Suppose every year Farmer Joe has to borrow money to buy seeds and pay workers to plant, grow and harvest the crops. Then after harvest he sells the crops and uses the money to pay back the loan. Every year it's the same, every year he starts and ends the year with the same amount of money, there is no economic growth, but the lender is still earning interest on the loan every year.
> Meanwhile the lender is your 401K, so it makes money over time which you then spend down in your retirement and die with the same amount of money in real terms as your parents did, and so on indefinitely.
I feel like you're making the assumption that the agents in this system don't make an attempt to consolidate (and obviously if and when they do, they will have varying levels of skill at doing so.)
I listened to that podcast as well and have listened to Thiel/Weinstein talk about tech and science stagnation in past podcasts. If you listen to Thiel, he'd say that going into any engineering field in the 80's-2000's was a bad idea EXCEPT for computer science.
I'm more worried about ambitious and smart people putting their talents in investment banking and management consulting. Tech has been a great place to go for smart, scientifically minded people, in part (at least) because it has been unregulated.
I'm not saying I'm against regulating big tech. But if anything, we need more people in the sciences and the sciences to be a good career choice over banking/consulting rather than to make computer science a somewhat less attractive career goal through more regulation (potentially leading to lower salaries).
The problem is, whenever something bad happens, people look to regulation, but they never look at the existing regulation to see how it might have led to this. They always want to add something new rather than fix what's already there.
One of the strongest defenses against monopolies is adversarial interoperability, because it blunts vertical integration. It allows a new competitor to replace one piece of the supply chain and still use the rest of it, even if it's operated by the same conglomerate competing with them for that one piece. Because then you can have five companies replace the five pieces one at a time and end up with competition on all fronts, without them having to find each other and coordinate ahead of time before any of them can act.
But we now have multiple laws conspiring to prevent that in tech. CFAA, DMCA 1201, EULAs, patent thickets, etc. We get monopolies because we passed laws that thwart competition. So how about we do something about those laws rather than adding new ones, when the incumbents are likely to have more influence in drafting the new ones than the average user or startup founder?
>ambitious and smart people putting their talents in investment banking and management consulting
Honestly, I think that is a big factor (along with a reaction to perceived political biases) in why people think tech companies need to be broken up, beyond any real or imagined monopolistic practices. Silicon Valley really shook up the business world at an accelerating rate over the past 2-3 decades, and it's threatening to people accustomed to the status quo, especially those who invested decades of their lives into a career in [finance, management consulting, high-powered law/politics] because they thought it was the best way forward.
That, or we've already seen where this road leads: our best and brightest talent being used to make rich people richer. Nobody with a high-paying job in finance is losing that job.
Personally, I suspect another factor is the ugly side of (high school) social order and anti-intellectualism. If the people they look down on start succeeding, they get really mad.
Virtually nobody complained about bankers and businessmen taking people's jobs through outsourcing, and anyone who did wasn't taken seriously. Automation and AI creating more productivity instead of just moving jobs? The "nerds" doing it? "SHUT DOWN EVERYTHING!" hysterical reactions and rage. If tech becomes involved in an industry, it is suddenly the great Satan, like Tesla or any of the artificial meat companies, and gets spoken of as not a part of the industry, while larger and smaller existing players and even foreign businesses receive more respect.
The mythical "tech bro" is insisted to be everywhere in countless hit pieces, which hold homeowners (along with other business professions) blameless for rising rents, while engineers and programmers alone are clearly the great evil responsible for high rents.
It harkens back to deep feudalistic stupidity, where the peasants and aristocracy distrusted the merchant and the miller for making money by working smart and being "anomalies" in the social order. Money not made through serfs or conquest? Clearly deeply unnatural and wrong. Just look at Belphegor: the literal demonization of discovery and ingenious inventions. Judging by actions and impacts alone, he should be an angel.
Regulation can be freeing, especially anti-trust when it prevents incumbents from getting so large they take over. Things like privacy regulations are somewhat stifling, but over time they can become a lot more easily abstracted. There's no reason all the tooling around data pipelines we have right now can't take into account GDPR out of the box and with a simple API.
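For what it's worth, a minimal sketch of what "GDPR out of the box with a simple API" could look like for a pipeline; the class and method names here are hypothetical, not from any real tool:

```python
# Hypothetical sketch: pipeline records tagged with their data subject and
# purpose, plus erasure and retention hooks, so GDPR handling is built in
# rather than bolted on. Names and shapes are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Record:
    subject_id: str     # the person the data is about
    payload: dict       # the actual event data
    purpose: str        # why it was collected ("billing", "analytics", ...)
    collected_at: datetime = field(default_factory=datetime.utcnow)


class Pipeline:
    def __init__(self, retention_days: int = 30):
        self.retention = timedelta(days=retention_days)
        self.records: list[Record] = []

    def ingest(self, record: Record) -> None:
        self.records.append(record)

    def erase_subject(self, subject_id: str) -> int:
        """Right to erasure: drop everything tied to one data subject."""
        before = len(self.records)
        self.records = [r for r in self.records if r.subject_id != subject_id]
        return before - len(self.records)

    def expire(self, now: datetime) -> None:
        """Retention limit: drop records older than the configured window."""
        self.records = [r for r in self.records if now - r.collected_at < self.retention]
```

Real tooling would also need export (data portability) and consent tracking, but none of that seems conceptually hard to abstract behind an interface like this.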
Some of the best and most impactful AI research comes from Google DeepMind or Facebook AI research. Those ad clicks are subsidizing a whole lot of scientific advancements.
You're not wrong, but that argument can kind of be used to justify almost anything.
"Yeah, what we're doing to make a ton of money right now is skeevy and not adding any actual value the world, maybe even creating negative value, but we promise we're going to spend all of that money on research to advance humanity."
Google and Facebook actually did end up doing that, to good effect, but 1) was it worth it? (maybe, especially if we get benevolent superintelligent AGI directly out of it and Google / FB actually agree to not monopolize or abuse it), but 2) how many times should we accept that excuse from companies? If in the next 10 years, a hot up-and-coming new startup begins to grow rapidly and looks like it'll end up in the FAANG tier, should we also trust them when they say their questionable tactics are worth it because they're gonna spend that money on cool science stuff?
What's the alternative? Increase government R&D spend to the same levels as FAANG companies? But will they have enough infrastructure to support that? For example, Tensorflow runs on Google's global datacenter infrastructure - this is the only way these latest monster transformer models can be trained, using thousands of GPUs or TPUs.
The alternative would be to create a tech or hard science company which doesn't choose advertising as their core business model. Many large and successful tech companies already do this, and are contributing a lot to ML research and other fields. It's just that Google, Facebook/Instagram, and Twitter, some of the largest companies in the space, are inextricably tied to that model.
Apple and Netflix don't share the same problem, and that gives them a lot more freedom in other areas, too (but perhaps resulting in them having less total user acquisition potential than Google or Facebook; but of course it's needed less, since every customer is directly giving them money). Uber also doesn't share that problem, and they've contributed quite a bit to the community, though of course they have other problems.
The main problem isn't tech companies or monopolies, IMO; it's the tech companies that survive only by cannibalizing both non-paying users and non-paying non-users who happen to visit just about any website on the Internet (because odds are any given website is using Google Analytics or has some sort of Facebook or Twitter integration). It creates bad incentives.
Of course, there are also separate debates to be had over general issues of monopolization and working conditions (like Amazon fulfillment workers not being able to take bathroom breaks without risking accumulating points which may result in them getting fired), and control over different media platforms (like Google owning the only real video platform in the world and Twitter owning the only real microblogging / "town square" platform, and issues that may come from how they regulate users and content). The advertising stuff is just the cherry on top.
I do think trying to regulate or break them up over the content policing stuff would be a huge mistake, though. Staunch conservatives say they want them to relax their standards and stop being biased against conservatives; staunch liberals say they helped the spread of disinformation, racism, and fascism and should have tighter standards. I think if either side has their way, to any degree, it will be a disaster.
> The alternative would be to create a tech or hard science company which doesn't choose advertising as their core business model... It's just that Google, Facebook/Instagram, and Twitter, some of the largest companies in the space, are inextricably tied to that model.
> The main problem isn't tech companies or monopolies, IMO; it's the tech companies that survive only by cannibalizing both non-paying users and non-paying non-users who happen to visit just about any website on the Internet (because odds are any given website is using Google Analytics or has some sort of Facebook or Twitter integration). It creates bad incentives.
Wouldn't the world be a better place if the Internet had true micropayments? Culture would no longer dance to the whims of advertisers and executives at huge companies. There would be a more direct connection between creatives and consumers. Then again, we've tried instantly collated online mob rule in the form of social media karma. The nearest equivalents, in the form of Patreon-supported Instagram models and YouTube stars like PewDiePie and Jake Paul, on the face of it, don't seem to point us in the best direction.
A huge irony is that the public wants free stuff, and historically has railed against micropayments. However, while they complain, it would seem that microtransactions are going strong, though they are arguably exploitative.
I'm not sure how good of an idea micropayments are. If every single website and app you ever used required micropayments, using the Internet would just be annoying. You would be incentivized to use as few services as possible, for one.
They work for certain kinds of things, but I don't think we'll see the day where people are regularly sending money for the right to open a blog or create an email account.
I don't really know what the answer is for large scale apps with non-paying users. Hopefully other forms of monetization become more popular.
It wouldn't connect people across borders anymore and would create firewalls across countries just based on money. Ironically, even places like HN wouldn't work: sure, for wealthy Americans, but not for poor Indian or Ukrainian hackers just about to develop their interest in tech.
Exactly. Ads suck, but paywalls around every single thing on the Internet would suck more. Plus you can usually block ads, but not paywalls. Even if you couldn't ever block ads, they'd still be preferable to the universal paywall scenario.
But some kinds of companies should definitely look more into accepting donations and offering premium options in exchange for ad-free viewing plus extra features. I'm paying for YouTube Premium to avoid seeing ads (and they have like one premium show that's pretty good), and I think that's a pretty good deal.
What do you mean by "(like Amazon fulfillment workers not being able to take bathroom breaks without risking accumulating points which may result in them getting fired)"? That isn't true, and it's very misleading for you to post. I am a full-time Amazon employee and I use the restroom as many times as I want in a day without being questioned. And what are these points you're speaking of? You don't get fired for things that simple; you get fired for write-ups, misconduct, etc.
Governments and government research institutes already are building supercomputers for research in domains like physics or biology. Given the increasing relevance of AI, it's only a matter of time that they also build supercomputers that have AI accelerators.
Government spend can't create a Google. Google depends on huge compensation packages to pampered engineers who get sushi for lunch and Segways for entertainment. A government facility of this nature would be political suicide.
Historically, a lot of successful public research was funded by either defense or intelligence agencies. It's hard to escape from "doing skeevy things"
That's true, but they already started out skeevy. Everyone already knows that stuff is gonna be skeevy. There's no other way for militaries or intelligence agencies to work. It pretty much has to be that way.
But some new tech startup certainly doesn't have to be skeevy to make money or to be in a position to contribute to research.
> That's true, but they already started out skeevy.
That's not true. A lot of times, the projects will start before funding.
> But some new tech startup certainly doesn't have to be skeevy to make money or to be in a position to contribute to research.
That is true, but you're also missing the reality of startups. Just survival alone is hard. Stubbornly sticking to idealism doesn't help that cause, which is why people take money from both China and Saudi Arabia.
It wasn't "basic" research though, as the basic research in AI happened a few decades ago. Google is industrializing Hinton's work, but Hinton created most of it in Toronto.
That said, industry can definitely do basic research work (e.g. Pharma does). But what google is doing is acquiring any company that might be a real threat to them.
Are you suggesting Google doesn't have real academic style research groups?
Surely you jest.
Didn't they even cross over into biology on one paper? They're a profit-minded entity, yes, but no way is it all shaving nanoseconds off of click-throughs.
The broader theory of Peter Thiel is that most of these advancements have been done in the rather unregulated ‘world of bits’, while areas related to physics, chemistry etc. have seen less advancement.
Playing Go against an AI opponent while in a 1970s, crumbling, subway system would make for a good analogy.
Sure, but Google also acquired Android, Apple acquired Siri, and Microsoft purchased DOS. They were still developed for years under their new companies. At some point, it does become their own project.
I imagine you could make a convincing counterargument that Big Tech has done plenty of great things for developers at large. We could look to the historical example of Bell Labs under the ownership of AT&T, which brought us the C programming language. Today we could look to examples like Google's Material Design, the Go and Dart programming languages, or Facebook's stewardship of React.
You can take the cynical view that people at Google are just trying to get people to click on things, which, true, is their primary business model. But you can equally say that employees of Google (who work on Search) are trying to provide a good search experience to everyone around the world for free.
Tangentially, for Amazon, they deliver some of the best customer experience I've seen. I can start a Live Chat session at 3 AM (with a human) when I have a problem. It's not just getting people to buy crap.
> create an environment where working on true “moonshot” tech was not so risky.
This obsession with moonshot ideas is just as damaging as these big companies turning customers into statistics, sucking up talent, and feigning innovation. Long-term technological development simply does not occur in the United States. EVERYTHING is under accelerated schedules and thus entirely short-term. I do not know of a single entity in the United States, other than possibly universities (although even they are suspect), that focuses on long-term research and development goals. There are many long-term investigations waiting to be undertaken that are not moonshot ideas. The United States is in real danger of falling behind due to this hyper-focus on short-term results.
But if you look at the things that were considered "science fiction" back in the day, like video phones, or augmented reality glasses... stuff like Facebook's Portal or Apple's ARKit or Microsoft's HoloLens really do fit the bill. The basic science is done in academia but there's no academic team that would create something as idiot-proof as FaceTime.
I always feel like widespread high bandwidth cellular networks and ubiquitous smart phones are underappreciated in those sorts of discussions. We just achieved ubiquitous global high-resolution-video level mobile communication. I think this is very likely to go down in history as being on the order of the printing press in terms of technological breakthrough. We're still incorporating the fallout from this into our society. But apparently we're already stagnant!
> I definitely feel like there’s this illusion of tech innovation coming from these big companies that suck up all the tech talent, but at the end of the day the best and brightest are working on optimizing ad clicks (FB, Goog) or getting people to buy crap (Amazon) or working on incremental hardware improvements (Apple).
That seems ignorant and simplistic (or maybe just using a definition of "innovation" that is less generic than I have in mind). The vast majority of employees at those companies do NOT work on the main money-generating product; those who do are such a small number compared to the rest that it's a running joke inside these companies: "hey, you guys make the money, we burn it".
I feel that a lot of progress has been made in many areas by people working for those companies. I'm not saying that more progress couldn't have been made if those people worked for smaller companies or for academia, but I completely disagree that big tech hiring lots of smart people automatically means they're all spending time doing something "useless" (to your mind).
Some of the very few things I'm aware of:
- large improvements to distributed systems
- large software building/testing frameworks
- maintenance, scheduling, utilization of large datacenter hardware and networks
- all sorts of specific features being pushed in open source projects (like the Linux kernel) to support these companies' usecases
There's a running theme in most of those things, of course: it's all to do with operating large datacenters, and that makes sense (and I'd argue that without big tech it would be much less likely there'd be the business support/money to work on such problems).
Peter Norvig had a point several years ago regarding the power of data. For instance, he mentioned some of the data Google collected could measure the effects of relativity. So in some ways, having the data makes it easier to "discover" something versus one genius toiling away in a room. This seemed to be where the lowest-hanging fruit has been, particularly at grasping human or social behavior. Due to the need for data and tooling initially, research and discoveries shifted from the academy to corporations. In some ways though, FOSS packages and the cloud are bringing research back into academia. In some ways you could make the argument that the government, with the Manhattan Project, NSA, etc., has driven technology more than academia for the last 70 years. I think you could also make the argument, or an interesting research project could investigate, whether there has historically been a cyclicality to theory and then practice, i.e. the French Enlightenment movement vs. the US founding fathers.
The only corporations targeted are those that are not contributing to the current ruling party more than the opposition. Which is why only a tiny number of monopolies are being targeted.
So at best there will be competition only in those areas dominated by the sliver of companies that happen to offend the current White House. And new entrants to the field can expect the same treatment. Of course, the companies targeted for breakup may just play ball and change their campaign contribution strategy.
I'm not sure Apple has ever been known for "moonshots" but they have been great at introducing best-of-breed versions (iPod, iPhone, iPad...) of existing tech products (MP3 player, smartphone, tablet, etc..) and then improving them every year. This is a good thing!
Now if they could just start improving the MacBook Pro again...
> create an environment where working on true “moonshot” tech was not so risky.
I think it's not the risk that prevents moonshots, but the consolidation in the industry and the moats that are built around the bigs.
The network effect in social media and tech platforms works both to provide more value to users and to stifle innovation. Amazon's customer is no longer the customer; it's the merchant selling on their platform. Google's and Facebook's customer is the ad buyer. Think about how many new, useful features could be built using these networks that are not ad-friendly. I find Facebook and Twitter to be more like preventers of socialization than enablers. They prefer you engage with products, videos and ads rather than with each other, and when you do engage with each other, not in meaningful ways. The biggest roadblock to innovation that improves people's everyday lives is that if it's not on one of the platforms or ecosystems that already exist, it matters little.
It is risky. Why work at an unproven startup doing something really hard for $150k and equity of questionable value when you can work at a FANG for $350k/year doing less work?
For founders I don't think the calculus has changed much. You do it because you believe in it and have the stomach or resources to take the risk. For workers it has changed a bit from 10 years ago. I just call this making hay while the sun shines - I don't think the massive salaries at FAANG will last forever.
If you are a founder who takes on investments, the only thing “you believe in” is an exit strategy. Don’t drink the kool aid. Most founders are not trying to feed starving children.
Whatever your purpose, you believe in it. Whether that's "this is a business plan that will make me a bunch of money" or "this will make an impact," you still gotta believe.
Yeah, say what you want about Thiel, but he has had 20/20 vision in seeing where the trends are going. I also agree with Weinstein's assessment that Silicon Valley is hiring engineers who are not there to do business. It certainly feels like every redesign of the major apps is becoming needlessly inefficient and uninspired.
I think it depends on what your definition of "We" is. As a species, the scientific progress humans have experienced in the last century is unparalleled. But the United States is falling behind because we are not doing enough socially to promote careers in STEM and health.
STEM and healthcare training are hugely overemphasized, hence the poverty wages claimed by many such fields.
It's the delusion that market mechanisms can create sustainable, socially beneficial (vs. unsustainable and socially parasitic) business models within these domains that's the problem.
If Peter Thiel is so concerned about scientific progress, maybe he shouldn't spend so much time and money supporting a regime that's openly hostile to science. Just a thought.
He'd likely argue that, despite the prior regimes' professed love of Science!, the same trends were present before. I.e. the podcast's content wouldn't have changed at all in 2016 vs 2019.
Short-term control of the executive branch in the American system is fairly disconnected from the multi-decade incentives that move the directions of scientific research.
We've been hearing this "both sides are equally bad" argument from Republican apologists for years, but by now there's a mountain of evidence that it's simply not true. Republican efforts to gut and silence science within the government have done profound damage.
Perhaps Peter Thiel should allow his engineers to work on something other than “yet another spying tool” or financial transactions apps.
This is like Bezos saying he’s getting into space because the planet is going to hell.
“Look at these problems I helped create. Everyone better let me guide us elsewhere.”
Thiel won’t openly admit that it’s self-aggrandizing leaders like him who distract people with babysitting his wealth, instead of allowing them to chase their own novel ideas.
Really, it is worse than Bezos. While there are legitimate things to blame on him, a delivery-centered infrastructure is more environmentally responsible than a retail one: moving N pounds of stuff is cheaper than moving N pounds plus the weight of a human and their separate vehicle. Even if we make optimistic assumptions about public transit use, there is less stuff moved around.
I spend a couple hours a week practicing guitar, and dozens of hours doing computer work.
I don’t consider myself a guitarist.
“Freedom to get stuff done,” it says on the Fellowship site. For the few young folks who qualify.
The raw numbers tell the story better than a stat: some folks, not most folks. Most folks gotta shovel the money onto the pile so bits and pieces can trickle down at the rate he approves.
This isn’t limited to Peter Thiel. It’s the same old: kowtow to aristocrats.
He’s a first generation aristocrat who has gone on at length about the right of the powerful to dictate the outputs of the less powerful.
He believes he’s truly unique and above the rest of us, without seemingly considering none of us will lie down and die if an early end should be his fate (I hope it isn’t, of course).
Such a smart guy, I suspect he’d agree there’s no causal link between his existence and the rest of us. He leans on convenient emotional frameworks society has carried along for years.
Oh of course he does a bit of philanthropy to point at. How gracious of him to hand pick those who should have their minds to themselves!
He thinks Elizabeth Warren is dangerous. Says nothing about the doofus in Chief, but the contender with relatively little power. Be afraid of change! It might prove I’m wrong!
The common features of an aristocrat: money and indifference to those without money.
Or at least that’s my take given his writings. Maybe have a peek and see.
You only become a first-generation aristocrat once there are at least a couple more generations. You might like him or not, but he got what he got; he didn't inherit it or gain it from a name.
That’s not what the definition of the word aristocrat is? It has nothing to do with HOW the wealth was acquired. Just that one is a member of the privileged class.
The origin of the word too has nothing to do with how the money is acquired. Just means “member of the elite class”.
It seems you’re attached to a romanticized/nostalgic spin on the term. Not concerned with the actual definition.
Thiel's IMHO bizarre politics aside, I really agree with him on this particular subject.
I have believed for quite some time that we entered a kind of minor dark age around the early-mid 1970s. Virtually everything we're doing now from spacecraft to AI to the Internet was invented before 1970. For some reason around that time we largely stopped making major advancements. There are a few exceptions (CRISPR/Cas9, graphene, Li-ion batteries) but nothing since then compares to the insane rate of progress experienced in the 50s and 60s. We went from horse-drawn carts to moon rockets and supersonic aircraft in 50 years, with a great deal of the progress happening between 1940 and 1970. That's 30 years of a rate of innovation orders of magnitude beyond anything in human history.
I am not certain of the cause. It's tempting to blame proximate things like the rise of neoliberalism, de-funding of basic research, or bureaucratization of science, but I wonder if the cause wasn't something deeper. It really seems to me that everything went off the rails around 1973. Take this graph for example:
Right around 1973 is when the decline of the US middle class really started. The effect just didn't become really acute until after the 2008 super-recession.
Here's a few of the ideas I've brainstormed:
* Did the collapse of Bretton Woods (a.k.a. the Nixon Shock) break something about the fundamental incentive structure of the economy?
* Was the generation that grew up on television less able to think innovatively? Did they start taking power around the early-mid 70s?
* Did we get scared? Was the rate of progress during those years so fast that it caused some kind of deep subconscious "future shock" backlash that manifested in things like the luddite wing of the green movement or the rise of fundamentalist religion?
* Energy got much more expensive after the oil shocks of the 70s and after US production of the really cheap easy oil peaked. Maybe rising energy costs cut margin from the economy?
I'm open to suggestions. I really think "what happened in 1973?" is a historical mystery.
That graph only goes back to 1947. If it went back to 1847 (I know, data isn't available) I suspect that we'd see the immediate post-WW II decades as unusual rather than a natural baseline.
Why was the economy so great for median American workers between 1945 and the mid 1970s?
One factor was that the USA was the world's largest manufacturer and running an export surplus. It was relatively easy to run an export surplus and offer good working conditions to the rank-and-file because there was limited competition (self-imposed isolation from the world market economy by Communist states, various market economies still recovering from the war).
Some of the economically glorious post war Golden Age was also built on environmental debt. Businesses faced less regulation on environmental, health, and safety issues and correspondingly spent less on pollution control, worker protection, and compliance documentation. Just don't drink the water or be "unlucky" on the shop floor. Building a $10 million chemical plant in 1965 counts toward that year's economic growth even if it's going to become a $100 million Superfund site by 1985.
Another factor was that there was a huge prolonged bounty from basic and applied research that came out of the physics revolutions from before the war. Nuclear fission, quantum mechanics, and relativity were all known by the 1930s. By 1970 most of the low hanging application fruit had been plucked. There's still plenty of basic research to do, and we still find new applications over time, but we haven't had another physics revolution like QM that just pops out one delightful surprise after another.
1) Moore's Law started in the seventies and is about to end within a few years. It might be that we got too lazy?
2) The economy became much more market-driven/competitive, i.e. management went from technical to financial. When the goal of innovation is financial, it becomes much more short-term.
The reasons for optimism today are:
1) Moore's Law is ending, so we should see many new hardware architectures, with the software to follow.
2) Open source - companies can start innovating without needing to build the basic stuff themselves.
Your first point is really fascinating. I've seldom heard anyone talk about potential negatives of Moore's Law. It seems like you're suggesting that the rapid improvement of computer tech since 1970 sucked all the air out of the room: all the brains, capital, and hype went there.
I'm not convinced it's enough but it is pretty interesting.
As a general rule, innovation in software always follows hardware. E.g. deep learning.
With Moore's Law, it was always the case that general-purpose CPUs would catch up to special-purpose ones (this is what killed Sun and SGI), so it was hurting innovation.
I’ve been playing around with the idea of building on demand phone booth conference rooms (the ones you see in coworking spaces) but reservable in 15m blocks that open with a code from an app. Could partner with coffee shops or even cities for people who need some sound proof time while out and about to take a call, zoom into a meeting while out in public, etc.
Yeah I’m referring to basically that, except it would be customized so that they’d be pay by 15m and put in public spaces or coffee shops instead of private offices. I have used them at companies I’ve worked for and they’re great, but thinking more about the remote worker or freelancer out in public (not at an office or coworking space) who might want to pay a few bucks in an otherwise loud public space for a quiet place to talk a call.
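For what it's worth, the booking side of an idea like this is mostly just slot reservation plus one-time unlock codes. Here's a minimal sketch under those assumptions; all names here are hypothetical and the actual lock/keypad integration is hand-waved:

    import secrets
    from datetime import datetime, timedelta

    SLOT_MINUTES = 15

    class BoothBooking:
        """Reserve a booth in 15-minute blocks and hand back a one-time unlock code."""

        def __init__(self):
            self.reservations = {}  # (booth_id, slot_start) -> unlock code

        def reserve(self, booth_id: str, slot_start: datetime) -> str:
            if slot_start.minute % SLOT_MINUTES != 0:
                raise ValueError("Slots start on 15-minute boundaries")
            key = (booth_id, slot_start)
            if key in self.reservations:
                raise ValueError("Slot already taken")
            code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
            self.reservations[key] = code
            return code

        def unlock(self, booth_id: str, code: str, now: datetime) -> bool:
            """The booth's keypad would call something like this to validate a code."""
            slot_start = now - timedelta(minutes=now.minute % SLOT_MINUTES,
                                         seconds=now.second,
                                         microseconds=now.microsecond)
            return self.reservations.get((booth_id, slot_start)) == code

    # Usage: book the 14:15 slot at a coffee-shop booth, then unlock during it.
    booking = BoothBooking()
    code = booking.reserve("cafe-42", datetime(2019, 7, 18, 14, 15))
    print(booking.unlock("cafe-42", code, datetime(2019, 7, 18, 14, 22)))  # True

The hard parts would obviously be payments, hardware, and partnerships rather than this logic, but the pay-per-15-minutes model doesn't add much software complexity.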
Yes, IAC's Match Group (Tinder, Match, PlentyOfFish, OKCupid...) seems to be where these things end up. IAC is the huge Internet company that gets little attention. They have over a hundred major web sites, which they run successfully, but don't try to make into one giant brand.
Publishing an advocacy magazine, with a paid editorial staff, was strange for a for-profit company. The article's main criticism of the new management seems to be that they focused on the core business of the company.
Agreed that IAC is an under-the-radar Internet behemoth. The Masters of Scale episode with Barry Diller, the founder of IAC, is worth listening to if you haven’t yet: https://mastersofscale.com/learn-to-unlearn/
They were bought by a Chinese bank. I don’t understand how that was allowed to happen in the first place... The security implications are mind boggling
Fun fact: a majority stake in Grindr is owned by a Chinese mobile game company, which has been ordered by the US Government to divest because the arrangement poses a national security risk.
Yeah, US national security risks such as not being able to teach government & military personnel the very basics of how to pre-emptively avoid being blackmailed.
If such personnel willingly upload compromising photos of themselves online, then it's their own fault, not the platform's or the platform owner's.
"Mr. Smith that works for government/corporate office of company, we have chats of you trying to solicit other men, tut tut, what ever will your wife and daughter think, we don't have to tell them but you'll have to do something for us in return".
This is literally the kind of stuff that agencies like the Defense Security Service look for when investigating people to issue security clearances; in the case of DSS, for anything under the Department of Defense umbrella. It's just one way you can be exploited.
Similarly, I have a bankruptcy, which makes it difficult for me to ever have a security clearance again, especially anything greater than Confidential. My poor credit choices make me a 'security risk' because I might amass debt again and be easily influenced *eyeroll*.
Are government/military personnel somehow entitled to being able to cheat and solicit casual sex? If someone does that while possessing top security clearance or sensitive info, they have clearly demonstrated themselves as vulnerable to exploitation. This is first and foremost an internal personnel problem. If such behaviour happens on U.S.-owned platforms it is still problematic.
Sex, debts and gambling are THE go to blackmail avenues for recruiting assets.
It needn't be someone cheating, either; being gay is still not socially acceptable in MANY circles. Remove wife/daughter from the above and change it to pastor, co-worker, employer, mother, or father.
It doesn't have to be just the fact that the individual is gay or bi, either: a listed kink could be used to coerce someone into a mildly compromising situation to protect their privacy, and once you've gotten them to do something compromising you have even more leverage with their employer: "oh well, we could tell them you did this for us, so you'd better keep co-operating".
This is also depicted on television; one example is, I believe, season 2 of Madam Secretary, with the gay Russian student at the war college, and in The Americans the FBI secretary is compromised through a vanilla heterosexual relationship.
It's also used in non-espionage/corporate-sabotage settings, like sextortion of both adults and minors: either coercing someone into doing sexual things on camera, or hacking a webcam/hiding a camera and catching them doing something sexual (or just naked), and using it as leverage against them.
This is a fairly competent wiki article, with the TL;DR being
>There have been various attempts to explain why people become spies. One common theory is summed up by the acronym MICE: Money, Ideology, Compromise or Extortion
Grindr is just one tool in a long list of possible extortion opportunities. And no, a gay pastor being blackmailed is nowhere near a "national security risk". This should strictly concern people who work for the U.S. government, intelligence, and military agencies.
None of the security concerns you mentioned will be solved if Grindr is placed under "trusted" ownership. Having a Grindr profile is a choice, not an entitlement or obligation. This is being framed as a national security issue 3 years after Kunlun Tech acquired a majority stake in Grindr and 18 months after it fully bought out Grindr, because fearing and hating all things (and people) Chinese is too in vogue in the U.S. right now not to jump on the bandwagon.
I never said 'a gay pastor'. I said someone can be blackmailed with the threat of being revealed to their pastor, as an example, so that someone like you didn't come along and go "yeah but no but yeah but no but yeah but nooooo no one would ever threaten to report someone to their dog's groomer's uncle's cousin's girlfriend, so unbelievable".
The fact is this is an app owned by a Chinese company that is being used by a vulnerable, often still persecuted, community to hook up with anonymous individuals that may or may not be openly gay.
It is worth noting that homosexuality was illegal in China until 1997 and classified as a mental illness until 2001, in a country that still regularly bans LGBTQ events, that does not allow homosexuality in television shows or movies as part of a list that also includes 'sexual perversion, sexual assault, sexual abuse, sexual violence, and so on', and in a country where creating content on the internet that contains references to homosexuality or the scientifically accurate words for genitalia is strictly banned.
If that's not enough reasons for you to scratch your head at a Chinese company owning a gay hook up app... then I suppose we'll have to agree to disagree.
And? You think the Chinese government will spend the time and effort to blackmail some random civilian on the other side of the world? Is this some regurgitated version of the false spiel claiming that "all companies are direct arms of the Chinese government"?
When Kunlun Tech's ownership of Grindr was labelled a national security risk, the U.S. government certainly did not take China's LGBT rights record or the privacy of regular American civilians into consideration. It was strictly about government & military personnel. I applaud your vivid & divergent thinking skills, but I'm afraid you've gone too far from the actual issue.
I find it all so bizarre. Everything from how big the font size is in the left nav, to the ordering of the nav items, to how the feed and right nav scroll together (you can't see the top of the right nav if you scroll down the main feed), to how huge tweets with images appear (you can only see 2 tweets on a massive desktop screen if they contain images), to the fact that you can't resize the left or right nav on your own...
What's bizarre about it? It looks a lot like the reddit redesign to me, and seems to be focused on emphasizing sponsored content more, if I had to speculate.
Large-picture tweets taking up a lot of space means that large-picture interstitial ads and promoted tweets also take up lots of space. I would guess also that scrolling the right-hand column gives a lot more automatic eyeball space to trending stuff in sections below the fold, and is no longer constrained to just screen height for what everyone will usually see there. Who's going to willfully scroll down the trending column of promoted stuff separately from the main feed?
The left-nav stuff seems much more straightforward for new users, even if it's a useless change for long time users who already know how to operate the site.
Not at all. Reddit has one nav across the top and side and a large auto-expanding area for the main content. Twitter desktop now has navs on both sides which take roughly the same amount of real estate as the main content area, which is not auto-expanding.
And reduce the font size from a minimum of 14 down to a minimum of, say, 6.
And for it to also reduce the menus. And for the site to actually use horizontal space. And for the site not to insert different scrollbars over the menus if you're using it in a small window. And for it to let us reduce the negative-space padding by 99.99999%.
Gah. It's so, so fucking bad. Twitter have the worst design team, and literally everything they have done for a decade has been garbage.
It's just another example of the latest trend in UI/UX "experts" trying to create an "experience" for users, instead of letting the user create their own experience through customization of the UI.
People generally don't know what they want. Expecting anything but a small percentage to customize their UI, let alone even know or care that feature exists, is optimistic at best. Think outside of the HN bubble, the other 99.999% of people that use these products.
It's not the HN bubble. There are very real users who want to do small customizations to the UI. Sometimes it's as simple as, in the case of GP, this sidebar is taking up too much space and I want to reduce it. Or it can be, let me choose how I want this list view to be sorted, and remember it. Or, this button on the toolbar is for a feature I never use, and let me hide it. All of this is incredibly common in well-designed Cocoa apps, but not at all in web apps or mobile apps (including iOS-builtin apps).
Sometimes ugly is functional. Sometimes worse is better. Sometimes designers don't know what they're doing as they chase after some idealistic goal that's detached from reality.
Functional design is sometimes not good design. It's pragmatic.
This redesign is hot garbage. Take it out back and shoot it.
Agree - 99% of what I do is read tweets, but now they take up only ~33% of my screen's real estate, and are outweighed visually by large distracting content on either side.
Perhaps I made some classic tourist mistake, but when I was in Paris last year I felt like every mid-priced French bistro was extremely mediocre and overpriced. Menus all the same and food did not taste fresh at all. Wine bad too.
There are tons of great places, just not along tourist-channeling roads. It's honestly just a function of rent. You can't be in a really nice spot and give good value in Paris. It's impossible. But yeah, bistro-type places in particular tend to be pretty mediocre; they're much better in Lyon, for example.
Just ask someone who lives there next time, you'll get appropriate feedback. They know "the good places". Actually do that for every major city you're going on vacation to. It works the same everywhere.
Depends on who you ask, though. You have the equivalent of Applebee’s lovers in every country who will tell you to go there. You also need to ask them some foodie shibboleths or you’re going to have a bad time.
Another easy one is asking about their favorite wine/beer, seeing if they seem interested in its similarities to and differences from other styles, and whether they can reasonably defend why it's their choice. I usually feel like people who can talk about these things 1. in relation to food/flavor and 2. in relation to setting/mood have at least a reasonable amount of interest.
Three trips so far mostly relying on Messy Nessy's recommendations and have not gone wrong once. Most memorable experience so far was Chez Louisette in the flea market, but not for the food, which was fine.
I only have the barest minimum of French, but I can read a menu, and that is generally enough to figure out whether the house is phoning it in or not. I.e., if onion soup or coq au vin figures prominently, next!
The deal with Paris is to GTFO of the tourist areas. Go walk around Belleville, upper Canal St. Martin and neighborhoods like that. Level of civilization (and corner bistros) goes way up.
Yo natives: I'm marking your suggestions on my map, thanks!
Tangentially related: I spent a week and a ton of research in Paris looking for a truly good bowl of onion soup and it just doesn’t seem to exist. No one takes the (large) amount of time needed to properly caramelize the onions and the balance of flavor always skewed way too sweet. I gave up and just made my own using a classic French recipe when I returned from the trip. Light years better.
Yeah well Julia will not lead you astray nor will the early versions of the Joy of Cooking. There is no evading the time required to caramelize the onions. This adds cost and makes it non-profitable to serve to gullible tourists.
Oddly enough you can now get decent pho (noodles not entirely correct) and Sichuan hot pot (also noodles sort of not correct, but fire, finally!!) and even a Japanese joint (not entirely sushi), pretty good. Not quite up to California standards (sadly my apex referent).
Given the algorithmic pricing on a lot of things (everything?) sold on Amazon there’s probably money to be made building some bot that checks for pricing mistakes and buys when one is found.
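A minimal sketch of the core of such a bot, assuming you already have a price feed per listing (the function, threshold, and sample data below are all hypothetical, not any real Amazon API):

    from statistics import median

    def looks_like_pricing_mistake(current_price: float,
                                   price_history: list[float],
                                   threshold: float = 0.4) -> bool:
        """Flag a listing whose current price has fallen far below its
        historical median (here, below 40% of it)."""
        if not price_history:
            return False  # no baseline to compare against
        baseline = median(price_history)
        return current_price < threshold * baseline

    # Toy example: an item that normally sells around $120 suddenly listed at $12.
    history = [119.99, 124.50, 121.00, 118.75, 123.25]
    print(looks_like_pricing_mistake(12.00, history))   # True
    print(looks_like_pricing_mistake(109.00, history))  # False

The detection is the easy part; actually buying fast enough, and dealing with sellers cancelling obvious mistakes, is where such a bot would likely struggle.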
I believe this was common on eBay. People also looked for common misspellings of items, misspellings that would lead items to get fewer bids and usually end underpriced relative to similar items.
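Purely to illustrate the misspelling trick, here is a tiny sketch that generates one-edit typo variants of a search term; the function is made up, and wiring the output into actual eBay saved searches is left out:

    def misspelling_variants(term: str) -> set[str]:
        """Generate simple one-edit misspellings of a search term:
        dropped letters, doubled letters, and swapped adjacent letters."""
        term = term.lower()
        variants = set()
        for i in range(len(term)):
            variants.add(term[:i] + term[i + 1:])                # drop a letter
            variants.add(term[:i] + term[i] * 2 + term[i + 1:])  # double a letter
            if i < len(term) - 1:
                variants.add(term[:i] + term[i + 1] + term[i] + term[i + 2:])  # swap
        variants.discard(term)
        return variants

    # Example: variants you might feed into saved searches for a listing title.
    print(sorted(misspelling_variants("spectrum analyzer"))[:10])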
I don't have this automated, but this is definitely still a thing, at least for used test equipment. I'm not clever enough to automate it fully, since often constructing the search terms is fairly difficult, but I definitely have scored some incredible deals ($40 for a perfectly fine HP spectrum analyzer a month ago). The only automation is some simple greasemonkey scripts, but there are definitely deals to be had on mis-categorized items. Likely this works best on specialized equipment that the average bulk auction buyer doesn't recognize, I would suspect that most industrial/scientific equipment would apply. This certainly won't apply to consumer items like a laptop though, the item has to be esoteric enough that a non-expert has difficulty even determining what it is.
Yikes. A little more than half the estimate. I’m pretty sure that is the worst miss the company has ever had.
With recent news that reruns like the Office and Friends are leaving next year, it would be really funny if a large chunk of Netflix subscribers are literally subscribing just to watch decades old reruns. It could be a bit of an Emperor Has No Clothes situation for them.
Netflix is a heavily data-oriented company; if a large chunk of Netflix subscribers just watch decades old reruns then they know it. What Netflix needs to do is build content specifically for these people and I don't believe they're doing that.
My anecdote on this: my college-age daughter watches Friends on Netflix basically continuously -- she's probably watched every season a dozen times now -- and when she gets to the last episode it just cycles around again. This is because she keeps it on in the background while she's studying or doing school work.
What can you build for those people except reboots of those old shows, which they're doing (e.g. Sabrina, Fuller House)? They can't force NBC to license them the old reruns forever if NBC doesn't want to. There's no amount of money they can offer that will work once a studio decides it's going to reserve the content for its own streaming service.
Yes, they are doing this (Fuller House, Gilmore Girls), but Sabrina is about as far from a reboot of the 90s version as you can imagine. Horror/drama remake, not sitcom - more akin to Riverdale.
You can’t make a new Office or Friends. Those were created by really special groups of people and I’m not confident today’s society allows groups like this to form.
This decade of outrage that we’re coming out of (hopefully) has crushed an entire generation of comedians and actors. Look at late night talk shows for example; they’re totally stale. Nobody has the balls to say anything, fearing the tweet storm will be turned against their production company.
It’ll take another generation before we’ll be in a place to create good shit again. Or maybe it will never happen again and the golden age is over?
Yeah, those darn kids. Why won't they just get off your lawn? You know who said the same thing you just did? Every generation, about the next generation. You can literally go back to people complaining that books were corrupting kids and that things used to be better back when. There are more comedy shows available than ever before, on television and at the push of a button. Comedy is alive and well; it's just that comedy changes over time, and what you find funny the kids don't find edgy or relevant. Remember when Dane Cook was funny? How about Rodney Dangerfield? Hell, Richard Pryor before Eddie Murphy (Raw is still an amazing set). Comedy will always change, and it's a reflection of the society that exists at that time, not the past. Plenty of comedians say really crazy things that don't create a tweet storm. Chris Hardwick survived a tweet storm based on some allegations from his ex and he's doing just fine now. Aziz Ansari seems to be humming along just fine too. I think the real point is that the kids are okay. You don't need to worry about their sense of humor; it's still firmly intact.
>Netflix is a heavily data-oriented company; if a large chunk of Netflix subscribers just watch decades old reruns then they know it. What Netflix needs to do is build content specifically for these people and I don't believe they're doing that.
I know that a significant portion just binge-rewatch the Office and Friends; however, Netflix does not own the rights, and these shows will go back to NBC, which will just host them on its own streaming service.
This is the main issue facing Netflix. The more their competitors wise up and take their content elsewhere, the more reluctant people will be to pay a subscription fee, since there are now way too many options to choose from.
That is definitely what their concern has been the last 7 or 8 years and why they've made such a massive push with original content. With so many content producers starting their own streaming services, they have to lure and keep customers the same way the other content producers will be: with original content not sourced from elsewhere. It's the only way forward for them and fortunately for them, they were aware enough to realize it almost a decade ahead of time.
Personally I haven’t found the quality of Netflix originals to be anywhere close to the same level as HBO's. Their approach is more “consistently release a ton of average-quality shows” and hope a few stick. It will be interesting to see if that actually works long term.
I agree for the most part. They definitely have their great content (Dark, Stranger Things, first few seasons of House of Cards, OITNB, I Am Mother, etc.) but it is incredibly inconsistent overall. I think they were more focused on beefing up their library so they had _something_ when the third-party content left the service, and hopefully get a few hits like Stranger Things in the process. With reports that they intend to slow down their investment in original content, I'm hoping they start focusing more on quality than quantity now.