> My dream is to turn computer performance analysis into a science, one where we can completely understand the performance of everything: of applications, libraries, kernels, hypervisors, firmware, and hardware.
> One interviewer who had studied my work asked "How many staff report to you?" "None." He kept returning to this question. I got the feeling that he didn't actually believe it, and thought if he asked enough times I'd confess to having a team.
My experience with this in interview loops is that it's less about admiring a person's technical abilities and more a checkbox question to determine whether a person fits a certain role model that companies have set up. At most FAANGs, interviewers will expect you to mention that you are the tech lead of a team (5-10 engineers) at an (L|E)6 role, even if you aren't a manager. At (L|E)7, it will be 50+ engineers. As a regular engineer, one would probably have an issue getting hired at a high seniority level without answering the question the right way. Things might be different for well-known personalities like Brendan.
Had this experience - worked in a role where I was mostly a very senior IC but also managed a small team for strategic projects. Interviewed at a FAANG and got convinced that corresponded to an L6 sort of role; definitely should have held out for L7 though.
Walked in the door to find that few of the L7s I've encountered have been leading 50-person teams (as manager or TL). It seems to range more like 20-35, and among ICs they're often not actually the only TL, just the most senior on the team.
At the distinguished engineer level, you can get away with not being a tech lead. But ya, you'll have problems if you go from being a purely individual IC to a FAANG where they expect more leadership to have been demonstrated.
This is a problem in many FAANG-like companies. They have no real technical track for "individual contributors"... all paths turn into management roles eventually. It sucks that when companies finally got on board with technical tracks that didn't require switching to management, they just did it by making the senior technical spots management spots. So still no real technical tracks.
A tech lead role is different from a management role.
The TL is the person who the buck stops with on technical discussions. While part of the job is providing mentoring for junior engineers, they aren't directly responsible for performance management, headcount allocation, etc, in the same way that a people manager is, and people don't usually directly report to them in the org chart.
By definition, no large engineering effort is solitary. It makes sense for the more senior eng to be responsible for the decisions that will affect more people, and that requires talking to and understanding them. If people want the senior eng title, they need to be able to do that kind of work.
To me the main difference is whether I get to work with code or people. Technical tracks are for people who want to work on code, but TLs don't get to. Technical leads spend all their time working through design documents and in meetings. They are technical meetings and documents, but they are still not actually creating things. They are artists who have gone from making art to helping guide other artists. Definitely a valuable role in a company, but whether you call it management or not doesn't change the fact that they no longer get to make the art.
I want roles in companies for the technical artists to progress. Where you get harder problems, work on more complex issues, do things that require more research and autonomy to solve. These skills are also valuable and needed in companies and most don't have a way of retaining that talent as they have no track for them.
I have very little interest in becoming a manager, and when I have been pressed into team-leadership roles in the past I've found it uncomfortable and stressful. That said, I have come around to the view that because of the simple fact that any large engineering project must be a team effort, once one's ambitions become large enough, the ability to manage people is the only tool powerful enough to implement one's technical vision. I believe it was reading Joe Sutter's memoirs (Boeing 747 chief engineer) that really crystallized that view for me.
In my own career, I am currently pursuing independent consulting as the "progression" of an individual contributor role. I believe this is fairly common, and I think there's kind of a mutual cause-and-effect going on there with the lack of good senior IC paths at many companies.
As someone that has been on that spot as well, consulting seems to be the only safe path to grow old and keep doing tech.
Most companies don't have any issue bringing in the aging expert who will code and lead their internal team, teaching them the ins and outs of the technology they should master, yet they have serious issues keeping such skillsets in-house.
You can progress to the highest IC levels as someone who wants to work with their hands directly on code/issues (saw that myself at 2 different big techs). But it's harder and more one-off.
The reason for that is simple - in most areas you can achieve more if you can properly guide dozens of engineers than if you lock yourself in the basement. The main exceptions are crazy-specialized perf gurus.
The vast majority of time, a harder, more complex problem, requiring more research to solve implies more than one person working on the essence of solving it. As soon as you have more than one person working on it, someone is providing guidance to someone else who is, on balance, receiving guidance.
Or small teams that self-organize. Sort of the axiom behind "Agile Manifesto."
And some problems require not large numbers of people thinking a little about it, but rather long periods of time uninterrupted for one or a few people to think deeply about.
A large engineering org is almost always horizontal, whereas some narrow problems are unusually deep and/or important and therefore benefit from unusually skilled attention down to the micro level.
In some cases you want your three wizards supervising the architecture of 100 people working on 25 products… and sometimes you want them lovingly crafting every line of a framework or other core component that is going to mechanically influence all that work even more than design review ever could.
Yes, getting PMCs enabled in VMs was just the start, I think the next hardware capabilities to enable are:
- PEBS (Precise/Processor event based sampling, so that we can accurately get instruction pointers on PMC events)
- uncore PMCs (in a safe manner)
- LBR (last branch record, to aid stack walking)
- BTS (branch trace store, " ")
- Processor trace (for cycle traces)
Processor trace may be the final boss. We've got through level 1, PMCs, now onto PEBS and beyond.
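For a rough sense of what each unlocks, this is how they map to Linux perf usage today (a sketch; event names and flags vary by kernel version and CPU):

    # PEBS: precise sampling, so instruction pointers are accurate (":pp")
    perf record -e cycles:pp -a -- sleep 10
    # LBR: hardware-assisted call graphs from last branch records
    perf record --call-graph lbr -a -- sleep 10
    # Processor Trace: full control-flow tracing via the intel_pt PMU
    perf record -e intel_pt// -a -- sleep 10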
Can this be safely/efficiently virtualized? I love using these tools, but post-Spectre I could understand people being hesitant to expose more internal "state" (i.e., technically unique to a VM but only one processor bug away from kaboom?).
Thanks! We have to work through each capability carefully. Some won't be safe, and will be available on bare-metal instances only. That may be ok, as it fits with the following evolution of an application (this is something I did for some recent talks):
As (and if) an application grows, it migrates to platforms with greater performance and observability.
The ship has sailed on neighbor detection BTW. There's so many ways to know you're a VM with neighbors that disabling PMCs for that reason alone doesn't make sense.
In the crudest sense of "do I have a neighbour", sure. Of course, that's hardly secret -- if you're in EC2 you can just count your CPUs to figure that out.
But there's more questions you can ask:
1. Is my neighbour busy right now?
2. Is my neighbour a busy web server, a busy database, or a busy application server?
3. Is my neighbour hosting Brendan's website?
4. Is my neighbour hosting Brendan's website and he's logged in writing a blog post in vi right now?
5. What's Brendan writing right now?
It's not immediately clear which of these questions can be answered using certain capabilities! Prior to 2005, for example, few people would have guessed that you could read text off someone's screen using hyperthreading. (Pretty simple, although I don't know if anyone has published exploit code for it: just look at which cache lines are fetched when fetching glyphs to render to the screen.)
Congrats man, it sounds like a dream job for you. It will be fun to follow your blog at your next job. Thanks again for sharing everything that you do, it is so incredibly humbling and such a great learning experience.
On AMD systems, many hardware performance counters are locked behind BIOS flags/configuration.
I admit that I don't know how Intel works, but disabling the use of these performance counters at startup should be sufficient for any potential security problem.
I'd expect that only development boxes (maybe staging?) would be interested in performance counters anyway. Maybe the occasional development box could be set up for performance sampling and collecting these counters, but not all production boxes need to be run with performance counters on.
Yes, getting LBR data from production workloads is the whole ballgame for AutoFDO/SamplePGO and BOLT/Propeller. You cannot access the LBR on any EC2 machine short of a "metal" instance.
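For reference, a minimal SamplePGO-style flow looks roughly like this (a sketch assuming the AutoFDO tools and an LLVM toolchain; the file names are illustrative):

    # record LBR branch stacks from a representative run (-b needs LBR access)
    perf record -b -e cycles:u -- ./app
    # convert the samples into a profile the compiler understands
    create_llvm_prof --binary=./app --profile=perf.data --out=app.afdo
    # rebuild using the sample profile
    clang -O2 -fprofile-sample-use=app.afdo -o app_opt app.c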
When it comes to PGO (vs. profiling the whole system), though, it's worth noting that a lot of the speedup comes from things which are too trivial for us humans to consider.
When I profiled the D compiler with and without PGO, it became obvious that a lot of the speedup from PGO comes simply from running the program at all; the choice of test cases made almost no difference.
There was some speculation in the previous thread about brendangregg on where they're going next (https://news.ycombinator.com/item?id=31051662). Not a single person seems to have gotten it right :)
Caught me a bit by surprise as well, as Intel seems to have stagnated a bit as of late, but the opening paragraph seems to indicate Brendan thinks otherwise, and who am I to disagree.
I wish you luck on the new adventures, and hope you'll have tons of fun!
>Caught me a bit by surprise as well, as Intel seems to have stagnated a bit as of late
The best place for an engineer to be (in my observation) is at a company that faces fierce competition. Envisioning, designing, and building a pipe that pumps gold requires different skills than exploiting such a pipe once it's built.
>I recently worked with another hardware vendor, who were initially friendly and supportive but after evaluations of their technology went poorly became bullying and misleading.
Intel has a ton of product lines, and a sort of parallel business of design and production, which they've started to split apart.
Their core CPU design has managed to stay relevant (and profitable) despite issues with production, and has regained some measure of success with their 12th generation.
Their production business has a good roadmap, but success in execution remains to be proven.
GPU design is a bit of a wildcard, and I'm excited to see if it pans out.
It's fair to say that what came out of Intel certainly felt stagnant as the past decade dragged on, but there's still promise that they can remain competitive.
A high-profile hire probably would not word it this way exactly… but old org that is struggling can be an interesting place to be. Especially if there is new leadership looking to right the ship and put their stamp on things.
A lot of “we can’t do that” or “we don’t do that” in big corps actually comes down to “we don’t think we need to do that.” Because why mess with success?
But when everyone realizes success is slipping, a lot more things become possible. I’m going through that now at my employer and big ideas that bounced off of walls for years are now getting done in months. It’s fun.
While the fabs might have hit some missteps, Intel’s software group has still been very strong for a long time. MKL and icc are examples of Intel putting effort into software (I can’t remember whether they hobble it on AMD which would be a real shame). Still a great company and will likely be a leader again.
That has varied. At one point, you could set a "please don't sabotage performance" environment variable that made e.g. Matlab 2x faster. Then it started being ignored. Making software deliberately cripple performance to make your hardware look better reveals the kind of company it is.
> The geeks are back with Pat Gelsinger and Greg Lavender as the CEO and CTO;
This is something I am so excited to hear about Intel. As a consumer and user, I want Intel/Apple/AMD/NVidia all to be competitive and pushing the boundaries of what is possible in computing.
So far Apple has done an amazing job with M1. AMD has had super success with Ryzen. Nvidia has had great success with GPUs.
Recently, it seemed Intel was lagging, in large part due to their culture. It is exciting to see them get some "geeks" back in charge.
This! And I'd also add Qualcomm to the list of CPU companies that I hope to succeed.
Qualcomm is designing desktop-level ARM chips to compete with Apple's M series. The team came from Apple via a startup acquisition (Nuvia). Will be interesting to see how that turns out.
I strongly agree. It's my belief that companies whose management team has direct operational experience will long term outperform the ones run by bean counters. I'm very bullish on Intel's future!
One project that I would find really interesting would be to leverage the insane capabilities of GPUs and 3D graphics to make extremely detailed visualizations of millions and billions of performance events, intervals, code paths, data structures, heat maps, and the like. It'd be great to get views beyond the raw data that are neither oversimplified nor useless - to get the feeling of being at the helm of some seriously detailed (and precise!) data with zoomable resolution and a lot of assistance from analyzers to surface visual artifacts. I think it'd be both entertaining and highly productive to engage our visual cortex more.
In the blog post, Brendan Gregg confirms that it is indeed his actual email address at Intel. That's an impressive recognition of Brendan's capabilities. I wonder how many engineers at Intel have a <firstname>@intel.com email address.
At my company (of about 22k people worldwide), your username is derived from the first and last letters of your name, with numbers appended to make it unique. VIPs don't get to pick their username as far as I know. Intel is large enough that I would guess their default naming is also algorithmic, but apparently some VIPs like Brendan are able to get nice usernames.
Amazon assigns you an algorithmic name but you can request anything you want that isn't already taken. My friend got a three letter email addy within just the last couple of years.
It was the same deal when I got my first job out of college (20+ years ago) at Motorola (somewhere in the neighborhood of 200,000 employees at the time).
Requested my first name, it was available. I constantly got people asking questions in amazement at my email address - how did I do it??? I would just shrug and say "I asked".
My previous technical account manager got his three initials at amazon.com. One day I might join Amazon, but I don't think I'll get my initials (A. W. S.)
There used to be a director at Google with this name, which in a way made me not want to work there because I'd have to put up with a 2nd choice email.
This tale of super-talented engineers having no one under them, being individual contributors, feels too common. I'm sure Netflix had some idea how valuable Brendan was and how much money he was saving/making them, but I still wouldn't be surprised to hear organizational impedance was sometimes an issue.
It is impossible to remain an individual contributor at Intel. This will last about a year. There are simply too many employees to maintain a flat hierarchy. People dream of this, but in reality, if you're really that good, you will absolutely become a manager, there's just not enough hands on deck.
Plus there is also social pressure: while you're cavorting about as an individual contributor, many other peers will be crushed under management pressure, and will start to resent you, and demand to the VPs that you share the load.
That being said, Intel is really taking a hard turn back to engineering. I might even consider going back there, assuming Gelsinger doesn't go back to the old-school ranking-and-ratings that forces you up or out, or the "you must give 120%" bullshit. Being forced to work in "dungeon mode" for months at a time is what drove me out. "Dungeon mode" is supposed to be a 1-2 week thing to fix a serious bug. (DM is spending 14 hours a day in a conference room 7 days a week.) You can only do that so many years in a row before saying enough. That's why Intel lost so many people to Apple a few years ago.
> It is impossible to remain an individual contributor at Intel. This will last about a year. There are simply too many employees to maintain a flat hierarchy. People dream of this, but in reality, if you're really that good, you will absolutely become a manager, there's just not enough hands on deck.
This is right. In most big techs, even if you manage to "technically" remain an individual contributor at a principal/distinguished level, you're very likely going to lead/manage a bunch of technical projects, which will effectively give you tens of informal reports. Even without external social pressures, I saw many people voluntarily choose to become a manager because it's nearly impossible to handle the workload otherwise.
>(DM is spending 14 hours a day in a conference room 7 days a week.) You can only do that so many years in a row before saying enough. That's why Intel lost so many people to Apple a few years ago.
I refuse to believe that people went from Intel to Apple for quality of life improvements
There are many sides to Apple. The CPU architecture side is not the same as software, cloud, or products/apps; it is run very differently. It's almost a different company. The architects and designers I know left because they were tired of Intel forcing crazy product roadmap twists and turns, and demands on their time, seemingly going nowhere. Apple's Mx silicon has been a smashing success, quite a change from Intel's architectural constipation and chain-yanking of their engineers. Most employees at Intel are pigeonholed, and it is rare that an opportunity to break out without having to move cities comes along; when it did, many took it.
Per the recent story about Jony Ive burning out at Apple, I wonder how well Brendan will be protected from the miasma of managing a staff in a highly political environment.
I'd think a smart move would be to hire a personnel manager for him to deal with all administrivia and let him focus on leveraging his talents with other like minds.
Interesting & telling (& not at all what I meant!) is that most replies celebrate ICs.
It's a simplification for sure, but "autonomy, mastery, & purpose" being the elements of fulfilment rings true however you dice this. Personally I'd really like to see a lot of really good engineers given direct unfettered access to talent, given authority, while trying to minimize their personnel and management overheads: let the human resources go wild, synergize, make good things. And stand the frak back.
It's a gaping cultural loss that Brendan hasn't had a team of 10 under him. There are people missing from the world who would be way awesome & have done great things if Brendan had simply been tossed some people. But usually corporations start having political games, attachments, commitments, things other than the work itself that come with authority. I'm fascinated by & not sure this world will ever get to grapple with just how core & essential a problem creating "low-drag" organizations is. Allowing itself to acknowledge that much of what it does is far more excellent & powerful than those in the chain of command can really tap, harvest, understand, much less take credit for or manage.
Orgs need an anti-gravity force, a depoliticizing defence of "stand the f back". Brendan's go-do-it, figure-it-out brilliance needs a chance & position in this world, needs to be shareable. Or the idea of an organization is actually a hazard, is ossified & unworthy of being the thing that has overgrown this planet like it has. Human dignity & potential should not remain so shadowed & bound.
Is anybody else bothered by the use of capitalization in email addresses? I understand that it doesn't matter semantically, but I find myself thinking negatively of people who use it for some reason.
Especially when it's Intel, the company which traditionally has a lowercase i in its logo.
Also rather OT: The Intel l219 or I219 or i219 must be one of the worst names ever for a NIC. Even Intel doesn't seem to know whether that first letter is an uppercase I or a lowercase l.
Yes, I do know that theoretically there could be different mailboxes on a server that do depend on capitalization, but this is not really happening in practice as far as I'm aware.
The RFC specifies that the local-part (left of the @) is treated as case sensitive [0], so it can have semantic meaning.
In practice many hosted mail providers (Gmail, Yahoo, etc) treat their own accounts as case insensitive. But in MS Exchange, for example, you can have separate inboxes with only capitalization differences, so it's definitely not obscure.
> local-part of a mailbox MUST BE treated as case sensitive. Therefore, SMTP implementations MUST take care to preserve the case of mailbox local-parts. In particular, for some hosts, the user "smith" is different from the user "Smith".
Huge, huge fan of Brendan's work; "Systems Performance" is one of the few technical books I've read cover to cover twice! That being said, I wonder if he left Netflix due to its stock price cratering; the timing does seem suspicious :)
As a person known for performance work, there is no way he wasn't looking at various performance and usage charts and didn't know, well ahead of the earnings call, that their numbers were not looking good and the stock would fall.
Not saying the decision was caused by this. But he for sure is the "insider" that "insider trading" talks about.
As a former insider myself, I can say that it was pretty hard to divine the earnings report from service metrics. We could see overall patterns of ups or downs, but Wall Street mostly reacts to future estimations as well as profitability, neither of which we could derive from metrics. And while we had access to active subscriber numbers, again, how Wall Street would react to a miss or not was not always predictable.
Or in short, I doubt he could see this coming, especially given that his interview process had to start a few months ago.
> I wonder if why he left netflix was due to its stock price cratering
My understanding (backed up by levels.fyi) is that RSUs make up a negligible part of Netflix's total comp. The deal I always heard when talking with recruiters there was that base salary was very high ($500k), but they were pretty aggressive about maintaining churn among their employees.
But I never ended up accepting an offer there, and it was a while ago, so hopefully some other Netflix employees can confirm whether or not RSUs are a major part of comp today.
Netflix doesn't do RSUs at all, they do options. You get a comp number, which is all cash. Then you choose, once a year, how much of that cash you want to use to buy options. The option discount changes occasionally, as well as the percent of your salary you can use to buy them. The options are 10 year options.
When I was there the option was 20% of the stock price and you could do 100% in stock if you wanted to. So if the stock was $100 a share I'd pay $20 for the option to buy a share at $100 for the next 10 years. In other words, I was break even if the stock went up 20%, and doubled my money if it went up 40%. It was a great program when the stock was growing more than 20% a year.
From what I understand most people take all cash now, or nearly all.
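To sanity-check the arithmetic above (hypothetical numbers from that example, not actual Netflix terms):

    #include <stdio.h>

    /* Payoff of a 10-year option bought at 20% of a $100 share price. */
    int main(void) {
        double strike = 100.0, premium = 20.0;
        double prices[] = { 120.0, 140.0 };  /* stock up 20%, up 40% */
        for (int i = 0; i < 2; i++)
            printf("stock at $%.0f -> net $%.0f per option\n",
                   prices[i], (prices[i] - strike) - premium);
        return 0;  /* prints $0 (break even) and $20 (premium doubled) */
    }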
Yeah, it is curious. I know they used to have a $400-$500k chunk for total comp (every engineer was Sr+) and you picked what % you wanted towards salary vs. RSUs. This was a few years back, so I think they may have gone the route you mentioned, which is mostly all cash.
I would like to know who this "hardware vendor, who were initially friendly and supportive but after evaluations of their technology went poorly became bullying and misleading" was, or anyway wasn't.
A statement that it was not AMD would be meaningful. (We all know Qualcomm and Broadcom have their problems.)
Sun machines always seemed to have weird environment-dependent behavior. In college, one of our classmates got the nickname "the human eclipse" because no matter what time of day, no matter what else was going on, when he walked into the Sun lab the machines all went down.
I wonder if Brendan will directly contribute to Clear Linux. It's already the fastest Linux distro by many benchmarks. His software contributions to a distro that is focused on Intel processors would be really interesting.
Yes, a Sun Microsystems training room in Sydney. The entire building is now gone. I was teaching sysadmin and performance classes there in the early 2000s, for both internal and external staff.
For Netflix, I think the expectation used to be that they would establish themselves as a long-lasting company that would generate $20bn in profit per year. It's now looking dubious that they will exist in a meaningful way in a decade--that their value is now to milk the subscribers they have for all they're worth until they have no more.
Of the big tech companies, Netflix has the smallest moat.
Google has search.
Apple has the iPhone.
Microsoft has Windows and Office.
Amazon has AWS and logistics.
All of these are huge competitive moats.
However, unlike when Netflix first started, the moat for streaming is not the tech, it is the content. As such, Disney, Universal, etc., which have decades' worth of movies/shows, have a huge advantage in this new reality where streaming can be implemented relatively easily.
Netflix's moat is the marginal unit economics of media. At present, they have twice the number of subscribers of the nearest competitor (Disney), and that ratio is probably far higher in non-US markets. They can outbid everyone for content and still deliver a service at a lower COGS than anyone else in the market. They still have to produce a high volume of good content, but it allows them far more mistakes in doing so. It also allows them to make content tailored to niches which aren't economic for others.
> they have twice the number of subscribers of the nearest competitor(Disney)
And Disney had 60% subscriber growth in the past year (74M Q4 2020, 118M Q4 2021), while Netflix had 9% growth in the past year (203M Q4 2020, 222M Q4 2021), meaning the distance is shrinking as we speak. Disney sits on a massive portfolio of content, especially compared to Netflix. They are the bigger company, and have infrastructure to sell the content they own via multiple ways instead of just streaming.
Upvoted your comment for a solid counterargument. Disney's growth is decelerating as well; if you measure from Q1 2021 to Q1 2022, growth was down to 37%. I do think they're a very viable competitor, but I don't think the two businesses are mutually exclusive at current ARPU.
The reliance on theatrical releases is a bit of a mixed bag. It is another mechanism for content generation that can add to their library, but it also comes at a loss of some value to users of Disney+ if they care about seeing stuff on release. Additionally, it's dependent on a distribution channel(cinemas) that is currently hemorrhaging money. If moviegoing doesn't recover to pre-pandemic levels before the apes' money runs out, it might prove to be a vulnerability.
The problem Netflix faces is that more and more content won't be offered to Netflix at any price, since it'll be produced explicitly for other streaming services. If everyone has their own streaming service competing with Netflix, then eventually Netflix is left with almost nothing but the content Netflix itself produces.
Looking at the current market capitalization of both companies, Netflix is at $84.57 billion and Disney at $203 billion.
The difference, however, is that Netflix could focus squarely on content if it needed to, and can benefit from having a single focus of mind. Disney's resources are allocated to 5 primary verticals and 2 subsidiaries, with multiple competing budgets, priorities, resource allocations, and, most debilitating at a company their size, internal politics.
That being said, Disney already funds so much of their own content generation, so the question isn't can Disney match a bid by Netflix (that's already a losing question for Disney), but can Disney create competing content that is more compelling.
Netflix pays cash to its employees. If you opt in to the Stock Option Plan, that's your choice (one thing I love about Netflix - you can Google all of this!).
Why would he be forced to sell his shares upon leaving? That's not how anything works. You can have options that you lose, RSUs as well (but Netflix doesn't have RSUs), but a public company can't force you to sell your shares.
SVE is somewhat interesting, but I've generally found the AVX512 instructions more innovative. I really like AVX512's "compress" and "expand" instructions, for example... as well as the classic "vpermb" (but vector-permutation has been around since SSE and is an old trick: the old pshufb instruction).
Since SVE doesn't want to "set" its SIMD-width, it seems like these permute instructions (vpermb, or even compress/expand) aren't possible?
-------
I've always enjoyed Intel's innovative new instructions: PEXT, PDEP, and now AVX512 compress and AVX512 expand.
AVX512 also includes gather/scatter (but that's not innovative - it's been around for a long time - though still nice to see in prosumer systems).
Compress/Expand seems like a natural fit for something like SVE since it can still be phrased rather generically and I can easily see it fitting into loops that are written generically over vector length.
Free-form permutation does indeed seem like less of a fit. Though it still makes sense to define a minimum vector length of N for the ISA and support permutation ops that apply the same permutation on groups of N lanes.
Can you expand on why you find AVX512 instructions more innovative? I haven't had a chance to try SVE yet, but on paper it sounds very innovative and offers a wide range of new capabilities.
Gather/scatter have been around for a while, but it wasn't until more recent Intel uarches that their cost made them worth using in practice. Zen3 is still lagging quite a bit.
I've seen real-life situations in the past 5 years (albeit with my personal hobby code, nothing professionally), where VCOMPRESSPS or VEXPANDPS would quickly and simply solve my problem.
I personally would have never thought of making such an instruction, despite having written multiple sets of code that use a SIMD-compress or SIMD-expand pattern.
-------
Case in point, vpcompressb (byte-wise compress) is the most blatantly obvious way to "remove redundant XML whitespace" that I've ever seen.
It's just a thing that has obvious widespread applicability to many algorithms I've seen, and it keeps coming up again and again. Or determining which rays (in a raytracer) are "dead" vs "alive" (separating out hits vs misses). Or implementing quicksort (compress all items "less than pivot" to an X array, compress all items "greater than pivot" to a Y array - quicksort done).
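To illustrate the whitespace-removal idea, here's a minimal sketch with intrinsics (assumes AVX512-VBMI2 hardware; handles a single 64-byte chunk and strips only the space character, for brevity):

    #include <immintrin.h>
    #include <stddef.h>

    /* Compress the non-space bytes of one 64-byte chunk to the front of out.
       Returns the number of bytes kept. Build with -mavx512bw -mavx512vbmi2. */
    size_t strip_spaces_64(const char *in, char *out) {
        __m512i chunk = _mm512_loadu_si512(in);
        /* mask bit i is set iff byte i is not a space */
        __mmask64 keep = _mm512_cmpneq_epi8_mask(chunk, _mm512_set1_epi8(' '));
        /* vpcompressb: pack the kept bytes contiguously */
        _mm512_storeu_si512(out, _mm512_maskz_compress_epi8(keep, chunk));
        return (size_t)_mm_popcnt_u64(keep);
    }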
SVE is interesting but I'm surprised it actually works. A vector instruction set where you can change the vector width sounds like the classic CISC instruction that ends up unusably slow because it's microcoded. And yet ARM has it and x86 doesn't?
Also, it's an implementation choice to let you set SVE widths that aren't a power of two?
* https://www.youtube.com/watch?v=tDacjrSCeq4
* https://www.brendangregg.com/blog/2008-12-31/unusual-disk-la...