YouTube continued building their own POPs AND network for ~18 months AFTER the Google acquisition. Google did not have the network capacity to carry it.
(Fun fact: YT had 25 datacenter contracts and opened them at a rate of about one a month. Starting from March 2006, 25 contracts were set up in 2 years. At the time of the Google acquisition there were ~8, so yeah, 17 additions over the next ~16 months.)
Also, YT had a far more streamlined (but less optimized) network architecture. Traffic was originally generated in the PoP and egressed out of the PoP. There was not a lot of traffic going across backbones (unless it was going to a settlement-free peer). Initially, traffic was egressed as fast as possible. This was good for cost, not great for performance, but it did encourage peering, which also helped cost. Popular videos did go via CDN initially.
YouTube had a very scalable POP architecture. I agree with area_man that the collapse was not imminent (see: the 17 additional PoPs). There were growing pains, sure, but there was a fairly good system.
Also, as it relates to bandaid from a datacenter and procurement perspective, the original bandaid racks were in YT cages. YT had space in datacenters (SV1, DC3), and Google didn't. Also, the HWOps tech who went on-site to DC3 ended up tripping a breaker. (They were almost escorted out.)
Side-note: the evolution/offshoot of bandaid into the offnet caching system - now called Google Global Cache - is what really helped scale into provider (end-user) networks and remove a lot of load from their backbone, similar to an Akamai or a Netflix Open Connect box. Last I heard, GGC pushed significantly more traffic than the main Google network.
The Google netops teams that were of help in the first year after the acquisition were the peering team and some of the procurement team.
The peering team helped us leverage existing network relationships to pick up peers (e.g. SBC).
The procurement team gave us circuits from providers that had a long negotiation time (e.g. Sprint).
Google also helped YouTube procure various Juniper equipment, which was then installed by the YT Team.
I was there and saw some of this during my time at Netflix.
I've also been in the industry long enough to get my own sense of what is / what is not reasonable.
The first thing, Netflix-wise, is to understand their culture deck at the time. One of the main things was "Act in Netflix's best interest". That basically described their philosophy of how employees should act.
So, signing a contract where you get a 10% kickback goes against that (e.g. if the company pays $200/hour and you get $20 as a commission, it's better for the company to just pay $180).
Also, Mike signed contracts that he was enriched by - stock, kickbacks, etc. (he received what is now worth $862,500 of Sumo Logic and $2,167,700 of Netskope - trial document #276).
He also signed contracts for products that were never deployed, had a long support lifetime, or didn't meet the company's needs - e.g. Numerify and Docurated - trial document #288.
In some cases, I personally experienced us having to use tools that Mike had signed for that were not right for the job. E.g. Sumo Logic at the time was a horrendous product. It certainly was not a realtime logging system - "realtime" was up to 15 minutes delayed. If you wanted realtime, it was all about syslog. I brought this up and was told that we were using the product because of Mike, even though it clearly did not help our problems. grep on the Unix server was considerably faster and more up to date (but it wouldn't have got Mike $2M of stock).
Mike also had me meet with him and various vendors who were pitching some fly-by-night ideas. In a normal world, I'd say they were very early startup ideas that weren't a match for our needs. Now, I'm wondering if these were meetings where Mike was looking to get an "advisory" angle.
In summary, I've been to coffee, dinners, very nice meals, etc. with vendors. I've had them invite me places for meetings, and I've gone with my company's permission and understanding. I've had non-compensated advisory positions. The difference, though, is my company was aware of it, and I did not receive stock or engineer contracts such that I received kickbacks. That's where the line was, and that's why he's going to jail.
At a routing and peering level: once you have an announcement for your netblock out there, traffic will start to head towards it. A lot of this is due to the BGP path selection algorithm.
You can try to influence how traffic arrives by doing things like AS prepends, but you are still going to get traffic.
The main reason for this is that the other side that is egressing to you has their own egress policy that also follows path selection. Things like localpref and weight will force my traffic to leave via a particular path before the decision even considers how a network has padded its AS path.
As an example:
Let's say I (company A) want to egress to a downstream company (company B). If I learn routes to company B multiple ways - peering fabric (low cost), paid peering (medium cost), transit1 (high cost, variable quality), transit2 (low cost, variable quality) - I can choose which way my traffic goes via localpref, weight, etc.
Only when I view the paths equally (equal localpref, weight etc.) will I evaluate the shortest AS Path (which the receiving company has influence on).
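To make that concrete, here's a minimal sketch in Python of just the part of best-path selection that matters here. The attribute values and ASNs are made up for illustration, and real routers evaluate many more tie-breakers, but it shows why company B's prepends only matter once my own weight/localpref are tied:

```python
# Minimal sketch of (part of) the BGP best-path decision from company A's side.
# Attribute values and ASNs are hypothetical; real routers have many more tie-breakers.
from dataclasses import dataclass

@dataclass
class Route:
    next_hop: str     # which link the route to company B was learned over
    weight: int       # router-local knob (Cisco-style), higher wins
    local_pref: int   # set by company A's ingress policy, higher wins
    as_path: tuple    # AS numbers; length is only a tie-breaker

def best_path(routes):
    # Compare: highest weight, then highest localpref, then shortest AS path.
    # AS path length (which company B can pad) is only reached when the
    # locally-set knobs are equal.
    return min(routes, key=lambda r: (-r.weight, -r.local_pref, len(r.as_path)))

routes_to_b = [
    Route("peering_fabric", weight=0, local_pref=300, as_path=(64500,) * 4),  # B prepended here
    Route("paid_peering",   weight=0, local_pref=200, as_path=(64500,)),
    Route("transit1",       weight=0, local_pref=100, as_path=(64510, 64500)),
    Route("transit2",       weight=0, local_pref=100, as_path=(64520, 64500)),
]

print(best_path(routes_to_b).next_hop)  # -> peering_fabric, despite the prepends
```

Even though company B prepended on the peering fabric, my localpref of 300 still sends the traffic that way; the prepends would only tip the balance between transit1 and transit2, where everything else is equal.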
The only way to completely avoid getting inbound traffic via a specific link is to remove your BGP advertisement for your netblock from that link (some providers also let you do this selectively via BGP communities).
There are also some other tips/tricks - such as adding a more-specific prefix on a certain link to attract traffic - but care needs to be taken to have a fallback route in case things go wonky.
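The reason the more-specific trick works is that forwarding is longest-prefix match, which happens before any of the BGP attribute comparisons above. A rough sketch, with made-up prefixes and link names:

```python
# Why a more-specific prefix attracts traffic: forwarding is longest-prefix match,
# decided before localpref/AS path ever come into play. Prefixes/links are made up.
import ipaddress

# Routes a remote network might hold for our address space.
rib = [
    (ipaddress.ip_network("198.51.100.0/23"), "normal_link"),    # covering aggregate
    (ipaddress.ip_network("198.51.100.0/24"), "preferred_link"), # more-specific announced on one link
]

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, link) for net, link in rib if addr in net]
    # Longest matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("198.51.100.10"))  # -> preferred_link (the /24 wins)
print(lookup("198.51.101.10"))  # -> normal_link (only the /23 covers it)
```

The covering /23 is also the fallback: if the /24 gets withdrawn, traffic falls back to the aggregate instead of being black-holed.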