
Companies like Intel, which are complicit in helping the CIA or any intelligence agency (government, rogue, or otherwise) infiltrate and exploit our systems, need to be held accountable by the market.

Intel ME and the (assumed [0]) partnership with the CIA to design and build this system should be an absolute blow to the integrity of their business long-term. Will you, as lead engineer or sysadmin for your mission-critical business, now continue to choose Intel products to help build your infrastructure?

Unfortunately it seems that our modern market has not yet evolved enough to punish companies involved in such reckless behavior. I suspect the primary reason is the ease with which governments can mass-tax and create fiat currency. Perhaps some alternative decentralized currency system would limit governments' ability to tax, print, and award juicy big-brother contracts to these companies.

Anyway, the best - and perhaps somewhat encouraging - outcome for now is the subsequent brain drain of engineers and hackers alike who want nothing to do with faceless corporations like Intel, Google, Facebook, IBM, et al. who routinely deceive, exploit, and work against the best interests of their own customers.

[0] https://twitter.com/9th_prestige/status/928740294090285057



> Intel ME and the (assumed [0]) partnership with CIA to design and build this system

I worked at Intel on ME and the things that came before it until around 2013. I can tell you two things --

1. No, Intel ME wasn't born out of a desire to spy on people, nor was it -- to the best of my knowledge, but I honestly believe I would know -- created at the request of the US government (or others). It was an honest attempt at providing functionality that we believed was useful for sysadmins. If it had been something done for the CIA, I believe it would probably have been kept secret instead of marketed.

2. It was initially going to be much "worse". Early pilots with actual customers -- such as a large British bank -- were going to run a lot more stuff (think a full JVM) and have a lot more direct access to user land. Security concerns scrapped those ideas pretty early on, though.

In retrospect, I personally believe the whole thing was a bad idea and everybody is free to crap on Intel for it. But the thing was never intended as a backdoor or anything like that.


Right. ME does make sense as a feature for sysadmins. Except... well, can you shed light on the following:

1. Why did your team deem it necessary to deny the end-user the capability to disable this feature?

2. Why did your team decide to enable ME on ALL consumer grade chips? You could have only enabled it on, say, Xeon, as a value-add - exactly like you do for ECC support. You could have made more money this way. But . . . you didn't.

Without legitimate, sensible answers to the above questions, there is no reason for anyone to believe your team did anything other than design a backdoor for the Feds. Sorry.


Having been a sysadmin once upon a time (2006-2008), these answers are straightforward. Servers used to have discrete ME cards which were paid add-ons. Competition in the early 2000s drove these ME cards to be integrated into the motherboard in order to better compete at the low end of the market. I’ve had servers I was only able to fix remotely thanks to the out-of-band management interface (more than once). The pain they fix is real.

The same techniques for managing server farms are useful for managing hundreds/thousands of corporate desktops. Being able to power up a desktop (“lights out” management) and re-image it at 3:00AM is very useful, for example. You could also install 3rd-party security products on the ME to provide higher-level threat detection that’s hard for a rootkit to hide from. So once the work of getting an integrated management engine production-ready was complete, it made perfect sense to use it in corporate desktops. It’s expensive to produce chip variants, so doubtless further cost pressures on Intel led them to put the ME in the core shared across all products. Plus, now IT admins can let the VP of Sales get the laptop she wants, knowing they can leverage their System Center/OpenDesk/etc. console to manage it via ME.

So no, they aren’t a Fed backdoor. Those of us who worked in IT 10 years ago remember how the market drove Intel to add the ME. That is doubtless why many are silently conflicted. They don’t want to take a big step back. They likely are expecting/hoping Intel will “fix” the problem.


The problem isn't the existence of the ME as such. Servers have a BMC which implements similar remote management functionality. People could order servers without BMCs, since they're discrete chips, but they don't.

Even Raptor's high-security Talos II has a BMC; the issue isn't having a BMC, the issue is that it's not owner controlled and it's not auditable.

What's wrong with the ME is that

a) it only accepts Intel-signed code; I can't replace the ME firmware with an implementation (e.g. of remote management functionality) that I trust. I also can't repair vulnerabilities in it without the cooperation of both Intel and the vendor (which is often not forthcoming).

Consider the Authorization header bug in the ME's webserver and multiply it by how many machines you claim use this remote management functionality. That's horrifying.
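For context, the widely reported root cause of that AMT bug (CVE-2017-5689) was a digest-auth comparison bounded by the length of the attacker-supplied response. A minimal Python sketch of that bug class (function names and the hash value here are illustrative, not Intel's actual code):

```python
def check_response(expected_hash: str, supplied_hash: str) -> bool:
    # BUG: only as many characters are compared as the ATTACKER supplied,
    # mirroring the reported strncmp(expected, supplied, strlen(supplied)).
    # An empty supplied string therefore always "matches".
    return expected_hash[:len(supplied_hash)] == supplied_hash

expected = "6629fae49393a05397450978507c4ef1"   # made-up digest value
assert check_response(expected, expected)        # valid credentials pass
assert not check_response(expected, "deadbeef")  # wrong credentials fail
assert check_response(expected, "")              # empty response bypasses auth
```

Sending an empty `response=` field in the Authorization header was enough to get full admin access on vulnerable machines, which is exactly why the scale of deployment makes it so frightening.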

b) it has DMA access to main memory, which is insane.

Look at the fact that every server nowadays has a BMC, in addition to the ME. On a client device the ME would be used to implement similar functionality, so the BMC is actually a wasteful duplication - but server vendors have to use a BMC because they can't program the ME to implement the remote management functionality they need, because only Intel can program the ME. This is stupid.


> It’s expensive to produce chip variants, so doubtless further cost pressures on Intel led them to put the ME in the core shared across all products.

Would it be possible in future CPU designs to put a jumper in, e.g., the ME power path? Closed by default (and possibly forced closed in enterprise-targeted devices), but the option exists to disable the ME without requiring an additional CPU variant.


From a hardware perspective, it’s an easy problem to solve. This is a wetware problem, however.

Back when MEs were discrete, you would inevitably have some machines with them and some without. Someone would order a bunch of machines without them to “save money”, or they bought a model that just didn’t have an ME add-on offered by the OEM.

That meant that occasionally you had to actually have the machine in your presence to service it. You end up designing two processes/procedures based on whether you are remote or not. Lack of MEs actually increased labor costs by reducing the number of machines a tech could manage (on average).

Having a CPU fuse essentially winds the clock back to the discrete ME days. Someone will place an order for SKU ENCH-81-U instead of EMCH-81-U and you end up with 500 machines with the ME fuse blown. Inevitably there will be a big enough restock fee that someone in accounting will say “just use them.”

(The same applies to things like having/not having a TPM module, etc.)


I'm not OP, but to respond to question 1, allowing users to disable the feature would also allow attackers to disable the feature. If you're relying on ME to provide remote access so that you can clean and repair infected machines, then it's game over for you if the attacker can disable ME.

It would have made much more sense to require you to enable it before first use, and ship it as disabled from the factory. Enablement should work like blowing an eFuse where it's never off once it's turned on, but if you never turn it on it doesn't exist. Then I don't have to worry about the feature unless I know exactly what it is and how to use it.
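The enable-once semantics proposed here can be modeled in a few lines (a hypothetical sketch of the suggestion, not any real Intel mechanism):

```python
class EnableOnceFeature:
    """Model of 'ship disabled, enabling blows an eFuse': the state can
    go from off to on exactly once and can never be turned off again."""

    def __init__(self) -> None:
        self.enabled = False  # factory default: the feature doesn't exist

    def enable(self) -> None:
        self.enabled = True   # one-way transition, like blowing an eFuse

    def disable(self) -> None:
        if self.enabled:
            raise RuntimeError("eFuse already blown; cannot disable")

me = EnableOnceFeature()
assert not me.enabled   # safe by default if you never touch it
me.enable()
assert me.enabled       # on, permanently
try:
    me.disable()        # an attacker can't lock the sysadmin out either
except RuntimeError:
    pass
```

This answers the "attackers could disable it" objection: an attacker can never turn the feature off on a fleet that depends on it, while a user who never enables it never exposes the attack surface.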


"I'm not OP, but to respond to question 1, allowing users to disable the feature would also allow attackers to disable the feature."

Older products had jumpers, physical switches, or software mechanisms for securely updating firmware. The first two are immune to most types of remote attacks if done in hardware. Intel already uses signed updates for microcode which people aren't compromising remotely left and right. Intel supporting a mechanism like those that already existed in the market for disabling the backdoor would not give widespread, remote access to systems. If anything, it would block it by having less privileged, 0-day-ridden software running.

I'll also note that the non-Intel market, from OpenPOWER to embedded, has options ranging from open firmware (including Open Firmware itself) to physical mechanisms to third-party software. Intel is deliberately ignoring those options, for undisclosed reasons that probably don't benefit users.


> Why did your team decide to enable ME on ALL consumer grade chips?

Can you please provide a reference? I've been trying to enable ME forever for my consumer-grade i7 with Intel motherboard for remote management, and I can't seem to be able to.


The core of Intel ME is enabled on all chips since 2008. But features like remote management aren't. Intel ME is used for things like DRM (see PASP) or hardware bring-up and power management.


The ME is already enabled. Maybe you're referring to AMT [0,1] (which might not be included)?

[0]: https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

[1]: https://www.intel.com/content/www/us/en/architecture-and-tec...


I looked into something similar recently. The following is based on plenty of research, but Intel's information can be a bit ambiguous and incomplete at times, so I can't promise I have every detail right.

FOR OTHERS: Note that you might be able to disable ME's remote access simply by ordering a computer with "No VPro".

1. ME is a platform with many applications that run on it; AMT (Active Management Technology) is one of them.

2. AMT has many components; remote management is one of them.

3. AMT comes in multiple 'editions' (my word, not Intel's) with different features. The Small Business Technology (SBT) edition does not provide remote access by design, the idea (AFAICT) being that small businesses don't want to set up and manage management servers, and that without those servers remote access would be insecure.

4. If in MEBx[0], you see "Small Business Technology", then there's no remote management - unless there's another remote management function in ME that is independent of AMT. Also, the first reference below provides the official method of identifying SBT implementations (via a flag in some table). I discovered it on a system ordered with a "No VPro"[1] network card (I'm still not sure why that's a spec of the NIC and not the processor).

Here are a couple of useful references:

* SBT: https://software.intel.com/en-us/documentation/amt-reference...

* MEBx on i7 processors (the title also specifies a chipset; I'm not sure how much that matters): http://download.intel.com/support/motherboards/desktop/sb/in...

.............

[0] MEBx is Management Engine BIOS Extension: the text-mode, pre-OS console UI for configuring ME

[1] VPro is not a product or technology. It's merely branding for, AFAICT, an ambiguously defined group of products that IT professionals might be interested in. It includes AMT (which is also part of ME and often marketed independently), TXT (Trusted Execution Technology), and more.


  to the best of my knowledge but I honestly believe I would know
Honestly, if a three letter agency was working with a tech company to produce a back door, the last people I would expect to know would be most of the engineers involved in the implementation.


> Honestly, if a three letter agency was working with a tech company to produce a back door, the last people I would expect to know would be most of the engineers involved in the implementation.

Who would the first people be then?


If I were a 3 letter agency with a large budget, I would insert someone as a project manager at the target company. If I couldn't do that, I would work with their C level folks.


Preferably no one in the implementing company. Work using customer pressure, say an important bank. And later you swoop in and get exclusive access during a nice and cozy dinner with one of the Cs. It's more like judo than brute force arm twisting.


Kudos for speaking up about it, understandably with a throwaway account - which unfortunately doesn't help prove that what you say is in any way truthful. But you probably still work for them and enjoy a nice salary, so I can't blame you at all there. I do wish more people would be willing to put their careers on the line to say the right thing. This is one of the underlying problems: when smart people go along with bad things, very bad things can and will happen. If, on the contrary, smart people speak out about bad things, those bad things will be less likely to unfold on the large scale we so often see in SV.

To your points though: it is a great perspective and helps to illustrate a sliver of possibility of innocence on Intel's part, but it's a little weak given that such an operation would have been compartmentalized, with political objectives/partnerships thereof obviously kept out of the system's technical development.


> But you probably still work for them and enjoy a nice salary. So can't blame you at all there.

Why not? Is a salary a good ethical justification for mistreating other people?


Well, you're right that salary is no justification for mistreating other people, but I can at least sympathize with OP, who has maybe rationalized going along with something shady because he himself was mistreated/deceived and perhaps wants to believe ME is not a tool for exploitation. For many, 'ignorance is bliss' is a way of life that works for them, so it is what it is.


OP's explanation is that this was a shitty decision, made for decent reasons. So... no? But that's not a relevant question?


Man, anybody with a remote idea of how IT works would have said it was a very bad idea. I can't believe in genuineness here. Nobody smart enough to design that system is dumb enough not to understand the consequences. So it was knowingly decided to create this monster and ship it to the entire world.


> Nobody smart enough to design that system is dumb enough no not understand the consequences.

Do you remember the plain-text password leaks from Yahoo? In the real world nothing has to actually be true/good/secure. All that matters is that users feel it is; the reality doesn't matter.

As long as the focus is on earning more money/power/control, this is always going to happen.


Unfortunately, in the real world, many times those of us who do have a remote idea suggest that things are "very bad ideas" but nonetheless get ignored by those who actually make the decisions.


Why doesn't Intel offer their chips without an ME, as an option? The mandatory nature makes it malicious.


Saves money to have only one assembly line of chips. Hell, the i5 chips they make now are just i7s with some of the features disabled. So if they make an i7 and there's an error in some part of the chip that's specific to the i7 features, they can disable the i7 parts and sell it as a working i5. At least this is what I recall from an article a year ago on here about that.


This is known as the “Silicon Lottery”


What percentage of people would really want that? The vast majority of consumers don’t know/care. Or it’s sold as a feature.

Most businesses probaby WANT the feature. See other comments in this discussion about lights-out management.

I’m guessing it wouldn’t be economically worth it.


If you’re not using it, then it’s a big unpatched vulnerability. I’ve never worked anywhere that uses it, although I’m sure a few places do.


There are parts of ME that you need (like the BUP module for configuring on boot).


Why would they do something as ridiculous as telling you its true purpose?


They wouldn't. As I said, this is to the best of my knowledge.

However, I believe I would know because it's not like one day the CEO came to us with a folder filled with requirements to be implemented. This is something that started very small ("find a way to force-reboot a PC remotely if it's non-responsive") and evolved from there over months/years. I endured way too many meetings where design decisions were made. Unless there were secret CIA agents disguised as my colleagues, I really believe it was designed by Intel engineers all the way through.

I have no issues with people criticizing the product for its failures. I agree with them. But every time I see someone claiming this was a CIA thing, it actually hits me personally.

Then again, I'll never be able to convince anyone of anything. I just felt like saying something this time.

I guess I'm having a bad morning :)


Having worked for Intel (in the open source org) I trust you. I've seen first-hand how a cool, small, simple feature can blossom into something Dr. Frankenstein would be proud of.

Also, I think people here severely underestimate the red tape and the huge effort needed to implement something even mildly complex at Intel scale. Developing ME under wraps with full CIA-like functionality would be staggeringly difficult. I've seen the effort needed to get the BIOS to work on prototype boards without crashing or destroying the hardware; getting ME to work reliably on all boards would be one order of magnitude harder, and making it spy CIA-style would add two more orders of magnitude. I think people don't really understand how difficult it is to get something that close to the metal to work reliably; something able to poke inside the memory of a running OS - forget about it.

Also, I think the readers of HN severely overestimate the effort the CIA needs to spy on internet users - why even try to bug the firmware when people actively give away their privacy via apps that they themselves install?


> I've seen first hand how a cool, small, simple feature is blossoming into something dr. Frankenstein would be proud of.

Complete aside, but the whole story of Frankenstein is about how Dr. Frankenstein is repulsed by his actions the moment that he brings the monster to life. So he most certainly wasn't "proud" of his actions, he was horrified by them. But I agree that this is likely how some of the engineers who worked on Intel ME would feel too.

> why even try to bug the firmware when people actively share their privacy via apps that they themselves install???

We know (thanks to Snowden and WikiLeaks) that the NSA and CIA have programs like this, so it's actually more incredible that you don't believe that the CIA or NSA would invest resources in adding backdoors to things like Intel ME. I don't buy that they designed it, but given that we know they intentionally sabotage internet standards it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing, so they can exploit them.


> So he most certainly wasn't "proud" of his actions, he was horrified by them.

In the end, yes. But the novel starts with him being so proud of the golem that he takes it home with disastrous results. Hmmm, maybe the comparison to ME isn't that far-fetched.

> I don't buy that they designed it

Yep, this is what I'm saying - it's unlikely that they ever told Intel "put this in there".

> it's very likely they sabotaged it in some manner. Or at the very least they have security vulnerabilities they are not disclosing

Absolutely, yes. They would be vastly incompetent not to have them, in fact. What I don't agree about with HN crowd is the threat profile of such an exploit.

I have trouble believing that they use them on a mass scale. There are so many people looking at the ME that using any exploit on a massive scale would disclose it almost immediately and allow the 'enemy' to develop protections. Given the extraordinary capabilities of such an exploit, and hence its value, they probably need to protect it and will use it only when absolutely necessary; so the vast, vast majority of HN users would never be subjected to such an exploit.

On the other hand, if your person is interesting enough to NSA to deploy such an exploit against your devices, probably you have vastly more significant problems, like trying to stay outside the visual range of a Predator drone. If any Three Letter Agency will deploy such an exploit against your PC, you can be absolutely sure that they have already bugged your phones, and not with a Stinger device, but tapping directly into the data feed at the phone exchange. Probably you have to incinerate your trash because the garbage men are spooks - this is the kind of threat that I assume you're facing if a TLA is trying to bug your ME.


> In the end, yes. But the novel starts with him being so proud of the golem that he takes it home with disastrous results.

We must've read very different novels. In Chapter 5[1] (when he finally recounts how he brought the golem to life, after talking about his life and his studies up to that point) it's clear that he instantly regretted it.

> I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.

And he didn't take it home with him. He leaves his laboratory and heads back home. The golem lives in the forest for a long time, and finds a family living in a cottage. While hiding from them, he learns to speak, and tries to talk to them. They shun him, and he is filled with anger at his creator for creating him and leaving him alone. So he finds Frankenstein's home and then kills his family.

Maybe some adaptations of the book have different takes on this (I've only ever read the original), but I would argue that a depiction which shows Frankenstein regretting his decision much later (and the golem's murder of his family as something other than revenge against his creator for abandoning him) is missing the point of Shelley's story.

[1]: https://www.gutenberg.org/files/84/84-h/84-h.htm#chap05


The concern isn't so much that the CIA will use it, but that someone else will find it and use it before the CIA does--a problem that is avoided by having the CIA disclose the vulnerability instead of keeping it for a rainy day.


How about this scenario: there is now a common unified interface for all computers. If a vulnerability is discovered, then all systems are vulnerable and must be patched. How are those patches delivered? Their protection may hinge on a single signing key. How well controlled is that signing key?
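As a toy model of that single point of failure (everything here is hypothetical; an HMAC stands in for a real asymmetric signature just to keep the sketch self-contained):

```python
import hashlib
import hmac

# One secret protects the update channel for every device in the fleet.
VENDOR_SIGNING_KEY = b"hypothetical-vendor-signing-key"

def sign_update(firmware: bytes, key: bytes) -> bytes:
    return hmac.new(key, firmware, hashlib.sha256).digest()

def device_accepts(firmware: bytes, signature: bytes) -> bool:
    expected = sign_update(firmware, VENDOR_SIGNING_KEY)
    return hmac.compare_digest(expected, signature)

legit = b"firmware v1.2"
assert device_accepts(legit, sign_update(legit, VENDOR_SIGNING_KEY))

# Anyone who obtains the one signing key can push "valid" updates
# to every machine at once:
evil = b"malicious payload"
assert device_accepts(evil, sign_update(evil, VENDOR_SIGNING_KEY))
```

The verification math can be perfect and the scheme still fails globally if that one key leaks, is coerced out of the vendor, or is misused internally.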


I believe you.

But looking at the International Obfuscated C Code Contest (http://www.ioccc.org/) entries, and knowing how much I have to force my eyes not to glaze over whenever a colleague sends me a 700-line pull request: if one of your colleagues waited until the deadline to send a massive pull request for their part of the project, can you say the deadline would be pushed back until every single line had been meticulously analyzed by hand, to assert that nothing nefarious could possibly happen in their code?

Just one of your coworkers would need to believe in a greater purpose, for king and country, and to have grown up in a large family with a brother or cousin who's part of the intelligence community.

It sounds far-fetched, but so does the Bay of Pigs.


> a folder filled with requirements

That's not how the intelligence agencies operate.

> evolved from there over months/years

THIS is how they influence standards and design choices. We know that in 2013 the NSA budgeted at least $250M for programs such as "BULLRUN", which was intended to "Insert vulnerabilities into commercial encryption networks, IT systems, and endpoint communication devices..."[1].

For an example of how this works, see John Gilmore's description[2] of how the NSA influenced IPSEC. They don't use a folder of requirements; instead they gain influence over enough people to complain about "efficiency" or other distractions and occasionally add a confusing or complicated requirement that just happens to weaken security.

PHK gave an outstanding talk[3] that everyone should see about the broader subject of how the common model most people have about how the NSA works is obsolete.

[1] http://www.nytimes.com/interactive/2013/09/05/us/documents-r...

[2] https://www.mail-archive.com/cryptography@metzdowd.com/msg12...

[3] https://archive.fosdem.org/2014/schedule/event/nsa_operation...


> Unless there were secret CIA agents disguised as my colleagues

That's a thing actually.

> I'll never be able to convince anyone of anything.

I believe you. Conspiracy theories are fun but ultimately I know that secrets are hard to keep secret.


Do you know technical details - like how many processes are even running under the Minix OS?

Also which internal or external groups lead the code development of those processes?

Is the code accessible to any employee/engineer with a technical relationship to IME?


Ok, however: sometimes to hide a thing, all you've gotta do is...you just hide it in plain sight and give it a slightly different "color scheme"


> > Intel ME and the (assumed [0]) partnership with CIA to design and build this system

Then why do you need security clearance to work on Intel ME?


Think about it logically for a second. Regardless of the decisions that led up to the inclusion of Intel ME in Intel CPUs (i.e. regardless of whether the CIA was involved or not), compromising Intel ME is still a security risk for Intel customers, so of course they're going to limit access to a select few that they trust, and that the government can trust.

I'm fairly certain similar restrictions will also apply to those who are granted access to knowledge about CPU microcode, which is an equally large security risk that nobody with even a basic understanding of how CPUs work is blaming on the CIA.


Security through obscurity doesn't work.


Do you have a citation for that? That sounds interesting


The "citation" is a Twitter post [0] that included a screenshot of an anonymous post to 4chan by a supposed Intel employee who claims to have worked on the Management Engine team for the last three years. It was linked upthread.

[0]: https://twitter.com/9th_prestige/status/928740294090285057


Thank you. That is worrisome.


Why not make a physical turn off switch on motherboard to disable it?


regarding 1): hiding in plain sight is sometimes a valid strategy. So is heavy compartmentalization.


wow, thought the exact same thing haha


It should've only been sold on a special "business class" series of CPUs. Intel already loves having dozens of variants, as evidenced by recent market offerings; and it's not like they don't already have dedicated business-class CPUs for workstations. Simply only sell ME as an "addon" tier, and that limits the potential damage.

Incidentally, did you hear about Silent Bob is Silent? What are your thoughts on that vuln?


> If it was something done for the CIA, I believe it would probably have been kept secret instead of marketed.

If that were the case, we would have become suspicious much earlier than we otherwise did (because there is dedicated hardware).

So some people-friendly features have to be bundled in along with the anti-features. This may not be the real story, but it's one of the possibilities.


> were going to run a lot more stuff -- think a full JVM

Is a JVM really a lot more stuff than Minix OS?


"The market" is only going to "punish" you if..

- The masses actually care

- There is an alternative

Neither is the case here. Most people couldn't care less about things like ME, and AMD and Intel are an oligopoly. If you want a modern x86-64 CPU you have only those two choices, and both do this. That is the problem here, not fiat currency.


I'd say that there's a weird dependence between your two points; people often seem to care because there is an alternative.

Examples of this might be Fair Trade coffee, or energy saving light bulbs. Prior to their marketing, I doubt that vague ethical considerations were on the 'top 10' list of consumer wants from a new product, if they registered at all.

But when people are presented with a choice, if you can, why not get the better stuff?

Another analogy might be something like the TPM chips on iPhones. I very much doubt that focus groups or surveys at Apple found TPMs in the list of requested new features. However, things like TPMs get written up, and add to the things that journalists can describe around the vague theme of relative security and relative privacy; important concepts to consumers. Once this is internalized, when making a comparison between phones, a motivated consumer might consider the absence of a TPM a problem.

I doubt that Intel would start marketing _No Backdoor™_ chips, but I could imagine a consumer-facing hardware vendor coming up with some kind of comparison-based branding for avoiding the ME. There's a reasonable chance that Apple may continue to integrate vertically and get away from Intel over the next few years. And I was extremely surprised that Purism (a company basically founded on resentment towards the ME) could crowd-fund millions of dollars in the way it has.


Agreed. See my pie in the sky reply to majewsky in sibling comment - but yeah, no easy way for the average man to 'punish' big bad Intel it would seem.

Perhaps another technique to punish them is a class action lawsuit. With all the companies potentially affected by this, and what now seems like forthcoming evidence of intent, there may be a solid basis for a case, but I'm no lawyer.


> Companies like Intel, who are complicit in helping CIA or any intel agency (government, rogue or otherwise) infiltrate and exploit our systems - need to be held accountable by the market.

At the same time as "buy American"? You're aware that any American chipmaker will be gag-ordered to help the CIA?


Fair point, but let's not paint such a bleak picture. Gag orders are an unfair (unconstitutional?) weapon of tyrannical regimes and should be condemned as such. Aside from taking political action to remove that tool from big brother's arsenal, we as hackers/entrepreneurs can build systems and strategize on how to mitigate and avoid gag-order scenarios altogether.

Perhaps this is pie in the sky but a future where open hardware is as ubiquitous/accessible/easy to use as open source software would make it easier to change chips or gut your laptop and re-build it with hardware that you can trust.


The landscape is of course complex, but I think that companies exposing their clients to such risk will only learn to protect their clients' rights to privacy and self-determination once organized groups of clients fight back in court against this kind of practice. This is a question of human rights, not just a technical feature. Companies need to take legal responsibility for their decisions in any case where the security, freedom, and free will of clients are at risk.


Seriously? A 4chan post?

While the ME is worrying for many reasons, there's absolutely zero evidence that the Intel ME contains a backdoor.

Backdoors don't stay hidden forever.


That's a bad argument. Firstly, it's my understanding that there have already been root-access 0-days discovered in the ME (since patched after being exposed). And the USB JTAG backdoor is the whole point of this post.

Secondly, a security hole and a backdoor are interchangeable these days. We'll never be able to prove which new 0-days are deliberate, and as far as impact goes it hardly matters whether they are.


We're talking about deliberate government backdoors, and it's my opinion that those are highly unlikely.

The ME is a really bad idea because it introduces massive, unnecessary attack surface and vulnerabilities are inevitable, but no conspiracy.


How can you prove a vulnerability isn't deliberate?

We've seen deliberate security vulnerabilities before (DUAL_EC).


You can't, but let me point out that DUAL_EC was a "nobody but us" backdoor that required their private key to use.

(and yes, it backfired)

If they're introducing regular vulnerabilities, they're also making themselves vulnerable, given that the US government is one of the biggest Intel customers.


I remember back in the good old days of cryptography export restrictions, when the NSA had a much simpler "nobody but us" approach: you encrypted data with an xx-bit private key, half of which was shared with the NSA. Should they need to break content, the other half of the key could be brute-forced at costs that were economically feasible to the NSA (for targeted use, not blanket surveillance), while the full-length key would be unbreakable (in theory) by anyone without prior knowledge of that other half.
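The economics of that scheme are easy to illustrate with toy numbers (everything below is hypothetical; real export-era schemes reportedly escrowed e.g. 24 of 64 key bits). Knowing the shared portion collapses the brute-force space to whatever remains:

```python
import hashlib

KEY_BITS = 40                        # toy key size
SHARED_BITS = 24                     # bits escrowed with the agency
REMAINING = KEY_BITS - SHARED_BITS   # only 2**16 keys left to try

secret_key = 0x123456789A            # the user's full 40-bit key
shared_half = secret_key >> REMAINING  # the portion the agency holds
# Stand-in for recognizing the right key (e.g. a known-plaintext check):
fingerprint = hashlib.sha256(secret_key.to_bytes(5, "big")).digest()

def brute_force(shared):
    base = shared << REMAINING
    for low in range(1 << REMAINING):   # 65,536 candidates: trivial
        candidate = base | low
        digest = hashlib.sha256(candidate.to_bytes(5, "big")).digest()
        if digest == fingerprint:
            return candidate
    return None

assert brute_force(shared_half) == secret_key  # cheap with the escrow
```

Without the escrowed bits the attacker faces the full 2**40 space; with them, the residual search is small enough for targeted, per-message breaking while staying too expensive for blanket use by anyone without the shared half.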


4chan has been popular for leaks/reverse engineering because of its anonymity and the fact that it's seen as (whether or not this is true, and I would wager that it isn't) as a "hacker haven". For example, a guy on 4chan reverse engineered Google's new captcha system almost as soon as it came out, leading Google to eventually hire him in exchange for his deleting the GitHub repo he was using.


4chan is also well known to fabricate evidence, and everything in the post (minus the alleged backdoors) has been publicly known.


Some diamonds, lots of rubble. Some land mines:)


Some would argue the entire design of ME is evidence that it IS a backdoor.


AMT (server grade ME) is definitely a door of some sort, but as an advertised feature, I don't know that Intel is hiding it in the back.

https://www.intel.com/content/www/us/en/architecture-and-tec...


> AMT (server grade ME)

AMT is an application that runs on ME, and it's on very many (most? all?) Intel-based desktop/laptops.



