This type of result is why the buzz (at least in academic circles) about the value of "computational research" as a "third pillar" of research is wrong-headed. Running computer simulations is a great way to compare theory to real-world data or to extrapolate out the implications of a particular theoretical idea, but it's still just another tool of theoretical research, and not a separate type of research in and of itself. The computational results are only as good as the model, which is all theory.
That doesn't seem fair. The article doesn't say, but in this case it sounds like the theories were based on previous experiments, not on computer simulations. And the deeper, underlying theories are correct. The problem is that we can't efficiently simulate large enough systems to discover phenomena like this computationally. But in the future, with better simulation algorithms, or possibly even large quantum computers, it should be possible to get results like this purely from simulations.
(Of course, I qualified my argument by saying "in the future." So maybe you are right that computational research isn't yet a third pillar of research. But I do think it will become one, as our simulation tools incrementally increase in power.)
From my experience, there seem to be two types of theoretical/computational research scientist: the one who doesn't really care about the numbers but focuses on the equations and what they really tell us, and the one who cares only about the numbers. The latter is dangerous.
Indeed, in computational chemistry there seem to be two classes of theorists. The first group attempts to simulate a realistic system using high-performance computing and typically aims for agreement with existing experiments. The second group tries to think through problems and hopes to provide explanations that don't need the brute-force approach. The former approach often lacks both creativity and quantitative accuracy. The latter might be impossible for some chaotic or complex systems. I have to agree that "theorists" who are simply trying to compute some quantity to match existing data must do so very carefully.
Interesting result. Even more if they can replace a metal mask layer with this, although given the requirements for growing it I'm guessing it would be a huge challenge to put it on top.
I like the implication that there is less difference between photons and fermions than we think.
I think the analogy between electron and photon travel was just that, an analogy with no real-world implications. And that statement at the end of the article about new physics was made too quickly.
From the looks of it, they've just found that the mechanism that produces the drift velocity of electrons in normal wires has no relationship to the way electrons travel in these nanoribbons of graphene...
Their models were probably based on (or related to) bounce-and-repel-type algorithms that don't fully apply to this very special edge case.
In fiber optic cables photons travel close to the speed of light (50% of it or more) by reflection and the waveguide effect.
In normal metal wires, electrons drift something like a few mm or cm per second, minute, or even hour (averaged out), because they are constantly bouncing around as they go.
In these nano-wires/ribbons there is little electron bouncing, and some waveguide-like travel channels are present... They have properties of both metal wires and fiber optics... But the electrons are still ordinary electrons.
At the end of the day, there is still a fundamental difference between packets of energy (that some say are nothing more than vibrations traveling through whatever dimension) and elementary particles.
If electrons only travel a few mm or cm a second, how can I ping a server in Europe in fractions of a second? I know much of the internet is fiber, but surely there is a lot of copper there too. Could you explain?
You have to distinguish between "Electron Travel" and "Signal Propagation".
Take a 10 ft plastic pipe of the same diameter as ping-pong balls.
Fill this pipe from start to end sequentially and fully with ping-pong balls.
Now stick your finger in one end.
While you only moved the ping-pong balls a few inches, a ball will almost immediately come out the other end... The "signal" "traveled" much faster than the actual balls.
Sending signals via electrons through wires is fundamentally different from sending photons through fibers.
To visualize electrical signals, imagine a tube completely full of balls. When you push a new ball into one end, a different ball comes out the other. The effect of adding the ball is seen very quickly (i.e. a change in the field), but the speed the ball moves through the tube (i.e. how fast electrons move through the conductor) is quite different.
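The two analogies above can be put in rough numbers. A minimal back-of-the-envelope sketch, where the copper electron density, wire cross-section, current, and the ~0.7c cable propagation factor are all assumed, typical textbook values:

```python
# Drift velocity of electrons in a copper wire vs. how fast the signal moves.
# All values below are assumed, typical textbook numbers for illustration.
n = 8.5e28      # free electrons per m^3 in copper (approx.)
q = 1.602e-19   # elementary charge, in coulombs
A = 1e-6        # wire cross-section: 1 mm^2, in m^2
I = 1.0         # current, in amperes

# v = I / (n * q * A): how fast the "ping-pong balls" actually move
v_drift = I / (n * q * A)
print(f"electron drift: {v_drift * 1000:.3f} mm/s")   # well under 0.1 mm/s

# The signal (the field change) propagates at a large fraction of c
c = 3.0e8
v_signal = 0.7 * c   # roughly 0.6-0.9 c depending on the cable

# Idealized round trip to a server ~6000 km away, ignoring routing/switching
rtt = 2 * 6.0e6 / v_signal
print(f"idealized RTT:  {rtt * 1000:.0f} ms")
```

So the electrons themselves crawl at a fraction of a millimeter per second, while the field change crosses an ocean in tens of milliseconds, which is why transatlantic pings over copper are still fast.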
Graphene has the potential to be used as poly or substrate. Much more relevant than replacing metal, but I don't know of anybody who could grow anything over it. (Maybe it's just a matter of doing it upside down? How much precision do modern fabs lose with each added layer?)
I don't get why you see less difference. Fermions can have ballistic trajectories too; that's not news.
Replacing metal could be a very big deal too. Chips are mostly power-limited these days; slashing the resistance of the metal layers would be a big active power win. In fact, while it would not help leakage (which constrains how many transistors you can pack in), improving active power consumption could correlate directly to improvements in operating frequency...
This research is very exciting. Hopefully we will soon start seeing graphene used to implement microprocessors, for massive performance gains, and possibly other technologies will follow suit. I can't help but feel like this is the start of the leap in technology we need to become more like the futuristic human societies of science fiction, whose advanced technology seems to us like it can only live in fiction. It's important to keep in mind, though, that the same was said about submarines and other technologies that have since been invented after the publication of the fiction containing them, but were not even thought of (or at least did not exist) at the time. I'm excited to read this article and see what breakthroughs we may have had.
I doubt we'll see "massive performance gains" from the very first version of such a chip. The gains will come mostly the same way we saw Moore's Law in silicon: we'll get 5 GHz graphene processors, then 7 GHz ones two years later, then 10 GHz two years after that, and so on, until we reach terahertz a couple of decades later.
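For what it's worth, the pace implied above (5 GHz to 10 GHz over four years, i.e. clock speed doubling roughly every four years) can be extrapolated directly; the numbers here are purely illustrative assumptions, not a prediction:

```python
import math

# Assumed scaling: clock speed doubles every ~4 years (5 GHz -> 10 GHz above).
start_ghz = 5.0
target_ghz = 1000.0   # 1 THz
doubling_years = 4.0

# Number of doublings needed: log2(target / start)
doublings = math.log2(target_ghz / start_ghz)   # ~7.6 doublings
years = doublings * doubling_years
print(f"{years:.0f} years")   # roughly three decades at that pace
```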
Even if we do see somewhat big gains initially, they will probably have a price point to match, like we saw with SSDs, which were ~3x faster than HDDs on average but 10x more expensive per GB.
Something I've been wondering lately (though I haven't looked into whether it's simple or even possible) is that nobody talks about whether the material is recyclable, or whether it will just fill the dumps with more electronic trash that pollutes forever...
Besides that, I really look forward to the future of this tech :)
Graphene is mentioned every 6 months in articles and magazines I read, and I can't remember how long this has been going on. Same goes for quantum computing.
I really hope at some point it will actually become a reality. The skeptics at the bottom of the article don't believe graphene will work for digital applications.
I recall reading about printable, flexible OLED displays in the 1990s, but it wasn't until recently that OLED displays reached consumer technology, and even more recently that they began improving in size and color fidelity relative to cost. Give graphene another 10-20 years and we might see applications.
My guess is that most of these "miracle technologies" won't work out the way we hope, but that doesn't mean it isn't interesting to track their progress. Plus the research being done on them might still be useful. Ultimately, there might be some property of graphene that makes it unsuitable for widespread use, but some other related material might step into its place.
Unfortunately, we get attached to certain big names and lay all our hopes and dreams on their backs, and they so often don't work out. We are only human, and we can't follow the whole of all technology that lies on the horizon, but even if the big names don't work out, there's still a lot of promise in this area, and I get excited by the research.
Came here to say pretty much the same thing. I don't know if it's due to impatience, or this being overhyped, or what. I've read articles claiming graphene will revolutionize everything from battery tech to nanotechnology, that it will make it possible to cheaply build a car weighing thousands of pounds less than one made of steel or aluminum, and that it can be produced with a DVD burner. But I never hear mention of mainstream use. Is this similar to solar power being perpetually 10 years away from breaking into the mainstream?
Quantum computing might become reality faster than you think. The next-gen D-Wave with 256 qubits has the potential to find a commercial niche in supercomputing. They already have demos where a combination of a D-Wave and a classical supercomputer can perform some annealing tasks much faster than the classical machine alone.
There was a recent article here on HN that seemed to debunk the D-Wave machines, showing that a classical (non-quantum) algorithm was probably the best explanation for the results the D-Wave produces.
In light of that, true quantum computing may be farther off than you're expecting.
You mean the paper that has been countered by [1]? On this I'd wait and see what the reaction to the counter is going to be. I, for one, am certainly not convinced that classically modeling one particular implementation would disprove quantum computing for all implementations. As far as I've read, we have enough examples where only quantum models fit, all of which would have to be debunked.
I think we will manufacture graphene at larger scales before we hit larger-scale quantum computing. They are both awesome, very promising technologies.
Of those, 3d integration is the one closest to market. It's already used in some cellphone chips, and will likely hit mass market within 3-5 years.
Fully light-based CPUs won't happen for a while, but we are getting to the point where having some optical components to manage long-distance communication (within the chip, and off-chip to memory) begins to get closer to feasible. Think less than 10 years.
Graphene based cpus are not near at all, as we are currently at the point where we are merely studying the qualities of the material. The time from that to mass production is measured in decades.
Maybe? All we have now are glorified press releases. They paint a very rosy picture, however, as they are press releases, we only get to see the rosy side of the picture.
The principle of Crossbar RRAM is indeed fantastic -- take the RRAM ideas other companies have worked on and develop an implementation that works with current fabs, tools, and materials. However, a lot can go wrong in nanoscale manufacturing, and many of the things they're doing have never been characterized by anyone else. Frankly, we won't know how manufacturable it really is until it's in mass production.
Of the next-gen memory options, RRAM is the least disruptive to current systems, because its properties are sufficiently similar to flash (specifically, it needs wear leveling, which prevents its use as universal RAM) that if it wins, it will be used exactly as flash is used, only denser and faster.