This is bogus. If the US really thinks that Russia did the hack, then it's not because of an originating IP or company in Russia.
The US probably has hacked Russian communications from somewhere else that actually points the finger. However, they would never give that information up.
Either that or they have hacked a huge chain of these proxy operations, but that seems like a tall order.
Most of what I've heard of being used as support for attribution are "fingerprints" left behind on compromised devices. Things like characteristics of how variables were named in interpreted code, how binaries were compiled, specific attack vectors used, etc. I can't recall anyone recently pointing to first-hop originating IPs as being definitive evidence, in this age of cheaply traded botnets.
As someone who does a ton of Node/JS coding as well as some Python, I am personally offended! Ok, not really, but I think you're wrong :)
I think VB is more similar to Python in the sense that both languages were designed with some sense of simplicity and straightforwardness in mind.
Both PHP and Javascript are much more complex from a language design standpoint, which makes them more difficult to learn "well" for beginners. Yes, they are both hugely prolific because of the web, and lots of people copy/paste code around not having any clue how it works, so I won't deny your "generations of garbage" comment. But your assessment is a little too simplistic.
A good engineer can write high quality code in "almost" any programming language. It's the coder, not the language, that determines engineering quality.
> Both PHP and Javascript are much more complex from a language design standpoint
Complex in the sense that they were originally thrown together and the results of the program might best be described as "stochastic output", yeah I can agree with that.
> A good engineer can write high quality code in "almost" any programming language. It's the coder, not the language, that determines engineering quality.
Sure, but it sure seems funny that "high quality code" is nearly non-existent in PHP and javascript world ...
> Complex in the sense that they were originally thrown together and the results of the program might best be described as "stochastic output", yeah I can agree with that.
Absolutely, they are complex because of poor initial designs, and actually even more complex now due to all the efforts to maintain backwards compatibility...
> Sure, but it sure seems funny that "high quality code" is nearly non-existent in PHP and javascript world ...
jQuery is an incredibly well engineered library by John Resig. I think that jQuery was one of the big technological leaps forward that led to the proliferation of web applications.
For a long time, people thought that cross browser JS inconsistencies and confusing API standards meant that you couldn't build interactive UI applications in the browser. And now they are ubiquitous.
So I guess it depends on your definition of "high quality code". I don't think quality code is about some arbitrary coding aesthetic. It's about results, and it's about what your code does, and how it creates value.
>> So I guess it depends on your definition of "high quality code". I don't think quality code is about some arbitrary coding aesthetic. It's about results, and it's about what your code does, and how it creates value.
Moving the goal posts so that the definition of 'high quality' is basically code that compiles and runs as long as it 'creates value'? What about when you have to change it later? What about if it has to be really fast?
People who think that Javascript and PHP are crappy languages aren't just comparing these languages to 'arbitrary coding aesthetics'.
They're both sequential by default and lack threading, and that's hard to mess up. They're both great languages for easy projects.
PHP code is a breeze to work with using composer, and most packages use namespaces and have great intellisense. But good luck doing anything concurrently.
NodeJS is fast, and boy can it handle a lot of tasks at once using the event loop! But I can't for the life of me figure out modules or get good intellisense.
There's plenty of high quality code in each. I'd just never write a complex app in either.
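The event-loop point is easy to see in a few lines: several slow tasks overlap on a single thread with no explicit threading (the delays and labels here are made up purely for illustration):

```javascript
// Several "slow" tasks kicked off together on Node's single thread.
// Total wall time is roughly the slowest task, not the sum of all three.
const delay = (ms, label) =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms));

async function main() {
  const results = await Promise.all([
    delay(30, "a"),
    delay(10, "b"),
    delay(20, "c"),
  ]);
  return results; // resolves in ~30ms with ["a", "b", "c"] (input order preserved)
}
```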
>> A good engineer can write high quality code in "almost" any programming language. It's the coder, not the language, that determines engineering quality.
That's just a platitude, and it's wrong.
Languages with better features allow engineers at varying levels of experience to write better code than they otherwise might.
First, ES6 brings a lot of enhancements to JS. However, it also makes JS even more complex and unwieldy because now you have all of these new and old language features co-existing.
If you use a tool like ESLint, you can limit yourself to a subset of the whole ES6 standard that I think is a decent programming language.
There are also other ways to go about it, like using a language that compiles to JS, such as TypeScript, though maybe that would be considered cheating in the context of this argument.
I think they differ in that PHP is copypasta and Javascript is an infinite dependency tree of third-party libraries whose inner workings the developer has no idea of (or interest in).
VB is also an incredibly complex language, especially the pre-.NET versions.
It didn't even have a regular syntax, for the sake of backwards compatibility. For example, Line was a method on Form (kinda eh, but okay); but you couldn't actually invoke it as you'd normally invoke a method. It had to look like the LINE statement did in 1970s BASIC:
form.Line (1, 1) - (100, 100), vbRed
Note that parentheses here are not some kind of tuple syntax. Instead, they're a part of the special Line syntax, as well as the dash between. On the other hand, vbRed is just a named constant.
But wait, it gets better. Say, you want to draw a filled rectangle. Well, a rectangle is also defined by two points, so BASIC has historically used the LINE statement for that as well, and VB follows suit:
form.Line (1, 1) - (100, 100), vbRed, BF
Unlike vbRed, BF here is not a variable name - it's more special syntax. "B" stands for "block", and "F" stands for "fill" (so you can do "B" without "F").
Note that you can have a variable named BF. And if you do something like, say:
form.Line (1, 1) - (100, 100), BF
That is legal, and uses the value of that variable as the color of the line (instead of drawing a rectangle, as you might have expected). And it'll probably work, too, regardless of the type of "BF", because VB tries incredibly hard to implicitly convert something to something else when types don't match.
Note that all of this is part of the language syntax, not the library. The actual function just takes a bunch of arguments, same as any other; but the language then piles all this useless syntactic syrup on top of that.
This is just one tiny corner of the language, and not even the most headache-inducing one - you hit it once, you read the manual, and mostly that's that. But then there's stuff like default properties and the Let/Set distinction:
Let x = y
Set x = y
In VB, these two statements are both assignments, but what they assign to is different. And you actually have to know and understand the difference, because there isn't just one assignment syntax that does the right thing for everything - some types only work with Let, and some types must be assigned with Set to get what you want.
There are basically two types of cryptocurrency enthusiasts as far as I can tell. The first one believes that decentralized cryptocurrency and smart contracts will lead to a better world, free from the authoritarian rule of central banking and legal systems. And the second type, which is far more numerous, is hoping to make a quick buck off of price fluctuations in the underlying digital assets.
The NYT has a great article on this, where they suggest the bulk of Bitcoin users are Chinese citizens gambling on price fluctuations. And their stats seem to support this assertion:
There's at least one other type you're missing: people who appreciate the utility of an electronic equivalent to cash.
I pay for things with cryptocurrencies fairly frequently. I don't want every company I buy things from having my name, address, date of birth and credit card info if they don't need it. If I'm buying goods or services that don't require physical shipping, all they need to know is what I want and that the money has changed hands. Crypto facilitates those kinds of transactions.
PayPal, credit cards etc. all have their place, but for non-physical purchases from reputable parties I'm not sure how much you can improve on "pay money, receive thing."
It would have to be a very compelling argument to persuade me that transacting through a middle-man (who then holds all my personal information and card details and also takes a cut) is better than just handing over cash and getting what I want.
You don't need credit-card chargebacks until you do, but then you really need them. (And actually most of the time the threat of chargebacks keeps merchants honest).
Once you get to the point of using a reputable escrow system (which will have to charge a fee), that plus the bitcoin fees (either direct transfer fees or the implicit tax that is mining rewards) are unlikely to be cheaper than the credit card system (which doesn't have to burn immense amounts of processing power for business-as-usual). Having all participants be anonymous and untrusted adds a lot of overhead; in a civilized environment with a reasonable legal system you can shave that off by being willing to trust your counterparties (trust that is made possible by central clearing houses and verified identities).
One way to do escrow is with 2-of-3 signatures. If both parties to the transaction sign, the escrow agent doesn't have to get involved. On Ethereum it's easy to implement as a smart contract that lets the escrow agent choose who gets the money, and pays the agent a fee for the service (but nothing otherwise).
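Not actual contract code, but the release logic of such a 2-of-3 arrangement can be sketched as a toy state machine (every name and number here is made up for illustration):

```javascript
// Toy model of 2-of-3 escrow release logic (not real Ethereum/Solidity code).
// Funds are released once any two of {buyer, seller, agent} agree on a
// recipient; the agent's fee is only paid if the agent had to get involved.
function settleEscrow(amount, fee, votes) {
  // votes maps each voting party to who they think should be paid,
  // e.g. { buyer: "seller", seller: "seller" }
  const tally = {};
  for (const party of ["buyer", "seller", "agent"]) {
    const choice = votes[party];
    if (choice) tally[choice] = (tally[choice] || 0) + 1;
  }
  for (const [recipient, count] of Object.entries(tally)) {
    if (count >= 2) {
      const agentUsed = "agent" in votes;
      return {
        recipient,
        payout: agentUsed ? amount - fee : amount, // fee only when arbitrated
        agentFee: agentUsed ? fee : 0,
      };
    }
  }
  return null; // no 2-of-3 agreement yet; funds stay locked
}
```

So when both parties agree, `settleEscrow(100, 5, { buyer: "seller", seller: "seller" })` releases the full 100 with no fee; the agent only earns the 5 when asked to break a tie.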
Participants aren't necessarily anonymous; if you're buying from a known vendor and having something shipped to your house, neither party is all that anonymous. People are working on adding verified identities, for people who want them.
Ethereum hopes to do away with mining by early 2017.
> One way to do escrow is with 2-of-3 signatures. If both parties to the transaction sign, the escrow agent doesn't have to get involved. On Ethereum it's easy to implement as a smart contract that lets the escrow agent choose who gets the money, and pays the agent a fee for the service (but nothing otherwise).
> Participants aren't necessarily anonymous; if you're buying from a known vendor and having something shipped to your house, neither party is all that anonymous. People are working on adding verified identities, for people who want them.
At which point why use this instead of a credit card?
> Ethereum hopes to do away with mining by early 2017.
How are they doing byzantine-fault-tolerant consensus without it?
With the escrow, you can set it up to pay a fee only if you need the judge's services, so if you don't have a problem it's free. Or you can set up whatever other arrangement you like. Either way you're paying only for the arbitration, not for stockholder dividends.
They're switching to proof of stake. Early PoS designs have some issues, like the infamous "nothing at stake" problem, but theirs addresses those. People lock up ether for several months, and bet it on which blocks will be included in the chain. The blocks that get the best odds in the betting are the ones that get included, so basically you're betting on what everybody else will do. You start with low-confidence bets that don't risk much, and as you see other people's bets you progress to high-confidence bets that pay off better, and it converges.
Miners essentially do the same thing: by choosing a block to mine on, they're betting their energy cost on that block being chosen.
> With the escrow, you can set it up to pay a fee only if you need the judge's services, so if you don't have a problem it's free. Or you can set up whatever other arrangement you like.
Sure, but that doesn't really change anything. Their service will cost a certain amount to run, and so you'll end up paying an average of x amount per transaction/per dollar spent, whichever way you slice it.
> Either way you're paying only for the arbitration, not for stockholder dividends.
And yet for-profit companies usually end up being the most effective way to get something done. If I need a tree cut down in my yard I don't try to find some non-profit tree-surgeon collective, I call a professional from a reputable company. (And I would think that anyone who supports cryptocurrencies - which are all about directly transferring money without involving a social layer - would feel this way even more strongly.)
> They're switching to proof of stake. Early PoS designs have some issues, like the infamous "nothing at stake" problem, but theirs addresses those. People lock up ether for several months, and bet it on which blocks will be included in the chain. The blocks that get the best odds in the betting are the ones that get included, so basically you're betting on what everybody else will do. You start with low-confidence bets that don't risk much, and as you see other people's bets you progress to high-confidence bets that pay off better, and it converges.
> Miners essentially do the same thing: by choosing a block to mine on, they're betting their energy cost on that block being chosen.
Hmm. Doesn't that mean the reward for fraud is much higher? Can't someone just bet a massive amount that their fork will win, and then their fork wins precisely because they bet a massive amount on it?
And the gambling is what makes me skeptical of figures like "the DAO is worth $150 Million" or whatever.
I would like to know how much REAL MONEY went into this thing rather than its "value" as a result of funny-money speculation. I expect that any adult that put money or compute cycles into ethereum understands that the value could literally vaporize at any time, so it seems disingenuous to throw around dollar figures inflated by speculation. Is anyone _really_ losing their shirt at this point?
Actually, "gambling", aka "speculation", is the crucial ingredient in building the credibility of any tradable financial contract. It increases liquidity, and contrary to popular belief, it usually dampens volatility like a shock absorber. This is because non-speculative supply and demand tends to be much less normally distributed (herd behaviour) than speculative transaction direction, leading to large price spikes (see: Bitcoin and Cyprus). I welcome the "gamblers" with open arms, because when such one-way stampedes occur they provide the "other side", in return for a future price distribution skewed towards profit.
You're completely right when it comes to regular currencies, stocks, and so on - speculation is how prices stay accurate. If the price goes too far out of whack, speculators can make large amounts of money off of everyone else.
With cryptocurrencies, however, there isn't enough "legitimate activity" (i.e. people actually conducting business with Bitcoin) to allow a stable price to emerge. This makes them vulnerable to manipulation because there is no function for the form to follow. At least with regular currencies, speculation has to follow reality. With cryptocurrencies, reality follows speculation!
This leads to a vicious circle: people are reluctant to use cryptocurrencies for business because the price is unstable, and the price is unstable because not enough people are using them for business.
>If the price goes too far out of whack, speculators can make large amounts of money off of everyone else.
That's a wholly circular argument, and a counterfactual one.
If speculation influences prices then it's in the interest of speculators to create pricing mechanisms that are perpetually "out of whack" so they can profit from them.
There is no such thing as an accurate commodity or currency price, and there never can be. There's only market sentiment, and that's largely based on optimism or pessimism about the future - which is unknown.
Markets are just entrail reading, with very expensive and complicated entrails.
The DAO was more like a meta-entrail system with an extra layer or two of obfuscation. But it was no more stable than any other market, and fell prey to exactly the same problem - manipulation of mechanisms creating a dishonest illusion of objectivity for profit.
you clearly have absolutely no clue what you are talking about - a catastrophic dearth of experience in financial markets, nor any idea of the theory of speculation. I don't know where to start but this laughable statement is as good as any:
"If speculation influences prices then it's in the interest of speculators to create pricing mechanisms that are perpetually "out of whack" so they can profit from them"
How exactly will they influence said prices without trading? Which costs money? How would they move a price (cost money) then move it back (cost money) without constantly losing money? You need other people to take you out of your speculative position, and those other people must be (net-net) non-speculators, and sufficient in number (which was @omegaham's point). Separately, any market which is purposefully "perpetually out of whack" is not even a market, and will quickly tend to zero participants.
"Markets are just entrail reading"
Another eye-roller so vacant that I don't know how to respond.
I wish people like you wouldn't jump in with such certainty about subjects in which you are eminently and so evidently without the foggiest of any idea, but willing to get your word in anyway.
I think you're right that one of the big downsides currently for AWS lambda would be dev tooling, however one of the big upsides is that the code you throw up there should run "forever". This is potentially really useful if you are simply working on frontend code, and hitting these Lambda endpoints as a client.
I've set them up a few times for contact forms and other light pieces of functionality for static websites, and it fits that niche really well. Obviously, I think the serverless movement has some grander notions of the size and scale of applications that could be constructed, so we'll see.
There is no contract simulator AFAIK, but you can deploy contracts to a test network or your own private test network. The benefit there is that it's cheaper from a Gas perspective, and you don't pollute the main network.
The VAE code and the semi-supervised part of the GAN code build on code that was developed about half a year ago, when Tensorflow was less developed and was lacking in speed and functionality compared to Theano. It has since caught up and most new projects at OpenAI are now done using Tensorflow, which is what we used for the newer additions.
Could you mention a bit about why you're using Tensorflow?
I'm glad you are since I'm using it myself, but I haven't used any other frameworks so I'm wondering if I should expect more people to head in this direction, or spend time learning others.
There are currently many excellent frameworks to choose from: TensorFlow, Theano, Torch, MXNet are all great. The comparative advantage of TensorFlow is mostly its support in the community (e.g. most stars on GitHub, most new projects being developed, etc).
The community around Tensorflow is great (lots of people that try to recreate results from new papers using TF), but if you're worried about putting all your eggs in one basket (or want to work a level higher up) you should check out Keras if you haven't yet. It lets you write generic nets that can run on Theano or TF.
Responsive web apps are difficult to build, but comparing against only native Android and iOS isn't quite fair, since it leaves out the desktop (he should probably include Windows and OS X to get some parity with what a responsive web app covers).
The real underlying problem is the state of the mobile web browser, which neither Apple nor Google have much incentive to improve due to their revenue-generating app stores. That's not to say that there wouldn't still be some major differences between native apps and mobile web (especially in the discovery / delivery department), but if you had better feature parity between these platforms I think rants like this guy's would be fewer and farther between.
tl;dr: He picked the wrong technology platform for his product, therefore since it didn't work for his use case, it must be fundamentally broken.
The largest float in the output layer (while the graph is yellow) is the strongest activation. The network's "guess" is the index of that largest activation in the final layer, which corresponds to a particular digit.
Awesome, I just flipped that switch at the top and can see how it calculates the individual handwritten inputs. Great demo!
So I guess during training you're telling it that correct answers should be 1 and the incorrect answers should be 0.
Do the encoding choices that you make regarding the input / output of a neural network influence its performance at all? Maybe for MNIST the way you have it is the most common approach?
Usually the number of nodes in the input and output layers doesn't affect things all that much. They are relatively set based on the problem. The number of nodes in the hidden layers, the number of hidden layers, and various other parameters such as cost and activation functions, are mostly what you use to tune performance.
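The "1 for the correct answer, 0 for everything else" targets mentioned above are usually called one-hot vectors, and the network's guess is just the argmax over the output layer. A quick sketch (function names are my own, not from the demo):

```javascript
// One-hot encode a digit label for a 10-class output layer, and
// recover the predicted digit as the index of the largest activation.
function oneHot(digit, numClasses = 10) {
  const v = new Array(numClasses).fill(0);
  v[digit] = 1;
  return v;
}

function argmax(outputs) {
  return outputs.reduce((best, x, i) => (x > outputs[best] ? i : best), 0);
}
```

So the training target for a handwritten "3" is `oneHot(3)`, i.e. `[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]`, and at inference time `argmax` over the ten output activations gives back the predicted digit.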