Hacker News | nbmh's comments

I wonder to what extent the reduced accident rate is due to the change itself (drivers being more cautious in an unfamiliar layout) rather than 'Shared Space' being a better design. The same thing happened when Sweden switched from driving on the left side of the road to the right: accident rates initially dropped but soon rebounded to prior levels.[1]

[1] https://en.wikipedia.org/wiki/Dagen_H


The scheme has only been going for two years, so yes, it would be worth checking again after, say, 10 years.

But there are significant differences. From the Wikipedia article:

> Indeed, fatal car-to-car and car-to-pedestrian accidents dropped sharply as a result, and the number of motor insurance claims went down by 40%.

> These initial improvements did not last, however. The number of motor insurance claims returned to 'normal' over the next six weeks and, by 1969, the accident rates were back to the levels seen before the change.

So those results returned to normal over 6 weeks, whereas the shared spaces were measured over 2 years. The improvement was also more drastic: 36 accidents over 4 years (9 per year) before vs. 4 over 2 years (2 per year) after.


This is interesting and impressive work. However, I noticed that they compared the algorithm's performance to dermatologists looking at a photo of a skin lesion. This seems like a straw-man comparison, because any dermatologist would normally be looking directly at a patient and would benefit from a 3D view, touch, pain reception, etc. I realize that this was the only feasible way to conduct the study, but it still leaves open the possibility that an algorithm looking at a photo cannot match the performance of a dermatologist examining a patient in person.


Respectfully disagree. Telemedicine is going to be an important aspect of medicine, Dermatology in particular.

Rural and underdeveloped areas are going to be the largest market, IMO. Everyone can access a smartphone, but not everyone has the luxury of seeing a doctor in person, and if they do, the time/travel costs can be significant.

Disclosure: I work for an EHR startup with a Telemedicine product.


There seems to be a fundamental issue with this model. If it's economically viable for a user to use this service, there's no reason why the company wouldn't just do it themselves. The only exception is the cost of the hardware, but over the long term this is a relatively small factor compared to the cost of electricity and bandwidth. Especially considering that the company could use much more efficient hardware than the typical home or gaming computer.

I understand the 'sharing economy' desire to make use of underutilized resources, but this doesn't seem like an economically feasible way of doing so. The model works for Uber/Lyft because cars have a relatively high upfront cost compared to the cost of gas, but computer hardware is often less expensive upfront than the electricity cost of running it for a year. Additionally, much of the economic value in a service like Uber or Lyft is provided by the driver, not just the use of the car. In this service, the user doesn't provide any value; in fact, they're using up cycles/space that could otherwise be monetized.


It's viable because it allows you to sell electricity that other people are paying for, in return for money that you get to keep.

This is harder for the company itself to do, because if they just hire people to go into libraries, universities etc. to install mining bots they might be criminally liable. "Uber for CPU cycles" seems like a less felonious enterprise than installing malware on public-use hardware.


>The only exception is the cost of the hardware, but over the long term this is a relatively small factor compared to the cost of electricity and bandwidth.

My electricity rate is 15.7 cents per kw-hour. During typical usage (MS Office, web browser, programming), my Intel 6-core desktop (without the LCD monitor on) draws about 150 watts.

For back-of-the-napkin estimates, let's keep the rate at 15.7 cents per kwh and round the wattage up to 300 watts (to cover the scenario of some CPU cores being 100% pegged). The electricity cost of running 24x7 for one year would be ~$413.
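
If you want to plug in your own numbers, here's a minimal sketch of that estimate in Python (it only assumes the 15.7-cent rate and the rounded-up 300 W figure above):

  # Minimal sketch of the back-of-the-napkin estimate above.
  RATE_USD_PER_KWH = 0.157   # my residential rate
  DRAW_WATTS = 300           # rounded up to cover pegged CPU cores
  annual_kwh = DRAW_WATTS / 1000 * 24 * 365     # ~2628 kWh
  annual_cost = annual_kwh * RATE_USD_PER_KWH   # ~$413
  print(f"{annual_kwh:.0f} kWh/year -> ${annual_cost:.0f}/year")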

What remains would be bandwidth costs -- if any. I, like many others, have Verizon FiOS, and even for those on Comcast or AT&T, there are no obvious residential bandwidth costs I can think of to calculate. Maybe... if the homeowner wants to upgrade from the 75 Mbps @ $99/month tier to 150 Mbps @ $199/month because he wants to download the datasets faster. That extra $100 wouldn't have been spent for plain web browsing, so conceivably that would be $1200 per year. What we don't know is how big the datasets are that must be downloaded. I assume the upload size would be minimal because the compute tasks appear to be variations on "y_output = computecombinationsmontecarlobruteforce(x_input)". The y_output answer would usually be order(s) of magnitude smaller than x_input.

Assuming there are no extra bandwidth costs, it would be hard for a company to buy computer hardware for less than a homeowner's $413/year electricity cost.

Perhaps suchflex's particular business model is financially wrong. In general terms, though, it does seem possible to find a monetizing sweet spot of computing tasks that takes advantage of the idle, wasted resources of existing home computers. However, if the homeowner has to buy extra hardware dedicated only to suchflex, that's probably where the economics won't make as much sense.


>During typical usage (MS Office, web browser, programming)...

That's the issue, though: this wouldn't be similar to your typical usage. Instead, if they're using your GPU to train neural networks, it'll be running close to or at full capacity.

I realize that you rounded the costs up, but let's just look at the cost of a GPU often used for machine learning, the Nvidia GTX 980 Ti. According to Nvidia, it draws 250W under load, which according to your figures would result in a yearly cost of roughly $344. That's just the reference card; a typical card that a consumer would purchase would draw even more. And that doesn't even begin to look at hardware actually designed for commercial and research applications.
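
To make that concrete, here's a rough Python sketch comparing a year of electricity under load to the card's purchase price (the 250 W TDP and ~$400 price are just the figures above):

  # Rough sketch: one year of GPU electricity under load vs. the card's price.
  RATE_USD_PER_KWH = 0.157   # parent's residential rate
  GPU_LOAD_WATTS = 250       # GTX 980 Ti reference TDP
  CARD_PRICE_USD = 400       # approximate street price
  annual_electricity = GPU_LOAD_WATTS / 1000 * 24 * 365 * RATE_USD_PER_KWH
  print(f"electricity ~${annual_electricity:.0f}/yr vs. hardware ~${CARD_PRICE_USD}")
  # -> electricity ~$344/yr vs. hardware ~$400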

I think that it's possible to find a way of monetizing computer resources; however, I think it has more to do with arbitraging differences in electricity costs. Suchflex's model certainly wouldn't work where I live (electricity costs in NYC are roughly 20 cents per kw-hour), but parts of the US are under 10 cents. I could see a company attempting to profit from these differences by setting up hardware in a cheap state and negotiating a favorable electricity rate. Heavy computation could then be done on those machines for significantly less than it could in New York or California.

In summary, the value of a consumer's unused computer has more to do with their electricity rate than their hardware.


>In summary, the value of a consumer's unused computer has more to do with their electricity rate than their hardware.

I don't see how the calculations support that.

For example, if we use your worst-case scenario of an entity (such as Suchflex) with its own datacenter in a 20-cent kwh region and the crowdsourced home computers in a 9-cent kwh region, that's a difference of 11 cents.

If we round the energy usage up to 600 watts (PC + GPU), that 11 cents is an annual difference of ~$578. However, for Suchflex to run computations on their own hardware at all -- whether at 20 or 9 cents -- they have to spend ~$1500 in capex on motherboard+CPU+GPU. That's the $1500 the homeowner already spent for his own purposes. Therefore, Suchflex can redirect that $1500 to pay commissions/awards/etc. for pure computation instead of buying their own depreciating hardware (which includes buying/renting the physical datacenters to hold it all).
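
Here's the same napkin math as a rough Python sketch (the 600 W draw and ~$1500 capex are the assumptions above):

  # Rough sketch: annual value of the 11-cent rate difference vs. hardware capex.
  DRAW_KW = 0.6                  # PC + GPU, rounded up
  RATE_DIFF_USD_PER_KWH = 0.11   # 20 cents vs. 9 cents
  HARDWARE_CAPEX_USD = 1500      # motherboard + CPU + GPU, already purchased
  annual_savings = DRAW_KW * 24 * 365 * RATE_DIFF_USD_PER_KWH
  print(f"~${annual_savings:.0f}/yr from the rate difference; "
        f"~{HARDWARE_CAPEX_USD / annual_savings:.1f} years to equal the capex")
  # -> ~$578/yr from the rate difference; ~2.6 years to equal the capex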

It seems like the homeowner's hardware is a very significant part of the arbitrage/monetization equation. Yes, there is also potential arbitrage in regional differences in electricity rates. However, the greater arbitrage (at least for the first 3 years) is the unused time on residential PCs that would otherwise be wasted. That "unused time" arises from computer hardware that was already purchased for purposes other than Suchflex.


>crowdsourced homecomputers in a 9 cent kwh region

My suggestion was not that an entity use crowdsourced home computers, rather that it would be more efficient for a company to setup their own hardware and rent CPU cycles that way. The big difference is that Suchflex is limited to using hardware that consumers regularly purchase, whereas a company could use significantly more energy efficient setups and negotiate a better electricity rate. This is essentially what AWS already offers. Additionally, if you already have to transmit everything remotely, there's no need to stay in the US. Iceland offers rates around 4.3 cents. I chose the 980 TI for my example because it's about as close to perfect as you can find for this scenario while sticking with consumer grade hardware, average setups would be much worse.
My suggestion was not that an entity use crowdsourced home computers, but rather that it would be more efficient for a company to set up its own hardware and rent CPU cycles that way. The big difference is that Suchflex is limited to using hardware that consumers regularly purchase, whereas a company could use significantly more energy-efficient setups and negotiate a better electricity rate. This is essentially what AWS already offers. Additionally, if you already have to transmit everything remotely, there's no need to stay in the US; Iceland offers rates around 4.3 cents. I chose the 980 Ti for my example because it's about as close to ideal as you can find for this scenario while sticking with consumer-grade hardware; average setups would be much worse.

My general point is that I don't think Suchflex's model is viable unless, as pliny mentioned, you have access to free electricity through some less-than-legal means (or you live in Iceland).


> it would be more efficient for a company to setup their own hardware and rent CPU cycles that way. [...] This is essentially what AWS already offers.

I think it's theoretically possible for electricity costs to overwhelm hardware costs, but so far I haven't seen any numbers that make this disparity obvious. Some example AWS costs[1]:

  g2.2xlarge is $0.65/hour
  g2.8xlarge is $2.68/hour
Notice how 65 cents and $2.68 per hour cost significantly more than the Iceland electricity rate of 4.3 cents/kwh. The hardware capex is "baked" into the AWS rates. The hardware capex for residential home computers is $0.

More analysis would be required to see whether particular computation tasks can be done 15x faster on AWS-optimized instances than on unoptimized residential computers ($0.65/$0.043 == 15x).

Without a concrete spreadsheet of tasks, performance runtimes, and cloud costs, I still don't see obvious evidence that AWS (or Google Cloud) will be more cost efficient than unused home computers.

[1]https://aws.amazon.com/ec2/pricing/


Those are guaranteed instance prices that you can do anything you want with. A distributed home-computer cloud would be much more like AWS spot instances, which can be turned off at any time, losing your data (unless it's backed up to an EBS volume).

Spot instance prices are typically far less - for g2.2xlarge they average around $0.1/hr - https://ec2price.com/?product=Linux/UNIX&type=g2.2xlarge&reg...
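
For a rough comparison using the figures already in this thread (the ~$0.10/hr spot average above and the ~$413/yr home electricity estimate upthread; both are ballpark), a quick Python sketch:

  # Ballpark: a year of g2.2xlarge spot time vs. the home PC's electricity bill.
  SPOT_USD_PER_HOUR = 0.10             # rough average from the link above
  HOME_ELECTRICITY_USD_PER_YEAR = 413  # homeowner's estimate upthread
  spot_per_year = SPOT_USD_PER_HOUR * 24 * 365
  print(f"spot ~${spot_per_year:.0f}/yr vs. home electricity ~${HOME_ELECTRICITY_USD_PER_YEAR}/yr")
  # -> spot ~$876/yr vs. home electricity ~$413/yr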

And note that customers can't run arbitrary or secure workloads with this proposal - they just want to mine crypto on your hardware and give a small percentage of returns to you as rent.

When The DAO was launched, I toyed with the idea of an Ethereum dapp similar to Flex but for anything. It would be a hell of a lot of effort to build, though, and I'm not sure there is demand - people won't bother for $10/week, especially if it takes up a lot of their HD space and bandwidth and makes their GPU take off (noise).


The whole thing is based on Gridcoin and BOINC anyway, so the good thing that comes out of it is that BOINC research projects like Rosetta@home get more computing power for free.

For this, the suchflex guys earn Gridcoins, which they can sell directly on the market and convert to money. But a user could leave out the middleman altogether and just mine Gridcoins themselves (or alternatively another cryptocurrency, but for that you need ASICs).


Exactly that. People have been running BOINC projects (which make up a great deal of the projects listed on that site, such as Asteroids, SETI, Mind Modeling, etc.), and they are all volunteer efforts where people give their computing power away for free. I just don't see where they're coming from offering me $30/month for something people have been doing for free for years...


Projects like this will really shine when your electric car is already charged and you can use them to sell the unused energy from your solar installation.


That's a really great insight and a beautiful vision!


Package delivery is one of the highest-profile uses for drones, but I don't think it's a lasting one. Drones are unlikely to become cheap and efficient enough to eclipse current delivery techniques on a mass scale, especially because traditional techniques will become much more effective as autonomous cars become feasible.

However, lots of the technology being developed (robust autonomous flight, improved endurance & range, lower costs) will speed up developments for other applications that make better use of their strengths. Even more importantly, companies like Google are finally forcing the FAA to craft realistic regulations that don't completely cripple commercial applications. The FAA's lethargic pace has already severely hampered domestic development to the degree that Google had to do most of their development in Australia.


I'm curious what the founders (or anyone else) think went wrong, especially compared to ShareLaTeX.


The short answer is we didn't find product/market fit. It made some people happy, and was useful to some people, but it didn't make people go out and tell everyone they know to start using it. ShareLaTeX, on the other hand, was growing organically and had people singing its praises even when it would sometimes randomly lose 30 minutes' worth of your latest changes... (yes, really! That's very much fixed now, though, don't worry). ShareLaTeX just filled a much deeper need for people. There are so many other Python/R options out there that we never filled a deep need with DataJoy.

The exception to that is in teaching. It did fill a big need there, but we never managed to make the business model work (long, high-touch sales cycles, with universities only willing to pay very low prices per class). We also never found a growth model for this.


For setting up a Mac, I can't recommend a .osx file highly enough. Dotfiles in general make setting up a new computer really easy, especially when combined with Homebrew's Cask. I've been able to set up a familiar dev environment on a new machine in less than 15 minutes because I've maintained dotfiles. It's a relatively small time investment upfront, and the payoff can be massive, especially if anything ever goes wrong on your main machine.


I'm sorry, I'm not totally sure I know what you're talking about. Care to link?


Scripting out your favorite settings, here’s an example: https://github.com/mathiasbynens/dotfiles/blob/master/.macos

I use something similar, along with dotbot (https://github.com/anishathalye/dotbot) to wire up other applications' settings files. There are lots of good examples around to steal and tweak.


Mackup (https://github.com/lra/mackup) is a great tool for this. Beware, however, that it creates symlinks in place of your existing files. So if you decide you don't like it, run the uninstall command before removing the Mackup folder or bad things will happen (I learned from experience).



The other issue is the type of door they're proposing. The video showed a large elevator being raised and lowered from an elevated platform. That seems unnecessarily slow and complicated, but there's no easy way to get passengers out of an elevated bus above traffic.

