I think that this statistic only makes sense if it is specific to the location. Otherwise, it is useless for planning purposes. If it is specific to the location then the birthday paradox does not apply.
Interesting note: this is Houston's third 500-year flood in three years [1].
Useless for removing CO2 from the atmosphere if we burn the ethanol. There are better methods of CO2 sequestration. That tidbit seems to have been thrown in just to invoke climate change. Misleading or ignorant.
I find that I am actually more productive with a constant stream of minor distractions, although I find real workplace distractions to be more effective, e.g. background conversation or people walking by.
I think you are missing the bigger picture. This is a story of a town dealing with the problems that arise in a failed state. The Mexican state and federal governments have failed to provide the basic services that they are responsible for, most importantly security.
Here's the punch line: space, time, and matter are components of a user interface produced through evolution. We don't take the desktop and icons of our computer's UI literally, and we shouldn't take our evolved UI literally either.
He dialed back the radicalism of his position for his TED Talk. He concludes that consciousness must be something other than computation in the brain, an idea he only teases in the talk; he spends most of it on the less radical Interface Theory of Perception.
If I understand him correctly, he says we don't perceive brains as they really are and therefore brains are not a physical basis of consciousness. Whoa.
I'm working on a project where I have a Rails site running on a BeagleBone; it populates its database with data coming from a USB device (~200 bytes/s, continuous). I am finding that the SD card holding all of the data and the OS fails quite quickly (on the scale of weeks). Are SD cards just not up to the task?
Yes, SD cards are not designed to be disks. SSDs work by selling you 4-10x the 'advertised' amount of flash and replacing failed pages from the excess over time. In that way they are more like light bulbs than switches (finite lifetime).
I figured as much, thanks for the input. It would be wise, then, for the OP to make sure that noatime (which turns off access-time updates) is set in fstab for the filesystem; you don't want to write to that SD card on every read.
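For example, a minimal fstab entry with noatime set might look like this (the device name and mount point are placeholders; adjust for the actual board):

```
/dev/mmcblk0p1  /  ext4  defaults,noatime  0  1
```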
Well, Intel used to have a service-life comparison for flash up on their site, but I can't find it now :-(. There was also a great article on it in EE Times which has apparently also faded. SanDisk, in a 2004 document [1], reiterates the 100,000-write limit.
Typically flash is broken up into 'pages'. A page can be as small as 8K bytes or as large as 128K bytes; there are reasons for doing it different ways (mostly related to write latency). But if you consider an 8GB flash with 128K-byte pages, that is 64K pages (8G / 128K), and if you want to write one byte on a page you have to rewrite the entire page.
SanDisk and others will typically spec that a single page can be written 100,000 times, and wear-leveling software ensures that no single page gets written more than any other. So you can expect a total of 100,000 * 64K, or 6.55B, writes before you see a failure.
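The arithmetic above can be sketched as a rough model (it assumes perfect wear leveling and takes the 100,000-writes-per-page spec at face value):

```python
# Rough endurance model for an 8 GB flash with 128 KB pages,
# assuming perfect wear leveling and 100,000 writes per page.
page_size = 128 * 1024           # bytes per page
capacity  = 8 * 1024**3          # 8 GB of flash
pages     = capacity // page_size
writes_per_page = 100_000

total_writes = pages * writes_per_page
print(pages)                     # 65536 pages (64K)
print(total_writes)              # 6553600000 (~6.55B writes)
```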
If you look at typical 'disk' sorts of statistics you will see that a typical SATA drive does something like 80-100 I/O operations per second (IOPS), so writing at 80 IOPS it would take nearly 23 thousand hours to wear out an 8G flash. But at flash speeds (10,000 IOPS) that could easily drop to a mere 182 hours (which is why you don't see a lot of 8G SSDs).
But this is where the math gets fun. The card is spreading writes over all available pages, so assuming 128K pages, 100K writes per page, and 500 IOPS (somewhere between a SATA SSD and a spinning-rust drive, and well within the rate possible over USB or SPI), you kill off a 128K-byte page's worth of endurance every 200 seconds; at that rate the whole device's budget is gone in roughly 151 days.
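Putting those per-rate numbers together, a back-of-the-envelope sketch (it assumes every I/O operation costs one full page write and wear leveling spreads them perfectly):

```python
# Hours to exhaust an 8G card's total write endurance at various
# write rates, assuming each IOP costs one full page write.
total_writes = 65536 * 100_000     # pages * writes per page

for iops in (80, 10_000, 500):
    hours = total_writes / iops / 3600
    print(f"{iops:>6} IOPS -> {hours:.0f} hours")
# 80 IOPS  -> ~22756 hours (the "nearly 23 thousand" above)
# 10000    -> ~182 hours
# 500      -> ~3641 hours; and 100_000 / 500 = 200 s per page
```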
Writes get flushed out to disk right away to keep the file system consistent, so you get more writes than you might expect. And writing one byte to a file not only changes the inode holding that byte, it also changes the inode holding the directory entry, which has the length and modification time (assuming you've disabled atime). So every write from user land can be two writes to the device, and sometimes more if it overflows a directory block.
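As a rough illustration of that effect (the 2x factor is just the floor suggested above, not a measured number):

```python
# Write amplification: if each userland write costs at least two
# device writes (data page + directory/inode metadata), the card's
# endurance budget, seen from userland, is at least halved.
total_device_writes = 65536 * 100_000
amplification = 2                        # assumed minimum, per the text
user_writes_before_failure = total_device_writes // amplification
print(user_writes_before_failure)        # 3276800000
```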
Other variables include how effectively the wear leveling works; the card has to 'remember' its leveling state. An early version I saw wrote a generation number into a page header (written with the page itself) and then managed the map from the linear space presented to the user to the random space of the actual flash. On cheap cards I've seen code which basically mapped each logical page to 16 candidate physical pages and then linearly searched among those 16 for the next page to write. This had the effect that the single-page write endurance lifetime was shorter than the aggregate device write endurance lifetime. (Back when I was doing my embedded OS work I was using JTAG to read flash state directly from flash drives; not SD cards, but the early drives offered for sale as replacement disks.)
Anyway, like most things, it's not immediately obvious what factors come into play, and it's easy to do things that screw you (like doing atime writes).
So I did a non-scientific test on my Raspberry Pi at home with a 4G card that has typically been my 'move an .iso around' card; it's a 'class 10' card from Microcenter Warehouse. Using this Perl program:
#!/usr/bin/perl
# Create, sync, and delete small files in a loop to exercise flash wear.
use strict;
use warnings;

my $letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ-";
my $range   = length $letters;

while (1) {
    foreach (1 .. 16) {
        # Build a random 16-character file name.
        my $nm = "A";
        $nm .= substr($letters, rand($range), 1) for 1 .. 15;

        # Build 128 random characters of content.
        my $data = "";
        $data .= substr($letters, rand($range), 1) for 1 .. 128;

        open(my $fh, ">", "$nm.delete-me") or die "open $nm: $!";
        print $fh "$data\n";
        close $fh;
    }
    system("sync");                # force the writes out to the card
    unlink glob("*.delete-me");    # then delete, and go around again
}
Killed it dead in 3 hrs 18 minutes. Your mileage may vary.
Nice work, thanks. I can see now how the cloud will be important for these small devices. I wonder how OUYA (and similar devices) will manage data and what the lifetime of the storage will be.