This is great -- I hate having compiler/runtime advances in Mac OS X always tied to the latest platform release. It can take years before your customer base upgrades, leaving you unable to leverage what I'd consider basic language features -- like fast enumeration, or now, closures.
I upvoted this article because the conversation on this page is interesting and provides a lot more information than the original article. I always look at the comments before reading an article on HN, and often feel that they are more valuable and insightful than the linked page. This is the case here.
The subject matter is interesting, but what I find most interesting is a blog performing real, 'hard-hitting' local journalism, of genuine interest to a very specific social subgroup.
The mainstream media had this idea first: they picked up the story of the New York high school student who tested red snapper in various sushi restaurants and found that it was often tilapia.
That story got repeated in some jurisdictions, and now, a year or so later, someone has gotten around to doing it with vegan food.
If you really want a "what food is advertised as isn't what it really is" story, search YouTube for Vegan Marshmallows. (In short: a guy was supplying a 'vegan' gelatin substitute to several groups, not just vegans; when it was tested, it had animal products in it, and the guy 'disappeared'.)
Revealing that some companies/restaurants aren't always truthful isn't a new thing. Certainly not only a year old. It's really as old as investigative journalism in general. Dating all the way back to "The Jungle".
Edit: and what you're really saying is that the 'mainstream media' just copied the idea too. And from the grassroots level, at that.
Until I clicked on the link, I was excited, as the comments on monetization led me to think they might be providing a marketplace where I could offer for-fee answers to users.
Unfortunately, this is not the case.
As it is, I don't bother to use StackOverflow. My questions would be too esoteric for the audience/format, and nearly all of the questions I see are boring, easily answerable with a search of the documentation. The questions would be less boring if I were paid to answer them, and then I'd be more likely to find a few gems to answer, too.
How much money would make it worth your time? If your expertise is too high / too esoteric for the StackOverflow community, then you should command high prices in the marketplace.
It's doubtful to me that you could appeal to expert users to exchange time for money when that time is better leveraged by consulting and entrepreneurship.
This topic comes up quite often on the StackOverflow podcast.
How much money would make it worth your time? If your expertise is too high / too esoteric for the StackOverflow community, then you should command high prices in the marketplace.
$5-$15 per answer would be reasonable given the limited time involvement.
It's doubtful to me that you could appeal to expert users to exchange time for money when that time is better leveraged by consulting and entrepreneurship.
I seem to have enough free time to blow a fair portion of it commenting on Hacker News.
Keep money per answer out of this. It's amazing what happens when you keep a site like SO working on social norms and avoiding market norms.
Other than money, how do you prevent an expert exodus as expert users are deluged with simpleton questions? Amidst a deluge of bad questions, there's little value in remaining active: your own questions can rarely be answered, and other users aren't providing interesting questions.
Something else is needed to provide a substitute for that value.
Alternatively, you must prevent the exodus by filtering the kinds of interactions/questions that occur. Mailing lists, for instance, have a barrier to entry to serve as a first-pass filter (the subscription), and then a community to enforce community norms.
Some experts are not in it only for the money - they enjoy helping the novice, the less experienced. Nor are they in it only to answer the most interesting "gems" - a true expert can give a much better answer to a newbie question than an intermediate developer can.
Similarly, (most) university professors don't teach to get rich, or just for the research - they enjoy overseeing and helping the youngest generations of their particular field.
Listen to the StackOverflow podcasts and you'll hear why money-based answer systems don't work. In fact, this is why they are beating Experts Exchange.
On the other hand, Stack Overflow is racing to the bottom, so to speak, on the quality of questions and the people answering them.
Either way, money is what would entice me to answer questions on the web. Currently I do it for free on IRC, but only because a standing community helps ensure that the quality of questions is reasonable on the channels I frequent.
I find StackOverflow very boring as well... but I don't need to use it much. I've only visited a few times, out of interest, because I always like to find good technical discussions, but... so far... it's been pretty "meh".
There haven't been any mind-blowing awesome gems of answers in there that have caught my eye - mostly pretty mundane things, content-wise, and as a programmer looking for an interesting community, I don't really get that vibe from it much at all.
To me it just seems like a place for kids to go and get their homework done for them by lonely strung out alpha dogs looking to place some authority in the world.
For me, sites like this will never replace the good ol' USENET groups and subsidiary mailing lists. Once again (as is the case with Twitter), a web site springs up to try to capture an audience from the pool of people who are just not competent enough with e-mail to manage it properly, and to exploit the results...
I disagree. There is always something to be learned from Stackoverflow. No one person can have the combined knowledge of all the users on the site. There are very interesting questions and answers.
Are there a lot of simple questions? Yes, because many people are just learning how to do basic things in one language or another. But even answering the most basic questions can be a worthwhile experience, and you can filter out the simple ones and get on to the more advanced ones pretty easily.
Programming is boring; unless it's not.
If you have something to say on a non-trivial subject, you'd better write an article.
Stack Overflow deals with boring, gadfly-style problems. You can get some specific bits of knowledge there, not wide-spectrum wisdom.
My definition of "enterprise developer" comes directly from working in a gigantic financial company as well as an even more gigantic government defense contractor. Would you like some more anecdotes?
No, because an "anecdote", by definition, is not statistically relevant.
"Enterprise teams" are not, inherently, profoundly stupid in their practice of software engineering, and not all "hackers" inherently produce worthwhile, quality code.
This completely fabricated fable of software engineering is a simple straw man argument, and I've flagged it accordingly.
Every life lesson that you learn from experience is just a statistically irrelevant anecdote; each results in the refinement of a crude heuristic or generalization mechanism for your limited human brain. Never did this fabricated fable state or even hint that this is the way all enterprise teams and all hackers are. That, my friend, is the straw man argument coming from you.
If this fable didn't hint at that, what was its point?
That convincing management regardless of your productive output is what matters at the end of the day? Perhaps at some organizations, but it's not a particularly accurate, nuanced world view.
I doubt this story would be conveyed in reverse -- the stereotypical "rockstar hacker" produces vast reams of code that will fail catastrophically, but comes out ahead by, upon 'completion', immediately pushing responsibility for the disastrously buggy code to the "stodgy" enterprise engineers who get called in to maintain the project. The "rockstar" moves on to the next project, where he'll repeat this performance, and the stodgy developers get poor performance reviews.
I actually thought the story hinted at the opposite: that individual programmers who from the outside may look like slackers can produce good, simple code and that process isn't everything.
As an iPhone user, I strongly prefer native applications. They can integrate with the built-in technologies (address book, location, P2P, MDNS, etc), and they don't simply stop working for 30 minutes while I ride the subway.
It's actually not that good a point. Using jQuery you can get a web app as low as 1K, so a slow connection isn't that big a problem. Web apps can integrate with most of the iPhone's features, like contacts (notable exceptions being location and camera). The other stuff he mentions, like P2P, mDNS, etc., are very specialized applications that, to the best of my knowledge, only work on unlocked iPhones (I could be wrong, though; I know BitTorrent has been banned from the App Store).
A web app is never going to work for edge-case-style applications, but for the majority of web sites it's probably a good idea just to spend a day customizing the site for the iPhone (I've become fond of iWebKit: http://iwebkit.net/) rather than buying a Mac, learning Objective-C, etc.
Bottom Line: Look at your requirements and decide if a Webapp will do. Don't just jump to native.
It's actually not that good a point. Using jQuery you can get a web app as low as 1K, so a slow connection isn't that big a problem.
1K is still a pretty unpleasant wait when you're looking for 'instant', and unfortunately, if I exit Safari to use another app (which I often will), Safari will very likely need to reload that page.
Web apps can integrate with most of the iPhone's features, like contacts (notable exceptions being location and camera).
Web applications can't integrate with the address book, actually.
The other stuff he mentions, like P2P, mDNS, etc., are very specialized applications that, to the best of my knowledge, only work on unlocked iPhones (I could be wrong, though; I know BitTorrent has been banned from the App Store).
The P2P I was referring to is WiFi/Bluetooth zero-configuration phone-to-phone 'networking', for magically connecting applications on two or more phones. It's pretty neat.
A web app is never going to work for edge-case-style applications, but for the majority of web sites it's probably a good idea just to spend a day customizing the site for the iPhone (I've become fond of iWebKit: http://iwebkit.net/) rather than buying a Mac, learning Objective-C, etc.
I really don't think they should be considered 'edge cases'. There are so many ways that the user experience is better via integration opportunities, speed, and native look and feel, that I don't think anyone should consider a mobile webapp to be a viable replacement.
Webapps are a reasonable substitute assuming nothing else is available and you can't afford to produce a proper application, but I'm not convinced that you'll spend more time and money producing a native app than you'd spend producing an equivalently high-quality webapp alternative.
"Webapps are a reasonable substitute assuming nothing else is available and you can't afford to produce a proper application, but I'm not convinced that you'll spend more time and money producing a native app than you'd spend producing an equivalently high-quality webapp alternative."
I'm sorry, but are you kidding here? You think a web developer who already has experience with all the tools used to create a web app is going to spend the equivalent amount of time learning a completely new language, platform, developer tools, et al., AND THEN using them to program an app? I'm sorry, but that's really a ridiculous thing to say.
(and I don't mean to be rude but really, who voted this comment up? If you don't know how software development works you shouldn't comment or vote on posts that involve it)
As for the rest of your argument, AT&T's EDGE connection downloads at around 25KB/s. So a 1K page (which you claim causes an "unpleasant wait") would download in 1/25th of a second even on a slow connection.
As for the rest, my point still stands. Everything you spoke of requires a cracked iPhone, which the great majority of people don't have anyway.
For the record, you're wrong below, but I can't reply to you and I don't know why. The short of it is that anyone who is ACTUALLY a developer knows there's a lot more to development than picking up a language (though again, there will be time spent picking up the language, and that's time a web app developer wouldn't need to spend). You have all your tools, including your editor, unit testing tool, etc. So again, you're wrong. And yes, there might be overhead to a web app, but at 1K it's still going to come down in about a second or two. As for latency, this is the first link I found off Google: http://www.engadget.com/2007/06/28/atandt-customers-seeing-s... I don't think 0.91 seconds is going to kill anyone.
I'm sorry, but are you kidding here? You think a web developer who already has experience with all the tools used to create a web app is going to spend the equivalent amount of time learning a completely new language, platform, developer tools, et al., AND THEN using them to program an app? I'm sorry, but that's really a ridiculous thing to say.
As a software developer, Objective-C is just another imperative C-derived language (a pure superset of C, actually), with Smalltalk-descended OO features. It's not (or shouldn't be) an alien experience.
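For a rough sense of how small the leap is, here's a minimal illustrative sketch -- plain C compiles unchanged, and the main new surface syntax is the bracketed message send:

    #import <Foundation/Foundation.h>
    #include <stdio.h>

    int main(void)
    {
        @autoreleasepool {
            /* Plain C is valid Objective-C as-is. */
            printf("%s\n", "hello from C");

            /* The Smalltalk-descended part: bracketed message sends. */
            NSString *greeting = [NSString stringWithFormat:@"hello from %@",
                                                            @"Objective-C"];
            printf("%s (%lu chars)\n",
                   [greeting UTF8String],
                   (unsigned long)[greeting length]);
        }
        return 0;
    }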
My comment assumed a baseline software developer proficiency. If simple high-level webapp development is all you've ever done, then of course -- writing an Objective-C application will be more difficult. Perhaps that's a good reason to write one.
As for the rest of your argument, AT&T's EDGE connection downloads at around 25KB/s. So a 1K page (which you claim causes an "unpleasant wait") would download in 1/25th of a second even on a slow connection.
In addition to failing to account for additional resources (the page won't be 1K in total), you forgot to account for latency (there's quite a bit).
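A rough back-of-the-envelope sketch (the round-trip count is an assumption; the 0.91s EDGE latency figure is the one cited elsewhere in this thread):

    DNS lookup:     ~1 round trip
    TCP handshake:  ~1 round trip
    HTTP fetch:     ~1 round trip + 1KB / 25KB/s transfer (~0.04s)

    At ~0.91s per round trip, that's roughly 2-3 seconds before
    anything renders -- and each additional resource (images,
    scripts) repeats the fetch step.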
As for the rest, my point still stands. Everything you spoke of requires a cracked iPhone, which the great majority of people don't have anyway.
Nothing I've mentioned requires a jailbroken phone for any purpose.
If that's the case, then the market is unsustainable. Players will fail, causing scarcity, raising the value of applications, and conditioning users to expect to pay more.
I seriously doubt that will happen. People have stopped buying $30 software for their computers; why would they buy it for their phones?
Do you have statistics on that? My wife just bought Balsamiq for her computer without blinking, and that was $79.
Anecdotally, I know quite a few indie and larger commercial developers covering their salaries and more with $30+ desktop software. I know that Balsamiq sure isn't hurting.
It's primarily the web set that can afford to subsidize free products on the back of VC.
The iPhone is a platform for cheap apps, mostly entertainment related and games.
Games (even small ones) take a surprising amount of resources to create, from art assets to developer time. Unless your game is a lucky iPhone hit, you just can't cover development costs.
If you want to make a go at an iPhone business, you optimize around that fact. You don't try to drag your existing business model to the iPhone and hope that the market drastically changes.
If your existing business model is "pay the rent", much less "cover payroll", then yes, you're quite right -- you can't drag your existing business model to the iPhone.
We do bespoke development for iPhone customers. They lose money, we make payroll, and we wait for the market to mature. Until it does, the iPhone is a total wash, and don't be surprised when the smaller shops that can't eat the loss start dropping out. It's a gold rush.
$1.99 is less than the cost of a movie rental, but movies have massive leverage across an incredibly large market. This idea that software should only cost $1.99 is remarkably poisonous, but fortunately, the market will correct that.
I'm familiar with game development. I make games for the iPhone in my spare time and I make more money than I ever have in my life, and my apps aren't even popular compared to things like Ocarina or Tweetie or Pocket God.
I'm truly, genuinely surprised, as most indie game developers I know have been lucky to recoup their costs, and fewer have seen any sustained revenue to speak of. Some got lucky, most have not.
What games do you develop, if you don't mind shedding the mask of anonymity? (I understand if not, I'm anonymous here because it allows me to actually speak freely).
At this point in time, I am the model for an iPhone business: a single guy making indie games. It might morph into something different in the future. I imagine there will be a separate path for business applications. Perhaps $30 CRM apps will be sellable in a bundle with enterprise software to large companies. But I don't think end users will ever pay $30 for iPhone apps except in very niche cases. Most people view their phone as an entertainment device; apps are on the same level as ringtones. The market may correct itself by flushing out all the players who can't make a profit on a $1.99 game, but it's not going to correct itself by suddenly having mostly $30 apps on the App Store.
Maybe you're right, but I hope not. I'd be curious how you can afford rent on $1.99 game sales, what sort of revenue "more money than I ever have in my life" means, and whether you've seen more than one of your released applications succeed.
Given that conforming with the standard is effectively free, do you have any other justification for your non-conformant position of "I'd say 64 is enough. Anything above is just weird"?
Willfully and capriciously ignoring standard requirements that you think are "weird" results in non-conformant implementations that confound users and other developers attempting to interoperate with your systems. I'm genuinely surprised to be writing a paragraph defending standards conformance -- I'd have thought that this position was basic common sense among software developers.
I wrote a greylist server ( http://www.x-grey.com/ ) and I arbitrarily capped email addresses at 108 characters. In testing, I collected 565,012 tuples (IP address, sender address, recipient address); the longest address in that testing corpus was 107 characters, the average length was 24.24 characters, and the median was 23 (just checked now). Capping at 108 meant I could store two addresses, plus an IPv6 address, plus some timestamps, in 256 bytes of memory (one feature of my greylist implementation is that everything is stored in memory).
Over the year and a half it's been running, I have seen a few addresses exceed the 108-character limit (which isn't fatal, as I do still store such addresses, just truncated to the first 108 characters), and by few I mean "less than 1% of 1%". Bumping the record size to store a full 254 bytes (or is it characters? There is a difference) would double the memory consumption of the program for very little gain in return (but at least I have numbers to back up my position).
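Roughly, a record like that might be laid out as follows (an illustrative C sketch; the field sizes are inferred from the numbers above, not the actual x-grey definition):

    #include <stdint.h>
    #include <time.h>

    /* Illustrative 256-byte greylist tuple; not the actual
       x-grey layout. */
    struct tuple
    {
        char    from[108];   /* sender address, capped at 108 chars */
        char    to[108];     /* recipient address, capped likewise  */
        uint8_t ip[16];      /* room for a full IPv6 address        */
        time_t  first_seen;  /* greylist window timestamps          */
        time_t  last_seen;
        /* 108 + 108 + 16 = 232 bytes; the timestamps and any
           padding fill out the remaining 24 bytes of the record. */
    };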
Conforming with the standard is not "effectively free".
The only way to _really_ validate an email address is to try to send mail to it. But that has non-zero cost (depending on how often you have to do it, what the odds are that you'll end up on a spam blacklist for no good reason, etc., etc.).
The alternative is to use purely server-side validation routines. But these become more and more expensive as you progress through less common edge cases (e.g., regular expressions are not capable of detecting every valid address). So most people, sooner or later, make a trade-off, favoring some more common subset of cases over some less common subset.
If anything, we should be arguing over what constitutes an acceptable place to make that trade-off. Should embedded comments be supported? What about bang paths?
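To make the trade-off concrete, here's a minimal sketch of a deliberately lax first-pass check: it enforces only the RFC 5321 length limits and a single '@', and knowingly rejects some valid-but-exotic forms (quoted local parts, comments, bang paths):

    #include <stdbool.h>
    #include <string.h>

    /* Pragmatic first-pass syntactic check: RFC 5321 lengths
       (64-octet local part, 254 octets overall) plus one '@'.
       Knowingly rejects some valid addresses -- that's the
       trade-off under discussion. */
    static bool plausible_address(const char *addr)
    {
        size_t len = strlen(addr);
        const char *at = strchr(addr, '@');

        if (len == 0 || len > 254)        /* overall length limit */
            return false;
        if (at == NULL || at == addr)     /* need a local part    */
            return false;
        if ((size_t)(at - addr) > 64)     /* local part limit     */
            return false;
        if (at[1] == '\0')                /* need a domain        */
            return false;
        if (strchr(at + 1, '@') != NULL)  /* exactly one '@'      */
            return false;
        return true;
    }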
If you really care, use a real standards-conformant address parser; most languages have at least one -- Java does. Otherwise you're just wasting your time, and the time of any users you hose with your amateur-hour validation.
I'm surprised (and confused) -- can you really include a great number of large, verbatim or nearly verbatim blocks of text from Wikipedia, cite them, and not be called out for plagiarism?
What's to stop any author from saving themselves hours/weeks/years of effort by simply copying text and sticking a citation at the end of the paragraph, as it appears was done in this case?
Reuse can never be 'plagiarism' if you cite your source and present it as a quote. Plagiarism is only presenting someone else's work as if it were your own.
Now, if the book is 99% other people's work, but all copying is cited, you might still run afoul of copyright laws for copying without permission. But it won't be plagiarism. Whether 'fair use' applies depends on a test where many factors are weighed.
Very roughly, the idea is: does your use add value for society without damaging the value of the original work? Using less is better than using more. Using for educational/nonprofit uses is better than trying to make a quick buck. Trying to make a buck in a new transformative way is better than trying to make a buck at the expense of (by replacing) the original work in the marketplace. Using just enough to have a conversation about the work, adding your commentary, is better than just reusing the juicy parts to save yourself effort. Etc.
Unlike the usual arbitrary, capricious, and infuriating appstore rejections, _the developer deserved this._
In order to implement something like Quick Shot, you have to muck around in the undocumented innards (private API) of the Apple-provided camera view widget. This is clearly forbidden by the developer agreement, and will easily result in your application breaking across minor and major releases. This breakage is evidenced by Jared Brown having to submit a new update for 3.0.
I take umbrage at his characterization of Apple leaving his users in the lurch. This is incorrect. Apple's minor culpability is only in choosing not to provide a better camera API.
Knowingly selling a product that clearly violates the developer agreement and will break in the near future is dishonest. The product is defective. I have sharp words for the App Store and Apple's ridiculously arbitrary review process; however, in this case the developer is clearly in the wrong.
Just like in any other modal UINavigationController, to modify what's visible you use the publicly provided methods to traverse the hierarchy of UIViews. You don't need to know anything about the classes inside; you can remove them at will, just like you can remove any other subview from any other parent. This works in 2.0 and up.
So there's no private API here. The dude even says so in the article. Now ultimately, your application will break if you rely on UIImagePickerController's view hierarchy staying constant (doing stuff like "remove the third view from the image picker's subviews array", for instance) and aren't careful about checking results. In 3.0, the UIImagePickerController's view hierarchy looks significantly different from the way it did in 2.2.1, so a lot of people's apps blew up. (On the flip side, if you were being careful, things worked just fine.)
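A minimal sketch of that careful version (the size heuristic is purely illustrative; the point is that every call is documented UIView API, and every assumption is checked at run time):

    #import <UIKit/UIKit.h>

    /* Walk the picker's (undocumented) view hierarchy using only
       documented UIView methods, and act only if a subview matching
       our expectations actually exists. The hierarchy's contents may
       change in any OS release, so failure here must be tolerable. */
    static void hideSuspectedOverlay(UIImagePickerController *picker)
    {
        UIView *target = nil;
        for (UIView *subview in [picker.view subviews]) {
            /* Illustrative heuristic -- real code would have its own
               criteria, and still couldn't rely on them holding. */
            if (CGRectGetHeight(subview.frame) < 100.0f) {
                target = subview;
                break;  /* stop enumerating before mutating */
            }
        }
        if (target != nil) {
            [target removeFromSuperview];  /* documented UIView method */
        }
        /* No match: do nothing, rather than assume a structure. */
    }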
Forget all the blathering about who is culpable to whom.
This guy's app (and presumably others) was defective. But platform makers make changes that break apps all the time; why should this guy get his app banned forever?
You're wrong. You don't have to use a single method or class that is undocumented to add stuff to, or remove stuff from, the camera control's UI.
No. You do everyone reading a remarkable disservice. The UIView hierarchy itself is undocumented. The contents and ordering thereof are undefined, may change at any future date, and can not be relied upon. The application did, in fact, break when the OS was upgraded.
Apple's use of "Private or Unpublished API" isn't intended to leave room for semantic arguments about the true meaning of "private", "unpublished", or "API", and they've clearly applied the standard industry definition in this particular instance.
But platform makers make changes that break apps all the time; why should this guy get his app banned forever?
Apple guarantees that their published, documented behavior (e.g., API) will work across releases, and they strive to meet this guarantee. Where they fail, they assume responsibility for fixing the issue -- file a bug.
"The UIView hierarchy itself is undocumented. The ordering of the contents is undefined, may change at any future date, and can not be relied upon."
This is exactly what I said above. If you expect a certain structure, your app will almost certainly blow up, and is defective. However, you can rely on the fact that the UIView hierarchy can always be modified, with public methods, regardless of what's inside (or not inside) it. Because that fact is documented.
This is exactly what I said above. If you expect a certain structure, your app will almost certainly blow up, and is defective. However, you can rely on the fact that the UIView hierarchy can always be modified, with public methods, regardless of what's inside (or not inside) it. Because that fact is documented.
No, you can not rely on that undocumented assumption:
1) You can not know what to modify in an opaque set of views, because the contents of that opaque view hierarchy is undocumented.
2) You can not know that it is safe to modify the opaque view hierarchy, as doing so may break the undocumented invariants of the opaque view hierarchy.
3) You can not assume that the members of the view hierarchy meet your assumptions regarding structure, subclass, or nature, as the view hierarchy is opaque and not subject to declared API invariants.
If it's not documented, it is not a defined invariant, and it can not be assumed.
To claim otherwise is to simply fail to understand the purpose of defined invariants. Software development is no place to rely upon empirically-derived knowledge.
You can empirically determine exactly what's in an opaque set of anything (NSArray's various public access methods) and exactly what part of the area of a view is covered by a subview. Then you can call the public removeFromSuperview method on that view, and it will remove it from its superview. Then you can attach your own. Apple doesn't say you can in the documentation, but they say that UIImagePickerController is a UINavigationController, and you can do that to any UINavigationController.
You can do this at run time for any UIView, even those which are part of so-called opaque types. Those methods are documented, and you can read about them above. I'm done trying to tell you that.
I can't imagine how frustrating your iPhone app development experience must be if you rule out experimentation of all types. How did you ever get past the code-signing step?
You can empirically determine exactly what's in an opaque set of anything (NSArray's various public access methods) and exactly what part of the area of a view is covered by a subview.
The facilities necessary to modify the hierarchy are defined.
The content of that hierarchy is undocumented and may change at any time. The behavior of modifying the opaque view hierarchy is undefined.
You can do this at run time for any UIView, even those which are part of so-called opaque types. Those methods are documented, and you can read about them above. I'm done trying to tell you that.
The contents are undocumented. You can not assume any behavior whatsoever if you modify the contents of the entirely undocumented view hierarchy.
Having been in the position of dealing with customers foolishly relying on internal implementation details, you honestly make me want to beat my head against the wall. It's one thing to make the mistake, it's another to proudly celebrate it.
I can't imagine how frustrating your iPhone app development experience must be if you rule out experimentation of all types. How did you ever get past the code-signing step?
You still misunderstand. It's simple: you can not firmly rely upon undocumented, empirically-derived knowledge without a vendor documented invariant.
The contents of an opaque view hierarchy -- and its behavior if modified -- are entirely undocumented.
My capacity for civil dialog is exceeded, and I'll stop here.