
The main difference is that engineers have a code of ethics they're expected to uphold, with the threat of having their certification taken away if they don't.


Note that licensing is a country-specific issue. Where I live, 'engineer' is just an academic degree on the same level as 'master', just issued by a technical university instead of a 'general' one.


Is that a common thing in the tech world? Regardless, I'm not sure what that has to do with the name of the degree one got.


If you read the article, they discuss it there.


They really don't. The whole article is basically a long-winded elaboration of the title.


And said notation causes the generic type system to be Turing complete, as in the Java case.


There are only two "natural" problem classes known that are not either definitely in P or definitely in NP, and this article concerns one of them. (The other is integer factorization / discrete logs). And I'm not sure anybody believes there are any natural problems that are inherently in this in between state.


The thing is though -- at the very least they could be honest and clear about this just being the state of affairs, not being something more fundamental (if it isn't believed to be). I (and I'm sure you) see over and over again so many people (probably including myself at some point) who end up being left with the impression that graph isomorphism and factorization are NP-hard, because (a) they hear those problems are hard, and (b) they're told the polynomials are the easy problems, and (c) the only non-polynomial algorithms they've ever seen are exponential-time.

(Btw, I think you mean neither definitely P nor definitely NP-hard.)


> who end up being left with the impression that graph isomorphism and factorization are NP-hard, because (a) they hear those problems are hard, and (b) they're told the polynomials are the easy problems, and (c) the only non-polynomial algorithms they've ever seen are exponential-time.

In a decent textbook on computational complexity theory, you can read that a problem is NP-hard if for every problem in NP there exists a polynomial-time reduction to it. Nobody claims that such reductions exist for, e.g., integer factorization or graph isomorphism.


In general his complaints regarding logic are incoherent. A function is not a messy, weird object in mathematics; it's a (possibly infinite) set of ordered pairs in which no two pairs share the same first element. That's all it is, formally. Likewise, "class" is a concept that has a highly specific meaning in NBG theory; in ZFC it's not a formal distinction, so "x \in Class" is just shorthand for "x satisfies the class's defining formula", and by the Separation schema anything you actually form that way must be a subset of some other, preexisting set (and thus is Russell's paradox defeated). Almost all of these things exist and are extremely well defined in the theory; he just doesn't like them for some reason.
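(Spelled out, for anyone who wants the formal version; this is the standard definition in my notation, not anything from the article:)

    f \text{ is a function} \iff f \text{ is a set of ordered pairs and }
    \forall x\,\forall y\,\forall y'\,\bigl((x,y)\in f \wedge (x,y')\in f \rightarrow y = y'\bigr)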


The C standard, and I presume the C++ standard, has been very careful to avoid the idea of the preprocessor being separate at all. The wording was chosen precisely so that a separate preprocessor is never necessary, because most C compilers do not have one. It is only Unix-heritage compilers that really have a separate preprocessor, and even they're not consistent about it.


Too small; slide mounts were already standardized on 24x36 mm, and the square was never all that popular with general photographers. The square shape in 120 was originally developed as a technical hack: since it outputs a square, there was never any need to hold a Hasselblad vertically; just take the photo and crop it to suit. The only square consumer oriented cameras I can think of are twin-lens reflex cameras like the Rolleiflex, which are delightful but somewhat uncommon. A number of folks started trying to make use of the square format as a square format, but it was not originally, I think, intended for that purpose.

Re: too small: an 8x10, one of the smaller standard print formats for portraits, is about an 8x enlargement from a 35mm frame. With modern materials and good technique, 8x-11x is feasible, but starting to push it at the edges; I have printed 13x17s off 35mm but I would not want to push it much larger. 35mm does 4x6s, 5x7s and 8x10s perfectly reasonably, which is what it spent most of its time doing for common consumer work. It's worth noting that one of the other common consumer cameras of the 1940s was the Brownie, which output 6cm x 9cm images and was routinely contact printed, producing something smaller even than a 4x6.
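(Rough arithmetic behind that 8x figure, assuming the usual 24x36 mm frame and an 8x10 inch, i.e. roughly 203x254 mm, print:)

    \frac{203\ \mathrm{mm}}{24\ \mathrm{mm}} \approx 8.5, \qquad
    \frac{254\ \mathrm{mm}}{36\ \mathrm{mm}} \approx 7.1

You have to enlarge the 24 mm side enough to fill the 8 inch side, so the working figure is about 8.5x, with the long side cropped to fit the 5:4 print.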

120 produces images that are between 1.8x (in the 645 format) and 2.5x (in most others) as large, physically, meaning that the common enlargements are only 4x-5x. If you push it, with quality equipment, you start getting into print sizes that are super clumsy to handle, like 20x24. I've never printed, personally, anything larger than a 16x20. If you do your own wet processing they're also nicer to work with; 35mm negatives are real small and kinda fiddly. 4x5 sheets are also delightful to work with, of course, but they require fighting the camera in the field.


> With modern materials and good technique, 8x-11x is feasible, but starting to push it at the edges; I have printed 13x17s off 35mm but I would not want to push it much larger.

Many pros push 35mm to billboard sizes. The size of the print doesn't matter. It's the viewing distance.


Not a pro here, but the greater the distance, the better my pictures look. ;) Back to the subject at hand, it is a little surprising to me that a format closer to square didn't catch on at some point. I suppose that image quality deteriorates toward the edge of the image circle around the center point. The format that gets the most out of a circle of acceptable quality is a square. As the shape becomes more oblong, more of that 'acceptable quality' area falls outside the frame. Perhaps this is one reason that larger formats are closer to square than 35mm.


Well, to be pedantic, the format that gets the most out of a circle of acceptable quality is a circle, not a square.


Point taken. That makes me curious if any cameras were ever produced in that format. I would suspect that circular format photography might be used in astronomy where every last bit of the image is valuable.


You're right though, a square is certainly the rectangle of largest area from a circular lens.
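(Quick check: an inscribed rectangle's diagonal is the circle's diameter, so with sides a and b and radius r,)

    a^2 + b^2 = (2r)^2, \qquad ab \le \frac{a^2 + b^2}{2} = 2r^2

with equality exactly when a = b, i.e. the square.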

If there were a need to maximize the capture for technical reasons, I imagine it would be easier to just oversize the film or sensor and trim the corners.


There are a number of fisheyes that produce circular images on rectangular film (obviously, not using the entirety of the film surface).


> The only square consumer oriented cameras I can think of are twin-lens reflex cameras like the Rolleiflex, which are delightful but somewhat uncommon.

The Kodak Brownie, a cheap consumer camera that sold in the millions, was a 120 camera that shot square pictures, same format as a Hasselblad or a Rolleiflex. The original Box Brownie came out in 1900 and was the most popular camera in the world for years, going through various models up to the 1960s. That's how long the 120 square format was a popular consumer format.


The #2 Brownie came out in 1901 and shot 6cm x 9cm, and all of the surviving Brownies I've seen shot rectangular formats. There was the Brownie 127, which shot square, but I only learned of its existence just now; I have never seen one in the wild, probably because finding 127 film is functionally impossible.


Once a year Ilford produces a limited run of 127 bulk rolls (and other odd sizes). I’ve got a 127 Spartus I wanted to get some for, but figured I’d never use the whole spool.


Rolleiflex cameras were extremely expensive and not in any sense consumer oriented.


No, but it is the one name that people remember, and it saves much explanation. The Rolleicord, the Mamiya C series, and the Yashica-Mats, one of which is my TLR, were more consumer oriented.


I damn near released a (private) message protocol without a version field a couple months ago, and I know better. Fortunately I stopped myself and added it before any actual data got released.
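(For illustration, a minimal sketch of the kind of field I mean; the framing and names here are hypothetical, not the actual protocol:)

    import struct

    VERSION = 1  # bump whenever the wire format changes

    def encode(payload: bytes) -> bytes:
        # 1-byte version, 4-byte big-endian length, then the payload
        return struct.pack("!BI", VERSION, len(payload)) + payload

    def decode(data: bytes) -> bytes:
        version, length = struct.unpack_from("!BI", data)
        if version != VERSION:
            raise ValueError("unsupported message version %d" % version)
        return data[5:5 + length]

One byte of overhead up front is cheap insurance; without it, the first incompatible change forces you to guess formats from the payload itself.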


A formal language is basically just a mathematical set of acceptable strings made up of symbols in a (typically finite) alphabet; thus the smallest is probably the empty set, or, if you insist that at least one string be accepted, the set containing only the empty string.

The interesting part comes up when you want to match an infinite set of strings with an algorithm. Probably the simplest such algorithm would be "regardless of input, accept" which isn't exactly useful but would suffice.

It's dissatisfying to have an algorithm that puts weird, arbitrary restrictions on composing languages; for example, you'd like it to be the case that if you take languages L1 and L2 that are both acceptable to the algorithm, the concatenation language (every string of L1 followed by every string of L2) is also acceptable to the algorithm.
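(In the usual notation, that concatenation language is:)

    L_1 L_2 = \{\, xy \mid x \in L_1,\ y \in L_2 \,\}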

Some commonly used language classes do not have all of these properties; in particular the intersection of two context-free languages may not be a CFL, nor may the complement of a CFL. Deterministic context-free languages are even more restrictive. You seem to need to give some things up as you move up the hierarchy in power; in particular, closure under string homomorphism and closure under intersection seem to be mutually exclusive in classes more powerful than the regular languages.
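(The standard counterexample for intersection, if you haven't seen it:)

    L_1 = \{\, a^n b^n c^m \mid n, m \ge 0 \,\}, \qquad
    L_2 = \{\, a^m b^n c^n \mid n, m \ge 0 \,\}

Both are context-free, but L_1 \cap L_2 = \{ a^n b^n c^n \} is not.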

If you're interested, the theory of abstract families of languages has been studied, although I do not know a lot about it myself.


Depends on the database, but Cassandra, for example, has a quorum mode for writes, which requires that a majority of the replicas ack the write. This can be enabled on a per-query basis, and for reads as well.
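(A minimal sketch with the DataStax Python driver; the keyspace and table names are made up:)

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # Require a majority of replicas to ack this particular write.
    insert = SimpleStatement(
        "INSERT INTO users (id, name) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    session.execute(insert, (42, "alice"))

The same consistency_level knob works on reads, so QUORUM reads plus QUORUM writes give you overlapping replica sets.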

The other way of doing it is things like CRDTs (https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...) which have a join operation for any two data values.

You have to keep it in the back of your mind that it's a thing, but working without consistency can be done.


Quorum and CRDTs deal with completely different problems.

CRDTs do one thing... they mitigate the issue of "lost updates". All acknowledged writes will be represented in the results of a query, and no "winner" strategy is involved that could cause some acknowledged writes to be incorrectly dominated by others and thus lost.

Quorum (strict) just provides a very, very, very weak form of consistency in the case of concurrent readers/writers (RYW) and just very, very weak consistency in the case of serialized readers/writers (RR).

My personal opinion is that any eventually consistent distributed database that doesn't have built-in CRDTs, or the necessary facilities to build application-level CRDTs, is a fundamentally dangerous and broken database because unless all you ever write to it is immutable or idempotent data, you're going to have your database silently dropping data during operation.
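(A sketch of the simplest such CRDT, a grow-only counter, to make the merge operation concrete; the structure is illustrative, not any particular database's implementation:)

    class GCounter:
        """Grow-only counter: each node only ever increments its own slot."""

        def __init__(self, node_id):
            self.node_id = node_id
            self.counts = {}  # node_id -> count contributed by that node

        def increment(self, amount=1):
            self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

        def value(self):
            return sum(self.counts.values())

        def merge(self, other):
            # The join: elementwise max. Merging in any order, any number of
            # times, never loses an acknowledged increment.
            for node, count in other.counts.items():
                self.counts[node] = max(self.counts.get(node, 0), count)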

