fzltrp's comments | Hacker News

In my humble opinion, they should write formal specifications that don't just target a C API but can degrade consistently to one. With a more formal description of each function call, including predicates for parameters (and possibly for the context itself), they would make things much easier for user land, where these could be mechanically translated into API bindings and thus checked automatically. If all these checks are made on the user side, a simple toggle could be added to the API to disable the checks on the driver side. Most users aren't using the C API directly anyway, and people writing bindings would welcome specs that could be mechanically translated into their favorite language's bindings. The problem might remain for WebGL, though.
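As a sketch of that idea: below, a machine-readable spec entry with parameter predicates is mechanically turned into a checked binding. The spec format, the glViewport entry, and its predicates are all hypothetical, purely to illustrate moving validation to the user side (Python used for brevity):

```python
# Hypothetical spec format: each parameter carries a type and a predicate.
# Nothing here reflects any real Khronos format; it only illustrates the idea.
SPEC = {
    "glViewport": {
        "params": [
            ("x", int, lambda v: True),
            ("y", int, lambda v: True),
            ("width", int, lambda v: v >= 0),   # assumed spec rule: non-negative
            ("height", int, lambda v: v >= 0),  # assumed spec rule: non-negative
        ],
    },
}

def make_checked_binding(name, raw_call):
    """Wrap a raw C-level call with checks mechanically derived from the spec."""
    params = SPEC[name]["params"]
    def wrapper(*args):
        for (pname, ptype, pred), value in zip(params, args):
            if not isinstance(value, ptype):
                raise TypeError(f"{name}: {pname} must be {ptype.__name__}")
            if not pred(value):
                raise ValueError(f"{name}: invalid value for {pname}: {value!r}")
        # All checks passed on the user side; the driver-side checks
        # could then be toggled off, as suggested above.
        return raw_call(*args)
    return wrapper
```

With bindings generated this way, an invalid call like `viewport(0, 0, -1, 480)` fails in the binding layer before ever reaching the driver.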


There's actually an XML spec; I don't know whether it's enough to automatically generate language bindings from, though: https://cvs.khronos.org/svn/repos/ogl/trunk/doc/registry/pub...


> don't know whether this is enough to automatically generate language bindings from it though

Yes, it is, and most language bindings are generated from it. This also includes the "OpenGL extension loaders" that are required in C apps too.


I have to admit that, while I knew these documents existed, I wasn't aware they included as much information as they do: I was under the impression that they were merely an XML version of the original C headers to facilitate parsing. Having a second look, it seems they also cover information like expected array sizes and valid enum subsets for parameters, which is very valuable data for everyone using the API (even driver implementors could use it to generate basic conformance tests).
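For illustration, here's a minimal sketch of mining that registry for binding generation. The XML fragment below is a trimmed, hand-written sample in the registry's general shape (the real gl.xml is far richer), and the extraction logic is only an assumption about how a generator might use it:

```python
import xml.etree.ElementTree as ET

# Hand-written fragment mimicking the gl.xml <commands> section;
# a real generator would parse the full registry file instead.
GL_XML_FRAGMENT = """
<registry>
  <commands>
    <command>
      <proto>void <name>glClear</name></proto>
      <param group="ClearBufferMask"><ptype>GLbitfield</ptype> <name>mask</name></param>
    </command>
  </commands>
</registry>
"""

def extract_commands(xml_text):
    """Return {command_name: [(param_name, param_type, enum_group), ...]}."""
    root = ET.fromstring(xml_text)
    commands = {}
    for cmd in root.iter("command"):
        name = cmd.find("proto/name").text
        params = []
        for p in cmd.findall("param"):
            ptype = p.find("ptype")
            params.append((p.find("name").text,
                           ptype.text if ptype is not None else "void*",
                           p.get("group")))  # the group names the valid enum subset
        commands[name] = params
    return commands
```

The `group` attribute is exactly the kind of "valid enum subset" metadata mentioned above: a binding generator can map it to a typed enum in the target language instead of a raw integer.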


> Then, if someone writes a popular API wrapper, Khronos can take that API and standardize it as the next version of OpenGL, no design-by-committee required.

If that wrapper actually supports all the features evenly and doesn't favor a particular implementation, that is. Is that possible outside of design by committee?


> Oh. Ocaml again.

Nothing forces you to read the posts you don't like.

> Well, I'll chirp on the opposite side of discusion. The syntax is completely <->:%^&#$^ $% %^&*% up. Unreadable. Yes, it maybe somewhat pleasant to write code in such syntax, but readability sucks.

Readability is mostly a matter of experience.

> Which means that there are uncountably many ways to screw up the design

And many ways to make it fit a given problem. It's again a matter of experience.

> And each OCaml primadonna developer thinks that his way is the right way. And the rest can't read his code. Fuck that.

Glad you shared your opinion. Apparently you have an axe to grind against the OCaml community, though. You could probably replace OCaml with any sufficiently expressive language and still be correct, assuming there are "primadonna"s in the OCaml community just as in any other. Or do you mean this happens only with OCaml?


>> Readability is mostly a matter of experience.

"It is a totally bogus claim." A single example of Perl, Brainfuck or OCaml can prove you wrong. "The only people that can't read their own Perl after 6 months are the people that don't really know Perl." "Replace 'Perl' with 'APL' or 'Brainfuck' (or any language with baroque syntax) and the above sentence is as (in)valid."


This is the ability to abstract concepts and then recognize them in different settings (for instance, the idea of a child being a miniature version of a given animal, with less pronounced traits). In order to understand the clues you are talking about, an AI first has to be familiar with the terms used in the discussed topic, so as to be able to construct a definition by itself (what is "miniature", "traits", "pronounced"). These terms' definitions must be synthesized somehow beforehand, or perhaps as the discussion goes, but then the amount of information needed in that discussion must be much larger for the AI to untangle them properly.


The original point of patents is to protect inventors and give them a head start in implementing their inventions. A company which holds patents without implementing them is basically not following the spirit of the law. Hence, there should be a drastic "countdown" on a patent's viability, which would be dispelled by producing a viable, marketed application of that patent. I would suggest a 6-month timeframe, non-renewable. For software patents, given the very abstract nature of the invention, 3 months should be enough, and the patent lifetime itself should not exceed 3 years, i.e. 12 times the countdown to market (3 years is already quite long in the software industry, although not in law). This should be retroactive. Note that I'd rather see the concept of software patents completely invalidated, but I understand the issue is quite complex.

Oh, and what should be done in the case of patent transfers? Should the new holder be subject to that countdown as well? What if the product tied to a patent is EOL'd? Should the countdown be restarted? I do think so. It's all about the practical side of things.


"The original point of patents is to provide protection to inventors and give them a head start in implementing their inventions"

Not really true; the original point of patents (in the US, anyway) in the minds of Jefferson and others who (in some cases begrudgingly) defined them was to enhance the amount of common, public knowledge by enticing inventors to publicly document their inventions rather than hold them as trade secrets. The "protection" and "head start" bits are really the payment made in return for that public documentation, not the actual reason for the system to exist. The idea that patents exist to protect "the little guy" is a modern one, ironically invented and propagated primarily by big corporations and "big law".

This makes most modern software patents (which are routinely violated by people reinventing fairly trivial systems even when having no prior knowledge of the existing patent implementations) all the more ridiculous.


Yes, this is correct. Thank you for pointing that out.

If a company produces new ideas and stockpiles them in the form of patents, is it playing the game correctly?

Is it (just) an issue in the case of software patents?


A company which hold patents without implementing them is basically not following the spirit of the law.

Playing Devil's advocate, what about a company like ARM, which designs and patents new CPU designs and then licenses them to others? We can't claim their patents aren't implemented. Should they be forced to make the CPUs themselves?

And if not, what would prevent NPEs from licensing to one small company and then suing everyone else?


> Playing Devil's advocate

Actually, you raise a good point. ARM has always done that, though; they built their own CPUs when they started, IIRC. It would have large consequences for their current business model, it's true.

> And if not, what would prevent NPEs from licensing to one small company and then suing everyone else?

It's already happening. When one of those companies wins a lawsuit, it is usually followed by them selling a license to whomever they sued...

Which is the lesser evil: forcing patent holders to produce practical implementations of their patents, or allowing NPEs to exist?

I guess my idea doesn't really hold water. Most probably, smarter people have already thought about that possibility and seen the issues you hinted at. Oh well.


Who actually produces it (you or a partner) is, at least in my mind, neither here nor there. A huge chunk of manufacturing is outsourced/subcontracted these days, so it's pretty difficult to say who the actual manufacturer of a product is anyway.

The key question for me is the motivation for the patent - did you come up with some novel idea in order to get it manufactured (and are using patent protection primarily to stop someone ripping off your invention), or did you come up with (or buy) an idea and patent it purely in the hope that some other company independently comes up with the same idea and you can then sue them for patent infringement?


How would motivation be proven in a court?


Not sure - maybe by showing a company that is manufacturing your patented product under licence, or at least by showing your attempts to find a company to do this.


> isn't it still positive for the economy?

Why? Where's the benefit for the economy? If the patent trolls somehow reinject the profits they make (i.e. what they obtain from the lawsuit minus the lawyers'/legal fees) into the economy, perhaps we could call it a positive outcome, but there is no guarantee that the infringing company would not have done the same. So where is the benefit for the economy?


The insight there is that one should always try to wrap criticism in praise: people don't like being told that they suck at their job, even if it's true. If, instead of presenting themselves as destroyers, they had adopted the image of mentors or teachers, things would've gone way better. Hopefully Google's Project Zero will be wiser than the IBM team on this point.

Note that this is even truer when criticism comes from an outsider, and Google's team will be doing exactly that. If they also deal with companies whose culture is heavily reputation-based (like in Asia), they'll have to be even more cautious.


I think providing unsolicited advice is always going to be fraught. Showing up as "mentors" and "teachers" is not going to go over well if the person you show up to teach thinks that you don't know what you're talking about. It's certainly possible that a lot of people will welcome the help, but it seems just as likely that people will say, "You come in here and think that you know our applications, but you don't know the history and the specific compromises we decided to make, etc, etc."

One problem I think is that no one ever writes the story of the major bug that got fixed in time. If you could just check the counter-factual of what would happen without security upgrades, a team like this could build a reputation for saving a company millions of dollars and reams of bad PR, and they'd be more likely to be welcomed. As it is, it can be easy for entrenched interests to make the case that security-minded people are just obsessive because, "Hey, we haven't had a breach yet!"


I meant that mentor thing in the context of IBM. I agree that it would not be much better in the case of Project Zero.

That said, I still think a positive approach (constructive criticism) cannot be worse than plain criticism.

> "You come in here and think that you know our applications, but you don't know the history and the specific compromises we decided to make, etc, etc."

That's exactly the sort of answer that team should prepare for: it is obvious to me that whatever compromises I made for my software stack, if there's a security issue, I will have to reconsider them. The whole point is to not rub it in my face, so I can accept the issue more easily (not everyone is a practitioner of egoless programming). I was also saying that with the Sony situation in mind: in Japan, losing face is an extremely serious matter. I don't know how that situation was handled by this guy, though: perhaps he did all he could to manage their feelings. It's clear to me, though, that doing it the IBM black team way is a recipe for failure.


> In my opinion schools should stick to basic teaching,

Define basic teaching.

> a high percentage of french teenagers cant even write french at the end of highschool,

Reference?


China has a team developing a MIPS-like architecture, with several iterations produced: https://en.wikipedia.org/wiki/Loongson

I don't know if Russia has the industry to mass-produce it, but it surely has the skilled people to design it, and it could probably use foreign fabs for the remaining steps if necessary.

As for the software, with an established ISA, they could quickly leverage existing open source solutions. Of course, Windows and OS X support will be a problem (given that they are the major desktop players).

Politics or not, they know it's doable.


If they are concerned about bugged hardware (which is difficult and expensive to pull off), I don't think "bugged" software (which is much cheaper to replace) will be on their shopping list either...


True; what I meant is that the consumer software marketplace was originally built around those platforms. Over the last several years the market has been largely fragmented by tablets, but I have the impression those platforms still represent a major segment. That said, industries and services may more easily accommodate the absence of these players.


If you're just worried about bugged hardware, why develop a new architecture? Can't you use your own clean room implementation of an existing architecture?


Yes, you're right, they could. But I suspect the decision comes down to things like:

- If it proves actually good, they might sell some of those chips too, developing their local high-tech sector (even a tiny piece of the market is better than none at all, especially if their own agencies kickstart it).

- It raises the cost for foreign intelligence of writing software for that platform: their arsenal of viruses and trojans is completely useless unless they acquire detailed specifications of the platform, find some test chips, develop new software and finally infect their target... which takes time and money.

- They get some (positive, for once) international press.

- They can parade how awesome Mother Russia is for their own nationalist egos.


Someone else mentioned that the architecture is based on SPARC technology (Sun's line of CPUs, which is a MIPS derivative, IIRC). It's hardly a completely new architecture.


SPARC isn't a derivative of MIPS.


Thanks for correcting me: indeed, it's an open architecture that was used by Sun, Fujitsu and TI for their CPUs, and it's unrelated to MIPS.


However, how difficult would it be for a foreign fab to rig a customer's design and include backdoors?


> OCaml uses non-deterministic garbage collection, whereas python mostly uses reference counting (in CPython). This is another case where the first system was safer in one aspect.

How is reference counting safer?


It is deterministic, meaning you know how it will behave each time the program is run, whereas garbage collection can behave differently each time the program is run.


GC vs. reference counting has nothing to do with determinism: for the same inputs, the same allocation profiles and the same GC events will happen.

Note that CPython's reference counting is by no means simpler than OCaml's incremental GC. CPython has heuristics to detect cycles (which would normally cause memory leaks in a pure reference-counting system), as well as a number of heuristics to optimise performance based on locality and temporal properties of object allocation. All of these make the CPython runtime less predictable (but faster).
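To illustrate the cycle-detection point, a small CPython-specific sketch: a reference cycle that pure reference counting can never free, reclaimed by the `gc` module's collector:

```python
import gc

class Node:
    def __init__(self):
        self.partner = None

gc.disable()                   # rely on pure reference counting for a moment
a, b = Node(), Node()
a.partner, b.partner = b, a    # a and b now reference each other
del a, b                       # each refcount stays >= 1: the cycle leaks
leaked = gc.collect()          # the cycle detector finds and frees the garbage
assert leaked >= 2             # at least the two Node objects were unreachable
gc.enable()
```

This is exactly why CPython isn't a "pure" reference-counting system: without the separate collector, the two nodes above would live until the process exits.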

The OCaml GC is extremely predictable and has very few tricks. It's explained in one chapter of Real World OCaml: https://realworldocaml.org/v1/en/html/understanding-the-garb...


When is the GC activated, exactly? When there is not enough memory available. This is the problem, because it changes depending on the data the program is processing.

So if you run your program with different inputs (why would you run it if the inputs were always the same?), the GC will be activated in entirely different parts of your program!

This makes it non-deterministic, because objects in OCaml can be destroyed at different times in your program depending on the inputs. With reference counting, they are destroyed at exactly the moment they have no references.
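A small CPython-specific sketch of that contrast: under reference counting, the finalizer runs at the exact statement where the last reference disappears, regardless of allocation pressure:

```python
# CPython-specific behaviour: __del__ runs synchronously when the
# refcount hits zero, so the event order below is fully predictable.
events = []

class Resource:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        events.append(f"closed {self.name}")

r = Resource("a")
events.append("before del")
del r                          # last reference dropped: __del__ runs here
events.append("after del")

assert events == ["before del", "closed a", "after del"]
```

Under a tracing GC, by contrast, the `"closed a"` event would land wherever the next collection happens to run.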

With the OCaml GC, the code can run in a different order depending on GC state and on the program inputs.

The bug in the original article was caused by the GC finalising an object at a time the author did not consider, probably because the code assumed the reference-counting behaviour of finalising as soon as there are no references.

"OCaml's automatic memory management guarantees that a value will eventually be freed when it's no longer in use, either via the GC sweeping it or the program terminating"

So, it can guarantee your object will be freed... but only when the program terminates? That's not a very strong promise.


Well, of course it can't guarantee that your object will be freed beforehand if the GC doesn't get a chance to run before the program terminates. It does say that if the GC runs, the object will be freed if it's no longer in use.


> This makes it non deterministic because objects in OCaml can destruct at different times in your program depending on the inputs.

There's clearly a different notion of deterministic here! If your program using refcounting gets a different input, the same object might be released at a different time as well, or am I misunderstanding?

> "OCaml's automatic memory management guarantees that a value will eventually be freed when it's no longer in use, either via the GC sweeping it or the program terminating"

> So, it can guarantee your object will be freed... but only when the program terminates? That's not a very strong promise.

"A or B implies C" is not the same as "B implies C". There's a logic mistake there.


I think illumen is referring to the concept of "deterministic destruction", which exists in refcounted languages but not in GC'd ones. It's not quite the same as "determinism".


Oh, OK. So does every Python implementation follow that practice?


No, just CPython I think.

