Algorithmatic.com: a repository and dev. env. for algorithms (algorithmatic.com)
28 points by ANaimi on Aug 29, 2009 | hide | past | favorite | 24 comments


"To view this page you need Microsoft Silverlight 3 plug-in."

I don't think that was a good idea.


If you design stuff for programmers, it should be as platform-independent as you can possibly make it.

I have a hard time with sites that use Flash (Scribd), or Silverlight as the one mentioned above does, for stuff that should be done with server-side code or DHTML.

I realize that in the case of scribd that might be a bit much to ask, but it looks as though they gave up on page '1'.


Weird thing is that I already have Moonlight (Mono-powered Silverlight) installed, but it still isn't working :S


I'm not sure why one would choose to develop with Silverlight over Flash - straight away you've cut out the majority of your user base.


Agreed! It's a really irritating aspect of the site.


"Algorithmatic, by itself, is a small and simple dynamically typed Object Oriented programming language that serves as a common denominator among popular programming languages. This means that Algorithmatic doesn’t rely on any exclusive/new feature – this guarantees the ability to port descent implementations to any other language."

I don't get it. How is a bunch of common algorithms implemented in a custom site-specific language particularly helpful?

Pseudocode for algorithms is widespread, as are working implementations in almost every language you can think of. How is porting from this "algorithmatic" language any better than porting from (say) a Java implementation?

from the FAQ, "Implementations of all type of algorithms do exist wildly on the internet. However, these implementations differ in quality as much as they differ in syntax (or programming language.) "

So how does adding a new syntax (and behind the syntax, an untested interpreter/compiler, runtime etc) and inviting random people to submit algorithms in this new language solve this problem?

I must be missing something.


From the perspective of implementing an individual algorithm, it's easier to just google an implementation. But pretend you're writing the standard library internals for a new language['s reference implementation]—where do you start? How do you compare algorithmic approaches, each written a slightly different way, to know which one you want to use? For that matter, how do you know which algorithms you "need" to put into your language (given that if you don't include it, your users will probably just write it themselves instead of complaining to you)?

I think the final goal, actually, would be to have an easy grammar spec that allows automated transformation of this particular code into a given language. That way, all languages, no matter the platform, can have the same core set of algorithm code. This would become the "language-neutral" encoding of the algorithms, open to inspection, and the other libraries produced from it would just be considered object code, not to be modified themselves. Whenever a flaw was found in any language's version of the library, the algorithm or the translating parser could be updated, and the flaw thus fixed in every version of the library. Whenever you came to a new language, you could rightfully expect the same core to be available. Whenever you started writing a new language, you'd get the core "for free" (that is, for the price of writing a translator parser spec).
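To make the idea concrete, here's a toy sketch (in Python, purely for illustration; every name in it is invented, not anything the site provides) of one "language-neutral" description being emitted into two concrete syntaxes:

```python
def emit_counted_loop(lang, var, n, body):
    """Render the neutral description 'repeat body n times, binding var'
    in a given target language's surface syntax."""
    if lang == "python":
        return f"for {var} in range({n}):\n    {body}"
    if lang == "c":
        return f"for (int {var} = 0; {var} < {n}; {var}++) {{ {body}; }}"
    raise ValueError(f"no emitter for {lang}")

# The same neutral loop, rendered for two targets:
print(emit_counted_loop("python", "i", 10, "total += i"))
print(emit_counted_loop("c", "i", 10, "total += i"))
```

Of course, real algorithms need far more than loop skeletons (types, memory, error handling), which is where the hard part of such a translator would live.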


"But pretend you're writing the standard library internals for a new language['s reference implementation]—where do you start?"

(Books by) Knuth? Cormen?

Seriously though, if someone implementing "standard libraries" for a new programming language has to look up implementations in some site-specific, untested language, it doesn't bode well for the new language's future.

People who write such high-impact libraries should be well versed in algorithmics, and be able to (1) analyse and port pseudocode, (2) read and comprehend published papers (with mathematical proofs, etc.), and (3) read, comprehend, and port existing implementations.

"I think the final goal, actually, would be to have an easy grammar spec that allows automated transformation of this particular code into a given language. That way, all languages, no matter the platform, can have the same core set of algorithm code."

Why not write an automated translator for (say) Java or C then, vs. throwing yet another language into the mix?

Besides, this sounds too idealistic to me. But I guess seeing is believing. I'll believe it when I see a new language's implementor take this path for a standard library.

I can't think of anyone good enough to write a robust, performant language interpreter/compiler and runtime balking at implementing algorithms and needing automated translation from a website.


> I can't think of anyone good enough to write a robust, performant language interpreter/compiler and runtime balking at implementing algorithms and needing automated translation from a website.

You're assuming we're talking about big, robust, powerful languages here. This wouldn't be for those, because those can survive without any help. We're talking about a long tail for language design: if you have an idea (say, Erlang's actor-model-as-core) and want to try it out, writing a standard library is a big barrier to putting out something satisfactory enough for "normal" programming that others will want to tinker on your language and help it grow. Imagine if Linus had to invent C when he was starting off with his tiny little prototype of Linux. We have rapid app development; why can't we have rapid language development?


"You're assuming we're talking about big, robust, powerful languages here."

No I wasn't. I said "anyone good enough to write a robust, performant language interpreter/compiler".

I said nothing about the language size. Small languages can be very powerful. You added the "big" adjective.

"If you have an idea (say, Erlang's actor-model-as-core) and want to try it out, writing a standard library is a big barrier to putting out something satisfactory enough for "normal" programming that others will want to tinker on your language and help it grow. "

And do you really think that a collection of random algorithms in an untested language on a website would solve this problem for them?

The way to test out that idea would be to use an existing language (like Scheme or C) to build an interpreter or compiler for the language you have in mind, instantiating its specific programming model; write an application or two in your new language; and write the minimum amount of libraries you need, not build a "standard" library by automatic translation from another language.

In the meanwhile you use some kind of FFI to access the underlying language/OS libraries (e.g. arc using PLT Scheme's networking libs).

Besides, the kinds of algorithms a "standard library" has are not the ones built on the site (Fibonacci and Caesar's cipher in a standard library?).

A "standard library" has collections, OS interfaces, GUIs, regexes, networking libraries, and so on. I don't see those kinds of algorithms being portable across languages, not to mention paradigms. Or were you thinking that the new language would be a "small and simple" sequential object-oriented language, which is what the site's language seems to be?

I quote from the site

"Algorithmatic, by itself, is a small and simple dynamically typed Object Oriented programming language that serves as a common denominator among popular programming languages".

Says who?

How do you define "common denominator" among "popular programming languages" (like C, C++, Python, Java, C#, JavaScript, PHP, and Objective-C)? I'd like to see a "common denominator" for such languages defined better. And assuming you did design such a "common denominator", how would you go about automatically rewriting those algorithms to use an "erlang actor model"?

As I understand it, automatically converting a sequential, non-referentially-transparent program to a concurrent one is still a research topic. So your "erlang actor model" language won't necessarily get any benefit from algorithms written in this site's language anyway.

If I understand you right you are saying that someone designing languages (like PG is doing with arc) will now go and write some kind of automatic translator from this site-specific language to their own to get a "standard library"?

I've never heard of any language designer anywhere ever doing that. It may be that such things have happened and I haven't heard of them. Any real-world examples (vs. suppositions of what could happen in the future) you have of such an effort would contribute greatly to the discussion.

I think (please correct me if I am wrong) that you are hypothesizing that such a thing might work, vs. having any successful real-world examples of such an approach. I don't think there is a real problem language designers have that this site can solve.

If you have any examples of a language designer creating a standard library for his language by writing a translator for algorithms written in another language and running that translator, please give citations.

And if someone were (crazy enough) to do something like this, I strongly suspect they would prefer to get some well-tested algorithms written in the same language/paradigm "family" to "port" than some random collection of algorithms in a vaguely defined language on a random website.

IOW, even if a language designer were to create another dynamic object-oriented language, he'd do better to attempt porting well-tested and robust Smalltalk libraries than anything from (a future version of) this site.

As a thought experiment, please imagine automatically converting Smalltalk code to, say, Haskell or Erlang. If that is too hard today, why even bother with the silly algorithms on this site?

And I am not sure an automatic translation would work even in that case. Unless the new language were an exact clone of Smalltalk, this "automatic" translation would have so many places where the designer would have to jump in and hand-code stuff that it just wouldn't work out in practice.

I would really appreciate any counter examples of a language designer actually building a standard library like you claim can be done.


No, no such thing exists. No one has ever tried to do such a thing before. I'm actually quite confused as to your saying "before any such claims are made"—I have made no claim that such a thing does exist, only that this site could be used to achieve the existence of such a thing. Specifically, I don't support the claims the site makes that it already achieves these things.

As far as I know, though, this non-existence is because no one has ever thought of doing such a thing before, not that it's been tried and didn't work out. No one knows if it can or cannot be done, because it has never been attempted. That's sort of why I had to explain the finer points of the idea, rather than just calling it "an algorithm warehouse" or whatever it would be dubbed if there had already been a prototypical example to classify.

Also, I meant "big" as in the phrase "big in Japan"—well-known or, at the very least, actively in development. A big commit log, in other words.

To be frank, I haven't actually looked at the site we're both talking about here (and Silverlight isn't quite executable on my machine, anyway.) From first glance, it looked to be trying to be a sort of encyclopedia of good, tested implementations of well-known algorithms for hashing, scheduling, compression, garbage collection, error detection, sorting, pathfinding, and so on—the sorts of things you think about when you think "algorithm"—but failing, because it wasn't reaching the right content-creating audience.

These implementations could certainly be written in C, but C isn't a very machine-transformable language—Scheme or Forth would probably be a better choice. The point, though, wouldn't be to transform the syntax (because that's the easy part) but to transform the paradigm—to munge typecasts into real types, or switch loops between iterative and recursive form, or, at its simplest, transform between OOP and functional styles. "Hand-writing" the examples would negate the point, which is to let a computer understand how to write in your language, given pseudocode (or a given Well-Known Language.) Basically, it would apply the goals of RDF to code instead of data.
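As a trivial illustration of the loop transformation mentioned above (Python here purely for concreteness; the hypothetical translator would make this switch mechanically, depending on what the target language prefers):

```python
def total_iterative(xs):
    # Accumulate with an explicit loop -- the form an
    # imperative target language would want.
    total = 0
    for x in xs:
        total += x
    return total

def total_recursive(xs):
    # The same computation expressed recursively -- the form
    # a functional target language might prefer.
    if not xs:
        return 0
    return xs[0] + total_recursive(xs[1:])

assert total_iterative([1, 2, 3]) == total_recursive([1, 2, 3]) == 6
```

The hard cases are exactly the ones where this mapping isn't one-to-one, which is the disagreement being hashed out in this thread.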

If it was this thing, then my argument would hold, and you would also be agreeing with me. I'm beginning to suspect our disagreement lies in the fact that the site itself doesn't live up to the potential that I've laid out here, although doing so would be no more effort. Perhaps I should make my own site?

(And just so you know, yes, I'm talking about a completely hypothetical future thingy—this is what I think language design will be like in the future, thus my comment of "why can't we have rapid language development?" It would be completely different than what we're doing now, which is reading reference algorithm books (or not-too-well-known-and-quite-recent papers, where this would be much more of a help) and turning the pseudocode into a "pure idea" in your head, and then re-encoding the "pure idea" as code in your language. Instead, you'd teach the computer to do what you're doing, and then just give it the site's URL as training data—in other words, making your compiler an AI. I think this site is a good first step toward that—getting all the world's algorithms in one place, so we can build the training dataset. That's all it is, though, and all I claim it to be.)


"I'm actually quite confused as to your saying "before any such claims are made"—I have made no claim that such a thing does exist, only that this site could be used to achieve the existence of such a thing"

I am challenging the claim that "this site could be used to achieve the existence of such a thing". The claims on the site itself are (imo) not worth challenging.

I am saying that languages differ across so many dimensions (sequential vs. concurrent, and within concurrency different models, say threads vs. message passing; paradigms, i.e. objects vs. functions vs. imperative; many kinds of type systems; memory management strategies; and so on) that "automatic translation to get a standard library" is a doomed effort with today's technology, in spite of the advances made to date in operational and denotational semantics.

At best, a lot of research needs to be done to make such a thing possible.

Is it possible? I guess, eventually. Is it probable in, say, the next five years? Imnsho, no.

And with that I have said everything I wanted to and this thread is getting too large, which is always a sign that it is time to stop.

Good luck with any effort you make to write such an automatically translatable collection of algorithms. If you can pull it off, I'll be the first to admit my error.

Over and out.

EDIT: I see you added "I'm talking about a completely hypothetical future thingy"

and

"in other words, making your compiler an AI".

OK then, I withdraw my arguments. Sufficiently advanced technology is, after all, indistinguishable from magic, as Arthur C. Clarke said.


The implementations here don't look that great.

Just take a look at the Naive Prime Generator. I know they call it naive, but it's very naive, to the point where, when checking whether i is prime, it tries to see if numbers greater than i evenly divide it!

There are other optimizations they could toss in there (e.g., since they keep an array of prime numbers, they need only divide by those), but you could try to argue that these make the code harder to understand... so for those I'm willing to give them a pass.
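For concreteness, here's a sketch in Python (the site's own code isn't reproducible here) of trial division with both fixes mentioned above: dividing only by the primes already found, and stopping once the candidate divisor squared exceeds the number being tested:

```python
def primes_up_to(limit):
    """Generate all primes <= limit by trial division, but only
    against previously found primes, and only up to sqrt(i)."""
    primes = []
    for i in range(2, limit + 1):
        is_prime = True
        for p in primes:
            if p * p > i:
                # No divisor above sqrt(i) can be the smallest factor.
                break
            if i % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(i)
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Still far from a sieve, but it never divides by anything larger than sqrt(i), let alone anything larger than i.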


I left the site as soon as I typed in quicksort and nothing came up.


Usually I just close the window when asked to upgrade Silverlight, this seems like a good enough reason to upgrade (as much as I dislike it). Really though, why is it needed here? Not trying to hate, just curious...


I see a lot of naysayers here, but I think this is a good idea. I was actually considering doing this myself a while ago. One of the best things about Matlab is the community of plugins and algorithms, and having a somewhat central repository that makes them easy to find.

That being said, requiring Silverlight is a deal breaker for me. I'd also like to see something more like github, where it's easy to fork and make improvements to other people's code.


When I first opened the page, and saw the front page, I went "Man, this is gonna be great. An in-browser IDE (probably using some site specific syntax) to try out the algorithms. Given a common run-time and an assortment of different types of datasets, you can try out different methods of optimizations right there in the browser without having to do the tedious parts." Then I actually clicked on some links. Then I saw what it was really doing. Then I was sad because I made assumptions.

Seriously though, I'd completely redesign the site. Needing Silverlight to view the algorithms is just insanity. Add in a large amount of varied datasets, and make the goal an easy way to experiment with and share algorithms/optimizations (i.e., you have 20 different variants of the quicksort algorithm; a search for quicksort returns all 20, with graphs indicating the performance of each on each dataset).

As it stands, Wikipedia is a better reference for algorithms, using well-defined and concise pseudo-code to demonstrate the algorithm.


Anyone want to compare this to http://rosettacode.org/ ?


'Type algorithm name here...' "Simplex" → 'your search term didn't match anything'

They forgot the "beta" badge?


right, because everyone needs a quick simplex algorithm under their pillow


I think it would be fitting to have in a repository of algorithms; you don't?

Maybe I didn't understand the purpose of the website from its frontpage. (Which is a failure, either mine or the frontpage's.)


Heh, the search returned nothing for "quick sort" and "hoare".


Doh! I thought it was the web page for a former U.S. Vice-President's new band. :(


This is a great project, looking forward to it bustling with code.



