
It's always possible to sandbox the processes. In that sense, it's not very different from the various http://try.$PROGRAMMINGLANGUAGE.org sites. With most sandboxing techniques, however, performance can be problematic.

What's peculiar about LaTeX is that its compilation model is ridiculously inefficient. There's no separate compilation, so you have to re-run the whole compilation at the tiniest change. There's also a lot of I/O involved, and several passes are needed. For example, to display the table of contents, a list of commands (emitted by \section{} etc.) is appended to a temporary file; those commands are then interpreted by \tableofcontents on a following pass, for which you have to manually re-run the compiler.
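To make the multi-pass mechanism concrete, here's a minimal sketch (the file name example.tex and the two manual runs are just illustrative assumptions): the \section commands write table-of-contents entries out to example.toc via the auxiliary files, and \tableofcontents can only typeset what a previous run left there, so you need at least two compilations before the table of contents shows up.

    \documentclass{article}
    \begin{document}
    \tableofcontents        % reads example.toc written by the *previous* run
    \section{Introduction}  % emits a TOC entry for the next run to pick up
    Some text.
    \section{Results}
    More text.
    \end{document}

Run pdflatex example.tex twice: the first run produces a PDF with an empty table of contents, the second fills it in.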



Has anyone ever tried to refactor and sort LaTeX out? Given that it's such a widely used and important piece of software, it really surprises me that there are so many problems left in the code base. It is open source, after all.


> [...] since Knuth highly values the reproducibility of the output of all versions of TeX, any changed version must not be called TEX, TeX, or anything confusingly similar.

I have heard that this is the reason some don't want to "fix" TeX/LaTeX.

Source: http://en.wikipedia.org/wiki/TeX#License

LaTeX also has a similar clause (or whatever you call it): http://en.wikipedia.org/wiki/LaTeX_Project_Public_License#Un...


To be clear, an implementation is allowed to be called TeX if it passes a test suite:

> To enforce this rule, any implementation of the system must pass a test suite called the TRIP test[39] before being allowed to be called TeX


I don't think that there is any particular problem in the code base. Its design, however, was made with 1970s constraints in mind.

A lot of people have thought about rewriting (parts of) TeX, but the amount of legacy is so high, and you have to keep compatibility with lots of existing user code.


So you don't think there is a problem in the code base, but you say "the amount of legacy is so high and you have to keep compatibility with lots of existing user code". That's a contradiction.


I mean that the code works perfectly in the sense that the behaviour conforms to the specification, but it is hard to modify. So there is a problem with the code base, just one that is not visible unless you want to extend the program (one could argue that's the worst kind of problem).


There are (at least) XeLaTeX, pdfTeX, and LuaTeX. There's also ConTeXt, which is not LaTeX-compatible and extends TeX in a different direction.


I have tried all of them extensively (except ConTeXt). What I can tell you is that I now use LuaLaTeX, because it's the natural successor to the other implementations, which all have a really hard time with Unicode, i.e. with "→" and many other Unicode characters. I know about the various hacks, like embedding a PDF document with the required symbols, but that just reflects how antiquated the LaTeX architecture is. The built-in algorithms may be clever, but the actual code base and annual release cycle are archaic and unmaintainable. The cross-dependencies run so deep that updating one small package can break anything at any time. Spaghetti code deluxe. Guess why no distribution ships "tlmgr", the TeX package manager.
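For what it's worth, here's a minimal sketch of what I mean about Unicode (the font choice is my assumption; any installed OpenType font containing the glyph works): under LuaLaTeX with fontspec you can type "→" directly in the source, whereas with classic pdfLaTeX you'd need inputenc plus a symbol package or one of the hacks mentioned above.

    \documentclass{article}
    \usepackage{fontspec}          % LuaLaTeX/XeLaTeX font loading
    \setmainfont{DejaVu Serif}     % assumed installed; any font with the glyph will do
    \begin{document}
    The arrow → can be typed directly in the source, no escape sequences needed.
    \end{document}

Compile with lualatex file.tex.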

I'm thankful for anything that helps free us from the *TeX compilers. It's surely hyped too much. Do you believe me when I say that a document written in LaTeX is not auto-magically typographically perfect? Look at PDF documents compiled with LaTeX: many of them have really ugly typography, because people love to customize things to their own liking, and many LaTeX templates even force ugly typography on their users.


> What's peculiar about LaTeX is that its compilation model is ridiculously inefficient. There's no separate compilation, so you have to re-run the whole compilation at the tiniest change.

Is there scope for LaTeX to have "delta compilation"? This is particularly needed if you make presentations with LaTeX (Beamer). The full recompilation becomes pretty painful once the number of slides (including transitions) goes past ~20. There are currently some "workarounds" for it, but it would be nice if such a feature existed.
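One of those workarounds, at least for Beamer, is \includeonlyframes: give the frames you're editing a label and compile only those, so the rest of the deck is skipped. A rough sketch (the labels are made up):

    \documentclass{beamer}
    \includeonlyframes{current}        % comment out to build the full deck
    \begin{document}
    \begin{frame}[label=intro]{Introduction}
      Slides you are not touching right now are skipped.
    \end{frame}
    \begin{frame}[label=current]{Work in progress}
      Only this frame is compiled while the line above is active.
    \end{frame}
    \end{document}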


I use https://www.writelatex.com/, where I have a 17-page PDF of math exercises. I don't know how they deal with it, but apparently they manage some kind of incremental recompilation, because when I make a modification to the LaTeX code online, the change appears almost instantaneously in the preview PDF beside the code!


You are lucky. I am working on an ~80-page report and it takes 40 seconds for a new word to appear.


I'm astonished by that number. I have a 400-page book, with a 150-page answers-to-exercises section that is emitted as part of the book compilation and then compiled on its own. I use a simple shell script that compiles each twice (for the cross-referencing). It takes perhaps 10 seconds on my five-year-old, middle-of-the-road laptop. Are you remaking complex drawings each time? What is the performance sink, I wonder?


Sorry, it seems there was a misunderstanding. That time is on writelatex.com, not my local machine. Locally it's a normal 7-8 seconds. Just text and about 50 PNG/JPG images.


My workaround for cutting down compilation time is to split the document into parts and only compile the part I'm working on.


I have tried a similar workaround, but it gets a bit messy if you are in the paper-publication business (which I am). Camera-ready copies (final publication) suck the remaining "workaround motivation" out of you.


In case people don't know, there's a built-in command to do this: \includeonly.
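A minimal sketch of how that looks (the chapter file names are made up): the parts have to be pulled in with \include rather than \input, and \includeonly in the preamble restricts which ones actually get compiled, while page numbers and cross-references for the excluded parts are kept from the last full run.

    % main.tex
    \documentclass{report}
    \includeonly{chapter2}   % compile only chapter2.tex; comment out for the full build
    \begin{document}
    \include{chapter1}       % each chapter lives in its own .tex file
    \include{chapter2}
    \include{chapter3}
    \end{document}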


I am not sure they are doing anything "incremental". From what I understand, they have a real-time preview that compiles the document in the background.



