Conc: Better Structured Concurrency for Go (github.com/sourcegraph)
254 points by aurame420 on Jan 11, 2023 | 154 comments


The WaitGroup looks suspiciously like errgroup, which even has the .WithMaxGoroutines() functionality: https://pkg.go.dev/golang.org/x/sync/errgroup

> A frequent problem with goroutines in long-running applications is handling panics. A goroutine spawned without a panic handler will crash the whole process on panic. This is usually undesirable.

In Go land, this seems desirable. Recoverable errors should be propagated as return values, not as panics.


> The WaitGroup looks suspiciously like errgroup

I heavily used errgroup before creating conc, so the design is likely strongly influenced by that of errgroup even if not consciously. Conc was partially built to address the shortcomings of errgroup (from my perspective). Probably worth adding a "prior art" section to the README, but many of the ideas in conc are not unique.

> In Go land, this seems desirable.

I mostly agree, which is why `Wait()` propagates the panic rather than returning it or logging it. This keeps panics scoped to the spawning goroutine and enables getting stacktraces from both the spawning goroutine and the spawned goroutine, which is quite useful for debugging.

That said, crashing the whole webserver because of one misbehaving request is not necessarily a good tradeoff. Conc moves panics into the spawning goroutine, which makes it possible to do things like catch panics at the top of a request and return a useful error to the caller, even if that error is just "nil pointer dereference" with a stacktrace. It's up to the user to decide what to do with propagated panics.
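To illustrate the mechanism in plain Go (a hand-rolled sketch of the pattern, not conc's actual code): the child goroutine recovers, hands the panic value and its stack to the spawning goroutine, and the spawning goroutine decides what to do with it.

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    // caughtPanic carries the recovered value and the child's stack.
    type caughtPanic struct {
        value any
        stack []byte
    }

    // goCatch runs f in a new goroutine and reports any panic on the
    // returned channel instead of crashing the process.
    func goCatch(f func()) <-chan *caughtPanic {
        done := make(chan *caughtPanic, 1)
        go func() {
            defer func() {
                if v := recover(); v != nil {
                    done <- &caughtPanic{value: v, stack: debug.Stack()}
                    return
                }
                done <- nil
            }()
            f()
        }()
        return done
    }

    func main() {
        if p := <-goCatch(func() { panic("boom") }); p != nil {
            // The spawning goroutine decides: log, return a 500, or
            // re-panic with context from both sides.
            fmt.Printf("child panicked: %v\n%s", p.value, p.stack)
        }
    }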


A "prior art" section is especially useful for people evaluating your library!


> That said, crashing the whole webserver because of one misbehaving request is not necessarily a good tradeoff. Conc moves panics into the spawning goroutine, which makes it possible to do things like catch panics at the top of a request and return a useful error to the caller, even if that error is just "nil pointer dereference" with a stacktrace. It's up to the user to decide what to do with propagated panics.

The problem is that panics aren't "goroutine scoped" in terms of their potential impact. So it really shouldn't be up to the user to decide how to handle a panic. Application code shouldn't handle panics at all! They're not just a different way to yield an error, they're critical bugs which shouldn't occur at all.


> panics aren't "goroutine scoped" in terms of their potential impact

I'm with ya there. However, there are also many classes of logic errors that are not goroutine-scoped. And there are many panics that do not have impact outside of the goroutine's scope. In my experience, this is true of most panics.

In practice, panics happen. They are (almost) always indicative of a bug, and almost always mean there is something that needs to be fixed. However, if a subsystem of my application is broken and panicking, there's a pretty good chance that reporting the panic without crashing the process will provide a better end user experience than just blowing up.

Yes, that means I'm accepting the risk that my application is left in an inconsistent state, but coupled with good observability/reporting, that's a tradeoff I'm willing to make.

(bonus: this is especially true when propagating panics allows me to capture more debugging information to fix the panics faster)


> In practice, panics happen.

I guess this is the crux of the issue. I don't think this is true, or needs to be true. It certainly hasn't been my experience. I think assuming panics are normal will take you down some paths that make it basically impossible to write reliable software. But, to each their own.

> I'm accepting the risk that my application is left in an inconsistent state,

Inconsistent state makes it impossible to reason about your program's execution or outcomes. An account value that previously had balance = 0 may now have balance = 1000. Is this acceptable risk?


No. Defers run during panics for exactly this reason, so you can in fact guarantee that is not the case.
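A sketch of that guarantee, using a hypothetical Account type: the deferred rollback restores the invariant before the panic continues unwinding, so a recovering caller never observes the half-applied balance.

    package main

    import "fmt"

    type Account struct{ balance int }

    func (a *Account) Credit(amount int, apply func()) {
        old := a.balance
        defer func() {
            if r := recover(); r != nil {
                a.balance = old // undo the half-applied update
                panic(r)        // then let the panic keep unwinding
            }
        }()
        a.balance += amount
        apply() // may panic partway through the operation
    }

    func main() {
        a := &Account{}
        func() {
            defer func() { recover() }()
            a.Credit(1000, func() { panic("downstream failure") })
        }()
        fmt.Println(a.balance) // 0, not 1000: the defer restored it
    }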

Runtime-safety "panics" in Go, like concurrently modifying and iterating a map (which can corrupt other memory), tend to abort the whole process immediately rather than being suppressable panics.


> Runtime-safety "panics" in Go, like concurrently modifying and iterating a map (which can corrupt other memory), tend to abort the whole process immediately rather than being suppressable panics.

https://go.dev/doc/effective_go#panic

> The usual way to report an error to a caller is to return an error as an extra return value. . . . But what if the error is unrecoverable? Sometimes the program simply cannot continue. For this purpose, there is a built-in function panic that in effect creates a run-time error that will stop the program

Panics express unrecoverable failures. This is plainly stated in the language documentation. There are exceptions to this rule, but they are exceptional.


That's a style decision, not a correctness issue. You are claiming it is a correctness issue.


It is absolutely a correctness issue. Panics do not provide safety guarantees that generalize enough that it is safe to arbitrarily recover from them. The statement in the previous sentence is not a subjective opinion, it's a statement of fact. I'm not sure how else to convey this information.


Panics do not violate any runtime guarantees, and defers run in the presence of panics.

All safety guarantees that would hold if there were no panics still hold with them.


When some bit of code invokes `panic` it is saying that there is an error which is unrecoverable, and the default expectation is that the process will terminate. There is no way to assert that panics do not violate runtime or memory model expectations. They can.


> An account value that previously had balance = 0 may now have balance = 1000. Is this acceptable risk?

Your entire web app process crashes due to a panic every time a request triggers an extremely rare edge case. A hacker discovers this and uses it to conduct a DoS attack. Is this acceptable risk?


Yes, definitely preferable! Denial of service is definitely better than invalid state, right?


Why the heck are you writing web apps that panic?


This is equivalent to asking "Why the heck are you writing code with bugs?"

Sure, if we could write code without bugs, we wouldn't need to suppress panics. But since we do tend to write code with bugs, and some of them are bugs that can be detected by the runtime, we get panics.

If you hate panics, you can do better than Go and go for a language with a stronger type system, where you won't get nil pointer panics or interface conversion panics. But even an almost onerously typesafe language like Haskell still panics on some bogus operations, such as division by zero or taking the head of an empty list. Perhaps Idris really has no runtime errors, but it is quite niche.


It is pretty easy to have accidental panics in Go, for instance due to a runtime assertion that unexpectedly failed.


Runtime assertions without defensive checks are programmer errors that are not difficult to spot in code review and should not be expected to make it to deployed code.

    // RED FLAG: panics if y does not hold a T
    x := y.(T)

    // good: the comma-ok form never panics
    x, ok := y.(T)
    if !ok {
        return errors.New("unexpected type")
    }


Because people make mistakes?


Classic Go programmer. This is why I use rust B)

(joke)


Joking aside, you could clearly plot the probability of running into a runtime error by programming language.

Of course, a language with fewer runtime errors is a far cry from being a panacea. Avoiding runtime errors is not the same as avoiding all categories of bugs. And while I personally prefer stronger type systems, they definitely come with increasing levels of cognitive cost.

But I still feel that the type-safety vs. runtime trade-off is more often ignored, underestimated or undersold than it is being hyped. Yes, certain languages (cough Rust cough) are being hyped, but not the conscious choice of balancing programmer learning curve with runtime type-safety.

And while on the topic of Rust, it's probably not the best choice for a language that sees fewer runtime panics, especially since unwrapping is always the easiest way to handle an error, and thus quite common. But lazy error unwrapping aside, Rust does avoid null dereference exceptions, type casting exceptions, and most types of race conditions, which can be quite prevalent with Go [1].

[1]: https://songlh.github.io/paper/go-study.pdf


> assuming panics are normal will take you down some paths that make it basically impossible to write reliable software

Na, citation needed. Assuming "panics are normal" is just extrapolating from "errors are normal". It makes reliable software more reliable.


It's pretty obvious that it could influence new developers in the wrong direction though, saying things like "ha, let's not bother checking this, at worst it'll just panic and I'll simply abort the request".

Which would definitely impact the quality of the software overall in a bad way.


I'd not be so sure. Accepting that everything that can fail will fail shaped me as a young developer, and "Exceptional C++" had a huge influence on me. Now my approach for new code I review is this:

* Make sure you support properly unrolling the stack

* Keep a clean failure boundary, probably somewhere on top of your loop

* Fastidiously check your preconditions

* Fail brutally if they're not met

* Improve from there


Right, all of these are good points, but the problem is that the "failure boundary" of a panic is the entire process. You can't constrain it, or assume that it's scoped to a single goroutine. Errors do not have this property.


> the "failure boundary" of a panic is the entire process.

This is trivially falsifiable by panicking yourself and immediately recovering. Neither failure domain nor failure boundary need to align with the entire process.


The impact of a specific panic does not extrapolate to the impact of all panics, and my claim is not falsified by such an example. Panics are defined by the language to represent unrecoverable errors.

    func (x *Thing) Method() {
        x.somethingThatPanics()
        x.somethingThatAssumesTheAboveDidntPanic()
    }
Recovering from a panic thrown by Method invalidates the state of the Thing which threw that panic. If that Thing is shared among concurrent actors, the entire program state is invalidated.


> If that Thing is shared among concurrent actors

You're adding preconditions to your claim.

> the entire program state is invalidated.

No, the state represented by a connected graph of variables accessible by the concurrent actors is tainted. This is hardly "the entire program state". Often it's just a few cache entries.

Also, see my first, most important, bullet point:

"* Make sure you support properly unrolling the stack."

Which means a request to Thing errored out, but it never enters an invalid state. If you fail at that, all bets are off. But then you're dealing with a mediocre codebase anyway.

And finally, let me rewrite your example to something that I see much more often in real-life code, and which is problematic _even without concurrent actors_, because somethingThatAssumesTheAboveDidntReturnAnError might do horrible things all by itself:

    func (x *Thing) Method() {
        x.somethingThatReturnsAnError()
        x.somethingThatAssumesTheAboveDidntReturnAnError()
    }


I'm not sure how to respond to this. You seem to believe that panics express problems which are constrained to the call stack which instantiated the panic. This isn't true. But I'm not sure how to express this to you in a way that will convince you. So I guess we're at a stalemate.


> You seem to believe that panics express problems which are constrained to the call stack which instantiated the panic.

Not inherently, but it's your job as a developer to make sure this is the case, that's what:

"* Make sure you support properly unrolling the stack."

means.

In case you're dealing with unknown code it's your job to find out what the connected graph of potentially tainted objects is and discard them. That's what "keep a clean failure boundary" means.

If you can't, because you don't want to (short lived process, prototypes) or are unable to (hairy ball of code), tearing down the process is indeed the only option and a sane fallback choice made by the language designers. But it's not necessarily a hallmark of robust software.

I hope this final attempt cleared up what I meant; thanks for the discussion anyway.


In Go when some code writes `panic` it is expressing an error condition which should not be intercepted by callers and is expected to terminate the process. A panic is not an error, and panics should not be recovered as if they were errors.


Panics are categorically different than errors. Errors are normal, panics are not normal.


> I guess this is the crux of the issue. I don't think this is true, or needs to be true. It certainly hasn't been my experience. I think assuming panics are normal will take you down some paths that make it basically impossible to write reliable software. But, to each their own.

I'd rather have the control to log the panic on a service rather than it forcibly dying and taking down any other connections with it. Kube will just spin up a new one anyway, which just introduces a downtime gap that doesn't need to exist.


I don't think I'm effectively communicating the impact of handling a panic and continuing program execution. A panic that comes from a memory model violation (as one example) can change the value of anything in the memory space of the program. If the program continues, that change will go undetected, and can have results that make the program completely nondeterministic. This isn't a doom and gloom, sky-is-falling prognostication, it's literally what is defined by the spec and memory model of the language.


> A panic that comes from a memory model violation (as one example) can change the value of anything in the memory space of the program ... This isn't a doom and gloom, sky-is-falling prognostication, it's literally what is defined by the spec and memory model of the language.

I do not think you are correct. Go has a class of unrecoverable panics for this specific reason. Go also runs deferred functions after a recoverable panic, so the notion that it's unsafe to handle one, or to continue execution afterwards, doesn't hold at all. It is literally a first-class feature of the language.
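To make the distinction concrete, a sketch (it deliberately crashes at the end): recover() catches an ordinary runtime panic, while the concurrent map write check raises a fatal error that bypasses deferred functions entirely.

    package main

    import "fmt"

    func main() {
        // Recoverable: a nil pointer dereference is raised via
        // panic(), so a deferred recover() catches it.
        func() {
            defer func() { fmt.Println("recovered:", recover()) }()
            var p *int
            _ = *p
        }()

        // Unrecoverable: unsynchronized map writes trigger
        // "fatal error: concurrent map writes", which skips
        // deferred functions and aborts the whole process.
        m := map[int]int{}
        for i := 0; i < 2; i++ {
            go func() {
                defer func() { recover() }() // never helps here
                for j := 0; ; j++ {
                    m[j] = j
                }
            }()
        }
        select {} // blocks until the runtime aborts the program
    }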

I have not seen an instance of a recoverable panic that is raised _after_ such a fatal operation. If you have an example of such, I would love to see it.


What are unrecoverable panics vs. recoverable panics? Where is that distinction defined?


There seems to be no standard list of unrecoverable panics/aborts, but this Stack Overflow post [1] has a list of a few.

As far as users/developers are concerned, it doesn't matter too much, since you have no option to recover them, but it would be nice if it were explained whether defers are still run. I'm assuming they are not.

1. https://stackoverflow.com/questions/57486620/are-all-runtime...


If there is no way for callers to reliably distinguish recoverable panics from unrecoverable panics, then this distinction doesn't really exist, does it? Panics are panics.


I'm not sure what point you are trying to make anymore.

Of course you cannot distinguish between unrecoverable and recoverable panics, because by definition an unrecoverable panic is not recoverable. There is no caller left to do the distinguishing; the process is killed.


Oh. You're using the word panic to describe a superset of actual panics and other even more serious errors. Those things you call unrecoverable panics are not actually panics.

The point I'm trying to make is that panics are not errors by another name, and they are not safe to recover from in general.


I would agree if it weren’t super easy to cause a panic in go.

Index slice out of bounds? panic. Close a channel twice? Panic. Incorrect type assertion? Panic. Dereference nil pointer? Panic.

I would argue that all of these examples which are the most common in my experience are “goroutine scoped” because the goroutine was aborted before they potentially modified the application state in an unknown way.

It's not like in C or C++, where an out-of-bounds access has now put the entire application into an unknown state.


> Index slice out of bounds? panic. Close a channel twice? Panic. Incorrect type assertion? Panic. Dereference nil pointer? Panic.

These are all really bad things which should never survive to production code. It is not difficult to detect and prevent them.

> I would argue that all of these examples which are the most common in my experience are “goroutine scoped” because the goroutine was aborted before they potentially modified the application state in an unknown way.

What makes you think that terminating the goroutine that triggered these panics prevents them from impacting the process state?

> It's not like in C or C++, where an out-of-bounds access has now put the entire application into an unknown state.

What makes you think this is the case? Panics have unknowable impact, and many panics (e.g. data races) absolutely do put the program into an unknown state.


> These are all really bad things which should never survive to production code. It is not difficult to detect and prevent them.

This is equivalent to saying "out of bounds memory writes are not difficult to detect and prevent in C code". Like actually equivalent (possibly worse), not just "well if you squint they look similar".

Of course it's not hard most of the time. Being perfect is beyond hard though. And if you're not perfect, you might open the door to anything in C, or cluster-destroying rolling crashes in Go.

Sometimes shutting down every piece of your software if that happens is the correct choice, and sometimes it's so far beyond reasonable that it's ludicrous to argue in favor of "every panic is an abort".


> Sometimes shutting down every piece of your software if that happens is the correct choice, and sometimes it's so far beyond reasonable that it's ludicrous to argue in favor of "every panic is an abort".

Very much this. And even for the same project: in some cases, I'm a fan of employing a quite strict error handling policy in dev environments (crash and burn) and using a more lenient approach in prod (elevated log level). In my experience, this can result in a robust product. Most importantly, this means the decision is not even made by the application programmer, sometimes it's a config thing.


Go has much stronger memory safety guarantees than C does. They aren't really comparable.


x == y can panic if interface values contain incomparable fields in unexported nested structs. How would I check for that? Should we let it become a query of death and bet thousands of peers' jobs on it never happening?


Is this really the case? Can you link to anything or an example on go.dev/play/ ?

I can find a mention of "cmp.Equal" having that behavior, but that's just a third-party package panic.


It's true, but you'd never really write code like this

https://go.dev/play/p/r9NkQb6bQTx


The problem also affects structs that happen to have a private map or cache or callback anywhere within.

https://go.dev/play/p/uP-vjpvuhku


Obviously `interface{}` values are not comparable?


The comparison is explicitly allowed in the language spec, there’s no warning for doing it, and it often works depending on the types. It’s a data-dependent runtime error, which is usually hard to guarantee test coverage for.
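A minimal demonstration (the struct name is made up):

    package main

    import "fmt"

    // hasFunc contains a func field, so its values are not
    // comparable; hiding it behind an interface makes == compile.
    type hasFunc struct{ f func() }

    func main() {
        var a, b any = 1, 1
        fmt.Println(a == b) // true: ints compare fine

        var x, y any = hasFunc{}, hasFunc{}
        // Compiles, then panics at run time:
        // "runtime error: comparing uncomparable type main.hasFunc"
        fmt.Println(x == y)
    }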


Link to an example? I don't think this is true, unless you're playing stupid games with your code, which wouldn't pass code review.


don't do that?

this kind of thing is why deepcompare exists to begin with


It seems unrealistic to assume that everyone on a reasonably sized team knows all of the subtle edge cases to avoid and never makes mistakes


I can nag everyone to use reflect.DeepEqual and live with some false negatives, but maps always use k1 == k2.


This is days later, sorry, but - you can't use an interface as a map key, so this shouldn't apply, right?


https://go.dev/ref/spec#Map_types says that is allowed.

> If the key type is an interface type, these comparison operators must be defined for the dynamic key values; failure will cause a run-time panic.
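A minimal demonstration of that clause:

    package main

    func main() {
        m := map[any]bool{} // interface key type: allowed by the spec
        m[1] = true         // fine: int keys are comparable
        m[[]int{1}] = true  // panic: runtime error: hash of unhashable type []int
    }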


There are also significant performance and behavior differences between the two.

They are not interchangeable, nor can one replace the other.


more specifically, it's really strange to hear of people doing equivalence checks on objects with structure. What are you expecting that comparison to do? I doubt it is doing what you think, and it indeed risks panics.


> (e.g. data races) absolutely do put the program into an unknown state.

Data races do not necessarily result in panics.

Many (perhaps even most?) data races would not result in panic but just in garbled, missing, duplicate, out-of-order or otherwise incorrect data.

Data-race-induced panics are generally a side effect of a data race, not a direct protection against one. They can often be inconsistent: e.g. a data race in a slice that contains a binary data format could garble a variable-length string prefix and produce an index-out-of-bounds panic. Or it could prematurely consume a shared pointer and overwrite it with nil, only to have the nil pointer dereferenced by another goroutine. These kinds of panics are unpredictable.

If your application has shared global state (in-memory or even a database), it may become inconsistent due to data races. But whether data-race-induced panics indicate irreversibly corrupted global state that requires (and can be fixed with) an application restart is a case-by-case thing.

Let's say your application has some shared state that got corrupted and the corruption triggered a panic down the line.

If your shared state is persisted in a database or some other distributed mechanism and that state got corrupted: restarting the application won't help you.

If your shared state is scoped at the HTTP request level (or whichever boundary you choose for suppressing your panics): you don't need to restart the application. The request is already terminated, along with its shared state.

Which leaves us with in-memory global state. This kind of state is generally minimized in the type of microservice and network infrastructure applications that Go is often used for.

A very small percentage of your panics will indicate corruption of such state. Will you be willing to risk service downtime in order to protect against the small possibility that the service has run into a state where its shared in-memory data became corrupted?



I tend to agree with you that these are relatively easy things to detect. I see no reason for the downvotes.

Production systems should have relatively robust testing whose coverage can be increased over time. When something panics, the cause of the panic should be fixed so that the panic never happens again. Over time panics shouldn't be happening.

Then again the systems that I have relied on I have written on my own without other hands in the pot so maybe I just don't have to deal with the reality of other programmers phoning things in.


If your programming language handles very common errors by crashing the entire application, and if preventing these crashes is actively discouraged, then that suggests a flaw in the language itself.

This would be fine for a low-level language like C where you need to allow SEGFAULTs, but designing it into a high-level language makes no sense.


Go panics should not be used for very common errors.


A lot of very common operations can panic: division, dereferencing a pointer, invoking an interface method, indexing/slicing an array/slice/string, asserting the type of an interface, and converting a slice to a pointer to an array. It's possible to check, but I've never seen a tool that verifies you never use any of these without checking. You also have to check for nil channels, though they block forever (maybe consuming a goroutine) rather than panicking.

And there are some operations where you cannot check in advance whether a panic will happen: comparing interfaces (underlying values might not be fully comparable), indexing a map (could blow up during any concurrent write), sending to a channel (might be closed), and closing a channel.
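For the checkable cases, the guards are one-liners; a sketch with hypothetical helper names:

    package main

    import (
        "errors"
        "fmt"
    )

    // safeDiv guards the division-by-zero panic.
    func safeDiv(a, b int) (int, error) {
        if b == 0 {
            return 0, errors.New("division by zero")
        }
        return a / b, nil
    }

    // safeIndex guards the out-of-range panic.
    func safeIndex(s []int, i int) (int, bool) {
        if i < 0 || i >= len(s) {
            return 0, false
        }
        return s[i], true
    }

    func main() {
        if _, err := safeDiv(1, 0); err != nil {
            fmt.Println(err)
        }
        if _, ok := safeIndex([]int{1, 2}, 5); !ok {
            fmt.Println("index out of range, handled")
        }
    }

None of this is enforced by the toolchain, though, which is the point above.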


You can recover from a panic though, so if you are implementing something that may panic you should have some sensible defer/recover in there if you can't afford to have your process crash.


Division by zero, dereferencing a nil pointer, invoking methods on a nil interface, invalid indexing of an array, unchecked type assertions -- these are not common operations! These are always easily detectable programmer errors.


> This keeps panics scoped to the spawning goroutine

This is exactly what's undesirable. (IMO and from my reading the GP agrees.)


It is the way of things in an imperative language. If you catch a panic, you are also declaring to the runtime that there is nothing dangling, no locks in a bad state, etc. This is often the case. (Although since I don't think this is a well-understood aspect of what catching a panic means, it is arguably only usually true by a certain amount of coincidence.) But if you don't say that to the runtime, it can't assume it safely and terminating the program, while violent, is arguably either the best option or the only correct option.

Other paradigms, like the Erlang paradigm, can have better behaviors even if a top-level evaluation fails. But in an imperative language, there really isn't anything else you should do. It is arguably one of the Clues (in the "cluestick" sense) that the imperative paradigm is perhaps not the one that should be the base of our technology. But that's a well-debated matter.


I don’t think this is really a question of whether your code is imperative, since Haskell code will terminate just as surely as Go code if you try to access an array element out of range.

(Haskell’s lazy evaluation just makes it a bit harder to catch, since you need to force evaluation of the thunk within the catch statement, and it’s far too easy to end up passing your thunk to somebody who won’t catch the exception.)

As a matter of Go style, of course, you should almost always defer unlock() after you lock(), but some people sometimes get clever and think that they can just lock() and unlock() manually without using defer. This is not hypothetical, and it causes other problems besides leaving dangling locks after a panic(). Somebody sticks a “return” between lock() and unlock(), without noticing, for example.

So my impression of catching panic() is that it is about as safe as not catching panic(). What I mean by that is that if recover() is not safe in your code base, there is a good chance that there are other, related bugs in your code base, and being a bit more strict about using defer and not trying to be clever will go a long way.
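To make that concrete, a sketch of the failure mode (the map and key are placeholders):

    package main

    import "sync"

    var mu sync.Mutex

    // riskyIncr: an early return between Lock and Unlock leaves the
    // mutex held forever; a panic in the middle would do the same.
    func riskyIncr(m map[string]int, k string) {
        mu.Lock()
        if m == nil {
            return // BUG: mu is never unlocked on this path
        }
        m[k]++
        mu.Unlock()
    }

    // safeIncr: defer releases the mutex on every exit path,
    // including panics, so recover() higher up stays safe.
    func safeIncr(m map[string]int, k string) {
        mu.Lock()
        defer mu.Unlock()
        m[k]++
    }

    func main() {
        m := map[string]int{}
        safeIncr(m, "a")
        riskyIncr(m, "a")
    }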


Since the Go 1.14 optimizations I don't believe I've found a single case where `X(); defer Unx()` has been worse.

Unfortunately this was not the case before 1.14, so there's a lot of "middle-aged" code floating around setting a bad example.


Old habits die hard. For what it’s worth, the bugs I saw were long before the 1.14 release. Somewhere around 1.7 or something.


Agreed - it's also not nearly as clear for `ctx, cancel := ...; defer cancel()` and I see that reflex in a few places where it's not only often much less efficient, but logically wrong / less safe.


What issues do you see with defer cancel()?


I think the OP is saying defer cancel() is correct, and the stuff people would do to avoid it are the things that are bad.


It is a fundamental problem with the imperative paradigm because of the equivalent of:

    lock.Take()
    thingThatMayCrash()
    lock.Release()
The imperative/structured paradigm is that those statements are evaluated in order. While this is not the only way of writing Go, it is legal Go, and the runtime must account for it. Paradigms for which that is not fundamentally true have different options available to them. One of those options is still to crash.

However, "access an array element out of range" isn't the question. The question is, what can the language do in the face of any exception? Haskell, perhaps ironically, doesn't avail itself of the opportunity to do something useful about it. You can implement a wide variety of mechanisms that are safe to have exceptions in even in the face of concurrency, but at the base language, it is essentially as exception unsafe as Go, exactly as you observe. (And you can create "exception-safe wrappers" in Go as well, as easily as having an intermediate function that does something with panics, then invokes some code. Nominally this is unsafe because the code being invoked really ought to have a guarantee that the user has written it to be safe in this usage; in practice it works fairly well at significant scales due to a combination of other things beyond the scope of this already-large reply.) This causes significant practical stress within the community and probably would be in the Top 5 wishlist for a lot of people as to something a sequel language would fix. (It is very annoying that the supposedly "pure" code "head []", supposedly of type "[a] -> a", throws an IO exception.) Just as in Haskell you can write safe code, you can write safe code in Go here as well.

    lock.Take()
    defer lock.Release()
    thingThatMayCrash()
is every bit as safe as anything you can write in Haskell. (It may not compose as well, because "locks" are arguably the least composable primitive ever devised by computer science, but that's an entirely different discussion.)

Thus it is not a coincidence that when I named a language that has a different paradigm that allows it to systematically recover from this sort of fault, I named Erlang, not Haskell. Erlang has no locks. (At the Erlang level. NIFs may do their own thing but they are basically a form of "unsafe" and for language analysis should be treated as such.) Since it has no locks it can't blow up the way the Go code can and leave dangling locks. Processes get access to resources that need to be cleaned up through the IMHO somewhat confusingly named "ports" system, which can be conceived of as wrapping up sockets and files and other such resources behind other Erlang processes, and then there's a linking system that says "if this process crashes, send this message to that other process or crash it also". So an Erlang process can safely crash, release all associated resources, and the runtime may continue on with the assurance that there's no dangling locks or other concurrency things half done.

In fact in my opinion Erlang isn't even a particularly "functional" language. This is idiosyncratic and I don't deny it. A better way of understanding Erlang is that it was built around this functionality existing, and the ports system and immutable terms are the tools that were used to accomplish this goal, with the fact that the result looked sort of "functional" being an accident. So I wouldn't necessarily highlight "functional" languages in general as being natively good at this sort of handling. I think it's pretty clear it could be added to a Haskell++ without much effort but it is not something that merely "being functional" automatically gets you out of the box.

(Historically, Erlang actually descends from Prolog. While all the logic functionality is stripped out, the heritage is very visible. You can't really call it a "logic language" since it can do no logic, but in my opinion it isn't in the functional stream of languages either. Your mileage will vary; like I said I don't deny this is an idiosyncratic take, but for what it's worth, it's one from someone who used Erlang professionally for many years and knows Haskell fairly well too, so it's not an entirely uninformed one.)


Nice points

I suppose the core issue is transactions. Erlang could still fail with

  them ! {sub, n}
  crash()
  me ! {add, n}
But typically Erlang's isolated processes mean many transactions can fail or roll back in isolation.

I suppose transactions could instead be a core language semantic. But that's hard to do, and Erlang actors give a lot for very little overhead.


Erlang qua Erlang doesn't have any transactions, either, so Erlang qua Erlang can't fail that way.

In general, whatever it was the crashing process was doing can still have failed. This obviously doesn't magically fix database transactions, or make it so files are never half written because it crashed in the meantime, or any other "external" action that you can still have failed to protect properly from crashes. It only means that the runtime can't get stuck due to locks failing to be released.

But hey, that's something!


A goroutine created inside an http request handler (itself a goroutine) which then panics, by default will crash the whole server, not the single request. The panic could simply be an out of bounds access. That should not crash the whole server.

It’s a logic bug, but you can’t “not panic”. You can trap and recover it though.

Bit orthogonal to OP but relevant to your reasoning.


> simply be an out of bounds access

If you have any care for quality at all, there's nothing "simple" about your invariants being violated.


So you've never written code with a bug? There are other ways to panic in Go: concurrent map writes, nil pointer dereferences. I'm not saying it should happen, but best practice would be a defensive posture, especially when it's effectively zero cost, not hoping for the best.


Crashing the program _is_ the defensive posture. Panics -- concurrent map writes or nil pointer dereferences or almost anything else -- usually mean the program state has become invalid. You can't treat them like errors.


> usually mean the program state has become invalid.

_some_ part of a program's state has become invalid. Probably a variable on the stack. In web servers, almost always a part that becomes very irrelevant as soon as the current request ends. In databases, the same is true at transaction boundaries. In shells it's the REPL. Exit(1) in any of those deep down the stack? No thank you, wouldn't touch that software with a 10 foot pole.


Taking my own example: you can monitor for panics per HTTP request while preserving the SLO for other requests served by the same process.

Crashing the program is strictly worse imo, since there are other concurrent requests which will now fail for no reason.

I agree unhandled panicking makes sense sometimes. It’s context dependent.

Program state is a good argument, but I think it really depends.


> So you've never written code with a bug?

No, I write bugs all the fucking time! That's why I want my code to panic, so I know I wrote a bug. If that shit gets stashed away in a 500 response or a Prometheus metric, maybe I'll chance to see it three days later. But if the container gets blown away, I'll notice.


If your observability tools aren't firing alarms when that panic happens, you have other problems besides the panic. I would much rather find out about a bug that way vs reports from people that the service is down.


On a memory unsafe language if you find your invariants violated all bets are off and it is possible that the integrity of the runtime is compromised. The only safe option is to bail out to the nearest protection boundary (i.e. the process).

On a memory safe language, the blast radius is potentially much smaller (in principle all objects transitively reachable from the failure point, but this is very pessimistic), so it is realistic to be able to isolate the failure in process.


Well, memory safety is not a boolean, it's a property that's defined by the memory model of the language. That model also defines the impact of a violation.


This isn't catching the panic though, this is propagating the panic through the parent goroutine. The whole program will still shut down, but the stacktrace that shows up in the panic contains information not only about the goroutine that panicked, but also about the launching goroutine. That can help you figure out why the panic happened to begin with.


if you've ever had to deal with a zombie JVM where an uncaught exception eventually put the whole application into a state where it wasn't, oh, hoovering the tens of thousands of temporary files it was creating because the vacuumer thread died undetected, the Go behavior is _extremely_ attractive.


How the heck should a library author know whether an error is recoverable or not? In many situations that depends entirely on the caller.

Not-so-contrived example: a config option contains a value that leads to a division by zero in some library I'm using. I don't want that to crash my whole program; I want to tell the user "hey, this part didn't work" but be able to continue with the other parts.

Without writing a ton of boilerplate code.


I started writing Go recently and I very much prefer panics to errors, for example in web apps. A panic produces a nice stack trace. Panics bubble up automatically, so I can reduce the amount of code significantly. Errors are awful, and panics allow me to write code without losing my sanity. It's like good old exceptions. And if I really need to, I can catch a panic and act as needed, e.g. return an HTTP code or whatever.


You should rarely reach for panic. Panics are not like exceptions, really. It's very frowned upon for panics to pass API boundaries. Use errors always, unless the state of the world is irrecoverably broken.

It's difficult to come up with examples of when this may be the case. Often it's a "you're holding it wrong" kind of thing. For example, a common idiom is wrapping something like

    func Parse(s string) (Foo, error)
to be usable in contexts where error handling isn't possible, such as variable initialization.

    func Must(f Foo, err error) Foo {
        if err != nil {
            panic(err)
        }
        return f
    }
which could be used at the global level like so:

    var someFoo = Must(Parse("hey"))
This is acceptable because this code is static and will either panic at boot always or never.


Errors don't have stack traces, so they're unusable. And I see nothing wrong with handling panics. They're just like good old exceptions. I get an SQL exception, I panic, a handler catches the panic, logs its stacktrace, and returns HTTP 500. Awesome, and no boring error handling.


It's fine to dislike Go's philosophy on error handling, but in order to save you and your co-workers a lot of headache down the road, I'd recommend you just use another language. This is not something you want to do in Go and very few folks who program in Go would be happy to work with that style of code.

In case you actually are interested in Go's take on stack traces: You are intended to annotate your errors so you can build your own stack traces with whatever information you want. This leads to better error messages because additional context (values of things) can be logged with your custom stack trace.


Do you suggest I annotate my errors with "file:line" strings? Do you suggest I grep source files for error messages to deduce the stack trace? Because that's what I'm doing right now when I have to deal with stacktrace-less errors, and it's not pleasant.


The idea is you `fmt.Errorf("invalid foo, given '%v' important argument: %w", arg, err)` and you end up with errors like:

    http GET /abc: start DB: run migration COOL_MIGRATION: invalid foo, given 'bar' important argument: file not exist
If done well, you might be able to fully diagnose the bug just by reading the error, since it may include every bit of relevant context. If not, you can grep for the original error substrings to find the relevant code.

The point is that these aren't just Xs on a map telling you where something went wrong, they're supposed to be meaningful descriptions of why something went wrong.
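A runnable sketch of that convention (the path and messages are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // loadConfig adds this layer's context and wraps with %w so the
    // underlying error stays inspectable.
    func loadConfig(path string) error {
        if _, err := os.ReadFile(path); err != nil {
            return fmt.Errorf("load config %q: %w", path, err)
        }
        return nil
    }

    func main() {
        err := loadConfig("/no/such/file")
        fmt.Println(err)
        // Wrapping preserves identity, so callers can still branch
        // on the cause:
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
    }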


In the sense that errors are values, you should not normally need to hunt for the origin of errors, and so stack traces should be unnecessary.

Many errors don't need metadata. For example, io.EOF signaling the end of a stream is a normal condition. But there are those "true" errors which are unexpected conditions indicative of a bug or a scenario that should be handled differently. To make those easier to find, you can annotate the error values to make it clear what the origin was. This is why there's now a well-established convention to wrap with contextual descriptions at every point in the chain.

In about 8 years of working full-time with Go, the number of situations where I've struggled to hunt down the source of an error value is pretty close to zero. Not zero, but close.


> Do you suggest me to grep source files with error messages to deduce stack trace

Yes, that's common. Perhaps a gopls search engine for error messages would be useful?


Also if that is your concern you'd just wrap the panic on your own anyway.


Hi! Author here. Conc is the result of generalizing and cleaning up an internal package I wrote for use within Sourcegraph. Basically, I got tired of rewriting code that handled panics, limited concurrency, and ensured goroutine cleanup. Happy to answer questions or address comments.


I literally started drafting my own structured concurrency proposal for Go 2 today, due to exactly the same frustrations you mention. Such a coincidence, and thanks for writing this lib. I will most certainly use it.

Please could you tell me if you have any thoughts on how to integrate these ideas into the language?

One thing I think should be solved (and that appears not to be addressed by your lib?) is the function coloring problem of context. I would really, really love it if context were an implicit and universal cancel/deadline mechanism, all the way down to IO. That way, only the parts of the chain that NEED to use the context would be affected (which is a minority in practice). Any thoughts on that?

Finally, I think somebody should create a more visionary and coherent proposal for improved concurrency in Go2, because individual proposals on Go’s issue tracker are shut down quickly due to not making enough sense in isolation - and status quo bias. It’s a shame because I really love Go and want to see reduced footguns and boilerplate – especially with concurrency often being the largest source of complexity in a project. Please find my email in profile if you’re interested to pursue this further.

Thanks again.


> I would really, really love it if context were an implicit and universal cancel/deadline mechanism, all the way down to IO.

I don't think this is an improvement. Implicit behavior is difficult to identify and reason about. The only criticism of context that seems valid to me, aside from arbitrary storage being a huge antipattern, is that it was added long after nearly the entire standard library was authored, and its usage is still relatively sparse.

We can agree that concurrency is difficult to use correctly, but since the introduction of generics it's much easier to wrap channels and goroutines.

Aside, in my experience if you're worried about boilerplate you're almost always looking at the problem wrong and optimizing for convenience over simplicity.


It may not have sounded like it, but I agree with your philosophy entirely as the most important factor for language design. It’s also a big reason for why I prefer Go.

However, I also believe there are a few situations where explicitness should step aside. Go already breaks these rules for memory management - it has a GC, simply because it outweighs the benefits of explicitness. I argue that haphazard concurrency is comparable to manual memory management when it comes to both footguns and boilerplate, so I truly think it’s worth entertaining these ideas. That said, I can’t even back such a proposal myself, without first having it in hand and carefully studying it or even playing with it.

As for context, I’d like to have it trimmed down to ONLY cancelation and timeouts, possibly with user-provided errors (see Go1.20 cancel cause in exp). It shouldn’t have values at all. The reasons for the implicitness though, in order of importance:

1. Allow the caller to time out IO even if the callee isn't context-aware or has forgotten to set a deadline. The majority of 3p library code I've seen makes such mistakes all the time, leading to socket leaks and the inability to tear down goroutines correctly.

2. To prevent the function coloring problem, which introduces substantial boilerplate duplication.


It's a good package in general, save for the panic handling. Panics should not be handled in this way. Remove that wart, and it's solid.


Is the alternative to crash and restart the whole process on panic? It would make sense if someone wants to write an in-process supervisor (similar to Erlang?), but this would basically be a main-wrapper, not a per-HTTP-request thing (because Go's http stack itself would be corrupted).

I don't know enough to say where the crash isolation boundary should best lie, BUT, assuming that you can catch panics at all, it makes a lot of sense that they are propagated upwards in the call stack. The idea of structured concurrency is that concurrent code is attached to the caller's stack, at least in spirit.


> Is the alternative to crash and restart the whole process on panic?

Yes.

> assuming that you can catch panics at all

A panic may happen to be safe to catch and recover from, but this isn't guaranteed, and can't be assumed in general. It's only safe to recover from a panic which you know is benign, in all cases, across all build and runtime architectures. This is possible in packages that you fully control and which have no external dependencies, or (by fiat) in stdlib packages like net/http. But it's not the case for your service binary with a go.mod that's 100 lines long.


Great project. It seems like channels are just the wrong tool for a lot of concurrency problems: more powerful than needed and easy to get wrong. Lots of nice ways here to make Go concurrency safer.

The problem that bothers me (and isn't in Conc) is how hard it is to run different things in the background and gather the results in different ways, particularly when you start doing those things conditionally and reusing results.

Something like go-future helps. https://github.com/stephennancekivell/go-future


> The problem that bothers me (and isn't in Conc) is how hard it is to run different things in the background and gather the results in different ways, particularly when you start doing those things conditionally and reusing results.

Do you have any examples? About the only thing I can think of is "parse something into a bunch of different types", and that can be solved easily enough. What do you mean by "reusing results"?

> Something like go-future helps. https://github.com/stephennancekivell/go-future

    f := future.New(func() string {
        return "value"
    })

    value := f.Get()
that looks pretty awkward. with channels it would just be

    f := Async(func() Type{return t})
    v := <- f


The main difference is that reading from a channel will block once it's empty, whereas futures keep returning the same value.

Written more concisely:

    f := New(func() Type { return t })
    v := f.Get()
    w := f.Get()
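A minimal future with exactly that property can be built on a closed channel as a broadcast; a sketch (not go-future's actual implementation):

    package main

    import "fmt"

    // Future caches the result; Get blocks until it is ready and
    // then returns the same value on every call, unlike a channel
    // receive, which consumes the value.
    type Future[T any] struct {
        done  chan struct{}
        value T
    }

    func New[T any](f func() T) *Future[T] {
        fu := &Future[T]{done: make(chan struct{})}
        go func() {
            fu.value = f()
            close(fu.done) // closed channel unblocks every Get, forever
        }()
        return fu
    }

    func (fu *Future[T]) Get() T {
        <-fu.done
        return fu.value
    }

    func main() {
        f := New(func() string { return "value" })
        fmt.Println(f.Get()) // "value"
        fmt.Println(f.Get()) // "value" again, no blocking
    }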


When do you find reusing a promise useful? I know one canonical example is a loadable hashmap as in Concurrent Programming in Java, but I was curious if you knew of other examples.


I wish Go had macros à la Rust; it would be possible to write the whole thing in a much nicer way.


say goodbye to all your nice tooling, fast compile times, uniform code bases...

Programmer write cost <<<<<<<<< read cost


> run different things in the background and gather the results in different ways

I'd be curious to see an example of the type of task you want to be able to do more safely


Just took a glance, but it seems like this is exactly the kind of project I saw coming out of generics going live. I was really surprised to see how subtly hard Go concurrency was to do right when initially learning it. Something like this that formalizes patterns and keeps you from leaking goroutines / deadlocking without fuss is great.


It was initially something that surprised me about Go. It was famous for good concurrency, and yet compared to many functional languages and contemporary OO languages it had a lot of foot guns. There is a lot of repetition of code that languages like Java had long since factored into common abstractions. Go seems to lack obvious concurrency abstractions.

When they announced generics the first thing I did with it was rewrite my common slice parallel algorithm and my limited concurrency pool. It is an obvious area needing improvement for common use cases.


> It was famous for good concurrency

That’s because people confuse “easy” and “good”.

Go has easy concurrency, but it's not good. The opposite, really. It's at best the same problematic model you get in other, older, procedural/OO languages. At worst it's a lot worse, given how difficult it is (and even more so was, before generics) to build abstractions and concurrent data structures.


It is easy to learn the concepts, but not actually to use them. It is like with assembly, learning mnemonics is easy, but writing useful programs is a whole lot different story.


What is bad about it? An example would help me understand you better.


> What is bad about it?

That nothing about it's good? It's the same shared-memory concurrency everybody else has; the fact that it has a built-in thread-safe queue doesn't fix that.

In some ways it makes it worse, because it's easy for the unwary to assume channel = safe, whereas nothing could be further from the truth: a channel only protects itself. Sending a slice through a channel is not safe, sending a map through a channel is not safe, sending a pointer through a channel is not safe, etc. And this is transitive: sending by value a struct which contains any of those isn't safe either.

Not to mention the easy and common issue of implicit concurrent sharing via `go` closures.

Here’s a bunch of examples: https://www.uber.com/blog/data-race-patterns-in-go

And these are data races, not merely concurrency issues.


> That nothing about it’s good

You have a very myopic view of what makes concurrency designs good.

I suggest you broaden your perspective, and consider the practical effects of language design on humans, as opposed to just mathematically provable safety.


Ah, you’re a fanboy. You could have just put that upfront I would not have wasted my time taking you seriously.


You wrote "That nothing about it’s good". That's stupid. I agree we are wasting our time.


Well, OK, so the main thing that's good about it is that it's dead easy to start a goroutine. But that's actually bad, just like randomly jumping to any piece of code whatsoever is bad in a procedural language.

Read the Structured Concurrency notes (https://vorpus.org/blog/notes-on-structured-concurrency-or-g...) for the reasons why.


I don't know. I think the culture around Go encourages effective/safe strategies for using goroutines + channels. I think when channels are used as intended they can be quite effective. They're not technically more safe than anything else, but I think the fact that they make concurrency more manageable is good.


It is surprising to me that so many people think that message passing is simpler to get right than shared memory concurrency, when in fact it is not: https://cseweb.ucsd.edu/~yiying/GoStudy-ASPLOS19.pdf


If you’re only going to learn one concurrency paradigm, it has to be message passing, because that’s the only one that works between machines.

The only reason why people don't see the similarities when it comes to in-process concurrency is that the request-response paradigm muddies the waters. Anyway, that's the world we live in, so people have to work around the N+1 problem and similar, living in a mixed sync-async world of pain :)

It’ll take at least a decade to fix, if we ever do.


> because that’s the only one that works between machines

There is a non-negligible number of problems that don't need multiple machines and are much easier to deal with using other concurrency paradigms. When the language forces only one model onto the developer, things are actually more complex than they need to be.


Yeah, I agree, but Go does allow traditional mutexes that work the same way (except the rwlock discrepancy as pointed out by the paper). Channels aren't forced, just encouraged. There are cases where locks are preferable.

My main criticism against the paper is that what tipped the scales for concurrency bugs is the “blocking category” which of course is going to be higher with message passing since it’s a blocking paradigm. Locks only block until unlocked, and isn’t used as a general purpose signaling mechanism.

The "non-blocking" bugs are overrepresented by locks, and are generally more serious (non-deterministic data races vs deterministic deadlocks).

That said, I have criticism of Go as well. The decision not to implement structured concurrency (goroutines are fire-and-forget) makes these deadlocks largely go unnoticed. If goroutines had ownership or scope, runtime detection a la '-race' would help detect a lot of these bugs.


The Java Stream API is pretty awesome, it's my biggest wishlist item for Go.


I think one of the examples they give is a bit misleading. This

  func process(stream chan int) {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for elem := range stream {
                handle(elem)
            }
        }()
    }
    wg.Wait()
  }

And

  func process(stream chan int) {
    p := pool.New().WithMaxGoroutines(10)
    for elem := range stream {
        elem := elem
        p.Go(func() {
            handle(elem)
        })
    }
    p.Wait()
  }

Do slightly different things. The first one has 10 independent, long-lived goroutines that are all consuming from a single channel. The second one has the current goroutine read from the channel and dynamically spawn goroutines. They have the same effect, but different performance characteristics.


I haven't looked at the implementation at all, but it is possible that the pool is keeping goroutines alive, and the `Go()` method writes to a single `chan func()` that those goroutines read off of.

Which still isn't exactly equivalent: there's still an additional channel read due to the `for elem := range stream {}` loop, and likely an allocation due to the closure.


This is exactly correct. Behavior is equivalent, performance is not. It's probably still not a great example, because if you're already reading from a channel, you're probably better off spawning 10 tasks that read off that channel, but the idea of the example was that it can handle unbounded streams with bounded concurrency.
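For the curious, that shape can be sketched in a few lines (a simplified illustration, not conc's actual code):

    package main

    import (
        "fmt"
        "sync"
    )

    // Pool: n long-lived workers drain one task channel, so Go never
    // spawns per-task goroutines. The unbuffered channel also makes
    // Go block while all workers are busy.
    type Pool struct {
        tasks chan func()
        wg    sync.WaitGroup
    }

    func NewPool(n int) *Pool {
        p := &Pool{tasks: make(chan func())}
        for i := 0; i < n; i++ {
            p.wg.Add(1)
            go func() {
                defer p.wg.Done()
                for task := range p.tasks {
                    task()
                }
            }()
        }
        return p
    }

    func (p *Pool) Go(task func()) { p.tasks <- task }

    func (p *Pool) Wait() {
        close(p.tasks)
        p.wg.Wait()
    }

    func main() {
        p := NewPool(2)
        for i := 0; i < 5; i++ {
            i := i
            p.Go(func() { fmt.Println("task", i) })
        }
        p.Wait()
    }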


Ahh having a generic pool abstraction that collects results is tempting... nice work. I likely won't use this library though, since

  - I don't want to mess with panics.
  - The default concurrency GOMAXPROCS is almost never what I want.
  - Aggregated errors are almost never what I want. (I haven't read the implementation, but I worry about losing important error codes, or propagating something huge across an RPC boundary).
  - Using it would place a burden on any reader that isn't familiar with the library


> nice work

Thanks!

> The default concurrency GOMAXPROCS is almost never what I want.

FWIW, the default concurrency has been changed to "unlimited" since the 0.1.0 release.

> Aggregated errors are almost never what I want.

Out of curiosity, what do you want? There is an option to only keep the first error, and it's possible to unwrap the error into the slice of errors that compose it if you just want a slice of errors.

> Using it would place a burden on any reader that isn't familiar with the library

Using concurrency in general places a burden on the reader :) I personally find using patterns like this to significantly reduce the read/review burden.


> FWIW, the default concurrency has been changed to "unlimited" since the 0.1.0 release.

Nice! Will that end up on Github?

> Out of curiosity, what do you want

Most often I want to return just the first error. Some reasons: (1) smaller error messages passed across RPC boundaries, (2) original errors can be inspected as intended (e.g. error codes), (3) when the semantics are to cancel after the first error, the errors that come after that are just noise.

A couple other thoughts

  - I think the non-conc comparison code is especially hairy since it's using goroutine pools. There's nothing wrong with that and it's fast, just not the easiest to work with. Often goroutine overhead is negligible and I would bound concurrency in dumber ways, e.g. by shoving a semaphore (x/sync/semaphore, or a buffered channel) into whatever code I otherwise have; see the sketch after this list.
  - I like that errgroup's .Go() blocks when the concurrency limit is reached. Based on a quick skim of the code I think(?) conc does that too, but it could use documentation
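
For example, the kind of "dumber way" I mean (sketch; `items` and `handle` are stand-ins):

    // boundedHandle bounds concurrency with a buffered channel as a semaphore.
    func boundedHandle(items []int) {
        sem := make(chan struct{}, 10) // at most 10 in flight
        var wg sync.WaitGroup
        for _, item := range items {
            item := item
            sem <- struct{}{} // acquire a slot
            wg.Add(1)
            go func() {
                defer wg.Done()
                defer func() { <-sem }() // release the slot
                handle(item)
            }()
        }
        wg.Wait()
    }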


Thanks for the feedback!

> Will that end up on GitHub?

It's already there! I just haven't cut a release since the change.

> Most often I want to return just the first error.

In many cases, I do too, which is why (*pool).WithFirstError() exists :)
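
In case it helps, usage looks roughly like this (`tasks` and `Run()` are made up for illustration):

    p := pool.New().WithErrors().WithFirstError()
    for _, t := range tasks {
        t := t // capture loop variable
        p.Go(func() error {
            return t.Run() // hypothetical unit of work
        })
    }
    err := p.Wait() // returns only the first error encountered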

> original errors can be inspected as intended

If you're using errors.Is() or errors.As(), error inspection should still work as expected.
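
For illustration of the general mechanism, here's the same property with the standard library's errors.Join (new in Go 1.20) standing in for the aggregation:

    errNotFound := errors.New("not found")
    combined := errors.Join(errNotFound, errors.New("timeout"))
    // errors.Is walks the wrapped errors, so inspection still works.
    fmt.Println(errors.Is(combined, errNotFound)) // prints true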

> Often goroutine overhead is negligible and I would bound concurrency in dumber ways

Yes, definitely. And that's what I've always done too. However, I've found it's surprisingly easy to get subtly wrong, especially when modifying code I didn't write, and even more so if I want to propagate panics (which I do, though that seems to be a somewhat controversial opinion in this thread). Conc is intended to be a well-known pattern that I don't have to think about much when using it.

> I think(?) conc does that too, but it could use documentation

It does! I'll update the docs to make that more clear.


Is it just me, or are the names and descriptions really confusing? e.g.

    p.WithCollectErrored() configures result pools to only collect results that did not error


I think it's an error on the homepage.

If you click through to the actual API doc it makes a lot more sense: "WithCollectErrored configures the pool to still collect the result of a task even if the task returned an error. By default, the result of tasks that errored are ignored and only the error is collected."


Whoops, yep, thanks for pointing it out. Just fixed it.


If your code panics, your process should probably just crash (unless you're abusing panics to pass state around to yourself, but that's another story). The overall program state could be invalid and continuing to run may be dangerous. Your program needs to be able to recover from an unexpected termination anyway.

Not recovering panics goes double for generic packages that execute user code: you have no idea how badly broken things are.


I like this concept. I recently created a library that helps just with the problem of recovering goroutine panics and errors: https://github.com/gregwebs/go-recovery


The comparison code in the "Goal #2: Handle panics gracefully" section is unconvincing.


Go's standard library has some really great code, and some really terrible code (errors especially, around wrapping/unwrapping/Is/As). I'll definitely look at this to see if it gets our team's services better stability/performance.


Y’all need to be reminded of Go’s wisdom to share memory by communicating instead of communicating by sharing memory.


    func process(stream chan int) {
        p := pool.New().WithMaxGoroutines(10)
        for elem := range stream {
            elem := elem
            p.Go(func() {
                handle(elem)
            })
        }
        p.Wait()
    }

I did something similar, just with an input (and optionally an output) channel. Close the input and the goroutines stop; when all of them stop, the output is closed [1]. No need to incur a function call just to add elements (although I'd imagine Go would just inline it, so it might not matter either way).

This

    func mapStream(
        in chan int,
        out chan int,
        f func(int) int,
    ) {
        s := stream.New().WithMaxGoroutines(10)
        for elem := range in {
            elem := elem
            s.Go(func() stream.Callback {
                res := f(elem)
                return func() { out <- res }
            })
        }
        s.Wait()
    }
also seems awfully verbose vs. just a function (now easy and safe to write thanks to generics) with this signature:

    WorkerPool[T1, T2 any](input chan T1, output chan T2, worker func(T1) T2, concurrency int)
I do like the idea of a waitgroup on steroids; I might steal it for my generic library.

* [1] https://github.com/XANi/goneric/blob/master/worker.go#L92
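
For what it's worth, a minimal sketch of a helper with that signature (not the actual code from [1]; uses "sync" from the standard library):

    // WorkerPool fans input out to `concurrency` goroutines and closes
    // output once input is closed and all workers have finished.
    func WorkerPool[T1, T2 any](input chan T1, output chan T2, worker func(T1) T2, concurrency int) {
        var wg sync.WaitGroup
        wg.Add(concurrency)
        for i := 0; i < concurrency; i++ {
            go func() {
                defer wg.Done()
                for in := range input {
                    output <- worker(in)
                }
            }()
        }
        go func() {
            wg.Wait()
            close(output)
        }()
    }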


Fair criticism. The nice thing about the current API is that it works with any input that's iterable (channels, slices, readers, etc.) and any output (callback, channel, append to slice, etc.). In most code I write, I avoid channels because I find them easy to misuse. I used channels in the examples because it's the easiest way to represent "some potentially unbounded stream of input."


I just wrote a bunch of separate short functions for different types (channels, slices, maps). There is more code duplication, but the code itself is simpler, less boilerplate-y, and more embeddable.

For example

    out := MapSlice(
       func(i int) string { return fmt.Sprintf("-=0x%02x=-", i) },
       MapSlice(
          func(i int) int { return i + 1 },
          MapSlice(
             func(i int) int { return i * i },
             GenSlice(10, func(idx int) int { return idx }),
          ),
       ),
    )
or piping workers

    out :=
       WorkerPoolBackgroundClose(
          WorkerPoolBackgroundClose(
             WorkerPoolBackgroundClose(
                GenChanNClose(3, func(idx int) int { return idx + 1 }),
                func(i int) string { return strconv.Itoa(i) },
                4),
             func(s string) string { return ">" + s },
             5,
          ),
          func(s string) string { return " |" + s },
          6)


https://github.com/sourcegraph/conc/blob/main/iter/iter.go#L...

    // Map applies f to each element of input, returning the mapped result.
    func Map[T, R any](input []T, f func(*T) R) []R {
        res := make([]R, len(input))
        ForEachIdx(input, func(i int, t *T) {
            res[i] = f(t)
        })
        return res
    }
Seems a little silly to farm that off to a custom func when you could just write the for-loop, but it's probably fine / may be no different in practice.

Let's go see what ForEach does, to make sure: https://github.com/sourcegraph/conc/blob/main/iter/iter.go#L...

    // ForEachIdx is the same as ForEach except it also provides the
    // index of the element to the callback.
    func ForEachIdx[T any](input []T, f func(int, *T)) {
        numTasks := runtime.GOMAXPROCS(0)
        numInput := len(input)
        if numTasks > numInput {
            // No more tasks than the number of input items.
            numTasks = numInput
        }

        var idx atomic.Int64
        // Create the task outside the loop to avoid extra closure allocations.
        task := func() {
            i := int(idx.Add(1) - 1)
            for ; i < numInput; i = int(idx.Add(1) - 1) {
                f(i, &input[i])
            }
        }

        var wg conc.WaitGroup
        for i := 0; i < numTasks; i++ {
            wg.Go(task)
        }
        wg.Wait()
    }
Yeah, I'm gonna go with a giant nope. Getting that through review is rather concerning, to say the least, for something you'd be basing your most complex, most correctness-critical code around.


I'm not sure what you don't like? Sorry, I must just be missing your point.


I'm guessing it's about creating a GOMAXPROCS pool of goroutines which are then (wastefully) competing with each other for an atomic loop counter.

It got me curious what's faster when looping over a slice of a million elements: a million separate goroutines, or that pool.
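
Something like this would answer it (benchmark sketch; `handle` is a stand-in for the per-element work, and the imports are "sync", "testing", and the library's iter package):

    func BenchmarkGoroutinePerElement(b *testing.B) {
        input := make([]int, 1_000_000)
        for n := 0; n < b.N; n++ {
            var wg sync.WaitGroup
            wg.Add(len(input))
            for i := range input {
                i := i
                go func() {
                    defer wg.Done()
                    handle(input[i])
                }()
            }
            wg.Wait()
        }
    }

    func BenchmarkPool(b *testing.B) {
        input := make([]int, 1_000_000)
        for n := 0; n < b.N; n++ {
            iter.ForEach(input, func(v *int) { handle(*v) })
        }
    }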


You gotta appreciate good branding.


Does this have anything to do with conc trees?


Nope, just a catchy short form of "concurrent"


If you mean as in Fortress, then no



