
Yes, I have not heard any good arguments for why TCP should be in a kernel, other than, it's convenient, and it's always been done that way, and your apps get to share it. Like using the kernel as a shared library...

You could put BitTorrent in the kernel. It would make about as much sense architecturally; it just isn't as widely used.



POSIX supports sharing file descriptors between processes.

For example, you can have a process that reads a few bytes from a TCP socket and then passes the socket to another process.
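As a minimal sketch of that hand-off: Python's `socket.send_fds`/`socket.recv_fds` (3.9+, Unix only) wrap the underlying `sendmsg`/`recvmsg` with `SCM_RIGHTS` ancillary data. For brevity this sends the descriptor to itself over an AF_UNIX socketpair; between two real processes the calls are identical.

```python
import socket

# Set up a real TCP connection to ourselves on loopback.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
conn, _ = listener.accept()
client.sendall(b"hello, fd passing")

# An AF_UNIX socketpair is the channel the descriptor travels over.
left, right = socket.socketpair(socket.AF_UNIX)

# "Process A": read a few bytes, then ship the live descriptor.
# (For brevity we assume the short loopback message arrives in one piece.)
prefix = conn.recv(7)                                  # b"hello, "
socket.send_fds(left, [b"take over"], [conn.fileno()])

# "Process B": receive the descriptor and keep reading the same stream.
msg, fds, _, _ = socket.recv_fds(right, 1024, 1)
handover = socket.socket(fileno=fds[0])
rest = handover.recv(1024)                             # the remaining bytes
print(prefix + rest)
```

The kernel duplicates the descriptor into the receiver's fd table, so both ends refer to the same open TCP connection.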

Unix tries to model all I/O, including networking, as operations on file descriptors. Realistically, it is only possible to get this uniformity right if the kernel is involved.
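To illustrate the uniformity: the same descriptor-oriented call reads from a disk file and from a socket alike. A quick sketch (the AF_UNIX socketpair stands in for a TCP connection):

```python
import os
import socket
import tempfile

# A regular file descriptor...
fd_file, path = tempfile.mkstemp()
os.write(fd_file, b"from a file")
os.lseek(fd_file, 0, os.SEEK_SET)

# ...and a socket descriptor (socketpair as a stand-in for TCP).
a, b = socket.socketpair()
a.sendall(b"from a socket")

# The identical os.read() call works on both.
data_file = os.read(fd_file, 64)
data_sock = os.read(b.fileno(), 64)
print(data_file, data_sock)
```

Because both are just descriptors to the kernel, tools like `cat` or shell redirection work on either without knowing what is behind the fd.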

Of course it is quite possible to come up with different models. But Unix seems to be uniquely powerful in its ability to create complex systems from lots of small processes.

The kernel being involved does not imply that all code has to be in the kernel. The FUSE filesystem interface is a well-known way to run filesystem code as user processes. Likewise, there are ways to run device drivers in user space.

The disadvantage is that the extra context switches cost performance. So this approach is used for protocols that are rarely used and do not warrant a full kernel implementation.


taken to the logical extreme, why do anything in the kernel for that matter?


I think a good rule of thumb is that a kernel should be responsible for making hardware devices safe to use among multiple processes, and little else.



