Hacker News

How do you determine if code is written by an AI?


This does not really matter in practice. The risk of legal sanctions is too high for most businesses; they will follow the law. It's similar to pirated software: businesses almost never use it, even though they could definitely get away with it in most cases. The issue is that a single angry former employee is all it takes to make your life hell. This is even more true for large organizations, where many people would know about the unlawful practice.


I have two thoughts about this.

First is who cares about large corporations? Sure large corporations have the money to buy licensed software, but I know plenty of small-to-medium corporations that operate on pirated software.

Second, your statement doesn't mean anything at all. Yes, you can enact a policy in your corporation that no one is allowed to use AI tools like Copilot to write code, but how do you monitor this? How do you know if some developer did use Copilot? This all feels like complete lip service with no actual force behind it. I am 100% sure that even my corporation's code base already contains stuff written with the help of an AI, but there is also no question that the code is fully copyrighted.


> Yes, you can enact a policy in your corporation that no one is allowed to use AI tools like Copilot to write code, but how do you monitor this? How do you know if some developer did use Copilot?

That’s easy: corporate firewalls that block all traffic to openai.com, its subdomains and the IP ranges used by GitHub Copilot.

Enterprises that care about exfiltration of code from their internal networks (e.g. banks and other heavily regulated entities) typically hand out computers that are locked down to their employees, including developers. So any engineer that wants to install the GitHub Copilot extension or indeed any non-approved third party extension from the VSCode Marketplace will first have a word with the folks in IT.
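To make the "block subdomains too" point concrete, here's a toy sketch in Python of the kind of egress blocklist check such a firewall or proxy performs. The domains listed are just illustrative examples, not an exhaustive list of Copilot endpoints:

```python
# Toy sketch of an egress blocklist check a corporate proxy might apply.
# The blocked domains are illustrative examples, not a complete list.
BLOCKED_DOMAINS = {"openai.com", "githubcopilot.com"}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname is a blocked domain or any subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Check every suffix of the hostname (itself and each parent domain)
    # against the blocklist, so subdomains are caught automatically.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKED_DOMAINS:
            return True
    return False

print(is_blocked("api.openai.com"))  # True: subdomain of a blocked domain
print(is_blocked("github.com"))      # False: not on the blocklist
```

Of course, as the reply below notes, this only stops traffic that goes through the corporate network in the first place.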


> corporate firewalls

Yeah, just like how last week I asked IT to open up twitch.tv so I could watch programming live streams over lunch, and was denied (YouTube is wide open, though, so I could just watch the VODs anyway). Instead I just used my phone's data to watch Twitch at lunch.

If a corporate firewall is anything more than a slight inconvenience for you, then you are not technical.

The last part is such nonsense I can't even respond.


yeah but you could get the AI to write the code on your personal laptop, then copy it over to the work laptop.

I can see this being a thing: "I have three jobs as Senior Engineer for three different organisations. All I do is copy code from an AI engine to my work laptop all day."


I guess that could happen, and we will definitely see some people try this. But in the grand scheme of things, it will be exceptionally rare. Most developers can't work outside the developer environment set up by their company; they often rely on internal tools, services hosted on the internal network, stuff like that. If Stack Overflow and Google didn't cause this to happen, I don't see how GPT will.


>Most developers can't work outside their developer environment set up by their company

Are you kidding me? Is this really how you see our industry? You really think that most developers literally can not do work without their company's IT setting up their machine?

Is this normal? This to me sounds like you are saying most devs are such noobs that they can't do their jobs.


In practice, the larger the organization the less likely the potential legal sanctions are to dissuade them. My observation has been that once an organization (in the US anyway) grows large enough it is in a special protected status where no real penalties can come to it and there is certainly no risk of exposure to criminal charges for the decision makers.

Source: front page here every single day.


As someone who works for large enterprises: they are absolutely terrified of legal sanctions and pay huge amounts to contractors who can mitigate the risk. And sanctions do regularly happen, they are just not advertised on the HN frontpage I guess :)


Good question. I assume the methods they're using to determine if an essay is written by an AI won't work on code?


How do you determine whether an essay is written by an AI?


There have been some papers and articles on it. Apparently it's possible (with GPT3 anyway, maybe not GPT4).
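For what it's worth, the detectors those papers describe mostly score text by how predictable it is to a language model: machine-generated text tends to have lower perplexity (fewer surprising word choices) than human writing. A toy sketch of that idea, where `token_logprob` is a hypothetical stand-in for a real language model's scoring function:

```python
import math

def perplexity(tokens, token_logprob):
    """Per-token perplexity: exp of the mean negative log-probability.
    token_logprob(prefix, tok) is assumed to return log P(tok | prefix)
    from some language model (a stand-in here; real detectors use e.g. GPT-2)."""
    total = sum(token_logprob(tokens[:i], tok) for i, tok in enumerate(tokens))
    return math.exp(-total / len(tokens))

def looks_ai_generated(tokens, token_logprob, threshold=20.0):
    """Heuristic: very low perplexity (highly predictable text) is weak
    evidence of machine generation. The threshold here is illustrative;
    real tools calibrate it empirically."""
    return perplexity(tokens, token_logprob) < threshold
```

This also hints at why the "maybe not GPT-4" caveat exists: as models improve, their output's perplexity moves closer to that of human text, and the signal weakens.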



