Ask HN: Privacy concerns when using AI assistants for coding?

6 points by Kholin 2 days ago

I've recently seen some teams say they use third-party AI assistants like Claude or ChatGPT for coding. Don't they consider it a problem to feed their proprietary commercial code into these third-party services?

If you feed the most critical parts of your project to an AI, wouldn't that introduce security vulnerabilities? The AI would then have an in-depth understanding of your project's core architecture. Couldn't other users of the same AI then gain easy access to those underlying details and breach your security defenses?

Furthermore, couldn't other users then easily copy your code without any attribution, making it seem no different from open-source software?

jonplackett 5 hours ago

If your code is written properly, it would be secure even if someone can see the source code (unless there are environment keys in there that shouldn’t be exposed).

If the only security you have is that your code / site structure is secret, that’s not good.
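
A minimal sketch of what that looks like in practice, keeping the secret out of the source entirely (the variable name is hypothetical):

    import os

    # Read the key from the environment at runtime instead of hardcoding it,
    # so the source stays safe to share, with an AI assistant or anyone else.
    api_key = os.environ.get("PAYMENT_API_KEY")  # hypothetical name
    if api_key is None:
        raise RuntimeError("PAYMENT_API_KEY is not set")

Then the code itself can be read by anyone (or anything) without the key going along with it.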

apothegm 2 days ago

In theory, these companies all claim they don’t use data from API calls for training. Whether or not they adhere to that is… TBD, I guess.

So far I’ve decided to trust Anthropic and OpenAI with my code, but not Deepseek, for instance.

baobun 2 days ago

Especially under the current US administration and geopolitical climate?

Yeah, we're not doing that.

Also moved our private git repos and CIs to self-managed.
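
The repo part is mostly just repointing remotes once the self-hosted server is up, e.g. (hostname hypothetical):

    git remote set-url origin git@git.internal.example.com:team/project.git

CI is more work, but self-hosted runners (GitLab Runner, Gitea's act_runner, etc.) can live entirely on your own hardware.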

ATechGuy a day ago

I believe enterprises that care about privacy are using private AI offerings from big tech (say, GitHub Copilot); others may not care so much about it.

bhaney 2 days ago

> The AI would then have an in-depth understanding of your project's core architecture

God how I wish this were true

rvz 2 days ago

Don't forget that your env API keys are getting read and sent to Cursor, Anthropic, OpenAI and Gemini as well.
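
A partial mitigation, assuming your tool honors gitignore-style exclusions (Cursor has a .cursorignore file for this, as I understand it): keep secret files out of what the assistant can index.

    # .cursorignore (gitignore syntax)
    .env
    .env.*
    *.pem
    secrets/

You're still trusting that the exclusion is actually enforced client-side, of course.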