Safe AI Development with Claude Code Permission Controls

Introduction

You've seen how generative AI can automate boilerplate setup and accelerate project work to 80-90% completion, leaving you to focus on the complex, high-value tasks. The potential productivity gains are significant, and the use case is clear.

Yet many organizations have legitimate concerns about using AI tools with proprietary code. Security teams need assurance that sensitive information won't be exposed, and that's entirely reasonable. The good news is that modern AI coding assistants, such as Claude Code, are built with these concerns in mind.

This is a hands-on guide to addressing security requirements when using Claude Code with proprietary code. As generative AI becomes more sophisticated, the question isn't whether to use these tools; it's how to use them responsibly. I'll walk you through practical solutions that protect sensitive information while still enabling the productivity benefits of AI-assisted development.

Understanding Claude Code's Security Architecture

Permission-Based System

Claude Code operates on a strict permission model, as documented in the official IAM guide. Every action the agent takes is reviewed and either allowed or blocked based on your configuration. If you haven't configured anything, the defaults shown in the following table apply.

Tool Permission Tiers:

| Tool Type | Example | Approval Required |
| --- | --- | --- |
| Read-only | File reads, LS, Grep | No |
| Bash Commands | Shell execution | Yes |
| File Modification | Edit/write files | Yes |
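
These defaults can be tuned per tool. For example, if you trust one specific command, you can pre-approve it so only that exact invocation skips the prompt, while everything else keeps the default behavior from the table. A minimal sketch (npm test is just an illustration, not a recommendation):

{
  "permissions": {
    "allow": [
      "Bash(npm test)"
    ]
  }
}

An exact rule like Bash(npm test) matches only that literal command, so the pre-approval stays narrow.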

What Gets Sent to Anthropic's API?

Sent to Claude API:
- Code you explicitly include in prompts
- Files Claude Code reads (with your permission)
- Command outputs you approve
- Conversation history

NOT sent:
- Files blocked by permission rules
- Environment variables (unless explicitly referenced)
- Files outside your working directory
- Other projects on your machine

Note: According to Anthropic's commercial terms, your code is not used to train models. See the Trust Center for more information and the FAQ.
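
As a preview of the deny rules covered below, you can guarantee that files holding secrets are never read into the conversation, and therefore never sent, by blocking them outright. A minimal sketch (the specific paths are illustrative):

{
  "permissions": {
    "deny": [
      "Read(.env)",
      "Read(.env.*)",
      "Read(secrets/**)"
    ]
  }
}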


The following sections outline several strategies you can use to maintain safety while working.

Use Permission Settings to Protect Sensitive Files

The official way to exclude sensitive files is to use permissions.deny in settings.json.

Create Project Security Settings

# Create the .claude directory
mkdir -p .claude

# Create settings.json with deny rules
cat > .claude/settings.json << 'EOF'
{
  "permissions": {
    "deny":[
      "Read(config/production.yml)"
    ],
    "ask": [
      "Bash(git push *)",
      "Bash(npm install *)"
    ],
    "allow": [
      "Read(src/public/**)",
      "Read(tests/**)",
      "Edit(src/public/**)"
    ]
  }
}
EOF

# Commit to share with the team
git add .claude/settings.json
git commit -m "Add Claude Code security settings"

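If individual team members need stricter personal rules on top of the shared file, they can put them in .claude/settings.local.json, which is meant to stay untracked. A minimal sketch (the fixtures path is illustrative):

# Personal overrides layered on top of the shared settings
cat > .claude/settings.local.json << 'EOF'
{
  "permissions": {
    "deny": [
      "Read(tests/fixtures/customer-data/**)"
    ]
  }
}
EOF

Recent versions of Claude Code configure git to ignore this file automatically; verify your .gitignore if yours does not.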

Path Pattern Types

Path patterns in permission rules follow the gitignore specification, as explained in the IAM guide:

| Pattern | Meaning | Example | Resolves to |
| --- | --- | --- | --- |
| //path | Absolute from filesystem root | Read(//Users/alice/secrets/**) | /Users/alice/secrets/** |
| ~/path | From home directory | Read(~/Documents/*.pdf) | /Users/alice/Documents/*.pdf |
| /path | Relative to the settings file | Edit(/src/**/*.ts) | <settings-file-path>/src/**/*.ts |
| path | Relative to the current directory | Read(*.env) | <cwd>/*.env |

Important: A pattern like /Users/alice/file is NOT absolute - use //Users/alice/file for absolute paths!
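
A single settings file can mix all four pattern types. A minimal sketch (the paths are illustrative):

{
  "permissions": {
    "deny": [
      "Read(//etc/ssh/**)",
      "Read(~/.aws/**)",
      "Read(/config/secrets.yml)",
      "Read(*.pem)"
    ]
  }
}

The first rule is absolute, the second resolves from the home directory, the third resolves against the directory containing the settings file, and the fourth matches .pem files relative to the current working directory.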

Test Your Permissions

Define the permission boundaries you want in your settings files, then run claude and review them.

# View current permissions (run /permissions inside the interactive session)
claude
/permissions
# Try to access blocked file
claude "Show me the .env file"
# Should be denied

# Try to access allowed file
claude "Show me src/api.py"
# Should work
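
If you want to script these checks rather than run them interactively, for example as a pre-commit sanity test, Claude Code's print mode can help. A minimal sketch, assuming the -p/--print flag available in recent CLI versions:

# Non-interactive check: with the deny rule in place, the file's
# contents should not appear in the response
claude -p "Print the contents of config/production.yml"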


Segment Your Workflow by Project Structure

With this approach, you split your project into public and proprietary sections. This keeps sensitive files out of Claude's reach while still letting the model work on the more generic parts. You protect your unique business code and still use AI for routine tasks, like exposing data from a database through a web API.

my-project/
├── public/                # ✅ Safe for Claude
│   ├── api/               # Public API interfaces
│   └── utils/             # Generic utilities
│
├── proprietary/           # ⚠️ Sensitive
│   ├── algorithms/        # Proprietary business logic
│   └── integrations/      # Third-party secrets
│
└── .claude/
    └── settings.json      # Protection rules

Following the earlier pattern, edit .claude/settings.json:

{
  "permissions": {
    "deny": [
      "Read(proprietary/**)",
      "Read(**/proprietary/**)"
    ],
    "allow": [
      "Read(public/**)",
      "Edit(public/**)"
    ]
  }
}
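
A quick way to validate the split is to ask Claude to explore both directories; reads under public/ should succeed while reads under proprietary/ are denied (the prompt is illustrative):

claude "Summarize the modules under public/ and proprietary/"
# Expected: summaries for public/, denied reads for proprietary/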

Settings Precedence

Good to know: settings are applied in the following order of precedence, as documented in the IAM guide on settings precedence:

1. Enterprise policies (highest - cannot be overridden)
2. Command line arguments
3. Local project settings (`.claude/settings.local.json`)
4. Shared project settings (`.claude/settings.json`)
5. User settings (`~/.claude/settings.json`)
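
For organization-wide guardrails, IT can place rules in the managed settings file, which sits at the top of this list and cannot be overridden by any project or user file. A minimal sketch, assuming the Linux/WSL path documented in the IAM guide (macOS and Windows use different locations):

# Enterprise policy: applies to every user and project on the machine
sudo mkdir -p /etc/claude-code
sudo tee /etc/claude-code/managed-settings.json > /dev/null << 'EOF'
{
  "permissions": {
    "deny": [
      "Read(**/*.pem)",
      "Read(**/secrets/**)"
    ]
  }
}
EOF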

Summary

If your company doesn't allow generative AI in your workflow at all, it risks falling behind. It's your job to show decision-makers that there are ways to work with these tools while still protecting your edge.

The permissions in settings.json give you:

1. Granular Control: You can do more than hide files; you can control reads, writes, commands, and network access.
2. Team Sharing: Commit .claude/settings.json to share security rules.
3. Enterprise Enforcement: IT can enforce policies that can't be bypassed.
4. Flexibility: Different rules for different projects and team members.
5. Auditability: All permissions are explicit and version-controlled.

Next Steps:

1. Tell the person blocking you from using generative AI that it's possible to set permission boundaries for the models.
2. Focus on the essential tasks and let the AI handle the routine work.