Core
Audit logs
Every tool call is permanently recorded in a SHA-256 chained log. Tamper-evident, queryable, forwardable to any SIEM.
How it works
The audit log is an append-only, newline-delimited JSON file at ~/.conductor/audit.log. Each entry carries a SHA-256 hash computed over its own contents together with the previous entry's hash, forming a chain. Modifying, inserting, or deleting any entry breaks the chain — detectable immediately with conductor audit verify.
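The chain check can be sketched in a few lines of Python. This is an illustration of the idea, not Conductor's implementation — in particular, the assumption that each entry's hash is SHA-256 over its canonical JSON with the hash field itself removed is ours, not something the format above specifies.

```python
# Sketch of hash-chain verification over a newline-delimited JSON log.
# Assumption: an entry's "hash" is SHA-256 of its sorted-key compact JSON
# with the "hash" field excluded; "prev" must equal the prior entry's hash.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    body = {k: v for k, v in entry.items() if k != "hash"}
    raw = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(raw).hexdigest()

def verify_chain(lines) -> str:
    prev = None
    for i, line in enumerate(lines):
        entry = json.loads(line)
        if entry["prev"] != prev:
            return f"Chain broken at entry {i}: prev mismatch"
        if entry["hash"] != entry_hash(entry):
            return f"Chain broken at entry {i}: hash mismatch"
        prev = entry["hash"]
    return "Chain intact"
```

Because each hash covers the previous one, editing entry N invalidates every hash from N onward — which is why tampering anywhere in the file is detectable.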
Every call logged
Every tool call, regardless of result, is written to the log before its response is returned. There is no way to skip this step.
Input hashed, not stored
The raw input is hashed (SHA-256) before logging. Sensitive values don't appear in the log in cleartext.
SHA-256 chain
Each entry's hash includes the previous entry's hash. Breaking the chain at any point is cryptographically detectable.
Immutable by design
The log is append-only. Conductor never modifies or deletes existing entries at runtime.
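The "input hashed, not stored" behavior above can be sketched as follows. The serialization shown (sorted-key compact JSON) is an assumption for illustration; the point is only that the log would hold a digest, never the arguments themselves.

```python
# Illustrative sketch: log a SHA-256 digest of the call arguments,
# not the arguments in cleartext. Serialization format is assumed.
import hashlib
import json

def input_hash(args: dict) -> str:
    raw = json.dumps(args, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(raw).hexdigest()
```

The digest is deterministic, so identical inputs can still be correlated across the log, while the sensitive values themselves stay out of it.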
Log entry format
{
"id": "a3f2c1d4-8b7e-4a9f-b2c3-1d4e5f6a7b8c",
"timestamp": "2025-04-03T14:22:11.847Z",
"tool": "filesystem.read",
"inputHash": "sha256:e3b0c44298fc1c149afb...",
"result": "ok",
"durationMs": 3,
"user": "alex",
"sessionId": "sess_9k2mxp",
"prev": "sha256:d4e5f6a7b8c9d0e1f2a3...",
"hash": "sha256:f7a8b9c0d1e2f3a4b5c6..."
}

| Field | Description |
|---|---|
| id | UUID v4. Unique per call. |
| timestamp | ISO 8601 UTC timestamp of when the call completed. |
| tool | Fully qualified tool name (e.g. filesystem.read). |
| inputHash | SHA-256 of the raw input arguments. Not the input itself. |
| result | One of ok, error, or rejected (rejected means the user denied the approval gate). |
| durationMs | Wall-clock time from call receipt to response, in milliseconds. |
| user | OS username of the process that made the call. |
| sessionId | Identifies the client session. Consistent across calls within one session. |
| prev | SHA-256 hash of the previous log entry. null for the first entry. |
| hash | SHA-256 of this entry (including the prev field). Forms the chain. |
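Putting the fields together, constructing one entry might look like the sketch below. The canonicalization (sorted-key compact JSON, hash computed over everything except the hash field itself) is an assumption for illustration, not Conductor's documented spec.

```python
# Hypothetical construction of a log entry with the fields from the table.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_entry(tool, input_hash, result, duration_ms, user, session_id, prev):
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc)
            .isoformat().replace("+00:00", "Z"),
        "tool": tool,
        "inputHash": input_hash,
        "result": result,           # "ok" | "error" | "rejected"
        "durationMs": duration_ms,
        "user": user,
        "sessionId": session_id,
        "prev": prev,               # previous entry's hash, None if first
    }
    body = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    entry["hash"] = "sha256:" + hashlib.sha256(body).hexdigest()
    return entry
```

Note that "hash" is computed after "prev" is set, which is what ties each entry to its predecessor.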
CLI commands
# Print the last 20 entries
conductor audit list
# Filter by tool name
conductor audit list --tool filesystem.write
# Filter by date range
conductor audit list --since 2025-04-01 --until 2025-04-03
# Filter by result
conductor audit list --result error
# Export as JSON
conductor audit export --output audit-2025-04.json
# Verify chain integrity
conductor audit verify
Chain verification
# Verify the entire audit log chain
conductor audit verify
# Output:
# ✓ Chain intact (1,847 entries)
# First entry: 2025-01-12T09:00:00Z
# Last entry: 2025-04-03T14:22:11Z
# What a tampered chain looks like:
# ✗ Chain broken at entry 423
# Expected: sha256:abc123...
# Got: sha256:def456...
Querying with jq
The log is newline-delimited JSON — pipe it through jq for ad-hoc analysis.
# Using jq to analyze audit logs
# Pretty-print the raw log (default location: ~/.conductor/audit.log)
jq . ~/.conductor/audit.log
# Count calls per tool in the last 24 hours
cat ~/.conductor/audit.log \
| jq -r 'select(.timestamp > (now - 86400 | todate)) | .tool' \
| sort | uniq -c | sort -rn
# Find all failed calls
cat ~/.conductor/audit.log \
| jq 'select(.result == "error")'
# Average duration per tool
cat ~/.conductor/audit.log \
| jq -r '[.tool, (.durationMs | tostring)] | join(",")' \
| awk -F, '{sum[$1]+=$2; count[$1]++}
END{for(t in sum) print sum[t]/count[t], t}' \
| sort -rn
Forwarding to a SIEM
Use conductor audit tail to stream new entries to any log aggregator.
# Forward to Splunk via syslog
conductor audit tail | logger -t conductor -p local0.info
# Forward to Datadog
conductor audit tail --json | datadog-agent stream-logs
# Forward to S3 (example with AWS CLI)
conductor audit export --output /tmp/audit.json
aws s3 cp /tmp/audit.json s3://your-bucket/conductor/audit-$(date +%Y%m%d).json
Retention and rotation
Retention and rotation are configured in ~/.conductor/config.json:
// ~/.conductor/config.json (inline comments shown for annotation only)
{
"audit": {
"retentionDays": 90, // Auto-archive entries older than 90 days
"maxSizeMb": 500, // Roll over when file exceeds 500 MB
"compress": true, // Gzip archived files
"archivePath": "~/.conductor/audit-archive"
}
}