Ecosyste.ms: Advisories

An open API service providing security vulnerability metadata for many open source software ecosystems.

Security Advisories: GSA_kwCzR0hTQS1mOG02LWgyYzctOGg5eM0g4Q

Inefficient Regular Expression Complexity in nltk (word_tokenize, sent_tokenize)

Impact

The vulnerability is present in PunktSentenceTokenizer, sent_tokenize and word_tokenize. Any users of this class, or these two functions, are vulnerable to a Regular Expression Denial of Service (ReDoS) attack.
In short, a specifically crafted long input to any of these vulnerable functions will cause them to take a significant amount of execution time. The effect of this vulnerability is noticeable with the following example:

import time

from nltk.tokenize import word_tokenize

n = 8
for length in [10**i for i in range(2, n)]:
    # Prepare a malicious input
    text = "a" * length
    start_t = time.time()
    # Call `word_tokenize` and naively measure the execution time
    word_tokenize(text)
    print(f"A length of {length:<{n}} takes {time.time() - start_t:.4f}s")

During testing, this produced the following output:

A length of 100      takes 0.0060s
A length of 1000     takes 0.0060s
A length of 10000    takes 0.6320s
A length of 100000   takes 56.3322s
...

I canceled the execution of the program after running it for several hours.

If your program relies on any of the vulnerable functions to tokenize unpredictable user input, we strongly recommend upgrading to a version of NLTK without the vulnerability, or applying the workaround described below.

Patches

The problem has been patched in NLTK 3.6.6. After the fix, running the above program gives the following result:

A length of 100      takes 0.0070s
A length of 1000     takes 0.0010s
A length of 10000    takes 0.0060s
A length of 100000   takes 0.0400s
A length of 1000000  takes 0.3520s
A length of 10000000 takes 3.4641s

This output shows execution time growing linearly with input length, which is the expected behavior for a well-behaved regular expression.
We recommend updating to NLTK 3.6.6+ if possible.
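To check whether an installed copy predates the fix, the installed version (available as `nltk.__version__`) can be compared against 3.6.6. The `is_patched` helper below is an illustrative sketch, not part of NLTK, and it ignores pre-release tags:

```python
def is_patched(version: str) -> bool:
    """Return True if `version` is at or above the fixed release 3.6.6."""
    # Compare numeric (major, minor, micro) tuples; pre-release tags are ignored.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (3, 6, 6)

print(is_patched("3.6.5"))  # False: affected by GHSA-f8m6-h2c7-8h9x
print(is_patched("3.6.6"))  # True: patched
```

In practice this would be called as `is_patched(nltk.__version__)`.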

Workarounds

The execution time of the vulnerable functions is exponential in the length of a malicious input. In other words, the execution time can be bounded by limiting the maximum length of any input passed to the vulnerable functions. Our recommendation is to implement such a limit.
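Such a limit can be sketched as a thin wrapper around any tokenizer callable. The cap of 10,000 characters is an illustrative assumption, not a value prescribed by the advisory; choose a limit appropriate for your workload:

```python
MAX_INPUT_LEN = 10_000  # illustrative cap, not a value from the advisory

def safe_tokenize(text, tokenizer, max_len=MAX_INPUT_LEN):
    """Reject over-long inputs before they reach a ReDoS-prone tokenizer."""
    if len(text) > max_len:
        raise ValueError(f"input length {len(text)} exceeds limit {max_len}")
    return tokenizer(text)

# Dependency-free demonstration using str.split as the tokenizer;
# with NLTK you would pass word_tokenize or sent_tokenize instead.
print(safe_tokenize("hello world", str.split))  # ['hello', 'world']
```

With NLTK this would be called as `safe_tokenize(user_text, word_tokenize)`; an input such as the `"a" * 100000` string from the example above would then be rejected up front instead of stalling the process.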


Permalink: https://github.com/advisories/GHSA-f8m6-h2c7-8h9x
JSON: https://advisories.ecosyste.ms/api/v1/advisories/GSA_kwCzR0hTQS1mOG02LWgyYzctOGg5eM0g4Q
Source: GitHub Advisory Database
Origin: Unspecified
Severity: High
Classification: General
Published: almost 3 years ago
Updated: 17 days ago


CVSS Score: 7.5
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Identifiers: GHSA-f8m6-h2c7-8h9x, CVE-2021-43854
References: Repository: https://github.com/nltk/nltk
Blast Radius: 35.7

Affected Packages

pypi:nltk
Dependent packages: 1,440
Dependent repositories: 57,572
Downloads: 22,314,213 last month
Affected Version Ranges: < 3.6.6
Fixed in: 3.6.6
All affected versions: 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.9.8, 0.9.9, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.4.1, 3.4.2, 3.4.3, 3.4.4, 3.4.5, 3.6.1, 3.6.2, 3.6.3, 3.6.4, 3.6.5
All unaffected versions: 3.6.6, 3.6.7, 3.8.1, 3.8.2, 3.9.1