diff mbox series

[meta-oe,kirkstone,1/4] nodejs: fix CVE-2024-22019

Message ID 20240223083620.182565-1-archana.polampalli@windriver.com
State New
Headers show
Series [meta-oe,kirkstone,1/4] nodejs: fix CVE-2024-22019 | expand

Commit Message

Polampalli, Archana Feb. 23, 2024, 8:36 a.m. UTC
From: Archana Polampalli <archana.polampalli@windriver.com>

A vulnerability in Node.js HTTP servers allows an attacker to send a specially
crafted HTTP request with chunked encoding, leading to resource exhaustion and
denial of service (DoS). The server reads an unbounded number of bytes from a
single connection, exploiting the lack of limitations on chunk extension bytes.
The issue can cause CPU and network bandwidth exhaustion, bypassing standard
safeguards like timeouts and body size limits.

Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
---
 .../nodejs/nodejs/CVE-2024-22019.patch        | 241 ++++++++++++++++++
 .../recipes-devtools/nodejs/nodejs_16.20.2.bb |   1 +
 2 files changed, 242 insertions(+)
 create mode 100644 meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-22019.patch
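
For reference, the request shape this fix defends against can be sketched in plain Node.js. This is illustrative only: the 20000-byte extension length and the Host value are arbitrary, chosen to mirror the patch's own test.

```javascript
// A chunked request whose chunk extension is attacker-controlled and
// arbitrarily long: the chunk carries only 2 bytes of body, but ~20 kB
// of extension data that an unpatched parser reads without any limit.
const extension = 'A'.repeat(20000); // extension name, attacker-controlled

const payload =
  'GET / HTTP/1.1\r\n' +
  'Host: localhost:8080\r\n' +
  'Transfer-Encoding: chunked\r\n\r\n' +
  '2;' + extension + '=bar\r\n' +    // 2-byte chunk, huge extension
  'AA\r\n' +                         // the 2 bytes of actual body
  '0\r\n\r\n';                       // terminating chunk

// Roughly 20 kB of parser work for 2 bytes of body; repeated on one
// connection this exhausts CPU and bandwidth without tripping body
// size limits or timeouts.
console.log(payload.length);
```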

Comments

Mittal, Anuj Feb. 23, 2024, 9:45 a.m. UTC | #1
On Fri, 2024-02-23 at 08:36 +0000, Polampalli, Archana via
lists.openembedded.org wrote:
> From: Archana Polampalli <archana.polampalli@windriver.com>
> 
> A vulnerability in Node.js HTTP servers allows an attacker to send a
> specially
> crafted HTTP request with chunked encoding, leading to resource
> exhaustion and
> denial of service (DoS). The server reads an unbounded number of
> bytes from a
> single connection, exploiting the lack of limitations on chunk
> extension bytes.
> The issue can cause CPU and network bandwidth exhaustion, bypassing
> standard
> safeguards like timeouts and body size limits
> 
> Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
> ---
>  .../nodejs/nodejs/CVE-2024-22019.patch        | 241
> ++++++++++++++++++
>  .../recipes-devtools/nodejs/nodejs_16.20.2.bb |   1 +
>  2 files changed, 242 insertions(+)
>  create mode 100644 meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-
> 22019.patch
> 
> diff --git a/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-
> 22019.patch b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-
> 22019.patch
> new file mode 100644
> index 000000000..26fd2ff87
> --- /dev/null
> +++ b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-22019.patch
> @@ -0,0 +1,241 @@
> +From 03a5c34a829742f1c47b68f831b2940af44addf6 Mon Sep 17 00:00:00
> 2001
> +From: Paolo Insogna <paolo@cowtech.it>
> +Date: Wed, 3 Jan 2024 07:23:15 +0100
> +Subject: [PATCH] http: add maximum chunk extension size
> +
> +PR-URL: https://github.com/nodejs-private/node-private/pull/518
> +Fixes: https://hackerone.com/reports/2233486
> +Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
> +Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
> +Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
> +
> +CVE-ID: CVE-2024-22019
> +
> +Upstream-Status: Backport
> [https://github.com/nodejs/node/commit/03a5c34a829742f]

You didn't mention the changes done to this commit in this patch ...

Are we sure that this change is correct for nodejs 16.20.2? It looks
like this fix would need llhttp to be bumped as well for these checks
to work. 

For nodejs 18:
https://github.com/nodejs/node/commit/911cb33cdadab57a75f97186290ea8f3903a6171



> +
> +Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
> +---
> + lib/_http_server.js                           |   8 ++
> + src/node_http_parser.cc                       |  19 ++-
> + .../test-http-chunk-extensions-limit.js       | 131
> ++++++++++++++++++
> + 3 files changed, 157 insertions(+), 1 deletion(-)
> + create mode 100644 test/parallel/test-http-chunk-extensions-
> limit.js
> +
> +diff --git a/lib/_http_server.js b/lib/_http_server.js
> +index 4e23266..263bb52 100644
> +--- a/lib/_http_server.js
> ++++ b/lib/_http_server.js
> +@@ -706,6 +706,11 @@ const requestHeaderFieldsTooLargeResponse =
> Buffer.from(
> +   `HTTP/1.1 431 ${STATUS_CODES[431]}\r\n` +
> +   'Connection: close\r\n\r\n', 'ascii'
> + );
> ++const requestChunkExtensionsTooLargeResponse = Buffer.from(
> ++  `HTTP/1.1 413 ${STATUS_CODES[413]}\r\n` +
> ++  'Connection: close\r\n\r\n', 'ascii',
> ++);
> ++
> + function socketOnError(e) {
> +   // Ignore further errors
> +   this.removeListener('error', socketOnError);
> +@@ -719,6 +724,9 @@ function socketOnError(e) {
> +         case 'HPE_HEADER_OVERFLOW':
> +           response = requestHeaderFieldsTooLargeResponse;
> +           break;
> ++	case 'HPE_CHUNK_EXTENSIONS_OVERFLOW':
> ++          response = requestChunkExtensionsTooLargeResponse;
> ++          break;
> +         case 'ERR_HTTP_REQUEST_TIMEOUT':
> +           response = requestTimeoutResponse;
> +           break;
> +diff --git a/src/node_http_parser.cc b/src/node_http_parser.cc
> +index 74f3248..a137fd7 100644
> +--- a/src/node_http_parser.cc
> ++++ b/src/node_http_parser.cc
> +@@ -79,6 +79,8 @@ const uint32_t kOnExecute = 5;
> + const uint32_t kOnTimeout = 6;
> + // Any more fields than this will be flushed into JS
> + const size_t kMaxHeaderFieldsCount = 32;
> ++// Maximum size of chunk extensions
> ++const size_t kMaxChunkExtensionsSize = 16384;
> +
> + const uint32_t kLenientNone = 0;
> + const uint32_t kLenientHeaders = 1 << 0;
> +@@ -206,6 +208,7 @@ class Parser : public AsyncWrap, public
> StreamListener {
> +
> +   int on_message_begin() {
> +     num_fields_ = num_values_ = 0;
> ++    chunk_extensions_nread_ = 0;
> +     url_.Reset();
> +     status_message_.Reset();
> +     header_parsing_start_time_ = uv_hrtime();
> +@@ -443,9 +446,22 @@ class Parser : public AsyncWrap, public
> StreamListener {
> +     return 0;
> +   }
> +
> +-  // Reset nread for the next chunk
> ++  int on_chunk_extension(const char* at, size_t length) {
> ++    chunk_extensions_nread_ += length;
> ++
> ++    if (chunk_extensions_nread_ > kMaxChunkExtensionsSize) {
> ++      llhttp_set_error_reason(&parser_,
> ++          "HPE_CHUNK_EXTENSIONS_OVERFLOW:Chunk extensions
> overflow");
> ++      return HPE_USER;
> ++    }
> ++
> ++    return 0;
> ++  }
> ++
> ++  // Reset nread for the next chunk and also reset the extensions
> counter
> +   int on_chunk_header() {
> +     header_nread_ = 0;
> ++    chunk_extensions_nread_ = 0;
> +     return 0;
> +   }
> +
> +@@ -887,6 +903,7 @@ class Parser : public AsyncWrap, public
> StreamListener {
> +   const char* current_buffer_data_;
> +   bool pending_pause_ = false;
> +   uint64_t header_nread_ = 0;
> ++  uint64_t chunk_extensions_nread_ = 0;
> +   uint64_t max_http_header_size_;
> +   uint64_t headers_timeout_;
> +   uint64_t header_parsing_start_time_ = 0;
> +diff --git a/test/parallel/test-http-chunk-extensions-limit.js
> b/test/parallel/test-http-chunk-extensions-limit.js
> +new file mode 100644
> +index 0000000..6868b3d
> +--- /dev/null
> ++++ b/test/parallel/test-http-chunk-extensions-limit.js
> +@@ -0,0 +1,131 @@
> ++'use strict';
> ++
> ++const common = require('../common');
> ++const http = require('http');
> ++const net = require('net');
> ++const assert = require('assert');
> ++
> ++// Verify that chunk extensions are limited in size when sent all
> together.
> ++{
> ++  const server = http.createServer((req, res) => {
> ++    req.on('end', () => {
> ++      res.writeHead(200, { 'Content-Type': 'text/plain' });
> ++      res.end('bye');
> ++    });
> ++
> ++    req.resume();
> ++  });
> ++
> ++  server.listen(0, () => {
> ++    const sock = net.connect(server.address().port);
> ++    let data = '';
> ++
> ++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
> ++
> ++    sock.on('end', common.mustCall(function() {
> ++      assert.strictEqual(data, 'HTTP/1.1 413 Payload Too
> Large\r\nConnection: close\r\n\r\n');
> ++      server.close();
> ++    }));
> ++
> ++    sock.end('' +
> ++      'GET / HTTP/1.1\r\n' +
> ++      'Host: localhost:8080\r\n' +
> ++      'Transfer-Encoding: chunked\r\n\r\n' +
> ++      '2;' + 'A'.repeat(20000) + '=bar\r\nAA\r\n' +
> ++      '0\r\n\r\n'
> ++    );
> ++  });
> ++}
> ++
> ++// Verify that chunk extensions are limited in size when sent in
> intervals.
> ++{
> ++  const server = http.createServer((req, res) => {
> ++    req.on('end', () => {
> ++      res.writeHead(200, { 'Content-Type': 'text/plain' });
> ++      res.end('bye');
> ++    });
> ++
> ++    req.resume();
> ++  });
> ++
> ++  server.listen(0, () => {
> ++    const sock = net.connect(server.address().port);
> ++    let remaining = 20000;
> ++    let data = '';
> ++
> ++    const interval = setInterval(
> ++      () => {
> ++        if (remaining > 0) {
> ++          sock.write('A'.repeat(1000));
> ++        } else {
> ++          sock.write('=bar\r\nAA\r\n0\r\n\r\n');
> ++          clearInterval(interval);
> ++        }
> ++
> ++        remaining -= 1000;
> ++      },
> ++      common.platformTimeout(20),
> ++    ).unref();
> ++
> ++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
> ++
> ++    sock.on('end', common.mustCall(function() {
> ++      assert.strictEqual(data, 'HTTP/1.1 413 Payload Too
> Large\r\nConnection: close\r\n\r\n');
> ++      server.close();
> ++    }));
> ++
> ++    sock.write('' +
> ++    'GET / HTTP/1.1\r\n' +
> ++    'Host: localhost:8080\r\n' +
> ++    'Transfer-Encoding: chunked\r\n\r\n' +
> ++    '2;'
> ++    );
> ++  });
> ++}
> ++
> ++// Verify the chunk extensions is correctly reset after a chunk
> ++{
> ++  const server = http.createServer((req, res) => {
> ++    req.on('end', () => {
> ++      res.writeHead(200, { 'content-type': 'text/plain',
> 'connection': 'close', 'date': 'now' });
> ++      res.end('bye');
> ++    });
> ++
> ++    req.resume();
> ++  });
> ++
> ++  server.listen(0, () => {
> ++    const sock = net.connect(server.address().port);
> ++    let data = '';
> ++
> ++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
> ++
> ++    sock.on('end', common.mustCall(function() {
> ++      assert.strictEqual(
> ++        data,
> ++        'HTTP/1.1 200 OK\r\n' +
> ++        'content-type: text/plain\r\n' +
> ++        'connection: close\r\n' +
> ++        'date: now\r\n' +
> ++        'Transfer-Encoding: chunked\r\n' +
> ++        '\r\n' +
> ++        '3\r\n' +
> ++        'bye\r\n' +
> ++        '0\r\n' +
> ++        '\r\n',
> ++      );
> ++
> ++      server.close();
> ++    }));
> ++
> ++    sock.end('' +
> ++      'GET / HTTP/1.1\r\n' +
> ++      'Host: localhost:8080\r\n' +
> ++      'Transfer-Encoding: chunked\r\n\r\n' +
> ++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
> ++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
> ++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
> ++      '0\r\n\r\n'
> ++    );
> ++  });
> ++}
> +--
> +2.40.0
> diff --git a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> index 16593a0fe..b786c0273 100644
> --- a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> +++ b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> @@ -27,6 +27,7 @@ SRC_URI =
> "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
>            
> file://0001-mips-Use-32bit-cast-for-operand-on-mips32.patch \
>             file://0001-Nodejs-Fixed-pipes-DeprecationWarning.patch \
>             file://CVE-2022-25883.patch \
> +           file://CVE-2024-22019.patch \
>             "
>  SRC_URI:append:class-target = " \
>             file://0001-Using-native-binaries.patch \
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> View/Reply Online (#109004):
> https://lists.openembedded.org/g/openembedded-devel/message/109004
> Mute This Topic: https://lists.openembedded.org/mt/104524912/3616702
> Group Owner: openembedded-devel+owner@lists.openembedded.org
> Unsubscribe:
> https://lists.openembedded.org/g/openembedded-devel/unsub [
> anuj.mittal@intel.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
Polampalli, Archana Feb. 27, 2024, 11:37 a.m. UTC | #2
Kindly ignore this patch.

Regards,
Archana
akuster808 Feb. 28, 2024, 1:56 p.m. UTC | #3
On 2/27/24 6:37 AM, Polampalli, Archana via lists.openembedded.org wrote:
> Kindly ignore this patch.

thanks for letting me know.

- Armin
>
> Regards,
> Archana
> ------------------------------------------------------------------------
> *From:* openembedded-devel@lists.openembedded.org 
> <openembedded-devel@lists.openembedded.org> on behalf of Polampalli, 
> Archana via lists.openembedded.org 
> <archana.polampalli=windriver.com@lists.openembedded.org>
> *Sent:* Friday, February 23, 2024 14:06
> *To:* openembedded-devel@lists.openembedded.org 
> <openembedded-devel@lists.openembedded.org>
> *Subject:* [oe][meta-oe][kirkstone][PATCH 2/4] nodejs: fix CVE-2024-21892
> From: Archana Polampalli <archana.polampalli@windriver.com>
>
> On Linux, Node.js ignores certain environment variables if those may 
> have been
> set by an unprivileged user while the process is running with elevated 
> privileges
> with the only exception of CAP_NET_BIND_SERVICE. Due to a bug in the
> implementation of this exception, Node.js incorrectly applies this 
> exception
> even when certain other capabilities have been set. This allows 
> unprivileged
> users to inject code that inherits the process's elevated privileges.
>
> Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
> ---
>  .../nodejs/nodejs/CVE-2024-21892-0001.patch   | 97 +++++++++++++++++++
>  .../nodejs/nodejs/CVE-2024-21892-0002.patch   | 58 +++++++++++
>  .../recipes-devtools/nodejs/nodejs_16.20.2.bb |  2 +
>  3 files changed, 157 insertions(+)
>  create mode 100644 
> meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0001.patch
>  create mode 100644 
> meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0002.patch
>
> diff --git 
> a/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0001.patch 
> b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0001.patch
> new file mode 100644
> index 000000000..0eb988fac
> --- /dev/null
> +++ b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0001.patch
> @@ -0,0 +1,97 @@
> +From 3f619407fe1e597657b598383d0b5003a064311b Mon Sep 17 00:00:00 2001
> +From: Daniel Bevenius <daniel.bevenius@gmail.com>
> +Date: Wed, 17 Mar 2021 13:48:51 +0100
> +Subject: [PATCH 2/5] src: allow CAP_NET_BIND_SERVICE in SafeGetenv
> +
> +This commit updates SafeGetenv to check if the current process has the
> +effective capability cap_net_bind_service set, and if so allows
> +environment variables to be read.
> +
> +The motivation for this change is a use-case where Node is run in a
> +container, and the is a requirement to be able to listen to ports
> +below 1024. This is done by setting the capability of
> +cap_net_bind_service. In addition there is a need to set the
> +environment variable `NODE_EXTRA_CA_CERTS`. But currently this
> +environment variable will not be read when the capability has been set
> +on the executable.
> +
> +PR-URL: https://github.com/nodejs/node/pull/37727
> +Reviewed-By: Anna Henningsen <anna@addaleax.net>
> +Reviewed-By: Richard Lau <rlau@redhat.com>
> +Reviewed-By: James M Snell <jasnell@gmail.com>
> +Reviewed-By: Michael Dawson <midawson@redhat.com>
> +
> +CVE: CVE-2024-21892
> +
> +Upstream-Status: Backport 
> [https://github.com/nodejs/node/commit/3f619407fe1e5976]
> +
> +Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
> +---
> + src/node_credentials.cc | 38 +++++++++++++++++++++++++++++++++++++-
> + 1 file changed, 37 insertions(+), 1 deletion(-)
> +
> +diff --git a/src/node_credentials.cc b/src/node_credentials.cc
> +index 4c098c9..7688af8 100644
> +--- a/src/node_credentials.cc
> ++++ b/src/node_credentials.cc
> +@@ -12,6 +12,11 @@
> + #include <unistd.h>  // setuid, getuid
> + #endif
> +
> ++#ifdef __linux__
> ++#include <linux/capability.h>
> ++#include <sys/syscall.h>
> ++#endif  // __linux__
> ++
> + namespace node {
> +
> + using v8::Array;
> +@@ -33,14 +38,45 @@ bool linux_at_secure = false;
> +
> + namespace credentials {
> +
> +-// Look up environment variable unless running as setuid root.
> ++#if defined(__linux__)
> ++// Returns true if the current process only has the passed-in 
> capability.
> ++bool HasOnly(int capability) {
> ++  DCHECK(cap_valid(capability));
> ++
> ++  struct __user_cap_data_struct cap_data[2];
> ++  struct __user_cap_header_struct cap_header_data = {
> ++    _LINUX_CAPABILITY_VERSION_3,
> ++    getpid()};
> ++
> ++
> ++  if (syscall(SYS_capget, &cap_header_data, &cap_data) != 0) {
> ++    return false;
> ++  }
> ++  if (capability < 32) {
> ++    return cap_data[0].permitted ==
> ++        static_cast<unsigned int>(CAP_TO_MASK(capability));
> ++  }
> ++  return cap_data[1].permitted ==
> ++      static_cast<unsigned int>(CAP_TO_MASK(capability));
> ++}
> ++#endif
> ++
> ++// Look up the environment variable and allow the lookup if the current
> ++// process only has the capability CAP_NET_BIND_SERVICE set. If the 
> current
> ++// process does not have any capabilities set and the process is 
> running as
> ++// setuid root then lookup will not be allowed.
> + bool SafeGetenv(const char* key,
> +                 std::string* text,
> +                 std::shared_ptr<KVStore> env_vars,
> +                 v8::Isolate* isolate) {
> + #if !defined(__CloudABI__) && !defined(_WIN32)
> ++#if defined(__linux__)
> ++  if ((!HasOnly(CAP_NET_BIND_SERVICE) && 
> per_process::linux_at_secure) ||
> ++      getuid() != geteuid() || getgid() != getegid())
> ++#else
> +   if (per_process::linux_at_secure || getuid() != geteuid() ||
> +       getgid() != getegid())
> ++#endif
> +     goto fail;
> + #endif
> +
> +--
> +2.40.0
> diff --git 
> a/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0002.patch 
> b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0002.patch
> new file mode 100644
> index 000000000..efb64db7d
> --- /dev/null
> +++ b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-21892-0002.patch
> @@ -0,0 +1,58 @@
> +From 10ecf400679e04eddab940721cad3f6c1d603b61 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Tobias=20Nie=C3=9Fen?= <tniessen@tnie.de>
> +Date: Sat, 4 Nov 2023 00:39:57 +0000
> +Subject: [PATCH 3/5] src: fix HasOnly(capability) in node::credentials
> +
> +SYS_capget with _LINUX_CAPABILITY_VERSION_3 returns the process's
> +permitted capabilities as two 32-bit values. To determine if the only
> +permitted capability is indeed CAP_NET_BIND_SERVICE, it is necessary to
> +check both of those values.
> +
> +Not doing so creates a vulnerability that potentially allows
> +unprivileged users to inject code into a privileged Node.js process
> +through environment variables such as NODE_OPTIONS.
> +
> +PR-URL: https://github.com/nodejs-private/node-private/pull/505
> +Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
> +
> +CVE-ID: CVE-2024-21892
> +
> +Upstream-Status: Backport 
> [https://github.com/nodejs/node/commit/10ecf400679e04ed]
> +
> +Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
> +---
> + src/node_credentials.cc | 12 +++++-------
> + 1 file changed, 5 insertions(+), 7 deletions(-)
> +
> +diff --git a/src/node_credentials.cc b/src/node_credentials.cc
> +index 7688af8..3dcbc8a 100644
> +--- a/src/node_credentials.cc
> ++++ b/src/node_credentials.cc
> +@@ -43,7 +43,7 @@ namespace credentials {
> + bool HasOnly(int capability) {
> +   DCHECK(cap_valid(capability));
> +
> +-  struct __user_cap_data_struct cap_data[2];
> ++  struct __user_cap_data_struct cap_data[_LINUX_CAPABILITY_U32S_3];
> +   struct __user_cap_header_struct cap_header_data = {
> +     _LINUX_CAPABILITY_VERSION_3,
> +     getpid()};
> +@@ -52,12 +52,10 @@ bool HasOnly(int capability) {
> +   if (syscall(SYS_capget, &cap_header_data, &cap_data) != 0) {
> +     return false;
> +   }
> +-  if (capability < 32) {
> +-    return cap_data[0].permitted ==
> +-        static_cast<unsigned int>(CAP_TO_MASK(capability));
> +-  }
> +-  return cap_data[1].permitted ==
> +-      static_cast<unsigned int>(CAP_TO_MASK(capability));
> ++  static_assert(arraysize(cap_data) == 2);
> ++  return cap_data[CAP_TO_INDEX(capability)].permitted ==
> ++             static_cast<unsigned int>(CAP_TO_MASK(capability)) &&
> ++         cap_data[1 - CAP_TO_INDEX(capability)].permitted == 0;
> + }
> + #endif
> +
> +--
> +2.40.0
> diff --git a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb 
> b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> index b786c0273..9540ed44e 100644
> --- a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> +++ b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
> @@ -28,6 +28,8 @@ SRC_URI = 
> "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
> file://0001-Nodejs-Fixed-pipes-DeprecationWarning.patch \
> file://CVE-2022-25883.patch \
> file://CVE-2024-22019.patch \
> + file://CVE-2024-21892-0001.patch \
> + file://CVE-2024-21892-0002.patch \
>             "
>  SRC_URI:append:class-target = " \
> file://0001-Using-native-binaries.patch \
> -- 
> 2.40.0
>
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> View/Reply Online (#109039): https://lists.openembedded.org/g/openembedded-devel/message/109039
> Mute This Topic: https://lists.openembedded.org/mt/104600757/3616698
> Group Owner: openembedded-devel+owner@lists.openembedded.org
> Unsubscribe: https://lists.openembedded.org/g/openembedded-devel/unsub [akuster808@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
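The two CVE-2024-21892 patches quoted above hinge on how SYS_capget with _LINUX_CAPABILITY_VERSION_3 returns the permitted capability set as two 32-bit words. A standalone sketch of the buggy versus fixed HasOnly() logic follows; the constants mirror linux/capability.h, and the capget() syscall is replaced by plain arrays purely for illustration.

```javascript
// Constants and macros as in <linux/capability.h>
const CAP_NET_BIND_SERVICE = 10;
const CAP_TO_INDEX = (x) => x >>> 5;               // which 32-bit word
const CAP_TO_MASK = (x) => (1 << (x & 31)) >>> 0;  // bit within that word

// Pre-fix check (patch 0001): only inspects the word containing the
// capability, so extra capabilities in the OTHER word go unnoticed.
function hasOnlyBuggy(permitted, capability) {
  return permitted[CAP_TO_INDEX(capability)] === CAP_TO_MASK(capability);
}

// Post-fix check (patch 0002): the matching word must hold exactly this
// capability AND the other word must be empty.
function hasOnlyFixed(permitted, capability) {
  return permitted[CAP_TO_INDEX(capability)] === CAP_TO_MASK(capability) &&
         permitted[1 - CAP_TO_INDEX(capability)] === 0;
}

// Process with only CAP_NET_BIND_SERVICE: both versions agree.
const onlyNetBind = [CAP_TO_MASK(CAP_NET_BIND_SERVICE), 0];
// Process that ALSO holds a high capability (bit 38 lands in word 1,
// e.g. CAP_PERFMON): the buggy check still reports "only".
const withHighCap = [CAP_TO_MASK(CAP_NET_BIND_SERVICE), CAP_TO_MASK(38)];

console.log(hasOnlyBuggy(onlyNetBind, CAP_NET_BIND_SERVICE)); // true
console.log(hasOnlyFixed(onlyNetBind, CAP_NET_BIND_SERVICE)); // true
console.log(hasOnlyBuggy(withHighCap, CAP_NET_BIND_SERVICE)); // true  <- the bug
console.log(hasOnlyFixed(withHighCap, CAP_NET_BIND_SERVICE)); // false
```

The buggy variant lets a process holding CAP_NET_BIND_SERVICE plus any word-1 capability read variables such as NODE_OPTIONS while running with elevated privileges, which is the injection vector the second patch closes.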

Patch

diff --git a/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-22019.patch b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-22019.patch
new file mode 100644
index 000000000..26fd2ff87
--- /dev/null
+++ b/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2024-22019.patch
@@ -0,0 +1,241 @@ 
+From 03a5c34a829742f1c47b68f831b2940af44addf6 Mon Sep 17 00:00:00 2001
+From: Paolo Insogna <paolo@cowtech.it>
+Date: Wed, 3 Jan 2024 07:23:15 +0100
+Subject: [PATCH] http: add maximum chunk extension size
+
+PR-URL: https://github.com/nodejs-private/node-private/pull/518
+Fixes: https://hackerone.com/reports/2233486
+Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
+Reviewed-By: Marco Ippolito <marcoippolito54@gmail.com>
+Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
+
+CVE-ID: CVE-2024-22019
+
+Upstream-Status: Backport [https://github.com/nodejs/node/commit/03a5c34a829742f]
+
+Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
+---
+ lib/_http_server.js                           |   8 ++
+ src/node_http_parser.cc                       |  19 ++-
+ .../test-http-chunk-extensions-limit.js       | 131 ++++++++++++++++++
+ 3 files changed, 157 insertions(+), 1 deletion(-)
+ create mode 100644 test/parallel/test-http-chunk-extensions-limit.js
+
+diff --git a/lib/_http_server.js b/lib/_http_server.js
+index 4e23266..263bb52 100644
+--- a/lib/_http_server.js
++++ b/lib/_http_server.js
+@@ -706,6 +706,11 @@ const requestHeaderFieldsTooLargeResponse = Buffer.from(
+   `HTTP/1.1 431 ${STATUS_CODES[431]}\r\n` +
+   'Connection: close\r\n\r\n', 'ascii'
+ );
++const requestChunkExtensionsTooLargeResponse = Buffer.from(
++  `HTTP/1.1 413 ${STATUS_CODES[413]}\r\n` +
++  'Connection: close\r\n\r\n', 'ascii',
++);
++
+ function socketOnError(e) {
+   // Ignore further errors
+   this.removeListener('error', socketOnError);
+@@ -719,6 +724,9 @@ function socketOnError(e) {
+         case 'HPE_HEADER_OVERFLOW':
+           response = requestHeaderFieldsTooLargeResponse;
+           break;
++	case 'HPE_CHUNK_EXTENSIONS_OVERFLOW':
++          response = requestChunkExtensionsTooLargeResponse;
++          break;
+         case 'ERR_HTTP_REQUEST_TIMEOUT':
+           response = requestTimeoutResponse;
+           break;
+diff --git a/src/node_http_parser.cc b/src/node_http_parser.cc
+index 74f3248..a137fd7 100644
+--- a/src/node_http_parser.cc
++++ b/src/node_http_parser.cc
+@@ -79,6 +79,8 @@ const uint32_t kOnExecute = 5;
+ const uint32_t kOnTimeout = 6;
+ // Any more fields than this will be flushed into JS
+ const size_t kMaxHeaderFieldsCount = 32;
++// Maximum size of chunk extensions
++const size_t kMaxChunkExtensionsSize = 16384;
+
+ const uint32_t kLenientNone = 0;
+ const uint32_t kLenientHeaders = 1 << 0;
+@@ -206,6 +208,7 @@ class Parser : public AsyncWrap, public StreamListener {
+
+   int on_message_begin() {
+     num_fields_ = num_values_ = 0;
++    chunk_extensions_nread_ = 0;
+     url_.Reset();
+     status_message_.Reset();
+     header_parsing_start_time_ = uv_hrtime();
+@@ -443,9 +446,22 @@ class Parser : public AsyncWrap, public StreamListener {
+     return 0;
+   }
+
+-  // Reset nread for the next chunk
++  int on_chunk_extension(const char* at, size_t length) {
++    chunk_extensions_nread_ += length;
++
++    if (chunk_extensions_nread_ > kMaxChunkExtensionsSize) {
++      llhttp_set_error_reason(&parser_,
++          "HPE_CHUNK_EXTENSIONS_OVERFLOW:Chunk extensions overflow");
++      return HPE_USER;
++    }
++
++    return 0;
++  }
++
++  // Reset nread for the next chunk and also reset the extensions counter
+   int on_chunk_header() {
+     header_nread_ = 0;
++    chunk_extensions_nread_ = 0;
+     return 0;
+   }
+
+@@ -887,6 +903,7 @@ class Parser : public AsyncWrap, public StreamListener {
+   const char* current_buffer_data_;
+   bool pending_pause_ = false;
+   uint64_t header_nread_ = 0;
++  uint64_t chunk_extensions_nread_ = 0;
+   uint64_t max_http_header_size_;
+   uint64_t headers_timeout_;
+   uint64_t header_parsing_start_time_ = 0;
+diff --git a/test/parallel/test-http-chunk-extensions-limit.js b/test/parallel/test-http-chunk-extensions-limit.js
+new file mode 100644
+index 0000000..6868b3d
+--- /dev/null
++++ b/test/parallel/test-http-chunk-extensions-limit.js
+@@ -0,0 +1,131 @@
++'use strict';
++
++const common = require('../common');
++const http = require('http');
++const net = require('net');
++const assert = require('assert');
++
++// Verify that chunk extensions are limited in size when sent all together.
++{
++  const server = http.createServer((req, res) => {
++    req.on('end', () => {
++      res.writeHead(200, { 'Content-Type': 'text/plain' });
++      res.end('bye');
++    });
++
++    req.resume();
++  });
++
++  server.listen(0, () => {
++    const sock = net.connect(server.address().port);
++    let data = '';
++
++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
++
++    sock.on('end', common.mustCall(function() {
++      assert.strictEqual(data, 'HTTP/1.1 413 Payload Too Large\r\nConnection: close\r\n\r\n');
++      server.close();
++    }));
++
++    sock.end('' +
++      'GET / HTTP/1.1\r\n' +
++      'Host: localhost:8080\r\n' +
++      'Transfer-Encoding: chunked\r\n\r\n' +
++      '2;' + 'A'.repeat(20000) + '=bar\r\nAA\r\n' +
++      '0\r\n\r\n'
++    );
++  });
++}
++
++// Verify that chunk extensions are limited in size when sent in intervals.
++{
++  const server = http.createServer((req, res) => {
++    req.on('end', () => {
++      res.writeHead(200, { 'Content-Type': 'text/plain' });
++      res.end('bye');
++    });
++
++    req.resume();
++  });
++
++  server.listen(0, () => {
++    const sock = net.connect(server.address().port);
++    let remaining = 20000;
++    let data = '';
++
++    const interval = setInterval(
++      () => {
++        if (remaining > 0) {
++          sock.write('A'.repeat(1000));
++        } else {
++          sock.write('=bar\r\nAA\r\n0\r\n\r\n');
++          clearInterval(interval);
++        }
++
++        remaining -= 1000;
++      },
++      common.platformTimeout(20),
++    ).unref();
++
++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
++
++    sock.on('end', common.mustCall(function() {
++      assert.strictEqual(data, 'HTTP/1.1 413 Payload Too Large\r\nConnection: close\r\n\r\n');
++      server.close();
++    }));
++
++    sock.write('' +
++    'GET / HTTP/1.1\r\n' +
++    'Host: localhost:8080\r\n' +
++    'Transfer-Encoding: chunked\r\n\r\n' +
++    '2;'
++    );
++  });
++}
++
++// Verify the chunk extensions is correctly reset after a chunk
++{
++  const server = http.createServer((req, res) => {
++    req.on('end', () => {
++      res.writeHead(200, { 'content-type': 'text/plain', 'connection': 'close', 'date': 'now' });
++      res.end('bye');
++    });
++
++    req.resume();
++  });
++
++  server.listen(0, () => {
++    const sock = net.connect(server.address().port);
++    let data = '';
++
++    sock.on('data', (chunk) => data += chunk.toString('utf-8'));
++
++    sock.on('end', common.mustCall(function() {
++      assert.strictEqual(
++        data,
++        'HTTP/1.1 200 OK\r\n' +
++        'content-type: text/plain\r\n' +
++        'connection: close\r\n' +
++        'date: now\r\n' +
++        'Transfer-Encoding: chunked\r\n' +
++        '\r\n' +
++        '3\r\n' +
++        'bye\r\n' +
++        '0\r\n' +
++        '\r\n',
++      );
++
++      server.close();
++    }));
++
++    sock.end('' +
++      'GET / HTTP/1.1\r\n' +
++      'Host: localhost:8080\r\n' +
++      'Transfer-Encoding: chunked\r\n\r\n' +
++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
++      '2;' + 'A'.repeat(10000) + '=bar\r\nAA\r\n' +
++      '0\r\n\r\n'
++    );
++  });
++}
+--
+2.40.0
diff --git a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
index 16593a0fe..b786c0273 100644
--- a/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
+++ b/meta-oe/recipes-devtools/nodejs/nodejs_16.20.2.bb
@@ -27,6 +27,7 @@  SRC_URI = "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
            file://0001-mips-Use-32bit-cast-for-operand-on-mips32.patch \
            file://0001-Nodejs-Fixed-pipes-DeprecationWarning.patch \
            file://CVE-2022-25883.patch \
+           file://CVE-2024-22019.patch \
            "
 SRC_URI:append:class-target = " \
            file://0001-Using-native-binaries.patch \
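
The counter logic the patch adds to node_http_parser.cc can be modeled in a few lines of JavaScript. This is a simplified sketch, not the actual parser API: chunk_extensions_nread_ accumulates across extension callbacks and is reset on each chunk header, so many small extensions each get a fresh budget while one oversized extension overflows.

```javascript
// Model of the 16 kB limit from the patch (kMaxChunkExtensionsSize).
const kMaxChunkExtensionsSize = 16384;

function makeParser() {
  let extensionsRead = 0; // models chunk_extensions_nread_
  return {
    // Called (possibly repeatedly) with each piece of a chunk extension.
    onChunkExtension(length) {
      extensionsRead += length;
      return extensionsRead > kMaxChunkExtensionsSize
        ? 'HPE_CHUNK_EXTENSIONS_OVERFLOW'
        : 0;
    },
    // Called at the start of every chunk header: reset the counter.
    onChunkHeader() {
      extensionsRead = 0;
      return 0;
    },
  };
}

const p = makeParser();
const verdicts = [];
// Three chunks with 10 kB extensions each pass, because the counter
// resets between chunks (this mirrors the patch's third test case) ...
for (let i = 0; i < 3; i++) {
  p.onChunkHeader();
  verdicts.push(p.onChunkExtension(10000));
}
// ... while a single 20 kB extension on one chunk overflows.
p.onChunkHeader();
verdicts.push(p.onChunkExtension(20000));
console.log(verdicts); // [0, 0, 0, 'HPE_CHUNK_EXTENSIONS_OVERFLOW']
```

The per-chunk reset in on_chunk_header() is what keeps legitimate traffic (many chunks, modest extensions) working while still bounding the bytes an attacker can force the parser to consume per chunk.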