From patchwork Fri Dec 22 15:11:00 2023
X-Patchwork-Submitter: Alexander Kanavin
X-Patchwork-Id: 36862
From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 1/9] python3-yappi: update 1.4.0 -> 1.6.0
Date: Fri, 22 Dec 2023 16:11:00 +0100
Message-Id: <20231222151108.645675-1-alex@linutronix.de>
X-Mailer: git-send-email 2.39.2
X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107753

Drop patches:
0002-add-3.11-to-the-setup.patch - issue resolved upstream
0001-Fix-imports-for-ptests.patch - unclear what the problem is, too difficult to rebase.
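For context on the dropped 0001-Fix-imports-for-ptests.patch: as the deletion below shows, it rewrote the yappi test-suite imports from top-level form (`from utils import ...`) to package-qualified form (`from .utils import ...` / `import tests.utils as utils`) so the helper module resolves when the tests run as a package. A minimal sketch of that difference, using a hypothetical throwaway package layout (the `tests`/`utils`/`MARKER` names here are illustrative, not part of this patch):

```python
# Sketch: why package-qualified imports matter for a tests/ directory that is
# imported as a package. We build a tiny "tests" package in a temp dir and
# import a module from it from outside the package.
import os
import subprocess
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "tests")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "utils.py"), "w") as f:
    f.write("MARKER = 'ok'\n")
with open(os.path.join(pkg, "test_demo.py"), "w") as f:
    # Package-relative import, as in the dropped patch; a bare
    # "from utils import MARKER" would fail here because tests/ itself
    # is not on sys.path when the suite is imported as a package.
    f.write("from .utils import MARKER\n")

# Import the test module the way a package-aware runner would.
result = subprocess.run(
    [sys.executable, "-c", "import tests.test_demo as m; print(m.MARKER)"],
    cwd=tmp, capture_output=True, text=True,
)
print(result.stdout.strip())
```

With the original top-level imports, the suite only worked when invoked from inside tests/; the patch made it runnable via `python -m unittest tests.test_x`-style invocation, which is what the ptest packaging needed.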
Signed-off-by: Alexander Kanavin
---
 .../0001-Fix-imports-for-ptests.patch         | 3895 -----------------
 .../0002-add-3.11-to-the-setup.patch          |   26 -
 ...-yappi_1.4.0.bb => python3-yappi_1.6.0.bb} |    8 +-
 3 files changed, 2 insertions(+), 3927 deletions(-)
 delete mode 100644 meta-python/recipes-devtools/python/python3-yappi/0001-Fix-imports-for-ptests.patch
 delete mode 100644 meta-python/recipes-devtools/python/python3-yappi/0002-add-3.11-to-the-setup.patch
 rename meta-python/recipes-devtools/python/{python3-yappi_1.4.0.bb => python3-yappi_1.6.0.bb} (74%)

diff --git a/meta-python/recipes-devtools/python/python3-yappi/0001-Fix-imports-for-ptests.patch b/meta-python/recipes-devtools/python/python3-yappi/0001-Fix-imports-for-ptests.patch
deleted file mode 100644
index 476db4b7d..000000000
--- a/meta-python/recipes-devtools/python/python3-yappi/0001-Fix-imports-for-ptests.patch
+++ /dev/null
@@ -1,3895 +0,0 @@
-From 0dedc1c573ddc4e87475eb03c64555cd54a72e92 Mon Sep 17 00:00:00 2001
-From: Trevor Gamblin
-Date: Mon, 7 Jun 2021 09:40:20 -0400
-Subject: [PATCH] Fix imports for tests
-
-Signed-off-by: Trevor Gamblin
----
-Upstream-Status: Pending
-
- tests/test_asyncio.py              | 2 +-
- tests/test_asyncio_context_vars.py | 2 +-
- tests/test_functionality.py        | 2 +-
- tests/test_hooks.py                | 2 +-
- tests/test_tags.py                 | 2 +-
- 5 files changed, 6 insertions(+), 6 deletions(-)
-
---- a/tests/test_asyncio.py
-+++ b/tests/test_asyncio.py
-@@ -2,7 +2,7 @@ import unittest
- import yappi
- import asyncio
- import threading
--from utils import YappiUnitTestCase, find_stat_by_name, burn_cpu, burn_io
-+from .utils import YappiUnitTestCase, find_stat_by_name, burn_cpu, burn_io
-
-
- async def async_sleep(sec):
---- a/tests/test_asyncio_context_vars.py
-+++ b/tests/test_asyncio_context_vars.py
-@@ -5,7 +5,7 @@ import contextvars
- import functools
- import time
- import os
--import utils
-+import tests.utils as utils
- import yappi
-
- async_context_id = contextvars.ContextVar('async_context_id')
---- a/tests/test_functionality.py
-+++ b/tests/test_functionality.py
-@@ -1,1916 +1,1916 @@
--import os
--import sys
--import time
--import threading
--import unittest
--import yappi
--import _yappi
--import utils
--import multiprocessing # added to fix http://bugs.python.org/issue15881 for > Py2.6
--import subprocess
--
--_counter = 0
--
--
--class BasicUsage(utils.YappiUnitTestCase):
--
--    def test_callback_function_int_return_overflow(self):
--        # this test is just here to check if any errors are generated, as the err
--        # is printed in C side, I did not include it here. THere are ways to test
--        # this deterministically, I did not bother
--        import ctypes
--
--        def _unsigned_overflow_margin():
--            return 2**(ctypes.sizeof(ctypes.c_void_p) * 8) - 1
--
--        def foo():
--            pass
--
--        #with utils.captured_output() as (out, err):
--        yappi.set_context_id_callback(_unsigned_overflow_margin)
--        yappi.set_tag_callback(_unsigned_overflow_margin)
--        yappi.start()
--        foo()
--
--    def test_issue60(self):
--
--        def foo():
--            buf = bytearray()
--            buf += b't' * 200
--            view = memoryview(buf)[10:]
--            view = view.tobytes()
--            del buf[:10] # this throws exception
--            return view
--
--        yappi.start(builtins=True)
--        foo()
--        self.assertTrue(
--            len(
--                yappi.get_func_stats(
--                    filter_callback=lambda x: yappi.
--                    func_matches(x, [memoryview.tobytes])
--                )
--            ) > 0
--        )
--        yappi.stop()
--
--    def test_issue54(self):
--
--        def _tag_cbk():
--            global _counter
--            _counter += 1
--            return _counter
--
--        def a():
--            pass
--
--        def b():
--            pass
--
--        yappi.set_tag_callback(_tag_cbk)
--        yappi.start()
--        a()
--        a()
--        a()
--        yappi.stop()
--        stats = yappi.get_func_stats()
--        self.assertEqual(stats.pop().ncall, 3) # aggregated if no tag is given
--        stats = yappi.get_func_stats(tag=1)
--
--        for i in range(1, 3):
--            stats = yappi.get_func_stats(tag=i)
--            stats = yappi.get_func_stats(
--                tag=i, filter_callback=lambda x: yappi.func_matches(x, [a])
--            )
--
--            stat = stats.pop()
--            self.assertEqual(stat.ncall, 1)
--
--        yappi.set_tag_callback(None)
--        yappi.clear_stats()
--        yappi.start()
--        b()
--        b()
--        stats = yappi.get_func_stats()
--        self.assertEqual(len(stats), 1)
--        stat = stats.pop()
--        self.assertEqual(stat.ncall, 2)
--
--    def test_filter(self):
--
--        def a():
--            pass
--
--        def b():
--            a()
--
--        def c():
--            b()
--
--        _TCOUNT = 5
--
--        ts = []
--        yappi.start()
--        for i in range(_TCOUNT):
--            t = threading.Thread(target=c)
--            t.start()
--            ts.append(t)
--
--        for t in ts:
--            t.join()
--
--        yappi.stop()
--
--        ctx_ids = []
--        for tstat in yappi.get_thread_stats():
--            if tstat.name == '_MainThread':
--                main_ctx_id = tstat.id
--            else:
--                ctx_ids.append(tstat.id)
--
--        fstats = yappi.get_func_stats(filter={"ctx_id": 9})
--        self.assertTrue(fstats.empty())
--        fstats = yappi.get_func_stats(
--            filter={
--                "ctx_id": main_ctx_id,
--                "name": "c"
--            }
--        ) # main thread
--        self.assertTrue(fstats.empty())
--
--        for i in ctx_ids:
--            fstats = yappi.get_func_stats(
--                filter={
--                    "ctx_id": i,
--                    "name": "a",
--                    "ncall": 1
--                }
--            )
--            self.assertEqual(fstats.pop().ncall, 1)
--            fstats = yappi.get_func_stats(filter={"ctx_id": i, "name": "b"})
--            self.assertEqual(fstats.pop().ncall, 1)
--            fstats = yappi.get_func_stats(filter={"ctx_id": i, "name": "c"})
--            self.assertEqual(fstats.pop().ncall, 1)
--
--        yappi.clear_stats()
--        yappi.start(builtins=True)
--        time.sleep(0.1)
--        yappi.stop()
--        fstats = yappi.get_func_stats(filter={"module": "time"})
--        self.assertEqual(len(fstats), 1)
--
--        # invalid filters`
--        self.assertRaises(
--            Exception, yappi.get_func_stats, filter={'tag': "sss"}
--        )
--        self.assertRaises(
--            Exception, yappi.get_func_stats, filter={'ctx_id': "None"}
--        )
--
--    def test_filter_callback(self):
--
--        def a():
--            time.sleep(0.1)
--
--        def b():
--            a()
--
--        def c():
--            pass
--
--        def d():
--            pass
--
--        yappi.set_clock_type("wall")
--        yappi.start(builtins=True)
--        a()
--        b()
--        c()
--        d()
--        stats = yappi.get_func_stats(
--            filter_callback=lambda x: yappi.func_matches(x, [a, b])
--        )
--        #stats.print_all()
--        r1 = '''
--        tests/test_functionality.py:98 a 2 0.000000 0.200350 0.100175
--        tests/test_functionality.py:101 b 1 0.000000 0.120000 0.100197
--        '''
--        self.assert_traces_almost_equal(r1, stats)
--        self.assertEqual(len(stats), 2)
--        stats = yappi.get_func_stats(
--            filter_callback=lambda x: yappi.
--            module_matches(x, [sys.modules[__name__]])
--        )
--        r1 = '''
--        tests/test_functionality.py:98 a 2 0.000000 0.230130 0.115065
--        tests/test_functionality.py:101 b 1 0.000000 0.120000 0.109011
--        tests/test_functionality.py:104 c 1 0.000000 0.000002 0.000002
--        tests/test_functionality.py:107 d 1 0.000000 0.000001 0.000001
--        '''
--        self.assert_traces_almost_equal(r1, stats)
--        self.assertEqual(len(stats), 4)
--
--        stats = yappi.get_func_stats(
--            filter_callback=lambda x: yappi.func_matches(x, [time.sleep])
--        )
--        self.assertEqual(len(stats), 1)
--        r1 = '''
--        time.sleep 2 0.206804 0.220000 0.103402
--        '''
--        self.assert_traces_almost_equal(r1, stats)
--
--    def test_print_formatting(self):
--
--        def a():
--            pass
--
--        def b():
--            a()
--
--        func_cols = {
--            1: ("name", 48),
--            0: ("ncall", 5),
--            2: ("tsub", 8),
--        }
--        thread_cols = {
--            1: ("name", 48),
--            0: ("ttot", 8),
--        }
--
--        yappi.start()
--        a()
--        b()
--        yappi.stop()
--        fs = yappi.get_func_stats()
--        cs = fs[1].children
--        ts = yappi.get_thread_stats()
--        #fs.print_all(out=sys.stderr, columns={1:("name", 70), })
--        #cs.print_all(out=sys.stderr, columns=func_cols)
--        #ts.print_all(out=sys.stderr, columns=thread_cols)
--        #cs.print_all(out=sys.stderr, columns={})
--
--        self.assertRaises(
--            yappi.YappiError, fs.print_all, columns={1: ("namee", 9)}
--        )
--        self.assertRaises(
--            yappi.YappiError, cs.print_all, columns={1: ("dd", 0)}
--        )
--        self.assertRaises(
--            yappi.YappiError, ts.print_all, columns={1: ("tidd", 0)}
--        )
--
--    def test_get_clock(self):
--        yappi.set_clock_type('cpu')
--        self.assertEqual('cpu', yappi.get_clock_type())
--        clock_info = yappi.get_clock_info()
--        self.assertTrue('api' in clock_info)
--        self.assertTrue('resolution' in clock_info)
--
--        yappi.set_clock_type('wall')
--        self.assertEqual('wall', yappi.get_clock_type())
--
--        t0 = yappi.get_clock_time()
--        time.sleep(0.1)
--        duration = yappi.get_clock_time() - t0
--        self.assertTrue(0.05 < duration < 0.3)
--
--    def test_profile_decorator(self):
--
--        def aggregate(func, stats):
--            fname = "tests/%s.profile" % (func.__name__)
--            try:
--                stats.add(fname)
--            except IOError:
--                pass
--            stats.save(fname)
--            raise Exception("messing around")
--
--        @yappi.profile(return_callback=aggregate)
--        def a(x, y):
--            if x + y == 25:
--                raise Exception("")
--            return x + y
--
--        def b():
--            pass
--
--        try:
--            os.remove(
--                "tests/a.profile"
--            ) # remove the one from prev test, if available
--        except:
--            pass
--
--        # global profile is on to mess things up
--        yappi.start()
--        b()
--
--        # assert functionality and call function at same time
--        try:
--            self.assertEqual(a(1, 2), 3)
--        except:
--            pass
--        try:
--            self.assertEqual(a(2, 5), 7)
--        except:
--            pass
--        try:
--            a(4, 21)
--        except:
--            pass
--        stats = yappi.get_func_stats().add("tests/a.profile")
--        fsa = utils.find_stat_by_name(stats, 'a')
--        self.assertEqual(fsa.ncall, 3)
--        self.assertEqual(len(stats), 1) # b() should be cleared out.
--
--        @yappi.profile(return_callback=aggregate)
--        def count_down_rec(n):
--            if n == 0:
--                return
--            count_down_rec(n - 1)
--
--        try:
--            os.remove(
--                "tests/count_down_rec.profile"
--            ) # remove the one from prev test, if available
--        except:
--            pass
--
--        try:
--            count_down_rec(4)
--        except:
--            pass
--        try:
--            count_down_rec(3)
--        except:
--            pass
--
--        stats = yappi.YFuncStats("tests/count_down_rec.profile")
--        fsrec = utils.find_stat_by_name(stats, 'count_down_rec')
--        self.assertEqual(fsrec.ncall, 9)
--        self.assertEqual(fsrec.nactualcall, 2)
--
--    def test_strip_dirs(self):
--
--        def a():
--            pass
--
--        stats = utils.run_and_get_func_stats(a, )
--        stats.strip_dirs()
--        fsa = utils.find_stat_by_name(stats, "a")
--        self.assertEqual(fsa.module, os.path.basename(fsa.module))
--
--    @unittest.skipIf(os.name == "nt", "do not run on Windows")
--    def test_run_as_script(self):
--        import re
--        p = subprocess.Popen(
--            ['yappi', os.path.join('./tests', 'run_as_script.py')],
--            stdout=subprocess.PIPE
--        )
--        out, err = p.communicate()
--        self.assertEqual(p.returncode, 0)
--        func_stats, thread_stats = re.split(
--            b'name\\s+id\\s+tid\\s+ttot\\s+scnt\\s*\n', out
--        )
--        self.assertTrue(b'FancyThread' in thread_stats)
--
--    def test_yappi_overhead(self):
--        LOOP_COUNT = 100000
--
--        def a():
--            pass
--
--        def b():
--            for i in range(LOOP_COUNT):
--                a()
--
--        t0 = time.time()
--        yappi.start()
--        b()
--        yappi.stop()
--        time_with_yappi = time.time() - t0
--        t0 = time.time()
--        b()
--        time_without_yappi = time.time() - t0
--        if time_without_yappi == 0:
--            time_without_yappi = 0.000001
--
--        # in latest v0.82, I calculated this as close to "7.0" in my machine.
--        # however, %83 of this overhead is coming from tickcount(). The other %17
--        # seems to have been evenly distributed to the internal bookkeeping
--        # structures/algorithms which seems acceptable. Note that our test only
--        # tests one function being profiled at-a-time in a short interval.
--        # profiling high number of functions in a small time
--        # is a different beast, (which is pretty unlikely in most applications)
--        # So as a conclusion: I cannot see any optimization window for Yappi that
--        # is worth implementing as we will only optimize %17 of the time.
--        sys.stderr.write("\r\nYappi puts %0.1f times overhead to the profiled application in average.\r\n" % \
--            (time_with_yappi / time_without_yappi))
--
--    def test_clear_stats_while_running(self):
--
--        def a():
--            pass
--
--        yappi.start()
--        a()
--        yappi.clear_stats()
--        a()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'a')
--        self.assertEqual(fsa.ncall, 1)
--
--    def test_generator(self):
--
--        def _gen(n):
--            while (n > 0):
--                yield n
--                n -= 1
--
--        yappi.start()
--        for x in _gen(5):
--            pass
--        self.assertTrue(
--            yappi.convert2pstats(yappi.get_func_stats()) is not None
--        )
--
--    def test_slice_child_stats_and_strip_dirs(self):
--
--        def b():
--            for i in range(10000000):
--                pass
--
--        def a():
--            b()
--
--        yappi.start(builtins=True)
--        a()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'a')
--        fsb = utils.find_stat_by_name(stats, 'b')
--        self.assertTrue(fsa.children[0:1] is not None)
--        prev_afullname = fsa.full_name
--        prev_bchildfullname = fsa.children[fsb].full_name
--        stats.strip_dirs()
--        self.assertTrue(len(prev_afullname) > len(fsa.full_name))
--        self.assertTrue(
--            len(prev_bchildfullname) > len(fsa.children[fsb].full_name)
--        )
--
--    def test_children_stat_functions(self):
--        _timings = {"a_1": 5, "b_1": 3, "c_1": 1}
--        _yappi._set_test_timings(_timings)
--
--        def b():
--            pass
--
--        def c():
--            pass
--
--        def a():
--            b()
--            c()
--
--        yappi.start()
--        a()
--        b() # non-child call
--        c() # non-child call
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'a')
--        childs_of_a = fsa.children.get().sort("tavg", "desc")
--        prev_item = None
--        for item in childs_of_a:
--            if prev_item:
--                self.assertTrue(prev_item.tavg > item.tavg)
--            prev_item = item
--        childs_of_a.sort("name", "desc")
--        prev_item = None
--        for item in childs_of_a:
--            if prev_item:
--                self.assertTrue(prev_item.name > item.name)
--            prev_item = item
--        childs_of_a.clear()
--        self.assertTrue(childs_of_a.empty())
--
--    def test_no_stats_different_clock_type_load(self):
--
--        def a():
--            pass
--
--        yappi.start()
--        a()
--        yappi.stop()
--        yappi.get_func_stats().save("tests/ystats1.ys")
--        yappi.clear_stats()
--        yappi.set_clock_type("WALL")
--        yappi.start()
--        yappi.stop()
--        stats = yappi.get_func_stats().add("tests/ystats1.ys")
--        fsa = utils.find_stat_by_name(stats, 'a')
--        self.assertTrue(fsa is not None)
--
--    def test_subsequent_profile(self):
--        _timings = {"a_1": 1, "b_1": 1}
--        _yappi._set_test_timings(_timings)
--
--        def a():
--            pass
--
--        def b():
--            pass
--
--        yappi.start()
--        a()
--        yappi.stop()
--        yappi.start()
--        b()
--        yappi.stop()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'a')
--        fsb = utils.find_stat_by_name(stats, 'b')
--        self.assertTrue(fsa is not None)
--        self.assertTrue(fsb is not None)
--        self.assertEqual(fsa.ttot, 1)
--        self.assertEqual(fsb.ttot, 1)
--
--    def test_lambda(self):
--        f = lambda: time.sleep(0.3)
--        yappi.set_clock_type("wall")
--        yappi.start()
--        f()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, '')
--        self.assertTrue(fsa.ttot > 0.1)
--
--    def test_module_stress(self):
--        self.assertEqual(yappi.is_running(), False)
--
--        yappi.start()
--        yappi.clear_stats()
--        self.assertRaises(_yappi.error, yappi.set_clock_type, "wall")
--
--        yappi.stop()
--        yappi.clear_stats()
--        yappi.set_clock_type("cpu")
--        self.assertRaises(yappi.YappiError, yappi.set_clock_type, "dummy")
--        self.assertEqual(yappi.is_running(), False)
--        yappi.clear_stats()
--        yappi.clear_stats()
--
--    def test_stat_sorting(self):
--        _timings = {"a_1": 13, "b_1": 10, "a_2": 6, "b_2": 1}
--        _yappi._set_test_timings(_timings)
--
--        self._ncall = 1
--
--        def a():
--            b()
--
--        def b():
--            if self._ncall == 2:
--                return
--            self._ncall += 1
--            a()
--
--        stats = utils.run_and_get_func_stats(a)
--        stats = stats.sort("totaltime", "desc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.ttot >= stat.ttot)
--            prev_stat = stat
--        stats = stats.sort("totaltime", "asc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.ttot <= stat.ttot)
--            prev_stat = stat
--        stats = stats.sort("avgtime", "asc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.tavg <= stat.tavg)
--            prev_stat = stat
--        stats = stats.sort("name", "asc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.name <= stat.name)
--            prev_stat = stat
--        stats = stats.sort("subtime", "asc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.tsub <= stat.tsub)
--            prev_stat = stat
--
--        self.assertRaises(
--            yappi.YappiError, stats.sort, "invalid_func_sorttype_arg"
--        )
--        self.assertRaises(
--            yappi.YappiError, stats.sort, "totaltime",
--            "invalid_func_sortorder_arg"
--        )
--
--    def test_start_flags(self):
--        self.assertEqual(_yappi._get_start_flags(), None)
--        yappi.start()
--
--        def a():
--            pass
--
--        a()
--        self.assertEqual(_yappi._get_start_flags()["profile_builtins"], 0)
--        self.assertEqual(_yappi._get_start_flags()["profile_multicontext"], 1)
--        self.assertEqual(len(yappi.get_thread_stats()), 1)
--
--    def test_builtin_profiling(self):
--
--        def a():
--            time.sleep(0.4) # is a builtin function
--
--        yappi.set_clock_type('wall')
--
--        yappi.start(builtins=True)
--        a()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'sleep')
--        self.assertTrue(fsa is not None)
--        self.assertTrue(fsa.ttot > 0.3)
--        yappi.stop()
--        yappi.clear_stats()
--
--        def a():
--            pass
--
--        yappi.start()
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--        stats = yappi.get_func_stats()
--
--    def test_singlethread_profiling(self):
--        yappi.set_clock_type('wall')
--
--        def a():
--            time.sleep(0.2)
--
--        class Worker1(threading.Thread):
--
--            def a(self):
--                time.sleep(0.3)
--
--            def run(self):
--                self.a()
--
--        yappi.start(profile_threads=False)
--
--        c = Worker1()
--        c.start()
--        c.join()
--        a()
--        stats = yappi.get_func_stats()
--        fsa1 = utils.find_stat_by_name(stats, 'Worker1.a')
--        fsa2 = utils.find_stat_by_name(stats, 'a')
--        self.assertTrue(fsa1 is None)
--        self.assertTrue(fsa2 is not None)
--        self.assertTrue(fsa2.ttot > 0.1)
--
--    def test_run(self):
--
--        def profiled():
--            pass
--
--        yappi.clear_stats()
--        try:
--            with yappi.run():
--                profiled()
--            stats = yappi.get_func_stats()
--        finally:
--            yappi.clear_stats()
--
--        self.assertIsNotNone(utils.find_stat_by_name(stats, 'profiled'))
--
--    def test_run_recursive(self):
--
--        def profiled():
--            pass
--
--        def not_profiled():
--            pass
--
--        yappi.clear_stats()
--        try:
--            with yappi.run():
--                with yappi.run():
--                    profiled()
--                # Profiling stopped here
--                not_profiled()
--            stats = yappi.get_func_stats()
--        finally:
--            yappi.clear_stats()
--
--        self.assertIsNotNone(utils.find_stat_by_name(stats, 'profiled'))
--        self.assertIsNone(utils.find_stat_by_name(stats, 'not_profiled'))
--
--
--class StatSaveScenarios(utils.YappiUnitTestCase):
--
--    def test_pstats_conversion(self):
--
--        def pstat_id(fs):
--            return (fs.module, fs.lineno, fs.name)
--
--        def a():
--            d()
--
--        def b():
--            d()
--
--        def c():
--            pass
--
--        def d():
--            pass
--
--        _timings = {"a_1": 12, "b_1": 7, "c_1": 5, "d_1": 2}
--        _yappi._set_test_timings(_timings)
--        stats = utils.run_and_get_func_stats(a, )
--        stats.strip_dirs()
--        stats.save("tests/a1.pstats", type="pstat")
--        fsa_pid = pstat_id(utils.find_stat_by_name(stats, "a"))
--        fsd_pid = pstat_id(utils.find_stat_by_name(stats, "d"))
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        stats = utils.run_and_get_func_stats(a, )
--        stats.strip_dirs()
--        stats.save("tests/a2.pstats", type="pstat")
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        stats = utils.run_and_get_func_stats(b, )
--        stats.strip_dirs()
--        stats.save("tests/b1.pstats", type="pstat")
--        fsb_pid = pstat_id(utils.find_stat_by_name(stats, "b"))
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        stats = utils.run_and_get_func_stats(c, )
--        stats.strip_dirs()
--        stats.save("tests/c1.pstats", type="pstat")
--        fsc_pid = pstat_id(utils.find_stat_by_name(stats, "c"))
--
--        # merge saved stats and check pstats values are correct
--        import pstats
--        p = pstats.Stats(
--            'tests/a1.pstats', 'tests/a2.pstats', 'tests/b1.pstats',
--            'tests/c1.pstats'
--        )
--        p.strip_dirs()
--        # ct = ttot, tt = tsub
--        (cc, nc, tt, ct, callers) = p.stats[fsa_pid]
--        self.assertEqual(cc, nc, 2)
--        self.assertEqual(tt, 20)
--        self.assertEqual(ct, 24)
--        (cc, nc, tt, ct, callers) = p.stats[fsd_pid]
--        self.assertEqual(cc, nc, 3)
--        self.assertEqual(tt, 6)
--        self.assertEqual(ct, 6)
--        self.assertEqual(len(callers), 2)
--        (cc, nc, tt, ct) = callers[fsa_pid]
--        self.assertEqual(cc, nc, 2)
--        self.assertEqual(tt, 4)
--        self.assertEqual(ct, 4)
--        (cc, nc, tt, ct) = callers[fsb_pid]
--        self.assertEqual(cc, nc, 1)
--        self.assertEqual(tt, 2)
--        self.assertEqual(ct, 2)
--
--    def test_merge_stats(self):
--        _timings = {
--            "a_1": 15,
--            "b_1": 14,
--            "c_1": 12,
--            "d_1": 10,
--            "e_1": 9,
--            "f_1": 7,
--            "g_1": 6,
--            "h_1": 5,
--            "i_1": 1
--        }
--        _yappi._set_test_timings(_timings)
--
--        def a():
--            b()
--
--        def b():
--            c()
--
--        def c():
--            d()
--
--        def d():
--            e()
--
--        def e():
--            f()
--
--        def f():
--            g()
--
--        def g():
--            h()
--
--        def h():
--            i()
--
--        def i():
--            pass
--
--        yappi.start()
--        a()
--        a()
--        yappi.stop()
--        stats = yappi.get_func_stats()
--        self.assertRaises(
--            NotImplementedError, stats.save, "", "INVALID_SAVE_TYPE"
--        )
--        stats.save("tests/ystats2.ys")
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        yappi.start()
--        a()
--        stats = yappi.get_func_stats().add("tests/ystats2.ys")
--        fsa = utils.find_stat_by_name(stats, "a")
--        fsb = utils.find_stat_by_name(stats, "b")
--        fsc = utils.find_stat_by_name(stats, "c")
--        fsd = utils.find_stat_by_name(stats, "d")
--        fse = utils.find_stat_by_name(stats, "e")
--        fsf = utils.find_stat_by_name(stats, "f")
--        fsg = utils.find_stat_by_name(stats, "g")
--        fsh = utils.find_stat_by_name(stats, "h")
--        fsi = utils.find_stat_by_name(stats, "i")
--        self.assertEqual(fsa.ttot, 45)
--        self.assertEqual(fsa.ncall, 3)
--        self.assertEqual(fsa.nactualcall, 3)
--        self.assertEqual(fsa.tsub, 3)
--        self.assertEqual(fsa.children[fsb].ttot, fsb.ttot)
--        self.assertEqual(fsa.children[fsb].tsub, fsb.tsub)
--        self.assertEqual(fsb.children[fsc].ttot, fsc.ttot)
--        self.assertEqual(fsb.children[fsc].tsub, fsc.tsub)
--        self.assertEqual(fsc.tsub, 6)
--        self.assertEqual(fsc.children[fsd].ttot, fsd.ttot)
--        self.assertEqual(fsc.children[fsd].tsub, fsd.tsub)
--        self.assertEqual(fsd.children[fse].ttot, fse.ttot)
--        self.assertEqual(fsd.children[fse].tsub, fse.tsub)
--        self.assertEqual(fse.children[fsf].ttot, fsf.ttot)
--        self.assertEqual(fse.children[fsf].tsub, fsf.tsub)
--        self.assertEqual(fsf.children[fsg].ttot, fsg.ttot)
--        self.assertEqual(fsf.children[fsg].tsub, fsg.tsub)
--        self.assertEqual(fsg.ttot, 18)
--        self.assertEqual(fsg.tsub, 3)
--        self.assertEqual(fsg.children[fsh].ttot, fsh.ttot)
--        self.assertEqual(fsg.children[fsh].tsub, fsh.tsub)
--        self.assertEqual(fsh.ttot, 15)
--        self.assertEqual(fsh.tsub, 12)
--        self.assertEqual(fsh.tavg, 5)
--        self.assertEqual(fsh.children[fsi].ttot, fsi.ttot)
--        self.assertEqual(fsh.children[fsi].tsub, fsi.tsub)
--        #stats.debug_print()
--
--    def test_merge_multithreaded_stats(self):
--        import _yappi
--        timings = {"a_1": 2, "b_1": 1}
--        _yappi._set_test_timings(timings)
--
--        def a():
--            pass
--
--        def b():
--            pass
--
--        yappi.start()
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--        t = threading.Thread(target=b)
--        t.start()
--        t.join()
--        yappi.get_func_stats().save("tests/ystats1.ys")
--        yappi.clear_stats()
--        _yappi._set_test_timings(timings)
--        self.assertEqual(len(yappi.get_func_stats()), 0)
--        self.assertEqual(len(yappi.get_thread_stats()), 1)
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--
--        self.assertEqual(_yappi._get_start_flags()["profile_builtins"], 0)
--        self.assertEqual(_yappi._get_start_flags()["profile_multicontext"], 1)
--        yappi.get_func_stats().save("tests/ystats2.ys")
--
--        stats = yappi.YFuncStats([
--            "tests/ystats1.ys",
--            "tests/ystats2.ys",
--        ])
--        fsa = utils.find_stat_by_name(stats, "a")
--        fsb = utils.find_stat_by_name(stats, "b")
--        self.assertEqual(fsa.ncall, 2)
--        self.assertEqual(fsb.ncall, 1)
--        self.assertEqual(fsa.tsub, fsa.ttot, 4)
--        self.assertEqual(fsb.tsub, fsb.ttot, 1)
--
--    def test_merge_load_different_clock_types(self):
--        yappi.start(builtins=True)
--
--        def a():
--            b()
--
--        def b():
--            c()
--
--        def c():
--            pass
--
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--        yappi.get_func_stats().sort("name", "asc").save("tests/ystats1.ys")
--        yappi.stop()
--        yappi.clear_stats()
--        yappi.start(builtins=False)
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--        yappi.get_func_stats().save("tests/ystats2.ys")
--        yappi.stop()
--        self.assertRaises(_yappi.error, yappi.set_clock_type, "wall")
--        yappi.clear_stats()
--        yappi.set_clock_type("wall")
--        yappi.start()
--        t = threading.Thread(target=a)
--        t.start()
--        t.join()
--        yappi.get_func_stats().save("tests/ystats3.ys")
--        self.assertRaises(
--            yappi.YappiError,
--            yappi.YFuncStats().add("tests/ystats1.ys").add, "tests/ystats3.ys"
--        )
--        stats = yappi.YFuncStats(["tests/ystats1.ys",
--                                  "tests/ystats2.ys"]).sort("name")
--        fsa = utils.find_stat_by_name(stats, "a")
--        fsb = utils.find_stat_by_name(stats, "b")
--        fsc = utils.find_stat_by_name(stats, "c")
--        self.assertEqual(fsa.ncall, 2)
--        self.assertEqual(fsa.ncall, fsb.ncall, fsc.ncall)
--
--    def test_merge_aabab_aabbc(self):
--        _timings = {
--            "a_1": 15,
--            "a_2": 14,
--            "b_1": 12,
--            "a_3": 10,
--            "b_2": 9,
--            "c_1": 4
--        }
--        _yappi._set_test_timings(_timings)
--
--        def a():
--            if self._ncall == 1:
--                self._ncall += 1
--                a()
--            elif self._ncall == 5:
--                self._ncall += 1
--                a()
--            else:
--                b()
--
--        def b():
--            if self._ncall == 2:
--                self._ncall += 1
--                a()
--            elif self._ncall == 6:
--                self._ncall += 1
--                b()
--            elif self._ncall == 7:
--                c()
--            else:
--                return
--
--        def c():
--            pass
--
--        self._ncall = 1
--        stats = utils.run_and_get_func_stats(a, )
--        stats.save("tests/ystats1.ys")
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        #stats.print_all()
--
--        self._ncall = 5
--        stats = utils.run_and_get_func_stats(a, )
--        stats.save("tests/ystats2.ys")
--
--        #stats.print_all()
--
--        def a(): # same name but another function(code object)
--            pass
--
--        yappi.start()
--        a()
--        stats = yappi.get_func_stats().add(
--            ["tests/ystats1.ys", "tests/ystats2.ys"]
--        )
--        #stats.print_all()
--        self.assertEqual(len(stats), 4)
--
--        fsa = None
--        for stat in stats:
--            if stat.name == "a" and stat.ttot == 45:
--                fsa = stat
--                break
--        self.assertTrue(fsa is not None)
--
--        self.assertEqual(fsa.ncall, 7)
--        self.assertEqual(fsa.nactualcall, 3)
--        self.assertEqual(fsa.ttot, 45)
--        self.assertEqual(fsa.tsub, 10)
--        fsb = utils.find_stat_by_name(stats, "b")
--        fsc = utils.find_stat_by_name(stats, "c")
--        self.assertEqual(fsb.ncall, 6)
--        self.assertEqual(fsb.nactualcall, 3)
--        self.assertEqual(fsb.ttot, 36)
--        self.assertEqual(fsb.tsub, 27)
--        self.assertEqual(fsb.tavg, 6)
--        self.assertEqual(fsc.ttot, 8)
--        self.assertEqual(fsc.tsub, 8)
--        self.assertEqual(fsc.tavg, 4)
--        self.assertEqual(fsc.nactualcall, fsc.ncall, 2)
--
--
--class MultithreadedScenarios(utils.YappiUnitTestCase):
--
--    def test_issue_32(self):
--        '''
--        Start yappi from different thread and we get Internal Error(15) as
--        the current_ctx_id() called while enumerating the threads in start()
--        and as it does not swap to the enumerated ThreadState* the THreadState_GetDict()
--        returns wrong object and thus sets an invalid id for the _ctx structure.
--
--        When this issue happens multiple Threads have same tid as the internal ts_ptr
--        will be same for different contexts. So, let's see if that happens
--        '''
--
--        def foo():
--            time.sleep(0.2)
--
--        def bar():
--            time.sleep(0.1)
--
--        def thread_func():
--            yappi.set_clock_type("wall")
--            yappi.start()
--
--            bar()
--
--        t = threading.Thread(target=thread_func)
--        t.start()
--        t.join()
--
--        foo()
--
--        yappi.stop()
--
--        thread_ids = set()
--        for tstat in yappi.get_thread_stats():
--            self.assertTrue(tstat.tid not in thread_ids)
--            thread_ids.add(tstat.tid)
--
--    def test_subsequent_profile(self):
--        WORKER_COUNT = 5
--
--        def a():
--            pass
--
--        def b():
--            pass
--
--        def c():
--            pass
--
--        _timings = {
--            "a_1": 3,
--            "b_1": 2,
--            "c_1": 1,
--        }
--
--        yappi.start()
--
--        def g():
--            pass
--
--        g()
--        yappi.stop()
--        yappi.clear_stats()
--        _yappi._set_test_timings(_timings)
--        yappi.start()
--
--        _dummy = []
--        for i in range(WORKER_COUNT):
--            t = threading.Thread(target=a)
--            t.start()
--            t.join()
--        for i in range(WORKER_COUNT):
--            t = threading.Thread(target=b)
--            t.start()
--            _dummy.append(t)
--            t.join()
--        for i in range(WORKER_COUNT):
--            t = threading.Thread(target=a)
--            t.start()
--            t.join()
--        for i in range(WORKER_COUNT):
--            t = threading.Thread(target=c)
--            t.start()
--            t.join()
--        yappi.stop()
--        yappi.start()
--
--        def f():
--            pass
--
--        f()
--        stats = yappi.get_func_stats()
--        fsa = utils.find_stat_by_name(stats, 'a')
--        fsb = utils.find_stat_by_name(stats, 'b')
--        fsc = utils.find_stat_by_name(stats, 'c')
--        self.assertEqual(fsa.ncall, 10)
--        self.assertEqual(fsb.ncall, 5)
--        self.assertEqual(fsc.ncall, 5)
--        self.assertEqual(fsa.ttot, fsa.tsub, 30)
--        self.assertEqual(fsb.ttot, fsb.tsub, 10)
--        self.assertEqual(fsc.ttot, fsc.tsub, 5)
--
--        # MACOSx optimizes by only creating one worker thread
--        self.assertTrue(len(yappi.get_thread_stats()) >= 2)
--
--    def test_basic(self):
--        yappi.set_clock_type('wall')
--
--        def dummy():
--            pass
--
--        def a():
--            time.sleep(0.2)
--
--        class Worker1(threading.Thread):
--
--            def a(self):
--                time.sleep(0.3)
--
--            def run(self):
--                self.a()
--
--        yappi.start(builtins=False, profile_threads=True)
--
--        c = Worker1()
--        c.start()
--        c.join()
--        a()
--        stats = yappi.get_func_stats()
--        fsa1 = utils.find_stat_by_name(stats, 'Worker1.a')
--        fsa2 = utils.find_stat_by_name(stats, 'a')
--        self.assertTrue(fsa1 is not None)
--        self.assertTrue(fsa2 is not None)
--        self.assertTrue(fsa1.ttot > 0.2)
--        self.assertTrue(fsa2.ttot > 0.1)
--        tstats = yappi.get_thread_stats()
--        self.assertEqual(len(tstats), 2)
--        tsa = utils.find_stat_by_name(tstats, 'Worker1')
--        tsm = utils.find_stat_by_name(tstats, '_MainThread')
--        dummy() # call dummy to force ctx name to be retrieved again.
--        self.assertTrue(tsa is not None)
--        # TODO: I put dummy() to fix below, remove the comments after a while.
--        self.assertTrue( # FIX: I see this fails sometimes?
--            tsm is not None,
--            'Could not find "_MainThread". Found: %s' % (', '.join(utils.get_stat_names(tstats))))
--
--    def test_ctx_stats(self):
--        from threading import Thread
--        DUMMY_WORKER_COUNT = 5
--        yappi.start()
--
--        class DummyThread(Thread):
--            pass
--
--        def dummy():
--            pass
--
--        def dummy_worker():
--            pass
--
--        for i in range(DUMMY_WORKER_COUNT):
--            t = DummyThread(target=dummy_worker)
--            t.start()
--            t.join()
--        yappi.stop()
--        stats = yappi.get_thread_stats()
--        tsa = utils.find_stat_by_name(stats, "DummyThread")
--        self.assertTrue(tsa is not None)
--        yappi.clear_stats()
--        time.sleep(1.0)
--        _timings = {
--            "a_1": 6,
--            "b_1": 5,
--            "c_1": 3,
--            "d_1": 1,
--            "a_2": 4,
--            "b_2": 3,
--            "c_2": 2,
--            "d_2": 1
--        }
--        _yappi._set_test_timings(_timings)
--
--        class Thread1(Thread):
--            pass
--
--        class Thread2(Thread):
--            pass
--
--        def a():
--            b()
--
--        def b():
--            c()
--
--        def c():
--            d()
--
--        def d():
--            time.sleep(0.6)
--
--        yappi.set_clock_type("wall")
--        yappi.start()
--        t1 = Thread1(target=a)
--        t1.start()
--        t2 = Thread2(target=a)
--        t2.start()
--        t1.join()
--        t2.join()
--        stats = yappi.get_thread_stats()
--
--        # the fist clear_stats clears the context table?
--        tsa = utils.find_stat_by_name(stats, "DummyThread")
--        self.assertTrue(tsa is None)
--
--        tst1 = utils.find_stat_by_name(stats, "Thread1")
--        tst2 = utils.find_stat_by_name(stats, "Thread2")
--        tsmain = utils.find_stat_by_name(stats, "_MainThread")
--        dummy() # call dummy to force ctx name to be retrieved again.
--        self.assertTrue(len(stats) == 3)
--        self.assertTrue(tst1 is not None)
--        self.assertTrue(tst2 is not None)
--        # TODO: I put dummy() to fix below, remove the comments after a while.
--        self.assertTrue( # FIX: I see this fails sometimes
--            tsmain is not None,
--            'Could not find "_MainThread". Found: %s' % (', '.join(utils.get_stat_names(stats))))
--        self.assertTrue(1.0 > tst2.ttot >= 0.5)
--        self.assertTrue(1.0 > tst1.ttot >= 0.5)
--
--        # test sorting of the ctx stats
--        stats = stats.sort("totaltime", "desc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.ttot >= stat.ttot)
--            prev_stat = stat
--        stats = stats.sort("totaltime", "asc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.ttot <= stat.ttot)
--            prev_stat = stat
--        stats = stats.sort("schedcount", "desc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.sched_count >= stat.sched_count)
--            prev_stat = stat
--        stats = stats.sort("name", "desc")
--        prev_stat = None
--        for stat in stats:
--            if prev_stat:
--                self.assertTrue(prev_stat.name.lower() >= stat.name.lower())
--            prev_stat = stat
--        self.assertRaises(
--            yappi.YappiError, stats.sort, "invalid_thread_sorttype_arg"
--        )
--        self.assertRaises(
--            yappi.YappiError, stats.sort, "invalid_thread_sortorder_arg"
--        )
--
--    def test_ctx_stats_cpu(self):
--
--        def get_thread_name():
--            try:
--                return threading.current_thread().name
--            except AttributeError:
--                return "Anonymous"
--
--        def burn_cpu(sec):
--            t0 = yappi.get_clock_time()
--            elapsed = 0
--            while (elapsed < sec):
--                for _ in range(1000):
--                    pass
--                elapsed =
yappi.get_clock_time() - t0 -- -- def test(): -- -- ts = [] -- for i in (0.01, 0.05, 0.1): -- t = threading.Thread(target=burn_cpu, args=(i, )) -- t.name = "burn_cpu-%s" % str(i) -- t.start() -- ts.append(t) -- for t in ts: -- t.join() -- -- yappi.set_clock_type("cpu") -- yappi.set_context_name_callback(get_thread_name) -- -- yappi.start() -- -- test() -- -- yappi.stop() -- -- tstats = yappi.get_thread_stats() -- r1 = ''' -- burn_cpu-0.1 3 123145356058624 0.100105 8 -- burn_cpu-0.05 2 123145361313792 0.050149 8 -- burn_cpu-0.01 1 123145356058624 0.010127 2 -- MainThread 0 4321620864 0.001632 6 -- ''' -- self.assert_ctx_stats_almost_equal(r1, tstats) -- -- def test_producer_consumer_with_queues(self): -- # we currently just stress yappi, no functionality test is done here. -- yappi.start() -- if utils.is_py3x(): -- from queue import Queue -- else: -- from Queue import Queue -- from threading import Thread -- WORKER_THREAD_COUNT = 50 -- WORK_ITEM_COUNT = 2000 -- -- def worker(): -- while True: -- item = q.get() -- # do the work with item -- q.task_done() -- -- q = Queue() -- for i in range(WORKER_THREAD_COUNT): -- t = Thread(target=worker) -- t.daemon = True -- t.start() -- -- for item in range(WORK_ITEM_COUNT): -- q.put(item) -- q.join() # block until all tasks are done -- #yappi.get_func_stats().sort("callcount").print_all() -- yappi.stop() -- -- def test_temporary_lock_waiting(self): -- yappi.start() -- _lock = threading.Lock() -- -- def worker(): -- _lock.acquire() -- try: -- time.sleep(1.0) -- finally: -- _lock.release() -- -- t1 = threading.Thread(target=worker) -- t2 = threading.Thread(target=worker) -- t1.start() -- t2.start() -- t1.join() -- t2.join() -- #yappi.get_func_stats().sort("callcount").print_all() -- yappi.stop() -- -- @unittest.skipIf(os.name != "posix", "requires Posix compliant OS") -- def test_signals_with_blocking_calls(self): -- import signal, os, time -- -- # just to verify if signal is handled correctly and stats/yappi are not corrupted. 
-- def handler(signum, frame): -- raise Exception("Signal handler executed!") -- -- yappi.start() -- signal.signal(signal.SIGALRM, handler) -- signal.alarm(1) -- self.assertRaises(Exception, time.sleep, 2) -- stats = yappi.get_func_stats() -- fsh = utils.find_stat_by_name(stats, "handler") -- self.assertTrue(fsh is not None) -- -- @unittest.skipIf(not sys.version_info >= (3, 2), "requires Python 3.2") -- def test_concurrent_futures(self): -- yappi.start() -- from concurrent.futures import ThreadPoolExecutor -- with ThreadPoolExecutor(max_workers=5) as executor: -- f = executor.submit(pow, 5, 2) -- self.assertEqual(f.result(), 25) -- time.sleep(1.0) -- yappi.stop() -- -- @unittest.skipIf(not sys.version_info >= (3, 2), "requires Python 3.2") -- def test_barrier(self): -- yappi.start() -- b = threading.Barrier(2, timeout=1) -- -- def worker(): -- try: -- b.wait() -- except threading.BrokenBarrierError: -- pass -- except Exception: -- raise Exception("BrokenBarrierError not raised") -- -- t1 = threading.Thread(target=worker) -- t1.start() -- #b.wait() -- t1.join() -- yappi.stop() -- -- --class NonRecursiveFunctions(utils.YappiUnitTestCase): -- -- def test_abcd(self): -- _timings = {"a_1": 6, "b_1": 5, "c_1": 3, "d_1": 1} -- _yappi._set_test_timings(_timings) -- -- def a(): -- b() -- -- def b(): -- c() -- -- def c(): -- d() -- -- def d(): -- pass -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- fsc = utils.find_stat_by_name(stats, 'c') -- fsd = utils.find_stat_by_name(stats, 'd') -- cfsab = fsa.children[fsb] -- cfsbc = fsb.children[fsc] -- cfscd = fsc.children[fsd] -- -- self.assertEqual(fsa.ttot, 6) -- self.assertEqual(fsa.tsub, 1) -- self.assertEqual(fsb.ttot, 5) -- self.assertEqual(fsb.tsub, 2) -- self.assertEqual(fsc.ttot, 3) -- self.assertEqual(fsc.tsub, 2) -- self.assertEqual(fsd.ttot, 1) -- self.assertEqual(fsd.tsub, 1) -- self.assertEqual(cfsab.ttot, 5) -- 
self.assertEqual(cfsab.tsub, 2) -- self.assertEqual(cfsbc.ttot, 3) -- self.assertEqual(cfsbc.tsub, 2) -- self.assertEqual(cfscd.ttot, 1) -- self.assertEqual(cfscd.tsub, 1) -- -- def test_stop_in_middle(self): -- _timings = {"a_1": 6, "b_1": 4} -- _yappi._set_test_timings(_timings) -- -- def a(): -- b() -- yappi.stop() -- -- def b(): -- time.sleep(0.2) -- -- yappi.start() -- a() -- stats = yappi.get_func_stats() -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- -- self.assertEqual(fsa.ncall, 1) -- self.assertEqual(fsa.nactualcall, 0) -- self.assertEqual(fsa.ttot, 0) # no call_leave called -- self.assertEqual(fsa.tsub, 0) # no call_leave called -- self.assertEqual(fsb.ttot, 4) -- -- --class RecursiveFunctions(utils.YappiUnitTestCase): -- -- def test_fibonacci(self): -- -- def fib(n): -- if n > 1: -- return fib(n - 1) + fib(n - 2) -- else: -- return n -- -- stats = utils.run_and_get_func_stats(fib, 22) -- fs = utils.find_stat_by_name(stats, 'fib') -- self.assertEqual(fs.ncall, 57313) -- self.assertEqual(fs.ttot, fs.tsub) -- -- def test_abcadc(self): -- _timings = { -- "a_1": 20, -- "b_1": 19, -- "c_1": 17, -- "a_2": 13, -- "d_1": 12, -- "c_2": 10, -- "a_3": 5 -- } -- _yappi._set_test_timings(_timings) -- -- def a(n): -- if n == 3: -- return -- if n == 1 + 1: -- d(n) -- else: -- b(n) -- -- def b(n): -- c(n) -- -- def c(n): -- a(n + 1) -- -- def d(n): -- c(n) -- -- stats = utils.run_and_get_func_stats(a, 1) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- fsc = utils.find_stat_by_name(stats, 'c') -- fsd = utils.find_stat_by_name(stats, 'd') -- self.assertEqual(fsa.ncall, 3) -- self.assertEqual(fsa.nactualcall, 1) -- self.assertEqual(fsa.ttot, 20) -- self.assertEqual(fsa.tsub, 7) -- self.assertEqual(fsb.ttot, 19) -- self.assertEqual(fsb.tsub, 2) -- self.assertEqual(fsc.ttot, 17) -- self.assertEqual(fsc.tsub, 9) -- self.assertEqual(fsd.ttot, 12) -- self.assertEqual(fsd.tsub, 2) -- cfsca 
= fsc.children[fsa] -- self.assertEqual(cfsca.nactualcall, 0) -- self.assertEqual(cfsca.ncall, 2) -- self.assertEqual(cfsca.ttot, 13) -- self.assertEqual(cfsca.tsub, 6) -- -- def test_aaaa(self): -- _timings = {"d_1": 9, "d_2": 7, "d_3": 3, "d_4": 2} -- _yappi._set_test_timings(_timings) -- -- def d(n): -- if n == 3: -- return -- d(n + 1) -- -- stats = utils.run_and_get_func_stats(d, 0) -- fsd = utils.find_stat_by_name(stats, 'd') -- self.assertEqual(fsd.ncall, 4) -- self.assertEqual(fsd.nactualcall, 1) -- self.assertEqual(fsd.ttot, 9) -- self.assertEqual(fsd.tsub, 9) -- cfsdd = fsd.children[fsd] -- self.assertEqual(cfsdd.ttot, 7) -- self.assertEqual(cfsdd.tsub, 7) -- self.assertEqual(cfsdd.ncall, 3) -- self.assertEqual(cfsdd.nactualcall, 0) -- -- def test_abcabc(self): -- _timings = { -- "a_1": 20, -- "b_1": 19, -- "c_1": 17, -- "a_2": 13, -- "b_2": 11, -- "c_2": 9, -- "a_3": 6 -- } -- _yappi._set_test_timings(_timings) -- -- def a(n): -- if n == 3: -- return -- else: -- b(n) -- -- def b(n): -- c(n) -- -- def c(n): -- a(n + 1) -- -- stats = utils.run_and_get_func_stats(a, 1) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- fsc = utils.find_stat_by_name(stats, 'c') -- self.assertEqual(fsa.ncall, 3) -- self.assertEqual(fsa.nactualcall, 1) -- self.assertEqual(fsa.ttot, 20) -- self.assertEqual(fsa.tsub, 9) -- self.assertEqual(fsb.ttot, 19) -- self.assertEqual(fsb.tsub, 4) -- self.assertEqual(fsc.ttot, 17) -- self.assertEqual(fsc.tsub, 7) -- cfsab = fsa.children[fsb] -- cfsbc = fsb.children[fsc] -- cfsca = fsc.children[fsa] -- self.assertEqual(cfsab.ttot, 19) -- self.assertEqual(cfsab.tsub, 4) -- self.assertEqual(cfsbc.ttot, 17) -- self.assertEqual(cfsbc.tsub, 7) -- self.assertEqual(cfsca.ttot, 13) -- self.assertEqual(cfsca.tsub, 8) -- -- def test_abcbca(self): -- _timings = {"a_1": 10, "b_1": 9, "c_1": 7, "b_2": 4, "c_2": 2, "a_2": 1} -- _yappi._set_test_timings(_timings) -- self._ncall = 1 -- -- def a(): -- if self._ncall 
== 1: -- b() -- else: -- return -- -- def b(): -- c() -- -- def c(): -- if self._ncall == 1: -- self._ncall += 1 -- b() -- else: -- a() -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- fsc = utils.find_stat_by_name(stats, 'c') -- cfsab = fsa.children[fsb] -- cfsbc = fsb.children[fsc] -- cfsca = fsc.children[fsa] -- self.assertEqual(fsa.ttot, 10) -- self.assertEqual(fsa.tsub, 2) -- self.assertEqual(fsb.ttot, 9) -- self.assertEqual(fsb.tsub, 4) -- self.assertEqual(fsc.ttot, 7) -- self.assertEqual(fsc.tsub, 4) -- self.assertEqual(cfsab.ttot, 9) -- self.assertEqual(cfsab.tsub, 2) -- self.assertEqual(cfsbc.ttot, 7) -- self.assertEqual(cfsbc.tsub, 4) -- self.assertEqual(cfsca.ttot, 1) -- self.assertEqual(cfsca.tsub, 1) -- self.assertEqual(cfsca.ncall, 1) -- self.assertEqual(cfsca.nactualcall, 0) -- -- def test_aabccb(self): -- _timings = { -- "a_1": 13, -- "a_2": 11, -- "b_1": 9, -- "c_1": 5, -- "c_2": 3, -- "b_2": 1 -- } -- _yappi._set_test_timings(_timings) -- self._ncall = 1 -- -- def a(): -- if self._ncall == 1: -- self._ncall += 1 -- a() -- else: -- b() -- -- def b(): -- if self._ncall == 3: -- return -- else: -- c() -- -- def c(): -- if self._ncall == 2: -- self._ncall += 1 -- c() -- else: -- b() -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- fsc = utils.find_stat_by_name(stats, 'c') -- cfsaa = fsa.children[fsa.index] -- cfsab = fsa.children[fsb] -- cfsbc = fsb.children[fsc.full_name] -- cfscc = fsc.children[fsc] -- cfscb = fsc.children[fsb] -- self.assertEqual(fsb.ttot, 9) -- self.assertEqual(fsb.tsub, 5) -- self.assertEqual(cfsbc.ttot, 5) -- self.assertEqual(cfsbc.tsub, 2) -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 4) -- self.assertEqual(cfsab.ttot, 9) -- self.assertEqual(cfsab.tsub, 4) -- self.assertEqual(cfsaa.ttot, 11) -- self.assertEqual(cfsaa.tsub, 2) -- 
self.assertEqual(fsc.ttot, 5) -- self.assertEqual(fsc.tsub, 4) -- -- def test_abaa(self): -- _timings = {"a_1": 13, "b_1": 10, "a_2": 9, "a_3": 5} -- _yappi._set_test_timings(_timings) -- -- self._ncall = 1 -- -- def a(): -- if self._ncall == 1: -- b() -- elif self._ncall == 2: -- self._ncall += 1 -- a() -- else: -- return -- -- def b(): -- self._ncall += 1 -- a() -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- cfsaa = fsa.children[fsa] -- cfsba = fsb.children[fsa] -- self.assertEqual(fsb.ttot, 10) -- self.assertEqual(fsb.tsub, 1) -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 12) -- self.assertEqual(cfsaa.ttot, 5) -- self.assertEqual(cfsaa.tsub, 5) -- self.assertEqual(cfsba.ttot, 9) -- self.assertEqual(cfsba.tsub, 4) -- -- def test_aabb(self): -- _timings = {"a_1": 13, "a_2": 10, "b_1": 9, "b_2": 5} -- _yappi._set_test_timings(_timings) -- -- self._ncall = 1 -- -- def a(): -- if self._ncall == 1: -- self._ncall += 1 -- a() -- elif self._ncall == 2: -- b() -- else: -- return -- -- def b(): -- if self._ncall == 2: -- self._ncall += 1 -- b() -- else: -- return -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- cfsaa = fsa.children[fsa] -- cfsab = fsa.children[fsb] -- cfsbb = fsb.children[fsb] -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 4) -- self.assertEqual(fsb.ttot, 9) -- self.assertEqual(fsb.tsub, 9) -- self.assertEqual(cfsaa.ttot, 10) -- self.assertEqual(cfsaa.tsub, 1) -- self.assertEqual(cfsab.ttot, 9) -- self.assertEqual(cfsab.tsub, 4) -- self.assertEqual(cfsbb.ttot, 5) -- self.assertEqual(cfsbb.tsub, 5) -- -- def test_abbb(self): -- _timings = {"a_1": 13, "b_1": 10, "b_2": 6, "b_3": 1} -- _yappi._set_test_timings(_timings) -- -- self._ncall = 1 -- -- def a(): -- if self._ncall == 1: -- b() -- -- def b(): -- if self._ncall == 3: -- return -- 
self._ncall += 1 -- b() -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- cfsab = fsa.children[fsb] -- cfsbb = fsb.children[fsb] -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 3) -- self.assertEqual(fsb.ttot, 10) -- self.assertEqual(fsb.tsub, 10) -- self.assertEqual(fsb.ncall, 3) -- self.assertEqual(fsb.nactualcall, 1) -- self.assertEqual(cfsab.ttot, 10) -- self.assertEqual(cfsab.tsub, 4) -- self.assertEqual(cfsbb.ttot, 6) -- self.assertEqual(cfsbb.tsub, 6) -- self.assertEqual(cfsbb.nactualcall, 0) -- self.assertEqual(cfsbb.ncall, 2) -- -- def test_aaab(self): -- _timings = {"a_1": 13, "a_2": 10, "a_3": 6, "b_1": 1} -- _yappi._set_test_timings(_timings) -- -- self._ncall = 1 -- -- def a(): -- if self._ncall == 3: -- b() -- return -- self._ncall += 1 -- a() -- -- def b(): -- return -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- cfsaa = fsa.children[fsa] -- cfsab = fsa.children[fsb] -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 12) -- self.assertEqual(fsb.ttot, 1) -- self.assertEqual(fsb.tsub, 1) -- self.assertEqual(cfsaa.ttot, 10) -- self.assertEqual(cfsaa.tsub, 9) -- self.assertEqual(cfsab.ttot, 1) -- self.assertEqual(cfsab.tsub, 1) -- -- def test_abab(self): -- _timings = {"a_1": 13, "b_1": 10, "a_2": 6, "b_2": 1} -- _yappi._set_test_timings(_timings) -- -- self._ncall = 1 -- -- def a(): -- b() -- -- def b(): -- if self._ncall == 2: -- return -- self._ncall += 1 -- a() -- -- stats = utils.run_and_get_func_stats(a) -- fsa = utils.find_stat_by_name(stats, 'a') -- fsb = utils.find_stat_by_name(stats, 'b') -- cfsab = fsa.children[fsb] -- cfsba = fsb.children[fsa] -- self.assertEqual(fsa.ttot, 13) -- self.assertEqual(fsa.tsub, 8) -- self.assertEqual(fsb.ttot, 10) -- self.assertEqual(fsb.tsub, 5) -- self.assertEqual(cfsab.ttot, 10) -- 
self.assertEqual(cfsab.tsub, 5) -- self.assertEqual(cfsab.ncall, 2) -- self.assertEqual(cfsab.nactualcall, 1) -- self.assertEqual(cfsba.ttot, 6) -- self.assertEqual(cfsba.tsub, 5) -- -- --if __name__ == '__main__': -- # import sys;sys.argv = ['', 'BasicUsage.test_run_as_script'] -- # import sys;sys.argv = ['', 'MultithreadedScenarios.test_subsequent_profile'] -- unittest.main() -+import os -+import sys -+import time -+import threading -+import unittest -+import yappi -+import _yappi -+import tests.utils as utils -+import multiprocessing # added to fix http://bugs.python.org/issue15881 for > Py2.6 -+import subprocess -+ -+_counter = 0 -+ -+ -+class BasicUsage(utils.YappiUnitTestCase): -+ -+ def test_callback_function_int_return_overflow(self): -+ # this test is just here to check if any errors are generated; as the error -+ # is printed on the C side, it is not checked here. There are ways to test -+ # this deterministically, but I did not bother -+ import ctypes -+ -+ def _unsigned_overflow_margin(): -+ return 2**(ctypes.sizeof(ctypes.c_void_p) * 8) - 1 -+ -+ def foo(): -+ pass -+ -+ #with utils.captured_output() as (out, err): -+ yappi.set_context_id_callback(_unsigned_overflow_margin) -+ yappi.set_tag_callback(_unsigned_overflow_margin) -+ yappi.start() -+ foo() -+ -+ def test_issue60(self): -+ -+ def foo(): -+ buf = bytearray() -+ buf += b't' * 200 -+ view = memoryview(buf)[10:] -+ view = view.tobytes() -+ del buf[:10] # this throws exception -+ return view -+ -+ yappi.start(builtins=True) -+ foo() -+ self.assertTrue( -+ len( -+ yappi.get_func_stats( -+ filter_callback=lambda x: yappi.
-+ func_matches(x, [memoryview.tobytes]) -+ ) -+ ) > 0 -+ ) -+ yappi.stop() -+ -+ def test_issue54(self): -+ -+ def _tag_cbk(): -+ global _counter -+ _counter += 1 -+ return _counter -+ -+ def a(): -+ pass -+ -+ def b(): -+ pass -+ -+ yappi.set_tag_callback(_tag_cbk) -+ yappi.start() -+ a() -+ a() -+ a() -+ yappi.stop() -+ stats = yappi.get_func_stats() -+ self.assertEqual(stats.pop().ncall, 3) # aggregated if no tag is given -+ stats = yappi.get_func_stats(tag=1) -+ -+ for i in range(1, 3): -+ stats = yappi.get_func_stats(tag=i) -+ stats = yappi.get_func_stats( -+ tag=i, filter_callback=lambda x: yappi.func_matches(x, [a]) -+ ) -+ -+ stat = stats.pop() -+ self.assertEqual(stat.ncall, 1) -+ -+ yappi.set_tag_callback(None) -+ yappi.clear_stats() -+ yappi.start() -+ b() -+ b() -+ stats = yappi.get_func_stats() -+ self.assertEqual(len(stats), 1) -+ stat = stats.pop() -+ self.assertEqual(stat.ncall, 2) -+ -+ def test_filter(self): -+ -+ def a(): -+ pass -+ -+ def b(): -+ a() -+ -+ def c(): -+ b() -+ -+ _TCOUNT = 5 -+ -+ ts = [] -+ yappi.start() -+ for i in range(_TCOUNT): -+ t = threading.Thread(target=c) -+ t.start() -+ ts.append(t) -+ -+ for t in ts: -+ t.join() -+ -+ yappi.stop() -+ -+ ctx_ids = [] -+ for tstat in yappi.get_thread_stats(): -+ if tstat.name == '_MainThread': -+ main_ctx_id = tstat.id -+ else: -+ ctx_ids.append(tstat.id) -+ -+ fstats = yappi.get_func_stats(filter={"ctx_id": 9}) -+ self.assertTrue(fstats.empty()) -+ fstats = yappi.get_func_stats( -+ filter={ -+ "ctx_id": main_ctx_id, -+ "name": "c" -+ } -+ ) # main thread -+ self.assertTrue(fstats.empty()) -+ -+ for i in ctx_ids: -+ fstats = yappi.get_func_stats( -+ filter={ -+ "ctx_id": i, -+ "name": "a", -+ "ncall": 1 -+ } -+ ) -+ self.assertEqual(fstats.pop().ncall, 1) -+ fstats = yappi.get_func_stats(filter={"ctx_id": i, "name": "b"}) -+ self.assertEqual(fstats.pop().ncall, 1) -+ fstats = yappi.get_func_stats(filter={"ctx_id": i, "name": "c"}) -+ self.assertEqual(fstats.pop().ncall, 1) -+ -+ 
yappi.clear_stats() -+ yappi.start(builtins=True) -+ time.sleep(0.1) -+ yappi.stop() -+ fstats = yappi.get_func_stats(filter={"module": "time"}) -+ self.assertEqual(len(fstats), 1) -+ -+ # invalid filters -+ self.assertRaises( -+ Exception, yappi.get_func_stats, filter={'tag': "sss"} -+ ) -+ self.assertRaises( -+ Exception, yappi.get_func_stats, filter={'ctx_id': "None"} -+ ) -+ -+ def test_filter_callback(self): -+ -+ def a(): -+ time.sleep(0.1) -+ -+ def b(): -+ a() -+ -+ def c(): -+ pass -+ -+ def d(): -+ pass -+ -+ yappi.set_clock_type("wall") -+ yappi.start(builtins=True) -+ a() -+ b() -+ c() -+ d() -+ stats = yappi.get_func_stats( -+ filter_callback=lambda x: yappi.func_matches(x, [a, b]) -+ ) -+ #stats.print_all() -+ r1 = ''' -+ tests/test_functionality.py:98 a 2 0.000000 0.200350 0.100175 -+ tests/test_functionality.py:101 b 1 0.000000 0.120000 0.100197 -+ ''' -+ self.assert_traces_almost_equal(r1, stats) -+ self.assertEqual(len(stats), 2) -+ stats = yappi.get_func_stats( -+ filter_callback=lambda x: yappi.
-+ module_matches(x, [sys.modules[__name__]]) -+ ) -+ r1 = ''' -+ tests/test_functionality.py:98 a 2 0.000000 0.230130 0.115065 -+ tests/test_functionality.py:101 b 1 0.000000 0.120000 0.109011 -+ tests/test_functionality.py:104 c 1 0.000000 0.000002 0.000002 -+ tests/test_functionality.py:107 d 1 0.000000 0.000001 0.000001 -+ ''' -+ self.assert_traces_almost_equal(r1, stats) -+ self.assertEqual(len(stats), 4) -+ -+ stats = yappi.get_func_stats( -+ filter_callback=lambda x: yappi.func_matches(x, [time.sleep]) -+ ) -+ self.assertEqual(len(stats), 1) -+ r1 = ''' -+ time.sleep 2 0.206804 0.220000 0.103402 -+ ''' -+ self.assert_traces_almost_equal(r1, stats) -+ -+ def test_print_formatting(self): -+ -+ def a(): -+ pass -+ -+ def b(): -+ a() -+ -+ func_cols = { -+ 1: ("name", 48), -+ 0: ("ncall", 5), -+ 2: ("tsub", 8), -+ } -+ thread_cols = { -+ 1: ("name", 48), -+ 0: ("ttot", 8), -+ } -+ -+ yappi.start() -+ a() -+ b() -+ yappi.stop() -+ fs = yappi.get_func_stats() -+ cs = fs[1].children -+ ts = yappi.get_thread_stats() -+ #fs.print_all(out=sys.stderr, columns={1:("name", 70), }) -+ #cs.print_all(out=sys.stderr, columns=func_cols) -+ #ts.print_all(out=sys.stderr, columns=thread_cols) -+ #cs.print_all(out=sys.stderr, columns={}) -+ -+ self.assertRaises( -+ yappi.YappiError, fs.print_all, columns={1: ("namee", 9)} -+ ) -+ self.assertRaises( -+ yappi.YappiError, cs.print_all, columns={1: ("dd", 0)} -+ ) -+ self.assertRaises( -+ yappi.YappiError, ts.print_all, columns={1: ("tidd", 0)} -+ ) -+ -+ def test_get_clock(self): -+ yappi.set_clock_type('cpu') -+ self.assertEqual('cpu', yappi.get_clock_type()) -+ clock_info = yappi.get_clock_info() -+ self.assertTrue('api' in clock_info) -+ self.assertTrue('resolution' in clock_info) -+ -+ yappi.set_clock_type('wall') -+ self.assertEqual('wall', yappi.get_clock_type()) -+ -+ t0 = yappi.get_clock_time() -+ time.sleep(0.1) -+ duration = yappi.get_clock_time() - t0 -+ self.assertTrue(0.05 < duration < 0.3) -+ -+ def 
test_profile_decorator(self): -+ -+ def aggregate(func, stats): -+ fname = "tests/%s.profile" % (func.__name__) -+ try: -+ stats.add(fname) -+ except IOError: -+ pass -+ stats.save(fname) -+ raise Exception("messing around") -+ -+ @yappi.profile(return_callback=aggregate) -+ def a(x, y): -+ if x + y == 25: -+ raise Exception("") -+ return x + y -+ -+ def b(): -+ pass -+ -+ try: -+ os.remove( -+ "tests/a.profile" -+ ) # remove the one from prev test, if available -+ except: -+ pass -+ -+ # global profile is on to mess things up -+ yappi.start() -+ b() -+ -+ # assert functionality and call function at same time -+ try: -+ self.assertEqual(a(1, 2), 3) -+ except: -+ pass -+ try: -+ self.assertEqual(a(2, 5), 7) -+ except: -+ pass -+ try: -+ a(4, 21) -+ except: -+ pass -+ stats = yappi.get_func_stats().add("tests/a.profile") -+ fsa = utils.find_stat_by_name(stats, 'a') -+ self.assertEqual(fsa.ncall, 3) -+ self.assertEqual(len(stats), 1) # b() should be cleared out. -+ -+ @yappi.profile(return_callback=aggregate) -+ def count_down_rec(n): -+ if n == 0: -+ return -+ count_down_rec(n - 1) -+ -+ try: -+ os.remove( -+ "tests/count_down_rec.profile" -+ ) # remove the one from prev test, if available -+ except: -+ pass -+ -+ try: -+ count_down_rec(4) -+ except: -+ pass -+ try: -+ count_down_rec(3) -+ except: -+ pass -+ -+ stats = yappi.YFuncStats("tests/count_down_rec.profile") -+ fsrec = utils.find_stat_by_name(stats, 'count_down_rec') -+ self.assertEqual(fsrec.ncall, 9) -+ self.assertEqual(fsrec.nactualcall, 2) -+ -+ def test_strip_dirs(self): -+ -+ def a(): -+ pass -+ -+ stats = utils.run_and_get_func_stats(a, ) -+ stats.strip_dirs() -+ fsa = utils.find_stat_by_name(stats, "a") -+ self.assertEqual(fsa.module, os.path.basename(fsa.module)) -+ -+ @unittest.skipIf(os.name == "nt", "do not run on Windows") -+ def test_run_as_script(self): -+ import re -+ p = subprocess.Popen( -+ ['yappi', os.path.join('./tests', 'run_as_script.py')], -+ stdout=subprocess.PIPE -+ ) -+ out, err = 
p.communicate() -+ self.assertEqual(p.returncode, 0) -+ func_stats, thread_stats = re.split( -+ b'name\\s+id\\s+tid\\s+ttot\\s+scnt\\s*\n', out -+ ) -+ self.assertTrue(b'FancyThread' in thread_stats) -+ -+ def test_yappi_overhead(self): -+ LOOP_COUNT = 100000 -+ -+ def a(): -+ pass -+ -+ def b(): -+ for i in range(LOOP_COUNT): -+ a() -+ -+ t0 = time.time() -+ yappi.start() -+ b() -+ yappi.stop() -+ time_with_yappi = time.time() - t0 -+ t0 = time.time() -+ b() -+ time_without_yappi = time.time() - t0 -+ if time_without_yappi == 0: -+ time_without_yappi = 0.000001 -+ -+ # in the latest v0.82, I measured this as close to "7.0" on my machine. -+ # However, 83% of this overhead comes from tickcount(); the other 17% -+ # is evenly distributed across the internal bookkeeping -+ # structures/algorithms, which seems acceptable. Note that this test only -+ # profiles one function at a time over a short interval; -+ # profiling a high number of functions in a short time -+ # is a different beast (and pretty unlikely in most applications). -+ # In conclusion: I see no optimization window for Yappi that -+ # is worth implementing, as we would only be optimizing 17% of the time.
-+ sys.stderr.write("\r\nYappi puts %0.1f times overhead to the profiled application in average.\r\n" % \ -+ (time_with_yappi / time_without_yappi)) -+ -+ def test_clear_stats_while_running(self): -+ -+ def a(): -+ pass -+ -+ yappi.start() -+ a() -+ yappi.clear_stats() -+ a() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ self.assertEqual(fsa.ncall, 1) -+ -+ def test_generator(self): -+ -+ def _gen(n): -+ while (n > 0): -+ yield n -+ n -= 1 -+ -+ yappi.start() -+ for x in _gen(5): -+ pass -+ self.assertTrue( -+ yappi.convert2pstats(yappi.get_func_stats()) is not None -+ ) -+ -+ def test_slice_child_stats_and_strip_dirs(self): -+ -+ def b(): -+ for i in range(10000000): -+ pass -+ -+ def a(): -+ b() -+ -+ yappi.start(builtins=True) -+ a() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ self.assertTrue(fsa.children[0:1] is not None) -+ prev_afullname = fsa.full_name -+ prev_bchildfullname = fsa.children[fsb].full_name -+ stats.strip_dirs() -+ self.assertTrue(len(prev_afullname) > len(fsa.full_name)) -+ self.assertTrue( -+ len(prev_bchildfullname) > len(fsa.children[fsb].full_name) -+ ) -+ -+ def test_children_stat_functions(self): -+ _timings = {"a_1": 5, "b_1": 3, "c_1": 1} -+ _yappi._set_test_timings(_timings) -+ -+ def b(): -+ pass -+ -+ def c(): -+ pass -+ -+ def a(): -+ b() -+ c() -+ -+ yappi.start() -+ a() -+ b() # non-child call -+ c() # non-child call -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ childs_of_a = fsa.children.get().sort("tavg", "desc") -+ prev_item = None -+ for item in childs_of_a: -+ if prev_item: -+ self.assertTrue(prev_item.tavg > item.tavg) -+ prev_item = item -+ childs_of_a.sort("name", "desc") -+ prev_item = None -+ for item in childs_of_a: -+ if prev_item: -+ self.assertTrue(prev_item.name > item.name) -+ prev_item = item -+ childs_of_a.clear() -+ self.assertTrue(childs_of_a.empty()) -+ -+ 
def test_no_stats_different_clock_type_load(self): -+ -+ def a(): -+ pass -+ -+ yappi.start() -+ a() -+ yappi.stop() -+ yappi.get_func_stats().save("tests/ystats1.ys") -+ yappi.clear_stats() -+ yappi.set_clock_type("WALL") -+ yappi.start() -+ yappi.stop() -+ stats = yappi.get_func_stats().add("tests/ystats1.ys") -+ fsa = utils.find_stat_by_name(stats, 'a') -+ self.assertTrue(fsa is not None) -+ -+ def test_subsequent_profile(self): -+ _timings = {"a_1": 1, "b_1": 1} -+ _yappi._set_test_timings(_timings) -+ -+ def a(): -+ pass -+ -+ def b(): -+ pass -+ -+ yappi.start() -+ a() -+ yappi.stop() -+ yappi.start() -+ b() -+ yappi.stop() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ self.assertTrue(fsa is not None) -+ self.assertTrue(fsb is not None) -+ self.assertEqual(fsa.ttot, 1) -+ self.assertEqual(fsb.ttot, 1) -+ -+ def test_lambda(self): -+ f = lambda: time.sleep(0.3) -+ yappi.set_clock_type("wall") -+ yappi.start() -+ f() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, '') -+ self.assertTrue(fsa.ttot > 0.1) -+ -+ def test_module_stress(self): -+ self.assertEqual(yappi.is_running(), False) -+ -+ yappi.start() -+ yappi.clear_stats() -+ self.assertRaises(_yappi.error, yappi.set_clock_type, "wall") -+ -+ yappi.stop() -+ yappi.clear_stats() -+ yappi.set_clock_type("cpu") -+ self.assertRaises(yappi.YappiError, yappi.set_clock_type, "dummy") -+ self.assertEqual(yappi.is_running(), False) -+ yappi.clear_stats() -+ yappi.clear_stats() -+ -+ def test_stat_sorting(self): -+ _timings = {"a_1": 13, "b_1": 10, "a_2": 6, "b_2": 1} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ b() -+ -+ def b(): -+ if self._ncall == 2: -+ return -+ self._ncall += 1 -+ a() -+ -+ stats = utils.run_and_get_func_stats(a) -+ stats = stats.sort("totaltime", "desc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.ttot >= 
stat.ttot) -+ prev_stat = stat -+ stats = stats.sort("totaltime", "asc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.ttot <= stat.ttot) -+ prev_stat = stat -+ stats = stats.sort("avgtime", "asc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.tavg <= stat.tavg) -+ prev_stat = stat -+ stats = stats.sort("name", "asc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.name <= stat.name) -+ prev_stat = stat -+ stats = stats.sort("subtime", "asc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.tsub <= stat.tsub) -+ prev_stat = stat -+ -+ self.assertRaises( -+ yappi.YappiError, stats.sort, "invalid_func_sorttype_arg" -+ ) -+ self.assertRaises( -+ yappi.YappiError, stats.sort, "totaltime", -+ "invalid_func_sortorder_arg" -+ ) -+ -+ def test_start_flags(self): -+ self.assertEqual(_yappi._get_start_flags(), None) -+ yappi.start() -+ -+ def a(): -+ pass -+ -+ a() -+ self.assertEqual(_yappi._get_start_flags()["profile_builtins"], 0) -+ self.assertEqual(_yappi._get_start_flags()["profile_multicontext"], 1) -+ self.assertEqual(len(yappi.get_thread_stats()), 1) -+ -+ def test_builtin_profiling(self): -+ -+ def a(): -+ time.sleep(0.4) # is a builtin function -+ -+ yappi.set_clock_type('wall') -+ -+ yappi.start(builtins=True) -+ a() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'sleep') -+ self.assertTrue(fsa is not None) -+ self.assertTrue(fsa.ttot > 0.3) -+ yappi.stop() -+ yappi.clear_stats() -+ -+ def a(): -+ pass -+ -+ yappi.start() -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ stats = yappi.get_func_stats() -+ -+ def test_singlethread_profiling(self): -+ yappi.set_clock_type('wall') -+ -+ def a(): -+ time.sleep(0.2) -+ -+ class Worker1(threading.Thread): -+ -+ def a(self): -+ time.sleep(0.3) -+ -+ def run(self): -+ self.a() -+ -+ yappi.start(profile_threads=False) -+ -+ 
c = Worker1() -+ c.start() -+ c.join() -+ a() -+ stats = yappi.get_func_stats() -+ fsa1 = utils.find_stat_by_name(stats, 'Worker1.a') -+ fsa2 = utils.find_stat_by_name(stats, 'a') -+ self.assertTrue(fsa1 is None) -+ self.assertTrue(fsa2 is not None) -+ self.assertTrue(fsa2.ttot > 0.1) -+ -+ def test_run(self): -+ -+ def profiled(): -+ pass -+ -+ yappi.clear_stats() -+ try: -+ with yappi.run(): -+ profiled() -+ stats = yappi.get_func_stats() -+ finally: -+ yappi.clear_stats() -+ -+ self.assertIsNotNone(utils.find_stat_by_name(stats, 'profiled')) -+ -+ def test_run_recursive(self): -+ -+ def profiled(): -+ pass -+ -+ def not_profiled(): -+ pass -+ -+ yappi.clear_stats() -+ try: -+ with yappi.run(): -+ with yappi.run(): -+ profiled() -+ # Profiling stopped here -+ not_profiled() -+ stats = yappi.get_func_stats() -+ finally: -+ yappi.clear_stats() -+ -+ self.assertIsNotNone(utils.find_stat_by_name(stats, 'profiled')) -+ self.assertIsNone(utils.find_stat_by_name(stats, 'not_profiled')) -+ -+ -+class StatSaveScenarios(utils.YappiUnitTestCase): -+ -+ def test_pstats_conversion(self): -+ -+ def pstat_id(fs): -+ return (fs.module, fs.lineno, fs.name) -+ -+ def a(): -+ d() -+ -+ def b(): -+ d() -+ -+ def c(): -+ pass -+ -+ def d(): -+ pass -+ -+ _timings = {"a_1": 12, "b_1": 7, "c_1": 5, "d_1": 2} -+ _yappi._set_test_timings(_timings) -+ stats = utils.run_and_get_func_stats(a, ) -+ stats.strip_dirs() -+ stats.save("tests/a1.pstats", type="pstat") -+ fsa_pid = pstat_id(utils.find_stat_by_name(stats, "a")) -+ fsd_pid = pstat_id(utils.find_stat_by_name(stats, "d")) -+ yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ stats = utils.run_and_get_func_stats(a, ) -+ stats.strip_dirs() -+ stats.save("tests/a2.pstats", type="pstat") -+ yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ stats = utils.run_and_get_func_stats(b, ) -+ stats.strip_dirs() -+ stats.save("tests/b1.pstats", type="pstat") -+ fsb_pid = pstat_id(utils.find_stat_by_name(stats, "b")) -+ 
yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ stats = utils.run_and_get_func_stats(c, ) -+ stats.strip_dirs() -+ stats.save("tests/c1.pstats", type="pstat") -+ fsc_pid = pstat_id(utils.find_stat_by_name(stats, "c")) -+ -+ # merge saved stats and check pstats values are correct -+ import pstats -+ p = pstats.Stats( -+ 'tests/a1.pstats', 'tests/a2.pstats', 'tests/b1.pstats', -+ 'tests/c1.pstats' -+ ) -+ p.strip_dirs() -+ # ct = ttot, tt = tsub -+ (cc, nc, tt, ct, callers) = p.stats[fsa_pid] -+ self.assertEqual(cc, nc, 2) -+ self.assertEqual(tt, 20) -+ self.assertEqual(ct, 24) -+ (cc, nc, tt, ct, callers) = p.stats[fsd_pid] -+ self.assertEqual(cc, nc, 3) -+ self.assertEqual(tt, 6) -+ self.assertEqual(ct, 6) -+ self.assertEqual(len(callers), 2) -+ (cc, nc, tt, ct) = callers[fsa_pid] -+ self.assertEqual(cc, nc, 2) -+ self.assertEqual(tt, 4) -+ self.assertEqual(ct, 4) -+ (cc, nc, tt, ct) = callers[fsb_pid] -+ self.assertEqual(cc, nc, 1) -+ self.assertEqual(tt, 2) -+ self.assertEqual(ct, 2) -+ -+ def test_merge_stats(self): -+ _timings = { -+ "a_1": 15, -+ "b_1": 14, -+ "c_1": 12, -+ "d_1": 10, -+ "e_1": 9, -+ "f_1": 7, -+ "g_1": 6, -+ "h_1": 5, -+ "i_1": 1 -+ } -+ _yappi._set_test_timings(_timings) -+ -+ def a(): -+ b() -+ -+ def b(): -+ c() -+ -+ def c(): -+ d() -+ -+ def d(): -+ e() -+ -+ def e(): -+ f() -+ -+ def f(): -+ g() -+ -+ def g(): -+ h() -+ -+ def h(): -+ i() -+ -+ def i(): -+ pass -+ -+ yappi.start() -+ a() -+ a() -+ yappi.stop() -+ stats = yappi.get_func_stats() -+ self.assertRaises( -+ NotImplementedError, stats.save, "", "INVALID_SAVE_TYPE" -+ ) -+ stats.save("tests/ystats2.ys") -+ yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ yappi.start() -+ a() -+ stats = yappi.get_func_stats().add("tests/ystats2.ys") -+ fsa = utils.find_stat_by_name(stats, "a") -+ fsb = utils.find_stat_by_name(stats, "b") -+ fsc = utils.find_stat_by_name(stats, "c") -+ fsd = utils.find_stat_by_name(stats, "d") -+ fse = utils.find_stat_by_name(stats, "e") 
-+ fsf = utils.find_stat_by_name(stats, "f") -+ fsg = utils.find_stat_by_name(stats, "g") -+ fsh = utils.find_stat_by_name(stats, "h") -+ fsi = utils.find_stat_by_name(stats, "i") -+ self.assertEqual(fsa.ttot, 45) -+ self.assertEqual(fsa.ncall, 3) -+ self.assertEqual(fsa.nactualcall, 3) -+ self.assertEqual(fsa.tsub, 3) -+ self.assertEqual(fsa.children[fsb].ttot, fsb.ttot) -+ self.assertEqual(fsa.children[fsb].tsub, fsb.tsub) -+ self.assertEqual(fsb.children[fsc].ttot, fsc.ttot) -+ self.assertEqual(fsb.children[fsc].tsub, fsc.tsub) -+ self.assertEqual(fsc.tsub, 6) -+ self.assertEqual(fsc.children[fsd].ttot, fsd.ttot) -+ self.assertEqual(fsc.children[fsd].tsub, fsd.tsub) -+ self.assertEqual(fsd.children[fse].ttot, fse.ttot) -+ self.assertEqual(fsd.children[fse].tsub, fse.tsub) -+ self.assertEqual(fse.children[fsf].ttot, fsf.ttot) -+ self.assertEqual(fse.children[fsf].tsub, fsf.tsub) -+ self.assertEqual(fsf.children[fsg].ttot, fsg.ttot) -+ self.assertEqual(fsf.children[fsg].tsub, fsg.tsub) -+ self.assertEqual(fsg.ttot, 18) -+ self.assertEqual(fsg.tsub, 3) -+ self.assertEqual(fsg.children[fsh].ttot, fsh.ttot) -+ self.assertEqual(fsg.children[fsh].tsub, fsh.tsub) -+ self.assertEqual(fsh.ttot, 15) -+ self.assertEqual(fsh.tsub, 12) -+ self.assertEqual(fsh.tavg, 5) -+ self.assertEqual(fsh.children[fsi].ttot, fsi.ttot) -+ self.assertEqual(fsh.children[fsi].tsub, fsi.tsub) -+ #stats.debug_print() -+ -+ def test_merge_multithreaded_stats(self): -+ import _yappi -+ timings = {"a_1": 2, "b_1": 1} -+ _yappi._set_test_timings(timings) -+ -+ def a(): -+ pass -+ -+ def b(): -+ pass -+ -+ yappi.start() -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ t = threading.Thread(target=b) -+ t.start() -+ t.join() -+ yappi.get_func_stats().save("tests/ystats1.ys") -+ yappi.clear_stats() -+ _yappi._set_test_timings(timings) -+ self.assertEqual(len(yappi.get_func_stats()), 0) -+ self.assertEqual(len(yappi.get_thread_stats()), 1) -+ t = threading.Thread(target=a) -+ t.start() -+ 
t.join() -+ -+ self.assertEqual(_yappi._get_start_flags()["profile_builtins"], 0) -+ self.assertEqual(_yappi._get_start_flags()["profile_multicontext"], 1) -+ yappi.get_func_stats().save("tests/ystats2.ys") -+ -+ stats = yappi.YFuncStats([ -+ "tests/ystats1.ys", -+ "tests/ystats2.ys", -+ ]) -+ fsa = utils.find_stat_by_name(stats, "a") -+ fsb = utils.find_stat_by_name(stats, "b") -+ self.assertEqual(fsa.ncall, 2) -+ self.assertEqual(fsb.ncall, 1) -+ self.assertEqual(fsa.tsub, fsa.ttot, 4) -+ self.assertEqual(fsb.tsub, fsb.ttot, 1) -+ -+ def test_merge_load_different_clock_types(self): -+ yappi.start(builtins=True) -+ -+ def a(): -+ b() -+ -+ def b(): -+ c() -+ -+ def c(): -+ pass -+ -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ yappi.get_func_stats().sort("name", "asc").save("tests/ystats1.ys") -+ yappi.stop() -+ yappi.clear_stats() -+ yappi.start(builtins=False) -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ yappi.get_func_stats().save("tests/ystats2.ys") -+ yappi.stop() -+ self.assertRaises(_yappi.error, yappi.set_clock_type, "wall") -+ yappi.clear_stats() -+ yappi.set_clock_type("wall") -+ yappi.start() -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ yappi.get_func_stats().save("tests/ystats3.ys") -+ self.assertRaises( -+ yappi.YappiError, -+ yappi.YFuncStats().add("tests/ystats1.ys").add, "tests/ystats3.ys" -+ ) -+ stats = yappi.YFuncStats(["tests/ystats1.ys", -+ "tests/ystats2.ys"]).sort("name") -+ fsa = utils.find_stat_by_name(stats, "a") -+ fsb = utils.find_stat_by_name(stats, "b") -+ fsc = utils.find_stat_by_name(stats, "c") -+ self.assertEqual(fsa.ncall, 2) -+ self.assertEqual(fsa.ncall, fsb.ncall, fsc.ncall) -+ -+ def test_merge_aabab_aabbc(self): -+ _timings = { -+ "a_1": 15, -+ "a_2": 14, -+ "b_1": 12, -+ "a_3": 10, -+ "b_2": 9, -+ "c_1": 4 -+ } -+ _yappi._set_test_timings(_timings) -+ -+ def a(): -+ if self._ncall == 1: -+ self._ncall += 1 -+ a() -+ elif self._ncall == 5: -+ self._ncall += 1 -+ a() -+ else: 
-+ b() -+ -+ def b(): -+ if self._ncall == 2: -+ self._ncall += 1 -+ a() -+ elif self._ncall == 6: -+ self._ncall += 1 -+ b() -+ elif self._ncall == 7: -+ c() -+ else: -+ return -+ -+ def c(): -+ pass -+ -+ self._ncall = 1 -+ stats = utils.run_and_get_func_stats(a, ) -+ stats.save("tests/ystats1.ys") -+ yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ #stats.print_all() -+ -+ self._ncall = 5 -+ stats = utils.run_and_get_func_stats(a, ) -+ stats.save("tests/ystats2.ys") -+ -+ #stats.print_all() -+ -+ def a(): # same name but another function(code object) -+ pass -+ -+ yappi.start() -+ a() -+ stats = yappi.get_func_stats().add( -+ ["tests/ystats1.ys", "tests/ystats2.ys"] -+ ) -+ #stats.print_all() -+ self.assertEqual(len(stats), 4) -+ -+ fsa = None -+ for stat in stats: -+ if stat.name == "a" and stat.ttot == 45: -+ fsa = stat -+ break -+ self.assertTrue(fsa is not None) -+ -+ self.assertEqual(fsa.ncall, 7) -+ self.assertEqual(fsa.nactualcall, 3) -+ self.assertEqual(fsa.ttot, 45) -+ self.assertEqual(fsa.tsub, 10) -+ fsb = utils.find_stat_by_name(stats, "b") -+ fsc = utils.find_stat_by_name(stats, "c") -+ self.assertEqual(fsb.ncall, 6) -+ self.assertEqual(fsb.nactualcall, 3) -+ self.assertEqual(fsb.ttot, 36) -+ self.assertEqual(fsb.tsub, 27) -+ self.assertEqual(fsb.tavg, 6) -+ self.assertEqual(fsc.ttot, 8) -+ self.assertEqual(fsc.tsub, 8) -+ self.assertEqual(fsc.tavg, 4) -+ self.assertEqual(fsc.nactualcall, fsc.ncall, 2) -+ -+ -+class MultithreadedScenarios(utils.YappiUnitTestCase): -+ -+ def test_issue_32(self): -+ ''' -+ Start yappi from different thread and we get Internal Error(15) as -+ the current_ctx_id() called while enumerating the threads in start() -+ and as it does not swap to the enumerated ThreadState* the THreadState_GetDict() -+ returns wrong object and thus sets an invalid id for the _ctx structure. -+ -+ When this issue happens multiple Threads have same tid as the internal ts_ptr -+ will be same for different contexts. 
So, let's see if that happens -+ ''' -+ -+ def foo(): -+ time.sleep(0.2) -+ -+ def bar(): -+ time.sleep(0.1) -+ -+ def thread_func(): -+ yappi.set_clock_type("wall") -+ yappi.start() -+ -+ bar() -+ -+ t = threading.Thread(target=thread_func) -+ t.start() -+ t.join() -+ -+ foo() -+ -+ yappi.stop() -+ -+ thread_ids = set() -+ for tstat in yappi.get_thread_stats(): -+ self.assertTrue(tstat.tid not in thread_ids) -+ thread_ids.add(tstat.tid) -+ -+ def test_subsequent_profile(self): -+ WORKER_COUNT = 5 -+ -+ def a(): -+ pass -+ -+ def b(): -+ pass -+ -+ def c(): -+ pass -+ -+ _timings = { -+ "a_1": 3, -+ "b_1": 2, -+ "c_1": 1, -+ } -+ -+ yappi.start() -+ -+ def g(): -+ pass -+ -+ g() -+ yappi.stop() -+ yappi.clear_stats() -+ _yappi._set_test_timings(_timings) -+ yappi.start() -+ -+ _dummy = [] -+ for i in range(WORKER_COUNT): -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ for i in range(WORKER_COUNT): -+ t = threading.Thread(target=b) -+ t.start() -+ _dummy.append(t) -+ t.join() -+ for i in range(WORKER_COUNT): -+ t = threading.Thread(target=a) -+ t.start() -+ t.join() -+ for i in range(WORKER_COUNT): -+ t = threading.Thread(target=c) -+ t.start() -+ t.join() -+ yappi.stop() -+ yappi.start() -+ -+ def f(): -+ pass -+ -+ f() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ self.assertEqual(fsa.ncall, 10) -+ self.assertEqual(fsb.ncall, 5) -+ self.assertEqual(fsc.ncall, 5) -+ self.assertEqual(fsa.ttot, fsa.tsub, 30) -+ self.assertEqual(fsb.ttot, fsb.tsub, 10) -+ self.assertEqual(fsc.ttot, fsc.tsub, 5) -+ -+ # MACOSx optimizes by only creating one worker thread -+ self.assertTrue(len(yappi.get_thread_stats()) >= 2) -+ -+ def test_basic(self): -+ yappi.set_clock_type('wall') -+ -+ def dummy(): -+ pass -+ -+ def a(): -+ time.sleep(0.2) -+ -+ class Worker1(threading.Thread): -+ -+ def a(self): -+ time.sleep(0.3) -+ -+ def run(self): -+ 
self.a() -+ -+ yappi.start(builtins=False, profile_threads=True) -+ -+ c = Worker1() -+ c.start() -+ c.join() -+ a() -+ stats = yappi.get_func_stats() -+ fsa1 = utils.find_stat_by_name(stats, 'Worker1.a') -+ fsa2 = utils.find_stat_by_name(stats, 'a') -+ self.assertTrue(fsa1 is not None) -+ self.assertTrue(fsa2 is not None) -+ self.assertTrue(fsa1.ttot > 0.2) -+ self.assertTrue(fsa2.ttot > 0.1) -+ tstats = yappi.get_thread_stats() -+ self.assertEqual(len(tstats), 2) -+ tsa = utils.find_stat_by_name(tstats, 'Worker1') -+ tsm = utils.find_stat_by_name(tstats, '_MainThread') -+ dummy() # call dummy to force ctx name to be retrieved again. -+ self.assertTrue(tsa is not None) -+ # TODO: I put dummy() to fix below, remove the comments after a while. -+ self.assertTrue( # FIX: I see this fails sometimes? -+ tsm is not None, -+ 'Could not find "_MainThread". Found: %s' % (', '.join(utils.get_stat_names(tstats)))) -+ -+ def test_ctx_stats(self): -+ from threading import Thread -+ DUMMY_WORKER_COUNT = 5 -+ yappi.start() -+ -+ class DummyThread(Thread): -+ pass -+ -+ def dummy(): -+ pass -+ -+ def dummy_worker(): -+ pass -+ -+ for i in range(DUMMY_WORKER_COUNT): -+ t = DummyThread(target=dummy_worker) -+ t.start() -+ t.join() -+ yappi.stop() -+ stats = yappi.get_thread_stats() -+ tsa = utils.find_stat_by_name(stats, "DummyThread") -+ self.assertTrue(tsa is not None) -+ yappi.clear_stats() -+ time.sleep(1.0) -+ _timings = { -+ "a_1": 6, -+ "b_1": 5, -+ "c_1": 3, -+ "d_1": 1, -+ "a_2": 4, -+ "b_2": 3, -+ "c_2": 2, -+ "d_2": 1 -+ } -+ _yappi._set_test_timings(_timings) -+ -+ class Thread1(Thread): -+ pass -+ -+ class Thread2(Thread): -+ pass -+ -+ def a(): -+ b() -+ -+ def b(): -+ c() -+ -+ def c(): -+ d() -+ -+ def d(): -+ time.sleep(0.6) -+ -+ yappi.set_clock_type("wall") -+ yappi.start() -+ t1 = Thread1(target=a) -+ t1.start() -+ t2 = Thread2(target=a) -+ t2.start() -+ t1.join() -+ t2.join() -+ stats = yappi.get_thread_stats() -+ -+ # the fist clear_stats clears the context 
table? -+ tsa = utils.find_stat_by_name(stats, "DummyThread") -+ self.assertTrue(tsa is None) -+ -+ tst1 = utils.find_stat_by_name(stats, "Thread1") -+ tst2 = utils.find_stat_by_name(stats, "Thread2") -+ tsmain = utils.find_stat_by_name(stats, "_MainThread") -+ dummy() # call dummy to force ctx name to be retrieved again. -+ self.assertTrue(len(stats) == 3) -+ self.assertTrue(tst1 is not None) -+ self.assertTrue(tst2 is not None) -+ # TODO: I put dummy() to fix below, remove the comments after a while. -+ self.assertTrue( # FIX: I see this fails sometimes -+ tsmain is not None, -+ 'Could not find "_MainThread". Found: %s' % (', '.join(utils.get_stat_names(stats)))) -+ self.assertTrue(1.0 > tst2.ttot >= 0.5) -+ self.assertTrue(1.0 > tst1.ttot >= 0.5) -+ -+ # test sorting of the ctx stats -+ stats = stats.sort("totaltime", "desc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.ttot >= stat.ttot) -+ prev_stat = stat -+ stats = stats.sort("totaltime", "asc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.ttot <= stat.ttot) -+ prev_stat = stat -+ stats = stats.sort("schedcount", "desc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.sched_count >= stat.sched_count) -+ prev_stat = stat -+ stats = stats.sort("name", "desc") -+ prev_stat = None -+ for stat in stats: -+ if prev_stat: -+ self.assertTrue(prev_stat.name.lower() >= stat.name.lower()) -+ prev_stat = stat -+ self.assertRaises( -+ yappi.YappiError, stats.sort, "invalid_thread_sorttype_arg" -+ ) -+ self.assertRaises( -+ yappi.YappiError, stats.sort, "invalid_thread_sortorder_arg" -+ ) -+ -+ def test_ctx_stats_cpu(self): -+ -+ def get_thread_name(): -+ try: -+ return threading.current_thread().name -+ except AttributeError: -+ return "Anonymous" -+ -+ def burn_cpu(sec): -+ t0 = yappi.get_clock_time() -+ elapsed = 0 -+ while (elapsed < sec): -+ for _ in range(1000): -+ pass -+ elapsed = 
yappi.get_clock_time() - t0 -+ -+ def test(): -+ -+ ts = [] -+ for i in (0.01, 0.05, 0.1): -+ t = threading.Thread(target=burn_cpu, args=(i, )) -+ t.name = "burn_cpu-%s" % str(i) -+ t.start() -+ ts.append(t) -+ for t in ts: -+ t.join() -+ -+ yappi.set_clock_type("cpu") -+ yappi.set_context_name_callback(get_thread_name) -+ -+ yappi.start() -+ -+ test() -+ -+ yappi.stop() -+ -+ tstats = yappi.get_thread_stats() -+ r1 = ''' -+ burn_cpu-0.1 3 123145356058624 0.100105 8 -+ burn_cpu-0.05 2 123145361313792 0.050149 8 -+ burn_cpu-0.01 1 123145356058624 0.010127 2 -+ MainThread 0 4321620864 0.001632 6 -+ ''' -+ self.assert_ctx_stats_almost_equal(r1, tstats) -+ -+ def test_producer_consumer_with_queues(self): -+ # we currently just stress yappi, no functionality test is done here. -+ yappi.start() -+ if utils.is_py3x(): -+ from queue import Queue -+ else: -+ from Queue import Queue -+ from threading import Thread -+ WORKER_THREAD_COUNT = 50 -+ WORK_ITEM_COUNT = 2000 -+ -+ def worker(): -+ while True: -+ item = q.get() -+ # do the work with item -+ q.task_done() -+ -+ q = Queue() -+ for i in range(WORKER_THREAD_COUNT): -+ t = Thread(target=worker) -+ t.daemon = True -+ t.start() -+ -+ for item in range(WORK_ITEM_COUNT): -+ q.put(item) -+ q.join() # block until all tasks are done -+ #yappi.get_func_stats().sort("callcount").print_all() -+ yappi.stop() -+ -+ def test_temporary_lock_waiting(self): -+ yappi.start() -+ _lock = threading.Lock() -+ -+ def worker(): -+ _lock.acquire() -+ try: -+ time.sleep(1.0) -+ finally: -+ _lock.release() -+ -+ t1 = threading.Thread(target=worker) -+ t2 = threading.Thread(target=worker) -+ t1.start() -+ t2.start() -+ t1.join() -+ t2.join() -+ #yappi.get_func_stats().sort("callcount").print_all() -+ yappi.stop() -+ -+ @unittest.skipIf(os.name != "posix", "requires Posix compliant OS") -+ def test_signals_with_blocking_calls(self): -+ import signal, os, time -+ -+ # just to verify if signal is handled correctly and stats/yappi are not corrupted. 
-+ def handler(signum, frame): -+ raise Exception("Signal handler executed!") -+ -+ yappi.start() -+ signal.signal(signal.SIGALRM, handler) -+ signal.alarm(1) -+ self.assertRaises(Exception, time.sleep, 2) -+ stats = yappi.get_func_stats() -+ fsh = utils.find_stat_by_name(stats, "handler") -+ self.assertTrue(fsh is not None) -+ -+ @unittest.skipIf(not sys.version_info >= (3, 2), "requires Python 3.2") -+ def test_concurrent_futures(self): -+ yappi.start() -+ from concurrent.futures import ThreadPoolExecutor -+ with ThreadPoolExecutor(max_workers=5) as executor: -+ f = executor.submit(pow, 5, 2) -+ self.assertEqual(f.result(), 25) -+ time.sleep(1.0) -+ yappi.stop() -+ -+ @unittest.skipIf(not sys.version_info >= (3, 2), "requires Python 3.2") -+ def test_barrier(self): -+ yappi.start() -+ b = threading.Barrier(2, timeout=1) -+ -+ def worker(): -+ try: -+ b.wait() -+ except threading.BrokenBarrierError: -+ pass -+ except Exception: -+ raise Exception("BrokenBarrierError not raised") -+ -+ t1 = threading.Thread(target=worker) -+ t1.start() -+ #b.wait() -+ t1.join() -+ yappi.stop() -+ -+ -+class NonRecursiveFunctions(utils.YappiUnitTestCase): -+ -+ def test_abcd(self): -+ _timings = {"a_1": 6, "b_1": 5, "c_1": 3, "d_1": 1} -+ _yappi._set_test_timings(_timings) -+ -+ def a(): -+ b() -+ -+ def b(): -+ c() -+ -+ def c(): -+ d() -+ -+ def d(): -+ pass -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ fsd = utils.find_stat_by_name(stats, 'd') -+ cfsab = fsa.children[fsb] -+ cfsbc = fsb.children[fsc] -+ cfscd = fsc.children[fsd] -+ -+ self.assertEqual(fsa.ttot, 6) -+ self.assertEqual(fsa.tsub, 1) -+ self.assertEqual(fsb.ttot, 5) -+ self.assertEqual(fsb.tsub, 2) -+ self.assertEqual(fsc.ttot, 3) -+ self.assertEqual(fsc.tsub, 2) -+ self.assertEqual(fsd.ttot, 1) -+ self.assertEqual(fsd.tsub, 1) -+ self.assertEqual(cfsab.ttot, 5) -+ 
self.assertEqual(cfsab.tsub, 2) -+ self.assertEqual(cfsbc.ttot, 3) -+ self.assertEqual(cfsbc.tsub, 2) -+ self.assertEqual(cfscd.ttot, 1) -+ self.assertEqual(cfscd.tsub, 1) -+ -+ def test_stop_in_middle(self): -+ _timings = {"a_1": 6, "b_1": 4} -+ _yappi._set_test_timings(_timings) -+ -+ def a(): -+ b() -+ yappi.stop() -+ -+ def b(): -+ time.sleep(0.2) -+ -+ yappi.start() -+ a() -+ stats = yappi.get_func_stats() -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ -+ self.assertEqual(fsa.ncall, 1) -+ self.assertEqual(fsa.nactualcall, 0) -+ self.assertEqual(fsa.ttot, 0) # no call_leave called -+ self.assertEqual(fsa.tsub, 0) # no call_leave called -+ self.assertEqual(fsb.ttot, 4) -+ -+ -+class RecursiveFunctions(utils.YappiUnitTestCase): -+ -+ def test_fibonacci(self): -+ -+ def fib(n): -+ if n > 1: -+ return fib(n - 1) + fib(n - 2) -+ else: -+ return n -+ -+ stats = utils.run_and_get_func_stats(fib, 22) -+ fs = utils.find_stat_by_name(stats, 'fib') -+ self.assertEqual(fs.ncall, 57313) -+ self.assertEqual(fs.ttot, fs.tsub) -+ -+ def test_abcadc(self): -+ _timings = { -+ "a_1": 20, -+ "b_1": 19, -+ "c_1": 17, -+ "a_2": 13, -+ "d_1": 12, -+ "c_2": 10, -+ "a_3": 5 -+ } -+ _yappi._set_test_timings(_timings) -+ -+ def a(n): -+ if n == 3: -+ return -+ if n == 1 + 1: -+ d(n) -+ else: -+ b(n) -+ -+ def b(n): -+ c(n) -+ -+ def c(n): -+ a(n + 1) -+ -+ def d(n): -+ c(n) -+ -+ stats = utils.run_and_get_func_stats(a, 1) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ fsd = utils.find_stat_by_name(stats, 'd') -+ self.assertEqual(fsa.ncall, 3) -+ self.assertEqual(fsa.nactualcall, 1) -+ self.assertEqual(fsa.ttot, 20) -+ self.assertEqual(fsa.tsub, 7) -+ self.assertEqual(fsb.ttot, 19) -+ self.assertEqual(fsb.tsub, 2) -+ self.assertEqual(fsc.ttot, 17) -+ self.assertEqual(fsc.tsub, 9) -+ self.assertEqual(fsd.ttot, 12) -+ self.assertEqual(fsd.tsub, 2) -+ cfsca 
= fsc.children[fsa] -+ self.assertEqual(cfsca.nactualcall, 0) -+ self.assertEqual(cfsca.ncall, 2) -+ self.assertEqual(cfsca.ttot, 13) -+ self.assertEqual(cfsca.tsub, 6) -+ -+ def test_aaaa(self): -+ _timings = {"d_1": 9, "d_2": 7, "d_3": 3, "d_4": 2} -+ _yappi._set_test_timings(_timings) -+ -+ def d(n): -+ if n == 3: -+ return -+ d(n + 1) -+ -+ stats = utils.run_and_get_func_stats(d, 0) -+ fsd = utils.find_stat_by_name(stats, 'd') -+ self.assertEqual(fsd.ncall, 4) -+ self.assertEqual(fsd.nactualcall, 1) -+ self.assertEqual(fsd.ttot, 9) -+ self.assertEqual(fsd.tsub, 9) -+ cfsdd = fsd.children[fsd] -+ self.assertEqual(cfsdd.ttot, 7) -+ self.assertEqual(cfsdd.tsub, 7) -+ self.assertEqual(cfsdd.ncall, 3) -+ self.assertEqual(cfsdd.nactualcall, 0) -+ -+ def test_abcabc(self): -+ _timings = { -+ "a_1": 20, -+ "b_1": 19, -+ "c_1": 17, -+ "a_2": 13, -+ "b_2": 11, -+ "c_2": 9, -+ "a_3": 6 -+ } -+ _yappi._set_test_timings(_timings) -+ -+ def a(n): -+ if n == 3: -+ return -+ else: -+ b(n) -+ -+ def b(n): -+ c(n) -+ -+ def c(n): -+ a(n + 1) -+ -+ stats = utils.run_and_get_func_stats(a, 1) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ self.assertEqual(fsa.ncall, 3) -+ self.assertEqual(fsa.nactualcall, 1) -+ self.assertEqual(fsa.ttot, 20) -+ self.assertEqual(fsa.tsub, 9) -+ self.assertEqual(fsb.ttot, 19) -+ self.assertEqual(fsb.tsub, 4) -+ self.assertEqual(fsc.ttot, 17) -+ self.assertEqual(fsc.tsub, 7) -+ cfsab = fsa.children[fsb] -+ cfsbc = fsb.children[fsc] -+ cfsca = fsc.children[fsa] -+ self.assertEqual(cfsab.ttot, 19) -+ self.assertEqual(cfsab.tsub, 4) -+ self.assertEqual(cfsbc.ttot, 17) -+ self.assertEqual(cfsbc.tsub, 7) -+ self.assertEqual(cfsca.ttot, 13) -+ self.assertEqual(cfsca.tsub, 8) -+ -+ def test_abcbca(self): -+ _timings = {"a_1": 10, "b_1": 9, "c_1": 7, "b_2": 4, "c_2": 2, "a_2": 1} -+ _yappi._set_test_timings(_timings) -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall 
== 1: -+ b() -+ else: -+ return -+ -+ def b(): -+ c() -+ -+ def c(): -+ if self._ncall == 1: -+ self._ncall += 1 -+ b() -+ else: -+ a() -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ cfsab = fsa.children[fsb] -+ cfsbc = fsb.children[fsc] -+ cfsca = fsc.children[fsa] -+ self.assertEqual(fsa.ttot, 10) -+ self.assertEqual(fsa.tsub, 2) -+ self.assertEqual(fsb.ttot, 9) -+ self.assertEqual(fsb.tsub, 4) -+ self.assertEqual(fsc.ttot, 7) -+ self.assertEqual(fsc.tsub, 4) -+ self.assertEqual(cfsab.ttot, 9) -+ self.assertEqual(cfsab.tsub, 2) -+ self.assertEqual(cfsbc.ttot, 7) -+ self.assertEqual(cfsbc.tsub, 4) -+ self.assertEqual(cfsca.ttot, 1) -+ self.assertEqual(cfsca.tsub, 1) -+ self.assertEqual(cfsca.ncall, 1) -+ self.assertEqual(cfsca.nactualcall, 0) -+ -+ def test_aabccb(self): -+ _timings = { -+ "a_1": 13, -+ "a_2": 11, -+ "b_1": 9, -+ "c_1": 5, -+ "c_2": 3, -+ "b_2": 1 -+ } -+ _yappi._set_test_timings(_timings) -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall == 1: -+ self._ncall += 1 -+ a() -+ else: -+ b() -+ -+ def b(): -+ if self._ncall == 3: -+ return -+ else: -+ c() -+ -+ def c(): -+ if self._ncall == 2: -+ self._ncall += 1 -+ c() -+ else: -+ b() -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ fsc = utils.find_stat_by_name(stats, 'c') -+ cfsaa = fsa.children[fsa.index] -+ cfsab = fsa.children[fsb] -+ cfsbc = fsb.children[fsc.full_name] -+ cfscc = fsc.children[fsc] -+ cfscb = fsc.children[fsb] -+ self.assertEqual(fsb.ttot, 9) -+ self.assertEqual(fsb.tsub, 5) -+ self.assertEqual(cfsbc.ttot, 5) -+ self.assertEqual(cfsbc.tsub, 2) -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 4) -+ self.assertEqual(cfsab.ttot, 9) -+ self.assertEqual(cfsab.tsub, 4) -+ self.assertEqual(cfsaa.ttot, 11) -+ self.assertEqual(cfsaa.tsub, 2) -+ 
self.assertEqual(fsc.ttot, 5) -+ self.assertEqual(fsc.tsub, 4) -+ -+ def test_abaa(self): -+ _timings = {"a_1": 13, "b_1": 10, "a_2": 9, "a_3": 5} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall == 1: -+ b() -+ elif self._ncall == 2: -+ self._ncall += 1 -+ a() -+ else: -+ return -+ -+ def b(): -+ self._ncall += 1 -+ a() -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ cfsaa = fsa.children[fsa] -+ cfsba = fsb.children[fsa] -+ self.assertEqual(fsb.ttot, 10) -+ self.assertEqual(fsb.tsub, 1) -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 12) -+ self.assertEqual(cfsaa.ttot, 5) -+ self.assertEqual(cfsaa.tsub, 5) -+ self.assertEqual(cfsba.ttot, 9) -+ self.assertEqual(cfsba.tsub, 4) -+ -+ def test_aabb(self): -+ _timings = {"a_1": 13, "a_2": 10, "b_1": 9, "b_2": 5} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall == 1: -+ self._ncall += 1 -+ a() -+ elif self._ncall == 2: -+ b() -+ else: -+ return -+ -+ def b(): -+ if self._ncall == 2: -+ self._ncall += 1 -+ b() -+ else: -+ return -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ cfsaa = fsa.children[fsa] -+ cfsab = fsa.children[fsb] -+ cfsbb = fsb.children[fsb] -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 4) -+ self.assertEqual(fsb.ttot, 9) -+ self.assertEqual(fsb.tsub, 9) -+ self.assertEqual(cfsaa.ttot, 10) -+ self.assertEqual(cfsaa.tsub, 1) -+ self.assertEqual(cfsab.ttot, 9) -+ self.assertEqual(cfsab.tsub, 4) -+ self.assertEqual(cfsbb.ttot, 5) -+ self.assertEqual(cfsbb.tsub, 5) -+ -+ def test_abbb(self): -+ _timings = {"a_1": 13, "b_1": 10, "b_2": 6, "b_3": 1} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall == 1: -+ b() -+ -+ def b(): -+ if self._ncall == 3: -+ return -+ 
self._ncall += 1 -+ b() -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ cfsab = fsa.children[fsb] -+ cfsbb = fsb.children[fsb] -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 3) -+ self.assertEqual(fsb.ttot, 10) -+ self.assertEqual(fsb.tsub, 10) -+ self.assertEqual(fsb.ncall, 3) -+ self.assertEqual(fsb.nactualcall, 1) -+ self.assertEqual(cfsab.ttot, 10) -+ self.assertEqual(cfsab.tsub, 4) -+ self.assertEqual(cfsbb.ttot, 6) -+ self.assertEqual(cfsbb.tsub, 6) -+ self.assertEqual(cfsbb.nactualcall, 0) -+ self.assertEqual(cfsbb.ncall, 2) -+ -+ def test_aaab(self): -+ _timings = {"a_1": 13, "a_2": 10, "a_3": 6, "b_1": 1} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ if self._ncall == 3: -+ b() -+ return -+ self._ncall += 1 -+ a() -+ -+ def b(): -+ return -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ cfsaa = fsa.children[fsa] -+ cfsab = fsa.children[fsb] -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 12) -+ self.assertEqual(fsb.ttot, 1) -+ self.assertEqual(fsb.tsub, 1) -+ self.assertEqual(cfsaa.ttot, 10) -+ self.assertEqual(cfsaa.tsub, 9) -+ self.assertEqual(cfsab.ttot, 1) -+ self.assertEqual(cfsab.tsub, 1) -+ -+ def test_abab(self): -+ _timings = {"a_1": 13, "b_1": 10, "a_2": 6, "b_2": 1} -+ _yappi._set_test_timings(_timings) -+ -+ self._ncall = 1 -+ -+ def a(): -+ b() -+ -+ def b(): -+ if self._ncall == 2: -+ return -+ self._ncall += 1 -+ a() -+ -+ stats = utils.run_and_get_func_stats(a) -+ fsa = utils.find_stat_by_name(stats, 'a') -+ fsb = utils.find_stat_by_name(stats, 'b') -+ cfsab = fsa.children[fsb] -+ cfsba = fsb.children[fsa] -+ self.assertEqual(fsa.ttot, 13) -+ self.assertEqual(fsa.tsub, 8) -+ self.assertEqual(fsb.ttot, 10) -+ self.assertEqual(fsb.tsub, 5) -+ self.assertEqual(cfsab.ttot, 10) -+ 
self.assertEqual(cfsab.tsub, 5) -+ self.assertEqual(cfsab.ncall, 2) -+ self.assertEqual(cfsab.nactualcall, 1) -+ self.assertEqual(cfsba.ttot, 6) -+ self.assertEqual(cfsba.tsub, 5) -+ -+ -+if __name__ == '__main__': -+ # import sys;sys.argv = ['', 'BasicUsage.test_run_as_script'] -+ # import sys;sys.argv = ['', 'MultithreadedScenarios.test_subsequent_profile'] -+ unittest.main() ---- a/tests/test_hooks.py -+++ b/tests/test_hooks.py -@@ -5,7 +5,7 @@ import unittest - import time - - import yappi --import utils -+import tests.utils as utils - - - def a(): ---- a/tests/test_tags.py -+++ b/tests/test_tags.py -@@ -2,7 +2,7 @@ import unittest - import yappi - import threading - import time --from utils import YappiUnitTestCase, find_stat_by_name, burn_cpu, burn_io -+from .utils import YappiUnitTestCase, find_stat_by_name, burn_cpu, burn_io - - - class MultiThreadTests(YappiUnitTestCase): diff --git a/meta-python/recipes-devtools/python/python3-yappi/0002-add-3.11-to-the-setup.patch b/meta-python/recipes-devtools/python/python3-yappi/0002-add-3.11-to-the-setup.patch deleted file mode 100644 index d40bd2b7c..000000000 --- a/meta-python/recipes-devtools/python/python3-yappi/0002-add-3.11-to-the-setup.patch +++ /dev/null @@ -1,26 +0,0 @@ -From 38afdacf526410f970afc58e147c7377c6c7112c Mon Sep 17 00:00:00 2001 -From: =?UTF-8?q?S=C3=BCmer=20Cip?= -Date: Fri, 25 Nov 2022 15:58:03 +0300 -Subject: [PATCH 2/2] add 3.11 to the setup - ---- -Upstream-Status: Pending - - setup.py | 1 + - 1 file changed, 1 insertion(+) - -diff --git a/setup.py b/setup.py -index d006787..96e2a66 100644 ---- a/setup.py -+++ b/setup.py -@@ -56,6 +56,7 @@ CLASSIFIERS = [ - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', -+ 'Programming Language :: Python :: 3.11', - 'Programming Language :: Python :: Implementation :: CPython', - 'Operating System :: OS Independent', - 'Topic :: Software Development :: Libraries', --- -2.30.2 - 
diff --git a/meta-python/recipes-devtools/python/python3-yappi_1.4.0.bb b/meta-python/recipes-devtools/python/python3-yappi_1.6.0.bb similarity index 74% rename from meta-python/recipes-devtools/python/python3-yappi_1.4.0.bb rename to meta-python/recipes-devtools/python/python3-yappi_1.6.0.bb index 71e74e86f..435dc11bb 100644 --- a/meta-python/recipes-devtools/python/python3-yappi_1.4.0.bb +++ b/meta-python/recipes-devtools/python/python3-yappi_1.6.0.bb @@ -4,13 +4,9 @@ HOMEPAGE = "https://github.com/sumerc/yappi" LICENSE = "MIT" LIC_FILES_CHKSUM = "file://LICENSE;md5=71c208c9a4fd864385eb69ad4caa3bee" -SRC_URI[sha256sum] = "504b5d8fc7433736cb5e257991d2e7f2946019174f1faec7b2fe947881a17fc0" +SRC_URI[sha256sum] = "a9aaf72009d8c03067294151ee0470ac7a6dfa7b33baab40b198d6c1ef00430a" -SRC_URI += " \ file://run-ptest \ file://0001-Fix-imports-for-ptests.patch \ file://0002-add-3.11-to-the-setup.patch \ " +SRC_URI += "file://run-ptest" inherit pypi setuptools3 ptest
From patchwork Fri Dec 22 15:11:01 2023
X-Patchwork-Submitter: Alexander Kanavin
X-Patchwork-Id: 36863
From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 2/9] polkit: remove long obsolete 0.119 version
Date: Fri, 22 Dec 2023 16:11:01 +0100
Message-Id: <20231222151108.645675-2-alex@linutronix.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231222151108.645675-1-alex@linutronix.de>
References: <20231222151108.645675-1-alex@linutronix.de>
MIME-Version: 1.0
X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107756

It's also unbuildable as mozjs-91 has been removed as well.

Signed-off-by: Alexander Kanavin
---
 ...l-privilege-escalation-CVE-2021-4034.patch | 84 -
 ...0002-CVE-2021-4115-GHSL-2021-077-fix.patch | 88 -
 .../0002-jsauthority-port-to-mozjs-91.patch | 38 -
 ...ded-support-for-duktape-as-JS-engine.patch | 3459 -----------------
 ...re-to-call-JS_Init-and-JS_ShutDown-e.patch | 63 -
 .../0004-Make-netgroup-support-optional.patch | 253 --
 ...ke-netgroup-support-optional-duktape.patch | 34 -
 .../polkit/polkit/polkit-1_pam.patch | 35 -
 .../recipes-extended/polkit/polkit_0.119.bb | 79 -
 9 files changed, 4133 deletions(-)
 delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch
 delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0002-CVE-2021-4115-GHSL-2021-077-fix.patch
 delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0002-jsauthority-port-to-mozjs-91.patch
 delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch
 delete mode 100644
meta-oe/recipes-extended/polkit/polkit/0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch delete mode 100644 meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch delete mode 100644 meta-oe/recipes-extended/polkit/polkit/polkit-1_pam.patch delete mode 100644 meta-oe/recipes-extended/polkit/polkit_0.119.bb diff --git a/meta-oe/recipes-extended/polkit/polkit/0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch b/meta-oe/recipes-extended/polkit/polkit/0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch deleted file mode 100644 index c725c001d..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch +++ /dev/null @@ -1,84 +0,0 @@ -From 85c2dd9275cdfb369f613089f22733c0f1ba2aec Mon Sep 17 00:00:00 2001 -From: Jan Rybar -Date: Tue, 25 Jan 2022 17:21:46 +0000 -Subject: [PATCH 1/3] pkexec: local privilege escalation (CVE-2021-4034) - -Signed-off-by: Mikko Rapeli - ---- - src/programs/pkcheck.c | 5 +++++ - src/programs/pkexec.c | 23 ++++++++++++++++++++--- - 2 files changed, 25 insertions(+), 3 deletions(-) - -CVE: CVE-2021-4034 -Upstream-Status: Backport [a2bf5c9c83b6ae46cbd5c779d3055bff81ded683] - -diff --git a/src/programs/pkcheck.c b/src/programs/pkcheck.c -index f1bb4e1..768525c 100644 ---- a/src/programs/pkcheck.c -+++ b/src/programs/pkcheck.c -@@ -363,6 +363,11 @@ main (int argc, char *argv[]) - local_agent_handle = NULL; - ret = 126; - -+ if (argc < 1) -+ { -+ exit(126); -+ } -+ - /* Disable remote file access from GIO. 
*/ - setenv ("GIO_USE_VFS", "local", 1); - -diff --git a/src/programs/pkexec.c b/src/programs/pkexec.c -index 7698c5c..84e5ef6 100644 ---- a/src/programs/pkexec.c -+++ b/src/programs/pkexec.c -@@ -488,6 +488,15 @@ main (int argc, char *argv[]) - pid_t pid_of_caller; - gpointer local_agent_handle; - -+ -+ /* -+ * If 'pkexec' is called THIS wrong, someone's probably evil-doing. Don't be nice, just bail out. -+ */ -+ if (argc<1) -+ { -+ exit(127); -+ } -+ - ret = 127; - authority = NULL; - subject = NULL; -@@ -614,10 +623,10 @@ main (int argc, char *argv[]) - - path = g_strdup (pwstruct.pw_shell); - if (!path) -- { -+ { - g_printerr ("No shell configured or error retrieving pw_shell\n"); - goto out; -- } -+ } - /* If you change this, be sure to change the if (!command_line) - case below too */ - command_line = g_strdup (path); -@@ -636,7 +645,15 @@ main (int argc, char *argv[]) - goto out; - } - g_free (path); -- argv[n] = path = s; -+ path = s; -+ -+ /* argc<2 and pkexec runs just shell, argv is guaranteed to be null-terminated. 
-+ * /-less shell shouldn't happen, but let's be defensive and don't write to null-termination -+ */ -+ if (argv[n] != NULL) -+ { -+ argv[n] = path; -+ } - } - if (access (path, F_OK) != 0) - { --- -2.20.1 - diff --git a/meta-oe/recipes-extended/polkit/polkit/0002-CVE-2021-4115-GHSL-2021-077-fix.patch b/meta-oe/recipes-extended/polkit/polkit/0002-CVE-2021-4115-GHSL-2021-077-fix.patch deleted file mode 100644 index fcad872dc..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0002-CVE-2021-4115-GHSL-2021-077-fix.patch +++ /dev/null @@ -1,88 +0,0 @@ -From c86aea01a06ad4d6c428137e9cfe2f74b1ae7f01 Mon Sep 17 00:00:00 2001 -From: Jan Rybar -Date: Mon, 21 Feb 2022 08:29:05 +0000 -Subject: [PATCH 2/3] CVE-2021-4115 (GHSL-2021-077) fix - -Signed-off-by: Mikko Rapeli - ---- - src/polkit/polkitsystembusname.c | 38 ++++++++++++++++++++++++++++---- - 1 file changed, 34 insertions(+), 4 deletions(-) - -CVE: CVE-2021-4115 -Upstream-Status: Backport [41cb093f554da8772362654a128a84dd8a5542a7] - -diff --git a/src/polkit/polkitsystembusname.c b/src/polkit/polkitsystembusname.c -index 8ed1363..2fbf5f1 100644 ---- a/src/polkit/polkitsystembusname.c -+++ b/src/polkit/polkitsystembusname.c -@@ -62,6 +62,10 @@ enum - PROP_NAME, - }; - -+ -+guint8 dbus_call_respond_fails; // has to be global because of callback -+ -+ - static void subject_iface_init (PolkitSubjectIface *subject_iface); - - G_DEFINE_TYPE_WITH_CODE (PolkitSystemBusName, polkit_system_bus_name, G_TYPE_OBJECT, -@@ -364,6 +368,7 @@ on_retrieved_unix_uid_pid (GObject *src, - if (!v) - { - data->caught_error = TRUE; -+ dbus_call_respond_fails += 1; - } - else - { -@@ -405,6 +410,8 @@ polkit_system_bus_name_get_creds_sync (PolkitSystemBusName *system_bus - tmp_context = g_main_context_new (); - g_main_context_push_thread_default (tmp_context); - -+ dbus_call_respond_fails = 0; -+ - /* Do two async calls as it's basically as fast as one sync call. 
- */ - g_dbus_connection_call (connection, -@@ -432,11 +439,34 @@ polkit_system_bus_name_get_creds_sync (PolkitSystemBusName *system_bus - on_retrieved_unix_uid_pid, - &data); - -- while (!((data.retrieved_uid && data.retrieved_pid) || data.caught_error)) -- g_main_context_iteration (tmp_context, TRUE); -+ while (TRUE) -+ { -+ /* If one dbus call returns error, we must wait until the other call -+ * calls _call_finish(), otherwise fd leak is possible. -+ * Resolves: GHSL-2021-077 -+ */ - -- if (data.caught_error) -- goto out; -+ if ( (dbus_call_respond_fails > 1) ) -+ { -+ // we got two faults, we can leave -+ goto out; -+ } -+ -+ if ((data.caught_error && (data.retrieved_pid || data.retrieved_uid))) -+ { -+ // we got one fault and the other call finally finished, we can leave -+ goto out; -+ } -+ -+ if ( !(data.retrieved_uid && data.retrieved_pid) ) -+ { -+ g_main_context_iteration (tmp_context, TRUE); -+ } -+ else -+ { -+ break; -+ } -+ } - - if (out_uid) - *out_uid = data.uid; --- -2.20.1 - diff --git a/meta-oe/recipes-extended/polkit/polkit/0002-jsauthority-port-to-mozjs-91.patch b/meta-oe/recipes-extended/polkit/polkit/0002-jsauthority-port-to-mozjs-91.patch deleted file mode 100644 index 5b3660da2..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0002-jsauthority-port-to-mozjs-91.patch +++ /dev/null @@ -1,38 +0,0 @@ -From 4ce27b66bb07b72cb96d3d43a75108a5a6e7e156 Mon Sep 17 00:00:00 2001 -From: Xi Ruoyao -Date: Tue, 10 Aug 2021 19:09:42 +0800 -Subject: [PATCH] jsauthority: port to mozjs-91 - -Upstream-Status: Submitted [https://gitlab.freedesktop.org/polkit/polkit/-/merge_requests/92] -Signed-off-by: Alexander Kanavin ---- - configure.ac | 2 +- - meson.build | 2 +- - 2 files changed, 2 insertions(+), 2 deletions(-) - -diff --git a/configure.ac b/configure.ac -index d807086..5a7fc11 100644 ---- a/configure.ac -+++ b/configure.ac -@@ -80,7 +80,7 @@ PKG_CHECK_MODULES(GLIB, [gmodule-2.0 gio-unix-2.0 >= 2.30.0]) - AC_SUBST(GLIB_CFLAGS) - AC_SUBST(GLIB_LIBS) - 
--PKG_CHECK_MODULES(LIBJS, [mozjs-78]) -+PKG_CHECK_MODULES(LIBJS, [mozjs-91]) - - AC_SUBST(LIBJS_CFLAGS) - AC_SUBST(LIBJS_CXXFLAGS) -diff --git a/meson.build b/meson.build -index b3702be..733bbff 100644 ---- a/meson.build -+++ b/meson.build -@@ -126,7 +126,7 @@ expat_dep = dependency('expat') - assert(cc.has_header('expat.h', dependencies: expat_dep), 'Can\'t find expat.h. Please install expat.') - assert(cc.has_function('XML_ParserCreate', dependencies: expat_dep), 'Can\'t find expat library. Please install expat.') - --mozjs_dep = dependency('mozjs-78') -+mozjs_dep = dependency('mozjs-91') - - dbus_dep = dependency('dbus-1') - dbus_confdir = dbus_dep.get_pkgconfig_variable('datadir', define_variable: ['datadir', pk_prefix / pk_datadir]) #changed from sysconfdir with respect to commit#8eada3836465838 diff --git a/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch b/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch deleted file mode 100644 index b8562f8ce..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0003-Added-support-for-duktape-as-JS-engine.patch +++ /dev/null @@ -1,3459 +0,0 @@ -From 4af72493cb380ab5ce0dd7c5bcd25a8b5457d770 Mon Sep 17 00:00:00 2001 -From: Gustavo Lima Chaves -Date: Tue, 25 Jan 2022 09:43:21 +0000 -Subject: [PATCH] Added support for duktape as JS engine - -Original author: Wu Xiaotian (@yetist) -Resurrection author, runaway-killer author: Gustavo Lima Chaves (@limachaves) - -Signed-off-by: Mikko Rapeli - -Upstream-Status: Backport [c7fc4e1b61f0fd82fc697c19c604af7e9fb291a2] -Dropped change to .gitlab-ci.yml and adapted configure.ac due to other -patches in meta-oe. 
- ---- - buildutil/ax_pthread.m4 | 522 ++++++++ - configure.ac | 34 +- - docs/man/polkit.xml | 4 +- - meson.build | 16 +- - meson_options.txt | 1 + - src/polkitbackend/Makefile.am | 17 +- - src/polkitbackend/meson.build | 14 +- - src/polkitbackend/polkitbackendcommon.c | 530 +++++++++ - src/polkitbackend/polkitbackendcommon.h | 158 +++ - .../polkitbackendduktapeauthority.c | 1051 +++++++++++++++++ - .../polkitbackendjsauthority.cpp | 721 +---------- - .../etc/polkit-1/rules.d/10-testing.rules | 6 +- - .../test-polkitbackendjsauthority.c | 2 +- - 13 files changed, 2398 insertions(+), 678 deletions(-) - create mode 100644 buildutil/ax_pthread.m4 - create mode 100644 src/polkitbackend/polkitbackendcommon.c - create mode 100644 src/polkitbackend/polkitbackendcommon.h - create mode 100644 src/polkitbackend/polkitbackendduktapeauthority.c - -diff --git a/buildutil/ax_pthread.m4 b/buildutil/ax_pthread.m4 -new file mode 100644 -index 0000000..9f35d13 ---- /dev/null -+++ b/buildutil/ax_pthread.m4 -@@ -0,0 +1,522 @@ -+# =========================================================================== -+# https://www.gnu.org/software/autoconf-archive/ax_pthread.html -+# =========================================================================== -+# -+# SYNOPSIS -+# -+# AX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) -+# -+# DESCRIPTION -+# -+# This macro figures out how to build C programs using POSIX threads. It -+# sets the PTHREAD_LIBS output variable to the threads library and linker -+# flags, and the PTHREAD_CFLAGS output variable to any special C compiler -+# flags that are needed. (The user can also force certain compiler -+# flags/libs to be tested by setting these environment variables.) -+# -+# Also sets PTHREAD_CC and PTHREAD_CXX to any special C compiler that is -+# needed for multi-threaded programs (defaults to the value of CC -+# respectively CXX otherwise). (This is necessary on e.g. AIX to use the -+# special cc_r/CC_r compiler alias.) 
-+# -+# NOTE: You are assumed to not only compile your program with these flags, -+# but also to link with them as well. For example, you might link with -+# $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS -+# $PTHREAD_CXX $CXXFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS -+# -+# If you are only building threaded programs, you may wish to use these -+# variables in your default LIBS, CFLAGS, and CC: -+# -+# LIBS="$PTHREAD_LIBS $LIBS" -+# CFLAGS="$CFLAGS $PTHREAD_CFLAGS" -+# CXXFLAGS="$CXXFLAGS $PTHREAD_CFLAGS" -+# CC="$PTHREAD_CC" -+# CXX="$PTHREAD_CXX" -+# -+# In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute constant -+# has a nonstandard name, this macro defines PTHREAD_CREATE_JOINABLE to -+# that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). -+# -+# Also HAVE_PTHREAD_PRIO_INHERIT is defined if pthread is found and the -+# PTHREAD_PRIO_INHERIT symbol is defined when compiling with -+# PTHREAD_CFLAGS. -+# -+# ACTION-IF-FOUND is a list of shell commands to run if a threads library -+# is found, and ACTION-IF-NOT-FOUND is a list of commands to run it if it -+# is not found. If ACTION-IF-FOUND is not specified, the default action -+# will define HAVE_PTHREAD. -+# -+# Please let the authors know if this macro fails on any platform, or if -+# you have any other suggestions or comments. This macro was based on work -+# by SGJ on autoconf scripts for FFTW (http://www.fftw.org/) (with help -+# from M. Frigo), as well as ac_pthread and hb_pthread macros posted by -+# Alejandro Forero Cuervo to the autoconf macro repository. We are also -+# grateful for the helpful feedback of numerous users. -+# -+# Updated for Autoconf 2.68 by Daniel Richard G. -+# -+# LICENSE -+# -+# Copyright (c) 2008 Steven G. Johnson -+# Copyright (c) 2011 Daniel Richard G. 
-+# Copyright (c) 2019 Marc Stevens -+# -+# This program is free software: you can redistribute it and/or modify it -+# under the terms of the GNU General Public License as published by the -+# Free Software Foundation, either version 3 of the License, or (at your -+# option) any later version. -+# -+# This program is distributed in the hope that it will be useful, but -+# WITHOUT ANY WARRANTY; without even the implied warranty of -+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General -+# Public License for more details. -+# -+# You should have received a copy of the GNU General Public License along -+# with this program. If not, see . -+# -+# As a special exception, the respective Autoconf Macro's copyright owner -+# gives unlimited permission to copy, distribute and modify the configure -+# scripts that are the output of Autoconf when processing the Macro. You -+# need not follow the terms of the GNU General Public License when using -+# or distributing such scripts, even though portions of the text of the -+# Macro appear in them. The GNU General Public License (GPL) does govern -+# all other use of the material that constitutes the Autoconf Macro. -+# -+# This special exception to the GPL applies to versions of the Autoconf -+# Macro released by the Autoconf Archive. When you make and distribute a -+# modified version of the Autoconf Macro, you may extend this special -+# exception to the GPL to apply to your modified version as well. -+ -+#serial 31 -+ -+AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) -+AC_DEFUN([AX_PTHREAD], [ -+AC_REQUIRE([AC_CANONICAL_HOST]) -+AC_REQUIRE([AC_PROG_CC]) -+AC_REQUIRE([AC_PROG_SED]) -+AC_LANG_PUSH([C]) -+ax_pthread_ok=no -+ -+# We used to check for pthread.h first, but this fails if pthread.h -+# requires special compiler flags (e.g. on Tru64 or Sequent). -+# It gets checked for in the link test anyway. 
-+ -+# First of all, check if the user has set any of the PTHREAD_LIBS, -+# etcetera environment variables, and if threads linking works using -+# them: -+if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then -+ ax_pthread_save_CC="$CC" -+ ax_pthread_save_CFLAGS="$CFLAGS" -+ ax_pthread_save_LIBS="$LIBS" -+ AS_IF([test "x$PTHREAD_CC" != "x"], [CC="$PTHREAD_CC"]) -+ AS_IF([test "x$PTHREAD_CXX" != "x"], [CXX="$PTHREAD_CXX"]) -+ CFLAGS="$CFLAGS $PTHREAD_CFLAGS" -+ LIBS="$PTHREAD_LIBS $LIBS" -+ AC_MSG_CHECKING([for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS]) -+ AC_LINK_IFELSE([AC_LANG_CALL([], [pthread_join])], [ax_pthread_ok=yes]) -+ AC_MSG_RESULT([$ax_pthread_ok]) -+ if test "x$ax_pthread_ok" = "xno"; then -+ PTHREAD_LIBS="" -+ PTHREAD_CFLAGS="" -+ fi -+ CC="$ax_pthread_save_CC" -+ CFLAGS="$ax_pthread_save_CFLAGS" -+ LIBS="$ax_pthread_save_LIBS" -+fi -+ -+# We must check for the threads library under a number of different -+# names; the ordering is very important because some systems -+# (e.g. DEC) have both -lpthread and -lpthreads, where one of the -+# libraries is broken (non-POSIX). -+ -+# Create a list of thread flags to try. Items with a "," contain both -+# C compiler flags (before ",") and linker flags (after ","). Other items -+# starting with a "-" are C compiler flags, and remaining items are -+# library names, except for "none" which indicates that we try without -+# any flags at all, and "pthread-config" which is a program returning -+# the flags for the Pth emulation library. -+ -+ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" -+ -+# The ordering *is* (sometimes) important. 
Some notes on the -+# individual items follow: -+ -+# pthreads: AIX (must check this before -lpthread) -+# none: in case threads are in libc; should be tried before -Kthread and -+# other compiler flags to prevent continual compiler warnings -+# -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) -+# -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 -+# (Note: HP C rejects this with "bad form for `-t' option") -+# -pthreads: Solaris/gcc (Note: HP C also rejects) -+# -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it -+# doesn't hurt to check since this sometimes defines pthreads and -+# -D_REENTRANT too), HP C (must be checked before -lpthread, which -+# is present but should not be used directly; and before -mthreads, -+# because the compiler interprets this as "-mt" + "-hreads") -+# -mthreads: Mingw32/gcc, Lynx/gcc -+# pthread: Linux, etcetera -+# --thread-safe: KAI C++ -+# pthread-config: use pthread-config program (for GNU Pth library) -+ -+case $host_os in -+ -+ freebsd*) -+ -+ # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) -+ # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) -+ -+ ax_pthread_flags="-kthread lthread $ax_pthread_flags" -+ ;; -+ -+ hpux*) -+ -+ # From the cc(1) man page: "[-mt] Sets various -D flags to enable -+ # multi-threading and also sets -lpthread." -+ -+ ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" -+ ;; -+ -+ openedition*) -+ -+ # IBM z/OS requires a feature-test macro to be defined in order to -+ # enable POSIX threads at all, so give the user a hint if this is -+ # not set. (We don't define these ourselves, as they can affect -+ # other portions of the system API in unpredictable ways.) 
-+ -+ AC_EGREP_CPP([AX_PTHREAD_ZOS_MISSING], -+ [ -+# if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) -+ AX_PTHREAD_ZOS_MISSING -+# endif -+ ], -+ [AC_MSG_WARN([IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support.])]) -+ ;; -+ -+ solaris*) -+ -+ # On Solaris (at least, for some versions), libc contains stubbed -+ # (non-functional) versions of the pthreads routines, so link-based -+ # tests will erroneously succeed. (N.B.: The stubs are missing -+ # pthread_cleanup_push, or rather a function called by this macro, -+ # so we could check for that, but who knows whether they'll stub -+ # that too in a future libc.) So we'll check first for the -+ # standard Solaris way of linking pthreads (-mt -lpthread). -+ -+ ax_pthread_flags="-mt,-lpthread pthread $ax_pthread_flags" -+ ;; -+esac -+ -+# Are we compiling with Clang? -+ -+AC_CACHE_CHECK([whether $CC is Clang], -+ [ax_cv_PTHREAD_CLANG], -+ [ax_cv_PTHREAD_CLANG=no -+ # Note that Autoconf sets GCC=yes for Clang as well as GCC -+ if test "x$GCC" = "xyes"; then -+ AC_EGREP_CPP([AX_PTHREAD_CC_IS_CLANG], -+ [/* Note: Clang 2.7 lacks __clang_[a-z]+__ */ -+# if defined(__clang__) && defined(__llvm__) -+ AX_PTHREAD_CC_IS_CLANG -+# endif -+ ], -+ [ax_cv_PTHREAD_CLANG=yes]) -+ fi -+ ]) -+ax_pthread_clang="$ax_cv_PTHREAD_CLANG" -+ -+ -+# GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) -+ -+# Note that for GCC and Clang -pthread generally implies -lpthread, -+# except when -nostdlib is passed. 
-+# This is problematic using libtool to build C++ shared libraries with pthread: -+# [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25460 -+# [2] https://bugzilla.redhat.com/show_bug.cgi?id=661333 -+# [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468555 -+# To solve this, first try -pthread together with -lpthread for GCC -+ -+AS_IF([test "x$GCC" = "xyes"], -+ [ax_pthread_flags="-pthread,-lpthread -pthread -pthreads $ax_pthread_flags"]) -+ -+# Clang takes -pthread (never supported any other flag), but we'll try with -lpthread first -+ -+AS_IF([test "x$ax_pthread_clang" = "xyes"], -+ [ax_pthread_flags="-pthread,-lpthread -pthread"]) -+ -+ -+# The presence of a feature test macro requesting re-entrant function -+# definitions is, on some systems, a strong hint that pthreads support is -+# correctly enabled -+ -+case $host_os in -+ darwin* | hpux* | linux* | osf* | solaris*) -+ ax_pthread_check_macro="_REENTRANT" -+ ;; -+ -+ aix*) -+ ax_pthread_check_macro="_THREAD_SAFE" -+ ;; -+ -+ *) -+ ax_pthread_check_macro="--" -+ ;; -+esac -+AS_IF([test "x$ax_pthread_check_macro" = "x--"], -+ [ax_pthread_check_cond=0], -+ [ax_pthread_check_cond="!defined($ax_pthread_check_macro)"]) -+ -+ -+if test "x$ax_pthread_ok" = "xno"; then -+for ax_pthread_try_flag in $ax_pthread_flags; do -+ -+ case $ax_pthread_try_flag in -+ none) -+ AC_MSG_CHECKING([whether pthreads work without any flags]) -+ ;; -+ -+ *,*) -+ PTHREAD_CFLAGS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\1/"` -+ PTHREAD_LIBS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\2/"` -+ AC_MSG_CHECKING([whether pthreads work with "$PTHREAD_CFLAGS" and "$PTHREAD_LIBS"]) -+ ;; -+ -+ -*) -+ AC_MSG_CHECKING([whether pthreads work with $ax_pthread_try_flag]) -+ PTHREAD_CFLAGS="$ax_pthread_try_flag" -+ ;; -+ -+ pthread-config) -+ AC_CHECK_PROG([ax_pthread_config], [pthread-config], [yes], [no]) -+ AS_IF([test "x$ax_pthread_config" = "xno"], [continue]) -+ PTHREAD_CFLAGS="`pthread-config --cflags`" -+ 
PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" -+ ;; -+ -+ *) -+ AC_MSG_CHECKING([for the pthreads library -l$ax_pthread_try_flag]) -+ PTHREAD_LIBS="-l$ax_pthread_try_flag" -+ ;; -+ esac -+ -+ ax_pthread_save_CFLAGS="$CFLAGS" -+ ax_pthread_save_LIBS="$LIBS" -+ CFLAGS="$CFLAGS $PTHREAD_CFLAGS" -+ LIBS="$PTHREAD_LIBS $LIBS" -+ -+ # Check for various functions. We must include pthread.h, -+ # since some functions may be macros. (On the Sequent, we -+ # need a special flag -Kthread to make this header compile.) -+ # We check for pthread_join because it is in -lpthread on IRIX -+ # while pthread_create is in libc. We check for pthread_attr_init -+ # due to DEC craziness with -lpthreads. We check for -+ # pthread_cleanup_push because it is one of the few pthread -+ # functions on Solaris that doesn't have a non-functional libc stub. -+ # We try pthread_create on general principles. -+ -+ AC_LINK_IFELSE([AC_LANG_PROGRAM([#include -+# if $ax_pthread_check_cond -+# error "$ax_pthread_check_macro must be defined" -+# endif -+ static void *some_global = NULL; -+ static void routine(void *a) -+ { -+ /* To avoid any unused-parameter or -+ unused-but-set-parameter warning. */ -+ some_global = a; -+ } -+ static void *start_routine(void *a) { return a; }], -+ [pthread_t th; pthread_attr_t attr; -+ pthread_create(&th, 0, start_routine, 0); -+ pthread_join(th, 0); -+ pthread_attr_init(&attr); -+ pthread_cleanup_push(routine, 0); -+ pthread_cleanup_pop(0) /* ; */])], -+ [ax_pthread_ok=yes], -+ []) -+ -+ CFLAGS="$ax_pthread_save_CFLAGS" -+ LIBS="$ax_pthread_save_LIBS" -+ -+ AC_MSG_RESULT([$ax_pthread_ok]) -+ AS_IF([test "x$ax_pthread_ok" = "xyes"], [break]) -+ -+ PTHREAD_LIBS="" -+ PTHREAD_CFLAGS="" -+done -+fi -+ -+ -+# Clang needs special handling, because older versions handle the -pthread -+# option in a rather... 
idiosyncratic way -+ -+if test "x$ax_pthread_clang" = "xyes"; then -+ -+ # Clang takes -pthread; it has never supported any other flag -+ -+ # (Note 1: This will need to be revisited if a system that Clang -+ # supports has POSIX threads in a separate library. This tends not -+ # to be the way of modern systems, but it's conceivable.) -+ -+ # (Note 2: On some systems, notably Darwin, -pthread is not needed -+ # to get POSIX threads support; the API is always present and -+ # active. We could reasonably leave PTHREAD_CFLAGS empty. But -+ # -pthread does define _REENTRANT, and while the Darwin headers -+ # ignore this macro, third-party headers might not.) -+ -+ # However, older versions of Clang make a point of warning the user -+ # that, in an invocation where only linking and no compilation is -+ # taking place, the -pthread option has no effect ("argument unused -+ # during compilation"). They expect -pthread to be passed in only -+ # when source code is being compiled. -+ # -+ # Problem is, this is at odds with the way Automake and most other -+ # C build frameworks function, which is that the same flags used in -+ # compilation (CFLAGS) are also used in linking. Many systems -+ # supported by AX_PTHREAD require exactly this for POSIX threads -+ # support, and in fact it is often not straightforward to specify a -+ # flag that is used only in the compilation phase and not in -+ # linking. Such a scenario is extremely rare in practice. -+ # -+ # Even though use of the -pthread flag in linking would only print -+ # a warning, this can be a nuisance for well-run software projects -+ # that build with -Werror. So if the active version of Clang has -+ # this misfeature, we search for an option to squash it. 
-+ -+ AC_CACHE_CHECK([whether Clang needs flag to prevent "argument unused" warning when linking with -pthread], -+ [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG], -+ [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown -+ # Create an alternate version of $ac_link that compiles and -+ # links in two steps (.c -> .o, .o -> exe) instead of one -+ # (.c -> exe), because the warning occurs only in the second -+ # step -+ ax_pthread_save_ac_link="$ac_link" -+ ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' -+ ax_pthread_link_step=`AS_ECHO(["$ac_link"]) | sed "$ax_pthread_sed"` -+ ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" -+ ax_pthread_save_CFLAGS="$CFLAGS" -+ for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do -+ AS_IF([test "x$ax_pthread_try" = "xunknown"], [break]) -+ CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" -+ ac_link="$ax_pthread_save_ac_link" -+ AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], -+ [ac_link="$ax_pthread_2step_ac_link" -+ AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], -+ [break]) -+ ]) -+ done -+ ac_link="$ax_pthread_save_ac_link" -+ CFLAGS="$ax_pthread_save_CFLAGS" -+ AS_IF([test "x$ax_pthread_try" = "x"], [ax_pthread_try=no]) -+ ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" -+ ]) -+ -+ case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in -+ no | unknown) ;; -+ *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; -+ esac -+ -+fi # $ax_pthread_clang = yes -+ -+ -+ -+# Various other checks: -+if test "x$ax_pthread_ok" = "xyes"; then -+ ax_pthread_save_CFLAGS="$CFLAGS" -+ ax_pthread_save_LIBS="$LIBS" -+ CFLAGS="$CFLAGS $PTHREAD_CFLAGS" -+ LIBS="$PTHREAD_LIBS $LIBS" -+ -+ # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. 
-+ AC_CACHE_CHECK([for joinable pthread attribute], -+ [ax_cv_PTHREAD_JOINABLE_ATTR], -+ [ax_cv_PTHREAD_JOINABLE_ATTR=unknown -+ for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do -+ AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], -+ [int attr = $ax_pthread_attr; return attr /* ; */])], -+ [ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break], -+ []) -+ done -+ ]) -+ AS_IF([test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ -+ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ -+ test "x$ax_pthread_joinable_attr_defined" != "xyes"], -+ [AC_DEFINE_UNQUOTED([PTHREAD_CREATE_JOINABLE], -+ [$ax_cv_PTHREAD_JOINABLE_ATTR], -+ [Define to necessary symbol if this constant -+ uses a non-standard name on your system.]) -+ ax_pthread_joinable_attr_defined=yes -+ ]) -+ -+ AC_CACHE_CHECK([whether more special flags are required for pthreads], -+ [ax_cv_PTHREAD_SPECIAL_FLAGS], -+ [ax_cv_PTHREAD_SPECIAL_FLAGS=no -+ case $host_os in -+ solaris*) -+ ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" -+ ;; -+ esac -+ ]) -+ AS_IF([test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ -+ test "x$ax_pthread_special_flags_added" != "xyes"], -+ [PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" -+ ax_pthread_special_flags_added=yes]) -+ -+ AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], -+ [ax_cv_PTHREAD_PRIO_INHERIT], -+ [AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]], -+ [[int i = PTHREAD_PRIO_INHERIT; -+ return i;]])], -+ [ax_cv_PTHREAD_PRIO_INHERIT=yes], -+ [ax_cv_PTHREAD_PRIO_INHERIT=no]) -+ ]) -+ AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ -+ test "x$ax_pthread_prio_inherit_defined" != "xyes"], -+ [AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], [1], [Have PTHREAD_PRIO_INHERIT.]) -+ ax_pthread_prio_inherit_defined=yes -+ ]) -+ -+ CFLAGS="$ax_pthread_save_CFLAGS" -+ LIBS="$ax_pthread_save_LIBS" -+ -+ # More AIX lossage: compile with *_r variant -+ if test "x$GCC" != "xyes"; then -+ case $host_os in -+ aix*) -+ 
AS_CASE(["x/$CC"], -+ [x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], -+ [#handle absolute path differently from PATH based program lookup -+ AS_CASE(["x$CC"], -+ [x/*], -+ [ -+ AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"]) -+ AS_IF([test "x${CXX}" != "x"], [AS_IF([AS_EXECUTABLE_P([${CXX}_r])],[PTHREAD_CXX="${CXX}_r"])]) -+ ], -+ [ -+ AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC]) -+ AS_IF([test "x${CXX}" != "x"], [AC_CHECK_PROGS([PTHREAD_CXX],[${CXX}_r],[$CXX])]) -+ ] -+ ) -+ ]) -+ ;; -+ esac -+ fi -+fi -+ -+test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" -+test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" -+ -+AC_SUBST([PTHREAD_LIBS]) -+AC_SUBST([PTHREAD_CFLAGS]) -+AC_SUBST([PTHREAD_CC]) -+AC_SUBST([PTHREAD_CXX]) -+ -+# Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: -+if test "x$ax_pthread_ok" = "xyes"; then -+ ifelse([$1],,[AC_DEFINE([HAVE_PTHREAD],[1],[Define if you have POSIX threads libraries and header files.])],[$1]) -+ : -+else -+ ax_pthread_ok=no -+ $2 -+fi -+AC_LANG_POP -+])dnl AX_PTHREAD -diff --git a/configure.ac b/configure.ac -index b625743..bbf4768 100644 ---- a/configure.ac -+++ b/configure.ac -@@ -80,11 +80,22 @@ PKG_CHECK_MODULES(GLIB, [gmodule-2.0 gio-unix-2.0 >= 2.30.0]) - AC_SUBST(GLIB_CFLAGS) - AC_SUBST(GLIB_LIBS) - --PKG_CHECK_MODULES(LIBJS, [mozjs-78]) -- --AC_SUBST(LIBJS_CFLAGS) --AC_SUBST(LIBJS_CXXFLAGS) --AC_SUBST(LIBJS_LIBS) -+dnl --------------------------------------------------------------------------- -+dnl - Check javascript backend -+dnl --------------------------------------------------------------------------- -+AC_ARG_WITH(duktape, AS_HELP_STRING([--with-duktape],[Use Duktape as javascript backend]),with_duktape=yes,with_duktape=no) -+AS_IF([test x${with_duktape} == xyes], [ -+ PKG_CHECK_MODULES(LIBJS, [duktape >= 2.2.0 ]) -+ AC_SUBST(LIBJS_CFLAGS) -+ AC_SUBST(LIBJS_LIBS) -+], [ -+ PKG_CHECK_MODULES(LIBJS, [mozjs-78]) -+ -+ AC_SUBST(LIBJS_CFLAGS) -+ AC_SUBST(LIBJS_CXXFLAGS) 
-+ AC_SUBST(LIBJS_LIBS) -+]) -+AM_CONDITIONAL(USE_DUKTAPE, [test x$with_duktape == xyes], [Using duktape as javascript engine library]) - - EXPAT_LIB="" - AC_ARG_WITH(expat, [ --with-expat= Use expat from here], -@@ -100,6 +111,12 @@ AC_CHECK_LIB(expat,XML_ParserCreate,[EXPAT_LIBS="-lexpat"], - [AC_MSG_ERROR([Can't find expat library. Please install expat.])]) - AC_SUBST(EXPAT_LIBS) - -+AX_PTHREAD([], [AC_MSG_ERROR([Cannot find the way to enable pthread support.])]) -+LIBS="$PTHREAD_LIBS $LIBS" -+CFLAGS="$CFLAGS $PTHREAD_CFLAGS" -+CC="$PTHREAD_CC" -+AC_CHECK_FUNCS([pthread_condattr_setclock]) -+ - AC_CHECK_FUNCS(clearenv fdatasync) - - if test "x$GCC" = "xyes"; then -@@ -581,6 +598,13 @@ echo " - PAM support: ${have_pam} - systemdsystemunitdir: ${systemdsystemunitdir} - polkitd user: ${POLKITD_USER}" -+if test "x${with_duktape}" = xyes; then -+echo " -+ Javascript engine: Duktape" -+else -+echo " -+ Javascript engine: Mozjs" -+fi - - if test "$have_pam" = yes ; then - echo " -diff --git a/docs/man/polkit.xml b/docs/man/polkit.xml -index 99aa474..90715a5 100644 ---- a/docs/man/polkit.xml -+++ b/docs/man/polkit.xml -@@ -639,7 +639,9 @@ polkit.Result = { - If user-provided code takes a long time to execute, an exception - will be thrown which normally results in the function being - terminated (the current limit is 15 seconds). This is used to -- catch runaway scripts. -+ catch runaway scripts. If the duktape JavaScript backend is -+ compiled in, instead of mozjs, no exception will be thrown—the -+ script will be killed right away (same timeout). - - - -diff --git a/meson.build b/meson.build -index b3702be..7506231 100644 ---- a/meson.build -+++ b/meson.build -@@ -126,7 +126,18 @@ expat_dep = dependency('expat') - assert(cc.has_header('expat.h', dependencies: expat_dep), 'Can\'t find expat.h. Please install expat.') - assert(cc.has_function('XML_ParserCreate', dependencies: expat_dep), 'Can\'t find expat library. 
Please install expat.') - --mozjs_dep = dependency('mozjs-78') -+duktape_req_version = '>= 2.2.0' -+ -+js_engine = get_option('js_engine') -+if js_engine == 'duktape' -+ js_dep = dependency('duktape', version: duktape_req_version) -+ libm_dep = cc.find_library('m') -+ thread_dep = dependency('threads') -+ func = 'pthread_condattr_setclock' -+ config_h.set('HAVE_' + func.to_upper(), cc.has_function(func, prefix : '#include ')) -+elif js_engine == 'mozjs' -+ js_dep = dependency('mozjs-78') -+endif - - dbus_dep = dependency('dbus-1') - dbus_confdir = dbus_dep.get_pkgconfig_variable('datadir', define_variable: ['datadir', pk_prefix / pk_datadir]) #changed from sysconfdir with respect to commit#8eada3836465838 -@@ -350,6 +361,9 @@ if enable_logind - output += ' systemdsystemunitdir: ' + systemd_systemdsystemunitdir + '\n' - endif - output += ' polkitd user: ' + polkitd_user + ' \n' -+output += ' Javascript engine: ' + js_engine + '\n' -+if enable_logind -+endif - output += ' PAM support: ' + enable_pam.to_string() + '\n\n' - if enable_pam - output += ' PAM file auth: ' + pam_conf['PAM_FILE_INCLUDE_AUTH'] + '\n' -diff --git a/meson_options.txt b/meson_options.txt -index 25e3e77..76aa311 100644 ---- a/meson_options.txt -+++ b/meson_options.txt -@@ -16,3 +16,4 @@ option('introspection', type: 'boolean', value: true, description: 'Enable intro - - option('gtk_doc', type: 'boolean', value: false, description: 'use gtk-doc to build documentation') - option('man', type: 'boolean', value: false, description: 'build manual pages') -+option('js_engine', type: 'combo', choices: ['mozjs', 'duktape'], value: 'duktape', description: 'javascript engine') -diff --git a/src/polkitbackend/Makefile.am b/src/polkitbackend/Makefile.am -index 7e3c080..935fb98 100644 ---- a/src/polkitbackend/Makefile.am -+++ b/src/polkitbackend/Makefile.am -@@ -17,6 +17,8 @@ AM_CPPFLAGS = \ - -DPACKAGE_LIB_DIR=\""$(libdir)"\" \ - -D_POSIX_PTHREAD_SEMANTICS \ - -D_REENTRANT \ -+ -D_XOPEN_SOURCE=700 \ -+ 
-D_GNU_SOURCE=1 \ - $(NULL) - - noinst_LTLIBRARIES=libpolkit-backend-1.la -@@ -31,9 +33,10 @@ libpolkit_backend_1_la_SOURCES = \ - polkitbackend.h \ - polkitbackendtypes.h \ - polkitbackendprivate.h \ -+ polkitbackendcommon.h polkitbackendcommon.c \ - polkitbackendauthority.h polkitbackendauthority.c \ - polkitbackendinteractiveauthority.h polkitbackendinteractiveauthority.c \ -- polkitbackendjsauthority.h polkitbackendjsauthority.cpp \ -+ polkitbackendjsauthority.h \ - polkitbackendactionpool.h polkitbackendactionpool.c \ - polkitbackendactionlookup.h polkitbackendactionlookup.c \ - $(NULL) -@@ -51,19 +54,27 @@ libpolkit_backend_1_la_CFLAGS = \ - -D_POLKIT_BACKEND_COMPILATION \ - $(GLIB_CFLAGS) \ - $(LIBSYSTEMD_CFLAGS) \ -- $(LIBJS_CFLAGS) \ -+ $(LIBJS_CFLAGS) \ - $(NULL) - - libpolkit_backend_1_la_CXXFLAGS = $(libpolkit_backend_1_la_CFLAGS) - - libpolkit_backend_1_la_LIBADD = \ - $(GLIB_LIBS) \ -+ $(DUKTAPE_LIBS) \ - $(LIBSYSTEMD_LIBS) \ - $(top_builddir)/src/polkit/libpolkit-gobject-1.la \ - $(EXPAT_LIBS) \ -- $(LIBJS_LIBS) \ -+ $(LIBJS_LIBS) \ - $(NULL) - -+if USE_DUKTAPE -+libpolkit_backend_1_la_SOURCES += polkitbackendduktapeauthority.c -+libpolkit_backend_1_la_LIBADD += -lm -+else -+libpolkit_backend_1_la_SOURCES += polkitbackendjsauthority.cpp -+endif -+ - rulesdir = $(sysconfdir)/polkit-1/rules.d - rules_DATA = 50-default.rules - -diff --git a/src/polkitbackend/meson.build b/src/polkitbackend/meson.build -index 93c3c34..99f8e33 100644 ---- a/src/polkitbackend/meson.build -+++ b/src/polkitbackend/meson.build -@@ -4,8 +4,8 @@ sources = files( - 'polkitbackendactionlookup.c', - 'polkitbackendactionpool.c', - 'polkitbackendauthority.c', -+ 'polkitbackendcommon.c', - 'polkitbackendinteractiveauthority.c', -- 'polkitbackendjsauthority.cpp', - ) - - output = 'initjs.h' -@@ -21,7 +21,7 @@ sources += custom_target( - deps = [ - expat_dep, - libpolkit_gobject_dep, -- mozjs_dep, -+ js_dep, - ] - - c_flags = [ -@@ -29,8 +29,18 @@ c_flags = [ - 
'-D_POLKIT_BACKEND_COMPILATION', - '-DPACKAGE_DATA_DIR="@0@"'.format(pk_prefix / pk_datadir), - '-DPACKAGE_SYSCONF_DIR="@0@"'.format(pk_prefix / pk_sysconfdir), -+ '-D_XOPEN_SOURCE=700', -+ '-D_GNU_SOURCE=1', - ] - -+if js_engine == 'duktape' -+ sources += files('polkitbackendduktapeauthority.c') -+ deps += libm_dep -+ deps += thread_dep -+elif js_engine == 'mozjs' -+ sources += files('polkitbackendjsauthority.cpp') -+endif -+ - if enable_logind - sources += files('polkitbackendsessionmonitor-systemd.c') - -diff --git a/src/polkitbackend/polkitbackendcommon.c b/src/polkitbackend/polkitbackendcommon.c -new file mode 100644 -index 0000000..6783dff ---- /dev/null -+++ b/src/polkitbackend/polkitbackendcommon.c -@@ -0,0 +1,530 @@ -+/* -+ * Copyright (C) 2008 Red Hat, Inc. -+ * -+ * This library is free software; you can redistribute it and/or -+ * modify it under the terms of the GNU Lesser General Public -+ * License as published by the Free Software Foundation; either -+ * version 2 of the License, or (at your option) any later version. -+ * -+ * This library is distributed in the hope that it will be useful, -+ * but WITHOUT ANY WARRANTY; without even the implied warranty of -+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -+ * Lesser General Public License for more details. -+ * -+ * You should have received a copy of the GNU Lesser General -+ * Public License along with this library; if not, write to the -+ * Free Software Foundation, Inc., 59 Temple Place, Suite 330, -+ * Boston, MA 02111-1307, USA. 
-+ * -+ * Author: David Zeuthen -+ */ -+ -+#include "polkitbackendcommon.h" -+ -+static void -+utils_child_watch_from_release_cb (GPid pid, -+ gint status, -+ gpointer user_data) -+{ -+} -+ -+static void -+utils_spawn_data_free (UtilsSpawnData *data) -+{ -+ if (data->timeout_source != NULL) -+ { -+ g_source_destroy (data->timeout_source); -+ data->timeout_source = NULL; -+ } -+ -+ /* Nuke the child, if necessary */ -+ if (data->child_watch_source != NULL) -+ { -+ g_source_destroy (data->child_watch_source); -+ data->child_watch_source = NULL; -+ } -+ -+ if (data->child_pid != 0) -+ { -+ GSource *source; -+ kill (data->child_pid, SIGTERM); -+ /* OK, we need to reap for the child ourselves - we don't want -+ * to use waitpid() because that might block the calling -+ * thread (the child might handle SIGTERM and use several -+ * seconds for cleanup/rollback). -+ * -+ * So we use GChildWatch instead. -+ * -+ * Avoid taking a references to ourselves. but note that we need -+ * to pass the GSource so we can nuke it once handled. 
-+ */ -+ source = g_child_watch_source_new (data->child_pid); -+ g_source_set_callback (source, -+ (GSourceFunc) utils_child_watch_from_release_cb, -+ source, -+ (GDestroyNotify) g_source_destroy); -+ g_source_attach (source, data->main_context); -+ g_source_unref (source); -+ data->child_pid = 0; -+ } -+ -+ if (data->child_stdout != NULL) -+ { -+ g_string_free (data->child_stdout, TRUE); -+ data->child_stdout = NULL; -+ } -+ -+ if (data->child_stderr != NULL) -+ { -+ g_string_free (data->child_stderr, TRUE); -+ data->child_stderr = NULL; -+ } -+ -+ if (data->child_stdout_channel != NULL) -+ { -+ g_io_channel_unref (data->child_stdout_channel); -+ data->child_stdout_channel = NULL; -+ } -+ if (data->child_stderr_channel != NULL) -+ { -+ g_io_channel_unref (data->child_stderr_channel); -+ data->child_stderr_channel = NULL; -+ } -+ -+ if (data->child_stdout_source != NULL) -+ { -+ g_source_destroy (data->child_stdout_source); -+ data->child_stdout_source = NULL; -+ } -+ if (data->child_stderr_source != NULL) -+ { -+ g_source_destroy (data->child_stderr_source); -+ data->child_stderr_source = NULL; -+ } -+ -+ if (data->child_stdout_fd != -1) -+ { -+ g_warn_if_fail (close (data->child_stdout_fd) == 0); -+ data->child_stdout_fd = -1; -+ } -+ if (data->child_stderr_fd != -1) -+ { -+ g_warn_if_fail (close (data->child_stderr_fd) == 0); -+ data->child_stderr_fd = -1; -+ } -+ -+ if (data->cancellable_handler_id > 0) -+ { -+ g_cancellable_disconnect (data->cancellable, data->cancellable_handler_id); -+ data->cancellable_handler_id = 0; -+ } -+ -+ if (data->main_context != NULL) -+ g_main_context_unref (data->main_context); -+ -+ if (data->cancellable != NULL) -+ g_object_unref (data->cancellable); -+ -+ g_slice_free (UtilsSpawnData, data); -+} -+ -+/* called in the thread where @cancellable was cancelled */ -+static void -+utils_on_cancelled (GCancellable *cancellable, -+ gpointer user_data) -+{ -+ UtilsSpawnData *data = (UtilsSpawnData *)user_data; -+ GError *error; -+ -+ 
error = NULL; -+ g_warn_if_fail (g_cancellable_set_error_if_cancelled (cancellable, &error)); -+ g_simple_async_result_take_error (data->simple, error); -+ g_simple_async_result_complete_in_idle (data->simple); -+ g_object_unref (data->simple); -+} -+ -+static gboolean -+utils_timeout_cb (gpointer user_data) -+{ -+ UtilsSpawnData *data = (UtilsSpawnData *)user_data; -+ -+ data->timed_out = TRUE; -+ -+ /* ok, timeout is history, make sure we don't free it in spawn_data_free() */ -+ data->timeout_source = NULL; -+ -+ /* we're done */ -+ g_simple_async_result_complete_in_idle (data->simple); -+ g_object_unref (data->simple); -+ -+ return FALSE; /* remove source */ -+} -+ -+static void -+utils_child_watch_cb (GPid pid, -+ gint status, -+ gpointer user_data) -+{ -+ UtilsSpawnData *data = (UtilsSpawnData *)user_data; -+ gchar *buf; -+ gsize buf_size; -+ -+ if (g_io_channel_read_to_end (data->child_stdout_channel, &buf, &buf_size, NULL) == G_IO_STATUS_NORMAL) -+ { -+ g_string_append_len (data->child_stdout, buf, buf_size); -+ g_free (buf); -+ } -+ if (g_io_channel_read_to_end (data->child_stderr_channel, &buf, &buf_size, NULL) == G_IO_STATUS_NORMAL) -+ { -+ g_string_append_len (data->child_stderr, buf, buf_size); -+ g_free (buf); -+ } -+ -+ data->exit_status = status; -+ -+ /* ok, child watch is history, make sure we don't free it in spawn_data_free() */ -+ data->child_pid = 0; -+ data->child_watch_source = NULL; -+ -+ /* we're done */ -+ g_simple_async_result_complete_in_idle (data->simple); -+ g_object_unref (data->simple); -+} -+ -+static gboolean -+utils_read_child_stderr (GIOChannel *channel, -+ GIOCondition condition, -+ gpointer user_data) -+{ -+ UtilsSpawnData *data = (UtilsSpawnData *)user_data; -+ gchar buf[1024]; -+ gsize bytes_read; -+ -+ g_io_channel_read_chars (channel, buf, sizeof buf, &bytes_read, NULL); -+ g_string_append_len (data->child_stderr, buf, bytes_read); -+ return TRUE; -+} -+ -+static gboolean -+utils_read_child_stdout (GIOChannel *channel, -+ 
GIOCondition condition, -+ gpointer user_data) -+{ -+ UtilsSpawnData *data = (UtilsSpawnData *)user_data; -+ gchar buf[1024]; -+ gsize bytes_read; -+ -+ g_io_channel_read_chars (channel, buf, sizeof buf, &bytes_read, NULL); -+ g_string_append_len (data->child_stdout, buf, bytes_read); -+ return TRUE; -+} -+ -+void -+polkit_backend_common_spawn (const gchar *const *argv, -+ guint timeout_seconds, -+ GCancellable *cancellable, -+ GAsyncReadyCallback callback, -+ gpointer user_data) -+{ -+ UtilsSpawnData *data; -+ GError *error; -+ -+ data = g_slice_new0 (UtilsSpawnData); -+ data->timeout_seconds = timeout_seconds; -+ data->simple = g_simple_async_result_new (NULL, -+ callback, -+ user_data, -+ (gpointer*)polkit_backend_common_spawn); -+ data->main_context = g_main_context_get_thread_default (); -+ if (data->main_context != NULL) -+ g_main_context_ref (data->main_context); -+ -+ data->cancellable = cancellable != NULL ? (GCancellable*)g_object_ref (cancellable) : NULL; -+ -+ data->child_stdout = g_string_new (NULL); -+ data->child_stderr = g_string_new (NULL); -+ data->child_stdout_fd = -1; -+ data->child_stderr_fd = -1; -+ -+ /* the life-cycle of UtilsSpawnData is tied to its GSimpleAsyncResult */ -+ g_simple_async_result_set_op_res_gpointer (data->simple, data, (GDestroyNotify) utils_spawn_data_free); -+ -+ error = NULL; -+ if (data->cancellable != NULL) -+ { -+ /* could already be cancelled */ -+ error = NULL; -+ if (g_cancellable_set_error_if_cancelled (data->cancellable, &error)) -+ { -+ g_simple_async_result_take_error (data->simple, error); -+ g_simple_async_result_complete_in_idle (data->simple); -+ g_object_unref (data->simple); -+ goto out; -+ } -+ -+ data->cancellable_handler_id = g_cancellable_connect (data->cancellable, -+ G_CALLBACK (utils_on_cancelled), -+ data, -+ NULL); -+ } -+ -+ error = NULL; -+ if (!g_spawn_async_with_pipes (NULL, /* working directory */ -+ (gchar **) argv, -+ NULL, /* envp */ -+ G_SPAWN_SEARCH_PATH | G_SPAWN_DO_NOT_REAP_CHILD, -+ 
NULL, /* child_setup */ -+ NULL, /* child_setup's user_data */ -+ &(data->child_pid), -+ NULL, /* gint *stdin_fd */ -+ &(data->child_stdout_fd), -+ &(data->child_stderr_fd), -+ &error)) -+ { -+ g_prefix_error (&error, "Error spawning: "); -+ g_simple_async_result_take_error (data->simple, error); -+ g_simple_async_result_complete_in_idle (data->simple); -+ g_object_unref (data->simple); -+ goto out; -+ } -+ -+ if (timeout_seconds > 0) -+ { -+ data->timeout_source = g_timeout_source_new_seconds (timeout_seconds); -+ g_source_set_priority (data->timeout_source, G_PRIORITY_DEFAULT); -+ g_source_set_callback (data->timeout_source, utils_timeout_cb, data, NULL); -+ g_source_attach (data->timeout_source, data->main_context); -+ g_source_unref (data->timeout_source); -+ } -+ -+ data->child_watch_source = g_child_watch_source_new (data->child_pid); -+ g_source_set_callback (data->child_watch_source, (GSourceFunc) utils_child_watch_cb, data, NULL); -+ g_source_attach (data->child_watch_source, data->main_context); -+ g_source_unref (data->child_watch_source); -+ -+ data->child_stdout_channel = g_io_channel_unix_new (data->child_stdout_fd); -+ g_io_channel_set_flags (data->child_stdout_channel, G_IO_FLAG_NONBLOCK, NULL); -+ data->child_stdout_source = g_io_create_watch (data->child_stdout_channel, G_IO_IN); -+ g_source_set_callback (data->child_stdout_source, (GSourceFunc) utils_read_child_stdout, data, NULL); -+ g_source_attach (data->child_stdout_source, data->main_context); -+ g_source_unref (data->child_stdout_source); -+ -+ data->child_stderr_channel = g_io_channel_unix_new (data->child_stderr_fd); -+ g_io_channel_set_flags (data->child_stderr_channel, G_IO_FLAG_NONBLOCK, NULL); -+ data->child_stderr_source = g_io_create_watch (data->child_stderr_channel, G_IO_IN); -+ g_source_set_callback (data->child_stderr_source, (GSourceFunc) utils_read_child_stderr, data, NULL); -+ g_source_attach (data->child_stderr_source, data->main_context); -+ g_source_unref 
(data->child_stderr_source); -+ -+ out: -+ ; -+} -+ -+void -+polkit_backend_common_on_dir_monitor_changed (GFileMonitor *monitor, -+ GFile *file, -+ GFile *other_file, -+ GFileMonitorEvent event_type, -+ gpointer user_data) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (user_data); -+ -+ /* TODO: maybe rate-limit so storms of events are collapsed into one with a 500ms resolution? -+ * Because when editing a file with emacs we get 4-8 events.. -+ */ -+ -+ if (file != NULL) -+ { -+ gchar *name; -+ -+ name = g_file_get_basename (file); -+ -+ /* g_print ("event_type=%d file=%p name=%s\n", event_type, file, name); */ -+ if (!g_str_has_prefix (name, ".") && -+ !g_str_has_prefix (name, "#") && -+ g_str_has_suffix (name, ".rules") && -+ (event_type == G_FILE_MONITOR_EVENT_CREATED || -+ event_type == G_FILE_MONITOR_EVENT_DELETED || -+ event_type == G_FILE_MONITOR_EVENT_CHANGES_DONE_HINT)) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Reloading rules"); -+ polkit_backend_common_reload_scripts (authority); -+ } -+ g_free (name); -+ } -+} -+ -+gboolean -+polkit_backend_common_spawn_finish (GAsyncResult *res, -+ gint *out_exit_status, -+ gchar **out_standard_output, -+ gchar **out_standard_error, -+ GError **error) -+{ -+ GSimpleAsyncResult *simple = G_SIMPLE_ASYNC_RESULT (res); -+ UtilsSpawnData *data; -+ gboolean ret = FALSE; -+ -+ g_return_val_if_fail (G_IS_ASYNC_RESULT (res), FALSE); -+ g_return_val_if_fail (error == NULL || *error == NULL, FALSE); -+ -+ g_warn_if_fail (g_simple_async_result_get_source_tag (simple) == polkit_backend_common_spawn); -+ -+ if (g_simple_async_result_propagate_error (simple, error)) -+ goto out; -+ -+ data = (UtilsSpawnData*)g_simple_async_result_get_op_res_gpointer (simple); -+ -+ if (data->timed_out) -+ { -+ g_set_error (error, -+ G_IO_ERROR, -+ G_IO_ERROR_TIMED_OUT, -+ "Timed out after %d seconds", -+ data->timeout_seconds); -+ goto out; -+ } -+ -+ if (out_exit_status != NULL) -+ 
*out_exit_status = data->exit_status; -+ -+ if (out_standard_output != NULL) -+ *out_standard_output = g_strdup (data->child_stdout->str); -+ -+ if (out_standard_error != NULL) -+ *out_standard_error = g_strdup (data->child_stderr->str); -+ -+ ret = TRUE; -+ -+ out: -+ return ret; -+} -+ -+static const gchar * -+polkit_backend_js_authority_get_name (PolkitBackendAuthority *authority) -+{ -+ return "js"; -+} -+ -+static const gchar * -+polkit_backend_js_authority_get_version (PolkitBackendAuthority *authority) -+{ -+ return PACKAGE_VERSION; -+} -+ -+static PolkitAuthorityFeatures -+polkit_backend_js_authority_get_features (PolkitBackendAuthority *authority) -+{ -+ return POLKIT_AUTHORITY_FEATURES_TEMPORARY_AUTHORIZATION; -+} -+ -+void -+polkit_backend_common_js_authority_class_init_common (PolkitBackendJsAuthorityClass *klass) -+{ -+ GObjectClass *gobject_class; -+ PolkitBackendAuthorityClass *authority_class; -+ PolkitBackendInteractiveAuthorityClass *interactive_authority_class; -+ -+ gobject_class = G_OBJECT_CLASS (klass); -+ gobject_class->finalize = polkit_backend_common_js_authority_finalize; -+ gobject_class->set_property = polkit_backend_common_js_authority_set_property; -+ gobject_class->constructed = polkit_backend_common_js_authority_constructed; -+ -+ authority_class = POLKIT_BACKEND_AUTHORITY_CLASS (klass); -+ authority_class->get_name = polkit_backend_js_authority_get_name; -+ authority_class->get_version = polkit_backend_js_authority_get_version; -+ authority_class->get_features = polkit_backend_js_authority_get_features; -+ -+ interactive_authority_class = POLKIT_BACKEND_INTERACTIVE_AUTHORITY_CLASS (klass); -+ interactive_authority_class->get_admin_identities = polkit_backend_common_js_authority_get_admin_auth_identities; -+ interactive_authority_class->check_authorization_sync = polkit_backend_common_js_authority_check_authorization_sync; -+ -+ g_object_class_install_property (gobject_class, -+ PROP_RULES_DIRS, -+ g_param_spec_boxed ("rules-dirs", 
-+ NULL, -+ NULL, -+ G_TYPE_STRV, -+ G_PARAM_CONSTRUCT_ONLY | G_PARAM_WRITABLE)); -+} -+ -+gint -+polkit_backend_common_rules_file_name_cmp (const gchar *a, -+ const gchar *b) -+{ -+ gint ret; -+ const gchar *a_base; -+ const gchar *b_base; -+ -+ a_base = strrchr (a, '/'); -+ b_base = strrchr (b, '/'); -+ -+ g_assert (a_base != NULL); -+ g_assert (b_base != NULL); -+ a_base += 1; -+ b_base += 1; -+ -+ ret = g_strcmp0 (a_base, b_base); -+ if (ret == 0) -+ { -+ /* /etc wins over /usr */ -+ ret = g_strcmp0 (a, b); -+ g_assert (ret != 0); -+ } -+ -+ return ret; -+} -+ -+const gchar * -+polkit_backend_common_get_signal_name (gint signal_number) -+{ -+ switch (signal_number) -+ { -+#define _HANDLE_SIG(sig) case sig: return #sig; -+ _HANDLE_SIG (SIGHUP); -+ _HANDLE_SIG (SIGINT); -+ _HANDLE_SIG (SIGQUIT); -+ _HANDLE_SIG (SIGILL); -+ _HANDLE_SIG (SIGABRT); -+ _HANDLE_SIG (SIGFPE); -+ _HANDLE_SIG (SIGKILL); -+ _HANDLE_SIG (SIGSEGV); -+ _HANDLE_SIG (SIGPIPE); -+ _HANDLE_SIG (SIGALRM); -+ _HANDLE_SIG (SIGTERM); -+ _HANDLE_SIG (SIGUSR1); -+ _HANDLE_SIG (SIGUSR2); -+ _HANDLE_SIG (SIGCHLD); -+ _HANDLE_SIG (SIGCONT); -+ _HANDLE_SIG (SIGSTOP); -+ _HANDLE_SIG (SIGTSTP); -+ _HANDLE_SIG (SIGTTIN); -+ _HANDLE_SIG (SIGTTOU); -+ _HANDLE_SIG (SIGBUS); -+#ifdef SIGPOLL -+ _HANDLE_SIG (SIGPOLL); -+#endif -+ _HANDLE_SIG (SIGPROF); -+ _HANDLE_SIG (SIGSYS); -+ _HANDLE_SIG (SIGTRAP); -+ _HANDLE_SIG (SIGURG); -+ _HANDLE_SIG (SIGVTALRM); -+ _HANDLE_SIG (SIGXCPU); -+ _HANDLE_SIG (SIGXFSZ); -+#undef _HANDLE_SIG -+ default: -+ break; -+ } -+ return "UNKNOWN_SIGNAL"; -+} -+ -+void -+polkit_backend_common_spawn_cb (GObject *source_object, -+ GAsyncResult *res, -+ gpointer user_data) -+{ -+ SpawnData *data = (SpawnData *)user_data; -+ data->res = (GAsyncResult*)g_object_ref (res); -+ g_main_loop_quit (data->loop); -+} -diff --git a/src/polkitbackend/polkitbackendcommon.h b/src/polkitbackend/polkitbackendcommon.h -new file mode 100644 -index 0000000..dd700fc ---- /dev/null -+++ 
b/src/polkitbackend/polkitbackendcommon.h -@@ -0,0 +1,158 @@ -+/* -+ * Copyright (C) 2008 Red Hat, Inc. -+ * -+ * This library is free software; you can redistribute it and/or -+ * modify it under the terms of the GNU Lesser General Public -+ * License as published by the Free Software Foundation; either -+ * version 2 of the License, or (at your option) any later version. -+ * -+ * This library is distributed in the hope that it will be useful, -+ * but WITHOUT ANY WARRANTY; without even the implied warranty of -+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -+ * Lesser General Public License for more details. -+ * -+ * You should have received a copy of the GNU Lesser General -+ * Public License along with this library; if not, write to the -+ * Free Software Foundation, Inc., 59 Temple Place, Suite 330, -+ * Boston, MA 02111-1307, USA. -+ * -+ * Author: David Zeuthen -+ */ -+ -+#if !defined (_POLKIT_BACKEND_COMPILATION) && !defined(_POLKIT_BACKEND_INSIDE_POLKIT_BACKEND_H) -+#error "Only can be included directly, this file may disappear or change contents." 
-+#endif -+ -+#ifndef __POLKIT_BACKEND_COMMON_H -+#define __POLKIT_BACKEND_COMMON_H -+ -+#include "config.h" -+#include -+#include -+#include -+#include -+#ifdef HAVE_NETGROUP_H -+#include -+#else -+#include -+#endif -+#include -+#include -+#include -+#include //here, all things glib via glib.h (including -> gspawn.h) -+ -+#include -+#include "polkitbackendjsauthority.h" -+ -+#include -+ -+#ifdef HAVE_LIBSYSTEMD -+#include -+#endif /* HAVE_LIBSYSTEMD */ -+ -+#define RUNAWAY_KILLER_TIMEOUT (15) -+ -+#ifdef __cplusplus -+extern "C" { -+#endif -+ -+enum -+{ -+ PROP_0, -+ PROP_RULES_DIRS, -+}; -+ -+typedef struct -+{ -+ GSimpleAsyncResult *simple; /* borrowed reference */ -+ GMainContext *main_context; /* may be NULL */ -+ -+ GCancellable *cancellable; /* may be NULL */ -+ gulong cancellable_handler_id; -+ -+ GPid child_pid; -+ gint child_stdout_fd; -+ gint child_stderr_fd; -+ -+ GIOChannel *child_stdout_channel; -+ GIOChannel *child_stderr_channel; -+ -+ GSource *child_watch_source; -+ GSource *child_stdout_source; -+ GSource *child_stderr_source; -+ -+ guint timeout_seconds; -+ gboolean timed_out; -+ GSource *timeout_source; -+ -+ GString *child_stdout; -+ GString *child_stderr; -+ -+ gint exit_status; -+} UtilsSpawnData; -+ -+typedef struct -+{ -+ GMainLoop *loop; -+ GAsyncResult *res; -+} SpawnData; -+ -+void polkit_backend_common_spawn (const gchar *const *argv, -+ guint timeout_seconds, -+ GCancellable *cancellable, -+ GAsyncReadyCallback callback, -+ gpointer user_data); -+void polkit_backend_common_spawn_cb (GObject *source_object, -+ GAsyncResult *res, -+ gpointer user_data); -+gboolean polkit_backend_common_spawn_finish (GAsyncResult *res, -+ gint *out_exit_status, -+ gchar **out_standard_output, -+ gchar **out_standard_error, -+ GError **error); -+ -+void polkit_backend_common_on_dir_monitor_changed (GFileMonitor *monitor, -+ GFile *file, -+ GFile *other_file, -+ GFileMonitorEvent event_type, -+ gpointer user_data); -+ -+void 
polkit_backend_common_js_authority_class_init_common (PolkitBackendJsAuthorityClass *klass); -+ -+gint polkit_backend_common_rules_file_name_cmp (const gchar *a, -+ const gchar *b); -+ -+const gchar *polkit_backend_common_get_signal_name (gint signal_number); -+ -+/* To be provided by each JS backend, from here onwards ---------------------------------------------- */ -+ -+void polkit_backend_common_reload_scripts (PolkitBackendJsAuthority *authority); -+void polkit_backend_common_js_authority_finalize (GObject *object); -+void polkit_backend_common_js_authority_constructed (GObject *object); -+GList *polkit_backend_common_js_authority_get_admin_auth_identities (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject *caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details); -+void polkit_backend_common_js_authority_set_property (GObject *object, -+ guint property_id, -+ const GValue *value, -+ GParamSpec *pspec); -+PolkitImplicitAuthorization polkit_backend_common_js_authority_check_authorization_sync (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject *caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details, -+ PolkitImplicitAuthorization implicit); -+#ifdef __cplusplus -+} -+#endif -+ -+#endif /* __POLKIT_BACKEND_COMMON_H */ -+ -diff --git a/src/polkitbackend/polkitbackendduktapeauthority.c b/src/polkitbackend/polkitbackendduktapeauthority.c -new file mode 100644 -index 0000000..c89dbcf ---- /dev/null -+++ b/src/polkitbackend/polkitbackendduktapeauthority.c -@@ -0,0 +1,1051 @@ -+/* -+ * Copyright (C) 2008-2012 Red Hat, Inc. 
-+ * Copyright (C) 2015 Tangent Space -+ * Copyright (C) 2019 Wu Xiaotian -+ * -+ * This library is free software; you can redistribute it and/or -+ * modify it under the terms of the GNU Lesser General Public -+ * License as published by the Free Software Foundation; either -+ * version 2 of the License, or (at your option) any later version. -+ * -+ * This library is distributed in the hope that it will be useful, -+ * but WITHOUT ANY WARRANTY; without even the implied warranty of -+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -+ * Lesser General Public License for more details. -+ * -+ * You should have received a copy of the GNU Lesser General -+ * Public License along with this library; if not, write to the -+ * Free Software Foundation, Inc., 59 Temple Place, Suite 330, -+ * Boston, MA 02111-1307, USA. -+ * -+ * Author: David Zeuthen -+ */ -+ -+#include -+ -+#include "polkitbackendcommon.h" -+ -+#include "duktape.h" -+ -+/* Built source and not too big to worry about deduplication */ -+#include "initjs.h" /* init.js */ -+ -+/** -+ * SECTION:polkitbackendjsauthority -+ * @title: PolkitBackendJsAuthority -+ * @short_description: JS Authority -+ * @stability: Unstable -+ * -+ * An (Duktape-based) implementation of #PolkitBackendAuthority that reads and -+ * evaluates Javascript files and supports interaction with authentication -+ * agents (virtue of being based on #PolkitBackendInteractiveAuthority). 
-+ */ -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+struct _PolkitBackendJsAuthorityPrivate -+{ -+ gchar **rules_dirs; -+ GFileMonitor **dir_monitors; /* NULL-terminated array of GFileMonitor instances */ -+ -+ duk_context *cx; -+ -+ pthread_t runaway_killer_thread; -+}; -+ -+enum -+{ -+ RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET, -+ RUNAWAY_KILLER_THREAD_EXIT_STATUS_SUCCESS, -+ RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE, -+}; -+ -+static gboolean execute_script_with_runaway_killer(PolkitBackendJsAuthority *authority, -+ const gchar *filename); -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+G_DEFINE_TYPE (PolkitBackendJsAuthority, polkit_backend_js_authority, POLKIT_BACKEND_TYPE_INTERACTIVE_AUTHORITY); -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+static duk_ret_t js_polkit_log (duk_context *cx); -+static duk_ret_t js_polkit_spawn (duk_context *cx); -+static duk_ret_t js_polkit_user_is_in_netgroup (duk_context *cx); -+ -+static const duk_function_list_entry js_polkit_functions[] = -+{ -+ { "log", js_polkit_log, 1 }, -+ { "spawn", js_polkit_spawn, 1 }, -+ { "_userIsInNetGroup", js_polkit_user_is_in_netgroup, 2 }, -+ { NULL, NULL, 0 }, -+}; -+ -+static void report_error (void *udata, -+ const char *msg) -+{ -+ PolkitBackendJsAuthority *authority = udata; -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "fatal Duktape JS backend error: %s", -+ (msg ? 
msg : "no message")); -+} -+ -+static void -+polkit_backend_js_authority_init (PolkitBackendJsAuthority *authority) -+{ -+ authority->priv = G_TYPE_INSTANCE_GET_PRIVATE (authority, -+ POLKIT_BACKEND_TYPE_JS_AUTHORITY, -+ PolkitBackendJsAuthorityPrivate); -+} -+ -+static void -+load_scripts (PolkitBackendJsAuthority *authority) -+{ -+ GList *files = NULL; -+ GList *l; -+ guint num_scripts = 0; -+ GError *error = NULL; -+ guint n; -+ -+ files = NULL; -+ -+ for (n = 0; authority->priv->rules_dirs != NULL && authority->priv->rules_dirs[n] != NULL; n++) -+ { -+ const gchar *dir_name = authority->priv->rules_dirs[n]; -+ GDir *dir = NULL; -+ -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Loading rules from directory %s", -+ dir_name); -+ -+ dir = g_dir_open (dir_name, -+ 0, -+ &error); -+ if (dir == NULL) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error opening rules directory: %s (%s, %d)", -+ error->message, g_quark_to_string (error->domain), error->code); -+ g_clear_error (&error); -+ } -+ else -+ { -+ const gchar *name; -+ while ((name = g_dir_read_name (dir)) != NULL) -+ { -+ if (g_str_has_suffix (name, ".rules")) -+ files = g_list_prepend (files, g_strdup_printf ("%s/%s", dir_name, name)); -+ } -+ g_dir_close (dir); -+ } -+ } -+ -+ files = g_list_sort (files, (GCompareFunc) polkit_backend_common_rules_file_name_cmp); -+ -+ for (l = files; l != NULL; l = l->next) -+ { -+ const gchar *filename = (gchar *)l->data; -+ -+ if (!execute_script_with_runaway_killer(authority, filename)) -+ continue; -+ num_scripts++; -+ } -+ -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Finished loading, compiling and executing %d rules", -+ num_scripts); -+ g_list_free_full (files, g_free); -+} -+ -+void -+polkit_backend_common_reload_scripts (PolkitBackendJsAuthority *authority) -+{ -+ duk_context *cx = authority->priv->cx; -+ -+ duk_set_top (cx, 0); -+ if (!duk_get_global_string (cx, "polkit")) { 
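[Editor's note: the load_scripts() path above sorts the collected .rules files with polkit_backend_common_rules_file_name_cmp; the removed rules_file_name_cmp shown later in this diff compares basenames first and falls back to the full path so that /etc wins over /usr for identically named files. A standalone sketch of that ordering, with plain strcmp in place of g_strcmp0 (the function name here is illustrative, not the exported symbol):]

```c
#include <assert.h>
#include <string.h>

/* Order rules files by basename; on a tie, the full path decides,
 * so "/etc/..." sorts before "/usr/..." for identically named files. */
static int
rules_file_name_cmp (const char *a, const char *b)
{
  const char *a_base = strrchr (a, '/');
  const char *b_base = strrchr (b, '/');
  int ret;

  assert (a_base != NULL && b_base != NULL);
  a_base += 1;
  b_base += 1;

  ret = strcmp (a_base, b_base);
  if (ret == 0)
    ret = strcmp (a, b); /* "/etc" < "/usr": /etc wins the tie */
  return ret;
}
```

This is why a 10-foo.rules in /etc/polkit-1/rules.d shadows the one in /usr/share/polkit-1/rules.d while numeric prefixes still control overall ordering.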
-+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error deleting old rules, not loading new ones"); -+ return; -+ } -+ duk_push_string (cx, "_deleteRules"); -+ -+ duk_call_prop (cx, 0, 0); -+ -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Collecting garbage unconditionally..."); -+ -+ load_scripts (authority); -+ -+ /* Let applications know we have new rules... */ -+ g_signal_emit_by_name (authority, "changed"); -+} -+ -+static void -+setup_file_monitors (PolkitBackendJsAuthority *authority) -+{ -+ guint n; -+ GPtrArray *p; -+ -+ p = g_ptr_array_new (); -+ for (n = 0; authority->priv->rules_dirs != NULL && authority->priv->rules_dirs[n] != NULL; n++) -+ { -+ GFile *file; -+ GError *error; -+ GFileMonitor *monitor; -+ -+ file = g_file_new_for_path (authority->priv->rules_dirs[n]); -+ error = NULL; -+ monitor = g_file_monitor_directory (file, -+ G_FILE_MONITOR_NONE, -+ NULL, -+ &error); -+ g_object_unref (file); -+ if (monitor == NULL) -+ { -+ g_warning ("Error monitoring directory %s: %s", -+ authority->priv->rules_dirs[n], -+ error->message); -+ g_clear_error (&error); -+ } -+ else -+ { -+ g_signal_connect (monitor, -+ "changed", -+ G_CALLBACK (polkit_backend_common_on_dir_monitor_changed), -+ authority); -+ g_ptr_array_add (p, monitor); -+ } -+ } -+ g_ptr_array_add (p, NULL); -+ authority->priv->dir_monitors = (GFileMonitor**) g_ptr_array_free (p, FALSE); -+} -+ -+void -+polkit_backend_common_js_authority_constructed (GObject *object) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (object); -+ duk_context *cx; -+ -+ cx = duk_create_heap (NULL, NULL, NULL, authority, report_error); -+ if (cx == NULL) -+ goto fail; -+ -+ authority->priv->cx = cx; -+ -+ duk_push_global_object (cx); -+ duk_push_object (cx); -+ duk_put_function_list (cx, -1, js_polkit_functions); -+ duk_put_prop_string (cx, -2, "polkit"); -+ -+ /* load polkit objects/functions into JS context (e.g. 
addRule(), -+ * _deleteRules(), _runRules() et al) -+ */ -+ duk_eval_string (cx, init_js); -+ -+ if (authority->priv->rules_dirs == NULL) -+ { -+ authority->priv->rules_dirs = g_new0 (gchar *, 3); -+ authority->priv->rules_dirs[0] = g_strdup (PACKAGE_SYSCONF_DIR "/polkit-1/rules.d"); -+ authority->priv->rules_dirs[1] = g_strdup (PACKAGE_DATA_DIR "/polkit-1/rules.d"); -+ } -+ -+ setup_file_monitors (authority); -+ load_scripts (authority); -+ -+ G_OBJECT_CLASS (polkit_backend_js_authority_parent_class)->constructed (object); -+ return; -+ -+ fail: -+ g_critical ("Error initializing JavaScript environment"); -+ g_assert_not_reached (); -+} -+ -+void -+polkit_backend_common_js_authority_finalize (GObject *object) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (object); -+ guint n; -+ -+ for (n = 0; authority->priv->dir_monitors != NULL && authority->priv->dir_monitors[n] != NULL; n++) -+ { -+ GFileMonitor *monitor = authority->priv->dir_monitors[n]; -+ g_signal_handlers_disconnect_by_func (monitor, -+ G_CALLBACK (polkit_backend_common_on_dir_monitor_changed), -+ authority); -+ g_object_unref (monitor); -+ } -+ g_free (authority->priv->dir_monitors); -+ g_strfreev (authority->priv->rules_dirs); -+ -+ duk_destroy_heap (authority->priv->cx); -+ -+ G_OBJECT_CLASS (polkit_backend_js_authority_parent_class)->finalize (object); -+} -+ -+void -+polkit_backend_common_js_authority_set_property (GObject *object, -+ guint property_id, -+ const GValue *value, -+ GParamSpec *pspec) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (object); -+ -+ switch (property_id) -+ { -+ case PROP_RULES_DIRS: -+ g_assert (authority->priv->rules_dirs == NULL); -+ authority->priv->rules_dirs = (gchar **) g_value_dup_boxed (value); -+ break; -+ -+ default: -+ G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec); -+ break; -+ } -+} -+ -+static void -+polkit_backend_js_authority_class_init (PolkitBackendJsAuthorityClass *klass) -+{ -+ 
polkit_backend_common_js_authority_class_init_common (klass); -+ g_type_class_add_private (klass, sizeof (PolkitBackendJsAuthorityPrivate)); -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+static void -+set_property_str (duk_context *cx, -+ const gchar *name, -+ const gchar *value) -+{ -+ duk_push_string (cx, value); -+ duk_put_prop_string (cx, -2, name); -+} -+ -+static void -+set_property_strv (duk_context *cx, -+ const gchar *name, -+ GPtrArray *value) -+{ -+ guint n; -+ duk_push_array (cx); -+ for (n = 0; n < value->len; n++) -+ { -+ duk_push_string (cx, g_ptr_array_index (value, n)); -+ duk_put_prop_index (cx, -2, n); -+ } -+ duk_put_prop_string (cx, -2, name); -+} -+ -+static void -+set_property_int32 (duk_context *cx, -+ const gchar *name, -+ gint32 value) -+{ -+ duk_push_int (cx, value); -+ duk_put_prop_string (cx, -2, name); -+} -+ -+static void -+set_property_bool (duk_context *cx, -+ const char *name, -+ gboolean value) -+{ -+ duk_push_boolean (cx, value); -+ duk_put_prop_string (cx, -2, name); -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+static gboolean -+push_subject (duk_context *cx, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ GError **error) -+{ -+ gboolean ret = FALSE; -+ pid_t pid; -+ uid_t uid; -+ gchar *user_name = NULL; -+ GPtrArray *groups = NULL; -+ struct passwd *passwd; -+ char *seat_str = NULL; -+ char *session_str = NULL; -+ -+ if (!duk_get_global_string (cx, "Subject")) { -+ return FALSE; -+ } -+ -+ duk_new (cx, 0); -+ -+ if (POLKIT_IS_UNIX_PROCESS (subject)) -+ { -+ pid = polkit_unix_process_get_pid (POLKIT_UNIX_PROCESS (subject)); -+ } -+ else if (POLKIT_IS_SYSTEM_BUS_NAME (subject)) -+ { -+ PolkitSubject *process; -+ process = polkit_system_bus_name_get_process_sync (POLKIT_SYSTEM_BUS_NAME (subject), NULL, 
error); -+ if (process == NULL) -+ goto out; -+ pid = polkit_unix_process_get_pid (POLKIT_UNIX_PROCESS (process)); -+ g_object_unref (process); -+ } -+ else -+ { -+ g_assert_not_reached (); -+ } -+ -+#ifdef HAVE_LIBSYSTEMD -+ if (sd_pid_get_session (pid, &session_str) == 0) -+ { -+ if (sd_session_get_seat (session_str, &seat_str) == 0) -+ { -+ /* do nothing */ -+ } -+ } -+#endif /* HAVE_LIBSYSTEMD */ -+ -+ g_assert (POLKIT_IS_UNIX_USER (user_for_subject)); -+ uid = polkit_unix_user_get_uid (POLKIT_UNIX_USER (user_for_subject)); -+ -+ groups = g_ptr_array_new_with_free_func (g_free); -+ -+ passwd = getpwuid (uid); -+ if (passwd == NULL) -+ { -+ user_name = g_strdup_printf ("%d", (gint) uid); -+ g_warning ("Error looking up info for uid %d: %m", (gint) uid); -+ } -+ else -+ { -+ gid_t gids[512]; -+ int num_gids = 512; -+ -+ user_name = g_strdup (passwd->pw_name); -+ -+ if (getgrouplist (passwd->pw_name, -+ passwd->pw_gid, -+ gids, -+ &num_gids) < 0) -+ { -+ g_warning ("Error looking up groups for uid %d: %m", (gint) uid); -+ } -+ else -+ { -+ gint n; -+ for (n = 0; n < num_gids; n++) -+ { -+ struct group *group; -+ group = getgrgid (gids[n]); -+ if (group == NULL) -+ { -+ g_ptr_array_add (groups, g_strdup_printf ("%d", (gint) gids[n])); -+ } -+ else -+ { -+ g_ptr_array_add (groups, g_strdup (group->gr_name)); -+ } -+ } -+ } -+ } -+ -+ set_property_int32 (cx, "pid", pid); -+ set_property_str (cx, "user", user_name); -+ set_property_strv (cx, "groups", groups); -+ set_property_str (cx, "seat", seat_str); -+ set_property_str (cx, "session", session_str); -+ set_property_bool (cx, "local", subject_is_local); -+ set_property_bool (cx, "active", subject_is_active); -+ -+ ret = TRUE; -+ -+ out: -+ free (session_str); -+ free (seat_str); -+ g_free (user_name); -+ if (groups != NULL) -+ g_ptr_array_unref (groups); -+ -+ return ret; -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+static gboolean 
-+push_action_and_details (duk_context *cx, -+ const gchar *action_id, -+ PolkitDetails *details, -+ GError **error) -+{ -+ gchar **keys; -+ guint n; -+ -+ if (!duk_get_global_string (cx, "Action")) { -+ return FALSE; -+ } -+ -+ duk_new (cx, 0); -+ -+ set_property_str (cx, "id", action_id); -+ -+ keys = polkit_details_get_keys (details); -+ for (n = 0; keys != NULL && keys[n] != NULL; n++) -+ { -+ gchar *key; -+ const gchar *value; -+ key = g_strdup_printf ("_detail_%s", keys[n]); -+ value = polkit_details_lookup (details, keys[n]); -+ set_property_str (cx, key, value); -+ g_free (key); -+ } -+ g_strfreev (keys); -+ -+ return TRUE; -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+typedef struct { -+ PolkitBackendJsAuthority *authority; -+ const gchar *filename; -+ pthread_cond_t cond; -+ pthread_mutex_t mutex; -+ gint ret; -+} RunawayKillerCtx; -+ -+static gpointer -+runaway_killer_thread_execute_js (gpointer user_data) -+{ -+ RunawayKillerCtx *ctx = user_data; -+ duk_context *cx = ctx->authority->priv->cx; -+ -+ int oldtype, pthread_err; -+ -+ if ((pthread_err = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (ctx->authority), -+ "Error setting thread cancel type: %s", -+ strerror(pthread_err)); -+ goto err; -+ } -+ -+ GFile *file = g_file_new_for_path(ctx->filename); -+ char *contents; -+ gsize len; -+ -+ if (!g_file_load_contents(file, NULL, &contents, &len, NULL, NULL)) { -+ polkit_backend_authority_log(POLKIT_BACKEND_AUTHORITY(ctx->authority), -+ "Error loading script %s", ctx->filename); -+ g_object_unref(file); -+ goto err; -+ } -+ -+ g_object_unref(file); -+ -+ /* evaluate the script, trying to print context in any syntax errors -+ found */ -+ if (duk_peval_lstring(cx, contents, len) != 0) -+ { -+ polkit_backend_authority_log(POLKIT_BACKEND_AUTHORITY(ctx->authority), -+ "Error compiling script %s: %s", 
ctx->filename,
-+                                 duk_safe_to_string(cx, -1));
-+      duk_pop(cx);
-+      goto free_err;
-+    }
-+  g_free(contents);
-+
-+  ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_SUCCESS;
-+  goto end;
-+
-+free_err:
-+  g_free(contents);
-+err:
-+  ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE;
-+end:
-+  if ((pthread_err = pthread_cond_signal(&ctx->cond))) {
-+    polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (ctx->authority),
-+                                  "Error signaling on condition variable: %s",
-+                                  strerror(pthread_err));
-+    ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE;
-+  }
-+  return NULL;
-+}
-+
-+static gpointer
-+runaway_killer_thread_call_js (gpointer user_data)
-+{
-+  RunawayKillerCtx *ctx = user_data;
-+  duk_context *cx = ctx->authority->priv->cx;
-+  int oldtype, pthread_err;
-+
-+  if ((pthread_err = pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype))) {
-+    polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (ctx->authority),
-+                                  "Error setting thread cancel type: %s",
-+                                  strerror(pthread_err));
-+    goto err;
-+  }
-+
-+  if (duk_pcall_prop (cx, 0, 2) != DUK_EXEC_SUCCESS)
-+    {
-+      polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (ctx->authority),
-+                                    "Error evaluating admin rules: %s",
-+                                    duk_safe_to_string (cx, -1));
-+      goto err;
-+    }
-+
-+  ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_SUCCESS;
-+  goto end;
-+
-+err:
-+  ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE;
-+end:
-+  if ((pthread_err = pthread_cond_signal(&ctx->cond))) {
-+    polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (ctx->authority),
-+                                  "Error signaling on condition variable: %s",
-+                                  strerror(pthread_err));
-+    ctx->ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE;
-+  }
-+  return NULL;
-+}
-+
-+#if defined (HAVE_PTHREAD_CONDATTR_SETCLOCK)
-+# if defined(CLOCK_MONOTONIC)
-+# define PK_CLOCK CLOCK_MONOTONIC
-+# elif defined(CLOCK_BOOTTIME)
-+# define PK_CLOCK CLOCK_BOOTTIME
-+# else
-+ /* No suitable clock */
-+# undef HAVE_PTHREAD_CONDATTR_SETCLOCK
-+# define PK_CLOCK CLOCK_REALTIME
-+# endif
-+#else /* ! HAVE_PTHREAD_CONDATTR_SETCLOCK */ -+# define PK_CLOCK CLOCK_REALTIME -+#endif /* ! HAVE_PTHREAD_CONDATTR_SETCLOCK */ -+ -+static gboolean -+runaway_killer_common(PolkitBackendJsAuthority *authority, RunawayKillerCtx *ctx, void *js_context_cb (void *user_data)) -+{ -+ int pthread_err; -+ gboolean cancel = FALSE; -+ pthread_condattr_t attr; -+ struct timespec abs_time; -+ -+#ifdef HAVE_PTHREAD_CONDATTR_SETCLOCK -+ if ((pthread_err = pthread_condattr_init(&attr))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error initializing condition variable attributes: %s", -+ strerror(pthread_err)); -+ return FALSE; -+ } -+ if ((pthread_err = pthread_condattr_setclock(&attr, PK_CLOCK))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error setting condition variable attributes: %s", -+ strerror(pthread_err)); -+ goto err_clean_condattr; -+ } -+ /* Init again, with needed attr */ -+ if ((pthread_err = pthread_cond_init(&ctx->cond, &attr))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error initializing condition variable: %s", -+ strerror(pthread_err)); -+ goto err_clean_condattr; -+ } -+#endif -+ -+ if ((pthread_err = pthread_mutex_lock(&ctx->mutex))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error locking mutex: %s", -+ strerror(pthread_err)); -+ goto err_clean_cond; -+ } -+ -+ if (clock_gettime(PK_CLOCK, &abs_time)) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error getting system's monotonic time: %s", -+ strerror(errno)); -+ goto err_clean_cond; -+ } -+ abs_time.tv_sec += RUNAWAY_KILLER_TIMEOUT; -+ -+ if ((pthread_err = pthread_create(&authority->priv->runaway_killer_thread, NULL, -+ js_context_cb, ctx))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error creating runaway JS killer thread: %s", -+ strerror(pthread_err)); -+ goto err_clean_cond; -+ } -+ -+ while (ctx->ret == 
RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET) /* loop to treat spurious wakeups */ -+ if (pthread_cond_timedwait(&ctx->cond, &ctx->mutex, &abs_time) == ETIMEDOUT) { -+ cancel = TRUE; -+ -+ /* Log that we are terminating the script */ -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Terminating runaway script after %d seconds", -+ RUNAWAY_KILLER_TIMEOUT); -+ -+ break; -+ } -+ -+ if ((pthread_err = pthread_mutex_unlock(&ctx->mutex))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error unlocking mutex: %s", -+ strerror(pthread_err)); -+ goto err_clean_cond; -+ } -+ -+ if (cancel) { -+ if ((pthread_err = pthread_cancel (authority->priv->runaway_killer_thread))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error cancelling runaway JS killer thread: %s", -+ strerror(pthread_err)); -+ goto err_clean_cond; -+ } -+ } -+ if ((pthread_err = pthread_join (authority->priv->runaway_killer_thread, NULL))) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error joining runaway JS killer thread: %s", -+ strerror(pthread_err)); -+ goto err_clean_cond; -+ } -+ -+ return ctx->ret == RUNAWAY_KILLER_THREAD_EXIT_STATUS_SUCCESS; -+ -+ err_clean_cond: -+#ifdef HAVE_PTHREAD_CONDATTR_SETCLOCK -+ pthread_cond_destroy(&ctx->cond); -+#endif -+ err_clean_condattr: -+#ifdef HAVE_PTHREAD_CONDATTR_SETCLOCK -+ pthread_condattr_destroy(&attr); -+#endif -+ return FALSE; -+} -+ -+/* Blocking for at most RUNAWAY_KILLER_TIMEOUT */ -+static gboolean -+execute_script_with_runaway_killer(PolkitBackendJsAuthority *authority, -+ const gchar *filename) -+{ -+ RunawayKillerCtx ctx = {.authority = authority, .filename = filename, -+ .ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET, -+ .mutex = PTHREAD_MUTEX_INITIALIZER, -+ .cond = PTHREAD_COND_INITIALIZER}; -+ -+ return runaway_killer_common(authority, &ctx, &runaway_killer_thread_execute_js); -+} -+ -+/* Calls already stacked function and args. 
Blocking for at most -+ * RUNAWAY_KILLER_TIMEOUT. If timeout is the case, ctx.ret will be -+ * RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET, thus returning FALSE. -+ */ -+static gboolean -+call_js_function_with_runaway_killer(PolkitBackendJsAuthority *authority) -+{ -+ RunawayKillerCtx ctx = {.authority = authority, -+ .ret = RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET, -+ .mutex = PTHREAD_MUTEX_INITIALIZER, -+ .cond = PTHREAD_COND_INITIALIZER}; -+ -+ return runaway_killer_common(authority, &ctx, &runaway_killer_thread_call_js); -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+GList * -+polkit_backend_common_js_authority_get_admin_auth_identities (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject *caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (_authority); -+ GList *ret = NULL; -+ guint n; -+ GError *error = NULL; -+ const char *ret_str = NULL; -+ gchar **ret_strs = NULL; -+ duk_context *cx = authority->priv->cx; -+ -+ duk_set_top (cx, 0); -+ if (!duk_get_global_string (cx, "polkit")) { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error deleting old rules, not loading new ones"); -+ goto out; -+ } -+ -+ duk_push_string (cx, "_runAdminRules"); -+ -+ if (!push_action_and_details (cx, action_id, details, &error)) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error converting action and details to JS object: %s", -+ error->message); -+ g_clear_error (&error); -+ goto out; -+ } -+ -+ if (!push_subject (cx, subject, user_for_subject, subject_is_local, subject_is_active, &error)) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error converting subject to JS object: %s", -+ error->message); -+ 
g_clear_error (&error); -+ goto out; -+ } -+ -+ if (!call_js_function_with_runaway_killer (authority)) -+ goto out; -+ -+ ret_str = duk_require_string (cx, -1); -+ -+ ret_strs = g_strsplit (ret_str, ",", -1); -+ for (n = 0; ret_strs != NULL && ret_strs[n] != NULL; n++) -+ { -+ const gchar *identity_str = ret_strs[n]; -+ PolkitIdentity *identity; -+ -+ error = NULL; -+ identity = polkit_identity_from_string (identity_str, &error); -+ if (identity == NULL) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Identity `%s' is not valid, ignoring: %s", -+ identity_str, error->message); -+ g_clear_error (&error); -+ } -+ else -+ { -+ ret = g_list_prepend (ret, identity); -+ } -+ } -+ ret = g_list_reverse (ret); -+ -+ out: -+ g_strfreev (ret_strs); -+ /* fallback to root password auth */ -+ if (ret == NULL) -+ ret = g_list_prepend (ret, polkit_unix_user_new (0)); -+ -+ return ret; -+} -+ -+/* ---------------------------------------------------------------------------------------------------- */ -+ -+PolkitImplicitAuthorization -+polkit_backend_common_js_authority_check_authorization_sync (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject *caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details, -+ PolkitImplicitAuthorization implicit) -+{ -+ PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (_authority); -+ PolkitImplicitAuthorization ret = implicit; -+ GError *error = NULL; -+ gchar *ret_str = NULL; -+ gboolean good = FALSE; -+ duk_context *cx = authority->priv->cx; -+ -+ duk_set_top (cx, 0); -+ if (!duk_get_global_string (cx, "polkit")) { -+ goto out; -+ } -+ -+ duk_push_string (cx, "_runRules"); -+ -+ if (!push_action_and_details (cx, action_id, details, &error)) -+ { -+ polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -+ "Error converting action and details to JS 
object: %s",
-+                                    error->message);
-+      g_clear_error (&error);
-+      goto out;
-+    }
-+
-+  if (!push_subject (cx, subject, user_for_subject, subject_is_local, subject_is_active, &error))
-+    {
-+      polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority),
-+                                    "Error converting subject to JS object: %s",
-+                                    error->message);
-+      g_clear_error (&error);
-+      goto out;
-+    }
-+
-+  // If any error in the js context happened (ctx.ret ==
-+  // RUNAWAY_KILLER_THREAD_EXIT_STATUS_FAILURE) or it never properly returned
-+  // (runaway scripts or ctx.ret == RUNAWAY_KILLER_THREAD_EXIT_STATUS_UNSET),
-+  // unauthorize
-+  if (!call_js_function_with_runaway_killer (authority))
-+    goto out;
-+
-+  if (duk_is_null(cx, -1)) {
-+    /* this is fine, means there was no match, use implicit authorizations */
-+    good = TRUE;
-+    goto out;
-+  }
-+  ret_str = g_strdup (duk_require_string (cx, -1));
-+  if (!polkit_implicit_authorization_from_string (ret_str, &ret))
-+    {
-+      polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority),
-+                                    "Returned result `%s' is not valid",
-+                                    ret_str);
-+      goto out;
-+    }
-+
-+  good = TRUE;
-+
-+ out:
-+  if (!good)
-+    ret = POLKIT_IMPLICIT_AUTHORIZATION_NOT_AUTHORIZED;
-+  if (ret_str != NULL)
-+    g_free (ret_str);
-+
-+  return ret;
-+}
-+
-+/* ---------------------------------------------------------------------------------------------------- */
-+
-+static duk_ret_t
-+js_polkit_log (duk_context *cx)
-+{
-+  const char *str = duk_require_string (cx, 0);
-+  fprintf (stderr, "%s\n", str);
-+  return 0;
-+}
-+
-+/* ---------------------------------------------------------------------------------------------------- */
-+
-+static duk_ret_t
-+js_polkit_spawn (duk_context *cx)
-+{
-+  duk_ret_t ret = DUK_RET_ERROR;
-+  gchar *standard_output = NULL;
-+  gchar *standard_error = NULL;
-+  gint exit_status;
-+  GError *error = NULL;
-+  guint32 array_len;
-+  gchar **argv = NULL;
-+  GMainContext *context = NULL;
-+  GMainLoop *loop = NULL;
-+  SpawnData data = {0};
-+  char *err_str =
NULL; -+ guint n; -+ -+ if (!duk_is_array (cx, 0)) -+ goto out; -+ -+ array_len = duk_get_length (cx, 0); -+ -+ argv = g_new0 (gchar*, array_len + 1); -+ for (n = 0; n < array_len; n++) -+ { -+ duk_get_prop_index (cx, 0, n); -+ argv[n] = g_strdup (duk_to_string (cx, -1)); -+ duk_pop (cx); -+ } -+ -+ context = g_main_context_new (); -+ loop = g_main_loop_new (context, FALSE); -+ -+ g_main_context_push_thread_default (context); -+ -+ data.loop = loop; -+ polkit_backend_common_spawn ((const gchar *const *) argv, -+ 10, /* timeout_seconds */ -+ NULL, /* cancellable */ -+ polkit_backend_common_spawn_cb, -+ &data); -+ -+ g_main_loop_run (loop); -+ -+ g_main_context_pop_thread_default (context); -+ -+ if (!polkit_backend_common_spawn_finish (data.res, -+ &exit_status, -+ &standard_output, -+ &standard_error, -+ &error)) -+ { -+ err_str = g_strdup_printf ("Error spawning helper: %s (%s, %d)", -+ error->message, g_quark_to_string (error->domain), error->code); -+ g_clear_error (&error); -+ goto out; -+ } -+ -+ if (!(WIFEXITED (exit_status) && WEXITSTATUS (exit_status) == 0)) -+ { -+ GString *gstr; -+ gstr = g_string_new (NULL); -+ if (WIFEXITED (exit_status)) -+ { -+ g_string_append_printf (gstr, -+ "Helper exited with non-zero exit status %d", -+ WEXITSTATUS (exit_status)); -+ } -+ else if (WIFSIGNALED (exit_status)) -+ { -+ g_string_append_printf (gstr, -+ "Helper was signaled with signal %s (%d)", -+ polkit_backend_common_get_signal_name (WTERMSIG (exit_status)), -+ WTERMSIG (exit_status)); -+ } -+ g_string_append_printf (gstr, ", stdout=`%s', stderr=`%s'", -+ standard_output, standard_error); -+ err_str = g_string_free (gstr, FALSE); -+ goto out; -+ } -+ -+ duk_push_string (cx, standard_output); -+ ret = 1; -+ -+ out: -+ g_strfreev (argv); -+ g_free (standard_output); -+ g_free (standard_error); -+ g_clear_object (&data.res); -+ if (loop != NULL) -+ g_main_loop_unref (loop); -+ if (context != NULL) -+ g_main_context_unref (context); -+ -+ if (err_str) -+ duk_error (cx, 
DUK_ERR_ERROR, "%s", err_str);
-+
-+  return ret;
-+}
-+
-+/* ---------------------------------------------------------------------------------------------------- */
-+
-+
-+static duk_ret_t
-+js_polkit_user_is_in_netgroup (duk_context *cx)
-+{
-+  const char *user;
-+  const char *netgroup;
-+  gboolean is_in_netgroup = FALSE;
-+
-+  user = duk_require_string (cx, 0);
-+  netgroup = duk_require_string (cx, 1);
-+
-+  if (innetgr (netgroup,
-+               NULL, /* host */
-+               user,
-+               NULL)) /* domain */
-+    {
-+      is_in_netgroup = TRUE;
-+    }
-+
-+  duk_push_boolean (cx, is_in_netgroup);
-+  return 1;
-+}
-+
-+/* ---------------------------------------------------------------------------------------------------- */
-diff --git a/src/polkitbackend/polkitbackendjsauthority.cpp b/src/polkitbackend/polkitbackendjsauthority.cpp
-index ca17108..11e91c0 100644
---- a/src/polkitbackend/polkitbackendjsauthority.cpp
-+++ b/src/polkitbackend/polkitbackendjsauthority.cpp
-@@ -19,29 +19,7 @@
- * Author: David Zeuthen
- */
-
--#include "config.h"
--#include
--#include
--#include
--#include
--#ifdef HAVE_NETGROUP_H
--#include
--#else
--#include
--#endif
--#include
--#include
--#include
--#include
--
--#include
--#include "polkitbackendjsauthority.h"
--
--#include
--
--#ifdef HAVE_LIBSYSTEMD
--#include
--#endif /* HAVE_LIBSYSTEMD */
-+#include "polkitbackendcommon.h"
-
- #include
- #include
-@@ -52,6 +30,7 @@
- #include
- #include
-
-+/* Built source and not too big to worry about deduplication */
- #include "initjs.h" /* init.js */
-
- #ifdef JSGC_USE_EXACT_ROOTING
-@@ -67,10 +46,9 @@
- * @short_description: JS Authority
- * @stability: Unstable
- *
-- * An implementation of #PolkitBackendAuthority that reads and
-- * evalates Javascript files and supports interaction with
-- * authentication agents (virtue of being based on
-- * #PolkitBackendInteractiveAuthority).
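[Editor's note: js_polkit_spawn() above reports helper failures by decoding the waitpid()-style status with WIFEXITED/WEXITSTATUS and WIFSIGNALED/WTERMSIG. A minimal fork/exec sketch of that decoding; describe_exit and run_helper are illustrative names, not polkit API, and the signal is reported by number rather than through polkit's get_signal_name helper:]

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Decodes a waitpid()-style status the way js_polkit_spawn reports
 * helper failures: returns 0 on a clean exit, otherwise writes a
 * short diagnostic into buf and returns -1. */
static int
describe_exit (int status, char *buf, size_t len)
{
  if (WIFEXITED (status) && WEXITSTATUS (status) == 0)
    return 0;
  if (WIFEXITED (status))
    snprintf (buf, len, "Helper exited with non-zero exit status %d",
              WEXITSTATUS (status));
  else if (WIFSIGNALED (status))
    snprintf (buf, len, "Helper was signaled with signal %d",
              WTERMSIG (status));
  return -1;
}

/* Runs argv[0] with PATH lookup and decodes its exit status. */
static int
run_helper (char *const argv[], char *buf, size_t len)
{
  int status = 0;
  pid_t pid = fork ();

  if (pid == 0)
    {
      execvp (argv[0], argv);
      _exit (127); /* exec failed */
    }
  waitpid (pid, &status, 0);
  return describe_exit (status, buf, len);
}
```

Note the distinction the original preserves: a helper that exits non-zero and one that dies on a signal produce different diagnostics, and both are surfaced to the rules script as a thrown error via duk_error.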
-+ * An (SpiderMonkey-based) implementation of #PolkitBackendAuthority that reads -+ * and evaluates Javascript files and supports interaction with authentication -+ * agents (virtue of being based on #PolkitBackendInteractiveAuthority). - */ - - /* ---------------------------------------------------------------------------------------------------- */ -@@ -100,57 +78,11 @@ static bool execute_script_with_runaway_killer (PolkitBackendJsAuthority *author - JS::HandleScript script, - JS::MutableHandleValue rval); - --static void utils_spawn (const gchar *const *argv, -- guint timeout_seconds, -- GCancellable *cancellable, -- GAsyncReadyCallback callback, -- gpointer user_data); -- --gboolean utils_spawn_finish (GAsyncResult *res, -- gint *out_exit_status, -- gchar **out_standard_output, -- gchar **out_standard_error, -- GError **error); -- --static void on_dir_monitor_changed (GFileMonitor *monitor, -- GFile *file, -- GFile *other_file, -- GFileMonitorEvent event_type, -- gpointer user_data); -- --/* ---------------------------------------------------------------------------------------------------- */ -- --enum --{ -- PROP_0, -- PROP_RULES_DIRS, --}; -- - /* ---------------------------------------------------------------------------------------------------- */ - - static gpointer runaway_killer_thread_func (gpointer user_data); - static void runaway_killer_terminate (PolkitBackendJsAuthority *authority); - --static GList *polkit_backend_js_authority_get_admin_auth_identities (PolkitBackendInteractiveAuthority *authority, -- PolkitSubject *caller, -- PolkitSubject *subject, -- PolkitIdentity *user_for_subject, -- gboolean subject_is_local, -- gboolean subject_is_active, -- const gchar *action_id, -- PolkitDetails *details); -- --static PolkitImplicitAuthorization polkit_backend_js_authority_check_authorization_sync ( -- PolkitBackendInteractiveAuthority *authority, -- PolkitSubject *caller, -- PolkitSubject *subject, -- PolkitIdentity *user_for_subject, -- gboolean 
subject_is_local, -- gboolean subject_is_active, -- const gchar *action_id, -- PolkitDetails *details, -- PolkitImplicitAuthorization implicit); -- - G_DEFINE_TYPE (PolkitBackendJsAuthority, polkit_backend_js_authority, POLKIT_BACKEND_TYPE_INTERACTIVE_AUTHORITY); - - /* ---------------------------------------------------------------------------------------------------- */ -@@ -229,33 +161,6 @@ polkit_backend_js_authority_init (PolkitBackendJsAuthority *authority) - PolkitBackendJsAuthorityPrivate); - } - --static gint --rules_file_name_cmp (const gchar *a, -- const gchar *b) --{ -- gint ret; -- const gchar *a_base; -- const gchar *b_base; -- -- a_base = strrchr (a, '/'); -- b_base = strrchr (b, '/'); -- -- g_assert (a_base != NULL); -- g_assert (b_base != NULL); -- a_base += 1; -- b_base += 1; -- -- ret = g_strcmp0 (a_base, b_base); -- if (ret == 0) -- { -- /* /etc wins over /usr */ -- ret = g_strcmp0 (a, b); -- g_assert (ret != 0); -- } -- -- return ret; --} -- - /* authority->priv->cx must be within a request */ - static void - load_scripts (PolkitBackendJsAuthority *authority) -@@ -299,7 +204,7 @@ load_scripts (PolkitBackendJsAuthority *authority) - } - } - -- files = g_list_sort (files, (GCompareFunc) rules_file_name_cmp); -+ files = g_list_sort (files, (GCompareFunc) polkit_backend_common_rules_file_name_cmp); - - for (l = files; l != NULL; l = l->next) - { -@@ -365,8 +270,8 @@ load_scripts (PolkitBackendJsAuthority *authority) - g_list_free_full (files, g_free); - } - --static void --reload_scripts (PolkitBackendJsAuthority *authority) -+void -+polkit_backend_common_reload_scripts (PolkitBackendJsAuthority *authority) - { - JS::RootedValueArray<1> args(authority->priv->cx); - JS::RootedValue rval(authority->priv->cx); -@@ -395,42 +300,6 @@ reload_scripts (PolkitBackendJsAuthority *authority) - g_signal_emit_by_name (authority, "changed"); - } - --static void --on_dir_monitor_changed (GFileMonitor *monitor, -- GFile *file, -- GFile *other_file, -- 
GFileMonitorEvent event_type, -- gpointer user_data) --{ -- PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (user_data); -- -- /* TODO: maybe rate-limit so storms of events are collapsed into one with a 500ms resolution? -- * Because when editing a file with emacs we get 4-8 events.. -- */ -- -- if (file != NULL) -- { -- gchar *name; -- -- name = g_file_get_basename (file); -- -- /* g_print ("event_type=%d file=%p name=%s\n", event_type, file, name); */ -- if (!g_str_has_prefix (name, ".") && -- !g_str_has_prefix (name, "#") && -- g_str_has_suffix (name, ".rules") && -- (event_type == G_FILE_MONITOR_EVENT_CREATED || -- event_type == G_FILE_MONITOR_EVENT_DELETED || -- event_type == G_FILE_MONITOR_EVENT_CHANGES_DONE_HINT)) -- { -- polkit_backend_authority_log (POLKIT_BACKEND_AUTHORITY (authority), -- "Reloading rules"); -- reload_scripts (authority); -- } -- g_free (name); -- } --} -- -- - static void - setup_file_monitors (PolkitBackendJsAuthority *authority) - { -@@ -462,7 +331,7 @@ setup_file_monitors (PolkitBackendJsAuthority *authority) - { - g_signal_connect (monitor, - "changed", -- G_CALLBACK (on_dir_monitor_changed), -+ G_CALLBACK (polkit_backend_common_on_dir_monitor_changed), - authority); - g_ptr_array_add (p, monitor); - } -@@ -471,8 +340,8 @@ setup_file_monitors (PolkitBackendJsAuthority *authority) - authority->priv->dir_monitors = (GFileMonitor**) g_ptr_array_free (p, FALSE); - } - --static void --polkit_backend_js_authority_constructed (GObject *object) -+void -+polkit_backend_common_js_authority_constructed (GObject *object) - { - PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (object); - -@@ -561,8 +430,8 @@ polkit_backend_js_authority_constructed (GObject *object) - g_assert_not_reached (); - } - --static void --polkit_backend_js_authority_finalize (GObject *object) -+void -+polkit_backend_common_js_authority_finalize (GObject *object) - { - PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY 
(object); - guint n; -@@ -577,7 +446,7 @@ polkit_backend_js_authority_finalize (GObject *object) - { - GFileMonitor *monitor = authority->priv->dir_monitors[n]; - g_signal_handlers_disconnect_by_func (monitor, -- (gpointer*)G_CALLBACK (on_dir_monitor_changed), -+ (gpointer*)G_CALLBACK (polkit_backend_common_on_dir_monitor_changed), - authority); - g_object_unref (monitor); - } -@@ -594,11 +463,11 @@ polkit_backend_js_authority_finalize (GObject *object) - G_OBJECT_CLASS (polkit_backend_js_authority_parent_class)->finalize (object); - } - --static void --polkit_backend_js_authority_set_property (GObject *object, -- guint property_id, -- const GValue *value, -- GParamSpec *pspec) -+void -+polkit_backend_common_js_authority_set_property (GObject *object, -+ guint property_id, -+ const GValue *value, -+ GParamSpec *pspec) - { - PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (object); - -@@ -615,57 +484,12 @@ polkit_backend_js_authority_set_property (GObject *object, - } - } - --static const gchar * --polkit_backend_js_authority_get_name (PolkitBackendAuthority *authority) --{ -- return "js"; --} -- --static const gchar * --polkit_backend_js_authority_get_version (PolkitBackendAuthority *authority) --{ -- return PACKAGE_VERSION; --} -- --static PolkitAuthorityFeatures --polkit_backend_js_authority_get_features (PolkitBackendAuthority *authority) --{ -- return POLKIT_AUTHORITY_FEATURES_TEMPORARY_AUTHORIZATION; --} -- - static void - polkit_backend_js_authority_class_init (PolkitBackendJsAuthorityClass *klass) - { -- GObjectClass *gobject_class; -- PolkitBackendAuthorityClass *authority_class; -- PolkitBackendInteractiveAuthorityClass *interactive_authority_class; -- -- -- gobject_class = G_OBJECT_CLASS (klass); -- gobject_class->finalize = polkit_backend_js_authority_finalize; -- gobject_class->set_property = polkit_backend_js_authority_set_property; -- gobject_class->constructed = polkit_backend_js_authority_constructed; -- -- authority_class = 
POLKIT_BACKEND_AUTHORITY_CLASS (klass); -- authority_class->get_name = polkit_backend_js_authority_get_name; -- authority_class->get_version = polkit_backend_js_authority_get_version; -- authority_class->get_features = polkit_backend_js_authority_get_features; -- -- interactive_authority_class = POLKIT_BACKEND_INTERACTIVE_AUTHORITY_CLASS (klass); -- interactive_authority_class->get_admin_identities = polkit_backend_js_authority_get_admin_auth_identities; -- interactive_authority_class->check_authorization_sync = polkit_backend_js_authority_check_authorization_sync; -- -- g_object_class_install_property (gobject_class, -- PROP_RULES_DIRS, -- g_param_spec_boxed ("rules-dirs", -- NULL, -- NULL, -- G_TYPE_STRV, -- GParamFlags(G_PARAM_CONSTRUCT_ONLY | G_PARAM_WRITABLE))); -- -+ polkit_backend_common_js_authority_class_init_common (klass); - - g_type_class_add_private (klass, sizeof (PolkitBackendJsAuthorityPrivate)); -- - JS_Init (); - } - -@@ -1005,11 +829,14 @@ runaway_killer_setup (PolkitBackendJsAuthority *authority) - { - g_assert (authority->priv->rkt_source == NULL); - -- /* set-up timer for runaway scripts, will be executed in runaway_killer_thread */ -+ /* set-up timer for runaway scripts, will be executed in -+ runaway_killer_thread, that is one, permanent thread running a glib -+ mainloop (rkt_loop) whose context (rkt_context) has a timeout source -+ (rkt_source) */ - g_mutex_lock (&authority->priv->rkt_timeout_pending_mutex); - authority->priv->rkt_timeout_pending = FALSE; - g_mutex_unlock (&authority->priv->rkt_timeout_pending_mutex); -- authority->priv->rkt_source = g_timeout_source_new_seconds (15); -+ authority->priv->rkt_source = g_timeout_source_new_seconds (RUNAWAY_KILLER_TIMEOUT); - g_source_set_callback (authority->priv->rkt_source, rkt_on_timeout, authority, NULL); - g_source_attach (authority->priv->rkt_source, authority->priv->rkt_context); - -@@ -1069,6 +896,9 @@ execute_script_with_runaway_killer (PolkitBackendJsAuthority *authority, - { - bool 
ret; - -+ // tries to JS_ExecuteScript(), may hang for > RUNAWAY_KILLER_TIMEOUT, -+ // runaway_killer_thread makes sure the call returns, due to exception -+ // injection - runaway_killer_setup (authority); - ret = JS_ExecuteScript (authority->priv->cx, - script, -@@ -1099,15 +929,15 @@ call_js_function_with_runaway_killer (PolkitBackendJsAuthority *authority, - - /* ---------------------------------------------------------------------------------------------------- */ - --static GList * --polkit_backend_js_authority_get_admin_auth_identities (PolkitBackendInteractiveAuthority *_authority, -- PolkitSubject *caller, -- PolkitSubject *subject, -- PolkitIdentity *user_for_subject, -- gboolean subject_is_local, -- gboolean subject_is_active, -- const gchar *action_id, -- PolkitDetails *details) -+GList * -+polkit_backend_common_js_authority_get_admin_auth_identities (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject *caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details) - { - PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (_authority); - GList *ret = NULL; -@@ -1202,16 +1032,16 @@ polkit_backend_js_authority_get_admin_auth_identities (PolkitBackendInteractiveA - - /* ---------------------------------------------------------------------------------------------------- */ - --static PolkitImplicitAuthorization --polkit_backend_js_authority_check_authorization_sync (PolkitBackendInteractiveAuthority *_authority, -- PolkitSubject *caller, -- PolkitSubject *subject, -- PolkitIdentity *user_for_subject, -- gboolean subject_is_local, -- gboolean subject_is_active, -- const gchar *action_id, -- PolkitDetails *details, -- PolkitImplicitAuthorization implicit) -+PolkitImplicitAuthorization -+polkit_backend_common_js_authority_check_authorization_sync (PolkitBackendInteractiveAuthority *_authority, -+ PolkitSubject 
*caller, -+ PolkitSubject *subject, -+ PolkitIdentity *user_for_subject, -+ gboolean subject_is_local, -+ gboolean subject_is_active, -+ const gchar *action_id, -+ PolkitDetails *details, -+ PolkitImplicitAuthorization implicit) - { - PolkitBackendJsAuthority *authority = POLKIT_BACKEND_JS_AUTHORITY (_authority); - PolkitImplicitAuthorization ret = implicit; -@@ -1324,65 +1154,6 @@ js_polkit_log (JSContext *cx, - - /* ---------------------------------------------------------------------------------------------------- */ - --static const gchar * --get_signal_name (gint signal_number) --{ -- switch (signal_number) -- { --#define _HANDLE_SIG(sig) case sig: return #sig; -- _HANDLE_SIG (SIGHUP); -- _HANDLE_SIG (SIGINT); -- _HANDLE_SIG (SIGQUIT); -- _HANDLE_SIG (SIGILL); -- _HANDLE_SIG (SIGABRT); -- _HANDLE_SIG (SIGFPE); -- _HANDLE_SIG (SIGKILL); -- _HANDLE_SIG (SIGSEGV); -- _HANDLE_SIG (SIGPIPE); -- _HANDLE_SIG (SIGALRM); -- _HANDLE_SIG (SIGTERM); -- _HANDLE_SIG (SIGUSR1); -- _HANDLE_SIG (SIGUSR2); -- _HANDLE_SIG (SIGCHLD); -- _HANDLE_SIG (SIGCONT); -- _HANDLE_SIG (SIGSTOP); -- _HANDLE_SIG (SIGTSTP); -- _HANDLE_SIG (SIGTTIN); -- _HANDLE_SIG (SIGTTOU); -- _HANDLE_SIG (SIGBUS); --#ifdef SIGPOLL -- _HANDLE_SIG (SIGPOLL); --#endif -- _HANDLE_SIG (SIGPROF); -- _HANDLE_SIG (SIGSYS); -- _HANDLE_SIG (SIGTRAP); -- _HANDLE_SIG (SIGURG); -- _HANDLE_SIG (SIGVTALRM); -- _HANDLE_SIG (SIGXCPU); -- _HANDLE_SIG (SIGXFSZ); --#undef _HANDLE_SIG -- default: -- break; -- } -- return "UNKNOWN_SIGNAL"; --} -- --typedef struct --{ -- GMainLoop *loop; -- GAsyncResult *res; --} SpawnData; -- --static void --spawn_cb (GObject *source_object, -- GAsyncResult *res, -- gpointer user_data) --{ -- SpawnData *data = (SpawnData *)user_data; -- data->res = (GAsyncResult*)g_object_ref (res); -- g_main_loop_quit (data->loop); --} -- - static bool - js_polkit_spawn (JSContext *cx, - unsigned js_argc, -@@ -1440,21 +1211,21 @@ js_polkit_spawn (JSContext *cx, - g_main_context_push_thread_default (context); - - 
data.loop = loop; -- utils_spawn ((const gchar *const *) argv, -- 10, /* timeout_seconds */ -- NULL, /* cancellable */ -- spawn_cb, -- &data); -+ polkit_backend_common_spawn ((const gchar *const *) argv, -+ 10, /* timeout_seconds */ -+ NULL, /* cancellable */ -+ polkit_backend_common_spawn_cb, -+ &data); - - g_main_loop_run (loop); - - g_main_context_pop_thread_default (context); - -- if (!utils_spawn_finish (data.res, -- &exit_status, -- &standard_output, -- &standard_error, -- &error)) -+ if (!polkit_backend_common_spawn_finish (data.res, -+ &exit_status, -+ &standard_output, -+ &standard_error, -+ &error)) - { - JS_ReportErrorUTF8 (cx, - "Error spawning helper: %s (%s, %d)", -@@ -1477,7 +1248,7 @@ js_polkit_spawn (JSContext *cx, - { - g_string_append_printf (gstr, - "Helper was signaled with signal %s (%d)", -- get_signal_name (WTERMSIG (exit_status)), -+ polkit_backend_common_get_signal_name (WTERMSIG (exit_status)), - WTERMSIG (exit_status)); - } - g_string_append_printf (gstr, ", stdout=`%s', stderr=`%s'", -@@ -1542,381 +1313,5 @@ js_polkit_user_is_in_netgroup (JSContext *cx, - return ret; - } - -- -- - /* ---------------------------------------------------------------------------------------------------- */ - --typedef struct --{ -- GSimpleAsyncResult *simple; /* borrowed reference */ -- GMainContext *main_context; /* may be NULL */ -- -- GCancellable *cancellable; /* may be NULL */ -- gulong cancellable_handler_id; -- -- GPid child_pid; -- gint child_stdout_fd; -- gint child_stderr_fd; -- -- GIOChannel *child_stdout_channel; -- GIOChannel *child_stderr_channel; -- -- GSource *child_watch_source; -- GSource *child_stdout_source; -- GSource *child_stderr_source; -- -- guint timeout_seconds; -- gboolean timed_out; -- GSource *timeout_source; -- -- GString *child_stdout; -- GString *child_stderr; -- -- gint exit_status; --} UtilsSpawnData; -- --static void --utils_child_watch_from_release_cb (GPid pid, -- gint status, -- gpointer user_data) --{ --} -- --static 
void --utils_spawn_data_free (UtilsSpawnData *data) --{ -- if (data->timeout_source != NULL) -- { -- g_source_destroy (data->timeout_source); -- data->timeout_source = NULL; -- } -- -- /* Nuke the child, if necessary */ -- if (data->child_watch_source != NULL) -- { -- g_source_destroy (data->child_watch_source); -- data->child_watch_source = NULL; -- } -- -- if (data->child_pid != 0) -- { -- GSource *source; -- kill (data->child_pid, SIGTERM); -- /* OK, we need to reap for the child ourselves - we don't want -- * to use waitpid() because that might block the calling -- * thread (the child might handle SIGTERM and use several -- * seconds for cleanup/rollback). -- * -- * So we use GChildWatch instead. -- * -- * Avoid taking a references to ourselves. but note that we need -- * to pass the GSource so we can nuke it once handled. -- */ -- source = g_child_watch_source_new (data->child_pid); -- g_source_set_callback (source, -- (GSourceFunc) utils_child_watch_from_release_cb, -- source, -- (GDestroyNotify) g_source_destroy); -- /* attach source to the global default main context */ -- g_source_attach (source, NULL); -- g_source_unref (source); -- data->child_pid = 0; -- } -- -- if (data->child_stdout != NULL) -- { -- g_string_free (data->child_stdout, TRUE); -- data->child_stdout = NULL; -- } -- -- if (data->child_stderr != NULL) -- { -- g_string_free (data->child_stderr, TRUE); -- data->child_stderr = NULL; -- } -- -- if (data->child_stdout_channel != NULL) -- { -- g_io_channel_unref (data->child_stdout_channel); -- data->child_stdout_channel = NULL; -- } -- if (data->child_stderr_channel != NULL) -- { -- g_io_channel_unref (data->child_stderr_channel); -- data->child_stderr_channel = NULL; -- } -- -- if (data->child_stdout_source != NULL) -- { -- g_source_destroy (data->child_stdout_source); -- data->child_stdout_source = NULL; -- } -- if (data->child_stderr_source != NULL) -- { -- g_source_destroy (data->child_stderr_source); -- data->child_stderr_source = NULL; -- 
} -- -- if (data->child_stdout_fd != -1) -- { -- g_warn_if_fail (close (data->child_stdout_fd) == 0); -- data->child_stdout_fd = -1; -- } -- if (data->child_stderr_fd != -1) -- { -- g_warn_if_fail (close (data->child_stderr_fd) == 0); -- data->child_stderr_fd = -1; -- } -- -- if (data->cancellable_handler_id > 0) -- { -- g_cancellable_disconnect (data->cancellable, data->cancellable_handler_id); -- data->cancellable_handler_id = 0; -- } -- -- if (data->main_context != NULL) -- g_main_context_unref (data->main_context); -- -- if (data->cancellable != NULL) -- g_object_unref (data->cancellable); -- -- g_slice_free (UtilsSpawnData, data); --} -- --/* called in the thread where @cancellable was cancelled */ --static void --utils_on_cancelled (GCancellable *cancellable, -- gpointer user_data) --{ -- UtilsSpawnData *data = (UtilsSpawnData *)user_data; -- GError *error; -- -- error = NULL; -- g_warn_if_fail (g_cancellable_set_error_if_cancelled (cancellable, &error)); -- g_simple_async_result_take_error (data->simple, error); -- g_simple_async_result_complete_in_idle (data->simple); -- g_object_unref (data->simple); --} -- --static gboolean --utils_read_child_stderr (GIOChannel *channel, -- GIOCondition condition, -- gpointer user_data) --{ -- UtilsSpawnData *data = (UtilsSpawnData *)user_data; -- gchar buf[1024]; -- gsize bytes_read; -- -- g_io_channel_read_chars (channel, buf, sizeof buf, &bytes_read, NULL); -- g_string_append_len (data->child_stderr, buf, bytes_read); -- return TRUE; --} -- --static gboolean --utils_read_child_stdout (GIOChannel *channel, -- GIOCondition condition, -- gpointer user_data) --{ -- UtilsSpawnData *data = (UtilsSpawnData *)user_data; -- gchar buf[1024]; -- gsize bytes_read; -- -- g_io_channel_read_chars (channel, buf, sizeof buf, &bytes_read, NULL); -- g_string_append_len (data->child_stdout, buf, bytes_read); -- return TRUE; --} -- --static void --utils_child_watch_cb (GPid pid, -- gint status, -- gpointer user_data) --{ -- UtilsSpawnData 
*data = (UtilsSpawnData *)user_data; -- gchar *buf; -- gsize buf_size; -- -- if (g_io_channel_read_to_end (data->child_stdout_channel, &buf, &buf_size, NULL) == G_IO_STATUS_NORMAL) -- { -- g_string_append_len (data->child_stdout, buf, buf_size); -- g_free (buf); -- } -- if (g_io_channel_read_to_end (data->child_stderr_channel, &buf, &buf_size, NULL) == G_IO_STATUS_NORMAL) -- { -- g_string_append_len (data->child_stderr, buf, buf_size); -- g_free (buf); -- } -- -- data->exit_status = status; -- -- /* ok, child watch is history, make sure we don't free it in spawn_data_free() */ -- data->child_pid = 0; -- data->child_watch_source = NULL; -- -- /* we're done */ -- g_simple_async_result_complete_in_idle (data->simple); -- g_object_unref (data->simple); --} -- --static gboolean --utils_timeout_cb (gpointer user_data) --{ -- UtilsSpawnData *data = (UtilsSpawnData *)user_data; -- -- data->timed_out = TRUE; -- -- /* ok, timeout is history, make sure we don't free it in spawn_data_free() */ -- data->timeout_source = NULL; -- -- /* we're done */ -- g_simple_async_result_complete_in_idle (data->simple); -- g_object_unref (data->simple); -- -- return FALSE; /* remove source */ --} -- --static void --utils_spawn (const gchar *const *argv, -- guint timeout_seconds, -- GCancellable *cancellable, -- GAsyncReadyCallback callback, -- gpointer user_data) --{ -- UtilsSpawnData *data; -- GError *error; -- -- data = g_slice_new0 (UtilsSpawnData); -- data->timeout_seconds = timeout_seconds; -- data->simple = g_simple_async_result_new (NULL, -- callback, -- user_data, -- (gpointer*)utils_spawn); -- data->main_context = g_main_context_get_thread_default (); -- if (data->main_context != NULL) -- g_main_context_ref (data->main_context); -- -- data->cancellable = cancellable != NULL ? 
(GCancellable*)g_object_ref (cancellable) : NULL; -- -- data->child_stdout = g_string_new (NULL); -- data->child_stderr = g_string_new (NULL); -- data->child_stdout_fd = -1; -- data->child_stderr_fd = -1; -- -- /* the life-cycle of UtilsSpawnData is tied to its GSimpleAsyncResult */ -- g_simple_async_result_set_op_res_gpointer (data->simple, data, (GDestroyNotify) utils_spawn_data_free); -- -- error = NULL; -- if (data->cancellable != NULL) -- { -- /* could already be cancelled */ -- error = NULL; -- if (g_cancellable_set_error_if_cancelled (data->cancellable, &error)) -- { -- g_simple_async_result_take_error (data->simple, error); -- g_simple_async_result_complete_in_idle (data->simple); -- g_object_unref (data->simple); -- goto out; -- } -- -- data->cancellable_handler_id = g_cancellable_connect (data->cancellable, -- G_CALLBACK (utils_on_cancelled), -- data, -- NULL); -- } -- -- error = NULL; -- if (!g_spawn_async_with_pipes (NULL, /* working directory */ -- (gchar **) argv, -- NULL, /* envp */ -- GSpawnFlags(G_SPAWN_SEARCH_PATH | G_SPAWN_DO_NOT_REAP_CHILD), -- NULL, /* child_setup */ -- NULL, /* child_setup's user_data */ -- &(data->child_pid), -- NULL, /* gint *stdin_fd */ -- &(data->child_stdout_fd), -- &(data->child_stderr_fd), -- &error)) -- { -- g_prefix_error (&error, "Error spawning: "); -- g_simple_async_result_take_error (data->simple, error); -- g_simple_async_result_complete_in_idle (data->simple); -- g_object_unref (data->simple); -- goto out; -- } -- -- if (timeout_seconds > 0) -- { -- data->timeout_source = g_timeout_source_new_seconds (timeout_seconds); -- g_source_set_priority (data->timeout_source, G_PRIORITY_DEFAULT); -- g_source_set_callback (data->timeout_source, utils_timeout_cb, data, NULL); -- g_source_attach (data->timeout_source, data->main_context); -- g_source_unref (data->timeout_source); -- } -- -- data->child_watch_source = g_child_watch_source_new (data->child_pid); -- g_source_set_callback (data->child_watch_source, (GSourceFunc) 
utils_child_watch_cb, data, NULL); -- g_source_attach (data->child_watch_source, data->main_context); -- g_source_unref (data->child_watch_source); -- -- data->child_stdout_channel = g_io_channel_unix_new (data->child_stdout_fd); -- g_io_channel_set_flags (data->child_stdout_channel, G_IO_FLAG_NONBLOCK, NULL); -- data->child_stdout_source = g_io_create_watch (data->child_stdout_channel, G_IO_IN); -- g_source_set_callback (data->child_stdout_source, (GSourceFunc) utils_read_child_stdout, data, NULL); -- g_source_attach (data->child_stdout_source, data->main_context); -- g_source_unref (data->child_stdout_source); -- -- data->child_stderr_channel = g_io_channel_unix_new (data->child_stderr_fd); -- g_io_channel_set_flags (data->child_stderr_channel, G_IO_FLAG_NONBLOCK, NULL); -- data->child_stderr_source = g_io_create_watch (data->child_stderr_channel, G_IO_IN); -- g_source_set_callback (data->child_stderr_source, (GSourceFunc) utils_read_child_stderr, data, NULL); -- g_source_attach (data->child_stderr_source, data->main_context); -- g_source_unref (data->child_stderr_source); -- -- out: -- ; --} -- --gboolean --utils_spawn_finish (GAsyncResult *res, -- gint *out_exit_status, -- gchar **out_standard_output, -- gchar **out_standard_error, -- GError **error) --{ -- GSimpleAsyncResult *simple = G_SIMPLE_ASYNC_RESULT (res); -- UtilsSpawnData *data; -- gboolean ret = FALSE; -- -- g_return_val_if_fail (G_IS_ASYNC_RESULT (res), FALSE); -- g_return_val_if_fail (error == NULL || *error == NULL, FALSE); -- -- g_warn_if_fail (g_simple_async_result_get_source_tag (simple) == utils_spawn); -- -- if (g_simple_async_result_propagate_error (simple, error)) -- goto out; -- -- data = (UtilsSpawnData*)g_simple_async_result_get_op_res_gpointer (simple); -- -- if (data->timed_out) -- { -- g_set_error (error, -- G_IO_ERROR, -- G_IO_ERROR_TIMED_OUT, -- "Timed out after %d seconds", -- data->timeout_seconds); -- goto out; -- } -- -- if (out_exit_status != NULL) -- *out_exit_status = 
data->exit_status; -- -- if (out_standard_output != NULL) -- *out_standard_output = g_strdup (data->child_stdout->str); -- -- if (out_standard_error != NULL) -- *out_standard_error = g_strdup (data->child_stderr->str); -- -- ret = TRUE; -- -- out: -- return ret; --} -diff --git a/test/data/etc/polkit-1/rules.d/10-testing.rules b/test/data/etc/polkit-1/rules.d/10-testing.rules -index 98bf062..e346b5d 100644 ---- a/test/data/etc/polkit-1/rules.d/10-testing.rules -+++ b/test/data/etc/polkit-1/rules.d/10-testing.rules -@@ -189,8 +189,10 @@ polkit.addRule(function(action, subject) { - ; - } catch (error) { - if (error == "Terminating runaway script") -- return polkit.Result.YES; -- return polkit.Result.NO; -+ // Inverted logic to accomodate Duktape's model as well, which -+ // will always fail with negation, on timeouts -+ return polkit.Result.NO; -+ return polkit.Result.YES; - } - } - }); -diff --git a/test/polkitbackend/test-polkitbackendjsauthority.c b/test/polkitbackend/test-polkitbackendjsauthority.c -index f97e0e0..2103b17 100644 ---- a/test/polkitbackend/test-polkitbackendjsauthority.c -+++ b/test/polkitbackend/test-polkitbackendjsauthority.c -@@ -328,7 +328,7 @@ static const RulesTestCase rules_test_cases[] = { - "net.company.run_away_script", - "unix-user:root", - NULL, -- POLKIT_IMPLICIT_AUTHORIZATION_AUTHORIZED, -+ POLKIT_IMPLICIT_AUTHORIZATION_NOT_AUTHORIZED, - }, - - { diff --git a/meta-oe/recipes-extended/polkit/polkit/0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch b/meta-oe/recipes-extended/polkit/polkit/0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch deleted file mode 100644 index 9e9755e44..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch +++ /dev/null @@ -1,63 +0,0 @@ -From 7799441b9aa55324160deefbc65f9d918b8c94c1 Mon Sep 17 00:00:00 2001 -From: Xi Ruoyao -Date: Tue, 10 Aug 2021 18:52:56 +0800 -Subject: [PATCH] jsauthority: ensure to call JS_Init() 
and JS_ShutDown()
- exactly once
-
-Before this commit, we were calling JS_Init() in
-polkit_backend_js_authority_class_init and never called JS_ShutDown.
-This is actually a misusage of SpiderMonkey API. Quote from a comment
-in js/Initialization.h (both mozjs-78 and mozjs-91):
-
-  It is currently not possible to initialize SpiderMonkey multiple
-  times (that is, calling JS_Init/JSAPI methods/JS_ShutDown in that
-  order, then doing so again).
-
-This misusage does not cause severe issues with mozjs-78. However, when
-we eventually port jsauthority to use mozjs-91, bad thing will happen:
-see the test failure mentioned in #150.
-
-This commit is tested with both mozjs-78 and mozjs-91, all tests pass
-with it.
-
-Upstream-Status: Submitted [https://gitlab.freedesktop.org/polkit/polkit/-/merge_requests/91]
-Signed-off-by: Alexander Kanavin
----
- src/polkitbackend/polkitbackendjsauthority.cpp | 10 +++++++---
- 1 file changed, 7 insertions(+), 3 deletions(-)
-
-diff --git a/src/polkitbackend/polkitbackendjsauthority.cpp b/src/polkitbackend/polkitbackendjsauthority.cpp
-index 41d8d5c..38dc001 100644
---- a/src/polkitbackend/polkitbackendjsauthority.cpp
-+++ b/src/polkitbackend/polkitbackendjsauthority.cpp
-@@ -75,6 +75,13 @@
-
- /* ---------------------------------------------------------------------------------------------------- */
-
-+static class JsInitHelperType
-+{
-+public:
-+  JsInitHelperType() { JS_Init(); }
-+  ~JsInitHelperType() { JS_ShutDown(); }
-+} JsInitHelper;
-+
- struct _PolkitBackendJsAuthorityPrivate
- {
-   gchar **rules_dirs;
-@@ -589,7 +596,6 @@ polkit_backend_js_authority_finalize (GObject *object)
-   delete authority->priv->js_polkit;
-
-   JS_DestroyContext (authority->priv->cx);
--  /* JS_ShutDown (); */
-
-   G_OBJECT_CLASS (polkit_backend_js_authority_parent_class)->finalize (object);
- }
-@@ -665,8 +671,6 @@ polkit_backend_js_authority_class_init (PolkitBackendJsAuthorityClass *klass)
-
-
-   g_type_class_add_private (klass, sizeof
(PolkitBackendJsAuthorityPrivate));
--
--  JS_Init ();
- }
-
- /* ---------------------------------------------------------------------------------------------------- */
diff --git a/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch b/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch
deleted file mode 100644
index 181aca16c..000000000
--- a/meta-oe/recipes-extended/polkit/polkit/0004-Make-netgroup-support-optional.patch
+++ /dev/null
@@ -1,253 +0,0 @@
-From a334fac72112c01cd322f7c97ef7ca21457ab52f Mon Sep 17 00:00:00 2001
-From: "A. Wilcox"
-Date: Sun, 15 May 2022 05:04:10 +0000
-Subject: [PATCH] Make netgroup support optional
-
-On at least Linux/musl and Linux/uclibc, netgroup support is not
-available. PolKit fails to compile on these systems for that reason.
-
-This change makes netgroup support conditional on the presence of the
-setnetgrent(3) function which is required for the support to work. If
-that function is not available on the system, an error will be returned
-to the administrator if unix-netgroup: is specified in configuration.
-
-(sam: rebased for Meson and Duktape.)
-
-Closes: https://gitlab.freedesktop.org/polkit/polkit/-/issues/14
-Closes: https://gitlab.freedesktop.org/polkit/polkit/-/issues/163
-Closes: https://gitlab.freedesktop.org/polkit/polkit/-/merge_requests/52
-Signed-off-by: A. Wilcox
-
-Ported back the change in configure.ac (upstream removed autotools
-support).
-Upstream-Status: Backport [https://gitlab.freedesktop.org/polkit/polkit/-/commit/b57deee8178190a7ecc75290fa13cf7daabc2c66]
-Signed-off-by: Marta Rybczynska
-
----
- configure.ac | 2 +-
- meson.build | 1 +
- src/polkit/polkitidentity.c | 17 +++++++++++++++++
- src/polkit/polkitunixnetgroup.c | 3 +++
- .../polkitbackendinteractiveauthority.c | 14 ++++++++------
- src/polkitbackend/polkitbackendjsauthority.cpp | 2 ++
- test/polkit/polkitidentitytest.c | 8 +++++++-
- test/polkit/polkitunixnetgrouptest.c | 2 ++
- .../test-polkitbackendjsauthority.c | 2 ++
- 9 files changed, 43 insertions(+), 8 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index ca4b9f2..4c5d596 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -100,7 +100,7 @@ AC_CHECK_LIB(expat,XML_ParserCreate,[EXPAT_LIBS="-lexpat"],
-   [AC_MSG_ERROR([Can't find expat library. Please install expat.])])
- AC_SUBST(EXPAT_LIBS)
-
--AC_CHECK_FUNCS(clearenv fdatasync)
-+AC_CHECK_FUNCS(clearenv fdatasync setnetgrent)
-
- if test "x$GCC" = "xyes"; then
-   LDFLAGS="-Wl,--as-needed $LDFLAGS"
-diff --git a/meson.build b/meson.build
-index 733bbff..d840926 100644
---- a/meson.build
-+++ b/meson.build
-@@ -82,6 +82,7 @@ config_h.set('_GNU_SOURCE', true)
- check_functions = [
-   'clearenv',
-   'fdatasync',
-+  'setnetgrent',
- ]
-
- foreach func: check_functions
-diff --git a/src/polkit/polkitidentity.c b/src/polkit/polkitidentity.c
-index 3aa1f7f..793f17d 100644
---- a/src/polkit/polkitidentity.c
-+++ b/src/polkit/polkitidentity.c
-@@ -182,7 +182,15 @@ polkit_identity_from_string (const gchar *str,
-     }
-   else if (g_str_has_prefix (str, "unix-netgroup:"))
-     {
-+#ifndef HAVE_SETNETGRENT
-+      g_set_error (error,
-+                   POLKIT_ERROR,
-+                   POLKIT_ERROR_FAILED,
-+                   "Netgroups are not available on this machine ('%s')",
-+                   str);
-+#else
-       identity = polkit_unix_netgroup_new (str + sizeof "unix-netgroup:" - 1);
-+#endif
-     }
-
-   if (identity == NULL && (error != NULL && *error == NULL))
-@@ -344,6 +352,14 @@
polkit_identity_new_for_gvariant (GVariant *variant, - GVariant *v; - const char *name; - -+#ifndef HAVE_SETNETGRENT -+ g_set_error (error, -+ POLKIT_ERROR, -+ POLKIT_ERROR_FAILED, -+ "Netgroups are not available on this machine"); -+ goto out; -+#else -+ - v = lookup_asv (details_gvariant, "name", G_VARIANT_TYPE_STRING, error); - if (v == NULL) - { -@@ -353,6 +369,7 @@ polkit_identity_new_for_gvariant (GVariant *variant, - name = g_variant_get_string (v, NULL); - ret = polkit_unix_netgroup_new (name); - g_variant_unref (v); -+#endif - } - else - { -diff --git a/src/polkit/polkitunixnetgroup.c b/src/polkit/polkitunixnetgroup.c -index 8a2b369..83f8d4a 100644 ---- a/src/polkit/polkitunixnetgroup.c -+++ b/src/polkit/polkitunixnetgroup.c -@@ -194,6 +194,9 @@ polkit_unix_netgroup_set_name (PolkitUnixNetgroup *group, - PolkitIdentity * - polkit_unix_netgroup_new (const gchar *name) - { -+#ifndef HAVE_SETNETGRENT -+ g_assert_not_reached(); -+#endif - g_return_val_if_fail (name != NULL, NULL); - return POLKIT_IDENTITY (g_object_new (POLKIT_TYPE_UNIX_NETGROUP, - "name", name, -diff --git a/src/polkitbackend/polkitbackendinteractiveauthority.c b/src/polkitbackend/polkitbackendinteractiveauthority.c -index 056d9a8..36c2f3d 100644 ---- a/src/polkitbackend/polkitbackendinteractiveauthority.c -+++ b/src/polkitbackend/polkitbackendinteractiveauthority.c -@@ -2233,25 +2233,26 @@ get_users_in_net_group (PolkitIdentity *group, - GList *ret; - - ret = NULL; -+#ifdef HAVE_SETNETGRENT - name = polkit_unix_netgroup_get_name (POLKIT_UNIX_NETGROUP (group)); - --#ifdef HAVE_SETNETGRENT_RETURN -+# ifdef HAVE_SETNETGRENT_RETURN - if (setnetgrent (name) == 0) - { - g_warning ("Error looking up net group with name %s: %s", name, g_strerror (errno)); - goto out; - } --#else -+# else - setnetgrent (name); --#endif -+# endif /* HAVE_SETNETGRENT_RETURN */ - - for (;;) - { --#if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD) -+# if defined(HAVE_NETBSD) || defined(HAVE_OPENBSD) - const char 
*hostname, *username, *domainname; --#else -+# else - char *hostname, *username, *domainname; --#endif -+# endif /* defined(HAVE_NETBSD) || defined(HAVE_OPENBSD) */ - PolkitIdentity *user; - GError *error = NULL; - -@@ -2282,6 +2283,7 @@ get_users_in_net_group (PolkitIdentity *group, - - out: - endnetgrent (); -+#endif /* HAVE_SETNETGRENT */ - return ret; - } - -diff --git a/src/polkitbackend/polkitbackendjsauthority.cpp b/src/polkitbackend/polkitbackendjsauthority.cpp -index 5027815..bcb040c 100644 ---- a/src/polkitbackend/polkitbackendjsauthority.cpp -+++ b/src/polkitbackend/polkitbackendjsauthority.cpp -@@ -1524,6 +1524,7 @@ js_polkit_user_is_in_netgroup (JSContext *cx, - - JS::CallArgs args = JS::CallArgsFromVp (argc, vp); - -+#ifdef HAVE_SETNETGRENT - JS::RootedString usrstr (authority->priv->cx); - usrstr = args[0].toString(); - user = JS_EncodeStringToUTF8 (cx, usrstr); -@@ -1538,6 +1539,7 @@ js_polkit_user_is_in_netgroup (JSContext *cx, - { - is_in_netgroup = true; - } -+#endif - - ret = true; - -diff --git a/test/polkit/polkitidentitytest.c b/test/polkit/polkitidentitytest.c -index e91967b..2635c4c 100644 ---- a/test/polkit/polkitidentitytest.c -+++ b/test/polkit/polkitidentitytest.c -@@ -145,11 +145,15 @@ struct ComparisonTestData comparison_test_data [] = { - {"unix-group:root", "unix-group:jane", FALSE}, - {"unix-group:jane", "unix-group:jane", TRUE}, - -+#ifdef HAVE_SETNETGRENT - {"unix-netgroup:foo", "unix-netgroup:foo", TRUE}, - {"unix-netgroup:foo", "unix-netgroup:bar", FALSE}, -+#endif - - {"unix-user:root", "unix-group:root", FALSE}, -+#ifdef HAVE_SETNETGRENT - {"unix-user:jane", "unix-netgroup:foo", FALSE}, -+#endif - - {NULL}, - }; -@@ -181,11 +185,13 @@ main (int argc, char *argv[]) - g_test_add_data_func ("/PolkitIdentity/group_string_2", "unix-group:jane", test_string); - g_test_add_data_func ("/PolkitIdentity/group_string_3", "unix-group:users", test_string); - -+#ifdef HAVE_SETNETGRENT - g_test_add_data_func 
("/PolkitIdentity/netgroup_string", "unix-netgroup:foo", test_string); -+ g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant); -+#endif - - g_test_add_data_func ("/PolkitIdentity/user_gvariant", "unix-user:root", test_gvariant); - g_test_add_data_func ("/PolkitIdentity/group_gvariant", "unix-group:root", test_gvariant); -- g_test_add_data_func ("/PolkitIdentity/netgroup_gvariant", "unix-netgroup:foo", test_gvariant); - - add_comparison_tests (); - -diff --git a/test/polkit/polkitunixnetgrouptest.c b/test/polkit/polkitunixnetgrouptest.c -index 3701ba1..e1d211e 100644 ---- a/test/polkit/polkitunixnetgrouptest.c -+++ b/test/polkit/polkitunixnetgrouptest.c -@@ -69,7 +69,9 @@ int - main (int argc, char *argv[]) - { - g_test_init (&argc, &argv, NULL); -+#ifdef HAVE_SETNETGRENT - g_test_add_func ("/PolkitUnixNetgroup/new", test_new); - g_test_add_func ("/PolkitUnixNetgroup/set_name", test_set_name); -+#endif - return g_test_run (); - } -diff --git a/test/polkitbackend/test-polkitbackendjsauthority.c b/test/polkitbackend/test-polkitbackendjsauthority.c -index f97e0e0..fc52149 100644 ---- a/test/polkitbackend/test-polkitbackendjsauthority.c -+++ b/test/polkitbackend/test-polkitbackendjsauthority.c -@@ -137,12 +137,14 @@ test_get_admin_identities (void) - "unix-group:users" - } - }, -+#ifdef HAVE_SETNETGRENT - { - "net.company.action3", - { - "unix-netgroup:foo" - } - }, -+#endif - }; - guint n; - diff --git a/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch b/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch deleted file mode 100644 index 12988ad94..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/0005-Make-netgroup-support-optional-duktape.patch +++ /dev/null @@ -1,34 +0,0 @@ -From 792f8e2151c120ec51b50a4098e4f9642409cbec Mon Sep 17 00:00:00 2001 -From: Marta Rybczynska -Date: Fri, 29 Jul 2022 11:52:59 +0200 -Subject: [PATCH] Make netgroup support 
optional - -This patch adds a fragment of the netgroup patch to apply on the duktape-related -code. This change is needed to compile with duktape+musl. - -Upstream-Status: Backport [https://gitlab.freedesktop.org/polkit/polkit/-/commit/b57deee8178190a7ecc75290fa13cf7daabc2c66] -Signed-off-by: Marta Rybczynska ---- - src/polkitbackend/polkitbackendduktapeauthority.c | 2 ++ - 1 file changed, 2 insertions(+) - -diff --git a/src/polkitbackend/polkitbackendduktapeauthority.c b/src/polkitbackend/polkitbackendduktapeauthority.c -index c89dbcf..58a5936 100644 ---- a/src/polkitbackend/polkitbackendduktapeauthority.c -+++ b/src/polkitbackend/polkitbackendduktapeauthority.c -@@ -1036,6 +1036,7 @@ js_polkit_user_is_in_netgroup (duk_context *cx) - user = duk_require_string (cx, 0); - netgroup = duk_require_string (cx, 1); - -+#ifdef HAVE_SETNETGRENT - if (innetgr (netgroup, - NULL, /* host */ - user, -@@ -1043,6 +1044,7 @@ js_polkit_user_is_in_netgroup (duk_context *cx) - { - is_in_netgroup = TRUE; - } -+#endif - - duk_push_boolean (cx, is_in_netgroup); - return 1; diff --git a/meta-oe/recipes-extended/polkit/polkit/polkit-1_pam.patch b/meta-oe/recipes-extended/polkit/polkit/polkit-1_pam.patch deleted file mode 100644 index c491abf4a..000000000 --- a/meta-oe/recipes-extended/polkit/polkit/polkit-1_pam.patch +++ /dev/null @@ -1,35 +0,0 @@ -polkit: No system-auth in OE-Core, we can use common-* in place of it. 
- -Upstream-Status:Inappropriate [configuration] - -Signed-off-by: Xiaofeng Yan - -Upstream-Status: Inappropriate [oe specific] -Rebase to 0.115 -Signed-off-by: Hongxu Jia ---- - configure.ac | 8 ++++---- - 1 file changed, 4 insertions(+), 4 deletions(-) - -diff --git a/configure.ac b/configure.ac -index 36df239..8b3e1b1 100644 ---- a/configure.ac -+++ b/configure.ac -@@ -471,10 +471,10 @@ elif test x$with_os_type = xfreebsd -o x$with_os_type = xnetbsd; then - PAM_FILE_INCLUDE_PASSWORD=system - PAM_FILE_INCLUDE_SESSION=system - else -- PAM_FILE_INCLUDE_AUTH=system-auth -- PAM_FILE_INCLUDE_ACCOUNT=system-auth -- PAM_FILE_INCLUDE_PASSWORD=system-auth -- PAM_FILE_INCLUDE_SESSION=system-auth -+ PAM_FILE_INCLUDE_AUTH=common-auth -+ PAM_FILE_INCLUDE_ACCOUNT=common-account -+ PAM_FILE_INCLUDE_PASSWORD=common-password -+ PAM_FILE_INCLUDE_SESSION=common-session - fi - - AC_SUBST(PAM_FILE_INCLUDE_AUTH) --- -2.7.4 - diff --git a/meta-oe/recipes-extended/polkit/polkit_0.119.bb b/meta-oe/recipes-extended/polkit/polkit_0.119.bb deleted file mode 100644 index c4d3d25af..000000000 --- a/meta-oe/recipes-extended/polkit/polkit_0.119.bb +++ /dev/null @@ -1,79 +0,0 @@ -SUMMARY = "PolicyKit Authorization Framework" -DESCRIPTION = "The polkit package is an application-level toolkit for defining and handling the policy that allows unprivileged processes to speak to privileged processes." 
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/polkit" -LICENSE = "LGPL-2.0-or-later" -LIC_FILES_CHKSUM = "file://COPYING;md5=155db86cdbafa7532b41f390409283eb \ - file://src/polkit/polkit.h;beginline=1;endline=20;md5=0a8630b0133176d0504c87a0ded39db4" - -DEPENDS = "expat glib-2.0 intltool-native" - -inherit autotools gtk-doc pkgconfig useradd systemd gobject-introspection features_check - -REQUIRED_DISTRO_FEATURES = "polkit" - -PACKAGECONFIG = "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)} \ - ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'systemd', \ - bb.utils.contains('DISTRO_FEATURES', 'x11', 'consolekit', '', d), d)} \ - mozjs \ - " - -PACKAGECONFIG[pam] = "--with-authfw=pam,--with-authfw=shadow,libpam,libpam" -PACKAGECONFIG[systemd] = "--enable-libsystemd-login=yes --with-systemdsystemunitdir=${systemd_unitdir}/system/,--enable-libsystemd-login=no --with-systemdsystemunitdir=,systemd" -# there is no --enable/--disable option for consolekit and it's not picked by shlibs, so add it to RDEPENDS -PACKAGECONFIG[consolekit] = ",,,consolekit" - -# Default to mozjs javascript library -PACKAGECONFIG[mozjs] = ",,mozjs-91,,,duktape" -# duktape javascript engine is much smaller and faster but is not compatible with -# same javascript standards as mozjs. For example array.includes() function is not -# supported. Test rule compatibility when switching to duktape. 
-PACKAGECONFIG[duktape] = "--with-duktape,,duktape,,,mozjs" - -MOZJS_PATCHES = "\ - file://0002-jsauthority-port-to-mozjs-91.patch \ - file://0003-jsauthority-ensure-to-call-JS_Init-and-JS_ShutDown-e.patch \ -" -DUKTAPE_PATCHES = "file://0003-Added-support-for-duktape-as-JS-engine.patch" -DUKTAPE_NG_PATCHES = "file://0005-Make-netgroup-support-optional-duktape.patch" -PAM_SRC_URI = "file://polkit-1_pam.patch" -SRC_URI = "http://www.freedesktop.org/software/polkit/releases/polkit-${PV}.tar.gz \ - ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \ - ${@bb.utils.contains('PACKAGECONFIG', 'mozjs', '${MOZJS_PATCHES}', '', d)} \ - ${@bb.utils.contains('PACKAGECONFIG', 'duktape', '${DUKTAPE_PATCHES}', '', d)} \ - file://0001-pkexec-local-privilege-escalation-CVE-2021-4034.patch \ - file://0002-CVE-2021-4115-GHSL-2021-077-fix.patch \ - file://0004-Make-netgroup-support-optional.patch \ - ${@bb.utils.contains('PACKAGECONFIG', 'duktape', '${DUKTAPE_NG_PATCHES}', '', d)} \ - " -SRC_URI[sha256sum] = "c8579fdb86e94295404211285fee0722ad04893f0213e571bd75c00972fd1f5c" - -EXTRA_OECONF = "--with-os-type=moblin \ - --disable-man-pages \ - --disable-libelogind \ - " - -do_configure:prepend () { - rm -f ${S}/buildutil/lt*.m4 ${S}/buildutil/libtool.m4 -} - -do_compile:prepend () { - export GIR_EXTRA_LIBS_PATH="${B}/src/polkit/.libs" -} - -PACKAGES =+ "${PN}-examples" - -FILES:${PN}:append = " \ - ${libdir}/${BPN}-1 \ - ${nonarch_libdir}/${BPN}-1 \ - ${datadir}/dbus-1 \ - ${datadir}/${BPN}-1 \ - ${datadir}/gettext \ -" - -FILES:${PN}-examples = "${bindir}/*example*" - -USERADD_PACKAGES = "${PN}" -USERADD_PARAM:${PN} = "--system --no-create-home --user-group --home-dir ${sysconfdir}/${BPN}-1 --shell /bin/nologin polkitd" - -SYSTEMD_SERVICE:${PN} = "${BPN}.service" -SYSTEMD_AUTO_ENABLE = "disable" From patchwork Fri Dec 22 15:11:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander 
Kanavin X-Patchwork-Id: 36867
From: Alexander Kanavin X-Google-Original-From: Alexander Kanavin To: openembedded-devel@lists.openembedded.org Cc: Alexander Kanavin Subject: [PATCH 3/9] mozjs-115: split the way-too-long PYTHONPATH line Date: Fri, 22 Dec 2023 16:11:02 +0100 Message-Id: <20231222151108.645675-3-alex@linutronix.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231222151108.645675-1-alex@linutronix.de> References: <20231222151108.645675-1-alex@linutronix.de> MIME-Version: 1.0 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107754 Signed-off-by: Alexander Kanavin --- .../recipes-extended/mozjs/mozjs-115_115.2.0.bb | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb b/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb index fcdf64c93..d0acabd8b 100644 ---
a/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb +++ b/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb @@ -28,7 +28,22 @@ DEPENDS:remove:powerpc:toolchain-clang = "icu" B = "${WORKDIR}/build" -export PYTHONPATH = "${S}/build:${S}/third_party/python/PyYAML/lib3:${S}/testing/mozbase/mozfile:${S}/python/mozboot:${S}/third_party/python/distro:${S}/testing/mozbase/mozinfo:${S}/config:${S}/testing/mozbase/manifestparser:${S}/third_party/python/pytoml:${S}/testing/mozbase/mozprocess:${S}/third_party/python/six:${S}/python/mozbuild:${S}/python/mozbuild/mozbuild:${S}/python/mach:${S}/third_party/python/jsmin:${S}/python/mozversioncontrol" +export PYTHONPATH = "${S}/build:\ +${S}/third_party/python/PyYAML/lib3:\ +${S}/testing/mozbase/mozfile:\ +${S}/python/mozboot:\ +${S}/third_party/python/distro:\ +${S}/testing/mozbase/mozinfo:\ +${S}/config:\ +${S}/testing/mozbase/manifestparser:\ +${S}/third_party/python/pytoml:\ +${S}/testing/mozbase/mozprocess:\ +${S}/third_party/python/six:\ +${S}/python/mozbuild:\ +${S}/python/mozbuild/mozbuild:\ +${S}/python/mach:\ +${S}/third_party/python/jsmin:\ +${S}/python/mozversioncontrol" export HOST_CC = "${BUILD_CC}" export HOST_CXX = "${BUILD_CXX}" From patchwork Fri Dec 22 15:11:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kanavin X-Patchwork-Id: 36866
From: Alexander Kanavin X-Google-Original-From: Alexander Kanavin To: openembedded-devel@lists.openembedded.org Cc: Alexander Kanavin Subject: [PATCH 4/9] polkit: update mozjs dependency 102 -> 115 Date: Fri, 22 Dec 2023 16:11:03 +0100 Message-Id: <20231222151108.645675-4-alex@linutronix.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231222151108.645675-1-alex@linutronix.de> References: <20231222151108.645675-1-alex@linutronix.de> MIME-Version: 1.0 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107755 This will allow dropping mozjs-102 instead of attempting to make it work with python 3.12. Signed-off-by: Alexander Kanavin --- .../0001-jsauthority-Bump-mozjs-to-115.patch | 26 +++++++++++++++++++ meta-oe/recipes-extended/polkit/polkit_123.bb | 10 +++---- 2 files changed, 31 insertions(+), 5 deletions(-) create mode 100644 meta-oe/recipes-extended/polkit/polkit/0001-jsauthority-Bump-mozjs-to-115.patch diff --git a/meta-oe/recipes-extended/polkit/polkit/0001-jsauthority-Bump-mozjs-to-115.patch b/meta-oe/recipes-extended/polkit/polkit/0001-jsauthority-Bump-mozjs-to-115.patch new file mode 100644 index 000000000..163a03cfc --- /dev/null +++ b/meta-oe/recipes-extended/polkit/polkit/0001-jsauthority-Bump-mozjs-to-115.patch @@ -0,0 +1,26 @@ +From 2f0de2a831ab106fce210c1d65baef041256bc18 Mon Sep 17 00:00:00 2001 +From: Xi Ruoyao +Date: Mon, 18 Sep 2023 01:53:04 +0800 +Subject: [PATCH] jsauthority: Bump mozjs to 115 + +No code change is needed!
+ +Upstream-Status: Backport [https://gitlab.freedesktop.org/polkit/polkit/-/commit/b340f50b7bb963863ede7c63f9a0b5c50c80c1e1] +Signed-off-by: Alexander Kanavin +--- + meson.build | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/meson.build b/meson.build +index 3b96562..92b68fd 100644 +--- a/meson.build ++++ b/meson.build +@@ -153,7 +153,7 @@ if js_engine == 'duktape' + func = 'pthread_condattr_setclock' + config_h.set('HAVE_' + func.to_upper(), cc.has_function(func, prefix : '#include ')) + elif js_engine == 'mozjs' +- js_dep = dependency('mozjs-102') ++ js_dep = dependency('mozjs-115') + + _system = host_machine.system().to_lower() + if _system.contains('freebsd') diff --git a/meta-oe/recipes-extended/polkit/polkit_123.bb b/meta-oe/recipes-extended/polkit/polkit_123.bb index 4fc23559f..670fd995f 100644 --- a/meta-oe/recipes-extended/polkit/polkit_123.bb +++ b/meta-oe/recipes-extended/polkit/polkit_123.bb @@ -4,10 +4,10 @@ HOMEPAGE = "http://www.freedesktop.org/wiki/Software/polkit" LICENSE = "LGPL-2.0-or-later" LIC_FILES_CHKSUM = "file://COPYING;md5=155db86cdbafa7532b41f390409283eb" -SRC_URI = " \ - git://gitlab.freedesktop.org/polkit/polkit.git;protocol=https;branch=master \ - file://0001-polkit.service.in-disable-MemoryDenyWriteExecute.patch \ -" +SRC_URI = "git://gitlab.freedesktop.org/polkit/polkit.git;protocol=https;branch=master \ + file://0001-polkit.service.in-disable-MemoryDenyWriteExecute.patch \ + file://0001-jsauthority-Bump-mozjs-to-115.patch \ + " S = "${WORKDIR}/git" SRCREV = "fc8b07e71d99f88a29258cde99b913b44da1846d" @@ -31,7 +31,7 @@ PACKAGECONFIG[systemd] = "-Dsession_tracking=libsystemd-login,-Dsession_tracking PACKAGECONFIG[consolekit] = ",,,consolekit" # Default to mozjs javascript library -PACKAGECONFIG[mozjs] = "-Djs_engine=mozjs,,mozjs-102,,,duktape" +PACKAGECONFIG[mozjs] = "-Djs_engine=mozjs,,mozjs-115,,,duktape" # duktape javascript engine is much smaller and faster but is not compatible with # same javascript 
standards as mozjs. For example array.includes() function is not # supported. Test rule compatibility when switching to duktape. From patchwork Fri Dec 22 15:11:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kanavin X-Patchwork-Id: 36870 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 922CAC4706F for ; Fri, 22 Dec 2023 15:11:36 +0000 (UTC) Received: from mail-ed1-f42.google.com (mail-ed1-f42.google.com [209.85.208.42]) by mx.groups.io with SMTP id smtpd.web11.24911.1703257890328734555 for ; Fri, 22 Dec 2023 07:11:31 -0800 Authentication-Results: mx.groups.io; dkim=pass header.i=@gmail.com header.s=20230601 header.b=HsWVqtsC; spf=pass (domain: gmail.com, ip: 209.85.208.42, mailfrom: alex.kanavin@gmail.com) Received: by mail-ed1-f42.google.com with SMTP id 4fb4d7f45d1cf-54cb4fa667bso2450573a12.3 for ; Fri, 22 Dec 2023 07:11:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1703257889; x=1703862689; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=pcGs7JLt8d655nsBpymc8UaznSg9VNyptGxOEQXwbMI=; b=HsWVqtsCGsyiS8fqZFk1z5H6yVVJmxxPGWKftWR7IZWLw5aMn35UvlWyq0YdcS5g26 RY2CdTjCjO5JvCLwSqe0KkRCjfB6HKFMhuipgxIuUWlXpxog6YkZwRayXrERsoSM18fC d3Qr6XLEe0Q/iGNtwGdJBd/2IMu2Alx+yjc+3jODQuIAXtFMyIp91ArnurvsZ0xcP3vZ oiZyfqjmtU9t6N+TbRA7jLpRBN+eQh0dzRZY5QcO/zKr+5UOYqbIBxlqCpJF1AK33Q8u 0t9c7D9/vGsErghFNfUpXk7LHL0HJ4l6Zm7h5RQdGPlwtEQE8Ax6IJrn/9hhAnsw4Duo 4DUg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703257889; x=1703862689; 
From: Alexander Kanavin X-Google-Original-From: Alexander Kanavin To: openembedded-devel@lists.openembedded.org Cc: Alexander Kanavin Subject: [PATCH 5/9] mozjs-115: backport py 3.12 compatibility Date: Fri, 22 Dec 2023 16:11:04 +0100 Message-Id: <20231222151108.645675-5-alex@linutronix.de> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231222151108.645675-1-alex@linutronix.de> References: <20231222151108.645675-1-alex@linutronix.de> MIME-Version: 1.0 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107758 Signed-off-by: Alexander Kanavin --- .../mozjs/mozjs-115/py3.12.patch | 2496 +++++++++++++++++ .../mozjs/mozjs-115_115.2.0.bb | 1 + 2 files changed, 2497 insertions(+) create mode 100644 meta-oe/recipes-extended/mozjs/mozjs-115/py3.12.patch diff --git a/meta-oe/recipes-extended/mozjs/mozjs-115/py3.12.patch b/meta-oe/recipes-extended/mozjs/mozjs-115/py3.12.patch new file mode 100644 index 000000000..732c8ff1c --- /dev/null +++ b/meta-oe/recipes-extended/mozjs/mozjs-115/py3.12.patch @@ -0,0 +1,2496 @@ +From 7293cfae4fd68004901825ad1cabb83424d8729a Mon Sep 17 00:00:00 2001 +From: serge-sans-paille +Date: Mon, 16 Oct 2023 13:35:24 +0000 +Subject: [PATCH] Bug 1857492 - Upgrade vendored version of six and urllib3 + r=saschanaz + +six -> 1.16 +urllib3 -> 1.26.17 + +Differential Revision: https://phabricator.services.mozilla.com/D190288 +Upstream-Status: Backport [https://github.com/mozilla/gecko-dev/commit/7293cfae4fd68004901825ad1cabb83424d8729a] +Signed-off-by: Alexander Kanavin +--- + third_party/python/poetry.lock | 22 +-- +
third_party/python/requirements.in | 4 +- + third_party/python/requirements.txt | 12 +- + .../python/six/six-1.13.0.dist-info/RECORD | 6 - + .../LICENSE | 2 +- + .../METADATA | 9 +- + .../python/six/six-1.16.0.dist-info/RECORD | 6 + + .../six-1.16.0.dist-info}/WHEEL | 2 +- + .../top_level.txt | 0 + third_party/python/six/six.py | 91 ++++++--- + .../urllib3/urllib3-1.26.0.dist-info/RECORD | 44 ----- + .../LICENSE.txt | 0 + .../METADATA | 177 ++++++++++++++++-- + .../urllib3/urllib3-1.26.17.dist-info/RECORD | 44 +++++ + .../urllib3-1.26.17.dist-info}/WHEEL | 2 +- + .../top_level.txt | 0 + .../python/urllib3/urllib3/__init__.py | 17 ++ + .../python/urllib3/urllib3/_version.py | 2 +- + .../python/urllib3/urllib3/connection.py | 62 ++++-- + .../python/urllib3/urllib3/connectionpool.py | 97 ++++++++-- + .../contrib/_securetransport/bindings.py | 2 +- + .../contrib/_securetransport/low_level.py | 1 + + .../urllib3/urllib3/contrib/appengine.py | 4 +- + .../urllib3/urllib3/contrib/ntlmpool.py | 13 +- + .../urllib3/urllib3/contrib/pyopenssl.py | 19 +- + .../urllib3/contrib/securetransport.py | 5 +- + .../python/urllib3/urllib3/contrib/socks.py | 2 +- + .../python/urllib3/urllib3/exceptions.py | 12 +- + .../urllib3/urllib3/packages/__init__.py | 5 - + .../packages/backports/weakref_finalize.py | 155 +++++++++++++++ + .../python/urllib3/urllib3/packages/six.py | 125 +++++++++---- + .../packages/ssl_match_hostname/__init__.py | 22 --- + .../python/urllib3/urllib3/poolmanager.py | 3 +- + third_party/python/urllib3/urllib3/request.py | 21 +++ + .../python/urllib3/urllib3/response.py | 72 ++++++- + .../python/urllib3/urllib3/util/connection.py | 5 +- + .../python/urllib3/urllib3/util/proxy.py | 1 + + .../python/urllib3/urllib3/util/request.py | 5 +- + .../python/urllib3/urllib3/util/retry.py | 37 +++- + .../python/urllib3/urllib3/util/ssl_.py | 53 ++++-- + .../ssl_match_hostname.py} | 15 +- + .../urllib3/urllib3/util/ssltransport.py | 6 +- + 
.../python/urllib3/urllib3/util/timeout.py | 9 +- + .../python/urllib3/urllib3/util/url.py | 17 +- + .../python/urllib3/urllib3/util/wait.py | 1 - + 45 files changed, 934 insertions(+), 275 deletions(-) + delete mode 100644 third_party/python/six/six-1.13.0.dist-info/RECORD + rename third_party/python/six/{six-1.13.0.dist-info => six-1.16.0.dist-info}/LICENSE (96%) + rename third_party/python/six/{six-1.13.0.dist-info => six-1.16.0.dist-info}/METADATA (85%) + create mode 100644 third_party/python/six/six-1.16.0.dist-info/RECORD + rename third_party/python/{urllib3/urllib3-1.26.0.dist-info => six/six-1.16.0.dist-info}/WHEEL (70%) + rename third_party/python/six/{six-1.13.0.dist-info => six-1.16.0.dist-info}/top_level.txt (100%) + delete mode 100644 third_party/python/urllib3/urllib3-1.26.0.dist-info/RECORD + rename third_party/python/urllib3/{urllib3-1.26.0.dist-info => urllib3-1.26.17.dist-info}/LICENSE.txt (100%) + rename third_party/python/urllib3/{urllib3-1.26.0.dist-info => urllib3-1.26.17.dist-info}/METADATA (86%) + create mode 100644 third_party/python/urllib3/urllib3-1.26.17.dist-info/RECORD + rename third_party/python/{six/six-1.13.0.dist-info => urllib3/urllib3-1.26.17.dist-info}/WHEEL (70%) + rename third_party/python/urllib3/{urllib3-1.26.0.dist-info => urllib3-1.26.17.dist-info}/top_level.txt (100%) + create mode 100644 third_party/python/urllib3/urllib3/packages/backports/weakref_finalize.py + delete mode 100644 third_party/python/urllib3/urllib3/packages/ssl_match_hostname/__init__.py + rename third_party/python/urllib3/urllib3/{packages/ssl_match_hostname/_implementation.py => util/ssl_match_hostname.py} (92%) + +diff --git a/third_party/python/poetry.lock b/third_party/python/poetry.lock +index 3d50174e58bcb..b4a8455d20fb4 100644 +--- a/third_party/python/poetry.lock ++++ b/third_party/python/poetry.lock +@@ -1333,14 +1333,14 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs ( + + [[package]] + name = "six" +-version 
= "1.13.0" ++version = "1.16.0" + description = "Python 2 and 3 compatibility utilities" + category = "main" + optional = false +-python-versions = ">=2.6, !=3.0.*, !=3.1.*" ++python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" + files = [ +- {file = "six-1.13.0-py2.py3-none-any.whl", hash = "sha256:1f1b7d42e254082a9db6279deae68afb421ceba6158efa6131de7b3003ee93fd"}, +- {file = "six-1.13.0.tar.gz", hash = "sha256:30f610279e8b2578cab6db20741130331735c781b56053c59c4076da27f06b66"}, ++ {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, ++ {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, + ] + + [[package]] +@@ -1491,19 +1491,19 @@ files = [ + + [[package]] + name = "urllib3" +-version = "1.26.0" ++version = "1.26.17" + description = "HTTP library with thread-safe connection pooling, file post, and more." + category = "main" + optional = false +-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4" ++python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" + files = [ +- {file = "urllib3-1.26.0-py2.py3-none-any.whl", hash = "sha256:bad31cb622ceee0ab46c4c884cf61957def0ff2e644de0a7a093678844c9ccac"}, +- {file = "urllib3-1.26.0.tar.gz", hash = "sha256:4849f132941d68144df0a3785ccc4fe423430ba5db0108d045c8cadbc90f517a"}, ++ {file = "urllib3-1.26.17-py2.py3-none-any.whl", hash = "sha256:94a757d178c9be92ef5539b8840d48dc9cf1b2709c9d6b588232a055c524458b"}, ++ {file = "urllib3-1.26.17.tar.gz", hash = "sha256:24d6a242c28d29af46c3fae832c36db3bbebcc533dd1bb549172cd739c82df21"}, + ] + + [package.extras] +-brotli = ["brotlipy (>=0.6.0)"] +-secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)"] ++brotli = ["brotli (==1.0.9)", "brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"] ++secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", 
"ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"] + socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] + + [[package]] +diff --git a/third_party/python/six/six-1.13.0.dist-info/RECORD b/third_party/python/six/six-1.13.0.dist-info/RECORD +deleted file mode 100644 +index a0e6c1fd4bd99..0000000000000 +--- a/third_party/python/six/six-1.13.0.dist-info/RECORD ++++ /dev/null +@@ -1,6 +0,0 @@ +-six.py,sha256=bsEzSFTZTx49wQttLORmSZTrpjGc8UbXt-HBa_LZX7Q,33045 +-six-1.13.0.dist-info/LICENSE,sha256=t1KbjAcXGniow2wyg5BVKOSBKUXZd9El65JujMvyRbY,1066 +-six-1.13.0.dist-info/METADATA,sha256=hxS4rSPRfO8ewbcLS30anoFi6LFgUQ3mk_xknZ8RV4w,1940 +-six-1.13.0.dist-info/WHEEL,sha256=8zNYZbwQSXoB9IfXOjPfeNwvAsALAjffgk27FqvCWbo,110 +-six-1.13.0.dist-info/top_level.txt,sha256=_iVH_iYEtEXnD8nYGQYpYFUvkUW9sEO1GYbkeKSAais,4 +-six-1.13.0.dist-info/RECORD,, +diff --git a/third_party/python/six/six-1.13.0.dist-info/LICENSE b/third_party/python/six/six-1.16.0.dist-info/LICENSE +similarity index 96% +rename from third_party/python/six/six-1.13.0.dist-info/LICENSE +rename to third_party/python/six/six-1.16.0.dist-info/LICENSE +index 4b05a545261c0..de6633112c1f9 100644 +--- a/third_party/python/six/six-1.13.0.dist-info/LICENSE ++++ b/third_party/python/six/six-1.16.0.dist-info/LICENSE +@@ -1,4 +1,4 @@ +-Copyright (c) 2010-2019 Benjamin Peterson ++Copyright (c) 2010-2020 Benjamin Peterson + + Permission is hereby granted, free of charge, to any person obtaining a copy of + this software and associated documentation files (the "Software"), to deal in +diff --git a/third_party/python/six/six-1.13.0.dist-info/METADATA b/third_party/python/six/six-1.16.0.dist-info/METADATA +similarity index 85% +rename from third_party/python/six/six-1.13.0.dist-info/METADATA +rename to third_party/python/six/six-1.16.0.dist-info/METADATA +index b0c8f51e1f366..6d7525c2ebcfe 100644 +--- a/third_party/python/six/six-1.13.0.dist-info/METADATA ++++ b/third_party/python/six/six-1.16.0.dist-info/METADATA +@@ -1,6 +1,6 @@ + 
Metadata-Version: 2.1 + Name: six +-Version: 1.13.0 ++Version: 1.16.0 + Summary: Python 2 and 3 compatibility utilities + Home-page: https://github.com/benjaminp/six + Author: Benjamin Peterson +@@ -14,7 +14,7 @@ Classifier: Intended Audience :: Developers + Classifier: License :: OSI Approved :: MIT License + Classifier: Topic :: Software Development :: Libraries + Classifier: Topic :: Utilities +-Requires-Python: >=2.6, !=3.0.*, !=3.1.* ++Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.* + + .. image:: https://img.shields.io/pypi/v/six.svg + :target: https://pypi.org/project/six/ +@@ -37,7 +37,7 @@ for smoothing over the differences between the Python versions with the goal of + writing Python code that is compatible on both Python versions. See the + documentation for more information on what is provided. + +-Six supports every Python version since 2.6. It is contained in only one Python ++Six supports Python 2.7 and 3.3+. It is contained in only one Python + file, so it can be easily copied into your project. (The copyright and license + notice must be retained.) + +@@ -46,7 +46,4 @@ Online documentation is at https://six.readthedocs.io/. + Bugs can be reported to https://github.com/benjaminp/six. The code can also + be found there. 
+ +-For questions about six or porting in general, email the python-porting mailing +-list: https://mail.python.org/mailman/listinfo/python-porting +- + +diff --git a/third_party/python/six/six-1.16.0.dist-info/RECORD b/third_party/python/six/six-1.16.0.dist-info/RECORD +new file mode 100644 +index 0000000000000..8de4af79fae0b +--- /dev/null ++++ b/third_party/python/six/six-1.16.0.dist-info/RECORD +@@ -0,0 +1,6 @@ ++six.py,sha256=TOOfQi7nFGfMrIvtdr6wX4wyHH8M7aknmuLfo2cBBrM,34549 ++six-1.16.0.dist-info/LICENSE,sha256=i7hQxWWqOJ_cFvOkaWWtI9gq3_YPI5P8J2K2MYXo5sk,1066 ++six-1.16.0.dist-info/METADATA,sha256=VQcGIFCAEmfZcl77E5riPCN4v2TIsc_qtacnjxKHJoI,1795 ++six-1.16.0.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110 ++six-1.16.0.dist-info/top_level.txt,sha256=_iVH_iYEtEXnD8nYGQYpYFUvkUW9sEO1GYbkeKSAais,4 ++six-1.16.0.dist-info/RECORD,, +diff --git a/third_party/python/urllib3/urllib3-1.26.0.dist-info/WHEEL b/third_party/python/six/six-1.16.0.dist-info/WHEEL +similarity index 70% +rename from third_party/python/urllib3/urllib3-1.26.0.dist-info/WHEEL +rename to third_party/python/six/six-1.16.0.dist-info/WHEEL +index 6d38aa0601b31..01b8fc7d4a10c 100644 +--- a/third_party/python/urllib3/urllib3-1.26.0.dist-info/WHEEL ++++ b/third_party/python/six/six-1.16.0.dist-info/WHEEL +@@ -1,5 +1,5 @@ + Wheel-Version: 1.0 +-Generator: bdist_wheel (0.35.1) ++Generator: bdist_wheel (0.36.2) + Root-Is-Purelib: true + Tag: py2-none-any + Tag: py3-none-any +diff --git a/third_party/python/six/six-1.13.0.dist-info/top_level.txt b/third_party/python/six/six-1.16.0.dist-info/top_level.txt +similarity index 100% +rename from third_party/python/six/six-1.13.0.dist-info/top_level.txt +rename to third_party/python/six/six-1.16.0.dist-info/top_level.txt +diff --git a/third_party/python/six/six.py b/third_party/python/six/six.py +index 357e624abc6c9..4e15675d8b5ca 100644 +--- a/third_party/python/six/six.py ++++ b/third_party/python/six/six.py +@@ -1,4 +1,4 @@ +-# Copyright 
(c) 2010-2019 Benjamin Peterson ++# Copyright (c) 2010-2020 Benjamin Peterson + # + # Permission is hereby granted, free of charge, to any person obtaining a copy + # of this software and associated documentation files (the "Software"), to deal +@@ -29,7 +29,7 @@ + import types + + __author__ = "Benjamin Peterson " +-__version__ = "1.13.0" ++__version__ = "1.16.0" + + + # Useful for very coarse version differentiation. +@@ -71,6 +71,11 @@ def __len__(self): + MAXSIZE = int((1 << 63) - 1) + del X + ++if PY34: ++ from importlib.util import spec_from_loader ++else: ++ spec_from_loader = None ++ + + def _add_doc(func, doc): + """Add documentation to a function.""" +@@ -186,6 +191,11 @@ def find_module(self, fullname, path=None): + return self + return None + ++ def find_spec(self, fullname, path, target=None): ++ if fullname in self.known_modules: ++ return spec_from_loader(fullname, self) ++ return None ++ + def __get_module(self, fullname): + try: + return self.known_modules[fullname] +@@ -223,6 +233,12 @@ def get_code(self, fullname): + return None + get_source = get_code # same as get_code + ++ def create_module(self, spec): ++ return self.load_module(spec.name) ++ ++ def exec_module(self, module): ++ pass ++ + _importer = _SixMetaPathImporter(__name__) + + +@@ -259,7 +275,7 @@ class _MovedItems(_LazyModule): + MovedModule("copyreg", "copy_reg"), + MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), + MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), +- MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), ++ MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread" if sys.version_info < (3, 9) else "_thread"), + MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), + MovedModule("http_cookies", "Cookie", "http.cookies"), + MovedModule("html_entities", "htmlentitydefs", "html.entities"), +@@ -644,9 +660,11 @@ def u(s): + if sys.version_info[1] <= 1: + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" ++ _assertNotRegex = 
"assertNotRegexpMatches" + else: + _assertRaisesRegex = "assertRaisesRegex" + _assertRegex = "assertRegex" ++ _assertNotRegex = "assertNotRegex" + else: + def b(s): + return s +@@ -668,6 +686,7 @@ def indexbytes(buf, i): + _assertCountEqual = "assertItemsEqual" + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" ++ _assertNotRegex = "assertNotRegexpMatches" + _add_doc(b, """Byte literal""") + _add_doc(u, """Text literal""") + +@@ -684,6 +703,10 @@ def assertRegex(self, *args, **kwargs): + return getattr(self, _assertRegex)(*args, **kwargs) + + ++def assertNotRegex(self, *args, **kwargs): ++ return getattr(self, _assertNotRegex)(*args, **kwargs) ++ ++ + if PY3: + exec_ = getattr(moves.builtins, "exec") + +@@ -719,16 +742,7 @@ def exec_(_code_, _globs_=None, _locs_=None): + """) + + +-if sys.version_info[:2] == (3, 2): +- exec_("""def raise_from(value, from_value): +- try: +- if from_value is None: +- raise value +- raise value from from_value +- finally: +- value = None +-""") +-elif sys.version_info[:2] > (3, 2): ++if sys.version_info[:2] > (3,): + exec_("""def raise_from(value, from_value): + try: + raise value from from_value +@@ -808,13 +822,33 @@ def print_(*args, **kwargs): + _add_doc(reraise, """Reraise an exception.""") + + if sys.version_info[0:2] < (3, 4): ++ # This does exactly the same what the :func:`py3:functools.update_wrapper` ++ # function does on Python versions after 3.2. It sets the ``__wrapped__`` ++ # attribute on ``wrapper`` object and it doesn't raise an error if any of ++ # the attributes mentioned in ``assigned`` and ``updated`` are missing on ++ # ``wrapped`` object. 
++ def _update_wrapper(wrapper, wrapped, ++ assigned=functools.WRAPPER_ASSIGNMENTS, ++ updated=functools.WRAPPER_UPDATES): ++ for attr in assigned: ++ try: ++ value = getattr(wrapped, attr) ++ except AttributeError: ++ continue ++ else: ++ setattr(wrapper, attr, value) ++ for attr in updated: ++ getattr(wrapper, attr).update(getattr(wrapped, attr, {})) ++ wrapper.__wrapped__ = wrapped ++ return wrapper ++ _update_wrapper.__doc__ = functools.update_wrapper.__doc__ ++ + def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, + updated=functools.WRAPPER_UPDATES): +- def wrapper(f): +- f = functools.wraps(wrapped, assigned, updated)(f) +- f.__wrapped__ = wrapped +- return f +- return wrapper ++ return functools.partial(_update_wrapper, wrapped=wrapped, ++ assigned=assigned, updated=updated) ++ wraps.__doc__ = functools.wraps.__doc__ ++ + else: + wraps = functools.wraps + +@@ -872,12 +906,11 @@ def ensure_binary(s, encoding='utf-8', errors='strict'): + - `str` -> encoded to `bytes` + - `bytes` -> `bytes` + """ ++ if isinstance(s, binary_type): ++ return s + if isinstance(s, text_type): + return s.encode(encoding, errors) +- elif isinstance(s, binary_type): +- return s +- else: +- raise TypeError("not expecting type '%s'" % type(s)) ++ raise TypeError("not expecting type '%s'" % type(s)) + + + def ensure_str(s, encoding='utf-8', errors='strict'): +@@ -891,12 +924,15 @@ def ensure_str(s, encoding='utf-8', errors='strict'): + - `str` -> `str` + - `bytes` -> decoded to `str` + """ +- if not isinstance(s, (text_type, binary_type)): +- raise TypeError("not expecting type '%s'" % type(s)) ++ # Optimization: Fast return for the common case. 
++ if type(s) is str: ++ return s + if PY2 and isinstance(s, text_type): +- s = s.encode(encoding, errors) ++ return s.encode(encoding, errors) + elif PY3 and isinstance(s, binary_type): +- s = s.decode(encoding, errors) ++ return s.decode(encoding, errors) ++ elif not isinstance(s, (text_type, binary_type)): ++ raise TypeError("not expecting type '%s'" % type(s)) + return s + + +@@ -919,10 +955,9 @@ def ensure_text(s, encoding='utf-8', errors='strict'): + raise TypeError("not expecting type '%s'" % type(s)) + + +- + def python_2_unicode_compatible(klass): + """ +- A decorator that defines __unicode__ and __str__ methods under Python 2. ++ A class decorator that defines __unicode__ and __str__ methods under Python 2. + Under Python 3 it does nothing. + + To support Python 2 and 3 with a single code base, define a __str__ method +diff --git a/third_party/python/urllib3/urllib3-1.26.0.dist-info/RECORD b/third_party/python/urllib3/urllib3-1.26.0.dist-info/RECORD +deleted file mode 100644 +index ec9088a111a41..0000000000000 +--- a/third_party/python/urllib3/urllib3-1.26.0.dist-info/RECORD ++++ /dev/null +@@ -1,44 +0,0 @@ +-urllib3/__init__.py,sha256=j3yzHIbmW7CS-IKQJ9-PPQf_YKO8EOAey_rMW0UR7us,2763 +-urllib3/_collections.py,sha256=Rp1mVyBgc_UlAcp6M3at1skJBXR5J43NawRTvW2g_XY,10811 +-urllib3/_version.py,sha256=H0vLQ8PY350EPZlZQa8ri0tEjVS-xhGdQOHcU360-0A,63 +-urllib3/connection.py,sha256=BdaUSNpGzO0zq28i9MhOXb6QZspeVdVrYtjnkk2Eqg4,18396 +-urllib3/connectionpool.py,sha256=IKoeuJZY9YAYm0GK4q-MXAhyXW0M_FnvabYaNsDIR-E,37133 +-urllib3/exceptions.py,sha256=lNrKC5J8zeBXIu9SSKSNb7cLi8iXl9ARu9DHD2SflZM,7810 +-urllib3/fields.py,sha256=kvLDCg_JmH1lLjUUEY_FLS8UhY7hBvDPuVETbY8mdrM,8579 +-urllib3/filepost.py,sha256=5b_qqgRHVlL7uLtdAYBzBh-GHmU5AfJVt_2N0XS3PeY,2440 +-urllib3/poolmanager.py,sha256=whzlX6UTEgODMOCy0ZDMUONRBCz5wyIM8Z9opXAY-Lk,19763 +-urllib3/request.py,sha256=ZFSIqX0C6WizixecChZ3_okyu7BEv0lZu1VT0s6h4SM,5985 
+-urllib3/response.py,sha256=hGhGBh7TkEkh_IQg5C1W_xuPNrgIKv5BUXPyE-q0LuE,28203 +-urllib3/contrib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +-urllib3/contrib/_appengine_environ.py,sha256=bDbyOEhW2CKLJcQqAKAyrEHN-aklsyHFKq6vF8ZFsmk,957 +-urllib3/contrib/appengine.py,sha256=7Pxb0tKfDB_LTGPERiswH0qomhDoUUOo5kwybAKLQyE,11010 +-urllib3/contrib/ntlmpool.py,sha256=6I95h1_71fzxmoMSNtY0gB8lnyCoVtP_DpqFGj14fdU,4160 +-urllib3/contrib/pyopenssl.py,sha256=vgh6j52w9xgwq-3R2kfB5M2JblQATJfKAK3lIAc1kSg,16778 +-urllib3/contrib/securetransport.py,sha256=KxGPZk8d4YepWm7Rc-SBt1XrzIfnLKc8JkUVV75XzgE,34286 +-urllib3/contrib/socks.py,sha256=DcRjM2l0rQMIyhYrN6r-tnVkY6ZTDxHJlM8_usAkGCA,7097 +-urllib3/contrib/_securetransport/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +-urllib3/contrib/_securetransport/bindings.py,sha256=E1_7ScsgOchfxneozbAueK7ziCwF35fna4DuDCYJ9_o,17637 +-urllib3/contrib/_securetransport/low_level.py,sha256=lgIdsSycqfB0Xm5BiJzXGeIKT7ybCQMFPJAgkcwPa1s,13908 +-urllib3/packages/__init__.py,sha256=h4BLhD4tLaBx1adaDtKXfupsgqY0wWLXb_f1_yVlV6A,108 +-urllib3/packages/six.py,sha256=adx4z-eM_D0Vvu0IIqVzFACQ_ux9l64y7DkSEfbxCDs,32536 +-urllib3/packages/backports/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +-urllib3/packages/backports/makefile.py,sha256=nbzt3i0agPVP07jqqgjhaYjMmuAi_W5E0EywZivVO8E,1417 +-urllib3/packages/ssl_match_hostname/__init__.py,sha256=zppezdEQdpGsYerI6mV6MfUYy495JV4mcOWC_GgbljU,757 +-urllib3/packages/ssl_match_hostname/_implementation.py,sha256=6dZ-q074g7XhsJ27MFCgkct8iVNZB3sMZvKhf-KUVy0,5679 +-urllib3/util/__init__.py,sha256=JEmSmmqqLyaw8P51gUImZh8Gwg9i1zSe-DoqAitn2nc,1155 +-urllib3/util/connection.py,sha256=21B-LX0c8fkxPDssyHCaK0pCnmrKmhltg5EoouHiAPU,4910 +-urllib3/util/proxy.py,sha256=FGipAEnvZteyldXNjce4DEB7YzwU-a5lep8y5S0qHQg,1604 +-urllib3/util/queue.py,sha256=nRgX8_eX-_VkvxoX096QWoz8Ps0QHUAExILCY_7PncM,498 +-urllib3/util/request.py,sha256=NnzaEKQ1Pauw5MFMV6HmgEMHITf0Aua9fQuzi2uZzGc,4123 
+-urllib3/util/response.py,sha256=GJpg3Egi9qaJXRwBh5wv-MNuRWan5BIu40oReoxWP28,3510 +-urllib3/util/retry.py,sha256=tn168HDMUynFmXRP-uVaLRUOlbTEJikoB1RuZdwfCes,21366 +-urllib3/util/ssl_.py,sha256=cUsmU604z2zAOZcaXDpINXOokQ1RtlJMe96TBDkaJp0,16199 +-urllib3/util/ssltransport.py,sha256=IvGQvs9YWkf4jzfqVjTu_UWjwAUgPn5ActajW8VLz6A,6908 +-urllib3/util/timeout.py,sha256=QSbBUNOB9yh6AnDn61SrLQ0hg5oz0I9-uXEG91AJuIg,10003 +-urllib3/util/url.py,sha256=LWfLSlI4l2FmUMKfCkElCaW10-0N-sJDT9bxaDZJkjs,13964 +-urllib3/util/wait.py,sha256=3MUKRSAUJDB2tgco7qRUskW0zXGAWYvRRE4Q1_6xlLs,5404 +-urllib3-1.26.0.dist-info/LICENSE.txt,sha256=w3vxhuJ8-dvpYZ5V7f486nswCRzrPaY8fay-Dm13kHs,1115 +-urllib3-1.26.0.dist-info/METADATA,sha256=Wghdt6nLf9HfZHhWj8Dpgz4n9vGRqXYhdIwJRPgki6M,42629 +-urllib3-1.26.0.dist-info/WHEEL,sha256=ADKeyaGyKF5DwBNE0sRE5pvW-bSkFMJfBuhzZ3rceP4,110 +-urllib3-1.26.0.dist-info/top_level.txt,sha256=EMiXL2sKrTcmrMxIHTqdc3ET54pQI2Y072LexFEemvo,8 +-urllib3-1.26.0.dist-info/RECORD,, +diff --git a/third_party/python/urllib3/urllib3-1.26.0.dist-info/LICENSE.txt b/third_party/python/urllib3/urllib3-1.26.17.dist-info/LICENSE.txt +similarity index 100% +rename from third_party/python/urllib3/urllib3-1.26.0.dist-info/LICENSE.txt +rename to third_party/python/urllib3/urllib3-1.26.17.dist-info/LICENSE.txt +diff --git a/third_party/python/urllib3/urllib3-1.26.0.dist-info/METADATA b/third_party/python/urllib3/urllib3-1.26.17.dist-info/METADATA +similarity index 86% +rename from third_party/python/urllib3/urllib3-1.26.0.dist-info/METADATA +rename to third_party/python/urllib3/urllib3-1.26.17.dist-info/METADATA +index 39869aafada8a..9493faee66c01 100644 +--- a/third_party/python/urllib3/urllib3-1.26.0.dist-info/METADATA ++++ b/third_party/python/urllib3/urllib3-1.26.17.dist-info/METADATA +@@ -1,6 +1,6 @@ + Metadata-Version: 2.1 + Name: urllib3 +-Version: 1.26.0 ++Version: 1.26.17 + Summary: HTTP library with thread-safe connection pooling, file post, and more. 
+ Home-page: https://urllib3.readthedocs.io/ + Author: Andrey Petrov +@@ -10,7 +10,6 @@ Project-URL: Documentation, https://urllib3.readthedocs.io/ + Project-URL: Code, https://github.com/urllib3/urllib3 + Project-URL: Issue tracker, https://github.com/urllib3/urllib3/issues + Keywords: urllib httplib threadsafe filepost http https ssl pooling +-Platform: UNKNOWN + Classifier: Environment :: Web Environment + Classifier: Intended Audience :: Developers + Classifier: License :: OSI Approved :: MIT License +@@ -19,27 +18,33 @@ Classifier: Programming Language :: Python + Classifier: Programming Language :: Python :: 2 + Classifier: Programming Language :: Python :: 2.7 + Classifier: Programming Language :: Python :: 3 +-Classifier: Programming Language :: Python :: 3.5 + Classifier: Programming Language :: Python :: 3.6 + Classifier: Programming Language :: Python :: 3.7 + Classifier: Programming Language :: Python :: 3.8 + Classifier: Programming Language :: Python :: 3.9 ++Classifier: Programming Language :: Python :: 3.10 ++Classifier: Programming Language :: Python :: 3.11 + Classifier: Programming Language :: Python :: Implementation :: CPython + Classifier: Programming Language :: Python :: Implementation :: PyPy + Classifier: Topic :: Internet :: WWW/HTTP + Classifier: Topic :: Software Development :: Libraries +-Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4 ++Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.* + Description-Content-Type: text/x-rst ++License-File: LICENSE.txt + Provides-Extra: brotli +-Requires-Dist: brotlipy (>=0.6.0) ; extra == 'brotli' ++Requires-Dist: brotlicffi >=0.8.0 ; ((os_name != "nt" or python_version >= "3") and platform_python_implementation != "CPython") and extra == 'brotli' ++Requires-Dist: brotli ==1.0.9 ; (os_name != "nt" and python_version < "3" and platform_python_implementation == "CPython") and extra == 'brotli' ++Requires-Dist: brotlipy >=0.6.0 ; (os_name == "nt" and 
python_version < "3") and extra == 'brotli' ++Requires-Dist: brotli >=1.0.9 ; (python_version >= "3" and platform_python_implementation == "CPython") and extra == 'brotli' + Provides-Extra: secure +-Requires-Dist: pyOpenSSL (>=0.14) ; extra == 'secure' +-Requires-Dist: cryptography (>=1.3.4) ; extra == 'secure' +-Requires-Dist: idna (>=2.0.0) ; extra == 'secure' ++Requires-Dist: pyOpenSSL >=0.14 ; extra == 'secure' ++Requires-Dist: cryptography >=1.3.4 ; extra == 'secure' ++Requires-Dist: idna >=2.0.0 ; extra == 'secure' + Requires-Dist: certifi ; extra == 'secure' ++Requires-Dist: urllib3-secure-extra ; extra == 'secure' + Requires-Dist: ipaddress ; (python_version == "2.7") and extra == 'secure' + Provides-Extra: socks +-Requires-Dist: PySocks (!=1.5.7,<2.0,>=1.5.6) ; extra == 'socks' ++Requires-Dist: PySocks !=1.5.7,<2.0,>=1.5.6 ; extra == 'socks' + + + urllib3 is a powerful, *user-friendly* HTTP client for Python. Much of the +@@ -78,8 +83,10 @@ urllib3 can be installed with `pip `_:: + + Alternatively, you can grab the latest source code from `GitHub `_:: + +- $ git clone git://github.com/urllib3/urllib3.git +- $ python setup.py install ++ $ git clone https://github.com/urllib3/urllib3.git ++ $ cd urllib3 ++ $ git checkout 1.26.x ++ $ pip install . + + + Documentation +@@ -148,6 +155,152 @@ For Enterprise + Changes + ======= + ++1.26.17 (2023-10-02) ++-------------------- ++ ++* Added the ``Cookie`` header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via ``Retry.remove_headers_on_redirect``. ++ ++ ++1.26.16 (2023-05-23) ++-------------------- ++ ++* Fixed thread-safety issue where accessing a ``PoolManager`` with many distinct origins ++ would cause connection pools to be closed while requests are in progress (`#2954 `_) ++ ++ ++1.26.15 (2023-03-10) ++-------------------- ++ ++* Fix socket timeout value when ``HTTPConnection`` is reused (`#2645 `__) ++* Remove "!" 
character from the unreserved characters in IPv6 Zone ID parsing ++ (`#2899 `__) ++* Fix IDNA handling of '\x80' byte (`#2901 `__) ++ ++1.26.14 (2023-01-11) ++-------------------- ++ ++* Fixed parsing of port 0 (zero) returning None, instead of 0. (`#2850 `__) ++* Removed deprecated getheaders() calls in contrib module. ++ ++1.26.13 (2022-11-23) ++-------------------- ++ ++* Deprecated the ``HTTPResponse.getheaders()`` and ``HTTPResponse.getheader()`` methods. ++* Fixed an issue where parsing a URL with leading zeroes in the port would be rejected ++ even when the port number after removing the zeroes was valid. ++* Fixed a deprecation warning when using cryptography v39.0.0. ++* Removed the ``<4`` in the ``Requires-Python`` packaging metadata field. ++ ++ ++1.26.12 (2022-08-22) ++-------------------- ++ ++* Deprecated the `urllib3[secure]` extra and the `urllib3.contrib.pyopenssl` module. ++ Both will be removed in v2.x. See this `GitHub issue `_ ++ for justification and info on how to migrate. ++ ++ ++1.26.11 (2022-07-25) ++-------------------- ++ ++* Fixed an issue where reading more than 2 GiB in a call to ``HTTPResponse.read`` would ++ raise an ``OverflowError`` on Python 3.9 and earlier. ++ ++ ++1.26.10 (2022-07-07) ++-------------------- ++ ++* Removed support for Python 3.5 ++* Fixed an issue where a ``ProxyError`` recommending configuring the proxy as HTTP ++ instead of HTTPS could appear even when an HTTPS proxy wasn't configured. ++ ++ ++1.26.9 (2022-03-16) ++------------------- ++ ++* Changed ``urllib3[brotli]`` extra to favor installing Brotli libraries that are still ++ receiving updates like ``brotli`` and ``brotlicffi`` instead of ``brotlipy``. ++ This change does not impact behavior of urllib3, only which dependencies are installed. ++* Fixed a socket leaking when ``HTTPSConnection.connect()`` raises an exception. ++* Fixed ``server_hostname`` being forwarded from ``PoolManager`` to ``HTTPConnectionPool`` ++ when requesting an HTTP URL. 
Should only be forwarded when requesting an HTTPS URL. ++ ++ ++1.26.8 (2022-01-07) ++------------------- ++ ++* Added extra message to ``urllib3.exceptions.ProxyError`` when urllib3 detects that ++ a proxy is configured to use HTTPS but the proxy itself appears to only use HTTP. ++* Added a mention of the size of the connection pool when discarding a connection due to the pool being full. ++* Added explicit support for Python 3.11. ++* Deprecated the ``Retry.MAX_BACKOFF`` class property in favor of ``Retry.DEFAULT_MAX_BACKOFF`` ++ to better match the rest of the default parameter names. ``Retry.MAX_BACKOFF`` is removed in v2.0. ++* Changed location of the vendored ``ssl.match_hostname`` function from ``urllib3.packages.ssl_match_hostname`` ++ to ``urllib3.util.ssl_match_hostname`` to ensure Python 3.10+ compatibility after being repackaged ++ by downstream distributors. ++* Fixed absolute imports, all imports are now relative. ++ ++ ++1.26.7 (2021-09-22) ++------------------- ++ ++* Fixed a bug with HTTPS hostname verification involving IP addresses and lack ++ of SNI. (Issue #2400) ++* Fixed a bug where IPv6 braces weren't stripped during certificate hostname ++ matching. (Issue #2240) ++ ++ ++1.26.6 (2021-06-25) ++------------------- ++ ++* Deprecated the ``urllib3.contrib.ntlmpool`` module. urllib3 is not able to support ++ it properly due to `reasons listed in this issue `_. ++ If you are a user of this module please leave a comment. ++* Changed ``HTTPConnection.request_chunked()`` to not erroneously emit multiple ++ ``Transfer-Encoding`` headers in the case that one is already specified. ++* Fixed typo in deprecation message to recommend ``Retry.DEFAULT_ALLOWED_METHODS``. ++ ++ ++1.26.5 (2021-05-26) ++------------------- ++ ++* Fixed deprecation warnings emitted in Python 3.10. ++* Updated vendored ``six`` library to 1.16.0. ++* Improved performance of URL parser when splitting ++ the authority component. 
++ ++ ++1.26.4 (2021-03-15) ++------------------- ++ ++* Changed behavior of the default ``SSLContext`` when connecting to HTTPS proxy ++ during HTTPS requests. The default ``SSLContext`` now sets ``check_hostname=True``. ++ ++ ++1.26.3 (2021-01-26) ++------------------- ++ ++* Fixed bytes and string comparison issue with headers (Pull #2141) ++ ++* Changed ``ProxySchemeUnknown`` error message to be ++ more actionable if the user supplies a proxy URL without ++ a scheme. (Pull #2107) ++ ++ ++1.26.2 (2020-11-12) ++------------------- ++ ++* Fixed an issue where ``wrap_socket`` and ``CERT_REQUIRED`` wouldn't ++ be imported properly on Python 2.7.8 and earlier (Pull #2052) ++ ++ ++1.26.1 (2020-11-11) ++------------------- ++ ++* Fixed an issue where two ``User-Agent`` headers would be sent if a ++ ``User-Agent`` header key is passed as ``bytes`` (Pull #2047) ++ ++ + 1.26.0 (2020-11-10) + ------------------- + +@@ -1331,5 +1484,3 @@ Changes + ---------------- + + * First release. +- +- +diff --git a/third_party/python/urllib3/urllib3-1.26.17.dist-info/RECORD b/third_party/python/urllib3/urllib3-1.26.17.dist-info/RECORD +new file mode 100644 +index 0000000000000..1afc6580589c0 +--- /dev/null ++++ b/third_party/python/urllib3/urllib3-1.26.17.dist-info/RECORD +@@ -0,0 +1,44 @@ ++urllib3/__init__.py,sha256=iXLcYiJySn0GNbWOOZDDApgBL1JgP44EZ8i1760S8Mc,3333 ++urllib3/_collections.py,sha256=Rp1mVyBgc_UlAcp6M3at1skJBXR5J43NawRTvW2g_XY,10811 ++urllib3/_version.py,sha256=azoM7M7BUADl2kBhMVR6PPf2GhBDI90me1fcnzTwdcw,64 ++urllib3/connection.py,sha256=92k9td_y4PEiTIjNufCUa1NzMB3J3w0LEdyokYgXnW8,20300 ++urllib3/connectionpool.py,sha256=ItVDasDnPRPP9R8bNxY7tPBlC724nJ9nlxVgXG_SLbI,39990 ++urllib3/exceptions.py,sha256=0Mnno3KHTNfXRfY7638NufOPkUb6mXOm-Lqj-4x2w8A,8217 ++urllib3/fields.py,sha256=kvLDCg_JmH1lLjUUEY_FLS8UhY7hBvDPuVETbY8mdrM,8579 ++urllib3/filepost.py,sha256=5b_qqgRHVlL7uLtdAYBzBh-GHmU5AfJVt_2N0XS3PeY,2440 
++urllib3/poolmanager.py,sha256=0i8cJgrqupza67IBPZ_u9jXvnSxr5UBlVEiUqdkPtYI,19752 ++urllib3/request.py,sha256=YTWFNr7QIwh7E1W9dde9LM77v2VWTJ5V78XuTTw7D1A,6691 ++urllib3/response.py,sha256=UPgLmnHj4z71ZnH8ivYOyncATifTOw9FQukUqDnckCc,30761 ++urllib3/contrib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 ++urllib3/contrib/_appengine_environ.py,sha256=bDbyOEhW2CKLJcQqAKAyrEHN-aklsyHFKq6vF8ZFsmk,957 ++urllib3/contrib/appengine.py,sha256=6IBW6lPOoVUxASPwtn6IH1AATe5DK3lLJCfwyWlLKAE,11012 ++urllib3/contrib/ntlmpool.py,sha256=NlfkW7WMdW8ziqudopjHoW299og1BTWi0IeIibquFwk,4528 ++urllib3/contrib/pyopenssl.py,sha256=4AJAlo9NmjWofY4dJwRa4kbZuRuHfNJxu8Pv6yQk1ss,17055 ++urllib3/contrib/securetransport.py,sha256=QOhVbWrFQTKbmV-vtyG69amekkKVxXkdjk9oymaO0Ag,34416 ++urllib3/contrib/socks.py,sha256=aRi9eWXo9ZEb95XUxef4Z21CFlnnjbEiAo9HOseoMt4,7097 ++urllib3/contrib/_securetransport/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 ++urllib3/contrib/_securetransport/bindings.py,sha256=4Xk64qIkPBt09A5q-RIFUuDhNc9mXilVapm7WnYnzRw,17632 ++urllib3/contrib/_securetransport/low_level.py,sha256=B2JBB2_NRP02xK6DCa1Pa9IuxrPwxzDzZbixQkb7U9M,13922 ++urllib3/packages/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 ++urllib3/packages/six.py,sha256=b9LM0wBXv7E7SrbCjAm4wwN-hrH-iNxv18LgWNMMKPo,34665 ++urllib3/packages/backports/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 ++urllib3/packages/backports/makefile.py,sha256=nbzt3i0agPVP07jqqgjhaYjMmuAi_W5E0EywZivVO8E,1417 ++urllib3/packages/backports/weakref_finalize.py,sha256=tRCal5OAhNSRyb0DhHp-38AtIlCsRP8BxF3NX-6rqIA,5343 ++urllib3/util/__init__.py,sha256=JEmSmmqqLyaw8P51gUImZh8Gwg9i1zSe-DoqAitn2nc,1155 ++urllib3/util/connection.py,sha256=5Lx2B1PW29KxBn2T0xkN1CBgRBa3gGVJBKoQoRogEVk,4901 ++urllib3/util/proxy.py,sha256=zUvPPCJrp6dOF0N4GAVbOcl6o-4uXKSrGiTkkr5vUS4,1605 ++urllib3/util/queue.py,sha256=nRgX8_eX-_VkvxoX096QWoz8Ps0QHUAExILCY_7PncM,498 
++urllib3/util/request.py,sha256=fWiAaa8pwdLLIqoTLBxCC2e4ed80muzKU3e3HWWTzFQ,4225 ++urllib3/util/response.py,sha256=GJpg3Egi9qaJXRwBh5wv-MNuRWan5BIu40oReoxWP28,3510 ++urllib3/util/retry.py,sha256=Z6WEf518eTOXP5jr5QSQ9gqJI0DVYt3Xs3EKnYaTmus,22013 ++urllib3/util/ssl_.py,sha256=c0sYiSC6272r6uPkxQpo5rYPP9QC1eR6oI7004gYqZo,17165 ++urllib3/util/ssl_match_hostname.py,sha256=Ir4cZVEjmAk8gUAIHWSi7wtOO83UCYABY2xFD1Ql_WA,5758 ++urllib3/util/ssltransport.py,sha256=NA-u5rMTrDFDFC8QzRKUEKMG0561hOD4qBTr3Z4pv6E,6895 ++urllib3/util/timeout.py,sha256=cwq4dMk87mJHSBktK1miYJ-85G-3T3RmT20v7SFCpno,10168 ++urllib3/util/url.py,sha256=kMxL1k0d-aQm_iZDw_zMmnyYyjrIA_DbsMy3cm3V55M,14279 ++urllib3/util/wait.py,sha256=fOX0_faozG2P7iVojQoE1mbydweNyTcm-hXEfFrTtLI,5403 ++urllib3-1.26.17.dist-info/LICENSE.txt,sha256=w3vxhuJ8-dvpYZ5V7f486nswCRzrPaY8fay-Dm13kHs,1115 ++urllib3-1.26.17.dist-info/METADATA,sha256=swEiQKmb2m5Vl4fygmy4aLSzZjxDjD8q2-_XzuhO9pA,48743 ++urllib3-1.26.17.dist-info/WHEEL,sha256=iYlv5fX357PQyRT2o6tw1bN-YcKFFHKqB_LwHO5wP-g,110 ++urllib3-1.26.17.dist-info/top_level.txt,sha256=EMiXL2sKrTcmrMxIHTqdc3ET54pQI2Y072LexFEemvo,8 ++urllib3-1.26.17.dist-info/RECORD,, +diff --git a/third_party/python/six/six-1.13.0.dist-info/WHEEL b/third_party/python/urllib3/urllib3-1.26.17.dist-info/WHEEL +similarity index 70% +rename from third_party/python/six/six-1.13.0.dist-info/WHEEL +rename to third_party/python/urllib3/urllib3-1.26.17.dist-info/WHEEL +index 8b701e93c2315..c34f1162ef9a5 100644 +--- a/third_party/python/six/six-1.13.0.dist-info/WHEEL ++++ b/third_party/python/urllib3/urllib3-1.26.17.dist-info/WHEEL +@@ -1,5 +1,5 @@ + Wheel-Version: 1.0 +-Generator: bdist_wheel (0.33.6) ++Generator: bdist_wheel (0.41.2) + Root-Is-Purelib: true + Tag: py2-none-any + Tag: py3-none-any +diff --git a/third_party/python/urllib3/urllib3-1.26.0.dist-info/top_level.txt b/third_party/python/urllib3/urllib3-1.26.17.dist-info/top_level.txt +similarity index 100% +rename from 
third_party/python/urllib3/urllib3-1.26.0.dist-info/top_level.txt +rename to third_party/python/urllib3/urllib3-1.26.17.dist-info/top_level.txt +diff --git a/third_party/python/urllib3/urllib3/__init__.py b/third_party/python/urllib3/urllib3/__init__.py +index fe86b59d782bd..c6fa38212fb55 100644 +--- a/third_party/python/urllib3/urllib3/__init__.py ++++ b/third_party/python/urllib3/urllib3/__init__.py +@@ -19,6 +19,23 @@ + from .util.timeout import Timeout + from .util.url import get_host + ++# === NOTE TO REPACKAGERS AND VENDORS === ++# Please delete this block, this logic is only ++# for urllib3 being distributed via PyPI. ++# See: https://github.com/urllib3/urllib3/issues/2680 ++try: ++ import urllib3_secure_extra # type: ignore # noqa: F401 ++except ImportError: ++ pass ++else: ++ warnings.warn( ++ "'urllib3[secure]' extra is deprecated and will be removed " ++ "in a future release of urllib3 2.x. Read more in this issue: " ++ "https://github.com/urllib3/urllib3/issues/2680", ++ category=DeprecationWarning, ++ stacklevel=2, ++ ) ++ + __author__ = "Andrey Petrov (andrey.petrov@shazow.net)" + __license__ = "MIT" + __version__ = __version__ +diff --git a/third_party/python/urllib3/urllib3/_version.py b/third_party/python/urllib3/urllib3/_version.py +index cee465f88a931..cad75fb5df82a 100644 +--- a/third_party/python/urllib3/urllib3/_version.py ++++ b/third_party/python/urllib3/urllib3/_version.py +@@ -1,2 +1,2 @@ + # This file is protected via CODEOWNERS +-__version__ = "1.26.0" ++__version__ = "1.26.17" +diff --git a/third_party/python/urllib3/urllib3/connection.py b/third_party/python/urllib3/urllib3/connection.py +index 52487417c946b..54b96b19154cc 100644 +--- a/third_party/python/urllib3/urllib3/connection.py ++++ b/third_party/python/urllib3/urllib3/connection.py +@@ -43,6 +43,7 @@ class BrokenPipeError(Exception): + pass + + ++from ._collections import HTTPHeaderDict # noqa (historical, removed in v2) + from ._version import __version__ + from .exceptions 
import ( + ConnectTimeoutError, +@@ -50,15 +51,16 @@ class BrokenPipeError(Exception): + SubjectAltNameWarning, + SystemTimeWarning, + ) +-from .packages.ssl_match_hostname import CertificateError, match_hostname + from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection + from .util.ssl_ import ( + assert_fingerprint, + create_urllib3_context, ++ is_ipaddress, + resolve_cert_reqs, + resolve_ssl_version, + ssl_wrap_socket, + ) ++from .util.ssl_match_hostname import CertificateError, match_hostname + + log = logging.getLogger(__name__) + +@@ -66,7 +68,7 @@ class BrokenPipeError(Exception): + + # When it comes time to update this value as a part of regular maintenance + # (ie test_recent_date is failing) update it to ~6 months before the current date. +-RECENT_DATE = datetime.date(2019, 1, 1) ++RECENT_DATE = datetime.date(2022, 1, 1) + + _CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") + +@@ -106,6 +108,10 @@ class HTTPConnection(_HTTPConnection, object): + #: Whether this connection verifies the host's certificate. + is_verified = False + ++ #: Whether this proxy connection (if used) verifies the proxy host's ++ #: certificate. ++ proxy_is_verified = None ++ + def __init__(self, *args, **kw): + if not six.PY2: + kw.pop("strict", None) +@@ -200,7 +206,7 @@ def connect(self): + self._prepare_conn(conn) + + def putrequest(self, method, url, *args, **kwargs): +- """""" ++ """ """ + # Empty docstring because the indentation of CPython's implementation + # is broken but we don't want this method in our documentation. 
+ match = _CONTAINS_CONTROL_CHAR_RE.search(method) +@@ -213,8 +219,8 @@ def putrequest(self, method, url, *args, **kwargs): + return _HTTPConnection.putrequest(self, method, url, *args, **kwargs) + + def putheader(self, header, *values): +- """""" +- if SKIP_HEADER not in values: ++ """ """ ++ if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): + _HTTPConnection.putheader(self, header, *values) + elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: + raise ValueError( +@@ -223,12 +229,17 @@ def putheader(self, header, *values): + ) + + def request(self, method, url, body=None, headers=None): ++ # Update the inner socket's timeout value to send the request. ++ # This only triggers if the connection is re-used. ++ if getattr(self, "sock", None) is not None: ++ self.sock.settimeout(self.timeout) ++ + if headers is None: + headers = {} + else: + # Avoid modifying the headers passed into .request() + headers = headers.copy() +- if "user-agent" not in (k.lower() for k in headers): ++ if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): + headers["User-Agent"] = _get_default_user_agent() + super(HTTPConnection, self).request(method, url, body=body, headers=headers) + +@@ -248,7 +259,7 @@ def request_chunked(self, method, url, body=None, headers=None): + self.putheader("User-Agent", _get_default_user_agent()) + for header, value in headers.items(): + self.putheader(header, value) +- if "transfer-encoding" not in headers: ++ if "transfer-encoding" not in header_keys: + self.putheader("Transfer-Encoding", "chunked") + self.endheaders() + +@@ -349,17 +360,15 @@ def set_cert( + + def connect(self): + # Add certificate verification +- conn = self._new_conn() ++ self.sock = conn = self._new_conn() + hostname = self.host + tls_in_tls = False + + if self._is_using_tunnel(): + if self.tls_in_tls_required: +- conn = self._connect_tls_proxy(hostname, conn) ++ self.sock = conn = self._connect_tls_proxy(hostname, conn) + tls_in_tls = True + +- 
self.sock = conn +- + # Calls self._set_hostport(), so self.host is + # self._tunnel_host below. + self._tunnel() +@@ -492,7 +501,7 @@ def _connect_tls_proxy(self, hostname, conn): + + # If no cert was provided, use only the default options for server + # certificate validation +- return ssl_wrap_socket( ++ socket = ssl_wrap_socket( + sock=conn, + ca_certs=self.ca_certs, + ca_cert_dir=self.ca_cert_dir, +@@ -501,8 +510,37 @@ def _connect_tls_proxy(self, hostname, conn): + ssl_context=ssl_context, + ) + ++ if ssl_context.verify_mode != ssl.CERT_NONE and not getattr( ++ ssl_context, "check_hostname", False ++ ): ++ # While urllib3 attempts to always turn off hostname matching from ++ # the TLS library, this cannot always be done. So we check whether ++ # the TLS Library still thinks it's matching hostnames. ++ cert = socket.getpeercert() ++ if not cert.get("subjectAltName", ()): ++ warnings.warn( ++ ( ++ "Certificate for {0} has no `subjectAltName`, falling back to check for a " ++ "`commonName` for now. This feature is being removed by major browsers and " ++ "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " ++ "for details.)".format(hostname) ++ ), ++ SubjectAltNameWarning, ++ ) ++ _match_hostname(cert, hostname) ++ ++ self.proxy_is_verified = ssl_context.verify_mode == ssl.CERT_REQUIRED ++ return socket ++ + + def _match_hostname(cert, asserted_hostname): ++ # Our upstream implementation of ssl.match_hostname() ++ # only applies this normalization to IP addresses so it doesn't ++ # match DNS SANs so we do the same thing! 
++ stripped_hostname = asserted_hostname.strip("u[]") ++ if is_ipaddress(stripped_hostname): ++ asserted_hostname = stripped_hostname ++ + try: + match_hostname(cert, asserted_hostname) + except CertificateError as e: +diff --git a/third_party/python/urllib3/urllib3/connectionpool.py b/third_party/python/urllib3/urllib3/connectionpool.py +index 4708c5bfc7862..96844d933745d 100644 +--- a/third_party/python/urllib3/urllib3/connectionpool.py ++++ b/third_party/python/urllib3/urllib3/connectionpool.py +@@ -2,6 +2,7 @@ + + import errno + import logging ++import re + import socket + import sys + import warnings +@@ -35,7 +36,6 @@ + ) + from .packages import six + from .packages.six.moves import queue +-from .packages.ssl_match_hostname import CertificateError + from .request import RequestMethods + from .response import HTTPResponse + from .util.connection import is_connection_dropped +@@ -44,11 +44,19 @@ + from .util.request import set_file_position + from .util.response import assert_header_parsing + from .util.retry import Retry ++from .util.ssl_match_hostname import CertificateError + from .util.timeout import Timeout + from .util.url import Url, _encode_target + from .util.url import _normalize_host as normalize_host + from .util.url import get_host, parse_url + ++try: # Platform-specific: Python 3 ++ import weakref ++ ++ weakref_finalize = weakref.finalize ++except AttributeError: # Platform-specific: Python 2 ++ from .packages.backports.weakref_finalize import weakref_finalize ++ + xrange = six.moves.xrange + + log = logging.getLogger(__name__) +@@ -219,6 +227,16 @@ def __init__( + self.conn_kw["proxy"] = self.proxy + self.conn_kw["proxy_config"] = self.proxy_config + ++ # Do not pass 'self' as callback to 'finalize'. ++ # Then the 'finalize' would keep an endless living (leak) to self. ++ # By just passing a reference to the pool allows the garbage collector ++ # to free self if nobody else has a reference to it. 
++ pool = self.pool ++ ++ # Close all the HTTPConnections in the pool before the ++ # HTTPConnectionPool object is garbage collected. ++ weakref_finalize(self, _close_pool_connections, pool) ++ + def _new_conn(self): + """ + Return a fresh :class:`HTTPConnection`. +@@ -301,8 +319,11 @@ def _put_conn(self, conn): + pass + except queue.Full: + # This should never happen if self.block == True +- log.warning("Connection pool is full, discarding connection: %s", self.host) +- ++ log.warning( ++ "Connection pool is full, discarding connection: %s. Connection pool size: %s", ++ self.host, ++ self.pool.qsize(), ++ ) + # Connection never got put back into the pool, close it. + if conn: + conn.close() +@@ -318,7 +339,7 @@ def _prepare_proxy(self, conn): + pass + + def _get_timeout(self, timeout): +- """ Helper that always returns a :class:`urllib3.util.Timeout` """ ++ """Helper that always returns a :class:`urllib3.util.Timeout`""" + if timeout is _Default: + return self.timeout.clone() + +@@ -375,7 +396,7 @@ def _make_request( + + timeout_obj = self._get_timeout(timeout) + timeout_obj.start_connect() +- conn.timeout = timeout_obj.connect_timeout ++ conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) + + # Trigger any extra validation we need to do. + try: +@@ -485,14 +506,8 @@ def close(self): + # Disable access to the pool + old_pool, self.pool = self.pool, None + +- try: +- while True: +- conn = old_pool.get(block=False) +- if conn: +- conn.close() +- +- except queue.Empty: +- pass # Done. ++ # Close all the HTTPConnections in the pool. ++ _close_pool_connections(old_pool) + + def is_same_host(self, url): + """ +@@ -745,7 +760,35 @@ def urlopen( + # Discard the connection for these exceptions. It will be + # replaced during the next _get_conn() call. 
+ clean_exit = False +- if isinstance(e, (BaseSSLError, CertificateError)): ++ ++ def _is_ssl_error_message_from_http_proxy(ssl_error): ++ # We're trying to detect the message 'WRONG_VERSION_NUMBER' but ++ # SSLErrors are kinda all over the place when it comes to the message, ++ # so we try to cover our bases here! ++ message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) ++ return ( ++ "wrong version number" in message or "unknown protocol" in message ++ ) ++ ++ # Try to detect a common user error with proxies which is to ++ # set an HTTP proxy to be HTTPS when it should be 'http://' ++ # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) ++ # Instead we add a nice error message and point to a URL. ++ if ( ++ isinstance(e, BaseSSLError) ++ and self.proxy ++ and _is_ssl_error_message_from_http_proxy(e) ++ and conn.proxy ++ and conn.proxy.scheme == "https" ++ ): ++ e = ProxyError( ++ "Your proxy appears to only use HTTP and not HTTPS, " ++ "try changing your proxy URL to be HTTP. See: " ++ "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" ++ "#https-proxy-error-http-proxy", ++ SSLError(e), ++ ) ++ elif isinstance(e, (BaseSSLError, CertificateError)): + e = SSLError(e) + elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: + e = ProxyError("Cannot connect to proxy.", e) +@@ -830,7 +873,7 @@ def urlopen( + ) + + # Check if we should retry the HTTP response. +- has_retry_after = bool(response.getheader("Retry-After")) ++ has_retry_after = bool(response.headers.get("Retry-After")) + if retries.is_retry(method, response.status, has_retry_after): + try: + retries = retries.increment(method, url, response=response, _pool=self) +@@ -1014,12 +1057,23 @@ def _validate_conn(self, conn): + ( + "Unverified HTTPS request is being made to host '%s'. " + "Adding certificate verification is strongly advised. 
See: " +- "https://urllib3.readthedocs.io/en/latest/advanced-usage.html" ++ "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings" % conn.host + ), + InsecureRequestWarning, + ) + ++ if getattr(conn, "proxy_is_verified", None) is False: ++ warnings.warn( ++ ( ++ "Unverified HTTPS connection done to an HTTPS proxy. " ++ "Adding certificate verification is strongly advised. See: " ++ "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" ++ "#ssl-warnings" ++ ), ++ InsecureRequestWarning, ++ ) ++ + + def connection_from_url(url, **kw): + """ +@@ -1065,3 +1119,14 @@ def _normalize_host(host, scheme): + if host.startswith("[") and host.endswith("]"): + host = host[1:-1] + return host ++ ++ ++def _close_pool_connections(pool): ++ """Drains a queue of connections and closes each one.""" ++ try: ++ while True: ++ conn = pool.get(block=False) ++ if conn: ++ conn.close() ++ except queue.Empty: ++ pass # Done. +diff --git a/third_party/python/urllib3/urllib3/contrib/_securetransport/bindings.py b/third_party/python/urllib3/urllib3/contrib/_securetransport/bindings.py +index 11524d400bab2..264d564dbda67 100644 +--- a/third_party/python/urllib3/urllib3/contrib/_securetransport/bindings.py ++++ b/third_party/python/urllib3/urllib3/contrib/_securetransport/bindings.py +@@ -48,7 +48,7 @@ + ) + from ctypes.util import find_library + +-from urllib3.packages.six import raise_from ++from ...packages.six import raise_from + + if platform.system() != "Darwin": + raise ImportError("Only macOS is supported") +diff --git a/third_party/python/urllib3/urllib3/contrib/_securetransport/low_level.py b/third_party/python/urllib3/urllib3/contrib/_securetransport/low_level.py +index ed8120190c06f..fa0b245d279e9 100644 +--- a/third_party/python/urllib3/urllib3/contrib/_securetransport/low_level.py ++++ b/third_party/python/urllib3/urllib3/contrib/_securetransport/low_level.py +@@ -188,6 +188,7 @@ def _cert_array_from_pem(pem_bundle): + # We only want to do that if 
an error occurs: otherwise, the caller + # should free. + CoreFoundation.CFRelease(cert_array) ++ raise + + return cert_array + +diff --git a/third_party/python/urllib3/urllib3/contrib/appengine.py b/third_party/python/urllib3/urllib3/contrib/appengine.py +index aa64a0914c601..a5a6d91035f0a 100644 +--- a/third_party/python/urllib3/urllib3/contrib/appengine.py ++++ b/third_party/python/urllib3/urllib3/contrib/appengine.py +@@ -111,7 +111,7 @@ def __init__( + warnings.warn( + "urllib3 is using URLFetch on Google App Engine sandbox instead " + "of sockets. To use sockets directly instead of URLFetch see " +- "https://urllib3.readthedocs.io/en/latest/reference/urllib3.contrib.html.", ++ "https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html.", + AppEnginePlatformWarning, + ) + +@@ -224,7 +224,7 @@ def urlopen( + ) + + # Check if we should retry the HTTP response. +- has_retry_after = bool(http_response.getheader("Retry-After")) ++ has_retry_after = bool(http_response.headers.get("Retry-After")) + if retries.is_retry(method, http_response.status, has_retry_after): + retries = retries.increment(method, url, response=http_response, _pool=self) + log.debug("Retry: %s", url) +diff --git a/third_party/python/urllib3/urllib3/contrib/ntlmpool.py b/third_party/python/urllib3/urllib3/contrib/ntlmpool.py +index b2df45dcf6065..471665754e9f1 100644 +--- a/third_party/python/urllib3/urllib3/contrib/ntlmpool.py ++++ b/third_party/python/urllib3/urllib3/contrib/ntlmpool.py +@@ -5,6 +5,7 @@ + """ + from __future__ import absolute_import + ++import warnings + from logging import getLogger + + from ntlm import ntlm +@@ -12,6 +13,14 @@ + from .. 
import HTTPSConnectionPool + from ..packages.six.moves.http_client import HTTPSConnection + ++warnings.warn( ++ "The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed " ++ "in urllib3 v2.0 release, urllib3 is not able to support it properly due " ++ "to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. " ++ "If you are a user of this module please comment in the mentioned issue.", ++ DeprecationWarning, ++) ++ + log = getLogger(__name__) + + +@@ -60,7 +69,7 @@ def _new_conn(self): + log.debug("Request headers: %s", headers) + conn.request("GET", self.authurl, None, headers) + res = conn.getresponse() +- reshdr = dict(res.getheaders()) ++ reshdr = dict(res.headers) + log.debug("Response status: %s %s", res.status, res.reason) + log.debug("Response headers: %s", reshdr) + log.debug("Response data: %s [...]", res.read(100)) +@@ -92,7 +101,7 @@ def _new_conn(self): + conn.request("GET", self.authurl, None, headers) + res = conn.getresponse() + log.debug("Response status: %s %s", res.status, res.reason) +- log.debug("Response headers: %s", dict(res.getheaders())) ++ log.debug("Response headers: %s", dict(res.headers)) + log.debug("Response data: %s [...]", res.read()[:100]) + if res.status != 200: + if res.status == 401: +diff --git a/third_party/python/urllib3/urllib3/contrib/pyopenssl.py b/third_party/python/urllib3/urllib3/contrib/pyopenssl.py +index 0cabab1aed14a..1ed214b1d78fc 100644 +--- a/third_party/python/urllib3/urllib3/contrib/pyopenssl.py ++++ b/third_party/python/urllib3/urllib3/contrib/pyopenssl.py +@@ -47,10 +47,10 @@ + """ + from __future__ import absolute_import + ++import OpenSSL.crypto + import OpenSSL.SSL + from cryptography import x509 + from cryptography.hazmat.backends.openssl import backend as openssl_backend +-from cryptography.hazmat.backends.openssl.x509 import _Certificate + + try: + from cryptography.x509 import UnsupportedExtension +@@ -73,9 +73,19 @@ class UnsupportedExtension(Exception): + import 
logging + import ssl + import sys ++import warnings + + from .. import util + from ..packages import six ++from ..util.ssl_ import PROTOCOL_TLS_CLIENT ++ ++warnings.warn( ++ "'urllib3.contrib.pyopenssl' module is deprecated and will be removed " ++ "in a future release of urllib3 2.x. Read more in this issue: " ++ "https://github.com/urllib3/urllib3/issues/2680", ++ category=DeprecationWarning, ++ stacklevel=2, ++) + + __all__ = ["inject_into_urllib3", "extract_from_urllib3"] + +@@ -85,6 +95,7 @@ class UnsupportedExtension(Exception): + # Map from urllib3 to PyOpenSSL compatible parameter-values. + _openssl_versions = { + util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD, ++ PROTOCOL_TLS_CLIENT: OpenSSL.SSL.SSLv23_METHOD, + ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD, + } + +@@ -217,9 +228,8 @@ def get_subj_alt_name(peer_cert): + if hasattr(peer_cert, "to_cryptography"): + cert = peer_cert.to_cryptography() + else: +- # This is technically using private APIs, but should work across all +- # relevant versions before PyOpenSSL got a proper API for this. +- cert = _Certificate(openssl_backend, peer_cert._x509) ++ der = OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_ASN1, peer_cert) ++ cert = x509.load_der_x509_certificate(der, openssl_backend) + + # We want to find the SAN extension. Ask Cryptography to locate it (it's + # faster than looping in Python) +@@ -404,7 +414,6 @@ def makefile(self, mode, bufsize=-1): + self._makefile_refs += 1 + return _fileobject(self, mode, bufsize, close=True) + +- + else: # Platform-specific: Python 3 + makefile = backport_makefile + +diff --git a/third_party/python/urllib3/urllib3/contrib/securetransport.py b/third_party/python/urllib3/urllib3/contrib/securetransport.py +index ab092de67a57c..6c46a3b9f0375 100644 +--- a/third_party/python/urllib3/urllib3/contrib/securetransport.py ++++ b/third_party/python/urllib3/urllib3/contrib/securetransport.py +@@ -67,6 +67,7 @@ + import six + + from .. 
import util ++from ..util.ssl_ import PROTOCOL_TLS_CLIENT + from ._securetransport.bindings import CoreFoundation, Security, SecurityConst + from ._securetransport.low_level import ( + _assert_no_error, +@@ -154,7 +155,8 @@ + # TLSv1 and a high of TLSv1.2. For everything else, we pin to that version. + # TLSv1 to 1.2 are supported on macOS 10.8+ + _protocol_to_min_max = { +- util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12) ++ util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), ++ PROTOCOL_TLS_CLIENT: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), + } + + if hasattr(ssl, "PROTOCOL_SSLv2"): +@@ -768,7 +770,6 @@ def makefile(self, mode, bufsize=-1): + self._makefile_refs += 1 + return _fileobject(self, mode, bufsize, close=True) + +- + else: # Platform-specific: Python 3 + + def makefile(self, mode="r", buffering=None, *args, **kwargs): +diff --git a/third_party/python/urllib3/urllib3/contrib/socks.py b/third_party/python/urllib3/urllib3/contrib/socks.py +index 93df8325d59c4..c326e80dd1174 100644 +--- a/third_party/python/urllib3/urllib3/contrib/socks.py ++++ b/third_party/python/urllib3/urllib3/contrib/socks.py +@@ -51,7 +51,7 @@ + ( + "SOCKS support in urllib3 requires the installation of optional " + "dependencies: specifically, PySocks. For more information, see " +- "https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies" ++ "https://urllib3.readthedocs.io/en/1.26.x/contrib.html#socks-proxies" + ), + DependencyWarning, + ) +diff --git a/third_party/python/urllib3/urllib3/exceptions.py b/third_party/python/urllib3/urllib3/exceptions.py +index d69958d5dfc29..cba6f3f560f71 100644 +--- a/third_party/python/urllib3/urllib3/exceptions.py ++++ b/third_party/python/urllib3/urllib3/exceptions.py +@@ -289,7 +289,17 @@ class ProxySchemeUnknown(AssertionError, URLSchemeUnknown): + # TODO(t-8ch): Stop inheriting from AssertionError in v2.0. 
+ + def __init__(self, scheme): +- message = "Not supported proxy scheme %s" % scheme ++ # 'localhost' is here because our URL parser parses ++ # localhost:8080 -> scheme=localhost, remove if we fix this. ++ if scheme == "localhost": ++ scheme = None ++ if scheme is None: ++ message = "Proxy URL had no scheme, should start with http:// or https://" ++ else: ++ message = ( ++ "Proxy URL had unsupported scheme %s, should use http:// or https://" ++ % scheme ++ ) + super(ProxySchemeUnknown, self).__init__(message) + + +diff --git a/third_party/python/urllib3/urllib3/packages/__init__.py b/third_party/python/urllib3/urllib3/packages/__init__.py +index fce4caa65d2ee..e69de29bb2d1d 100644 +--- a/third_party/python/urllib3/urllib3/packages/__init__.py ++++ b/third_party/python/urllib3/urllib3/packages/__init__.py +@@ -1,5 +0,0 @@ +-from __future__ import absolute_import +- +-from . import ssl_match_hostname +- +-__all__ = ("ssl_match_hostname",) +diff --git a/third_party/python/urllib3/urllib3/packages/backports/weakref_finalize.py b/third_party/python/urllib3/urllib3/packages/backports/weakref_finalize.py +new file mode 100644 +index 0000000000000..a2f2966e54966 +--- /dev/null ++++ b/third_party/python/urllib3/urllib3/packages/backports/weakref_finalize.py +@@ -0,0 +1,155 @@ ++# -*- coding: utf-8 -*- ++""" ++backports.weakref_finalize ++~~~~~~~~~~~~~~~~~~ ++ ++Backports the Python 3 ``weakref.finalize`` method. ++""" ++from __future__ import absolute_import ++ ++import itertools ++import sys ++from weakref import ref ++ ++__all__ = ["weakref_finalize"] ++ ++ ++class weakref_finalize(object): ++ """Class for finalization of weakrefable objects ++ finalize(obj, func, *args, **kwargs) returns a callable finalizer ++ object which will be called when obj is garbage collected. The ++ first time the finalizer is called it evaluates func(*arg, **kwargs) ++ and returns the result. After this the finalizer is dead, and ++ calling it just returns None. 
++ When the program exits any remaining finalizers for which the ++ atexit attribute is true will be run in reverse order of creation. ++ By default atexit is true. ++ """ ++ ++ # Finalizer objects don't have any state of their own. They are ++ # just used as keys to lookup _Info objects in the registry. This ++ # ensures that they cannot be part of a ref-cycle. ++ ++ __slots__ = () ++ _registry = {} ++ _shutdown = False ++ _index_iter = itertools.count() ++ _dirty = False ++ _registered_with_atexit = False ++ ++ class _Info(object): ++ __slots__ = ("weakref", "func", "args", "kwargs", "atexit", "index") ++ ++ def __init__(self, obj, func, *args, **kwargs): ++ if not self._registered_with_atexit: ++ # We may register the exit function more than once because ++ # of a thread race, but that is harmless ++ import atexit ++ ++ atexit.register(self._exitfunc) ++ weakref_finalize._registered_with_atexit = True ++ info = self._Info() ++ info.weakref = ref(obj, self) ++ info.func = func ++ info.args = args ++ info.kwargs = kwargs or None ++ info.atexit = True ++ info.index = next(self._index_iter) ++ self._registry[self] = info ++ weakref_finalize._dirty = True ++ ++ def __call__(self, _=None): ++ """If alive then mark as dead and return func(*args, **kwargs); ++ otherwise return None""" ++ info = self._registry.pop(self, None) ++ if info and not self._shutdown: ++ return info.func(*info.args, **(info.kwargs or {})) ++ ++ def detach(self): ++ """If alive then mark as dead and return (obj, func, args, kwargs); ++ otherwise return None""" ++ info = self._registry.get(self) ++ obj = info and info.weakref() ++ if obj is not None and self._registry.pop(self, None): ++ return (obj, info.func, info.args, info.kwargs or {}) ++ ++ def peek(self): ++ """If alive then return (obj, func, args, kwargs); ++ otherwise return None""" ++ info = self._registry.get(self) ++ obj = info and info.weakref() ++ if obj is not None: ++ return (obj, info.func, info.args, info.kwargs or {}) ++ ++ 
@property ++ def alive(self): ++ """Whether finalizer is alive""" ++ return self in self._registry ++ ++ @property ++ def atexit(self): ++ """Whether finalizer should be called at exit""" ++ info = self._registry.get(self) ++ return bool(info) and info.atexit ++ ++ @atexit.setter ++ def atexit(self, value): ++ info = self._registry.get(self) ++ if info: ++ info.atexit = bool(value) ++ ++ def __repr__(self): ++ info = self._registry.get(self) ++ obj = info and info.weakref() ++ if obj is None: ++ return "<%s object at %#x; dead>" % (type(self).__name__, id(self)) ++ else: ++ return "<%s object at %#x; for %r at %#x>" % ( ++ type(self).__name__, ++ id(self), ++ type(obj).__name__, ++ id(obj), ++ ) ++ ++ @classmethod ++ def _select_for_exit(cls): ++ # Return live finalizers marked for exit, oldest first ++ L = [(f, i) for (f, i) in cls._registry.items() if i.atexit] ++ L.sort(key=lambda item: item[1].index) ++ return [f for (f, i) in L] ++ ++ @classmethod ++ def _exitfunc(cls): ++ # At shutdown invoke finalizers for which atexit is true. ++ # This is called once all other non-daemonic threads have been ++ # joined. 
++ reenable_gc = False ++ try: ++ if cls._registry: ++ import gc ++ ++ if gc.isenabled(): ++ reenable_gc = True ++ gc.disable() ++ pending = None ++ while True: ++ if pending is None or weakref_finalize._dirty: ++ pending = cls._select_for_exit() ++ weakref_finalize._dirty = False ++ if not pending: ++ break ++ f = pending.pop() ++ try: ++ # gc is disabled, so (assuming no daemonic ++ # threads) the following is the only line in ++ # this function which might trigger creation ++ # of a new finalizer ++ f() ++ except Exception: ++ sys.excepthook(*sys.exc_info()) ++ assert f not in cls._registry ++ finally: ++ # prevent any more finalizers from executing during shutdown ++ weakref_finalize._shutdown = True ++ if reenable_gc: ++ gc.enable() +diff --git a/third_party/python/urllib3/urllib3/packages/six.py b/third_party/python/urllib3/urllib3/packages/six.py +index 314424099f624..f099a3dcd28d2 100644 +--- a/third_party/python/urllib3/urllib3/packages/six.py ++++ b/third_party/python/urllib3/urllib3/packages/six.py +@@ -1,4 +1,4 @@ +-# Copyright (c) 2010-2019 Benjamin Peterson ++# Copyright (c) 2010-2020 Benjamin Peterson + # + # Permission is hereby granted, free of charge, to any person obtaining a copy + # of this software and associated documentation files (the "Software"), to deal +@@ -29,7 +29,7 @@ + import types + + __author__ = "Benjamin Peterson " +-__version__ = "1.12.0" ++__version__ = "1.16.0" + + + # Useful for very coarse version differentiation. 
+@@ -71,6 +71,11 @@ def __len__(self): + MAXSIZE = int((1 << 63) - 1) + del X + ++if PY34: ++ from importlib.util import spec_from_loader ++else: ++ spec_from_loader = None ++ + + def _add_doc(func, doc): + """Add documentation to a function.""" +@@ -182,6 +187,11 @@ def find_module(self, fullname, path=None): + return self + return None + ++ def find_spec(self, fullname, path, target=None): ++ if fullname in self.known_modules: ++ return spec_from_loader(fullname, self) ++ return None ++ + def __get_module(self, fullname): + try: + return self.known_modules[fullname] +@@ -220,6 +230,12 @@ def get_code(self, fullname): + + get_source = get_code # same as get_code + ++ def create_module(self, spec): ++ return self.load_module(spec.name) ++ ++ def exec_module(self, module): ++ pass ++ + + _importer = _SixMetaPathImporter(__name__) + +@@ -260,9 +276,19 @@ class _MovedItems(_LazyModule): + ), + MovedModule("builtins", "__builtin__"), + MovedModule("configparser", "ConfigParser"), ++ MovedModule( ++ "collections_abc", ++ "collections", ++ "collections.abc" if sys.version_info >= (3, 3) else "collections", ++ ), + MovedModule("copyreg", "copy_reg"), + MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), +- MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), ++ MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), ++ MovedModule( ++ "_dummy_thread", ++ "dummy_thread", ++ "_dummy_thread" if sys.version_info < (3, 9) else "_thread", ++ ), + MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), + MovedModule("http_cookies", "Cookie", "http.cookies"), + MovedModule("html_entities", "htmlentitydefs", "html.entities"), +@@ -307,7 +333,9 @@ class _MovedItems(_LazyModule): + ] + # Add windows specific modules. 
+ if sys.platform == "win32": +- _moved_attributes += [MovedModule("winreg", "_winreg")] ++ _moved_attributes += [ ++ MovedModule("winreg", "_winreg"), ++ ] + + for attr in _moved_attributes: + setattr(_MovedItems, attr.name, attr) +@@ -476,7 +504,7 @@ class Module_six_moves_urllib_robotparser(_LazyModule): + + + _urllib_robotparser_moved_attributes = [ +- MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser") ++ MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), + ] + for attr in _urllib_robotparser_moved_attributes: + setattr(Module_six_moves_urllib_robotparser, attr.name, attr) +@@ -678,9 +706,11 @@ def u(s): + if sys.version_info[1] <= 1: + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" ++ _assertNotRegex = "assertNotRegexpMatches" + else: + _assertRaisesRegex = "assertRaisesRegex" + _assertRegex = "assertRegex" ++ _assertNotRegex = "assertNotRegex" + else: + + def b(s): +@@ -707,6 +737,7 @@ def indexbytes(buf, i): + _assertCountEqual = "assertItemsEqual" + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" ++ _assertNotRegex = "assertNotRegexpMatches" + _add_doc(b, """Byte literal""") + _add_doc(u, """Text literal""") + +@@ -723,6 +754,10 @@ def assertRegex(self, *args, **kwargs): + return getattr(self, _assertRegex)(*args, **kwargs) + + ++def assertNotRegex(self, *args, **kwargs): ++ return getattr(self, _assertNotRegex)(*args, **kwargs) ++ ++ + if PY3: + exec_ = getattr(moves.builtins, "exec") + +@@ -737,7 +772,6 @@ def reraise(tp, value, tb=None): + value = None + tb = None + +- + else: + + def exec_(_code_, _globs_=None, _locs_=None): +@@ -750,7 +784,7 @@ def exec_(_code_, _globs_=None, _locs_=None): + del frame + elif _locs_ is None: + _locs_ = _globs_ +- exec("""exec _code_ in _globs_, _locs_""") ++ exec ("""exec _code_ in _globs_, _locs_""") + + exec_( + """def reraise(tp, value, tb=None): +@@ -762,18 +796,7 @@ def exec_(_code_, _globs_=None, 
_locs_=None): + ) + + +-if sys.version_info[:2] == (3, 2): +- exec_( +- """def raise_from(value, from_value): +- try: +- if from_value is None: +- raise value +- raise value from from_value +- finally: +- value = None +-""" +- ) +-elif sys.version_info[:2] > (3, 2): ++if sys.version_info[:2] > (3,): + exec_( + """def raise_from(value, from_value): + try: +@@ -863,19 +886,41 @@ def print_(*args, **kwargs): + _add_doc(reraise, """Reraise an exception.""") + + if sys.version_info[0:2] < (3, 4): ++ # This does exactly the same what the :func:`py3:functools.update_wrapper` ++ # function does on Python versions after 3.2. It sets the ``__wrapped__`` ++ # attribute on ``wrapper`` object and it doesn't raise an error if any of ++ # the attributes mentioned in ``assigned`` and ``updated`` are missing on ++ # ``wrapped`` object. ++ def _update_wrapper( ++ wrapper, ++ wrapped, ++ assigned=functools.WRAPPER_ASSIGNMENTS, ++ updated=functools.WRAPPER_UPDATES, ++ ): ++ for attr in assigned: ++ try: ++ value = getattr(wrapped, attr) ++ except AttributeError: ++ continue ++ else: ++ setattr(wrapper, attr, value) ++ for attr in updated: ++ getattr(wrapper, attr).update(getattr(wrapped, attr, {})) ++ wrapper.__wrapped__ = wrapped ++ return wrapper ++ ++ _update_wrapper.__doc__ = functools.update_wrapper.__doc__ + + def wraps( + wrapped, + assigned=functools.WRAPPER_ASSIGNMENTS, + updated=functools.WRAPPER_UPDATES, + ): +- def wrapper(f): +- f = functools.wraps(wrapped, assigned, updated)(f) +- f.__wrapped__ = wrapped +- return f +- +- return wrapper ++ return functools.partial( ++ _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated ++ ) + ++ wraps.__doc__ = functools.wraps.__doc__ + + else: + wraps = functools.wraps +@@ -888,7 +933,15 @@ def with_metaclass(meta, *bases): + # the actual metaclass. 
+ class metaclass(type): + def __new__(cls, name, this_bases, d): +- return meta(name, bases, d) ++ if sys.version_info[:2] >= (3, 7): ++ # This version introduced PEP 560 that requires a bit ++ # of extra care (we mimic what is done by __build_class__). ++ resolved_bases = types.resolve_bases(bases) ++ if resolved_bases is not bases: ++ d["__orig_bases__"] = bases ++ else: ++ resolved_bases = bases ++ return meta(name, resolved_bases, d) + + @classmethod + def __prepare__(cls, name, this_bases): +@@ -928,12 +981,11 @@ def ensure_binary(s, encoding="utf-8", errors="strict"): + - `str` -> encoded to `bytes` + - `bytes` -> `bytes` + """ ++ if isinstance(s, binary_type): ++ return s + if isinstance(s, text_type): + return s.encode(encoding, errors) +- elif isinstance(s, binary_type): +- return s +- else: +- raise TypeError("not expecting type '%s'" % type(s)) ++ raise TypeError("not expecting type '%s'" % type(s)) + + + def ensure_str(s, encoding="utf-8", errors="strict"): +@@ -947,12 +999,15 @@ def ensure_str(s, encoding="utf-8", errors="strict"): + - `str` -> `str` + - `bytes` -> decoded to `str` + """ +- if not isinstance(s, (text_type, binary_type)): +- raise TypeError("not expecting type '%s'" % type(s)) ++ # Optimization: Fast return for the common case. ++ if type(s) is str: ++ return s + if PY2 and isinstance(s, text_type): +- s = s.encode(encoding, errors) ++ return s.encode(encoding, errors) + elif PY3 and isinstance(s, binary_type): +- s = s.decode(encoding, errors) ++ return s.decode(encoding, errors) ++ elif not isinstance(s, (text_type, binary_type)): ++ raise TypeError("not expecting type '%s'" % type(s)) + return s + + +@@ -977,7 +1032,7 @@ def ensure_text(s, encoding="utf-8", errors="strict"): + + def python_2_unicode_compatible(klass): + """ +- A decorator that defines __unicode__ and __str__ methods under Python 2. ++ A class decorator that defines __unicode__ and __str__ methods under Python 2. + Under Python 3 it does nothing. 
+ + To support Python 2 and 3 with a single code base, define a __str__ method +diff --git a/third_party/python/urllib3/urllib3/packages/ssl_match_hostname/__init__.py b/third_party/python/urllib3/urllib3/packages/ssl_match_hostname/__init__.py +deleted file mode 100644 +index 6b12fd90aadec..0000000000000 +--- a/third_party/python/urllib3/urllib3/packages/ssl_match_hostname/__init__.py ++++ /dev/null +@@ -1,22 +0,0 @@ +-import sys +- +-try: +- # Our match_hostname function is the same as 3.5's, so we only want to +- # import the match_hostname function if it's at least that good. +- if sys.version_info < (3, 5): +- raise ImportError("Fallback to vendored code") +- +- from ssl import CertificateError, match_hostname +-except ImportError: +- try: +- # Backport of the function from a pypi module +- from backports.ssl_match_hostname import ( # type: ignore +- CertificateError, +- match_hostname, +- ) +- except ImportError: +- # Our vendored copy +- from ._implementation import CertificateError, match_hostname # type: ignore +- +-# Not needed, but documenting what we provide. 
+-__all__ = ("CertificateError", "match_hostname") +diff --git a/third_party/python/urllib3/urllib3/poolmanager.py b/third_party/python/urllib3/urllib3/poolmanager.py +index 3a31a285bf648..14b10daf3a962 100644 +--- a/third_party/python/urllib3/urllib3/poolmanager.py ++++ b/third_party/python/urllib3/urllib3/poolmanager.py +@@ -34,6 +34,7 @@ + "ca_cert_dir", + "ssl_context", + "key_password", ++ "server_hostname", + ) + + # All known keyword arguments that could be provided to the pool manager, its +@@ -170,7 +171,7 @@ class PoolManager(RequestMethods): + def __init__(self, num_pools=10, headers=None, **connection_pool_kw): + RequestMethods.__init__(self, headers) + self.connection_pool_kw = connection_pool_kw +- self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) ++ self.pools = RecentlyUsedContainer(num_pools) + + # Locally set the pool classes and keys so other PoolManagers can + # override them. +diff --git a/third_party/python/urllib3/urllib3/request.py b/third_party/python/urllib3/urllib3/request.py +index 398386a5b9f61..3b4cf999225b8 100644 +--- a/third_party/python/urllib3/urllib3/request.py ++++ b/third_party/python/urllib3/urllib3/request.py +@@ -1,6 +1,9 @@ + from __future__ import absolute_import + ++import sys ++ + from .filepost import encode_multipart_formdata ++from .packages import six + from .packages.six.moves.urllib.parse import urlencode + + __all__ = ["RequestMethods"] +@@ -168,3 +171,21 @@ def request_encode_body( + extra_kw.update(urlopen_kw) + + return self.urlopen(method, url, **extra_kw) ++ ++ ++if not six.PY2: ++ ++ class RequestModule(sys.modules[__name__].__class__): ++ def __call__(self, *args, **kwargs): ++ """ ++ If user tries to call this module directly urllib3 v2.x style raise an error to the user ++ suggesting they may need urllib3 v2 ++ """ ++ raise TypeError( ++ "'module' object is not callable\n" ++ "urllib3.request() method is not supported in this release, " ++ "upgrade to urllib3 v2 to use it\n" 
++ "see https://urllib3.readthedocs.io/en/stable/v2-migration-guide.html" ++ ) ++ ++ sys.modules[__name__].__class__ = RequestModule +diff --git a/third_party/python/urllib3/urllib3/response.py b/third_party/python/urllib3/urllib3/response.py +index 38693f4fc6e33..0bd13d40b8ac7 100644 +--- a/third_party/python/urllib3/urllib3/response.py ++++ b/third_party/python/urllib3/urllib3/response.py +@@ -2,16 +2,22 @@ + + import io + import logging ++import sys ++import warnings + import zlib + from contextlib import contextmanager + from socket import error as SocketError + from socket import timeout as SocketTimeout + + try: +- import brotli ++ try: ++ import brotlicffi as brotli ++ except ImportError: ++ import brotli + except ImportError: + brotli = None + ++from . import util + from ._collections import HTTPHeaderDict + from .connection import BaseSSLError, HTTPException + from .exceptions import ( +@@ -478,6 +484,54 @@ def _error_catcher(self): + if self._original_response and self._original_response.isclosed(): + self.release_conn() + ++ def _fp_read(self, amt): ++ """ ++ Read a response with the thought that reading the number of bytes ++ larger than can fit in a 32-bit int at a time via SSL in some ++ known cases leads to an overflow error that has to be prevented ++ if `amt` or `self.length_remaining` indicate that a problem may ++ happen. ++ ++ The known cases: ++ * 3.8 <= CPython < 3.9.7 because of a bug ++ https://github.com/urllib3/urllib3/issues/2513#issuecomment-1152559900. ++ * urllib3 injected with pyOpenSSL-backed SSL-support. ++ * CPython < 3.10 only when `amt` does not fit 32-bit int. 
++ """ ++ assert self._fp ++ c_int_max = 2 ** 31 - 1 ++ if ( ++ ( ++ (amt and amt > c_int_max) ++ or (self.length_remaining and self.length_remaining > c_int_max) ++ ) ++ and not util.IS_SECURETRANSPORT ++ and (util.IS_PYOPENSSL or sys.version_info < (3, 10)) ++ ): ++ buffer = io.BytesIO() ++ # Besides `max_chunk_amt` being a maximum chunk size, it ++ # affects memory overhead of reading a response by this ++ # method in CPython. ++ # `c_int_max` equal to 2 GiB - 1 byte is the actual maximum ++ # chunk size that does not lead to an overflow error, but ++ # 256 MiB is a compromise. ++ max_chunk_amt = 2 ** 28 ++ while amt is None or amt != 0: ++ if amt is not None: ++ chunk_amt = min(amt, max_chunk_amt) ++ amt -= chunk_amt ++ else: ++ chunk_amt = max_chunk_amt ++ data = self._fp.read(chunk_amt) ++ if not data: ++ break ++ buffer.write(data) ++ del data # to reduce peak memory usage by `max_chunk_amt`. ++ return buffer.getvalue() ++ else: ++ # StringIO doesn't like amt=None ++ return self._fp.read(amt) if amt is not None else self._fp.read() ++ + def read(self, amt=None, decode_content=None, cache_content=False): + """ + Similar to :meth:`http.client.HTTPResponse.read`, but with two additional +@@ -510,13 +564,11 @@ def read(self, amt=None, decode_content=None, cache_content=False): + fp_closed = getattr(self._fp, "closed", False) + + with self._error_catcher(): ++ data = self._fp_read(amt) if not fp_closed else b"" + if amt is None: +- # cStringIO doesn't like amt=None +- data = self._fp.read() if not fp_closed else b"" + flush_decoder = True + else: + cache_content = False +- data = self._fp.read(amt) if not fp_closed else b"" + if ( + amt != 0 and not data + ): # Platform-specific: Buggy versions of Python. 
+@@ -612,9 +664,21 @@ def from_httplib(ResponseCls, r, **response_kw): + + # Backwards-compatibility methods for http.client.HTTPResponse + def getheaders(self): ++ warnings.warn( ++ "HTTPResponse.getheaders() is deprecated and will be removed " ++ "in urllib3 v2.1.0. Instead access HTTPResponse.headers directly.", ++ category=DeprecationWarning, ++ stacklevel=2, ++ ) + return self.headers + + def getheader(self, name, default=None): ++ warnings.warn( ++ "HTTPResponse.getheader() is deprecated and will be removed " ++ "in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).", ++ category=DeprecationWarning, ++ stacklevel=2, ++ ) + return self.headers.get(name, default) + + # Backwards compatibility for http.cookiejar +diff --git a/third_party/python/urllib3/urllib3/util/connection.py b/third_party/python/urllib3/urllib3/util/connection.py +index cd57455748be0..6af1138f260e4 100644 +--- a/third_party/python/urllib3/urllib3/util/connection.py ++++ b/third_party/python/urllib3/urllib3/util/connection.py +@@ -2,9 +2,8 @@ + + import socket + +-from urllib3.exceptions import LocationParseError +- + from ..contrib import _appengine_environ ++from ..exceptions import LocationParseError + from ..packages import six + from .wait import NoWayToWaitForSocketError, wait_for_read + +@@ -118,7 +117,7 @@ def allowed_gai_family(): + + + def _has_ipv6(host): +- """ Returns True if the system can bind an IPv6 address. 
""" ++ """Returns True if the system can bind an IPv6 address.""" + sock = None + has_ipv6 = False + +diff --git a/third_party/python/urllib3/urllib3/util/proxy.py b/third_party/python/urllib3/urllib3/util/proxy.py +index 34f884d5b314d..2199cc7b7f004 100644 +--- a/third_party/python/urllib3/urllib3/util/proxy.py ++++ b/third_party/python/urllib3/urllib3/util/proxy.py +@@ -45,6 +45,7 @@ def create_proxy_ssl_context( + ssl_version=resolve_ssl_version(ssl_version), + cert_reqs=resolve_cert_reqs(cert_reqs), + ) ++ + if ( + not ca_certs + and not ca_cert_dir +diff --git a/third_party/python/urllib3/urllib3/util/request.py b/third_party/python/urllib3/urllib3/util/request.py +index 25103383ec7ab..b574b081e98a0 100644 +--- a/third_party/python/urllib3/urllib3/util/request.py ++++ b/third_party/python/urllib3/urllib3/util/request.py +@@ -14,7 +14,10 @@ + + ACCEPT_ENCODING = "gzip,deflate" + try: +- import brotli as _unused_module_brotli # noqa: F401 ++ try: ++ import brotlicffi as _unused_module_brotli # noqa: F401 ++ except ImportError: ++ import brotli as _unused_module_brotli # noqa: F401 + except ImportError: + pass + else: +diff --git a/third_party/python/urllib3/urllib3/util/retry.py b/third_party/python/urllib3/urllib3/util/retry.py +index ee51f922f8452..60ef6c4f3f9d0 100644 +--- a/third_party/python/urllib3/urllib3/util/retry.py ++++ b/third_party/python/urllib3/urllib3/util/retry.py +@@ -37,7 +37,7 @@ class _RetryMeta(type): + def DEFAULT_METHOD_WHITELIST(cls): + warnings.warn( + "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " +- "will be removed in v2.0. Use 'Retry.DEFAULT_METHODS_ALLOWED' instead", ++ "will be removed in v2.0. 
Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", + DeprecationWarning, + ) + return cls.DEFAULT_ALLOWED_METHODS +@@ -69,6 +69,24 @@ def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): + ) + cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value + ++ @property ++ def BACKOFF_MAX(cls): ++ warnings.warn( ++ "Using 'Retry.BACKOFF_MAX' is deprecated and " ++ "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", ++ DeprecationWarning, ++ ) ++ return cls.DEFAULT_BACKOFF_MAX ++ ++ @BACKOFF_MAX.setter ++ def BACKOFF_MAX(cls, value): ++ warnings.warn( ++ "Using 'Retry.BACKOFF_MAX' is deprecated and " ++ "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", ++ DeprecationWarning, ++ ) ++ cls.DEFAULT_BACKOFF_MAX = value ++ + + @six.add_metaclass(_RetryMeta) + class Retry(object): +@@ -181,7 +199,7 @@ class Retry(object): + + seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep + for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer +- than :attr:`Retry.BACKOFF_MAX`. ++ than :attr:`Retry.DEFAULT_BACKOFF_MAX`. + + By default, backoff is disabled (set to 0). + +@@ -217,10 +235,10 @@ class Retry(object): + RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) + + #: Default headers to be used for ``remove_headers_on_redirect`` +- DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) ++ DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Cookie", "Authorization"]) + + #: Maximum backoff time. +- BACKOFF_MAX = 120 ++ DEFAULT_BACKOFF_MAX = 120 + + def __init__( + self, +@@ -253,6 +271,7 @@ def __init__( + "Using 'method_whitelist' with Retry is deprecated and " + "will be removed in v2.0. 
Use 'allowed_methods' instead", + DeprecationWarning, ++ stacklevel=2, + ) + allowed_methods = method_whitelist + if allowed_methods is _Default: +@@ -320,7 +339,7 @@ def new(self, **kw): + + @classmethod + def from_int(cls, retries, redirect=True, default=None): +- """ Backwards-compatibility for the old retries format.""" ++ """Backwards-compatibility for the old retries format.""" + if retries is None: + retries = default if default is not None else cls.DEFAULT + +@@ -347,7 +366,7 @@ def get_backoff_time(self): + return 0 + + backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) +- return min(self.BACKOFF_MAX, backoff_value) ++ return min(self.DEFAULT_BACKOFF_MAX, backoff_value) + + def parse_retry_after(self, retry_after): + # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 +@@ -373,9 +392,9 @@ def parse_retry_after(self, retry_after): + return seconds + + def get_retry_after(self, response): +- """ Get the value of Retry-After in seconds. """ ++ """Get the value of Retry-After in seconds.""" + +- retry_after = response.getheader("Retry-After") ++ retry_after = response.headers.get("Retry-After") + + if retry_after is None: + return None +@@ -467,7 +486,7 @@ def is_retry(self, method, status_code, has_retry_after=False): + ) + + def is_exhausted(self): +- """ Are we out of retries? """ ++ """Are we out of retries?""" + retry_counts = ( + self.total, + self.connect, +diff --git a/third_party/python/urllib3/urllib3/util/ssl_.py b/third_party/python/urllib3/urllib3/util/ssl_.py +index 1cb5e7cdc1c0c..8f867812a5eb3 100644 +--- a/third_party/python/urllib3/urllib3/util/ssl_.py ++++ b/third_party/python/urllib3/urllib3/util/ssl_.py +@@ -44,13 +44,21 @@ def _const_compare_digest_backport(a, b): + + try: # Test for SSL features + import ssl +- from ssl import HAS_SNI # Has SNI? + from ssl import CERT_REQUIRED, wrap_socket ++except ImportError: ++ pass ++ ++try: ++ from ssl import HAS_SNI # Has SNI? 
++except ImportError: ++ pass + ++try: + from .ssltransport import SSLTransport + except ImportError: + pass + ++ + try: # Platform-specific: Python 3.6 + from ssl import PROTOCOL_TLS + +@@ -63,6 +71,11 @@ def _const_compare_digest_backport(a, b): + except ImportError: + PROTOCOL_SSLv23 = PROTOCOL_TLS = 2 + ++try: ++ from ssl import PROTOCOL_TLS_CLIENT ++except ImportError: ++ PROTOCOL_TLS_CLIENT = PROTOCOL_TLS ++ + + try: + from ssl import OP_NO_COMPRESSION, OP_NO_SSLv2, OP_NO_SSLv3 +@@ -151,7 +164,7 @@ def wrap_socket(self, socket, server_hostname=None, server_side=False): + "urllib3 from configuring SSL appropriately and may cause " + "certain SSL connections to fail. You can upgrade to a newer " + "version of Python to solve this. For more information, see " +- "https://urllib3.readthedocs.io/en/latest/advanced-usage.html" ++ "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings", + InsecurePlatformWarning, + ) +@@ -270,7 +283,11 @@ def create_urllib3_context( + Constructed SSLContext object with specified options + :rtype: SSLContext + """ +- context = SSLContext(ssl_version or PROTOCOL_TLS) ++ # PROTOCOL_TLS is deprecated in Python 3.10 ++ if not ssl_version or ssl_version == PROTOCOL_TLS: ++ ssl_version = PROTOCOL_TLS_CLIENT ++ ++ context = SSLContext(ssl_version) + + context.set_ciphers(ciphers or DEFAULT_CIPHERS) + +@@ -305,13 +322,25 @@ def create_urllib3_context( + ) is not None: + context.post_handshake_auth = True + +- context.verify_mode = cert_reqs +- if ( +- getattr(context, "check_hostname", None) is not None +- ): # Platform-specific: Python 3.2 +- # We do our own verification, including fingerprints and alternative +- # hostnames. So disable it here +- context.check_hostname = False ++ def disable_check_hostname(): ++ if ( ++ getattr(context, "check_hostname", None) is not None ++ ): # Platform-specific: Python 3.2 ++ # We do our own verification, including fingerprints and alternative ++ # hostnames. 
So disable it here ++ context.check_hostname = False ++ ++ # The order of the below lines setting verify_mode and check_hostname ++ # matter due to safe-guards SSLContext has to prevent an SSLContext with ++ # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more ++ # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used ++ # or not so we don't know the initial state of the freshly created SSLContext. ++ if cert_reqs == ssl.CERT_REQUIRED: ++ context.verify_mode = cert_reqs ++ disable_check_hostname() ++ else: ++ disable_check_hostname() ++ context.verify_mode = cert_reqs + + # Enable logging of TLS session keys via defacto standard environment variable + # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values. +@@ -393,7 +422,7 @@ def ssl_wrap_socket( + try: + if hasattr(context, "set_alpn_protocols"): + context.set_alpn_protocols(ALPN_PROTOCOLS) +- except NotImplementedError: ++ except NotImplementedError: # Defensive: in CI, we always have set_alpn_protocols + pass + + # If we detect server_hostname is an IP address then the SNI +@@ -411,7 +440,7 @@ def ssl_wrap_socket( + "This may cause the server to present an incorrect TLS " + "certificate, which can cause validation failures. You can upgrade to " + "a newer version of Python to solve this. 
For more information, see " +- "https://urllib3.readthedocs.io/en/latest/advanced-usage.html" ++ "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings", + SNIMissingWarning, + ) +diff --git a/third_party/python/urllib3/urllib3/packages/ssl_match_hostname/_implementation.py b/third_party/python/urllib3/urllib3/util/ssl_match_hostname.py +similarity index 92% +rename from third_party/python/urllib3/urllib3/packages/ssl_match_hostname/_implementation.py +rename to third_party/python/urllib3/urllib3/util/ssl_match_hostname.py +index 689208d3c63f1..1dd950c489607 100644 +--- a/third_party/python/urllib3/urllib3/packages/ssl_match_hostname/_implementation.py ++++ b/third_party/python/urllib3/urllib3/util/ssl_match_hostname.py +@@ -9,7 +9,7 @@ + # ipaddress has been backported to 2.6+ in pypi. If it is installed on the + # system, use it to handle IPAddress ServerAltnames (this was added in + # python-3.5) otherwise only do DNS matching. This allows +-# backports.ssl_match_hostname to continue to be used in Python 2.7. ++# util.ssl_match_hostname to continue to be used in Python 2.7. 
+ try: + import ipaddress + except ImportError: +@@ -78,7 +78,8 @@ def _dnsname_match(dn, hostname, max_wildcards=1): + + def _to_unicode(obj): + if isinstance(obj, str) and sys.version_info < (3,): +- obj = unicode(obj, encoding="ascii", errors="strict") ++ # ignored flake8 # F821 to support python 2.7 function ++ obj = unicode(obj, encoding="ascii", errors="strict") # noqa: F821 + return obj + + +@@ -111,11 +112,9 @@ def match_hostname(cert, hostname): + try: + # Divergence from upstream: ipaddress can't handle byte str + host_ip = ipaddress.ip_address(_to_unicode(hostname)) +- except ValueError: +- # Not an IP address (common case) +- host_ip = None +- except UnicodeError: +- # Divergence from upstream: Have to deal with ipaddress not taking ++ except (UnicodeError, ValueError): ++ # ValueError: Not an IP address (common case) ++ # UnicodeError: Divergence from upstream: Have to deal with ipaddress not taking + # byte strings. addresses should be all ascii, so we consider it not + # an ipaddress in this case + host_ip = None +@@ -123,7 +122,7 @@ def match_hostname(cert, hostname): + # Divergence from upstream: Make ipaddress library optional + if ipaddress is None: + host_ip = None +- else: ++ else: # Defensive + raise + dnsnames = [] + san = cert.get("subjectAltName", ()) +diff --git a/third_party/python/urllib3/urllib3/util/ssltransport.py b/third_party/python/urllib3/urllib3/util/ssltransport.py +index 1e41354f5d458..4a7105d17916a 100644 +--- a/third_party/python/urllib3/urllib3/util/ssltransport.py ++++ b/third_party/python/urllib3/urllib3/util/ssltransport.py +@@ -2,8 +2,8 @@ + import socket + import ssl + +-from urllib3.exceptions import ProxySchemeUnsupported +-from urllib3.packages import six ++from ..exceptions import ProxySchemeUnsupported ++from ..packages import six + + SSL_BLOCKSIZE = 16384 + +@@ -193,7 +193,7 @@ def _wrap_ssl_read(self, len, buffer=None): + raise + + def _ssl_io_loop(self, func, *args): +- """ Performs an I/O loop between 
incoming/outgoing and the socket.""" ++ """Performs an I/O loop between incoming/outgoing and the socket.""" + should_loop = True + ret = None + +diff --git a/third_party/python/urllib3/urllib3/util/timeout.py b/third_party/python/urllib3/urllib3/util/timeout.py +index ff69593b05b5e..78e18a6272482 100644 +--- a/third_party/python/urllib3/urllib3/util/timeout.py ++++ b/third_party/python/urllib3/urllib3/util/timeout.py +@@ -2,9 +2,8 @@ + + import time + +-# The default socket timeout, used by httplib to indicate that no timeout was +-# specified by the user +-from socket import _GLOBAL_DEFAULT_TIMEOUT ++# The default socket timeout, used by httplib to indicate that no timeout was; specified by the user ++from socket import _GLOBAL_DEFAULT_TIMEOUT, getdefaulttimeout + + from ..exceptions import TimeoutStateError + +@@ -116,6 +115,10 @@ def __repr__(self): + # __str__ provided for backwards compatibility + __str__ = __repr__ + ++ @classmethod ++ def resolve_default_timeout(cls, timeout): ++ return getdefaulttimeout() if timeout is cls.DEFAULT_TIMEOUT else timeout ++ + @classmethod + def _validate_timeout(cls, value, name): + """Check that a timeout attribute is valid. 
+diff --git a/third_party/python/urllib3/urllib3/util/url.py b/third_party/python/urllib3/urllib3/util/url.py +index 6ff238fe3cbd0..e5682d3be4293 100644 +--- a/third_party/python/urllib3/urllib3/util/url.py ++++ b/third_party/python/urllib3/urllib3/util/url.py +@@ -50,7 +50,7 @@ + "(?:(?:%(hex)s:){0,6}%(hex)s)?::", + ] + +-UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._!\-~" ++UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~" + IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" + ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" + IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" +@@ -63,12 +63,12 @@ + BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$") + ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$") + +-SUBAUTHORITY_PAT = (u"^(?:(.*)@)?(%s|%s|%s)(?::([0-9]{0,5}))?$") % ( ++_HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % ( + REG_NAME_PAT, + IPV4_PAT, + IPV6_ADDRZ_PAT, + ) +-SUBAUTHORITY_RE = re.compile(SUBAUTHORITY_PAT, re.UNICODE | re.DOTALL) ++_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL) + + UNRESERVED_CHARS = set( + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~" +@@ -279,6 +279,9 @@ def _normalize_host(host, scheme): + if scheme in NORMALIZABLE_SCHEMES: + is_ipv6 = IPV6_ADDRZ_RE.match(host) + if is_ipv6: ++ # IPv6 hosts of the form 'a::b%zone' are encoded in a URL as ++ # such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID ++ # separator as necessary to return a valid RFC 4007 scoped IP. 
+ match = ZONE_ID_RE.search(host) + if match: + start, end = match.span(1) +@@ -300,7 +303,7 @@ def _normalize_host(host, scheme): + + + def _idna_encode(name): +- if name and any([ord(x) > 128 for x in name]): ++ if name and any(ord(x) >= 128 for x in name): + try: + import idna + except ImportError: +@@ -331,7 +334,7 @@ def parse_url(url): + """ + Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is + performed to parse incomplete urls. Fields not provided will be None. +- This parser is RFC 3986 compliant. ++ This parser is RFC 3986 and RFC 6874 compliant. + + The parser logic and helper functions are based heavily on + work done in the ``rfc3986`` module. +@@ -365,7 +368,9 @@ def parse_url(url): + scheme = scheme.lower() + + if authority: +- auth, host, port = SUBAUTHORITY_RE.match(authority).groups() ++ auth, _, host_port = authority.rpartition("@") ++ auth = auth or None ++ host, port = _HOST_PORT_RE.match(host_port).groups() + if auth and normalize_uri: + auth = _encode_invalid_chars(auth, USERINFO_CHARS) + if port == "": +diff --git a/third_party/python/urllib3/urllib3/util/wait.py b/third_party/python/urllib3/urllib3/util/wait.py +index c280646c7be0b..21b4590b3dc9b 100644 +--- a/third_party/python/urllib3/urllib3/util/wait.py ++++ b/third_party/python/urllib3/urllib3/util/wait.py +@@ -42,7 +42,6 @@ class NoWayToWaitForSocketError(Exception): + def _retry_on_intr(fn, timeout): + return fn(timeout) + +- + else: + # Old and broken Pythons. 
+ def _retry_on_intr(fn, timeout): diff --git a/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb b/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb index d0acabd8b..e1a547c33 100644 --- a/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb +++ b/meta-oe/recipes-extended/mozjs/mozjs-115_115.2.0.bb @@ -15,6 +15,7 @@ SRC_URI = "https://archive.mozilla.org/pub/firefox/releases/${PV}esr/source/fire file://0001-rewrite-cargo-host-linker-in-python3.patch \ file://musl-disable-stackwalk.patch \ file://0001-add-arm-to-list-of-mozinline.patch \ + file://py3.12.patch \ " SRC_URI[sha256sum] = "51534dd2a158d955a2cb67cc1308f100f6c9def0788713ed8b4d743f3ad72457" From patchwork Fri Dec 22 15:11:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kanavin X-Patchwork-Id: 36869 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2644C47073 for ; Fri, 22 Dec 2023 15:11:36 +0000 (UTC) Received: from mail-lj1-f179.google.com (mail-lj1-f179.google.com [209.85.208.179]) by mx.groups.io with SMTP id smtpd.web10.25180.1703257890770808584 for ; Fri, 22 Dec 2023 07:11:31 -0800 Authentication-Results: mx.groups.io; dkim=pass header.i=@gmail.com header.s=20230601 header.b=O2YY8fn5; spf=pass (domain: gmail.com, ip: 209.85.208.179, mailfrom: alex.kanavin@gmail.com) Received: by mail-lj1-f179.google.com with SMTP id 38308e7fff4ca-2cc6c028229so23194601fa.2 for ; Fri, 22 Dec 2023 07:11:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1703257889; x=1703862689; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
Received: from Zen2.lab.linutronix.de. (drugstore.linutronix.de.
[80.153.143.164]) by smtp.gmail.com with ESMTPSA id m9-20020aa7c2c9000000b00552666f4745sm2650247edp.22.2023.12.22.07.11.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 07:11:27 -0800 (PST)
From: Alexander Kanavin
X-Google-Original-From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 6/9] mozjs-102: remove the recipe
Date: Fri, 22 Dec 2023 16:11:05 +0100
Message-Id: <20231222151108.645675-6-alex@linutronix.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231222151108.645675-1-alex@linutronix.de>
References: <20231222151108.645675-1-alex@linutronix.de>
MIME-Version: 1.0
List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Fri, 22 Dec 2023 15:11:36 -0000
X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-devel/message/107760

mozjs-102 was retained for the purpose of supporting polkit; with the backport of the mozjs-115 patch for polkit there are no further consumers, and mozjs-102 is not compatible with Python 3.12. I didn't look into what specifically breaks with 3.12, as getting mozjs-115 to work with it was tricky enough, so I'd rather drop mozjs-102 than attempt to make it work. mozjs-115 is an ESR (long-term support) release like mozjs-102, but it is also newer and will therefore remain in support longer.
Signed-off-by: Alexander Kanavin --- .../packagegroup-meta-oe.bbappend | 1 - ...001-Cargo.toml-do-not-abort-on-panic.patch | 32 -------- .../0001-add-arm-to-list-of-mozinline.patch | 25 ------ ...-autoconf-s-config.sub-to-canonicali.patch | 29 ------- ...rewrite-cargo-host-linker-in-python3.patch | 54 ------------ ...ix-one-occasionally-reproduced-confi.patch | 48 ----------- ...nfigure-do-not-look-for-llvm-objdump.patch | 44 ---------- ...o-not-try-to-find-a-suitable-upstrea.patch | 66 --------------- .../mozjs-102/0004-use-asm-sgidefs.h.patch | 38 --------- .../mozjs/mozjs-102/fix-musl-build.patch | 29 ------- .../mozjs-102/musl-disable-stackwalk.patch | 18 ---- .../mozjs/mozjs-102/riscv32.patch | 60 -------------- .../mozjs/mozjs-102_102.15.1.bb | 82 ------------------- 13 files changed, 526 deletions(-) delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0001-Cargo.toml-do-not-abort-on-panic.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0001-add-arm-to-list-of-mozinline.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0001-build-do-not-use-autoconf-s-config.sub-to-canonicali.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0001-rewrite-cargo-host-linker-in-python3.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0001-util.configure-fix-one-occasionally-reproduced-confi.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0002-moz.configure-do-not-look-for-llvm-objdump.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0003-rust.configure-do-not-try-to-find-a-suitable-upstrea.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/0004-use-asm-sgidefs.h.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/fix-musl-build.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/musl-disable-stackwalk.patch delete mode 100644 meta-oe/recipes-extended/mozjs/mozjs-102/riscv32.patch delete mode 100644 
meta-oe/recipes-extended/mozjs/mozjs-102_102.15.1.bb diff --git a/meta-oe/dynamic-layers/meta-python/recipes-core/packagegroups/packagegroup-meta-oe.bbappend b/meta-oe/dynamic-layers/meta-python/recipes-core/packagegroups/packagegroup-meta-oe.bbappend index c3d4cbc50..db1813189 100644 --- a/meta-oe/dynamic-layers/meta-python/recipes-core/packagegroups/packagegroup-meta-oe.bbappend +++ b/meta-oe/dynamic-layers/meta-python/recipes-core/packagegroups/packagegroup-meta-oe.bbappend @@ -14,7 +14,6 @@ RDEPENDS:packagegroup-meta-oe-connectivity += "\ RDEPENDS:packagegroup-meta-oe-extended += "\ lcdproc \ - mozjs-102 \ " RDEPENDS:packagegroup-meta-oe-support += "\ nvmetcli \ diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-Cargo.toml-do-not-abort-on-panic.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0001-Cargo.toml-do-not-abort-on-panic.patch deleted file mode 100644 index 0dd936197..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-Cargo.toml-do-not-abort-on-panic.patch +++ /dev/null @@ -1,32 +0,0 @@ -From bb46a8a729cc4d66ad36db40c17e36a5111f19c3 Mon Sep 17 00:00:00 2001 -From: Alexander Kanavin -Date: Fri, 1 Oct 2021 13:00:24 +0200 -Subject: [PATCH] Cargo.toml: do not abort on panic - -OE's rust is configured to unwind, and this setting clashes with it/ - -Upstream-Status: Inappropriate [oe-core specific] -Signed-off-by: Alexander Kanavin - ---- - Cargo.toml | 2 -- - 1 file changed, 2 deletions(-) - -diff --git a/Cargo.toml b/Cargo.toml -index f576534bf3..5ecc17c319 100644 ---- a/Cargo.toml -+++ b/Cargo.toml -@@ -56,13 +56,11 @@ opt-level = 1 - rpath = false - lto = false - debug-assertions = true --panic = "abort" - - [profile.release] - opt-level = 2 - rpath = false - debug-assertions = false --panic = "abort" - - # Optimize build dependencies, because bindgen and proc macros / style - # compilation take more to run than to build otherwise. 
diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-add-arm-to-list-of-mozinline.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0001-add-arm-to-list-of-mozinline.patch deleted file mode 100644 index 02f5e5c7e..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-add-arm-to-list-of-mozinline.patch +++ /dev/null @@ -1,25 +0,0 @@ -Backport patch from firefox bugzilla to fix compile error for qemuarm with -some armv7ve tunes such as 'armv7vethf' and 'armv7vet-vfpv3d16': - -| /path/to/build/tmp/work/armv7vet2hf-vfp-poky-linux-gnueabi/mozjs-102/102.5.0-r0/build/js/src/jit/AtomicOperationsGenerated.h:240:17: - error: 'asm' operand has impossible constraints -| 240 | asm volatile ( -| | ^~~ - -Upstream-Status: Submitted [https://bugzilla.mozilla.org/show_bug.cgi?id=1761665] - -Signed-off-by: Kai Kang - -diff --git a/js/src/jit/GenerateAtomicOperations.py b/js/src/jit/GenerateAtomicOperations.py -index d8a38a0..65f91ab 100644 ---- a/js/src/jit/GenerateAtomicOperations.py -+++ b/js/src/jit/GenerateAtomicOperations.py -@@ -856,7 +856,7 @@ def generate_atomics_header(c_out): - - # Work around a GCC issue on 32-bit x86 by adding MOZ_NEVER_INLINE. - # See bug 1756347. 
-- if is_gcc and cpu_arch == "x86": -+ if is_gcc and cpu_arch in ("x86", "arm"): - contents = contents.replace("INLINE_ATTR", "MOZ_NEVER_INLINE inline") - else: - contents = contents.replace("INLINE_ATTR", "inline") diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-build-do-not-use-autoconf-s-config.sub-to-canonicali.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0001-build-do-not-use-autoconf-s-config.sub-to-canonicali.patch deleted file mode 100644 index fe905fe4d..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-build-do-not-use-autoconf-s-config.sub-to-canonicali.patch +++ /dev/null @@ -1,29 +0,0 @@ -From c860dcbe63b0e393c95bfb0131238f91aaac11d3 Mon Sep 17 00:00:00 2001 -From: Alexander Kanavin -Date: Thu, 7 Oct 2021 12:44:18 +0200 -Subject: [PATCH] build: do not use autoconf's config.sub to 'canonicalize' - names - -The outcome is that processed names no longer match our custom rust -target definitions, and the build fails. - -Upstream-Status: Inappropriate [oe-core specific] -Signed-off-by: Alexander Kanavin - ---- - build/moz.configure/init.configure | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/build/moz.configure/init.configure b/build/moz.configure/init.configure -index 81f500a0b7..0b7a2ff60f 100644 ---- a/build/moz.configure/init.configure -+++ b/build/moz.configure/init.configure -@@ -585,7 +585,7 @@ def help_host_target(help, host, target): - - def config_sub(shell, triplet): - config_sub = os.path.join(os.path.dirname(__file__), "..", "autoconf", "config.sub") -- return check_cmd_output(shell, config_sub, triplet).strip() -+ return triplet - - - @depends("--host", shell) diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-rewrite-cargo-host-linker-in-python3.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0001-rewrite-cargo-host-linker-in-python3.patch deleted file mode 100644 index 73bcffe94..000000000 --- 
a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-rewrite-cargo-host-linker-in-python3.patch +++ /dev/null @@ -1,54 +0,0 @@ -From 8e318c4e7e732327dabf51027860de45b6fb731e Mon Sep 17 00:00:00 2001 -From: Changqing Li -Date: Thu, 18 Nov 2021 07:16:39 +0000 -Subject: [PATCH] Rewrite cargo-host-linker in python3 - -Mozjs compile failed with this failure: -/bin/sh: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /build/tmp-glibc/work/corei7-64-wrs-linux/mozjs/91.1.0-r0/recipe-sysroot-native/usr/lib/libtinfo.so.5) - -Root Cause: -cargo-host-linker has /bin/sh as it's interpreter, but cargo run the cmd -with LD_LIBRARY_PATH set to recipe-sysroot-native. The host /bin/sh links -libtinfo.so.5 under recipe-sysroot-native, which needs higher libc. But -host libc is older libc. So the incompatible problem occurred. - -Solution: -rewrite cargo-host-linker in python3 - -Upstream-Status: Inappropriate [oe specific] - -Signed-off-by: Changqing Li - ---- - build/cargo-host-linker | 24 +++++++++++++++++++++--- - 1 file changed, 21 insertions(+), 3 deletions(-) - -diff --git a/build/cargo-host-linker b/build/cargo-host-linker -index cbd0472bf7..87d43ce9ec 100755 ---- a/build/cargo-host-linker -+++ b/build/cargo-host-linker -@@ -1,3 +1,21 @@ --#!/bin/sh --# See comment in cargo-linker. 
--eval ${MOZ_CARGO_WRAP_HOST_LD} ${MOZ_CARGO_WRAP_HOST_LDFLAGS} '"$@"' -+#!/usr/bin/env python3 -+ -+import os,sys -+ -+if os.environ['MOZ_CARGO_WRAP_HOST_LD'].strip(): -+ binary=os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[0] -+else: -+ sys.exit(0) -+ -+if os.environ['MOZ_CARGO_WRAP_HOST_LDFLAGS'].strip(): -+ if os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[1:]: -+ args=[os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[0]] + os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[1:] + [os.environ['MOZ_CARGO_WRAP_HOST_LDFLAGS']] + sys.argv[1:] -+ else: -+ args=[os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[0]] + [os.environ['MOZ_CARGO_WRAP_HOST_LDFLAGS']] + sys.argv[1:] -+else: -+ if os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[1:]: -+ args=[os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[0]] + os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[1:] + sys.argv[1:] -+ else: -+ args=[os.environ['MOZ_CARGO_WRAP_HOST_LD'].split()[0]] + sys.argv[1:] -+ -+os.execvp(binary, args) diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-util.configure-fix-one-occasionally-reproduced-confi.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0001-util.configure-fix-one-occasionally-reproduced-confi.patch deleted file mode 100644 index d732fdaf6..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0001-util.configure-fix-one-occasionally-reproduced-confi.patch +++ /dev/null @@ -1,48 +0,0 @@ -From 2a6f66f39b4e623428b6d282bd4cb72dde67c1a6 Mon Sep 17 00:00:00 2001 -From: Changqing Li -Date: Thu, 11 Nov 2021 16:05:54 +0800 -Subject: [PATCH] util.configure: fix one occasionally reproduced configure - failure - -error: -| checking whether the C++ compiler supports -Wno-range-loop-analysis... 
-| DEBUG: Creating /tmp/conftest.jr1qrcw3.cpp with content: -| DEBUG: | int -| DEBUG: | main(void) -| DEBUG: | { -| DEBUG: | -| DEBUG: | ; -| DEBUG: | return 0; -| DEBUG: | } -| DEBUG: Executing: aarch64-wrs-linux-g++ -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mozjs/91.1.0-r0/recipe-sysroot /tmp/conftest.jr1qrcw3.cpp -Werror -Wrange-loop-analysis -c -| DEBUG: The command returned non-zero exit status 1. -| DEBUG: Its error output was: -... -| File "/mozjs/91.1.0-r0/firefox-91.1.0/build/moz.configure/util.configure", line 239, in try_invoke_compiler -| os.remove(path) -| FileNotFoundError: [Errno 2] No such file or directory: '/tmp/conftest.jr1qrcw3.cpp' - -It should be another process that deleted this file by using -"rm -rf conftest*" inappropriately - -Upstream-Status: Submitted [https://bugzilla.mozilla.org/show_bug.cgi?id=1740667] - -Signed-off-by: Changqing Li - ---- - build/moz.configure/util.configure | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/build/moz.configure/util.configure b/build/moz.configure/util.configure -index 80c3a34522..0ac0c6b611 100644 ---- a/build/moz.configure/util.configure -+++ b/build/moz.configure/util.configure -@@ -216,7 +216,7 @@ def try_invoke_compiler(compiler, language, source, flags=None, onerror=None): - "C++": ".cpp", - }[language] - -- fd, path = mkstemp(prefix="conftest.", suffix=suffix, text=True) -+ fd, path = mkstemp(prefix="try_invoke_compiler_conftest.", suffix=suffix, text=True) - try: - source = source.encode("ascii", "replace") - diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0002-moz.configure-do-not-look-for-llvm-objdump.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0002-moz.configure-do-not-look-for-llvm-objdump.patch deleted file mode 100644 index b3d3c1ffa..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0002-moz.configure-do-not-look-for-llvm-objdump.patch +++ 
/dev/null @@ -1,44 +0,0 @@ -From 0133ddb86eb6e0741e02b0032c41468db6438530 Mon Sep 17 00:00:00 2001 -From: Alexander Kanavin -Date: Fri, 1 Oct 2021 13:01:10 +0200 -Subject: [PATCH] moz.configure: do not look for llvm-objdump - -This avoid dragging in a dependency that isn't even needed -for js builds. - -Upstream-Status: Inappropriate [oe-core specific] -Signed-off-by: Alexander Kanavin ---- - moz.configure | 18 +++++++++--------- - 1 file changed, 9 insertions(+), 9 deletions(-) - -diff --git a/moz.configure b/moz.configure -index fc66b520d0..15de9a2ee0 100755 ---- a/moz.configure -+++ b/moz.configure -@@ -785,15 +785,15 @@ - return llvm_tool - - --llvm_objdump = check_prog( -- "LLVM_OBJDUMP", -- llvm_tool("llvm-objdump"), -- what="llvm-objdump", -- when="--enable-compile-environment", -- paths=clang_search_path, --) -- --add_old_configure_assignment("LLVM_OBJDUMP", llvm_objdump) -+#llvm_objdump = check_prog( -+# "LLVM_OBJDUMP", -+# llvm_tool("llvm-objdump"), -+# what="llvm-objdump", -+# when="--enable-compile-environment", -+# paths=clang_search_path, -+#) -+# -+#add_old_configure_assignment("LLVM_OBJDUMP", llvm_objdump) - - - @depends(llvm_tool("llvm-readelf"), toolchain_prefix) - diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0003-rust.configure-do-not-try-to-find-a-suitable-upstrea.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0003-rust.configure-do-not-try-to-find-a-suitable-upstrea.patch deleted file mode 100644 index 202f12612..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0003-rust.configure-do-not-try-to-find-a-suitable-upstrea.patch +++ /dev/null @@ -1,66 +0,0 @@ -From 33ff25e2b126dd4135006139641d8b7f6e4da200 Mon Sep 17 00:00:00 2001 -From: Alexander Kanavin -Date: Fri, 1 Oct 2021 13:02:17 +0200 -Subject: [PATCH] rust.configure: do not try to find a suitable upstream target - -OE is using custom targets and so this is bound to fail. 
- -Upstream-Status: Inappropriate [oe-core specific] -Signed-off-by: Alexander Kanavin - ---- - build/moz.configure/rust.configure | 34 ++---------------------------- - 1 file changed, 2 insertions(+), 32 deletions(-) - -diff --git a/build/moz.configure/rust.configure b/build/moz.configure/rust.configure -index e64dc5d5ec..edf21baca6 100644 ---- a/build/moz.configure/rust.configure -+++ b/build/moz.configure/rust.configure -@@ -471,33 +471,7 @@ def assert_rust_compile(host_or_target, rustc_target, rustc): - def rust_host_triple( - rustc, host, compiler_info, rustc_host, rust_supported_targets, arm_target - ): -- rustc_target = detect_rustc_target( -- host, compiler_info, arm_target, rust_supported_targets -- ) -- if rustc_target != rustc_host: -- if host.alias == rustc_target: -- configure_host = host.alias -- else: -- configure_host = "{}/{}".format(host.alias, rustc_target) -- die( -- dedent( -- """\ -- The rust compiler host ({rustc}) is not suitable for the configure host ({configure}). -- -- You can solve this by: -- * Set your configure host to match the rust compiler host by editing your -- mozconfig and adding "ac_add_options --host={rustc}". 
-- * Or, install the rust toolchain for {configure}, if supported, by running -- "rustup default stable-{rustc_target}" -- """.format( -- rustc=rustc_host, -- configure=configure_host, -- rustc_target=rustc_target, -- ) -- ) -- ) -- assert_rust_compile(host, rustc_target, rustc) -- return rustc_target -+ return rustc_host - - - @depends( -@@ -507,11 +481,7 @@ def rust_host_triple( - def rust_target_triple( - rustc, target, compiler_info, rust_supported_targets, arm_target - ): -- rustc_target = detect_rustc_target( -- target, compiler_info, arm_target, rust_supported_targets -- ) -- assert_rust_compile(target, rustc_target, rustc) -- return rustc_target -+ return target.alias - - - set_config("RUST_TARGET", rust_target_triple) diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/0004-use-asm-sgidefs.h.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/0004-use-asm-sgidefs.h.patch deleted file mode 100644 index ff28654b5..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/0004-use-asm-sgidefs.h.patch +++ /dev/null @@ -1,38 +0,0 @@ -From 0ec73937b01869a701ed9b60a6a84469e035ded4 Mon Sep 17 00:00:00 2001 -From: Andre McCurdy -Date: Sat, 30 Apr 2016 15:29:06 -0700 -Subject: [PATCH] use <asm/sgidefs.h> - -Build fix for MIPS with musl libc - -The MIPS specific header <sgidefs.h> is provided by glibc and uclibc -but not by musl. Regardless of the libc, the kernel headers provide -<asm/sgidefs.h> which provides the same definitions, so use that -instead.
- -Upstream-Status: Pending - -[Vincent: -Taken from: https://sourceware.org/bugzilla/show_bug.cgi?id=21070] - -Signed-off-by: Andre McCurdy -Signed-off-by: Khem Raj -Signed-off-by: Vicente Olivert Riera - ---- - mfbt/RandomNum.cpp | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/mfbt/RandomNum.cpp b/mfbt/RandomNum.cpp -index 23381db0cd..7f127c0715 100644 ---- a/mfbt/RandomNum.cpp -+++ b/mfbt/RandomNum.cpp -@@ -52,7 +52,7 @@ extern "C" BOOLEAN NTAPI RtlGenRandom(PVOID RandomBuffer, - # elif defined(__s390__) - # define GETRANDOM_NR 349 - # elif defined(__mips__) --# include <sgidefs.h> -+# include <asm/sgidefs.h> - # if _MIPS_SIM == _MIPS_SIM_ABI32 - # define GETRANDOM_NR 4353 - # elif _MIPS_SIM == _MIPS_SIM_ABI64 diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/fix-musl-build.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/fix-musl-build.patch deleted file mode 100644 index 6905282eb..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/fix-musl-build.patch +++ /dev/null @@ -1,29 +0,0 @@ -From 1110483c6c06adf2d03ed9154a8957defc175c80 Mon Sep 17 00:00:00 2001 -From: Khem Raj -Date: Wed, 20 Oct 2021 16:21:14 -0700 -Subject: [PATCH] mozjs: Fix musl miscompiles with HAVE_THREAD_TLS_KEYWORD - -Upstream: No -Reason: mozjs60 miscompiles on musl if built with HAVE_THREAD_TLS_KEYWORD: -https://github.com/void-linux/void-packages/issues/2598 - ---- -Upstream-Status: Pending - - js/src/old-configure.in | 3 +++ - 1 file changed, 3 insertions(+) - -diff --git a/js/src/old-configure.in b/js/src/old-configure.in -index 8dfd75c63d..c82e580428 100644 ---- a/js/src/old-configure.in -+++ b/js/src/old-configure.in -@@ -839,6 +839,9 @@ if test "$ac_cv_thread_keyword" = yes; then - *-android*|*-linuxandroid*) - : - ;; -+ *-musl*) -+ : -+ ;; - *) - AC_DEFINE(HAVE_THREAD_TLS_KEYWORD) - ;; diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/musl-disable-stackwalk.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/musl-disable-stackwalk.patch deleted file mode 100644 index
a3ba469a4..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/musl-disable-stackwalk.patch +++ /dev/null @@ -1,18 +0,0 @@ -Musl does not have stack unwinder like glibc therefore -we can not assume that its always available on musl, we -do need to check for target environment as well which -could be musl or glibc. - -Upstream-Status: Pending -Signed-off-by: Khem Raj ---- a/mozglue/misc/StackWalk.cpp -+++ b/mozglue/misc/StackWalk.cpp -@@ -44,7 +44,7 @@ using namespace mozilla; - # define MOZ_STACKWALK_SUPPORTS_MACOSX 0 - #endif - --#if (defined(linux) && \ -+#if (defined(linux) && defined(__GLIBC__) && \ - ((defined(__GNUC__) && (defined(__i386) || defined(PPC))) || \ - defined(HAVE__UNWIND_BACKTRACE))) - # define MOZ_STACKWALK_SUPPORTS_LINUX 1 diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102/riscv32.patch b/meta-oe/recipes-extended/mozjs/mozjs-102/riscv32.patch deleted file mode 100644 index a6a0a9ede..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102/riscv32.patch +++ /dev/null @@ -1,60 +0,0 @@ -From 81385fe53ffde5e1636e9ace0736d914da8dbc0f Mon Sep 17 00:00:00 2001 -From: Khem Raj -Date: Sun, 24 Oct 2021 22:32:50 -0700 -Subject: [PATCH] Add RISCV32 support - -Upstream-Status: Pending -Signed-off-by: Khem Raj - ---- - build/moz.configure/init.configure | 3 +++ - python/mozbuild/mozbuild/configure/constants.py | 2 ++ - .../mozbuild/test/configure/test_toolchain_configure.py | 1 + - 3 files changed, 6 insertions(+) - -diff --git a/build/moz.configure/init.configure b/build/moz.configure/init.configure -index 0b7a2ff60f..54f8325b44 100644 ---- a/build/moz.configure/init.configure -+++ b/build/moz.configure/init.configure -@@ -524,6 +524,9 @@ def split_triplet(triplet, allow_msvc=False, allow_wasi=False): - elif cpu.startswith("aarch64"): - canonical_cpu = "aarch64" - endianness = "little" -+ elif cpu in ("riscv32", "riscv32gc"): -+ canonical_cpu = "riscv32" -+ endianness = "little" - elif cpu in ("riscv64", "riscv64gc"): - canonical_cpu = "riscv64" - 
endianness = "little" -diff --git a/python/mozbuild/mozbuild/configure/constants.py b/python/mozbuild/mozbuild/configure/constants.py -index c71460cb20..15bef93e19 100644 ---- a/python/mozbuild/mozbuild/configure/constants.py -+++ b/python/mozbuild/mozbuild/configure/constants.py -@@ -53,6 +53,7 @@ CPU_bitness = { - "mips64": 64, - "ppc": 32, - "ppc64": 64, -+ 'riscv32': 32, - "riscv64": 64, - "s390": 32, - "s390x": 64, -@@ -95,6 +96,7 @@ CPU_preprocessor_checks = OrderedDict( - ("m68k", "__m68k__"), - ("mips64", "__mips64"), - ("mips32", "__mips__"), -+ ("riscv32", "__riscv && __riscv_xlen == 32"), - ("riscv64", "__riscv && __riscv_xlen == 64"), - ("loongarch64", "__loongarch64"), - ("sh4", "__sh__"), -diff --git a/python/mozbuild/mozbuild/test/configure/test_toolchain_configure.py b/python/mozbuild/mozbuild/test/configure/test_toolchain_configure.py -index 059cde0139..4f9986eb31 100644 ---- a/python/mozbuild/mozbuild/test/configure/test_toolchain_configure.py -+++ b/python/mozbuild/mozbuild/test/configure/test_toolchain_configure.py -@@ -1192,6 +1192,7 @@ class LinuxCrossCompileToolchainTest(BaseToolchainTest): - "m68k-unknown-linux-gnu": big_endian + {"__m68k__": 1}, - "mips64-unknown-linux-gnuabi64": big_endian + {"__mips64": 1, "__mips__": 1}, - "mips-unknown-linux-gnu": big_endian + {"__mips__": 1}, -+ "riscv32-unknown-linux-gnu": little_endian + {"__riscv": 1, "__riscv_xlen": 32}, - "riscv64-unknown-linux-gnu": little_endian + {"__riscv": 1, "__riscv_xlen": 64}, - "sh4-unknown-linux-gnu": little_endian + {"__sh__": 1}, - } diff --git a/meta-oe/recipes-extended/mozjs/mozjs-102_102.15.1.bb b/meta-oe/recipes-extended/mozjs/mozjs-102_102.15.1.bb deleted file mode 100644 index 3a7b51c14..000000000 --- a/meta-oe/recipes-extended/mozjs/mozjs-102_102.15.1.bb +++ /dev/null @@ -1,82 +0,0 @@ -SUMMARY = "SpiderMonkey is Mozilla's JavaScript engine written in C/C++" -HOMEPAGE = "https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey" -LICENSE = "MPL-2.0" 
-LIC_FILES_CHKSUM = "file://LICENSE;md5=dc9b6ecd19a14a54a628edaaf23733bf" - -SRC_URI = "https://archive.mozilla.org/pub/firefox/releases/${PV}esr/source/firefox-${PV}esr.source.tar.xz \ - file://0001-Cargo.toml-do-not-abort-on-panic.patch \ - file://0002-moz.configure-do-not-look-for-llvm-objdump.patch \ - file://0003-rust.configure-do-not-try-to-find-a-suitable-upstrea.patch \ - file://0004-use-asm-sgidefs.h.patch \ - file://fix-musl-build.patch \ - file://0001-build-do-not-use-autoconf-s-config.sub-to-canonicali.patch \ - file://riscv32.patch \ - file://0001-util.configure-fix-one-occasionally-reproduced-confi.patch \ - file://0001-rewrite-cargo-host-linker-in-python3.patch \ - file://musl-disable-stackwalk.patch \ - file://0001-add-arm-to-list-of-mozinline.patch \ - " -SRC_URI[sha256sum] = "09194fb765953bc6979a35aa8834118c453b9d6060bf1ec4e134551bad740113" - -S = "${WORKDIR}/firefox-${PV}" - -inherit pkgconfig perlnative python3native rust - -DEPENDS += "zlib cargo-native python3 icu" -DEPENDS:remove:mipsarch = "icu" -DEPENDS:remove:powerpc:toolchain-clang = "icu" - -B = "${WORKDIR}/build" - -export PYTHONPATH = "${S}/build:${S}/third_party/python/PyYAML/lib3:${S}/testing/mozbase/mozfile:${S}/python/mozboot:${S}/third_party/python/distro:${S}/testing/mozbase/mozinfo:${S}/config:${S}/testing/mozbase/manifestparser:${S}/third_party/python/pytoml:${S}/testing/mozbase/mozprocess:${S}/third_party/python/six:${S}/python/mozbuild:${S}/python/mozbuild/mozbuild:${S}/python/mach:${S}/third_party/python/jsmin:${S}/python/mozversioncontrol" - -export HOST_CC = "${BUILD_CC}" -export HOST_CXX = "${BUILD_CXX}" -export HOST_CFLAGS = "${BUILD_CFLAGS}" -export HOST_CPPFLAGS = "${BUILD_CPPFLAGS}" -export HOST_CXXFLAGS = "${BUILD_CXXFLAGS}" - -export AS = "${CC}" - -export RUSTFLAGS - -JIT ?= "" -JIT:mipsarch = "--disable-jit" -ICU ?= "--with-system-icu" -ICU:mipsarch = "" -ICU:powerpc:toolchain-clang = "" - -do_configure() { - cd ${B} - python3 ${S}/configure.py \ - 
--enable-project=js \ --target=${RUST_HOST_SYS} \ --host=${BUILD_SYS} \ --prefix=${prefix} \ --libdir=${libdir} \ --disable-jemalloc \ --disable-strip \ ${JIT} \ ${ICU} -} - -do_install() { - oe_runmake 'DESTDIR=${D}' install -} - -inherit multilib_script multilib_header - -MAJ_VER = "${@oe.utils.trim_version("${PV}", 1)}" -MULTILIB_SCRIPTS += "${PN}-dev:${bindir}/js${MAJ_VER}-config" - -do_install:append() { - oe_multilib_header mozjs-${MAJ_VER}/js-config.h - sed -e 's@${STAGING_DIR_HOST}@@g' \ - -i ${D}${bindir}/js${MAJ_VER}-config - rm -f ${D}${libdir}/libjs_static.ajs - # remove the build path - sed -i -e 's@${WORKDIR}@@g' `find ${B} -name Unified_c*.c*` -} - -PACKAGES =+ "lib${BPN}" -FILES:lib${BPN} += "${libdir}/lib*"

From patchwork Fri Dec 22 15:11:06 2023
From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 7/9] gthumb: update 3.12.2 -> 3.12.4
Date: Fri, 22 Dec 2023 16:11:06 +0100
Message-Id: <20231222151108.645675-7-alex@linutronix.de>

Drop erroneous autotools assignment as well; not sure how this wasn't noticed until now.
Signed-off-by: Alexander Kanavin --- .../gthumb/{gthumb_3.12.2.bb => gthumb_3.12.4.bb} | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) rename meta-gnome/recipes-gnome/gthumb/{gthumb_3.12.2.bb => gthumb_3.12.4.bb} (81%) diff --git a/meta-gnome/recipes-gnome/gthumb/gthumb_3.12.2.bb b/meta-gnome/recipes-gnome/gthumb/gthumb_3.12.4.bb similarity index 81% rename from meta-gnome/recipes-gnome/gthumb/gthumb_3.12.2.bb rename to meta-gnome/recipes-gnome/gthumb/gthumb_3.12.4.bb index ecf8f6ab5..79db8b7d7 100644 --- a/meta-gnome/recipes-gnome/gthumb/gthumb_3.12.2.bb +++ b/meta-gnome/recipes-gnome/gthumb/gthumb_3.12.4.bb @@ -23,9 +23,8 @@ DEPENDS = " \ libsecret \ " -GNOMEBASEBUILDCLASS = "autotools" inherit features_check gnomebase gnome-help gsettings itstool mime-xdg -SRC_URI[archive.sha256sum] = "97f8afe522535216541ebbf1e3b546d12a6beb38a8f0eb85f26e676934aad425" +SRC_URI[archive.sha256sum] = "add693ac0aeb9a30d829ba03a06208289d3f6868dc3b02573549e88190c794e8" FILES:${PN} += "${datadir}/metainfo"

From patchwork Fri Dec 22 15:11:07 2023
From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 8/9] flatpak: do not rely on executables from the host
Date: Fri, 22 Dec 2023 16:11:07 +0100
Message-Id: <20231222151108.645675-8-alex@linutronix.de>

This is not how yocto builds work: any needed executables should come from the build itself, with limited exceptions listed in HOSTTOOLS. flatpak is entirely capable of building without requiring them upfront.
Signed-off-by: Alexander Kanavin --- meta-oe/recipes-extended/flatpak/flatpak_1.15.6.bb | 2 -- 1 file changed, 2 deletions(-) diff --git a/meta-oe/recipes-extended/flatpak/flatpak_1.15.6.bb b/meta-oe/recipes-extended/flatpak/flatpak_1.15.6.bb index 0ee53afb6..caa353bb8 100644 --- a/meta-oe/recipes-extended/flatpak/flatpak_1.15.6.bb +++ b/meta-oe/recipes-extended/flatpak/flatpak_1.15.6.bb @@ -43,8 +43,6 @@ RDEPENDS:${PN} = " \ xdg-dbus-proxy \ " -EXTRA_OEMESON += "-Dsystem_dbus_proxy=${bindir}/xdg-dbus-proxy -Dsystem_bubblewrap=${bindir}/bwrap" - GIR_MESON_OPTION = "gir" GIR_MESON_ENABLE_FLAG = 'enabled' GIR_MESON_DISABLE_FLAG = 'disabled'

From patchwork Fri Dec 22 15:11:08 2023
From: Alexander Kanavin
To: openembedded-devel@lists.openembedded.org
Cc: Alexander Kanavin
Subject: [PATCH 9/9] bolt: package systemd units
Date: Fri, 22 Dec 2023 16:11:08 +0100
Message-Id: <20231222151108.645675-9-alex@linutronix.de>

This wasn't seen because the recipe is enabled only when systemd and polkit are both in distro features.

Signed-off-by: Alexander Kanavin --- meta-oe/recipes-bsp/bolt/bolt_0.9.6.bb | 1 + 1 file changed, 1 insertion(+) diff --git a/meta-oe/recipes-bsp/bolt/bolt_0.9.6.bb b/meta-oe/recipes-bsp/bolt/bolt_0.9.6.bb index 860cb8381..4688ae860 100644 --- a/meta-oe/recipes-bsp/bolt/bolt_0.9.6.bb +++ b/meta-oe/recipes-bsp/bolt/bolt_0.9.6.bb @@ -18,4 +18,5 @@ inherit cmake pkgconfig meson features_check FILES:${PN} += "${datadir}/dbus-1/* \ ${datadir}/polkit-1/* \ + ${libdir}/systemd/* \ "