[2/3,v3] scripts/test-case-mgmt: store test result and reporting

Submitted by Yeoh Ee Peng on Jan. 4, 2019, 6:46 a.m. | Patch ID: 157569

Details

Message ID 1546584363-72836-3-git-send-email-ee.peng.yeoh@intel.com
State New

Commit Message

Yeoh Ee Peng Jan. 4, 2019, 6:46 a.m.
These scripts were developed as an alternative testcase management
tool to Testopia. Using these scripts, user can manage the
testresults.json files generated by oeqa automated tests. Using the
"store" operation, user can store multiple groups of test result each
into individual git branch. Within each git branch, user can store
multiple testresults.json files under different directories (eg.
categorize directory by selftest-<distro>, runtime-<image>-<machine>).
Then, using the "report" operation, user can view the test result
summary for all available testresults.json files being stored that
were grouped by directory and test configuration.

The "report" operation expects the testresults.json file to use the
json format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>"
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            }
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>"
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            }
        }
    }
}
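As an illustration, a minimal testresults.json in this format might look like the following sketch; the configuration keys and test case names here are hypothetical, not taken from an actual oeqa run.

```python
import json

# Hypothetical example of a testresults.json payload in the format the
# "report" operation expects; all names below are illustrative only.
testresults = {
    "runtime-core-image-sato-qemux86": {
        "configuration": {
            "TEST_TYPE": "runtime",
            "IMAGE_BASENAME": "core-image-sato",
            "MACHINE": "qemux86"
        },
        "result": {
            "ping.PingTest.test_ping": {"status": "PASSED", "log": ""},
            "ssh.SSHTest.test_ssh": {"status": "FAILED",
                                     "log": "connection refused"}
        }
    }
}

# Serialize the structure the same way a testresults.json file would store it.
print(json.dumps(testresults, indent=4))
```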

To use these scripts, first source the oe environment, then run the
entry point script to see the help output.
    $ test-case-mgmt

To store test results from oeqa automated tests, execute:
    $ test-case-mgmt store <source_dir> <git_branch>
By default, test results will be stored at <top_dir>/testresults.

To store test results from oeqa automated tests under a specific
directory, execute:
    $ test-case-mgmt store <source_dir> <git_branch> -s <sub_directory>

To view the test report, execute:
    $ test-case-mgmt report <git_branch>

These scripts depend on scripts/oe-git-archive, which fails if the
gitpython package is not installed. Refer to [YOCTO# 13082] for more
detail.

[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 scripts/lib/testcasemgmt/__init__.py               |   0
 scripts/lib/testcasemgmt/gitstore.py               | 172 +++++++++++++++++++++
 scripts/lib/testcasemgmt/report.py                 | 136 ++++++++++++++++
 scripts/lib/testcasemgmt/store.py                  |  40 +++++
 .../template/test_report_full_text.txt             |  33 ++++
 scripts/test-case-mgmt                             |  96 ++++++++++++
 6 files changed, 477 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt


diff --git a/scripts/lib/testcasemgmt/__init__.py b/scripts/lib/testcasemgmt/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/scripts/lib/testcasemgmt/gitstore.py b/scripts/lib/testcasemgmt/gitstore.py
new file mode 100644
index 0000000..19ff28f
--- /dev/null
+++ b/scripts/lib/testcasemgmt/gitstore.py
@@ -0,0 +1,172 @@ 
+# test case management tool - store test result & log to git repository
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import tempfile
+import os
+import subprocess
+import shutil
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+
+class GitStore(object):
+
+    def __init__(self, git_dir, git_branch):
+        self.git_dir = git_dir
+        self.git_branch = git_branch
+
+    def _git_init(self):
+        return GitRepo(self.git_dir, is_topdir=True)
+
+    def _run_git_cmd(self, repo, cmd):
+        try:
+            output = repo.run_cmd(cmd)
+            return True, output
+        except GitError:
+            return False, None
+
+    def check_if_git_dir_exist(self, logger):
+        if not os.path.exists('%s/.git' % self.git_dir):
+            logger.debug('Could not find destination git directory: %s' % self.git_dir)
+            return False
+        logger.debug('Found destination git directory: %s' % self.git_dir)
+        return True
+
+    def checkout_git_dir(self, logger):
+        repo = self._git_init()
+        cmd = 'checkout %s' % self.git_branch
+        (status, output) = self._run_git_cmd(repo, cmd)
+        if not status:
+            logger.debug('Could not find git branch: %s' % self.git_branch)
+            return False
+        logger.debug('Found git branch: %s' % self.git_branch)
+        return status
+
+    def _check_if_need_sub_dir(self, logger, git_sub_dir):
+        if len(git_sub_dir) > 0:
+            logger.debug('Need to store into sub dir: %s' % git_sub_dir)
+            return True
+        logger.debug('No need to store into sub dir')
+        return False
+
+    def _check_if_sub_dir_exist(self, logger, git_sub_dir):
+        if os.path.exists(os.path.join(self.git_dir, git_sub_dir)):
+            logger.debug('Found existing sub directory: %s' % os.path.join(self.git_dir, git_sub_dir))
+            return True
+        logger.debug('Could not find existing sub directory: %s' % os.path.join(self.git_dir, git_sub_dir))
+        return False
+
+    def _check_if_testresults_file_exist(self, logger, file_name):
+        if os.path.exists(os.path.join(self.git_dir, file_name)):
+            logger.debug('Found existing %s file inside: %s' % (file_name, self.git_dir))
+            return True
+        logger.debug('Could not find %s file inside: %s' % (file_name, self.git_dir))
+        return False
+
+    def _check_if_need_overwrite_existing(self, logger, overwrite_result):
+        if overwrite_result:
+            logger.debug('Overwriting existing testresult')
+        else:
+        logger.error('Skipped storing test result as it already exists. '
+                     'Specify the overwrite argument if you wish to delete the existing test result and store again.')
+        return overwrite_result
+
+    def _create_temporary_workspace_dir(self):
+        return tempfile.mkdtemp(prefix='testresultlog.')
+
+    def _remove_temporary_workspace_dir(self, workspace_dir):
+        return subprocess.run(["rm", "-rf",  workspace_dir])
+
+    def _oe_copy_files(self, logger, source_dir, destination_dir):
+        from oe.path import copytree
+        if os.path.exists(source_dir):
+            logger.debug('Copying test result from %s to %s' % (source_dir, destination_dir))
+            copytree(source_dir, destination_dir)
+        else:
+            logger.error('Could not find the source directory: %s' % source_dir)
+
+    def _copy_files(self, logger, source_dir, destination_dir, copy_ignore=None):
+        from shutil import copytree
+        if os.path.exists(source_dir):
+            logger.debug('Copying test result from %s to %s' % (source_dir, destination_dir))
+            copytree(source_dir, destination_dir, ignore=copy_ignore)
+        else:
+            logger.error('Could not find the source directory: %s' % source_dir)
+
+    def _get_commit_subject_and_body(self, git_sub_dir):
+        commit_msg_subject = 'Store %s from {hostname}' % os.path.join(self.git_dir, git_sub_dir)
+        commit_msg_body = 'git dir: %s\nsub dir list: %s\nhostname: {hostname}' % (self.git_dir, git_sub_dir)
+        return commit_msg_subject, commit_msg_body
+
+    def _store_files_to_git(self, logger, file_dir, commit_msg_subject, commit_msg_body):
+        logger.debug('Storing test result into git repository (%s) and branch (%s)'
+                     % (self.git_dir, self.git_branch))
+        return subprocess.run(["oe-git-archive",
+                               file_dir,
+                               "-g", self.git_dir,
+                               "-b", self.git_branch,
+                               "--commit-msg-subject", commit_msg_subject,
+                               "--commit-msg-body", commit_msg_body])
+
+    def _store_files_to_new_git(self, logger, source_dir, git_sub_dir):
+        logger.debug('Could not find destination git directory (%s) or git branch (%s)' %
+                     (self.git_dir, self.git_branch))
+        logger.debug('Storing files to new git or branch')
+        dest_top_dir = self._create_temporary_workspace_dir()
+        dest_sub_dir = os.path.join(dest_top_dir, git_sub_dir)
+        self._oe_copy_files(logger, source_dir, dest_sub_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body(git_sub_dir)
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_top_dir)
+
+    def _store_files_into_sub_dir_of_existing_git(self, logger, source_dir, git_sub_dir):
+        from shutil import ignore_patterns
+        logger.debug('Storing files to existing git with sub directory')
+        dest_ori_dir = self._create_temporary_workspace_dir()
+        dest_top_dir = os.path.join(dest_ori_dir, 'top_dir')
+        self._copy_files(logger, self.git_dir, dest_top_dir, copy_ignore=ignore_patterns('.git'))
+        dest_sub_dir = os.path.join(dest_top_dir, git_sub_dir)
+        self._oe_copy_files(logger, source_dir, dest_sub_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body(git_sub_dir)
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_ori_dir)
+
+    def _store_files_into_existing_git(self, logger, source_dir):
+        from shutil import ignore_patterns
+        logger.debug('Storing files to existing git without sub directory')
+        dest_ori_dir = self._create_temporary_workspace_dir()
+        dest_top_dir = os.path.join(dest_ori_dir, 'top_dir')
+        self._copy_files(logger, self.git_dir, dest_top_dir, copy_ignore=ignore_patterns('.git'))
+        self._oe_copy_files(logger, source_dir, dest_top_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body('')
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_ori_dir)
+
+    def store_test_result(self, logger, source_dir, git_sub_dir, overwrite_result):
+        if self.check_if_git_dir_exist(logger) and self.checkout_git_dir(logger):
+            if self._check_if_need_sub_dir(logger, git_sub_dir):
+                if self._check_if_sub_dir_exist(logger, git_sub_dir):
+                    if self._check_if_need_overwrite_existing(logger, overwrite_result):
+                        shutil.rmtree(os.path.join(self.git_dir, git_sub_dir))
+                        self._store_files_into_sub_dir_of_existing_git(logger, source_dir, git_sub_dir)
+                else:
+                    self._store_files_into_sub_dir_of_existing_git(logger, source_dir, git_sub_dir)
+            else:
+                if self._check_if_testresults_file_exist(logger, 'testresults.json'):
+                    if self._check_if_need_overwrite_existing(logger, overwrite_result):
+                        self._store_files_into_existing_git(logger, source_dir)
+                else:
+                    self._store_files_into_existing_git(logger, source_dir)
+        else:
+            self._store_files_to_new_git(logger, source_dir, git_sub_dir)
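For reference, the commit message built by `_get_commit_subject_and_body` keeps a literal `{hostname}` placeholder, which oe-git-archive substitutes later. A minimal standalone sketch of that helper (outside the class, with made-up paths):

```python
import os

def get_commit_subject_and_body(git_dir, git_sub_dir):
    # Mirrors GitStore._get_commit_subject_and_body: the literal
    # "{hostname}" placeholder is left for oe-git-archive to expand.
    subject = 'Store %s from {hostname}' % os.path.join(git_dir, git_sub_dir)
    body = 'git dir: %s\nsub dir list: %s\nhostname: {hostname}' % (git_dir, git_sub_dir)
    return subject, body

# Example with hypothetical paths.
subject, body = get_commit_subject_and_body('/tmp/testresults', 'selftest-poky')
print(subject)
```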
diff --git a/scripts/lib/testcasemgmt/report.py b/scripts/lib/testcasemgmt/report.py
new file mode 100644
index 0000000..7c9c440
--- /dev/null
+++ b/scripts/lib/testcasemgmt/report.py
@@ -0,0 +1,136 @@ 
+import os
+import glob
+import json
+from testcasemgmt.gitstore import GitStore
+
+class TextTestReport(object):
+
+    def _get_test_result_files(self, git_dir, excludes, test_result_file):
+        testresults = []
+        for root, dirs, files in os.walk(git_dir, topdown=True):
+            dirs[:] = [d for d in dirs if d not in excludes]
+            for name in files:
+                if name == test_result_file:
+                    testresults.append(os.path.join(root, name))
+        return testresults
+
+    def _load_json_test_results(self, file):
+        if os.path.exists(file):
+            with open(file, "r") as f:
+                return json.load(f)
+        else:
+            return None
+
+    def _map_raw_test_result_to_predefined_list(self, testresult):
+        passed_list = ['PASSED', 'passed']
+        failed_list = ['FAILED', 'failed', 'ERROR', 'error']
+        skipped_list = ['SKIPPED', 'skipped']
+        test_result = {'passed': 0, 'failed': 0, 'skipped': 0, 'failed_testcases': []}
+
+        result = testresult["result"]
+        for testcase in result.keys():
+            test_status = result[testcase]["status"]
+            if test_status in passed_list:
+                test_result['passed'] += 1
+            elif test_status in failed_list:
+                test_result['failed'] += 1
+                test_result['failed_testcases'].append(testcase)
+            elif test_status in skipped_list:
+                test_result['skipped'] += 1
+        return test_result
+
+    def _compute_test_result_percentage(self, test_result):
+        total_tested = test_result['passed'] + test_result['failed'] + test_result['skipped']
+        test_result['passed_percent'] = 0
+        test_result['failed_percent'] = 0
+        test_result['skipped_percent'] = 0
+        if total_tested > 0:
+            test_result['passed_percent'] = format(test_result['passed']/total_tested * 100, '.2f')
+            test_result['failed_percent'] = format(test_result['failed']/total_tested * 100, '.2f')
+            test_result['skipped_percent'] = format(test_result['skipped']/total_tested * 100, '.2f')
+
+    def _convert_test_result_to_string(self, test_result):
+        test_result['passed_percent'] = str(test_result['passed_percent'])
+        test_result['failed_percent'] = str(test_result['failed_percent'])
+        test_result['skipped_percent'] = str(test_result['skipped_percent'])
+        test_result['passed'] = str(test_result['passed'])
+        test_result['failed'] = str(test_result['failed'])
+        test_result['skipped'] = str(test_result['skipped'])
+        if 'idle' in test_result:
+            test_result['idle'] = str(test_result['idle'])
+        if 'idle_percent' in test_result:
+            test_result['idle_percent'] = str(test_result['idle_percent'])
+        if 'complete' in test_result:
+            test_result['complete'] = str(test_result['complete'])
+        if 'complete_percent' in test_result:
+            test_result['complete_percent'] = str(test_result['complete_percent'])
+
+    def _compile_test_result(self, testresult):
+        test_result = self._map_raw_test_result_to_predefined_list(testresult)
+        self._compute_test_result_percentage(test_result)
+        self._convert_test_result_to_string(test_result)
+        return test_result
+
+    def _get_test_component(self, git_dir, file_dir):
+        test_component = 'None'
+        if git_dir != os.path.dirname(file_dir):
+            test_component = file_dir.replace(git_dir + '/', '')
+        return test_component
+
+    def _get_max_string_len(self, test_result_list, key, default_max_len):
+        max_len = default_max_len
+        for test_result in test_result_list:
+            value_len = len(test_result[key])
+            if value_len > max_len:
+                max_len = value_len
+        return max_len
+
+    def _render_text_test_report(self, template_file_name, test_result_list, max_len_component, max_len_config):
+        from jinja2 import Environment, FileSystemLoader
+        script_path = os.path.dirname(os.path.realpath(__file__))
+        file_loader = FileSystemLoader(script_path + '/template')
+        env = Environment(loader=file_loader, trim_blocks=True)
+        template = env.get_template(template_file_name)
+        output = template.render(test_reports=test_result_list,
+                                 max_len_component=max_len_component,
+                                 max_len_config=max_len_config)
+        print('Printing text-based test report:')
+        print(output)
+
+    def view_test_report(self, logger, git_dir):
+        test_result_list = []
+        for test_result_file in self._get_test_result_files(git_dir, ['.git'], 'testresults.json'):
+            logger.debug('Computing test result for test result file: %s' % test_result_file)
+            testresults = self._load_json_test_results(test_result_file)
+            for testresult_key in testresults.keys():
+                test_result = self._compile_test_result(testresults[testresult_key])
+                test_result['test_component'] = self._get_test_component(git_dir, test_result_file)
+                test_result['test_configuration'] = testresult_key
+                test_result['test_component_configuration'] = '%s_%s' % (test_result['test_component'],
+                                                                         test_result['test_configuration'])
+                test_result_list.append(test_result)
+        max_len_component = self._get_max_string_len(test_result_list, 'test_component', len('test_component'))
+        max_len_config = self._get_max_string_len(test_result_list, 'test_configuration', len('test_configuration'))
+        self._render_text_test_report('test_report_full_text.txt', test_result_list, max_len_component, max_len_config)
+
+def report(args, logger):
+    gitstore = GitStore(args.git_dir, args.git_branch)
+    if gitstore.check_if_git_dir_exist(logger):
+        if gitstore.checkout_git_dir(logger):
+            logger.debug('Checkout git branch: %s' % args.git_branch)
+            testreport = TextTestReport()
+            testreport.view_test_report(logger, args.git_dir)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('report', help='report test result summary',
+                                         description='report text-based test result summary from the source git '
+                                                     'directory with the given git branch',
+                                         group='report')
+    parser_build.set_defaults(func=report)
+    parser_build.add_argument('git_branch', help='git branch to be used to compute test summary report')
+    parser_build.add_argument('-d', '--git-dir', default='',
+                              help='(optional) source directory to be used as git repository '
+                                   'to compute test report where default location for source directory '
+                                   'will be <top_dir>/testresults')
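The status tallying and percentage computation above can be sketched in isolation as follows; this is a condensed illustration of `_map_raw_test_result_to_predefined_list` plus `_compute_test_result_percentage`, not the patch's exact code, and the test case names are invented.

```python
def summarize(result):
    # Tally statuses into passed/failed/skipped, treating ERROR as failed
    # (as report.py does), then compute percentages to two decimal places.
    summary = {'passed': 0, 'failed': 0, 'skipped': 0, 'failed_testcases': []}
    for testcase, data in result.items():
        status = data['status']
        if status in ('PASSED', 'passed'):
            summary['passed'] += 1
        elif status in ('FAILED', 'failed', 'ERROR', 'error'):
            summary['failed'] += 1
            summary['failed_testcases'].append(testcase)
        elif status in ('SKIPPED', 'skipped'):
            summary['skipped'] += 1
    total = summary['passed'] + summary['failed'] + summary['skipped']
    for key in ('passed', 'failed', 'skipped'):
        summary[key + '_percent'] = format(summary[key] / total * 100, '.2f') if total else 0
    return summary

# Hypothetical "result" section of one testresult entry.
summary = summarize({
    'a.test_one': {'status': 'PASSED', 'log': ''},
    'b.test_two': {'status': 'ERROR', 'log': 'boom'},
    'c.test_three': {'status': 'SKIPPED', 'log': ''},
})
print(summary['passed'], summary['failed_percent'])
```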
diff --git a/scripts/lib/testcasemgmt/store.py b/scripts/lib/testcasemgmt/store.py
new file mode 100644
index 0000000..c80f7be
--- /dev/null
+++ b/scripts/lib/testcasemgmt/store.py
@@ -0,0 +1,40 @@ 
+# test case management tool - store test result
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+from testcasemgmt.gitstore import GitStore
+
+def store(args, logger):
+    gitstore = GitStore(args.git_dir, args.git_branch)
+    gitstore.store_test_result(logger, args.source_dir, args.git_sub_dir, args.overwrite_result)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('store', help='store test result files into git repository',
+                                         description='store the testresults.json file from the source directory into '
+                                                     'the destination git repository with the given git branch',
+                                         group='store')
+    parser_build.set_defaults(func=store)
+    parser_build.add_argument('source_dir',
+                              help='source directory that contain the test result files to be stored')
+    parser_build.add_argument('git_branch', help='git branch (new or existing) used to store the test result files')
+    parser_build.add_argument('-d', '--git-dir', default='',
+                              help='(optional) destination directory (new or existing) to be used as git repository '
+                                   'to store the test result files from the source directory where '
+                                   'default location for destination directory will be <top_dir>/testresults')
+    parser_build.add_argument('-s', '--git-sub-dir', default='',
+                              help='(optional) additional sub directory (new or existing) under the destination '
+                                   'directory (git-dir) where it will be used to hold the test result files, used '
+                                   'this if storing multiple test result files')
+    parser_build.add_argument('-o', '--overwrite-result', action='store_true',
+                              help='(optional) overwrite existing test result file with new file provided')
diff --git a/scripts/lib/testcasemgmt/template/test_report_full_text.txt b/scripts/lib/testcasemgmt/template/test_report_full_text.txt
new file mode 100644
index 0000000..2cec64c
--- /dev/null
+++ b/scripts/lib/testcasemgmt/template/test_report_full_text.txt
@@ -0,0 +1,33 @@ 
+==============================================================================================================
+Test Report (Count of passed, failed, skipped group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'test_component'.ljust(max_len_component) }} | {{ 'test_configuration'.ljust(max_len_config) }} | {{ 'passed'.ljust(10) }} | {{ 'failed'.ljust(10) }} | {{ 'skipped'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+{{ report.test_component.ljust(max_len_component) }} | {{ report.test_configuration.ljust(max_len_config) }} | {{ report.passed.ljust(10) }} | {{ report.failed.ljust(10) }} | {{ report.skipped.ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Percent of passed, failed, skipped group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'test_component'.ljust(max_len_component) }} | {{ 'test_configuration'.ljust(max_len_config) }} | {{ 'passed_%'.ljust(10) }} | {{ 'failed_%'.ljust(10) }} | {{ 'skipped_%'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+{{ report.test_component.ljust(max_len_component) }} | {{ report.test_configuration.ljust(max_len_config) }} | {{ report.passed_percent.ljust(10) }} | {{ report.failed_percent.ljust(10) }} | {{ report.skipped_percent.ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Failed test cases group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+test_component | test_configuration : {{ report.test_component }} | {{ report.test_configuration }}
+{% for testcase in report.failed_testcases %}
+    {{ testcase }}
+{% endfor %}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
\ No newline at end of file
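The template keeps its pipe-separated columns aligned by left-justifying each cell with `str.ljust` to the widest value in that column. The same idea in plain Python, with made-up row data:

```python
def render_row(values, widths):
    # Left-justify each cell to its column width, as the Jinja2 template
    # does with str.ljust, then join with ' | ' separators.
    return ' | '.join(value.ljust(width) for value, width in zip(values, widths))

# Column widths: header labels set the minimum, counts get a fixed 10.
widths = [len('test_component'), len('test_configuration'), 10, 10, 10]
header = render_row(['test_component', 'test_configuration', 'passed', 'failed', 'skipped'], widths)
row = render_row(['selftest', 'qemux86', '120', '2', '5'], widths)
print(header)
print(row)
```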
diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
new file mode 100755
index 0000000..0df305d
--- /dev/null
+++ b/scripts/test-case-mgmt
@@ -0,0 +1,96 @@ 
+#!/usr/bin/env python3
+#
+# test case management tool - store test result, report test result summary,
+# & manual test execution
+#
+# As part of the initiative to provide LITE version Test Case Management System
+# with command-line to replace Testopia.
+# test-case-mgmt script was designed as part of the helper script for below purpose:
+# 1. To store test result inside git repository
+# 2. To report text-based test result summary
+# 3. (Future) To execute manual test cases
+#
+# To look for help information.
+#    $ test-case-mgmt
+#
+# To store test result, execute the below
+#    $ test-case-mgmt store <source_dir> <git_branch>
+#
+# To report test result summary, execute the below
+#     $ test-case-mgmt report <git_branch>
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+
+import os
+import sys
+import argparse
+import logging
+script_path = os.path.dirname(os.path.realpath(__file__))
+lib_path = script_path + '/lib'
+sys.path = sys.path + [lib_path]
+import argparse_oe
+import scriptutils
+import testcasemgmt.store
+import testcasemgmt.report
+logger = scriptutils.logger_create('test-case-mgmt')
+
+def _validate_user_input_arguments(args):
+    if hasattr(args, "source_dir"):
+        if not os.path.isdir(args.source_dir):
+            logger.error('source_dir argument needs to be a directory : %s' % args.source_dir)
+            return False
+    if hasattr(args, "git_sub_dir"):
+        if '/' in args.git_sub_dir:
+            logger.error('git_sub_dir argument cannot contain / : %s' % args.git_sub_dir)
+            return False
+        if '\\' in r"%r" % args.git_sub_dir:
+            logger.error('git_sub_dir argument cannot contain \\ : %r' % args.git_sub_dir)
+            return False
+    return True
+
+def _set_default_arg_value(args):
+    if hasattr(args, "git_dir"):
+        if args.git_dir == '':
+            base_path = script_path + '/..'
+            args.git_dir = os.path.join(os.path.abspath(base_path), 'testresults')
+        logger.debug('Set git_dir argument: %s' % args.git_dir)
+
+def main():
+    parser = argparse_oe.ArgumentParser(description="OpenEmbedded test case management tool.",
+                                        epilog="Use %(prog)s <subcommand> --help to get help on a specific command")
+    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
+    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
+    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
+    subparsers.required = True
+    subparsers.add_subparser_group('store', 'store test result', 200)
+    testcasemgmt.store.register_commands(subparsers)
+    subparsers.add_subparser_group('report', 'report test result summary', 100)
+    testcasemgmt.report.register_commands(subparsers)
+    args = parser.parse_args()
+    if args.debug:
+        logger.setLevel(logging.DEBUG)
+    elif args.quiet:
+        logger.setLevel(logging.ERROR)
+
+    if not _validate_user_input_arguments(args):
+        return -1
+    _set_default_arg_value(args)
+
+    try:
+        ret = args.func(args, logger)
+    except argparse_oe.ArgumentUsageError as ae:
+        parser.error_subcommand(ae.message, ae.subcommand)
+    return ret
+
+if __name__ == "__main__":
+    sys.exit(main())
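The `git_sub_dir` validation above rejects both forward and backward slashes so the sub directory stays a single level; the `r"%r" %` trick is just a `repr()` call that makes any backslash in the input visible. A small standalone sketch of that check:

```python
def validate_git_sub_dir(git_sub_dir):
    # Mirror _validate_user_input_arguments: reject path separators so the
    # sub directory stays a single level under git_dir.
    if '/' in git_sub_dir:
        return False
    # repr() exposes backslashes in the input string, equivalent to the
    # script's '\\' in r"%r" % args.git_sub_dir check.
    if '\\' in repr(git_sub_dir):
        return False
    return True

print(validate_git_sub_dir('selftest-poky'))
```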

Comments

Richard Purdie Jan. 21, 2019, 2:25 p.m.
On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management
> tool to Testopia. Using these scripts, user can manage the
> testresults.json files generated by oeqa automated tests. Using the
> "store" operation, user can store multiple groups of test result each
> into individual git branch. Within each git branch, user can store
> multiple testresults.json files under different directories (eg.
> categorize directory by selftest-<distro>, runtime-<image>-
> <machine>).
> Then, using the "report" operation, user can view the test result
> summary for all available testresults.json files being stored that
> were grouped by directory and test configuration.
>
> This scripts depends on scripts/oe-git-archive where it was
> facing error if gitpython package was not installed. Refer to
> [YOCTO# 13082] for more detail.

Thanks for the patches. These are a lot more readable than the previous
versions and the code quality is much better which in turn helped
review!

I experimented with the code a bit. I'm fine with the manual test
execution piece of this, I do have some questions/concerns with the
result storage/reporting piece though.

What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test
results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per
test run? or ???
- Are branches used for each release series (master, thud, sumo etc?)
Basically, the layout we'd use to import the autobuilder results for
each master run for example remains unclear to me, or how we'd look up
the status of a given commit.

The code doesn't support comparison of two sets of test results (which
tests were added/removed? passed when previously failed? failed when
previously passed?)

The code also doesn't allow investigation of test report "subdata" like
looking at the ptest results, comparing them to previous runs, showing
the logs for passed/failed ptests.

There is also the question of json build performance data.

The idea behind this code is to give us a report which allows us to
decide on the QA state of a given set of testreport data. I'm just not
sure this patch set lets us do that, or gives us a path to allow us to
do that either.

Cheers,

Richard
Yeoh Ee Peng Jan. 22, 2019, 9:44 a.m.
Hi Richard,

Following your recent feedback on Pythonic style, we have revised these scripts in the hope of improving code readability and ease of maintenance. New functionality was also developed in the same style.

The latest patches were submitted today at the URLs below.
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Added new features: merging multiple testresults.json files, and regression analysis between two specified testresults.json files
2. Added selftests covering the merge, store, report and regression functionality
3. Revised the code style to be more Pythonic

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The target layout is a specific git branch for each tested commit, with the file directories based on the existing Autobuilder results archive. Assuming the store command is executed on the Autobuilder machine holding the testresults.json files in a predefined directory, simply run: $ resultstool store <source_dir> <git_branch>, where source_dir is the top directory used by the Autobuilder to archive all testresults.json files, and git_branch is the QA cycle for the currently tested commit.

The first run of "resultstool store" will create a git repository under the <poky>/<build>/ directory. To update the stored files, run: $ resultstool store <source_dir> <git_branch> -d <poky>/<build>/<testresults_datetime>.
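To illustrate the layout being described, a rough sketch of how a store-style traversal could collect testresults.json files grouped by their Autobuilder subdirectory. The function and the demo directory names are hypothetical, not the patch's actual implementation:

```python
import os
import tempfile

def collect_testresults(source_dir):
    """Map each subdirectory (e.g. selftest-<distro>, runtime-<image>-<machine>)
    to the testresults.json files found beneath it."""
    grouped = {}
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            if name == "testresults.json":
                rel_dir = os.path.relpath(root, source_dir)
                grouped.setdefault(rel_dir, []).append(os.path.join(root, name))
    return grouped

# Build a tiny demo tree mirroring the layout described above.
demo = tempfile.mkdtemp()
for sub in ("selftest-poky", "runtime-core-image-sato-qemux86"):
    os.makedirs(os.path.join(demo, sub))
    with open(os.path.join(demo, sub, "testresults.json"), "w") as f:
        f.write("{}")

grouped = collect_testresults(demo)
```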

2. The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

Assuming the results from a particular tested commit were merged into a single file (using the existing "merge" functionality), the user can use the newly added "regression" functionality to compare the result statuses of two testresults.json files. Based on the configuration data for each result_id set, the comparison logic selects results with the same configuration for comparison. More advanced regression analysis and automation can be built on the current code base.
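The matching-by-configuration comparison described here can be sketched roughly as follows; the function name and the simplified sample data are illustrative, not the patch's code:

```python
def find_regressions(base, target):
    """Compare two testresults.json-style dicts and report testcases whose
    status changed between result sets sharing the same configuration."""
    regressions = []
    for base_entry in base.values():
        for target_entry in target.values():
            # Only compare runs with identical configuration data.
            if base_entry["configuration"] != target_entry["configuration"]:
                continue
            for tc, res in base_entry["result"].items():
                new = target_entry["result"].get(tc)
                if new and new["status"] != res["status"]:
                    regressions.append((tc, res["status"], new["status"]))
    return regressions

base = {"r1": {"configuration": {"MACHINE": "qemux86"},
               "result": {"t.case": {"status": "PASSED", "log": ""}}}}
target = {"r9": {"configuration": {"MACHINE": "qemux86"},
                 "result": {"t.case": {"status": "FAILED", "log": "oops"}}}}
changes = find_regressions(base, target)
```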

3. The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

This is not supported at the moment and will need further enhancement.

Please let me know if you have any questions or input. Thank you very much for your feedback and help!

Thanks,
Yeoh Ee Peng 

Yeoh Ee Peng Jan. 22, 2019, 10:19 a.m.
Sorry, I realized I had missed including the files used by oe-selftest to test the store operation.
I have submitted v5 patches that add the required files for oe-selftest -r resultstooltests.

http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278243.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278244.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278245.html
