mbox

[00/10] kernel: consolidated pull request

Message ID cover.1696017123.git.bruce.ashfield@gmail.com
State Not Applicable, archived
Headers show

Pull-request

https://git.yoctoproject.org/poky-contrib zedd/kernel

Message

Bruce Ashfield Sept. 29, 2023, 8:04 p.m. UTC
From: Bruce Ashfield <bruce.ashfield@gmail.com>

Richard,

Given where we are in the release cycle, this clearly is NOT a typical
consolidated pull request.

I've done what normally takes about three weeks in about 4 days.

With 6.4 going EOL before expected upstream, it really isn't a suitable
reference kernel for the release.

So we've decided to take on the task of getting 6.5 ready and available,
and at the same time moving the -dev kernel to v6.6. The -dev kernel
testing for 6.5 was critical for this, since I already knew the core
was sane.

Also we've never shipped purposely mismatched libc-headers in the release,
so I also took the leap to update the libc-headers to match.

I've already sent fixes to meta-oe, and there's a btrfs update in this
series to fix breakage that I found in the tightly coupled packages.

I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
for both glibc and musl for all the supported architectures.
There will be some things that break regardless, but this needs the
better coverage of the AB.

If this causes too much problems, our choices are to ship 6.4 EOLd, or
fall all the way back to 6.1.

I'll remove 6.4 from master once we've figured out the fallout from
this kernel, and which direction we are going.

Cheers,

Bruce

The following changes since commit d4f2f8269cff0e4e9a98ad1ef9c0f7b8a909d563:

  recipetool/devtool: Ensure server knows about changed files (2023-09-18 11:35:38 +0100)

are available in the Git repository at:

  https://git.yoctoproject.org/poky-contrib zedd/kernel
  https://git.yoctoproject.org/poky-contrib/log/?h=zedd/kernel

Bruce Ashfield (10):
  linux-yocto/6.4: update to v6.4.15
  linux-yocto/6.1: update to v6.1.52
  linux-yocto/6.4: update to v6.4.16
  linux-yocto/6.1: update to v6.1.53
  linux-yocto/6.1: update to v6.1.55
  linux-yocto-dev: update to v6.6-rcX
  linux-yocto: introduce 6.5 reference kernel recipes
  linux-libc-headers: uprev to v6.5
  linux-libc-headers: default to 6.5
  btrfs-progs: update to version v6.5.1

 meta/conf/distro/include/tcmode-default.inc   |    2 +-
 ...fs-tools_6.3.3.bb => btrfs-tools_6.5.1.bb} |    2 +-
 ...aders_6.4.bb => linux-libc-headers_6.5.bb} |    2 +-
 .../linux/cve-exclusion_6.1.inc               |   54 +-
 .../linux/cve-exclusion_6.4.inc               |   36 +-
 .../linux/cve-exclusion_6.5.inc               | 5072 +++++++++++++++++
 meta/recipes-kernel/linux/linux-yocto-dev.bb  |    4 +-
 .../linux/linux-yocto-rt_6.1.bb               |    6 +-
 .../linux/linux-yocto-rt_6.4.bb               |    6 +-
 .../linux/linux-yocto-rt_6.5.bb               |   48 +
 .../linux/linux-yocto-tiny_6.1.bb             |    6 +-
 .../linux/linux-yocto-tiny_6.4.bb             |    6 +-
 .../linux/linux-yocto-tiny_6.5.bb             |   33 +
 meta/recipes-kernel/linux/linux-yocto_6.1.bb  |   28 +-
 meta/recipes-kernel/linux/linux-yocto_6.4.bb  |   28 +-
 meta/recipes-kernel/linux/linux-yocto_6.5.bb  |   72 +
 16 files changed, 5343 insertions(+), 62 deletions(-)
 rename meta/recipes-devtools/btrfs-tools/{btrfs-tools_6.3.3.bb => btrfs-tools_6.5.1.bb} (98%)
 rename meta/recipes-kernel/linux-libc-headers/{linux-libc-headers_6.4.bb => linux-libc-headers_6.5.bb} (83%)
 create mode 100644 meta/recipes-kernel/linux/cve-exclusion_6.5.inc
 create mode 100644 meta/recipes-kernel/linux/linux-yocto-rt_6.5.bb
 create mode 100644 meta/recipes-kernel/linux/linux-yocto-tiny_6.5.bb
 create mode 100644 meta/recipes-kernel/linux/linux-yocto_6.5.bb

Comments

Richard Purdie Sept. 30, 2023, 11:07 a.m. UTC | #1
Hi Bruce,

On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> Given where we are in the release cycle, this clearly is NOT a typical
> consolidated pull request.
> 
> I've done what normally takes about three weeks in about 4 days.

Thanks, I know this isn't where any of us wanted to be.

> 
> With 6.4 going EOL before expected upstream, it really isn't a suitable
> reference kernel for the release.
> 
> So we've decided to take on the task of getting 6.5 ready and available,
> and at the same time moving the -dev kernel to v6.6. The -dev kernel
> testing for 6.5 was critical for this, since I already knew the core
> was sane.
> 
> Also we've never shipped purposely mismatched libc-headers in the release,
> so I also took the leap to update the libc-headers to match.

Agreed on both counts, I think we need to make 6.5 work.

> I've already sent fixes to meta-oe, and there's a btrfs update in this
> series to fix breakage that I found in the tightly coupled packages.

I think btrfs-tools was already upgraded in master?

> I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> for both glibc and musl for all the supported architectures.
> There will be some things that break regardless, but this needs the
> better coverage of the AB.
> 
> If this causes too much problems, our choices are to ship 6.4 EOLd, or
> fall all the way back to 6.1.
> 
> I'll remove 6.4 from master once we've figured out the fallout from
> this kernel, and which direction we are going.

I had some difficulties with this series since it doesn't apply against
master. The issue was that someone else had updated the kernel CVEs and
those changes weren't in your tree (nor was the btrfs upgrade). This
meant all the cve inc changes threw errors. We will likely need to
assume someone will update the CVE includes semi regularly just so we
can keep the noise on the CVE reports down.

Since we're short on time, I regenerated the series re-running the CVE
script and rebuilding that piece of each commit. I suspect now we
understand what happened we'll be able to better handle it in future.

The first autobuilder test run crashed and burned due to unrelated
patches. I've a new build running:

https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5969

Cheers,

Richard
Bruce Ashfield Sept. 30, 2023, 4:33 p.m. UTC | #2
On Sat, Sep 30, 2023 at 7:07 AM Richard Purdie <
richard.purdie@linuxfoundation.org> wrote:

> Hi Bruce,
>
> On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > Given where we are in the release cycle, this clearly is NOT a typical
> > consolidated pull request.
> >
> > I've done what normally takes about three weeks in about 4 days.
>
> Thanks, I know this isn't where any of us wanted to be.
>
> >
> > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > reference kernel for the release.
> >
> > So we've decided to take on the task of getting 6.5 ready and available,
> > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > testing for 6.5 was critical for this, since I already knew the core
> > was sane.
> >
> > Also we've never shipped purposely mismatched libc-headers in the
> release,
> > so I also took the leap to update the libc-headers to match.
>
> Agreed on both counts, I think we need to make 6.5 work.
>
> > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > series to fix breakage that I found in the tightly coupled packages.
>
> I think btrfs-tools was already upgraded in master?
>

Ah bugger. That would have saved me some time :)



>
> > I've built and booted core-image-kernel-dev, core-image-minimal,
> core-image-sato
> > for both glibc and musl for all the supported architectures.
> > There will be some things that break regardless, but this needs the
> > better coverage of the AB.
> >
> > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > fall all the way back to 6.1.
> >
> > I'll remove 6.4 from master once we've figured out the fallout from
> > this kernel, and which direction we are going.
>
> I had some difficulties with this series since it doesn't apply against
> master. The issue was that someone else had updated the kernel CVEs and
> those changes weren't in your tree (nor was the btrfs upgrade). This
> meant all the cve inc changes threw errors. We will likely need to
> assume someone will update the CVE includes semi regularly just so we
> can keep the noise on the CVE reports down.
>

That's odd. I always do a pull --rebase before sending my changes, but yet
none of them showed up  (on any of my builders, so I had 3x machines
running that queue of patches and none of them had the changes from
master).

For the kernel CVEs. They either need to be part of my kernel releases
or not. I've updated my scripts, and they'll always be updated as part
of the process. Having something / someone else update that file is
just a huge pain, and we shouldn't do that.

Bruce



> Since we're short on time, I regenerated the series re-running the CVE
> script and rebuilding that piece of each commit. I suspect now we
> understand what happened we'll be able to better handle it in future.
>
> The first autobuilder test run crashed and burned due to unrelated
> patches. I've a new build running:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5969
>
> Cheers,
>
> Richard
>
Richard Purdie Sept. 30, 2023, 4:58 p.m. UTC | #3
On Sat, 2023-09-30 at 12:33 -0400, Bruce Ashfield wrote:
> On Sat, Sep 30, 2023 at 7:07 AM Richard Purdie
> <richard.purdie@linuxfoundation.org> wrote:
> 
> > 
> > I had some difficulties with this series since it doesn't apply
> > against
> > master. The issue was that someone else had updated the kernel CVEs
> > and
> > those changes weren't in your tree (nor was the btrfs upgrade).
> > This
> > meant all the cve inc changes threw errors. We will likely need to
> > assume someone will update the CVE includes semi regularly just so
> > we
> > can keep the noise on the CVE reports down.
> > 
> 
> 
> That's odd. I always do a pull --rebase before sending my changes,
> but yet none of them showed up  (on any of my builders, so I had 3x
> machines running that queue of patches and none of them had the
> changes from master).

I don't know what happened but you were definitely not on a recent
master branch as the changes did not apply.

> For the kernel CVEs. They either need to be part of my kernel
> releases or not. I've updated my scripts, and they'll always be
> updated as part of the process. Having something / someone else
> update that file is just a huge pain, and we shouldn't do that.

The question is whether you're able to just update the CVE revisions
out of cycle with the kernel point release bumps?

With the number of CVEs coming through, the files may need updating a
little more frequently than we add new kernel point releases.

I know the plan is this "goes away" when the kernel cves repo is worked
into the cve check workflow so hopefully we don't have this for too
long.

Cheers,

Richard
Bruce Ashfield Sept. 30, 2023, 5:05 p.m. UTC | #4
On Sat, Sep 30, 2023 at 12:58 PM Richard Purdie <
richard.purdie@linuxfoundation.org> wrote:

> On Sat, 2023-09-30 at 12:33 -0400, Bruce Ashfield wrote:
> > On Sat, Sep 30, 2023 at 7:07 AM Richard Purdie
> > <richard.purdie@linuxfoundation.org> wrote:
> >
> > >
> > > I had some difficulties with this series since it doesn't apply
> > > against
> > > master. The issue was that someone else had updated the kernel CVEs
> > > and
> > > those changes weren't in your tree (nor was the btrfs upgrade).
> > > This
> > > meant all the cve inc changes threw errors. We will likely need to
> > > assume someone will update the CVE includes semi regularly just so
> > > we
> > > can keep the noise on the CVE reports down.
> > >
> >
> >
> > That's odd. I always do a pull --rebase before sending my changes,
> > but yet none of them showed up  (on any of my builders, so I had 3x
> > machines running that queue of patches and none of them had the
> > changes from master).
>
> I don't know what happened but you were definitely not on a recent
> master branch as the changes did not apply.
>
> > For the kernel CVEs. They either need to be part of my kernel
> > releases or not. I've updated my scripts, and they'll always be
> > updated as part of the process. Having something / someone else
> > update that file is just a huge pain, and we shouldn't do that.
>
> The question is whether you're able to just update the CVE revisions
> out of cycle with the kernel point release bumps?
>

I mean I could, but that's not something I want to take on. I'm not actively
monitoring the kernel CVEs, and take the fixes as they flow through
-stable and are tested in my sanity. So the only point they matter (to me)
is when a -stable bump proves to be sane enough to send to the list
with bumped SRCREVs.

I'm going to drop the part of my script that updates the CVE file when
I do a release, since the conflicts are such a hassle when I'm working
through my -stable queue. I sometimes need to hold it for a week
(or more) depending on what is broken or what part of the cycle
we are in.

It sounds like there's a better solution down the road, so me dropping
the update of the .inc file won't be an issue for long.

Bruce


>
> With the number of CVEs coming through, the files may need updating a
> little more frequently than we add new kernel point releases.
>
> I know the plan is this "goes away" when the kernel cves repo is worked
> into the cve check workflow so hopefully we don't have this for too
> long.
>
> Cheers,
>
> Richard
>
Richard Purdie Sept. 30, 2023, 5:26 p.m. UTC | #5
On Sat, 2023-09-30 at 13:05 -0400, Bruce Ashfield wrote:
> On Sat, Sep 30, 2023 at 12:58 PM Richard Purdie
> <richard.purdie@linuxfoundation.org> wrote:
> > On Sat, 2023-09-30 at 12:33 -0400, Bruce Ashfield wrote:
> > > On Sat, Sep 30, 2023 at 7:07 AM Richard Purdie
> > > <richard.purdie@linuxfoundation.org> wrote:
> > > 
> > > > 
> > > > I had some difficulties with this series since it doesn't apply
> > > > against
> > > > master. The issue was that someone else had updated the kernel
> > > > CVEs
> > > > and
> > > > those changes weren't in your tree (nor was the btrfs upgrade).
> > > > This
> > > > meant all the cve inc changes threw errors. We will likely need
> > > > to
> > > > assume someone will update the CVE includes semi regularly just
> > > > so
> > > > we
> > > > can keep the noise on the CVE reports down.
> > > > 
> > > 
> > > 
> > > That's odd. I always do a pull --rebase before sending my
> > > changes,
> > > but yet none of them showed up  (on any of my builders, so I had
> > > 3x
> > > machines running that queue of patches and none of them had the
> > > changes from master).
> > 
> > I don't know what happened but you were definitely not on a recent
> > master branch as the changes did not apply.
> > 
> > > For the kernel CVEs. They either need to be part of my kernel
> > > releases or not. I've updated my scripts, and they'll always be
> > > updated as part of the process. Having something / someone else
> > > update that file is just a huge pain, and we shouldn't do that.
> > 
> > The question is whether you're able to just update the CVE
> > revisions
> > out of cycle with the kernel point release bumps?
> > 
> 
> 
> I mean I could, but that's not something I want to take on. I'm not
> actively monitoring the kernel CVEs, and take the fixes as they flow
> through -stable and are tested in my sanity. So the only point they
> matter (to me) is when a -stable bump proves to be sane enough to
> send to the list with bumped SRCREVs.
> 
> I'm going to drop the part of my script that updates the CVE file
> when I do a release, since the conflicts are such a hassle when I'm
> working through my -stable queue. I sometimes need to hold it for a
> week (or more) depending on what is broken or what part of the cycle
> we are in.
> 
> It sounds like there's a better solution down the road, so me
> dropping the update of the .inc file won't be an issue for long.

Ok, I'll need to do it when I process your patches but I've just proven
I can do that.

Cheers,

Richard
Richard Purdie Oct. 1, 2023, 10:13 a.m. UTC | #6
On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> Given where we are in the release cycle, this clearly is NOT a typical
> consolidated pull request.
> 
> I've done what normally takes about three weeks in about 4 days.
> 
> With 6.4 going EOL before expected upstream, it really isn't a suitable
> reference kernel for the release.
> 
> So we've decided to take on the task of getting 6.5 ready and available,
> and at the same time moving the -dev kernel to v6.6. The -dev kernel
> testing for 6.5 was critical for this, since I already knew the core
> was sane.
> 
> Also we've never shipped purposely mismatched libc-headers in the release,
> so I also took the leap to update the libc-headers to match.
> 
> I've already sent fixes to meta-oe, and there's a btrfs update in this
> series to fix breakage that I found in the tightly coupled packages.
> 
> I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> for both glibc and musl for all the supported architectures.
> There will be some things that break regardless, but this needs the
> better coverage of the AB.
> 
> If this causes too much problems, our choices are to ship 6.4 EOLd, or
> fall all the way back to 6.1.
> 
> I'll remove 6.4 from master once we've figured out the fallout from
> this kernel, and which direction we are going.

I've merged this series which seemed to work fine. Given the time
constraints, I thought I'd throw some 6.5 testing at the autobuilder.
It ran into two issues. One was cryptodev, I have a patch for that in
master-next. The other were entropy boot failures on arm kvm:

https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio

[    0.796831] Key type id_resolver registered
[    0.797581] Key type id_legacy registered
[    0.798724] Key type cifs.idmap registered
[    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
[    0.809690] xor: measuring software checksum speed
[    0.811307]    8regs           : 12333 MB/sec
[    0.812862]    32regs          : 12322 MB/sec
[    0.814885]    arm64_neon      :  7851 MB/sec
[    0.815626] xor: using function: 8regs (12333 MB/sec)


-----------------------
Central error: [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
***********************

I did find this in google:

https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/

which does bisect to a change.

I'll rerun the autobuilder testing with the cryptodev patch and see if
anything else transpires.

Cheers,

Richard
Richard Purdie Oct. 1, 2023, 11:49 a.m. UTC | #7
On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie wrote:
> On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > Given where we are in the release cycle, this clearly is NOT a typical
> > consolidated pull request.
> > 
> > I've done what normally takes about three weeks in about 4 days.
> > 
> > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > reference kernel for the release.
> > 
> > So we've decided to take on the task of getting 6.5 ready and available,
> > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > testing for 6.5 was critical for this, since I already knew the core
> > was sane.
> > 
> > Also we've never shipped purposely mismatched libc-headers in the release,
> > so I also took the leap to update the libc-headers to match.
> > 
> > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > series to fix breakage that I found in the tightly coupled packages.
> > 
> > I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> > for both glibc and musl for all the supported architectures.
> > There will be some things that break regardless, but this needs the
> > better coverage of the AB.
> > 
> > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > fall all the way back to 6.1.
> > 
> > I'll remove 6.4 from master once we've figured out the fallout from
> > this kernel, and which direction we are going.
> 
> I've merged this series which seemed to work fine. Given the time
> constraints, I thought I'd throw some 6.5 testing at the autobuilder.
> It ran into two issues. One was cryptodev, I have a patch for that in
> master-next. The other were entropy boot failures on arm kvm:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
> 
> [    0.796831] Key type id_resolver registered
> [    0.797581] Key type id_legacy registered
> [    0.798724] Key type cifs.idmap registered
> [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> [    0.809690] xor: measuring software checksum speed
> [    0.811307]    8regs           : 12333 MB/sec
> [    0.812862]    32regs          : 12322 MB/sec
> [    0.814885]    arm64_neon      :  7851 MB/sec
> [    0.815626] xor: using function: 8regs (12333 MB/sec)
> 
> 
> -----------------------
> Central error: [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> ***********************
> 
> I did find this in google:
> 
> https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
> 
> which does bisect to a change.
> 
> I'll rerun the autobuilder testing with the cryptodev patch and see if
> anything else transpires.

The LTP on arm run failed:

https://autobuilder.yoctoproject.org/typhoon/#/builders/96/builds/5406

which diving into the logs shows it went OOM and keeled over badly:

https://autobuilder.yocto.io/pub/failed-builds-data/qemu_boot_log.20231001101358

meta-virtualization doesn't like something:

https://autobuilder.yoctoproject.org/typhoon/#/builders/128/builds/2295

The arm ptest failures above are unsurprisingly still around:

https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5513

There may or may not be failing strace ptests on x86:

https://autobuilder.yoctoproject.org/typhoon/#/builders/81/builds/5694/steps/13/logs/stdio

but we didn't get the world build failures from cryptodev.

Cheers,

Richard
Richard Purdie Oct. 1, 2023, 1:20 p.m. UTC | #8
On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie via
lists.openembedded.org wrote:
> On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > Given where we are in the release cycle, this clearly is NOT a typical
> > consolidated pull request.
> > 
> > I've done what normally takes about three weeks in about 4 days.
> > 
> > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > reference kernel for the release.
> > 
> > So we've decided to take on the task of getting 6.5 ready and available,
> > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > testing for 6.5 was critical for this, since I already knew the core
> > was sane.
> > 
> > Also we've never shipped purposely mismatched libc-headers in the release,
> > so I also took the leap to update the libc-headers to match.
> > 
> > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > series to fix breakage that I found in the tightly coupled packages.
> > 
> > I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> > for both glibc and musl for all the supported architectures.
> > There will be some things that break regardless, but this needs the
> > better coverage of the AB.
> > 
> > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > fall all the way back to 6.1.
> > 
> > I'll remove 6.4 from master once we've figured out the fallout from
> > this kernel, and which direction we are going.
> 
> I've merged this series which seemed to work fine. Given the time
> constraints, I thought I'd throw some 6.5 testing at the autobuilder.
> It ran into two issues. One was cryptodev, I have a patch for that in
> master-next. The other were entropy boot failures on arm kvm:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
> 
> [    0.796831] Key type id_resolver registered
> [    0.797581] Key type id_legacy registered
> [    0.798724] Key type cifs.idmap registered
> [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> [    0.809690] xor: measuring software checksum speed
> [    0.811307]    8regs           : 12333 MB/sec
> [    0.812862]    32regs          : 12322 MB/sec
> [    0.814885]    arm64_neon      :  7851 MB/sec
> [    0.815626] xor: using function: 8regs (12333 MB/sec)
> 
> 
> -----------------------
> Central error: [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> ***********************
> 
> I did find this in google:
> 
> https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
> 
> which does bisect to a change.
> 
> I'll rerun the autobuilder testing with the cryptodev patch and see if
> anything else transpires.

In case anyone else tries to trace it, the dependency chain for
jitterentropy looks like:

CONFIG_CRYPTO_JITTERENTROPY
CONFIG_CRYPTO_DRBG
CONFIG_CRYPTO_DRBG_MENU
CONFIG_CRYPTO_RNG_DEFAULT
CONFIG_CRYPTO_ECC
CONFIG_CRYPTO_ECDH
CONFIG_BT

i.e. bluetooth ultimately stops you making jitterentropy a module.

Cheers,

Richard
Bruce Ashfield Oct. 1, 2023, 2:58 p.m. UTC | #9
On Sun, Oct 1, 2023 at 7:49 AM Richard Purdie <
richard.purdie@linuxfoundation.org> wrote:

> On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie wrote:
> > On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > > Given where we are in the release cycle, this clearly is NOT a typical
> > > consolidated pull request.
> > >
> > > I've done what normally takes about three weeks in about 4 days.
> > >
> > > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > > reference kernel for the release.
> > >
> > > So we've decided to take on the task of getting 6.5 ready and
> available,
> > > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > > testing for 6.5 was critical for this, since I already knew the core
> > > was sane.
> > >
> > > Also we've never shipped purposely mismatched libc-headers in the
> release,
> > > so I also took the leap to update the libc-headers to match.
> > >
> > > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > > series to fix breakage that I found in the tightly coupled packages.
> > >
> > > I've built and booted core-image-kernel-dev, core-image-minimal,
> core-image-sato
> > > for both glibc and musl for all the supported architectures.
> > > There will be some things that break regardless, but this needs the
> > > better coverage of the AB.
> > >
> > > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > > fall all the way back to 6.1.
> > >
> > > I'll remove 6.4 from master once we've figured out the fallout from
> > > this kernel, and which direction we are going.
> >
> > I've merged this series which seemed to work fine. Given the time
> > constraints, I thought I'd throw some 6.5 testing at the autobuilder.
> > It ran into two issues. One was cryptodev, I have a patch for that in
> > master-next. The other were entropy boot failures on arm kvm:
> >
> >
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
> >
> > [    0.796831] Key type id_resolver registered
> > [    0.797581] Key type id_legacy registered
> > [    0.798724] Key type cifs.idmap registered
> > [    0.808070] jitterentropy: Initialization failed with host not
> compliant with requirements: 9
> > [    0.809690] xor: measuring software checksum speed
> > [    0.811307]    8regs           : 12333 MB/sec
> > [    0.812862]    32regs          : 12322 MB/sec
> > [    0.814885]    arm64_neon      :  7851 MB/sec
> > [    0.815626] xor: using function: 8regs (12333 MB/sec)
> >
> >
> > -----------------------
> > Central error: [    0.808070] jitterentropy: Initialization failed with
> host not compliant with requirements: 9
> > ***********************
> >
> > I did find this in google:
> >
> >
> https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
> >
> > which does bisect to a change.
> >
> > I'll rerun the autobuilder testing with the cryptodev patch and see if
> > anything else transpires.
>
> The LTP on arm run failed:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/96/builds/5406
>
> which diving into the logs shows it went OOM and keeled over badly:
>
>
> https://autobuilder.yocto.io/pub/failed-builds-data/qemu_boot_log.20231001101358


I also had to up the memory on some of my ARM target testing. On target
builds were failing in strange ways until I went to 512 or 1G of memory.

But I thought that could have just been my setup.


>
> meta-virtualization doesn't like something:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/128/builds/2295
>
>
There's a Xen uprev in the works, but obviously this one is something
that I'll eventually sort out.



> The arm ptest failures above are unsurprisingly still around:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5513
>
> There may or may not be failing strace ptests on x86:
>
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/81/builds/5694/steps/13/logs/stdio


strace can be a problem at times. I'll have a look at that first thing
monday or
late tonight, if anyone else solves it .. let me know.

Hopefully some of our ARM colleagues can help us out with the other
issues, otherwise, I will start looking at them Monday.

Bruce



>
>
> but we didn't get the world build failures from cryptodev.
>
> Cheers,
>
> Richard
>
>
>
Bruce Ashfield Oct. 1, 2023, 3:06 p.m. UTC | #10
On Sun, Oct 1, 2023 at 10:58 AM Bruce Ashfield <bruce.ashfield@gmail.com>
wrote:

>
>
> On Sun, Oct 1, 2023 at 7:49 AM Richard Purdie <
> richard.purdie@linuxfoundation.org> wrote:
>
>> On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie wrote:
>> > On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
>> > > Given where we are in the release cycle, this clearly is NOT a typical
>> > > consolidated pull request.
>> > >
>> > > I've done what normally takes about three weeks in about 4 days.
>> > >
>> > > With 6.4 going EOL before expected upstream, it really isn't a
>> suitable
>> > > reference kernel for the release.
>> > >
>> > > So we've decided to take on the task of getting 6.5 ready and
>> available,
>> > > and at the same time moving the -dev kernel to v6.6. The -dev kernel
>> > > testing for 6.5 was critical for this, since I already knew the core
>> > > was sane.
>> > >
>> > > Also we've never shipped purposely mismatched libc-headers in the
>> release,
>> > > so I also took the leap to update the libc-headers to match.
>> > >
>> > > I've already sent fixes to meta-oe, and there's a btrfs update in this
>> > > series to fix breakage that I found in the tightly coupled packages.
>> > >
>> > > I've built and booted core-image-kernel-dev, core-image-minimal,
>> core-image-sato
>> > > for both glibc and musl for all the supported architectures.
>> > > There will be some things that break regardless, but this needs the
>> > > better coverage of the AB.
>> > >
>> > > If this causes too much problems, our choices are to ship 6.4 EOLd, or
>> > > fall all the way back to 6.1.
>> > >
>> > > I'll remove 6.4 from master once we've figured out the fallout from
>> > > this kernel, and which direction we are going.
>> >
>> > I've merged this series which seemed to work fine. Given the time
>> > constraints, I thought I'd throw some 6.5 testing at the autobuilder.
>> > It ran into two issues. One was cryptodev, I have a patch for that in
>> > master-next. The other were entropy boot failures on arm kvm:
>> >
>> >
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
>> >
>> > [    0.796831] Key type id_resolver registered
>> > [    0.797581] Key type id_legacy registered
>> > [    0.798724] Key type cifs.idmap registered
>> > [    0.808070] jitterentropy: Initialization failed with host not
>> compliant with requirements: 9
>> > [    0.809690] xor: measuring software checksum speed
>> > [    0.811307]    8regs           : 12333 MB/sec
>> > [    0.812862]    32regs          : 12322 MB/sec
>> > [    0.814885]    arm64_neon      :  7851 MB/sec
>> > [    0.815626] xor: using function: 8regs (12333 MB/sec)
>> >
>> >
>> > -----------------------
>> > Central error: [    0.808070] jitterentropy: Initialization failed with
>> host not compliant with requirements: 9
>> > ***********************
>> >
>> > I did find this in google:
>> >
>> >
>> https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
>> >
>> > which does bisect to a change.
>> >
>> > I'll rerun the autobuilder testing with the cryptodev patch and see if
>> > anything else transpires.
>>
>> The LTP on arm run failed:
>>
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/96/builds/5406
>>
>> which diving into the logs shows it went OOM and keeled over badly:
>>
>>
>> https://autobuilder.yocto.io/pub/failed-builds-data/qemu_boot_log.20231001101358
>
>
> I also had to up the memory on some of my ARM target testing. On target
> builds were failing in strange ways until I went to 512 or 1G of memory.
>
> But I thought that could have just been my setup.
>
>
>>
>> meta-virtualization doesn't like something:
>>
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/128/builds/2295
>>
>>
> There's a Xen uprev in the works, but obviously this one is something
> that I'll eventually sort out.
>

Aha. I didn't push the 6.5 kernel .inc file to meta-virt,so there's likely
missing configuration. I'll do that right now.

Bruce



>
>
>
>> The arm ptest failures above are unsurprisingly still around:
>>
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5513
>>
>> There may or may not be failing strace ptests on x86:
>>
>>
>> https://autobuilder.yoctoproject.org/typhoon/#/builders/81/builds/5694/steps/13/logs/stdio
>
>
> strace can be a problem at times. I'll have a look at that first thing
> monday or
> late tonight, if anyone else solves it .. let me know.
>
> Hopefully some of our ARM colleagues can help us out with the other
> issues, otherwise, I will start looking at them Monday.
>
> Bruce
>
>
>
>>
>>
>> but we didn't get the world build failures from cryptodev.
>>
>> Cheers,
>>
>> Richard
>>
>>
>>
>
> --
> - Thou shalt not follow the NULL pointer, for chaos and madness await thee
> at its end
> - "Use the force Harry" - Gandalf, Star Trek II
>
>
Khem Raj Oct. 1, 2023, 5:40 p.m. UTC | #11
On Sun, Oct 1, 2023 at 4:49 AM Richard Purdie
<richard.purdie@linuxfoundation.org> wrote:
>
> On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie wrote:
> > On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > > Given where we are in the release cycle, this clearly is NOT a typical
> > > consolidated pull request.
> > >
> > > I've done what normally takes about three weeks in about 4 days.
> > >
> > > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > > reference kernel for the release.
> > >
> > > So we've decided to take on the task of getting 6.5 ready and available,
> > > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > > testing for 6.5 was critical for this, since I already knew the core
> > > was sane.
> > >
> > > Also we've never shipped purposely mismatched libc-headers in the release,
> > > so I also took the leap to update the libc-headers to match.
> > >
> > > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > > series to fix breakage that I found in the tightly coupled packages.
> > >
> > > I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> > > for both glibc and musl for all the supported architectures.
> > > There will be some things that break regardless, but this needs the
> > > better coverage of the AB.
> > >
> > > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > > fall all the way back to 6.1.
> > >
> > > I'll remove 6.4 from master once we've figured out the fallout from
> > > this kernel, and which direction we are going.
> >
> > I've merged this series which seemed to work fine. Given the time
> > constraints, I thought I'd throw some 6.5 testing at the autobuilder.
> > It ran into two issues. One was cryptodev, I have a patch for that in
> > master-next. The other were entropy boot failures on arm kvm:
> >
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
> >
> > [    0.796831] Key type id_resolver registered
> > [    0.797581] Key type id_legacy registered
> > [    0.798724] Key type cifs.idmap registered
> > [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> > [    0.809690] xor: measuring software checksum speed
> > [    0.811307]    8regs           : 12333 MB/sec
> > [    0.812862]    32regs          : 12322 MB/sec
> > [    0.814885]    arm64_neon      :  7851 MB/sec
> > [    0.815626] xor: using function: 8regs (12333 MB/sec)
> >
> >
> > -----------------------
> > Central error: [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> > ***********************
> >
> > I did find this in google:
> >
> > https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
> >
> > which does bisect to a change.
> >
> > I'll rerun the autobuilder testing with the cryptodev patch and see if
> > anything else transpires.
>
> The LTP on arm run failed:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/96/builds/5406
>
> which diving into the logs shows it went OOM and keeled over badly:
>
> https://autobuilder.yocto.io/pub/failed-builds-data/qemu_boot_log.20231001101358
>
> meta-virtualization doesn't like something:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/128/builds/2295
>
> The arm ptest failures above are unsurprisingly still around:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5513
>
> There may or may not be failing strace ptests on x86:
>
> https://autobuilder.yoctoproject.org/typhoon/#/builders/81/builds/5694/steps/13/logs/stdio

strace might be related to something like this
https://blog.sebastianwick.net/posts/so-peerpidfd-usefulness/

>
> but we didn't get the world build failures from cryptodev.
>
> Cheers,
>
> Richard
>
>
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> View/Reply Online (#188481): https://lists.openembedded.org/g/openembedded-core/message/188481
> Mute This Topic: https://lists.openembedded.org/mt/101665418/1997914
> Group Owner: openembedded-core+owner@lists.openembedded.org
> Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub [raj.khem@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
>
Khem Raj Oct. 1, 2023, 9:30 p.m. UTC | #12
seeing couple of build failure in meta-oe too

https://errors.yoctoproject.org/Errors/Build/172283/


On Sun, Oct 1, 2023 at 10:40 AM Khem Raj <raj.khem@gmail.com> wrote:
>
> On Sun, Oct 1, 2023 at 4:49 AM Richard Purdie
> <richard.purdie@linuxfoundation.org> wrote:
> >
> > On Sun, 2023-10-01 at 11:13 +0100, Richard Purdie wrote:
> > > On Fri, 2023-09-29 at 16:04 -0400, bruce.ashfield@gmail.com wrote:
> > > > Given where we are in the release cycle, this clearly is NOT a typical
> > > > consolidated pull request.
> > > >
> > > > I've done what normally takes about three weeks in about 4 days.
> > > >
> > > > With 6.4 going EOL before expected upstream, it really isn't a suitable
> > > > reference kernel for the release.
> > > >
> > > > So we've decided to take on the task of getting 6.5 ready and available,
> > > > and at the same time moving the -dev kernel to v6.6. The -dev kernel
> > > > testing for 6.5 was critical for this, since I already knew the core
> > > > was sane.
> > > >
> > > > Also we've never shipped purposely mismatched libc-headers in the release,
> > > > so I also took the leap to update the libc-headers to match.
> > > >
> > > > I've already sent fixes to meta-oe, and there's a btrfs update in this
> > > > series to fix breakage that I found in the tightly coupled packages.
> > > >
> > > > I've built and booted core-image-kernel-dev, core-image-minimal, core-image-sato
> > > > for both glibc and musl for all the supported architectures.
> > > > There will be some things that break regardless, but this needs the
> > > > better coverage of the AB.
> > > >
> > > > If this causes too much problems, our choices are to ship 6.4 EOLd, or
> > > > fall all the way back to 6.1.
> > > >
> > > > I'll remove 6.4 from master once we've figured out the fallout from
> > > > this kernel, and which direction we are going.
> > >
> > > I've merged this series which seemed to work fine. Given the time
> > > constraints, I thought I'd throw some 6.5 testing at the autobuilder.
> > > It ran into two issues. One was cryptodev, I have a patch for that in
> > > master-next. The other were entropy boot failures on arm kvm:
> > >
> > > https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5512/steps/12/logs/stdio
> > >
> > > [    0.796831] Key type id_resolver registered
> > > [    0.797581] Key type id_legacy registered
> > > [    0.798724] Key type cifs.idmap registered
> > > [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> > > [    0.809690] xor: measuring software checksum speed
> > > [    0.811307]    8regs           : 12333 MB/sec
> > > [    0.812862]    32regs          : 12322 MB/sec
> > > [    0.814885]    arm64_neon      :  7851 MB/sec
> > > [    0.815626] xor: using function: 8regs (12333 MB/sec)
> > >
> > >
> > > -----------------------
> > > Central error: [    0.808070] jitterentropy: Initialization failed with host not compliant with requirements: 9
> > > ***********************
> > >
> > > I did find this in google:
> > >
> > > https://lore.kernel.org/linux-arm-kernel/68c6b70a-8d6c-08b5-46ce-243607479d5c@i2se.com/T/
> > >
> > > which does bisect to a change.
> > >
> > > I'll rerun the autobuilder testing with the cryptodev patch and see if
> > > anything else transpires.
> >
> > The LTP on arm run failed:
> >
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/96/builds/5406
> >
> > which diving into the logs shows it went OOM and keeled over badly:
> >
> > https://autobuilder.yocto.io/pub/failed-builds-data/qemu_boot_log.20231001101358
> >
> > meta-virtualization doesn't like something:
> >
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/128/builds/2295
> >
> > The arm ptest failures above are unsurprisingly still around:
> >
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5513
> >
> > There may or may not be failing strace ptests on x86:
> >
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/81/builds/5694/steps/13/logs/stdio
>
> strace might be related to something like this
> https://blog.sebastianwick.net/posts/so-peerpidfd-usefulness/
>
> >
> > but we didn't get the world build failures from cryptodev.
> >
> > Cheers,
> >
> > Richard
> >
> >
> >
> > -=-=-=-=-=-=-=-=-=-=-=-
> > Links: You receive all messages sent to this group.
> > View/Reply Online (#188481): https://lists.openembedded.org/g/openembedded-core/message/188481
> > Mute This Topic: https://lists.openembedded.org/mt/101665418/1997914
> > Group Owner: openembedded-core+owner@lists.openembedded.org
> > Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub [raj.khem@gmail.com]
> > -=-=-=-=-=-=-=-=-=-=-=-
> >