
[6/6] u-boot: Rework signing to remove interdependencies

Message ID 20221021233726.1751124-7-sean.anderson@seco.com
State Accepted, archived
Commit 5e12dc911d0c541f43aa6d0c046fb87e8b7c1f7e
Series u-boot: Rework signing process to remove interdependencies

Commit Message

Sean Anderson Oct. 21, 2022, 11:37 p.m. UTC
The U-Boot signing code is a bit of a mess. The problem is that mkimage
determines the public keys to embed into a device tree based on an image
that it is signing. This results in all sorts of contortions: U-Boot has to
be available to the kernel recipe so that it can have the correct public
keys embedded. Then, the signed U-Boot has to be made available to U-Boot's
do_deploy. This same dance is then repeated for SPL. To complicate matters,
signing for U-Boot and U-Boot SPL is optional, so the whole process must be
seamlessly integrated with a non-signed build.

The complexity and interdependency of this process makes it difficult to
extend. For example, it is not possible to install a signed U-Boot binary
into the root filesystem. This is first because u-boot:do_install must run
before linux:do_assemble_fitimage, which must run before u-boot:do_deploy.
But aside from infrastructure issues, installing a signed U-Boot also can't
happen, because the kernel image might have an embedded initramfs
(containing the signed U-Boot).

However, all of this complexity is accidental. It is not necessary to embed
the public keys into U-Boot and sign the kernel in one fell swoop. Instead,
we can sign the kernel, stage it, and sign the staged kernel again to embed
the public keys into U-Boot [1]. This twice-signed kernel serves only to
provide the correct parameters to mkimage, and does not have to be
installed or deployed. By cutting the dependency of
linux:do_assemble_fitimage on u-boot:do_install, we can drastically
simplify the build process, making it much more extensible.
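
To sketch the mechanism (the paths here are illustrative, not the literal
recipe paths):

# Re-sign the staged kernel FIT; -K writes the matching public keys
# into U-Boot's control device tree and -r marks them as required.
mkimage -F -k "${UBOOT_SIGN_KEYDIR}" -K u-boot.dtb -r fitImage

The re-signed FIT itself is thrown away; only the updated dtb is kept.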

The process of doing this conversion is a bit involved, since the U-Boot
and Linux recipes are so intertwined at the moment. The most significant change
is that uboot-sign is no longer inherited by kernel-fitimage. Similarly,
all U-Boot-related tasks have been removed from kernel-fitimage. We add a
new step to the install task to stage the kernel in /sysroot-only. The
logic to disable assemble_fitimage has been removed. We always assemble it,
even if the final fitImage will use a bundled initramfs, because U-Boot
will need it.
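
For reference, the staging step amounts to something like this (the exact
source path is an assumption, not the literal recipe code):

# Stage the fitImage where dependent recipes' sysroots can see it;
# /sysroot-only is never packaged or shipped on the target.
install -d ${D}/sysroot-only
install -m 0644 ${B}/${KERNEL_OUTPUT_DIR}/fitImage ${D}/sysroot-only/fitImage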

On the U-Boot side, much of the churn stems from multiple config support.
Previously, we took a fairly ad-hoc approach to UBOOT_CONFIG and
UBOOT_MACHINE, introducing for loops wherever we needed to deal with them.
However, I have chosen to use a much more structured approach. Each task
which needs to use the build directory uses the following pseudocode:

do_mytask() {
	if ${UBOOT_CONFIG}; then
		for config, type in zip(${UBOOT_CONFIG}, ${UBOOT_MACHINE}); do
			cd ${config}
			mytask_helper ${type}
		done
	else
		cd ${B}
		mytask_helper ""
	fi
}

By explicitly placing the work in mytask_helper, we make it easier to
ensure that everything is covered, and we also allow bbappend files to
more easily extend the task (as otherwise they would need to reimplement
the loop themselves).
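
For the curious, one runnable POSIX sh rendering of that pattern (a
sketch, not the literal class code) is:

do_mytask() {
	if [ -n "${UBOOT_CONFIG}" ]; then
		# Walk UBOOT_CONFIG and UBOOT_MACHINE in lockstep; both are
		# whitespace-separated lists, so use the positional
		# parameters as a poor man's zip().
		set -- ${UBOOT_MACHINE}
		for config in ${UBOOT_CONFIG}; do
			type="$1"
			shift
			cd "${B}/${config}"
			mytask_helper "${type}"
		done
	else
		cd "${B}"
		mytask_helper ""
	fi
}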

[1] It doesn't particularly matter what we sign. Any FIT will do, but I
chose the kernel's because we already went to the trouble of setting it up
with the correct hashes and signatures. In the future, we could create a
"dummy" image and sign that instead, but it would probably have to happen
in the kernel recipe anyway (so we have access to the appropriate
variables).

Signed-off-by: Sean Anderson <sean.anderson@seco.com>
---

 meta/classes-recipe/kernel-fitimage.bbclass |  68 +---
 meta/classes-recipe/uboot-config.bbclass    |   3 +
 meta/classes-recipe/uboot-sign.bbclass      | 421 +++++++++-----------
 meta/lib/oeqa/selftest/cases/fitimage.py    |  29 +-
 meta/recipes-bsp/u-boot/u-boot.inc          |   3 -
 5 files changed, 226 insertions(+), 298 deletions(-)

Comments

Sean Anderson Oct. 26, 2022, 4:49 p.m. UTC | #1
On 10/26/22 10:08, Richard Purdie wrote:
> On Fri, 2022-10-21 at 19:37 -0400, Sean Anderson wrote:
>> The U-Boot signing code is a bit of a mess. The problem is that mkimage
>> determines the public keys to embed into a device tree based on an image
>> that it is signing. This results in all sorts of contortions: U-Boot has to
>> be available to the kernel recipe so that it can have the correct public
>> keys embedded. Then, the signed U-Boot has to be made available to U-Boot's
>> do_deploy. This same dance is then repeated for SPL. To complicate matters,
>> signing for U-Boot and U-Boot SPL is optional, so the whole process must be
>> seamlessly integrated with a non-signed build.
>> 
>> The complexity and interdependency of this process makes it difficult to
>> extend. For example, it is not possible to install a signed U-Boot binary
>> into the root filesystem. This is first because u-boot:do_install must run
>> before linux:do_assemble_fitimage, which must run before u-boot:do_deploy.
>> But aside from infrastructure issues, installing a signed U-Boot also can't
>> happen, because the kernel image might have an embedded initramfs
>> (containing the signed U-Boot).
>> 
>> However, all of this complexity is accidental. It is not necessary to embed
>> the public keys into U-Boot and sign the kernel in one fell swoop. Instead,
>> we can sign the kernel, stage it, and sign the staged kernel again to embed
>> the public keys into U-Boot [1]. This twice-signed kernel serves only to
>> provide the correct parameters to mkimage, and does not have to be
>> installed or deployed. By cutting the dependency of
>> linux:do_assemble_fitimage on u-boot:do_install, we can drastically
>> simplify the build process, making it much more extensible.
>> 
>> The process of doing this conversion is a bit involved, since the U-Boot
>> and Linux recipes are so intertwined at the moment. The most significant change
>> is that uboot-sign is no longer inherited by kernel-fitimage. Similarly,
>> all U-Boot-related tasks have been removed from kernel-fitimage. We add a
>> new step to the install task to stage the kernel in /sysroot-only. The
>> logic to disable assemble_fitimage has been removed. We always assemble it,
>> even if the final fitImage will use a bundled initramfs, because U-Boot
>> will need it.
>> 
>> On the U-Boot side, much of the churn stems from multiple config support.
>> Previously, we took a fairly ad-hoc approach to UBOOT_CONFIG and
>> UBOOT_MACHINE, introducing for loops wherever we needed to deal with them.
>> However, I have chosen to use a much more structured approach. Each task
>> which needs to use the build directory uses the following pseudocode:
>> 
>> do_mytask() {
>> 	if ${UBOOT_CONFIG}; then
>> 		for config, type in zip(${UBOOT_CONFIG}, ${UBOOT_MACHINE}); do
>> 			cd ${config}
>> 			mytask_helper ${type}
>> 		done
>> 	else
>> 		cd ${B}
>> 		mytask_helper ""
>> 	fi
>> }
>> 
>> By explicitly placing the work in mytask_helper, we make it easier to
>> ensure that everything is covered, and we also allow bbappend files to
>> more easily extend the task (as otherwise they would need to reimplement
>> the loop themselves).
>> 
>> [1] It doesn't particularly matter what we sign. Any FIT will do, but I
>> chose the kernel's because we already went to the trouble of setting it up
>> with the correct hashes and signatures. In the future, we could create a
>> "dummy" image and sign that instead, but it would probably have to happen
>> in the kernel recipe anyway (so we have access to the appropriate
>> variables).
>> 
>> Signed-off-by: Sean Anderson <sean.anderson@seco.com>
>> ---
>> 
>>  meta/classes-recipe/kernel-fitimage.bbclass |  68 +---
>>  meta/classes-recipe/uboot-config.bbclass    |   3 +
>>  meta/classes-recipe/uboot-sign.bbclass      | 421 +++++++++-----------
>>  meta/lib/oeqa/selftest/cases/fitimage.py    |  29 +-
>>  meta/recipes-bsp/u-boot/u-boot.inc          |   3 -
>>  5 files changed, 226 insertions(+), 298 deletions(-)
> 
> This looks like a pretty nice improvement, I've been worried about how
> entangled all this was becoming. Is there anything better we could do
> with testing of this code? We rely increasingly on the tests to spot
> regressions and I doubt the testcases we have cover the code
> complexity.

FWIW, I found the test suite to be fairly helpful when debugging this
whole process, if agonizingly slow. I found many bugs in my initial
implementation which were uncovered by the test suites.

The best way to extend the test suite would probably be to have QEMU try
to boot using the Linux and U-Boot. This would help cases where all the
artifacts are created correctly but e.g. the wrong U-Boot is packaged.
However, I am rather loath to add more test cases like this, because it
already takes around 10 minutes (with sstate!) to run the tests. This
basically kills the speed of iterative development.

--Sean
Richard Purdie Oct. 26, 2022, 5 p.m. UTC | #2
On Wed, 2022-10-26 at 12:49 -0400, Sean Anderson wrote:
> On 10/26/22 10:08, Richard Purdie wrote:
> > This looks like a pretty nice improvement, I've been worried about how
> > entangled all this was becoming. Is there anything better we could do
> > with testing of this code? We rely increasingly on the tests to spot
> > regressions and I doubt the testcases we have cover the code
> > complexity.
> 
> FWIW, I found the test suite to be fairly helpful when debugging this
> whole process, if agonizingly slow. I found many bugs in my initial
> implementation which were uncovered by the test suites.
> 
> The best way to extend the test suite would probably be to have QEMU try
> to boot using the Linux and U-Boot. This would help cases where all the
> artifacts are created correctly but e.g. the wrong U-Boot is packaged.
> However, I am rather loath to add more test cases like this, because it
> already takes around 10 minutes (with sstate!) to run the tests. This
> basically kills the speed of iterative development.

Out of interest how were you running the tests? Did you look at the
parallelism options? Were you just running specific tests or specific
suites of tests?

I do think we need something like you describe and adding a linux+uboot
approach is something I've wanted to do for a long time. Usually the
time is taken on our automated CI rather than impacting users, unless
they're working in that area at which point the tests are hopefully
helpful.

We haven't really had anyone try and optimise the tests either, I'm
sure there will be things in there which can help. Please don't let the
speed put you off trying to improve things and extend our coverage!

Cheers,

Richard
Sean Anderson Oct. 26, 2022, 5:21 p.m. UTC | #3
On 10/26/22 13:00, Richard Purdie wrote:
> On Wed, 2022-10-26 at 12:49 -0400, Sean Anderson wrote:
>> On 10/26/22 10:08, Richard Purdie wrote:
>> > This looks like a pretty nice improvement, I've been worried about how
>> > entangled all this was becoming. Is there anything better we could do
>> > with testing of this code? We rely increasingly on the tests to spot
>> > regressions and I doubt the testcases we have cover the code
>> > complexity.
>> 
>> FWIW, I found the test suite to be fairly helpful when debugging this
>> whole process, if agonizingly slow. I found many bugs in my initial
>> implementation which were uncovered by the test suites.
>> 
>> The best way to extend the test suite would probably be to have QEMU try
>> to boot using the Linux and U-Boot. This would help cases where all the
>> artifacts are created correctly but e.g. the wrong U-Boot is packaged.
>> However, I am rather loath to add more test cases like this, because it
>> already takes around 10 minutes (with sstate!) to run the tests. This
>> basically kills the speed of iterative development.
> 
> Out of interest how were you running the tests? Did you look at the
> parallelism options? Were you just running specific tests or specific
> suites of tests?

As noted in the cover letter, I ran

oe-selftest -r fitimage.FitImageTests

I also tried using -j$(nproc), but I saw no increase in parallelism
outside of the usual for bitbake. This was especially noticeable for
do_rootfs, which is single-threaded.

This is omitted above, but I *had* to use -j1 in order to avoid
manually wiping out my existing build directory each time (and instead
ending up with dozens of pid-named directories). This is documented
nowhere, and I found it in some old IRC logs.

> I do think we need something like you describe and adding a linux+uboot
> approach is something I've wanted to do for a long time. Usually the
> time is taken on our automated CI rather than impacting users, unless
> they're working in that area at which point the tests are hopefully
> helpful.

Which of course provides no incentive to reduce runtime.

> We haven't really had anyone try and optimise the tests either, I'm
> sure there will be things in there which can help. Please don't let the
> speed put you off trying to improve things and extend our coverage!

The poor speed of these self tests (and of everything related to the
yocto project in general) makes this project frustrating to contribute
to. It took me around 2 days to go from my prototype to this series,
most of which was spent waiting for tests to compile and losing whatever
train of thought I had. I probably went through perhaps 20 revisions. If
I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
as it takes around 15 seconds to recompile it and run the full unit test
suite.

On the topic of these specific tests, part of the problem is that
do_rootfs is a bottleneck which takes around 45-60s on my system. Every
test which modifies something in the rootfs incurs this overhead.

--Sean
Richard Purdie Oct. 28, 2022, 3:09 p.m. UTC | #4
On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
> On 10/26/22 13:00, Richard Purdie wrote:
> > On Wed, 2022-10-26 at 12:49 -0400, Sean Anderson wrote:
> > > On 10/26/22 10:08, Richard Purdie wrote:
> > > > This looks like a pretty nice improvement, I've been worried about how
> > > > entangled all this was becoming. Is there anything better we could do
> > > > with testing of this code? We rely increasingly on the tests to spot
> > > > regressions and I doubt the testcases we have cover the code
> > > > complexity.
> > > 
> > > FWIW, I found the test suite to be fairly helpful when debugging this
> > > whole process, if agonizingly slow. I found many bugs in my initial
> > > implementation which were uncovered by the test suites.
> > > 
> > > The best way to extend the test suite would probably be to have QEMU try
> > > to boot using the Linux and U-Boot. This would help cases where all the
> > > artifacts are created correctly but e.g. the wrong U-Boot is packaged.
> > > However, I am rather loath to add more test cases like this, because it
> > > already takes around 10 minutes (with sstate!) to run the tests. This
> > > basically kills the speed of iterative development.
> > 
> > Out of interest how were you running the tests? Did you look at the
> > parallelism options? Were you just running specific tests or specific
> > suites of tests?
> 
> As noted in the cover letter, I ran
> 
> oe-selftest -r fitimage.FitImageTests

Ok, good. That at least means you were only running one class of tests.
I was worried you were running all of them!

> I also tried using -j$(nproc), but I saw no increase in parallelism
> outside of the usual for bitbake. This was especially noticeable for
> do_rootfs, which is single-threaded.

Sadly the parallelism works on a per test class basis so it wouldn't
help in this case. There are only small marginal gains from running
tests in individual build directories so we don't do that.

> This is omitted above, but I *had* to use -j1 in order to avoid
> manually wiping out my existing build directory each time (and instead
> ending up with dozens of pid-named directories). This is documented
> nowhere, and I found it in some old IRC logs.

Parallelism using differently named build directories is an
implementation detail, not something which the -j option implies. I
guess you were also using --keep-builddir and could have used the -B
option with $PID or similar for the same effect. We could document
that?
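
Something like this, say (the -B flag is real; the directory name is just
an example):

# give each run its own throwaway build directory instead of using -j1
oe-selftest -B build-st-$$ -r fitimage.FitImageTests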

> > I do think we need something like you describe and adding a linux+uboot
> > approach is something I've wanted to do for a long time. Usually the
> > time is taken on our automated CI rather than impacting users, unless
> > they're working in that area at which point the tests are hopefully
> > helpful.
> 
> Which of course provides no incentive to reduce runtime.

Most of the tests do use our standard tasks so there is as much
incentive to optimise do_rootfs as there would be otherwise :/.

> > We haven't really had anyone try and optimise the tests either, I'm
> > sure there will be things in there which can help. Please don't let the
> > speed put you off trying to improve things and extend our coverage!
> 
> The poor speed of these self tests (and of everything related to the
> yocto project in general) makes this project frustrating to contribute
> to. It took me around 2 days to go from my prototype to this series,
> most of which was spent waiting for tests to compile and losing whatever
> train of thought I had. I probably went through perhaps 20 revisions. If
> I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
> as it takes around 15 seconds to recompile it and run the full unit test
> suite.
> 
> On the topic of these specific tests, part of the problem is that
> do_rootfs is a bottleneck which takes around 45-60s on my system. Every
> test which modifies something in the rootfs incurs this overhead.

For better or worse we've 'a few' more moving pieces than U-Boot.

Building a root filesystem from packages is a non-trivial task, taking
under a minute is in some ways pretty good already. The only other
thing we could do is incremental rootfs construction where it would
add/remove changed packages. I'd worry that the result may not always
be equal to a build from scratch and it might cause weird and
interesting reproducibility problems (particularly when you consider
things like postinsts).

I would love to improve our development "iteration" time but I'm
struggling to see where we could get the speed gains from :(. Open to
other ideas...

Cheers,

Richard
Sean Anderson Oct. 28, 2022, 3:29 p.m. UTC | #5
On 10/28/22 11:09, Richard Purdie wrote:
> On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
>> On 10/26/22 13:00, Richard Purdie wrote:
>>> On Wed, 2022-10-26 at 12:49 -0400, Sean Anderson wrote:
>>>> On 10/26/22 10:08, Richard Purdie wrote:
>>>>> This looks like a pretty nice improvement, I've been worried about how
>>>>> entangled all this was becoming. Is there anything better we could do
>>>>> with testing of this code? We rely increasingly on the tests to spot
>>>>> regressions and I doubt the testcases we have cover the code
>>>>> complexity.
>>>>
>>>> FWIW, I found the test suite to be fairly helpful when debugging this
>>>> whole process, if agonizingly slow. I found many bugs in my initial
>>>> implementation which were uncovered by the test suites.
>>>>
>>>> The best way to extend the test suite would probably be to have QEMU try
>>>> to boot using the Linux and U-Boot. This would help cases where all the
>>>> artifacts are created correctly but e.g. the wrong U-Boot is packaged.
>>>> However, I am rather loath to add more test cases like this, because it
>>>> already takes around 10 minutes (with sstate!) to run the tests. This
>>>> basically kills the speed of iterative development.
>>>
>>> Out of interest how were you running the tests? Did you look at the
>>> parallelism options? Were you just running specific tests or specific
>>> suites of tests?
>>
>> As noted in the cover letter, I ran
>>
>> oe-selftest -r fitimage.FitImageTests
> 
> Ok, good. That at least means you were only running one class of tests.
> I was worried you were running all of them!
> 
>> I also tried using -j$(nproc), but I saw no increase in parallelism
>> outside of the usual for bitbake. This was especially noticable for
>> do_rootfs, which is single-threaded.
> 
> Sadly the parallelism works on a per test class basis so it wouldn't
> help in this case. There are only small marginal gains from running
> tests in individual build directories so we don't do that.

I estimate it could have saved me 2-3 minutes every build, since it could
have parallelized the root filesystem stuff.

>> This is omitted above, but I *had* to use -j1 in order to avoid
>> manually wiping out my existing build directory each time (and instead
>> ending up with dozens of pid-named directories). This is documented
>> nowhere, and I found it in some old IRC logs.
> 
> Parallelism using differently named build directories is an
> implementation detail, not something which the -j option implies. I
> guess you were also using --keep-builddir

Failing builds don't remove the test directory so you can inspect the build
output. As you might imagine, I had a lot of failing builds.

> and could have used the -B
> option with $PID or similar for the same effect.

Probably.

> We could document
> that?

Please do.

>>> I do think we need something like you describe and adding a linux+uboot
>>> approach is something I've wanted to do for a long time. Usually the
>>> time is taken on our automated CI rather than impacting users, unless
>>> they're working in that area at which point the tests are hopefully
>>> helpful.
>>
>> Which of course provides no incentive to reduce runtime.
> 
> Most of the tests do use our standard tasks so there is as much
> incentive to optimise do_rootfs as there would be otherwise :/.
> 
>>> We haven't really had anyone try and optimise the tests either, I'm
>>> sure there will be things in there which can help. Please don't let the
>>> speed put you off trying to improve things and extend our coverage!
>>
>> The poor speed of these self tests (and of everything related to the
>> yocto project in general) makes this project frustrating to contribute
>> to. It took me around 2 days to go from my prototype to this series,
>> most of which was spent waiting for tests to compile and losing whatever
>> train of thought I had. I probably went through perhaps 20 revisions. If
>> I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
>> as it takes around 15 seconds to recompile it and run the full unit test
>> suite.
>>
>> On the topic of these specific tests, part of the problem is that
>> do_rootfs is a bottleneck which takes around 45-60s on my system. Every
>> test which modifies something in the rootfs incurs this overhead.
> 
> For better or worse we've 'a few' more moving pieces than U-Boot.
> 
> Building a root filesystem from packages is a non-trivial task, taking
> under a minute is in some ways pretty good already. The only other
> thing we could do is incremental rootfs construction where it would
> add/remove changed packages. I'd worry that the result may not always
> be equal to a build from scratch and it might cause weird and
> interesting reproducibility problems (particularly when you consider
> things like postinsts).
> 
> I would love to improve our development "iteration" time but I'm
> struggling to see where we could get the speed gains from :(. Open to
> other ideas...

We don't have to build a full root filesystem. All of these tests just want
e.g. an initramfs. An empty (or one file) filesystem would work just as well.
If you still want to boot, you can make a busybox filesystem.
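
E.g. a config fragment along these lines would be enough (the variables
are the standard ones; the image choice is only illustrative):

# build a minimal initramfs and bundle it into the kernel image
cat >>conf/local.conf <<'EOF'
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
EOF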

--Sean
Richard Purdie Oct. 28, 2022, 3:37 p.m. UTC | #6
On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
> On 10/28/22 11:09, Richard Purdie wrote:
> > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
> > > As noted in the cover letter, I ran
> > > 
> > > oe-selftest -r fitimage.FitImageTests
> > 
> > Ok, good. That at least means you were only running one class of tests.
> > I was worried you were running all of them!
> > 
> > > I also tried using -j$(nproc), but I saw no increase in parallelism
> > > outside of the usual for bitbake. This was especially noticeable for
> > > do_rootfs, which is single-threaded.
> > 
> > Sadly the parallelism works on a per test class basis so it wouldn't
> > help in this case. There are only small marginal gains from running
> > tests in individual build directories so we don't do that.
> 
> I estimate it could have saved me 2-3 minutes every build, since it could
> have parallelized the root filesystem stuff.

On an initial run, it could have also ended up building a lot of pieces
in parallel needlessly so it is all a bit of a compromise. It might be
worth looking into whether we can make that an option, off by default.

> > > This is omitted above, but I *had* to use -j1 in order to avoid
> > > manually wiping out my existing build directory each time (and instead
> > > ending up with dozens of pid-named directories). This is documented
> > > nowhere, and I found it in some old IRC logs.
> > 
> > Parallelism using differently named build directories is an
> > implementation detail, not something which the -j option implies. I
> > guess you were also using --keep-builddir
> 
> Failing builds don't remove the test directory so you can inspect the build
> output. As you might imagine, I had a lot of failing builds.

I'm very familiar with that myself, yes.

We did once reuse the build directory; the challenge is we have no idea
what the user has done in there prior to the test, so it potentially
makes the test results incorrect.

> > > > We haven't really had anyone try and optimise the tests either, I'm
> > > > sure there will be things in there which can help. Please don't let the
> > > > speed put you off trying to improve things and extend our coverage!
> > > 
> > > The poor speed of these self tests (and of everything related to the
> > > yocto project in general) makes this project frustrating to contribute
> > > to. It took me around 2 days to go from my prototype to this series,
> > > most of which was spent waiting for tests to compile and losing whatever
> > > train of thought I had. I probably went through perhaps 20 revisions. If
> > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
> > > as it takes around 15 seconds to recompile it and run the full unit test
> > > suite.
> > > 
> > > On the topic of these specific tests, part of the problem is that
> > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
> > > test which modifies something in the rootfs incurs this overhead.
> > 
> > For better or worse we've 'a few' more moving pieces than U-Boot.
> > 
> > Building a root filesystem from packages is a non-trivial task, taking
> > under a minute is in some ways pretty good already. The only other
> > thing we could do is incremental rootfs construction where it would
> > add/remove changed packages. I'd worry that the result may not always
> > be equal to a build from scratch and it might cause weird and
> > interesting reproducibility problems (particularly when you consider
> > things like postinsts).
> > 
> > I would love to improve our development "iteration" time but I'm
> > struggling to see where we could get the speed gains from :(. Open to
> > other ideas...
> 
> We don't have to build a full root filesystem. All of these tests just want
> e.g. an initramfs. An empty (or one file) filesystem would work just as well.
> If you still want to boot, you can make a busybox filesystem.

Could we update the test just to use an initramfs then?

I'm definitely a fan of keeping the tests as simple as we can whilst
still testing what we need to test.

Cheers,

Richard
Sean Anderson Nov. 1, 2022, 4:14 p.m. UTC | #7
On 10/28/22 11:37, Richard Purdie wrote:
> On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
>> On 10/28/22 11:09, Richard Purdie wrote:
>> > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
>> > > As noted in the cover letter, I ran
>> > > 
>> > > oe-selftest -r fitimage.FitImageTests
>> > 
>> > Ok, good. That at least means you were only running one class of tests.
>> > I was worried you were running all of them!
>> > 
>> > > I also tried using -j$(nproc), but I saw no increase in parallelism
>> > > outside of the usual for bitbake. This was especially noticeable for
>> > > do_rootfs, which is single-threaded.
>> > 
>> > Sadly the parallelism works on a per test class basis so it wouldn't
>> > help in this case. There are only small marginal gains from running
>> > tests in individual build directories so we don't do that.
>> 
>> I estimate it could have saved me 2-3 minutes every build, since it could
>> have parallelized the root filesystem stuff.
> 
> On an initial run, it could have also ended up building a lot of pieces
> in parallel needlessly so it is all a bit of a compromise. It might be
> worth looking into whether we can make that an option, off by default.
> 
>> > > This is omitted above, but I *had* to use -j1 in order to avoid
>> > > manually wiping out my existing build directory each time (and instead
>> > > ending up with dozens of pid-named directories). This is documented
>> > > nowhere, and I found it in some old IRC logs.
>> > 
>> > Parallelism using differently named build directories is an
>> > implementation detail, not something which the -j option implies. I
>> > guess you were also using --keep-builddir
>> 
>> Failing builds don't remove the test directory so you can inspect the build
>> output. As you might imagine, I had a lot of failing builds.
> 
> I'm very familiar with that myself, yes.
> 
> We did once reuse the build directory; the challenge is we have no idea
> what the user has done in there prior to the test, so it potentially
> makes the test results incorrect.
> 
>> > > > We haven't really had anyone try and optimise the tests either, I'm
>> > > > sure there will be things in there which can help. Please don't let the
>> > > > speed put you off trying to improve things and extend our coverage!
>> > > 
>> > > The poor speed of these self tests (and of everything related to the
>> > > yocto project in general) makes this project frustrating to contribute
>> > > to. It took me around 2 days to go from my prototype to this series,
>> > > most of which was spent waiting for tests to compile and losing whatever
>> > > train of thought I had. I probably went through perhaps 20 revisions. If
>> > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
>> > > as it takes around 15 seconds to recompile it and run the full unit test
>> > > suite.
>> > > 
>> > > On the topic of these specific tests, part of the problem is that
>> > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
>> > > test which modifies something in the rootfs incurs this overhead.
>> > 
>> > For better or worse we've 'a few' more moving pieces than U-Boot.
>> > 
>> > Building a root filesystem from packages is a non-trivial task, taking
>> > under a minute is in some ways pretty good already. The only other
>> > thing we could do is incremental rootfs construction where it would
>> > add/remove changed packages. I'd worry that the result may not always
>> > be equal to a build from scratch and it might cause weird and
>> > interesting reproducibility problems (particularly when you consider
>> > things like postinsts).
>> > 
>> > I would love to improve our development "iteration" time but I'm
>> > struggling to see where we could get the speed gains from :(. Open to
>> > other ideas...
>> 
>> We don't have to build a full root filesystem. All of these tests just want
>> e.g. an initramfs. An empty (or one file) filesystem would work just as well.
>> If you still want to boot, you can make a busybox filesystem.
> 
> Could we update the test just to use an initramfs then?
> 
> I'm definitely a fan of keeping the tests as simple as we can whilst
> still testing what we need to test.

I can look into this, but I'd prefer to do it as a follow-up to this series.

I'll probably send a v2 later this week with a fleshed-out commit message for patch
5/6 (and with it possibly all those variables moved to a separate bbclass to
make it easier for other classes to create signed FITs).

--Sean
Richard Purdie Nov. 1, 2022, 5:29 p.m. UTC | #8
On Tue, 2022-11-01 at 12:14 -0400, Sean Anderson wrote:
> On 10/28/22 11:37, Richard Purdie wrote:
> > On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
> > > On 10/28/22 11:09, Richard Purdie wrote:
> > > > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
> > > > > As noted in the cover letter, I ran
> > > > > 
> > > > > oe-selftest -r fitimage.FitImageTests
> > > > 
> > > > Ok, good. That at least means you were only running one class of tests.
> > > > I was worried you were running all of them!
> > > > 
> > > > > I also tried using -j$(nproc), but I saw no increase in parallelism
> > > > > outside of the usual for bitbake. This was especially noticable for
> > > > > do_rootfs, which is single-threaded.
> > > > 
> > > > Sadly the parallelism works on a per test class basis so it wouldn't
> > > > help in this case. There are only small marginal gains from running
> > > > tests in individual build directories so we don't do that.
> > > 
> > > I estimate it could have saved me 2-3 minutes every build, since it could
> > > have parallelized the root filesystem stuff.
> > 
> > On an initial run, it could have also ended up building a lot of pieces
> > in parallel needlessly so it is all a bit of a compromise. It might be
> > worth looking into whether we can make that an option, off by default.
> > 
> > > > > This is omitted above, but I *had* to use -j1 in order to avoid
> > > > > manually wiping out my existing build directory each time (and instead
> > > > > ending up with dozens of pid-named directories). This is documented
> > > > > nowhere, and I found it in some old IRC logs.
> > > > 
> > > > Parallelism using differently named build directories is an
> > > > implementation detail, not something which the -j option implies. I
> > > > guess you were also using --keep-builddir
> > > 
> > > Failing builds don't remove the test directory so you can inspect the build
> > > output. As you might imagine, I had a lot of failing builds.
> > 
> > I'm very familiar with that myself, yes.
> > 
> > We did once reuse the build directory; the challenge is we have no idea
> > what the user has done in there prior to the test, so it potentially
> > makes the test results incorrect.
> > 
> > > > > > We haven't really had anyone try and optimise the tests either, I'm
> > > > > > sure there will be things in there which can help. Please don't let the
> > > > > > speed put you off trying to improve things and extend our coverage!
> > > > > 
> > > > > The poor speed of these self tests (and of everything related to the
> > > > > yocto project in general) makes this project frustrating to contribute
> > > > > to. It took me around 2 days to go from my prototype to this series,
> > > > > most of which was spent waiting for tests to compile and losing whatever
> > > > > train of thought I had. I probably went through perhaps 20 revisions. If
> > > > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
> > > > > as it takes around 15 seconds to recompile it and run the full unit test
> > > > > suite.
> > > > > 
> > > > > On the topic of these specific tests, part of the problem is that
> > > > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
> > > > > test which modifies something in the rootfs incurs this overhead.
> > > > 
> > > > For better or worse we've 'a few' more moving pieces than U-Boot.
> > > > 
> > > > Building a root filesystem from packages is a non-trivial task, taking
> > > > under a minute is in some ways pretty good already. The only other
> > > > thing we could do is incremental rootfs construction where it would
> > > > add/remove changed packages. I'd worry that the result may not always
> > > > be equal to a build from scratch and it might cause weird and
> > > > interesting reproducibility problems (particularly when you consider
> > > > things like postinsts).
> > > > 
> > > > I would love to improve our development "iteration" time but I'm
> > > > struggling to see where we could get the speed gains from :(. Open to
> > > > other ideas...
> > > 
> > > We don't have to build a full root filesystem. All of these tests just want
> > > e.g. an initramfs. An empty (or one file) filesystem would work just as well.
> > > If you still want to boot, you can make a busybox filesystem.
> > 
> > Could we update the test just to use an initramfs then?
> > 
> > I'm definitely a fan of keeping the tests as simple as we can whilst
> > still testing what we need to test.
> 
> I can look into this, but I'd prefer to do it as a follow-up to this series.
> 
> I'll probably send a v2 later this week with a fleshed-out commit message for patch
> 5/6 (and with it possibly all those variables moved to a separate bbclass to
> make it easier for other classes to create signed FITs).

The series did already merge so anything would be incremental
improvements at this point!

Cheers,

Richard
Sean Anderson Nov. 1, 2022, 5:40 p.m. UTC | #9
On 11/1/22 13:29, Richard Purdie wrote:
> On Tue, 2022-11-01 at 12:14 -0400, Sean Anderson wrote:
>> On 10/28/22 11:37, Richard Purdie wrote:
>> > On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
>> > > On 10/28/22 11:09, Richard Purdie wrote:
>> > > > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
>> > > > > As noted in the cover letter, I ran
>> > > > > 
>> > > > > oe-selftest -r fitimage.FitImageTests
>> > > > 
>> > > > Ok, good. That at least means you were only running one class of tests.
>> > > > I was worried you were running all of them!
>> > > > 
>> > > > > I also tried using -j$(nproc), but I saw no increase in parallelism
>> > > > > outside of the usual for bitbake. This was especially noticeable for
>> > > > > do_rootfs, which is single-threaded.
>> > > > 
>> > > > Sadly the parallelism works on a per test class basis so it wouldn't
>> > > > help in this case. There are only small marginal gains from running
>> > > > tests in individual build directories so we don't do that.
>> > > 
>> > > I estimate it could have saved me 2-3 minutes every build, since it could
>> > > have parallelized the root filesystem stuff.
>> > 
>> > On an initial run, it could have also ended up building a lot of pieces
>> > in parallel needlessly so it is all a bit of a compromise. It might be
>> > worth looking into whether we can make that an option, off by default.
>> > 
>> > > > > This is omitted above, but I *had* to use -j1 in order to avoid
>> > > > > manually wiping out my existing build directory each time (and instead
>> > > > > ending up with dozens of pid-named directories). This is documented
>> > > > > nowhere, and I found it in some old IRC logs.
>> > > > 
>> > > > Parallelism using differently named build directories is an
>> > > > implementation detail, not something which the -j option implies. I
>> > > > guess you were also using --keep-builddir
>> > > 
>> > > Failing builds don't remove the test directory so you can inspect the build
>> > > output. As you might imagine, I had a lot of failing builds.
>> > 
>> > I'm very familiar with that myself, yes.
>> > 
>> > We did once reuse the build directory; the challenge is we have no idea
>> > what the user has done in there prior to the test, so it potentially
>> > makes the test results incorrect.
>> > 
>> > > > > > We haven't really had anyone try and optimise the tests either, I'm
>> > > > > > sure there will be things in there which can help. Please don't let the
>> > > > > > speed put you off trying to improve things and extend our coverage!
>> > > > > 
>> > > > > The poor speed of these self tests (and of everything related to the
>> > > > > yocto project in general) makes this project frustrating to contribute
>> > > > > to. It took me around 2 days to go from my prototype to this series,
>> > > > > most of which was spent waiting for tests to compile and losing whatever
>> > > > > train of thought I had. I probably went through perhaps 20 revisions. If
>> > > > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
>> > > > > as it takes around 15 seconds to recompile it and run the full unit test
>> > > > > suite.
>> > > > > 
>> > > > > On the topic of these specific tests, part of the problem is that
>> > > > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
>> > > > > test which modifies something in the rootfs incurs this overhead.
>> > > > 
>> > > > For better or worse we've 'a few' more moving pieces than U-Boot.
>> > > > 
>> > > > Building a root filesystem from packages is a non-trivial task, taking
>> > > > under a minute is in some ways pretty good already. The only other
>> > > > thing we could do is incremental rootfs construction where it would
>> > > > add/remove changed packages. I'd worry that the result may not always
>> > > > be equal to a build from scratch and it might cause weird and
>> > > > interesting reproducibility problems (particularly when you consider
>> > > > things like postinsts).
>> > > > 
>> > > > I would love to improve our development "iteration" time but I'm
>> > > > struggling to see where we could get the speed gains from :(. Open to
>> > > > other ideas...
>> > > 
>> > > We don't have to build a full root filesystem. All of these tests just want
>> > > e.g. an initramfs. An empty (or one file) filesystem would work just as well.
>> > > If you still want to boot, you can make a busybox filesystem.
>> > 
>> > Could we update the test just to use an initramfs then?
>> > 
>> > I'm definitely a fan of keeping the tests as simple as we can whilst
>> > still testing what we need to test.
>> 
>> I can look into this, but I'd prefer to do it as a follow-up to this series.
>> 
>> I'll probably send a v2 later this week with a fleshed-out commit message for patch
>> 5/6 (and with it possibly all those variables moved to a separate bbclass to
>> make it easier for other classes to create signed FITs).
> 
> The series did already merge so anything would be incremental
> improvements at this point!

Huh...

I really wish you guys sent thank-you messages when you merged something. Makes
it a lot easier to keep track of which series still need work.

--Sean
Richard Purdie Nov. 1, 2022, 5:44 p.m. UTC | #10
On Tue, 2022-11-01 at 13:40 -0400, Sean Anderson wrote:
> On 11/1/22 13:29, Richard Purdie wrote:
> > On Tue, 2022-11-01 at 12:14 -0400, Sean Anderson wrote:
> > > On 10/28/22 11:37, Richard Purdie wrote:
> > > > On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
> > > > > On 10/28/22 11:09, Richard Purdie wrote:
> > > > > > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
> > > > > > > As noted in the cover letter, I ran
> > > > > > > 
> > > > > > > oe-selftest -r fitimage.FitImageTests
> > > > > > 
> > > > > > Ok, good. That at least means you were only running one class of tests.
> > > > > > I was worried you were running all of them!
> > > > > > 
> > > > > > > I also tried using -j$(nproc), but I saw no increase in parallelism
> > > > > > > outside of the usual for bitbake. This was especially noticeable for
> > > > > > > do_rootfs, which is single-threaded.
> > > > > > 
> > > > > > Sadly the parallelism works on a per test class basis so it wouldn't
> > > > > > help in this case. There are only small marginal gains from running
> > > > > > tests in individual build directories so we don't do that.
> > > > > 
> > > > > I estimate it could have saved me 2-3 minutes every build, since it could
> > > > > have parallelized the root filesystem stuff.
> > > > 
> > > > On an initial run, it could have also ended up building a lot of pieces
> > > > in parallel needlessly so it is all a bit of a compromise. It might be
> > > > worth looking into whether we can make that an option, off by default.
> > > > 
> > > > > > > This is omitted above, but I *had* to use -j1 in order to avoid
> > > > > > > manually wiping out my existing build directory each time (and instead
> > > > > > > ending up with dozens of pid-named directories). This is documented
> > > > > > > nowhere, and I found it in some old IRC logs.
> > > > > > 
> > > > > > Parallelism using differently named build directories is an
> > > > > > implementation detail, not something which the -j option implies. I
> > > > > > guess you were also using --keep-builddir
> > > > > 
> > > > > Failing builds don't remove the test directory so you can inspect the build
> > > > > output. As you might imagine, I had a lot of failing builds.
> > > > 
> > > > I'm very familiar with that myself, yes.
> > > > 
> > > > We did once reuse the build directory; the challenge is we have no idea
> > > > what the user has done in there prior to the test, so it potentially
> > > > makes the test results incorrect.
> > > > 
> > > > > > > > We haven't really had anyone try and optimise the tests either, I'm
> > > > > > > > sure there will be things in there which can help. Please don't let the
> > > > > > > > speed put you off trying to improve things and extend our coverage!
> > > > > > > 
> > > > > > > The poor speed of these self tests (and of everything related to the
> > > > > > > yocto project in general) makes this project frustrating to contribute
> > > > > > > to. It took me around 2 days to go from my prototype to this series,
> > > > > > > most of which was spent waiting for tests to compile and losing whatever
> > > > > > > train of thought I had. I probably went through perhaps 20 revisions. If
> > > > > > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
> > > > > > > as it takes around 15 seconds to recompile it and run the full unit test
> > > > > > > suite.
> > > > > > > 
> > > > > > > On the topic of these specific tests, part of the problem is that
> > > > > > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
> > > > > > > test which modifies something in the rootfs incurs this overhead.
> > > > > > 
> > > > > > For better or worse we've 'a few' more moving pieces than U-Boot.
> > > > > > 
> > > > > > Building a root filesystem from packages is a non-trivial task, taking
> > > > > > under a minute is in some ways pretty good already. The only other
> > > > > > thing we could do is incremental rootfs construction where it would
> > > > > > add/remove changed packages. I'd worry that the result may not always
> > > > > > be equal to a build from scratch and it might cause weird and
> > > > > > interesting reproducibility problems (particularly when you consider
> > > > > > things like postinsts).
> > > > > > 
> > > > > > I would love to improve our development "iteration" time but I'm
> > > > > > struggling to see where we could get the speed gains from :(. Open to
> > > > > > other ideas...
> > > > > 
> > > > > We don't have to build a full root filesystem. All of these tests just want
> > > > > e.g. an initramfs. An empty (or one file) filesystem would work just as well.
> > > > > If you still want to boot, you can make a busybox filesystem.
> > > > 
> > > > Could we update the test just to use an initramfs then?
> > > > 
> > > > I'm definitely a fan of keeping the tests as simple as we can whilst
> > > > still testing what we need to test.
> > > 
> > > I can look into this, but I'd prefer to do it as a follow-up to this series.
> > > 
> > > I'll probably send a v2 later this week with a fleshed-out commit message for patch
> > > 5/6 (and with it possibly all those variables moved to a separate bbclass to
> > > make it easier for other classes to create signed FITs).
> > 
> > The series did already merge so anything would be incremental
> > improvements at this point!
> 
> Huh...
> 
> I really wish you guys sent thank-you messages when you merged something. Makes
> it a lot easier to keep track of which series still need work.

That would result in a lot of messages on the mailing list and end up
being a lot of work for the maintainers as well.

Looking at the repository is accurate and you can see exactly what
merged and when...

Cheers,

Richard
Sean Anderson Nov. 1, 2022, 5:54 p.m. UTC | #11
On 11/1/22 13:44, Richard Purdie wrote:
> On Tue, 2022-11-01 at 13:40 -0400, Sean Anderson wrote:
>> On 11/1/22 13:29, Richard Purdie wrote:
>> > On Tue, 2022-11-01 at 12:14 -0400, Sean Anderson wrote:
>> > > On 10/28/22 11:37, Richard Purdie wrote:
>> > > > On Fri, 2022-10-28 at 11:29 -0400, Sean Anderson wrote:
>> > > > > On 10/28/22 11:09, Richard Purdie wrote:
>> > > > > > On Wed, 2022-10-26 at 13:21 -0400, Sean Anderson wrote:
>> > > > > > > As noted in the cover letter, I ran
>> > > > > > > 
>> > > > > > > oe-selftest -r fitimage.FitImageTests
>> > > > > > 
>> > > > > > Ok, good. That at least means you were only running one class of tests.
>> > > > > > I was worried you were running all of them!
>> > > > > > 
>> > > > > > > I also tried using -j$(nproc), but I saw no increase in parallelism
>> > > > > > > outside of the usual for bitbake. This was especially noticeable for
>> > > > > > > do_rootfs, which is single-threaded.
>> > > > > > 
>> > > > > > Sadly the parallelism works on a per test class basis so it wouldn't
>> > > > > > help in this case. There are only small marginal gains from running
>> > > > > > tests in individual build directories so we don't do that.
>> > > > > 
>> > > > > I estimate it could have saved me 2-3 minutes every build, since it could
>> > > > > have parallelized the root filesystem stuff.
>> > > > 
>> > > > On an initial run, it could have also ended up building a lot of pieces
>> > > > in parallel needlessly so it is all a bit of a compromise. It might be
>> > > > worth looking into whether we can make that an option, off by default.
>> > > > 
>> > > > > > > This is omitted above, but I *had* to use -j1 in order to avoid
>> > > > > > > manually wiping out my existing build directory each time (and instead
>> > > > > > > ending up with dozens of pid-named directories). This is documented
>> > > > > > > nowhere, and I found it in some old IRC logs.
>> > > > > > 
>> > > > > > Parallelism using differently named build directories is an
>> > > > > > implementation detail, not something which the -j option implies. I
>> > > > > > guess you were also using --keep-builddir
>> > > > > 
>> > > > > Failing builds don't remove the test directory so you can inspect the build
>> > > > > output. As you might imagine, I had a lot of failing builds.
>> > > > 
>> > > > I'm very familiar with that myself, yes.
>> > > > 
>> > > > We did once reuse the build directory; the challenge is we have no idea
>> > > > what the user has done in there prior to the test, so it potentially
>> > > > makes the test results incorrect.
>> > > > 
>> > > > > > > > We haven't really had anyone try and optimise the tests either, I'm
>> > > > > > > > sure there will be things in there which can help. Please don't let the
>> > > > > > > > speed put you off trying to improve things and extend our coverage!
>> > > > > > > 
>> > > > > > > The poor speed of these self tests (and of everything related to the
>> > > > > > > yocto project in general) makes this project frustrating to contribute
>> > > > > > > to. It took me around 2 days to go from my prototype to this series,
>> > > > > > > most of which was spent waiting for tests to compile and losing whatever
>> > > > > > > train of thought I had. I probably went through perhaps 20 revisions. If
>> > > > > > > I was working on e.g. U-Boot, I could have made 20 revisions in 2 hours,
>> > > > > > > as it takes around 15 seconds to recompile it and run the full unit test
>> > > > > > > suite.
>> > > > > > > 
>> > > > > > > On the topic of these specific tests, part of the problem is that
>> > > > > > > do_rootfs is a bottleneck which takes around 45-60s on my system. Every
>> > > > > > > test which modifies something in the rootfs incurs this overhead.
>> > > > > > 
>> > > > > > For better or worse we've 'a few' more moving pieces than U-Boot.
>> > > > > > 
>> > > > > > Building a root filesystem from packages is a non-trivial task, taking
>> > > > > > under a minute is in some ways pretty good already. The only other
>> > > > > > thing we could do is incremental rootfs construction where it would
>> > > > > > add/remove changed packages. I'd worry that the result may not always
>> > > > > > be equal to a build from scratch and it might cause weird and
>> > > > > > interesting reproducibility problems (particularly when you consider
>> > > > > > things like postinsts).
>> > > > > > 
>> > > > > > I would love to improve our development "iteration" time but I'm
>> > > > > > struggling to see where we could get the speed gains from :(. Open to
>> > > > > > other ideas...
>> > > > > 
>> > > > > We don't have to build a full root filesystem. All of these tests just want
>> > > > > e.g. an initramfs. An empty (or one file) filesystem would work just as well.
>> > > > > If you still want to boot, you can make a busybox filesystem.
>> > > > 
>> > > > Could we update the test just to use an initramfs then?
>> > > > 
>> > > > I'm definitely a fan of keeping the tests as simple as we can whilst
>> > > > still testing what we need to test.
>> > > 
>> > > I can look into this, but I'd prefer to do it as a follow-up to this series.
>> > > 
>> > > I'll probably send a v2 later this week with a fleshed-out commit message for patch
>> > > 5/6 (and with it possibly all those variables moved to a separate bbclass to
>> > > make it easier for other classes to create signed FITs).
>> > 
>> > The series did already merge so anything would be incremental
>> > improvements at this point!
>> 
>> Huh...
>> 
>> I really wish you guys sent thank-you messages when you merged something. Makes
>> it a lot easier to keep track of which series still need work.
> 
> That would result in a lot of messages on the mailing list and end up
> being a lot of work for the maintainers as well.

If you use a tool like b4 [1] it can automatically generate a summary thank-you message.
See [2] for an example. This keeps the number of messages down and is not terribly
burdensome for maintainers (since it fits into the usual patch workflow).
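
Roughly (flags quoted from memory, so treat this as a sketch):

b4 ty -l            # list series b4 has seen get merged
b4 ty -a -o thanks  # write thank-you mails for all of them into ./thanks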

You can set up something like pwbot [3], which integrates with patchwork
and your repository. It produces messages like [4] automatically when it
notices patches have been merged into the repository, and marks them as
"Accepted" in patchwork. This is even lower effort for your maintainers.

--Sean

[1] https://git.kernel.org/pub/scm/utils/b4/b4.git
[2] https://lore.kernel.org/u-boot/164866411209.441601.3349958857188274518.b4-ty@gmail.com/
[3] https://korg.docs.kernel.org/patchwork/pwbot.html
[4] https://lore.kernel.org/netdev/166702502193.25217.18098583636278445187.git-patchwork-notify@kernel.org/

> Looking at the repository is accurate and you can see exactly what
> merged and when...

I don't pull from upstream very often. Usually I only do it when
upgrading releases, or when preparing a series. It's easy to miss when
your patches get applied. However, since review is via email, I make
sure to check that regularly so I don't miss anything.

--Sean
Alexander Kanavin Nov. 1, 2022, 6:02 p.m. UTC | #12
On Tue, 1 Nov 2022 at 18:54, Sean Anderson via lists.openembedded.org
<sean.anderson=seco.com@lists.openembedded.org> wrote:
> If you use a tool like b4 [1] it can automatically generate a summary thank-you message.
> See [2] for an example. This keeps the number of messages down and is not terribly
> burdensome for maintainers (since it fits into the usual patch workflow).
>
> You can set up something like pwbot [3], which integrates with patchwork
> and your repository. It produces messages like [4] automatically when it
> notices patches have been merged into the repository, and marks them as
> "Accepted" in patchwork. This is even lower effort for your maintainers.

You are very much welcome to set these things up for the project and
maintain them.

Alex
Alexander Kanavin Nov. 1, 2022, 6:24 p.m. UTC | #13
On Tue, 1 Nov 2022 at 19:02, Alexander Kanavin via
lists.openembedded.org <alex.kanavin=gmail.com@lists.openembedded.org>
wrote:
> > You can set up something like pwbot [3], which integrates with patchwork
> > and your repository. It produces messages like [4] automatically when it
> > notices patches have been merged into the repository, and marks them as
> > "Accepted" in patchwork. This is even lower effort for your maintainers.
>
> You are very much welcome to set these things up for the project and
> maintain them.

And by the way, this is a serious offer. We do not have a maintainer
for patchwork, and we are looking for one, as seen here:
https://git.yoctoproject.org/poky/tree/MAINTAINERS.md?h=master-next

Alex
Sean Anderson Nov. 1, 2022, 6:29 p.m. UTC | #14
On 11/1/22 14:02, Alexander Kanavin wrote:
> On Tue, 1 Nov 2022 at 18:54, Sean Anderson via lists.openembedded.org
> <sean.anderson=seco.com@lists.openembedded.org> wrote:
>> If you use a tool like b4 [1], it can automatically generate a summary thank-you
>> message; see [2] for an example. This keeps the number of messages down and is
>> not terribly burdensome for maintainers (since it fits into the usual patch workflow).
>>
>> You can set up something like pwbot [3], which integrates with patchwork
>> and your repository. It produces messages like [4] automatically when it
>> notices that patches have been merged into the repository, and marks them as
>> "Accepted" in patchwork. This is even lower effort for your maintainers.
> 
> You are very much welcome to set these things up for the project and
> maintain them.

Well, b4 is something you'd have to integrate into your personal workflow,
so I can't help with that.

I believe that in order to set up pwbot you need a computer with the following (a rough sketch follows the list):

- The git repository you want to track. The easiest way to do this is
  probably to run the bot on git.openembedded.org itself.
- A patchwork API key (to modify the patch status)
- A working email account with SMTP access (to send out the emails). This
  probably means a local sendmail (or similar) server.
- cron (or some other scheduler)
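
Tying those together, the bot run could be as simple as a cron entry along
these lines (a hypothetical sketch: the bot ships as git-patchwork-bot.py in
the korg-helpers repository, and [3] documents the real flags and config
format):

	# Hypothetical paths and flags: poll every 15 minutes, let the bot
	# compare the repository against patchwork, mark merged series as
	# "Accepted", and mail the submitters.
	*/15 * * * * /opt/korg-helpers/git-patchwork-bot.py \
		--config /etc/pwbot/openembedded-core.yaml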

Although I would be happy to help set things up, I'd need some help from the
people responsible for maintaining these existing services.

--Sean
Alexander Kanavin Nov. 1, 2022, 6:40 p.m. UTC | #15
On Tue, 1 Nov 2022 at 19:30, Sean Anderson <sean.anderson@seco.com> wrote:
> I believe that in order to set up pwbot you need a computer with the following:
>
> - The git repository you want to track. The easiest way to do this is
>   probably to run the bot on git.openembedded.org itself.
> - A patchwork API key (to modify the patch status)
> - A working email account with SMTP access (to send out the emails). This
>   probably means a local sendmail (or similar) server.
> - cron (or some other scheduler)
>
> Although I would be happy to help set things up, I'd need some help from the
> people responsible for maintaining these existing services.

I think Michael Halstead can help you with access rights, if you'd
like to look into the existing patchwork instance:
https://patchwork.yoctoproject.org/

I would very much like to see confirmation emails when things get
merged into master. The complete absence of feedback on success is not
friendly to first-time contributors, or to people who contribute only
rarely.

Alex
Ross Burton Nov. 1, 2022, 9:01 p.m. UTC | #16
> On 1 Nov 2022, at 17:40, Sean Anderson via lists.openembedded.org <sean.anderson=seco.com@lists.openembedded.org> wrote:
> 
> I really wish you guys sent thank-you messages when you merged something. Makes
> it a lot easier to keep track of which series still need work.

My workflow is to have everything I submit in a local feature branch, which I rebase a few times a week. This tells me when stuff is merged, and, more importantly, when something I’ve submitted no longer applies due to other changes in the repository.
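
Something like this, roughly (the branch name is just an example):

	$ git fetch origin
	$ git rebase origin/master my-outstanding-patches
	# Commits that already landed upstream are dropped automatically;
	# anything that conflicts is a series that needs a respin.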

Ross
Alexandre Belloni Nov. 9, 2022, 11:07 p.m. UTC | #17
Hi,

On 01/11/2022 13:54:18-0400, Sean Anderson wrote:
> > That would result in a lot of messages on the mailing list and would end up
> > being a lot of work for the maintainers as well.
> 
> If you use a tool like b4 [1], it can automatically generate a summary thank-you
> message; see [2] for an example. This keeps the number of messages down and is
> not terribly burdensome for maintainers (since it fits into the usual patch workflow).
> 

I was also surprised not to receive confirmation that my patches were
applied. Unfortunately, however, b4 does not work with our current
workflow, as I apply the patches and provide tested branches to
Richard, who then cherry-picks from them.

> You can set up something like pwbot [3], which integrates with patchwork
> and your repository. It produces messages like [4] automatically when it
> notices that patches have been merged into the repository, and marks them as
> "Accepted" in patchwork. This is even lower effort for your maintainers.
> 

This would integrate better, but I'm not sure we have a patchwork instance for
every repository that composes poky.
Sean Anderson Nov. 10, 2022, 3:19 p.m. UTC | #18
On 11/9/22 18:07, Alexandre Belloni wrote:
> Hi,
> 
> On 01/11/2022 13:54:18-0400, Sean Anderson wrote:
>> > That would result in a lot of messages on the mailing list and would end up
>> > being a lot of work for the maintainers as well.
>> 
>> If you use a tool like b4 [1], it can automatically generate a summary thank-you
>> message; see [2] for an example. This keeps the number of messages down and is
>> not terribly burdensome for maintainers (since it fits into the usual patch workflow).
>> 
> 
> I was also surprised not to receive confirmation that my patches were
> applied. Unfortunately, however, b4 does not work with our current
> workflow, as I apply the patches and provide tested branches to
> Richard, who then cherry-picks from them.
> 
>> You can set up something like pwbot [3], which integrates with patchwork
>> and your repository. It produces messages like [4] automatically when it
>> notices that patches have been merged into the repository, and marks them as
>> "Accepted" in patchwork. This is even lower effort for your maintainers.
>> 
> 
> This would integrate better, but I'm not sure we have a patchwork instance for
> every repository that composes poky.

I think even incremental changes are good here. We can start with just
openembedded-core, for example, and see how that goes.

--Sean
diff mbox series

Patch

diff --git a/meta/classes-recipe/kernel-fitimage.bbclass b/meta/classes-recipe/kernel-fitimage.bbclass
index e4a130a0f2..befdf2568c 100644
--- a/meta/classes-recipe/kernel-fitimage.bbclass
+++ b/meta/classes-recipe/kernel-fitimage.bbclass
@@ -4,7 +4,7 @@ 
 # SPDX-License-Identifier: MIT
 #
 
-inherit kernel-uboot kernel-artifact-names uboot-sign
+inherit kernel-uboot kernel-artifact-names uboot-config
 
 def get_fit_replacement_type(d):
     kerneltypes = d.getVar('KERNEL_IMAGETYPES') or ""
@@ -50,15 +50,6 @@  python __anonymous () {
             d.appendVarFlag('do_assemble_fitimage', 'depends', ' virtual/dtb:do_populate_sysroot')
             d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' virtual/dtb:do_populate_sysroot')
             d.setVar('EXTERNAL_KERNEL_DEVICETREE', "${RECIPE_SYSROOT}/boot/devicetree")
-
-        # Verified boot will sign the fitImage and append the public key to
-        # U-Boot dtb. We ensure the U-Boot dtb is deployed before assembling
-        # the fitImage:
-        if d.getVar('UBOOT_SIGN_ENABLE') == "1" and d.getVar('UBOOT_DTB_BINARY'):
-            uboot_pn = d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'
-            d.appendVarFlag('do_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
-            if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
-                d.appendVarFlag('do_assemble_fitimage_initramfs', 'depends', ' %s:do_populate_sysroot' % uboot_pn)
 }
 
 
@@ -678,20 +669,12 @@  fitimage_assemble() {
 		${KERNEL_OUTPUT_DIR}/$2
 
 	#
-	# Step 8: Sign the image and add public key to U-Boot dtb
+	# Step 8: Sign the image
 	#
 	if [ "x${UBOOT_SIGN_ENABLE}" = "x1" ] ; then
-		add_key_to_u_boot=""
-		if [ -n "${UBOOT_DTB_BINARY}" ]; then
-			# The u-boot.dtb is a symlink to UBOOT_DTB_IMAGE, so we need copy
-			# both of them, and don't dereference the symlink.
-			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
-			add_key_to_u_boot="-K ${B}/${UBOOT_DTB_BINARY}"
-		fi
 		${UBOOT_MKIMAGE_SIGN} \
 			${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
 			-F -k "${UBOOT_SIGN_KEYDIR}" \
-			$add_key_to_u_boot \
 			-r ${KERNEL_OUTPUT_DIR}/$2 \
 			${UBOOT_MKIMAGE_SIGN_ARGS}
 	fi
@@ -700,18 +683,30 @@  fitimage_assemble() {
 do_assemble_fitimage() {
 	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage"; then
 		cd ${B}
-		fitimage_assemble fit-image.its fitImage ""
+		fitimage_assemble fit-image.its fitImage-none ""
+		if [ "${INITRAMFS_IMAGE_BUNDLE}" != "1" ]; then
+			ln -sf fitImage-none ${B}/${KERNEL_OUTPUT_DIR}/fitImage
+		fi
 	fi
 }
 
 addtask assemble_fitimage before do_install after do_compile
 
+SYSROOT_DIRS:append = " /sysroot-only"
+do_install:append() {
+	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage" && \
+		[ "${UBOOT_SIGN_ENABLE}" = "1" ]; then
+		install -D ${B}/${KERNEL_OUTPUT_DIR}/fitImage-none ${D}/sysroot-only/fitImage
+	fi
+}
+
 do_assemble_fitimage_initramfs() {
 	if echo ${KERNEL_IMAGETYPES} | grep -wq "fitImage" && \
 		test -n "${INITRAMFS_IMAGE}" ; then
 		cd ${B}
 		if [ "${INITRAMFS_IMAGE_BUNDLE}" = "1" ]; then
-			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage ""
+			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage-bundle ""
+			ln -sf fitImage-bundle ${B}/${KERNEL_OUTPUT_DIR}/fitImage
 		else
 			fitimage_assemble fit-image-${INITRAMFS_IMAGE}.its fitImage-${INITRAMFS_IMAGE} 1
 		fi
@@ -802,35 +797,4 @@  kernel_do_deploy:append() {
 			fi
 		fi
 	fi
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-	   [ -n "${UBOOT_DTB_BINARY}" ] ; then
-		# UBOOT_DTB_IMAGE is a realfile, but we can't use
-		# ${UBOOT_DTB_IMAGE} since it contains ${PV} which is aimed
-		# for u-boot, but we are in kernel env now.
-		install -m 0644 ${B}/u-boot-${MACHINE}*.dtb "$deployDir/"
-	fi
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${UBOOT_BINARY}" -a -n "${SPL_DTB_BINARY}" ] ; then
-		# If we're also creating and/or signing the uboot fit, now we need to
-		# deploy it, it's its file, as well as u-boot-spl.dtb
-		install -m 0644 ${B}/u-boot-spl-${MACHINE}*.dtb "$deployDir/"
-		bbnote "Copying u-boot-fitImage file..."
-		install -m 0644 ${B}/u-boot-fitImage-* "$deployDir/"
-		bbnote "Copying u-boot-its file..."
-		install -m 0644 ${B}/u-boot-its-* "$deployDir/"
-	fi
-}
-
-# The function below performs the following in case of initramfs bundles:
-# - Removes do_assemble_fitimage. FIT generation is done through
-#   do_assemble_fitimage_initramfs. do_assemble_fitimage is not needed
-#   and should not be part of the tasks to be executed.
-# - Since do_kernel_generate_rsa_keys is inserted by default
-#   between do_compile and do_assemble_fitimage, this is
-#   not suitable in case of initramfs bundles.  do_kernel_generate_rsa_keys
-#   should be between do_bundle_initramfs and do_assemble_fitimage_initramfs.
-python () {
-    if d.getVar('INITRAMFS_IMAGE_BUNDLE') == "1":
-        bb.build.deltask('do_assemble_fitimage', d)
-        bb.build.deltask('kernel_generate_rsa_keys', d)
-        bb.build.addtask('kernel_generate_rsa_keys', 'do_assemble_fitimage_initramfs', 'do_bundle_initramfs', d)
 }
diff --git a/meta/classes-recipe/uboot-config.bbclass b/meta/classes-recipe/uboot-config.bbclass
index 73dc464444..fb7a4bc498 100644
--- a/meta/classes-recipe/uboot-config.bbclass
+++ b/meta/classes-recipe/uboot-config.bbclass
@@ -19,6 +19,9 @@  def removesuffix(s, suffix):
         return s[:-len(suffix)]
     return s
 
+UBOOT_ENTRYPOINT ?= "20008000"
+UBOOT_LOADADDRESS ?= "${UBOOT_ENTRYPOINT}"
+
 # Some versions of u-boot use .bin and others use .img.  By default use .bin
 # but enable individual recipes to change this value.
 UBOOT_SUFFIX ??= "bin"
diff --git a/meta/classes-recipe/uboot-sign.bbclass b/meta/classes-recipe/uboot-sign.bbclass
index 569907fa68..3dc029c429 100644
--- a/meta/classes-recipe/uboot-sign.bbclass
+++ b/meta/classes-recipe/uboot-sign.bbclass
@@ -5,7 +5,7 @@ 
 #
 
 # This file is part of U-Boot verified boot support and is intended to be
-# inherited from u-boot recipe and from kernel-fitimage.bbclass.
+# inherited from the u-boot recipe.
 #
 # The signature procedure requires the user to generate an RSA key and
 # certificate in a directory and to define the following variable:
@@ -22,19 +22,6 @@ 
 #
 # The signature support is limited to the use of CONFIG_OF_SEPARATE in U-Boot.
 #
-# The tasks sequence is set as below, using DEPLOY_IMAGE_DIR as common place to
-# treat the device tree blob:
-#
-# * u-boot:do_install:append
-#   Install UBOOT_DTB_BINARY to datadir, so that kernel can use it for
-#   signing, and kernel will deploy UBOOT_DTB_BINARY after signs it.
-#
-# * virtual/kernel:do_assemble_fitimage
-#   Sign the image
-#
-# * u-boot:do_deploy[postfuncs]
-#   Deploy files like UBOOT_DTB_IMAGE, UBOOT_DTB_SYMLINK and others.
-#
 # For more details on signature process, please refer to U-Boot documentation.
 
 # We need some variables from u-boot-config
@@ -49,6 +36,7 @@  SPL_SIGN_ENABLE ?= "0"
 # Default value for deployment filenames.
 UBOOT_DTB_IMAGE ?= "u-boot-${MACHINE}-${PV}-${PR}.dtb"
 UBOOT_DTB_BINARY ?= "u-boot.dtb"
+UBOOT_DTB_SIGNED ?= "${UBOOT_DTB_BINARY}-signed"
 UBOOT_DTB_SYMLINK ?= "u-boot-${MACHINE}.dtb"
 UBOOT_NODTB_IMAGE ?= "u-boot-nodtb-${MACHINE}-${PV}-${PR}.bin"
 UBOOT_NODTB_BINARY ?= "u-boot-nodtb.bin"
@@ -62,6 +50,7 @@  UBOOT_FITIMAGE_SYMLINK ?= "u-boot-fitImage-${MACHINE}"
 SPL_DIR ?= "spl"
 SPL_DTB_IMAGE ?= "u-boot-spl-${MACHINE}-${PV}-${PR}.dtb"
 SPL_DTB_BINARY ?= "u-boot-spl.dtb"
+SPL_DTB_SIGNED ?= "${SPL_DTB_BINARY}-signed"
 SPL_DTB_SYMLINK ?= "u-boot-spl-${MACHINE}.dtb"
 SPL_NODTB_IMAGE ?= "u-boot-spl-nodtb-${MACHINE}-${PV}-${PR}.bin"
 SPL_NODTB_BINARY ?= "u-boot-spl-nodtb.bin"
@@ -92,58 +81,48 @@  UBOOT_FIT_KEY_REQ_ARGS ?= "-batch -new"
 # Standard format for public key certificate
 UBOOT_FIT_KEY_SIGN_PKCS ?= "-x509"
 
-# Functions on this bbclass can apply to either U-boot or Kernel,
-# depending on the scenario
-UBOOT_PN = "${@d.getVar('PREFERRED_PROVIDER_u-boot') or 'u-boot'}"
-KERNEL_PN = "${@d.getVar('PREFERRED_PROVIDER_virtual/kernel')}"
+# This is only necessary for determining the signing configuration
+KERNEL_PN = "${PREFERRED_PROVIDER_virtual/kernel}"
 
-# We need u-boot-tools-native if we're creating a U-Boot fitImage
 python() {
-    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1':
-        depends = d.getVar("DEPENDS")
-        depends = "%s u-boot-tools-native dtc-native" % depends
-        d.setVar("DEPENDS", depends)
+    # We need u-boot-tools-native if we're creating a U-Boot fitImage
+    sign = d.getVar('UBOOT_SIGN_ENABLE') == '1'
+    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1' or sign:
+        d.appendVar('DEPENDS', " u-boot-tools-native dtc-native")
+    if sign:
+        d.appendVar('DEPENDS', " " + d.getVar('KERNEL_PN'))
 }
 
-concat_dtb_helper() {
+concat_dtb() {
+	type="$1"
+	binary="$2"
+
 	if [ -e "${UBOOT_DTB_BINARY}" ]; then
-		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_BINARY}
-		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_SYMLINK}
-	fi
-
-	if [ -f "${UBOOT_NODTB_BINARY}" ]; then
-		install ${UBOOT_NODTB_BINARY} ${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_SYMLINK}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_BINARY}
+		# Re-sign the kernel in order to add the keys to our dtb
+		${UBOOT_MKIMAGE_SIGN} \
+			${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
+			-F -k "${UBOOT_SIGN_KEYDIR}" \
+			-K "${UBOOT_DTB_BINARY}" \
+			-r ${B}/fitImage-linux \
+			${UBOOT_MKIMAGE_SIGN_ARGS}
+		cp ${UBOOT_DTB_BINARY} ${UBOOT_DTB_SIGNED}
 	fi
 
 	# If we're not using a signed u-boot fit, concatenate SPL w/o DTB & U-Boot DTB
-	# with public key (otherwise it will be deployed by the equivalent
-	# concat_spl_dtb_helper function - cf. kernel-fitimage.bbclass for more details)
+	# with public key (otherwise U-Boot will be packaged by uboot_fitimage_assemble)
 	if [ "${SPL_SIGN_ENABLE}" != "1" ] ; then
-		deployed_uboot_dtb_binary='${DEPLOY_DIR_IMAGE}/${UBOOT_DTB_IMAGE}'
 		if [ "x${UBOOT_SUFFIX}" = "ximg" -o "x${UBOOT_SUFFIX}" = "xrom" ] && \
-			[ -e "$deployed_uboot_dtb_binary" ]; then
-			oe_runmake EXT_DTB=$deployed_uboot_dtb_binary
-			install ${UBOOT_BINARY} ${DEPLOYDIR}/${UBOOT_IMAGE}
-		elif [ -e "${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}" -a -e "$deployed_uboot_dtb_binary" ]; then
-			cd ${DEPLOYDIR}
-			cat ${UBOOT_NODTB_IMAGE} $deployed_uboot_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${UBOOT_BINARY} > ${UBOOT_IMAGE}
-
-			if [ -n "${UBOOT_CONFIG}" ]
-			then
-				i=0
-				j=0
-				for config in ${UBOOT_MACHINE}; do
-					i=$(expr $i + 1);
-					for type in ${UBOOT_CONFIG}; do
-						j=$(expr $j + 1);
-						if [ $j -eq $i ]
-						then
-							cp ${UBOOT_IMAGE} ${B}/${CONFIG_B_PATH}/u-boot-$type.${UBOOT_SUFFIX}
-						fi
-					done
-				done
+			[ -e "${UBOOT_DTB_BINARY}" ]; then
+			oe_runmake EXT_DTB="${UBOOT_DTB_SIGNED}" ${UBOOT_MAKE_TARGET}
+			if [ -n "${binary}" ]; then
+				cp ${binary} ${UBOOT_BINARYNAME}-${type}.${UBOOT_SUFFIX}
+			fi
+		elif [ -e "${UBOOT_NODTB_BINARY}" -a -e "${UBOOT_DTB_BINARY}" ]; then
+			if [ -n "${binary}" ]; then
+				cat ${UBOOT_NODTB_BINARY} ${UBOOT_DTB_SIGNED} | tee ${binary} > \
+					${UBOOT_BINARYNAME}-${type}.${UBOOT_SUFFIX}
+			else
+				cat ${UBOOT_NODTB_BINARY} ${UBOOT_DTB_SIGNED} > ${UBOOT_BINARY}
 			fi
 		else
 			bbwarn "Failure while adding public key to u-boot binary. Verified boot won't be available."
@@ -151,120 +130,67 @@  concat_dtb_helper() {
 	fi
 }
 
-concat_spl_dtb_helper() {
+deploy_dtb() {
+	type="$1"
 
-	# We only deploy symlinks to the u-boot-spl.dtb,as the KERNEL_PN will
-	# be responsible for deploying the real file
-	if [ -e "${SPL_DIR}/${SPL_DTB_BINARY}" ] ; then
-		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_SYMLINK}
-		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_BINARY}
-	fi
-
-	# Concatenate the SPL nodtb binary and u-boot.dtb
-	deployed_spl_dtb_binary='${DEPLOY_DIR_IMAGE}/${SPL_DTB_IMAGE}'
-	if [ -e "${DEPLOYDIR}/${SPL_NODTB_IMAGE}" -a -e "$deployed_spl_dtb_binary" ] ; then
-		cd ${DEPLOYDIR}
-		cat ${SPL_NODTB_IMAGE} $deployed_spl_dtb_binary | tee ${B}/${CONFIG_B_PATH}/${SPL_BINARY} > ${SPL_IMAGE}
+	if [ -n "${type}" ]; then
+		uboot_dtb_binary="u-boot-${type}-${PV}-${PR}.dtb"
+		uboot_nodtb_binary="u-boot-nodtb-${type}-${PV}-${PR}.bin"
 	else
-		bbwarn "Failure while adding public key to spl binary. Verified U-Boot boot won't be available."
+		uboot_dtb_binary="${UBOOT_DTB_IMAGE}"
+		uboot_nodtb_binary="${UBOOT_NODTB_IMAGE}"
 	fi
-}
 
+	if [ -e "${UBOOT_DTB_SIGNED}" ]; then
+		install -Dm644 ${UBOOT_DTB_SIGNED} ${DEPLOYDIR}/${uboot_dtb_binary}
+		if [ -n "${type}" ]; then
+			ln -sf ${uboot_dtb_binary} ${DEPLOYDIR}/${UBOOT_DTB_IMAGE}
+		fi
+	fi
 
-concat_dtb() {
-	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${UBOOT_DTB_BINARY}" ]; then
-		mkdir -p ${DEPLOYDIR}
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				CONFIG_B_PATH="$config"
-				cd ${B}/$config
-				concat_dtb_helper
-			done
-		else
-			CONFIG_B_PATH=""
-			cd ${B}
-			concat_dtb_helper
+	if [ -f "${UBOOT_NODTB_BINARY}" ]; then
+		install -Dm644 ${UBOOT_DTB_BINARY} ${DEPLOYDIR}/${uboot_nodtb_binary}
+		if [ -n "${type}" ]; then
+			ln -sf ${uboot_nodtb_binary} ${DEPLOYDIR}/${UBOOT_NODTB_IMAGE}
 		fi
 	fi
 }
 
 concat_spl_dtb() {
-	if [ "${SPL_SIGN_ENABLE}" = "1" -a "${PN}" = "${UBOOT_PN}" -a -n "${SPL_DTB_BINARY}" ]; then
-		mkdir -p ${DEPLOYDIR}
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				CONFIG_B_PATH="$config"
-				cd ${B}/$config
-				concat_spl_dtb_helper
-			done
-		else
-			CONFIG_B_PATH=""
-			cd ${B}
-			concat_spl_dtb_helper
-		fi
+	if [ -e "${SPL_DIR}/${SPL_NODTB_BINARY}" -a -e "${SPL_DIR}/${SPL_DTB_BINARY}" ] ; then
+		cat ${SPL_DIR}/${SPL_NODTB_BINARY} ${SPL_DIR}/${SPL_DTB_SIGNED} > "${SPL_BINARY}"
+	else
+		bbwarn "Failure while adding public key to spl binary. Verified U-Boot boot won't be available."
 	fi
 }
 
+deploy_spl_dtb() {
+	type="$1"
 
-# Install UBOOT_DTB_BINARY to datadir, so that kernel can use it for
-# signing, and kernel will deploy UBOOT_DTB_BINARY after signs it.
-install_helper() {
-	if [ -f "${UBOOT_DTB_BINARY}" ]; then
-		# UBOOT_DTB_BINARY is a symlink to UBOOT_DTB_IMAGE, so we
-		# need both of them.
-		install -Dm 0644 ${UBOOT_DTB_BINARY} ${D}${datadir}/${UBOOT_DTB_IMAGE}
-		ln -sf ${UBOOT_DTB_IMAGE} ${D}${datadir}/${UBOOT_DTB_BINARY}
+	if [ -n "${type}" ]; then
+		spl_dtb_binary="u-boot-spl-${type}-${PV}-${PR}.dtb"
+		spl_nodtb_binary="u-boot-spl-nodtb-${type}-${PV}-${PR}.bin"
 	else
-		bbwarn "${UBOOT_DTB_BINARY} not found"
+		spl_dtb_binary="${SPL_DTB_IMAGE}"
+		spl_nodtb_binary="${SPL_NODTB_IMAGE}"
 	fi
-}
 
-# Install SPL dtb and u-boot nodtb to datadir,
-install_spl_helper() {
-	if [ -f "${SPL_DIR}/${SPL_DTB_BINARY}" ]; then
-		install -Dm 0644 ${SPL_DIR}/${SPL_DTB_BINARY} ${D}${datadir}/${SPL_DTB_IMAGE}
-		ln -sf ${SPL_DTB_IMAGE} ${D}${datadir}/${SPL_DTB_BINARY}
-	else
-		bbwarn "${SPL_DTB_BINARY} not found"
-	fi
-	if [ -f "${UBOOT_NODTB_BINARY}" ] ; then
-		install -Dm 0644 ${UBOOT_NODTB_BINARY} ${D}${datadir}/${UBOOT_NODTB_IMAGE}
-		ln -sf ${UBOOT_NODTB_IMAGE} ${D}${datadir}/${UBOOT_NODTB_BINARY}
-	else
-		bbwarn "${UBOOT_NODTB_BINARY} not found"
+	if [ -e "${SPL_DIR}/${SPL_DTB_SIGNED}" ] ; then
+		install -Dm644 ${SPL_DIR}/${SPL_DTB_SIGNED} ${DEPLOYDIR}/${spl_dtb_binary}
+		if [ -n "${type}" ]; then
+			ln -sf ${spl_dtb_binary} ${DEPLOYDIR}/${SPL_DTB_IMAGE}
+		fi
 	fi
 
-	# We need to install a 'stub' u-boot-fitimage + its to datadir,
-	# so that the KERNEL_PN can use the correct filename when
-	# assembling and deploying them
-	touch ${D}/${datadir}/${UBOOT_FITIMAGE_IMAGE}
-	touch ${D}/${datadir}/${UBOOT_ITS_IMAGE}
-}
-
-do_install:append() {
-	if [ "${PN}" = "${UBOOT_PN}" ]; then
-		if [ -n "${UBOOT_CONFIG}" ]; then
-			for config in ${UBOOT_MACHINE}; do
-				cd ${B}/$config
-				if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-					[ -n "${UBOOT_DTB_BINARY}" ]; then
-					install_helper
-				fi
-				if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
-					install_spl_helper
-				fi
-			done
-		else
-			cd ${B}
-			if [ "${UBOOT_SIGN_ENABLE}" = "1" -o "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-				[ -n "${UBOOT_DTB_BINARY}" ]; then
-				install_helper
-			fi
-			if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
-				install_spl_helper
-			fi
+	if [ -f "${SPL_DIR}/${SPL_NODTB_BINARY}" ] ; then
+		install -Dm644 ${SPL_DIR}/${SPL_NODTB_BINARY} ${DEPLOYDIR}/${spl_nodtb_binary}
+		if [ -n "${type}" ]; then
+			ln -sf ${spl_nodtb_binary} ${DEPLOYDIR}/${SPL_NODTB_IMAGE}
 		fi
 	fi
+
+	# For backwards compatibility...
+	install -Dm644 ${SPL_BINARY} ${DEPLOYDIR}/${SPL_IMAGE}
 }
 
 do_uboot_generate_rsa_keys() {
@@ -300,13 +226,10 @@  addtask uboot_generate_rsa_keys before do_uboot_assemble_fitimage after do_compi
 # Create a ITS file for the U-boot FIT, for use when
 # we want to sign it so that the SPL can verify it
 uboot_fitimage_assemble() {
-	uboot_its="$(basename ${STAGING_DATADIR}/u-boot-its-*)"
-	uboot_bin="$(basename ${STAGING_DATADIR}/u-boot-fitImage-*)"
-
-	rm -f $uboot_its $uboot_bin
+	rm -f ${UBOOT_ITS} ${UBOOT_FITIMAGE_BINARY}
 
 	# First we create the ITS script
-	cat << EOF >> $uboot_its
+	cat << EOF >> ${UBOOT_ITS}
 /dts-v1/;
 
 / {
@@ -326,7 +249,7 @@  uboot_fitimage_assemble() {
 EOF
 
 	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
-		cat << EOF >> $uboot_its
+		cat << EOF >> ${UBOOT_ITS}
             signature {
                 algo = "${UBOOT_FIT_HASH_ALG},${UBOOT_FIT_SIGN_ALG}";
                 key-name-hint = "${SPL_SIGN_KEYNAME}";
@@ -334,7 +257,7 @@  EOF
 EOF
 	fi
 
-	cat << EOF >> $uboot_its
+	cat << EOF >> ${UBOOT_ITS}
         };
         fdt {
             description = "U-Boot FDT";
@@ -345,7 +268,7 @@  EOF
 EOF
 
 	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
-		cat << EOF >> $uboot_its
+		cat << EOF >> ${UBOOT_ITS}
             signature {
                 algo = "${UBOOT_FIT_HASH_ALG},${UBOOT_FIT_SIGN_ALG}";
                 key-name-hint = "${SPL_SIGN_KEYNAME}";
@@ -353,7 +276,7 @@  EOF
 EOF
 	fi
 
-	cat << EOF >> $uboot_its
+	cat << EOF >> ${UBOOT_ITS}
         };
     };
 
@@ -373,8 +296,8 @@  EOF
 	#
 	${UBOOT_MKIMAGE} \
 		${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
-		-f $uboot_its \
-		$uboot_bin
+		-f ${UBOOT_ITS} \
+		${UBOOT_FITIMAGE_BINARY}
 
 	if [ "${SPL_SIGN_ENABLE}" = "1" ] ; then
 		#
@@ -383,74 +306,136 @@  EOF
 		${UBOOT_MKIMAGE_SIGN} \
 			${@'-D "${SPL_MKIMAGE_DTCOPTS}"' if len('${SPL_MKIMAGE_DTCOPTS}') else ''} \
 			-F -k "${SPL_SIGN_KEYDIR}" \
-			-K "${SPL_DTB_BINARY}" \
-			-r $uboot_bin \
+			-K "${SPL_DIR}/${SPL_DTB_BINARY}" \
+			-r ${UBOOT_FITIMAGE_BINARY} \
 			${SPL_MKIMAGE_SIGN_ARGS}
 	fi
 
+	cp ${SPL_DIR}/${SPL_DTB_BINARY} ${SPL_DIR}/${SPL_DTB_SIGNED}
 }
 
-do_uboot_assemble_fitimage() {
-	# This function runs in KERNEL_PN context. The reason for that is that we need to
-	# support the scenario where UBOOT_SIGN_ENABLE is placing the Kernel fitImage's
-	# pubkey in the u-boot.dtb file, so that we can use it when building the U-Boot
-	# fitImage itself.
-	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] && \
-	   [ -n "${SPL_DTB_BINARY}" -a "${PN}" = "${KERNEL_PN}" ] ; then
-		if [ "${UBOOT_SIGN_ENABLE}" != "1" ]; then
-			# If we're not signing the Kernel fitImage, that means
-			# we need to copy the u-boot.dtb from staging ourselves
-			cp -P ${STAGING_DATADIR}/u-boot*.dtb ${B}
-		fi
-		# As we are in the kernel context, we need to copy u-boot-spl.dtb from staging first.
-		# Unfortunately, need to glob on top of ${SPL_DTB_BINARY} since _IMAGE and _SYMLINK
-		# will contain U-boot's PV
-		# Similarly, we need to get the filename for the 'stub' u-boot-fitimage + its in
-		# staging so that we can use it for creating the image with the correct filename
-		# in the KERNEL_PN context.
-		# As for the u-boot.dtb (with fitimage's pubkey), it should come from the dependent
-		# do_assemble_fitimage task
-		cp -P ${STAGING_DATADIR}/u-boot-spl*.dtb ${B}
-		cp -P ${STAGING_DATADIR}/u-boot-nodtb*.bin ${B}
-		rm -rf ${B}/u-boot-fitImage-* ${B}/u-boot-its-*
-		cd ${B}
+uboot_assemble_fitimage_helper() {
+	type="$1"
+	binary="$2"
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a -n "${UBOOT_DTB_BINARY}" ] ; then
+		concat_dtb $type $binary
+	fi
+
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
 		uboot_fitimage_assemble
 	fi
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ] ; then
+		concat_spl_dtb
+	fi
+}
+
+do_uboot_assemble_fitimage() {
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" ] ; then
+		cp "${STAGING_DIR_HOST}/sysroot-only/fitImage" "${B}/fitImage-linux"
+	fi
+
+	if [ -n "${UBOOT_CONFIG}" ]; then
+		unset i j k
+		for config in ${UBOOT_MACHINE}; do
+			i=$(expr $i + 1);
+			for type in ${UBOOT_CONFIG}; do
+				j=$(expr $j + 1);
+				if [ $j -eq $i ]; then
+					break;
+				fi
+			done
+
+			for binary in ${UBOOT_BINARIES}; do
+				k=$(expr $j + 1);
+				if [ $k -eq $i ]; then
+					break;
+				fi
+			done
+
+			cd ${B}/${config}
+			uboot_assemble_fitimage_helper ${type} ${binary}
+		done
+	else
+		cd ${B}
+		uboot_assemble_fitimage_helper "" ${UBOOT_BINARY}
+	fi
+}
+
+addtask uboot_assemble_fitimage before do_install do_deploy after do_compile
+
+deploy_helper() {
+	type="$1"
+
+	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a -n "${UBOOT_DTB_SIGNED}" ] ; then
+		deploy_dtb $type
+	fi
+
+	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ]; then
+		if [ -n "${type}" ]; then
+			uboot_its_image="u-boot-its-${type}-${PV}-${PR}"
+			uboot_fitimage_image="u-boot-fitImage-${type}-${PV}-${PR}"
+		else
+			uboot_its_image="${UBOOT_ITS_IMAGE}"
+			uboot_fitimage_image="${UBOOT_FITIMAGE_IMAGE}"
+		fi
+
+		install -Dm644 ${UBOOT_FITIMAGE_BINARY} ${DEPLOYDIR}/$uboot_fitimage_image
+		install -Dm644 ${UBOOT_ITS} ${DEPLOYDIR}/$uboot_its_image
+
+		if [ -n "${type}" ]; then
+			ln -sf $uboot_its_image ${DEPLOYDIR}/${UBOOT_ITS_IMAGE}
+			ln -sf $uboot_fitimage_image ${DEPLOYDIR}/${UBOOT_FITIMAGE_IMAGE}
+		fi
+	fi
+
+	if [ "${SPL_SIGN_ENABLE}" = "1" -a -n "${SPL_DTB_SIGNED}" ] ; then
+		deploy_spl_dtb $type
+	fi
 }
 
-addtask uboot_assemble_fitimage before do_deploy after do_compile
+do_deploy:prepend() {
+	if [ -n "${UBOOT_CONFIG}" ]; then
+		unset i j k
+		for config in ${UBOOT_MACHINE}; do
+			i=$(expr $i + 1);
+			for type in ${UBOOT_CONFIG}; do
+				j=$(expr $j + 1);
+				if [ $j -eq $i ]; then
+					cd ${B}/${config}
+					deploy_helper ${type}
+				fi
+			done
+		done
+	else
+		cd ${B}
+		deploy_helper ""
+	fi
 
-do_deploy:prepend:pn-${UBOOT_PN}() {
 	if [ "${UBOOT_SIGN_ENABLE}" = "1" -a -n "${UBOOT_DTB_BINARY}" ] ; then
-		concat_dtb
+		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_BINARY}
+		ln -sf ${UBOOT_DTB_IMAGE} ${DEPLOYDIR}/${UBOOT_DTB_SYMLINK}
+		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_SYMLINK}
+		ln -sf ${UBOOT_NODTB_IMAGE} ${DEPLOYDIR}/${UBOOT_NODTB_BINARY}
 	fi
 
 	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
-	# Deploy the u-boot-nodtb binary and symlinks...
-		if [ -f "${SPL_DIR}/${SPL_NODTB_BINARY}" ] ; then
-			echo "Copying u-boot-nodtb binary..."
-			install -m 0644 ${SPL_DIR}/${SPL_NODTB_BINARY} ${DEPLOYDIR}/${SPL_NODTB_IMAGE}
-			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_SYMLINK}
-			ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_BINARY}
-		fi
-
-
-		# We only deploy the symlinks to the uboot-fitImage and uboot-its
-		# images, as the KERNEL_PN will take care of deploying the real file
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_BINARY}
-		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_SYMLINK}
 		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS}
 		ln -sf ${UBOOT_ITS_IMAGE} ${DEPLOYDIR}/${UBOOT_ITS_SYMLINK}
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_BINARY}
+		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_FITIMAGE_SYMLINK}
 	fi
 
 	if [ "${SPL_SIGN_ENABLE}" = "1" -a -n "${SPL_DTB_BINARY}" ] ; then
-		concat_spl_dtb
+		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_SYMLINK}
+		ln -sf ${SPL_DTB_IMAGE} ${DEPLOYDIR}/${SPL_DTB_BINARY}
+		ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_SYMLINK}
+		ln -sf ${SPL_NODTB_IMAGE} ${DEPLOYDIR}/${SPL_NODTB_BINARY}
 	fi
-
-
 }
 
-do_deploy:append:pn-${UBOOT_PN}() {
+do_deploy:append() {
 	# If we're creating a u-boot fitImage, point u-boot.bin
 	# symlink since it might get used by image recipes
 	if [ "${UBOOT_FITIMAGE_ENABLE}" = "1" ] ; then
@@ -458,27 +443,3 @@  do_deploy:append:pn-${UBOOT_PN}() {
 		ln -sf ${UBOOT_FITIMAGE_IMAGE} ${DEPLOYDIR}/${UBOOT_SYMLINK}
 	fi
 }
-
-python () {
-    if (   (d.getVar('UBOOT_SIGN_ENABLE') == '1'
-            or d.getVar('UBOOT_FITIMAGE_ENABLE') == '1')
-        and d.getVar('PN') == d.getVar('UBOOT_PN')
-        and d.getVar('UBOOT_DTB_BINARY')):
-
-        # Make "bitbake u-boot -cdeploy" deploys the signed u-boot.dtb
-        # and/or the U-Boot fitImage
-        d.appendVarFlag('do_deploy', 'depends', ' %s:do_deploy' % d.getVar('KERNEL_PN'))
-
-    if d.getVar('UBOOT_FITIMAGE_ENABLE') == '1' and d.getVar('PN') == d.getVar('KERNEL_PN'):
-        # As the U-Boot fitImage is created by the KERNEL_PN, we need
-        # to make sure that the u-boot-spl.dtb and u-boot-spl-nodtb.bin
-        # files are in the staging dir for it's use
-        d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_populate_sysroot' % d.getVar('UBOOT_PN'))
-
-        # If the Kernel fitImage is being signed, we need to
-        # create the U-Boot fitImage after it
-        if d.getVar('UBOOT_SIGN_ENABLE') == '1':
-            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage' % d.getVar('KERNEL_PN'))
-            d.appendVarFlag('do_uboot_assemble_fitimage', 'depends', ' %s:do_assemble_fitimage_initramfs' % d.getVar('KERNEL_PN'))
-
-}
diff --git a/meta/lib/oeqa/selftest/cases/fitimage.py b/meta/lib/oeqa/selftest/cases/fitimage.py
index 14267dbaaa..1570d54dfb 100644
--- a/meta/lib/oeqa/selftest/cases/fitimage.py
+++ b/meta/lib/oeqa/selftest/cases/fitimage.py
@@ -277,8 +277,8 @@  FIT_SIGN_INDIVIDUAL = "1"
 """
         self.write_config(config)
 
-        # The U-Boot fitImage is created as part of linux recipe
-        bitbake("virtual/kernel")
+        # The U-Boot fitImage is created as part of the U-Boot recipe
+        bitbake("virtual/bootloader")
 
         deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
         machine = get_bb_var('MACHINE')
@@ -350,7 +350,8 @@  UBOOT_LOADADDRESS = "0x80080000"
 UBOOT_ENTRYPOINT = "0x80080000"
 UBOOT_FIT_DESC = "A model description"
 KERNEL_IMAGETYPES += " fitImage "
-KERNEL_CLASSES = " kernel-fitimage test-mkimage-wrapper "
+KERNEL_CLASSES = " kernel-fitimage "
+INHERIT += "test-mkimage-wrapper"
 UBOOT_SIGN_ENABLE = "1"
 FIT_GENERATE_KEYS = "1"
 UBOOT_SIGN_KEYDIR = "${TOPDIR}/signing-keys"
@@ -361,8 +362,8 @@  UBOOT_MKIMAGE_SIGN_ARGS = "-c 'a smart U-Boot comment'"
 """
         self.write_config(config)
 
-        # The U-Boot fitImage is created as part of linux recipe
-        bitbake("virtual/kernel")
+        # The U-Boot fitImage is created as part of the U-Boot recipe
+        bitbake("virtual/bootloader")
 
         deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
         machine = get_bb_var('MACHINE')
@@ -432,7 +433,8 @@  UBOOT_MACHINE = "am57xx_evm_defconfig"
 SPL_BINARY = "MLO"
 # The kernel-fitimage class is a dependency even if we're only
 # creating/signing the U-Boot fitImage
-KERNEL_CLASSES = " kernel-fitimage test-mkimage-wrapper "
+KERNEL_CLASSES = " kernel-fitimage"
+INHERIT += "test-mkimage-wrapper"
 # Enable creation and signing of the U-Boot fitImage
 UBOOT_FITIMAGE_ENABLE = "1"
 SPL_SIGN_ENABLE = "1"
@@ -451,8 +453,8 @@  UBOOT_FIT_HASH_ALG = "sha256"
 """
         self.write_config(config)
 
-        # The U-Boot fitImage is created as part of linux recipe
-        bitbake("virtual/kernel")
+        # The U-Boot fitImage is created as part of the U-Boot recipe
+        bitbake("virtual/bootloader")
 
         image_type = "core-image-minimal"
         deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
@@ -540,7 +542,7 @@  UBOOT_FIT_HASH_ALG = "sha256"
             self.assertEqual(len(value), 512, 'Signature value for section %s not expected length' % signed_section)
 
         # Check for SPL_MKIMAGE_SIGN_ARGS
-        result = runCmd('bitbake -e virtual/kernel | grep ^T=')
+        result = runCmd('bitbake -e virtual/bootloader | grep ^T=')
        tempdir = result.output.split('=', 1)[1].strip().strip('"')
         result = runCmd('grep "a smart U-Boot comment" %s/run.do_uboot_assemble_fitimage' % tempdir, ignore_status=True)
         self.assertEqual(result.status, 0, 'SPL_MKIMAGE_SIGN_ARGS value did not get used')
@@ -595,7 +597,8 @@  UBOOT_EXTLINUX = "0"
 UBOOT_FIT_GENERATE_KEYS = "1"
 UBOOT_FIT_HASH_ALG = "sha256"
 KERNEL_IMAGETYPES += " fitImage "
-KERNEL_CLASSES = " kernel-fitimage test-mkimage-wrapper "
+KERNEL_CLASSES = " kernel-fitimage "
+INHERIT += "test-mkimage-wrapper"
 UBOOT_SIGN_ENABLE = "1"
 FIT_GENERATE_KEYS = "1"
 UBOOT_SIGN_KEYDIR = "${TOPDIR}/signing-keys"
@@ -605,8 +608,8 @@  FIT_SIGN_INDIVIDUAL = "1"
 """
         self.write_config(config)
 
-        # The U-Boot fitImage is created as part of linux recipe
-        bitbake("virtual/kernel")
+        # The U-Boot fitImage is created as part of the U-Boot recipe
+        bitbake("virtual/bootloader")
 
         image_type = "core-image-minimal"
         deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
@@ -694,7 +697,7 @@  FIT_SIGN_INDIVIDUAL = "1"
             self.assertEqual(len(value), 512, 'Signature value for section %s not expected length' % signed_section)
 
         # Check for SPL_MKIMAGE_SIGN_ARGS
-        result = runCmd('bitbake -e virtual/kernel | grep ^T=')
+        result = runCmd('bitbake -e virtual/bootloader | grep ^T=')
        tempdir = result.output.split('=', 1)[1].strip().strip('"')
         result = runCmd('grep "a smart cascaded U-Boot comment" %s/run.do_uboot_assemble_fitimage' % tempdir, ignore_status=True)
         self.assertEqual(result.status, 0, 'SPL_MKIMAGE_SIGN_ARGS value did not get used')
diff --git a/meta/recipes-bsp/u-boot/u-boot.inc b/meta/recipes-bsp/u-boot/u-boot.inc
index f022aed732..5afc161fa2 100644
--- a/meta/recipes-bsp/u-boot/u-boot.inc
+++ b/meta/recipes-bsp/u-boot/u-boot.inc
@@ -305,17 +305,14 @@  do_deploy () {
             unset i
         else
             install -m 644 ${B}/${SPL_BINARY} ${DEPLOYDIR}/${SPL_IMAGE}
-            rm -f ${DEPLOYDIR}/${SPL_BINARYNAME} ${DEPLOYDIR}/${SPL_SYMLINK}
             ln -sf ${SPL_IMAGE} ${DEPLOYDIR}/${SPL_BINARYNAME}
             ln -sf ${SPL_IMAGE} ${DEPLOYDIR}/${SPL_SYMLINK}
         fi
     fi
 
-
     if [ -n "${UBOOT_ENV}" ]
     then
         install -m 644 ${WORKDIR}/${UBOOT_ENV_BINARY} ${DEPLOYDIR}/${UBOOT_ENV_IMAGE}
-        rm -f ${DEPLOYDIR}/${UBOOT_ENV_BINARY} ${DEPLOYDIR}/${UBOOT_ENV_SYMLINK}
         ln -sf ${UBOOT_ENV_IMAGE} ${DEPLOYDIR}/${UBOOT_ENV_BINARY}
         ln -sf ${UBOOT_ENV_IMAGE} ${DEPLOYDIR}/${UBOOT_ENV_SYMLINK}
     fi