Hello.
I have configured a LAVA server and I set up a local Django account to start configuring things:
sudo lava-server manage createsuperuser --username <user> --email=<mail>
Then I want to add LDAP support by adding the relevant fields to /etc/lava-server/settings.conf:
"AUTH_LDAP_SERVER_URI": "ldaps://server.domain.se:636",
"AUTH_LDAP_BIND_DN": "CN=company_ldap,OU=Service Accounts,OU=Resources,OU=Data,DC=domain,DC=se",
"AUTH_LDAP_BIND_PASSWORD": "thepwd",
"AUTH_LDAP_USER_ATTR_MAP": {
"first_name": "givenName",
"email": "mail"
},
"DISABLE_OPENID_AUTH": true
I have restarted both apache2 and lava-server.
I was expecting to get a Sign In page like this one:
https://validation.linaro.org/static/docs/v1/_images/ldap-user-login.png
Unfortunately I'm familiar with neither Django (nor web development in general) nor LDAP, so I don't know how to debug this. I have tried to grep for ldap|LDAP in /var/log/lava-server, but nothing pops up.
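One sanity check I thought of: /etc/lava-server/settings.conf has to stay valid JSON after editing, and my understanding is that a syntax error there can cause the new settings to be ignored without any LDAP-related log output. A minimal sketch I can run (my own helper, not LAVA code):

```python
import json

# Parse the settings file; a trailing comma or a missing brace will raise
# here, which (I assume) would make lava-server fall back to its defaults
# and silently ignore the LDAP settings.
def check_settings(path="/etc/lava-server/settings.conf"):
    with open(path) as f:
        settings = json.load(f)
    # Report which AUTH_LDAP_* keys were actually picked up.
    return sorted(k for k in settings if k.startswith("AUTH_LDAP_"))
```

Running check_settings() on the live file raises ValueError, pointing at the offending position, if the JSON is broken.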
I also couldn't find a way to browse the mailing list for previous answers; GMANE search isn't working today.
How should I proceed?
I have a multi-node test involving 13 roles that is no longer syncing properly after upgrading to 2016.11 this morning. It seems that 2 or 3 nodes end up waiting for a specific message while the others receive it and move on to the next step. Looking at the dispatcher log, I don't see any errors, but it only logs that it is sending to some of the nodes. For example, I see a message like this for the nodes that work in a run:
2016-11-10 13:10:37,295 Sending wait messageID 'qa-network-info' to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}
2016-11-10 13:10:37,295 Sending wait response to /var/lib/lava/dispatcher/slave/tmp/7615/device.yaml in group 2651c0a0-811f-4b77-bc07-22af31744fe5: {"message": {"/var/lib/lava/dispatcher/slave/tmp/7619/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7613/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7623/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7620/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7611/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7621/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7622/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7617/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7618/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7614/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7615/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7616/device.yaml": {}, "/var/lib/lava/dispatcher/slave/tmp/7612/device.yaml": {}}, "response": "ack"}
For the nodes that get stuck, there is no message like the above.
All of the nodes are qemu type, all on the same host. The nodes that fail are not consistent, but there always seem to be 2 or 3 that fail in every run I have tried.
Is there anything I can look at here to figure out what is happening?
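In case it helps, this is how I'm trying to narrow it down: comparing the set of nodes listed inside a "Sending wait messageID" payload against the nodes that actually got a "Sending wait response" line. A rough sketch over a saved dispatcher log (my own helper, not LAVA code):

```python
import re

# From a dispatcher log dump, collect every device.yaml path that appears
# in a "Sending wait messageID" line (the expected group members) and the
# target of every "Sending wait response" line (nodes that were answered).
# The difference is the set of nodes still stuck waiting.
def stuck_nodes(log_text):
    path_re = re.compile(r"/var/lib/lava/dispatcher/slave/tmp/\d+/device\.yaml")
    expected, answered = set(), set()
    for line in log_text.splitlines():
        if "Sending wait messageID" in line:
            expected.update(path_re.findall(line))
        elif "Sending wait response to" in line:
            # The first path on the line is the node the response went to.
            answered.add(path_re.findall(line)[0])
    return sorted(expected - answered)
```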
--
James Oakley
james.oakley(a)multapplied.net
[Moving to lava-users as suggested by Neil]
On 11/07/2016 03:20 PM, Neil Williams (Code Review) wrote:
> Neil Williams has posted comments on this change.
>
> Change subject: Add support for the depthcharge bootloader
> ......................................................................
>
>
>
> Patch Set 3:
>
> (1 comment)
>
> https://review.linaro.org/#/c/15203/3/lava_dispatcher/pipeline/actions/depl…
>
> File lava_dispatcher/pipeline/actions/deploy/tftp.py:
>
> Line 127: def _ensure_device_dir(self, device_dir):
>> Cannot say that I have fully understood it yet. Would it be correct
>> if the
>
> The Strategy classes must not set or modify anything. The accepts
> method does some very fast checks and returns True or False. Anything
> which the pipeline actions need to know must be specified in the job
> submission or the device configuration. So either this is restricted
> to specific device-types (so a setting goes into the template) or it
> has to be set for every job using this method (for situations where
> the support can be used or not used on the same hardware for
> different jobs).
>
> What is this per-device directory anyway and how is it meant to work
> with tftpd-hpa which does not support configuration modification
> without restarting itself? Jobs cannot require that daemons restart -
> other jobs could easily be using that daemon at the same time.
Each firmware image containing Depthcharge will also contain hardcoded
values for the IP address of the TFTP server and for the paths of a
cmdline.txt file and a FIT image. The FIT image contains a kernel and a
DTB, and optionally a ramdisk.
Because the paths are fixed when the firmware image is flashed, we
cannot use the per-job directory. Instead, we add a parameter to the
device, to be set in the device-specific template of Chrome devices. If
that parameter is present, a directory named after its value is created
in the root of the TFTP file tree.
The TFTP server doesn't need to be restarted because its configuration
is left unchanged; we just create a directory where Depthcharge will
look for the files.
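To make the intent concrete, here is a sketch of the mechanism (the key name depthcharge_dir is hypothetical; this is not the actual patch): when the device configuration carries the parameter, the deploy action only ensures the directory exists under the TFTP root, and the daemon itself is never touched:

```python
import os

# If the device configuration names a Depthcharge directory, make sure it
# exists under the TFTP root so the firmware finds cmdline.txt and the FIT
# image at its hardcoded path. tftpd-hpa keeps serving, unchanged.
def ensure_device_dir(tftp_root, device_params):
    device_dir = device_params.get("depthcharge_dir")  # hypothetical key
    if device_dir is None:
        return None  # device does not use Depthcharge; nothing to do
    path = os.path.join(tftp_root, device_dir)
    os.makedirs(path, exist_ok=True)  # idempotent across jobs
    return path
```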
Thanks,
Tomeu
> I think this needs to move from IRC and gerrit to a thread on the
> lava-users mailing list where the principles can be checked through
> more easily.
>
>
Hi everyone,
As I have probably mentioned in previous emails, I'm using the Yocto
Project to generate some Linux images that I want to test with LAVA as
part of our continuous integration setup.
So far so good: I can submit the job description to LAVA using lava-tool
and it will start the tests. I'm happy so far with all the results.
Now my question is about the correct way to do this procedure. Do you
think it is reasonable to have a lava-tool submit-job followed by a
waiting step that uses lava-tool job-status to report the final build
result, or is there a nicer way to do this?
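For reference, this is roughly what I had in mind (a sketch; the polling helper is generic, with a pluggable status function so it does not depend on a live server, and the status strings are my assumption about what lava-tool job-status reports):

```python
import time

# Poll a status function until the job leaves the active states or a
# timeout expires. get_status is any callable returning a status string,
# e.g. one that shells out to `lava-tool job-status <server> <job-id>`.
def wait_for_job(get_status, timeout=3600, interval=30,
                 active=("Submitted", "Running")):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status not in active:
            return status  # e.g. "Complete" or "Incomplete"
        time.sleep(interval)
    raise TimeoutError("job still active after %d seconds" % timeout)
```

The CI step would then pass or fail the build depending on the returned status.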
Thanks a lot for your help in advance :)
Best,
Alfonso
By default, a uboot header is automatically added to the ramdisk image.
For bootloaders without INITRD_ATAG support, the ramdisk needs to be
passed on the command line and cannot have the uboot header added.
To enable this feature, add a "ramdisk_raw" option that device files can
set so that a uboot header is not added.
Signed-off-by: Kevin Hilman <khilman(a)baylibre.com>
---
Patch applies on 2016.9
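With this applied, a device that needs a raw ramdisk would just set the new option in its device configuration file (the path and device name here are hypothetical):

```
# /etc/lava-dispatcher/devices/my-board.conf
ramdisk_raw = True
```

The default behaviour (adding the u-boot header) is unchanged for all other devices.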
lava_dispatcher/config.py | 1 +
lava_dispatcher/device/bootloader.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/lava_dispatcher/config.py b/lava_dispatcher/config.py
index 66a9e70021fa..c91c5634280d 100644
--- a/lava_dispatcher/config.py
+++ b/lava_dispatcher/config.py
@@ -312,6 +312,7 @@ class DeviceSchema(schema.Schema):
uimage_xip = schema.BoolOption(default=False)
append_dtb = schema.BoolOption(default=False)
prepend_blob = schema.StringOption(default=None)
+ ramdisk_raw = schema.BoolOption(default=False)
# for dynamic_vm devices
dynamic_vm_backend_device_type = schema.StringOption(default='kvm')
diff --git a/lava_dispatcher/device/bootloader.py b/lava_dispatcher/device/bootloader.py
index 634d22ef3311..c88fba8937e6 100644
--- a/lava_dispatcher/device/bootloader.py
+++ b/lava_dispatcher/device/bootloader.py
@@ -208,7 +208,7 @@ class BootloaderTarget(MasterImageTarget):
decompress=False)
extract_overlay(overlay, ramdisk_dir)
ramdisk = create_ramdisk(ramdisk_dir, self._tmpdir)
- if self._is_uboot():
+ if self._is_uboot() and not self.config.ramdisk_raw:
# Ensure ramdisk has u-boot header
if not self._is_uboot_ramdisk(ramdisk):
ramdisk_uboot = ramdisk + ".uboot"
--
2.5.0
Hello everyone,
Can you help me with the two questions below?
1. I have configured email notifications so that emails are sent after a job completes or is incomplete.
How can I get the full logs of the email sending process (and where are those logs)? I need to debug email sending.
2. I want to use a script to control the device state periodically.
How can I set a device to maintenance state from a command, for example a lava-tool command?
Thanks in advance.
Hello everyone,
just a simple question. I might be wrong, but I understand that
submitting a job through lava-dispatch and through lava-tool should lead
to the same process. Now, with the dispatcher you can already specify a
specific target device encoded in YAML format. Does lava-tool at some
point reach a similar target configuration? Does it generate it, or is
it stored somewhere? In the latter case, where is it stored?
thanks in advance :)
Best,
Alfonso
Hello everyone,
just a quick question.
I am trying to run a job to test a qemu image which is stored remotely.
The image is packed in a tar.xz file with both the kernel and the file
system.
Is there a way to specify in the JSON job description that, before the
deploy action, it must unpack this tar.xz file and then use the kernel
and filesystem?
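In case it clarifies what I'm after: today I can extract the archive myself before submitting and point the job at the extracted files (a sketch using Python's tarfile module; the member name inside the archive is made up):

```python
import tarfile

# Unpack a .tar.xz image bundle and return the member names, so the job
# description can reference the extracted kernel and rootfs paths.
def unpack_image(archive_path, dest_dir):
    with tarfile.open(archive_path, "r:xz") as tar:
        tar.extractall(dest_dir)
        return [member.name for member in tar.getmembers()]
```

What I'd prefer is for the deploy action to do this step itself.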
Thanks a lot :)
Best,
Alfonso