Hello,
I tried locally with a 1.1 GB log file: lavacli fails while the REST API works.
Which version of LAVA are you using?
I looked at ways to improve the speed of both XML-RPC and the REST API.
For XML-RPC, the main problem is that the whole log is loaded into memory, base64 encoded, and then sent back to the user. We cannot do much about this without writing our own XML-RPC library (or improving the official one).
For the REST API, we can use FileResponse instead of HttpResponse to improve performance. On my local setup the download is now 60% faster. I will send a merge request.
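For reference, the change is roughly the following (a minimal sketch, not the actual LAVA view code; `log_path()` is an illustrative helper and the content type is a guess):

    from django.http import FileResponse, HttpResponse

    # Before: the whole log is read into memory before the response is built.
    def job_log_old(request, job_id):
        path = log_path(job_id)  # illustrative helper, not LAVA's API
        with open(path, "rb") as f:
            return HttpResponse(f.read(), content_type="application/yaml")

    # After: FileResponse wraps the open file and streams it, so the log
    # is never held in memory in one piece.
    def job_log_new(request, job_id):
        path = log_path(job_id)
        return FileResponse(open(path, "rb"), content_type="application/yaml")

FileResponse also lets the WSGI server use wsgi.file_wrapper/sendfile where available, which is probably where most of the speedup comes from.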
Rgds
On Wed, 8 Jan 2020 at 11:15, Milosz Wasilewski <milosz.wasilewski@linaro.org> wrote:
On Wed, 8 Jan 2020 at 09:56, Larry Shen larry.shen@nxp.com wrote:
Yes, Milosz,
57 MB is fine; in fact, our 100 MB+ logs are also fine. It is only the bigger ones, like 270 MB, that fail.
ah, ok. I thought there was some other issue with cli.
I guess it is related to the gunicorn timeout: if it cannot read the whole log within the timeout, it fails.
I tried your suggestion, but even when I switch to the REST API it also times out, since it also goes through gunicorn.
I found this:
https://docs.gunicorn.org/en/stable/settings.html#timeout
It says the default gunicorn worker timeout is 30 seconds.
It seems I have to change "/lib/systemd/system/lava-server-gunicorn.service" to enlarge the timeout. I am also wondering, now that async HTTP servers are so popular, whether LAVA could adopt that kind of technology, like this: https://docs.gunicorn.org/en/stable/design.html#async-workers
Or is there any other suggestion you could give for this gunicorn worker timeout, I mean a short-term solution...
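As a short-term workaround, a systemd drop-in avoids editing the packaged unit file directly. The ExecStart below is only a placeholder: copy the real gunicorn command from the shipped lava-server-gunicorn.service and just append --timeout.

    # /etc/systemd/system/lava-server-gunicorn.service.d/timeout.conf  (hypothetical drop-in)
    [Service]
    # Clear the packaged ExecStart, then repeat it with a longer worker timeout.
    ExecStart=
    ExecStart=<original gunicorn command from the shipped unit> --timeout 300

Then run `systemctl daemon-reload` and `systemctl restart lava-server-gunicorn`.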
I'll defer to Remi. IMHO it would make sense to implement a streaming response somewhere. The downside is that you don't know the download size in advance, but it should not time out.
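Something along these lines, as a rough sketch with Django's StreamingHttpResponse (the view and helper names are made up for illustration, this is not existing LAVA code):

    from django.http import StreamingHttpResponse

    def job_log(request, job_id):
        log_file = open_log(job_id)  # illustrative helper returning a binary file object
        # Send the log in chunks as they are read; no Content-Length is set
        # (the total size is unknown up front), but the worker keeps writing,
        # so the request does not sit idle until the timeout.
        chunks = iter(lambda: log_file.read(64 * 1024), b"")
        return StreamingHttpResponse(chunks, content_type="application/yaml")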
milosz
-----Original Message-----
From: Milosz Wasilewski <milosz.wasilewski@linaro.org>
Sent: Wednesday, January 8, 2020 5:07 PM
To: Larry Shen <larry.shen@nxp.com>
Cc: lava-users@lists.lavasoftware.org
Subject: [EXT] Re: [Lava-users] lavacli cannot fetch big logs
On Wed, 8 Jan 2020 at 08:18, Larry Shen larry.shen@nxp.com wrote:
Hi, there,
We have one job doing some stress testing; the final log is about 270 MB.
We use `lavacli -i production jobs logs --raw 30686` to fetch the log, and it gives me the following:
$ time lavacli -i production jobs logs --raw 30686
/usr/lib/python3/dist-packages/lavacli/__init__.py:101: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml... for full details.
  config = yaml.load(f_conf.read())
Unable to call 'jobs.logs': <ProtocolError for http://larry.shen:xxx@xxx.nxp.com/RPC2: 500 ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))>
real 0m39.006s
user 0m0.989s
sys 0m0.191s
I checked the gunicorn log, and it gives me:
[2020-01-08 08:11:12 +0000] [188] [CRITICAL] WORKER TIMEOUT (pid:11506)
Any suggestion?
You can try using the REST API instead:
https://your.insta...
I just fetched a 57 MB log and it worked just fine. In the long run we can probably switch lavacli to use the REST API.
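For example, downloading the log through the REST API from a small script could look like this (the endpoint path /api/v0.2/jobs/<id>/logs/ and the Token auth header are assumptions here; check your instance's API docs for the exact form):

    import requests

    # Illustrative values; adjust the instance URL, job id and token for your setup.
    url = "https://your.instance/api/v0.2/jobs/30686/logs/"
    headers = {"Authorization": "Token <your-api-token>"}

    with requests.get(url, headers=headers, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open("job-30686.log", "wb") as out:
            # Write the log chunk by chunk instead of loading it all into memory.
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                out.write(chunk)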
milosz
Lava-users mailing list Lava-users@lists.lavasoftware.org https://lists.lavasoftware.org/mailman/listinfo/lava-users