/usr/lib64/libxcb-aux.so: no such file or directory
The number of LWPs steadily increases. We found it at a peak of before restarting the recording daemon. At that point, we seem to have bumped into the system-wide FD limit. There is nothing interesting in the logs until the "Too many open files" messages start.
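The climb can be watched directly from /proc. A minimal sketch (the daemon name rtpengine-recording is from this thread; any PID works, and the script name is hypothetical):

```shell
#!/bin/sh
# Sketch: count LWPs (threads) and open file descriptors for a process.
# Assumes a Linux /proc filesystem; pass a PID, or it defaults to this shell.
pid="${1:-$$}"
lwps=$(ls "/proc/$pid/task" | wc -l)
fds=$(ls "/proc/$pid/fd" | wc -l)
echo "pid=$pid lwps=$lwps fds=$fds"
```

Run periodically, e.g. `watch -n 5 'sh count.sh $(pgrep -o rtpengine-recording)'`, to see whether the counts track call volume or just keep rising.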

Just fairly routine things like:. However, this did not seem to have any effect. One thing I did learn from attaching gdb to a random selection of these LWPs is that they're all threads spawned by libavfilter. I suppose one thing I haven't tried is sourcing the ffmpeg packages from a place other than nux-dextop. One thing I am trying now is a much newer version of the ffmpeg packages from something called awel-media-release.

So far it is looking promising, but since it is after business hours and call volumes have collapsed, I can't really get truly meaningful feedback until tomorrow at the earliest.

And about 30 LWPs. That number drops to 28 or 26 from time to time, and spikes to 32 or so. It doesn't seem to be moving much beyond this level, but neither do the call volumes. Is there any insight into the relationship between the threads spawned by the recording daemon and the call volumes? It's very difficult to tell whether the upgrade of the ffmpeg libs fixed the problem or whether the low after-hours call volumes are merely masking the same problem.

About the only thing that's different is that there isn't the same all-but-monotonic upward increase as before. I wasn't aware that libavfilter, or the ffmpeg libs in general, would spawn any threads. There's certainly nothing in the code that would instruct them to do that.

Gonna have to look into what it's doing there. There have not been any calls for quite some time. Yet, there are 38 LWPs spawned off of rtpengine-recording. The recording daemon was invoked without a --num-threads value, so it started with the default. Since the last time the recording daemon was restarted, there has been a maximum of about 16 or 18 RTPEngine targets, and the LWP count has crept up from 10 to about 32, then back down to 28, then back up to 30, generally hovering somewhere in this area.

Another interesting wrinkle: it looks like the core rtpengine-recording process is holding a number of file handles open for calls which are long over. Looking at all these calls, they seem to have one thing in common. As another data point from this morning ("serious call volumes" have not started yet): needless to say, it's a bit hard to make sense of this, though it does seem to be an improvement over the runaway increase of before. But after 9 AM, calls will spike into much higher territory and then we can say more.
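One way to enumerate those lingering handles is straight from /proc; a sketch, assuming the process name from this thread and that the recordings are .wav files:

```shell
#!/bin/sh
# Sketch: list .wav files still held open by the recording daemon.
# pgrep -o picks the oldest matching process (the core daemon, not workers).
pid=$(pgrep -o rtpengine-recording)
ls -l "/proc/$pid/fd" | grep '\.wav'
```

Cross-referencing the listed filenames (which usually embed the Call-ID) against call records shows which of the held-open files belong to calls that are already over.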

And the number of WAV file handles held by the recording daemon as a whole has increased, to 34 on this particular host. As before, a salient characteristic of the Call-IDs of all the calls whose handles are being held open is that they seem to have been timed-out streams.

I cannot help but think that there is some clearer relationship between the number of "stale" file handles opened from "timed out" calls and the number of deadlocked processes, though I cannot find it.

There is certainly a correlation; overall, the more such handles, the more processes. But exactly how much more I am unable to establish; it seems to vary, and the process count isn't accounted for by the number of stale handles per se. Now that we have had production loads all day, I think the verdict is in: the ffmpeg library update didn't really do anything.

That's hard to say. Are you able to run this under valgrind? What do you make of the fact that the stale WAV handles seem to be tied to streams which disappeared from a timeout?
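For the valgrind suggestion, memcheck's file-descriptor tracking is the relevant mode; a sketch (the --foreground and --num-threads flags are assumptions about the daemon's CLI, with the thread count kept low since valgrind slows everything down considerably):

```shell
# Sketch: run the daemon under valgrind with FD tracking, so any handles
# still open at exit are reported along with the stack that opened them.
valgrind --track-fds=yes --log-file=recdaemon.vg \
    rtpengine-recording --foreground --num-threads=4
```

On shutdown, the log's "FILE DESCRIPTORS: ... open at exit" section should show exactly where the stale WAV handles were opened.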

Can you confirm that for sure? Because the recording daemon doesn't really care about how a call was closed, timeout or otherwise. Once the metadata spool file gets deleted, the call is closed.

Aside from booting with a live disk, does anyone have any other ideas? I don't have busybox.

Is there any reason you don't want to boot using a live disk? That will likely be the easiest solution.

The server has no CD reader, so I will likely need to boot from a USB, which will take a while for me to figure out, and I'm only going in to work tomorrow. I wanted to get some stuff done from home today.

As this is a physical machine, what brand of server is it?

OK, although I haven't booted from a CD in at least 10 years.
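Preparing the USB boot media can be sketched as follows (the ISO filename and /dev/sdX device node are placeholders, not from this thread; double-check the device with lsblk first, since dd overwrites its target):

```shell
# Sketch: write a live ISO image to a USB stick on Linux.
# /dev/sdX is a placeholder for the stick's device node -- verify with lsblk!
sudo dd if=live.iso of=/dev/sdX bs=4M status=progress conv=fsync
```

Most live ISOs are hybrid images, so a plain dd copy like this produces a bootable stick without any extra tooling.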

USB is the standard way.

Not the Python subdirectories. Did you install this Python installation just for doing this, or has it been installed for a while and you're only using it for this now? There are various other configure options one should really use on Linux to get a good Python installation which matches how the operating system packages would build it.
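The configure options being alluded to are along these lines; a sketch of a from-source build recipe (the prefix and version are illustrative placeholders, not from the thread):

```shell
# Sketch: a typical from-source Python build on Linux. --enable-shared builds
# the shared libpython that embedding tools like mod_wsgi want, and the rpath
# lets it be found at runtime without LD_LIBRARY_PATH. Prefix is a placeholder.
./configure --prefix=/usr/local/python3 --enable-shared \
    --with-ensurepip=install LDFLAGS='-Wl,-rpath=/usr/local/python3/lib'
make
sudo make altinstall   # altinstall avoids clobbering the system "python3"
```

Remember to run `make clean` before reconfiguring, for the reason mentioned later in the thread: stale build results from a previous configuration can poison the new one.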

I will look into that Python/Django error tomorrow; thanks for pointing me to the error log. My understanding is that Linux systems would usually look there, so I'm not sure why it isn't. I just have to resolve the Django import error (djangoCore is the project, i.e. "mysite" in Django's tutorials). How can I debug the following error log? My guess is that the Python script dies somewhere.

The reason you want to use it is that it forces your application to run in the main interpreter context of the process rather than a sub-interpreter.
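The directive being described is mod_wsgi's WSGIApplicationGroup; a sketch of the relevant Apache configuration (the python-home path and the djangoCore names are assumptions based on the thread's Django-tutorial layout):

```apache
# Force the app into the process's main interpreter context rather than a
# sub-interpreter, which some third-party extension modules can't handle.
WSGIApplicationGroup %{GLOBAL}

# Illustrative placeholders for the rest of the wiring:
WSGIDaemonProcess djangoCore python-home=/usr/local/python3
WSGIProcessGroup djangoCore
WSGIScriptAlias / /path/to/djangoCore/djangoCore/wsgi.py
```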

Some third-party extension modules aren't coded to allow them to be used in sub-interpreters, and attempting to use them there will cause crashes or deadlocks.

It seems to be a Python problem, as python manage.

Maybe Python 3. I'll try with some other py3 later; is there some version you recommend?

I would suggest you reinstall Python 3. Check the blog post I linked earlier about the best options to use with Python's configure script.

I got it working with Python 3. Thank you so much for all your help.

For future reference, you should really create a new issue rather than seek help on closed issues. Also, did you ensure that you had done a make clean before recompiling, in case you had old build results lying around?

For my environment, Python 3. Please don't ask about problems as comments on old closed issues; create a new issue.


