Hello,
I have a numpy/pandas data-processing program that I am starting from another Python program using
subprocess.run(f"python -u {path}/main.py --some arguments --belong here", shell=True)
The data-processing program writes a log to its stdout. I haven't given the subprocess a stdout, so the log gets dumped into the parent process's stdout.
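For clarity, my understanding is that this is equivalent to spelling out the default stream handling explicitly, roughly like this (stdout=None/stderr=None are just the subprocess defaults, not something I'm actually passing):

import subprocess

# stdout/stderr default to None, meaning the child inherits the parent's
# streams, so the child's log simply appears in the parent's output.
subprocess.run(
    f"python -u {path}/main.py --some arguments --belong here",
    shell=True,
    stdout=None,  # inherit the parent's stdout (what happens now)
    stderr=None,  # inherit the parent's stderr
)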
Both run in the same pipenv, which is set up beforehand.
Now at some point the data processing just stalls. It is always at the same point (according to the log), on both Windows and Linux (in a container and directly on hardware). No memory or CPU limits are reached. When I run the data-processing program by hand with the same input, it finishes without a problem.
I feel like I am hitting some kind of limit of the subprocess that has me in a deadlock, but I have no idea what it is.
Is there a good way to figure that out? Does anyone have an idea what the problem could be?
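Would adding something like this to main.py be a reasonable way to see where it hangs? (faulthandler is in the standard library; the 300-second interval is just a guess on my part.)

import faulthandler
import sys

# Dump the tracebacks of all threads to stderr every 300 seconds, so that
# when the program stalls I can see where each thread is sitting.
faulthandler.dump_traceback_later(300, repeat=True, file=sys.stderr)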
submitted by /u/anonymouslyjenny