So, I have a Python script that lives inside a venv, and I’d like to run it from time to time from the CLI (on Linux). What’s the recommended/intended way to do this?
Write a wrapper shell script that activates the virtual environment, runs the Python script, and deactivates the venv again, and put it inside a $PATH-accessible directory? This seems a bit convoluted, but I can’t think of a better way.
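Something like this, say, saved as ~/.local/bin/myapp (all paths hypothetical):

```
#!/usr/bin/env bash
# Wrapper: activate the venv, run the script, deactivate again.
source "$HOME/projects/myapp/venv/bin/activate"
python "$HOME/projects/myapp/app.py" "$@"
deactivate
```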
Use venv/bin/python app.py to run it.
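No activation dance is needed; the interpreter inside the venv finds its own site-packages. For example, with hypothetical paths:

```
# Invoke the venv's interpreter directly; it resolves the venv's
# installed packages on its own.
~/projects/myapp/venv/bin/python ~/projects/myapp/app.py
```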
That works nicely. Thanks 👍
I use my own Zsh project (zpy) to manage venvs stored like
~/.local/share/venvs/HASH-OF-PROJECT-PATH/venv
so I use zpy’s vpy function to launch a script with its associated Python executable ad hoc, or add a full-path shebang to the script with zpy’s vpyshebang function (vpy and vpyshebang are in the docs).
If anyone else is a Zsh fan and has any questions, I’m more than happy to answer or demo.
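Roughly, with a hypothetical myscript.py and zpy loaded in Zsh (see the linked docs for details):

```
# Launch the script ad hoc with its project's associated venv python:
vpy myscript.py

# Or write that python's full path into the script's shebang line,
# after which the script can be executed directly:
vpyshebang myscript.py
./myscript.py
```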
@Andy The convention is to place the venv in a .venv/ subfolder. Follow the convention!
This is shell-agnostic.
Learn pyenv and minimize shell scripts (shell should only live within a Makefile).
Shell scripts within Python packages are deprecated.
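For example:

```
# Create the venv at the conventional location inside the project:
python3 -m venv .venv
# Shell-agnostic invocation, no activation needed:
.venv/bin/python app.py
```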
The convention
That’s one convention. I don’t like it; I prefer to keep my venvs elsewhere. One reason is that it makes it simpler to maintain multiple venvs for a single project, using a different Python version for each, if I ever want to. It shouldn’t matter to anyone else, as it’s my environment, not some aspect of the shared repo. If I ever needed it there for some reason, I could always
ln -s $VIRTUAL_ENV .venv
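For illustration, a hypothetical multi-version layout:

```
# One venv per Python version, kept outside the repo:
python3.12 -m venv ~/venvs/myproj-py312
python3.13 -m venv ~/venvs/myproj-py313
```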
Learn pyenv
I have used pyenv. It’s fine. These days I use mise instead, which I prefer. But neither of them dictates how I create and store venvs.
Shell scripts within Python packages are deprecated
I don’t understand how what you’re referencing relates to my comment.
Maintaining multiple venvs for different Python versions sounds exactly like what tox does.
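A minimal sketch, assuming tox 4 with a pytest-based test suite and the listed interpreters already installed (names invented):

```
# Write a minimal tox.ini and run the whole matrix; tox creates and
# manages one venv per listed Python version.
cat > tox.ini <<'EOF'
[tox]
env_list = py311, py312, py313

[testenv]
deps = pytest
commands = pytest
EOF
tox
```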
Then set up a GitHub Action that does nightly builds, which will catch issues caused by changes that were only tested against one Python version or on one platform.
Python 3.13 is a good version to test against, because many modules were removed or deprecated and some APIs changed.
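A sketch of such a workflow; the schedule, matrix, and file path are all just examples:

```
mkdir -p .github/workflows
cat > .github/workflows/nightly.yml <<'EOF'
name: nightly
on:
  schedule:
    - cron: "0 3 * * *"   # every night at 03:00 UTC
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        python: ["3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python }}
      - run: pip install tox
      - run: tox -e py   # run the env matching the interpreter set up above
EOF
```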
Good luck! I hope some of my advice is helpful.
Thanks, yes, I use nox and GitHub Actions for automated environments and testing in my own projects, and tox instead of nox when it’s someone else’s project. But for ad hoc, local, interactive use of multiple environments, I don’t.
This. I’ve experimented with pex and one or two other means of making executable Python wrappers before, and they suck. Just do as lakeeffect says.
Yep. This is the way.
I think the path to the venv should be absolute, right?
Just activate the venv and then put it out of your mind. You can activate it with either a relative or an absolute path; it doesn’t matter which.
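For example (hypothetical project path):

```
cd ~/projects/myapp
source .venv/bin/activate                   # relative path works
deactivate
source ~/projects/myapp/.venv/bin/activate  # absolute path works too
python app.py
deactivate
```

Either form works because the activate script has the venv’s absolute location written into it when the venv is created.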
Yeah, for the most part, but it really depends on what you’re trying to do specifically.
I use pipenv together with pyenv. This works pretty well, including in cron jobs. Just add
pipenv run python script.py
to the crontab.

Just in case this comment didn’t make it explicitly clear: you can invoke the python binary inside your venv directly, and it will automatically locate all the libraries that are installed in your virtual environment.
To show how this works, you can look at the
sys.path
variable to see which paths Python will search for modules when you run import statements. Try running
python3 -c 'import sys; print(sys.path)'
using your system python, and you will see only system python library paths. Then try running it again, replacing python3 with the full path to the python3 binary in your venv, and you will see an additional entry in the output: the lib directory in your venv, which shows that Python will also look there for modules when an import statement is executed.
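For example, on a Debian-style system with a hypothetical venv at /home/user/projects/myapp/venv (output abridged; exact entries vary by system and Python version):

```
$ python3 -c 'import sys; print(sys.path)'
['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/usr/lib/python3/dist-packages']

$ /home/user/projects/myapp/venv/bin/python3 -c 'import sys; print(sys.path)'
['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/home/user/projects/myapp/venv/lib/python3.12/site-packages']
```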
You could package it and install it with pipx.
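A sketch of that route, assuming the script is packaged with a console-script entry point (here hypothetically named myscript) in its pyproject.toml:

```
# pipx builds the package into its own isolated venv and puts the
# entry-point launcher on PATH:
pipx install /path/to/myproject
myscript        # now runnable from anywhere, no activation needed
```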
Does it need access to anything local? If not, you could run it as an AWS Lambda on a schedule.