Does Conda Replace the Need for Virtualenv

Does Conda replace the need for virtualenv?

  1. Conda replaces virtualenv. In my opinion it is better. It is not limited to Python but can be used for other languages too. In my experience it provides a much smoother workflow, especially for scientific packages. The first time I got MayaVi properly installed on Mac was with conda.

  2. You can still use pip. In fact, conda installs pip in each new environment. It knows about pip-installed packages.

For example:

conda list

lists all installed packages in your current environment.
Conda-installed packages show up like this:

sphinx_rtd_theme          0.1.7                    py35_0    defaults

and the ones installed via pip have the <pip> marker:

wxpython-common           3.0.0.0                   <pip>
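
To see this mixed listing in action, here is a quick hypothetical session (the package name is just an example):

conda activate myenv
pip install requests           # installed with pip, inside the conda environment
conda list requests            # still appears in conda's listing, flagged with <pip>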

Conda env vs venv / pyenv / virtualenv / etc

Update 2021-06-02: After researching, experimenting, and googling more I found this article. It is detailed, opinionated in what I found a helpful way, and provided everything I was looking for and more. Highly recommend. Conda is quite different from venv.

Original Answer
After researching and playing around, here's what I've found, particularly focused on the difference between conda environments and venv:

  • At a high level, there's not that much of a difference between conda environments and venv: no large differences in performance, setup time, reproducibility, etc.
  • The decision to use one or the other should primarily be driven by personal preference and the conventions at work (e.g. if your work uses venv for everything, it probably makes sense to use venv and not conda environments).

There are some differences worth calling out:

  • Conda can set up environments for Python and also R, so if you switch between the two, conda is probably preferable so you only need to learn one set of tools/conventions.
  • Conda environments all get stored in a single folder. This has pros and cons:
      • Pro: you can easily look up all environments you've created.
      • Pro: you can re-use one environment for multiple projects (e.g. I have a "finance" environment which works well for all my finance-related projects).
      • Con: you have to name all your environments differently, and remember the names (or look them up).
      • Con: it is more of a pain to store that environment in the project folder you've created. This means you need to remember which environment goes with which project, and you can't simply cd into the project folder and then activate the generically named 'env' that is stored in that folder (see the short sketch after this list).
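
A minimal sketch of the two workflows being contrasted (the environment and folder names are hypothetical):

# conda: environments live in one central location and are referenced by name
conda create --name finance
conda activate finance               # works from any directory
conda env list                       # look up every environment you've created

# venv: the environment lives inside the project folder itself
cd ~/projects/my-finance-app
python -m venv env
source env/bin/activate              # the generically named 'env' sitting next to the code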

For the type of programming I'm doing, I find conda environments helpful. I could easily see use cases where venv is the better choice.

Lastly, Conda is both an environment manager and a package manager, like pip. Useful comparison table here.

In short, if you don't have a strong preference already, conda is more robust than venv or pip, can be combined with pip, and is probably the better default option. That said, if you already have a strong preference it means you likely already know how to do what you want, so it's unlikely to be worth it to change.

Question about virtual environments from conda to virtualenv

Does it mean that these packages are not yet ported to FreeBSD, or is there a different way to add them to the .txt file instead of just their name?

It sounds like pip can't find a number of your dependencies, so yes.

Keep in mind that conda and pip are completely different packaging systems, despite being mostly compatible with each other and despite most packages available on one also being available on the other. This also means that conda list usually includes some packages you don't necessarily need to install via pip. So you may be better off starting from scratch with a new requirements.txt file that includes only the packages you actually need, and just let pip find what else it needs (which, again, is likely different from what conda needs).
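
A minimal sketch of that start-from-scratch approach, with hypothetical package names:

# keep only the top-level packages the project actually imports
cat > requirements.txt <<EOF
flask
requests
sqlalchemy
EOF

# let pip resolve the rest of the dependency tree on the target platform
python -m venv env && source env/bin/activate
pip install -r requirements.txt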

How do you conda install a library in an environment created with virtualenv?

I think the easiest approach would be to create a conda env on its own.

1) Create a requirements.txt file by running pip freeze > requirements.txt inside your virtualenv environment

2) Create conda env: conda create --name myenv

3) Activate your environment: source activate myenv

4) Install your dependencies: conda install --file requirements.txt

5) Install any missing dependency: conda install YOUR_MISSING_DEPENDENCY (the consolidated sketch below puts these steps together)
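
Put together, the migration might look roughly like this (myenv is a hypothetical name; the final pip line is an optional fallback for anything conda's channels don't carry):

# inside the old virtualenv
pip freeze > requirements.txt
deactivate

# build the replacement conda environment
conda create --name myenv
source activate myenv                     # or: conda activate myenv
conda install --file requirements.txt     # aborts if a package isn't on conda's channels
pip install -r requirements.txt           # fallback: let pip fill in whatever conda couldn't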

What is the difference between pyenv, virtualenv, anaconda?

Edit: It's worth mentioning pip here as well, as conda and pip have similarities and differences that are relevant to this topic.

pip: the Python Package Manager.

  • You might think of pip as the python equivalent of the ruby gem command
  • pip is not necessarily included with python by default (recent Python installers bundle it, but macOS's system python historically did not).
  • You may install Python using homebrew, which will install pip automatically: brew install python
  • The python that ships with OSX does not include pip by default. To add pip to your mac system's version of python, you can sudo easy_install pip
  • You can find and publish python packages using PyPI: The Python Package Index
  • The requirements.txt file is comparable to the ruby Gemfile
  • To create a requirements text file, pip freeze > requirements.txt
  • Note: at this point, we have python installed on our system and have created a requirements.txt file that outlines all of the python packages installed on that system (a short pip workflow sketch follows this list).
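
A compact version of that pip workflow, with a hypothetical package name:

pip install requests                  # install a package from PyPI
pip freeze > requirements.txt         # snapshot everything currently installed
pip install -r requirements.txt       # recreate the same set on another machine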

pyenv: Python Version Manager

  • From the docs: pyenv lets you easily switch between multiple versions of Python. It's simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well. This project was forked from rbenv and ruby-build, and modified for Python.
  • Many folks hesitate to use python3.
  • If you need to use different versions of python, pyenv lets you manage this easily (a short sketch follows this list).
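
For instance, a minimal pyenv session (the version number is just an example):

pyenv install 3.10.13        # build and install a specific interpreter
pyenv local 3.10.13          # pin this directory to that version (writes .python-version)
python --version             # now resolves through pyenv's shims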

virtualenv: Python Environment Manager.

  • From the docs: The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform’s standard location is), it’s easy to end up in a situation where you unintentionally upgrade an application that shouldn’t be upgraded.
  • To create a virtualenv, simply invoke virtualenv ENV, where ENV is a directory to place the new virtual environment.
  • To initialize the virtualenv, you need to source ENV/bin/activate. To stop using, simply call deactivate.
  • Once you activate the virtualenv, you might install all of a workspace's package requirements by running pip install -r against the project's requirements.txt file (see the short sketch after this list).
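
Put together (ENV is whatever directory name you choose):

virtualenv ENV                        # create the environment
source ENV/bin/activate               # start using it
pip install -r requirements.txt       # install the project's pinned packages
deactivate                            # stop using it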

Anaconda: Package Manager + Environment Manager + Additional Scientific Libraries.

  • Anaconda is a commercial distribution of Python bundling the most popular python libraries; its terms of service do not permit free use in an organisation with more than 200 employees.
  • From the docs: Anaconda 4.2.0 includes an easy installation of Python (2.7.12, 3.4.5, and/or 3.5.2) and updates of over 100 pre-built and tested scientific and analytic Python packages that include NumPy, Pandas, SciPy, Matplotlib, and IPython, with over 620 more packages available via a simple conda install <packagename>.
  • As a web developer, I haven't used Anaconda. It's ~3GB including all the packages.
  • There is a slimmed-down miniconda version, which seems like it could be a simpler option than using pip + virtualenv, although I don't have experience using it personally.
  • While conda allows you to install packages, these packages are separate from PyPI packages, so you may still need to use pip as well, depending on the types of packages you need to install (a short conda sketch follows this list).
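
A minimal conda session for comparison (the environment and package names are hypothetical; miniconda provides the same conda command without the large default bundle):

conda create --name science python=3.9 numpy pandas    # environment plus packages in one step
conda activate science
conda install matplotlib                                # from conda channels
pip install some-pypi-only-package                      # fall back to PyPI when conda doesn't have it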

See also:

  • conda vs pip vs virtualenv (section in documentation from anaconda)
  • the difference between pip and conda (stackoverflow)
  • the relationship between virtualenv and pyenv (stackoverflow)

Updating python in anaconda virtual environment

You seem to be mixing up venv and conda. Your environment is managed by conda (which is a project separate from the Python language), whereas venv is the virtual environment manager built into the Python standard library. I'm not even sure how venv handles the command you have provided.

If having 3.10 is a priority, I would suggest that you create a new conda environment using 3.10 and install any packages from your previous 3.6 environment. (This is also what the conda project recommends.)
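
Roughly, that looks like the following (the environment name and package list are hypothetical):

conda create --name py310 python=3.10       # fresh environment on the new interpreter
conda activate py310
conda install numpy pandas scikit-learn     # re-install whatever the 3.6 environment needed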

Set up virtualenv using a requirements.txt generated by conda

I'm setting up a python project, using an Anaconda virtual environment. I was wondering though, when other developers want to contribute to the project, but want to use virtualenv instead of Anaconda, can they do that?

Yes - in fact this is how many of my projects are structured. To accomplish what you're looking for, here is a simple directory that we'll use as a reference:

.
├── README.md
├── app
│   ├── __init__.py
│   ├── static
│   ├── templates
├── migrations
├── app.py
├── environment.yml
├── requirements.txt

In the project directory above, we have both environment.yml (for Conda users) and requirements.txt (for pip).

environment.yml

So apparently both outputs are different, and my theory is: once I generate my requirements.txt with conda on my project, other developers can't choose virtualenv instead - at least not if they're not prepared to install a long list of requirements by hand (it will be more than just the aiohttp module of course).

To overcome this, we are using two different environment files, each in its own distinct format, allowing other contributors to pick the one they prefer. If Adam uses Conda to manage his environments, then all he needs to do is create his Conda environment from the environment.yml file:

conda env create -f environment.yml

The first line of the yml file sets the new environment's name. To create this Conda-flavored environment file in the first place, activate your environment (source activate or conda activate) and then:

conda env export > environment.yml

In fact, because the environment file created by the conda env export command handles both the environment's pip packages and conda packages, we don't even need to have two distinct processes to create this file. conda env export will export all packages within your environment regardless of the channel they're installed from. It will have a record of this in environment.yml as well:

name: awesomeflask
channels:
- anaconda
- conda-forge
- defaults
dependencies:
- appnope=0.1.0=py36hf537a9a_0
- backcall=0.1.0=py36_0
- cffi=1.11.5=py36h6174b99_1
- decorator=4.3.0=py36_0
- ...
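
If some packages in the environment were installed with pip, the same export also nests them under a pip: entry, roughly like this (the pins shown are hypothetical):

- pip:
  - flask==1.0.2
  - gunicorn==19.9.0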

requirements.txt

Am I right when I think that if developers would like to do this, they would need to programmatically change the package list to the format that virtualenv understands, or they would have to import all packages manually? Meaning that I force them to choose conda as their virtual environment as well if they want to save themselves some extra work?

The recommended (and conventional) way to change the package list to the format that pip understands is through requirements.txt. Activate your environment (source activate or conda activate) then:

pip freeze > requirements.txt

Say Eve, one of the project contributors, wants to create an identical virtual environment from requirements.txt. She can do either of the following:

# using pip
pip install -r requirements.txt

# using Conda
conda create --name <env_name> --file requirements.txt

A full discussion is beyond the scope of this answer, but similar questions are worth a read.

An example of requirements.txt:

alembic==0.9.9
altair==2.2.2
appnope==0.1.0
arrow==0.12.1
asn1crypto==0.24.0
astroid==2.0.2
backcall==0.1.0
...

Tip: always create requirements.txt

In general, even on a project where all members are Conda users, I still recommend creating both the environment.yml (for Conda users) and the requirements.txt (for pip users) environment files. I also recommend having both of these environment files tracked by version control early on (ideally from the beginning). This has many benefits, among them the fact that it simplifies your debugging process and your deployment process later on.

A specific example that springs to mind is that of Azure App Service. Say you have a Django / Flask app, and want to deploy the app to a Linux server using Azure App Service with git deployment enabled:

az group create --name myResourceGroup --location "Southeast Asia"
az webapp create --resource-group myResourceGroup --plan myServicePlan --name <app-name>

The service will look for two files, one being application.py and another being requirements.txt. You absolutely need both of these files (even if they're blank) for the automation to work. This varies by cloud infrastructure / provider of course, but I find that having requirements.txt in our project generally saves us a lot of trouble down the road and is worth the initial set-up overhead.

If your code is on GitHub, having requirements.txt also gives you extra peace of mind: GitHub's vulnerability detection can pick up on any issue and alert you / the repo owner. That's a lot of great value for free, on the merit of having this extra dependency file (a small price to pay).

(Image: GitHub security alerts)

Keeping both files up to date is as easy as making sure that every time a new dependency is installed, we run both the conda env export and pip freeze > requirements.txt commands.
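
In other words, every install is followed by the two export commands (the package name is just an example):

conda install requests                  # or: pip install requests
conda env export > environment.yml      # refresh the Conda-flavored file
pip freeze > requirements.txt           # refresh the pip-flavored file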


