Jupyter not honoring Conda environments?

Hi all!

I've been using Jupyter on and off for a while, but I need to start using it a lot more regularly, and I need it to integrate with conda virtual environments.

Working on a fresh Ubuntu 24.04 install, I installed Anaconda, then created a new virtual environment and installed JupyterLab:

conda create -n jupyter python=3.12
conda activate jupyter
pip install jupyterlab
jupyter lab
... 
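For what it's worth, a quick sanity check at this point (with the jupyter env still active) should show everything living inside that env — the paths below assume Anaconda's default install location:

which jupyter
# expected: /home/gjws/anaconda3/envs/jupyter/bin/jupyter
python -c "import sys; print(sys.executable)"
# expected: /home/gjws/anaconda3/envs/jupyter/bin/python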

So far so good; everything ran as expected. I then created another conda environment for a new project and registered it with Jupyter via ipykernel:

conda create -n rag-llama3.2 python=3.11
conda activate rag-llama3.2
pip install ipykernel  # needed before the kernel can be registered
python -m ipykernel install --user --name=rag-llama3.2

The ipykernel part was completely new to me; I was following this Medium post: https://medium.com/@nrk25693/how-to-add-your-conda-environment-to-your-jupyter-notebook-in-just-4-steps-abeab8b8d084
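From what I can tell, that last command just writes a kernelspec under ~/.local/share/jupyter/kernels/rag-llama3.2/ whose kernel.json points Jupyter at this env's interpreter, roughly:

{
  "argv": [
    "/home/gjws/anaconda3/envs/rag-llama3.2/bin/python",
    "-Xfrozen_modules=off",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "rag-llama3.2",
  "language": "python"
}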

So I now have Jupyter running in its own conda env, plus a new env for my project. This is where things get very strange. I jump into JupyterLab, create a new notebook, and select the newly registered kernel from the dropdown; all seems fine. I start installing a few packages and writing a little code:

! pip install langchain-nomic
! pip install -qU langchain-ollama
! pip list | grep langchain
langchain-core            0.3.14
langchain-nomic           0.1.3
langchain-ollama          0.2.0
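In hindsight, one thing I didn't check at the time was where those installs actually landed; pip show would have told me, e.g.:

! pip show langchain-ollama | grep -i location
# the Location: line shows which site-packages directory the install went into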

Packages installed, so I begin with an import:

# LLM using local Ollama

### LLM
from langchain_ollama import ChatOllama

local_llm = "llama3.2:3b-instruct-fp16"
docker_host = "http://127.0.0.1:11434"

llm = ChatOllama(model=local_llm, temperature=0, base_url=docker_host)
llm_json_mode = ChatOllama(model=local_llm, temperature=0, format="json", base_url=docker_host)

Computer says no!

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[4], line 4
      1 # LLM using local Ollama
      2 
      3 ### LLM
----> 4 from langchain_ollama import ChatOllama
      6 local_llm = "llama3.2:3b-instruct-fp16"
      7 docker_host = "http://127.0.0.1:11434"

ModuleNotFoundError: No module named 'langchain_ollama'

So the modules are installed, but I can't import them. At this point I started hunting around and found a few commands to help identify the problem:

!jupyter kernelspec list --json

{
  "kernelspecs": {
    "python3": {
      "resource_dir": "/home/gjws/anaconda3/envs/jupyter/share/jupyter/kernels/python3",
      "spec": {
        "argv": [
          "python",
          "-m",
          "ipykernel_launcher",
          "-f",
          "{connection_file}"
        ],
        "env": {},
        "display_name": "Python 3 (ipykernel)",
        "language": "python",
        "interrupt_mode": "signal",
        "metadata": {
          "debugger": true
        }
      }
    },
    "rag-llama3.2": {
      "resource_dir": "/home/gjws/.local/share/jupyter/kernels/rag-llama3.2",
      "spec": {
        "argv": [
          "/home/gjws/anaconda3/envs/rag-llama3.2/bin/python",
          "-Xfrozen_modules=off",
          "-m",
          "ipykernel_launcher",
          "-f",
          "{connection_file}"
        ],
        "env": {},
        "display_name": "rag-llama3.2",
        "language": "python",
        "interrupt_mode": "signal",
        "metadata": {
          "debugger": true
        }
      }
    }
  }
}

!which -a python
/home/gjws/anaconda3/envs/jupyter/bin/python
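Something else I've seen suggested is to compare, from inside the notebook (with the rag-llama3.2 kernel selected), the interpreter the kernel is actually running against whatever pip the ! shell escapes pick up from PATH:

import sys
print(sys.executable)  # the interpreter this kernel process is running

!which pip  # the pip a ! shell escape resolves to via the inherited PATH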

So to my untrained eye, Jupyter is seeing both the jupyter conda environment and the rag-llama3.2 environment and getting them confused.

Now I don't know where to go.

Have I done something fundamentally wrong?

Should I NOT be running Jupyter in its own conda env, and instead just install it globally?

Have I screwed up the ipykernel steps somewhere?

Any help would be much appreciated. I've been at this for hours and have hit a brick wall :(

Thanks for taking the time to read all this!!!