---
title: 🤖 Agentic Browser
emoji: 🤖
colorFrom: red
colorTo: red
sdk: docker
app_port: 8501
tags:
- streamlit
pinned: false
short_description: An autonomous browser agent for browser-based tasks
license: mit
---

# 🤖 Agentic Browser
A powerful, privacy-focused AI assistant that runs locally on your machine. This application allows you to interact with various open-source language models directly in your browser, with support for both lightweight and more powerful models.

## 🌟 Features

- **Local AI Models**: Run models entirely on your hardware
- **Multiple Models**: Choose between different models based on your needs
- **Privacy-Focused**: Your data never leaves your machine
- **Easy to Use**: Simple and intuitive chat interface
- **Customizable**: Adjust parameters like temperature for different response styles

## 🚀 Getting Started

### Prerequisites

- Python 3.10 or higher
- Git
- At least 8 GB RAM (16 GB+ recommended for larger models)
- At least 10 GB of free disk space for models
### Installation

1. **Clone the repository**:

   ```bash
   git clone https://huggingface.co/spaces/anu151105/agentic-browser
   cd agentic-browser
   ```

2. **Install dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

3. **Download models** (this may take a while):

   ```bash
   python scripts/download_models.py --model tiny-llama
   ```

   For the full experience with all models:

   ```bash
   python scripts/download_models.py --model all
   ```
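Before launching the app, you can optionally confirm whether PyTorch sees a CUDA-capable GPU. This is a minimal sketch, assuming `torch` is pulled in by `requirements.txt`; the models can also run on the CPU, just more slowly:

```python
# Optional hardware sanity check. Assumes torch was installed with the
# project requirements; the app itself also supports CPU-only runs.
import torch

if torch.cuda.is_available():
    print(f"CUDA GPU available: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA GPU detected; models will run on the CPU (slower).")
```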
### Running the Application

1. **Start the Streamlit app**:

   ```bash
   streamlit run src/streamlit_app.py
   ```

2. **Open your browser** to the URL shown in the terminal (usually http://localhost:8501)
3. **Select a model** from the sidebar and click "Load Model"
4. **Start chatting!**
## 🤖 Available Models

- **TinyLlama (1.1B)**: Fast and lightweight, runs well on most hardware
  - Great for quick responses
  - Lower resource requirements
  - Good for testing and development
- **Mistral-7B (7B)**: More powerful, but requires more resources
  - Better understanding and generation
  - Requires more RAM and VRAM
  - Slower response times
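If you want to sanity-check that a model runs on your machine outside of this app, here is a minimal sketch using the Transformers `pipeline` API. The checkpoint name below is an assumption (the public `TinyLlama/TinyLlama-1.1B-Chat-v1.0` release); `download_models.py` may reference a different checkpoint:

```python
# Standalone sketch - not the app's own loading code.
# Assumes transformers and torch are installed (see requirements.txt).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed checkpoint; adjust to what was downloaded
)

output = generator(
    "Explain what an agentic browser is in one sentence.",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(output[0]["generated_text"])
```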
## ⚙️ Configuration

You can customize the behavior of the application by setting environment variables in a `.env` file:

```env
# Model settings
MODEL_DEVICE=auto        # 'auto', 'cuda', or 'cpu'
MODEL_CACHE_DIR=./models

# Text generation settings
DEFAULT_TEMPERATURE=0.7
MAX_TOKENS=1024

# UI settings
THEME=light              # 'light' or 'dark'
```
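For reference, a common way to consume such a `.env` file is with `python-dotenv` and `os.getenv`. The snippet below is only a sketch of that pattern, not a copy of this project's actual settings code:

```python
# Illustrative only; the real app may load its configuration differently.
# Assumes python-dotenv is installed.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the environment

MODEL_DEVICE = os.getenv("MODEL_DEVICE", "auto")            # 'auto', 'cuda', or 'cpu'
MODEL_CACHE_DIR = os.getenv("MODEL_CACHE_DIR", "./models")
DEFAULT_TEMPERATURE = float(os.getenv("DEFAULT_TEMPERATURE", "0.7"))
MAX_TOKENS = int(os.getenv("MAX_TOKENS", "1024"))
THEME = os.getenv("THEME", "light")                          # 'light' or 'dark'
```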
## 🛠️ Development

### Adding New Models

1. Edit `config/model_config.py` to add your model configuration
2. Update the `DEFAULT_MODELS` dictionary with your model details (see the sketch below)
3. The model will be available in the UI after restarting the app
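The exact schema of `DEFAULT_MODELS` is not documented here, so the entry below is purely hypothetical; it only illustrates the kind of fields such a registry typically carries (display name, Hugging Face repo id, resource hints). Mirror the fields used by the existing entries in `config/model_config.py`:

```python
# config/model_config.py (hypothetical shape - check the existing entries
# in this file and copy their actual fields).
DEFAULT_MODELS = {
    "tiny-llama": {
        "display_name": "TinyLlama (1.1B)",
        "repo_id": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed checkpoint
        "min_ram_gb": 8,
    },
    # A new model would be added as another key:
    "my-model": {
        "display_name": "My Model (3B)",
        "repo_id": "example-org/my-model-3b",  # placeholder repo id
        "min_ram_gb": 12,
    },
}
```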
### Running Tests

```bash
pytest tests/
```
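If you add a model entry, a small pytest check can catch typos in the registry. The test below is only a sketch built on the same hypothetical `DEFAULT_MODELS` shape shown above; adapt it to the project's real config module before using it:

```python
# tests/test_model_config_sketch.py - illustrative example, not part of the repo.
from config.model_config import DEFAULT_MODELS


def test_default_models_have_required_fields():
    # Every registered model should at least declare a display name and a repo id.
    for key, cfg in DEFAULT_MODELS.items():
        assert cfg.get("display_name"), f"{key} is missing a display_name"
        assert cfg.get("repo_id"), f"{key} is missing a repo_id"
```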
## 📝 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [Hugging Face](https://huggingface.co/) for the amazing Transformers library
- [Streamlit](https://streamlit.io/) for the awesome web framework
- The open-source AI community for making these models available
| # Welcome to Streamlit! | |
| Edit `/src/streamlit_app.py` to customize this app to your heart's desire. :heart: | |
If you have any questions, check out our [documentation](https://docs.streamlit.io) and [community forums](https://discuss.streamlit.io).