
[Feature Request]: Add RamaLama installation #1124

@oxfighterjet

Description


Is your feature request related to a problem? Please describe.

This feature request is not related to a problem. LinUtil currently offers Ollama, a popular tool for running local LLMs. However, Ollama is not a good steward of open-source practices (see ggml-org/llama.cpp#11016 (comment)). RamaLama would be a good alternative: it builds on robust technologies such as containers (both Docker and Podman) and relies on standards such as Jinja.
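For context, RamaLama exposes an Ollama-like CLI on top of a container runtime. A quick illustration (subcommand names as shown in the RamaLama README at the time of writing; the model shortname is only an example):

```bash
# Pull a model and chat with it; RamaLama runs the inference engine
# inside a Podman or Docker container rather than directly on the host.
ramalama pull granite
ramalama run granite
```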

Describe the solution you'd like

Similar to how other tools are made accessible and installable through LinUtil, I think it would be interesting to see RamaLama join the club. It would also be an opportunity to streamline a few steps of the installation and upgrade process, since some commands may need to be re-run after NVIDIA driver updates (see the sketches below).
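To make the intent concrete, here is a minimal sketch of the kind of script LinUtil could run, assuming the install script URL documented in the RamaLama README (linked under "Additional context"). This is illustrative, not the actual LinUtil integration:

```bash
#!/bin/sh -e
# Hypothetical LinUtil entry for RamaLama: install only if missing.
if command -v ramalama >/dev/null 2>&1; then
    printf '%s\n' "ramalama is already installed"
else
    # Install script as documented in the RamaLama README.
    curl -fsSL https://ramalama.ai/install.sh | bash
fi
```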

Describe alternatives you've considered

RamaLama is itself an alternative to Ollama.

Additional context

RamaLama's project page can be found at https://ramalama.ai/
and its source code is available at https://github.com/containers/ramalama/

The install script is documented here: https://github.com/containers/ramalama/?tab=readme-ov-file#install-script-linux-and-macos
Additional steps for NVIDIA hardware acceleration (some of which need to be re-run after upgrading the NVIDIA drivers) are documented here: https://github.com/containers/ramalama/blob/main/docs/ramalama-cuda.7.md
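For reference, the step that has to be repeated after a driver update is regenerating the CDI specification, as described in the ramalama-cuda doc above (commands provided by the NVIDIA Container Toolkit; double-check the linked doc before automating them):

```bash
# Regenerate the CDI spec so containers can see the GPU again after a
# driver update, then verify that the GPU devices are listed.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
```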

Checklist

  • I checked for duplicate issues.
  • I checked already existing discussions.
  • This feature is not included in the roadmap.
