This week’s episode of Destination Linux, we follow up on some feedback about Red Hat’s RHEL and CentOS changes. We also discuss an open-source alternative to ChatGPT and Google Bard called Open Assistant and then we take a look at some AI integrations with the Linux desktop and ONLYOFFICE. Plus, we have our tips/tricks and software picks. All this and much more on Destination Linux.

Hosts of Destination Linux:

Ryan (DasGeek) =
Michael Tunnell =
Jill Bryant =

Want to Support the Show?

Become a Patron =
Store =

Community Feedback

You can comment on the forum thread below or fill out the contact form at



Notable Replies

  1. I recommend trying serge, a Docker container that lets you run a ton of different AI language models locally on your computer.

  2. Appreciate it Batvin, added to the list!

    There’s also GPT4All if you want a native GUI, though serge seems to support a lot more models.

  3. “… Microsoft is including AI in their Office Suite.” - Gah! I am still flustered with the LAST time they tried this!!

    1997 - “It looks like you’re typing a letter. Would you like help?” - Clippy

    If Microsoft ever gets a hold of virtual reality, we’ll certainly witness the return of Microsoft Bob!

  4. Open source AI is like an infinite time sink to learn right now and it’s been a real struggle parsing the signal from the noise.

    Strictly from what I’ve learned so far… Open Assistant seems to just be interested in selling generic SaaS products.

    What hardware will be required to run the models?
    The current smallest (Pythia) model is 12B parameters and is challenging to run on consumer hardware, but can run on a single professional GPU. In future there may be smaller models and we hope to make progress on methods like integer quantisation which can help run the model on smaller hardware.

    Frequently Asked Questions | Open Assistant

    The leading models they’re offering (falcon, oasst, pythia, galactica) are made by other people and are already quantized to run on “smaller hardware” available here, and the method for quantizing them for off-the-shelf Nvidia GPUs is here. You don’t need an expensive cloud server running a Discord bot to use these models; you can run them at home on Linux right now (or on a much cheaper cloud server).

    What I think Linux desperately needs is a project that makes these easier to use, similar to GPT4All but utilizing proper GPU-quantized models, not just the CPU ggml models. On the cloud API front, things seem to be going pretty well with projects like langflow and Flowise, and most self-hostable AI projects tend to have good APIs, though they can get buried a bit under all the SaaS marketing.
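As a back-of-the-envelope sketch of the hardware discussion above (the 12B parameter count comes from the FAQ quote; the random numpy array stands in for a real weight tensor, and real schemes like GPTQ are per-group and more sophisticated than this), here is why quantisation matters: weight storage shrinks with bit width, and a minimal symmetric int8 round trip loses surprisingly little precision.

```python
import numpy as np

# Bytes needed just to hold a 12B-parameter model's weights at
# different precisions (activations and runtime overhead excluded).
def weight_gib(params: int, bits: int) -> float:
    return params * bits / 8 / 2**30

for bits in (16, 8, 4):
    print(f"{bits:2d}-bit: {weight_gib(12_000_000_000, bits):5.1f} GiB")

# Minimal symmetric per-tensor int8 quantization of one weight tensor.
def quantize_int8(w: np.ndarray):
    scale = float(np.abs(w).max()) / 127.0     # map [-max, max] -> [-127, 127]
    q = np.round(w / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # stand-in weight tensor
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale              # dequantize

print(f"storage: {w.nbytes} -> {q.nbytes} bytes")        # 4x smaller than fp32
print(f"max abs error: {np.abs(w - w_hat).max():.4f}")   # bounded by scale/2
```

At 16-bit you need roughly 22 GiB for the weights alone, which is why the FAQ calls 12B "challenging" on consumer hardware, while 4-bit brings it down to about 5.6 GiB — comfortably inside a mid-range GPU.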

Continue the discussion at



About Destination Linux

Destination Linux is a weekly conversational podcast about sharing our passion for Linux & Open Source. Destination Linux is a show for all experience levels; whether you’re a beginner to Open Source or a Guru of Sudo, this is the podcast for you. Destination Linux covers a wide range of topics from the latest news, discussions on Linux & Open Source, gaming on Linux, unique in-depth interviews and much more!

More Episodes

Related Podcasts