On this week’s episode of Destination Linux, we follow up on feedback about Red Hat’s RHEL and CentOS changes. We also discuss an open-source alternative to ChatGPT and Google Bard called Open Assistant, and then we take a look at some AI integrations with the Linux desktop and ONLYOFFICE. Plus, we have our tips/tricks and software picks. All this and much more on Destination Linux.
Hosts of Destination Linux:
Ryan (DasGeek) = dasgeekcommunity.com
Michael Tunnell = tuxdigital.com
Jill Bryant = jilllinuxgirl.com
Want to Support the Show?
Become a Patron = https://tuxdigital.com/contribute
Store = http://tuxdigital.com/store
Community Feedback
You can comment on the forum thread below or fill out the contact form at https://tuxdigital.com/contact
Chapters:
- 00:00 DL 330 Intro
- 00:51 Community Feedback about RHEL’s Source Code Changes
- 14:57 LINBIT – [ linbit.com ]
- 16:06 Open Assistant, Open Source AI Chat Bot – [ link ]
- 36:36 Bitwarden – [ bitwarden.com/tux ]
- 37:15 AI Plugin for ONLYOFFICE – [ link ]
- 43:49 Gaming: A Car That Turns – [ link ]
- 48:26 Software Spotlight: Newelle – [ link ]
- 53:13 Tip of the Week: Prompts for AI Chats
- 56:47 Outro
I recommend trying serge.chat. It is a Docker container that lets you run a ton of different AI language models locally on your computer.
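As a rough sketch of what getting it running can look like (the image name `ghcr.io/serge-chat/serge` and port `8008` are taken from the project’s README at the time of writing; check serge.chat for current instructions):

```shell
# Pull and run the Serge container in the background, persisting
# downloaded model weights and the database in named volumes so
# they survive container restarts.
docker run -d \
  --name serge \
  -v weights:/usr/src/app/weights \
  -v datadb:/data/db/ \
  -p 8008:8008 \
  ghcr.io/serge-chat/serge:latest

# The web UI should then be reachable at http://localhost:8008,
# where you can download models and start chatting.
```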
Appreciate it Batvin, added to the list!
There’s also GPT4All if you want a native GUI, though serge seems to support a lot more models.

“… Microsoft is including AI in their Office Suite.” - Gah! I am still flustered with the LAST time they tried this!!
1997 - “It looks like you’re typing a letter. Would you like help?” - Clippy
If Microsoft ever gets a hold of virtual reality, we’ll certainly witness the return of Microsoft Bob!
Open source AI is like an infinite time sink to learn right now and it’s been a real struggle parsing the signal from the noise.
Strictly from what I’ve learned so far… Open Assistant seems to just be interested in selling generic SaaS products.
The leading models they’re offering (Falcon, OAsst, Pythia, Galactica) are made by other people and are already quantized to run on “smaller hardware” available here, and the method for quantizing them for off-the-shelf Nvidia GPUs is here. You don’t need an expensive cloud server running a Discord bot to use these models; you can run them at home on Linux right now (or on a much cheaper cloud server).
What I think Linux desperately needs is a project that makes these easier to use, similar to GPT4All but utilizing properly GPU-quantized models, not just the CPU ggml models. On the cloud API front, things seem to be going pretty well with projects like langflow and Flowise, and most self-hostable AI projects tend to have good APIs, though they can get buried a bit under all the SaaS marketing.
Continue the discussion at forum.tuxdigital.com